\section{Introduction} Given their central role in many number theoretic applications, it is no surprise that Weyl sums and their properties have been subject to thorough investigation over the years. For a collection ${\boldsymbol \varphi}$ of linearly independent polynomials ${\varphi}_1, \ldots, {\varphi}_r \in \mathbb Z[X]$ with respective degrees $k_1, \ldots, k_r$ we consider the {\it Weyl sums} $$ f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})= \sum_{1 \leqslant x \leqslant P} {\mathbf{\,e}}(\alpha_1 {\varphi}_1(x) + \ldots + \alpha_r {\varphi}_r(x)), $$ where ${\mathbf{\,e}}(z) = \exp(2 \pi i z)$ and ${\boldsymbol \alpha}=(\alpha_1, \ldots, \alpha_r)$. We also write $\mathbb T = \mathbb R / \mathbb Z$ for the unit torus, and refer to the end of this section for other notational conventions we use. Whilst it is well known that $f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})$ can be of order $P$ when the entries of ${\boldsymbol \alpha}$ lie in the neighbourhood of fractions with a small denominator, the general expectation has always been that for a ``typical'' ${\boldsymbol \alpha}$ one should have the upper and lower bounds \begin{equation} \label{sqrt-bd} P^{1/2} \ll f_{{\boldsymbol \varphi}}({\boldsymbol \alpha}) \ll P^{1/2 + o(1)}.
\end{equation} This question has recently been investigated in work of Chen and Shparlinski~\cite{CS1}, which in particular implies that the bounds~\eqref{sqrt-bd} hold for a subset of ${\boldsymbol \alpha} \in \mathbb T^r$ of full Lebesgue measure whenever the polynomials ${\boldsymbol \varphi}$ have a non-vanishing Wronskian~\cite[Corollary~2.2]{CS1}. A particularly strong version of this result, applicable to the situation when ${\varphi}_j(X)=X^j$ for $1 \leqslant j \leqslant r$, is available in subsequent work~\cite{CKMS}, where the interested reader will also find a more comprehensive bibliography on the subject. In practical applications it is often necessary to control the size of $f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})$ on linear slices of $\mathbb T^r$, where some of the $\alpha_i$ are fixed to lie in some set of full measure, whereas the remaining ones range over the entire unit interval. Such situations typically arise in ``minor arcs'' analyses where some, but not all, entries of ${\boldsymbol \alpha}$ may have a good rational approximation and thus lie in an anticipated exceptional set. This problem has recently been studied in a very general setup by Chen and Shparlinski~\cite{CS1} (see also~\cite{CS3}), refining an approach developed by Wooley~\cite{TDW16}.
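The heuristic behind~\eqref{sqrt-bd} is square-root cancellation on average: by orthogonality, the mean value of $|f_{{\boldsymbol \varphi}}({\boldsymbol \alpha})|^2$ over $\mathbb T^r$ counts the diagonal solutions $x=y$ and hence equals $P$ exactly. The short sketch below (illustrative choices, not from the papers cited: ${\boldsymbol \varphi} = (X, X^2)$ and a small $P$, with a full grid fine enough that the grid average reproduces the integral exactly) confirms this numerically.

```python
import cmath

def weyl_sum(a1, a2, P):
    # f(a1, a2) = sum_{x=1}^{P} e(a1*x + a2*x^2), with e(z) = exp(2*pi*i*z)
    return sum(cmath.exp(2j * cmath.pi * (a1 * x + a2 * x * x))
               for x in range(1, P + 1))

P = 8
N1, N2 = P, P * P  # grids finer than the frequency spread of |f|^2 in each
                   # variable, so the grid average equals the integral over T^2
total = sum(abs(weyl_sum(j / N1, k / N2, P)) ** 2
            for j in range(N1) for k in range(N2))
mean_square = total / (N1 * N2)
print(mean_square)  # equals P = 8 up to rounding: square-root size on average
```

Of course, a mean-value identity says nothing about individual ${\boldsymbol \alpha}$, which is precisely the difficulty the works cited here address.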
Their main result~\cite[Theorem~2.1]{CS1} asserts that whenever the polynomials ${\boldsymbol \varphi}$ have a non-vanishing Wronskian, then for almost all $(\alpha_1, \ldots, \alpha_d) \in \mathbb T^d$ one has bounds of the shape $$ \sup_{\alpha_{d+1}, \ldots, \alpha_r \in \mathbb T} |f_{{\boldsymbol \varphi}}(\alpha_1, \ldots, \alpha_r)| \ll P^{1/2 + \Gamma(d, {\boldsymbol \varphi}) + o(1)}, $$ where $\Gamma(d, {\boldsymbol \varphi})$ is a non-negative function depending on the degrees of the polynomials ${\boldsymbol \varphi}$, for the precise definition of which we refer to~\cite{CS1}. Unfortunately, even though the bound of~\cite[Theorem~2.1]{CS1} gives strong results in a number of configurations, and notably implies that one can take $\Gamma(d, {\boldsymbol \varphi})=0$ for all admissible $r$-tuples of polynomials when $d=r$, in many other cases the bounds it furnishes do not beat even the trivial bound. In such situations, one has to resort to the more classical methods employing bounds of Weyl or Hua type and their subsequent generalisations (see~\cite[Lemma~2.4 and Theorem~5.2]{V:HL} for the former, and \cite[Lemma~2.5]{V:HL} as well as the results of \cite[Section~14]{TDW19} for the latter).
Bounds of this nature also provide the crucial input in the work of Erdo\u{g}an and Shakan~\cite{ES}, as well as in recent work of Chen and Shparlinski~\cite{CS2} in which, motivated by links to certain questions on classical partial differential equations, they establish upper bounds along linear slices for the exponential sum associated with pairs of polynomials ${\varphi}_1, {\varphi}_2$ differing by a linear term. Several related results have recently been obtained by Barron~\cite{Barr}. However, as these bounds use Vinogradov's mean value theorem (see~\cite[Theorem~1.1]{BDG} or~\cite[Theorem~1.1]{TDW19}) as their main input, which is inefficient for Weyl sums whose degree exceeds their dimension, they are inherently unable to provide bounds stronger than $O(P^{1-c_k})$ for some positive parameter $c_k$ of size $c_k \asymp k^{-2}$. Whilst exponents of this magnitude are not believed to be sharp in general, Brandes et al.~\cite{BPPSV} have recently shown that one cannot hope to have $\Gamma(d, {\boldsymbol \varphi})=0$ for all choices of polynomials with non-vanishing Wronskian when $d < r$.
In particular, for the choice ${\varphi}_1(X) = X^k + X$ and ${\varphi}_2(X)=X^k$ with $k=2$ or $k=3$, they show in~\cite[Theorem~1.3]{BPPSV} that for all $\alpha_2 \in \mathbb R \setminus \mathbb Q$ and any $\tau> 0$ there exist arbitrarily large values of $P$ for which we have the lower bound \begin{equation} \label{BPPSV-bd} \sup_{\alpha_1 \in \mathbb T} |f_{{\boldsymbol \varphi}}(\alpha_1, \alpha_2)| \gg P^{3/4 -\tau}, \end{equation} and that for almost all $\alpha_2 \in \mathbb T$ this bound can be matched by a corresponding upper bound $$ \sup_{\alpha_1 \in \mathbb T} |f_{{\boldsymbol \varphi}}(\alpha_1, \alpha_2)|\ll P^{3/4 + o(1)}. $$ To our knowledge, this is the first indication in the literature that the expectation that~\eqref{sqrt-bd} should hold for all ${\boldsymbol \alpha}$ on a linear slice of $\mathbb T^r$ may be too naive. In~\cite{BPPSV} the authors speculate that the same behaviour as in~\eqref{BPPSV-bd} might continue to hold for the polynomials ${\varphi}_1(X) = X^k + X$ and ${\varphi}_2(X)=X^k$ with $k\ge 4$. The goal of this paper is therefore to extend the bound in~\eqref{BPPSV-bd} to more general polynomials, allowing also for higher degrees.
\begin{theorem} \label{thm:main} Let ${\varphi} \in \mathbb Z[X]$ be a polynomial of degree $k \ge 2$, and set \begin{equation} \label{expsum} f(\alpha_1, \alpha_2) = \sum_{1 \leqslant x \leqslant P} {\mathbf{\,e}}(\alpha_1 ({\varphi}(x)+x) + \alpha_2 {\varphi}(x)). \end{equation} There exists a set ${\mathscr C} \subseteq \mathbb T$ of full Lebesgue measure such that for any $\tau>0$ and all $\alpha_2 \in {\mathscr C}$ there exist arbitrarily large values of $P$ for which one has the bound $$ \sup_{\alpha_1 \in \mathbb T} |f(\alpha_1, \alpha_2)| \gg P^{3/4-\tau}. $$ \end{theorem} Thus, whenever ${\boldsymbol \varphi} = ({\varphi}_1, {\varphi}_2)$ is a pair of polynomials differing only by a linear term, the associated exponential sum is substantially larger than originally anticipated on almost all linear slices of $\mathbb T$. The fact that in our result the polynomials under consideration differ only by a linear term seems to play a role, since linear exponential sums do not exhibit square root cancellation in the same manner as their cousins of higher degree do. It is therefore an interesting question to investigate whether the behaviour observed in Theorem~\ref{thm:main} persists, perhaps in a weaker form, even when the polynomials occurring in the exponential sum differ by more than a linear term. Unlike in~\cite{BPPSV}, our result in Theorem~\ref{thm:main} is not complemented by a corresponding upper bound.
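To make the size $P^{3/4}$ in Theorem~\ref{thm:main} concrete, consider a toy instance (an illustration, not the proof itself, where the parameters instead come from Diophantine approximation): take ${\varphi}(X)=X^2$, let $\alpha_2 = a_2/p$ be exactly rational with prime denominator $p$, pick $a_1$ with $p \nmid (a_1+a_2)$, set $\alpha_1 = a_1/p$, and take $P = p^2$. The sum~\eqref{expsum} then consists of $p$ complete quadratic Gauss sums, so $|f| = p \cdot p^{1/2} = P^{3/4}$ exactly. A short numerical check with the illustrative values $p=101$, $a_1=7$, $a_2=5$:

```python
import cmath, math

def e(z):
    return cmath.exp(2j * math.pi * z)

# phi(X) = X^2, alpha_1 = a1/p, alpha_2 = a2/p with p prime and
# p not dividing a1 + a2; P = p^2. Phases are reduced mod p exactly.
p, a1, a2 = 101, 7, 5
P = p * p

f = sum(e(((a1 * (x * x + x) + a2 * x * x) % p) / p) for x in range(1, P + 1))
print(abs(f), P ** 0.75)  # both equal p^{3/2} = 101^{3/2}
```

The rational $\alpha_2$ used here form a null set; the content of Theorem~\ref{thm:main} is that almost every $\alpha_2$ is well enough approximated by such fractions for (a slightly weakened form of) this phenomenon to persist.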
The methods presented in~\cite{BPPSV} could conceivably be adapted to provide such upper bounds even in the more general case considered in the manuscript at hand for all $\alpha_2$ lying in a subset of full measure of a suitably defined set of ``major arcs''. This would be sufficient when $k \leqslant 3$, as then the entire unit interval $\mathbb T$ can be covered by such major arcs. For higher degrees, these methods fail and we have no improvements over the existing results of~\cite{CS2}. Nonetheless, we believe that these difficulties are of a technical rather than fundamental nature, and consequently it seems likely that the exponent $3/4$ should be sharp in those cases also. Our argument is a streamlined version of that presented in~\cite[Section~8]{BPPSV}, which deals with the case of ${\varphi}(X)=X^k$ for $k=2, 3$. However, we augment this approach by two classical results. Firstly, we appeal to a bound of Bombieri~\cite[Theorem~6]{Bom} on exponential sums along a curve over a finite field, and secondly we make use of a result of Duffin and Schaeffer~\cite[Theorem~I]{DuSch} which allows us to restrict to the case where the Diophantine approximations we consider have a prime denominator.

\textbf{Notation.} Throughout the paper, we make use of the following conventions. When $x \in \mathbb R$ we denote by $\|x\|$ the distance from $x$ to the nearest integer. Moreover, $P$ always denotes a large positive number, and the letter $p$ is reserved for primes. We use the Vinogradov notations `$\ll$' and `$\gg$', as well as the equivalent Bachmann--Landau notation `$O(\cdot)$', liberally; here the implied constants are allowed to depend on ${\boldsymbol \varphi}$ and $\tau$, but never on $P$ or ${\boldsymbol \alpha}$.
\section{Assembling the toolbox} \subsection{Approximations by rational exponential sums} In our examination of the exponential sum~\eqref{expsum} we rely heavily on our understanding of the closely related sum $$ g(\alpha, \gamma) = \sum_{1 \leqslant x \leqslant P} {\mathbf{\,e}}(\alpha x + \gamma {\varphi}(x)) $$ and its associated approximations. Indeed, it is apparent from the respective definitions of these exponential sums that \begin{equation} \label{f=g} f(\alpha_1, \alpha_2) = g(\alpha_1, \alpha_1+\alpha_2). \end{equation} When ${\varphi}(X)=X^k$, the latter of these has been studied in~\cite{BR} and~\cite{BPPSV}, but it turns out that in the situation we are mainly interested in, the pure power may be replaced by a more general polynomial. For $q \in \mathbb N$, $a, c \in \mathbb Z$ and ${\beta} \in \mathbb R$ set $$ S(q; a,c)= \sum_{x=1}^q {\mathbf{\,e}} \left( \frac{a x + c {\varphi}(x)}{q} \right) \qquad \text{ and } \qquad I({\beta}) = \int_0^P {\mathbf{\,e}}({\beta} x) {\,{\rm d}} x, $$ and recall that for non-vanishing ${\beta}$ we can compute \begin{equation} \label{I-bd} |I({\beta})| = P \left| \frac{\sin(\pi \beta P)}{\pi \beta P} \right| \ll \min \{P, \| {\beta} \|^{-1} \}, \end{equation} while a classical Weil bound (see, for example,~\cite[Corollary~II.2F]{Schmidt}) shows that when $p$ is prime and $p \nmid c$ one has \begin{equation} \label{S-bd} |S(p; a,c)| \leqslant (k-1)p^{1/2}.
\end{equation} We then have the following straightforward modification of~\cite[Theorem~3]{BR} or~\cite[Theorem~4.1]{V:HL}. \begin{lemma}\label{L1} Let ${\varphi} \in \mathbb Z[X]$ be a polynomial of degree $k \ge 2$. Suppose that $\gamma \in \mathbb Q$ with $\gamma = c/p$ in lowest terms, where $p$ is a prime number, and fix $a \in \mathbb Z$ such that $|\alpha - a/p| \leqslant (2p)^{-1}$. Set then ${\beta} = \alpha - a/p$. In this notation we have $$ g(\alpha,\gamma) = p^{-1} S(p; a,c) I({\beta}) + O(p^{1/2} \log p). $$ \end{lemma} \begin{proof} Just like in the proof of~\cite[Theorem~4.1]{V:HL}, we sort the variables into residue classes, which we then encode in terms of exponential sums. Thus $$ g(\alpha, \gamma)=\frac{1}{p} \sum_{b=1}^p S(p; a+b, c) g({\beta}-b/p, 0). $$ By~\cite[Lemma~4.2]{V:HL} we have $g({\beta}-b/p, 0) = I({\beta}-b/p)+O(1)$, so that together with~\cite[Lemma~2.2]{BPPSV} we find that $$ g(\alpha, \gamma)=\frac{1}{p} \sum_{b=1}^p S(p; a+b, c) I({\beta}-b/p) + O(p^{1/2}). $$ Since $p \nmid c$, it follows upon deploying~\eqref{I-bd} and~\eqref{S-bd} that $$ g(\alpha, \gamma) - p^{-1} S(p; a,c) I({\beta}) \ll p^{-1/2} \sum_{b=1}^{p-1} \|{\beta} - b/p \|^{-1} \ll p^{1/2} \log p, $$ where in the last step we use that $$ \| {\beta}-b/p \| \ge (2p)^{-1} $$ for all $b \not\equiv 0 \pmod{p}$. This completes the proof.
\end{proof} \subsection{A lower bound on rational exponential sums} Our second main tool shows that the complete exponential sum $S(p; a,c)$ cannot be smaller than $p^{1/2}$ too often. It is useful to denote the leading coefficient of ${\varphi}$ by $\mathrm{lc}({\varphi})$. \begin{lemma}\label{L2} Let $p$ be a prime satisfying $p>(2k)^4$ with $p \nmid \mathrm{lc}({\varphi})$, and let $c \in \mathbb Z$ with $p \nmid c$. Then there exists $a \in \mathbb Z$ with $p \nmid (a+c)$ such that $$ S(p; a, a+c) \ge \tfrac13 p^{1/2}. $$ \end{lemma} \begin{proof} When $k=2$, the desired result follows from classical bounds on Gauss sums, so it is sufficient to consider the case when $k \ge 3$. By averaging and shifting the variable of summation, the result follows if we can show that \begin{equation} \label{eq:2nd Mom} \sum_{a=1}^{p-1} |S(p; a-c,a)|^2 \ge \tfrac{1}{3} p^2 \end{equation} for all primes $p > (2k)^4$ not dividing $\mathrm{lc}({\varphi})$. We begin by noting that \begin{align*} \sum_{a=1}^{p-1} |S(p; a-c,a)|^2 & = p \sum_{\substack{m,n = 1 \\ {\varphi}(m)+m \equiv {\varphi}(n)+n \pmod{p}}}^p {\mathbf{\,e}} \left(\frac{c(m-n)}{p}\right) - \left| \sum_{m=1}^p {\mathbf{\,e}} \left(\frac{cm}{p}\right) \right|^2. \end{align*} The second sum vanishes, and in the first one we make the change of variables $n=m-h$ and isolate the term corresponding to $h=0$. Hence \begin{equation} \label{pre-Bom} \sum_{a=1}^{p-1} |S(p; a-c,a)|^2 = p^2 + p \sum_{m=1}^p \sum_{\substack{h=1 \\ \Delta(m,h) \equiv 0 \pmod{p}}}^{p-1} {\mathbf{\,e}}(ch), \end{equation} where we put $$ \Delta(m,h)=({\varphi}(m+h)-{\varphi}(m) + h)/h.
$$ Upon re-inserting the term corresponding to $h=0$ and noting that all exponential sums in question take real values, we discern that \begin{align*} \sum_{m=1}^p \sum_{\substack{h=1 \\ \Delta(m,h) \equiv 0 \pmod{p}}}^{p-1} {\mathbf{\,e}}(ch) &= \sum_{\substack{m,h=1 \\ \Delta(m,h) \equiv 0 \pmod{p}}}^{p} {\mathbf{\,e}}(ch) - \sum_{\substack{m=1 \\ \Delta(m,0) \equiv 0 \pmod{p}}}^p 1 \\ &\leqslant \sum_{\substack{m,h=1 \\ \Delta(m,h) \equiv 0 \pmod{p}}}^{p} {\mathbf{\,e}}(ch). \end{align*} If $k \ge 2$, then $\Delta(X,Y)$ is a nontrivial polynomial in two variables of degree exactly $k-1$, so the congruence $$ \Delta(m,h) \equiv 0 \pmod{p} $$ defines a curve over the finite field $\mathbb F_p$. Furthermore, if $k> 1$, then $\Delta(X,Y)$ is a nontrivial polynomial of degree exactly $k-1$ with respect to $X$ with leading monomial $k \mathrm{lc}({\varphi})X^{k-1}$. Thus for $p> k$ and $p \nmid \mathrm{lc}({\varphi})$ the variable $h$ is not constant along this curve. We may therefore apply~\cite[Theorem~6]{Bom} and find that \begin{equation}\label{Bom-bd} \sum_{\substack{m,h=1 \\ \Delta(m,h) \equiv 0 \pmod{p}}}^{p} {\mathbf{\,e}}(ch) \leqslant \left((k-1)^2 +2(k-1) -3\right) \sqrt p + (k-1)^2 . \end{equation} Under our assumption $p>(2k)^4$, the right hand side of~\eqref{Bom-bd} satisfies $$ \left((k-1)^2 +2(k-1) -3\right) \sqrt p + (k-1)^2 < \tfrac{2}{3}p. $$ In view of~\eqref{pre-Bom}, we derive~\eqref{eq:2nd Mom}, which is sufficient to establish the result. \end{proof} \section{Proof of the main result} The following result, going back to Duffin and Schaeffer~\cite{DuSch}, is a key ingredient in our arguments, as it allows us to focus on those $\alpha \in \mathbb T$ whose rational approximations have prime denominators.
\begin{lemma}\label{lem: approx a/[p} There is a set ${\mathscr C} \subseteq \mathbb T$ of full Lebesgue measure such that for any $\alpha\in {\mathscr C}$ there are infinitely many approximations $$ \left| \alpha - \frac{a}{p}\right| < \frac{1}{p^2} $$ with $a \in \mathbb Z$ and $p$ a prime number. \end{lemma} \begin{proof} This is a direct application of~\cite[Theorem~I]{DuSch}; see also the remark at the top of p.~245 of that paper. \end{proof} We also remark that Lemma~\ref{lem: approx a/[p} is a special case of the Duffin--Schaeffer conjecture, recently established as a theorem by Koukoulopoulos and Maynard~\cite{KM}. We now have the wherewithal to embark on the proof of Theorem~\ref{thm:main}. Fix $\tau>0$, and let $\alpha_2 \in {\mathscr C}$, where ${\mathscr C}$ is as in Lemma~\ref{lem: approx a/[p}. Then we can find an arbitrarily large prime number $p$, and $a_2 \in \mathbb Z$ not divisible by $p$, that satisfy $|\alpha_2 - a_2/p| \leqslant p^{-2}$. For any fixed such $p$ satisfying $p>(2k)^4$ and not dividing $\mathrm{lc}({\varphi})$, define $P$ via the relation \begin{equation} \label{def-P} P^{1+\tau} = p^2. \end{equation} Lemma~\ref{L2} now guarantees the existence of an integer $a_1$ with $a_1+a_2 \not\equiv 0 \pmod{p}$ and with the property that \begin{equation} \label{large-S} S(p; a_1, a_1+a_2) \gg p^{1/2}. \end{equation} Take now ${\beta}_2 = \alpha_2-a_2/p$ and ${\beta}_1 = -{\beta}_2$, and put $\alpha_1 = a_1/p + {\beta}_1$.
Then upon recalling that $\gamma=\alpha_1+\alpha_2$ in~\eqref{f=g}, we see that $\gamma=c/p$ with $c = a_1 + a_2 \not\equiv 0 \pmod{p}$, whereupon Lemma~\ref{L1} yields the relation $$ g(\alpha_1, \gamma) = p^{-1} S(p; a_1, a_1+a_2) I({\beta}_1) + O(p^{1/2} \log p). $$ Recall now our definition of $P$ from~\eqref{def-P}. Since $|{\beta}_1| = |{\beta}_2| \leqslant p^{-2} = P^{-1-\tau}$, it follows further from~\eqref{I-bd} that $|I({\beta}_1)| = P \, (1+O(P^{-2\tau}))$, so upon inserting~\eqref{large-S} we discern that $$ g(\alpha_1, \gamma) \gg P p^{-1/2} \gg P^{3/4-\tau}. $$ In the light of~\eqref{f=g} and Lemma~\ref{lem: approx a/[p}, this establishes the desired result.

\section*{Acknowledgements} The authors would like to thank James Maynard for drawing our attention to a result of Duffin and Schaeffer~\cite[Theorem~I]{DuSch} on Diophantine approximations with prime denominators. During the preparation of this manuscript, JB was supported by Starting Grant no.~2017-05110 of the Swedish Science Foundation (Ve\-tenskapsr{\aa}det) and IS was supported by the Australian Research Council Grant DP170100786.
\section{INTRODUCTION} \label{sec:intro} Many observations and theoretical studies over the years, and more so in the last decade, attribute major roles to jets in shaping planetary nebulae (PNe; e.g., \citealt{Morris1987, Soker1990AJ, SahaiTrauger1998, Boffinetal2012, Miszalskietal2013, Tocknelletal2014, Huangetal2016, Sahaietal2016, RechyGarciaetal2017, GarciaSeguraetal2016, Dopitaetal2018, Fangetal2018, KameswaraRaoetal2018, Lagadec2018, AliDopita2019, Derlopaeta2019, Jonesetal2019jets, Miszalskietal2019, Oroszetal2019, Scibellietal2019, Guerreroetal2020, MonrealIberoetal2020, RechyGarciaetal2020, Soker2020Galax, Tafoyaetal2020, Zouetal2020, Guerreroetal2021}, a small selection from many more papers). Observations show a link between the presence of a binary central star and shaping by jets (e.g., \citealt{Boffinetal2012, Miszalskietal2013, Miszalskietal2018a}). This link also includes post-asymptotic giant branch (AGB) stars that might not form a PN (e.g., \citealt{Thomasetal2013, Bollenetal2017, VanWinckel2017, Bollenetal2020, Bollenetal2021}). We clarify that we refer to any bipolar outflow, i.e., two opposite polar outflows with a mirror symmetry about the equatorial plane, as jets. The jets might be narrow, or the half opening angle of each jet might be large, even close to $90^\circ$. Likewise, the outflow in the jets might be continuous, periodic, or stochastic. We still refer to the polar outflow as a jet. The main aim of hydrodynamical simulations of jets in PNe is to show that jets can account for the different morphological features (e.g., \citealt{LeeSahai2004, Dennisetal2009, Leeetal2009, HuarteEspinosaetal2012, Balicketal2013, Akashietal2015, Balicketal2017, Akashietal2018, Balicketal2018, EstrellaTrujilloetal2019, RechyGarciaetal2019, Balicketal2020}). These and many other simulations have shown that jets can account for a very rich variety of morphologies.
One of the key advantages of jets is that they allow one to make use of the energy source that results from mass accretion onto the companion. They introduce axially-symmetric flows that can affect the descendant nebula in many ways, depending, among other factors, on the intensity of the jets, their duration, and when their activity phase takes place. In this study we consider the jets to be weak, of short duration, and to take place before the main nebula ejection. Our present goal is to show that jets can form `Ears' in elliptical PNe. By ears we refer to two opposite protrusions from the main PN shell. Ears differ from bipolar lobes by three main properties. (1) Ears are smaller than the main inner shell from which they protrude. Most bipolar lobes are larger than the inner main shell. (2) An ear's cross section (perpendicular to the symmetry axis) monotonically decreases outward, i.e., as we move from its base at the main inner shell to its tip. Most bipolar lobes, on the other hand, widen first, and then get narrower toward their tip. A third criterion distinguishes ears from elliptical PNe. (3) The boundary between the ears and the main nebula has a dimple (two inflection points) on each side of each ear. Like bipolar lobes, in most cases ears lie along the symmetry axis of the nebula and have emission properties or brightness different from those of the main PN shell, e.g., being fainter. By their definition, ears exist only in elliptical PNe. In that regard, and related to our study, we assume that all elliptical PNe are shaped by binary interaction, mainly by a low mass main sequence companion that enters a common envelope evolution (CEE) with an AGB star (e.g., \citealt{Soker1997Rev}). We list the 10 best examples we could find of PNe with ears. We give one or two sources for the image of each PN. The images of the first two PNe with ears are from HASH (the Hong Kong/AAO/Strasbourg H$\alpha$ planetary nebula database; \citealt{PArkeretal2016}).
In K~3-24 we identify the ears protruding to the north and to the south, while in IC~289 (also \citealt{Hajianetal1997}) the ears are to the north-west and to the south-east, and they are not exactly aligned with the central star. The PN K~3-4 (\citealt{Manchadoetal1996}) is an interesting case. Firstly, the two ears are not aligned with the center of the PN, as in IC~289. Secondly, the ears are large, and just on the border between being lobes and being ears because their width (cross section) stays constant for some distance above their base. Since their length as projected on the plane of the sky is shorter than the main shell, we term them ears (or border-ears). In M~2-53 \citep{Manchadoetal1996} we identify large ears, one in the west and one in the east. The PN NGC~6905 (\citealt{Balick1987, PhillipsRamosLarios2010}) has elongated ears. We term them ears because their width (cross section) decreases monotonically to their tips. The PN NGC~3242 has two pairs of ears along the same axis (\citealt{Schwarzetal1992}). The PN NGC~6563 (\citealt{Schwarzetal1992}) has point-symmetric ears in an `S' shape. Other PNe with ears are NGC~6852 \citep{Manchadoetal1996}, Na~1 \citep{Manchadoetal1996}, and M~2-40 \citep{Manchadoetal1996}. The formation of ears in PNe might be related to the ears in some remnants of type Ia supernovae (SNe Ia). Most likely, some of these SNe Ia exploded inside a PN, i.e., a SN inside a PN (SNIP). We take the view that in remnants of SNe Ia, as in PNe, the ears are features along the polar (symmetry) axis (e.g., \citealt{TsebrenkoSoker2013}), rather than an equatorial dense gas (e.g., \citealt{Chiotellisetal2020}). In that respect we note that \cite{Blondinetal1996} form ears in type II supernovae by assuming a circumstellar gas with a high equatorial density into which the star explodes. They obtain polar ears, but not by the action of jets.
In section \ref{sec:numerical} we describe the three-dimensional (3D) simulations and in section \ref{sec:results} we describe our results of 17 different simulations. We do not try to fit any PN particularly, but only to derive the general structure of ears, because the parameter space (jets' properties, shell properties) is very large. In section \ref{sec:Evolution} we show the evolution with time. We summarise our results in section \ref{sec:summary}. \section{NUMERICAL SET-UP} \label{sec:numerical} \subsection{The numerical scheme and the jets} \label{subsec:Jets} We use version 4.2.2 of the hydrodynamical FLASH code \citep{Fryxell2000} with the unsplit PPM (piecewise-parabolic method) solver to perform our 3D hydrodynamical simulations. FLASH is an adaptive-mesh refinement (AMR) modular code used for solving hydrodynamics and magnetohydrodynamics problems. We do not include radiative cooling in the simulations because the interaction takes place in a dense region close to the binary system, such that some zones are optically thin while others are not. The inclusion of radiative transfer in this 3D complicated flow is too demanding. We instead vary the values of the adiabatic index $\gamma$. We employ a full 3D AMR (7 levels; $2^{9}$ cells in each direction) using a Cartesian grid $(x,y,z)$ with outflow boundary conditions at all boundary surfaces. We take the $z=0$ plane to be in the equatorial plane of the binary system, which is also the equatorial plane of the nebula. We simulate the whole space (the two sides of the equatorial plane). In most simulations the size of the grid is $(4\times 10^{16}~\rm{cm})^{3}$. In two simulations we take twice as large a grid to follow the evolution to later times. At time $t=0$ we fill the grid with a spherical wind with velocity of $v_{\rm AGB}= 20 ~\rm{km} ~\rm{s}^{-1}$ and a mass loss rate $\dot M_{\rm AGB}=10^{-6} M_\odot ~{\rm yr}^{-1}$. We term this wind a regular AGB wind. 
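For orientation, the grid numbers above fix the resolution of the simulations; the following back-of-the-envelope sketch (simple arithmetic only, no simulation) spells out the implied cell size:

```python
# Resolution implied by the set-up above: a (4e16 cm)^3 box covered by
# 2^9 cells per direction at the finest AMR level.
L_box = 4.0e16         # box side, cm
n_cells = 2 ** 9       # cells per direction
dx = L_box / n_cells   # finest cell size, cm
r_inj = 4.0e14         # size of the inner jet-launching zone, cm

print(dx, r_inj / dx)  # ~7.8e13 cm per cell; the launching zone spans ~5 cells
```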
We launch the two opposite jets from the inner $4\times 10^{14} ~\rm{cm}$ zone along the $z$-axis (at $x=y=0$) and within a half opening angle of $\alpha_{\rm j}$. We choose two values of $\alpha_{\rm j}$: one representing narrow jets, as observed in many young stellar objects, and one representing wide jets, as observed in some post-AGB binary systems (e.g., \citealt{Bollenetal2021}). The injection temperature of the jets is $10^4 ~\rm{K}$, a typical temperature of warm gas. The jets are active during the time period from $t=0$ to $t_{\rm j}$ and the ejection of the dense spherical shell starts one year after $t_{\rm j}$. These time scales are comparable to the dynamical time of the CEE, which we assume is the timescale during which the companion enters the envelope. The jets' initial velocity is $v_{\rm j} = 100$ or $v_{\rm j} = 200 ~\rm{km} ~\rm{s}^{-1}$, which is about the escape velocity from a low-mass main sequence star or from a brown dwarf (the companion star). The mass-loss rate into the two jets together is $\dot M_{\rm 2j} \simeq 10^{-4} - 10^{-5} M_\odot ~\rm{yr}^{-1}$. These mass loss rates are about $0.01-0.1$ times the rates that \cite{Shiberetal2019} take. The reasons for the lower values here are that we take a lower mass companion and that the giant in our case is a more extended AGB star with a lower envelope density compared with the red giant branch model of \cite{Shiberetal2019}. For numerical reasons (to avoid very low densities) we inject a very weak slow wind in the directions where we do not launch the jets, i.e., in the sector $\alpha_{\rm j}<\theta<90^\circ$ in each hemisphere (for more numerical details see \citealt{AkashiSoker2013}).
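The jet parameters above translate directly into the mass and kinetic energy injected per episode, via $E_{\rm kin} = \frac{1}{2} \dot M_{\rm 2j} v_{\rm j}^2 t_{\rm j}$. The sketch below evaluates this for an S1-like parameter set ($\dot M_{\rm 2j} = 3.8\times10^{-5} M_\odot ~\rm{yr}^{-1}$, $v_{\rm j} = 100 ~\rm{km} ~\rm{s}^{-1}$, $t_{\rm j} = 1 ~\rm{yr}$); the physical constants are approximate and the numbers are illustrative only:

```python
# Jet kinetic energy E = (1/2) * Mdot_2j * v_j^2 * t_j, evaluated in cgs units
# for an S1-like case (illustrative values; constants approximate).
MSUN = 1.989e33               # solar mass, g
YR = 3.156e7                  # year, s
mdot_2j = 38e-6 * MSUN / YR   # mass-loss rate into both jets, g/s
v_j = 100 * 1.0e5             # jet velocity, cm/s
t_j = 1 * YR                  # duration of the jet episode, s

m_jets = mdot_2j * t_j                # total mass carried by the two jets, g
E_kin = 0.5 * m_jets * v_j ** 2       # erg
print(m_jets / MSUN, E_kin)           # ~3.8e-5 Msun and ~3.8e42 erg
```

Energies of order $10^{42}-10^{43} ~\rm{erg}$ are indeed small compared with the binding energy of an AGB envelope, consistent with the weak-jets assumption of this study.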
\subsection{The spherical dense shell} \label{subsec:shell} In most of our previous studies (e.g., \citealt{AkashiSoker2013, Akashietal2018, AkashiSoker2018}) we injected the jets into a dense spherical shell (formed by an intensive wind), which itself was embedded in a much less dense wind (formed by the regular AGB wind). Namely, the jets active phase follows the high mass loss rate that formed the dense shell (jets are younger than the dense PN shell). Such interactions can form large bipolar lobes with different properties. In this study we have simulated about twenty different cases where we launched jets into a dense shell. We failed to obtain ears. Namely, we could not form polar lobes that are smaller than the dense shell and that have a cross section that decreases with distance from the center (for definition of ears see section \ref{sec:intro}). These failures led us to conduct simulations where we launch the dense shell after we launch the jets. Such a case might be, for example, when the companion accretes mass from the AGB progenitor of the PN and launches jets. Later it enters a common envelope evolution, a process that ejects the dense shell. The jets, therefore, interact with the less-dense (regular AGB) wind that preceded the ejection of the dense shell. We assume that the main sequence companion that launches the relatively weak jets is of low mass $M_2 \simeq 0.1-0.3 M_\odot$ (in most cases; might even be a brown dwarf), and therefore after it enters the CEE it ejects an elliptical nebula rather than a bipolar nebula or a dense equatorial torus \citep{Soker1997Rev}. This assumption is compatible with the observation that ears are present mainly in elliptical PNe. In assuming that a low mass companion ejects the elliptical shell we have in mind the PN A30 that has an almost spherical morphology (not including the central knots) and has a central binary system with an orbital period of 1.06 days \citep{Jacobyetal2020}. 
However, we note that in the case of K3-24 there is a dense torus; in this case we expect the companion mass to be $M_2 \ga 0.3 M_\odot$. We eject the dense (intensive) spherical wind that forms the dense shell starting one year after the end of the jet-launching episode, i.e., at $t=t_{\rm j}+1 ~\rm{yr}$, and continue with this mass loss until $t_{\rm w}=60 ~\rm{yr}$. We inject the dense wind at radius $r_{\rm w,in} = 4\times10^{14} ~\rm{cm}$. The mass-loss rate and velocity of the spherical dense wind are $\dot M_{\rm w} = 10^{-3} M_\odot ~\rm{yr}^{-1}$ and $v_{\rm w} = 20 ~\rm{km} ~\rm{s}^{-1}$, respectively. In one case we inject the regular AGB wind rather than a dense wind. The simulation of a dense shell that is younger than the jets, i.e., a post-jets shell, is the main new ingredient of our study with respect to our group's previous studies. From observations we know that jets can be younger or older than the dense shell that was presumably ejected in a common envelope evolution (e.g., \citealt{Tocknelletal2014}). In most cases the age difference between the jets and the dense shell is very small and we can refer to them as coeval \citep{Guerreroetal2020}. We summarise the simulations we perform in Table \ref{Table:cases}.
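As a sketch of the resulting ambient structure (our own consistency check, assuming steady spherical winds), the density of each wind follows $\rho(r)=\dot M/(4\pi r^2 v)$. With the regular AGB wind values we quote for simulation S1L ($\dot M_{\rm AGB}=10^{-6} M_\odot ~\rm{yr}^{-1}$, $v_{\rm AGB}=20 ~\rm{km~s^{-1}}$), this reproduces the $\la 2.5\times10^{-20} ~\rm{g~cm^{-3}}$ densities at $r \ga 10^{16} ~\rm{cm}$ noted in the caption of Fig. \ref{fig:3Gammas}:

```python
import math

MSUN_G = 1.989e33   # solar mass in g
YR_S = 3.156e7      # year in s

def wind_density(mdot_msun_yr, v_km_s, r_cm):
    """Steady spherical wind: rho = Mdot / (4 pi r^2 v), in g/cm^3."""
    mdot = mdot_msun_yr * MSUN_G / YR_S   # g/s
    v = v_km_s * 1.0e5                    # cm/s
    return mdot / (4.0 * math.pi * r_cm**2 * v)

# Dense (post-jets) wind at its injection radius of 4e14 cm
rho_dense = wind_density(1e-3, 20.0, 4e14)
# Regular AGB wind at r = 1e16 cm
rho_agb = wind_density(1e-6, 20.0, 1e16)
print(f"dense wind at 4e14 cm: {rho_dense:.2e} g/cm^3")
print(f"AGB wind at 1e16 cm:  {rho_agb:.2e} g/cm^3")
# The AGB value comes out at ~2.5e-20 g/cm^3, matching the figure caption.
```
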
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Simulation & $\dot M_{\rm 2j}$ & $v_{\rm j}$ & $t_{\rm j}$ & $\alpha_{\rm j}$ & $\gamma$ & Figures & $i_{\rm ears}$ \\
 & $10^{-6} M_\odot ~\rm{yr}^{-1}$ & $~\rm{km} ~\rm{s}^{-1}$ & $~\rm{yr}$ & & & & \\
\hline
S1 & $38$ & $100$ & $1$ & $15^\circ$ & $1.1$ & \ref{fig:sixteen_ears}, \ref{fig:3Gammas} & $40^\circ$ \\ \hline
S2 & $38$ & $100$ & $1$ & $15^\circ$ & $1.33$ & \ref{fig:sixteen_ears}, \ref{fig:3Gammas} & $35^\circ$ \\ \hline
S3 & $38$ & $100$ & $1$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears}, \ref{fig:3Gammas} & $25^\circ$ \\ \hline
S4 & $9.5$ & $200$ & $1$ & $15^\circ$ & $1.1$ & \ref{fig:sixteen_ears}, \ref{fig:evolS4} & $35^\circ$ \\ \hline
S5 & $9.5$ & $200$ & $1$ & $15^\circ$ & $1.33$ & \ref{fig:sixteen_ears} & $30^\circ$ \\ \hline
S6 & $9.5$ & $200$ & $1$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears}, \ref{fig:evolS6} & $30^\circ$ \\ \hline
S7 & $9.5$ & $200$ & $2$ & $15^\circ$ & $1.1$ & \ref{fig:sixteen_ears} & $30^\circ$ \\ \hline
S8 & $152$ & $50$ & $1$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & $40^\circ$ \\ \hline
S9 & $9.5$ & $200$ & $1$ & $50^\circ$ & $1.1$ & \ref{fig:sixteen_ears} & $40^\circ$ \\ \hline
S10 & $38$ & $100$ & $1$ & $50^\circ$ & $1.1$ & \ref{fig:sixteen_ears} & $35^\circ$ \\ \hline
S11 & $9.5$ & $200$ & $2$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($35^\circ$) \\ \hline
S12 & $9.5$ & $200$ & $3$ & $15^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($10^\circ$) \\ \hline
S13 & $38$ & $100$ & $1$ & $50^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($20^\circ$) \\ \hline
S14 & $38$ & $100$ & $1$ & $50^\circ$ & $1.33$ & \ref{fig:sixteen_ears} & ($20^\circ$) \\ \hline
S15 & $9.5$ & $200$ & $1$ & $50^\circ$ & $1.33$ & \ref{fig:sixteen_ears} & ($40^\circ$) \\ \hline
S16 & $9.5$ & $200$ & $1$ & $50^\circ$ & $1.67$ & \ref{fig:sixteen_ears} & ($30^\circ$) \\ \hline
S1L & $38$ & $100$ & $1$ & $15^\circ$ & $1.1$ & \ref{fig:S1L} & $40^\circ$ \\ \hline
\end{tabular}
\caption{Summary of the 17
simulations we present in the paper. The columns list, from left to right and for each simulation, its name, the mass-loss rate of the two jets combined $\dot M_{\rm 2j}$, the velocity of the jets $v_{\rm j}$, the time period of the jets' activity $t_{\rm j}$, the half opening angle of the jets $\alpha_{\rm j}$, and the adiabatic index $\gamma$. In the next column we list the figures presenting each simulation. In all figures besides Fig. \ref{fig:incliden}, which we present later on, the symmetry axis is on the plane of the sky, i.e., $i=90^\circ$. In the last column we list the critical inclination angle $i_{\rm ears}$ (where $i$ is the angle between the PN symmetry axis and the line of sight) for each case, below which the ears disappear because they are projected on the main shell. In all cases we start at $t=0$ with a regular AGB wind that fills the grid, and we start to inject the dense shell one year after the end of the jets' activity, i.e., at $t=t_{\rm j} + 1 ~\rm{yr}$. In simulation S1L we inject a regular AGB wind instead of a dense wind during the post-jets phase. } \label{Table:cases} \end{table*} \section{Results} \label{sec:results} \subsection{A gallery of images} \label{subsec:gallery} We start by comparing the 16 simulations at times when their bipolar structures reach about the same size as each other. In Fig. \ref{fig:sixteen_ears} we present the artificial intensity maps of these 16 cases. The artificial intensity map is a map of the integration of the density squared along the line of sight, here along the $y$ axis. In all simulations we start to blow the dense shell a year after we turn off the jets. Namely, the jets are older than the main nebular shell. For other properties see Table \ref{Table:cases}. \begin{figure*}[ht!] \includegraphics[trim=0.6cm 10.2cm 0.0cm 2.4cm ,clip, scale=0.95]{sixteen_ears.pdf} \\ \caption{Artificial intensity maps for 16 models.
Each artificial intensity map is a map of the integration of the density squared along the $y$ axis (the line of sight). In all cases the symmetry axis of the two opposite jets is at $(x,y)=(0,0)$, namely, through the center and along the $z$ axis. All panels are square with sizes of $4 \times 10^{16} ~\rm{cm}$. The colors depict the artificial intensity values according to the color bars in the range of $10^{-23} ~\rm{g}^2 ~\rm{cm}^{-5} - 10^{-16} ~\rm{g}^2 ~\rm{cm}^{-5}$. We consider simulations S1 to S7 to yield ears, simulations S9, S15 and S16 to be marginal, and simulations S10-S14 to yield no ears. } \label{fig:sixteen_ears} \end{figure*} We recall our definition of ears as two opposite protrusions from the main shell that (1) are smaller than the main inner shell from which they protrude, (2) have a cross section (perpendicular to the symmetry axis) that monotonically decreases outward, and (3) have a boundary with the main nebula that shows a dimple (two inflection points) on each side of each ear. We clearly identify ears in simulations S1 to S7, although in simulation S7 the ears are almost too large and are very faint. Cases S9, S15, and S16 are marginal, as the ears do not have as clear a shape as in simulations S1-S7 and are fainter. In cases S10-S14 we do not identify the faint protrusions as ears. Our first conclusion is that the flow sequence of weak jets that interact with a regular AGB wind, followed by the ejection of a dense shell (an intensive wind), can lead to ear formation, but not necessarily so. \subsection{The role of the adiabatic index} \label{subsec:Adiabatic} In Fig. \ref{fig:3Gammas} we compare the density, pressure, and temperature maps in the meridional plane $y=0$ of simulations S1, S2, and S3 (from left to right), which differ only in the value of the adiabatic index $\gamma$. \begin{figure*}[ht!]
\includegraphics[trim=0.8cm 8.1cm 0.0cm 2.0cm ,clip, scale=0.90]{zzzGamma1.pdf} \\ \caption{Comparing the density (upper row), pressure (middle row), and temperature (lower row) maps in the meridional plane of three simulations that differ only in the value of the adiabatic index. Left column: simulation S1 with $\gamma=1.1$; middle column: simulation S2 with $\gamma=1.333$; right column: simulation S3 with $\gamma=1.67$. All panels are square with sizes of $4 \times 10^{16} ~\rm{cm}$ and at $t=44 ~\rm{yr}$. The numbers on the axes are in units of $10^{15}~\rm{cm}$. Densities are according to the color bars in the range of $10^{-19} ~\rm{g} ~\rm{cm}^{-3}- 10^{-15} ~\rm{g} ~\rm{cm}^{-3}$, while the pressure is in the range of $10^{-8} ~\rm{erg} ~\rm{cm}^{-3}- 10^{-5} ~\rm{erg} ~\rm{cm}^{-3}$. Note that the densities in the zone $r \ge10 \times 10^{15} ~\rm{cm}$ are $\le 2.5 \times 10^{-20} ~\rm{g} ~\rm{cm}^{-3}$ (decreasing as $r^{-2}$) and so appear all blue. The temperature ranges are from $1000 ~\rm{K}$ (blue) to $3.5\times 10^4 ~\rm{K}$ in the lower-left panel, to $6.3\times 10^4 ~\rm{K}$ in the lower-middle panel, and to $7.5\times 10^4 ~\rm{K}$ in the lower-right panel. } \label{fig:3Gammas} \end{figure*} The adiabatic index plays a role in both increasing and decreasing the temperature. A higher value of $\gamma$ implies a steeper change in pressure as the density changes. In these three simulations the jets start highly supersonic, with a Mach number of $\mathcal{M}_{\rm j} = 6.7$. In the post-shock region the Mach number and temperature increase as $\gamma$ increases. Indeed, in the lower three panels of Fig. \ref{fig:3Gammas} we see that the higher the value of $\gamma$, the higher the temperature of the post-shock jets' gas (note that the red color stands for a higher temperature as $\gamma$ increases across the three panels). On the other hand, as the gas expands, a higher value of $\gamma$ implies a more rapid loss of pressure; this reduces the expansion velocity.
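To illustrate the trend of the post-shock temperature with $\gamma$ (a sketch of our own, using the standard Rankine--Hugoniot jump conditions rather than any output of the simulations), one can evaluate the temperature jump across a normal shock for the jet Mach number $\mathcal{M}_{\rm j}=6.7$ and the three values of $\gamma$:

```python
def temperature_jump(gamma, mach):
    """Rankine-Hugoniot temperature ratio T2/T1 across a normal shock."""
    m2 = mach * mach
    num = (2.0 * gamma * m2 - (gamma - 1.0)) * ((gamma - 1.0) * m2 + 2.0)
    den = (gamma + 1.0) ** 2 * m2
    return num / den

T1 = 1.0e4  # injection temperature of the jets, in K
for gamma in (1.1, 1.33, 1.67):
    ratio = temperature_jump(gamma, 6.7)
    print(f"gamma={gamma}: T2/T1 = {ratio:5.1f}, T2 = {ratio * T1:.2e} K")
# The jump grows monotonically with gamma (roughly 3, 8, and 15),
# in line with the hotter post-shock jet gas seen for higher gamma.
```

The values for the higher adiabatic indices overshoot the map maxima in Fig. \ref{fig:3Gammas}, as expected, since the simulated gas also expands and cools after the shock.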
For example, in a gas that is set to expand freely into an empty tube, the maximum velocity at the front of the expanding gas is $2 C_{0} / (\gamma-1)$, where $C_0$ is the initial sound speed of the gas. Namely, the maximum additional velocity of the expanding gas is proportional to $(\gamma-1)^{-1}$. In simulations where the jets are active for a long time, the effect of the higher post-shock pressure for higher values of $\gamma$ dominates, and a flow with a higher value of $\gamma$ inflates larger bubbles. The present flow structure has short-lived and weak jets, a slow pre-jets wind, and a slow post-jets intensive wind (the dense shell), i.e., a Mach number of only $\mathcal{M}_{\rm s} = 1.3$ for both winds. The result is that the effect of the faster cooling for higher values of $\gamma$ dominates in many parts of the flow. Indeed, we see that the high-pressure region (red color in the middle row of Fig. \ref{fig:3Gammas}) gets smaller as $\gamma$ increases, and that the temperature in the center is the highest for the lowest value of $\gamma$. As well, the hot thin shell is larger for the lower value of $\gamma=1.1$ and smaller for $\gamma=1.67$, in particular in the equatorial plane. Another comparison is of simulations S7 and S11. In these two simulations the jets have the same power as in simulations S1-S6, but are active for $t_{\rm j}=2 ~\rm{yr}$ instead of for only $t_{\rm j}=1 ~\rm{yr}$. Namely, the jets deposit twice as much energy into the lobes/ears they inflate with respect to simulations S1-S6 (we discuss this further in section \ref{subsec:EnergyMomentum}). In simulation S11, which has the larger value of $\gamma=1.67$, the jets inflate narrower lobes that form a bipolar PN rather than ears. These lobes are not ears because their cross section does not decrease monotonically as we move outward. In simulation S7, for which $\gamma=1.1$, the lobes are wide and almost larger than the dense shell. These are nonetheless ears.
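The $(\gamma-1)^{-1}$ scaling above can be made concrete with a short numerical sketch (ours, not the paper's code; we assume a mean molecular weight $\mu=0.6$ for the sound speed of the $10^4 ~\rm{K}$ jet gas, which is an assumption of this sketch only):

```python
import math

K_B = 1.381e-16     # Boltzmann constant, erg/K
M_H = 1.6726e-24    # hydrogen mass, g
MU = 0.6            # assumed mean molecular weight (our assumption)
T0 = 1.0e4          # initial jet temperature, K

def max_expansion_speed(gamma):
    """Front speed 2*c0/(gamma-1) for free expansion into vacuum, in km/s."""
    c0 = math.sqrt(gamma * K_B * T0 / (MU * M_H))  # adiabatic sound speed, cm/s
    return 2.0 * c0 / (gamma - 1.0) / 1.0e5

for gamma in (1.1, 1.33, 1.67):
    print(f"gamma={gamma}: v_max ~ {max_expansion_speed(gamma):6.1f} km/s")
# The lowest adiabatic index allows by far the fastest free expansion
# (~250 km/s for gamma=1.1 versus ~45 km/s for gamma=1.67),
# consistent with the larger hot shell seen for gamma=1.1.
```
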
\subsection{The role of energy and momentum of the jets} \label{subsec:EnergyMomentum} There are five pairs and one triplet of simulations with the same adiabatic index $\gamma$ and the same power and duration of the jets, but different momentum. The pairs are (S1,S4), (S2,S5), (S10,S9), (S13,S16), and (S14,S15), where the first simulation in each pair has twice the momentum flux of the second simulation in the pair. Overall, in the simulations with a higher jets' momentum, all other parameters being similar, the lobes/ears are more elongated. As well, in the marginal cases (S10,S9) and (S13,S16) the higher momentum forms a wider lobe/ear in the far zone (far from the center), and therefore the cross section of the lobe/ear does not decrease monotonically. This prevents the lobes from being defined as ears. In the triplet (S8,S3,S6) the jets in simulation S8 have twice the momentum of those in simulation S3, which in turn have twice the momentum of the jets in simulation S6. While in simulations S3 and S6 we do obtain ears, in simulation S8 the jets' velocity of $v_{\rm j} =50 ~\rm{km} ~\rm{s}^{-1}$ is too low for the jets to inflate ears or lobes, and we obtain an elliptical nebula. In simulations S7 and S11 the jets are active for twice as long, and in simulation S12 for three times as long, as in the other simulations. In these simulations, in particular S11 and S12, the jets inflate lobes that are too large to be defined as ears. As expected, energetic jets form bipolar nebulae. \subsection{The role of jets' opening angle} \label{subsec:Angle} There are simulations where we inject wide jets with a half opening angle of $\alpha_{\rm j}=50^\circ$ instead of $\alpha_{\rm j}=15^\circ$. Pairs of otherwise identical simulations with narrow and wide jets, in this order, are (S1, S10), (S4,S9), (S3,S13), (S2,S14), (S5,S15), and (S6,S16).
We learn from these comparisons that too-wide jets form complicated faint structures in the polar directions that are not what we refer to as ears. Basically, the wide jets inflate large lobes, which, because of instabilities, form a bumpy outer boundary of the ear, as well as a cross section that does not always decrease monotonically outward. These effects prevent the inflated lobe from obeying our definition of ears. \subsection{The appearance of ears} \label{subsec:appearance} The interaction of the jets with the regular pre-jets AGB wind forms the ears. The post-jets dense wind forms the dense nebular shell, but otherwise plays no hydrodynamical role in forming the ears. The dense shell serves to form nebulae similar to most of the observed PNe with ears, where the ears are fainter than the main nebula. If there is no dense wind, but rather the regular AGB wind continues in the post-jets phase, the ears might merge with the nebula to form an elliptical PN without ears. We demonstrate this with simulation S1L. In simulation S1L the jets' properties are as in simulation S1, but instead of injecting a post-jets dense wind, i.e., with a mass-loss rate and velocity of $\dot M_{\rm w} = 10^{-3} M_\odot ~\rm{yr}^{-1}$ and $v_{\rm w} = 20 ~\rm{km} ~\rm{s}^{-1}$, respectively, we inject the regular AGB wind with a mass-loss rate and velocity of $\dot M_{\rm AGB} = 10^{-6} M_\odot ~\rm{yr}^{-1}$ and $v_{\rm AGB} = 20 ~\rm{km} ~\rm{s}^{-1}$, respectively. We present the artificial intensity maps of simulation S1L in Fig. \ref{fig:S1L}. We do indeed form ears. However, these ears are only marginally fainter than the main nebular outskirts, whereas in simulation S1 (upper-left panel of Fig. \ref{fig:sixteen_ears}) the ears are much fainter than the nebula. We suspect that in the case of simulation S1L the ears will merge with the main nebula at a later phase, after ionisation starts, and will form an elliptical PN without ears.
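The artificial intensity maps used throughout (integration of the density squared along the line of sight) can be sketched as a simple column sum over a gridded density cube. The following is a minimal illustration of this post-processing step, with a synthetic wind-plus-shell density array standing in for the actual simulation output (the density profile below is illustrative, not taken from the simulations):

```python
import numpy as np

# Synthetic stand-in for a simulation density cube, rho(x, y, z) in g/cm^3:
# an r^-2 wind plus a denser spherical shell, on a small grid.
n = 64
half = 2.0e16                              # half-size of the box, cm
x, y, z = np.meshgrid(*(np.linspace(-half, half, n),) * 3, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2) + 1e14     # soften to avoid r = 0
rho = 1e-20 * (1e16 / r) ** 2              # regular wind
rho += 1e-18 * np.exp(-((r - 1e16) / 2e15) ** 2)  # dense shell

# Artificial intensity: integrate rho^2 along the line of sight (the y axis).
dy = 2.0 * half / (n - 1)                  # cell size along the integration axis
intensity = np.sum(rho**2, axis=1) * dy    # units of g^2 cm^-5, shape (n, n)

print(f"intensity map shape: {intensity.shape}")
print(f"peak artificial intensity: {intensity.max():.2e} g^2 cm^-5")
```

The resulting two-dimensional map is what the figures display, after mapping the values to a logarithmic color bar.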
In summary, to form ears that are fainter than the nebula (or a nebula brighter than the ears) we find that we should increase the wind mass-loss rate in the post-jets phase. \begin{figure*}[ht!] \includegraphics[trim=0.8cm 0.1cm 0.0cm 0.0cm ,clip, scale=0.20]{ears_22_proj.pdf} \\ \caption{The artificial intensity maps of simulation S1L, which has the same parameters as simulation S1 but with a regular AGB wind in the post-jets phase (Table \ref{Table:cases}). The time is the same as in the upper-left panel of Fig. \ref{fig:sixteen_ears} for simulation S1. } \label{fig:S1L} \end{figure*} \subsection{The critical angle for ears} \label{subsec:critical} Finally, we refer to the inclination angle. In all figures besides Fig. \ref{fig:incliden} we present the images with an inclination angle of $i=90^\circ$, i.e., the symmetry axis of the PN (through the two ears) is in the plane of the sky. In Fig. \ref{fig:incliden} we present the artificial intensity maps of two cases at two inclination angles, as we indicate in the four panels. These demonstrate how the ears become less prominent as the inclination angle decreases. \begin{figure*}[ht!] \includegraphics[trim=0.9cm 11.3cm 0.0cm 1.0cm ,clip, scale=0.8]{S1S7_50_70.pdf} \\ \caption{Artificial intensity maps for simulations S1 (left column) and S7 (right column) at two inclination angles (the angle between the symmetry axis of the PN and the line of sight). Each artificial intensity map is a map of the integration of the density squared along the line of sight for an inclination angle $i$, as indicated in the inset. The ears disappear as the inclination angle decreases. } \label{fig:incliden} \end{figure*} For small inclination angles of $i < i_{\rm ears}$ the ears are projected on the main nebula and we cannot notice them in the morphology. For each simulation we examine the inclination angle, the critical inclination angle $i_{\rm ears}$, at which the ears disappear at the end of the simulation.
Namely, we can observe ears only for $i>i_{\rm ears}$. We list these values (to an accuracy of $5^\circ$) in the last column of Table \ref{Table:cases}. We list the critical angle also for cases where we see no ears; in these cases the angle appears inside parentheses and refers to the disappearance of the polar protrusions even if they are not ears. Because in all our simulations $i_{\rm ears} \la 35^\circ -40^\circ$, a random orientation of the PN symmetry axis implies that we miss ears because of projection on the main PN shell in only $\simeq 20 \%$ of the cases. \section{Evolution} \label{sec:Evolution} We present the evolution of two simulations. In Fig. \ref{fig:evolS4} we present, from top to bottom, the density, the temperature, and the velocity maps in the meridional plane $y=0$ of simulation S4 at three times, from left to right. In the bottom row we present the artificial intensity map (integration of the density squared along the line of sight, here along $y$). As we observe at $t=152 ~\rm{yr}$, when the ears reach the edge of the grid, the ears maintain their identity. As the entire nebula is supersonic, with Mach numbers $\mathcal{M} > 3$ in most parts, and most of the motion is radial, the nebula will keep its structure at later times as well (unless too-massive circumstellar material farther out changes that structure). This simulation shows that for some physical parameters the ears can exist for hundreds of years and more. \begin{figure*}[ht!] \includegraphics[trim=0.9cm 2.3cm 0.0cm 1.0cm ,clip, scale=0.8]{s4_LG.pdf} \\ \caption{Evolution of simulation S4 at three times, from left to right, $t=54 ~\rm{yr}$, $t=104 ~\rm{yr}$ and $t=152~\rm{yr}$. We present the density (upper row), temperature (second row), and velocity magnitude according to the colors with arrows indicating the flow direction (third row), all in the meridional plane $y=0$ and with the color-bars in cgs units.
In the lower row we present the evolution of the artificial intensity map (in units of $~\rm{g}^2 ~\rm{cm}^{-5}$ according to the color-bar), where the first panel is as in Fig. \ref{fig:sixteen_ears}. } \label{fig:evolS4} \end{figure*} In Fig. \ref{fig:evolS6} we present the evolution of simulation S6. The same discussion as above for simulation S4 holds for this case as well. Basically, although our simulations of both S4 and S6 cover less than 200 years, at the end of each simulation the flow is radial and supersonic, and we expect the morphological feature of the ears to persist for thousands of years. \begin{figure*}[ht!] \includegraphics[trim=1.0cm 3.cm 0.0cm 0.0cm ,clip, scale=0.8]{evol_LG.pdf} \\ \caption{Similar to Fig. \ref{fig:evolS4} but for simulation S6 and at the three times of $88 ~\rm{yr}$, $136 ~\rm{yr}$ and $171~\rm{yr}$. } \label{fig:evolS6} \end{figure*} Having said all this, we did not follow the nebula to the phases when the central star blows a fast ($\gg 100 ~\rm{km} ~\rm{s}^{-1}$) wind and starts to ionise the nebula. The interaction of the fast wind with the dense shell influences the evolution at later times (e.g., \citealt{Perinottoetal2004}); e.g., it suffers from instabilities that destroy the smooth structure of the dense shell (e.g., \citealt{ToalaArthur2016}). We expect that the dense shell will nonetheless contain the fast wind, such that the fast wind will not affect the ears. The ionisation of the nebula will increase the sound speed and will somewhat change the flow (e.g., \citealt{Perinottoetal2004, Schonberneretal2010}). This might erase small and faint ears, or smear the differences between the ears and the main nebula, as in simulation S1L (Fig. \ref{fig:S1L}), but in most cases we expect the ears to survive these late evolutionary phases. Future simulations should examine the role of the fast wind and the ionising radiation to test whether the ears survive as we expect.
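The $\simeq 20\%$ projection estimate of section \ref{subsec:critical} follows from solid-angle weighting: for a randomly oriented symmetry axis, the probability of observing an inclination below $i_{\rm ears}$ is $1-\cos i_{\rm ears}$. A minimal check (our own sketch):

```python
import math

def missed_fraction(i_ears_deg):
    """Fraction of random axis orientations with inclination i < i_ears.
    For an isotropically oriented axis, P(i < i_c) = 1 - cos(i_c)."""
    return 1.0 - math.cos(math.radians(i_ears_deg))

for i_c in (25, 30, 35, 40):
    print(f"i_ears = {i_c} deg -> ears hidden in {missed_fraction(i_c):.0%} of cases")
# For i_ears ~ 35-40 deg the hidden fraction is ~18-23%, i.e., roughly 20%.
```
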
\section{Summary} \label{sec:summary} The morphologies of a small fraction of elliptical PNe contain two opposite protrusions from the main PN shell that are smaller than the main PN shell, have a cross section that decreases monotonically outward, and whose boundary with the main nebula has a dimple (two inflection points) on each side of each ear. These two opposite protrusions are termed `ears' (examples are in section \ref{sec:intro}). Our goal was to determine the outflow structure by which jets can inflate ears. In many trials that we do not present here, we could not obtain ears when we launched the jets after we blew the main dense shell. Namely, the jets that interact with the dense shell either do not inflate any protrusions, or, if they do inflate protrusions, these are large lobes that form bipolar PNe. We therefore simulated here a flow structure where low-energy jets (short-lived and not too powerful) interact with a regular AGB wind, and the dense PN shell is younger than the jets (for details see section \ref{sec:numerical}). In these simulations, which we summarise in Table \ref{Table:cases}, we started to blow the intensive wind that forms the dense inner shell one year after the jets ceased. We assumed that the main sequence companion is of low mass, $M_2 \simeq 0.1-0.3 M_\odot$ (it might even be a brown dwarf), and therefore launches the weak jets that form the ears; after it enters the CEE it ejects an elliptical nebula rather than a bipolar nebula or a dense equatorial torus. This assumption is compatible with the presence of ears only in elliptical PNe. In many cases we expect that the low-mass companion will not survive the CEE (it will spiral in all the way to the core and be tidally destroyed). Even if it survives, its low mass implies that it is hard to detect such companions in the centres of PNe. We referred to the PN A30 as an example of an elliptical PN with a post-CEE central binary system \citep{Jacobyetal2020}.
The full parameter space is huge, as we can vary the jets' opening angle, the mass-loss rate into the jets and their velocity, the properties of the regular AGB wind into which the jets expand, and the adiabatic index (for the influence of the adiabatic index see Fig. \ref{fig:3Gammas}). Indeed, of the 16 simulations we conducted, we identify clear ears in seven, S1-S7 in Fig. \ref{fig:sixteen_ears}. We found that ears do not form under all conditions: the jets cannot be too energetic, too wide, or too slow. At the end of our simulations the outflow is radial and supersonic, so the ears maintain their morphology for hundreds of years (section \ref{sec:Evolution}; Figs. \ref{fig:evolS4}, \ref{fig:evolS6}), and probably much longer. Our main finding is that weak and short-lived jets that a companion launches before it enters the CEE might form ears in elliptical PNe. We can state this from the perspective of the large parameter space of the jets: jets that are weak, short-lived, and launched before the main nebular ejection lead to the formation of ears in elliptical PNe. Because the parameter space is too large to cover in one study, there is much more work to do before we can clearly reproduce specific PNe with ears. For example, we should conduct 3D hydrodynamical simulations of a binary system that launches jets as it enters a CEE, similar to the simulation by \cite{Shiberetal2019}. As well, we should continue the simulations for thousands of years and include the central fast wind and the ionisation phase of the PN. However, we think that we can confidently state that to form PNe with ears, in the binary-jet paradigm, the progenitor binary system should launch the jets shortly before it blows the dense PN shell. Such a flow structure can come from a system that enters a common envelope evolution.
The companion accretes mass through an accretion disk just before it enters the envelope of the AGB star (or even a red giant branch star), and launches jets for a short time. It then enters the envelope and ejects the envelope to form the dense shell of the descendant PN. In other words, our results are consistent with a scenario, in the frame of the binary-jet paradigm, where in PNe with ears the progenitor binary system launched the jets shortly before the system entered the common envelope evolution. We do not claim that the binary-jet scenario is the only one that can form ears. For example, the two lobes of a bipolar PN with a small inclination angle, defined as the angle between the PN symmetry axis and the line of sight (i.e., an almost pole-on PN), might appear as two ears protruding from the main nebula. The binary-jet scenario does, however, carry some expectations that we find fulfilled in some PNe, which might support this scenario. The binary-jet scenario includes the possibility that in some cases the accretion disk will precess, and so will the jets that it launches. As well, in some cases, mainly due to a more massive companion, there will be a dense equatorial outflow during the CEE phase. The PN K3-24 that we list in section \ref{sec:intro} has two pairs of ears that are not aligned perpendicular to the dense equatorial gas. This clearly suggests a binary interaction. The ``S'' shape of the ears both in K3-24 and in NGC~6563 suggests precession, which in turn suggests binary interaction. \section*{Acknowledgments} We thank an anonymous referee for very useful and detailed comments. This research was supported by a grant from the Israel Science Foundation (769/20) and a grant from the Prof. Amnon Pazy Research Foundation.
\section*{Acknowledgment} The research is based upon work supported by the Department of Defense (DOD), Naval Information Warfare Systems Command (NAVWAR), via the Department of Energy (DOE) under contract DE-AC05-00OR22725. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of the DOD, NAVWAR, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. \bibliographystyle{plain} \section{Introduction} Security operations centers (SOCs)\textemdash teams of security analysts who continually guard networks against cyber attacks\textemdash now employ widespread data collection capabilities \cite{bridges2018information} and follow a ``defense in depth'' strategy~\cite{colarik2015establishing,tirenin1999concept} that includes a tapestry of tools for blocking, alerting, logging, and providing situational awareness. To effectively defend networks and allow analysts to gain actionable insights from this wealth of SOC data, a robust research community and a burgeoning cyber tech industry are integrating machine learning (ML) into novel solutions. Common categories of tools that integrate ML to leverage SOC data effectively include the following: modern endpoint protection/anti-virus (AV), endpoint detection and response (EDR), network situational awareness/anomaly detection (AD), user and entity behavioral analytics (UEBA), security incident and event management (SIEM) systems, and security orchestration and automated response (SOAR). Gartner anticipates that by 2024, 80\% of SOCs will use ML-based tools to enhance their operations. In light of such widespread adoption, it is vital for the research community to both enumerate and address usability concerns.
While prior work has sought to understand the issues that plague SOC operations~\cite{kokulu2019matched,bridges2018information,goodall2004work,botta2007towards} and to create more effective ML tools for SOCs~\cite{arendt2015ocelot,goodall2018situ,best2010real,sopan2018building}, no prior work examines analysts' usage of ML-based tools in situ. This gap in the research is understandable because it is non-trivial to gain access to a high-fidelity testing environment and to recruit actual SOC analysts to participate in such a study. In this work, we share the results of an in situ study made possible by our sponsor, the US Navy, which purchased time at a testing center known for conducting high-fidelity cyber events\textemdash the National Cyber Range (NCR) in Orlando, Florida. The Navy also provided six analysts from their SOCs to participate in the study. With these resources at our disposal, we designed a test to identify potential usability issues in two ML-based tools\textemdash one AV tool that carves files out of network traffic and one real-time, network-level AD tool. We configured the NCR to simulate a network with $\sim$1000 IPs that included emulated users with access to email, social media, and general websites, as well as management infrastructure and an out-of-band network allowing analysts to access the technologies under evaluation. We then conducted red team campaigns against the network, one for each tool, and observed the analysts as they interacted with the tools. After the testing, we asked the analysts to complete a follow-up survey and discussed their experiences in a focus group. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves.
Surprisingly, we found no correlation between the analysts' level of education or years of experience and their performance with either tool, suggesting that other factors, such as prior background knowledge or personality, play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings. \section{Background}\label{sec:background} In this section, we describe the testbed where we conducted the evaluation and the two tools tested, and we provide an overview of related work. \subsection{National Cyber Range} The National Cyber Range (NCR)~\cite{ferguson2014national} provided the high-fidelity environment for our study. The NCR is a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities that enables the rapid emulation of complex, operationally representative networks that can scale to over 50,000 virtual nodes. The range included ``user machines'' emulating real users, a management network with services such as DNS and Active Directory, a server network with on-premise servers such as Apache and IIS, and an ``external network'' for email, social media, and general websites. The technologies under test were all connected to a core router and/or to a passive tap, so each had access to all network traffic and could communicate with any host-based clients forwarding data. User terminals connected to the two technologies under test via an out-of-band network and allowed evaluation team members and/or security analysts (users) access to the user interface (UI). \subsection{Tools Tested} \label{sec:tools} This study included two tools: a commercially available network-based malware detection tool and a government off-the-shelf anomaly detection tool.
Because of a non-disclosure agreement, we cannot disclose the name of the vendor who supplied the first tool. It is a network-based, static-analysis, malware detection tool (NSDT) that is capable of identifying both existing and new/polymorphic attacks in near real time using an on-premises (on-prem) appliance to passively monitor network traffic. The technology centers on a binary (benign/malicious) classification of files and code snippets extracted from network traffic. The second tool, Situ, is a government off-the-shelf (GOTS) tool for near real time network-level anomaly detection and situational awareness/exploration through visualization \cite{goodall2018situ}. Overall, the tool identifies anomalous\textemdash not necessarily malicious\textemdash network behavior and provides an interface for situational awareness, hunting, and forensic investigation. The system ingests network flows (the metadata of IP-to-IP communication) and/or firewall logs. \subsection{Related Work} \label{sec:related-works} Related works fall into four categories---visual analytics to aid security analysts, methods to evaluate the effectiveness of security tools in the context of a SOC, studies on SOC operations, and ML for cybersecurity. While prior work relied heavily on interviews or surveys for data collection, our work represents the first assessment of ML-based tool usability performed in situ via participant observation. Previous work on ML and visualization tool development includes tools such as Ocelot~\cite{arendt2015ocelot}, which was designed to help analysts make better decisions about poorly defined network intrusion events, Situ~\cite{goodall2018situ}, used to identify anomalous behavior in network traffic, and the work of Best et al.~\cite{best2010real}, which seeks to give analysts situational understanding of the network utilizing complementary visualization techniques.
Bridges et al.~\cite{bridges2018forming} introduced the Interactive Data Exploration \& Analysis System (IDEAS), a research prototype allowing analysts to query data in their SOC log store and select ML models to be run ``under the hood'', then receive outputs in an interactive visualization. Sopan et al.~\cite{sopan2018building} built a machine learning model to aid SOC analysts in isolating meaningful alerts, informed by two-hour interviews with the five most experienced analysts in the SOC to better understand their workflow. They then created a prediction explanation visualization to aid analysts and stakeholders in understanding how the model was making decisions. Work in the second category considers methods for evaluating the effectiveness of security tools. Akinrolabu et al.~\cite{akinrolabu2018challenge} interviewed expert SOC analysts to better understand obstacles to detecting sophisticated attacks, and Cashman~\cite{cashman2019user} conducted a user study of a novel approach to developing machine learning models that involved users in the selection process. Both studies suggest that involving the user in the creation of the machine learning model can provide significant benefits. Jaferian et al.~\cite{jaferian2014heuristics} proposed a new set of usability heuristics based on activity theory that would complement rather than replace traditional methods such as Nielsen's heuristics. Work in the third category focuses on understanding SOC operators. Gutzwiller et al.~\cite{gutzwiller2016task} performed a cognitive task analysis to understand the goals and abstracted elements of awareness cyber analysts use in their jobs. They found that data fusion in visualizations is most useful when it is combined with a strong knowledge of the network itself on the part of the analyst. These results match findings by Ben-Asher et al.~\cite{ben2015effects} that suggest situated knowledge about a network is necessary to make accurate decisions.
Botta et al.~\cite{botta2007towards} interviewed a dozen SOC analysts in five companies and found that inferential analysis, pattern recognition, and what they call ``bricolage'', or construction with whatever is at hand, are key skills for IT security professionals. Sundaramurthy et al.~\cite{sundaramurthy2016turning} conducted a 3.5-year-long anthropological study of four academic and corporate SOCs and concluded that the only way to get new tools incorporated into existing workflows is to meet the spoken and unspoken requirements of analysts and their managers. In a previous study~\cite{sundaramurthy2015human}, they also developed a model for understanding SOC analyst burnout. Goodall et al.~\cite{goodall2004work}, Bridges et al.~\cite{bridges2018information}, and Kokulu et al.~\cite{kokulu2019matched} conducted interviews with security analysts to better understand SOC workflows and the problems plaguing SOC operations. Common problems include disagreements between managers and analysts and low visibility into network infrastructure and endpoints. Work in the fourth category is on ML for cybersecurity. As discussed in the position paper of Sommer and Paxson~\cite{sommer2010outside}, many pitfalls exist when applying machine learning to cybersecurity\textemdash most notably the ``semantic gap'', referring to the common difficulty analysts have in understanding the output of ML algorithms. The challenge is presenting results in a context that is understandable to, and actionable by, the analysts. More generally, the role of humans interacting with machine learning (ML) systems and the related usability challenges are areas of open research~\cite{gillies2016human}. There is also a plethora of work on the interpretation of ML algorithms, which we do not have space to survey here; for a summary, see the work of Gilpin et al.~\cite{gilpin2018explaining}. \section{Methodology} In this section, we discuss our study design, data analysis, and demographics.
\subsection{Study Design} This study was not comparative, but rather exploratory in nature. Our goal in this work was to identify usability concerns in ML-based tools, not to compare the efficacy of the two tools being tested. In order to achieve this goal, we observed participants during tool usage, administered a follow-up survey, and held a focus group to better understand users' experience. We used the \emph{think-aloud} methodology~\cite{van1994think} during observation, in which participants verbalized their intentions, so that researchers would be able to understand the reasons behind participant actions. By conducting the focus group after direct observation of each analyst, we utilized it as a way to supplement and refine our observations rather than as a sole source of data~\cite{nielsen1997use,nielsen1994usability}. The participant observation consisted of two campaigns, one for each tool, in which we performed a sequence of malicious actions against the network and analysts utilized the user interface provided by the tool to attempt to gain insight into the attack. Each campaign lasted one hour and fifteen minutes. Prior to the campaign, analysts were given an introduction to each tool and time to familiarize themselves with the interface. During this familiarization period, analysts could ask any questions they had regarding usage of the tool. Answers were directed to the entire group. During each campaign, the same researcher was assigned to each analyst to record information about and observe the analyst's use of the tool. An additional researcher was responsible for monitoring network status and providing notices every fifteen minutes. Think-aloud was practiced during the familiarization period to ensure analysts understood it. Analysts also recorded insights from each tool they thought were significant as they used the tool and rated the significance of each insight.
Following each test, analysts were surveyed to better understand their experience with the tool and the observers were able to ask for any necessary clarification. The survey included the System Usability Scale (SUS) along with additional questions designed by the researchers. The day after testing, we held a focus group to supplement and refine our observations. \subsection{Attack Campaigns} \label{sub:red} We created an attack campaign template that contained actions that one or both of the tools under test should catch. During each testing period, we ran through the actions specified in the attack campaign template while slightly permuting the IPs and payloads used so that the analysts' experience from one tool test would not impact their results in the next. Generally, the attack campaigns consisted of the following actions. First, the adversary gains initial access by dropping a customized version of Cobalt Strike's Beacon\footnote{\url{https://www.cobaltstrike.com//help-beacon/}}, a program mimicking APTs by allowing external access, on the initial target. This was meant to simulate a successful phishing attack, wherein an unsuspecting user of the target system is tricked into downloading and running a malicious email attachment. From the infected target, the adversary port scanned other hosts on the network of the first compromised system. The adversary then instructed the infected system to download additional malware over HTTP and then transfer the malware to another host on the network over Samba. The adversary then obtained administrator credentials by using Beacon's Hashdump functionality. With the newly found administrator privileges, the adversary used \texttt{PSEXEC} to laterally move from the infected foothold to another target on its internal network. The adversary then exfiltrated some data from the file system of the newly infected host back to the command and control server (C2) and disconnected from the infected target.
\subsection{Data Analysis} \label{sub:analysis} Our data analysis was broken down into quantitative and qualitative components. The System Usability Scale (SUS) and attacks detected by each analyst were quantitative metrics, while the post-test survey and focus group were qualitative. For the qualitative analysis, we used a modified version of the open coding approach~\cite{strauss1998basics} called pair coding~\cite{sarker2000building,salinger2008coding}, in which researchers create and assign codes collectively. For the follow-up survey, we also conducted a sentiment analysis. Each coder counted $p$, the number of positive, and $n$, the number of negative comments, for each question. We report and define $S_r := (p-n)/(p+n)$, a sentiment ratio. Note that $S_r \in [-1,1]$ with $S_r = \pm 1$ if all comments were positive/negative, respectively, and $S_r = 0$ if the numbers of positive and negative comments were equal. We added the $p$ and $n$ values of both researchers together and then calculated a composite sentiment ratio. \subsection{Recruitment \& Ethics} This IRB-approved study was conducted as part of a tool evaluation exercise organized by our Navy sponsor. In order to participate, analysts were required to be actively employed in one of the sponsor's SOCs. The sponsor provided six analysts for the event, with both experienced and novice analysts included in the sample. Prior to testing, we went over an information sheet detailing the nature of the research and the participants' rights. \subsection{Demographics} Half of the analysts' highest level of education was high school, while two had completed a Bachelor's and one an Associate's degree.
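The composite sentiment ratio described above is simple to reproduce; a minimal Python sketch follows, where the per-coder counts are hypothetical examples rather than the study's data:

```python
def sentiment_ratio(p, n):
    """Sentiment ratio S_r = (p - n) / (p + n), in [-1, 1]."""
    if p + n == 0:
        raise ValueError("need at least one coded comment")
    return (p - n) / (p + n)

def composite_ratio(counts):
    """Pool (p, n) counts from both coders, then compute S_r.

    counts: iterable of (p, n) pairs, one per coder.
    """
    p = sum(c[0] for c in counts)
    n = sum(c[1] for c in counts)
    return sentiment_ratio(p, n)

# Hypothetical example: coder 1 counts (3 positive, 1 negative)
# and coder 2 counts (2 positive, 2 negative) for the same question.
print(composite_ratio([(3, 1), (2, 2)]))  # (5 - 3) / (5 + 3) = 0.25
```

Pooling the raw counts before dividing (rather than averaging the two coders' ratios) matches the composite procedure stated above.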
For context, most IT security professionals have either a Bachelor's or an Associate's degree.\footnote{\url{https://itcareercentral.com/security-roles-salary-expectations-explained/}} Half of the analysts had one year or less of experience on the job, while the others had three, eight, and five years of experience. Ages ranged from twenty-six to thirty-seven. Table~\ref{tab:analysttools} shows the tools each analyst reported using on their job regularly. \section{Analysis \& Results} \label{sec:results} In this section, we discuss our key findings and make recommendations for UI designers based upon the usability issues we identified. While our study is preliminary in nature, our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings. \subsection{Tool Usability} To evaluate the overall usability of each tool, we used the System Usability Scale (SUS). For the SUS, ten statements are ranked from 1 to 5, where 1 is strongly disagree and 5 is strongly agree. Half of the statements express a positive experience with the tool and half a negative experience with the tool. The responses are then converted to a composite score on a scale from 0--100, where a score of 68 is considered average, an 81 would be an `A' and a 50 would be an `F'. The SUS results for the statements expressing a negative experience are shown in Figure~\ref{fig:negsent} and the results for the statements expressing a positive experience in Figure~\ref{fig:possent}. The mean score for Situ was 65.42, which is close to average, while NSDT was closer to the failure line with a 56.67. Given that NSDT is a commercially available tool, this result is disappointing. Analysts indicated that NSDT is cumbersome and that it contained inconsistencies, issues we will see again in the next section.
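For readers unfamiliar with the SUS conversion mentioned above, the standard scoring rule is fixed: odd-numbered (positively worded) items contribute $score - 1$, even-numbered (negatively worded) items contribute $5 - score$, and the sum is multiplied by 2.5. A minimal sketch with made-up responses, not our participants' data:

```python
def sus_score(responses):
    """Standard SUS scoring for ten responses in 1..5.

    Assumes the usual SUS form: odd-positioned items (1st, 3rd, ...)
    are positively worded, even-positioned items negatively worded.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Made-up respondent who strongly agrees with every positive statement
# and strongly disagrees with every negative one: perfect score.
print(sus_score([5, 1] * 5))  # 100.0
```

A respondent answering 3 (neutral) to every item lands at exactly 50.0, which is why a 50 sits at the failure line.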
For Situ, the main issue identified by the SUS was that analysts felt they needed to learn a lot before they could use the system effectively. We suspect analysts responded this way to Situ for two reasons. First, Situ required analysts to synthesize multiple views of the same data built on different statistics (anomaly score, PCR, geographic information). Second, Situ identified anomalous, rather than malicious, activity, requiring analysts to decide when anomalous behavior was worth investigating. The fact that most analysts lacked a clear mental model for how to use the anomaly scores presented by Situ, which we will discuss in Section~\ref{sub:mental}, supports this explanation. To verify that these results were approaching saturation (i.e., they would not change substantially even if we added more analysts), we also computed the leave-one-out average scores with only five of the six analysts for all six combinations. This yielded six average scores: 62.0, 62.5, 63.5, 65.5, 67.5, and 71.5 for Situ and 50.0, 55.0, 54.5, 58.0, 59.0, and 63.5 for NSDT. The similarity of these average scores suggests that our SUS results are near saturation.
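The saturation check above is easy to reproduce: drop each analyst in turn and average the remaining five SUS scores. The individual scores below are illustrative placeholders, since we report only the aggregates:

```python
def leave_one_out_means(scores):
    """Mean of the remaining scores after dropping each one in turn."""
    return [
        sum(scores[:i] + scores[i + 1:]) / (len(scores) - 1)
        for i in range(len(scores))
    ]

# Illustrative placeholder SUS scores for six analysts (not the real data).
scores = [60.0, 62.5, 65.0, 67.5, 70.0, 72.5]
print(leave_one_out_means(scores))
```

With six analysts this yields six averages, one per held-out analyst; a narrow spread among them is what we interpret as approaching saturation.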
\begin{table}[t] \setuptable \begin{tabular}{c|ccccccc} \multicolumn{1}{l}{Analyst} \headline[4.5cm] & \headrow{Network Analysis Framework} & \headrow{Automated Malware Analysis} & \headrow{Network Packet Analyzer} & \headrow{Putty, Bash or Powershell} & \headrow{Full Stack Analytics} & \headrow{SIEM} & \headrow{IDS} \\ \hline 1 &\Circle &\Circle &\CIRCLE &\Circle &\Circle &\CIRCLE &\Circle \\ 2 &\CIRCLE &\Circle &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\CIRCLE \\ 3 &\Circle &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\CIRCLE &\Circle \\ 4 &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\Circle &\Circle &\CIRCLE \\ 5 &\CIRCLE &\CIRCLE &\CIRCLE &\CIRCLE &\Circle &\CIRCLE &\CIRCLE \\ 6 &\CIRCLE &\Circle &\Circle &\Circle &\Circle &\CIRCLE &\CIRCLE \\ \hline \end{tabular} \caption{Tools Analysts Reported Using Regularly} \label{tab:analysttools} \end{table} \begin{figure*}[!ht] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=45mm]{images/n_questions.png} \label{fig:sub0} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/nsdt_negative.png} \caption{NSDT} \label{fig:sub3} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/situ_negative.png} \caption{Situ} \label{fig:sub4} \end{subfigure} \caption{SUS Statements Expressing a Negative Experience} \label{fig:negsent} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=45mm]{images/p_questions.png} \label{fig:sub} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/nsdt_positive.png} \caption{NSDT} \label{fig:sub1} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=53mm]{images/situ_positive.png} \caption{Situ} \label{fig:sub2} \end{subfigure} \caption{SUS Statements Expressing a Positive Experience} \label{fig:possent} \end{figure*} \subsection{User Interface Issues} 
Table~\ref{tab:heuristics} summarizes which of Nielsen's heuristics\footnote{\url{https://www.nngroup.com/articles/ten-usability-heuristics/}} for user interface design each system violated. With NSDT, analysts felt particularly frustrated by a lack of consistency in the user interface. Multiple pages contained overlapping content and looked similar, which caused analysts to continually feel lost because they were trying to remember which page contained which content. Some content was also only available for certain file types, exacerbating this feeling of confusion. A2 said they ``fought the GUI the entire hour'' and A1 said they ``had to click around a lot---inconsistency''. Analysts' main frustration with Situ was that the filters applied to the search bar were only visible in the URL and were not easily modifiable, forcing analysts to start a new search from scratch if they wanted to alter search parameters. A4 said he/she ``hated filters not listed except in the url''. One issue both tools had in common is that they failed to provide the analysts with as much information as they wanted about the scores produced by the tool. Discussing the score provided by NSDT, A4 noted, ``It seems accurate but I would want more info on why it thinks it's malicious provided in more of a clean way''. While Situ did provide explanations in the website documentation, some analysts found them difficult to understand. ML-based tools need to provide clear and easily accessible explanations for how the ML algorithm scores events. Pop-ups explaining each score should be provided, with links to additional reading for those analysts who want to go more in depth.
\begin{table}[t] \setuptable \begin{tabular}{|l|cc|} \hline Heuristic & NSDT & Situ \\ \hline Visibility of System Status & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ System Matches Real World & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ User Control and Freedom & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ Consistency and Standards & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Error Prevention & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Recognition Not Recall & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Flexibility and Efficiency & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Aesthetic and Minimalist & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Help Users with Errors & {\textcolor[RGB]{112,173,71}{\ding{51}}} & {\textcolor[RGB]{112,173,71}{\ding{51}}} \\ Help and Documentation & {\textcolor[RGB]{192,0,0}{\ding{55}}} & {\textcolor[RGB]{192,0,0}{\ding{55}}} \\ \hline \end{tabular} \begin{tabular}{ll} {\textcolor[RGB]{112,173,71}{\ding{51}}} & No observed violations of this heuristic\\ {\textcolor[RGB]{192,0,0}{\ding{55}}} & Observed violations of this heuristic \end{tabular} \caption{Summary of whether or not each system observed Nielsen's heuristics for user interface design.} \label{tab:heuristics} \end{table} \subsection{How Mental Models Impact Distrust and Misuse of Tools}~\label{sub:mental} With both NSDT and Situ, some analysts distrusted and/or misused the tool because they had an incorrect mental model of how scores were generated. NSDT scored malicious files on a scale of 1 to 10, where 1 meant that the file was benign and 10 that it was malicious. 
While analysts had little trouble identifying malicious files using this score even if they did not understand how it was generated, the machine learning engine also provided a confidence level along with the score. This confidence level was always 100\%, a fact that A4 found suspicious, saying, ``Why trust this score?''. An unclear mental model of how NSDT generated the confidence level resulted in A4 mistrusting the tool because the confidence level was always the same. This result supports prior work~\cite{dovsilovic2018explainable}, which found that analysts who did not understand the ML algorithms distrusted the scores they provided. Unlike NSDT, Situ produced an anomaly score based on the flow of network traffic. A more anomalous flow received a higher score. Analysts had varying mental models for how Situ worked and therefore approached anomaly scores very differently. For example, A4 focused on any anomaly scores above a particular value they deemed significant, but discounted events as insignificant if the number of bytes transmitted was small. A5 would investigate which model contributed most heavily to the score, but mainly focused on IP associations. And A6 understood that they should use the anomaly scores to identify a sequence of malicious actions composing a campaign, but they did not understand how to decide which anomalous activity warranted further investigation. In summary, analysts misused Situ for several reasons: (1) they did not understand the difference between anomalous and malicious, (2) they did not understand how to map anomaly scores to attacker actions, and (3) they did not know how to prioritize anomalous events. Even though we explained how anomaly scores were calculated during the familiarization period prior to testing and allowed analysts to ask for clarification, only A2 claimed to understand how anomaly scores were calculated during the focus group.
These results suggest that AD tools such as Situ may require a more accurate mental model of how scores are produced in order for analysts to use them properly, because they require analysts to make complex inferences from the score and to differentiate between anomalous and malicious. In contrast, NSDT flagged files as malicious or non-malicious on a scale of 1 to 10 and would not necessarily require any understanding of the ML model to use effectively, though a lack of understanding can lead to distrust. \subsection{Experience, Tool Performance and Tool-Analyst Match} To assess performance, we let $fc$ and $tc$ denote the number of false and true conclusions made by an analyst, respectively, and define the false conclusion rate $fcr := fc/(fc + tc)$. A false conclusion occurred when an analyst thought they found malicious activity with a tool, and the activity was actually benign. Table~\ref{tab:expact} shows the number of attack actions identified by each analyst and their false conclusion rate. We found that the mean false conclusion rate for analysts was .57 (std=.13) with Situ and .28 (std=.25) with NSDT. We did not find that an analyst's experience level directly correlated to an ability to use the tools. With NSDT, an analyst with only 1 year of experience (A3) performed as well as an analyst with 8 years of experience (A5). For Situ, an analyst with only 2 months of experience (A1) performed as well as another analyst with 5 years of experience (A6) and better than an analyst with 8 years of experience (A5). We used a scatter matrix to check for correlations between performance and other demographic data collected, such as education, but found none. This result is surprising. We expected analysts with more experience and education to outperform junior analysts. We also found that most analysts performed better with one tool or the other. A1 and A2 performed well with Situ, but poorly with NSDT. A3 and A5 performed well with NSDT, but poorly with Situ.
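The mean false conclusion rate is straightforward to recompute from Table~\ref{tab:expact}; in the sketch below the $fc$ values for Situ are inferred from the reported $tc$ and $fcr$ pairs, not taken verbatim from the raw data:

```python
def fcr(fc, tc):
    """False conclusion rate fcr = fc / (fc + tc)."""
    return fc / (fc + tc)

# (fc, tc) per analyst A1..A6 for Situ, with fc inferred from the
# reported tc and fcr values in Table 3.
situ = [(3, 3), (3, 4), (2, 2), (4, 1), (5, 2), (3, 3)]
rates = [fcr(fc, tc) for fc, tc in situ]
mean_fcr = sum(rates) / len(rates)
print(round(mean_fcr, 2))  # 0.57, matching the reported mean for Situ
```

The same computation over the NSDT column reproduces the reported mean of .28.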
This result may suggest a tool-analyst match, where individual analysts are predisposed to certain tool types. \begin{table}[h!] \begin{center} \begin{tabular}{ll|cc|cc} & & \multicolumn{2}{|c|}{Situ} & \multicolumn{2}{c}{NSDT} \\ \hline Analyst & Experience & $tc$ & $fcr$ & $tc$ & $fcr$ \\ \hline A1 & 2 months & 3 & .5 & 1 & 0\\ A2 & 3 years & 4 & .43 & 1 & .67\\ A3 & 1 year & 2 & .5 & 4 & .2\\ A4 & 1 year & 1 & .8 & 2 & .5\\ A5 & 8 years & 2 & .71 & 4 & 0\\ A6 & 5 years & 3 & .5 & 5 & .33\\ \end{tabular} \end{center} \caption{Analyst Experience and Performance metrics depicted. A false conclusion occurred when an analyst thought they found malicious activity with a tool, but the activity was actually benign. Because NSDT flagged malicious files, an $fcr$ of 0 was possible for analysts who focused solely on flagged files and did not attempt to draw further conclusions about the nature of the attack.} \label{tab:expact} \end{table} \begin{table*}[hbt!] \centering \begin{tabular}{@{}rcc@{}} \toprule \textbf{} & \textbf{NSDT} & \textbf{Situ}\\ \midrule What was your overall impression of the tool? & .39 & .68 \\ Was this tool easy and intuitive to use? & -.08 & .09\\ How do you see this tool fitting into your workflow? & .44 & .53\\ If this tool was in your current work environment, would you use it? & .83 & 1\\ What was your impression of the alerts raised by the tool? & .56 & .53\\ \midrule Average Sentiment Ratio & \textbf{0.43} & \textbf{0.53} \\ \bottomrule \end{tabular} \caption{Sentiment ratio, $S_r := (p-n) / (p+n)$ with $p, n$ the number of positive/negative statements, on post-test questionnaire reported. Note that $S_r \in [-1,1]$ with $S_r = \pm 1$ iff all comments were positive/negative, respectively, and $S_r = 0$ iff $p=n$.
In spite of the concerns regarding intuitiveness of the alerts raised by the tools, analysts expressed overwhelmingly positive sentiment that they would use both tools if they were integrated into their work environment.} \label{tab:sentiment} \end{table*} \subsection{User Attitudes} Overall, analysts were optimistic about the capabilities these tools could provide. The analysts liked Situ because it allowed them to discover a wide range of attacker actions \textit{during} an attack, whereas they felt most tools only allow them to respond \textit{after} the attack has already taken place. After using Situ, A2 shared that it was ``better than waiting for a light to turn red to do your job''. While analysts viewed NSDT as a more retroactive tool, because it flagged malicious files rather than identifying anomalies, they also felt it could help them automate their workflow and conduct additional analysis. Table~\ref{tab:sentiment} summarizes the results of our sentiment analysis of the follow-up survey for each tool, described in Section~\ref{sub:analysis}. Analysts expressed a more positive overall impression of Situ than NSDT. One possible explanation for this fact is that several analysts were very frustrated with NSDT's user interface for reasons noted in the previous section. As a group, analysts did not find either tool particularly intuitive, expressing neutral sentiment for this question. Analysts also showed some reservations about the alerts raised by the tools and how each tool would fit into their workflow. In spite of these concerns, analysts expressed overwhelmingly positive sentiment that they would use both tools if they were integrated into their work environment. These results suggest that analysts are excited about the possibilities that ML tools provide and willing to use them in practice.
However, ML-based security tool vendors still have plenty of work to do to enhance the usability of their products, including addressing UI issues, helping analysts interpret alerts, and establishing a more intuitive workflow. \section{Discussion \& Future Work}\label{discussion} This work identified several serious usability issues in the two ML-based tools studied, including failure to follow established usability heuristics for user interface design and a lack of transparency into how scores are produced that caused distrust and/or misuse among analysts. In light of these problems, we make the following recommendations: \begin{enumerate} \item Vendors should conduct usability tests with actual SOC analysts, both experienced and inexperienced, throughout the software development life cycle. While heuristic evaluations are valuable, they require expertise to apply properly~\cite{thovtrup1991assessing} and are not as effective at identifying major issues pertinent to real users~\cite{paz2015heuristic}. This suggestion is also supported by the work of Bano et al.~\cite{bano2013user}, which concluded that software systems benefit from the inclusion of users in early stages of product development. \item ML-based tools should provide analysts with more guidance on how to understand and utilize their output. The benefit of ML is lost if analysts cannot understand the meaning of the scores produced. Prior research recommends including analysts when developing machine learning models to ensure interpretability~\cite{akinrolabu2018challenge,cashman2019user}. At a minimum, the vendor should conduct usability tests to validate that analysts are able to comprehend and use the scores produced by ML tools as intended by the vendor. \end{enumerate} The lack of sufficient explanation of ML concepts in either of the user interfaces we examined resonates with prior work. 
Sopan et al.~\cite{sopan2018building} found that their initial user interface, which assumed a base level of knowledge about machine learning, had to be modified for the analysts who were not as familiar with relevant terminology. Usable ML tools must bridge the ``semantic gap''~\cite{sommer2010outside} to help analysts who are not machine learning experts identify actionable insights. In addition, our results showed not only that incorrect mental models can cause distrust and misuse of tools, but also suggest that certain categories of ML tools require analysts to have more accurate mental models. Specifically, we found that Situ, an AD tool, required a more accurate mental model to use because analysts had to make inferences based upon anomaly scores, whereas NSDT, an AV tool, flagged files as malicious or non-malicious and was therefore simple to interpret without any understanding of the underlying models. While prior research has explored how mental models impact the usability of encryption~\cite{wu2018tree}, the Tor browser~\cite{winter2018tor}, and password managers~\cite{pearman2019people}, no research has focused specifically on how mental models impact SOC analysts' usage of ML-based tools. Our research also uncovered the possibility of a tool-analyst match. All analysts performed better with one tool or the other, yet we found no correlation between the demographic information we collected and performance. These results suggest that other factors such as prior background knowledge or personality play a significant role in ML-based tool usage. While exploring personal attributes that impact tool usage was not the focus of our study, we believe this is an area that would be fruitful for researchers to explore further. We plan to continue this work in several ways. First, we want to analyze a broader set of ML-based tools in order to identify usability paradigms and common issues within each paradigm. 
Second, we want to categorize analysts' mental models of different tool types and understand how those mental models impact their ability to use the tools. The analysts in this study were excited about integrating ML tools into their SOCs and our research aims to help ensure that those tools are both usable and useful in real-world contexts. \section*{Acknowledgments} The authors thank all security analysts that agreed to participate in this study and the team at the National Cyber Range, without whom this study would not have been possible. [Rest of acknowledgements redacted for double-blind review.]
\section{Introduction} \subsection{Problem Description, Objectives and Context}\label{sec.intro} We consider the problem of computing the minimum of a set of numbers over a network, and we propose a distributed, iterative solution achieving \emph{global} and \emph{uniform}, albeit \emph{approximate}, asymptotic stability. We are given a set $\mathcal N$ of $N$ decision makers (or \emph{agents}), where each agent $i\in\mathcal N$ is provided with a number ${\rm M}_i\in \mathbb R_{\ge 0}$ not known a priori by the others. The agents exchange information over a communication network with only a subset of other agents (called their \emph{neighborhood}). The approximate minimum sharing problem consists in the design of an algorithm guaranteeing that each agent asymptotically obtains a ``sufficiently good'' estimate of the quantity \begin{equation}\label{d.uM} {\rm M}^\star := \min_{i\in\mathcal N} {\rm M}_i . \end{equation} Clearly, ``$x_i={\rm M}^\star,\ \forall i\in\mathcal N$'' is also the {\em unique} solution to every constrained optimization problem of the form \begin{equation}\label{s.min_opt} \begin{aligned} \max\ & \sum_{i\in\mathcal N} \psi_i(x_i) \\ \text{s.t.}\ & x_i \le {\rm M}_i , & \forall &i\in\mathcal N\\ & x_i=x_j,& \forall & i,j\in\mathcal N \end{aligned} \end{equation} obtained with $\psi_i$, $i\in\mathcal N$, continuous and strictly increasing functions. Therefore, the minimum sharing problem is equivalent to the constrained distributed optimization problem \eqref{s.min_opt}, thus intersecting the wide research field of distributed optimization \cite{NotarstefanoTutorial}.
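The equivalence can be checked directly: the consensus constraint $x_i=x_j$ forces a common value $c$, so \eqref{s.min_opt} reduces to maximizing $\sum_i\psi_i(c)$ over $c\le\min_i{\rm M}_i$, and strictly increasing $\psi_i$ place the maximizer exactly at the minimum. A minimal numerical sketch of this reduction (the data, functions and names below are our own illustrative choices, not the paper's):

```python
# Sketch: under the consensus constraint, problem (s.min_opt) becomes the
# scalar problem  max { sum_i psi_i(c) : c <= min_i M_i }.  With strictly
# increasing psi_i, the optimizer is c = min_i M_i.
M = [3.0, 7.0, 5.0]                                    # private values M_i
psis = [lambda c: c, lambda c: c**3, lambda c: 2 * c]  # strictly increasing psi_i

def solve_reduced(M, psis, grid_step=1e-3):
    """Brute-force the reduced scalar problem on a grid over [0, min(M)]."""
    ub = min(M)
    best_c, best_val = None, float("-inf")
    c = 0.0
    while c <= ub + 1e-12:
        val = sum(psi(c) for psi in psis)
        if val > best_val:
            best_c, best_val = c, val
        c += grid_step
    return best_c

print(solve_reduced(M, psis))  # approximately 3.0 = min(M)
```

Since the objective is increasing in $c$, the grid search simply returns the largest feasible grid point, confirming the claimed unique solution.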
The problem of computing a minimum (or, equivalently, a maximum) over a network of decision makers is a classical problem in multi-agent control, with applications in distributed estimation and filtering, synchronization, leader election, and computation of network size and connectivity (see, e.g., \cite{Bullo2009,Santoro2006,nejad_maxconsensus_2009,iutzeler_analysis_2012,golfar_convergence_2019} and the references therein). Perhaps the most elementary existing algorithms solving the minimum sharing problem are \emph{FloodMax} \cite{Bullo2009} and \emph{Max-Consensus} \cite{nejad_maxconsensus_2009,iutzeler_analysis_2012,golfar_convergence_2019}. In its simplest form, Max-Consensus\footnote{For brevity, we only focus on Max-Consensus. However, the same conclusions apply also to FloodMax.} requires each agent $i\in\mathcal N$ to store an estimate $x_i\in\mathbb R$ of ${\rm M}^\star$, which is updated iteratively according to the update rule \begin{subequations}\label{s.ex.maxconsensus} \begin{equation}\label{s.ex.maxconsensus_updatelaws} x_i^{t+1} = \min_{j\in [i]} x_j^t ,\qquad \forall i\in\mathcal N , \end{equation} with the initialization \begin{equation}\label{s.ex.maxconsensus_initialization} x_i^{t_0}= {\rm M}_i ,\qquad \forall i\in\mathcal N, \end{equation} \end{subequations} where $t$ is the iteration variable, $t_0$ its initial value, and $[i]\subset\mathcal N$ denotes the neighborhood of agent $i$ (we assume $i\in[i]$). The update law \eqref{s.ex.maxconsensus_updatelaws} is decentralized and scalable, in that each agent needs only information coming from its neighbors and each agent stores only one variable. However, although \eqref{s.ex.maxconsensus_updatelaws} guarantees convergence of each $x_i$ to ${\rm M}^\star$ when the estimates $x_i$ are initialized as specified in~\eqref{s.ex.maxconsensus_initialization}, \emph{convergence is not guaranteed for an arbitrary initialization}.
In fact, if \begin{equation}\label{s.ex.init2} \exists i\in\mathcal N \ {\rm s.t.}\ x_i^{t_0}<{\rm M}^\star, \end{equation} then the corresponding estimate $x_i^t$ produced by \eqref{s.ex.maxconsensus_updatelaws} satisfies $x^t_i<{\rm M}^\star$ for all subsequent $t$, so that $x_i^t\to {\rm M}^\star$ cannot hold\footnote{In this specific case, we also observe that any \emph{consensual} configuration (i.e., $x_i=x_j$ for all $i,j\in\mathcal N$) is an equilibrium of \eqref{s.ex.maxconsensus_updatelaws}. This, in turn, is intimately linked to the unfeasibility result of \cite[Theorem 3.1.1]{Santoro2006}, and to the \emph{detectability} issues appearing in many control problems, such as \emph{Extremum Seeking} \cite{Ariyur2003,Tan2006}.}. Therefore, since convergence to ${\rm M}^\star$ holds only for some specific initial values $x_i^{t_0}$, the Max-Consensus algorithm~\eqref{s.ex.maxconsensus} is not \emph{globally convergent}. While there are application domains for which attaining global convergence is not strictly necessary, there are many others in which it is a crucial requirement. This is the case, for instance, when the quantities ${\rm M}_i$ can change at run time (see the two use-cases illustrated in Section~\ref{sec.app}). To see how this may be a problem for the update law \eqref{s.ex.maxconsensus}, assume by way of example that the estimates $x_i^t$ have reached the value ${\rm M}^\star$ at a given $t_1$, i.e. $x_i^{t_1}={\rm M}^\star$ for all $i\in\mathcal N$, and assume that there is a unique $k\in\mathcal N$ such that ${\rm M}^\star={\rm M}_k$. Now, suppose that at some $t_2>t_1$ the value of ${\rm M}_k$ increases, thus increasing ${\rm M}^\star$ as well. Then, the condition \eqref{s.ex.init2} holds for $t_0=t_2$, so that, in view of the discussion above, the update law \eqref{s.ex.maxconsensus_updatelaws} fails to track the new minimum.
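Both behaviors of the Max-Consensus rule \eqref{s.ex.maxconsensus_updatelaws} are easy to reproduce numerically. The following minimal sketch (the network, values and function names are our own illustrative choices) shows convergence under the prescribed initialization and failure when one estimate starts below the true minimum:

```python
# Max-Consensus step: each agent replaces its estimate with the minimum
# over its neighborhood [i] (self included).
def max_consensus_step(x, nbrs):
    return [min(x[j] for j in nbrs[i]) for i in range(len(x))]

nbrs = [[0, 1], [0, 1, 2], [1, 2]]   # path network 0-1-2, with i in [i]
M = [4.0, 9.0, 6.0]                  # private values, min_i M_i = 4.0

# Correct initialization x_i = M_i: all estimates reach min_i M_i within N steps.
x = list(M)
for _ in range(len(M)):
    x = max_consensus_step(x, nbrs)
print(x)  # [4.0, 4.0, 4.0]

# Arbitrary initialization with some x_i below the true minimum: the spurious
# small value propagates instead, so convergence to min_i M_i is lost.
x = [4.0, 1.0, 6.0]
for _ in range(len(M)):
    x = max_consensus_step(x, nbrs)
print(x)  # [1.0, 1.0, 1.0]
```

The second run illustrates the obstruction: once a value below ${\rm M}^\star$ enters the network, the min-only update can never raise any estimate back up.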
Global attractiveness is not the only desirable property one may be interested in when the minimum sharing problem is considered over large networks with possibly changing conditions. In fact, a crucial role is also played by \begin{enumerate} \item \emph{Uniformity of the convergence}: the convergence rate does not depend on the initial value $t_0$ of the iteration variable and is uniform over compact subsets of initial conditions. \item \emph{Stability of the steady state}: small variations in the parameters and initial conditions map into small deviations from the unperturbed trajectories. \item \emph{Scalability}: the number of variables stored by each agent does not grow with the network size or the number of interconnections. \item \emph{Decentralization of the updates}: the update law of each agent uses only local information and depends on parameters that are independent of those of the other agents. \end{enumerate} Indeed, uniform global attractiveness and stability of the steady state confer robustness against uncertain and time-varying conditions and parameters (see e.g. \cite[Chapter~7]{Goebel2012}), making the minimum sharing method suitable for applications in which the quantities ${\rm M}_i$ vary in time. Moreover, scalability and decentralization enable the application to large-scale networks. In this direction, in this paper we look for a novel solution to the minimum sharing problem having scalability and decentralization properties similar to those of Max-Consensus~\eqref{s.ex.maxconsensus} but, in addition, possessing the aforementioned globality, uniformity and stability properties. \subsection{Motivating Applications}\label{sec.app} Our methodology is motivated by two application contexts described below. In both cases, a key element consists in solving an instance of the minimum-sharing problem \eqref{s.min_opt} in which the parameters~${\rm M}_i$, hence the minimum ${\rm M}^\star$, may change over time.
In these contexts, (i) global attractiveness makes it possible to track the changing minimum ${\rm M}^\star$, (ii)~uniformity of convergence guarantees that the convergence rate is always the same and does not decrease with time, and (iii) stability guarantees that relatively small variations of the parameters lead to small transient deviations from the optimal steady state. \subsubsection{Cooperative Control of Traffic Networks} Consider a traffic network consisting of a set of vehicles driving on a highway in an intense traffic situation. Some of the vehicles have self-driving capabilities, and we can assign their driving policies. The other vehicles are instead human-driven and, thus, they are not controlled. The whole traffic network is seen as a \emph{plant} that, when not properly controlled, may exhibit undesired behaviors, such as ghost jams. The control goal consists in finding a control policy, distributed among the self-driving vehicles, which guarantees that the ``closed-loop'' traffic network behaves properly, leading to a smooth traffic flow where all the vehicles hold a common maximal cruise speed. At each time, the maximum attainable cruise speed of each vehicle $i$ is constrained by a \emph{personal maximum value}, denoted by ${\rm M}_i$, which may depend on mechanical constraints, on the traffic conditions, on standing speed limitations, or on other exogenous factors. A key part of the control task consists in the distributed computation of the maximum common cruise speed, ${\rm M}^\star$, compatible with all the personal velocity constraints. At each time, the problem of estimating ${\rm M}^\star$ is an instance of~\eqref{s.min_opt}, whose solution is precisely~\eqref{d.uM}. \subsubsection{Dynamic Leader Election} Another important motivating application is the distributed \emph{leader election} problem in dynamic networks, which shares many similarities with the previous application.
Single-leader election has been proved to be an unsolvable problem in general, even under bi-directionality, connectivity, and total reliability assumptions on the communication network \cite[Theorem 3.1.1]{Santoro2006}. A standard additional assumption making the problem well-posed is that each agent is characterized by a \emph{unique identifier}~${\rm M}_i$. Hence, the problem of leader election can be cast as finding the minimum, ${\rm M}^\star$, of such identifiers. The agent whose identifier coincides with ${\rm M}^\star$ declares itself the leader, and the others declare themselves followers. \subsection{Related Works and State of the Art}\label{sec.literature} Classical algorithmic approaches to the minimum sharing problem in arbitrary networks have been developed in the context of distributed algorithms and robotic applications. They include \emph{FloodMax} \cite{Bullo2009}, \emph{Max-Consensus}~\cite{nejad_maxconsensus_2009,iutzeler_analysis_2012,golfar_convergence_2019} (see \eqref{s.ex.maxconsensus}), \emph{MegaMerger} \cite{Gallager1983}, and the \emph{Yo-Yo} algorithm. See \cite{Santoro2006,Bullo2009} for a more detailed overview. Some of these approaches, such as the basic Max-Consensus~\eqref{s.ex.maxconsensus}, have nice scalability and decentralization properties: the update laws do not depend on \emph{centralized quantities}, such as parameters that need to be known in advance by all the agents, and employ a number of local variables which does not grow with the network size or topology. However, all such approaches require a correct initialization or a pre-processing synchronization phase, which are undesired limitations in applications of interest such as the ones discussed in Section~\ref{sec.app}.
If the minimum sharing problem is cast in terms of the optimization problem \eqref{s.min_opt}, then one can rely on a well-developed literature on discrete-time distributed optimization (see \cite{NotarstefanoTutorial} for a recent overview). If the functions $\psi_i$ in~\eqref{s.min_opt} are convex, indeed, different approaches can be used, such as {consensus-based (sub)gradient methods} \cite{nedic_distributed_2009,nedic_constrained_2010,lobel_distributed_2011,shi_extra_2015,shi_proximal_2015,yuan_convergence_2016}, {second-order methods} \cite{varagnolo_newton-raphson_2016,mokhtari_network_2017}, projected \cite{xie_distributed_2018} and primal-dual \cite{zhu_distributed_2012,chang_distributed_2014} methods with inequality constraints, methods based on the distributed Alternating Direction Method of Multipliers (ADMM) \cite{Boyd2011,mota_d-admm_2013,shi_linear_2014,jakovetic_linear_2015,ling_dlm_2015,chang_proximal_2016,makhdoumi_convergence_2017,NotarstefanoTutorial,bastianello_asynchronous_2020}, and methods based on gradient tracking \cite{xu_augmented_2015,nedic_achieving_2017,nedic_geometrically_2017,qu_harnessing_2018,xi_add-opt_2018,Bin2019}. Gradient methods typically achieve global attractiveness. However, among the cited references only \cite{nedic_constrained_2010} deals with constrained problems with different \emph{local constraints}\footnote{By the term ``local constraints'' we refer to private constraints an agent may have on its own variables that do not depend on the other agents' variables, e.g. the constraints $x_i\le {\rm M}_i$ in \eqref{s.min_opt}.} such as \eqref{s.min_opt}. Yet, \cite{nedic_constrained_2010} requires a vanishing stepsize, which prevents uniform convergence. Gradient methods employing a fixed stepsize, and thus guaranteeing uniformity, are given in \cite{nedic_distributed_2009,lobel_distributed_2011,shi_extra_2015,shi_proximal_2015,yuan_convergence_2016,varagnolo_newton-raphson_2016,mokhtari_network_2017}.
However, they do not cover constrained problems of the kind~\eqref{s.min_opt}. Moreover, the first-order methods in \cite{nedic_distributed_2009,lobel_distributed_2011,yuan_convergence_2016} lead to an approximate convergence result in which the convergence speed and the approximation error need to be traded off. This, in turn, is consistent with our results, in which a compromise is more generally established between uniformity, approximation error and convergence rate. The approaches~\cite{xie_distributed_2018,zhu_distributed_2012,chang_distributed_2014} deal with inequality constraints including Problem~\eqref{s.min_opt}. Nevertheless, they require a correct initialization and, hence, they do not provide global attractiveness. The same issue applies to gradient-tracking methods~\cite{xu_augmented_2015,nedic_achieving_2017,nedic_geometrically_2017,qu_harnessing_2018,xi_add-opt_2018,Bin2019} (which, anyway, are developed for unconstrained problems), and to the ``node-based'' formulations of ADMM \cite{mota_d-admm_2013,shi_linear_2014,jakovetic_linear_2015,makhdoumi_convergence_2017,ling_dlm_2015}. Instead, the ``edge-based'' formulations of ADMM (e.g. \cite[Section 3.3]{NotarstefanoTutorial}, \cite{bastianello_asynchronous_2020}) do not suffer from this initialization issue, and they provide a solution which is global and uniform. Nevertheless, the number of variables that each agent has to store grows with the dimension of its neighborhood, thus incurring scalability issues. Moreover, stability is not usually considered in the analysis of the aforementioned designs, and typically the update laws employ coefficients (e.g. stepsizes) which must be common\footnote{Exceptions are given in the gradient-tracking designs of \cite{xu_augmented_2015,nedic_geometrically_2017}, where agents employ uncoordinated stepsizes. In both designs, the discrepancy between the stepsizes must be small enough.
Hence, these results may be seen as a ``robustness'' property relative to variations of the stepsizes with respect to their average. In turn, this property comes \emph{for free} if the algorithm is proved to be \emph{asymptotically stable} with a common stepsize (see, e.g., \cite[Chapter 7]{Goebel2012}).} to all agents (i.e., they are \emph{centralized} quantities). \subsection{Contributions \& Organization of the Paper} \label{sec.contribution} We propose a new approach to the minimum sharing problem that provides an adjustable \emph{approximate} (or \emph{sub-optimal} in terms of \eqref{s.min_opt}) solution enjoying the globality, uniformity, scalability and decentralization properties stated in Section~\ref{sec.intro}, which no existing algorithm seems to possess simultaneously. The proposed update laws have the form \begin{equation}\label{s.xi_fi} x_i^{t+1} = f_i(t,x^t), \end{equation} for some suitable functions $f_i$, where $x_i\in\mathbb R$ represents the estimate of ${\rm M}^\star$ stored by agent $i$, and $x:=(x_i)_{i\in\mathcal N}$ is the aggregate estimate. As formally specified later on in Section~\ref{sec.comm}, the actual structure of the functions $f_i$ encodes the decentralization constraints, allowing an agent update to depend only on the estimates of a subset of other agents (see Remark~\ref{rmk.decentralization}). We show that all the estimates $x_i$ converge, globally and uniformly, to a stable neighborhood of ${\rm M}^\star$ whose size can be made arbitrarily small by suitably tuning some control parameters. More precisely, the proposed approach enjoys the following properties: \begin{enumerate}[(a)] \item The algorithm is distributed and scalable, since only one variable is stored by each agent. \item\label{item.dec} The update law of each agent employs a gain which can be tuned independently of the others.
\item The estimates $x_i$ converge globally and uniformly to a stable steady state which can be made arbitrarily close to ${\rm M}^\star$. \item Exact convergence (i.e., all the estimates converge to ${\rm M}^\star$) can be achieved, at the price, however, of losing uniformity. \end{enumerate} In view of Item \eqref{item.dec}, the proposed method has good decentralization properties compared to most of the approaches mentioned in Section~\ref{sec.literature}. Nevertheless, we underline that the proposed method is not fully decentralized, as the agents are supposed to know a lower bound on ${\rm M}^\star$ (Assumption~\ref{ass.M_eps}) which explicitly enters the update laws. The paper is organized as follows. After providing preliminary definitions and remarks in Section~\ref{sec.prelim}, in Section~\ref{sec:min:share} we formulate the minimum-sharing problem and we describe the proposed solution methodology. The main convergence results are given in Section~\ref{sec:conv} and proved in Section~\ref{sec.proof}. Finally, numerical results and concluding remarks are reported in Sections~\ref{sec:simul} and~\ref{sec:concl}, respectively. \section{Preliminaries} \label{sec.prelim} \subsection{Notation} We denote by $\mathbb R$ and $\mathbb N$ the sets of real and natural numbers, respectively. If $a\in\mathbb R$, $\mathbb R_{\ge a}$ denotes the set of all real numbers larger than or equal to $a$, and similar definitions apply to other ordered sets and ordering relations. We denote by $\# A$ the cardinality of a set $A$. If $A,B\subset \mathbb R$, $A\setminus B:=\{ a\in A\,\mid\, a\notin B \}$ denotes the set difference between $A$ and $B$. We identify singletons with their unique element and, for $b\in\mathbb R$, we thus write $A\setminus b$ in place of $A\setminus \{b\}$. We denote norms by $|\cdot|$ whenever they are clear from the context.
With $A\subset\mathbb R^n$ and $x\in\mathbb R^n$, $\setdist{x}{A}:= \inf_{a\in A}|x-a|$ denotes the distance from $x$ to $A$. Sequences indexed by a set $S$ are denoted by $(x_s)_{s\in S}$. For a non-empty interval $[a,b]\subset \mathbb R$, we define the projection map $\projOp{[a,b]}:\mathbb R\to[a,b]$ as $\projOp{[a,b]}(s) := \min\{\max\{ s,\, a \},\, b\}$. A function $f:\mathbb R^n\to\mathbb R^m$, $n,m\in\mathbb N$, is \emph{locally bounded} if $f(K)$ is bounded for each compact set $K\subset\mathbb R^n$. In this paper, we consider discrete-time systems whose solutions are signals defined on a non-empty subset $\dom x$ of $\mathbb N$. For ease of notation, we will use $x^t$ in place of $x(t)$ to denote the values of a signal $x$. With $t_0\in\mathbb N$, we say that $x$ \emph{starts at $t_0$} if $\min \dom x = t_0$. \subsection{Communication Networks}\label{sec.comm} Throughout the paper, $\mathcal N$ denotes the (finite) set of agents in the network, and we let $N:=\#\mathcal N$. The network communication constraints are formally captured by the concept of ``communication structure'' defined below\footnote{A common way to define a communication structure on $\mathcal N$ is to consider an undirected graph $(\mathcal N,\mathcal E)$ with vertex set $\mathcal N$ and edge set $\mathcal E\subset\mathcal N\times\mathcal N$ such that if $(i,j)\in\mathcal E$ then agents $i$ and $j$ can communicate. In this case, $[i]:=\{i\}\cup\{ j\in\mathcal N\,\mid\, (j,i)\in\mathcal E\}$.}. \begin{definition}\label{def.com_struct} A \emph{communication structure} on $\mathcal N$ is a sequence $\mathcal C=([i])_{i\in\mathcal N}$ of subsets $[i]$ of $\mathcal N$ satisfying $i\in[i]$. \end{definition} For each $i\in\mathcal N$, the set $[i]$ is called the \emph{neighborhood} of $i$. A \emph{communication network} is a pair $(\mathcal N,\mathcal C)$, in which $\mathcal N$ is a set and $\mathcal C$ is a communication structure on $\mathcal N$.
For a given $I\subset\mathcal N$, we define the sequence of sets \begin{equation}\label{d.nbds} \begin{array}{lcl} [I]^0 &:=& I \\{} [I]^n &:=& \bigcup_{j\in [I]^{n-1}} [j],\quad n\in\mathbb N_{\ge 1} \end{array} \end{equation} so that, in particular, $[\{i\}]^1=[i]$. If $I=\{i\}$ is a singleton, we use the short notation $[\{i\}]^n=[i]^n$. Moreover, for $n,m\in\mathbb N$ we let \begin{equation*} [I]_m^n := [I]^n \setminus [I]^m. \end{equation*} We consider networks that are \emph{connected} according to the following definition. \begin{definition}\label{def.Iconnected} With $I\subset\mathcal N$, a communication network $(\mathcal N,\mathcal C)$ is said to be $I$-\emph{connected} if there exists $n_I\le N$ such that $[I]^{n_I} = \mathcal N$. \end{definition} The notion of $I$-connectedness is in general weaker than the usual notion of \emph{strong connectedness}, which requires the existence of a path between any two agents. Later on, we shall assume that $\mathcal N$ is given a communication structure $\mathcal C$ which is $I^\star$-connected for a specific subset $I^\star\subset\mathcal N$. For the purpose of analysis, this communication structure is assumed static. Likewise, the quantities ${\rm M}_i$ are assumed constant. In fact, this corresponds to a well-defined ``nominal setting'' for the proposed method in which we can prove the desired uniform global attractiveness and stability properties. Proving such properties in the nominal case, in turn, guarantees that the proposed method can be applied also to relevant classes of problems where the communication structure and the parameters ${\rm M}_i$ (hence, their minimum ${\rm M}^\star$) may change over time. Indeed, as already mentioned in Section~\ref{sec.intro}, uniform global attractiveness and stability ensure a proper approximate tracking of a time-varying minimum ${\rm M}^\star$ provided that its dynamics is sufficiently slow.
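The iterated neighborhoods $[I]^n$ and the $I$-connectedness test of Definition~\ref{def.Iconnected} are straightforward to implement. A small sketch (the directed chain below and all names are our own illustrative choices):

```python
# [I]^0 = I and [I]^n is the union of the neighborhoods [j] over j in [I]^{n-1};
# the network is I-connected if [I]^{n_I} equals the whole agent set for some
# n_I <= N.
def nbhd_iterate(nbrs, I):
    """One application of the recursion: [I]^{n-1} -> [I]^n."""
    return set().union(*(nbrs[j] for j in I))

def is_I_connected(nbrs, I):
    N = len(nbrs)
    cur = set(I)                      # [I]^0
    for _ in range(N):
        if cur == set(range(N)):
            return True
        cur = nbhd_iterate(nbrs, cur)
    return cur == set(range(N))

# Chain where agent 0 hears itself and 1, agent 1 hears itself and 2 (i in [i]):
nbrs = [{0, 1}, {1, 2}, {2}]
print(is_I_connected(nbrs, {0}))  # True:  [{0}]^2 = {0, 1, 2}
print(is_I_connected(nbrs, {2}))  # False: [{2}]^n = {2} for every n
```

The example also illustrates why $I$-connectedness is weaker than strong connectedness: the chain above is $\{0\}$-connected even though no path leads from agent 2 back to agent 0.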
Moreover, classical results in the context of control under different time scales (see, e.g., \cite{Kokotovic1999,Teel2003,Tan2006,Wang2012}) also guarantee good tracking performance under changes of the communication structure $\mathcal C$ that are, on average, sufficiently slow with respect to the dynamics of the update laws. In this respect, Section~\ref{sec:simul} provides numerical results in a scenario in which the communication structure and the numbers ${\rm M}_i$ are subject to impulsive changes separated by relatively large intervals of time. \subsection{Stability and Convergence Notions}\label{sec.convergence} We consider discrete-time systems of the form \begin{equation} \label{pre.s.x} x^{t+1} = f(t,x^t), \end{equation} with state $x^t\in\mathbb R^n$, $n\in\mathbb N$. Given a closed set $\mathcal A\subset\mathbb R^n$, we say that $\mathcal A$ is \emph{stable} for \eqref{pre.s.x} if for each $\epsilon>0$ there exists $\delta(\epsilon)>0$ such that every solution of~\eqref{pre.s.x} satisfying $\setdist{x^{t_0}}{\mathcal A}\le \delta(\epsilon)$ also satisfies $\setdist{x^{t}}{\mathcal A}\le \epsilon$ for all $t\ge t_0$. We say that $\mathcal A$ is \emph{attractive} for \eqref{pre.s.x} if there exist an open superset $\mathcal O$ of $\mathcal A$ and, for every $t_0\in\mathbb N$, every solution $x$ of \eqref{pre.s.x} with $x^{t_0}\in \mathcal O$, and every $\epsilon>0$, a time $t^\star(t_0,x^{t_0},\epsilon)\in\mathbb N$ such that $\setdist{x^t}{\mathcal A}\le \epsilon$ holds for all $t\ge t_0+t^\star(t_0,x^{t_0},\epsilon)$. Different qualifiers can enrich this attractiveness property. In particular, the set $\mathcal A$ is said to be: \begin{itemize} \item \emph{Globally attractive} if $\mathcal O=\mathbb R^n$. \item \emph{Finite-time attractive} if the condition ``$\epsilon>0$'' can be replaced by ``$\epsilon\ge 0$''. \item \emph{Uniformly attractive in the initial time} $t_0$ if the map $t^\star(\cdot)$ does not depend on $t_0$.
\item \emph{Uniformly attractive in the initial conditions} $x^{t_0}$ if for each $(t_0,\epsilon)\in\mathbb N\times\mathbb R_{\ge 0}$, the map $t^\star(t_0,\cdot,\epsilon)$ is locally bounded. \item \emph{Uniformly attractive} if it is uniformly attractive both in the initial time and in the initial conditions. \item {\emph{$\epsilon$-approximately attractive}} (with $\epsilon>0$) if the set $\{ x\in\mathbb R^n\,\mid\, \setdist{x}{\mathcal A}\le \epsilon \}$ is attractive. \end{itemize} If $\mathcal A$ is both stable and attractive, it is said to be \emph{asymptotically stable}. Moreover, with $(f_\gamma)_{\gamma\in \Gamma}$ representing a family of functions $f_\gamma:\mathbb N\times\mathbb R^n\to\mathbb R^n$ indexed by a set $\Gamma$, consider the family of systems \begin{equation}\label{pre.s.xa} x^{t+1} = f_\gamma(t,x^t),\qquad\gamma\in \Gamma. \end{equation} Then, we say that the set $\mathcal A$ is \emph{practically attractive} for the family \eqref{pre.s.xa} if for each $\epsilon>0$ there exists $\gamma^\star(\epsilon)\in\Gamma$ such that the set $\mathcal A$ is $\epsilon$-approximately attractive for the system~\eqref{pre.s.xa} obtained with $\gamma=\gamma^\star(\epsilon)$. \section{Distributed Minimum Sharing}\label{sec:min:share} \subsection{Problem Formulation} We are given a communication network $(\mathcal N,\mathcal C)$. Each agent $i\in\mathcal N$ is provided with a number ${\rm M}_i$, not known a priori by the others, and it stores and updates a local estimate $x_i\in\mathbb R$ of the quantity ${\rm M}^\star$ defined in \eqref{d.uM}. Thus, the problem at hand consists in designing an update law for each agent $i\in\mathcal N$ of the form \eqref{s.xi_fi} such that the resulting estimates $x_i^t$ converge to ${\rm M}^\star$ in some of the senses defined in Section \ref{sec.convergence}. The resulting family $f:=(f_i)_{i\in\mathcal N}$ is called the \emph{distributed methodology}.
In the following, we let $x:=(x_i)_{i\in\mathcal N}$ and we compactly rewrite \eqref{s.xi_fi} as \begin{equation}\label{s.x} x^{t+1} = f(t,x^t). \end{equation} As each agent is allowed to exchange information only with the agents belonging to its neighborhood $[i]$, the functions $f_i$ must respect this constraint. This is formally expressed by the following definitions. \begin{definition} With $V\subset\mathcal N$, a function $g$ on $\mathbb N\times\mathbb R^N$ is said to be \emph{adapted to $V$} if it satisfies $g(t,x)=g(t,z)$ for every $t\in\mathbb N$ and every $x,z\in\mathbb R^N$ satisfying $x_i=z_i$ for all $i\in V$. \end{definition} \begin{definition}\label{def.decentralized} The function $f=(f_i)_{i\in\mathcal N}$ is said to be $\mathcal C$-\emph{decentralized} if, for each $i\in\mathcal N$, the map $f_i$ is adapted to~$[i]$. \end{definition} Then, the \emph{distributed minimum sharing} problem is defined as follows. \begin{problem}\label{prob.1} Design a $\mathcal C$-decentralized function $f$ such that the set \begin{equation}\label{d.A} \mathcal A := \{{\rm M}^\star\}^N \end{equation} is globally attractive for \eqref{s.x}. \end{problem} \begin{remark}\label{rmk.decentralization} We stress that, if $f$ is $\mathcal C$-decentralized, then each function $f_i$ in \eqref{s.xi_fi} depends only on $(x_j)_{j\in [i]}$ and not on the whole state $x$. \end{remark} \begin{remark} Depending on the additional qualifiers that may characterize the attractiveness property of $\mathcal A$ in Problem \ref{prob.1}, we may have solutions to Problem \ref{prob.1} in ``different senses''. In the forthcoming section, we propose a methodology achieving both global attractiveness and global uniform {practical} attractiveness of $\mathcal A$, depending on the value of some user-chosen control parameters. We will show that a compromise between how close we can get to $\mathcal A$ and uniformity in the initial time is necessary.
In particular, we show that attractiveness is possible only at the price of losing uniformity in the initial time, and that, if such a property is needed, then global practical uniform attractiveness is the best we can achieve. \end{remark} \subsection{Standing Assumptions} We consider Problem \ref{prob.1} under two main assumptions specified hereafter. We define the set \begin{equation}\label{d.Isr} \begin{array}{lcl} I^\star &:=& \displaystyle\argmin_{i\in\mathcal N} {\rm M}_i . \end{array} \end{equation} With the following assumption, we require the communication network to be connected with respect to $I^\star$. \begin{assumption}[Connectedness]\label{ass.connected} The communication network $(\mathcal N,\mathcal C)$ is $I^\star$-connected in the sense of Definition~\ref{def.Iconnected}. \end{assumption} The second assumption, instead, requires each agent to know a lower bound on ${\rm M}^\star$. \begin{assumption}[Consistency]\label{ass.M_eps} Each agent $i\in\mathcal N$ knows a number $\mu_i\in\mathbb R_{>0}$ such that $\mu_i\le{\rm M}^\star$. \end{assumption} It is worth noting that Assumption \ref{ass.M_eps} is a ``centralized'' assumption, in that it asks each agent to know a lower bound on the common, unknown quantity ${\rm M}^\star$. Nevertheless, it introduces almost no loss of generality in many applications of interest, including those mentioned in Section \ref{sec.app}, where knowing a lower bound on ${\rm M}^\star$ is a mild requirement. For instance, in both the traffic control and leader election problems we can assume that the quantities ${\rm M}_i$ are integers, so that ``$\mu_i\in(0,1)$ for all $i\in\mathcal N$'' is a feasible choice requiring no further knowledge of ${\rm M}^\star$. Furthermore, this assumption is not in principle needed if an approximate or practical attractiveness result is sought.
In fact, if for some $I\subset \mathcal N$ we have $\epsilon:=\max_{i\in I}\mu_{i}>{\rm M}^\star$, then ${\rm M}^\star\in[0,\epsilon)$ and, as clarified later on by the asymptotic analysis, we are able to claim that the set $[0,\epsilon]^N$ (which includes ${\rm M}^\star$) is practically attractive for $x$, where $\epsilon$ can be made arbitrarily small by choosing the $\mu_i$ accordingly. In the following we let \begin{equation}\label{d.ulb} \underline{\mu} := \displaystyle\min_{i\in\mathcal N} \mu_i. \end{equation} \subsection{The Update Laws} The proposed update law is obtained by choosing $f$ so that, for each $i\in\mathcal N$, Equation \eqref{s.xi_fi} reads as follows\footnote{Recall that $\projOp{[a,b]}(s) := \min\{\max\{ s,\, a \},\, b\}$.} \begin{equation}\label{s.xi} x_i^{t+1} = \proj{[\mu_i,\, {\rm M}_i]}{{\rm e}^{h_i^t} x_i^t + k_i \sum_{j\in[i]}\big(x_j^t-x_i^t \big)}, \end{equation} in which $\mu_i>0$ is the quantity introduced in Assumption \ref{ass.M_eps}, $k_i>0$ is a free control gain chosen to satisfy \begin{equation} \label{inq.ki_1} 0< k_i \le \dfrac{1}{\#([i]\setminus i)} \end{equation} and $h_i:\mathbb N\to \mathbb R_{\ge 0}$ is a time signal to be designed later on. Notice that, as in \cite{nedic_constrained_2010}, the update laws \eqref{s.xi} have the form of a projected (onto the interval $[\mu_i,\,{\rm M}_i]$) consensus-like protocol. Unlike \cite{nedic_constrained_2010}, however, the matrix defining the estimate dynamics need {\em not} be column- or row-stochastic, and the coefficients $k_i$ are only constrained by \eqref{inq.ki_1}; hence, they can be chosen in a completely decentralized way. Moreover, unlike all the aforementioned distributed optimization approaches, the restriction of the dynamics onto the \emph{consensus manifold}\footnote{That is, the set $\mathcal M:=\{x\in\mathbb R^N\,\mid\, x_i=x_j,\ \forall i,j\in\mathcal N\}$.} is not marginally stable. Rather, it is deliberately made unstable by the terms ${\rm e}^{h_i^t}$.
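The update law \eqref{s.xi} can be exercised numerically. Below is a self-contained sketch (all parameter values, the network, and the periodic excitation schedule are our own illustrative choices, not the paper's): three agents on a complete graph run the projected, excited consensus step with gains saturating the bound \eqref{inq.ki_1}, and the estimates settle in a small neighborhood of ${\rm M}^\star=\min_i{\rm M}_i$.

```python
# One synchronous iteration of the projected update law:
#   x_i <- Proj_{[mu_i, M_i]}( e^{h_i(t)} x_i + k_i * sum_{j in [i]} (x_j - x_i) ).
import math

def proj(s, lo, hi):
    """Projection onto [lo, hi]: min{max{s, lo}, hi}."""
    return min(max(s, lo), hi)

def step(x, M, mu, k, h, nbrs):
    """Apply the update law once, for all agents simultaneously."""
    return [
        proj(math.exp(h[i]) * x[i] + k[i] * sum(x[j] - x[i] for j in nbrs[i]),
             mu[i], M[i])
        for i in range(len(x))
    ]

# Complete 3-agent network (i in [i]); k_i = 1/#([i] \ {i}) saturates the bound.
nbrs = [[0, 1, 2]] * 3
M = [4.0, 9.0, 6.0]          # private values, M* = min(M) = 4
mu = [0.5] * 3               # lower bounds mu_i <= M* (consistency assumption)
k = [0.5] * 3

x = list(mu)                 # an arbitrary initialization
for t in range(500):
    h = [0.1 if t % 5 == 0 else 0.0 for _ in range(3)]  # periodic excitation
    x = step(x, M, mu, k, h, nbrs)

print(x)  # all entries within a small neighborhood of M* = 4
```

Shrinking the excitation amplitude tightens the neighborhood reached around ${\rm M}^\star$ at the cost of slower growth from small initial conditions, consistently with the compromise between approximation error and convergence rate discussed above.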
\subsection{Excitation Properties} The signals $h_i$ will be chosen to guarantee one of the following \emph{excitation properties}. \begin{definition}[Sufficiency of Excitation]\label{d.SE} With $t_0\in\mathbb N$, the family $(h_i)_{i\in\mathcal N}$ is said to be \emph{sufficiently exciting from $t_0$} if there exist $\underline h(t_0)>0$ and $\Delta(t_0)\in\mathbb N_{\ge 1}$ such that, for each $m\in\mathbb N_{\ge 1}$ satisfying \begin{align}\label{e.SE.m} m &\le \dfrac{1}{\underline h(t_0)}\log\left( \dfrac{{\rm M}^\star}{\und{\lb}} \right) \end{align} and each $i\in\mathcal N$, there exists at least one $s_i\in\{t_0+1+(m-1)\Delta(t_0),\,\dots,\,t_0+m\Delta(t_0)\}$ such that $h_i^{s_i}\ge \underline h(t_0)$. \end{definition} In qualitative terms, given an initial time $t_0$, sufficiency of excitation requires that the signals $h_i$ be positive ``frequently enough'' for a ``large enough'' amount of time succeeding $t_0$. When $(h_i)_{i\in\mathcal N}$ is sufficiently exciting from \emph{every} $t_0$, and independently of it, we say that $(h_i)_{i\in\mathcal N}$ enjoys the \emph{uniformity of excitation} property. \begin{definition}[Uniformity of Excitation]\label{d.PE} The family $(h_i)_{i\in\mathcal N}$ is said to be \emph{uniformly exciting} if it is sufficiently exciting from every $t_0$, with $\underline h$ and $\Delta$ not dependent on $t_0$. \end{definition} Uniformity of excitation can be seen as a ``uniform in $t_0$'' version of sufficiency of excitation and, in particular, it implies that all the signals $h_i$ take positive values infinitely often. Defined in this way, both these properties are ``centralized'', in that they involve quantities common to all the agents. However, both can be easily obtained by means of decentralized design policies in which the signals $h_i$ are chosen independently of each other.
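Definition \ref{d.SE} can be checked numerically for a given family of signals over the finitely many windows it involves; a minimal sketch, in which the function-based signal representation and the parameter values are assumptions for illustration:

```python
import math

def sufficiently_exciting(h, t0, h_low, Delta, M_star, lam_low):
    """Check the sufficiency-of-excitation definition for signals h[i]: t -> h_i^t;
    h_low and Delta play the role of underline-h(t0) and Delta(t0)."""
    m_max = math.floor((1.0 / h_low) * math.log(M_star / lam_low))
    for m in range(1, m_max + 1):
        window = range(t0 + 1 + (m - 1) * Delta, t0 + m * Delta + 1)
        for hi in h.values():
            if not any(hi(s) >= h_low for s in window):
                return False  # some agent is never excited in this window
    return True

# A periodic family, not identically zero (cf. the lemma on periodic signals):
h = {1: lambda t: 1.0 if t % 5 == 0 else 0.0,   # impulsive, period 5
     2: lambda t: 0.5}                           # constant
print(sufficiently_exciting(h, t0=0, h_low=0.5, Delta=5, M_star=100.0, lam_low=0.5))
# True; since h_low and Delta do not depend on t0, this family is uniformly exciting
```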
This is the case, for instance, when the signals $h_i$ are \emph{periodic} (with possibly different periods) and not identically zero, as formalized in the following lemma (proved in \ref{apd.lemma_PE}). \begin{lemma}\label{lem.PE} Suppose that, for each $i\in\mathcal N$, $h_i$ is periodic and there exists $t\in\mathbb N$ for which $h_i^t >0$. Then, the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting. \end{lemma} \begin{remark} If $h_i^t=0$ for all $i\in\mathcal N$ and $t\in\mathbb N$, each of the infinitely many points of the consensus manifold $\mathcal M$ is an equilibrium for \eqref{s.xi}. Since ${\rm M}^\star\in\mathcal M$, this implies that ${\rm M}^\star$ is a well-defined steady state for \eqref{s.xi}. However, in this case ${\rm M}^\star$ cannot be reached from any of the initial conditions in $\mathcal M$, as they are themselves equilibria. This, in turn, is related to the impossibility result \cite[Theorem 3.1.1]{Santoro2006} for the leader election problem in the absence of unique identifiers, and is at the basis of the non-globality of the FloodMax and Max-Consensus algorithms (see Section \ref{sec.intro}). In order to prevent the consensual states in $\mathcal M$ from being equilibria, the signals $h_i^t$ must carry enough excitation, in the sense of Definition \ref{d.SE} or \ref{d.PE}. As formally stated later on in Theorem \ref{thm.main}, this indeed permits recovering globality, although it ruins ``exactness'' of convergence of each estimate $x_i$ to ${\rm M}^\star$, the latter being a consensual state. In these terms, the signals $h_i$ play the same role as the \emph{dithering} signals in Extremum Seeking approaches \cite{Tan2006,Ariyur2003}. \end{remark} \section{Convergence Results}\label{sec:conv} \subsection{Main result} For ease of notation, we write the update laws \eqref{s.xi} in the compact form \eqref{s.x}.
The following theorem -- which is the main result of the paper -- relates the excitation properties of the signals $h_i$ to the asymptotic convergence of the estimates $x_i$ produced by the update laws \eqref{s.xi} to ${\rm M}^\star$. In particular, it shows that sufficiency of excitation implies convergence (possibly exact), whereas uniformity of excitation implies uniform convergence at the price of exactness. Further remarks and insights on the results given in the theorem follow in Section~\ref{sec.remarks}. \begin{theorem}\label{thm.main} Under Assumptions \ref{ass.connected} and \ref{ass.M_eps}, consider the update laws \eqref{s.xi}, in which $k_i$ satisfies \eqref{inq.ki_1}. Suppose that, for a given $t_0\in\mathbb N$, the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting from $t_0$ in the sense of Definition \ref{d.SE}. Then, the following claims hold: \begin{enumerate} \item \label{thm.main.item1} There exists $t^\star=t^\star(t_0)$ such that every solution $x$ to~\eqref{s.x} starting at $t_0$ satisfies \begin{equation*} \begin{array}{lclcl} x_i^t &\ge& {\rm M}^\star, && \forall t\ge t^\star(t_0),\ \forall i \in\mathcal N\setminus I^\star\\ x_i^t &=& {\rm M}^\star, && \forall t\ge t^\star(t_0),\ \forall i\in I^\star, \end{array} \end{equation*} with $I^\star$ given by \eqref{d.Isr}. \item \label{thm.main.item2} For each $\epsilon>0$, there exists $\delta(\epsilon)>0$ such that, if \begin{equation}\label{in.ls_hi} \limsup_{t\to\infty} h_i^t \le \delta(\epsilon),\quad \forall i\in\mathcal N , \end{equation} then each solution $x$ starting at $t_0$ satisfies \begin{equation}\label{e.lim_xi_Ai} \limsup_{t\to\infty}|x_i^t-{\rm M}^\star| \le \epsilon ,\quad \forall i\in\mathcal N. \end{equation} In particular, the set \begin{equation*} \mathcal A_\epsilon:= \prod_{i\in\mathcal N} \big[{\rm M}^\star,\,\min\{{\rm M}^\star+\epsilon,\,{\rm M}_i\}\big] \end{equation*} is globally attractive for \eqref{s.x}.
\item \label{thm.main.item3} If the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting in the sense of Definition \ref{d.PE}, then $\mathcal A_\epsilon$ is globally uniformly attractive. \item If all the signals $h_i$ are non-zero and periodic (with possibly different periods), then there exists a compact set $\mathcal A_\epsilon^u\subset\mathcal A_\epsilon$ which is globally uniformly attractive and stable, hence, globally uniformly asymptotically stable. \item \label{thm.main.item4} If \begin{equation*} \lim_{t\to\infty} h_i^t =0,\quad \forall i\in\mathcal N , \end{equation*} then the set $\mathcal A$, given by \eqref{d.A}, is globally attractive for \eqref{s.x}, i.e. \begin{equation*} \lim_{t\to\infty} x_i^t={\rm M}^\star ,\quad\forall i\in\mathcal N. \end{equation*} \end{enumerate} \end{theorem} For the reader's convenience, the proof of Theorem~\ref{thm.main} is postponed to Section \ref{sec.proof}. \subsection{Remarks on the Result}\label{sec.remarks} Claim 1 of Theorem \ref{thm.main} states that, if the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting, then within a finite time $t^\star$ the estimates $x_i$ of the agents $i\in I^\star$ satisfying ${\rm M}_i={\rm M}^\star$ reach the target value ${\rm M}^\star$, while the estimates $x_i$ of the remaining agents $i\in\mathcal N\setminus I^\star$ become larger than ${\rm M}^\star$. The time $t^\star$ is, however, a centralized quantity which depends on the excitation properties of all the signals $h_i$. Claim 2 characterizes the asymptotic behavior of the remaining agents, by stating that the update laws \eqref{s.xi} are able to drive the estimates $x_i$ arbitrarily close to ${\rm M}^\star$, provided that the amplitude of the signals $h_i^t$ is eventually reduced accordingly.
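The finite-time behavior described by Claim 1 can be observed numerically. The following sketch iterates the update law on a hypothetical $4$-agent network with constant (hence uniformly exciting) signals; it is a quick numerical check under these assumptions, not the paper's simulation setup.

```python
import math

def clip(s, a, b):
    return min(max(s, a), b)

# Hypothetical network: neigh[i] is [i] and contains i itself.
neigh = {1: {1, 3, 4}, 2: {2, 3, 4}, 3: {1, 2, 3}, 4: {1, 2, 4}}
M = {1: 10, 2: 12, 3: 13, 4: 13}
mu = {i: 0.5 for i in M}
k = {i: 1 / (len(neigh[i]) - 1) for i in M}   # saturates the gain bound
h = 1e-2                                       # constant excitation signals

M_star = min(M.values())
I_star = {i for i in M if M[i] == M_star}

x = dict(mu)  # worst case: all estimates start at the lower bounds
for t in range(20000):
    x = {i: clip(math.exp(h) * x[i] + k[i] * sum(x[j] - x[i] for j in neigh[i]),
                 mu[i], M[i])
         for i in x}

assert all(x[i] == M_star for i in I_star)   # exact value on I* (Claim 1)
assert all(x[i] >= M_star for i in M)        # all other estimates end up above M*
```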
Since the approximating set $\mathcal A_\epsilon$ can be made arbitrarily tight by acting on the asymptotic bounds of the $h_i$ accordingly, this is a \emph{global practical attractiveness} result for the target set $\mathcal A$ (defined in \eqref{d.A}). More precisely, let $\Gamma$ be the set of all the families $\gamma:=(h_i)_{i\in\mathcal N}$ of functions $h_i:\mathbb N\to\mathbb R_{\ge 0}$, and consider a family of systems of the form \eqref{pre.s.xa}, with $x^t\in\mathbb R^N$ and $f_\gamma:=(f_{\gamma}^i)_{i\in\mathcal N}$ satisfying \begin{equation}\label{d.fgamma} f_\gamma^i(t,x) := \proj{[\mu_i,\, {\rm M}_i]}{{\rm e}^{h_i^t} x_i + k_i \sum_{j\in[i]}\big(x_j-x_i \big)} . \end{equation} Then, the second claim of the theorem can be restated as follows. \begin{corollary}\label{cor.p} Under the assumptions of Theorem \ref{thm.main}, the set $\mathcal A$ is globally practically attractive for the family \eqref{d.fgamma}. \end{corollary} Claim 3 of the theorem further strengthens Corollary~\ref{cor.p} to a \emph{uniform} global practical attractiveness property of $\mathcal A$ in the presence of uniformity of excitation. Moreover, in the relevant case in which the signals $h_i$ are periodic, Claim~4 guarantees the existence of a compact set included in $\mathcal A_\epsilon$ which is globally uniformly asymptotically stable. Finally, Claim 5 states that, if all the signals $h_i^t$ converge to zero, then a \emph{global attractiveness} result for the target set $\mathcal A$ holds (i.e. $x_i^t \to {\rm M}^\star$ for all $i\in\mathcal N$). However, we observe that, if $h_i^t\to 0$ for some $i\in\mathcal N$, then the family $(h_i)_{i\in\mathcal N}$ fails to be uniformly exciting, and thus the convergence of the estimates $x_i$ to ${\rm M}^\star$ is \emph{not} in general uniform in the initial time $t_0$.
This underlines an important difference between sufficiency and uniformity of excitation: sufficiency of excitation allows exact convergence, but prevents uniformity in the initial time. Uniformity of excitation, instead, guarantees uniform convergence and stability, but frustrates exact convergence, guaranteeing only a weaker practical result. This, in turn, reveals a somewhat unavoidable compromise between complexity, uniformity and convergence. \subsection{On the Design of the Signals $h_i$} The signals $h_i$ are the only degrees of freedom left to be chosen in the update laws \eqref{s.xi}. In this respect, Theorem \ref{thm.main} links their amplitude and excitation properties to the corresponding asymptotic behavior of the estimates $x_i$, thus providing guidelines for their design. Based on the claims of Theorem \ref{thm.main}, in this section we discuss some possible designs guaranteeing sufficiency or uniformity of excitation. \subsubsection{Sufficiently Exciting Designs} Sufficiency of excitation of the family $(h_i)_{i\in\mathcal N}$ is guaranteed if each $h_i$ takes ``enough'' positive values. According to Definition \ref{d.SE}, and in particular to \eqref{e.SE.m}, how much is ``enough'' depends on centralized quantities. In turn, a design of the signals $h_i$ based on the knowledge of $t_0$ and of the quantities appearing in \eqref{e.SE.m} is undesirable, being inevitably centralized and not robust. A simple decentralized way to design a sufficiently exciting family $(h_i)_{i\in\mathcal N}$ amounts to choosing bounded signals $h_i$ satisfying \begin{equation}\label{e.sum_hi} \sum_{t\in\mathbb N} h_i^t = \infty ,\qquad\forall i\in\mathcal N. \end{equation} This, for instance, can be achieved by simply letting $h_i^t = a_i/(1+t)$ for some arbitrary $a_i>0$. \begin{lemma}\label{lem.SE} Suppose that, for each $i\in\mathcal N$, the signal $h_i$ is bounded and satisfies \eqref{e.sum_hi}.
Then, the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting in the sense of Definition \ref{d.SE}. \end{lemma} The proof of Lemma~\ref{lem.SE} follows directly from~\eqref{e.sum_hi}, hence it is omitted. \smallskip In view of Claim 5 of Theorem \ref{thm.main}, exact convergence of the estimates $x_i$ to ${\rm M}^\star$ is obtained if $\lim_{t\to\infty} h_i^t=0$ for all $i\in\mathcal N$. Moreover, convergence of $h_i$ to zero is implied by (although not equivalent to) the following property \begin{equation}\label{e.sum_hi2} \sum_{t\in\mathbb N} \big(h_i^t\big)^2 < \infty . \end{equation} It is interesting to notice that Properties \eqref{e.sum_hi}-\eqref{e.sum_hi2} are standard assumptions imposed on the \emph{stepsize} in classical \emph{stochastic approximation algorithms} \cite{Robbins1951,Kushner1997}, as well as in modern distributed optimization algorithms using vanishing step sizes \cite{NotarstefanoTutorial,nedic_constrained_2010,Simonetto2016}. In the context of this paper, these two conditions are simply sufficient conditions for sufficiency of excitation, which can be easily satisfied by decentralized designs of the signals $h_i$. \subsubsection{Uniformly Exciting Designs}\label{sec.design_hi} In view of Lemma \ref{lem.PE}, if every signal $h_i$ is periodic and not identically zero, then $(h_i)_{i\in\mathcal N}$ is uniformly exciting. While periodicity is not necessary for uniformity of excitation, it certainly is a relevant design choice due to its simplicity and effectiveness. Possible decentralized design choices for periodic signals $h_i$ leading to a uniformly exciting family $(h_i)_{i\in\mathcal N}$ are listed below, where the quantities $A_i,T_i,\rho_i>0$ are arbitrary. From the theoretical viewpoint, all the following options are equally fine. Depending on the application domain, however, some choices may be more convenient than others.
\begin{enumerate} \item \emph{Constant signals}: the simplest design choice consists in choosing $h_i^t = A_i$ for all $t\in\mathbb N$. \item \emph{Rectified sinusoids}: different versions can be defined; for instance, $h_i^t = A_i |\sin(\pi t/T_i)|$ and $h_i^t=A_i\max\{0,\, \sin(2\pi t/T_i) \}$ both have period $T_i$. \item \emph{Square waves:} with $\rho_i\in(0,1]$ playing the role of a duty cycle, square waves have the form \begin{equation}\label{d.square_f} h_i^t = A_i\, {\rm step}\left({\rm mod}(t,T_i)- (1-\rho_i)T_i\right) \end{equation} in which ${\rm mod}(s,T) := s-T\max\{n\in\mathbb N \,\mid\, n T\le s\}$, and ${\rm step}(\cdot)$ denotes the \emph{step function} satisfying ${\rm step}(s)=0$ for $s<0$ and ${\rm step}(s)=1$ for $s\ge 0$. The signal \eqref{d.square_f} has period $T_i$ and satisfies $h_i^t=A_i$ for $\rho_iT_i$ time steps in each period. \end{enumerate} \section{Numerical Simulations}\label{sec:simul} \begin{figure}[h] \vspace*{-.2cm} \centering \includegraphics[width=\linewidth,trim=5em 1em 5em 0em,clip]{ex1_network} \vspace*{-.7cm} \caption{Communication structure of Simulation 1: \textbf{(a)} $[1]= \{1,3,4\}$, $[2]=\{2,3,4\}$, $[3]=\{1,2,3\}$, $[4]=\{1,2,4\}$; \textbf{(b)} $[1]= \{1,3,4\}$, $[2]=\{2,4\}$, $[3]=\{1,3,5,6\}$, $[4]=\{1,2,4,5\}$, $[5]=\{3,4,5\}$ and $[6]=\{3,6\}$; \textbf{(c)} $[1]=\{1,4\}$, $[4]=\{1,4,5,6\}$, $[5]=\{4,5\}$ and $[6]=\{4,6\}$.\vspace{-.3cm}} \label{Fig.ex1.top} \end{figure} \begin{figure*} \includegraphics[width=\linewidth,trim=4em 2em 4em 2em,clip]{ex1_sim} \vspace*{-.7cm} \caption{Evolution of the estimates $x_i$ in Scenario 1. The trajectory of the optimal value ${\rm M}^\star$ is shown as a dashed gray line. Colored lines depict the trajectories of the estimates $x_i$, $i=1,\dots,6$. In abscissa: iteration variable $t$.\vspace{-.25cm}} \label{Fig.ex1.sim} \end{figure*} In this section, we present two illustrative numerical simulation scenarios.
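The periodic designs listed above, which generate the signals $h_i$ used in the simulations, can be sketched as follows; the parameter values are hypothetical.

```python
import math

def constant(A):
    """Constant signal h^t = A."""
    return lambda t: A

def rectified_sine(A, T):
    """Rectified sinusoid h^t = A |sin(pi t / T)|, of period T."""
    return lambda t: A * abs(math.sin(math.pi * t / T))

def square_wave(A, T, rho):
    """Square wave of period T and duty cycle rho in (0, 1]:
    equal to A on the last rho*T steps of each period, 0 otherwise."""
    return lambda t: A if (t % T) >= (1 - rho) * T else 0.0

h1 = square_wave(A=1e-3, T=10, rho=0.5)
print([h1(t) for t in range(10)])  # five zeros, then five samples equal to 1e-3
```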
In Scenario~1, a network with a time-changing topology (see Figure~\ref{Fig.ex1.top}) is considered, while in Scenario~2, for a fixed network topology, the use of different signals $h_i$ is evaluated. \subsection{Scenario 1: Uniform Convergence} The first simulation, shown in Figure \ref{Fig.ex1.sim}, is obtained as follows. The simulation starts with a network of $4$ agents (Agents $1$, $2$, $3$, and $4$), provided with the communication structure shown in Figure \ref{Fig.ex1.top}-(a) and with numbers $({\rm M}_1,\,{\rm M}_2,\,{\rm M}_3,\,{\rm M}_4)=(10,\,12,\,13,\,13)$, implying ${\rm M}^\star={\rm M}_1=10$. The update laws \eqref{s.xi} are implemented with $\mu_i=1/2$ for all $i\in\{1,\dots,4\}$, with $(k_1,\,k_2,\,k_3,\,k_4)=(0.1,\, 0.08,\,0.05,\,0.09)$, and with the signals $h_i$ chosen as the square waves discussed in Section \ref{sec.design_hi} with parameters $(T_1,A_1,\rho_1)=(15,10^{-3},0.2)$, $(T_2,A_2,\rho_2)=(10,5\cdot10^{-4},0.5)$, $(T_3,A_3,\rho_3)=(5, 10^{-3},0.3)$, $(T_4,A_4,\rho_4)=(10,5\cdot10^{-4},0.5)$. At time $t=500$, two new agents (Agents $5$ and~$6$) are added to the network, and the communication structure is changed to the one shown in Figure~{\ref{Fig.ex1.top}-(b)}. The new agents have numbers $({\rm M}_5,{\rm M}_6)=(7,11)$, lower bounds $\mu_5=\mu_6=1/2$, coefficients $(k_5,k_6)=(0.07,0.1)$, and signals $h_i$ given by the square waves presented in Section \ref{sec.design_hi} with $(T_5,A_5,\rho_5) = (5,10^{-3},0.4)$ and $(T_6,A_6,\rho_6) = (7,25\cdot10^{-4},0.1)$. Furthermore, the numbers of Agents $1$ and~$3$ are changed to $({\rm M}_1,{\rm M}_3)=(11,13)$. The new optimum is thus ${\rm M}^\star={\rm M}_5=7$. At time $t=1500$, Agents $2$ and $3$ leave the network, and the communication structure is changed to that depicted in Figure \ref{Fig.ex1.top}-(c). Moreover, the numbers of the agents are changed to $({\rm M}_1,{\rm M}_4,{\rm M}_5,{\rm M}_6)=(12,16,11,16)$, leading to ${\rm M}^\star={\rm M}_5=11$.
Finally, at time $t=5000$, the number of Agent $4$ is changed to ${\rm M}_4=8$, so that ${\rm M}^\star={\rm M}_4=8$. As Figure \ref{Fig.ex1.sim} shows, convergence to the (time-varying) optimum ${\rm M}^\star$ is approximate, and the trajectories of the agents show residual oscillations. Figure~\ref{Fig.ex1.sim} also underlines that convergence to ${\rm M}^\star$ ``from below'' (i.e. when the initial values of the agents are smaller than ${\rm M}^\star$) is slower than convergence ``from above'' (i.e. when the initial values of the agents are larger than ${\rm M}^\star$). As shown in the analysis of Section \ref{sec.proof}, this is due to the fact that (i) the convergence rate ``from below'', established in Section~\ref{sec.proof.1}, is determined by the values of the signals $h_i^t$, while (ii) the convergence rate ``from above'', established in Sections \ref{sec.proof.2}-\ref{sec.proof.3}, is determined by the values of the coefficients $k_i$. \begin{figure}[h] \vspace*{-.25cm} \centering \includegraphics[width=\linewidth,trim=4em 0em 3em 0em,clip]{maxc_1} \vspace*{-.7cm} \caption{Evolution of the Max-Consensus estimates (update law~\eqref{s.ex.maxconsensus}) in the setting of Scenario 1 (cf. Figure~\ref{Fig.ex1.sim}). In abscissa: iteration variable $t$.\vspace{-.25cm}} \label{Fig.maxc1} \end{figure} For the sake of comparison, Figure~\ref{Fig.maxc1} shows a simulation in which the Max-Consensus~\eqref{s.ex.maxconsensus} is employed in the same setting. As shown in Figure~\ref{Fig.maxc1}, although achieving a faster convergence for the first two changes of ${\rm M}^\star$, the Max-Consensus fails to track the other changes. As illustrated in Section~\ref{sec.intro}, this is due to the fact that it is not globally attractive. \subsection{Scenario 2: Non-Uniform Convergence} \begin{figure*} \includegraphics[width=\linewidth,trim=4em .5em 4em 2em,clip]{ex2_sim} \vspace*{-.7cm} \caption{Evolution of the estimates $x_i$ in Scenario 2.
The trajectory of the optimal value ${\rm M}^\star$ is shown as a dashed gray line. Dark to light orange lines depict the trajectories of the estimates $x_i$, $i=1,\dots,4$, of the first network. Dark to light blue lines depict the trajectories of the estimates $x_{i'}$, $i'=1',\dots,4'$, of the second network. In abscissa: iteration variable $t$. \vspace{-.45cm}} \label{Fig.ex2.sim} \end{figure*} \begin{figure}[h] \vspace*{-.1cm} \centering \includegraphics[width=\linewidth,trim=4em 0em 3em 0em,clip]{maxc_2} \vspace*{-.7cm} \caption{Evolution of the Max-Consensus estimates (update law~\eqref{s.ex.maxconsensus}) in the setting of Scenario 2 (cf. Figure~\ref{Fig.ex2.sim}). In abscissa: iteration variable $t$.\vspace{-.5cm}} \label{Fig.maxc2} \end{figure} In the second scenario, we compare two simple networks having the same data and communication structures, but different signals $h_i$. The first network, $\mathcal N$, includes Agents $1$, $2$, $3$ and $4$, and it is given the communication structure depicted in Figure \ref{Fig.ex1.top}-(a). Initially, the agents are given numbers $({\rm M}_1,{\rm M}_2,{\rm M}_3,{\rm M}_4)=(3,6,9,15)$, so that ${\rm M}^\star={\rm M}_1=3$. At time $t=500$, ${\rm M}_1$ is changed to $15$, so that ${\rm M}^\star={\rm M}_2=6$. At time $t=20000$, ${\rm M}_2$ is changed to $15$, so that ${\rm M}^\star={\rm M}_3=9$. At time $t=35000$, ${\rm M}_3$ is changed to $12$, so that ${\rm M}^\star={\rm M}_3=12$. Finally, at time $t=150000$, ${\rm M}_3$ is changed to $15$, so that ${\rm M}^\star={\rm M}_1={\rm M}_2={\rm M}_3={\rm M}_4=15$. The update laws are implemented with $(k_1,k_2,k_3,k_4) = (0.1,0.08,0.05,0.09)$, $\mu_1=\mu_2=\mu_3=\mu_4=1/2$, and with a family $(h_i)_{i\in\mathcal N}$ of uniformly exciting signals defined as square waves with parameters $(T_1,A_1,\rho_1)=(15,10^{-3},0.2)$, $(T_2,A_2,\rho_2)=(10,5\cdot10^{-4},0.5)$, $(T_3,A_3,\rho_3)=(5, 10^{-3},0.3)$, $(T_4,A_4,\rho_4)=(10,5\cdot10^{-4},0.5)$.
The second network, $\mathcal N'$, includes Agents $1'$, $2'$, $3'$ and $4'$, and has the same communication structure and data as~$\mathcal N$. The update laws have the same parameters $k_{i'}=k_i$ and $\mu_{i'}=\mu_i$, $i\in\mathcal N$, except for the family $(h_{i'})_{i'\in\mathcal N'}$, which is given by $h_{i'}^t = (1+t)^{-1}$ for all $i'\in\mathcal N'$. The signals $h_{i'}$ satisfy \eqref{e.sum_hi}-\eqref{e.sum_hi2} and, thus, $(h_{i'})_{i'\in\mathcal N'}$ is sufficiently exciting. However, it fails to be uniformly exciting. The simulation shown in Figure \ref{Fig.ex2.sim} compares the time behavior of the estimates $x_i$, $i\in\mathcal N$, and $x_{i'}$, $i'\in\mathcal N'$. As shown in the figure, each ``step'' of ${\rm M}^\star$ is followed by the estimates $x_i$ with the same convergence rate. On the contrary, ${{\rm M}^\star}'={\rm M}^\star$ is followed by the estimates $x_{i'}$ with a convergence rate which degrades in time. This is due to the fact that the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting, while the family $(h_{i'})_{i'\in\mathcal N'}$ is only sufficiently exciting. Thus, uniformity of convergence is not guaranteed for the estimates $x_{i'}$. Nevertheless, the zoomed part of the plot clearly shows that the estimates $x_{i'}$ reach ${{\rm M}^\star}'$ with higher precision (by Claim 5 of Theorem~\ref{thm.main}, indeed, since $h_{i'}^t\to 0$ the convergence of the estimates $x_{i'}$ is asymptotically exact if ${{\rm M}^\star}'$ remains constant), whereas the estimates $x_i$ exhibit a non-zero residual error. The above simulations underline the compromise, already mentioned in different parts of the paper and formally characterized by Claims 3 and 5 of Theorem \ref{thm.main}, between \emph{exact convergence} and \emph{uniformity in time}, which characterizes the proposed methodology. Finally, Figure~\ref{Fig.maxc2} shows a simulation of the Max-Consensus \eqref{s.ex.maxconsensus} in the same setting (cf.
Figure~\ref{Fig.ex2.sim}). Again, the Max-Consensus fails to track the time-varying ${\rm M}^\star$. To see why this is the case, consider for instance the change of value of ${\rm M}_1$ at $t=500$. This determines an increase of ${\rm M}^\star$, bringing the Max-Consensus algorithm into a situation in which \eqref{s.ex.init2} holds at $t_0=500$. Hence, as explained in Section~\ref{sec.intro}, $x^{500}$ falls outside the domain of attraction of the new ${\rm M}^\star$, and thus convergence fails. \section{Concluding Remarks}\label{sec:concl} As detailed in the proof of the main result (Section~\ref{sec.proof}) and shown in the numerical simulations, the proposed solution is characterized by a necessary compromise between convergence rate and asymptotic error, as both are determined in the worst case by the signals $h_i$. In particular, if $(h_i)_{i\in\mathcal N}$ is uniformly exciting, uniform convergence is guaranteed, but the estimates will have a non-zero steady-state error. We stress that this residual error can be reduced arbitrarily by reducing the maximum value of the signals $h_i$ accordingly. But we also remark that, in general, this results in a reduction of the convergence rate. Larger values of the signals $h_i$ are associated instead with faster convergence, but lead to larger steady-state errors. Moreover, in the limit case in which $h_i^t\to 0$ for all $i\in\mathcal N$, asymptotic convergence is obtained whenever $(h_i)_{i\in\mathcal N}$ is sufficiently exciting. The convergence rate, however, degrades over time and is not uniformly lower-bounded, and thus uniformity is lost. Clearly, ``smart'' choices of the signals $h_i$ are possible by adapting their values at run time: increasing them when fast convergence is needed, and decreasing them when a low residual error is desired. ``Adaptive'' design choices of this kind will be the subject of future research.
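The compromise just described can be reproduced in a few lines: on a hypothetical $4$-agent network, a constant (uniformly exciting) family leaves a non-zero residual error, whereas a vanishing family $h_i^t=(1+t)^{-1}$ drives the error much closer to zero, at the price of a rate that degrades in time. This is a minimal numerical sketch under these assumptions, not the paper's simulation setup.

```python
import math

def clip(s, a, b):
    return min(max(s, a), b)

def run(h, steps):
    """Iterate the update laws on a hypothetical 4-agent network."""
    neigh = {1: {1, 3, 4}, 2: {2, 3, 4}, 3: {1, 2, 3}, 4: {1, 2, 4}}
    M = {1: 10, 2: 12, 3: 13, 4: 13}
    mu = {i: 0.5 for i in M}
    k = {i: 1 / (len(neigh[i]) - 1) for i in M}
    x = dict(mu)
    for t in range(steps):
        x = {i: clip(math.exp(h(t)) * x[i]
                     + k[i] * sum(x[j] - x[i] for j in neigh[i]), mu[i], M[i])
             for i in x}
    return x

M_star = 10.0
x_const = run(lambda t: 1e-2, 50000)          # uniformly exciting: practical result
x_vanish = run(lambda t: 1 / (1 + t), 50000)  # vanishing: exact but non-uniform

err_const = max(abs(x_const[i] - M_star) for i in x_const)
err_vanish = max(abs(x_vanish[i] - M_star) for i in x_vanish)
assert err_vanish < err_const  # vanishing signals leave a smaller residual error
```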
We proved all the properties of the proposed solution under the assumption that the communication structure and the parameters remain constant during the execution. Although uniform global asymptotic stability already guarantees good behavior for ``slowly varying'' structures (as also shown by the numerical simulations), additional work is needed to extend the analysis to handle time-varying networks with communication delays and noise. This extension, in turn, calls for a stochastic framework in which the aleatory nature of those phenomena is fully captured, and is the subject of future research. \section{Proof of Theorem \ref{thm.main}} \label{sec.proof} \subsection{Proof of Claim 1}\label{sec.proof.1} In this subsection we prove Claim 1. In particular, we show that if the family $(h_i)_{i\in\mathcal N}$ is sufficiently exciting from some $t_0\in\mathbb N$, then there exists $t^\star=t^\star(t_0)>t_0$ such that, for each $i\in\mathcal N$, $x_i^t\ge {\rm M}^\star$ holds for all $t\ge t^\star$ and, for each $i\in I^\star$, $x_i^t= {\rm M}^\star$ holds for all $t\ge t^\star$. Define the function $\underline{i}:\mathbb R^N\to\mathcal N$, $x\mapsto \underline{i}(x):= \argmin_{i\in\mathcal N} x_i$. Then, $x_j\ge x_{\underline{i}(x)}$ holds for all $j\in\mathcal N$. Moreover, $h_i^t\ge 0$ and \eqref{inq.ki_1} imply ${\rm e}^{h_i^t}-\#([i]\setminus i)k_i \ge 0$ for all $i\in\mathcal N$.
Since $\projOp{[\mu_i,{\rm M}_i]}$ is increasing, we have \begin{equation}\label{pf.inq.xplus_1} \begin{aligned} x_i^{t+1} &= \proj{[\mu_i,{\rm M}_i]}{\left({\rm e}^{h_i^t}-\#([i]\setminus i)k_i\right) x_i^t + k_i\sum_{j\in[i]\setminus i} x_j^t }\\ &\ge \projOp{[\mu_i,{\rm M}_i]}\bigg[\left({\rm e}^{h_i^t}-\#([i]\setminus i) k_i\right) x_{\underline i(x^t)}^t \\&\hspace{4em}+ \#([i]\setminus i) k_i x_{\underline i(x^t)}^t \bigg]\\ &= \proj{[\mu_i,{\rm M}_i]}{{\rm e}^{h_i^t}x_{\underline i(x^t)}^t } \\ &= \max\left\{ \mu_i, \min\left\{ {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\, {\rm M}_i \right\} \right\}\\ &\ge \min\left\{ {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\, {\rm M}_i \right\} \ge \min\left\{ {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\, {\rm M}^\star \right\} \end{aligned} \end{equation} for all $t\ge t_0$ and all $i\in\mathcal N$. First, notice that, if for some $\bar t\in\mathbb N$, $x_{\underline i(x^{\bar t})}^{\bar t}\ge {\rm M}^\star$, then \eqref{pf.inq.xplus_1} implies $x_{\underline i(x^{\bar t+1})}^{\bar t+1}\ge {\rm M}^\star$, so that by induction it is possible to conclude that $x_i^t \ge {\rm M}^\star$ holds for all $t\ge\bar t$. Namely, the claim holds with $t^\star=\bar t$. It thus suffices to show that such a $\bar t$ exists. In doing so, we proceed by contradiction. We first assume that \begin{equation}\label{pf.e.xontr} x_{\underline i(x^{t})}^{t}< {\rm M}^\star, \qquad \forall t\ge t_0. \end{equation} Then, we show that, if the signals $h_i$ are sufficiently exciting from $t_0$ (in the sense of Definition \ref{d.SE}), then \eqref{pf.e.xontr} leads to a contradiction, thereby proving the claim. Thus, assume that \eqref{pf.e.xontr} holds. Then, since $h_i^t\ge 0$ for all $i\in\mathcal N$, \eqref{pf.inq.xplus_1} yields \begin{equation}\label{pf.in.xplus_2} x_i^{t+s}\ge {\rm e}^{h_i^t}x_{\underline i(x^t)}^t,\qquad\forall t \ge t_0, \ s\ge 1.
\end{equation} Suppose that the signals $h_i$ are sufficiently exciting from $t_0$, for some parameters $\underline h(t_0)$ and $\Delta(t_0)$. Then, for each $i\in\mathcal N$, there exists $s_i\in\{t_0+1,\,\dots,\,t_0+\Delta(t_0)\}$ such that $h_i^{s_i}\ge \underline h(t_0)$. In view of \eqref{pf.in.xplus_2}, this yields \begin{equation*} x^{t_0+1+\Delta(t_0)}_i \ge {\rm e}^{\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1},\qquad\forall i\in\mathcal N , \end{equation*} and thus, in particular, \begin{equation*} x^{t_0+1+\Delta(t_0)}_{\underline i\big(x^{t_0+1+\Delta(t_0)}\big)} \ge {\rm e}^{\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1} . \end{equation*} In the same way, in view of sufficiency of excitation of the signals $h_i$, for each $i\in\mathcal N$, there exists $s_i\in\{t_0+1+\Delta(t_0),\,\dots,\,t_0+2\Delta(t_0)\}$ such that $h_i^{s_i}\ge \underline h(t_0)$. Then, in view of \eqref{pf.in.xplus_2}, one has \begin{align*} x^{t_0+1+2\Delta(t_0)}_{\underline i\big(x^{t_0+1+2\Delta(t_0)}\big)} \ge {\rm e}^{\underline h(t_0)}\, x^{t_0+1+\Delta(t_0)}_{\underline i\big(x^{t_0+1+\Delta(t_0)}\big)} \ge {\rm e}^{2\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1}. \end{align*} By repeating the same arguments, it is thus possible to conclude that, for each $m\in\mathbb N$ satisfying \eqref{e.SE.m}, one has \begin{equation}\label{pf.in.xplus_3} x^{t_0+1+m\Delta(t_0)}_{\underline i\big(x^{t_0+1+m\Delta(t_0)}\big)} \ge {\rm e}^{m\underline h(t_0)}\, x_{\underline i(x^{t_0+1})}^{t_0+1} \ge {\rm e}^{m\underline h(t_0)} \und{\lb} , \end{equation} in which we used the fact that, by definition of $\projOp{[\mu_i,{\rm M}_i]}$, $x_i^t\ge \mu_i\ge\und{\lb}$ for all $i\in\mathcal N$ and all $t\ge t_0+1$. The latter relation holds, in particular, for \begin{equation*} m^\star(t_0) = \dfrac{1}{\underline h(t_0)}\log\left( \dfrac{{\rm M}^\star}{\und{\lb}} \right) .
\end{equation*} Then, with $\bar t:= t_0+1+m^\star(t_0)\Delta(t_0)$, from \eqref{pf.in.xplus_3} we obtain \begin{align*} x_i^{\bar t} \ge x^{\bar t}_{\underline i(x^{\bar t})} \ge {\rm e}^{m^\star(t_0)\underline h(t_0)} \und{\lb} = {\rm M}^\star ,\qquad\forall i\in\mathcal N , \end{align*} which contradicts \eqref{pf.e.xontr} and, thus, proves that $x_i^t \ge {\rm M}^\star$ holds for all $i\in\mathcal N$ and all $t\ge t^\star:=\bar t$. Finally, for all $i\in I^\star$ we have $x_i^t\in[\mu_i,\,{\rm M}_i]$ with ${\rm M}_i={\rm M}^\star$, so that $x_i^t\le {\rm M}^\star$ for all $t\ge t_0+1$; together with the bound $x_i^t\ge {\rm M}^\star$ above, this implies $x_i^t={\rm M}^\star$ for all $i\in I^\star$ and $t\ge t^\star$. \subsection{Proof of Claim 2}\label{sec.proof.2} Since by Claim 1 each $x_i$ satisfies $x_i^t\ge {\rm M}^\star$ for all $t\ge t^\star$, in view of Assumption \ref{ass.M_eps} each $x_i$ also satisfies $x_i^t\ge \mu_i$ for all $t\ge t^\star$. This, in turn, allows us to write \begin{equation*} x_i^{t+1} = \min\left\{ {\rm M}_i,\ {\rm e}^{h_i^t} x_i^t + k_i \sum_{j\in[i]} \big(x_j^t-x_i^t\big) \right\} \end{equation*} for all $i\in\mathcal N$ and all $t\ge t^\star$, which implies both \begin{equation}\label{pf.e.xi_le_Mi} x_i^{t} \le {\rm M}_i \end{equation} and \begin{equation}\label{pf.s.xi_le} x_i^{t+1} \le {\rm e}^{h_i^t} x_i^t + k_i \sum_{j\in[i]} \big(x_j^t-x_i^t\big) \end{equation} for all $i\in\mathcal N$ and all $t\ge t^\star$. From \eqref{pf.e.xi_le_Mi} we also obtain \begin{equation}\label{pf.ineq.limsup_1} \limsup_{t\to\infty} |x_i^t| \le {\rm M}_i <\infty,\qquad \forall i\in\mathcal N. \end{equation} In the following we rely on the forthcoming lemma, whose proof is postponed to \ref{apd.Lemma_limsup}. \begin{lemma}\label{lem.limsup} With $n\in\mathbb N$, let $x,\,y:\mathbb N\to \mathbb R^n$.
Suppose that $y$ is {bounded} and that, for some $t_0\in\mathbb N$ and some $\lambda:\mathbb N\to\mathbb R_{\ge 0}$ fulfilling $\lambda^t \le \nu \in [0,1)$ for all $t\ge t_0$, $x$ and $y$ satisfy \begin{equation}\label{e.lem.xy} x^{t+1} \le \lambda^t x^t + y^t \end{equation} for all $t\ge t_0$. Then \begin{equation}\label{pf.s.ls_xi_le} \limsup_{t\to\infty}|x^t|\le \dfrac{1}{1-\limsup_{t\to\infty}\lambda^t}\limsup_{t\to\infty}|y^t|. \end{equation} \end{lemma} With $I^\star$ defined in \eqref{d.Isr}, let $n^\star$ be the least integer such that $[I^\star]^{n^\star}=\mathcal N$ (which is finite in view of Assumption~\ref{ass.connected}). The case in which $n^\star=0$ (i.e. $I^\star=\mathcal N$) directly follows from Claim 1. Hence, we consider $n^\star>0$. Assume that, for some $m\in\{0,\dots, n^\star-1 \}$, there exist $\alpha_m\in[0,1)$ and $\beta_m>0$ such that\footnote{Here we let $[I^\star]^{-1}:=\emptyset$.} \begin{equation}\label{pf.in.induct0} \begin{aligned} &\max_{i\in [I^\star]^{m}_{m-1}}\limsup_{t\to\infty}|x_i^t| \le \alpha_m \max_{j\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_j^t| + \beta_m {\rm M}^\star . \end{aligned} \end{equation} We will now prove that, if this is the case, then a similar property holds also for $m+1$. First notice that, for each $i\in [I^\star]^{m+1}_{m}$, every $j\in[i]$ belongs to exactly one of the sets $[I^\star]^{m+2}_{m+1}$, $[I^\star]^{m+1}_{m}$, and $[I^\star]^{m}_{m-1}$. Hence, in view of \eqref{pf.s.xi_le}, we can write \begin{equation}\label{pf.in.xi_1} \begin{aligned} x_i^{t+1} & \le\big({\rm e}^{h_i^t} - k_i \#([i]\setminus i)\big)x_i^t + k_i\sum_{j\in [i]\cap [I^\star]^{m} } x_j^t \\&\qquad + k_i\sum_{j\in ([i]\setminus i)\cap [I^\star]^{m+1}_{m}} x_j^t + k_i\sum_{j\in [i]\cap [I^\star]^{m+2}_{m+1}} x_j^t \end{aligned} \end{equation} for all $i\in[I^\star]^{m+1}_{m}$ and all $t\ge t^\star$, in which we used the fact that $[i]\cap [I^\star]^{m}_{m-1} = [i]\cap [I^\star]^{m}$ for all $i\in[I^\star]^{m+1}_{m}$.
If \eqref{inq.ki_1} holds, then $1+k_i \#([i]\setminus i)>1$. With $\nu_1>0$ sufficiently small so that $ \log(1+k_i \#([i]\setminus i))-2\nu_1> 0$, let \begin{equation*} \bar h_{i,1} := \log(1+k_i \#([i]\setminus i)) -2\nu_1. \end{equation*} If $\limsup_{t\to\infty} h_i^t \le \bar h_{i,1}$ for all $i\in\mathcal N$, then there exists $T^\star>t^\star$ such that \begin{equation}\label{pf.in.barh} h_i^t \le \bar h_{i,1}+\nu_1 = \log(1+k_i \#([i]\setminus i)) - \nu_1 \end{equation} for all $t\ge T^\star$ and all $i\in\mathcal N$. Thus, \eqref{inq.ki_1} and \eqref{pf.in.barh} imply \begin{equation*} 0\le {\rm e}^{h_i^t} - k_i \#([i]\setminus i) \le {\rm e}^{\bar h_{i,1}+\nu_1}- k_i \#([i]\setminus i) < 1, \end{equation*} for all $t\ge T^\star$ and all $i\in\mathcal N$, so that \eqref{pf.ineq.limsup_1}, \eqref{pf.in.xi_1} and Lemma~\ref{lem.limsup} imply \begin{equation}\label{pf.in.xi_2} \begin{aligned} \limsup_{t\to\infty} |x_i^{t}| & \le \gamma_{i} \sum_{j\in [i]\cap [I^\star]^{m}} \limsup_{t\to\infty}|x_j^t| \\ &\qquad + \gamma_{i}\sum_{j\in ([i]\setminus i)\cap [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_j^t| \\&\qquad +\gamma_{i}\sum_{j\in [i]\cap [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t| \end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$, in which we let \begin{equation}\label{pf.d.gammai} \gamma_i := \dfrac{k_i}{1- \limsup_{t\to\infty}\, \big({\rm e}^{h_i^t} - k_i \#([i]\setminus i)\big)} \end{equation} which exists and is finite in view of Lemma~\ref{lem.limsup}. In view of \eqref{pf.in.induct0}, equation \eqref{pf.in.xi_2} implies \begin{equation}\label{pf.in.xi_3} \begin{aligned} \limsup_{t\to\infty} |x_i^{t}| & \le \big( c_{i,1} \alpha_m +c_{i,2}\big) \max_{j\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_j^t|\\ &\qquad+ c_{i,3} \max_{j\in [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t|+ c_{i,1} \beta_m {\rm M}^\star.
\end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$, in which we let for convenience \begin{equation}\label{pf.d.ci} \begin{aligned} c_{i,1} &:= \gamma_i \#\left( [i] \cap [I^\star]^{m}\right) \\ c_{i,2} &:= \gamma_i \#\left(([i]\setminus i)\cap [I^\star]^{m+1}_{m}\right) \\ c_{i,3} &:= \gamma_i \#\left([i]\cap [I^\star]^{m+2}_{m+1}\right) . \end{aligned} \end{equation} With $\nu_2>0$ sufficiently small so that $k_i(1-\alpha_m)- \nu_2>0$ for all $i\in\mathcal N$ (recall that $\alpha_m<1$ by assumption), define \begin{equation*} \bar h_i := \min\Big\{ \bar h_{i,1},\ \log\big(1+k_i(1-\alpha_m)- \nu_2\big) \Big\}. \end{equation*} If \begin{equation}\label{pf.in.ls_hi} \limsup_{t\to\infty} h_i^t \le \bar h_i \end{equation} for all $i\in [I^\star]^{m+1}_m$, then, since $\#([i]\cap [I^\star]^{m} )\ge 1$, it holds that \begin{equation}\label{pf.in.hsr_0} \begin{aligned} 1-{\rm e}^{\limsup_{t\to\infty} h_i^t} & \ge 1-{\rm e}^{\bar h_i} \ge -k_i(1-\alpha_m)+\nu_2 \\ &\ge - k_i(1-\alpha_m)\#([i]\cap [I^\star]^{m}) + \nu_2 \end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$. Since, for all $i\in [I^\star]^{m+1}_m$, \begin{align*} &\#\big(([i]\setminus i)\cap [I^\star]^{m+1}_{m}\big)\\ &= \#\left( [i]\setminus i\right) -\#\left( [i] \cap [I^\star]^{m}\right) - \#\left([i]\cap [I^\star]^{m+2}_{m+1}\right) \\ &\le \#\left( [i]\setminus i\right) -\#\left( [i] \cap [I^\star]^{m}\right), \end{align*} we conclude that \begin{equation}\label{pf.in.cis} \begin{aligned} c_{i,1}& \alpha_m + c_{i,2} \\ &\le \dfrac{ k_i(\alpha_m-1) \#\left( [i] \cap [I^\star]^{m}\right)+k_i \#\left( [i]\setminus i\right)}{1-{\rm e}^{\bar h_i}+ k_i \#([i]\setminus i) }\\ &\le \dfrac{(\alpha_m-1) k_i \#\left( [i] \cap [I^\star]^{m}\right) +k_i \#([i]\setminus i)}{(\alpha_m-1) k_i \#\left( [i] \cap [I^\star]^{m}\right) +k_i \#([i]\setminus i)+\nu_2 }\\&<1 \end{aligned} \end{equation} for all $i\in [I^\star]^{m+1}_m$.
Now, since \eqref{pf.in.xi_3} holds for each $i\in [I^\star]^{m+1}_m$, it in particular holds for $\bar i$ satisfying \begin{equation}\label{pf.d.bari} \bar i\in \argmax_{i\in [I^\star]^{m+1}_m} \limsup_{t\to\infty}|x_i^t|, \end{equation} so that \eqref{pf.in.xi_3} implies \begin{equation*} \begin{aligned} \max_{i\in [I^\star]^{m+1}_{m}}& \limsup_{t\to\infty}|x_i^t| \le ( c_{\bar i,1} \alpha_m +c_{\bar i,2} ) \max_{i\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_i^t|\\ &\qquad\qquad+ c_{\bar i,3} \max_{j\in [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t| + c_{\bar i,1}\beta_m {\rm M}^\star \end{aligned} \end{equation*} which, in view of \eqref{pf.in.cis}, yields \begin{equation}\label{pf.in.induct1} \begin{aligned} \max_{i\in [I^\star]^{m+1}_{m}} \limsup_{t\to\infty}|x_i^t| &\le \alpha_{m+1} \max_{j\in [I^\star]^{m+2}_{m+1}} \limsup_{t\to\infty}|x_j^t| \\&\qquad + \beta_{m+1} {\rm M}^\star \end{aligned} \end{equation} with \begin{equation}\label{pf.d.a_b} \begin{aligned} \alpha_{m+1} &= \dfrac{c_{\bar i,3}}{1-\big( c_{\bar i,1} \alpha_m +c_{\bar i,2}\big)}, \\ \beta_{m+1} &=\dfrac{c_{\bar i,1}}{1-\big( c_{\bar i,1} \alpha_m +c_{\bar i,2}\big)} \beta_m . \end{aligned} \end{equation} Furthermore, since $\limsup_{t\to\infty} h_i^t\le \bar h_i$, in view of \eqref{pf.in.hsr_0}, $\alpha_{m+1}$ satisfies \begin{align*} \alpha_{m+1} &\le \dfrac{k_{\bar i} \# \big( [{\bar i}]\cap [I^\star]^{m+2}_{m+1} \big) }{ k_{\bar i}\#([{\bar i}]\setminus {\bar i})- k_{\bar i} \#(([{\bar i}]\setminus {\bar i})\cap[I^\star]^{m+1})+\nu_2 }\\ &\le \dfrac{k_{\bar i} \# \big( [{\bar i}]\cap [I^\star]^{m+2}_{m+1} \big) }{ k_{\bar i} \# \big( [{\bar i}]\cap [I^\star]^{m+2}_{m+1} \big)+\nu_2 } <1. \end{align*} We have thus shown that, if \eqref{pf.in.induct0} holds for some $m\in\{0,\dots, n^\star-1\}$ with $\alpha_m<1$ and $\beta_m\ge 0$, then \eqref{pf.in.induct1} also holds for $m+1$ with $\alpha_{m+1}<1$ and $\beta_{m+1}\ge 0$ given above.
Since, by Claim 1, Equation \eqref{pf.in.induct0} trivially holds for $m=0$ with $\beta_0=1$ and $\alpha_0=0$, we conclude by induction that, if \begin{equation}\label{pf.d.barh} \limsup_{t\to\infty} h_i^t \le \bar h:= \min_{i\in\mathcal N} \bar h_i,\qquad \forall i\in\mathcal N, \end{equation} then Equation \eqref{pf.in.induct0} holds for each $m\in\{0,\dots,n^\star\}$. Now, for $m=n^\star$, we have $[I^\star]^{m+1}\setminus [I^\star]^m = \emptyset$, so that \eqref{pf.in.induct0} yields \begin{equation*} \limsup_{t\to\infty} x_i^t \le \beta_{n^\star}{\rm M}^\star , \qquad \forall i\in [I^\star]^{n^\star}_{n^\star-1}. \end{equation*} Thus, iterating \eqref{pf.in.induct0} backwards and using \eqref{pf.e.xi_le_Mi} yields \begin{equation}\label{ps.LS_xi} \limsup_{t\to\infty}x_i^t \le \min\Big\{ {\rm M}_i,\ (1+\varepsilon_i){\rm M}^\star \Big\} \end{equation} in which \begin{equation*} \varepsilon_i = 0,\qquad \forall i\in I^\star \end{equation*} and \begin{equation}\label{pf.d.vep_i} \varepsilon_i = \sum_{\ell=0}^{n^\star-m} \left( \prod_{k=\ell+1}^{n^\star-m} \alpha_{n^\star-k} \right) \beta_{n^\star-\ell} -1, \end{equation} for all $i\in [I^\star]^{m}_{m-1} $ and all $m=1,\dots,n^\star$. Moreover, \eqref{pf.d.vep_i} directly implies that the quantities $\varepsilon_i$ also satisfy \begin{equation}\label{pf.d.vep_i_2} \max_{i\in[I^\star]^m_{m-1}} \varepsilon_i = \alpha_m \left(1+\max_{i\in[I^\star]^{m+1}_{m}}\varepsilon_i\right) + \beta_m -1 \end{equation} for all $m=1,\dots,n^\star$. We now prove that $\varepsilon_i$ in \eqref{ps.LS_xi}-\eqref{pf.d.vep_i} can be made arbitrarily small by reducing $\limsup_{t\to\infty} h_i^t$ accordingly for each $i\in\mathcal N$. For convenience, let \begin{equation}\label{pf.d.upsilon} \upsilon_i := \limsup_{t\to\infty} h_i^t \in [0,\bar h_i].
\end{equation} Then, the quantities $\gamma_i$, defined in \eqref{pf.d.gammai}, satisfy \begin{equation*} \gamma_i(\upsilon_i) = \dfrac{k_i}{1-{\rm e}^{\upsilon_i} + k_i\#([i]\setminus i)}. \end{equation*} Thus, $\gamma_i$ is continuous on $[0,\bar h_i]$, and \begin{equation*} \lim_{\upsilon_i\to 0} \gamma_i(\upsilon_i) = \dfrac{1}{\#([i]\setminus i)}. \end{equation*} In view of the definitions \eqref{pf.d.ci}, also the quantities $\alpha_m$ and $\beta_m$, as defined in \eqref{pf.d.a_b}, depend on $\upsilon_{\bar i}$ through $\gamma_{\bar i}$, where $\bar i$ satisfies \eqref{pf.d.bari}. We now prove by induction that, letting $\upsilon:=(\upsilon_1,\dots,\upsilon_N)$, the following holds: \begin{equation}\label{pf.e.lim_ab_1} \lim_{\upsilon\to 0} \alpha_m(\upsilon)+\beta_m(\upsilon) = 1,\qquad\forall m=0,\dots,n^\star. \end{equation} First notice that \eqref{pf.e.lim_ab_1} trivially holds for $m=0$, as indeed $\alpha_m=0$ and $\beta_m=1$ regardless of the value of $\upsilon$. It thus suffices to show that if \eqref{pf.e.lim_ab_1} holds for a given $m\in\{0,\dots,n^\star-1\}$, then the same relation also holds for $m+1$. To this end, assume that \eqref{pf.e.lim_ab_1} holds for a given $m\in\{0,\dots,n^\star-1\}$. Then, we can write $\lim_{\upsilon\to 0} \beta_m(\upsilon)=1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)$.
Thus, by letting for convenience $\rho_1:=\#([\bar i]\cap[I^\star]^m)$, $\rho_2:=\#(([\bar i]\setminus \bar i)\cap ([I^\star]^{m+1}_m))$, $\rho_3:=\#([\bar i]\cap([I^\star]^{m+2}_{m+1}))$, and noting that $\#([\bar i]\setminus \bar i)-\rho_2 = \rho_1+\rho_3$, we obtain \begin{equation*} \begin{aligned} &\lim_{\upsilon\to 0} \alpha_{m+1}(\upsilon)+ \beta_{m+1}(\upsilon) \\&\quad = \dfrac{\rho_3 + \left(1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)\right) \rho_1}{\#([\bar i]\setminus \bar i) - \lim_{\upsilon\to 0}\alpha_m(\upsilon)\rho_1 - \rho_2}\\&\quad = \dfrac{\rho_3 + \left(1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)\right) \rho_1}{\rho_3 + \left(1-\lim_{\upsilon\to 0}\alpha_m(\upsilon)\right) \rho_1}=1. \end{aligned} \end{equation*} Thus, by induction, \eqref{pf.e.lim_ab_1} holds for all $m\in\{0,\dots,n^\star\}$. Since $c_{i,3} = 0$ for every $i\in[I^\star]^{n^\star}_{n^\star-1}$ (indeed, $[I^\star]^{n^\star+1}_{n^\star}=\emptyset$), we have $\alpha_{n^\star}=0$. Thus, \begin{equation*} \lim_{\upsilon\to 0}\beta_{n^\star}(\upsilon)=1. \end{equation*} In view of \eqref{pf.d.vep_i}, this implies \begin{equation*} \lim_{\upsilon\to 0}\max_{i\in[I^\star]^{n^\star}_{n^\star-1}} \varepsilon_{i}(\upsilon) = 0. \end{equation*} In view of \eqref{pf.d.vep_i_2}, $\lim_{\upsilon\to 0}\max_{i\in[I^\star]^{m+1}_{m}}\varepsilon_i(\upsilon)=0$ implies \begin{align*} \lim_{\upsilon\to 0} \max_{i\in[I^\star]^{m}_{m-1}}\varepsilon_{i}(\upsilon) = \lim_{\upsilon\to 0} (\alpha_{m}(\upsilon)+\beta_m(\upsilon)) -1 = 0, \end{align*} so that, by induction, we conclude that \begin{equation*} \lim_{\upsilon\to 0} \max_{i\in[I^\star]^{m}_{m-1}}\varepsilon_i(\upsilon) = 0,\quad \forall m\in\{0,\dots,n^\star\}, \end{equation*} i.e. \begin{equation}\label{pf.eq.lim_vep} \lim_{\upsilon\to 0} \varepsilon_i(\upsilon) = 0,\quad \forall i\in\mathcal N.
\end{equation} The latter equation thus implies that, given any $\epsilon>0$, there exists $\delta'(\epsilon)> 0$ such that $|\upsilon|\le \delta'(\epsilon)$ implies ${\rm M}^\star \varepsilon_i\le \epsilon$ for all $i\in\mathcal N$. Therefore, if \begin{equation}\label{pf.inq.ls_hi} \limsup_{t\to\infty}h_i^t \le \delta(\epsilon) := \min\left\{\bar h,\,\dfrac{\delta'(\epsilon)}{N}\right\},\qquad\forall i\in\mathcal N \end{equation} then $|\upsilon|\le \delta'(\epsilon)$, which implies ${\rm M}^\star\varepsilon_i\le \epsilon$. In turn, in view of \eqref{ps.LS_xi}, this implies \begin{equation}\label{ps.LS_xi2} \limsup_{t\to\infty}x_i^t \le \min\Big\{ {\rm M}_i,\ {\rm M}^\star +\epsilon \Big\}. \end{equation} Claim 2 thus follows from \eqref{ps.LS_xi2} and by noticing that Claim~1 implies $\limsup_{t\to\infty} x_i^t\ge {\rm M}^\star$. \subsection{Proof of Claim 3}\label{sec.proof.3} The third claim of the theorem, i.e., that uniformity of excitation (in the sense of Definition \ref{d.PE}) of $(h_i)_{i\in\mathcal N}$ implies uniform attractiveness of $\mathcal A_\epsilon:= \prod_{i\in\mathcal N} \big[{\rm M}^\star,\,\min\{{\rm M}^\star+\epsilon,\,{\rm M}_i\}\big]$, follows directly from the fact that, if the family $(h_i)_{i\in\mathcal N}$ is uniformly exciting, then in the above analysis $t^\star$ does not depend on $t_0$ and, therefore, the convergence \eqref{ps.LS_xi2} is uniform in the initial time. \subsection{Proof of Claim 4} In this subsection we prove the fourth claim of the theorem. With $(\tau_i)_{i\in\mathcal N}\in\mathbb N^N$ arbitrary, let $F_i\in\mathbb R^{\tau_i\times \tau_i}$ and $C_i\in\mathbb R^{1\times \tau_i}$ denote the matrices \begin{align*} F_i &:= \begin{bmatrix} 0_{(\tau_i-1)\times 1 } & I_{(\tau_i-1)\times (\tau_i-1)}\\ 1 & 0_{1\times(\tau_i-1)} \end{bmatrix}, & C_i&:= \begin{bmatrix} 1 & 0_{1\times(\tau_i-1)} \end{bmatrix}.
\end{align*} Then, each $\tau_i$-periodic signal $h_i$ satisfies \begin{equation}\label{pf.s.xxi} \begin{aligned} \xi^{t+1}_i &= F_i\xi^t_i, & h^t_i &= C_i \xi^t_i \end{aligned} \end{equation} for a suitable initial condition $\xi^{t_0}_i\in\mathbb R^{\tau_i}$. Moreover, if all the signals $h_i$ are non-zero, then, by Lemma \ref{lem.PE}, $(h_i)_{i\in\mathcal N}$ is uniformly exciting in the sense of Definition \ref{d.PE} for some $\underline h>0$. For a fixed $\epsilon>0$, let $\delta(\epsilon)$ be defined as above in~\eqref{pf.inq.ls_hi}, and let \begin{align*} \Xi_i :=\Big\{ \xi_i\in\mathbb R^{\tau_i} \,\mid\,\, & \forall j\in\{1,\dots,\tau_i\},\, \xi_{i,j}\in[0,\delta(\epsilon)],\text{ and }\\ & \exists j\in\{1,\dots, \tau_i\},\, \xi_{i,j}\ge \underline h\Big\}, \end{align*} where $\xi_{i,j}$ denotes the $j$-th component of $\xi_i$. Then, $\Xi_i$ is compact and invariant for \eqref{pf.s.xxi}. We now consider the interconnection between \eqref{pf.s.xxi} and the update laws \eqref{s.xi} for all $i\in\mathcal N$, with the dynamics restricted to the invariant set $Z:=\Xi\times\mathbb R^N$, where $\Xi:=\prod_{i\in\mathcal N} \Xi_i$. We compactly rewrite this interconnection as \begin{equation}\label{pf.s.z} z^{t+1} = \phi(z^t),\qquad z^t\in Z \end{equation} with $\phi$ suitably defined and $z^t:=(\xi^t,x^t)\in\mathbb R^r\times\mathbb R^N$, where $\xi:=(\xi_i)_{i\in\mathcal N}$ and $r:=\sum_{i\in\mathcal N}\tau_i$. Clearly, for every solution $x_a$ to \eqref{s.xi} starting at a given $t_0\in\mathbb N$ and subject to the signals $(h_i)_{i\in\mathcal N}$, there is a solution $z_b=(\xi_b,x_b)$ to \eqref{pf.s.z} starting at $0$ and such that $x_b(t)=x_a(t_0+t)$ for all $t\in\mathbb N$.
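The pair $(F_i,C_i)$ is a cyclic-shift realization: $F_i$ rotates the state vector and $C_i$ reads off its first component, so the output replays the initial condition with period $\tau_i$. A small Python illustration of this fact (illustrative only, with arbitrary numerical values):

```python
import numpy as np

def cyclic_generator(tau):
    # F shifts the state cyclically; C reads the first component,
    # so h^t = C F^t xi^0 replays xi^0 with period tau.
    F = np.zeros((tau, tau))
    F[:-1, 1:] = np.eye(tau - 1)   # first tau-1 rows: shift up
    F[-1, 0] = 1.0                 # last row: wrap around
    C = np.zeros((1, tau))
    C[0, 0] = 1.0
    return F, C

F, C = cyclic_generator(3)
xi = np.array([0.3, 0.1, 0.7])
h = []
for _ in range(6):
    h.append(float(C @ xi))
    xi = F @ xi
# h replays [0.3, 0.1, 0.7] with period 3
```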
For each compact $K\subset\Xi\times\mathbb R^N$, let $\mathcal S(K)$ denote the set of solutions to \eqref{pf.s.z} starting at $0$ from $K$ and, for each $t\in\mathbb N$, define the reachable set from $K$ as $\mathcal R^t(K) := \big\{ (\xi^s,x^s)\in\Xi\times \mathbb R^{N}\,\mid\, (\xi,x)\in\mathcal S(K),\, s\ge t \big\}$. In view of the above analysis, and since $\Xi$ is invariant for~\eqref{pf.s.z}, it follows that $\mathcal R^{t}(K)$ is included in $\Xi\times\mathbb R^N$ and bounded uniformly in $K$ and $t$ for each $t\ge 1$. Thus, the limit set $\Omega(K) := \bigcap_{t\in\mathbb N} \closure{\mathcal R^t(K)}$ (where $\closure{\mathcal R^t(K)}$ denotes the closure of $\mathcal R^t(K)$) is compact, non-empty, and included in $\Xi\times\mathbb R^N$. Moreover, since $\phi$ is continuous by construction, $\Omega(K)$ is also forward invariant and uniformly globally attractive for \eqref{pf.s.z} from $K$ (see e.g. \cite[Proposition 6.26]{Goebel2012}), and it is the smallest set having the above properties. Furthermore, we notice that, by definition of the update laws \eqref{s.xi}, $x_i^t\in[\mu_{i},{\rm M}_i]$ for all $t\ge t_0$ regardless of the initial conditions and of $t_0$, so that we conclude that $\Omega(K_1)=\Omega(K_2)$ for all supersets $K_1,K_2$ of $K^\star:=\prod_{i\in\mathcal N} [\mu_{i},{\rm M}_i]$. In the following we let $\Omega:=\Omega(K^\star)$. As $(h_i)_{i\in\mathcal N}$ is uniformly exciting, by Claim 3 the convergence \eqref{ps.LS_xi2} holds uniformly in the initial time. By the properties of $\Omega$, this implies that $\Omega\subset \Xi\times\mathcal A_\epsilon$, and the projection $\mathcal A_\epsilon^u := \big\{ x\in\mathbb R^N\,\mid\, (\xi,x)\in \Omega \big \}$ satisfies $\mathcal A_\epsilon^u\subset \mathcal A_\epsilon$. Therefore, it remains to show that $\mathcal A_\epsilon^u$ is stable for $x$, i.e.
that for each $\ell>0$, there exists $b(\ell)>0$ such that every solution to \eqref{pf.s.z} satisfying $\setdist{x^0}{\mathcal A_\epsilon^u}\le b(\ell)$ also satisfies $\setdist{x^t}{\mathcal A_\epsilon^u}\le \ell$ for all $t\in\mathbb N$. This, in turn, can be proved by arguments similar to those of \cite[Proposition 7.5]{Goebel2012}. In particular, suppose that the above stability property fails for some $\ell>0$. Then, for each $m\in\mathbb N$ there exist $\tau_m\in\mathbb N$ and a solution $z_m=(\xi_m,x_m)\in\mathcal S(Z)$ such that $\setdist{x^{0}_m}{\mathcal A_\epsilon^u}\le 2^{-m}$ and $\setdist{x^{\tau_m}_m}{\mathcal A_\epsilon^u}>\ell$. This, in turn, implies \begin{equation}\label{pf.Omg} \setdist{z^{\tau_m}_m}{\Omega}>\ell. \end{equation} Since $X_0:=\{x\in\mathbb R^N\,\mid\, \setdist{x}{\mathcal A_\epsilon^u}\le 1\}$ is compact, $Z_0:=\Xi\times X_0$ is compact. Thus, since $z^0_m\in Z_0$ for all $m\in\mathbb N$, by uniform attractiveness of $\Omega$, there exists $\bar\tau=\bar\tau(\ell)\in\mathbb N$ such that $\tau_m\le \bar\tau$ for all $m\in\mathbb N$. We are thus given a sequence $(z_m|_{\le \bar\tau})_{m\in\mathbb N}$ of uniformly bounded signals $z_m|_{\le \bar\tau}$, obtained by restricting the solutions $z_m$ to $\{0,\dots, \bar\tau\}$, which satisfies $\lim_{m\to\infty} \setdist{z^0_m}{\Omega} =0$. Since $\phi$ is continuous, $Z$ is closed, and $\Omega$ is forward invariant, in view of \cite[Theorem 6.8]{Goebel2012} we can extract a subsequence of $(z_m|_{\le \bar\tau})_{m\in\mathbb N}$ (which we do not re-index) that satisfies $\lim_{m\to\infty} \setdist{z^t_m}{\Omega} =0$ for all $t\in\{0,\dots,\bar\tau\}$. This, however, contradicts \eqref{pf.Omg} and proves the claim. \subsection{Proof of Claim 5} The last claim of the theorem, i.e.
that if $(h_i)_{i\in\mathcal N}$ is sufficiently exciting according to Definition \ref{d.SE} and $\lim_{t\to\infty} h_i^t=0$, then $\lim_{t\to\infty} x_i^t = {\rm M}^\star$ for all $i\in\mathcal N$, follows directly from \eqref{pf.d.upsilon}-\eqref{pf.eq.lim_vep}. \ignorespaces~\hfill$\blacksquare$
\section*{Abstract} We develop a novel Monte Carlo strategy for the simulation of the Boltzmann-BGK model in which both low-collisional and high-collisional regimes are present. To maintain accuracy in low-collisional regimes and to avoid exploding simulation costs in high-collisional regimes, the presented method uses hybridized particles that exhibit both kinetic and diffusive behaviour, depending on the local collisionality. In this work, we develop such a method that maintains the correct mean, variance, and correlation of the positional increments over multiple time steps of fixed step size for all values of the collisionality, under the condition of spatial homogeneity during the time step. In the low-collisional regime, the method reverts to the standard velocity-jump process. In the high-collisional regime, the method collapses to a standard random walk process. We analyze the error of the presented scheme in the low-collisional regime, for which we obtain the order of convergence in the time step size. We furthermore provide an analysis in the high-collisional regime that demonstrates the asymptotic-preserving property. Keywords: Monte Carlo, asymptotic preserving, Boltzmann-BGK, kinetic-diffusion \section{Introduction} Applications such as rarefied gas modeling~\cite{boyddeschenes2011aerospace}, radiation transport~\cite{fleck1971implMCphotontrans}, and neutral or ion transport in nuclear fusion~\cite{larsen1974neutrontransportbecomesdiff,stangeby2000plasmaboundary} frequently have to cope with large differences in collision rates between simulated regions. In low-collisional regions, a kinetic description is often required for accuracy, whereas this kinetic description becomes computationally intractable in high-collisional regions. In high-collisional regimes, however, a limiting fluid description can become valid, which is cheaper to simulate.
One strategy to handle such situations uses domain decomposition, where part of the domain is described by the kinetic model and part by the fluid description~\cite{boyddeschenes2011aerospace,densmore2007domdecMC}. Another popular method is splitting the density into a kinetic part and a fluid part throughout the domain~\cite{pareschi1999implicitMCrarefied,crouseilles2004hybridgas,dimarco2008hybridIIkin,horsten2018hybrid}. Both types of solutions involve complications in determining the partition of the domain and in coupling the two different parts of the model. A different solution type is formed by the asymptotic-preserving (AP) methods, which avoid couplings by using a single method throughout the domain. Such methods are designed to combine the accuracy of a kinetic simulation in the kinetic regions with the efficiency of a fluid simulation in the fluid regions. The first such methods were developed in the context of radiation transport~\cite{fleck1971implMCphotontrans,fleck1984APpBrownian}, neutron transport~\cite{borgers1992asymptoticdiffusionLTE}, and the Boltzmann equation~\cite{gabetta1997timerelaxation}. Most of these methods are fully deterministic~\cite{bennoune2008apboltmzannNS,boscarino2013apdiflimRK,buet2007apradtransfer, crouseilles2011apmMvlasov,dimarco2012highorderapboltzmann,klar1998asinddifflim, jin1998diffrelaxdiscrvel,lemou2008APTLEdiffmM,naldi2000aphyperbolicdifflim} and generally fully resolve the velocity domain, which is unnecessary in the fluid limit.
Asymptotic-preserving Monte Carlo (APMC) methods~\cite{pareschi2001timerelaxed,gorji2014particleFPrarefied,dimarco2018APdiff,bufferand2013plasmaparallelheat,dicintio2017plasmaparallelheatextrafactor,crestetto2018APdiff} avoid unnecessarily resolving the velocity dimensions and have as additional advantages dimension independence and the capability to easily cope with complex geometries. Here, we develop a new kinetic-diffusion APMC method that combines a standard Monte Carlo (MC) method for the kinetic equation with a random walk MC method for the limiting advection-diffusion equation. In each time step, the MC particles in the new method move according to the kinetic equation until they collide. After a collision, the MC particle moves diffusively according to a random walk process for the remainder of that time step. This diffusive motion is such that the positional increments match the mean and variance of the kinetic process exactly in every time step. Furthermore, the correlation of the motion between subsequent time steps is maintained by the proposed combination of kinetic and advection-diffusion parts. In the low-collisional regime, the kinetic part of the method prevails, so that only very minor parts of the particle paths are covered by the (in that limit invalid) diffusive process. In the high-collisional regime, the kinetic parts make up only a marginal part of the process, resulting in a large gain in efficiency. The presented method is furthermore easy to implement due to its simple structure: kinetic motion, with the remainder of each time step filled by a diffusive step. In Section~\ref{sec:ap_kinetic}, we present the kinetic description of the Boltzmann-BGK model and the corresponding standard MC simulation, which forms the basis of the new algorithm. In Section~\ref{sec:ap_aggregtodiff}, we derive expressions for the mean and variance of a positional increment in the standard MC simulation of the kinetic model.
These expressions are used for the advection-diffusion coefficients in the diffusive motion of the kinetic-diffusion (KD) simulation method. The KD simulation method is presented in Section~\ref{sec:ap_newscheme}. In Sections~\ref{sec:ap_limnul} and~\ref{sec:ap_liminf}, we provide error bounds on the presented simulation schemes in the low-collisional and high-collisional limits, respectively, proving both consistency and the asymptotic-preserving property. Finally, in Section~\ref{sec:ap_num}, a numerical illustration of the low error and the speed-up is presented. \section{The kinetic model and its simulation\label{sec:ap_kinetic}} In Section~\ref{subsec:ap_kinetic_kinetic}, we present the kinetic model in its integro-differential form. In Section~\ref{subsec:ap_kinetic_standardMC}, we introduce the standard Monte Carlo method for the kinetic model, which is based on the particle description of the kinetic model. Then, in Section~\ref{subsec:ap_modelsim_difflimit}, we consider the limiting equation in the diffusive limit. Finally, in Section~\ref{subsec:ap_modelsim_strategy}, we discuss the exploding simulation cost of the standard Monte Carlo method in diffusive regimes and briefly introduce our strategy to mitigate this computational burden, which is based on the limiting behaviour found in Section~\ref{subsec:ap_modelsim_difflimit}. \subsection{Kinetic model\label{subsec:ap_kinetic_kinetic}} The BGK kinetic model describes particles that move according to a velocity-jump process: the particle moves with a constant velocity until it collides, at which point its velocity changes to a new velocity, sampled from the position-dependent distribution $\mathcal{M}(v;x)$.
We write the integro-differential equation for the density $f(x,v,t)$ of such particles as \begin{align} \frac{\partial f(x,v,t)}{\partial t}+\underbrace{\frac{v}{\epsilon}\frac{\partial f(x,v,t)}{\partial x}\vphantom{\left(\int\right)}}_\text{transport}&=\underbrace{\frac{\sigma(x)}{\epsilon^2}\left(\mathcal{M}(v;x)\int f(x,v,t)\text{d}v-f(x,v,t)\right)}_\text{collision}\,,\label{eq:ap_kinetic_integrodiff}\\ f(x,v,0)&=\VAnsource(x,v)\,, \end{align} with initial condition $\VAnsource(x,v)$, scattering rate $\frac{\sigma(x)}{\epsilon^2}$, scaling parameter $\epsilon$, and post-collisional velocity distribution $\mathcal{M}(v;x)$. The scaling parameter $\epsilon$ captures the diffusive scaling~\cite{bardos1993kintofluid} as $\epsilon\rightarrow0$, which describes a situation with large velocities and a very high collision rate. The post-collisional velocity distribution, $\mathcal{M}(v;x)$, is often a Maxwellian and we will denote the mean velocity as $u(x)$ and the temperature as $T(x)$, leading to a probability density function \begin{equation} \mathcal{M}(v;x)=\frac{1}{\sqrt{2\piT(x)}}e^{-\frac{(v-\epsilonu(x))^2}{2T(x)}}\,.\label{eq:ap_kinetic_postcolveldistr_pdf} \end{equation} The post-collisional velocity distribution being normal is not a requirement for the method proposed in this manuscript, but knowledge of the mean plasma velocity $u(x)$ and temperature $T(x)$ is essential. In a general 3D setting, the dimensionality of this equation is seven, prompting the use of Monte Carlo methods. We restrict our discussion to the 1D version of the equation, since both the standard Monte Carlo method and the simulation method we propose can be readily extended to higher dimensions by treating the different dimensions identically and independently. 
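Since $\mathcal{M}(v;x)$ in Equation~\eqref{eq:ap_kinetic_postcolveldistr_pdf} is a normal density with mean $\epsilon u(x)$ and variance $T(x)$, post-collisional velocities can be drawn with a standard normal sampler. A minimal Python sketch; the constant profiles used for $u$ and $T$ are hypothetical placeholders:

```python
import numpy as np

def sample_postcollisional_velocity(rng, x, u, T, eps):
    # M(v; x) is normal with mean eps*u(x) and variance T(x)
    return rng.normal(eps * u(x), np.sqrt(T(x)))

rng = np.random.default_rng(0)
u = lambda x: 1.0    # hypothetical mean-velocity profile
T = lambda x: 2.0    # hypothetical temperature profile
eps = 0.5
v = np.array([sample_postcollisional_velocity(rng, 0.0, u, T, eps)
              for _ in range(200_000)])
# sample mean is close to eps*u = 0.5, sample variance close to T = 2
```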
\subsection{Kinetic particle description and standard Monte Carlo\label{subsec:ap_kinetic_standardMC}} Underlying Equation~\eqref{eq:ap_kinetic_integrodiff} is a particle description which can be used for a standard MC method and which is the topic of this section. We denote the discrete times at which the particle velocity changes as $\tau_\VAeventno$, $\VAeventno\in\{1,2,\dots\}$. The equations for the position $t\mapsto {x}(t)$ and for the velocity $t\mapsto \frac{{v}(t)}{\epsilon}$ can be written as \begin{align} \left({x}(0),{v}(0)\right)&\sim \VAnsource({x},{v})\\ \frac{\text{d}{x}(t)}{\text{d}t}&=\frac{{v}(t)}{\epsilon}\\ {v}(t)&=\VAdiscreteveleend_\VAeventno\text{ for }t\in\left[\tau_{\VAeventno},\tau_{\VAeventno+1}\right),\ \VAeventno\in\left\{0,\dots,\VAnofevents-1\right\}\,. \end{align} The collisions with the plasma occur with position-dependent rate $\frac{\sigma({x})}{\epsilon^2}$. The event times can be sampled by drawing standard exponentially distributed samples $\VAstandardexp_\VAeventno\sim\mathcal{E}(1)$ and solving the equation \begin{equation} \int_{0}^{\Delta \VAeventtime_{\VAeventno}}\frac{\sigma\left({x}(\tau_\VAeventno)+\frac{\VAdiscreteveleend_\VAeventno}{\epsilon}t\right)}{\epsilon^2}\text{d}t=\VAstandardexp_\VAeventno\,,\label{eq:ap_kin_eventtime} \end{equation} for the free-flight intervals $\Delta \VAeventtime_{\VAeventno}$, from which the next event time can be found as $\tau_{\VAeventno+1}=\tau_\VAeventno+\Delta \VAeventtime_\VAeventno$.
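For spatially constant $\sigma$, the integral in Equation~\eqref{eq:ap_kin_eventtime} inverts in closed form, $\Delta\tau = \epsilon^2 E/\sigma$ with $E\sim\mathcal{E}(1)$, and the particle description above (with post-collisional velocities resampled from the Maxwellian) can be sketched in Python as follows. This is an illustrative simplification of the general, position-dependent case:

```python
import numpy as np

def kinetic_simulation(rng, x, v, t_bar, sigma, eps, u, T):
    # Velocity-jump process with constant collision rate sigma / eps**2:
    # free flights interleaved with collisions that resample v from M(v; x).
    t = 0.0
    while t < t_bar:
        dtau = eps**2 * rng.exponential(1.0) / sigma  # free-flight interval
        tau = min(dtau, t_bar - t)   # kinetically moved time within the horizon
        x += (v / eps) * tau         # execute the free flight
        if dtau < t_bar - t:         # a collision occurred before t_bar
            v = rng.normal(eps * u(x), np.sqrt(T(x)))  # post-collisional velocity
        t += tau
    return x, v
```

With zero temperature and a constant mean velocity $u$, every post-collisional velocity equals $\epsilon u$, so the particle moves ballistically with speed $u$; this gives a simple correctness check.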
We will refer to this sampling of $\VAstandardexp_\VAeventno$ and the following inversion of the integral in Equation \eqref{eq:ap_kin_eventtime} by the function \textproc{SampleCollision}$({x}(\tau_\VAeventno),\VAdiscreteveleend_\VAeventno,\sigma(x),\epsilon)$.\label{plaats:ap_kin_eventtime_function} At a collision, a new velocity is sampled according to the post-collisional velocity distribution, which is a Maxwellian distribution, Equation~\eqref{eq:ap_kinetic_postcolveldistr_pdf}, in this text: \begin{equation} \VAdiscreteveleend_\VAeventno\sim\mathcal{M}(v;x(\tau_\VAeventno))\,. \end{equation} A Monte Carlo algorithm to simulate Equation~\eqref{eq:ap_kinetic_integrodiff} based on this particle description of the kinetic model up to the final time $\bar{t}$ is given in Algorithm~\ref{alg:ap_kin}. \algrenewcommand\alglinenumber[1]{\footnotesize #1} \begin{algorithm} \caption{A kinetic simulation up to time $\bar{t}$ \label{alg:ap_kin}} \begin{algorithmic}[1] \Function{KineticSimulation}{$\VAdiscreteposeend,\VAdiscreteveleend,\bar{t},\sigma({x}),\epsilon,\mathcal{M}(v;x)$} \State $t\gets0$ \While{$t<\bar{t}$} \State $\Delta \VAeventtime\gets$ \Call{SampleCollision}{$\VAdiscreteposeend,\VAdiscreteveleend,\sigma({x}),\epsilon$} \State $\tau\gets$\Call{min}{$\Delta \VAeventtime,\bar{t}-t$}\Comment{determine the kinetically moved time} \State $\VAdiscreteposeend\gets \VAdiscreteposeend+\frac{\VAdiscreteveleend}{\epsilon}\tau$\Comment{execute the free flight} \If{$\Delta \VAeventtime<\bar{t}-t$}\Comment{check if a collision occurred} \State $\VAdiscreteveleend\sim\mathcal{M}(v;x)$\Comment{sample the post-collisional velocity} \EndIf \State $t\gets t+\tau$ \EndWhile \State \Return{$\VAdiscreteposeend,\VAdiscreteveleend$} \EndFunction \end{algorithmic} \end{algorithm} \subsection{Advection-diffusion limit in the high-collisional regime\label{subsec:ap_modelsim_difflimit}} In this section, we will uncover the limiting behaviour of
Equation~\eqref{eq:ap_kinetic_integrodiff} when $\epsilon\rightarrow0$, according to a standard strategy such as presented in~\cite{othmer2000veljumpdifflimit}. When the scaling parameter $\epsilon$ becomes small, the collision rate $\frac{\sigma(x)}{\epsilon^2}$ will become very large, resulting in a very quick relaxation of the distribution $f(x,v,t)$ to $\mathcal{M}(v;x)\rho(x,t)$ where $\rho(x,t)=\int f(x,v,t)\text{d}v$ represents the density. We capture this behaviour by writing the distribution $f(x,v,t)$ as \begin{equation} f(x,v,t)=\mathcal{M}(v;x)\rho(x,t)+\epsilon g(x,v,t)\,,\label{eq:ap_modelsim_difflimit_pluscorr} \end{equation} i.e., the distribution $\mathcal{M}(v;x)\rho(x,t)$ to which $f(x,v,t)$ relaxes plus a small correction term $\epsilon g(x,v,t)$. Plugging Equation~\eqref{eq:ap_modelsim_difflimit_pluscorr} in Equation~\eqref{eq:ap_kinetic_integrodiff} gives \begin{equation} \frac{\partial\!\left(\!\mathcal{M}(v;x)\!\rho(x,t)\!+\!\epsilon g(x,v,t)\!\right)\!}{\partial t}+\frac{v}{\epsilon}\frac{\partial\!\left(\!\mathcal{M}(v;x)\!\rho(x,t)\!+\!\epsilon g(x,v,t)\!\right)\!}{\partial x}=-\frac{\sigma(x)}{\epsilon}g(x,v,t)\,,\label{eq:ap_modelsim_difflimit_pluggedin} \end{equation} which, after averaging over $v$ becomes \begin{equation} \frac{\partial\rho(x,t)}{\partial t}+\frac{1}{\epsilon}\frac{\partial\left(\int v\mathcal{M}(v;x)\text{d}v\rho(x,t)\right)}{\partial x}+\int v\frac{\partial g(x,v,t)}{\partial x}\text{d}v=0\,.\label{eq:ap_modelsim_difflimit_averaged} \end{equation} When we consider the dominant terms of Equation~\eqref{eq:ap_modelsim_difflimit_pluggedin}, which have $\frac{1}{\epsilon}$ as a factor, we find that \begin{equation} v\frac{\partial (\mathcal{M}(v;x)\rho(x,t))}{\partial x}=-\sigma(x) g(x,v,t)\,, \end{equation} which can be used to transform Equation~\eqref{eq:ap_modelsim_difflimit_averaged} into \begin{equation} \frac{\partial\rho(x,t)}{\partial t}+\frac{\partial\left(u(x)\rho(x,t)\right)}{\partial 
x}-\frac{\partial}{\partial x}\left(\frac{1}{\sigma(x)}\frac{\partial(T(x)\rho(x,t))}{\partial x}\right)=0\,,\label{eq:ap_modelsim_difflimit_advdif} \end{equation} as $\epsilon\rightarrow0$. Here we used that $\int v\mathcal{M}(v;x)\text{d}v=\epsilon u(x)$ and $\int v^2\mathcal{M}(v;x)\text{d}v=T(x)+\epsilon^2u(x)^2$, since $\mathcal{M}(v;x)$ has mean $\epsilon u(x)$ and variance $T(x)$; the $\epsilon^2u(x)^2$ contribution to the diffusion term vanishes in the limit. Equation~\eqref{eq:ap_modelsim_difflimit_advdif} is an advection-diffusion-type equation, which justifies the name \emph{diffusive limit} for $\epsilon\rightarrow0$. Note that the effects of the collision rate, $\sigma(x)$, and the temperature, $T(x)$, differ in nature, since these coefficients appear at different places in the equation. In a spatially homogeneous setting with $u(x)\equiv u$, $T(x)\equiv T$, and $\sigma(x)\equiv\sigma$, Equation~\eqref{eq:ap_modelsim_difflimit_advdif} is the Fokker-Planck equation of the stochastic differential equation (SDE) \begin{equation} \text{d}X=u\text{d}t+\sqrt{\frac{2T}{\sigma}}\text{d}W\,,\label{eq:ap_modelsim_difflimit_sdeequiv} \end{equation} with $\text{d}W$ expressing a standard Wiener process. \subsection{Exploding simulation costs in the diffusive limit\label{subsec:ap_modelsim_strategy}} When we consider the kinetic simulation presented in Section~\ref{subsec:ap_kinetic_standardMC} in the limit $\epsilon\rightarrow0$, the collision rate $\frac{\sigma}{\epsilon^2}$ becomes very large and therefore the execution of all individual collisions and free flights becomes computationally expensive. The simulation of the SDE of Equation~\eqref{eq:ap_modelsim_difflimit_sdeequiv}, however, can be done at a computational cost that does not depend on $\epsilon$ via a random walk simulation. This random walk simulation of Equation~\eqref{eq:ap_modelsim_difflimit_sdeequiv} with a fixed time step $\Delta \VAtimevar$ amounts to \begin{equation} \VAdiscreteposeend(t+\Delta \VAtimevar)=\VAdiscreteposeend(t)+A_0\Delta \VAtimevar+\sqrt{2D_0\Delta \VAtimevar}\xi\,,\label{eq:ap_modelsim_difflimit_sderandomwalk} \end{equation} with $A_0=u$, $D_0=\frac{T}{\sigma}$, and $\xi\sim\mathcal{N}(0,1)$.
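The random walk simulation of Equation~\eqref{eq:ap_modelsim_difflimit_sderandomwalk} can be sketched as follows (a sketch with illustrative names; the sample mean and variance of the displacement grow as $A_0t$ and $2D_0t$, respectively):

```python
import math
import random

def random_walk(x0, n_steps, dt, A0, D0, rng):
    """Random-walk discretization of the limiting SDE
    dX = u dt + sqrt(2T/sigma) dW, with A0 = u and D0 = T/sigma.
    The cost per step is independent of epsilon."""
    x = x0
    for _ in range(n_steps):
        xi = rng.gauss(0.0, 1.0)                    # xi ~ N(0, 1)
        x += A0 * dt + math.sqrt(2.0 * D0 * dt) * xi
    return x
```

Over $n$ steps the increments are independent, so the variance of the final position is $n\cdot 2D_0\Delta t=2D_0t$, matching the diffusion coefficient of the limiting equation.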
However, use of Equations~\eqref{eq:ap_modelsim_difflimit_sdeequiv} and~\eqref{eq:ap_modelsim_difflimit_sderandomwalk} is only allowed in the limit $\epsilon\rightarrow0$, and therefore introduces a modeling error for finite $\epsilon$. To cope with the high computational cost for small but finite $\epsilon$, we aim to replace most free flights in the high-collisional regime by a single random walk step with $\epsilon$-dependent coefficients $A_\epsilon$ and $D_\epsilon$ that covers a fraction of the time step. This corresponds to applying a random walk Monte Carlo discretization of an advection-diffusion equation for that fraction of the time step. To remove the modeling error for finite $\epsilon$ as the time step $\Delta \VAtimevar$ decreases, we propose two kinetic corrections in the algorithm of Section~\ref{sec:ap_newscheme}. The first correction to control the modeling error is not executing the entirety of the time step $\Delta \VAtimevar$ diffusively, but maintaining kinetic motion at the beginning of each time step. The resulting algorithm presented in Section~\ref{sec:ap_newscheme} is consequently called the kinetic-diffusion (KD) scheme. As a second correction, we do not use the limiting advection and diffusion coefficients $u$ and $\frac{T}{\sigma}$, but instead use coefficients $A_\epsilon$ and $D_\epsilon$ such that the first two moments of the diffusive motion match the kinetic process exactly for any fixed and finite value of $\epsilon$. The diffusive substep of the new Monte Carlo algorithm then becomes \begin{equation} \VAdiscreteposeend(t+\theta)=\VAdiscreteposeend(t)+A_\epsilon\theta+\sqrt{2D_\epsilon\theta}\xi\,,\label{eq:ap_modelsim_strategy_randomwalk} \end{equation} with $0\leq\theta\leq\Delta \VAtimevar$ the diffusive part of the time step and $\xi\sim\mathcal{N}(0,1)$ a standard normally distributed random number.
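The step structure just described can be sketched schematically as follows, with the kinetic phase lasting until the first collision and the remainder $\theta$ of the time step executed diffusively via Equation~\eqref{eq:ap_modelsim_strategy_randomwalk}. This is only a sketch of the intended structure: the coefficients \texttt{A\_eps} and \texttt{D\_eps} are left as parameters, since they are derived only in Section~\ref{sec:ap_aggregtodiff}, and the full scheme is presented in Section~\ref{sec:ap_newscheme}.

```python
import math
import random

def kd_step(x, v, dt, sigma, eps, A_eps, D_eps, u, T, rng):
    """Schematic kinetic-diffusion step (homogeneous background, sketch).

    Kinetic free flight until the first collision, then a diffusive
    random-walk substep over the remaining time theta; A_eps and D_eps
    are placeholders for the epsilon-dependent coefficients derived
    later in the text.
    """
    dt_coll = rng.expovariate(sigma / eps**2)
    tau = min(dt_coll, dt)        # kinetic part of the step
    x += (v / eps) * tau
    theta = dt - tau              # diffusive part of the time step
    if theta > 0.0:               # a collision occurred within the step
        xi = rng.gauss(0.0, 1.0)
        x += A_eps * theta + math.sqrt(2.0 * D_eps * theta) * xi
        v = rng.gauss(eps * u, math.sqrt(T))  # velocity for the next step
    return x, v
```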
The expressions for the advection and diffusion coefficients $A_\epsilon$ and $D_\epsilon$ are derived in Section~\ref{sec:ap_aggregtodiff}. In Section~\ref{sec:ap_newscheme}, the new kinetic-diffusion scheme is presented, which is analyzed in the remainder of the paper. \section{Mean and variance of the kinetic motion\label{sec:ap_aggregtodiff}} In this section, we derive the mean and variance of the motion of a kinetically simulated particle during a time step of length $\Delta \VAtimevar$ for a spatially homogeneous background. In Section~\ref{sec:ap_newscheme}, these moments will be used as the advection and diffusion coefficients $A_\epsilon$ and $D_\epsilon$ for the positional increments. The assumption of a spatially homogeneous background translates to a constant collision rate $\frac{\sigma}{\epsilon^2}$ and a post-collisional velocity distribution with constant mean $\epsilon u$ and variance $T$. \begin{figure}[H] \centering \begin{minipage}{\textwidth} \centering \resizebox{\textwidth}{!}{\includetikz{illustraties/}{eventtime_tijdsstap}} \captionof{figure}{The parts of the free flight times (time between collisions) that overlap with the $n$-th time step, for an example with three collisions in the time interval $[n\Delta \VAtimevar,(n+1)\Delta \VAtimevar)$.} \label{fig:ap_eventtime_tijdsstap} \end{minipage} \end{figure} During the $n$-th time step $[n\Delta \VAtimevar,(n+1)\Delta \VAtimevar)$, the change in position of a particle is the result of the free flights during that time step. In Figure~\ref{fig:ap_eventtime_tijdsstap}, these free flights occur in the time between the collisions, which are indicated with circles. Only the parts of the free flights that overlap with the time step contribute to the motion in the time step.
We denote the index of the last collision before the time interval $[n\Delta \VAtimevar,(n+1)\Delta \VAtimevar)$ with $\VAeventnotilltimestep{n}$. We denote the part of the $(\VAeventnotilltimestep{n}+1)$-th flight time that lies in the time interval $[n\Delta \VAtimevar,(n+1)\Delta \VAtimevar)$ with $\Delta \VAeventtime_{\VAeventnotilltimestep{n}+1|n}$, and, similarly, for the overlap of the $(\VAeventnotilltimestep{n+1}+1)$-th flight time with this time interval we write $\Delta \VAeventtime_{\VAeventnotilltimestep{n+1}+1|n}$. With this notation, the change in position during the $n$-th time step can be written as \begin{equation} \Delta {\VAdiscreteposeend}_n=\underbrace{\vphantom{\sum_{{\VAeventno}=\VAeventnotilltimestep{n}+2}^{\VAeventnotilltimestep{n+1}}}\frac{\VAdiscreteveleend_{\VAeventnotilltimestep{n}}}{\epsilon}\Delta \VAeventtime_{\VAeventnotilltimestep{n}+1|n}}_\text{entering flight}+\!\!\underbrace{\sum_{{\VAeventno}=\VAeventnotilltimestep{n}+2}^{\VAeventnotilltimestep{n+1}}\!\!\frac{\VAdiscreteveleend_{{\VAeventno}-1}}{\epsilon}\Delta \VAeventtime_{\VAeventno}}_\text{internal flights}+\underbrace{\vphantom{\sum_{{\VAeventno}=\VAeventnotilltimestep{n}+2}^{\VAeventnotilltimestep{n+1}}}\frac{\VAdiscreteveleend_{\VAeventnotilltimestep{n+1}}}{\epsilon}\Delta \VAeventtime_{\VAeventnotilltimestep{n+1}+1|n}}_\text{exiting flight}\,.\label{eq:ap_deltax_real} \end{equation} It is important to note that in this model, there is correlation between the positional increments during different time steps, because the last velocity
of a time step, $\VAdiscreteveleend_{\VAeventnotilltimestep{n}}$, is also the first velocity of the next time step. Since we are only considering a single time step, it is convenient to discard the dependence on the time step index, $n$. We rename the velocities $\{\VAdiscreteveleend_{\VAeventnotilltimestep{n}},\cdots,\VAdiscreteveleend_{\VAeventnotilltimestep{n+1}}\}$ as $\{\VAdiscreteveleend_0,\cdots,\VAdiscreteveleend_{\VAnofevents}\}$ and the flight times $\{\Delta \VAeventtime_{\VAeventnotilltimestep{n}+1|n},\Delta \VAeventtime_{\VAeventnotilltimestep{n}+2},\cdots,\Delta \VAeventtime_{\VAeventnotilltimestep{n+1}},\Delta \VAeventtime_{\VAeventnotilltimestep{n+1}+1|n}\}$ as $\{\Delta \VAeventtime_0,\cdots,\Delta \VAeventtime_{{\VAnofevents}}\}$, with ${\VAnofevents}=\VAeventnotilltimestep{n+1}-\VAeventnotilltimestep{n}$. This rephrases Equation~\eqref{eq:ap_deltax_real} as \begin{equation} \Delta \VAdiscreteposeend=\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\,,\label{eq:ap_deltax_assum} \end{equation} where we always have \begin{equation} \sum_{{\VAeventno}=0}^{\VAnofevents}\Delta \VAeventtime_{\VAeventno}=\Delta \VAtimevar\,.\label{eq:ap_deltat_assum} \end{equation} Note that ${\VAnofevents}$, the number of collisions during a time step of length $\Delta \VAtimevar$, is a random variable which follows a Poisson distribution: \begin{equation} P({\VAnofevents}=\VAeventno)=\frac{1}{\VAeventno!}\left(\frac{\sigma}{\epsilon^2}\Delta \VAtimevar\right)^{\VAeventno}e^{-\frac{\sigma}{\epsilon^2}\Delta \VAtimevar}\,.\label{eq:ap_poissondistri} \end{equation} We will now derive the mean and variance of $\Delta \VAdiscreteposeend$. In Section~\ref{subsec:ap_randomv0}, we first neglect correlation with the previous and next time steps by treating the initial and final velocity identically to the other velocities.
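The decomposition of Equations~\eqref{eq:ap_deltax_assum}--\eqref{eq:ap_poissondistri} can be sampled directly: draw a Poisson number of collision instants, place them uniformly in the time step, and use their spacings as flight times (a sketch with illustrative names, treating all velocities as fresh draws):

```python
import math
import random

def sample_poisson(lam, rng):
    # inverse-transform (Knuth) Poisson sampler; fine for moderate lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_increment(dt, sigma, eps, u, T, rng):
    """One positional increment Delta x = sum_j (V_j/eps) * Delta tau_j.

    Given K collisions, the collision instants of the homogeneous
    Poisson process are uniform in [0, dt], so the K+1 flight times
    are their spacings, which sum to dt by construction.
    """
    K = sample_poisson(sigma * dt / eps**2, rng)
    instants = sorted(rng.uniform(0.0, dt) for _ in range(K))
    bounds = [0.0] + instants + [dt]
    taus = [b - a for a, b in zip(bounds, bounds[1:])]
    return sum(rng.gauss(eps * u, math.sqrt(T)) / eps * tau for tau in taus)
```

A sample average of many such increments reproduces the mean $u\Delta t$ derived below.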
Starting from the formulas of Section~\ref{subsec:ap_randomv0}, we then derive formulas for the mean and variance of the positional increment conditioned on a known final velocity, $\VAdiscreteveleend_{\VAnofevents}$, in Section~\ref{subsec:ap_fixedv0}. The formulas resulting from this second section allow incorporating correlation with subsequent time steps in the random walk steps themselves, leading to the algorithm of Section~\ref{sec:ap_newscheme}. \subsection{Neglecting correlation with other time steps\label{subsec:ap_randomv0}} In this first section, we derive the exact mean and variance of the positional increment of a particle adhering to the model of Section~\ref{subsec:ap_kinetic_kinetic} during a time step, while neglecting correlation between different time steps. The resulting expressions form the foundation of the derivation in Section~\ref{subsec:ap_fixedv0}, where we do consider correlation with the subsequent time step. \subsubsection{Neglecting correlation with other time steps: mean\label{subsubsec:ap_randomv0_mean}} The change in position of a particle following the model of Section~\ref{subsec:ap_kinetic_kinetic} during a time step $\Delta \VAtimevar$ is written as $\Delta\VAdiscreteposeend$ and can be expressed as in Equation~\eqref{eq:ap_deltax_assum}.
With this description, and by conditioning on the Poisson-distributed number of collisions ${\VAnofevents}$, we can write the expected value of $\Delta\VAdiscreteposeend$ as \begin{equation} \VAexpec{\Delta \VAdiscreteposeend}=\VAexpec{\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}}=\VAexpecover{{\VAnofevents}}{\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}}\,,\label{eq:ap_randomv0_mean_initial} \end{equation} where $\mathbb{E}_{{\VAnofevents}}$ denotes the expected value over the random variable in the subscript, and an expected value without subscript is taken over all random variables except the ones on which it is conditioned. The different $\VAdiscreteveleend_\VAeventno$ in Equation~\eqref{eq:ap_randomv0_mean_initial} are random variables that follow the distribution $\mathcal{M}(v)$, with mean $\epsilon u$ and variance $T$, and they are independent of all other random variables.
Using this transforms Equation~\eqref{eq:ap_randomv0_mean_initial} into \begin{equation} \VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\VAexpec{\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=u\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}\,.\label{eq:ap_randomv0_mean_vdistriworkedout} \end{equation} Since the $\Delta \VAeventtime_{\VAeventno}$ always sum to $\Delta \VAtimevar$ by construction of $\Delta \VAdiscreteposeend$ (see Equation~\eqref{eq:ap_deltat_assum}), regardless of ${\VAnofevents}$, we find \begin{equation} \VAexpec{\Delta \VAdiscreteposeend}=u\Delta \VAtimevar\,.\label{eq:ap_randomv0_mean_result} \end{equation} \subsubsection{Neglecting correlation with other time steps: variance\label{subsubsec:ap_fixedv0_var}} To find an expression for the variance of the positional increment, we start from the law of total variance, \begin{equation} \VAvar{\Delta \VAdiscreteposeend}=\VAexpecover{{\VAnofevents}}{\VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents} \frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}}+\VAvarover{{\VAnofevents}}{\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents} \frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}}\,.\label{eq:ap_randomv0_var_lawtotalvarapplied} \end{equation} The same notational convention is used for the variance as for the expected value introduced in the previous section: the subscript indicates the random variable with respect to which the variance is taken, and the absence of a subscript indicates that the variance is taken over all random variables except the ones on which it is conditioned.
The second term on the right hand side of Equation~\eqref{eq:ap_randomv0_var_lawtotalvarapplied} equals zero, since the argument of $\text{Var}_{{\VAnofevents}}$ equals the left hand side of Equation~\eqref{eq:ap_randomv0_mean_vdistriworkedout}, which was shown to be independent of ${\VAnofevents}$ in Section~\ref{subsubsec:ap_randomv0_mean}, see Equation~\eqref{eq:ap_randomv0_mean_result}. The first term on the right hand side of Equation~\eqref{eq:ap_randomv0_var_lawtotalvarapplied} can be rewritten as a series by using Equation~\eqref{eq:ap_poissondistri}, resulting in \begin{equation} \VAvar{\Delta \VAdiscreteposeend}=\sum_{{\VAnofevents}=0}^\infty\frac{1}{{\VAnofevents}!}\left(\frac{\sigma}{\epsilon^2}\Delta \VAtimevar\right)^{{\VAnofevents}}e^{-\frac{\sigma}{\epsilon^2}\Delta \VAtimevar}\VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}\,.\label{eq:ap_randomv0_var_poissonseriesapplied} \end{equation} We will now rewrite the conditional variance in Equation~\eqref{eq:ap_randomv0_var_poissonseriesapplied}. To do so, we first show that all the flight times $\Delta \VAeventtime_{\VAeventno}$, ${\VAeventno}=0\dots{\VAnofevents}$, are identically distributed for a fixed ${\VAnofevents}$. This yields a convenient formulation for the conditional variance, which we use at the end of this section to obtain an expression for the variance.
\paragraph{Identical distribution of the flight times in a time step.} The probability that the ${\VAeventno}$-th inter-collision time begins in the interval $[\theta,\theta+\text{d}\theta]$ and has a length in the interval $[\Delta \VAeventtime_{\VAeventno},\Delta \VAeventtime_{\VAeventno}+\text{d}\Delta \VAeventtime_{\VAeventno}]$ is equal to the probability of ${\VAeventno}-1$ events occurring before time $\theta$, with the ${\VAeventno}$-th event occurring in the interval $[\theta,\theta+\text{d}\theta]$, multiplied by the probability of the next collision occurring after exactly $\Delta \VAeventtime_{\VAeventno}$, multiplied by the probability of the remaining ${\VAnofevents}-{\VAeventno}-1$ events occurring in the remaining time $\Delta \VAtimevar-\theta-\Delta \VAeventtime_{\VAeventno}$. Integrating over $\theta$, we can write the probability distribution of the flight time $\Delta \VAeventtime_{\VAeventno}$ up to a constant factor as \begin{equation} \underbrace{e^{-\frac{\sigma\Delta \VAeventtime_{\VAeventno}}{\epsilon^2}}\frac{\sigma}{\epsilon^2}\text{d}\Delta \VAeventtime_{\VAeventno}}_{\substack{\text{one event}\\ \text{after $\Delta \VAeventtime_{\VAeventno}$}}}\int_0^{\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno}}\underbrace{\frac{\left(\frac{\sigma\theta}{\epsilon^2}\right)^{{\VAeventno}-1}}{({\VAeventno}-1)!}e^{-\frac{\sigma\theta}{\epsilon^2}}\frac{\sigma}{\epsilon^2}\text{d}\theta}_{\substack{\text{${\VAeventno}-1$ events in $\theta$,} \\ \text{and one at $\theta$}}}\underbrace{\frac{\left(\frac{\sigma(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno}-\theta)}{\epsilon^2}\right)^{{\VAnofevents}-{\VAeventno}-1}}{({\VAnofevents}-{\VAeventno}-1)!}e^{-\frac{\sigma(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno}-\theta)}{\epsilon^2}}}_\text{the remaining ${\VAnofevents}-{\VAeventno}-1$ events}\,. \end{equation} Restructuring this equation gives \begin{equation} \left(\frac{\sigma}{\epsilon^2}\right)^{{\VAnofevents}} e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\int_0^{\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno}}\frac{\theta^{{\VAeventno}-1}}{({\VAeventno}-1)!}\frac{(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno}-\theta)^{{\VAnofevents}-{\VAeventno}-1}}{({\VAnofevents}-{\VAeventno}-1)!}\text{d}\theta\,\text{d}\Delta \VAeventtime_{\VAeventno}\,, \end{equation} and after repeated partial integration we find \begin{equation} \left(\frac{\sigma}{\epsilon^2}\right)^{{\VAnofevents}} e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\frac{(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno})^{{\VAnofevents}-1}}{({\VAnofevents}-1)!}\text{d}\Delta \VAeventtime_{\VAeventno}\,,\label{eq:ap_randomv0_var_taudistri_intermediate} \end{equation} which is independent of ${\VAeventno}$, proving that the $\Delta \VAeventtime_{\VAeventno}$, ${\VAeventno}\in\{0,\dots,{\VAnofevents}\}$, are identically distributed. The quantity in Equation~\eqref{eq:ap_randomv0_var_taudistri_intermediate} is the probability density function of $\Delta \VAeventtime_{\VAeventno}$ up to a scaling; normalizing yields, for all ${\VAeventno}\in\{0,1,\dots,{\VAnofevents}\}$, \begin{equation} \text{P}(\Delta \VAeventtime_{\VAeventno}|{\VAnofevents})=\left\{\begin{array}{ll} {\VAnofevents}\dfrac{(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno})^{{\VAnofevents}-1}}{\Delta \VAtimevar^{{\VAnofevents}}}&\text{if }\Delta \VAeventtime_{\VAeventno}\in[0,\Delta \VAtimevar]\,,\\ 0&\text{else}\,. \end{array}\right. \label{eq:ap_randomv0_var_Dtprobdens} \end{equation} \paragraph{Expression for the conditional variance.} To find an expression for the conditional variance in Equation~\eqref{eq:ap_randomv0_var_poissonseriesapplied}, the mean and variance of $\Delta \VAeventtime_{\VAeventno}$ will be required.
These can be computed from Equation~\eqref{eq:ap_randomv0_var_Dtprobdens} to be \begin{align} \VAexpec{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}&=\int_0^{\Delta \VAtimevar} \Delta \VAeventtime_{\VAeventno}{\VAnofevents}\frac{(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno})^{{\VAnofevents}-1}}{\Delta \VAtimevar^{{\VAnofevents}}}\text{d}\Delta \VAeventtime_{\VAeventno}=\frac{\Delta \VAtimevar}{{\VAnofevents}+1}\label{eq:ap_condtimestepmean}\\ \VAexpec{\left.\Delta \VAeventtime_{\VAeventno}^2\right|{\VAnofevents}}&=\int_0^{\Delta \VAtimevar}\Delta \VAeventtime_{\VAeventno}^2{\VAnofevents}\frac{(\Delta \VAtimevar-\Delta \VAeventtime_{\VAeventno})^{{\VAnofevents}-1}}{\Delta \VAtimevar^{\VAnofevents}}\text{d}\Delta \VAeventtime_{\VAeventno}=\frac{2\Delta \VAtimevar^2}{({\VAnofevents}+1)({\VAnofevents}+2)}\label{eq:ap_condtimestepmom2}\\ \VAvar{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}&=\VAexpec{\left.\Delta \VAeventtime_{\VAeventno}^2\right|{\VAnofevents}}-\left(\VAexpec{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}\right)^2=\frac{{\VAnofevents}\Delta \VAtimevar^2}{({\VAnofevents}+1)^2({\VAnofevents}+2)}\,.\label{eq:ap_condtimestepvar} \end{align} Since the flight times $\Delta \VAeventtime_{\VAeventno}$ are identically distributed, as are the velocities, the formula for the variance of a sum of identically distributed, correlated random variables can be applied to the conditional variance in Equation~\eqref{eq:ap_randomv0_var_poissonseriesapplied}, yielding \begin{equation} \VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=({\VAnofevents}+1)\left(\VAvar{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}} +{\VAnofevents}\VAcov{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}\right)\,.\label{eq:ap_randomv0_var_conddxvar_sumformula} \end{equation}
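The conditional moments of Equations~\eqref{eq:ap_condtimestepmean} and~\eqref{eq:ap_condtimestepmom2} can be verified numerically against the density of Equation~\eqref{eq:ap_randomv0_var_Dtprobdens}, e.g. with a simple midpoint rule (a sketch; the function name is illustrative):

```python
def flight_time_moment(n, dt, power, steps=50000):
    """Midpoint-rule integral of tau**power * n*(dt-tau)**(n-1)/dt**n
    over [0, dt]: the conditional moments of a flight time given n
    collisions in the time step."""
    h = dt / steps
    total = 0.0
    for i in range(steps):
        tau = (i + 0.5) * h
        total += tau**power * n * (dt - tau)**(n - 1) / dt**n * h
    return total
```

For $n=3$ collisions and $\Delta t=2$, the rule reproduces $\Delta t/(n+1)$ and $2\Delta t^2/((n+1)(n+2))$ to within the quadrature error.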
Because of the independence of the $\VAdiscreteveleend_{\VAeventno}$ from the other random variables, we have \begin{multline} \VAvar{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=\VAvar{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}\VAvar{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}} +\VAvar{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}\VAexpec{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}^2\\ +\VAvar{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}\VAexpec{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}^2 \label{eq:ap_randomv0_var_condvdtvar_splitup} \end{multline} and \begin{align} \VAcov{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}&=\VAcov{\Delta \VAeventtime_{\VAeventno}|{\VAnofevents}}\VAexpec{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}^2\,.\label{eq:ap_randomv0_var_condvtcov} \end{align} Applying the same variance-of-a-sum formula to the flight times themselves, we find \begin{equation} \VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=({\VAnofevents}+1)\VAvar{\Delta \VAeventtime_{\VAeventno}|{\VAnofevents}}+({\VAnofevents}+1){\VAnofevents}\VAcov{\Delta \VAeventtime_{\VAeventno}|{\VAnofevents}}=0\,, \end{equation} which allows us to express the covariance of the flight-time contributions in Equation~\eqref{eq:ap_randomv0_var_condvtcov} as \begin{equation} \VAcov{\left.\Delta \VAeventtime_{\VAeventno}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}=-\frac{\VAvar{\Delta \VAeventtime_{\VAeventno}|{\VAnofevents}}}{{\VAnofevents}}\VAexpec{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}^2\,.\label{eq:ap_randomv0_var_condvdtcov} \end{equation}
Equations~\eqref{eq:ap_randomv0_var_condvdtvar_splitup} and~\eqref{eq:ap_randomv0_var_condvdtcov} can be used to transform the conditional variance of Equation~\eqref{eq:ap_randomv0_var_poissonseriesapplied} to \begin{equation} \VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=({\VAnofevents}+1)\VAvar{\left.\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\right|{\VAnofevents}}\left(\VAvar{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}} +\VAexpec{\left.\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}^2\right)\,.\label{eq:ap_randomv0_var_conddxvar_intermediate} \end{equation} All the terms of Equation~\eqref{eq:ap_randomv0_var_conddxvar_intermediate} are known: the mean and variance of $\Delta \VAeventtime_{\VAeventno}$ conditioned on ${\VAnofevents}$ are given by Equations~\eqref{eq:ap_condtimestepmean} and~\eqref{eq:ap_condtimestepvar}, and the velocities are independent of ${\VAnofevents}$ and have variance $T$.
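When Equations~\eqref{eq:ap_condtimestepmean} and~\eqref{eq:ap_condtimestepvar} are substituted into Equation~\eqref{eq:ap_randomv0_var_conddxvar_intermediate}, the ${\VAnofevents}$-dependent factor reduces to $2\Delta \VAtimevar^2/({\VAnofevents}+2)$. This reduction can be checked exactly with rational arithmetic (a sketch, taking $\Delta \VAtimevar=1$ and omitting the common factor $T/\epsilon^2$):

```python
from fractions import Fraction

def scaled_cond_var(n):
    """(n+1) * (Var[tau|n] + E[tau|n]^2) with dt = 1; the common factor
    T/eps^2 of the conditional variance is omitted."""
    mean = Fraction(1, n + 1)                   # E[tau | K = n]
    second = Fraction(2, (n + 1) * (n + 2))     # E[tau^2 | K = n]
    var = second - mean * mean
    return (n + 1) * (var + mean * mean)        # should equal 2/(n+2)
```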
Using these results transforms Equation~\eqref{eq:ap_randomv0_var_conddxvar_intermediate} into \begin{equation} \VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}=\frac{T{\VAnofevents}\Delta \VAtimevar^2}{\epsilon^2({\VAnofevents}+1)({\VAnofevents}+2)} +\frac{T\Delta \VAtimevar^2}{\epsilon^2({\VAnofevents}+1)}=\frac{T}{\epsilon^2}\Delta \VAtimevar^2\frac{2}{{\VAnofevents}+2}\,.\label{eq:ap_randomv0_conddxvar} \end{equation} \paragraph{Expression for the variance.} Expression~\eqref{eq:ap_randomv0_conddxvar} for the conditional variance of Equation~\eqref{eq:ap_randomv0_var_poissonseriesapplied} can be used to obtain \begin{align} \VAvar{\Delta\VAdiscreteposeend}&=\sum_{{\VAnofevents}=0}^\infty \frac{T}{\epsilon^2}\Delta \VAtimevar^2e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\frac{2}{{\VAnofevents}+2}\frac{\left(\frac{\sigma}{\epsilon^2}\Delta \VAtimevar\right)^{\VAnofevents}}{{\VAnofevents}!}\\ &=2\frac{T}{\epsilon^2}\Delta \VAtimevar^2e^{-\frac{\sigma}{\epsilon^2}\Delta \VAtimevar}\sum_{{\VAnofevents}=0}^\infty({\VAnofevents}+1)\frac{\left(\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)^{{\VAnofevents}}}{({\VAnofevents}+2)!}\,, \intertext{which can be rewritten in the form of the series expansion of an exponential and its derivative as} \VAvar{\Delta\VAdiscreteposeend}&=2\frac{T}{\epsilon^2}\Delta \VAtimevar^2e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\left(\sum_{{\VAnofevents}=0}^\infty({\VAnofevents}+2)\frac{\left(\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)^{{\VAnofevents}}}{({\VAnofevents}+2)!}-\sum_{{\VAnofevents}=0}^\infty\frac{\left(\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)^{{\VAnofevents}}}{({\VAnofevents}+2)!}\right)\\ &=2\frac{T}{\epsilon^2}\Delta \VAtimevar^2e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\left(\frac{\epsilon^2}{\sigma\Delta \VAtimevar}\sum_{{\VAnofevents}=2}^\infty{\VAnofevents}\frac{\left(\frac{\sigma\Delta
\VAtimevar}{\epsilon^2}\right)^{{\VAnofevents}-1}}{{\VAnofevents}!}-\left(\frac{\epsilon^2}{\sigma\Delta \VAtimevar}\right)^2\sum_{{\VAnofevents}=2}^\infty\frac{\left(\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)^{{\VAnofevents}}}{{\VAnofevents}!}\right)\\ &=2\frac{\epsilon^2}{\sigma^2}T e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\left(\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\frac{\text{d}\left(e^{\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-1-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)}{\text{d}\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-\left(e^{\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-1-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)\right)\,, \end{align} resulting in the formula for the variance \begin{equation} \VAvar{\Delta\VAdiscreteposeend}=2\frac{\epsilon^2}{\sigma^2}T\left(e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-1+\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\right)\,.\label{eq:ap_randomv0_var} \end{equation} In the diffusive limit, $\epsilon\rightarrow0$, the variance in Equation~\eqref{eq:ap_randomv0_var} indeed becomes equal to $2\frac{T}{\sigma}\Delta \VAtimevar$, in correspondence with Equation~\eqref{eq:ap_modelsim_difflimit_sdeequiv} of Section~\ref{subsec:ap_modelsim_difflimit}. On the other hand, for fixed $\epsilon$ and decreasing time step size, we obtain a variance proportional to $\frac{T}{\epsilon^2}\Delta \VAtimevar^2$. This equals the variance of the motion during a time $\Delta \VAtimevar$ of a particle with a constant but randomly sampled velocity with variance $\frac{T}{\epsilon^2}$, which is exactly the kinetic limit of the particle process, with velocities $\frac{v}{\epsilon}$ where $v$ is distributed according to the Maxwellian of Equation~\eqref{eq:ap_kinetic_postcolveldistr_pdf}. \subsection{Including correlation by conditioning on the final velocity\label{subsec:ap_fixedv0}} In Section~\ref{subsec:ap_randomv0}, all velocities were treated as independent new samples.
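Equation~\eqref{eq:ap_randomv0_var} and its two limiting regimes can be checked numerically by summing the Poisson series of Equation~\eqref{eq:ap_randomv0_var_poissonseriesapplied} directly (a sketch; function names are illustrative):

```python
import math

def var_closed(dt, sigma, eps, T):
    """Closed-form variance: 2 (eps^2/sigma^2) T (exp(-lam) - 1 + lam)."""
    lam = sigma * dt / eps**2
    return 2.0 * eps**2 * T / sigma**2 * (math.exp(-lam) - 1.0 + lam)

def var_series(dt, sigma, eps, T, terms=200):
    """Poisson average of the conditional variance 2*T*dt^2/(eps^2*(K+2)).

    The Poisson pmf is updated recursively (pmf *= lam/(k+1)) to avoid
    overflowing factorials for large k."""
    lam = sigma * dt / eps**2
    pmf = math.exp(-lam)
    total = 0.0
    for k in range(terms):
        total += 2.0 * T * dt**2 / (eps**2 * (k + 2)) * pmf
        pmf *= lam / (k + 1)
    return total
```

The series and the closed form agree to machine precision, and the closed form reproduces both the diffusive limit $2\frac{T}{\sigma}\Delta t$ and the kinetic limit $\frac{T}{\epsilon^2}\Delta t^2$.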
When multiple time steps are considered, the last velocity of the previous time step is equal to the first velocity of the next time step, which results in correlation between the positional increments in different time steps. To correlate the motion within a time step to the subsequent time step, the mean and variance of the motion can take the final velocity during the time step into account. In a low-collisional regime, this correlation is an important feature of the kinetic behaviour. \subsubsection{Including correlation by conditioning on the final velocity: mean} As in the previous section, we condition the expected value on the number of collisions, resulting in \begin{align} \VAexpec{\Delta \VAdiscreteposeend}&=\VAexpec{\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|{\VAnofevents}}}\,. \end{align} The difference with before is the treatment of $\VAdiscreteveleend_{\VAnofevents}$ as a fixed, predetermined value; using the independence of the remaining velocities, we obtain \begin{align} \VAexpec{\Delta \VAdiscreteposeend}&=\VAexpec{\VAexpec{\left.\sum_{{\VAeventno}=0}^{{\VAnofevents}-1}u\Delta \VAeventtime_{\VAeventno}+\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAeventtime_{\VAnofevents}\right|{\VAnofevents}}}\\ &=\VAexpec{\VAexpec{\left.u(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})+\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAeventtime_{\VAnofevents}\right|{\VAnofevents}}}\\ &=\VAexpec{u(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})+\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAeventtime_{\VAnofevents}}\\ &=u\Delta \VAtimevar+\left(\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}-u\right)\VAexpec{\Delta \VAeventtime_{\VAnofevents}}\,.
\end{align} Since $\Delta \VAeventtime_{\VAnofevents}$ equals the time before any collision occurs in $\Delta \VAtimevar$, it is exponentially distributed with rate $\frac{\sigma}{\epsilon^2}$, capped at $\Delta \VAtimevar$. Its expected value can consequently be computed as \begin{align} \VAexpec{\Delta \VAeventtime_{\VAnofevents}}&=\int_0^{\Delta \VAtimevar}\Delta \VAeventtime\frac{\sigma}{\epsilon^2} e^{-\frac{\sigma\Delta \VAeventtime}{\epsilon^2}}\text{d}\Delta \VAeventtime+\Delta \VAtimevar\int_{\Delta \VAtimevar}^\infty\frac{\sigma}{\epsilon^2} e^{-\frac{\sigma}{\epsilon^2}\Delta \VAeventtime}\text{d}\Delta \VAeventtime\\ &=\frac{\epsilon^2}{\sigma}-\frac{\epsilon^2}{\sigma} e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-\Delta \VAtimevar e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}+\Delta \VAtimevar e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\\ &=\frac{\epsilon^2}{\sigma}\left(1-e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\right)\,,\label{eq:ap_fixedv0_dt0mean} \end{align} resulting in the mean \begin{equation} \VAexpec{\Delta\VAdiscreteposeend}=u\Delta \VAtimevar+\left(\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}-u\right)\frac{\epsilon^2}{\sigma}\left(1-e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\right)\,.\label{eq:ap_fixedv0_mean_result} \end{equation} In the diffusive limit, the mean becomes equal to $u\Delta \VAtimevar$. Conversely, for fixed $\epsilon$ and $\Delta \VAtimevar\rightarrow0$, the mean becomes equal to $\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAtimevar$, since the probability of a velocity change vanishes.
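Equation~\eqref{eq:ap_fixedv0_dt0mean} can be verified by evaluating the expectation of the capped exponential flight time numerically (a sketch; function names are illustrative):

```python
import math

def mean_last_flight(dt, sigma, eps, steps=100000):
    """Midpoint-rule evaluation of E[tau_K]: the exponential density part
    on [0, dt] plus the atom dt * P(no collision within dt)."""
    rate = sigma / eps**2
    h = dt / steps
    total = 0.0
    for i in range(steps):
        tau = (i + 0.5) * h
        total += tau * rate * math.exp(-rate * tau) * h
    return total + dt * math.exp(-rate * dt)

def mean_last_flight_closed(dt, sigma, eps):
    """Closed form eps^2/sigma * (1 - exp(-sigma*dt/eps^2))."""
    rate = sigma / eps**2
    return (1.0 - math.exp(-rate * dt)) / rate
```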
\subsubsection{Including correlation by conditioning on the final velocity: variance} As in Section~\ref{subsubsec:ap_fixedv0_var}, we start with the law of total variance, but now applied to the last time interval $\Delta \VAeventtime_{\VAnofevents}$, giving \begin{equation} \VAvar{\Delta\VAdiscreteposeend}=\VAexpec{\VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|\Delta \VAeventtime_{\VAnofevents}}}+\VAvar{\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|\Delta \VAeventtime_{\VAnofevents}}}\,. \label{eq:ap_fixedv0_var_lawtotalvarapplied} \end{equation} For the first term of Equation~\eqref{eq:ap_fixedv0_var_lawtotalvarapplied}, we can use that $\VAdiscreteveleend_{\VAnofevents}$ is a known and fixed value to write \begin{align} \VAexpec{\VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|\Delta \VAeventtime_{\VAnofevents}}}&=\VAexpec{\VAvar{\left.\sum_{{\VAeventno}=0}^{{\VAnofevents}-1}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|\Delta \VAeventtime_{\VAnofevents}}}\,.\label{eq:ap_fixedv0_var_lawtotalvar_term1} \end{align} Equation~\eqref{eq:ap_fixedv0_var_lawtotalvar_term1} conforms to the case in Section~\ref{subsec:ap_randomv0}, but with a time step of length $\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents}$, enabling the use of Equation~\eqref{eq:ap_randomv0_var} to find \begin{equation} \VAexpec{\VAvar{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|\Delta \VAeventtime_{\VAnofevents}}}=\VAexpec{2\frac{\epsilon^2}{\sigma^2}T\left(e^{-\frac{\sigma\left(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents}\right)}{\epsilon^2}}-1+\frac{\sigma(\Delta \VAtimevar-\Delta
\VAeventtime_{\VAnofevents})}{\epsilon^2}\right)}\,, \end{equation} \begin{align} &=\int_0^{\Delta \VAtimevar}2\frac{\epsilon^2}{\sigma^2}T\left(e^{-\frac{\sigma(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})}{\epsilon^2}}-1+\frac{\sigma(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})}{\epsilon^2}\right)\frac{\sigma}{\epsilon^2} e^{-\frac{\sigma\Delta \VAeventtime_{\VAnofevents}}{\epsilon^2}}\text{d}\Delta \VAeventtime_{\VAnofevents}\\ &\ \ \ \ \ \ \ +\left.2\frac{\epsilon^2}{\sigma^2}T\left(e^{-\frac{\sigma(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})}{\epsilon^2}}-1+\frac{\sigma(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})}{\epsilon^2}\right)\right|_{\Delta \VAeventtime_{\VAnofevents}=\Delta \VAtimevar}e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\\ &=2\frac{\epsilon^2}{\sigma^2}T\left(2e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}+\frac{\sigma\Delta \VAtimevar}{\epsilon^2}+\frac{\sigma\Delta \VAtimevar}{\epsilon^2} e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-2\right)\,.\label{eq:ap_fixedv0_lawoftotalvar_term1} \end{align} The second term in Equation~\eqref{eq:ap_fixedv0_var_lawtotalvarapplied} can be reworked as \begin{align} \VAvar{\VAexpec{\left.\sum_{{\VAeventno}=0}^{\VAnofevents}\frac{\VAdiscreteveleend_{\VAeventno}}{\epsilon}\Delta \VAeventtime_{\VAeventno}\right|\Delta \VAeventtime_{\VAnofevents}}}&=\VAvar{\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAeventtime_{\VAnofevents}+u(\Delta \VAtimevar-\Delta \VAeventtime_{\VAnofevents})}\\ &=\left(\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}-u\right)^2\VAvar{\Delta \VAeventtime_{\VAnofevents}}\,,\label{eq:ap_fixedv0_lawoftotalvar_term2} \end{align} by using the independence of the $\VAdiscreteveleend_{\VAeventno}$ and the fact that the time intervals sum to $\Delta \VAtimevar$.
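The integration leading to Equation~\eqref{eq:ap_fixedv0_lawoftotalvar_term1} can be verified numerically; the atom at $\Delta \VAeventtime_{\VAnofevents}=\Delta \VAtimevar$ contributes nothing because the integrand vanishes there (helper names are ours):

```python
import math

def term1_integral(dt, sigma, eps, temp, n=20_000):
    """Integrate the conditional variance of the sub-step of length
    dt - s against the density of the last flight time s (composite
    trapezoid rule); corresponds to the expectation before Eq.
    (ap_fixedv0_lawoftotalvar_term1)."""
    rate = sigma / eps**2
    prefac = 2.0 * temp * eps**2 / sigma**2
    def integrand(s):
        r = dt - s  # time before the last flight
        return prefac * (math.exp(-rate * r) - 1.0 + rate * r) * rate * math.exp(-rate * s)
    h = dt / n
    total = 0.5 * (integrand(0.0) + integrand(dt))
    total += sum(integrand(i * h) for i in range(1, n))
    return total * h

def term1_closed(dt, sigma, eps, temp):
    """Closed form of Eq. (ap_fixedv0_lawoftotalvar_term1)."""
    x = sigma * dt / eps**2
    return 2.0 * temp * eps**2 / sigma**2 * (2.0 * math.exp(-x) + x + x * math.exp(-x) - 2.0)
```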
For the variance of $\Delta \VAeventtime_{{\VAnofevents}}$, we use Equation~\eqref{eq:ap_fixedv0_dt0mean} and integration by parts to find \begin{align} \VAvar{\Delta \VAeventtime_{{\VAnofevents}}}&=\int_0^{\Delta \VAtimevar}\frac{\sigma\Delta \VAeventtime_{\VAnofevents}^2}{\epsilon^2} e^{-\frac{\sigma\Delta \VAeventtime_{\VAnofevents}}{\epsilon^2}}\text{d}\Delta \VAeventtime_{\VAnofevents}+\int_{\Delta \VAtimevar}^{\infty}\frac{\sigma\Delta \VAtimevar^2}{\epsilon^2} e^{-\frac{\sigma\Delta \VAeventtime_{\VAnofevents}}{\epsilon^2}}\text{d}\Delta \VAeventtime_{\VAnofevents}-\left(\frac{\epsilon^2}{\sigma}\left(1-e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\right)\right)^2\\ &=\frac{\epsilon^4}{\sigma^2}\left(1-2\frac{\sigma\Delta \VAtimevar}{\epsilon^2} e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-e^{-2\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\right)\,.\label{eq:ap_fixedv0_vardt0} \end{align} Using the result of Equation~\eqref{eq:ap_fixedv0_vardt0} in Equation~\eqref{eq:ap_fixedv0_lawoftotalvar_term2} yields an expression for the second term of Equation~\eqref{eq:ap_fixedv0_var_lawtotalvarapplied}.
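The variance of the truncated exponential, Equation~\eqref{eq:ap_fixedv0_vardt0}, admits the same kind of Monte Carlo check as the mean (an illustrative sketch; function names are ours):

```python
import math
import random

def var_last_flight(dt, sigma, eps):
    """Closed form of Eq. (ap_fixedv0_vardt0): variance of an
    Exp(sigma/eps**2) waiting time truncated at dt."""
    rate = sigma / eps**2
    x = rate * dt
    return (1.0 - 2.0 * x * math.exp(-x) - math.exp(-2.0 * x)) / rate**2

def var_last_flight_mc(dt, sigma, eps, n=300_000, seed=1):
    """Monte Carlo estimate of the same variance."""
    rng = random.Random(seed)
    rate = sigma / eps**2
    samples = [min(rng.expovariate(rate), dt) for _ in range(n)]
    mean = sum(samples) / n
    return sum((s - mean) ** 2 for s in samples) / n
```

For small $\Delta t$ the closed form behaves as $\frac{\sigma}{3\epsilon^2}\Delta t^3$, which is the source of the $\Delta t^3$ scaling of the conditional variance further on.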
Together with the first term, as expressed by Equation~\eqref{eq:ap_fixedv0_lawoftotalvar_term1}, we find the expression for the variance of a positional increment conditioned on the final velocity: \begin{multline} \VAvar{\Delta \VAdiscreteposeend}=2T\frac{\epsilon^2}{\sigma^2}\left(2e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}+\frac{\sigma\Delta \VAtimevar}{\epsilon^2}+\frac{\sigma\Delta \VAtimevar}{\epsilon^2} e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-2\right)\\ +\left(\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}-u\right)^2\frac{\epsilon^4}{\sigma^2}\left(1-2\frac{\sigma}{\epsilon^2}\Delta \VAtimevar e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}-e^{-2\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\right)\,.\label{eq:ap_fixedv0_var} \end{multline} In the diffusive limit, $\epsilon\rightarrow0$, Equation~\eqref{eq:ap_fixedv0_var} becomes equal to $2\frac{T}{\sigma}\Delta \VAtimevar$. This is in accordance with Equation~\eqref{eq:ap_modelsim_difflimit_sderandomwalk} and is identical to the diffusive limit of Equation~\eqref{eq:ap_randomv0_var}. For fixed $\epsilon$ and decreasing $\Delta \VAtimevar$, Equation~\eqref{eq:ap_fixedv0_var} now becomes \begin{equation} \left(T+\left(\VAdiscreteveleend_{\VAnofevents}-\epsilon u\right)^2\right)\frac{\sigma}{3\epsilon^4}\Delta \VAtimevar^3\,,\ \ \Delta \VAtimevar\rightarrow0\,. \end{equation} This limiting variance is of higher order in $\Delta \VAtimevar$ as $\Delta \VAtimevar\rightarrow0$ than that of Equation~\eqref{eq:ap_randomv0_var}, where the final velocity was not fixed. This means that the variance is lower with a fixed final velocity, which is to be expected, since fixing the final velocity removes part of the randomness.
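Both claimed limits of Equation~\eqref{eq:ap_fixedv0_var} can be verified numerically (helper names are ours; the expressions below transcribe the two terms of the variance formula):

```python
import math

def var_dx_conditioned(dt, sigma, eps, u, temp, v_final):
    """Conditional variance of the positional increment,
    Eq. (ap_fixedv0_var), with the final velocity v_final fixed."""
    x = sigma * dt / eps**2
    term1 = 2.0 * temp * eps**2 / sigma**2 \
        * (2.0 * math.exp(-x) + x + x * math.exp(-x) - 2.0)
    term2 = (v_final / eps - u) ** 2 * eps**4 / sigma**2 \
        * (1.0 - 2.0 * x * math.exp(-x) - math.exp(-2.0 * x))
    return term1 + term2

def var_dx_small_dt(dt, sigma, eps, u, temp, v_final):
    """Claimed leading-order behaviour for dt -> 0 at fixed eps."""
    return (temp + (v_final - eps * u) ** 2) * sigma * dt**3 / (3.0 * eps**4)
```

The first assertion below checks the $\Delta t\rightarrow0$ limit at fixed $\epsilon$; the second checks the diffusive limit $2\frac{T}{\sigma}\Delta t$ for small $\epsilon$.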
\section{Kinetic-diffusion scheme\label{sec:ap_newscheme}} In this section, we introduce the modified simulation strategy announced in Section~\ref{subsec:ap_modelsim_strategy}, in which the many free flights in the high-collisional regime are replaced by a single random walk step of the form given by Equation~\eqref{eq:ap_modelsim_strategy_randomwalk}. This replacement by a random walk step is expected to function well when the collision rate is very high, since an advection-diffusion process is the limit of the kinetic process in the high-collisional limit, as shown in Section~\ref{subsec:ap_modelsim_difflimit}. In lower-collisional regimes, on the other hand, an advection-diffusion description of the particle motion is invalid. We will design the algorithm in such a way that it also functions well in a low-collisional setting. A first important error introduced by moving diffusively with a random walk step instead of kinetically is that the next position incorrectly becomes normally distributed. A second important error is that a random walk falsely assumes the motion in subsequent time steps to be independent. When moving according to the kinetic model, subsequent time steps are correlated, because the final velocity in a time step features as the initial velocity in the next time step. To maintain the kinetic nature of the motion when the collisionality is low, we combine two different strategies in our algorithm. The first strategy consists of executing the first flight during a time step kinetically, correctly taking the final velocity of the previous time step as the initial velocity. Only if a collision occurs within the time step is the remainder, $\theta\leq\Delta \VAtimevar$, filled with a random walk step. This way, if the probability of a collision occurring in a time step is low, the particle will nearly always move according to the kinetic process, and only in a small fraction of the time steps partly diffusively.
The second strategy improves the situation further by incorporating the correlation with the next time step in the advection and diffusion coefficient of the random walk movement for a time $\theta$. To do so, the background parameters $\sigma$, $u$, and $T$ are assumed to be spatially homogeneous. Then, the results from Section~\ref{subsec:ap_fixedv0} apply, and the exact mean and variance of the kinetic process, conditioned on the final velocity equalling the initial velocity of the next time step, can be used. We mimic Equation~\eqref{eq:ap_modelsim_strategy_randomwalk}, but use the advection term \begin{equation} A_\epsilon\theta=\VAexpec{\Delta \VAdiscreteposeend}\,, \end{equation} with $\VAexpec{\Delta \VAdiscreteposeend}$ from Equation~\eqref{eq:ap_fixedv0_mean_result} and the diffusion term \begin{equation} \sqrt{2D_\epsilon\theta}=\sqrt{\VAvar{\Delta \VAdiscreteposeend}}\,, \end{equation} with $\VAvar{\Delta \VAdiscreteposeend}$ as in Equation~\eqref{eq:ap_fixedv0_var}. For the time step in these equations, we use the time $\theta$ during which the particle moves diffusively. For the other parameters $\sigma$, $u$, and $T$ that appear in Equations~\eqref{eq:ap_fixedv0_mean_result} and~\eqref{eq:ap_fixedv0_var}, we use the local values at the beginning of the diffusive motion. For the final velocity in the time step, we use a fresh sample, which is then used as the initial velocity in the next time step. To cope with highly heterogeneous $\sigma$, some modifications are presented in~\cite{mortier2019KDfusioncase}. The resulting algorithm is shown as Algorithm~\ref{alg:ap_kindifeNe}, which we refer to as KD.
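The strategy above can be sketched as a single KD time step in a spatially homogeneous background (a minimal, illustrative sketch: the Maxwellian is taken as $\mathcal{N}(\epsilon u, T)$, as in the error analysis below, and collision-free flights spanning several time steps are omitted):

```python
import math
import random

def kd_step(x, v, dt, sigma, eps, u, temp, rng):
    """One kinetic-diffusion step: kinetic flight until the first collision,
    then a random walk over the remainder theta, with mean and variance
    conditioned on a freshly sampled final velocity
    (Eqs. ap_fixedv0_mean_result and ap_fixedv0_var with dt -> theta)."""
    rate = sigma / eps**2                       # collision rate
    tau = rng.expovariate(rate)                 # collision-free flight time
    if tau >= dt:                               # no collision: fully kinetic step
        return x + (v / eps) * dt, v
    x += (v / eps) * tau                        # kinetic flight up to the collision
    theta = dt - tau                            # remaining, diffusive, time
    v_next = rng.gauss(eps * u, math.sqrt(temp))  # fresh final velocity
    y = rate * theta
    mean = u * theta + (v_next / eps - u) * (1.0 - math.exp(-y)) / rate
    var = 2.0 * temp * eps**2 / sigma**2 * (2.0 * math.exp(-y) + y + y * math.exp(-y) - 2.0) \
        + (v_next / eps - u) ** 2 * eps**4 / sigma**2 \
        * (1.0 - 2.0 * y * math.exp(-y) - math.exp(-2.0 * y))
    var = max(var, 0.0)                         # guard against float cancellation for tiny theta
    return x + rng.gauss(mean, math.sqrt(var)), v_next

def kd_step_moments(n=20_000, seed=3):
    """Sample mean and variance of one KD step with dt=sigma=eps=temp=1, u=0,
    starting from a Maxwellian initial velocity."""
    rng = random.Random(seed)
    xs = [kd_step(0.0, rng.gauss(0.0, 1.0), 1.0, 1.0, 1.0, 0.0, 1.0, rng)[0]
          for _ in range(n)]
    m = sum(xs) / n
    return m, sum((s - m) ** 2 for s in xs) / n
```

In the homogeneous setting the sampled variance should be close to the kinetic value $2\frac{\epsilon^2}{\sigma^2}T\left(e^{-\sigma\Delta t/\epsilon^2}-1+\sigma\Delta t/\epsilon^2\right)\approx0.736$ for these parameters; the published algorithm additionally handles flights spanning several time steps (the modulo operation in Algorithm~\ref{alg:ap_kindifeNe}) and heterogeneous backgrounds, both omitted here.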
\newcommand\mycommfont[1]{\footnotesize\ttfamily\textcolor{blue}{#1}} \begin{algorithm}[H] \caption{A kinetic-diffusion simulation up to time $\bar{t}=N\Delta \VAtimevar$\label{alg:ap_kindifeNe}} \begin{algorithmic}[1] \Function{DiffusionKinetic\_KD}{$\VAdiscreteposeend,\VAdiscreteveleend,\Delta \VAtimevar,\sigma(x),\epsilon,\mathcal{M}(v;x),u(x),T(x)$} \State $t\leftarrow0$ \While{$t<\bar{t}$} \State $\Delta \VAeventtime\gets$ \Call{SampleCollision}{$\VAdiscreteposeend,\VAdiscreteveleend,\sigma(\VAdiscreteposeend),\epsilon$}\Comment{see page~\pageref{plaats:ap_kin_eventtime_function}} \State $\tau\gets$\Call{min}{$\Delta \VAeventtime,\bar{t}-t$}\Comment{determine the kinetically moved time} \State $\VAdiscreteposeend\gets \VAdiscreteposeend+\frac{\VAdiscreteveleend}{\epsilon}\tau$\Comment{kinetic part of the motion} \If{$\Delta \VAeventtime<\bar{t}-t$}\Comment{check if a collision occurred} \State $\VAdiscreteveleend\gets\mathcal{M}(v;\VAdiscreteposeend)$\Comment{sample the velocity for the next time step} \State $\theta\gets \Delta \VAtimevar-(\tau$ \textproc{mod} $\Delta \VAtimevar)$\Comment{determine the remainder of the time step} \State $\VAdiscreteposeend\gets \VAdiscreteposeend+\left.\mathcal{N}\left(\text{eq.}\eqref{eq:ap_fixedv0_mean_result},\text{eq.}\eqref{eq:ap_fixedv0_var}\right)\right|_{\begin{subarray}{l}\Delta \VAtimevar=\theta,\VAdiscreteveleend_{\VAnofevents}=\VAdiscreteveleend,\epsilon=\epsilon\\ \sigma=\sigma(\VAdiscreteposeend),u=u(\VAdiscreteposeend),T=T(\VAdiscreteposeend)\end{subarray}}$\label{alg:ap_kindifeNe_D}\Comment{random walk} \EndIf \State $t\gets t+\Delta \VAtimevar$ \EndWhile \State \Return{$\VAdiscreteposeend,\VAdiscreteveleend$} \EndFunction \end{algorithmic} \end{algorithm} When the collisionality $\frac{\sigma}{\epsilon^2}\Delta \VAtimevar$ is large, Algorithm~\ref{alg:ap_kindifeNe} has an enormous computational advantage over Algorithm~\ref{alg:ap_kin} in that it does not require the execution of an expected $\frac{\sigma}{\epsilon^2}\Delta \VAtimevar$
collisions per time step, but at most one. In low-collisional regimes, the algorithm collapses to the standard Monte Carlo method, meaning that approximately the same number of collisions occurs. In the next two sections, we present an analysis of the error of the KD scheme. The result of Section~\ref{sec:ap_limnul} will show that Algorithm~\ref{alg:ap_kindifeNe} converges to the kinetic algorithm of Section~\ref{sec:ap_kinetic} when the scaling parameter $\epsilon$ is finite and the time step decreases. In Section~\ref{sec:ap_liminf}, we will show that the error vanishes for $\epsilon\rightarrow0$ as well. Then, in Section~\ref{sec:ap_num}, the low error and the reduction in computational time are illustrated numerically. \section{Consistency and convergence analysis for finite values of the scaling parameter\label{sec:ap_limnul}} In this section, we analyze the error made by replacing a kinetic simulation with the algorithm proposed in Section~\ref{sec:ap_newscheme} when the scaling parameter $\epsilon$ is finite. In this regime, the consistency and convergence analyses require only an analysis of how the error depends on the time step $\Delta \VAtimevar$. We conduct this analysis for a spatially homogeneous setting, where $\sigma(x)\equiv\sigma$, $u(x)\equiv u$, and $T(x)\equiv T$. In Section~\ref{sec:ap_liminf}, we will treat the diffusive limit, where $\epsilon\rightarrow0$, separately. We evaluate the error in terms of the Wasserstein metric~\cite{villani2008optimaltransportWasserstein}. The Wasserstein distance between two distributions $g(x)$ and $h(x)$ is the minimal average distance over which mass has to be moved to obtain one distribution ($g(x)$) from the other ($h(x)$).
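Since the distributions compared here are one-dimensional, the Wasserstein-1 distance between two equal-size particle ensembles can be computed by matching sorted samples (a standard property of one-dimensional optimal transport; this helper is illustrative and not part of the text):

```python
def w1_empirical(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equally sized
    samples: the optimal transport plan matches the sorted values."""
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Note that shifting one sample by a constant $c$ changes this distance by exactly $c$, whereas an L$_p$ distance between the densities no longer grows once the supports are disjoint, however far apart they are.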
The Wasserstein distance can be expressed as the Wasserstein norm of the difference between the distributions $g(x)-h(x)$, \begin{equation} \zelfkalk{g(x)-h(x)}=\inf_{\mathcal{J}(g,h)}\iint|x-y|j(x,y)\text{d}x\text{d}y\,,\label{eq:ap_intro_W1def} \end{equation} with $\mathcal{J}(g,h)$ the class of all joint probabilities $j(x,y)$ with marginal distributions $g(x)$ and $h(y)$. An important feature of the Wasserstein distance is that it takes into account the distance over which mass differs, which is relevant given the physical meaning of particle distributions. Other commonly used distances, such as those derived from the L$_p$-norms, only penalize the mass difference itself, regardless of how far away the mass of the other distribution is located. We will compare the exact distribution at time $t^{(n+1)}=(n+1)\Delta \VAtimevar$, denoted as $f(t^{(n+1)})$, with the distribution $f^{(n+1)}$ obtained by the kinetic-diffusion simulation. We write $\mathcal{S}_{\Delta \VAtimevar}$ for the operator that maps a distribution to its increment over a time interval of length $\Delta \VAtimevar$ according to the kinetic equation, Equation~\eqref{eq:ap_kinetic_integrodiff}, and $\tilde{\mathcal{S}}_{\Delta \VAtimevar}$ for the corresponding increment operator of the hybridized KD simulation scheme with time step $\Delta \VAtimevar$. With these notations, we can write \begin{equation} f(t^{(n+1)})=f(t^{(n)})+\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))\quad\text{and}\quad f^{(n+1)}=f^{(n)}+\tilde{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})\,.
\end{equation} The difference between the two is thus \begin{equation} f(t^{(n+1)})-f^{(n+1)}=f(t^{(n)})-f^{(n)}+\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))-\tilde{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})\,.\label{eq:ap_limnul_distridif_simple} \end{equation} To the right hand side of~\eqref{eq:ap_limnul_distridif_simple}, we add and subtract the term $\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})$, and we can then bound the Wasserstein error on the distribution via the triangle inequality as \begin{multline} \zelfkalkdun{f(t^{(n+1)})-f^{(n+1)}}\leq\underbrace{\zelfkalkdun{f(t^{(n)})+\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))-\left(f^{(n)}+\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})\right)}}_\text{error propagation}\\ +\underbrace{\zelfkalkdun{\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})-\tilde{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})}}_\text{local error}\,.\label{eq:ap_limnul_errorpartsW1} \end{multline} The error at time $t^{(n+1)}$ thus consists of the error at time $t^{(n)}$ that is propagated over the time step, augmented by the local error due to using the approximate KD process instead of the exact kinetic process. In the remainder of this section, we will use Equation~\eqref{eq:ap_limnul_errorpartsW1} to derive a bound on the Wasserstein error at some final time $\bar{t}=N\Delta \VAtimevar$ for finite $\epsilon$. We will first treat the last term, expressing the local error due to the difference in distribution when a kinetic process is replaced by a diffusion step, in Section~\ref{subsec:ap_limnul_KD_distrerror}. Then, we consider how the error propagates through different time steps and show the resulting error at time $\bar{t}=N\Delta \VAtimevar$ in Section~\ref{subsec:ap_limnul_KD_propagation}.
\subsection{Local error when the scaling parameter is finite\label{subsec:ap_limnul_KD_distrerror}} The KD operator $\tilde{\mathcal{S}}_{\Delta \VAtimevar}$ differs from the kinetic operator $\mathcal{S}_{\Delta \VAtimevar}$ only when a collision occurs in the time step. Hence, the paths without a collision do not contribute to the term $\zelfkalk{\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})-\VAapkindifoperator_{\Delta \VAtimevar}(f^{(n)})}$ of Equation~\eqref{eq:ap_limnul_errorpartsW1}. We thus only need to consider the paths conditioned on the occurrence of a collision, which we indicate in our operators with a superscript $>0$ as \begin{align} \VAaprealoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})&=\mathcal{S}_{\Delta \VAtimevar}(f^{(n)}|\#\text{collisions in }[t^{(n)},t^{(n+1)}]>0)\\ \VAapKDoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})&=\VAapkindifoperator_{\Delta \VAtimevar}(f^{(n)}|\#\text{collisions in }[t^{(n)},t^{(n+1)}]>0)\,. \end{align} The local error term can thus be written as \begin{multline} \zelfkalkdun{\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})-\VAapkindifoperator_{\Delta \VAtimevar}(f^{(n)})}\\ =\text{P}\left(\#\text{collisions in }\Delta \VAtimevar>0\right)\zelfkalkdun{\VAaprealoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})}\,.\label{eq:ap_limnul_KD_distrerr_condD_real} \end{multline} In the remainder of this section, we will first find an expression for the probability on the right hand side of Equation~\eqref{eq:ap_limnul_KD_distrerr_condD_real} and then bound the second factor to obtain a bound for the local error. At the end of this section, we will numerically show that this bound is not sharp in general.
\paragraph{Split the local error based on the number of events.} Paths without a collision constitute a fraction $e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}=1-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}+\mathcal{O}\left(\Delta \VAtimevar^2\right)$, as $\Delta \VAtimevar\rightarrow0$, of the population, with $\mathcal{O}$ the Landau symbol. Hence, in the limit $\Delta \VAtimevar\rightarrow0$, we have \begin{multline} \zelfkalkdun{\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})-\VAapkindifoperator_{\Delta \VAtimevar}(f^{(n)})}\\ =\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\left(1+\mathcal{O}\left(\Delta \VAtimevar\right)\right)\zelfkalkdun{\VAaprealoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})}\,.\label{eq:ap_limnul_KD_distrerr_condD} \end{multline} To bound the factor $\zelfkalkdun{\VAaprealoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f^{(n)})}$ in Equation~\eqref{eq:ap_limnul_KD_distrerr_condD}, we first introduce the additional operator $\VAaprealoperator^{=1}_{\Delta \VAtimevar}$, indicating a kinetic process with exactly one collision.
Via a triangle inequality, we can bound Equation~\eqref{eq:ap_limnul_KD_distrerr_condD} as \begin{multline} \zelfkalkdun{\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAapkindifoperator_{\Delta \VAtimevar}(f(t^{(n)}))}\\ \leq\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\left(1+\mathcal{O}\left(\Delta \VAtimevar\right)\right)\zelfkalkdun{\VAaprealoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))}\\ +\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\left(1+\mathcal{O}\left(\Delta \VAtimevar\right)\right)\zelfkalkdun{\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))}\,.\label{eq:ap_limnul_KD_1colsplitup} \end{multline} \paragraph{The first term of Equation~\eqref{eq:ap_limnul_KD_1colsplitup}.} $\zelfkalkdun{\VAaprealoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))}$ captures differences between a kinetic process with at least one collision and a kinetic process with exactly one collision. Only if a second collision takes place does the effect of these operators differ. If the first collision takes place after a time $\tau=\Delta \VAtimevar-\theta$, a second collision takes place with a probability $\frac{\sigma\theta}{\epsilon^2}$ and the expected difference in distance in such a case is of order $\theta$ as well, resulting in a contribution proportional to $\Delta \VAtimevar\theta^2$. With $\theta\leq\Delta \VAtimevar$, we can write \begin{equation} \frac{\sigma\Delta \VAtimevar}{\epsilon^2}\left(1+\mathcal{O}\left(\Delta \VAtimevar\right)\right)\zelfkalkdun{\VAaprealoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))}\rightarrow\mathcal{O}(\Delta \VAtimevar^3),\quad\Delta \VAtimevar\rightarrow0\,, \end{equation} for the first term of Equation~\eqref{eq:ap_limnul_KD_1colsplitup}.
In the remainder of this section, we will derive a bound for the second term of the right hand side of Equation~\eqref{eq:ap_limnul_KD_1colsplitup}, which will turn out to be of order $2.5$ in $\Delta \VAtimevar$ and will be the dominant term of Equation~\eqref{eq:ap_limnul_KD_1colsplitup}. \paragraph{Conditioning to bound the second term of Equation~\eqref{eq:ap_limnul_KD_1colsplitup}.} The second term on the right hand side of Equation~\eqref{eq:ap_limnul_KD_1colsplitup} expresses the difference between a kinetic process with exactly one collision and a KD process in which exactly one collision takes place. Until this single collision, both operators behave identically, but after this collision, the kinetic operator $\VAaprealoperator^{=1}_{\Delta \VAtimevar}$ lets the particle move with a newly sampled velocity, whereas the KD operator $\VAapKDoperator^{>0}_{\Delta \VAtimevar}$ moves the particle diffusively, conditioned on a newly sampled final velocity. To bound the Wasserstein distance between $\VAaprealoperator^{=1}_{\Delta \VAtimevar}$ and $\VAapKDoperator^{>0}_{\Delta \VAtimevar}$, we restrict the joint distribution in Equation~\eqref{eq:ap_intro_W1def} to a subset of all joint distributions $\mathcal{J}(g,h)$ by coupling the paths in two ways. This conditioning can thus lead to an overestimate of the error. The first coupling is based on the initial state at the beginning of the time step $\left(x_n,v_n\right)=\left(x(n\Delta \VAtimevar),v(n\Delta \VAtimevar)\right)$, denoted compactly as $s_n$, and the remaining time $\theta_n=\Delta \VAtimevar-\Delta\tau_{\VAeventnotilltimestep{n}+1|n}$, where the time index is as defined in Figure~\ref{fig:ap_eventtime_tijdsstap}. The resulting bound on the error will decrease with decreasing $\Delta \VAtimevar$, proving that the KD simulation is consistent with the kinetic model. At the end of this section, we will numerically show that the actual error is significantly lower.
The second coupling strategy is based on the final velocity in the time step, $\nu_n=v_{{n+1}}$. This coupling is pertinent when we consider the error propagation in Section~\ref{subsec:ap_limnul_KD_propagation}, since a coupling based on significantly different final velocities would have the particles drift apart during a time step and would generally not correspond to the minimizer in Equation~\eqref{eq:ap_intro_W1def}. We denote the resulting distance for the coupled subset of particles as \begin{equation} \zelfkalkdun{\left.\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))\right|s_n,\theta_n,\nu_n}\,. \end{equation}The resulting restriction to a subset of the possible joint probabilities of Equation~\eqref{eq:ap_intro_W1def} bounds the second term of the right hand side of Equation~\eqref{eq:ap_limnul_KD_1colsplitup} as \begin{align} \zelfkalkdun{\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))}\!\leq\!\mathbb{E}\!\left[\!\zelfkalkdun{\!\left.\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))\right|\!s_n,\theta_n,\nu_n}\!\right]\!\!.\label{eq:ap_KD_limnul_conditioningbound} \end{align} This final inequality can also be interpreted as an application of the subadditivity property of the Wasserstein distance \cite[p.~94]{villani2008optimaltransportWasserstein}. The remainder of this section is devoted to finding an analytical expression for $\mathbb{E}\!\left[\zelfkalkdun{\left.\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))\right|\!s_n,\theta_n,\nu_n}\right]$. 
\paragraph{Determining $\zelfkalkdun{\left.\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))\right|s_n,\theta_n,\nu_n}$.} Conditioning on $s_n$ and $\theta_n$ also implies conditioning on the position at which the first collision takes place, which is $x_n+v_n(\Delta \VAtimevar-\theta_n)$. Furthermore, conditioning on the final velocity $\nu_n$ means that the additional motion $\Delta X$ by the kinetic operator conditioned on only one collision taking place, $\VAaprealoperator^{=1}_{\Delta \VAtimevar}$, is also fixed and equals $\frac{\nu_n}{\epsilon}\theta_n$. The additional motion from the position $x_n+v_n(\Delta \VAtimevar-\theta_n)$ by the KD operator is a normally distributed step with mean and variance according to Equations~\eqref{eq:ap_fixedv0_mean_result} and~\eqref{eq:ap_fixedv0_var}. A comparison between the two comes down to a comparison between a Dirac delta distribution $\delta_{\frac{\nu_n}{\epsilon}\theta_n}$ with probability density function \begin{equation} \delta\left(\Delta\VAdiscreteposeend-\frac{\nu_n}{\epsilon}\theta_n\right)\label{eq:ap_limnul_KD_dirac} \end{equation} and the normal distribution $\mathcal{N}\left(\text{Eq.}\eqref{eq:ap_fixedv0_mean_result},\text{Eq.}\eqref{eq:ap_fixedv0_var}\right)$ with probability density function \begin{multline} \frac{1}{\sqrt{\frac{2\pi}{3}\left((\frac{\nu_n}{\epsilon}-u)^2+\frac{T}{\epsilon^2}\right)\frac{\sigma\theta_n^3}{\epsilon^2}\left(1+\mathcal{O}\left(\theta_n\right)\right)}}e^{-\frac{\left(\Delta\VAdiscreteposeend-\frac{\nu_n}{\epsilon}\theta_n+\mathcal{O}(\theta_n^2)\right)^2}{\frac{2}{3}\left((\frac{\nu_n}{\epsilon}-u)^2+\frac{T}{\epsilon^2}\right)\frac{\sigma\theta_n^3}{\epsilon^2}\left(1+\mathcal{O}\left(\theta_n\right)\right)}},\ \ \Delta \VAtimevar\rightarrow0\,.\label{eq:ap_limnul_KD_normalinlim} \end{multline} To establish the Wasserstein distance between the Dirac delta of Equation~\eqref{eq:ap_limnul_KD_dirac} and the normal
distribution of Equation~\eqref{eq:ap_limnul_KD_normalinlim}, we first bound it via a triangle inequality, \begin{equation} \zelfkalkdun{\left.\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))\right|\!s_n,\theta_n,\nu_n}\!=\!\zelfkalkdun{\delta_{\frac{\nu_n}{\epsilon}\theta_n}\!\!-\!\mathcal{N}\!\left(\text{Eq.}\eqref{eq:ap_fixedv0_mean_result},\text{Eq.}\eqref{eq:ap_fixedv0_var}\right)} \end{equation} \begin{equation} \hspace{2cm}\leq\zelfkalkdun{\delta_{\frac{\nu_n}{\epsilon}\theta_n}\!\!-\!\delta_{\text{Eq.}\eqref{eq:ap_fixedv0_mean_result}}}+\zelfkalkdun{\delta_{\text{Eq.}\eqref{eq:ap_fixedv0_mean_result}}\!\!-\!\mathcal{N}\left(\text{Eq.}\eqref{eq:ap_fixedv0_mean_result},\text{Eq.}\eqref{eq:ap_fixedv0_var}\right)}\,,\label{eq:ap_limnul_KD_triangle2} \end{equation} where we use a Dirac delta at the mean of Equation~\eqref{eq:ap_limnul_KD_normalinlim} as an additional distribution. The Wasserstein distance between the Dirac delta of Equation~\eqref{eq:ap_limnul_KD_dirac} and a Dirac delta at the mean of Equation~\eqref{eq:ap_limnul_KD_normalinlim}, the first term on the right hand side of Equation~\eqref{eq:ap_limnul_KD_triangle2}, is of size $\mathcal{O}(\theta_n^2)$, $\theta_n\rightarrow0$. The Wasserstein distance of a normal distribution to a Dirac delta positioned at its mean simply equals the expected distance of the normal distribution to its mean, or $\sqrt{\frac{2}{\pi}}$ times the standard deviation of the normal distribution.
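The fact that the Wasserstein distance between a normal distribution and a Dirac delta at its mean equals $\sqrt{2/\pi}$ times the standard deviation is easy to check numerically (an illustrative sketch; function names are ours):

```python
import math
import random

def w1_normal_to_delta(std):
    """W1 distance between N(mu, std**2) and a Dirac delta at mu:
    the mean absolute deviation of the normal, sqrt(2/pi)*std."""
    return math.sqrt(2.0 / math.pi) * std

def mad_normal_mc(std, n=400_000, seed=2):
    """Monte Carlo estimate of E|X - mu| for X ~ N(mu, std**2)."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, std)) for _ in range(n)) / n
```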
For the distribution of Equation~\eqref{eq:ap_limnul_KD_normalinlim}, the Wasserstein distance thus becomes \begin{equation} \zelfkalkdun{\delta_{\text{Eq.}\eqref{eq:ap_fixedv0_mean_result}}\!-\!\mathcal{N}\!\left(\text{Eq.}\eqref{eq:ap_fixedv0_mean_result},\text{Eq.}\eqref{eq:ap_fixedv0_var}\right)}=\!\sqrt{\frac{2}{3\pi}\!\left(\!\left(\frac{\nu_n}{\epsilon}\!-\!u\right)^2\!\!\!+\!\frac{T}{\epsilon^2}\!\right)\!\frac{\sigma\theta_n^3}{\epsilon^2}}\!+\!\mathcal{O}\!\left(\!\sqrt{\theta_n^5}\right)\!\,.\label{eq:ap_limnul_KD_W1_NvsD} \end{equation} Since the $\mathcal{O}\left(\theta^2\right)$ correction due to the first term of Equation~\eqref{eq:ap_limnul_KD_triangle2} does not alter the dominant term in Equation~\eqref{eq:ap_limnul_KD_W1_NvsD}, Equation~\eqref{eq:ap_limnul_KD_W1_NvsD} is an asymptotic bound for $\zelfkalkdun{\left.\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))\right|\!s_n,\theta_n,\nu_n}$. \paragraph{Determining the right hand side of Equation~\eqref{eq:ap_KD_limnul_conditioningbound}.} Taking the expected value over possible values of $s_n$, $\theta_n$, and $\nu_n$ of the right hand side term of Equation~\eqref{eq:ap_limnul_KD_W1_NvsD} results in a bound on $\zelfkalkdun{\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))}$ as stated by Equation~\eqref{eq:ap_KD_limnul_conditioningbound}. 
The right hand side of Equation~\eqref{eq:ap_limnul_KD_W1_NvsD} does not depend on $s_n$, and $\theta_n$ and $\nu_n$ are independent, which simplifies the calculation of the expected value to \begin{multline} \mathbb{E}_{s_n,\theta_n,\nu_n}\!\!\left[\sqrt{\frac{2}{3\pi}\left(\left(\frac{\nu_n}{\epsilon}-u\right)^2+\frac{T}{\epsilon^2}\right)\frac{\sigma\theta_n^3}{\epsilon^2}}\right]\\ =\sqrt{\frac{2}{3\pi}}\mathbb{E}_{\nu_n}\!\!\left[\sqrt{\left(\frac{\nu_n}{\epsilon}-u\right)^2+\frac{T}{\epsilon^2}}\right]\mathbb{E}_{\theta_n}\!\!\left[\sqrt{\frac{\sigma\theta_n^3}{\epsilon^2}}\right]\,,\label{eq:ap_limnul_KD_expectovercond} \end{multline} where the expectation over $\theta_n$ is conditioned on at least one collision taking place, resulting in the formula \begin{align} \mathbb{E}_{\theta_n}\!\!\left[\sqrt{\frac{\sigma\theta_n^3}{\epsilon^2}}\right]&=\frac{\int_0^{\Delta \VAtimevar}\sqrt{\frac{\sigma^3(\Delta \VAtimevar-\tau)^3}{\epsilon^6}} e^{-\frac{\sigma\tau}{\epsilon^2}}\text{d}\tau}{1-e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}}\\ &=\frac{\int_0^{\Delta \VAtimevar}\sqrt{\frac{\sigma^3(\Delta \VAtimevar-\tau)^3}{\epsilon^6}}\left(1+\mathcal{O}(\Delta \VAtimevar)\right)\text{d}\tau}{\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\left(1+\mathcal{O}(\Delta \VAtimevar)\right)},\quad\Delta \VAtimevar\rightarrow0,\quad\tau\leq\Delta \VAtimevar\\ &=\frac{2}{5}\sqrt{\frac{\sigma\Delta \VAtimevar^3}{\epsilon^2}}\left(1+\mathcal{O}(\Delta \VAtimevar)\right),\quad\Delta \VAtimevar\rightarrow0\,.\label{eq:ap_limnul_KD_expectovertheta_res} \end{align} The other expectation in Equation~\eqref{eq:ap_limnul_KD_expectovercond} is over the final velocity $\nu_n$, which is normally distributed with mean $\epsilon u$ and variance $T$.
Using the definition of the expectation and a simple change of variable, $\upsilon=\frac{\nu_n-\epsilon u}{\sqrt{T}}$, yields \begin{align} \mathbb{E}_{\nu_n}\!\!\left[\sqrt{\left(\frac{\nu_n}{\epsilon}-u\right)^2+\frac{T}{\epsilon^2}}\right]&=\int\sqrt{\left(\frac{\nu_n}{\epsilon}-u\right)^2+\frac{T}{\epsilon^2}}\frac{1}{\sqrt{2\pi T}}e^{-\frac{(\nu_n-\epsilon u)^2}{2T}}\text{d}\nu_n\\ &=\sqrt{\frac{T}{\epsilon^2}}\underbrace{\int\sqrt{\upsilon^2+1}\frac{1}{\sqrt{2\pi}}e^{-\frac{\upsilon^2}{2}}\text{d}\upsilon}_{\approx 1.3545 \text{ (numerically)}}\,.\label{eq:ap_limnul_KD_expectovernu_simplifiedintegral} \end{align} The remaining integral in Equation~\eqref{eq:ap_limnul_KD_expectovernu_simplifiedintegral} can be numerically computed to equal $1.3545$. Substituting the results from Equations~\eqref{eq:ap_limnul_KD_expectovertheta_res} and~\eqref{eq:ap_limnul_KD_expectovernu_simplifiedintegral} in Equation~\eqref{eq:ap_limnul_KD_expectovercond} gives, according to Equation~\eqref{eq:ap_KD_limnul_conditioningbound}, the asymptotic bound \begin{equation} \zelfkalkdun{\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))}\leq0.24959\sqrt{\frac{T\sigma\Delta \VAtimevar^3}{\epsilon^4}}+\mathcal{O}\left(\sqrt{\Delta \VAtimevar^5}\right),\quad\Delta \VAtimevar\rightarrow0\,.\label{eq:ap_limnul_KD_distrerr_cond} \end{equation} \paragraph{Resulting bound.} As stated before, the bound on $\zelfkalkdun{\VAaprealoperator^{=1}_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\VAapKDoperator^{>0}_{\Delta \VAtimevar}(f(t^{(n)}))}$ dominates the right hand side of Equation~\eqref{eq:ap_limnul_KD_1colsplitup}, since the first term is at least third order in $\Delta \VAtimevar$.
We thus find, for the local error term of Equation~\eqref{eq:ap_limnul_errorpartsW1}, the bound \begin{equation} \zelfkalkdun{\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))-\VAapkindifoperator_{\Delta \VAtimevar}(f(t^{(n)}))}\leq0.24959\sqrt{\frac{T\sigma^3\Delta \VAtimevar^5}{\epsilon^8}}+\mathcal{O}\left(\sqrt{\Delta \VAtimevar^7}\right),\quad\Delta \VAtimevar\rightarrow0\,. \label{eq:ap_limnul_KD_distrerr_res} \end{equation} \paragraph{Numerical validation.} We have numerically validated the bound of Equation~\eqref{eq:ap_limnul_KD_distrerr_res} on the new error introduced during a time step. To do so, we assume that all particles have an identical initial state $s_n=(x_n,v_n)$, and we compare the outcome of the kinetic and KD simulations conditioned on the final velocity $\nu_n$, with $\sigma=\epsilon=T=1$ and $u=1$. Then, for different $v_n$, we see in Figure~\ref{fig:ap_num_nt1_KD_cond} that the analytical bound of Equation~\eqref{eq:ap_limnul_KD_distrerr_res} overestimates the actual error. This difference originates from the conditioning on the remaining time $\theta_n$, which is present in the analysis but not in this experiment. If we also include the conditioning on $\theta_n$ in the experiment, we achieve a tight fit with the bound, as also shown in Figure~\ref{fig:ap_num_nt1_KD_cond}. We thus conclude that the error bound of Equation~\eqref{eq:ap_limnul_KD_distrerr_res} is valid, but not tight. In the next section, we show the error at a final time $\bar{t}=N\Delta \VAtimevar$ based on this error bound, and conclude convergence to the kinetic process as $\Delta \VAtimevar\rightarrow0$.
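The numerical ingredients of the bound above can be checked with a few lines of quadrature. The sketch below (with $\sigma=\epsilon=1$ and a composite Simpson rule, an illustration rather than the authors' own computation) verifies the small-$\Delta\VAtimevar$ asymptotics of Equation~\eqref{eq:ap_limnul_KD_expectovertheta_res}, the value $1.3545$ of the Gaussian integral in Equation~\eqref{eq:ap_limnul_KD_expectovernu_simplifiedintegral}, and the fact that $\sqrt{2/(3\pi)}\cdot\frac{2}{5}\cdot 1.3545$ reproduces the constant $0.24959$.

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# E_theta[sqrt(sigma*theta_n^3/eps^2)] for sigma = eps = 1 approaches
# (2/5)*sqrt(dt^3) as dt -> 0, with a relative deviation of order dt
for dt in (0.1, 0.01):
    num = simpson(lambda t: t**1.5 * math.exp(-t), 0.0, dt)
    val = num / -math.expm1(-dt)          # divide by 1 - exp(-dt)
    assert abs(val / (0.4 * dt**1.5) - 1.0) < 0.5 * dt

# the Gaussian integral E[sqrt(1 + upsilon^2)] for a standard normal upsilon ...
g = simpson(lambda u: math.sqrt(1 + u * u) * math.exp(-u * u / 2)
            / math.sqrt(2 * math.pi), -10.0, 10.0)
assert abs(g - 1.3545) < 1e-3

# ... combines with sqrt(2/(3*pi)) and 2/5 into the constant 0.24959
assert abs(math.sqrt(2 / (3 * math.pi)) * 0.4 * g - 0.24959) < 1e-4
```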
\begin{figure}[H] \centering \deffiguren/{figuren/} \def\ref{eq:ap_limnul_KD_distrerr_res}{\ref{eq:ap_limnul_KD_distrerr_res}} \def\ref{eq:ap_liminf_KD_distrerror}{\ref{eq:ap_liminf_KD_distrerror}} \begin{minipage}{.65\textwidth} \centering \resizebox{\textwidth}{!}{\includetikz{figuren/}{ap_homogeneous_singlestep_KD_correct}} \end{minipage} \captionof{figure}{The Wasserstein distance between the distribution of the kinetic process and of the KD process after conditioning on the final velocity and the time of the diffusion process as a function of $\Delta \VAtimevar$, conducted with 200,000 particle paths with at least one collision.} \label{fig:ap_num_nt1_KD_cond} \end{figure}% \subsection{Error propagation and total error when the scaling parameter is finite\label{subsec:ap_limnul_KD_propagation}} In this section, we first find a bound on the error akin to Equation~\eqref{eq:ap_limnul_errorpartsW1}, but with conditioning on the velocities at the end of the time steps. This conditioning results in an overestimation of the error, but it enables the computation of the error propagation from time $t^{(n)}$ to time $t^{(n+1)}$. Then, we combine the result with the outcome of the local error analysis in Section~\ref{subsec:ap_limnul_KD_distrerror} to find the total error at the simulation end time $\bar{t}=N\Delta \VAtimevar$.
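Before carrying out this programme, the arithmetic of the final step can be previewed: summing a per-step local error of size $C\Delta\VAtimevar^{5/2}$, as found in Section~\ref{subsec:ap_limnul_KD_distrerror}, over $N=\bar{t}/\Delta\VAtimevar$ steps yields a total of size $C\bar{t}\Delta\VAtimevar^{3/2}$. A minimal sketch, with $C$ standing in for the constant $0.24959\sqrt{T\sigma^3/\epsilon^8}$:

```python
# Accumulating the per-step local error C*dt**2.5 over N = tbar/dt steps
# reproduces the global bound C*tbar*dt**1.5 (one power of dt is lost).
C, tbar = 0.24959, 1.0   # C stands in for 0.24959*sqrt(T*sigma**3/eps**8)
for dt in (0.1, 0.01, 0.001):
    N = round(tbar / dt)
    err = 0.0
    for _ in range(N):
        err += C * dt**2.5           # one local-error contribution per step
    assert abs(err - C * tbar * dt**1.5) < 1e-12
```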
Since Equation~\eqref{eq:ap_limnul_errorpartsW1} also holds when conditioning on the velocities $\{v_n\}_{n=0}^{N}=\{v(n\Delta \VAtimevar)\}_{n=0}^{N}$ at the discrete times $\{n\Delta \VAtimevar\}_{n=0}^{N}$, we can write \begin{multline} \mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n+1)})-f^{(n+1)}\right|\{v_{{n}}\}_{n=0}^{N}}\right]\\ \leq\underbrace{\mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n)})+\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))-\left(f^{(n)}+\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})\right)\right|\{v_{{n}}\}_{n=0}^{N}}\right]}_\text{error propagation}\\ +\underbrace{\mathbb{E}\left[\zelfkalkdun{\left.\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})-\tilde{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})\right|\{v_{{n}}\}_{n=0}^N}\right]}_\text{local error}\,,\label{eq:ap_limnul_errorpartsW1_condVs} \end{multline} and we note that the subadditivity property \begin{equation} \zelfkalkdun{f(t^{(n)})-f^{(n)}}\leq\mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n)})-f^{(n)}\right|\{v_{{n}}\}_{n=0}^{N}}\right] \end{equation} holds for all $n$. The error propagation term from Equation~\eqref{eq:ap_limnul_errorpartsW1_condVs} can easily be bounded thanks to the above conditioning on the velocities, since the kinetic operator has an identical effect on populations with the same initial and final velocities, and thus \begin{multline} \mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n)})+\mathcal{S}_{\Delta \VAtimevar}(f(t^{(n)}))-\left(f^{(n)}+\mathcal{S}_{\Delta \VAtimevar}(f^{(n)})\right)\right|\{v_{{n}}\}_{n=0}^{N}}\right]\\ \leq\mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n)})-f^{(n)}\right|\{v_{{n}}\}_{n=0}^{N}}\right].
\end{multline} In Section~\ref{subsec:ap_limnul_KD_distrerror}, we derived a bound for the local error by conditioning on the final velocity and assuming an identical initial state, and thus in particular after conditioning on the velocity at time $t^{(n)}=n\Delta \VAtimevar$. We can therefore use the bound from Equation~\eqref{eq:ap_limnul_KD_distrerr_res} as a bound for the local error term in Equation~\eqref{eq:ap_limnul_errorpartsW1_condVs}. Combining the above elements, we can bound the right hand side of Equation~\eqref{eq:ap_limnul_errorpartsW1_condVs} as \begin{multline} \mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n+1)})-f^{(n+1)}\right|\{v_n\}_{n=0}^N}\right] \leq\mathbb{E}\left[\zelfkalkdun{\left.f(t^{(n)})-f^{(n)}\right|\{v_n\}_{n=0}^N}\right]\\ +0.24959\sqrt{\frac{T\sigma^3\Delta \VAtimevar^5}{\epsilon^{8}}}+\mathcal{O}\left(\Delta \VAtimevar^3\right)\,,\qquad\Delta \VAtimevar\rightarrow0\,. \end{multline} With an initial error of zero, solving this recursion yields a bound on the error at the final time $\bar{t}=N\Delta \VAtimevar$: \begin{equation} \zelfkalkdun{f(t^{(N)})-f^{(N)}}\leq0.24959\bar{t}\sqrt{\frac{T\sigma^3\Delta \VAtimevar^3}{\epsilon^{8}}}+\mathcal{O}\left(\Delta \VAtimevar^2\right),\qquad\Delta \VAtimevar\rightarrow0\,. \end{equation} This proves that the new KD simulation scheme becomes identical to a standard, unbiased particle tracing simulation of the Boltzmann-BGK equation when $\Delta \VAtimevar\rightarrow0$. \section{Asymptotic preserving property and convergence analysis in high-collisional regimes\label{sec:ap_liminf}} In this section, we consider the error in the diffusive regime, i.e., when $\epsilon$ becomes very small. In contrast to the treatment in Section~\ref{sec:ap_limnul}, the correlation between subsequent time steps is not a dominant feature in this regime.
The first kinetic flight in a time step, which is the origin of this correlation, only makes up an expected fraction $\mathcal{O}\left(\epsilon^2\right)$ of the time step as $\epsilon\rightarrow0$. Furthermore, the execution of this initial flight is identical in the kinetic simulation and the KD simulation. As a consequence, we can ignore the initial flight. We thus consider the kinetic simulation beginning with a resampled velocity, which is represented by the newly introduced operator ${\mathcal{S}}'_{\Delta \VAtimevar}$, and we consider the KD scheme only consisting of a diffusion part, which is denoted by the newly introduced operator $\widehat{\mathcal{S}}_{\Delta \VAtimevar}$. As we just argued, these operators become identical to the actual kinetic and KD operators in the diffusive limit, i.e., \begin{align} {\mathcal{S}}'_{\Delta \VAtimevar}\rightarrow\mathcal{S}_{\Delta \VAtimevar}\text{ and }\widehat{\mathcal{S}}_{\Delta \VAtimevar}\rightarrow\VAapkindifoperator_{\Delta \VAtimevar}\,,\quad\epsilon\rightarrow0\,. \end{align} We use the above defined operators to again write the Wasserstein error at time $t^{(n+1)}$ in terms of the error propagated from time $t^{(n)}$ and a new local error \begin{multline} \zelfkalkdun{f(t^{(n+1)})\!-\!f^{(n+1)}}\leq\underbrace{\zelfkalkdun{f(t^{(n)})\!+\!{\mathcal{S}}'_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\left(f^{(n)} \!+\!{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})\right)}}_\text{error propagation}\\ +\underbrace{\zelfkalkdun{{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})-\widehat{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})}}_\text{local error}\,,\ \ \epsilon\rightarrow0\,,\label{eq:ap_liminf_errorpartsW1} \end{multline} where using the operators without the initial kinetic flight introduces an additional overestimation of the error. The derivation will follow the same structure as in the low-collisional case. We will first find the local error during a time step in Section~\ref{subsec:ap_liminf_KD_distrerror}.
Then, we discuss the error propagation and find the total error bound by solving the resulting recursion in Section~\ref{subsec:ap_liminf_KD_propagation}. \subsection{Local error in the diffusive limit\label{subsec:ap_liminf_KD_distrerror}} The final term in Equation~\eqref{eq:ap_liminf_errorpartsW1} expresses the difference between a diffusion step and a kinetic step starting from the same initial condition $f^{(n)}$. We bound this local error with the subadditivity property of the Wasserstein distance via conditioning on the initial position $x(n\Delta \VAtimevar)=x_n$ at time $t^{(n)}$ and on the final velocity $\nu_n$, in a similar fashion as in Equation~\eqref{eq:ap_KD_limnul_conditioningbound}, \begin{equation} \zelfkalkdun{{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})-\widehat{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})}\leq\mathbb{E}\left[\zelfkalkdun{\left.{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})-\widehat{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})\right|x_n,\nu_n}\right]\,.\label{eq:ap_liminf_KD_conditioningbound} \end{equation} The above conditioning is equivalent to having a Dirac delta at $x_n$, $\delta_{x_n}$, as the initial distribution $f^{(n)}$. Note that the velocity information in $f^{(n)}$ is not used by either of the operators ${\mathcal{S}}'_{\Delta \VAtimevar}$ and $\widehat{\mathcal{S}}_{\Delta \VAtimevar}$, because they consider velocity resampling and only a Brownian increment, respectively. We can thus write \begin{equation} \zelfkalkdun{\left.{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})-\widehat{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})\right|x_n,\nu_n}=\zelfkalkdun{\left.{\mathcal{S}}'_{\Delta \VAtimevar}(\delta_{x_n})-\widehat{\mathcal{S}}_{\Delta \VAtimevar}(\delta_{x_n})\right|x_n,\nu_n}\,.
\end{equation} To obtain an estimate of $\zelfkalk{\left.{\mathcal{S}}'_{\Delta \VAtimevar}\left(\delta_{x_n}\right)-\widehat{\mathcal{S}}_{\Delta \VAtimevar}\left(\delta_{x_n}\right)\right|x_n,\nu_n}$ in the diffusive limit, we perform an Edgeworth expansion~\cite{hall2013edgeworth} of the distribution $\delta_{x_n}+{\mathcal{S}}'_{\Delta \VAtimevar}\left(\delta_{x_n}\right)$ that is the result of kinetic evolution of the initial condition $\delta_{x_n}$ over a time step $\Delta \VAtimevar$. The Edgeworth expansion approximates a probability distribution in terms of its cumulants. The cumulants of a random variable $X$, here the position at time $(n+1)\Delta \VAtimevar$, are defined via the cumulant-generating function $\mathcal{K}'(\zeta)$, \begin{equation} \mathcal{K}'(\zeta)=\log\left(\mathbb{E}\left[e^{\zeta X}\right]\right)=\sum_{i=1}^\infty\kappa_i\frac{\zeta^i}{i!}=m\zeta+\frac{s^2\zeta^2}{2}+\sum_{i=3}^\infty\kappa_i\frac{\zeta^i}{i!}\,, \end{equation} in which the expectation is taken with respect to the probability distribution $\delta_{x_n}+{\mathcal{S}}'_{\Delta \VAtimevar}\left(\delta_{x_n}\right)$, $\kappa_i$ denotes the $i$-th cumulant, and $m=\kappa_1$ and $s^2=\kappa_2$ are the mean and variance of $\delta_{x_n}+{\mathcal{S}}'_{\Delta \VAtimevar}\left(\delta_{x_n}\right)$. By construction, the position distribution of the diffusion process, denoted as $\delta_{x_n}+\widehat{\mathcal{S}}_{\Delta \VAtimevar}\left(\delta_{x_n}\right)$, is a normal distribution with the same mean and variance as that of the kinetic process, meaning its cumulant-generating function is \begin{equation} \hat{\mathcal{K}}(\zeta)=m\zeta+\frac{s^2\zeta^2}{2}\,.
\end{equation} We can therefore write the Edgeworth expansion of the distribution \begin{equation} \delta_{x_n}(x)\!+\!{\mathcal{S}}'_{\Delta \VAtimevar}(\delta_{x_n})(x)\!=\!\left(\delta_{x_n}(x)\!+\!\widehat{\mathcal{S}}_{\Delta \VAtimevar}(\delta_{x_n})(x)\right)\!\!\left(\!1+\sum_{i=3}^\infty\frac{\kappa_i}{i!s^i}\VAhermite{i}\!\left(\!\frac{x\!-\!m}{s}\!\right)\!\right)\,,\label{eq:ap_KD_liminf_homo_EW} \end{equation} in which $\VAhermite{i}$ denotes the Hermite polynomial of degree $i$. Bounding the Wasserstein distance $\zelfkalk{\left.{\mathcal{S}}'_{\Delta \VAtimevar}\left(\delta_{x_n}\right)-\widehat{\mathcal{S}}_{\Delta \VAtimevar}\left(\delta_{x_n}\right)\right|x_n,\nu_n}$ therefore reduces to bounding the cumulants $\kappa_i$. \paragraph{Bounding the odd cumulants.} To obtain an estimate of the odd cumulants, it is sufficient to realize that the dominant odd cumulant can be expressed in terms of odd standardized moments~\cite{hall2013edgeworth}, on which we can reason more intuitively. The standardized moments $m_i$ are defined as \begin{equation} m_i=\mathbb{E}\left[\left(\frac{X-m}{s}\right)^i\right]\,. \end{equation} When $\epsilon\rightarrow0$, the dominant odd term of Equation~\eqref{eq:ap_KD_liminf_homo_EW} will be formed by the third moment $m_3$ of the distribution $\delta_{x_n}(x)+{\mathcal{S}}'_{\Delta \VAtimevar}\left(\delta_{x_n}\right)\left(x\right)$ of the position $X$ due to the kinetic process, conditioned on the initial position $x_n$ and on the final velocity $\VAdiscreteveleend_{\VAnofevents}$. The increment $\Delta X=X-x_n$ arises as the sum of flight path contributions, as expressed by Equation~\eqref{eq:ap_deltax_assum}. Following the computation of the first and second moments of $\Delta X$ in Section~\ref{subsec:ap_fixedv0}, the third moment can also be found.
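The polynomials $\VAhermite{i}$ in Equation~\eqref{eq:ap_KD_liminf_homo_EW} are the probabilists' Hermite polynomials, generated by the recurrence $\VAhermite{i+1}(x)=x\,\VAhermite{i}(x)-i\,\VAhermite{i-1}(x)$. The sketch below checks this recurrence against $\VAhermite{4}(x)=x^4-6x^2+3$ and, assuming that $\zelfkalk{\cdot}$ applied to a signed density denotes the 1-Wasserstein distance (the integral of the absolute value of the antiderivative), reproduces the constant $1.51$ used later in this section: the antiderivative of $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\VAhermite{4}(x)$ is $-\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\VAhermite{3}(x)$.

```python
import math

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

# He_4(x) = x^4 - 6x^2 + 3
for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(hermite(4, x) - (x**4 - 6 * x**2 + 3)) < 1e-12

# W1 norm of the signed density phi(x)*He_4(x): its antiderivative is
# -phi(x)*He_3(x), so the norm equals the integral of phi(x)*|He_3(x)|.
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

n, a, b = 200000, -10.0, 10.0            # midpoint rule; tails negligible
h = (b - a) / n
w1 = sum(phi(a + (i + 0.5) * h) * abs(hermite(3, a + (i + 0.5) * h))
         for i in range(n)) * h
assert abs(w1 - 1.51) < 0.005
```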
Here, we will, however, only point out that the third moment arises due to the asymmetry of the distribution of $\Delta X$, and that the only asymmetry in Equation~\eqref{eq:ap_deltax_assum} arises due to the final flight $\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAeventtime_{\VAnofevents}$. With the expected duration of the final flight expressed by Equation~\eqref{eq:ap_fixedv0_dt0mean}, the third central moment of this final flight conditioned on the final velocity is found to be \begin{equation} \mathbb{E}\left[\left(\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\Delta \VAeventtime_{\VAnofevents}-\frac{\VAdiscreteveleend_{\VAnofevents}}{\epsilon}\frac{\epsilon^2}{\sigma}\left(1-e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}\right)\right)^3\right]=\mathcal{O}\left(\epsilon^3\right),\ \ \epsilon\rightarrow0\,. \end{equation} This gives the size of $m_3$, since $s=\mathcal{O}(1)$ as $\epsilon\rightarrow0$. In the next paragraph, we will see that the dominant even error term of Equation~\eqref{eq:ap_KD_liminf_homo_EW} is asymptotically larger, since it is proportional to $\epsilon^2$ as $\epsilon\rightarrow0$. \paragraph{Bounding the even cumulants.} For the even cumulants, we take a different viewpoint by considering the time step $\Delta \VAtimevar$ as consisting of $\frac{\sigma\Delta \VAtimevar}{\epsilon^2}$ subtimesteps of equal duration $\frac{\epsilon^2}{\sigma}$. We thus split the positional increment $\Delta X$ over its contributions during these subtimesteps as \begin{equation} \Delta X=\sum_{j=1}^{\sigma\Delta \VAtimevar/\epsilon^2}\Delta X_j\,. \end{equation} Except for an overall negligible effect due to the final flight path, the different $\Delta X_j$ are identically distributed but not independent, since subtimesteps are correlated if no collision takes place during intermediate subtimesteps. This correlation will only be significant for a few neighbouring subtimesteps since, for the $j$-th and $j'$-th subtimesteps, the probability of no collision taking place in the $|j'-j|-1$ intermediate subtimesteps is \begin{equation} e^{-\frac{\sigma}{\epsilon^2}(|j'-j|-1)\frac{\epsilon^2}{\sigma}}=e^{1-|j'-j|}\,, \end{equation} which is independent of $\epsilon$ and decreases exponentially as a function of the number of intermediate subtimesteps. If the $\Delta X_j$ were independent, the cumulant for $\Delta X$ would equal $\frac{\sigma\Delta \VAtimevar}{\epsilon^2}$ times the cumulant of a single $\Delta X_j$. For all $\Delta X_j$, the correlation with other subtimesteps is identical, except for those close to the beginning and end of the full time step, since fewer correlated subtimesteps are present for those. When $\epsilon\rightarrow0$, and the number of subtimesteps in which the time step is split thus increases, the portion of subtimesteps at the edges decreases as $\epsilon^2$, and we can effectively consider the time step as a combination of a number of independent, identically distributed subtimesteps.
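The cancellation of $\epsilon$ and $\sigma$ in the exponent above can be made explicit in a couple of lines; the probability of a collision-free gap of $k-1$ intermediate subtimesteps is the same for every $\epsilon$ and $\sigma$:

```python
import math

# probability of no collision during the |j'-j|-1 = k-1 intermediate
# subtimesteps, each of duration eps^2/sigma, at collision rate sigma/eps^2
def gap_probability(k, eps, sigma):
    return math.exp(-(sigma / eps**2) * (k - 1) * (eps**2 / sigma))

# the eps- and sigma-dependence cancels: the probability equals exp(1 - k)
for k in (2, 3, 6):
    for eps in (1.0, 0.1, 0.01):
        for sigma in (0.5, 1.0, 2.0):
            assert abs(gap_probability(k, eps, sigma) - math.exp(1 - k)) < 1e-12
```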
The correlation between different $\Delta X_j$ means each increment has a larger effective contribution than if it were considered in isolation, but there is a lower effective number of subtimesteps. We can capture these effects in a multiplicative constant $k'_i$ to obtain \begin{equation} \kappa_i\rightarrow k'_i\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\tilde{\kappa}_i,\ \ \epsilon\rightarrow0,\ \ \text{ if }i\text{ is even}\,, \end{equation} with $\tilde{\kappa}_i$ the cumulant of the kinetic process over a time $\frac{\epsilon^2}{\sigma}$. The cumulant $\tilde{\kappa}_i$ of a single subtimestep equals a constant times $\left(\frac{\sqrt{\epsilon^2T}}{\sigma}\right)^i$, since the deviation of the velocity from its mean is a constant times $\frac{\sqrt{T}}{\epsilon}$ and the duration of the subtimestep is $\frac{\epsilon^2}{\sigma}$ when $\frac{\sigma\Delta \VAtimevar}{\epsilon^2}$ is high. In conclusion, \begin{equation} \kappa_i\rightarrow k_i\frac{\sqrt{T^i}\,\epsilon^{i-2}\Delta \VAtimevar}{\sigma^{i-1}},\ \ \epsilon\rightarrow0,\ \ \text{ if }i\text{ is even}\,,\label{eq:ap_KD_liminf_homo_cumeven} \end{equation} with $k_i$ a constant. \paragraph{Determining the dominant term.} The dominant error term is the term with $i=4$ in Equation~\eqref{eq:ap_KD_liminf_homo_EW}, since it results in a second power of $\epsilon$, as can be seen from Equation~\eqref{eq:ap_KD_liminf_homo_cumeven}. For this dominant term, we use the limiting value for $s$, which equals $\sqrt{2\frac{T}{\sigma}\Delta \VAtimevar}$ as given at the end of Section~\ref{subsec:ap_fixedv0}.
The dominant error contribution due to a difference in distribution thus equals \begin{align} \zelfkalkdun{\left.{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})\!-\!\widehat{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})\right|x_n,\nu_n}&=\zelfkalk{\frac{1}{\sqrt{2\pi}s}e^{-\frac{(x-\mu)^2}{2s^2}}\frac{k_4\epsilon^2}{4!\sqrt{8}\sigma\Delta \VAtimevar}\VAhermite{4}\!\left(\frac{x-\mu}{s}\right)}\\ &=\frac{k_4\epsilon^2s}{4!\sqrt{8}\sigma\Delta \VAtimevar}\zelfkalk{\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}(x^4-6x^2+3)}\\ &=\frac{k_4\sqrt{T}\epsilon^{3}}{4!4\sqrt{\sigma^{3}\Delta \VAtimevar}}\zelfkalk{\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}(x^4-6x^2+3)}\,,\label{eq:ap_liminf_KD_distrerror_domterm} \end{align} as $\frac{\sigma\Delta \VAtimevar}{\epsilon^2}\rightarrow\infty$. The values for $k_4$ and $\zelfkalk{\frac{1}{\sqrt{2\pi}}e^{-x^2/2}(x^4-6x^2+3)}$ have been computed numerically, resulting in the values $18.3$ and $1.51$, respectively. Since neither $x_n$ nor $\nu_n$ features in the right hand side of Equation~\eqref{eq:ap_liminf_KD_distrerror_domterm}, taking the expectation over all values of $x_n$ and $\nu_n$ does not alter the result. In accordance with Equation~\eqref{eq:ap_liminf_KD_conditioningbound}, we thus find the bound \begin{equation} \zelfkalk{{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})-\widehat{\mathcal{S}}_{\Delta \VAtimevar}(f^{(n)})}\leq 0.58\sqrt{T}\frac{\epsilon^{3}}{\sqrt{\sigma^{3}\Delta \VAtimevar}}\label{eq:ap_liminf_KD_distrerror}\,.
\end{equation} \paragraph{Numerical validation.} We illustrate the accuracy of the bound in Equation~\eqref{eq:ap_liminf_KD_distrerror} in Figure~\ref{fig:ap_liminf_W1err} with $T=1$ and $u=0$, conditioned on different dimensionless values of the velocity $v_{n+1}$ at time $t^{(n+1)}$. Independently of the value of the final velocity, the bound of Equation~\eqref{eq:ap_liminf_KD_distrerror} holds. \begin{figure}[H] \centering \deffiguren/{figuren/} \def\ref{eq:ap_limnul_KD_distrerr_res}{\ref{eq:ap_limnul_KD_distrerr_res}} \def\ref{eq:ap_liminf_KD_distrerror}{\ref{eq:ap_liminf_KD_distrerror}} \begin{minipage}{.65\textwidth} \centering \resizebox{\textwidth}{!}{\includetikz{figuren/}{ap_homogeneous_singlestep_KD_highcol}} \end{minipage} \captionof{figure}{The Wasserstein distance between the distribution of the kinetic process and of the KD process as a function of collisionality for a single time step. The experiment was conducted by averaging over 4,000,000 simulated particles and with $\Delta \VAtimevar=1$.} \label{fig:ap_liminf_W1err} \end{figure}% \subsection{Error propagation and total error in the diffusive limit\label{subsec:ap_liminf_KD_propagation}} The second term on the right hand side of Equation~\eqref{eq:ap_liminf_errorpartsW1} has been treated in Section~\ref{subsec:ap_liminf_KD_distrerror}. In this section, we will first treat the first term, expressing how the error at the start of the time step propagates to the next time step. Then, we will solve the resulting recursion to find the total error, as in Section~\ref{subsec:ap_limnul_KD_propagation}. An important distinction from the error analysis for fixed values of $\epsilon$ is that now the kinetic contributions have a negligible effect. This means the conditioning on the velocities is not necessary here, and we can immediately use that propagation with the same operator cannot increase the error.
The first term of Equation~\eqref{eq:ap_liminf_errorpartsW1} can thus be bounded as \begin{equation} \zelfkalkdun{f(t^{(n)})\!+\!{\mathcal{S}}'_{\Delta \VAtimevar}(f(t^{(n)}))\!-\!\left(f^{(n)} \!+\!{\mathcal{S}}'_{\Delta \VAtimevar}(f^{(n)})\right)}\leq\zelfkalkdun{f(t^{(n)})-f^{(n)}}\,.\label{eq:ap_liminf_KD_errprop} \end{equation} The bound in Equation~\eqref{eq:ap_liminf_KD_errprop} for the error propagation term in Equation~\eqref{eq:ap_liminf_errorpartsW1}, together with the bound from Equation~\eqref{eq:ap_liminf_KD_distrerror} for the local error term in Equation~\eqref{eq:ap_liminf_errorpartsW1}, gives us the recursion formula \begin{equation} \zelfkalkdun{f(t^{(n+1)})-f^{(n+1)}}\leq\zelfkalkdun{f(t^{(n)})-f^{(n)}}+0.58\sqrt{T}\frac{\epsilon^{3}}{\sqrt{\sigma^{3}\Delta \VAtimevar}}\,,\quad\epsilon\rightarrow0\,. \end{equation} The solution of this recursion at the final simulation time $\bar{t}=N\Delta \VAtimevar=t^{(N)}$ is a bound on the Wasserstein error at that time, and equals \begin{equation} \zelfkalkdun{f(t^{(N)})-f^{(N)}}\leq0.58\sqrt{T}\frac{\epsilon^3}{\sqrt{\sigma^3\Delta \VAtimevar^3}}\bar{t}\,,\qquad\epsilon\rightarrow0\,.\label{eq:ap_liminf_KD_total} \end{equation} The error bound in Equation~\eqref{eq:ap_liminf_KD_total} proves the asymptotic preserving property, since, as the scaling parameter $\epsilon$ tends to zero, the error due to replacing the kinetic simulation with the KD scheme vanishes. \section{Numerical experiment\label{sec:ap_num}} In Sections~\ref{sec:ap_limnul} and~\ref{sec:ap_liminf}, it was proven that the simulation error incurred by using the KD simulation of Section~\ref{sec:ap_newscheme} is small in both the low-collisional and the high-collisional regime. In this section, we numerically illustrate and extend these results with the simulation outcome of a model problem. We furthermore numerically demonstrate the computational speed-up in high-collisional regimes predicted in Section~\ref{sec:ap_newscheme}.
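To make the comparison concrete, the block below sketches one KD time step as used in the experiments of this section: a kinetic flight until the first collision, followed by one aggregated diffusive step over the remaining time, with the final velocity resampled from a normal distribution with mean $\epsilon u$ and variance $T$. The drift $u\theta$ and diffusion coefficient $2T/\sigma$ used for the diffusive phase are the diffusive-limit values, a simplifying assumption made here for illustration; the actual scheme of Section~\ref{sec:ap_newscheme} uses the matched mean and variance of Equations~\eqref{eq:ap_fixedv0_mean_result} and~\eqref{eq:ap_fixedv0_var}.

```python
import math
import random

def kd_step(x, v, dt, eps, sigma, u, T, rng):
    """One kinetic-diffusion (KD) time step (sketch): kinetic flight until
    the first collision, then one aggregated diffusive step over the
    remaining time, using diffusive-limit drift and diffusion."""
    # time of the first collision (rate sigma/eps^2); none if sigma == 0
    tau = rng.expovariate(sigma / eps**2) if sigma > 0 else float("inf")
    if tau >= dt:                        # no collision: a purely kinetic step
        return x + (v / eps) * dt, v
    x += (v / eps) * tau                 # kinetic flight until the collision
    theta = dt - tau                     # remaining time, handled diffusively
    x += u * theta + math.sqrt(2 * T * theta / sigma) * rng.gauss(0.0, 1.0)
    v = eps * u + math.sqrt(T) * rng.gauss(0.0, 1.0)   # resampled velocity
    return x, v

# collisionless limit: the step reduces to the exact free flight
rng = random.Random(0)
x1, v1 = kd_step(0.0, 2.0, dt=1.0, eps=1.0, sigma=0.0, u=0.0, T=1.0, rng=rng)
assert (x1, v1) == (2.0, 2.0)
```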
The model problem under study is the Boltzmann equation of Equation~\eqref{eq:ap_kinetic_integrodiff} for spatially homogeneous parameters $u(x)\equiv0$, $T(x)\equiv1$, and $\sigma(x)\equiv1$, for different values of $\epsilon$. The initial condition is \begin{equation} \VAnsource({x},{v})=\delta_0(x)\left(\frac{1}{2\sqrt{2\pi}}e^{-\frac{(v+10)^2}{2}}+\frac{1}{2\sqrt{2\pi}}e^{-\frac{(v-10)^2}{2}}\right)\,, \end{equation} i.e., the particles start at $x=0$, half of them have a normally distributed velocity with mean $-10$ and variance $1$, and the other half have a normally distributed velocity with mean $10$ and variance $1$. The quantity of interest is the distribution at time $\bar{t}=1$; for the KD simulation, a single time step of duration $\Delta \VAtimevar=\bar{t}=1$ is used. The distribution of the position at time $\bar{t}$ resulting from the KD simulation is compared with the result of the standard MC method in Figures~\ref{fig:ap_num_hist_eps10}--\ref{fig:ap_num_hist_epsk1} for this model problem with $\epsilon\in\{10,1,0.1\}$. In all three cases, the match is near-perfect. The values $\epsilon=10$ and $\epsilon=0.1$ correspond to a low-collisional and a high-collisional regime, respectively, and the close match is thus in accordance with the analytical results of Sections~\ref{sec:ap_limnul} and~\ref{sec:ap_liminf}. For the intermediate value $\epsilon=1$, the close match in Figure~\ref{fig:ap_num_hist_eps1} illustrates that the low error persists even beyond the asymptotic cases studied in the previous two sections.
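The velocity marginal of this initial condition is an equal-weight Gaussian mixture; a quick sketch checks that it is normalized and that its second moment equals $10^2+1=101$ (the mixture mean is zero):

```python
import math

def f0_velocity(v):
    """Velocity marginal of the initial condition: an equal mixture of the
    normal distributions N(-10, 1) and N(10, 1)."""
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return 0.5 * phi(v + 10.0) + 0.5 * phi(v - 10.0)

# midpoint rule on [-25, 25]; the tails beyond are negligible
n, a, b = 200000, -25.0, 25.0
h = (b - a) / n
mass = sum(f0_velocity(a + (i + 0.5) * h) for i in range(n)) * h
second = sum((a + (i + 0.5) * h) ** 2 * f0_velocity(a + (i + 0.5) * h)
             for i in range(n)) * h
assert abs(mass - 1.0) < 1e-6        # the marginal integrates to one
assert abs(second - 101.0) < 1e-4    # E[v^2] = mean^2 + variance = 101
```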
\begin{figure}[H] \centering \begin{subfigure}{.3\textwidth} \centering \resizebox{\textwidth}{!}{\begin{tikzpicture} \begin{axis}[ width=120pt height=103pt xmin=-15, xmax=15, ymin=0, ymax=.29, axis y line*=left, ytick={0,.1,.2}, axis x line*=bottom, xlabel=$x$, xtick={-10,0,10}, tick label style={font=\tiny}, every axis x label/.style={at={(current axis.right of origin)},anchor=north}, legend cell align={left}, legend style={draw=none, fill=none,at={(axis cs:10,0.000002)}}] \addplot[blue] table[x index=0, y index=1] {figuren/histogram_K_E10_P100000.txt}; \label{fig_res_K} \addplot[thick, densely dotted, black] table[x index=0, y index=1] {figuren/histogram_KD_E10_P100000.txt}; \label{fig_res_KD} \end{axis} \end{tikzpicture}} \caption{The result for $\epsilon=10$.} \label{fig:ap_num_hist_eps10} \end{subfigure}\quad \begin{subfigure}{.3\textwidth} \centering \resizebox{\textwidth}{!}{\begin{tikzpicture} \begin{axis}[ width=120pt height=103pt xmin=-15, xmax=15, ymin=0, ymax=.29, axis y line*=left, ytick={0,.1,.2}, axis x line*=bottom, xlabel=$x$, xtick={-10,0,10}, tick label style={font=\tiny}, every axis x label/.style={at={(current axis.right of origin)},anchor=north}, legend cell align={left}, legend style={draw=none, fill=none,at={(axis cs:10,0.000002)}}] \addplot[blue] table[x index=0, y index=1] {figuren/histogram_K_E1_P100000.txt}; \label{fig_res_K} \addplot[densely dotted, black, thick] table[x index=0, y index=1] {figuren/histogram_KD_E1_P100000.txt}; \label{fig_res_KD} \end{axis} \end{tikzpicture}} \caption{The result for $\epsilon=1$.} \label{fig:ap_num_hist_eps1} \end{subfigure}\quad \begin{subfigure}{.3\textwidth} \centering \resizebox{\textwidth}{!}{\begin{tikzpicture} \begin{axis}[ width=120pt height=103pt xmin=-15, xmax=15, ymin=0, ymax=.29, axis y line*=left, ytick={0,.1,.2}, axis x line*=bottom, xlabel=$x$, xtick={-10,0,10}, tick label style={font=\tiny}, every axis x label/.style={at={(current axis.right of origin)},anchor=north}, legend cell 
align={left}, legend style={draw=none, fill=none,at={(axis cs:10,0.000002)}}] \addplot[blue] table[x index=0, y index=1] {figuren/histogram_K_Ek1_P100000.txt}; \addplot[densely dotted, black, thick] table[x index=0, y index=1] {figuren/histogram_KD_Ek1_P100000.txt}; \end{axis} \end{tikzpicture}}
\caption{The result for $\epsilon=0.1$.} \label{fig:ap_num_hist_epsk1} \end{subfigure}
\caption{The probability distribution of the final position of the KD simulation (\ref{fig_res_KD}) and a standard MC method (\ref{fig_res_K}) for a model problem for different values of $\epsilon$. For both methods, $100,\!000$ particles were used.} \end{figure}
The motivation for using the KD scheme is its large reduction in computational cost compared to the standard MC method when the collisionality is high. In that regime, most of the collisions that occur in the standard MC method are not explicitly executed in the KD scheme, but are aggregated into a diffusive step. The resulting speed-up in computational time is numerically computed and shown in Figure~\ref{fig:ap_speedup}. The computational speed-up is approximately proportional to the reduction in the number of collisions that have to be executed, as can also be seen in Figure~\ref{fig:ap_speedup}. In a standard MC method, the expected number of collisions occurring in a time step $\Delta \VAtimevar$ equals $\frac{\sigma\Delta \VAtimevar}{\epsilon^2}$, whereas in a KD simulation only the first potential collision is actually executed. Hence, the expected number of collisions in a KD simulation equals the probability of at least one collision occurring in a standard MC simulation, which is $1-e^{-\frac{\sigma\Delta \VAtimevar}{\epsilon^2}}$. The ratio of the two is shown in Figure~\ref{fig:ap_speedup}.
\begin{figure}[H] \centering \begin{minipage}{.65\textwidth} \centering
\resizebox{\textwidth}{!}{\begin{tikzpicture} \begin{axis}[ width=240pt, height=207pt, xmin=2e-4, xmax=5e3, ymin=3e-1, ymax=5e2, axis y line*=left, ytick = {.0001, .01,1,10,100,10000}, xtick = {.0001,.01,1,100,10000}, axis x line*=bottom, xmode=log, ymode=log, xlabel=$\sigma\Delta \VAtimevar/\epsilon^2$, xlabel near ticks, legend cell align={left}, legend style={draw=none, fill=none,at={(axis cs:10,0.000002)}}] \addplot[draw=white, very thick, mark=*, mark options={black}] table[x index=0, y index=3] {figuren/speedup_P50000.txt}; \label{fig_speedup} \addplot[very thick, black, domain=.0004:.0202,samples=100] {1/(1-x/2+x*x/6)}; \addplot[very thick, black, domain=.02:250,samples=100] {x/(1-exp(-x))}; \label{fig_speedup_theoretisch} \node [draw=none,fill=none,anchor=south] at (axis cs: .08,90) {\shortstack[l]{ \ref{fig_speedup} speed-up\\ \ref{fig_speedup_theoretisch} ratio of the expected\\ \hphantom{\ref{fig_speedup_theoretisch}} number of collisions}}; \end{axis} \end{tikzpicture}} \end{minipage}
\caption{The speed-up obtained by using the KD scheme instead of a standard MC method, as a function of the collisionality $\frac{\sigma\Delta \VAtimevar}{\epsilon^2}$. The results are obtained from numerical experiments with $50,\!000$ particles.} \label{fig:ap_speedup} \end{figure}
\section{Conclusion} To overcome high simulation costs in high-collisional regimes and erroneous results in low-collisional regimes, we have presented a hybridized simulation algorithm for the Boltzmann-BGK equation that combines a standard kinetic simulation with a random-walk simulation that captures the high-collisional behaviour at a much reduced computational cost. We have proven consistency with the kinetic process and we have shown that the asymptotic behaviour in the high-collisional limit is also preserved.
Furthermore, because the first two moments match, errors are expected to be low for intermediate collisionality as well, as illustrated numerically. To keep the exposition simple, we have ignored absorption in this paper. Absorption can easily be included in the scheme by simulating the time to the next absorption separately from the other motion. If an absorption event occurs during a diffusive substep, the full diffusive substep can be replaced with kinetic motion, or with a diffusive substep that is truncated at the absorption time. An important improvement has been presented in~\cite{mortier2019KDfusioncase}, where the scheme was applied to a realistic fusion-related case. There, an advection term was added to cope with the heterogeneity of the collision rate, and the treatment of reflective boundary conditions was presented. In future work, we will focus further on the application of this simulation scheme to fusion reactor simulations, where neutrals play a crucial role in the plasma edge by providing source terms for the plasma equations. The neutrals undergo both low-collisional and high-collisional regimes, motivating the use of hybridized schemes. The main challenge in applying the algorithm presented here in that context is the extraction of source terms from the diffusion part of the simulation. Other possible extensions include the addition of a deterministic fluid model as a control variate, and multilevel Monte Carlo with the time step size as the level parameter. \section*{Acknowledgments} The first author was funded by a personal grant from the Research Foundation - Flanders (FWO) under fellowship number 1189919N.\\ This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
\bibliographystyle{abbrv}
\section{Introduction} \subsection{Generalized Sampling in Shift-Invariant Spaces} Since the formulation of the celebrated Nyquist-Shannon sampling theorem \cite{Shannon1949}, the reconstruction of a function from discrete measurements has been extended in many ways \cite{Jerri1977,Unser2000}. In particular, Papoulis proposed the framework of generalized sampling \cite{Papoulis1977}, where he showed that any bandlimited function $f$ is uniquely determined by the sequences of discrete measurements (generalized samples) \begin{equation} \label{generalizedsampling} g_n(kT) = (h_n*f)(kT) = \dotprod{f}{\psi_n(\cdot-kT)}, \quad n=1,...,N, \quad k\inZ, \end{equation} where $(g_n(t))_{n=1,...,N}$ are the outcomes of $N$ linearly independent systems applied to $f$. The sampling is assumed to proceed at $1/N$ times the Nyquist rate ({\it i.e.}, $T = NT_{\mathrm{Nyq}} = 2N\pi/\omega_{\max}$, where $\omega_{\max}$ is the maximum frequency of $f$). The functions $\psi_n(t) = h_n(-t)$, $t\inR$, are called the analysis functions. They are the time-reversed versions of the impulse responses. The sampling theorem was also generalized to many different function spaces such as integer-shift-invariant spaces \cite{Aldroubi1994,Unser1994}, including spline spaces \cite{Hummel1983,Unser1992,Aldroubi1992}. Following this extension and Papoulis' theory, Unser and Zerubia introduced a framework to perform generalized sampling without the bandlimited constraint \cite{Unser1998,Unser1997a}, which includes important cases such as interlaced and derivative sampling in spline spaces. In this paper, we adopt the same framework and propose to reconstruct a function $f$ from the discrete samples $g_n(k)$, $n=1,...,N$, $k\inZ$, in an integer-shift-invariant space generated by a finite collection of generators, as in some recent works \cite{Garcia2008,Pohl2012,Radha2019}.
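For orientation, here is a concrete instance of~\eqref{generalizedsampling} (our illustrative example, not Papoulis' original formulation): taking $N=2$ analysis channels with $h_1=\delta$ and $h_2=\delta'$ yields
\[
g_1(kT) = f(kT), \qquad g_2(kT) = f'(kT), \qquad T = 2T_{\mathrm{Nyq}},
\]
that is, derivative sampling: the function and its first derivative are both sampled at half the Nyquist rate.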
The structure of such reconstruction spaces has been thoroughly studied \cite{DeBoor1994a,Aldroubi1996,Grochenig2018} and there exist theoretical results that guide the choice of relevant generating functions \cite{DeBoor1994}. As a minimal requirement to get a good approximation space, the generating functions should jointly satisfy the partition-of-unity condition \cite{DeBoor1985}. In addition, there exists a tradeoff between the approximation power of the space and the size of the support of the generating functions \cite{Blu2001}. \subsection{Polynomial Splines} A polynomial spline is a piecewise polynomial function defined over the real line. Of special interest are the splines of degree $n$ because they provide one free parameter per segment. They are defined by distinct knots and polynomial pieces of degree $n$ that are connected smoothly, so that the global function has continuous derivatives up to order $(n-1)$. The splines whose knots are uniformly spaced are called cardinal splines and they are relevant to many applications such as image processing \cite{Unser1999}. In the 1950s, Isaac Schoenberg laid the foundation of cardinal splines \cite{Schoenberg1973,schoenberg1967spline} when he showed that the set $S_n$ of cardinal splines of degree $n$ can be generated by a single function \cite{DeBoor1976}, the B-spline of degree $n$. In this paper, we consider the causal B-spline and denote it by $\beta_+^n$. This simple building block is also the shortest nonzero spline of degree $n$.
Interestingly, the B-splines can be constructed recursively with the relation \begin{equation} \beta_+^{n+1} = \beta_+^n*\beta_+^0, \end{equation} starting from $\beta_+^0$, which is the rectangular window over $[0,1)$ \begin{equation} \beta_+^0(x) = \begin{cases}1 ,& 0\leq x<1\\ 0, & \text{otherwise.} \end{cases} \end{equation} The convolution by $\beta_+^0$ can be decomposed in two successive operations: an integration (which transforms a spline of degree $n$ into a spline of degree $(n+1)$) followed by a finite difference (which gives back a compactly supported function). Indeed, $(f*\beta_+^0)(x) = \Delta\{\int_{-\infty}^x f(t)\dint t\}$, where $\Delta\{f\}=(f(\cdot)-f(\cdot-1))$ is the finite difference of $f$. Along with their great reproducing properties and shortest support, B-splines allow an efficient and practical implementation, which is exploited in many fields \cite{DeBoor1972,DeBoor1980,Unser1993,Unser1993a}. \subsection{Multi-Splines} To perform generalized sampling, it is natural to look at multi-spline spaces since they offer additional degrees of freedom. A cardinal multi-spline space is defined as the sum of $N\inN$ spline spaces: $S_{\mathbf{n}} = S_{n_1}+\cdots+S_{n_N}$, $\mathbf{n} = ({n_1,...,n_{N}})$ and $n_1<\cdots<n_N\inN$. From now on, any spline will be assumed to be a cardinal spline unless stated otherwise. It is worth noting that, in the case of consecutive spaces specified by $n_k = n_1+(k-1)$, the resulting space is exactly the space of piecewise polynomials of degree $n_N$ that are in $C^{n_{1}-1}(\mathbb{R})$, the space of functions with $(n_{1}-1)$ continuous derivatives (see Proposition \ref{consecutiveMS}). Some multi-spline spaces have proved to be of great interest for derivative sampling, where the goal is to reconstruct a signal from the samples of the function and of its first-order derivative. 
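Derivative sampling in the consecutive case $S_2+S_3$ can be illustrated numerically with the classical centered cubic Hermite pair; a sketch (the code names $h_1$, $h_2$, and the helper functions are ours):

```python
def h1(t):
    """Cubic Hermite 'value' basis function, supported on [-1, 1]."""
    a = abs(t)
    return (1 + 2 * a) * (1 - a) ** 2 if a < 1 else 0.0

def h2(t):
    """Cubic Hermite 'derivative' basis function, supported on [-1, 1]."""
    a = abs(t)
    return t * (1 - a) ** 2 if a < 1 else 0.0

def hermite_interp(fs, dfs, x):
    """Reconstruct f(x) from integer samples fs[k] = f(k), dfs[k] = f'(k):
    f(x) = sum_k f(k) h1(x - k) + f'(k) h2(x - k)."""
    return sum(fk * h1(x - k) + dfk * h2(x - k)
               for k, (fk, dfk) in enumerate(zip(fs, dfs)))

# S_2 + S_3 (consecutive degrees) contains every cubic polynomial, so a
# global cubic is recovered exactly from its samples and derivative samples.
p = lambda x: x ** 3 - 2 * x + 1
dp = lambda x: 3 * x ** 2 - 2
fs = [p(k) for k in range(5)]
dfs = [dp(k) for k in range(5)]
approx = hermite_interp(fs, dfs, 2.3)   # equals p(2.3) up to round-off
```

The reconstruction is exact on each unit interval because cubic Hermite interpolation with endpoint values and derivatives reproduces cubics.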
We should mention the well-known bicubic Hermite splines $(h_1,h_2)$, first introduced by Schoenberg and Lipow in \cite{Lipow1973}. They constitute a basis of $S_2+S_3$ with the shortest support and provide the direct interpolation formula \begin{equation} \forall f\in S_2+S_3, \quad \forall x\inR:f(x) = \sum_{k\inZ}\left(f(k)h_1(x-k)+f^{'}(k)h_2(x-k)\right), \end{equation} where $f^{'}=f^{(1)}$ is the derivative of $f$. The excellent approximation capabilities and minimal-support property of the Hermite splines \cite{Fageot2020} give a strong incentive to investigate more general multi-spline spaces. The bicubic Hermite splines are the backbone of many computer-graphics applications and closely linked to B\'ezier curves \cite{Farouki2012,Uhlmann2016,Conti2015,Conti2016,Romani2020}. Schoenberg and Lipow also found two fundamental functions to reconstruct any function in $S_4+S_5$ from its samples and the samples of its first-order derivative. Nonetheless, those functions are not well-suited to practical applications since they are not compactly supported. \\Building on top of an impressive body of work from various communities, we propose a systematic study of shortest bases for any multi-spline space. In particular, the main goal is to generalize the concept of B-splines to any multi-spline space. \\ \\The paper is organized as follows: in Section \ref{sec:Formulation}, we formulate the problem in the framework of finitely generated shift-invariant spaces. We then state the properties that relevant generating functions should satisfy. In Section \ref{sec:Theory}, we show that the conditions imposed can only be met if the sum of the support of the generating functions is large enough. In Section \ref{sec:Multispline}, we present a method to construct shortest-support bases for any multi-spline space. 
This has important implications in practice, which we illustrate in Section \ref{sec:Applications} where we give practical examples to implement generalized sampling with the new set of functions, including interpolation, derivative sampling, and a new way to envision Bézier curves. \section{Formulation of the Problem} \label{sec:Formulation} Let $\boldsymbol{\phi} = (\phi_1,\phi_2,\ldots,\phi_N)$ be a finite collection of functions in $ L_2(\mathbb{R})$, Lebesgue's space of square-integrable functions. The integer-shift-invariant subspace of $\LDR$ generated by $\boldsymbol{\phi}$ is denoted by $\shiftspace{\boldsymbol{\phi}}$ and is defined as \begin{equation} \shiftspace{\boldsymbol{\phi}} = \shiftspace{\phi_1} + \shiftspace{\phi_2} + \cdots+ \shiftspace{\phi_N} ,\end{equation} where \begin{equation} \shiftspace{\phi_n}=\overline{\mathrm{Span}}\left(\{\phi_n(\cdot-k)\}_{k\in\mathbb{Z}}\right) \subseteq L_2(\mathbb{R}),\quad n=1,\ldots,N. \end{equation} We shall not restrict ourselves to multi-spline spaces for now and rather consider finitely generated integer-shift-invariant spaces. To formulate the problem, we recall three properties of $\boldsymbol{\phi}$ that have been imposed in previous works for practical applications. Multi-spline spaces will then naturally stand out as practical and important reconstruction spaces (Sections III and IV). 
\subsection{Riesz Basis} \begin{definition} The set of functions $\{\phi_n(\cdot-k):k\inZ, n=1,\ldots ,N\} \subset L_2(\mathbb{R})$ is said to be a Riesz basis with bounds $A, B\inR$, $0<A\leq B<+\infty$, if, for any vector of square-summable sequences $\boldsymbol{c} = (c_1,...,c_N)\inlDZN$, we have that \begin{equation} A \norm{\boldsymbol{c}}_{\ell_2}\leq \norm{\sum_{k\inZ}\boldsymbol{c}[k]^T \boldsymbol{\phi}(\cdot-k)}_{\LDR}\leq B \norm{\boldsymbol{c}}_{\ell_2} , \end{equation} where $ \|{\boldsymbol{c}}\|_{\ell_2}=\left( \sum_{n=1}^N \|{c}_n\|_{\ell_2}^2\right)^{\frac{1}{2}}$, $\boldsymbol{\phi} = (\phi_1,\phi_2,\ldots,\phi_N)$, and where $A$ and $B$ are the tightest such constants. \end{definition} When this property is satisfied, we say that $\boldsymbol{\phi}$ generates a Riesz basis. The Riesz-basis property guarantees that any $f\in \shiftspace{\boldsymbol{\phi}}$ has the unique and stable representation \cite{Christensen2016} \begin{equation} f(\cdot) = \sum_{k\inZ}\boldsymbol{c}[k]^T \boldsymbol{\phi}(\cdot-k) = \sum_{k\inZ}\sum_{n=1}^N c_n[k] \phi_n(\cdot-k) . \end{equation} This property is well characterized in the Fourier domain via the Gramian matrix-valued function \begin{equation} \label{eq:gramian} \hat{\boldsymbol{G}}(\omega) = \sum_{k\inZ}\hat{\boldsymbol{\phi}}(\omega+2k\pi)\hat{\boldsymbol{\phi}}(\omega+2k\pi)^{H} = \sum_{k\inZ}\dotprod{\boldsymbol{\phi}}{\boldsymbol{\phi}^{T}(\cdot -k)}\ee^{-\jj\omega k} ,\end{equation} where the inner product is defined as $\dotprod{f}{g} = \int_{\mathbb{R}}f(t)g^*(t)\dint t$, ${}^*$ denotes the complex conjugate, and ${}^{H}$ the conjugate transpose. Equality (\ref{eq:gramian}) follows from Poisson's formula applied to the sampling at the integers of the matrix-valued autocorrelation function $t\mapsto \dotprod{\boldsymbol{\phi}}{\boldsymbol{\phi}^{T}(\cdot -t)} = (\boldsymbol{\phi}*\boldsymbol{\phi}^{H\vee})(t)$ \cite{Unser2014}.
The Fourier equivalent of the Riesz-basis condition is \cite{Aldroubi1996} \begin{equation} \label{eigenvalueRB} 0<A^2=\essinf_{\omega \in [0,2\pi)}\lambda_{\min}(\omega)\leq \esssup_{\omega \in [0,2\pi)}\lambda_{\max}(\omega) = B^2<+\infty ,\end{equation} where $\lambda_{\min}(\omega)$ and $\lambda_{\max}(\omega)$ are the smallest and largest eigenvalues of $\hat{\boldsymbol{G}}(\omega)$. \subsection{Reproducing Polynomials} \begin{definition} The space $\shiftspace{\boldsymbol{\phi}}$ is said to reproduce polynomials of degree up to $M$ if, for all $m = 0,1,...,M$, there exist vector sequences $\boldsymbol{c}_m$ (not necessarily in $(\lDZ)^N$) such that\footnote{For $m=0$, we use in (\ref{eq:polyrepro}) the convention that $x^m = 1$, including for $x=0$.} \begin{equation} \label{eq:polyrepro} \forall x \inR,\quad x^m = \sum_{k\inZ}\boldsymbol{c}_m[k]^T \boldsymbol{\phi}(x-k) . \end{equation} \end{definition} Strang and Fix showed that the property of the reproduction of polynomials of degree up to $M$ is directly linked to the approximation power of the reconstruction space \cite{Strang1971}. More precisely, let \begin{equation} \mathrm{S}_h(\boldsymbol{\phi}) = \{f(\cdot/h):f\in \shiftspace{\boldsymbol{\phi}}\} \end{equation} be the $h$-dilate of $\shiftspace{\boldsymbol{\phi}}$. The space $\shiftspace{\V \phi}$ is said to have an approximation power of order $M$ if any sufficiently smooth and decaying function can be approximated by an element of $\mathrm{S}_h(\V \phi)$ with an error decaying as $O(h^{M})$. The so-called ``Strang-Fix conditions'' give sufficient conditions for a space to have an approximation power of order $M$ \cite{Fageot2020,DeBoor1998,Unser1997}. In particular, for compactly supported and integrable generating functions, it is sufficient that the space $\shiftspace{\V \phi}$ reproduces polynomials of degree up to $(M-1)$.
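These reproduction properties are easy to verify numerically for B-splines; a sketch (our code) that evaluates the causal B-spline through the standard two-term recurrence for uniform B-splines and checks the reproduction of constants and of the monomial $x$ by the cubic B-spline:

```python
def bspline(n, x):
    """Causal B-spline of degree n, supported on [0, n+1], via the standard
    recurrence beta^n(x) = (x*beta^{n-1}(x) + (n+1-x)*beta^{n-1}(x-1)) / n."""
    if n == 0:
        return 1.0 if 0 <= x < 1 else 0.0
    return (x * bspline(n - 1, x) + (n + 1 - x) * bspline(n - 1, x - 1)) / n

x = 1.7
# Degree-0 reproduction (partition of unity): sum_k beta^3(x - k) = 1.
unity = sum(bspline(3, x - k) for k in range(-3, 5))
# Degree-1 reproduction: the causal B-spline is centered at (n+1)/2 = 2,
# so the coefficients c_1[k] = k + 2 reproduce the monomial x.
linear = sum((k + 2) * bspline(3, x - k) for k in range(-3, 5))
```

Both sums agree with the reproduced polynomials up to round-off, in line with the Strang-Fix conditions.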
A straightforward implication is that the spline space $S_n$ has an approximation power of order $(n+1)$ since \begin{enumerate}[label=(\roman*)] \item it can reproduce polynomials of degree up to $n$; \item it can be generated by the compactly supported function $\beta_+ ^n$. \end{enumerate} The multi-spline space $S_{n_1}+\cdots+S_{n_N}$ inherits the highest approximation power of its spline spaces. Its approximation power is $(n_N+1)$, since $S_{n_N}\subset S_{n_1}+\cdots+S_{n_N}$. \subsection{Compact Support} The evaluation of $f\in \shiftspace{\boldsymbol{\phi}}$ at a given $x\inR$ from its discrete representation $\boldsymbol{c}\inlDZN$ requires a number of computations more or less proportional to the support size of $\boldsymbol{\phi}$. So, ideally, we want to minimize the support of $\boldsymbol{\phi}$ while maintaining a good approximation power \cite{Blu2001}. The support of a function $f\in \LDR$ is written as $\supp{f}=\overline{\{x\inR:f(x)\neq 0\}}$. If it is a compact subset of $\mathbb{R}$, then the support size is defined as $ \suppsize{f} =\int_{\mathbb{R}}\mathbbm{1}_{\supp{f}}(t)\dint t $, where $\mathbbm{1}_{\supp{f}}$ is the indicator function of $\supp{f}$. For a finite collection of compactly supported functions $\V \phi = (\phi_{1},...,\phi_{N}),$ the natural extension for the support size is \begin{equation} \suppsize{\boldsymbol{\phi}} = \sum_{n=1}^N \suppsize{\phi_n}. \end{equation} In Section \ref{sec:Theory}, we present theoretical results that clarify the relation between the desired properties. \section{Shortest Bases} \label{sec:Theory} For a single generator $\phi$ such that $\shiftspace{\phi}$ reproduces polynomials of degree up to $M$, Schoenberg stated that $\suppsize{\phi}\geq M+1$ \cite{Schoenberg1973}. The result was proved in \cite{Aziznejad2019} for $N=2$. We now extend the proof to any $N\inN\setminus \{0\}$. 
\begin{theorem}[Minimal support] \label{MinimalSupport} If $\shiftspace{\boldsymbol{\phi}} = \shiftspace{\phi_1,\phi_2,\ldots,\phi_N}$ reproduces polynomials of degree up to $M$, then $\suppsize{\boldsymbol{\phi}} \geq M+1$. In addition, if there is equality, then \begin{equation} \label{minimalsupportcharacterization} \sum_{k\inZ} \sum_{n=1}^N \mathbbm{1}_{\supp{\phi_n}}(x+k) = \suppsize{\boldsymbol{\phi}} \quad \text{for almost every } x\inR. \end{equation} \end{theorem} \begin{proof} If $\boldsymbol{\phi}$ is not compactly supported, then the inequality is clear. Now, we can assume that $\boldsymbol{\phi}$ is compactly supported. This implies that, for any $x\inR$, the sum $\sum_{k\inZ}\boldsymbol{c}[k]^T \boldsymbol{\phi}(x-k) = \sum_{k\inZ} \sum_{n=1}^N c_n[k]\phi_n(x-k)$ has only a finite number of nonzero terms that are identified by the set \begin{equation} \Lambda(x)= \left\{(n,k)\in \{1,\ldots,N\}\times \mathbb{Z} : \quad x \in \supp{\phi_n(\cdot-k)}\right \} , \end{equation} and its cardinality \begin{equation} \label{eq:cardinality} \lambda(x) = \#(\Lambda(x))=\sum_{k\inZ} \sum_{n=1}^N \mathbbm{1}_{\supp{\phi_n}}(x+k) \inN . \end{equation} Equation (\ref{eq:cardinality}) follows from the fact that $\mathbbm{1}_{\supp{\phi_n}}(x+k)$ is 1 if and only if $(n,k)\in \Lambda (x)$ and 0 otherwise. The function $x\mapsto \lambda(x)$ is 1-periodic and bounded because $\supp{\phi_n}$ are compact subsets of $\mathbb{R}$. Its average over one period reads (note that the sums are in fact all finite) \begin{align} \label{eq:fubini} \overline{\lambda} &= \int_{0}^1 \sum_{n=1}^N \sum_{k\inZ} \mathbbm{1}_{\supp{\phi_n}}(x+k) \dint x = \sum_{n=1}^N \sum_{k\inZ} \int_{0}^1 \mathbbm{1}_{\supp{\phi_n}}(x+k) \dint x \\ &= \sum_{n=1}^N \int_{-\infty}^{\infty} \mathbbm{1}_{\supp{\phi_n}}(x) \dint x = \suppsize{\boldsymbol{\phi}} , \end{align} where we applied Fubini's Theorem in (\ref{eq:fubini}). 
Because $\lambda$ is bounded and takes values in $\mathbb{N}$, it only takes a finite number of values. Consequently, there exists a set $\mathrm{A}\subset [0,1]$ of nonzero measure such that $\lambda$ is constant on A and no greater than its average, as in \begin{equation} \forall x\in A : \quad \lambda(x) = \lambda_{A} \leq \overline{\lambda} = \suppsize{\boldsymbol{\phi}} . \end{equation} The function $\#(\Lambda)$ restricted to A is constant, but this does not imply that $\Lambda$ is constant on A. Noting that A is bounded and that the $\phi_n$ are compactly supported, the image of A under $\Lambda$, denoted by $\Lambda(A)$, is a finite set. Therefore, there exists $\mathrm{B}\subset \mathrm{A}\subset[0,1]$ of nonzero measure such that $\Lambda$ is constant on B. This means that the set $\shiftspace{\boldsymbol{\phi}}_{|B}$ of functions of $\shiftspace{\boldsymbol{\phi}}$ restricted to $B$ is spanned by $\lambda_A$ functions $(\phi_n(\cdot-k))_{(n,k)\in \Lambda(B)}$. \newline Moreover, due to the reproducing property, the polynomials of degree up to $M$ restricted to B form a linear subspace of $\shiftspace{\boldsymbol{\phi}}_{|B}$ whose dimension is $(M+1)$, because B is infinite. Then, we must have that $\lambda_A\geq M+1$ and, since $\lambda_A\leq \suppsize{\boldsymbol{\phi}}$, we deduce the announced bound $\suppsize{\V \phi}\geq M+1$. \newline If $\lambda$ is not $a.e.$ constant, then $\mathrm{A}$ can be chosen so that $\lambda_A<\overline{\lambda} = \suppsize{\boldsymbol{\phi}}$ and $\shiftspace{\boldsymbol{\phi}}_{|B}$ is spanned by fewer than $\suppsize{\boldsymbol{\phi}}$ functions. The reproduction property implies that $\suppsize{\boldsymbol{\phi}} > M+1$. This means that the equality $\suppsize{\boldsymbol{\phi}} = M+1$ is possible only if $\lambda$ is $a.e.$ constant. \end{proof} Following Theorem \ref{MinimalSupport}, we can introduce the central notion of shortest-support basis. 
\begin{definition} A collection of functions $\boldsymbol{\phi}\in (\LDR)^N$ is said to be a shortest-support basis of degree $M$ if $\shiftspace{\boldsymbol{\phi}}$ reproduces polynomials of degree up to $M$ with the shortest support, \textit{i.e.}, with $\suppsize{\boldsymbol{\phi}} = M+1$. \end{definition} The qualifier of basis comes from Theorem \ref{MinimalSupport_RB}. \begin{theorem}[Shortest support and Riesz basis] \label{MinimalSupport_RB} Any shortest-support basis generates a Riesz basis. \end{theorem} Before proving the theorem, we define the $k$th slice of any function $f$ as \begin{equation} \forall x \inR : \quad \mathrm{S}_k\{f\}(x) = \begin{cases}f(x+k),&x\in[0,1)\\0,& \text{otherwise,}\end{cases} \end{equation} and the set of nonzero slices of all the generating functions as \begin{equation} \mathcal{T}(\boldsymbol{\phi}) = \{\mathrm{S}_k\{\phi_n\} : \mathrm{S}_k\{\phi_n\} \not\equiv 0 \textnormal{ and }k\inZ, n=1,...,N\} .\end{equation} The proof will also invoke Lemma \ref{LemmaRB}. \begin{lemma} \label{LemmaRB} Let $\boldsymbol{\phi}\in (\LDR)^{N}$ be compactly supported. If $\mathcal{T}(\boldsymbol{\phi})$ is a set of linearly independent functions, then $\boldsymbol{\phi}$ generates a Riesz basis. \end{lemma} \begin{proof} The generating functions can be expressed in terms of their slices as $\phi_n(x) = \sum_{k\inZ}\mathrm{S}_k\{\phi_n\}(x - k)$.
The Riesz-basis property is best characterized in the Fourier domain with the Gramian matrix (note that, $\boldsymbol{\phi}$ being compactly supported, all the sums are in fact finite), which leads to \begin{align} (\hat{\boldsymbol{G}}(\omega))_{mn} &= \sum_{q\inZ}\dotprod{\phi_m}{\phi_n(\cdot -q)}\ee^{-\jj\omega q} \nonumber \\ &=\sum_{q\inZ}\sum_{k_1\inZ}\sum_{k_2 \inZ}\dotprod{\mathrm{S}_{k_1}\{\phi_m\}}{\mathrm{S}_{k_2}\{\phi_n\}(\cdot - q - (k_2-k_1))}\ee^{-\jj\omega q} \nonumber\\ &=\sum_{k_1\inZ}\sum_{k_2 \inZ}\dotprod{\mathrm{S}_{k_1}\{\phi_m\}}{\mathrm{S}_{k_2}\{\phi_n\}}\ee^{\jj\omega(k_2-k_1)}&&\text{if $q\neq (k_1-k_2)$, the inner product vanishes} \nonumber \\ &=\dotprod{\sum_{k_1 \inZ}\mathrm{S}_{k_1}\{\phi_m\}\ee^{-\jj\omega k_1}}{\sum_{k_2 \inZ}\mathrm{S}_{k_2}\{\phi_n\}\ee^{-\jj\omega k_2}} \nonumber \\ &=\dotprod{\tilde{\phi}_m(\omega,\cdot)}{\tilde{\phi}_n(\omega,\cdot)}, \end{align} where $\Tilde{\phi_n}(\omega,\cdot)$ is the finite weighted sum of slices \begin{equation} \Tilde{\phi_n}(\omega,x) = \sum_{k \inZ}\mathrm{S}_{k}\{\phi_n\}(x)e^{-j\omega k} . \end{equation} If, now, $\mathcal{T}(\boldsymbol{\phi})$ is a set of linearly independent functions, then, for any $\omega \inR$, the functions $(\Tilde{\phi_n}(\omega,\cdot))_{n=1,...,N}$ are linearly independent because the sums are finite. This means that $\hat{\boldsymbol{G}}(\omega)$ is the Gramian matrix of a linearly independent family of functions, which is known to be equivalent to $\det \hat{\boldsymbol{G}}(\omega) > 0$. In addition $g:\omega \mapsto \det(\hat{\boldsymbol{G}}(\omega))$ is a finite weighted sum of $\ee^{\jj\omega k}$ since $\boldsymbol{\phi}$ is compactly supported. It is therefore continuous and $2\pi$-periodic. 
The image of $[0,2\pi]$ under $g$ is therefore a closed interval such that \begin{equation} 0<\essinf_{\omega\in [0,2\pi]} \det(\hat{\boldsymbol{G}}(\omega))=\min_{\omega\in [0,2\pi]} \det(\hat{\boldsymbol{G}}(\omega))\leq\max_{\omega\in [0,2\pi]} \det(\hat{\boldsymbol{G}}(\omega))=\esssup_{\omega\in [0,2\pi]} \det(\hat{\boldsymbol{G}}(\omega))<+\infty. \end{equation} Noting that $\det(\hat{\boldsymbol{G}}(\omega))$ is the product of the eigenvalues of $\hat{\boldsymbol{G}}(\omega)$, Condition (\ref{eigenvalueRB}) is satisfied, which means that $\boldsymbol{\phi}$ generates a Riesz basis. \end{proof} Note that the converse of Lemma \ref{LemmaRB} is not necessarily true. For a counterexample, consider the function in (\ref{eq:counterexample}), made of two side-by-side rectangles of different heights, so that \begin{equation} \label{eq:counterexample} \forall x \inR:\phi(x) = \begin{cases}1,& x\in[0,1)\\ \alpha,& x \in [1,2)\\0, & \text{otherwise.}\end{cases} \end{equation} In this case, with a single generator, the Gramian matrix is just a scalar and reads $\hat{g}(\omega) = (1+\alpha^2)+2\alpha \cos \omega,$ which satisfies, for any $\omega\inR$, that \begin{equation} (1-\abs{\alpha})^2\leq\abs{\hat{g}(\omega)}\leq (1+\abs{\alpha})^2. \end{equation} So, for $\abs{\alpha} \neq 1$, $\phi$ generates a Riesz basis with bounds $A=\abs{1-\abs{\alpha}}$ and $B=(1+\abs{\alpha})$. Yet, $\mathcal{T}(\boldsymbol{\phi})$ is clearly not a set of linearly independent functions, since the second slice is a scaled version of the first one. For a more practical counterexample, see \cite[Proposition 2.2.]{Antonelli2014}. \\ \begin{lemma} \label{lm:sliceslinearlyindep} Let $\V \phi \in (\LDR)^{N}$. If $\V \phi$ is a shortest-support basis, then $\mathcal{T}(\boldsymbol{\phi})$ is a set of linearly independent functions.
\end{lemma} \begin{proof} It is equivalent to prove the contrapositive of the lemma, which states that if $\mathcal{T}(\boldsymbol{\phi})$ is not a set of linearly independent functions, then $\V \phi$ is not a shortest-support basis. To that end, suppose that $\mathcal{T}(\boldsymbol{\phi})$ is not a set of linearly independent functions. This means that one can find a slice, say $\mathrm{S}_{k_0}\{\phi_{q_0}\}$, that depends linearly on the others. Now, consider the integer-shift-invariant space generated by the set of functions $\mathcal{T}(\boldsymbol{\phi}) \backslash \{\mathrm{S}_{k_0}\{\phi_{q_0}\}\}$. Note that the new generating functions differ now both in size (support size of at most 1) and in number (possibly greater than $N$). On the one hand, the new integer-shift-invariant space is larger than the initial space and, in particular, is still able to reproduce polynomials of degree up to $M$. On the other hand, the sum of the support sizes of the generating functions is smaller than $\suppsize{\boldsymbol{\phi}}$ because a nonzero slice was removed. So, $\boldsymbol{\phi}$ cannot be of minimal support. \end{proof} We can now prove Theorem \ref{MinimalSupport_RB}. \begin{proof}[Proof of Theorem \ref{MinimalSupport_RB}] Let $\V \phi \in (\LDR)^{N}$ be compactly supported. By contraposition, if it does not generate a Riesz basis, then $\mathcal{T}(\boldsymbol{\phi})$ is not a set of linearly independent functions (Lemma \ref{LemmaRB}). Then, by Lemma \ref{lm:sliceslinearlyindep}, $\boldsymbol{\phi}$ cannot be of minimal support. \end{proof} To conclude this section, we present two results for finitely generated integer-shift-invariant spaces, in preparation for a characterization of multi-spline spaces (Theorem \ref{prop:MN}).
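The two-rectangle counterexample~\eqref{eq:counterexample} is also easy to check numerically; a sketch (our code), assuming $\alpha = 1/2$:

```python
import math

ALPHA = 0.5

def phi(x):
    """Two side-by-side rectangles of heights 1 and ALPHA on [0, 2)."""
    if 0 <= x < 1:
        return 1.0
    if 1 <= x < 2:
        return ALPHA
    return 0.0

def gramian(omega):
    """Scalar Gramian g(w) = <phi, phi> + 2 <phi, phi(. - 1)> cos(w)
    = (1 + ALPHA^2) + 2 * ALPHA * cos(w) for this compactly supported phi."""
    return (1 + ALPHA**2) + 2 * ALPHA * math.cos(omega)

grid = [2 * math.pi * i / 1000 for i in range(1000)]
g_min = min(gramian(w) for w in grid)
g_max = max(gramian(w) for w in grid)
```

On the grid, $\hat g$ stays within $[(1-\alpha)^2,(1+\alpha)^2] = [0.25, 2.25]$, confirming the Riesz bounds even though the two unit slices of $\phi$ are proportional.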
The unit sample sequence is written $\delta[\cdot]$ and is defined by $\delta [k] = \begin{cases}1, & k=0\\0, & k\neq 0\end{cases}$, and its matrix version $\V \delta_{N\times N}$ is defined by $(\V \delta_{N\times N})_{pq}[\cdot] = \begin{cases}\delta[\cdot], & p=q\\0, & p\neq q\end{cases}$. \begin{lemma} \label{lm:convo} Let $N,M\inN$, $\V C \in (\mathbb{R}^{\mathbb{Z}})^{N\times M}$, $\V B \in (\mathbb{R}^{\mathbb{Z}})^{M\times N}$. If the sequence of matrices $\V B$ is compactly supported and $\V C*\V B = \V \delta_{N\times N}$, then $M\geq N$. \end{lemma} \begin{proof} There exists $s\inN$ such that $\supp{\V B}\subset \{-s,...,s\}\subset \mathbb{Z}$. The behavior of $\V C[k]$ when $\abs{k}\rightarrow \infty$ is not known, and it is easier to work with the truncated version $\V C_{m}= \mathbbm{1}_{\{-s,...,ms\}} \times \V C $, where $m\inN$ is a large enough integer, $m>2N+1$. The sequence of matrices $\V C_{m}*\V B$ is compactly supported and satisfies $\supp{\V C_{m}*\V B}\subset \{-2s,...,(m+1)s\}$. Following the properties of convolution of compactly supported sequences, we have, for any $k=0,...,(m-1)s$, that $\V C_{m}*\V B[k]=\V C*\V B[k] = \V \delta_{N\times N}[k]$. Therefore, one can write that \begin{equation} \V C_{m}*\V B= \V \delta_{N\times N}+\sum_{\substack{-2s\leq k < 0\\(m-1)s+1 \leq k \leq (m+1)s}}\mathbf{M}_{k}\delta[\cdot-k], \end{equation} where $\mathbf{M}_{k}\in \mathbb{R}^{N\times N}$ are matrices that account for the fact that $\V C_{m}$ is a truncated version of $\V C$.
This then translates into the following relation between matrix z-transforms, here written with the convention $\hat{\V B}(z)=\sum_{k\inZ}\V B[k]z^{k}$ (all sequences are compactly supported, so the z-transforms are well-defined Laurent polynomials): \begin{equation} \hat{\V C}_{m}(z)\hat{\V B}(z) = \mathbf{I}_{N\times N}+\sum_{\tiny \substack{-2s\leq k < 0\\(m-1)s < k \leq (m+1)s}}z^{k} \mathbf{M}_{k} =\mathbf{I}_{N\times N}+\sum_{k=-2s}^{-1}z^{k} \mathbf{M}_{k}+\sum_{k=(m-1)s+1}^{(m+1)s}z^{k} \mathbf{M}_{k}= z^{-2s} \V A(z) , \end{equation} where $\V A(z)$ can be decomposed as \begin{equation} \V A(z) = z^{2s} \mathbf{I}_{N\times N}+\V P(z) +z^{(m+1)s+1}\V Q(z), \end{equation} where $\V P(z)$ and $\V Q(z)$ are polynomial matrices of degree at most $(2s-1)$. The determinant of $\V A(z)$ can be expressed in terms of the columns of $\mathbf{I}_{N\times N}, \V P(z)$, and $\V Q(z)$ (denoted respectively $\mathbf{e}_k,\V p_k(z)$, and $\V q_k(z)$), so that \begin{equation} z\mapsto \det \V A(z) = \det(z^{2s} \mathbf{e}_1+\V p_1(z)+z^{(m+1)s+1} \V q_1(z),\ldots,z^{2s} \mathbf{e}_N+\V p_N(z)+z^{(m+1)s+1} \V q_N(z)) . \end{equation} Knowing that the determinant is $N$-linear with respect to the columns, $z\mapsto \det \V A(z)$ is a polynomial function of degree at most $(m+3)sN$. We now want to prove that it cannot be identically zero. To that end, we expand the determinant with respect to the columns and find that there is a unique term of the form $\lambda z^{2sN}$. It is obtained by picking for $k=1,\ldots,N$ the column $z^{2s}\mathbf{e}_k$. The coefficient in front of $z^{2sN}$ is therefore $\det(\mathbf{e}_1,\ldots, \mathbf{e}_N) = 1 \neq 0$. Indeed, for any other combination of columns in the expansion, we have that \begin{itemize} \item if at least one column of the form $z^{(m+1)s+1}\V q_k(z)$ is chosen, then it results in a term of degree at least $(m+1)s+1>(2N+2)s>2sN$; \item else, at least one column of the form $\V p_k(z)$ is chosen. Since the degree of $\V p_k(z)$ is lower than $2s$, the resulting term in the expansion has a degree lower than $2sN$.
\end{itemize} In the end, we proved that $z\mapsto \det \V A(z)$ cannot be identically zero. Therefore, there exists $z_{0}\inR$ so that $\mathrm{rank}(\V A (z_{0}))=N$. This implies that $N = \mathrm{rank}(\hat{\V C}_{m}(z_{0})\hat{\V B}(z_{0}))\leq \min(\mathrm{rank}(\hat{\V C}_{m}(z_{0})),\mathrm{rank}(\hat{\V B}(z_{0})))\leq \min(M,N)\leq M$. \end{proof} \begin{lemma} \label{lm:MinimalGeneratingFunctions} Let $\V \psi \in (\LDR)^M$ and $\V \eta \inLDRN$ be two collections of compactly supported functions that are able to reproduce each other (the reproducing sequences might not be in $\lDZ$). If $\V\eta$ is a shortest-support basis, then $M\geq N$. \end{lemma} \begin{proof} By hypothesis, there exist vector sequences $\V c_p \in (\mathbb{R}^{\mathbb{Z}})^M$ such that $\eta_p = \sum_{k\inZ}\V c_p[k]^T\boldsymbol{\psi}(\cdot-k) = \V {c}_p^T*\boldsymbol{\psi}$, which reads in matrix form \begin{equation} \V\eta = \V C*\V \psi, \quad \V C \in (\mathbb{R}^{\mathbb{Z}})^{N\times M} . \end{equation} Similarly, one can write that \begin{equation} \V\psi= \V B*\V \eta, \quad \V B \in (\mathbb{R}^{\mathbb{Z}})^{M\times N} . \end{equation} From Lemma \ref{lm:sliceslinearlyindep}, we know that the nonzero slices of $\V \eta$ are linearly independent (shortest-support basis). This implies that, to generate the compactly supported functions $\V \psi$, the sequence of matrices $\V B$ must be compactly supported as well: the only way for the shifts of $\V \eta$ to produce the zero function on an interval is to set the corresponding coefficients of $\V B$ to $0$. Now, one can combine the two equations and find that \begin{equation} \V\eta = \V C*(\V B*\V \eta) = (\V C*\V B)*\V \eta . \end{equation} The associativity of the convolution operations is justified by the fact that both $\V \eta$ and $\V B$ are compactly supported, meaning that, for a given argument $x$, all sums are finite.
Because the slices of $\V \eta$ are linearly independent, $\V \eta$ can reproduce itself in a unique way, which gives \begin{equation} \label{eq:convolemma} \V C*\V B= \V \delta_{N\times N} . \end{equation} We can now conclude that $M\geq N$ with Lemma \ref{lm:convo}. \end{proof} \section{Multi-Spline Shortest Bases} \label{sec:Multispline} With a single generator, the unique shortest basis of degree $n\inN$ (up to scaling and shifting) is the B-spline of degree $n$, which is a generator of $S_n$. For multiple generators, it is natural to consider spaces generated by a finite number of B-splines $\boldsymbol{\beta}_{\mathbf{n}} = \left(\beta_+^{n_1},..., \beta_+^{n_N}\right)$, where $\mathbf{n} = (n_1,\ldots,n_N)$ and $n_1<\ldots<n_N$. In this way, the reproducing and approximation properties are inherited from the highest-degree spline $\beta_+^{n_N}$. Yet, multi-spline spaces are not generated optimally by the classical B-splines. \begin{proposition} Let $N\inN\setminus{\{0\}}$ and $\mathbf{n} = (n_1,\ldots,n_N)$ with $n_1<\cdots <n_N\inN$. If $N>1$, then $\boldsymbol{\beta}_{\mathbf{n}} = \left(\beta_+^{n_1},..., \beta_+^{n_N}\right)$ is neither a shortest-support basis nor a Riesz basis. \end{proposition} \begin{proof} \begin{itemize}[leftmargin=*] \item The space $\shiftspace{\boldsymbol{\beta}_{\mathbf{n}}}$ can reproduce polynomials of degree at most $n_N$ due to the inclusion $\shiftspace{\beta_+^{n_N}}\subset \shiftspace{\boldsymbol{\beta}_{\mathbf{n}}}$. Moreover, the sum of the support sizes of $\boldsymbol{\beta}_{\mathbf{n}}$ is $\sum_{m=1}^N (n_m+1)>n_N+1$, which shows that the basis is not a shortest-support one.
\item From the proof of Lemma \ref{LemmaRB}, the Gramian matrix can be written \begin{align} (\hat{\boldsymbol{G}}(\omega))_{pq}=\dotprod{\Tilde{\beta}^{n_p}(\omega,\cdot)}{\Tilde{\beta}^{n_q}(\omega,\cdot)}, \end{align} where $\Tilde{\beta}^{n_p}(\omega,\cdot)$ is the finite weighted sum of slices \begin{equation} \Tilde{\beta}^{n_p}(\omega,x) = \sum_{k \inZ}\mathrm{S}_{k}\{\beta^{n_p}\}(x)\ee^{-\jj\omega k} .\end{equation} It is known that $\beta_+^{n_p}$ satisfies the partition of unity, meaning that, for any $x\inR, \sum_{k\inZ}\beta_+^{n_p}(x-k) = 1$. In terms of slices, it means that $\Tilde{\beta}^{n_p}(0,x) =\sum_{k \inZ}\mathrm{S}_{k}\{\beta^{n_p}\} (x)= \mathbbm{1}_{[0,1)}(x)$. The functions $(\Tilde{\beta}^{n_p}(0,\cdot))_{p=1,...,N}$ are therefore not linearly independent (because they are all equal) and $\det \hat{\boldsymbol{G}}(0) = 0$. As stated in the proof of Lemma \ref{LemmaRB}, $\omega \mapsto \det \hat{\boldsymbol{G}}(\omega)$ is a continuous function (because the B-splines are compactly supported), meaning that \begin{equation} \essinf_{\omega \in [0,2\pi]}{\det \hat{\boldsymbol{G}}(\omega)} = \min_{\omega \in [0,2\pi]}{\det \hat{\boldsymbol{G}}}(\omega) = 0. \end{equation} Following (\ref{eigenvalueRB}), $\boldsymbol{\beta}_{\mathbf{n}}$ cannot be a Riesz basis. \end{itemize} \end{proof} For $N>1$, only a few shortest bases are known, the most prominent being the Hermite splines presented by Lipow and Schoenberg \cite{Lipow1973}. They are the solutions of the direct interpolation problem \begin{equation} \label{eq:Hermite} \text{find } \eta_p\in S_{n,N}: \eta_p^{(\nu)}(k) = \begin{cases}1,& \text{if } \nu=p \text{ and } k=0,\\0,&\text{otherwise,}\end{cases} \end{equation} with $k\inZ, \quad \nu,p = 0,...,(N-1)$, and $S_{n,N} = S_{n}+\cdots+S_{n+N-1}$. The function $\eta_p$ has all its derivatives up to order $(N-1)$ set to zero at the integers, except for the $p$th derivative, which equals one at zero.
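For illustration, in the smallest nontrivial case $n=N=2$, the solutions of \eqref{eq:Hermite} are the cubic Hermite splines. A brief numerical check of the interpolation conditions, using their classical closed-form expressions (assumed here), reads:

```python
def eta0(x):
    """Cubic Hermite spline for values: eta0(0) = 1, eta0'(0) = 0."""
    a = abs(x)
    return (1 - a) ** 2 * (1 + 2 * a) if a <= 1 else 0.0

def eta1(x):
    """Cubic Hermite spline for derivatives: eta1(0) = 0, eta1'(0) = 1."""
    a = abs(x)
    return x * (1 - a) ** 2 if a <= 1 else 0.0

def deriv(f, x, h=1e-6):
    """Central finite difference, accurate enough to test the conditions."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Interpolation conditions eta_p^{(nu)}(k) = 1 iff nu = p and k = 0
for k in (-1, 0, 1):
    assert abs(eta0(k) - (1.0 if k == 0 else 0.0)) < 1e-9
    assert abs(eta1(k)) < 1e-9
    assert abs(deriv(eta0, k)) < 1e-5
    assert abs(deriv(eta1, k) - (1.0 if k == 0 else 0.0)) < 1e-5
```

Both functions are supported on $[-1,1]$, so the total support size is $4 = n_N + 1$ with $n_N = 3$, consistent with a shortest-support basis of $S_2+S_3$.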
The multi-spline space must be chosen so that $\eta_p$ is sufficiently differentiable, yielding the condition $n\geq N$. When $n=N$, shortest-support functions were found (the Hermite splines; see the plots in \cite{Ranirina2019}, for instance) but, unfortunately, in a higher-order approximation space, {\it i.e.}, for $n>N$, the functions are not compactly supported anymore. For instance, for derivative sampling (interpolation of $f$ and $f'$), the solution with the smallest order of approximation ($N=2$) is given by the cubic Hermite-spline generators of $S_2+S_3$. \subsection{Consecutive Multi-Spline Spaces} The derivatives up to order $(n-1)$ of a compactly supported spline of degree $n$ must vanish at the edges of the support. This constraint cannot be satisfied if the function is too short. In particular, the shortest nonzero function of $S_n$ has a support size of $(n+1)$ and, interestingly, it is precisely the B-spline of degree $n$. In the special case of a consecutive multi-spline space $S_{n,N}=S_{n}+S_{n+1}+\cdots+S_{n+N-1}$, this result can be directly extended. To that end, we define the space \begin{equation} P_{m}^{m'} = \{p \in C^{m'}(\mathbb{R}): \text{$p$ is a polynomial of degree $m$ on each } [k,k+1), k\inZ\} . \end{equation} Note that the space $P_{m}^{m'}$ can be viewed as a spline space with knots of multiplicity $(m-m')$ (\cite[Section 5.11]{Lachance1990}). In our setting with simple knots, $P_{m}^{m'}$ is rather regarded as a multi-spline space (Proposition \ref{consecutiveMS}). \begin{proposition} \label{consecutiveMS} Let $n,N>0$. Then $ S_{n,N}= P^{n-1}_{n+N-1}$. \end{proposition} \begin{proof}The definition of a spline of degree $n$ implies that, for $q=0,...,N-1$, we have that $S_{n+q}\subset P^{n-1}_{n+N-1}$, from which we deduce that $S_{n,N}\subset P^{n-1}_{n+N-1}$.\\ The other inclusion is proven by induction over $N$, with the induction hypothesis \begin{equation} H_{N}:\forall n\inN, P^{n-1}_{n+N-1}\subset S_{n,N}.
\end{equation} \begin{itemize} \item For $N=1$ and any $n\inN\setminus{\{0\}}$, the result is directly given by the definition of $S_{n,1}=S_{n} = P^{n-1}_{n}$. \item Suppose that $H_{N}$ holds for some $N\inN^*$. Let $p\in P^{n-1}_{n+N}$. We have that $p^{(n-1)}\in P^{0}_{N+1}$ and, consequently, $p^{(n)}$ is a piecewise polynomial function with finite jumps at the knots. There exists $f_{0}\in S_{0}$ that has the same jumps at the knots as $p^{(n)}$. Then, $(p^{(n)}-f_{0})$ is continuous at the integers, which implies that $(p^{(n)}-f_{0})\in P^{0}_{N}$. The induction hypothesis guarantees that $(p^{(n)}-f_{0})\in S_{1,N}$ and, therefore, that $p^{(n)}\in S_{0,N+1}$. After $n$ integrations, we finally have that $p\in S_{n,N+1}$, which concludes the induction step and the proof. \end{itemize} \end{proof} For a given $L\inN$, the space of functions in $P^{m'}_m$ that are supported in $[0,L]$ is a vector space of the known finite dimension \cite{Alfeld1985} \begin{equation} \dim(\{p\in P_m^{m'}:\supp{p}\subset [0,L]\}) = ((m-m')L-(m'+1))_+ , \end{equation} where $x_+=\max(0,x)$. Indeed, any $p\in P_m^{m'}$ supported in $[0,L]$ is uniquely defined by $L$ pieces that are polynomials of degree $m$. So, $L \times(m+1)$ coefficients have to be set. The smoothness constraints imply that the pieces cannot be set independently. On the first interval $[0,1)$, the $(m+1)$ coefficients must be chosen so that $p^{(0)}(0),\ldots,p^{(m')}(0) = 0$, which leaves $(m-m')$ degrees of freedom. For the next interval, $(m+1)$ new coefficients have to be set but the values $p^{(0)}(1),...,p^{(m')}(1)$ are already fixed, giving only $(m-m')$ new degrees of freedom. We see that each interval provides $(m-m')$ extra degrees of freedom. In the end, there remain $L(m-m')$ degrees of freedom. Now, to enforce that $p$ vanishes on $[L,+\infty)$ while remaining in $P_{m}^{m'}$, we must have that $p^{(0)}(L),\ldots,p^{(m')}(L)=0$. The total number of degrees of freedom gives the announced dimension $((m-m')L-(m'+1))_+$.
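This counting argument can be checked numerically by assembling the linear constraints on the piecewise-polynomial coefficients and computing a null-space dimension. The sketch below (with our own helper `dim_compact`) reproduces the formula on a few examples, including the cubic B-spline ($m=3$, $m'=2$, $L=4$):

```python
import numpy as np
from math import factorial

def dim_compact(m, mp, L):
    """dim{p in P_m^{m'} : supp(p) in [0, L]}, computed from the constraints.

    Each of the L pieces is a degree-m polynomial in the local variable t,
    giving L*(m+1) coefficients.  We stack the constraints p^(v)(0) = 0,
    continuity of p^(v) at the interior knots, and p^(v)(L) = 0 for
    v = 0, ..., m', and return the null-space dimension."""
    n_coef = L * (m + 1)

    def dcoef(j, v, t):
        # v-th derivative of t^j, evaluated at t
        return 0.0 if j < v else factorial(j) / factorial(j - v) * t ** (j - v)

    rows = []
    for v in range(mp + 1):
        for knot in range(L + 1):
            row = np.zeros(n_coef)
            if knot > 0:   # left limit from piece (knot - 1), at local t = 1
                row[(knot - 1) * (m + 1):knot * (m + 1)] = [
                    dcoef(j, v, 1.0) for j in range(m + 1)]
            if knot < L:   # right limit from piece knot, at local t = 0
                row[knot * (m + 1):(knot + 1) * (m + 1)] -= np.array(
                    [dcoef(j, v, 0.0) for j in range(m + 1)])
            rows.append(row)
    A = np.vstack(rows)
    return n_coef - np.linalg.matrix_rank(A)

# Agreement with ((m - m')L - (m' + 1))_+ on a few cases
for m, mp, L in [(3, 2, 4), (2, 1, 3), (3, 1, 2), (5, 3, 3)]:
    assert dim_compact(m, mp, L) == max(0, (m - mp) * L - (mp + 1))
```

The case $(m,m',L)=(3,2,4)$ returns dimension 1, the one-dimensional space spanned by the cubic B-spline.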
\begin{corollary} \label{DimensionCompact} Let $n,N,L\inN$. The set of functions of $S_{n,N}$ that have their support in $[0,L]$ is a vector space of dimension $(LN-n)_+=\max(0,LN-n)$. \end{corollary} \begin{corollary} Let the Euclidean division of $n$ by $N$ be written as $n = pN+r$ with $0\leq r<N$. Then, the shortest-support nonzero functions of $S_{n,N}$ have a support size of $(p+1)$. Moreover, the set $\{ f \in S_{n,N}:\supp{f}\subset [0,p+1] \}$ is a vector space of dimension $(N-r)$. \end{corollary} \begin{proof} The set of functions of $S_{n,N}$ that have their support in $[0,L]$ is a vector space of dimension $(LN-n)_+$ (Corollary \ref{DimensionCompact}). To find at least one nonvanishing function in the vector space, its dimension must be at least one, meaning that $(LN-n)\geq 1 \Leftrightarrow L\geq (n+1)/N = p+(r+1)/N$. Knowing that $L\in \mathbb{N}$ and $r<N$, we conclude that one must have $L = (p+1)$ to find a nonzero compactly supported function. In this case, the dimension reads $((p+1)N-n)=(N+pN-n) = (N-r)$. \end{proof} With a single generator, the shortest-support basis is provided by the shortest function. In a consecutive multi-spline space, one would ideally take $(N-r)$ functions of size $(p+1)$ (the shortest) and complete with $r$ functions of size $(p+2)$. This would result in $N$ functions with a total support size of $(N-r)(p+1)+r(p+2) = Np+r+N = n+N=n_N+1$, which is the objective for a shortest-support basis. For nonconsecutive multi-spline spaces, similar results should exist, but in a more complicated form. \subsection{Existence and Construction of mB-Splines} We say that a finite collection $\V \phi$ of multi-spline functions is an mB-spline of degree $\mathbf{n} = (n_1,\ldots,n_N)$ with $n_1<\cdots <n_N\inN$, if it is a shortest-support basis of the space $S_{\mathbf{n}}$. This is the natural extension of B-splines. Similarly to the latter, mB-splines can be constructed recursively for any multi-spline space.
Indeed, two basic transformations (the ``increment step'' and the ``insertion step'') allow one to convert a shortest-support basis of a given space into a shortest-support basis of a different space. To simplify the explanation, we say that the collection $\boldsymbol{\phi} = (\phi_1,...,\phi_N) \inLDRN$ of compactly supported functions is standardized if, for $n=1,\ldots,N$, we have that \begin{enumerate}[label=(\roman*)] \item $\int_\mathbb{R}\phi_n(t)\dint t \in \{0,1\}$, \item $\inf \{t\inR\colon \phi_n(t)\neq 0\}\in [0,1)$. \end{enumerate} The second condition implies that the generating functions are causal, {\it i.e.}, $\phi_{n}(t) = 0$ for $t<0$. Note that any compactly supported $\boldsymbol{\phi}$ can be standardized without altering $\shiftspace{\boldsymbol{\phi}}$. \subsubsection{Increment Step} The B-spline $\beta_+^{n+1}$ can be constructed recursively by noting that \begin{equation} \beta_+^{n+1}(x) = \Delta \left \{ \int_{-\infty}^x \beta_+^{n}(t)\dint t\right \} , \end{equation} where $\Delta$ is the finite-difference operator $\Delta \{f\}(x) =(f(x) - f(x-1))$. The integration increases the polynomial degree, along with the smoothness at the knots (Step 1), while $\Delta$ ultimately returns a compactly supported function (Step 2). For multiple generating functions, a similar two-step recursive approach is proposed. The general process is mathematically detailed below, while an intuitive example is proposed in Figure \ref{incremementStep}. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/incrementStep.pdf}} \caption{Increment step that yields a shortest-support basis of $S_1+S_4$ starting from $S_0+S_3$. (a) A shortest-support basis $(\eta_1,\eta_2)$ for $S_0+S_3$ ($\suppsize{\boldsymbol{\eta}}=4$). (b) The integration of $\eta_1$ and $\eta_2$ results in two generators of $S_1+S_4$, $H_1$ and $H_2$.
(c) To get compactly supported functions with the same generating properties, we choose $\theta_1 = \Delta H_1$ and $\theta_2=(H_1-H_2)$. We found a shortest-support basis of $S_1+S_4$ ($\suppsize{\boldsymbol{\theta}}=5$).} \label{incremementStep} \medskip \end{minipage} \end{figure} Suppose $\boldsymbol{\eta} = (\eta_1,...,\eta_N)\inLDRN$ is an mB-spline of $S_{n_1}+\cdots+S_{n_N}$. The goal is to find an mB-spline of $S_{n_1+1}+\cdots+S_{n_N+1}$. It will be a collection of generators with a total support size of $(n_N+2)$, able to reproduce the B-splines of degrees $n_1+1,...,n_N+1$. \subsubsection*{{\it Integration}} The collection of functions $\boldsymbol{\eta}$ is able to reproduce the B-splines of degrees $n_1,\ldots,n_N$; that is, for any $s \in \{1,...,N\}$, there exists a vector sequence $\boldsymbol{c}^s = (c_1^s,...,c_N^s)$ (not necessarily in $(\lDZ)^N$) so that \begin{equation} \label{reprobeta} \forall x\inR:\beta_+^{n_s}(x) = \sum_{k\inZ}\boldsymbol{c}^s[k]^T\boldsymbol{\eta}(x-k) .\end{equation} To justify the calculations to come, we assume that \begin{equation*} c_1^s,...,c_N^s \hbox{ are causal sequences, {\it i.e.}, } c_n^s[k] = 0 \text{ for any } k<0. \qquad (A_{\mathbf{n}}) \end{equation*} The assumption $(A_{\mathbf{n}})$ is not overly restrictive because it will hold for the starting basis of our algorithm and then be preserved by the construction process. In the end, all the bases constructed will be able to reproduce the B-splines with causal sequences. Let $\boldsymbol{H} = (H_1,...,H_N)$ be defined as \begin{equation} \boldsymbol{H}(x) = \int_{-\infty}^x \boldsymbol{\eta}(t) \dint t .
\end{equation} The integration of equation (\ref{reprobeta}), followed by the application of the operator $\Delta$, yields \begin{align} \beta_+^{n_s+1}(x)&= \Delta \left\{\sum_{k\inZ} \boldsymbol{c}^s[k]^T \boldsymbol{H}(x-k) \right\}= \sum_{k\inZ} \boldsymbol{c}^s[k]^T \Delta \{\boldsymbol{H}\}(x-k) \nonumber \\&= \sum_{k\inZ} \boldsymbol{c}^s[k]^T (\boldsymbol{H}(x-k)-\boldsymbol{H}(x-1-k)) =\sum_{k\inZ} (\boldsymbol{c}^s[k]^T -\boldsymbol{c}^s[k-1]^T)\boldsymbol{H}(x-k) .\label{reproducecausal}\end{align} The assumption that $c_1^s,...,c_N^s$ are causal and the fact that $\boldsymbol{H}$ is also causal (because $\boldsymbol{\eta}$ is compactly supported and standardized) imply that, for any $x\inR$, the sums in (\ref{reproducecausal}) have a finite number of nonzero terms. This enables us to switch the order of the operations (sum, integral, and $\Delta$). Note that the sequence $(\boldsymbol{c}^s[k]^T -\boldsymbol{c}^s[k-1]^T)_{k\inZ}$ is causal. In short, $\boldsymbol{H}$ can reproduce $(\beta_+^{n_1+1},...,\beta_+^{n_N+1})$ with causal sequences, but it is obviously not a shortest-support basis because its support is infinite. \subsubsection*{{\it Finite Difference}} The aim now is to find a basis with the same reproducing properties as $\boldsymbol{H}$, but with minimal support. To that end, we denote by $s_0$ the index such that $\eta_{s_0}$ is the shortest function in $\boldsymbol{\eta}$ that satisfies $\int_{\mathbbm{R}}\eta_{s_0}(t)\dint t \neq 0$. Such an index must exist; otherwise, the generated space $\shiftspace{\V \eta}$ would only contain zero-mean functions and could not reproduce the B-splines, which are not zero-mean.
A shortest-support basis $\boldsymbol{\theta} = (\theta_1,...,\theta_N)$ is then given by \begin{equation} \theta_s = \begin{cases} H_s& \text{if } s\neq s_0 \text{ and } \int_{\mathbbm{R}}\eta_{s}(t)\dint t = 0\\ H_s-H_{s_0}& \text{if } s\neq s_0\text{ and } \int_{\mathbbm{R}}\eta_{s} (t)\dint t\neq 0\\ \Delta H_{s_0}& \text{if } s = s_0 . \end{cases} \end{equation} Because $\boldsymbol{\eta}$ is compactly supported and standardized, the choice of $s_0$ ensures that \begin{equation} \suppsize{\theta_s} = \begin{cases} \suppsize{\eta_s}& \text{if } s\neq s_0\\ \suppsize{\eta_{s_0}}+1& \text{if } s = s_0 . \end{cases} \end{equation} In short, $\suppsize{\boldsymbol{\theta}} = 1 + \suppsize{\boldsymbol{\eta}} = n_N+2$. Noting that $H_{s_0}=\sum_{k\inN}\theta_{s_0}(\cdot -k)$, it is clear that $\boldsymbol{\theta}$ can reproduce $\boldsymbol{H}$ with causal coefficients. It also implies that $\boldsymbol{\theta}$ can reproduce $(\beta_+^{n_1+1},...,\beta_+^{n_N+1})$ with causal coefficients (see (\ref{reproducecausal})), which justifies the assumption $(A_{\mathbf{n}})$. In conclusion, $\boldsymbol{\theta}$ is a shortest-support basis of $S_{n_1+1}+\cdots+S_{n_N+1}$. \subsubsection{Insertion Step} The present step enables us to add a generator to a shortest-support basis. Suppose $\boldsymbol{\eta} = (\eta_1,...,\eta_{N})$ is a standardized shortest-support basis of $S_{n_1}+\cdots+S_{n_N}$ and let $\boldsymbol{\eta}' = (\delta,\eta_1,...,\eta_{N})$, where $\delta$ is the Dirac distribution. The increment step applied to $\boldsymbol{\eta}' $ yields a shortest-support basis for $S_{0}+S_{n_1+1}+\cdots+S_{n_N+1}$.
Indeed, the shortest function of $\boldsymbol{\eta}'$ being $\delta$, the new basis $\boldsymbol{\theta}' = (\theta_0',...,\theta_{N}')$ is given by \begin{equation} \theta_n':x\mapsto \begin{cases} \Delta \{\int_{-\infty}^{x} \delta(t)\dint t\} = \beta_+^0(x), & n=0\\\int_{-\infty}^{x} \eta_n(t)\dint t,& n>0 \text{ and } \int_{\mathbbm{R}}\eta_{n}(t)\dint t = 0\\ \int_{-\infty}^{x} (\eta_n(t)-\delta(t))\dint t,& n>0\text{ and } \int_{\mathbbm{R}}\eta_{n}(t)\dint t \neq 0 . \end{cases} \end{equation} Because $\boldsymbol{\eta}$ is compactly supported and standardized, we have that $$ \suppsize{\theta_n'} = \begin{cases}1,&n=0\\ \suppsize{\eta_n},&\text{otherwise,} \end{cases} $$ which means that $\suppsize{\boldsymbol{\theta}'} = \suppsize{\boldsymbol{\eta}'}+1 = n_N+2$. The process also ensures that $\boldsymbol{\theta}'$ is a shortest-support basis of $S_{0}+S_{n_1+1}+\cdots+S_{n_N+1}$. \begin{theorem} \label{th:existence} Let $n_1<\cdots<n_N \inN \setminus{\{0\}}$. There exists an mB-spline $\boldsymbol{\eta} = (\eta_1,...,\eta_N)\inLDRN$ of $S_{n_1}+\cdots+S_{n_N}$ that can be constructed recursively with increment and insertion steps. \end{theorem} \begin{proof} The increment and insertion steps are sufficient to construct an mB-spline for any multi-spline space. Indeed, take $\boldsymbol{\eta}_0 = (\beta_+^{n_N-n_{N-1}-1})$, a shortest-support basis for $S_{n_N-n_{N-1}-1}$. The insertion step gives a shortest-support basis for $S_0+S_{n_N-n_{N-1}}$. After $(n_{N-1}-n_{N-2}-1)$ increment steps and one insertion step, the process gives a shortest-support basis for $S_0+S_{n_{N-1}-n_{N-2}}+S_{n_N-n_{N-2}}$. By iteration, a shortest-support basis for $S_0+S_{n_2-n_1}+\cdots+S_{n_N-n_1}$ is obtained. Applying $n_1$ increment steps, we finally obtain a shortest-support basis for $S_{n_1}+\cdots+S_{n_N}$. \end{proof} Examples of mB-splines will be provided in Section \ref{sec:Applications}. Note that our algorithm does not always output functions with the most practical form.
This is corrected by appropriate linear combinations and, possibly, translations that alter neither the reproducing properties nor the support size. For instance, for the space $S_2+S_3$, our construction needs a simple linear combination to recover the well-known cubic Hermite splines. We conclude this section with a result on the minimal number of generating functions required to generate multi-spline spaces. \begin{theorem} \label{prop:MN} Let $n_1<\cdots<n_N\inN\setminus{\{0\}}$. The space $S_{\mathbf{n}} = S_{n_1}+\cdots+S_{n_N}$ cannot be generated by fewer than $N$ compactly supported generating functions. \end{theorem} \begin{proof} From Theorem \ref{th:existence}, there exists an mB-spline of $S_{\mathbf{n}}$ composed of $N$ functions, say, $\V \eta=(\eta_1,\ldots,\eta_N)\in (S_{\mathbf{n}})^N$. Let $\V \psi = (\psi_1,...,\psi_M)\in (S_{\mathbf{n}})^M$ be a collection of compactly supported functions able to generate $S_{\mathbf{n}}$. This means that $\V \eta$ and $\V \psi$ can reproduce each other and, by Lemma \ref{lm:MinimalGeneratingFunctions}, $M\geq N$. \end{proof} Note that $N$ is a lower bound and that the number of generating functions of a shortest-support basis can exceed $N$. For instance, take $\V \eta = (\eta_1,\eta_2)$ with \begin{align} \eta_1&:x \mapsto \beta_+^0(2x) = \mathbbm{1}_{[0,1/2)}(x)\\ \eta_2&:x \mapsto \beta_+^0(2(x-1/2)) = \mathbbm{1}_{[1/2,1)}(x) .\end{align} Since $\eta_1+\eta_2 = \beta_+^0$, $\V \eta$ can reproduce $S_0$. In addition, the fact that $\suppsize{\V \eta} = 1$ means that it is a shortest-support basis of degree 0, now composed of two generating functions. (Note that the space they generate is larger than $S_{0}$.)
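As a numerical sanity check of the scalar prototype of the increment step, $\beta_+^{n+1}=\Delta\{\int_{-\infty}^{x}\beta_+^{n}(t)\dint t\}$, the following sketch (a discretized check on a grid, not part of the actual construction) verifies that integrating $\beta_+^0$ and applying $\Delta$ yields the hat function $\beta_+^1$:

```python
import numpy as np

h = 1e-3                                   # grid step
x = np.arange(0.0, 6.0, h)                 # grid covering the supports
shift = int(round(1.0 / h))                # samples per unit shift

def increment(f):
    """One scalar increment step: running integral, then f(x) - f(x - 1)."""
    F = np.cumsum(f) * h                                       # Riemann sum
    F_shifted = np.concatenate((np.zeros(shift), F[:-shift]))  # F(x - 1)
    return F - F_shifted

beta0 = ((x >= 0) & (x < 1)).astype(float)  # beta_+^0, indicator of [0, 1)
beta1 = increment(beta0)                    # should approximate beta_+^1

hat = np.clip(np.minimum(x, 2.0 - x), 0.0, None)  # exact beta_+^1
assert np.max(np.abs(beta1 - hat)) < 2 * h        # O(h) quadrature error
```

Iterating `increment` approximates the B-splines of increasing degree, which mirrors the recursion that underlies the increment step.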
\section{Applications} \label{sec:Applications} \subsection{Generalized Sampling in Multi-Spline Spaces} We consider a multi-spline space $S_{\mathbf{n}}$ along with the $N$-component mB-spline $\boldsymbol{\phi}=(\phi_1,\ldots,\phi_N)$ and some corresponding analysis functions $\boldsymbol{\psi} = (\psi_1,\ldots,\psi_N)$. As we now show, the generalized-sampling formulation presented in \cite{Unser1998} can be extended to multiple generators. Let $\mathcal{H}$ be a space considerably larger than $\shiftspace{\boldsymbol{\phi}}$. Consider $f\in \mathcal{H}$, of which we only know some discrete measurements $(\boldsymbol{g}[n])_{n\inZ}$ written as $$ \boldsymbol{g}[n] =\dotprod{\boldsymbol{\psi}(\cdot-n)}{f} = (\dotprod{\psi_1(\cdot-n)}{f},...,\dotprod{\psi_N(\cdot-n)}{f}) .$$ To construct an approximation $\Tilde{f}\in \shiftspace{\boldsymbol{\phi}}$ of $f$, a standard way is to enforce consistency \cite{Unser1994,Unser1997a}, in the sense that $f$ and $\Tilde{f}$ must give the same measurements. This formulation generalizes the notion of interpolation. For instance, to interpolate the value of $f$ and its derivative at the sampling locations, take $\psi_1 = \delta$ and $\psi_2 = \delta'$. In such a case, consistency simply means that $f$ and $\Tilde{f}$ should have the same value and the same derivative at the grid points.
In general, the consistency requirement translates into \begin{align} \dotprod{\boldsymbol{\psi}(\cdot-n)}{f} &= \dotprod{\boldsymbol{\psi}(\cdot-n)}{\tilde{f}} \nonumber\\ &= \sum_{k \inZ}\dotprod{\boldsymbol{\psi}(\cdot-n)}{\boldsymbol{\phi}^T(\cdot-k)}\cdot \boldsymbol{c}[k] \nonumber\\ &= \sum_{k \inZ}\dotprod{\boldsymbol{\psi}(\cdot-(n-k))}{\boldsymbol{\phi}^T}\cdot \boldsymbol{c}[k] \nonumber\\ & = (\boldsymbol{A}_{\boldsymbol{\Phi \Psi}} * \boldsymbol{c})[n] , \end{align} where $(\boldsymbol{c}[n])_{n\inZ}$ is the unique vector sequence representing $\Tilde{f} = \sum_{k\inZ}{\V c}[k]^T {\V \phi}(\cdot-k)$ and $\boldsymbol{A}_{\boldsymbol{\Phi \Psi}}[n] = \dotprod{\boldsymbol{\psi}(\cdot-n)}{\boldsymbol{\phi}^T(\cdot)}$ is the matrix-valued sequence of the measurements of the basis functions. To solve our problem, we rely on the theory of signals and systems, including the z-transform. Indeed, within this framework, efficient implementation techniques stand out naturally. When the matrix-valued filter $\boldsymbol{A}_{\boldsymbol{\Phi \Psi}}$ is invertible (see \cite[Proposition 1]{Unser1998} for the invertibility condition), the vector ${\V c}$ of sequences can be computed from the measurements by applying the matrix-valued inverse filter $\V Q$, as in \begin{equation} \V c [n] = (\V Q *\V g)[n] .\end{equation} Its transfer function satisfies $ \hat{\V Q}(z) = \hat{\V A}_{\V \Phi \V\Psi}^{-1}(z) $ in the z-domain. This matrix filter does not necessarily have a finite impulse response (FIR), but it can be decomposed as $\hat{\V Q}(z)= \frac{1}{\det \hat{\V A}_{\V \Phi \V\Psi}(z)}\mathrm{com} (\hat{\V A}_{\V \Phi \V\Psi}(z))^T$, where $\mathrm{com} (\hat{\V A}_{\V \Phi \V\Psi})$ denotes the cofactor matrix of $\hat{\V A}_{\V \Phi \V\Psi}$. For compactly supported analysis functions, the comatrix $\mathrm{com} (\hat{\V A}(z))$ is FIR because it is a Laurent polynomial in $z$, so it is straightforward to implement.
By contrast, $\frac{1}{\det \hat{\V A}_{\V \Phi \V\Psi}(z)}$ is often not FIR. Nonetheless, it can usually be implemented efficiently too, using the same techniques as in \cite{Unser1993a}. \paragraph{Online Interactive Tutorial} Some examples are implemented in an online interactive demo\footnote{\url{https://bigsplinesepfl.github.io/}}, a screenshot of which is provided in Figure \ref{fig:demoDerivativeSampling}. The user can control the discrete measurements of a function (value, derivative), choose a multi-spline reconstruction space, and see the reconstructed function update live. \subsection{Derivative Sampling with High-Degree Multi-Splines in $S_{2p}+S_{2p+1}$} The derivative-sampling problem reads, for $f\in\mathcal{H}$, \begin{equation} \label{DerivativeSampling} \text{find } \tilde{f}\in S_{\mathbf{n}}: \begin{cases}\tilde{f}(k)=f(k)\\\tilde{f}^{'}(k)=f^{'}(k)\end{cases}, k\inZ .\end{equation} The most relevant reconstruction spaces have the form $S_{\mathbf{n}} = S_{2p}+S_{2p+1}$. The underlying reason is that the filter complexity is the same for the spaces $S_{2p}+S_{2p+1}$ and $S_{2p-1}+S_{2p}$, so the higher degree is preferred (the filter has $2(p-1)$ roots). Note that the same occurs in classical interpolation with B-splines, where odd degrees are usually preferred. To the best of our knowledge, when $p>1$, no solution based on shortest-support bases and recursive filtering has been proposed so far. Our construction of shortest bases results in the functions $\eta_1$ and $\eta_2$. They have a support size of $(p+1)$ and are plotted in Figure \ref{fig:derivativeSampling}. Due to the symmetry properties of those functions, the roots of $\det \boldsymbol{\hat{A}}_{\boldsymbol{\Phi \Psi}}(z)$ come in reciprocal pairs. Consequently, the inverse matrix filter can be implemented with efficient recursive techniques, as detailed in \cite{Unser1993,Unser1993a}. \\We now detail the case of quintic-degree derivative sampling.
The basis functions are specified in Table \ref{tab:derivativesampling}. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/derivativeSampling.pdf}} \caption{Shortest-support bases for derivative sampling, obtained with the shortest-basis algorithm and some linear combinations to get a symmetric and an antisymmetric function. (a) The well-known cubic Hermite splines. (b)-(c)-(d) New bases for derivative sampling with high-degree splines. These functions are piecewise polynomials of degree 5, 7, 9 with continuity of the derivatives of order 3, 5, 7, respectively.} \label{fig:derivativeSampling} \medskip \end{minipage} \end{figure} \begin{table}[ht] \centering {\renewcommand{\arraystretch}{1}% \begin{tabular}{l |l|l|r r r r r r r r} \toprule\toprule & & slice $\#$&$x_k^0$&$x_k^1$&$x_k^2$&$x_k^3$&$x_k^4$&$x_k^5$&$x_k^6$&$x_k^7$\\ \hline \multirow{4}{*}{$S_2+S_3$}&\multirow{2}{*}{$\eta_1$} & $k=0$ &&&-3&2&&&&\\ &&$k=1$&1&&-3&1&&&&\\ \cline{2-11} &\multirow{2}{*}{$\eta_2$} & $k=0$ &&&-1&1&&&&\\ &&$k=1$&&1&-2&1&&&&\\ \hline\multirow{6}{*}{$S_4+S_5$}&\multirow{3}{*}{4$\eta_1$} & $k=0$ &&&&&5&-3&&\\ & &$k=1$&2&5&&-10&5&&&\\ & &$k=2$&2&-5&&10&-10&3&&\\ \cline{2-11}& \multirow{3}{*}{8$\eta_2$} & $k=0$ &&&&&15&-11&&\\ & &$k=1$&4&5&-20&-50&95&-38&&\\ & &$k=2$&-4&5&20&-50&40&-11&&\\ \hline\multirow{8}{*}{$S_6+S_7$}&\multirow{4}{*}{$108\eta_1$}&$k=0$&&&&&&&21&-11\\ &&$k=1$&10&49&84&35&-70&-105&112&-27\\
&&$k=2$&88&&-168&&140&&-77&27\\ &&$k=3$&10&-49&84&-35&-70&105&-56&11\\ \cline{2-11}&\multirow{4}{*}{$\frac{918}{5}\eta_2$}&$k=0$&&&&&&&42&-25\\ &&$k=1$&17&77&105&-35&-245&-273&539&-185\\ &&$k=2$&&-224&&560&&-924&756&-185\\ &&$k=3$&-17&77&-105&-35&245&-273&133&-25\\ \bottomrule \bottomrule \end{tabular} } \caption{Slices of shortest-support bases for derivative sampling. The slices are given as linear combinations of the shifted monomials $x_k^n = (x-k)^n$ if $x\in [k,k+1)$ and $x_k^n = 0$ otherwise.} \label{tab:derivativesampling} \end{table} The transfer function of the filter $\boldsymbol{A}_{\boldsymbol{\Phi \Psi}}$ reads \begin{equation} \boldsymbol{\hat{A}}_{\boldsymbol{\Phi \Psi}}(z) = \begin{bmatrix} \frac{z^{-1}+z^{-2}}{2}&\frac{z^{-1}-z^{-2}}{2}\\ \frac{5(z^{-1}-z^{-2})}{4}&\frac{5(z^{-1}+z^{-2})}{8} \end{bmatrix} .\end{equation} It follows that the transpose comatrix satisfies \begin{equation} \mathrm{com} (\hat{\V A}(z))^T \quad \xleftrightarrow{ \quad z \quad} \quad \frac{1}{2} \begin{bmatrix} \frac{5(\delta[\cdot-1]+\delta[\cdot-2])}{4}&-\delta[\cdot-1]+\delta[\cdot-2]\\ -\frac{5(\delta[\cdot-1]-\delta[\cdot-2])}{2}&{\delta[\cdot-1]+\delta[\cdot-2]} \end{bmatrix} \end{equation} and that the determinant satisfies \begin{equation} \frac{z^{-1}}{\det \hat{\V A}(z)}=\frac{16}{5}\frac{-z}{(1-z_0z^{-1})(1-z_0^{-1}z^{-1})} \quad \xleftrightarrow{ \quad z \quad} \quad d[n], \end{equation} where $z_0 = (3-2\sqrt{2})$. This means that the convolution of any sequence with $d$ can be implemented recursively. Interestingly, it is the same inverse filter as in cubic-spline interpolation. The reader can therefore refer to \cite{Unser1999} for a detailed explanation of the implementation. The expansion coefficients can be evaluated as \begin{align} c_1&= d*\left (\frac{5}{8}\Delta^+\{f\}-\frac{1}{2}\Delta \{f^{'}\}\right) \nonumber \\ c_2 &= d*\left(-\frac{5}{4}\Delta \{f\} + \frac{1}{2}\Delta^+ \{f^{'}\}\right) ,\end{align} where $\Delta \{f\}[k] = f[k]-f[k-1]$ and $\Delta^+ \{f\} [k] = f[k]+f[k-1]$.
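Up to the gain $\frac{16}{5}z_0$ and an advance of two samples, $d$ is the symmetric exponential filter $1/\bigl((1-z_0z^{-1})(1-z_0z)\bigr)$, which splits into a causal and an anticausal first-order recursion. A minimal Python sketch of this two-pass structure (with zero boundary conditions; the careful boundary handling of \cite{Unser1999} is omitted):

```python
import numpy as np

z0 = 3 - 2 * np.sqrt(2)   # pole of the determinant filter

def symmetric_filter(g, z0):
    """Apply 1 / ((1 - z0 z^{-1})(1 - z0 z)) with two first-order recursions.

    Boundaries are handled by zero initialization, which is accurate here
    because z0 is small and the test signal sits far from the edges."""
    c = np.zeros(len(g))
    acc = 0.0
    for n in range(len(g)):            # causal pass: 1 / (1 - z0 z^{-1})
        acc = g[n] + z0 * acc
        c[n] = acc
    acc = 0.0
    for n in reversed(range(len(g))):  # anticausal pass: 1 / (1 - z0 z)
        acc = c[n] + z0 * acc
        c[n] = acc
    return c

# Impulse-response check: h[n] = z0^{|n|} / (1 - z0^2)
g = np.zeros(41)
g[20] = 1.0
y = symmetric_filter(g, z0)
for k in (0, 1, 5):
    assert abs(y[20 + k] - z0 ** k / (1 - z0 ** 2)) < 1e-9
```

This sketches only the symmetric two-pass structure of $d$, not the full matrix filter $\V Q$, whose FIR comatrix part is applied separately.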
Finally, the multi-spline that is consistent with the measurements is given by \begin{equation} \tilde{f}(x) = \sum_{k\inZ}c_1[k]\eta_1(x-k)+\sum_{k\inZ}c_2[k]\eta_2(x-k) . \end{equation} \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/demoDerivativeSampling.pdf}} \caption{Derivative sampling with optimal bases. The solid curve lies in $S_2+S_3$ (cubic piecewise polynomials with continuous derivative) and the dashed curve lies in $S_4+S_5$ (quintic piecewise polynomials with continuous third derivative).} \label{fig:demoDerivativeSampling} \medskip \end{minipage} \end{figure} \subsubsection{Derivative Sampling in $S_2+S_3+S_4$} Here, we consider the setting $\V \psi = (\delta,\delta^{'},\delta(\cdot-1/2))$, which means that the value of the function to be reconstructed is sampled twice as often as its derivative. The choice of $S_2+S_3+S_4$ as reconstruction space then provides an explicit interpolation formula, which involves the shortest-support basis $\V \eta$ plotted in Figure \ref{fig:samplingS2S3S4}. This formula reads \begin{equation} \tilde{f}(x) = \sum_{k\inZ}\left ( f(k)\eta_1(x-k)+f'(k)\eta_2(x-k) +f(k+1/2)\eta_3(x-k)\right ). \end{equation} \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/samplingS2S3S4.pdf}} \caption{Shortest basis of $S_2+S_3+S_4$ associated with the analysis functions $\V \psi = (\delta,\delta^{'},\delta(\cdot-1/2))$.} \label{fig:samplingS2S3S4} \medskip \end{minipage} \end{figure} More generally, we observed that the addition of $N$ consecutive spline spaces to $S_2+S_3$ ({\it i.e.}, choosing $S_{2}+S_{3}+\cdots+S_{3+N}$) allows one to perform derivative sampling and to interpolate the function $N$ times between the integers with a direct interpolation formula.
\subsubsection{Direct Derivative Sampling in $S_2+\cdots+S_{2p+1}$} The space $S_2+S_3+S_4+S_5$ is also well suited for derivative sampling with $\V \psi = (\delta,\delta',\delta(\cdot-1/2),\delta'(\cdot-1/2))$ because of the structure of its shortest-support generating functions $\eta_1,\eta_2,\eta_3$, and $\eta_4$ (Figure \ref{fig:doubleDerivativeSampling}). Indeed, it yields the direct interpolation formula \begin{equation} \tilde{f}(x) = \sum_{k\inZ} \left (f(k+1/2)\eta_1(x-k)+f(k)\eta_2(x-k+1) +f'(k+1/2)\eta_3(x-k)+f'(k)\eta_4(x-k+1) \right ) . \end{equation} The sampling step is $1/2$, but the spline knots are still located at the integers. Note that the sampling step can be tuned at will by dilation of the generating functions. More generally, we conjecture that there exist basis functions with the interpolatory property for any space of the form $S_2+\cdots+S_{2p+1}$ and the sampling step $1/p$. This conjecture was verified for $p=1$ (bicubic Hermite splines), $p=2$ (Figure \ref{fig:doubleDerivativeSampling}), and $p\in\{3,4\}$. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/doubleDerivativeSampling.pdf}} \caption{Shortest basis of $S_2+S_3+S_4+S_5$ for direct derivative sampling.} \label{fig:doubleDerivativeSampling} \medskip \end{minipage} \end{figure} \subsection{Classical Interpolation} For $f\in\mathcal{H}$, the classical interpolation problem reads \begin{equation} \label{ClassicalInterpolation} \text{find } \tilde{f}\in S_{\mathbf{n}}: f(k)=\tilde{f}(k), k\inZ .\end{equation} When the number $N$ of generating functions is greater than 1, we have two equivalent options: \begin{enumerate}[label=(\roman*)] \item to sample the function $f$ with the sampling step $1/N$; \item to dilate the generators by a factor of $N$, keeping a unit sampling step. \end{enumerate} We present the result in accordance with Option (i).
\subsubsection{Modified Lagrange Polynomials in $S_1+\cdots+S_N$} Classical interpolation is well solved by B-splines but, starting from degree 2, the filter is neither FIR nor causal. Exact operations such as local interpolation or interpolation with a finite delay are therefore not possible. Some workarounds exist \cite{Petrinovic2008}; we now present one that is based on modified Lagrange polynomials. Let $\V l= (l_1,\ldots,l_N)$ be a collection of $N$ generating functions such that, for $x\in[0,1]$, $l_q(x)=\prod_{\substack{p = 0\\p\neq q}}^{N}\frac{Nx-p}{q-p}$. In this way, when $q=1,\ldots,(N-1)$, $l_q$ vanishes at $x=0$ and $x=1$, so it can be set to zero for $x\not\in[0,1]$ while maintaining $l_q\in S_1$. Since $l_N(1)=1$, to make sure that $l_N\in S_1$, we extend its support to $[1,2]$ and set, $\forall x\in [1,2]$, $l_N(x)=l_N(2-x)$ (see Figure \ref{fig:modifiedLagrange}). These functions constitute a shortest-support basis of $S_1+\cdots+S_N$ and give a direct interpolation formula. Interestingly, those basis functions are sometimes used in finite-element methods \cite{Langtangen2019}. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/modifiedLagrange.pdf}} \caption{Shortest-support basis for $S_1+\cdots+S_N$. The basis functions are continuous and able to reproduce any polynomial of degree up to $N$.} \label{fig:modifiedLagrange} \medskip \end{minipage} \end{figure} \subsubsection{Bi-Spline Classical Interpolation in $S_{2p+1}+S_{2p+2}$} A bi-spline is the sum of two splines of different degrees, and it can be used to perform classical interpolation. In particular, interpolation in the reconstruction space $S_{\mathbf{n}} = S_{2p+1}+S_{2p+2}$ leads to a filter with $p$ pairs of reciprocal roots. In terms of filtering, it therefore has the same complexity as the inverse interpolation filter associated with the single space $S_{2p+1}$.
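For $p=1$, i.e., the space $S_3+S_4$ detailed below, this reciprocal-root structure can be verified numerically. The following sketch (numpy only; the matrix entries are transcribed from the filter $\boldsymbol{\hat{A}}_{\boldsymbol{\Phi \Psi}}(z)$ of $S_3+S_4$ given below) represents each entry as a polynomial in $u=z^{-1}$ and checks that the nonzero roots of the determinant form the reciprocal pair $z_0 = 4-\sqrt{15}$ and $z_0^{-1}$.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Entries of the S3+S4 sampling filter A(z), written as polynomials
# in u = z^{-1} (coefficient lists in ascending powers of u).
a11 = [0, 1 / 2]                        # z^{-1}/2
a12 = [0, 1 / 4, 1 / 4]                 # (z^{-1} + z^{-2})/4
a21 = [0, 5 / 32, 5 / 32]               # 5(z^{-1} + z^{-2})/32
a22 = [0, 5 / 320, 210 / 320, 5 / 320]  # (5(z^{-1} + z^{-3}) + 210 z^{-2})/320

# det A(z) = a11*a22 - a12*a21, still a polynomial in u
det = P.polysub(P.polymul(a11, a22), P.polymul(a12, a21))
roots = P.polyroots(det)  # two zero roots (a pure delay) + one reciprocal pair
```

The two zero roots correspond to a pure delay; the remaining reciprocal pair is what makes the inverse filter implementable with one causal and one anticausal recursion.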
Shortest-support basis functions for such spaces are plotted in Figure \ref{fig:bisplineInterpolation}. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/bisplineInterpolation.pdf} \caption{Shortest bi-spline bases for classical interpolation with a half-integer sampling step. (a) In $S_1+S_2$, the functions presented give a direct interpolation formula. (b) (c) (d) The functions are piecewise polynomials of degree 4, 6, 8 with continuity of the derivatives of order 2, 4, 6 respectively. To perform interpolation, a filter with 2, 4, 6 roots respectively has to be inverted.} \label{fig:bisplineInterpolation} \medskip \end{minipage} \end{figure} We now detail how this interpolation is performed for $S_3+S_4$, keeping in mind that the other cases are similar. The z-transform of the filter $\boldsymbol{\hat{A}}_{\boldsymbol{\Phi \Psi}}(z)$ reads \begin{equation} \boldsymbol{\hat{A}}_{\boldsymbol{\Phi \Psi}}(z) = \begin{bmatrix} \frac{z^{-1}}{2}&\frac{z^{-1}+z^{-2}}{4}\\ \frac{5(z^{-1}+z^{-2})}{32}&\frac{5(z^{-1}+z^{-3})+210z^{-2}}{320} \end{bmatrix} ,\end{equation} while the z-transform of the inverse filter can be decomposed as \begin{equation} \hat{\boldsymbol{Q}}(z) = \hat{p}(z)\times \hat{\boldsymbol{P}}(z), \end{equation} where \begin{equation} \hat{\boldsymbol{P}}(z) = \begin{bmatrix} \frac{5(1+z^{-2})+210z^{-1}}{320}&-\frac{1+z^{-1}}{4}\\ -\frac{5(1+z^{-1})}{32}&\frac{1}{2} \end{bmatrix} \end{equation} and \begin{equation} \hat{p}(z)=\frac{32}{(1-z_0z^{-1})(1-z_0^{-1}z^{-1})} \end{equation} with $z_0 = (4-\sqrt{15})$. The final steps are identical to the detailed case of derivative sampling (recursive filtering). \subsection{B\'ezier Curves and Computer Graphics in $S_1+S_2+S_3$ and $S_1+S_2$} In this section, we use our multi-spline formulation to revisit some B\'ezier curves and, in particular, the cubic B\'ezier curves that are popular in computer graphics. 
Each portion of the curve is a cubic polynomial defined by four control points. \begin{itemize} \item The starting and ending points of the portion. \item Two handles that control the tangent of the curve at each extremity of the portion. \end{itemize} Thus, the value of the function and its left and right derivatives are controlled on the knots. From a multi-spline perspective, any cubic B\'ezier curve lies in the space $S_1+S_2+S_3$. With the well-chosen generating functions $\eta_1,\eta_2$, and $\eta_3$ plotted in Figure \ref{fig:bezierMS}, the interpolation formula is explicit and reads \begin{equation} \tilde{f}(x) = \sum_{k\inZ}f(k)\eta_1(x-k)+\sum_{k\inZ}f^{'}(k^-)\eta_2(x-k)+\sum_{k\inZ}f^{'}(k^+)\eta_3(x-k) , \end{equation} where $f'(k^-)$ and $f'(k^{+})$ denote the left and right derivatives at $k$, respectively. Interestingly, $\eta_2$ and $\eta_3$ can be obtained from the bicubic Hermite splines by splitting the antisymmetric function into two functions (see Figure \ref{fig:derivativeSampling} (a)). This gives a simple interpretation of cubic B\'ezier curves, as illustrated in Figure \ref{fig:2DBezier}. Similarly, quadratic B\'ezier curves are also multi-splines, this time associated with the space $S_1+S_2$ (Figure \ref{fig:bezierMS}). \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/bezierMS.pdf}} \caption{Shortest-support bases for applications in classical computer graphics. (a) Shortest basis for $S_1+S_2$. The function $\eta_1$ controls the value of the function on the knots while $\eta_2$ controls the left derivative on the knots. These functions reproduce any quadratic B\'ezier curve. (b) Shortest basis for $S_1+S_2+S_3$. The function $\eta_1$ controls the value of the function on the knots while $\eta_2$ and $\eta_3$ control the left and right derivatives, respectively, on the knots. These functions can reproduce any cubic B\'ezier curve with the shortest support.
They also give a simple interpretation of such curves.} \label{fig:bezierMS} \medskip \end{minipage} \end{figure} \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=100mm]{./figures/2DBezier.pdf} \caption{Screenshot from the online demo. The shortest basis of the space $S_1+S_2+S_3$ allows one to control the value of the function (green dots) and the left/right derivatives (handles). It yields the same curve as with standard vector-graphics editors relying on cubic B\'ezier curves. In this figure, the parametric curves are two-dimensional and the interpolation is performed component-wise.} \label{fig:2DBezier} \medskip \end{minipage} \end{figure} \begin{comment} \subsection{} The last example is chosen to be more uncommon to show the wide range of application offered by multi-splines. The space $S_3+S_4$ provides an explicit interpolation to reconstruct a function from its value at the knots and the value of its second derivative at the midpoint of the knots \begin{equation} \tilde{f}(x) = \sum_{k\inZ}f(k)\theta_1(x-k)+f^{''}(k+1/2)\theta_2(x-k) . \end{equation} The basis function $\V \theta$ is not of shortest support but it is easily obtained from the optimal basis $\V \eta = (\eta_1,\eta_2)$. See the plots in Figure \ref{fig:samplingS3S4} for more details. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{./figures/samplingS3S4.pdf} \caption{(a) Shortest support bases for the space $S_3+S_4$. These functions are well suited to recover a function with the analysis functions $\V \psi = (\delta,\delta^{(2)}(\cdot-1/2))$ (value at the knots and value of the second derivative in-between the knots). (b) The interpolation from the samples $(\delta[k],0)_{k\inZ}$ and $(0,\delta[k])_{k\inZ}$ yields compactly supported functions which means that an explicit interpolation formula is available. 
Note however that the support is no longer minimal.} \label{fig:samplingS3S4} \medskip \end{minipage} \end{figure} \end{comment} \subsection{Nonconsecutive Bi-spline Spaces} Nonconsecutive multi-spline spaces are relevant for representing signals that have components of different regularity \cite{debarre2019}. For instance, the space $S_0+S_p$, with $p>0$, consists of smooth signals with sharp jumps. In Figure \ref{fig:hybridBiSplines}, we show shortest-support bases of $S_0+S_p$, for $p\in\{2,3,4\}$, that were obtained with our construction algorithm. \begin{figure}[t!] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=0.8\linewidth]{./figures/hybridBiSplines.pdf}} \caption{(a) (b) (c) Shortest-support bases for the spaces $S_0+S_2$, $S_0+S_3$ and $S_0+S_4$. (d) An example of a hybrid bi-spline that lies in the space $S_0+S_4$.} \label{fig:hybridBiSplines} \medskip \end{minipage} \end{figure} \section{Conclusion} In this work, we have introduced the notion of shortest-support bases of degree $M$. They are the shortest-support collections of functions that generate a reconstruction space with an approximation power of order $(M+1)$. We proved that shortest-support bases necessarily generate Riesz bases, a minimal requirement for practical applications. With a single generator, the unique shortest-support basis of degree $M$ is the well-known B-spline of degree $M$. We extended this notion to multiple generators and proposed a recursive method that yields shortest bases for any multi-spline space. These new sets of functions helped us transpose the efficient reconstruction techniques developed for B-splines and perform generalized sampling. In particular, we have provided a method to perform fast derivative sampling with any approximation power. Finally, we presented a new way to approach some B\'ezier curves. \newpage
\section{Introduction} Discovering novel user intents is important to improve the service quality in dialogue systems. By analyzing the discovered new intents, we may find underlying user interests, which could provide business opportunities and guide the improvement direction~\cite{lin-xu-2019-deep}. Intent discovery has attracted much attention in recent years~\cite{perkins-yang-2019-dialog,ijcai2020-532,10.1145/3366423.3380268}. Many researchers regard it as an unsupervised clustering problem, and they manage to incorporate some weak supervised signals to guide the clustering process. For example,~\citet{hakkani-tr2013a} propose a hierarchical semantic clustering model and collect web page clicked information as implicit supervision for intent discovery.~\citet{hakkani2015clustering} utilize a semantic parsing graph as extra knowledge to mine novel intents during clustering.~\citet{Padmasundari2018} benefit from the consensus predictions of multiple clustering techniques to discover similar semantic intent-wise clusters.~\citet{haponchyk2018supervised} cluster questions into user intent categories under the supervision of structured outputs.~\citet{shi2018auto} extract intent features with an autoencoder and automatically label the intents with a hierarchical clustering method. \begin{figure}[t!] \centering \includegraphics[scale=.4]{figures/example.pdf} \caption{\label{example} An example for our task. We use limited known intent labeled data as a guide to discover new intents. } \end{figure} \begin{figure*} \centering \includegraphics[scale=.6]{figures/model.pdf} \caption{The model architecture of our approach. Firstly, we extract intent features with BERT. We pre-train the model under the supervision of few labeled samples, and predict the cluster number $K$ if we do not know in advance. Then, we perform k-means to produce cluster centroids and use cluster assignments as pseudo-labels. 
Next, we align the obtained centroids in the current training epoch $\{c_{i}^{c}\}_{i=1}^{K}$ with the saved centroids in the last epoch $\{c_{i}^{l}\}_{i=1}^{K}$, and produce the alignment projection $G$. Finally, we use $G$ on the pseudo-labels to produce the aligned labels for self-supervised learning.} \label{model} \end{figure} However, all of the above methods fail to leverage the prior knowledge of known intents. These methods assume that the unlabeled samples are only composed of undiscovered new intents. A more common case is that some labeled data of known intents are accessible and the unlabeled data are mixed with both known and new intents. As illustrated in Figure~\ref{example}, we may have a few labeled samples (e.g., with a labeled proportion of 10\%) of known intents in advance. The remaining known and new intent samples are all unlabeled. Our goal is to find known intents and discover new intents with the prior knowledge of limited labeled data. Our previous work CDAC+~\cite{lin2020discovering} directly tackles this problem. Nevertheless, it uses pairwise similarities as weak supervised signals, which are too ambiguous to distinguish a mixture of unlabeled known and new intents. Thus, the performance drops with more new intents. To summarize, there are two main difficulties in our task. On the one hand, it is challenging to effectively transfer the prior knowledge from known intents to new intents with limited labeled data. On the other hand, it is hard to construct high-quality supervised signals to learn clustering-friendly representations for both unlabeled known and new intents. To solve these problems, we propose an effective method to leverage the limited prior knowledge of known intents and provide high-quality supervised signals for feature learning. As illustrated in Figure~\ref{model}, we firstly use the pre-trained BERT model~\cite{devlin2018bert} to extract deep intent features.
Then, we pre-train the model with the limited labeled data under the supervision of the softmax loss. We retain the pre-trained parameters and use the learning information to obtain well-initialized intent representations. Next, we perform clustering on the extracted intent features and estimate the cluster number $K$ (unknown beforehand) by eliminating the low-confidence clusters. As most of the training samples are unlabeled, we propose an original alignment strategy to construct high-quality pseudo-labels as supervised signals for learning discriminative intent features. For each training epoch, we firstly perform k-means on the extracted intent features, and then use the produced cluster assignments as pseudo-labels for training the neural network. However, the inconsistently assigned labels cannot be directly used as supervised signals, so we use the cluster centroids as the targets to obtain the alignment mapping between pseudo-labels in consecutive epochs. Finally, we perform k-means again for inference. Benefiting from the relatively consistent aligned targets, our method can inherit the learning information from previous epochs and boost the clustering performance. We summarize our contributions as follows. Firstly, we propose a simple and effective method that successfully generalizes to a mass of new intents and estimates the number of novel classes with limited prior knowledge of known intents. Secondly, we propose an effective alignment strategy to obtain high-quality self-supervised signals by learning discriminative features to distinguish both known and new intents. Finally, extensive experiments on two benchmark datasets show that our approach yields better and more robust results than the state-of-the-art methods. \section{Related Work} \subsection{Intent Modeling} In recent years, many researchers have tried to model user intents in dialogue systems.
One line of work is to enrich the intent information jointly with other tasks, such as sentiment classification~\cite{Qin_Che_Li_Ni_Liu_2020}, slot filling~\cite{qin-etal-2019-stack,goo-etal-2018-slot,wang-etal-2018-bi} and so on. Another line is to leverage hidden semantic information to construct supervised signals for intent feature learning~\cite{shi2018auto,Brychcin2017UnsupervisedDA,hakkani-tr2013a}. In this work, we follow the second line to model intents. \subsection{Unsupervised Clustering} There are many classical unsupervised clustering methods, such as partition-based methods~\cite{macqueen1967some}, hierarchical methods~\cite{gowda1978agglomerative} and density-based methods~\cite{ester1996density}. However, the high-dimensional pattern representations suffer from high computational complexity and poor performance. Though some feature dimensionality reduction~\cite{1984A} and data transformation methods~\cite{wold1987principal} have been proposed, these methods still cannot capture the high-level semantics of intent features~\cite{lin-xu-2019-deep}. \subsubsection{Deep Clustering} With the development of deep learning, researchers adopt deep neural networks (DNNs) to extract clustering-friendly features. The joint unsupervised learning (JULE)~\cite{yang2016joint} combines deep feature learning with hierarchical clustering but needs huge computational and memory costs on large-scale datasets. Deep Embedded Clustering (DEC)~\cite{xie2016unsupervised} trains the autoencoder with the reconstruction loss and iteratively refines the cluster centers by optimizing the KL-divergence with an auxiliary target distribution. Compared with DEC, Deep Clustering Network (DCN)~\cite{yang2017towards} further introduces a k-means loss as a penalty term to reconstruct the clustering loss.
Deep Adaptive Image Clustering (DAC)~\cite{chang2017deep} utilizes the pairwise similarities as the learning targets and adopts an adaptive learning algorithm to select samples for training. However, all these clustering methods cannot provide specific supervised signals for representation learning. DeepCluster~\cite{caron2018deep} benefits from the structured outputs to boost the discriminative power of the convolutional neural network (CNN). It alternately performs k-means and representation learning. It considers the cluster assignments as pseudo-labels, which are explicit supervised signals for grouping each class. However, it needs to reinitialize the classifier parameters randomly before each training epoch. To deal with this issue, we propose an alignment strategy to produce aligned pseudo-labels for self-supervised learning without reinitialization. \subsection{Semi-supervised Clustering} Although there are various unsupervised clustering methods, their performance is still limited without prior knowledge to guide the clustering process. Therefore, researchers perform semi-supervised clustering with the aid of some labeled data. Classical constrained clustering methods use the pairwise information as constraints for guiding the representation learning and clustering process. COP-KMeans~\cite{Wagstaff2001} uses instance-level constraints (must-link and cannot-link) and modifies k-means to satisfy these constraints. PCK-means~\cite{basu2004active} presents a framework for pairwise constrained clustering, and it further selects informative pairwise constraints with an active learning method. MPCK-means~\cite{bilenko2004integrating} incorporates the metric-learning approach into PCK-means and combines centroid-based and metric-based methods in a unified framework. However, these methods incur a huge computational cost by enumerating pairwise constraints.
KCL~\cite{hsu2018learning} uses deep neural networks to perform pairwise constraint clustering. It firstly trains an extra network for binary similarity classification with a labeled auxiliary dataset. Then, it transfers the prior knowledge of pairwise similarity to the target dataset and uses KL-divergence to evaluate the pairwise distance. MCL~\cite{hsu2018multiclass} uses the meta classification likelihood as the criterion to learn pairwise similarities. However, the domain adaptation methods are still limited in our task. CDAC+~\cite{lin2020discovering} is specifically designed for discovering new intents. It uses limited labeled data as a guide to learn pairwise similarities. However, it is limited in providing specific supervised signals and fails to estimate the number of novel classes. DTC~\cite{Han2019learning} is a method for discovering novel classes in computer vision. It improves the DEC algorithm and transfers the knowledge of labeled data to estimate the number of novel classes. However, the amount of the labeled data has a great influence on its performance. \begin{table*}[t!] \centering \begin{tabular}{@{} ccccccc @{}} \toprule Dataset & \#Classes (Known + Unknown) & \#Training & \#Validation & \#Test & Vocabulary & Length (max / mean) \\ \midrule CLINC & 150 (113 + 37) & 18,000 & 2,250 & 2,250 & 7,283 & 28 / 8.31 \\ BANKING & 77 (58 + 19) & 9,003 & 1,000 & 3,080 & 5,028 & 79 / 11.91 \\ \bottomrule \end{tabular} \caption{ \label{datasets} Statistics of CLINC and BANKING datasets. \# indicates the total number of sentences. In each run of the experiment, we randomly select 75\% intents as known intents. Taking the CLINC dataset as an example, we randomly select 113 known intents and treat the remaining 37 intents as new intents. } \end{table*} \section{Our Approach} In this section, we will describe the proposed method in detail. As shown in Figure~\ref{model}, we firstly extract intent representations with BERT. 
Then, we transfer the knowledge from known intents with limited labeled data. Finally, we propose an alignment strategy to provide self-supervised signals for learning clustering-friendly representations. \subsection{Intent Representation} The pre-trained BERT model has demonstrated remarkable effectiveness in NLP tasks~\cite{devlin2018bert}, so we use it to extract deep intent representations. Firstly, we feed the $i^{th}$ input sentence $\boldsymbol{s}_{i}$ to BERT, and take all its token embeddings $[CLS, T_1, \cdots, T_M] \in \mathds R^{(M+1) \times H}$ from the last hidden layer. Then, we apply mean-pooling to get the averaged sentence feature representation $\boldsymbol{z}_{i} \in \mathds R^{H}$: \begin{align} \boldsymbol{z}_{i} = \text{mean-pooling}([CLS, T_1, \cdots, T_M]), \end{align} where $CLS$ is the vector for text classification, $M$ is the sequence length, and $H$ is the hidden size. To further enhance the feature extraction capability, we add a dense layer $h$ to get the intent feature representation $\boldsymbol{I}_{i} \in \mathds R^{D}$: \begin{align} \boldsymbol{I}_{i}=h(\boldsymbol{z}_i) = \sigma(W_h\boldsymbol{z}_{i}+b_h), \end{align} where $D$ is the dimension of the intent representation, $\sigma$ is the Tanh activation function, $W_h \in \mathds R^{D \times H}$ is the weight matrix and $b_h \in \mathds R^{D}$ is the corresponding bias term. \subsection{Transferring Knowledge from Known Intents} To effectively transfer the knowledge, we use the limited labeled data to pre-train the model and leverage the well-trained intent features to estimate the number of clusters. \subsubsection{Pre-training} We hope to incorporate the limited prior knowledge to obtain a good representation initialization for grouping both known and novel intents. As suggested in~\cite{Han2019learning}, we capture such intent feature information by pre-training the model with the labeled data.
Specifically, we learn the feature representations under the supervision of the cross-entropy loss. After pre-training, we remove the classifier and use the rest of the network as the feature extractor in the subsequent unsupervised clustering process. \subsubsection{Predict $K$} In real scenarios, we may not always know the number of new intent categories. In this case, we need to determine the number of clusters $K$ before clustering. Therefore, we propose a simple and effective method to estimate $K$ with the aid of the well-initialized intent features. We first assign a large $K'$ as the number of clusters (e.g., twice the ground-truth number of intent classes). As a good feature initialization is helpful for partition-based methods (e.g., k-means)~\cite{platt1999probabilistic}, we use the pre-trained model to extract intent features. Then, we perform k-means with the extracted features. We suppose that real clusters tend to be dense even with $K'$ clusters, and that the size of more confident clusters is larger than some threshold $t$. Therefore, we drop the low-confidence clusters whose size is smaller than $t$, and calculate $K$ with \begin{align} K = \sum_{i=1}^{K'}\delta(|S_{i}| \geq t), \end{align} where $|S_{i}|$ is the size of the $i^{th}$ produced cluster and $\delta(\cdot)$ is an indicator function that outputs 1 if its condition is satisfied and 0 otherwise. Notably, we assign the threshold $t$ as the expected mean cluster size $\frac{N}{K'}$.
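A minimal numpy sketch of this estimate is given below. It is an illustration under simplifying assumptions: a plain Lloyd's-algorithm k-means stands in for the actual implementation, and `features` is a hypothetical array holding the intent representations extracted by the pre-trained model.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means; returns the cluster assignments."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # squared Euclidean distances, shape (N, k)
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return labels

def estimate_k(features, k_prime):
    """Over-cluster with K' centers, then count the clusters whose size
    reaches the expected mean size t = N / K' (the formula above)."""
    labels = kmeans(features, k_prime)
    sizes = np.bincount(labels, minlength=k_prime)
    t = len(features) / k_prime
    return int((sizes >= t).sum())
```

The estimate relies on the pre-trained features being compact for real intents, so that redundant centers only capture clusters far below the mean size.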
\begin{table*}[t!]\small \centering \begin{tabular}{@{\extracolsep{4pt}}clcccccc} \toprule \centering & & \multicolumn{3}{c}{CLINC} & \multicolumn{3}{c}{BANKING}\\ \addlinespace[0.1cm] \cline{3-5} \cline{6-8} \addlinespace[0.1cm] & Method & NMI & ARI & ACC & NMI & ARI & ACC \\ \midrule \multirow{7}{*}{Unsupervised.} & KM & 70.89 & 26.86 & 45.06 & 54.57 & 12.18 & 29.55 \\ & AG & 73.07 & 27.70 & 44.03 & 57.07 & 13.31 & 31.58 \\ & SAE-KM & 73.13 & 29.95 & 46.75 & 63.79 & 22.85 & 38.92 \\ & DEC & 74.83 & 27.46 & 46.89 & 67.78 & 27.21 & 41.29 \\ & DCN & 75.66 & 31.15 & 49.29 & 67.54 & 26.81 & 41.99 \\ & DAC & 78.40 & 40.49 & 55.94 & 47.35 & 14.24 & 27.41 \\ & DeepCluster & 65.58 & 19.11 & 35.70 & 41.77 & 8.95 & 20.69\\ \midrule \multirow{6}{*}{Semi-supervised.} & PCK-means & 68.70 & 35.40 & 54.61 & 48.22 & 16.24 & 32.66\\ & BERT-KCL & 86.82 & 58.79 & 68.86 & 75.21 & 46.72 & 60.15 \\ & BERT-MCL & 87.72 & 59.92 & 69.66 & 75.68 & 47.43 & 61.14 \\ & CDAC+ & 86.65 & 54.33 & 69.89 & 72.25 & 40.97 & 53.83 \\ & BERT-DTC & 90.54 & 65.02 & 74.15 & 76.55 & 44.70 & 56.51 \\ & DeepAligned & \textbf{93.89} & \textbf{79.75} & \textbf{86.49} & \textbf{79.56} & \textbf{53.64} & \textbf{64.90}\\ \bottomrule \end{tabular} \caption{ \label{results-main} The clustering results on two datasets. We evaluate both unsupervised and semi-supervised clustering methods. 
} \end{table*} \begin{table*}[t!]\small \centering \begin{tabular}{@{\extracolsep{4pt}}clcccccc} \toprule \centering & & \multicolumn{3}{c}{CLINC} & \multicolumn{3}{c}{BANKING}\\ \addlinespace[0.1cm] \cline{3-5} \cline{6-8} \addlinespace[0.1cm] & Method & NMI & ARI & ACC & NMI & ARI & ACC \\ \midrule \multirow{2}{*}{Without Pre-training} & Reinitialization & 57.80 & 9.63 & 23.02 & 34.34 & 4.49 & 13.67\\ & Alignment & 62.53 & 14.10 & 28.63 & 36.91 & 5.23 & 15.42\\ \midrule \multirow{2}{*}{With Pre-training} & Reinitialization & 82.90 & 45.67 & 55.80 & 68.12 & 31.56 & 41.32\\ & Alignment & \textbf{93.89} & \textbf{79.75} & \textbf{86.49} & \textbf{79.56} & \textbf{53.64} & \textbf{64.90}\\ \bottomrule \end{tabular} \caption{ \label{results-aba-1} Effectiveness of the pre-training and the alignment strategy on two datasets. } \end{table*} \subsection{Deep Aligned Clustering} After transferring knowledge from known intents, we propose an effective clustering method to find unlabeled known classes and discover novel classes. We firstly perform clustering and obtain cluster assignments and centroids. Then, we propose an original strategy to provide aligned targets for self-supervised learning. \subsubsection{Unsupervised Learning by Clustering} As most of the training data are unlabeled, it is important to effectively use a mass of unlabeled samples for discovering novel classes. Inspired by DeepCluster~\cite{caron2018deep}, we can benefit from the discriminative power of BERT to produce structured outputs as weak supervised signals. Specifically, we firstly extract intent features of all training data from the pre-trained model. 
Then, we use a standard clustering algorithm, k-means, to learn both the optimal cluster centroid matrix $\boldsymbol{C}$ and the cluster assignments $\{y_{i}\}_{i=1}^{N}$: \begin{align} \min _{\boldsymbol{C}\in \mathds R^{K \times D}}\frac{1}{N}\sum_{i=1}^{N}\min _{y_{i}\in\{1, \ldots, K\}}\left\|\boldsymbol{I}_i-\boldsymbol{C}_{y_{i}}\right\|_{2}^{2}, \label{k-means} \end{align} where $N$ is the number of training samples and $\|\cdot\|_{2}^{2}$ denotes the squared Euclidean distance. Then, we leverage the cluster assignments as pseudo-labels for feature learning. \subsubsection{Self-supervised Learning with Aligned Pseudo-labels} DeepCluster alternates between clustering and updating network parameters. It performs k-means to produce cluster assignments as pseudo-labels and uses them to train the neural network. However, the indices after k-means are permuted randomly in each training epoch, so the classifier parameters have to be reinitialized before each training epoch~\cite{Zhan_2020_CVPR}. Thus, we propose an alignment strategy to tackle the assignment-inconsistency problem. We notice that DeepCluster does not make use of the centroid matrix $\boldsymbol{C}$ in Eq.~\ref{k-means}. However, $\boldsymbol{C}$ is a crucial component, as it contains the optimal averaged targets of the cluster assignments. As each embedded sample is assigned to its nearest centroid in Euclidean space, we naturally adopt $\boldsymbol{C}$ as the prior knowledge to adjust the inconsistent cluster assignments in different training epochs. That is, we convert this problem into a centroid-alignment problem. Though the intent representations are updated continually, similar intents are distributed in nearby locations. The centroid synthesizes all similar intent samples in its cluster, so it is more stable and suitable for guiding the alignment process.
We assume that the centroids in contiguous training epochs are distributed relatively consistently in Euclidean space, and adopt the Hungarian algorithm~\cite{kuhn1955hungarian} to obtain the optimal mapping $G$: \begin{align} \boldsymbol{C}^{c} = G(\boldsymbol{C}^{l}), \end{align} where $\boldsymbol{C}^{c}$ and $\boldsymbol{C}^{l}$ respectively denote the centroid matrices in the current and the last training epoch. Then, we obtain the aligned pseudo-labels $y^{align}$ with $G(\cdot)$: \begin{align} y^{align} = G^{-1}(y^{c}), \end{align} where $G^{-1}$ denotes the inverse mapping of $G$ and $y^{c}$ denotes the pseudo-labels in the current training epoch. Finally, we use the aligned pseudo-labels to perform self-supervised learning under the supervision of the softmax loss $\mathcal{L}_{s}$: \begin{align} \mathcal{L}_{s}=-\frac{1}{N}\sum_{i=1}^{N} \log\frac{\exp(\phi(\boldsymbol{I}_{i})^{y_{i}^{align}})}{\sum_{j=1}^{K}\exp(\phi(\boldsymbol{I}_{i})^{j})}, \end{align} where $\phi(\cdot)$ is the pseudo-classifier for self-supervised learning, and $\phi(\cdot)^{j}$ denotes the output logit of the $j^{th}$ class. We use a cluster validity index (CVI) to evaluate the quality of the clusters obtained after clustering in each training epoch. Specifically, we adopt an unsupervised metric, the Silhouette Coefficient~\cite{ROUSSEEUW198753}, for evaluation: \begin{align} SC=\frac{1}{N}\sum_{i=1}^{N} \frac{b(\boldsymbol{I}_{i})-a(\boldsymbol{I}_{i})}{\max \{a(\boldsymbol{I}_{i}), b(\boldsymbol{I}_{i})\}}, \label{8} \end{align} where $a(\boldsymbol{I}_{i})$ is the average distance between $\boldsymbol{I}_{i}$ and all other samples in the $i^{th}$ cluster, which indicates the intra-class compactness, and $b(\boldsymbol{I}_{i})$ is the smallest distance between $\boldsymbol{I}_{i}$ and all samples not in the $i^{th}$ cluster, which indicates the inter-class separation. The range of $SC$ is between $-1$ and $1$, and a higher score indicates better clustering results.
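The centroid alignment described above can be sketched as follows (a minimal illustration under our own assumptions: Euclidean distance between centroids as the matching cost, and SciPy's Hungarian solver; the toy centroids are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_pseudo_labels(C_last, C_cur, y_cur):
    """Match current-epoch centroids to last-epoch centroids with the
    Hungarian algorithm, then relabel the current pseudo-labels into the
    last epoch's consistent indexing (the role of G^{-1} in the text)."""
    # cost[i, j] = distance between last-epoch centroid i and current centroid j
    cost = np.linalg.norm(C_last[:, None, :] - C_cur[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)  # optimal one-to-one mapping
    # mapping: current cluster index -> consistent (last-epoch) index
    mapping = np.empty(len(C_cur), dtype=int)
    mapping[col] = row
    return mapping[y_cur]  # aligned pseudo-labels y^align

# toy check: current centroids are a permuted, slightly shifted copy of last ones
C_last = np.array([[0., 0.], [5., 5.], [10., 0.]])
perm = [2, 0, 1]                 # index permutation introduced by re-clustering
C_cur = C_last[perm] + 0.01
y_cur = np.array([0, 1, 2, 2])
y_aligned = align_pseudo_labels(C_last, C_cur, y_cur)  # -> [2, 0, 1, 1]
```

Because each aligned label refers to the same region of the embedding space across epochs, the classifier head no longer needs reinitialization.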
\section{Experiments} \subsection{Datasets} We conduct experiments on two challenging benchmark intent datasets. Detailed statistics are shown in Table~\ref{datasets}. \subsubsection{CLINC} An intent classification dataset~\cite{larson-etal-2019-evaluation}, which contains 22,500 queries covering 150 intents across 10 domains. \subsubsection{BANKING} A fine-grained dataset in the banking domain~\cite{Casanueva2020}, which contains 13,083 customer service queries with 77 intents. \subsection{Baselines} \subsubsection{Unsupervised} We first compare with unsupervised clustering methods, including K-means (KM)~\cite{macqueen1967some}, agglomerative clustering (AG)~\cite{gowda1978agglomerative}, SAE-KM, DEC~\cite{xie2016unsupervised}, DCN~\cite{yang2017towards}, DAC~\cite{chang2017deep}, and DeepCluster~\cite{caron2018deep}. \begin{figure*}[t!] \centering \includegraphics[scale=.28]{figures/cls_clinc.pdf} \caption{\label{results-aba-2-1} Influence of the known class ratio on the CLINC dataset.} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[scale=.28]{figures/cls_banking.pdf} \caption{\label{results-aba-2-2} Influence of the known class ratio on the BANKING dataset.} \end{figure*} For KM and AG, we represent each sentence by averaging its pre-trained 300-dimensional GloVe word embeddings~\cite{pennington2014glove}. For SAE-KM, DEC, and DCN, we encode the sentences with a stacked autoencoder (SAE), which is helpful for capturing meaningful semantics on real-world datasets~\cite{xie2016unsupervised}. As DAC and DeepCluster are unsupervised clustering methods from computer vision, we replace their backbones with the BERT model for extracting text features.
\begin{table}\small \centering \begin{tabular}{@{\extracolsep{0.6pt}}clcccc} \toprule & & \multicolumn{2}{c}{CLINC (K'=300)} & \multicolumn{2}{c}{BANKING (K'=154)} \\ \addlinespace[0.1cm] \cline{3-4} \cline{5-6} \addlinespace[0.1cm] & Methods & K (Pred) & Error & K (Pred) & Error\\ \midrule \multirow{3.5}{*}[1ex]{\rotatebox[origin=c]{90}{\it 25\%}} & BERT-MCL & 38 & 75.00 & 19 & 75.32 \\ & BERT-DTC & 94 & 37.33 & 37 & 51.95 \\ & DeepAligned & \textbf{122} & \textbf{18.67} & \textbf{66} & \textbf{14.29}\\ \midrule \midrule \multirow{1.5}{*}[-1.5ex]{\rotatebox[origin=c]{90}{\it 50\%}} & BERT-MCL & 75 & 50.00 & 38 & 50.65 \\ & BERT-DTC & \textbf{131} & \textbf{12.67} & \textbf{71} & \textbf{7.79} \\ & DeepAligned &130 &13.33 &64 &16.88\\ \midrule \midrule \multirow{1.5}{*}[-1.5ex]{\rotatebox[origin=c]{90}{\it 75\%}} & BERT-MCL & 112 & 25.33 & 58 & 24.68 \\ & BERT-DTC & 195 & 30.00 & 110 & 42.86 \\ & DeepAligned &\textbf{129} &\textbf{14.00} &\textbf{67} &\textbf{12.99} \\ \bottomrule \end{tabular} \caption{ The results of predicting $K$ with an unknown number of clusters. We vary the known class ratio in the range of 25\%, 50\% and 75\%, and set $K'$ to twice the ground-truth number of clusters during clustering. } \label{aba-4} \end{table} \subsubsection{Semi-supervised} We also compare our method with semi-supervised clustering methods, including PCK-means~\cite{basu2004active}, BERT-KCL~\cite{hsu2018learning}, BERT-MCL~\cite{hsu2018multiclass}, BERT-DTC~\cite{Han2019learning} and CDAC+~\cite{lin2020discovering}. For a fair comparison, we replace the backbone of these methods with the same BERT model as ours. \subsection{Evaluation Metrics} We adopt three widely used metrics to evaluate the clustering results: Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and Accuracy (ACC). To calculate ACC, we use the Hungarian algorithm to obtain the mapping between the predicted classes and the ground-truth classes.
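The ACC computation just described can be sketched as follows (a minimal illustration assuming SciPy's Hungarian solver; the toy labels are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: find the best one-to-one mapping between predicted clusters and
    ground-truth classes via the Hungarian algorithm, then score as plain
    accuracy under that mapping."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    # confusion[p, t] = number of samples with predicted cluster p, true class t
    confusion = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        confusion[p, t] += 1
    row, col = linear_sum_assignment(-confusion)  # maximize matched counts
    return confusion[row, col].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])      # clusters 0 and 1 are swapped
acc = clustering_accuracy(y_true, y_pred)  # -> 1.0 after relabeling
```

The Hungarian step makes ACC invariant to the arbitrary numbering of clusters, which is exactly why it is needed for clustering evaluation.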
\subsection{Evaluation Settings} Following the same settings as in~\cite{lin2020discovering}, we randomly select 10\% of the training data as labeled and choose 75\% of all intents as known. We split each dataset into training, validation, and test sets. The number of intent categories is set to the ground truth. We first use the limited labeled data of known intents for pre-training, and tune with the validation set. Then, we use all training data for self-supervised learning and evaluate the clustering performance with the Silhouette Coefficient (as mentioned in Eq.~\ref{8}). Finally, we evaluate the performance on the test set and report the averaged results over ten runs of experiments with different random seeds. \subsection{Implementation Details} We use the pre-trained BERT model (bert-uncased, with a 12-layer transformer) implemented in PyTorch~\cite{Wolf2019HuggingFacesTS} as our network backbone, and adopt most of its suggested hyper-parameters for optimization. The training batch size is 128, the learning rate is $5\times 10^{-5}$, and the dimension of intent features $D$ is 768. Moreover, as suggested in~\cite{lin2020discovering}, we freeze all but the last transformer layer to speed up the training procedure and improve training efficiency with the BERT backbone. \section{Results and Discussion} Table~\ref{results-main} shows the results of all compared methods, with the best results highlighted in bold. Our method consistently achieves the best results and outperforms the other baselines by a large margin on all metrics and datasets, which demonstrates its effectiveness in discovering new intents with limited known intent data. We also find that most semi-supervised methods perform better than unsupervised methods, indicating that even limited labeled data as prior knowledge is helpful for improving the performance of unsupervised clustering. \begin{figure*}[t!]
\centering \includegraphics[scale=.28]{figures/k_banking.pdf} \caption{\label{results-aba-3-1} Influence of the number of clusters on the BANKING dataset.} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[scale=.282]{figures/k_clinc.pdf} \caption{\label{results-aba-3-2} Influence of the number of clusters on the CLINC dataset.} \end{figure*} \subsection{Effect of the Alignment Strategy} To investigate the contribution of the alignment strategy, we compare our method with the reinitialization strategy~\cite{caron2018deep}. As shown in Table~\ref{results-aba-1}, our method yields significant improvements over the reinitialization strategy in both semi-supervised and unsupervised settings. We suppose the reason is that random initialization discards the well-trained classifier parameters from former epochs. By contrast, our method preserves the historical embedding information by finding the mapping between the pseudo-labels produced in contiguous epochs, which provides stronger supervised signals for representation learning. \subsection{Estimate $K$} To investigate the effectiveness of predicting $K$, we set $K'$ to twice the ground-truth number of intent classes and compare with two other state-of-the-art methods (BERT-MCL and BERT-DTC). We vary the ratio of known classes in the range of 25\%, 50\%, and 75\%, and calculate the error rate (lower is better) for evaluation. As shown in Table~\ref{aba-4}, our method achieves almost the lowest error rates across the different known class ratios. This shows that it is reasonable to estimate the cluster number by removing low-confidence clusters with well-initialized intent features. We notice that BERT-DTC is a strong baseline, especially with 50\% known classes. The reason is that BERT-DTC also relies on the labeled samples to generate the probe set for determining the optimal number of classes. Nevertheless, its performance is unstable.
We also find that the predicted $K$ of BERT-MCL is close to the number of known classes. The reason is that BERT-MCL jointly performs clustering and classification. However, the classification part dominates training under the supervision of labeled data, so it tends to misclassify new intents into known intents during testing. \subsection{Effect of the Known Class Ratio} To investigate the influence of the number of known intents, we vary the known class ratio in the range of 25\%, 50\% and 75\% during training. As shown in Figure~\ref{results-aba-2-1} and Figure~\ref{results-aba-2-2}, our method achieves the best results with different numbers of known intents. All semi-supervised methods are sensitive to the number of known intents. In particular, though BERT-MCL and BERT-DTC achieve competitive results with 75\% known intents, their performance drops dramatically as the known class ratio decreases. We suppose the reason is that they largely depend on the prior knowledge of known intents to construct supervised signals (e.g., the pairwise similarities in BERT-MCL and the initialized centroids in BERT-DTC) for clustering. Therefore, the features learned by these methods are much more biased towards the labeled data. By contrast, our method only needs the labeled intent data for learning feature representations, so it is free from the bias towards labeled data during the self-supervised learning process. Moreover, our method achieves more robust results with fewer known intents. \subsection{Effect of the Number of Clusters} To investigate the sensitivity to the assigned cluster number $K'$, we vary $K'$ from the ground-truth number to four times that number, with the known class ratio fixed at 75\%. As shown in Figure~\ref{results-aba-3-1} and Figure~\ref{results-aba-3-2}, our method achieves the best results with different numbers of assigned clusters.
We notice that most semi-supervised clustering methods are vulnerable to the number of clusters, and their performance drops to some extent with a large $K'$. This is because many redundant classes may split what is originally one cluster with the same intent label into fine-grained sub-clusters. Compared with all these methods, our method benefits from a more accurately estimated cluster number for clustering, and therefore achieves better results even with a large $K'$. \section{Conclusion and Future Work} In this work, we have introduced an effective method for discovering new intents. Our method successfully transfers the prior knowledge of limited known intents and estimates the number of intents by eliminating low-confidence clusters. Moreover, it provides more stable and concrete supervised signals to guide the clustering process. We conduct extensive experiments on two challenging benchmark datasets to evaluate the performance. Our method achieves significant improvements over the compared methods and obtains more accurate estimates of the cluster number with limited prior knowledge. In the future, we will try different clustering methods to produce supervised signals and explore more self-supervised methods for representation learning. \section{Acknowledgments} This work is supported by the seed fund of Tsinghua University (Department of Computer Science and Technology)-Siemens Ltd., China Joint Research Center for Industrial Intelligence and Internet of Things.
\section{Introduction} \label{sec:introduction} Graph Matching (GM) aims to find the node correspondence between two or multiple graphs. As a long-standing and fundamental problem, GM has wide applications in different areas including computer vision and pattern recognition. In its general form, GM can be formulated as a combinatorial optimization problem, namely Lawler's Quadratic Assignment Problem (Lawler's QAP)~\cite{Lawler1963TheQA}, which is known to be NP-hard. Generally speaking, solving a graph matching problem involves two steps: extracting features from the input images to formulate the QAP instance, and solving the resulting constrained optimization problem; these correspond to the \textbf{front-end} feature extractor and the \textbf{back-end} solver, respectively. We emphasize this perspective in this paper. Impressive progress has been made on graph matching with the introduction of machine learning, especially deep learning techniques. For the graph matching problem, learning models are mainly applied on the front-end, especially for visual images, using convolutional neural networks (CNNs)~\cite{ZanfirCVPR18} for node feature learning and graph neural networks (GNNs) for structure embedding~\cite{li2019graph, WangICCV19,YuICLR20, LiICML19}. Compared with traditional learning-free methods, which adopt handcrafted descriptors, e.g. SIFT, for keypoint feature extraction, learnable features are shown to be more expressive as they are tailored to the training data. Another important advantage of using deep networks is that the graph structure information can be readily embedded into unary node features; as such, the classic NP-hard graph matching problem can in fact be converted into a linear assignment problem, which can be readily solved by the Hungarian method in polynomial time as the back-end solver.
Perhaps for this reason, existing graph matching learning methods~\cite{WangICCV19, FeyICLR20, YuICLR20, JiangPR21, ZhaoAAAI21, GaoCVPR2021} mostly focus on front-end learning, including both the graph features and the affinity metrics, basically by supervised learning using manually labeled node correspondences as ground truth. The back-end solver, in contrast, has received relatively little attention for learning in the literature: these methods simply combine their front-end feature extractors with traditional combinatorial solvers for the QAP optimization, which means they hardly utilize deep learning to improve the back-end solvers. As a matter of fact, the above discussion refers to Koopmans-Beckmann's QAP~\cite{KBQAP57}, which requires the explicit input of two graphs. We argue that such raw information may not always be available for various reasons in practice, e.g. privacy. The most general form of graph matching is Lawler's QAP~\cite{Lawler1963TheQA}, whose input is the pairwise affinity matrix from which the raw graph information is removed; in fact, Koopmans-Beckmann's QAP is a special case of Lawler's QAP. There are also standing and widely adopted public benchmarks for Lawler's QAP, e.g. QAPLIB~\cite{Burkard1997QAPLIBA}. Owing to its generality and popularity, there are also recent works~\cite{wang2020learning,Wang2019NeuralGM} following this line, whereby a GNN model is applied on the so-called association graph whose weighted adjacency matrix is the affinity matrix. This GNN model is trained for node embedding on the association graph, and selects the node correspondence via supervised learning in one shot. Despite the above progress in deep graph matching, little care has been taken to deal with the presence of outliers, which are ubiquitous in the real world.
Though there is a line of works for image matching that effectively dismiss outliers, dating back to the classic RANSAC~\cite{fischler1981random}, these works are basically based on specific pose and motion models in vision, which can be too specific to incorporate into the general graph matching literature. Moreover, as mentioned above, existing models are all supervised (or based on supervised learning modules), while in reality labeling is costly and almost impossible to obtain for the large-scale QAP instances arising in practice. Towards practical and robust graph matching learning, in the absence of labels and in the presence of outliers (in both graphs), this paper proposes a new learning method for graph matching, especially in its most general QAP form. In particular, reinforcement learning is conceptually well suited for its label-free nature and its flexibility in finding the node correspondence by sequential decision making, which provides a direct way of avoiding outlier over-matching through early stopping of node matching. In contrast, in most works~\cite{ZanfirCVPR18, WangICCV19, wang2020learning, rolinek2020deep, Wang2019NeuralGM} matching is performed in one shot, which incurs coupling of the inliers and outliers and lacks an explicit way to distinguish outliers. Moreover, an any-time algorithm is often welcome, as it generates a subset of the feasible solution at any time instead of waiting a long time for the whole solution without any feedback, e.g. in car dispatching. Based on the above motivation, we specifically devise a so-called revocable deep reinforcement learning framework that allows small mistakes over the matching procedure: the current action is revocable, so the agent can search again for a better node correspondence based on up-to-date environment information. Our technique is shown to be cost-effective and empirically outperforms very recent techniques for refining local decision making~\cite{Chen2019LearningTP}.
Moreover, since the standard graph matching objective is to maximize the affinity score between matched nodes, it causes the so-called over-matching issue, i.e. the outliers are also incorporated into the matching to increase the overall score. To make the objective more sensitive to outliers, we propose to regularize the affinity matrix such that it discourages unwanted matchings by assigning a negative score to those pairs. Intuitively, the RL agent will naturally stop matching spurious outliers as the objective score would otherwise decrease. In this sense, utilizing label-free revocable RL with affinity regularization to design a new back-end solver becomes a promising tool for pushing the frontier of graph matching research. However, we emphasize that our method is focused on the back-end part, whose input is the affinity matrix, and it cannot be combined with the learnable front-end components (CNNs and GNNs for input graph feature extraction, and metric learning, e.g. by an MLP) for joint differentiable front-back-end learning. The reason is that, as will be shown in our approach, the RL reward is in fact a function parameterized by the front-end CNN/GNN/MLP, which makes end-to-end training impossible. For the above reason, our RL solver is trained on the input affinity matrix, with the parameters of the front-end models fixed; these can be pretrained via existing supervised learning methods. Table~\ref{tab:intro} gives a comparison of existing works in terms of their learning modules and learning techniques. This protocol is akin to the QAP learning part in the recent work~\cite{Wang2019NeuralGM}, and can also be regarded as an inherent limitation brought by RL. Fortunately, as will be shown in our extensive experiments, our method outperforms end-to-end supervised methods, especially given a large ratio of outliers. Our two-stage training pipeline is also in fact more efficient than joint learning, and thus suitable for our method as RL is more costly than supervised learning.
\begin{table*}[t!] \centering \caption{Representative deep GM works. KB's means Koopmans-Beckmann's QAP which is a special form of Lawler's QAP. The appearance feature and structure feature are often modeled by CNN and by GNN, respectively. The affinity model is often relatively simple by a Gaussian kernel or MLP.} \resizebox{0.98\textwidth}{!} { \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|c|c||c|c|c|c||c|c||c|c|c} \hline & \multicolumn{2}{c||}{QAP Form} & \multicolumn{4}{c||}{Learning Components$^*$} & \multicolumn{2}{c||}{Matching Process} & \multicolumn{3}{c}{Learning Protocol} \\ \cline{2-12} & Lawler's & KB's & Appearance & Structure & Affinity & Back-end Solver & One-shot & Sequential & Supervised & Self-supervised & Reinforcement \\ \hline GMN~\cite{ZanfirCVPR18} & $\surd$ & & $\surd$ & & $\surd$ & & $\surd$ & & $\surd$ & & \\ \hline PCA~\cite{WangICCV19} & & $\surd$ & $\surd$ & $\surd$ & $\surd$ & & $\surd$ & & $\surd$ & & \\ \hline CIE~\cite{Yu2020LearningDG} & & $\surd$ & $\surd$ & $\surd$ & $\surd$ & & $\surd$ & & $\surd$ & & \\ \hline NGM~\cite{Wang2019NeuralGM} & $\surd$ & & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & & $\surd$ & & \\ \hline LCS~\cite{wang2020learning} & $\surd$ & & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & & $\surd$ & & \\ \hline BBGM~\cite{rolinek2020deep} & $\surd$ & & $\surd$ & $\surd$ & $\surd$ & & $\surd$ & & $\surd$ & & \\ \hline GANN~\cite{Wang2020GraduatedAF} & & $\surd$ & $\surd$ & & & & $\surd$ & & & $\surd$ & \\ \hline \textbf{RGM (ours)} & $\surd$ & & & & & $\surd$ & & $\surd$ & & & $\surd$ \\ \bottomrule \end{tabular}% } \vspace{5pt} \emph{\textbf{Remark:} Despite the benefits of label-free and timely node correspondence generation, our RL module cannot be combined with front-end models for joint feature and solver learning, as mentioned in Sec.~\ref{sec:introduction}: under the RL framework, at least by our presented reward based on the affinity objective score, it is impossible to jointly train the 
front-end models with the back-end solver. This is because the reward itself is a function w.r.t. the model parameters of the front-end appearance and structure models, e.g. CNN/GNN. In contrast, in the supervised NGM~\cite{Wang2019NeuralGM}, the ground truth node correspondences used for the loss are computationally irrelevant to the front-end modules, thus joint learning is feasible. Only NGM~\cite{Wang2019NeuralGM}/LCS~\cite{wang2020learning} (these concurrent works share essentially the same core idea) and RGM can be directly applied to QAP given an input affinity matrix. See results in experiments on QAPLIB in Sec.~\ref{sec:QAPLIB_test}.} \label{tab:intro}% \end{table*} In this paper, we propose two key components that enable the success of our proposed RL solver for GM, namely \textbf{RGM}. The first is our revocable node selection mechanism, which accommodates the unavoidable local matching mistakes over steps; a revoke is cautiously performed on an on-demand basis using a prediction score. This scheme is particularly useful when the number of common inliers is given, as the sequential decision making can stop early without over-matching outliers. However, in practice such information might not be available. Hence our second key technique is a regularization imposed on the input affinity matrix, such that the corresponding score value for outlier matching is decreased and can even become negative. The sequential matching then stops to avoid decreasing the affinity score. Besides, our RL framework also adopts the Double Dueling DQN~\cite{Wang2016DuelingNA} algorithm with prioritized experience replay~\cite{Schaul2016PrioritizedER} for cost-effective learning. The highlights of this paper are: 1) We propose a new graph matching learning method that sequentially selects node correspondences between two graphs, in contrast to the majority of the existing learning literature, which obtains the whole matching in one shot.
Accordingly, our approach can naturally handle partial matching when outliers are present in one graph or in both, by early stopping of the correspondence selection procedure, which can otherwise be nontrivial to handle in one-shot models, especially for general graph matching. 2) Specifically, we devise a revocable approach to select possible node correspondences one pair after another, whose mechanism is adapted to unlabeled graph data with a given affinity score as the reward, under the reinforcement learning paradigm. To our knowledge, this is the first attempt to successfully adapt reinforcement learning to graph matching, which was previously dominated by label-intensive supervised learning. The RL scheme also naturally supports seeded graph matching, which has not been explored in previous deep GM models. 3) For the inherent ambiguity in avoiding matching the outliers in both graphs, we develop a regularization technique for the affinity matrix, whose resulting elements can be negative, such that the affinity score maximization reward no longer pursues matching as many node pairs as possible. This mechanism can also be naturally combined with the reinforcement learning paradigm as an independent reward-function pre-processing module and an improved input state. To our best knowledge, this is also the first work on regularizing the affinity matrix to avoid over-matching among outliers, even without knowing the exact number of common inliers, which is not always available in practice. 4) On synthetic datasets, the Willow Object dataset, the Pascal VOC dataset, and QAPLIB, our RGM shows competitive performance in both F1 score and objective affinity score, compared with both learning-free and learning-based baselines.
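To make the regularization idea of highlight 3) concrete, here is a minimal sketch (entirely our illustration: the constant shift on the node affinities, the greedy sequential matcher, and all numbers are assumptions, not the paper's actual RL agent or regularizer) of how negative-shifted affinities make a sequential matcher stop before matching a spurious outlier pair:

```python
import numpy as np

def greedy_sequential_match(K, n1, n2):
    """Toy sequential matcher on an affinity matrix K: repeatedly add the
    pair (i, a) with the largest marginal gain in vec(X)^T K vec(X), and
    stop early once no pair has positive gain. Index convention:
    ia = i * n2 + a, as in Lawler's QAP."""
    matched = []                          # selected pair indices ia
    free1, free2 = set(range(n1)), set(range(n2))
    while free1 and free2:
        best_gain, best = 0.0, None
        for i in sorted(free1):
            for a in sorted(free2):
                ia = i * n2 + a
                # gain = node affinity + edge affinities with matched pairs
                gain = K[ia, ia] + 2 * sum(K[ia, jb] for jb in matched)
                if gain > best_gain:
                    best_gain, best = gain, ia
        if best is None:                  # early stop: no positive gain left
            break
        matched.append(best)
        free1.discard(best // n2)
        free2.discard(best % n2)
    return matched

n1 = n2 = 2
K = np.zeros((4, 4))
K[0, 0] = 1.0    # strong inlier pair (node 0 <-> node a)
K[3, 3] = 0.05   # weak, spurious pair between likely outliers
lam = 0.1        # assumed constant shift on node affinities
K_reg = K.copy()
idx = np.arange(n1 * n2)
K_reg[idx, idx] -= lam                    # regularized affinity matrix

full = greedy_sequential_match(K, n1, n2)        # matches both pairs
partial = greedy_sequential_match(K_reg, n1, n2) # stops after the inlier pair
```

With the raw non-negative affinities, even the weak pair is matched because any positive score helps the objective; after the shift, its gain becomes negative and the matcher stops early.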
Note that RGM focuses on learning the back-end solver and hence is orthogonal to many existing front-end feature learning based GM methods; this enables our solver to directly solve the QAP problem without knowing the input graphs, and also to serve as a post-solver that further boosts the front-end learning solvers' performance, as shown in our experiments. The source code of this work will be made publicly available at: \url{https://github.com/Thinklab-SJTU/}. \section{Related Work}\label{sec:related} We discuss the existing works closely related to ours: i) graph matching, the problem we address in this paper; ii) deep learning of graph matching, which is the emerging line of research in graph matching; iii) graph matching with outliers, which is our main focus; and iv) reinforcement learning for combinatorial optimization, as our work readily falls into this broader category. \textbf{Graph matching.} Formally, in this paper we consider weighted graph matching, which aims to find the node correspondence by considering both the node features and the edge attributes, and is known to be NP-hard in its general form~\cite{YanIJCAI20,LoiolaEJOR07}. Moreover, graph matching falls into the more general class of Quadratic Assignment Problems (QAP). Classic methods mainly resort to different optimization heuristics, ranging from random walk~\cite{Cho2010ReweightedRW}, spectral matching~\cite{Leordeanu2005AST}, path-following algorithms~\cite{ZhouCVPR12}, and graduated assignment~\cite{Gold1996AGA}, to SDP relaxation based techniques~\cite{schellewald2005probabilistic}, to name a few. However, such optimization based methods face a saturation of performance, and hence learning based approaches have been receiving more and more attention recently. Our work also falls into the deep graph matching literature.
\textbf{Deep learning of GM.} Since the seminal work~\cite{ZanfirCVPR18}, deep neural networks have been a promising paradigm given their recently recognized potential for solving combinatorial problems~\cite{BengioEJOR20}, including graph matching~\cite{YanIJCAI20}. Among the deep GM methods, we discuss three representative lines of research according to their methodology. The first line of works~\cite{WangICCV19,YuICLR20} applies CNNs or/and GNNs to learn the input graphs' node and structure features, as well as the affinity metric. By using a GNN to embed the structure into node embeddings, the resulting problem degrades into linear assignment, which can be optimally solved by the Sinkhorn network~\cite{cuturiNIPS13} (itself non-learnable) to fulfill double-stochasticity. Hence, in this paper we regard them as front-end feature learning models. Instead, another line of studies~\cite{wang2020learning} follows the general Lawler's QAP form exactly, and the problem becomes the combinatorial selection of nodes on the association graph, whose weights form the affinity matrix. Accordingly, GNN based embedding learning is performed on the association graph instead of on the input graphs, which in fact are not always available in practice, e.g. for privacy. We refer to the learning part on the association graph as back-end learning, and it is shown in~\cite{Wang2019NeuralGM} that this paradigm can be directly applied to the QAPLIB benchmark~\cite{Burkard1997QAPLIBA}. Meanwhile, there emerge seminal works on differentiable learning for combinatorial tasks~\cite{rolinek2020deep} and a specifically tuned model for graph matching~\cite{rolinek2020deep}, whereby learning-free solvers can be integrated as a black-box component. The key idea is perturbation based gradient estimation.
Yet all the above models adopt supervised learning, and there is little reported success in using RL to solve GM despite its label-free advantage and popularity in solving other combinatorial problems~\cite{BengioEJOR20}. \textbf{Subgraph matching against outliers.} Matching against outliers has been a long-standing task, especially in vision, with seminal works dating back to RANSAC~\cite{fischler1981random}. There are many efforts~\cite{torresani2012dual,yang2015outlier,yi2018learning,zhang2019learning,liu2020partial} (to name a few) exploring the specific problem structure and clues in terms of spatial and geometric coherence, e.g. motion, homography, pose, etc., to achieve robust point or graph matching in the presence of outliers. As this paper is focused on the general setting of graph matching without additional assumptions or parametric transform models, the relevant works are relatively scarce. One simple and popular technique for handling outliers is adding dummy nodes, especially when the outliers are assumed to be present in only one graph. There are also some general and more complex methods in the literature for GM. A strategy based on domain adaptation is utilized in \cite{wang2019functional}, which removes the outliers as part of the data normalization module. A heuristic and effective max-pooling strategy is developed in \cite{ChoCVPR14Maxpooling} for dismissing excessive outliers. Interestingly, the recent work \cite{wang2020zero} proposes a principled way to suppress the matching of outliers by assigning zero-valued vectors to the potential outliers. In fact, GM with outliers is similar to the maximal common subgraph problem~\cite{YangXuTNNLS17}, in the sense that the keypoints to be matched can be regarded as a subgraph of the input graph.
It is worth noting, however, that all the above methods are learning-free, and to our best knowledge the trending learning-based solvers have not addressed the outlier problem explicitly except for adding dummy nodes~\cite{Wang2019NeuralGM}; this is the gap to be filled by this paper. In contrast, we propose a tailored mechanism for regularizing the affinity matrix (i.e. the weight matrix of the association graph), such that the elements corresponding to outliers are more prone to be negative. As a result, the widely used affinity score maximization protocol naturally refrains from matching the outliers, which otherwise are always over-matched given a non-negative affinity matrix. \begin{figure*}[tb!] \centering \includegraphics[width=0.99\textwidth]{figures/agpipeline.pdf} \caption{The association graph ($G^{a}$ on the top) can be derived from the raw input graphs ($G^{1}, G^{2}$ at the bottom) as shown on the left. On the right we show the matching process of our RL procedure: the blue vertices denote available vertices, the green vertices denote selected (or equivalently, matched) vertices, and the blurred vertices denote unavailable vertices. The agent selects ``1a", ``2b", and ``3c" progressively on $G^{a}$.} \label{fig:ag} \end{figure*} \textbf{Reinforcement learning for combinatorial optimization.} There is growing interest in using reinforcement learning to solve combinatorial optimization problems~\cite{BengioEJOR20}.
Researchers have considered value-based~\cite{Mnih2013PlayingAW} and policy-based~\cite{Silver2014DeterministicPG} reinforcement learning for NP-hard combinatorial optimization problems, such as the traveling salesman problem (TSP)~\cite{Khalil2017LearningCO}, the vehicle routing problem (VRP)~\cite{Nazari2018ReinforcementLF}, the job scheduling problem (JSP)~\cite{Chen2019LearningTP}, bipartite matching~\cite{WangICDE19}, maximal common subgraph (MCS)~\cite{bai2021glsearch}, causal discovery~\cite{zhu2019causal}, and game-theoretic semantics~\cite{xu2021first}. The main challenges of these approaches are designing a suitable problem representation and tuning the reinforcement learning algorithms. For combinatorial optimization problems on a single graph, pointer networks~\cite{Vinyals2015PointerN} and graph neural networks~\cite{KipfICLR17} are the most widely used representations. However, graph matching takes two graphs as input, and the agent needs to pick a node from each of them at every step, which differs from the aforementioned single-graph combinatorial problems. In recent years, RL has started to be adopted for linear assignment problems such as bipartite graph matching~\cite{Hamzehi1ITSC19} and its dynamic setting~\cite{WangICDE19}. However, these works only consider node affinity but not edge affinity, and focus on solving the dynamic case rather than the basic matching problem itself. In contrast, GM involves the additional edge information, which is often more challenging and has not, to the best of our knowledge, been (successfully) addressed by RL. \section{Preliminaries}\label{sec:prelim} Graph matching aims to find the node correspondence between two or among multiple graphs, which is a fundamental problem in vision and pattern recognition. In this paper, we mainly focus on two-graph matching, which is also known as pairwise graph matching.
Specifically, we consider a more difficult setting, where there are outliers in both graphs, and we want to match the inliers while ignoring all outliers. Given two weighted graphs $G^{1}$ and $G^{2}$, the goal is to find a matching between their nodes such that the affinity score is maximized. We use $V^{1}$ and $V^{2}$ to represent the nodes of graphs $G^{1}$ and $G^{2}$. We suppose that $|V^{1}| = n_{1}, |V^{2}| = n_{2}$, and \textbf{there can be outliers in $G^{1}$, $G^{2}$, or both}. $E^{1}$ and $E^{2}$ denote the edge attributes of graphs $G^{1}$ and $G^{2}$. The affinities in pairwise graph matching include the first-order (node) affinities and the second-order (edge) affinities. Generally, the graph matching problem can be regarded as Lawler's Quadratic Assignment Problem~\cite{Lawler1963TheQA}: \begin{equation} \begin{split} &J(\mathbf{X}) = \text{vec}(\mathbf{X})^\top\ \mathbf{K}\ \text{vec}(\mathbf{X}), \\ &\mathbf{X} \in \left\{ 0, 1 \right\}^{n_{1} \times n_{2}},\ \mathbf{X}\mathbf{1}_{n_{2}} \le \mathbf{1}_{n_{1}},\ \mathbf{X}^\top\mathbf{1}_{n_{1}} \le \mathbf{1}_{n_{2}} \end{split} \label{eq:lawler} \end{equation} where $\mathbf{X}$ is the (partial) permutation matrix whose elements are $0$ or $1$, and $\mathbf{X}_{i,a} = 1$ denotes that node $i$ in graph $G^{1}$ is matched with node $a$ in graph $G^{2}$. The operator $\text{vec}(\cdot)$ denotes vectorization. $\mathbf K \in \mathbb{R}^{n_{1}n_{2} \times n_{1}n_{2}}$ is the affinity matrix. For node $i$ in $G^{1}$ and node $a$ in $G^{2}$, the node-to-node affinity is encoded by the diagonal element $\mathbf{K}_{ia, ia}$, while for edge $ij$ in $G^{1}$ and edge $ab$ in $G^{2}$, the edge-to-edge affinity is encoded by the off-diagonal element $\mathbf{K}_{ia, jb}$. Assuming $i$ and $a$ both start from $0$, the index $ia$ means $i \times n_2 + a$.
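As a quick numerical sanity check of this formulation, the objective $J(\mathbf{X})$ can be evaluated in a few lines of numpy. The sketch below (function name and toy values are ours, not from the paper's code) flattens $\mathbf{X}$ row-major, consistent with the stated index convention $ia = i \times n_2 + a$:

```python
import numpy as np

def qap_score(X, K):
    """Lawler's QAP objective J(X) = vec(X)^T K vec(X). Row-major
    flattening matches the stated index convention ia = i * n2 + a."""
    x = X.reshape(-1)
    return float(x @ K @ x)

# Toy 2x2 instance: matching 0->0 and 1->1 is rewarded by node and edge terms.
n1, n2 = 2, 2
K = np.zeros((n1 * n2, n1 * n2))
K[0 * n2 + 0, 0 * n2 + 0] = 1.0        # node affinity of the pair (0, 0)
K[1 * n2 + 1, 1 * n2 + 1] = 1.0        # node affinity of the pair (1, 1)
K[0 * n2 + 0, 1 * n2 + 1] = 0.5        # edge affinity, stored in both orders
K[1 * n2 + 1, 0 * n2 + 0] = 0.5
print(qap_score(np.eye(2), K))         # 1 + 1 + 0.5 + 0.5 = 3.0
```

Here the identity matching collects both diagonal (node) entries plus the two symmetric off-diagonal (edge) entries of $\mathbf{K}$.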
The objective of Lawler's QAP is to maximize the sum of the first-order and second-order affinity scores $J(\mathbf{X})$, given the affinity matrix $\mathbf{K}$, by finding an optimal permutation $\mathbf{X}$. Graph matching involves (at least) two input graphs. Instead of working directly on the two individual graphs, which can disclose potentially sensitive raw data, in this paper we first construct the association graph of the pairwise graphs as the input representation~\cite{Leordeanu2005AST,Wang2019NeuralGM}. We also expect that, in this way, the problem is more easily handled by RL, as the learning can focus on a single graph. In fact, RL has achieved notable success on many other combinatorial problems~\cite{Khalil2017LearningCO,Nazari2018ReinforcementLF,bai2021glsearch,Chen2019LearningTP}. We leave a counterpart RL solver working directly on the input graphs for future work. Specifically, following~\cite{Leordeanu2005AST,Wang2019NeuralGM}, we construct an association graph $G^{a} = (V^{a}, E^{a})$ from the original pairwise graphs $G^{1}$ and $G^{2}$, with the help of the affinity matrix $\mathbf{K}$. We merge each pair of nodes $(v_{i}, v_{a}) \in V^{1} \times V^{2}$ into a vertex $v_{p} \in V^{a}$, so the association graph contains $|V^{a}| = n_{1} \times n_{2}$ vertices\footnote{To distinguish the input graphs from the derived association graph, this paper uses ``node" for the raw graphs and ``vertex" for the association graph.}. An edge exists between every two vertices as long as they do not share a node from the original pairwise graphs, so every vertex is incident to $(n_{1} - 1) \times (n_{2} - 1)$ edges. The association graph carries both vertex weights $w(v_{p})$ and edge weights $w(v_{p}, v_{q})$.
The vertex and edge weights denote the first- and second-order affinities of Lawler's QAP, respectively: \begin{equation} \begin{split} &\mathbf{F}_{pp} = w(v_{p}) = \mathbf{K}_{ia, ia},\ \text{where} \ p = ia \\ &\mathbf{W}_{pq} = w(v_{p}, v_{q}) = \mathbf{K}_{ia, jb},\ \text{where} \ p = ia, q = jb \end{split} \end{equation} where the vertex index $p$ in the association graph $G^{a}$ combines the indices $i$ and $a$ of the original pairwise graphs $G^{1}$ and $G^{2}$. $\mathbf{F}, \mathbf{W} \in \mathbb{R}^{n_{1}n_{2} \times n_{1}n_{2}}$ are the weight matrices that contain the vertex weights and edge weights of the association graph, respectively. Fig.~\ref{fig:ag} shows an example of constructing the association graph from the input graphs. Selecting a vertex $p$ in the association graph is equivalent to matching nodes $i$ and $a$ in the inputs. After constructing the association graph, we can rewrite the original objective in Eq.~\ref{eq:lawler}. In the association graph $G^{a}$, we select a set of vertices $\mathbb{U}$, which induces a complete subgraph of $G^{a}$. \textbf{The set of vertices $\mathbb{U}$ in the association graph is equivalent to the permutation matrix $\mathbf{X}$}, as long as the set $\mathbb{U}$ does not violate the constraints in Eq.~\ref{eq:lawler} (to be discussed in Sec.~\ref{subsec:agent}). Moreover, the objective in Eq.~\ref{eq:lawler} can be regarded as maximizing the sum of the vertex weights (the original first-order affinities) and edge weights (the original second-order affinities) of the complete subgraph induced by $\mathbb{U}$: \begin{equation} \begin{split} J(\mathbb{U}) = \sum_{v_{p} \in \mathbb{U}}w(v_{p}) + \sum_{v_{p}, v_{q} \in \mathbb{U}}w(v_{p}, v_{q}) \end{split} \label{eq:newobj} \end{equation} Then, we can optimize $J(\mathbb{U})$ on the association graph by learning how to add vertices to the set $\mathbb{U}$.
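The decomposition of $\mathbf{K}$ into vertex weights, edge weights, and the conflict-based adjacency of the association graph, together with the set objective $J(\mathbb{U})$, can be sketched as follows (helper names are ours; indexing assumes the row-major convention $p = i \cdot n_2 + a$, and the edge-weight sum runs over ordered pairs $p \ne q$, matching $\text{vec}(\mathbf{X})^\top \mathbf{K}\, \text{vec}(\mathbf{X})$ term by term):

```python
import numpy as np

def association_graph(K, n1, n2):
    """Split affinity matrix K into vertex weights F (its diagonal) and
    edge weights W (its off-diagonal part), plus the adjacency matrix A
    of the association graph (vertex p = i * n2 + a)."""
    F = np.diag(np.diag(K))          # first-order (vertex) affinities
    W = K - F                        # second-order (edge) affinities
    idx = np.arange(n1 * n2)
    i, a = idx // n2, idx % n2
    # Two vertices are adjacent iff they share no node of G^1 or G^2.
    A = ((i[:, None] != i[None, :]) & (a[:, None] != a[None, :])).astype(int)
    return F, W, A

def score_U(U, F, W):
    """J(U): vertex weights plus edge weights over the selected set U."""
    U = list(U)
    return (sum(F[p, p] for p in U)
            + sum(W[p, q] for p in U for q in U if p != q))

# Toy 2x2 example: select vertices 0 = (0,0) and 3 = (1,1).
K = np.zeros((4, 4))
K[0, 0], K[3, 3] = 1.0, 1.0
K[0, 3] = K[3, 0] = 0.5
F, W, A = association_graph(K, 2, 2)
print(score_U({0, 3}, F, W))   # 1 + 1 + 0.5 + 0.5 = 3.0
```

On this toy instance the set score coincides with the matrix-form QAP score of the corresponding permutation, as the equivalence in the text requires.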
Note that $J(\mathbb{U})$ is equivalent to $J(\mathbf{X})$: they are two forms of the same objective function, since $\mathbb{U}$ is equivalent to $\mathbf{X}$. $J(\mathbf{X})$ is the permutation-matrix formulation, while $J(\mathbb{U})$ is the vertex-set formulation. \section{Approach}\label{sec:method} In this section, we present our deep RL based solver RGM for graph matching, in the sense of maximizing the affinity objective function. We first introduce the basic RL framework in Sec.~\ref{subsec:framework}, in which we show the design of our agent in Sec.~\ref{subsec:agent}. We further describe the network structure in Sec.~\ref{subsec:network}. The experience replay memory is described in Sec.~\ref{subsec:memory} and the updating algorithm in Sec.~\ref{subsec:update}. Then, we introduce our revocable reinforcement learning network in Sec.~\ref{sec:rev}, which allows the agent to undo actions made before. Finally, in Sec.~\ref{sec:regualr}, we introduce our design for outlier-robust matching, and show why RL based sequential matching is a natural way to achieve outlier-robust graph matching, which has not been fulfilled by existing methods. \subsection{Reinforcement Learning for Graph Matching} \label{subsec:framework} The idea of RL is to learn from the interactions between the agent and the environment~\cite{Sutton2005ReinforcementLA}. The agent's observation of the current environment is called the state $s$. The agent chooses an action $a$ given the current state by a specific policy. After the agent performs an action, the environment transitions to another state $s'$. Meanwhile, the environment feeds back a reward $r$ to the agent. This pipeline solves the problem progressively. For graph matching, ``progressively" means selecting the vertices of the association graph one by one.
The environment is defined by a partial solution to the original combinatorial problem (Eq.~\ref{eq:lawler}), equivalently represented on the association graph, and the reward denotes the improvement of the objective function brought by matching a new pair of nodes. The interactions between the agent and the environment are recorded as transitions $(s, a, r, s')$ in an experience replay memory $\mathcal{M}$. After several episodes, the agent updates its network $f_{\theta}$ according to the transitions sampled from $\mathcal{M}$. In this paper, we design our RL method based on Double Dueling DQN (D3QN)~\cite{Mnih2013PlayingAW, Wang2016DuelingNA, Hasselt2016DeepRL}, since graph matching is a typical discrete decision-making problem. It is widely accepted~\cite{sutton2018reinforcement} that value-based RL algorithms work well in the discrete RL scenario, and D3QN is a state-of-the-art value-based RL algorithm. We show ablation results in the experiments to verify the effectiveness of this choice. The training process of RGM is described in Algorithm~\ref{alg:train}, in which some modules will be introduced later. \begin{algorithm}[htb!]
\DontPrintSemicolon \caption{Training RGM (with revocable action (Sec.~\ref{sec:rev}), inlier count information (Sec.~\ref{sec:ici}), and affinity regularization (Sec.~\ref{sec:ar})).} \label{alg:train} \SetKwInOut{Input}{\textbf{Input}}\SetKwInOut{Output}{\textbf{Output}} \KwIn{Dataset $\mathbb{D}$; step size $\eta$; exploration rate $\epsilon$; updating frequencies $c_{1}, c_{2}$; inlier count $n_{i}$ (optional).} \KwOut{Well-trained Q-value network $f_{\theta}$.} \BlankLine Randomly initialize Q-value network $f_{\theta}$; \\ Initialize target Q-value network $f_{\theta^{-}} \leftarrow f_{\theta}$; \\ Initialize experience replay memory $\mathcal{M}$; \\ Global count $cnt \leftarrow 0$; \\ \For{episode $\longleftarrow$ 0, 1, 2, \dots} { Sample pairwise graphs $G^{1}, G^{2}$ from dataset $\mathbb{D}$; \\ Construct the association graph $G^{a}$ from $G^{1}, G^{2}$, and get its affinity matrix $\mathbf{K}$; \\ Acquire the initial state $s$ of $G^{a}$;\\ \For{ind $\longleftarrow$ 0, 1, 2, \dots} { \textcolor{blue}{\emph{\# Estimate Q-value:}}\\ \If{\textcolor{red}{Affinity Regularization}} { Calculate $\hat{\mathbf{K}}$ for regularization given $s$ by Eq.~\ref{equ:tau}; \\ Input $\hat{\mathbf{K}}$ and $s$ to the GNN in Eq.~\ref{eq:embedding};\\ } \Else { Input $\mathbf{K}$ and $s$ to the GNN in Eq.~\ref{eq:embedding};\\ } Get $\mathbf{Q}$ from the Q-value network in Eq.~\ref{eq:qvalue};\\ -------------------------------------------------------------\\ \textcolor{blue}{\emph{\# Choose next action:}}\\ \If{\textcolor{red}{Revocable Mechanism}} { Set the available vertex set $\mathbb{V}$ to all vertices $V^{a}$ of the association graph $G^{a}$;\\ } \Else { Calculate the available vertex set $\mathbb{V}$ by Eq.~\ref{eq:available_set};\\ } With probability $\epsilon$ select a random action $a \in \mathbb{V}$, otherwise select $a = \arg\max_{a \in \mathbb{V}}\mathbf{Q}(s, a; f_{\theta})$; \\ -------------------------------------------------------------\\ \textcolor{blue}{\emph{\#
Interact with the environment:}}\\ $s' \xleftarrow{a} s$;\\ \If{\textcolor{red}{Affinity Regularization}} { $r = J(s') \cdot f(|s'|) - J(s) \cdot f(|s|)$ by Eq.~\ref{eq:r_reward};\\ } \Else { $r = J(s') - J(s)$ by Eq.~\ref{eq:reward};\\ } Store the transition $(s,a,r,s')$ in $\mathcal{M}$;\\ -------------------------------------------------------------\\ \textcolor{blue}{\emph{\# Update neural networks:}}\\ $cnt \leftarrow cnt + 1 $;\\ \If{$cnt\ \%\ c_{1} == 0$} { Calculate $\mathcal L(f_{\theta}; \mathcal M)$ by Eq.~\ref{eq:ddqn};\\ Update $f_{\theta}$ : $\theta \leftarrow \theta - \eta \nabla_{\theta}\mathcal L(f_{\theta}; \mathcal M);$ } Update the transition priorities in $\mathcal{M}$;\\ \If{$cnt\ \%\ c_{2} == 0$} { Update $f_{\theta^{-}}$ : $f_{\theta^{-}} \leftarrow f_{\theta}$;\\ } -------------------------------------------------------------\\ \textcolor{blue}{\emph{\# Transition:}}\\ $s \leftarrow s'$; \\ \If{\textcolor{red}{Inlier Count} and $|s| == n_{i}$}{ \textbf{break};\\ } } } \end{algorithm} \subsubsection{Agent Design for Graph Matching} \label{subsec:agent} First, we detail the state, action, and reward: \textbf{1) State.} The state $s$ is the current partial solution $\mathbb{U'}$. Note that $\mathbb{U'}$ is also a set of vertices in the association graph, with $|\mathbb{U'}| \le \min(n_{1}, n_{2})$. The size of $\mathbb{U'}$ grows from $0$ at the beginning of each episode, and the partial solution $\mathbb{U'}$ finally becomes a complete solution $\mathbb{U}$ when the agent decides to stop the episode. \textbf{2) Action.} The action $a$ of our reinforcement learning agent is to select a vertex in the association graph and add it to the partial solution $\mathbb{U'}$. \textbf{By the definition of graph matching, we cannot match two nodes in $G^{1}$ to the same node in $G^{2}$, and vice versa.} Therefore, in our basic RL framework, we can only select vertices from the available vertex set.
Take Fig.~\ref{fig:ag} as an example: once we select the vertex ``1a", we have matched node ``1" in $G^{1}$ with node ``a" in $G^{2}$. Then, we cannot match node ``1" to node ``b" or ``c" later, which means we cannot select vertices ``1b" or ``1c". Given the partial solution $\mathbb{U'}$, the available vertex set $\mathbb{V}$ is written as (assuming the association graph is fully connected): \begin{equation} \mathbb{V} = \left\{v \ \middle|\ \mathbf{A}(v, v') = 1,\ \forall\ v' \in \mathbb{U'},\ v \in V^{a} \right\} \label{eq:available_set} \end{equation} where $V^{a}$ is the vertex set of the association graph, whose adjacency matrix is $\mathbf{A}$. Eq.~\ref{eq:available_set} holds since two vertices are connected if they do not contain the same node from the input graphs; if a vertex is connected to all vertices in $\mathbb{U'}$, it has no conflict. Given the available set, we mask all unavailable vertices in the association graph to make sure that the agent cannot select them, as illustrated by the blurred vertices in Fig.~\ref{fig:ag}. The action is then to pick a vertex $v$ from the available vertex set $\mathbb{V}$, where $\mathbb{U}_{old}$ is the old partial solution and $\mathbb{U}_{new}$ is the new partial solution after the action. \begin{equation} \mathbb{U}_{old} \xrightarrow{v \in \mathbb{V}} \mathbb{U}_{new} \label{eq:action} \end{equation} It is worth noting that the requirement that the agent select a vertex from the available set only exists in our basic RL framework.
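The rule of Eq.~\ref{eq:available_set} amounts to keeping exactly the vertices adjacent to every chosen vertex. A minimal sketch (our own code, with the conflict-based adjacency described above, $n_1 = n_2 = 3$ as in Fig.~\ref{fig:ag}):

```python
import numpy as np

n1 = n2 = 3
idx = np.arange(n1 * n2)
i, a = idx // n2, idx % n2
# Two vertices are adjacent iff they share no node of either input graph.
A = ((i[:, None] != i[None, :]) & (a[:, None] != a[None, :])).astype(int)

def available_set(partial, A):
    """Vertices adjacent to every chosen vertex, i.e. conflict-free."""
    n = A.shape[0]
    return sorted(v for v in range(n)
                  if v not in partial and all(A[v, u] == 1 for u in partial))

# After selecting "1a" (vertex 0), every vertex sharing node "1" or node
# "a" becomes unavailable; only the 2x2 remaining block survives.
print(available_set({0}, A))   # [4, 5, 7, 8]
```

With an empty partial solution all nine vertices are available, and each selection removes a full "row" and "column" of candidates, mirroring the masking in Fig.~\ref{fig:ag}.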
\textbf{In our later proposed revocable RL framework, the agent can select any vertex without constraint.} \textbf{3) Reward.} We define the reward $r$ as the improvement of the objective score from the old partial solution to the new partial solution after executing an action, according to Eq.~\ref{eq:newobj}: \begin{equation} r = J(\mathbb{U}_{new}) - J(\mathbb{U}_{old}) \label{eq:reward} \end{equation} \subsubsection{Network Structure} \label{subsec:network} Our networks include the state representation network and the Q-value estimation network, which are detailed as follows. \textbf{1) State Representation Networks.} To better represent the current state on the association graph, we choose graph neural networks (GNN)~\cite{KipfICLR17} to compute its embedding. A GNN extracts vertex features based on their adjacent neighbors. However, traditional GNNs are not sensitive to edge weights. To better use the edge weights in the association graph, we borrow the idea of struct2vec~\cite{Dai2016DiscriminativeEO}. Our embedding networks take into account the current solution, the vertex weights, and the edge weights of the association graph. The embedding formula is: \begin{equation} \begin{split} \mathbf{E}^{t + 1} &= \text{ReLU}(\mathbf{h}_{1} + \mathbf{h}_{2} + \mathbf{h}_{3} + \mathbf{h}_{4}) \\ \mathbf{h}_{1} &= \mathbf{X'} \cdot \theta_{1}^{\top}, \quad \mathbf{h}_{2} = \frac{\mathbf{A} \cdot \mathbf{E}^{t}}{(n_{1} - 1)(n_{2} - 1)} \cdot \theta_{2}\\ \mathbf{h}_{3} &= \frac{\mathbf{A} \cdot \mathbf{F}\cdot \theta_{3}^{\top}}{(n_{1} - 1)(n_{2} - 1)} , \quad \mathbf{h}_{4} = \frac{\sum\text{ReLU}(\mathbf{W} \cdot \theta_{5})}{(n_{1} - 1)(n_{2} - 1)} \cdot \theta_{4} \\ \end{split} \label{eq:embedding} \end{equation} where $\mathbf{E}^{t} \in \mathbb{R}^{n_{1}n_{2} \times d}$ denotes the embedding in the $t$-th iteration, with $d$ as the hidden size.
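One iteration of this update, whose four hidden parts are detailed next, can be sketched in numpy. Two shapes are not fully pinned down by the formula, so we make explicit assumptions: the vertex weights enter as a vector, and $\mathbf{W} \cdot \theta_{5}$ broadcasts each scalar edge weight against $\theta_{5}$ in struct2vec style; the toy adjacency and random parameters are ours:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def embed_step(E, xvec, A, fvec, W, th1, th2, th3, th4, th5, n1, n2):
    """One iteration of the embedding update. Assumptions: the vertex
    weights enter as the vector fvec, and W * th5 broadcasts each scalar
    edge weight against th5 (struct2vec style)."""
    deg = (n1 - 1) * (n2 - 1)
    h1 = xvec[:, None] * th1[None, :]              # current partial solution
    h2 = (A @ E) / deg @ th2                       # averaged neighbor embeddings
    h3 = (A @ fvec)[:, None] / deg * th3[None, :]  # averaged neighbor weights
    h4 = relu(W[:, :, None] * th5[None, None, :]).sum(axis=1) / deg @ th4
    return relu(h1 + h2 + h3 + h4)

n1 = n2 = 3; N = n1 * n2; d = 8
rng = np.random.default_rng(0)
E0 = np.zeros((N, d))                 # E^0 = 0
xvec = np.zeros(N); xvec[0] = 1.0     # vertex "1a" already selected
A = 1.0 - np.eye(N)                   # toy dense adjacency (no self-loops)
fvec, W = rng.random(N), rng.random((N, N))
th1, th3, th5 = rng.random(d), rng.random(d), rng.random(d)
th2, th4 = rng.random((d, d)), rng.random((d, d))
E1 = embed_step(E0, xvec, A, fvec, W, th1, th2, th3, th4, th5, n1, n2)
print(E1.shape)   # (9, 8)
```

Each hidden part comes out with shape $(n_1 n_2) \times d$, so the sum feeds directly into the next iteration.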
At every iteration, the embedding is calculated from four hidden parts $\mathbf{h}_{1}, \mathbf{h}_{2}, \mathbf{h}_{3}, \mathbf{h}_{4} \in \mathbb{R}^{n_{1}n_{2} \times d}$. $\theta_{1} \in \mathbb{R}^{d}$, $\theta_{2} \in \mathbb{R}^{d \times d}$, $\theta_{3} \in \mathbb{R}^{d}$, $\theta_{4}\in \mathbb{R}^{d \times d}$ and $\theta_{5} \in \mathbb{R}^{d}$ are the learnable weights of the neural networks. $t$ is the iteration index, and the total number of iterations is $T$. We set the initial embedding $\mathbf{E}^{0} = \mathbf{0}$ and use ReLU as the activation function. Each hidden part captures one kind of feature: $\mathbf{h}_{1}$ captures the impact of the current permutation matrix $\mathbf{X'}$, which is converted from the current partial solution $\mathbb{U'}$. $\mathbf{h}_{2}$ takes the neighbors' embeddings into consideration, where $\mathbf{A}$ is the adjacency matrix of the association graph; dividing by $(n_1 - 1)(n_2 - 1)$ averages over the $(n_1 - 1)(n_2 - 1)$ neighbors of every vertex. $\mathbf{h}_{3}$ calculates the average of the neighbors' vertex weights, where $\mathbf{F}$ is the vertex weight matrix. $\mathbf{h}_{4}$ is designed to extract the features of adjacent edges, where $\mathbf{W}$ is the edge weight matrix. Note that the core inputs of our GNN are the permutation matrix $\mathbf{X}$ and the affinity matrix $\mathbf{K}$ ($\mathbf{A}$, $\mathbf{F}$ and $\mathbf{W}$ are derived from the affinity matrix $\mathbf{K}$). \begin{figure*}[tb!] \centering \includegraphics[width=0.99\textwidth]{figures/rrgm_new.pdf} \caption{The different pipelines: (A), (B), and (C) show the pipeline of our proposed revocable RGM in different situations. Specifically, (A) shows a simple situation that is the same as the basic RGM in Fig.~\ref{fig:ag}; (B) shows an example of the revocable mechanism.
When the agent makes a mistake by choosing ``2c" as ``Action 2", the agent can revert it by taking ``Action 3" to choose ``2b", which causes the environment to undo ``2c" and select ``2b" instead. (C) shows another example where the agent has matched all keypoints by ``1a", ``2c", and ``3b". If it turns out this matching permutation is not good, the agent can choose the vertex ``2b" to reverse the two actions ``2c" and ``3b" together as ``Action 4".} \label{fig:rgm++} \end{figure*} \textbf{2) Q-Value Estimation Networks.} Q-learning based algorithms use $\mathbf{Q}(s, a)$ to represent the value of taking action $a$ in state $s$, i.e., the expected reward acquired after choosing this action. The agent picks the next action based on the estimate of $\mathbf{Q}(s, a)$. The Q-value estimation network $f_{\theta}$ takes the embedding of the current state as input and predicts the Q-value for each possible action. We adopt Dueling DQN~\cite{Wang2016DuelingNA} as our approximator to estimate the Q-value function. The architecture of our $f_{\theta}$ is: \begin{equation} \begin{split} \mathbf{h}_{5} &= \text{ReLU}(\mathbf{E}^{T} \cdot \theta_{6} + b_{1}) ,\quad \mathbf{h}_{v} = \frac{\sum \mathbf{h}_{5} \cdot \theta_{7}}{n_{1}n_{2}} + b_{2},\\ \mathbf{h}_{a} &= \mathbf{h}_{5} \cdot \theta_{8} + b_{3}, \quad \mathbf{Q} = \mathbf{h}_{v} + \left(\mathbf{h}_{a} - \frac{\sum \mathbf{h}_{a}}{n_{1}n_{2}}\right) \end{split} \label{eq:qvalue} \end{equation} where $\mathbf{E}^{T}$ is the final output (after $T$ iterations) of the embedding network in Eq.~\ref{eq:embedding}. $\mathbf{h}_{5} \in \mathbb{R}^{n_{1}n_{2} \times d}$ is the hidden layer for the embedding. $\mathbf{h}_{v} \in \mathbb{R}^{1}$ is the hidden layer for the state function. $\mathbf{h}_{a} \in \mathbb{R}^{n_{1}n_{2}}$ is the hidden layer for the advantage function. $\theta_{6} \in \mathbb{R}^{d \times d}, \theta_{7} \in \mathbb{R}^{d}, \theta_{8} \in \mathbb{R}^{d}$ are the weights of the neural networks.
$b_{1}$, $b_{2}$, and $b_{3}$ are the bias terms. $\mathbf{Q} \in \mathbb{R}^{n_{1}n_{2}}$ is the final output of our Q-value estimation network; it predicts the value of each action given the current state. The state function and the advantage function are designed to separate the value of the state from that of the action. Specifically, the state function predicts the value of different states, and the advantage function predicts the value of each action given a particular state. Previous work~\cite{Wang2016DuelingNA} shows that this dueling architecture can better learn the impact of different actions. Besides, we force the sum of the output vector of the advantage function to be $0$ by subtracting its mean, which makes the separation of the state value and the advantage easier. We use $\mathbf{Q}(s, a; f_{\theta})$ to denote the Q-value estimated by $f_{\theta}$ when the agent takes action $a$ in state $s$. \subsubsection{Experience Replay Memory} \label{subsec:memory} For sample efficiency, we maintain a prioritized experience replay memory $\mathcal{M}$~\cite{Schaul2016PrioritizedER} that stores the experience of the agent, defined as transitions $(s_{i}, a_{i}, r_{i}, s'_{i})$ (denoting the state, action, reward, and next state, respectively). As training progresses, we add new transitions to $\mathcal{M}$ and remove old ones. The agent samples from the experience replay memory to update its neural networks. We follow the idea of prioritized experience replay, which assigns a priority to each transition; a higher priority means a higher probability of being sampled: \begin{equation} \begin{split} P(i) = \frac{(p_{i})^{\alpha}}{\sum_{j}(p_{j})^{\alpha}} \end{split} \end{equation} where $P(i)$ is the sampling probability and $p_{i}$ is the priority of the $i$-th transition. $\alpha$ is a hyperparameter.
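Two pieces just described, the dueling head of Eq.~\ref{eq:qvalue} and the prioritized sampling rule $P(i) = p_i^{\alpha} / \sum_j p_j^{\alpha}$, can be sketched together in numpy (our own simplification; we take $b_2, b_3$ as scalars, and all parameters here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def dueling_q(E, th6, b1, th7, b2, th8, b3):
    """Dueling head: a shared hidden layer, a scalar state value, and a
    mean-centered advantage per action (one action per vertex)."""
    N = E.shape[0]
    h5 = relu(E @ th6 + b1)                 # (N, d) shared features
    h_v = (h5.sum(axis=0) @ th7) / N + b2   # scalar state value
    h_a = h5 @ th8 + b3                     # (N,) advantages
    return h_v + (h_a - h_a.mean())         # (N,) Q-values; advantages sum to 0

def sample_index(priorities, alpha=0.6):
    """Prioritized replay sampling: P(i) = p_i^alpha / sum_j p_j^alpha."""
    p = np.asarray(priorities, dtype=float) ** alpha
    return int(rng.choice(len(p), p=p / p.sum()))

d, N = 4, 6
E = rng.random((N, d))                      # final GNN embedding (toy)
Q = dueling_q(E, rng.random((d, d)), rng.random(d), rng.random(d), 0.1,
              rng.random(d), 0.2)
print(Q.shape)                              # one Q-value per vertex: (6,)
```

Mean-centering the advantage enforces the zero-sum property mentioned above, so the state value and the advantages are identifiable.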
The priority $p_{i}$ is based on how poorly the transition is currently fitted: a larger error in the agent's Q-value estimate means a relatively higher priority. \subsubsection{Model Updating} \label{subsec:update} We follow Double DQN~\cite{Hasselt2016DeepRL} to calculate the loss function and update the parameters. We pick the next action $a'$ with the current Q-value estimation network $f_{\theta}$, but use the target Q-value estimation network $f_{\theta^{-}}$ to predict its value, as Eq.~\ref{eq:ddqn} shows. The motivation for this loss function is that a Q-value overestimated in one network is mitigated to some extent by the other network. \begin{align} \label{eq:ddqn} a' &= \arg\max_{a'} \mathbf{Q}(s', a'; f_{\theta}) \\ \notag \mathcal{L}(f_{\theta}; \mathcal{M}) &= \mathbb E_{s,a,r,s'\sim \mathcal M}\Big[\Big (r + \gamma \mathbf{Q}(s', a'; f_{\theta^{-}}) - \mathbf{Q}(s, a; f_{\theta})\Big)^{2}\Big] \end{align} where $\mathcal{L}(\cdot)$ is the loss function to be optimized, $f_{\theta}$ is the current Q-value estimation network, and $f_{\theta^{-}}$ is the target Q-value estimation network. $s, a, r, s', a'$ stand for the state, action, reward, next state, and next action. The target network $f_{\theta^{-}}$ shares the same architecture as $f_{\theta}$, and the parameters of $f_{\theta^{-}}$ are periodically replaced by those of $f_{\theta}$. Such a target network keeps the target Q-value unchanged for a period of time, which reduces the correlation between the current Q-value and the target one and improves training stability. \subsection{The Revocable Action Mechanism} \label{sec:rev} So far we have presented a general deep RL framework for solving the graph matching problem. In such a vanilla form, it cannot undo the actions that have been executed.
In other words, the agent cannot ``regret": it has no chance to adjust a wrong decision, and errors may accumulate into a disastrous solution. To strike a reasonable trade-off between efficiency and efficacy, we develop a mechanism that allows the agent to re-select a vertex on the association graph in one revocable step. Specifically, recall the vanilla design of RGM, where the agent can only choose vertices on the association graph that are not in conflict with any vertex chosen before. This is fulfilled by maintaining an extra available set $\mathbb{V}$ of vertices, with the Q-value of unavailable vertices set to negative infinity, as illustrated by the blurred vertices in Fig.~\ref{fig:ag}. This available set guarantees that all the vertices selected by the RGM agent are legal, but it also prevents the agent from correcting any possible mistake. In the following, we present the revocable version of RGM as our main method in this paper\footnote{When there is no ambiguity from the context, from now on we use the term RGM interchangeably for our vanilla version and our revocable version. In the experiments, we always use the revocable version unless otherwise specified.}. \subsubsection{Design Details} To allow the agent to modify decisions made before, we design a new revocable RL framework. We remove the available set used before, and \textbf{the agent is free to choose any vertex, even if it is in conflict.} We then modify the strategy of our RL environment. If the environment receives from the agent's action a new vertex that is in conflict with the currently selected vertices, the environment removes the one or two vertices that conflict with the incoming vertex, and then adds the new vertex to the current solution. Our proposed revocable RL framework is illustrated in Fig.~\ref{fig:rgm++}.
As Fig.~\ref{fig:rgm++} shows, the pipeline of our revocable RGM is slightly different from our basic framework. The available set of the basic RGM does not exist when the revocable mode is on. Pipeline (A) shows a simple situation where the agent matches the vertices directly without any reverse operation, which is the same as in our basic RGM framework. In pipeline (B), suppose the agent chooses the vertex ``2c" as the second action, which is not a good choice. When choosing the third action, the agent realizes that it made a mistake by selecting ``2c"; therefore, the agent chooses ``2b" to fix this mistake. This action is passed to the environment, which reverts ``2c" and selects ``2b" instead. In other words, the agent can revert ``2c" to ``2b" in our revocable framework. Pipeline (C) shows another revocable situation. Suppose the agent selects ``2c" and ``3b" as its second and third actions, and acquires a complete matching solution. However, it turns out that the matching solution (``1a", ``2c", and ``3b") is not as good as expected. To roll this situation back, the agent can select ``2b" as the next action. By selecting ``2b", the environment automatically releases the vertices ``2c" and ``3b", and then selects ``2b". Finally, the agent chooses ``3c" as the last action and ends the episode with the matching solution (``1a", ``2b", and ``3c"). In summary, our revocable reinforcement learning framework allows the agent to make better decisions by giving it the opportunity to backtrack. From now on, the default setting of RGM contains the revocable action framework, and we update the definition of action and reward: \begin{align} \begin{split} &\mathbb{U}_{old} \xrightarrow{v \in V^{a}} \mathbb{U}_{new} \\ r &= J(\mathbb{U}_{new}) - J(\mathbb{U}_{old}) \end{split} \label{eq:revocable} \end{align} Note that our revocable action mechanism requires substantial changes to our environment settings.
Therefore, we design a new RL environment for the revocable action mechanism. The main differences from the original environment are: the available set is removed; the agent can choose any vertex in the vertex set $V^{a}$ of the association graph $G^{a}$; and when the environment receives a vertex that conflicts with the current partial solution, it releases the conflicting vertices and adds the incoming vertex to the partial solution. The agent design is independent of whether the basic or the revocable environment is used, and the training process of the revocable framework remains almost the same, as Algorithm~\ref{alg:train} shows. The revocable flexibility also brings some side effects; e.g., we cannot adopt GNN acceleration tricks such as dynamic embedding~\cite{wang2021combinatorial}, which could otherwise speed up inference. \subsubsection{Difference to Other Revocable Action Designs} To the best of our knowledge, there are in general two existing techniques allowing revocable actions, at least for RL-based combinatorial optimization: Local Rewrite~\cite{Chen2019LearningTP} and ECO-DQN~\cite{barrett2020exploratory}. Here we discuss our differences from these methods. The local rewrite framework keeps improving the input solution by exchanging parts of it. However, its performance highly relies on the input solution, and in our empirical trials the efficiency and effectiveness of local rewrite are unsatisfactory, as will be shown in our experiments. The ECO-DQN framework does perform promisingly on the Maximum Cut problem, but it is inherently designed for that specific problem, which has few or no constraints, and is hardly adaptable to the graph matching problem. Therefore, in this paper, we devise a new revocable framework, RGM, tailored to the characteristics of the graph matching problem, which is more friendly to graphs with relatively more constraints.
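To make the revocable transition rule concrete, here is a minimal sketch of the environment step described above (our own simplification, not the authors' exact implementation; indexing follows the convention $p = i \cdot n_2 + a$):

```python
def conflicts(p, q, n2):
    """Two association-graph vertices conflict iff they share a node of
    either input graph (index convention p = i * n2 + a)."""
    return p != q and (p // n2 == q // n2 or p % n2 == q % n2)

def revocable_step(partial, v, n2):
    """Revocable environment transition: release any chosen vertices that
    conflict with the incoming vertex v, then add v."""
    kept = {u for u in partial if not conflicts(u, v, n2)}
    kept.add(v)
    return kept

# Pipeline (C) of Fig. rgm++ with n1 = n2 = 3: the complete solution
# {"1a", "2c", "3b"} maps to vertices {0, 5, 7}; selecting "2b" (vertex 4)
# releases "2c" (5, same row) and "3b" (7, same column), then adds "2b".
print(sorted(revocable_step({0, 5, 7}, 4, n2=3)))   # [0, 4]
```

A conflict-free choice degenerates to the basic framework's behavior: the incoming vertex is simply appended to the partial solution.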
To verify the effectiveness of our proposed revocable framework, we compare it with the local rewrite framework in our experiments in Sec.~\ref{sec:lr}. In fact, we believe our revocable action scheme suits settings where the graph size is moderate enough to afford such a costly scheme, while the constraints are heavy and complex enough to make revocable actions necessary; graph matching is exactly such a problem setting. \begin{figure*}[tb!] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.31\textwidth]{figures/regular_vs_original_1.pdf} & \includegraphics[width=0.31\textwidth]{figures/regular_vs_original_2.pdf} & \includegraphics[width=0.31\textwidth]{figures/regular_vs_original_3.pdf} \\ (a) $f_{1}(n) = \frac{3\cdot \max(n_1, n_2) - n}{3\cdot \max(n_1, n_2)}$ & (b) $f_{2}(n) = \frac{1 + n}{1 + 3\cdot n}$ & (c) $f_{3}(n) = \frac{1}{n^{2}}$ \end{tabular} \end{center} \caption{The empirical relation between the F1 score and the affinity matrices, under different regularization function forms, on the Willow Object dataset. We construct a set of permutation solutions, and calculate the F1 score and the objective score for two kinds of affinity matrices (the original affinity and the regularized affinity). Each pair of points represents one case. The score of the original affinity matrix is calculated as $\frac{\textrm{vec}(\mathbf{X}^{pred})^\top \mathbf{K} \textrm{vec} (\mathbf{X}^{pred})}{\textrm{vec}(\mathbf{X}^{gt})^\top \mathbf{K} \textrm{vec} (\mathbf{X}^{gt})}$, and the score of the regularized one as $\frac{\textrm{vec}(\mathbf{X}^{pred})^\top \hat{\mathbf{K}} \textrm{vec} (\mathbf{X}^{pred})}{\textrm{vec}(\mathbf{X}^{gt})^\top \hat{\mathbf{K}} \textrm{vec} (\mathbf{X}^{gt})}$. The ground truth is constantly 1 as shown in the plot.
We show three different regularization functions $f(n)$, given in the sub-captions; the trend is similar for all of them.} \label{fig:norm_vs_affinity} \end{figure*} \subsection{Outlier-Robust Graph Matching} \label{sec:regualr} For practical GM, outliers are common in both input graphs. In general, the solution is supposed to contain only the inlier correspondences and none of the outliers'. However, most existing GM methods are designed to match all keypoints, whether they are inliers or outliers. This design stems from pursuing the highest objective score, since matching outliers can also increase the objective score. We believe that outliers should not be matched in any sense. Therefore, we propose two strategies to guide the agent to match only the inliers and ignore the outliers. \subsubsection{Inlier Count Information Exploration} \label{sec:ici} Our first strategy is to inform the agent how many common inliers the graphs share. A similar input setting can also be found in learning-free outlier-robust GM methods~\cite{YangXuTNNLS17,wang2020zero}. Given the exact number of common inliers $n_i$, the sequential matching can readily stop when the size of the solution set reaches $|\mathbb{U}| = n_i$; equivalently, in matrix form: \begin{equation} \begin{split} &J(\mathbf{X}) = \text{vec}(\mathbf{X})^\top\ \mathbf{K}\ \text{vec}(\mathbf{X}), \quad \mathbf{X} \in \left\{ 0, 1 \right\}^{n_{1} \times n_{2}},\\ & \mathbf{X}\mathbf{1}_{n_{2}} \le \mathbf{1}_{n_{1}}, \mathbf{X}^\top\mathbf{1}_{n_{1}} \le \mathbf{1}_{n_{2}}, \mathbf{1}_{n_{2}}^\top\mathbf{X}^\top\mathbf{1}_{n_{1}} = n_i \\ \end{split} \label{eq:ic} \end{equation} Note that it is nontrivial and still an open question for existing deep GM methods to explicitly utilize the additional input $n_i$. As will be shown in our ablation study in Table~\ref{tab:willow}, even an inexact $n_i$ can effectively boost the performance under our RL framework.
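As a hedged sketch of Eq.~\ref{eq:ic} (toy sizes and a row-major vectorization of our own choosing, not the paper's data), the objective and the inlier-count stopping rule can be illustrated as:

```python
import numpy as np

# Toy check of the objective and inlier-count constraint (illustrative
# sizes, not the paper's data): X is a {0,1} assignment with at most one
# match per row/column, and sequential matching stops once |U| = sum(X)
# reaches the given number of common inliers n_i.

n1, n2, n_i = 3, 3, 2
rng = np.random.default_rng(0)
K = np.abs(rng.normal(size=(n1 * n2, n1 * n2)))
K = (K + K.T) / 2                      # symmetric, non-negative affinity

X = np.zeros((n1, n2))
X[0, 1] = X[2, 0] = 1                  # two matches -> |U| = 2 = n_i

x = X.reshape(-1)                      # vec(X); row-major for illustration
score = x @ K @ x                      # J(X) = vec(X)^T K vec(X)

# One-to-(at most)-one constraints from Eq. (eq:ic).
assert X.sum(axis=1).max() <= 1 and X.sum(axis=0).max() <= 1
stop = int(X.sum()) == n_i             # stopping criterion from n_i
```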
\subsubsection{Affinity Regularization via Quadratic Approximation} \label{sec:ar} Our second strategy is to regularize the affinity. As mentioned before, existing GM methods tend to match all keypoints, including both inliers and outliers, to make the objective as high as possible: since the objective is to maximize the overall affinity score given the non-negative affinity matrix $\mathbf{K}$, the more keypoints are matched, the more entries of $\mathbf{K}$ are added to the objective, which, however, hurts the overall matching performance. Therefore, a straightforward idea is affinity regularization, which penalizes over-matching terms to balance their effect on the optimization objective. We introduce a modified form of affinity, namely \textit{regularized affinity}, to deal with the outliers. This new affinity treats the number of matched keypoints as a regularization term: \begin{equation} \label{eq:normalized_affinity} \hat{J}(\mathbf{X}) = \textrm{vec}(\mathbf{X})^\top \mathbf{K} \textrm{vec}(\mathbf{X}) \cdot f(\|\textrm{vec}(\mathbf{X})\|_1) \end{equation} where $f(\|\textrm{vec}(\mathbf{X})\|_1)$ is the regularization term: it decreases as more keypoints are matched, and thus acts as a penalty. In principle, $f(n)$ can be any function that is negatively correlated with $n$. In our experiments, we choose the following three functions, which all work as expected: $\frac{3\max(n_1, n_2) - n}{3\max(n_1, n_2)}$, $\frac{1 + n}{1 + 3n}$, $\frac{1}{n^{2}}$. Fig.~\ref{fig:norm_vs_affinity} illustrates the effectiveness of the \textit{regularized affinity} with three different regularization functions $\{f_i(n)\}_{i=1,2,3}$. We construct a set of permutation solutions, and calculate the F1 score and the objective score under both the original and the regularized affinity. As one can see, the score of the original affinity matrix fluctuates considerably as the F1 score increases.
Besides, there are some cases where the original affinity score is even larger than that of the ground truth (GT) matching. In contrast, the objective score under the \textit{regularized affinity} is more stable and overall nearly positively correlated with the F1 score. This shows that the \textit{regularized affinity} is a better optimization objective, consistent with the matching accuracy. Moreover, all three regularization functions $f(n)$ work well, suggesting that our method is, to some extent, not sensitive to the choice of $f(n)$. With the \textit{regularized affinity}, the pairwise graph matching problem can be re-formulated as: \begin{align} \label{equ:target} \argmax_{\mathbf{X}}\ & \textrm{vec}(\mathbf{X})^\top \mathbf{K} \textrm{vec}(\mathbf{X}) \cdot f(\|\textrm{vec}(\mathbf{X})\|_1) \end{align} With the help of Eq.~\ref{equ:target}, we can update the reward calculation in RGM. However, in the sequential decision making process of RGM, we cannot predict whether the next action will increase or decrease the number of matched keypoints. This means $\|\textrm{vec}(\mathbf{X})\|_1$ is not fixed when choosing the next action, so the regularized affinity cannot be evaluated directly at that point; the agent can only access the original affinity information instead of the regularized one. In particular, the input to the agent's GNN is still the original affinity matrix $\mathbf{K}$, which gives the agent no guidance about the regularization. We hope the affinity matrix fed to the agent's GNN can reflect the impact of affinity regularization, by changing the original $\mathbf{K}$ to a new regularized $\hat\mathbf{K}$. Therefore, we use a technique called \textit{quadratic approximation} to turn Eq.~\ref{equ:target} into a QAP formulation.
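As a small illustration of the reformulated objective in Eq.~\ref{equ:target} (the toy affinity matrix and solutions are our own construction, not a learned $\mathbf{K}$), the following sketch shows how a decreasing penalty $f(n)$ can make a solution with fewer matches preferable, discouraging the agent from greedily matching outliers:

```python
import numpy as np

# Regularized objective J_hat(X) = vec(X)^T K vec(X) * f(||vec(X)||_1),
# with the decreasing penalty f(n) = 1/n^2 from the paper. K is a toy
# 3x3-to-3x3 affinity where the match (2,2) plays the role of an outlier.

def f(n):
    return 1.0 / n ** 2

def j_hat(X, K):
    x = X.reshape(-1)
    return (x @ K @ x) * f(x.sum())

K = np.ones((9, 9))                 # uniform toy affinity, n1 = n2 = 3
K[8, :] = K[:, 8] = 0.1             # weak affinity for the "outlier" match (2,2)

X2 = np.eye(3); X2[2, 2] = 0        # match 2 keypoints (inliers only)
X3 = np.eye(3)                      # match all 3, including the outlier

x2, x3 = X2.reshape(-1), X3.reshape(-1)
# The unregularized score always grows with more matches...
assert x3 @ K @ x3 > x2 @ K @ x2
# ...but the regularized score prefers skipping the weak (outlier) match.
assert j_hat(X2, K) > j_hat(X3, K)
```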
First of all, we define $\tau_{x}$ as the pairwise optimization objective between the graphs $G_1$ and $G_2$: \begin{equation} \label{equ:phi} \tau_{x} = \textrm{vec}(\mathbf{X})^\top \mathbf{K} \textrm{vec}(\mathbf{X}) \cdot f(\|\textrm{vec}(\mathbf{X})\|_1)\\ \end{equation} Then, we denote by ${C}_{x} = \textrm{vec}(\mathbf{X})^\top \mathbf{K} \textrm{vec}(\mathbf{X})$ the original score, and by $n_{x} = \|\textrm{vec}(\mathbf{X})\|_1$ the number of matched points in $\mathbf{X}$. Eq.~\ref{equ:phi} can be written as: \begin{equation} \tau_{x} = \textrm{vec}(\mathbf{X})^\top \mathbf{K} \textrm{vec}(\mathbf{X}) - {C}_{x} \cdot(1 - f(n_{x})) \label{equ:phi_2} \end{equation} Without loss of generality, we choose a quadratic function $g(n)=an^2+bn+c$ to approximate a series of 2D data points $\{(n, 1 - f(n))\}$ by least-squares fitting, where the data points are taken from a range $S$ chosen close to the current number of matched vertices $\|\textrm{vec}(\mathbf{X})\|_1$ for a better fit. Specifically, we have: \begin{align} \begin{split} &{C}_{x} \cdot \left(1 - f(n_{x})\right) \approx {C}_{x} \cdot g(n_{x}) \\ =& {C}_{x} \cdot \left(a\sum_{ijkl} \mathbf{X}(i,j) \cdot \mathbf{X}(k,l) + b \sum_{ij} \mathbf{X}(i,j) \cdot \mathbf{X}(i,j) + c\right) \\ =& {C}_{x} \cdot \left(a\cdot \textrm{vec}(\mathbf{X})^\top \mathbf{K}_{A} \textrm{vec}(\mathbf{X}) + b \cdot \textrm{vec}(\mathbf{X})^\top \mathbf{K}_{B} \textrm{vec}(\mathbf{X}) + c\right) \\ =& {C}_{x} \cdot \left(\textrm{vec}(\mathbf{X})^\top ( a \mathbf{K}_{A} + b \mathbf{K}_{B} ) \textrm{vec}(\mathbf{X}) + c\right), \end{split} \end{align} where $\mathbf{K}_A = \mathbf{1}$ (the all-one matrix) and $\mathbf{K}_B = \mathbf{I}$. During the optimization iterations, the original affinity score does not vary much in practice. Thereby we can treat ${C}_{x}$ as a constant that does not change during the optimization, and the term $c \cdot {C}_{x}$ can also be ignored.
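The least-squares step above can be sketched as follows (a hedged illustration: the window $S$, the choice $f(n)=1/n^2$, and all sizes are example assumptions, and `np.polyfit` stands in for whatever fitting routine is actually used):

```python
import numpy as np

# Fit g(n) = a n^2 + b n + c to the points (n, 1 - f(n)) over a window S
# around the current number of matches n_x, then build the regularized
# affinity K_hat = K - a*C_x*K_A - b*C_x*K_B, with K_A the all-one matrix
# and K_B the identity (the constant term c*C_x is dropped).

def f(n):
    return 1.0 / n ** 2

n_x = 8
ns = np.arange(n_x - 2, n_x + 3, dtype=float)   # window S = [n_x-2, n_x+2]
a, b, c = np.polyfit(ns, 1.0 - f(ns), deg=2)    # least-squares quadratic fit

m = 4                                           # toy dimension of vec(X)
K = np.ones((m, m))                             # toy affinity matrix
C_x = 1.0                                       # current score, held constant
K_hat = K - a * C_x * np.ones((m, m)) - b * C_x * np.eye(m)
```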
With the techniques above, $\tau_{x}$ can be approximated by a QAP formulation: \begin{equation} \tau_{x} \approx \textrm{vec}(\mathbf{X})^\top (\mathbf{K} - a {C}_{x}\cdot \mathbf{K}_{A} - b {C}_{x} \cdot\mathbf{K}_{B} ) \textrm{vec}(\mathbf{X}) \label{equ:tau} \end{equation} Let $\hat{\mathbf{K}} = \mathbf{K} - a {C}_{x} \cdot \mathbf{K}_A - b {C}_{x} \cdot \mathbf{K}_B$; then Eq.~\ref{equ:target} becomes: \begin{align} \begin{split} \mathbf{X} &= \argmax_{\mathbf{X}}\ \textrm{vec}(\mathbf{X})^\top \mathbf{K} \textrm{vec}(\mathbf{X}) \cdot f(\|\textrm{vec}(\mathbf{X})\|_1) \\ &\approx \argmax_{\mathbf{X}} \textrm{vec}(\mathbf{X})^\top \hat\mathbf{K} \textrm{vec}(\mathbf{X})\\ \end{split} \label{equ:final_target} \end{align} \begin{figure}[tb!] \centering \begin{tabular}{cc} \includegraphics[width=0.22\textwidth]{figures/am.pdf} & \includegraphics[width=0.22\textwidth]{figures/am_n_2.pdf} \\ \footnotesize{(a) $\mathbf{K}$} & \footnotesize{(b) $\hat\mathbf{K}_{1}$ } \\ \includegraphics[width=0.22\textwidth]{figures/am_n.pdf} & \includegraphics[width=0.22\textwidth]{figures/am_n_3.pdf} \\ \footnotesize{(c) $\hat\mathbf{K}_{2}$} & \footnotesize{(d) $\hat\mathbf{K}_{3}$ } \end{tabular} \caption{Examples of the regularized affinity matrix under different forms of regularization functions: (a) original affinity matrix $\mathbf{K}$; (b)(c)(d) regularized affinity matrices $\hat\mathbf{K}$ by quadratic approximation, with the three regularization functions corresponding to Fig.~\ref{fig:norm_vs_affinity}: $f_{1}(n) = \frac{3\max(n_1, n_2) - n}{3\max(n_1, n_2)}$, $f_{2}(n) = \frac{1 + n}{1 + 3n}$, $f_{3}(n) = \frac{1}{n^{2}}$. Note that some values become negative in the regularized affinity matrix $\hat\mathbf{K}$.} \label{fig:am} \end{figure} Fig.~\ref{fig:am} shows examples of the approximate regularized affinity. Notably, some values in the approximate regularized affinity $\hat\mathbf{K}$ are negative.
Intuitively, the negative elements in the affinity matrix mean that the affinity-score-maximization reward no longer encourages matching as many keypoint pairs as possible, which prevents the agent from picking up outliers to some extent. Besides, most traditional graph matching solvers~\cite{Cho2010ReweightedRW, Egozi2013APA} assume the values in the affinity matrix are non-negative, while RGM has no such restriction. This also shows the effectiveness of combining RL and affinity regularization. This approximate optimization process can be merged into our revocable reinforcement learning framework RGM as Algorithm~\ref{alg:train} shows. In the sequential decision making process of RGM, at every step where the agent needs to select the next vertex, we perform an approximate optimization based on the current solution and obtain the approximate regularized affinity. Then, we feed the regularized affinity matrix $\hat\mathbf{K}$ instead of the original affinity matrix $\mathbf{K}$ to the GNN of our RGM. At last, we update the reward function of the agent to Eq.~\ref{eq:normalized_affinity} with the regularized affinity, which can also be written in the vertex-set formulation: \begin{equation} r = J(\mathbb{U}_{new}) \cdot f(|\mathbb{U}_{new}|) - J(\mathbb{U}_{old}) \cdot f(|\mathbb{U}_{old}|) \label{eq:r_reward} \end{equation} The regularization technique introduced above can be easily adopted in our sequential node matching scheme, while it may be nontrivial to integrate it into existing GM methods, which mainly perform the whole matching in one shot. We leave the adaptation to these methods for future work. \section{Experiments}\label{sec:exp} We test the compared methods on various benchmarks including image data as well as pure combinatorial optimization problem instances, as the latter is especially suited to our method's back-end solver nature.
In particular, we evaluate the robustness against outliers for graph matching, where the (ground-truth or estimated) number of inliers is given as a hyperparameter. Whether or not this information is known, one can always apply our proposed regularization technique to improve robustness against outliers. The experiments are conducted on a Linux workstation with an Nvidia 2080Ti GPU and an AMD Ryzen Threadripper 3970X 32-Core CPU with 128GB RAM. \subsection{Protocols} \subsubsection{Hyperparameter Settings} For the hyperparameters in our reinforcement learning module, we set $\gamma$ in Eq.~\ref{eq:ddqn} to 0.9, the target network update frequency to 40, and the replay size to 100,000. For the greedy part, the greedy rate $\epsilon$ decays from 1.0 to 0.02 over 20,000 episodes. For the learning module, we set the learning rate to 1e-5 and the batch size to 64. The hidden size of our GNN is 128 and the number of layers $T$ is 3. The hidden size of the Q value network is 64. For the affinity regularization module, the range of the data points used for approximation is $S=[n_{x} - 2, n_{x} + 2]$. \subsubsection{Evaluation Metrics} For testing, given the affinity matrix $\mathbf{K}$, RGM predicts a permutation matrix $\mathbf{X}^{pred}\in\mathbb{R}^{n_1\times n_2}$ transformed from its solution set $\mathbb{U}$. Based on $\mathbf{X}^{pred}$ and the ground truth $\mathbf{X}^{gt}\in\mathbb{R}^{n_1\times n_2}$ (note that $\sum \mathbf{X}^{gt}$ equals the number of inliers, since the rows and columns of outliers are always zero), two evaluation metrics are used: the objective score (i.e.
affinity score), and F1 score for the experiments on the real image datasets: \begin{equation} \begin{split} &\textbf{Objective\ score} = \frac{\text{vec}(\mathbf{X}^{pred})^\top\ \mathbf{K}\ \text{vec}(\mathbf{X}^{pred})}{\text{vec}(\mathbf{X}^{gt})^\top\ \mathbf{K}\ \text{vec}(\mathbf{X}^{gt})} \\ &\text{Recall} = \frac{\sum \left(\mathbf{X}^{pred} * \mathbf{X}^{gt}\right)}{\sum \mathbf{X}^{gt}} \\ &\text{Precision} = \frac{\sum \left(\mathbf{X}^{pred} * \mathbf{X}^{gt}\right)}{\sum \mathbf{X}^{pred}} \\ &\textbf{F1\ score} = \frac{2 \cdot \text{Recall} \cdot \text{Precision}}{\text{Recall} + \text{Precision}} \end{split} \label{eq:metric_1} \end{equation} where $*$ denotes element-wise matrix multiplication. Note that the objective score defined here is agnostic to the presence of outliers, which is a common protocol in existing works. Specifically, for a traditional affinity matrix as used in previous works, the elements are non-negative, and thus the solver generally aims to match as many node correspondences as possible. To test the performance of RGM in solving the QAP problem, we also conduct experiments on the well-known QAPLIB dataset. For the problem instances in QAPLIB, the goal is to minimize the objective score. Besides, the ground-truth solution is assumed unknown due to the problem's NP-hard nature. Therefore, as the metric we use the gap between the score of the predicted solution and the optimal score provided by the benchmark, which is continuously updated with newly uploaded best solutions: \begin{equation} \begin{split} &\textbf{Optimal\ gap} = \frac{\text{vec}(\mathbf{X}^{pred})^\top\ \mathbf{K}\ \text{vec}(\mathbf{X}^{pred}) - \text{optimal}}{\text{optimal}} \end{split} \label{eq:metric_2} \end{equation} Note that in the QAP test there is no outlier issue, and our solver purely optimizes the objective score by matching all the nodes. \begin{table*}[tb!]
\centering \caption{Performance comparison w.r.t F1 score and objective score (the higher the better) in the Willow Object dataset, where ``F1" and ``Obj" are short for F1 score and objective score. All images contain 10 inliers and \textbf{3} randomly generated outliers in both graphs. For our RGM, ``RGM + AR" means RGM with affinity regularization, ``RGM + IC" means RGM with inlier count information, and ``RGM + AR + IC" means RGM with both.} \resizebox{0.98\textwidth}{!} { \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|l|cc|cc|cc|cc|cc|cc} \hline & \multirow{2}[0]{*}{\diagbox{Method}{Class}} & \multicolumn{2}{c|}{Car} & \multicolumn{2}{c|}{Duck} & \multicolumn{2}{c|}{Face} & \multicolumn{2}{c|}{Motorbike} & \multicolumn{2}{c|}{Winebottle} & \multicolumn{2}{c}{\textbf{Average}} \\ \cline{3-14} & & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj \\ \hline \multirow{7}[0]{*}{\rotatebox{90}{Learning-free}} & RRWM~\cite{Cho2010ReweightedRW} & 65.48\% & 1.0101 & 60.25\% & 1.0726 & 84.21\% & 0.9765 & 54.04\% & 1.0457 & 66.19\% & 1.0141 & 66.03\% & 1.0238 \\ & GAGM~\cite{Gold1996AGA} & 57.50\% & 0.9837 & 53.59\% & 1.0533 & 82.87\% & 0.9747 & 47.38\% & 1.0084 & 60.96\% & 0.9917 & 60.46\% & 1.0024 \\ & IPFP~\cite{Leordeanu2009AnIP} & 71.16\% & 1.0297 & 62.29\% & 1.0824 & 84.65\% & 0.9785 & 60.07\% & 1.0653 & 68.86\% & 1.0273 & 69.41\% & 1.0366 \\ & PSM~\cite{Egozi2013APA} & 66.64\% & 1.0067 & 61.22\% & 1.0674 & 82.79\% & 0.9721 & 55.63\% & 1.0383 & 66.02\% & 1.0057 & 66.46\% & 1.0180 \\ & GNCCP~\cite{Liu2012AnEP} & 73.65\% & 1.0345 & 63.71\% & 1.0870 & 84.38\% & 0.9788 & 62.20\% & 1.0709 & 71.43\% & 1.0329 & 71.07\% & 1.0408 \\ & BPF~\cite{Wang2018GraphMW} & 74.00\% & 1.0349 & 62.91\% & 1.0875 & 84.38\% & 0.9788 & 63.09\% & 1.0712 & 71.90\% & 1.0334 & 71.26\% & 1.0412 \\ & ZACR~\cite{wang2020zero} & 66.27\% & 1.0257 & 61.67\% & 1.0787 & 84.59\% & 0.9798 & 59.27\% & 1.0665 & 68.10\% & 1.0295 & 67.98\% & 1.0360 \\ \hline 
\multirow{6}[0]{*}{\rotatebox{90}{Supervised}} & GMN~\cite{ZanfirCVPR18} & 57.96\% & - & 57.87\% & - & 86.66\% & - & 58.18\% & - & 67.52\% & - & 65.64\% & - \\ & PCA~\cite{WangICCV19} & 71.00\% & - & 57.80\% & - & 86.12\% & - & 57.89\% & - & 65.74\% & - & 67.71\% & - \\ & LCS~\cite{wang2020learning} & 72.23\% & - & 61.90\% & - & 86.84\% & - & 62.35\% & - & 69.15\% & - & 70.49\% & - \\ & NGM~\cite{Wang2019NeuralGM} & 68.77\% & - & 61.09\% & - & 86.58\% & - & 55.65\% & - & 69.48\% & - & 68.31\% & - \\ & BBGM~\cite{rolinek2020deep} & 76.10\% & - & 63.62\% & - & 96.71\% & - & 60.93\% & - & 69.70\% & - & 73.41\% & - \\ & NGM-v2~\cite{Wang2019NeuralGM} & 78.76\% & - & 65.41\% & - & 86.84\% & - & 63.94\% & - & 71.19\% & - & 73.23\% & - \\ \hline \multirow{3}[0]{*}{\rotatebox{90}{RL}} & RGM + AR& 78.14\% & \textbf{1.0628 } & 63.46\% & \textbf{1.1574 } & 97.06\% & \textbf{1.0076 } & 65.88\% & \textbf{1.1033 } & 73.73\% & 1.0560 & 75.65\% & \textbf{1.0774 } \\ & RGM + IC & 80.60\% & 1.0606 & 63.80\% & 1.1227 & 97.30\% & 1.0070 & \textbf{68.60\%} & 1.0961 & 73.30\% & 1.0555 & 76.72\% & 1.0684 \\ & RGM + AR + IC & \textbf{81.65\%} & 1.0608 & \textbf{66.57\%} & 1.1275 & \textbf{97.80\%} & 1.0064 & 67.30\% & 1.1030 & \textbf{74.53\%} & \textbf{1.0572 } & \textbf{77.57\%} & 1.0712 \\ \bottomrule \end{tabular}% } \label{tab:willow} \end{table*}% \subsubsection{Compared Methods} As mentioned before, RGM falls in line with learning-free graph matching back-end solvers that use the affinity matrix $\mathbf{K}$ as input, regardless of whether $\mathbf{K}$ is obtained by learning-based methods or not. Both traditional and learning-based methods are compared: \textbf{GAGM}~\cite{Gold1996AGA} utilizes the graduated assignment technique with an annealing scheme, which iteratively approximates the cost function by Taylor expansion. \textbf{RRWM}~\cite{Cho2010ReweightedRW} proposes a random-walk view of graph matching with a re-weighted jumping scheme.
\textbf{IPFP}~\cite{Leordeanu2009AnIP} iteratively improves the solution via integer projection, given a continuous or discrete solution. \textbf{PSM}~\cite{Egozi2013APA} improves the spectral algorithm through a probabilistic interpretation of the spectral relaxation scheme. \textbf{GNCCP}~\cite{Liu2012AnEP} follows the convex-concave path-following algorithm, which provides a much simpler form of the partial permutation matrix. \textbf{BPF}~\cite{Wang2018GraphMW} designs a branch-switching technique to seek better paths at singular points, to deal with the singular point issue in previous path-following strategies. \textbf{ZACR}~\cite{wang2020zero} is designed to suppress the matching of outliers by assigning zero-valued vectors to the potential outliers, and is the latest graph matching solver dedicated to handling outliers. In addition, we further compare RGM with currently popular deep graph matching methods: \textbf{GMN}~\cite{ZanfirCVPR18}, \textbf{PCA}~\cite{WangICCV19}, \textbf{NGM}~\cite{Wang2019NeuralGM}, \textbf{LCS}~\cite{wang2020learning}, \textbf{BBGM}~\cite{rolinek2020deep}, which are state-of-the-art deep graph matching methods; importantly, most of them are open-sourced, which makes a fair comparison more convenient. \subsubsection{Datasets for Evaluation} We briefly describe the used datasets, in line with the recent comprehensive evaluation for deep GM~\cite{Wang2019NeuralGM}. \textbf{Synthetic Dataset} is created with random 2D coordinates as nodes and their distances as edge features for graph matching. \textbf{Willow Object} is collected from real images by~\cite{Cho2013LearningGT}. It contains 256 images from 5 categories, and each category is represented with at least 40 images. All instances in the same class share 10 distinctive image keypoints. To test the performance of handling outliers, we add several randomly sampled outliers to the keypoints of each image.
\textbf{Pascal VOC}~\cite{bourdev2009poselets} consists of 20 classes with keypoint labels on natural images. The instances vary in scale, pose and illumination. The number of keypoints in each image ranges from 6 to 23. \textbf{QAPLIB}~\cite{Burkard1997QAPLIBA} contains 134 real-world QAP instances from 15 categories, e.g. planning a hospital facility layout or testing self-testable sequential circuits. The problem size is defined as $n_1 = n_2$ by Lawler's QAP. We use 14 of the 15 categories in our experiments; the only one left out is ``els'', since it contains only one instance. \subsubsection{Training and Testing Protocols} \label{sec:tt_protocol} Due to the complexity of evaluating learning-based graph matching methods and QAP solvers, here we elaborate the training and testing protocols in detail. We use the open-source versions of the compared methods, and we tune or follow the hyperparameters set by the authors to achieve sound performance. For the synthetic dataset, we use geometric features to construct the affinity matrix, with a train-test split ratio of (2 : 1) following~\cite{JiangPAMI20}. For the natural image datasets Willow Object and Pascal VOC, we use features pretrained by the CNN and GNN from BBGM~\cite{rolinek2020deep} via supervised learning {on the training set}, and use the pre-split train/test sets (8 : 1) in line with BBGM as well. We input the learned affinity matrix to RGM and all learning-free methods\footnote{ZACR is an exception which will be explained in Sec.~\ref{sec:willow_1} in detail.}, to make the comparison with supervised methods as fair as possible. For QAPLIB, there is no need for front-end feature extractors as the affinity matrix is already given. The train-test split ratio for RGM is (1 : 1) for each of the selected 14 categories, as we choose the smaller-size half of the instances in each category to train RGM.
For the peer method NGM~\cite{Wang2019NeuralGM}, due to its model's nature, the train-test set is not split in its QAPLIB experiments, and testing is performed directly after training on the same set. In contrast, our RGM follows the basic protocol in RL, which splits the train-test set to make a relatively fair comparison with the baselines. \begin{figure}[tb!] \centering \includegraphics[width=0.46\textwidth]{figures/synthetic_iccv21_plus_new.pdf} \caption{Performance comparison w.r.t F1 score and objective score (the higher the better) by increasing noise level on the synthetic dataset.} \label{fig:syn} \end{figure} \begin{table*}[tb!] \centering \caption{Average performance across all the objects w.r.t F1 score and objective score (the higher the better) in the Willow Object dataset with respect to different numbers of randomly added outliers given the 10 inliers, where ``F1'' and ``Obj'' are short for F1 score and objective score, ``RGM + AR'' means RGM with affinity regularization, ``RGM + IC'' means RGM with inlier count information, and ``RGM + AR + IC'' means RGM with both.} \resizebox{0.98\textwidth}{!} { \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|l|cc|cc|cc|cc|cc|cc} \hline & \multirow{2}[0]{*}{\diagbox{Method}{Outlier \#}} & \multicolumn{2}{c|}{1} & \multicolumn{2}{c|}{2} & \multicolumn{2}{c|}{3} & \multicolumn{2}{c|}{4} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c}{6} \\ \cline{3-14} & & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj \\ \hline \multirow{7}[0]{*}{\rotatebox{90}{Learning-free}}& RRWM~\cite{Cho2010ReweightedRW} & 73.90\% & 0.7121 & 71.57\% & 0.7628 & 66.03\% & 1.0238 & 61.27\% & 1.0671 & 56.08\% & 1.0968 & 52.21\% & 1.1261 \\ & GAGM~\cite{Gold1996AGA} & 69.17\% & 0.6031 & 62.99\% & 0.6102 & 60.46\% & 1.0024 & 57.00\% & 1.0472 & 53.85\% & 1.0815 & 52.71\% & 1.1168 \\ & IPFP~\cite{Leordeanu2009AnIP} & 80.89\% & 0.9215 & 73.88\% & 0.9278 & 69.41\% & 1.0366 & 64.07\% & 1.0841 & 58.33\% & 1.1143 & 55.36\% & 1.1599 \\ &
PSM~\cite{Egozi2013APA} & 84.33\% & 0.9142 & 71.35\% & 0.7711 & 66.46\% & 1.0180 & 60.17\% & 1.0565 & 54.56\% & 1.0824 & 51.06\% & 1.1042 \\ & GNCCP~\cite{Liu2012AnEP} & 85.81\% & 0.9827 & 77.82\% & 0.9690 & 71.07\% & 1.0408 & 65.47\% & 1.0899 & 59.30\% & 1.1211 & 56.17\% & 1.1718 \\ & BPF~\cite{Wang2018GraphMW} & 85.89\% & 0.9830 & 78.18\% & 0.9701 & 71.26\% & 1.0412 & 65.74\% & 1.0904 & 59.47\% & 1.1217 & 56.31\% & 1.1729 \\ & ZACR~\cite{wang2020zero} & 81.71\% & 0.9593 & 74.68\% & 0.9523 & 67.98\% & 1.0360 & 63.77\% & 1.0835 & 57.73\% & 1.1187 & 54.75\% & 1.1695 \\ \hline \multirow{6}[0]{*}{\rotatebox{90}{Supervised}}& GMN~\cite{ZanfirCVPR18} & 75.52\% & - & 72.35\% & - & 65.64\% & - & 56.64\% & - & 55.41\% & - & 51.95\% & - \\ & PCA~\cite{WangICCV19} & 79.78\% & - & 74.01\% & - & 67.71\% & - & 59.41\% & - & 57.41\% & - & 52.31\% & - \\ & LCS~\cite{wang2020learning} & 86.27\% & - & 77.68\% & - & 70.49\% & - & 66.79\% & - & 59.32\% & - & 55.61\% & - \\ & NGM~\cite{Wang2019NeuralGM} & 81.24\% & - & 74.37\% & - & 68.31\% & - & 61.93\% & - & 56.87\% & - & 53.32\% & - \\ & BBGM~\cite{rolinek2020deep} & 87.21\% & - & 79.84\% & - & 73.41\% & - & 69.09\% & - & 61.79\% & - & 58.07\% & - \\ & NGM-v2~\cite{Wang2019NeuralGM} & 86.84\% & - & 78.90\% & - & 73.23\% & - & 68.96\% & - & 60.52\% & - & 56.73\% & - \\ \hline \multirow{3}[0]{*}{\rotatebox{90}{RL}}& RGM + AR& 85.63\% & \textbf{1.0205} & 80.09\% & \textbf{1.0385} & 75.65\% & \textbf{1.0762} & 71.20\% & \textbf{1.1300} & 64.11\% & \textbf{1.1786} & 62.10\% & \textbf{1.2157} \\ & RGM + IC & 86.18\% & 1.0110 & 80.11\% & 1.0294 & 76.72\% & 1.0504 & 71.78\% & 1.1059 & 65.26\% & 1.1390 & 64.27\% & 1.1900 \\ & RGM + AR + IC & \textbf{87.68\%} & 1.0198 & \textbf{80.37\%} & 1.0309 & \textbf{77.57\%} & 1.0666 & \textbf{73.77\%} & 1.1146 & \textbf{67.49\%} & 1.1495 & \textbf{65.10\%} & 1.1895 \\ \bottomrule \end{tabular}% } \label{tab:willow_} \end{table*}% \subsection{Experiments on Synthetic Dataset} We evaluate RGM on the 
synthetic graphs following the protocol of \cite{Wang2019NeuralGM}. The synthetic data test is relatively simple; it mainly shows the effectiveness of our back-end solver when the front-end information is limited, as there is no visual data for a CNN to learn from. More outlier tests will be given on the real data. We first generate sets of random points in the 2D plane, with coordinates sampled from the uniform distribution $U(0,1) \times U(0,1)$. First, we select 10 sets of points as the ground-truth point sets. Then, we randomly scale their coordinates by factors drawn from $U(1 - \delta_{s}, 1 + \delta_{s})$. The set of scaled points and the set of ground-truth points are regarded as the pairwise graphs to be matched. We use 10 inliers without outliers, and the noise level $\delta_{s}$ varies from $0$ to $0.5$. For the calculation of the affinity matrix $\textbf{K}$, the node affinity is set to $0$ and the edge affinity is determined by the difference of edge lengths: $\textbf{K}_{ia, jb} = \exp(-\frac{(f_{ij} - f_{ab})^{2}}{\sigma_{1}})$, where $f_{ij}$ is the edge length of $E_{ij}$. We generate 300 sets of scaled points for each ground-truth set and obtain 3,000 pairwise graphs. We split the data into training and testing sets with a ratio of (2 : 1). The results on the synthetic dataset are shown in Fig.~\ref{fig:syn}. Evaluations are performed with respect to the noise level $\delta_{s}$. We can see that RGM performs the best in terms of matching F1 score and objective score in all experiments.
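For concreteness, the affinity construction above can be sketched as follows (a toy illustration: the number of points, $\sigma_1 = 1.0$, the per-coordinate scaling, and the row-major index convention $\mathbf{K}_{ia,jb} \to K[i n + a,\, j n + b]$ are our own assumptions):

```python
import numpy as np

# Build K_{ia,jb} = exp(-(f_ij - f_ab)^2 / sigma_1) from two point sets,
# where f_ij is the edge length between points i and j. Node affinities
# (diagonal blocks with i == j or a == b) are left at 0, as in the paper.

rng = np.random.default_rng(0)
n, sigma1, delta_s = 5, 1.0, 0.1
pts1 = rng.uniform(0.0, 1.0, size=(n, 2))                          # ground-truth points
pts2 = pts1 * rng.uniform(1 - delta_s, 1 + delta_s, size=(n, 2))   # noisy scaled copy

def edge_len(pts):
    # Pairwise Euclidean distances, i.e. the edge lengths f_ij.
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

d1, d2 = edge_len(pts1), edge_len(pts2)
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for a in range(n):
            for b in range(n):
                if i != j and a != b:  # node affinities stay 0
                    K[i * n + a, j * n + b] = np.exp(
                        -(d1[i, j] - d2[a, b]) ** 2 / sigma1)
```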
\begin{table*}[t] \centering \caption{Sensitivity test by using inexact inlier count $n_i$ ranging from 8 to 13, instead of the ground truth 10, on Willow Object with 10 inliers and \textbf{3} outliers.} \resizebox{0.98\textwidth}{!} { \renewcommand{\arraystretch}{1.75} \begin{tabular}{l|cc|cc|cc|cc|cc|cc} \hline \multirow{2}[0]{*}{\diagbox{Method}{$n_i$}} & \multicolumn{2}{c|}{8} & \multicolumn{2}{c|}{9} & \multicolumn{2}{c|}{10 (ground truth)} & \multicolumn{2}{c|}{11} & \multicolumn{2}{c|}{12} & \multicolumn{2}{c}{13}\\ \cline{2-13} & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj & F1 & Obj\\ \hline BPF~\cite{Wang2018GraphMW} & 71.26\% & 1.0412 & 71.26\% & 1.0412 & 71.26\% & 1.0412 & 71.26\% & 1.0412 & 71.26\% & 1.0412 & 71.26\% & 1.0412 \\ \cdashline{1-13}[1.5pt/2pt] BBGM~\cite{rolinek2020deep} & 73.41\% & - & 73.41\% & - & 73.41\% & - & 73.41\% & - & 73.41\% & - & 73.41\% & - \\ \cdashline{1-13}[1.5pt/2pt] RGM + AR& \textbf{75.65\%} & \textbf{1.0762} & \textbf{75.65\%} & \textbf{1.0762} & 75.65\% & \textbf{1.0762} & 75.65\% & 1.0762 & \textbf{75.65\%} & 1.0762 & \textbf{75.65\%} & 1.0762 \\ \hline RGM + IC & 70.78\% & 0.9573 & 73.78\% & 1.0312 & 76.72\% & 1.0504 & 73.61\% & \textbf{1.0863} & 71.40\% & \textbf{1.1183} & 66.96\% &\textbf{1.1325} \\ RGM + AR +IC & 71.94\% & 1.0101 & 74.87\% & 1.0485 & \textbf{77.57\%} & 1.0666 & \textbf{75.73\%} & 1.0738 & 74.78\% & 1.0809 & 73.89\% & 1.0852 \\ \bottomrule \end{tabular}% } \label{tab:willow__}% \end{table*}% \begin{table*}[t!] \centering \caption{Average performance over all classes on Pascal VOC. The number of inliers ranges from 6 to 23 without outliers.
``w/ label'' denotes that ground-truth labels are required for training.} \resizebox{0.99\textwidth}{!} { \renewcommand{\arraystretch}{1.75} \begin{tabular}{c|cccccc|c||cccccc|cc} \hline & \multirow{2}[0]{*}{RRWM} & \multirow{2}[0]{*}{GAGM} & \multirow{2}[0]{*}{IPFP} & \multirow{2}[0]{*}{PSM} & \multirow{2}[0]{*}{GNCCP} & \multirow{2}[0]{*}{BPF} & \multirow{2}[0]{*}{RGM} & \multirow{2}[0]{*}{GMN} & \multirow{2}[0]{*}{PCA} & \multirow{2}[0]{*}{LCS } & \multirow{2}[0]{*}{BBGM} & \multirow{2}[0]{*}{NGM} & \multirow{2}[0]{*}{NGM-v2} & NGM & NGM-v2 \\ & \scriptsize{\cite{Cho2010ReweightedRW}} & \scriptsize{\cite{Gold1996AGA}} & \scriptsize{\cite{Leordeanu2009AnIP}} & \scriptsize{\cite{Egozi2013APA}} & \scriptsize{\cite{Liu2012AnEP}} & \scriptsize{\cite{Wang2018GraphMW}} & & \scriptsize{\cite{ZanfirCVPR18}} & \scriptsize{\cite{WangICCV19}} & \scriptsize{\cite{wang2020learning}} & \scriptsize{\cite{rolinek2020deep}} & \scriptsize{\cite{Wang2019NeuralGM}} & \scriptsize{\cite{Wang2019NeuralGM}} & + RGM & + RGM \\ \hline F1 & 48.58\% & 53.52\% & 41.94\% & 53.72\% & 49.98\% & 54.70\% & 60.19\% & 55.30\% & 64.80\% & 68.50\% & 79.00\% & 64.10\% & 80.10\% & 68.04\% & \textbf{81.88\%} \\ Obj & 1.017 & 1.019 & 1.017 & 1.019 & 1.018 & 1.020 & \textbf{1.040 } & - & - & - & - & - & - & 1.015 & 1.011 \\ \hline w/ label & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\ \bottomrule \end{tabular}% } \label{tab:voc}% \end{table*}% \subsection{Experiments on Willow Object Dataset with Outliers} \label{sec:willow} \subsubsection{Performance over Different Categories} \label{sec:willow_1} In the experiments, we use 10 distinctive keypoints with 3 randomly generated outliers for each image. The experimental setup follows~\cite{Jiang2020UnifyingOA}, and the evaluation is performed on all five categories.
For the training and testing process, we follow the data split ratio of BBGM~\cite{rolinek2020deep} in all experiments. Table~\ref{tab:willow} shows the results over the five categories given three outliers. We compare our methods (bottom box) with the learning-free back-end solvers (top box) and the learning-based deep graph matching methods (middle box). We use learned features extracted by BBGM~\cite{rolinek2020deep} to construct the affinity matrix, which is used as input for all learning-free back-end solvers and our methods. For the other learning-based baselines, we train and test them directly with their own pipelines; they do not report objective scores because learning-based methods only care about accuracy or F1 score. Since there are several outliers that should not be matched, the ground-truth matching only contains part of the input keypoints (10, in fact). Therefore, we use the F1 score instead of accuracy to measure performance. Note for our methods at the bottom, ``RGM + AR'' means RGM with affinity regularization, ``RGM + IC'' denotes RGM with inlier count information, and ``RGM + AR + IC'' means RGM with both. Regularized affinity and inlier count information are the two ways to handle outliers introduced in Sec.~\ref{sec:regualr}. RGM reaches the best results over all five classes in this dataset, and obtains a 4\% average improvement over the best baseline BBGM. Among the variants of our methods, ``RGM + AR + IC'' shows the best performance thanks to the additional inlier count information and regularization. The simple version ``RGM + AR'' requires no extra information and still outperforms all the baselines. For the learning-free baselines, we find the surprising result that BPF~\cite{Wang2018GraphMW} reaches the best performance rather than ZACR~\cite{wang2020zero}, the latest GM solver tailored for handling outliers.
Per discussion with the authors of ZACR, this is perhaps mainly due to a few strong assumptions they made, which suit the 50 images (30 cars and 20 motorbikes) used in the experiments of their paper but may not always hold in other datasets, including Willow Object. For example, ZACR requires that edges connecting two inliers have clearly higher similarities than edges connecting inlier-outlier or outlier-outlier pairs. Besides, by its inherent design, ZACR solves Koopmans-Beckmann's QAP instead of Lawler's QAP, and therefore has some difficulty utilizing the affinity matrix obtained by pretraining BBGM. In communication with the authors of ZACR, we tried to modify their code, including using the learned node and edge features in line with their model's interface. Regrettably, the results in Table~\ref{tab:willow} and Table~\ref{tab:willow_} are the best we could obtain. The matching visualization is given in Fig.~\ref{fig:image}, which shows the matching results of RGM on all five categories. We paint the inliers as green nodes and the outliers as blue nodes. In each pair of images, the green and red lines represent correct and incorrect predictions, respectively. Since the goal is to match all inliers and ignore the outliers, we can see that RGM barely matches the blue outliers and focuses on the green inliers. \subsubsection{Performance over Different Amounts of Outliers} We conduct further experiments with respect to different amounts of outliers. In these experiments, the number of inliers is fixed to 10 while the number of outliers varies from 1 to 6. Table~\ref{tab:willow_} shows the performance of our methods and the baselines, averaged over all five classes of the Willow Object dataset, with respect to different amounts (1 - 6) of outliers. Our methods are shown in the last three rows of Table~\ref{tab:willow_}. RGM outperforms all baselines for every number of outliers from 1 to 6.
As the number of outliers increases, the performance of all methods tends to decrease, since the interference from outliers grows larger, while the improvement of RGM becomes more significant. One may attribute the advantage of ``RGM + IC'' and ``RGM + AR + IC'' to the use of the extra inlier count information, a unique ability of our RL-based model compared to peer learning-based baselines, which cannot directly exploit such information. Yet ``RGM + AR'' does not require the inlier count information as input, and can still outperform all baselines in almost all settings. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{figures/willow_generalize_.pdf} \\ \caption{Generalization test for the number of outliers by F1 Score ($\uparrow$). Row and column indices denote the number of outliers in the training and testing sets, respectively. The average F1 on all five classes of Willow Object is reported. For each testing set (column), the darker the red, the better.} \label{fig:wg} \end{figure} \begin{table*}[tb!] \centering \caption{Performance gap with the optimal (\%) ($\downarrow$) on QAPLIB; the mean / min / max gaps are reported with respect to every class. Finally, the average performance over all classes and the inference time (s) per instance are reported. Note that the peer method NGM shows no difference between its v1 and v2 versions here, as their difference lies in the front-end feature extraction, which is irrelevant in the QAPLIB setting.
The numbers in brackets give the range of instance sizes used for each category in these experiments.} \resizebox{0.99\textwidth}{!} { \renewcommand{\arraystretch}{1.75} \begin{tabular}{l|ccc|ccc|ccc|ccc|ccc|ccc|ccc|ccc} \hline & \multicolumn{3}{c|}{bur (26) } & \multicolumn{3}{c|}{chr (12 - 25)} & \multicolumn{3}{c|}{esc (16 - 64)} & \multicolumn{3}{c|}{had (12 - 20)} & \multicolumn{3}{c|}{kra (30 - 32)} & \multicolumn{3}{c|}{lipa (20 - 60)} & \multicolumn{3}{c|}{nug (12 - 30)} & \multicolumn{3}{c}{rou (12 - 30)} \\ \cline{2-25} & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max \\ \hline SM~\cite{Leordeanu2005AST} & 22.3 & 20.3 & 24.9 & 460.1 & 144.6 & 869.1 & 301.6 & 0.0 & 3300.0 & 17.4 & 14.7 & 21.5 & 65.3 & 63.8 & 67.3 & 19.0 & 3.8 & 34.8 & 45.5 & 34.2 & 64.0 & 35.8 & 30.9 & 38.2 \\ RRWM~\cite{Cho2010ReweightedRW} & 23.1 & 19.3 & 27.3 & 616.0 & 120.5 & 1346.3 & 63.9 & 0.0 & 200.0 & 25.1 & 22.1 & 28.3 & 58.8 & 53.9 & 67.7 & 20.9 & 3.6 & 41.2 & 67.8 & 52.6 & 79.6 & 51.2 & 39.3 & 60.1 \\ SK-JA~\cite{Kushinsky2019SinkhornAF} & 4.7 & 2.8 & 6.2 & \textbf{38.5 } & \textbf{0.0 } & \textbf{186.1 } & 364.8 & 0.0 & 2200.0 & 25.8 & 6.9 & 100.0 & 41.4 & 38.9 & 44.4 & \textbf{0.0 } & \textbf{0.0 } & \textbf{0.0 } & 25.3 & 10.9 & 100.0 & 13.7 & 10.3 & 17.4 \\ NGM~\cite{Wang2019NeuralGM} & \textbf{3.4 } & \textbf{2.8 } & \textbf{4.4 } & 121.3 & 45.4 & 251.9 & 126.7 & 0.0 & 200.0 & 8.2 & 6.0 & 11.6 & 31.6 & 28.7 & 36.8 & 16.2 & 3.6 & 29.4 & 21.0 & 14.0 & 28.5 & 30.9 & 23.7 & 36.3 \\ \hline RGM & 7.1 & 4.5 & 9.0 & 112.4 & 23.4 & 361.4 & \textbf{32.8 } & \textbf{0.0 } & \textbf{141.5 } & \textbf{6.2 } & \textbf{1.9 } & \textbf{9.0 } & \textbf{15.0 } & \textbf{10.4 } & \textbf{20.6 } & 13.3 & 3.0 & 23.8 & \textbf{9.7 } & \textbf{6.1 } & \textbf{12.9 } & \textbf{13.4 } & \textbf{7.1 } & \textbf{16.7 } \\ \midrule \hline & \multicolumn{3}{c|}{scr (12 - 20)} & \multicolumn{3}{c|}{sko
(42 - 64)} & \multicolumn{3}{c|}{ste (36)} & \multicolumn{3}{c|}{tai (12 - 64)} & \multicolumn{3}{c|}{tho (30 - 40)} & \multicolumn{3}{c|}{wil (50)} & \multicolumn{3}{c|}{\textbf{Average (12 - 64)}} & \multicolumn{3}{c}{Time per instance} \\ \cline{2-22} & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max & mean & min & max & \multicolumn{3}{c}{(in seconds)}\\ \hline SM~\cite{Leordeanu2005AST} & 123.4 & 104.0 & 139.1 & 29.0 & 26.6 & 31.4 & 475.5 & 197.7 & 1013.6 & 180.5 & 21.6 & 1257.9 & 55.0 & 54.0 & 56.0 & 13.8 & 11.7 & 15.9 & 181.2 & 46.9 & 949.9 & \multicolumn{3}{c}{\textbf{0.01}} \\ RRWM~\cite{Cho2010ReweightedRW} & 173.5 & 98.9 & 218.6 & 48.5 & 47.7 & 49.3 & 539.4 & 249.5 & 1117.8 & 197.2 & 26.8 & 1256.7 & 80.6 & 78.2 & 83.0 & 18.2 & 12.5 & 23.8 & 169.5 & 49.5 & 432.9 & \multicolumn{3}{c}{0.15} \\ SK-JA~\cite{Kushinsky2019SinkhornAF} & 48.6 & 44.3 & 55.7 & 18.3 & 16.1 & 20.5 & 120.4 & 72.5 & 200.4 & 25.2 & \textbf{1.6 } & 107.1 & 32.9 & 30.6 & 35.3 & 8.8 & \textbf{6.7 } & 10.7 & 93.2 & \textbf{9.0 } & 497.9 & \multicolumn{3}{c}{563.44} \\ NGM~\cite{Wang2019NeuralGM} & 55.5 & 41.4 & 66.2 & 25.2 & 22.8 & 27.7 & \textbf{101.7 } & \textbf{57.6 } & \textbf{172.8 } & 61.4 & 18.7 & 352.1 & 27.5 & 24.8 & 30.2 & 10.8 & 8.2 & 11.1 & 62.4 & 17.8 & 129.7 & \multicolumn{3}{c}{15.72} \\ \hline RGM & 45.5 & \textbf{30.2 } & 56.1 & \textbf{10.6 } & \textbf{9.9 } & \textbf{11.2 } & 134.1 & 69.9 & 237.0 & \textbf{17.3 } & 11.4 & \textbf{28.6 } & \textbf{20.7 } & \textbf{12.7 } & \textbf{28.6 } & \textbf{8.1 } & 7.9 & \textbf{8.4 } & \textbf{35.8 } & 10.7 & \textbf{101.1 } & \multicolumn{3}{c}{75.53} \\ \bottomrule \end{tabular} } \label{tab:qap2} \end{table*}% \subsubsection{Sensitivity to the Input of Inlier Count} \label{sec:ic} Since in reality this information might not always be accurate, we test our methods' robustness against an inexact number of inliers.
We re-run the experiments of Sec.~\ref{sec:willow_1}, in which each image contains 10 inliers and 3 outliers. We vary the inlier-count hyperparameter $n_i$ over (8 - 13) instead of the ground truth 10, as input to ``RGM + IC'' and ``RGM + AR + IC''. The results in Table~\ref{tab:willow__} show that the performance of ``RGM + IC'' and ``RGM + AR + IC'' tends to degrade given incorrect inlier information, as expected. It turns out that ``RGM + IC'' can outperform the best baseline BBGM when the given inlier count is in (9 - 11), while ``RGM + AR + IC'' can outperform BBGM when the given inlier count is in (9 - 13). Meanwhile, the regularized affinity matrix also enhances robustness. \subsubsection{Generalization to Different Amounts of Outliers} We carry out additional experiments to test the generalization ability across different numbers of outliers. In this study, we train RGM with one fixed number of outliers and test it under another. We conduct these experiments on Willow Object with 10 inliers and a range of outliers from 0 to 6. The results are shown in Fig.~\ref{fig:wg}, where for every testing case (column) darker red means better performance. RGM generalizes well across different numbers of outliers. Relatively speaking, training RGM with 2, 3, or 4 outliers yields better generalization, with 3 outliers performing best. \subsection{Experiments on Pascal VOC without Outliers} \label{sec:pascal} Table~\ref{tab:voc} reports the results on Pascal VOC. We apply the train/test split described in Sec.~\ref{sec:tt_protocol}. We report the performance averaged over all 20 classes in this dataset, and we can see that RGM outperforms all baselines on Pascal VOC.
Specifically, the baselines on the left side are label-free methods, which use the same input information as RGM and thus allow a fair comparison. In other words, the left-side methods are pure back-end solvers that pursue the highest objective score. We can see that our RGM reaches the highest objective score of 1.040, which is clearly greater than 1. Even with this high objective score, the matching accuracy is still unsatisfactory. We then conduct experiments with the methods on the right side, which require labels as supervision, meaning their back-end solvers are trained to reach higher accuracy rather than a higher objective score. We combine our RGM with the deep GM methods NGM and NGM-v2, using their well-trained solvers as guidance for RGM. We find that RGM boosts the performance of the original methods under this modified search direction. It is worth noting that the objective score reached by (NGM-v2 + RGM) is not as good as that of RGM on the left side (1.011 vs. 1.040), since we use the ground-truth labels as guidance for higher matching accuracy. This suggests that the ground-truth solution may not always be optimal in the sense of the objective score, which can be attributed to the imperfect modeling of the affinity for real images, even when there is no outlier. Besides, Fig.~\ref{fig:tab:generalization} (a) shows the generalization ability of RGM among similar categories. We use one class for training and another class for testing. For every testing class (every column), the darker the red, the better. We can see that RGM generalizes well to different classes. \begin{figure}[t!] \centering \begin{tabular}{c} \includegraphics[width=0.365\textwidth]{figures/gereralization_.pdf} \\ \footnotesize{(a) Pascal VOC dataset }\\ \includegraphics[width=0.465\textwidth]{figures/QAPLIB_generalize_new.pdf} \\ \footnotesize{(b) QAPLIB dataset} \end{tabular} \caption{Generalization test w.r.t.\ (a) F1 score ($\uparrow$), (b) optimal gap ($\downarrow$).
Row and column indices denote training and testing classes, respectively. For every testing class (column), the darker the red, the better.} \label{fig:tab:generalization} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.98\textwidth]{figures/image_6_.pdf} \caption{Visual illustration of the matching results by RGM on the Willow Object dataset with 10 inliers (green) and 3 outliers (blue) randomly extracted from the images. Green and red lines represent correct and incorrect node matchings, respectively. The correct solution is supposed to match all green inliers with green lines. The subtitle of each figure shows the correct / incorrect matching count out of the 10 inliers.} \label{fig:image} \end{figure*} \subsection{Experiments on QAPLIB Dataset} \label{sec:QAPLIB_test} For computing the optimal gap on QAPLIB~\cite{Burkard1997QAPLIBA}, we use exactly the same optimal values as in \cite{Wang2019NeuralGM}. The results are shown in Table~\ref{tab:qap2}, where ``esc (16 - 64)'' denotes that the graph size of class ``esc'' varies from 16 to 64. RGM is trained on the half of the instances with smaller sizes. Here, we calculate the gap between the computed solution and the optimal one, and report the average optimal gap (the lower the better). The inference time per instance is listed in the last column. We compare with four existing solvers: SM~\cite{Leordeanu2005AST}, which solves the QAP by a spectral technique; Sinkhorn-JA~\cite{Kushinsky2019SinkhornAF}, which solves lifted linear program relaxations of the QAP; RRWM~\cite{Cho2010ReweightedRW}; and NGM~\cite{Wang2019NeuralGM}. Note that NGM is the first method to utilize deep learning to solve QAP, which is still an emerging research setting. The results show that RGM outperforms all the baselines, including the latest solver NGM. The inference time of RGM is between those of NGM and SK-JA.
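The optimal gap reported above is simply the relative difference between a solver's objective value and the best known optimum of each instance. A minimal sketch of this metric on the Koopmans-Beckmann form of the QAP (the toy matrices and function names below are our own illustration, not the paper's code):

```python
from itertools import permutations

def kb_objective(F, D, perm):
    """Koopmans-Beckmann QAP objective: sum_ij F[i][j] * D[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def optimal_gap(obj, obj_opt):
    """Relative gap (%) to the best known optimum, for a minimization problem."""
    return 100.0 * (obj - obj_opt) / obj_opt

# Toy flow/distance matrices (illustrative only; QAPLIB instances are far larger).
F = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
D = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]

# Brute-force the optimum on this toy instance, then score one candidate assignment.
best = min(kb_objective(F, D, p) for p in permutations(range(3)))
candidate = (1, 0, 2)
gap = optimal_gap(kb_objective(F, D, candidate), best)
```

The mean/min/max columns of Table~\ref{tab:qap2} are exactly such gaps aggregated over the instances of each class.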
Fig.~\ref{fig:tab:generalization}(b) shows the generalization ability of RGM, which is trained on one class and tested on another. For every testing class (every column), a darker red means a lower optimal gap, i.e., better performance. It shows that RGM generalizes soundly to unseen instances with different problem sizes. \begin{figure}[tb!] \centering \includegraphics[width=0.46\textwidth]{figures/hp.pdf} \caption{Hyperparameter study for our revocable deep reinforcement learning: results over different values of the hyperparameters. The experiments are conducted on Willow Object with 10 inliers and 3 outliers. The average F1 score ($\uparrow$) on all five classes is reported.} \label{fig:hp} \end{figure} \subsection{Further Results and Discussion} \subsubsection{Hyperparameter Sensitivity Study} To test the sensitivity of RGM to hyperparameter values, we conduct a hyperparameter study on Willow Object. We use the same setting as Sec.~\ref{sec:willow_1}, with 10 inliers and 3 outliers for each image. We choose six hyperparameters for evaluation: batch size, $\gamma$, experience replay size, hidden size in the GNN, hidden size in the Q-value network, and the regularization function for affinity regularization ($f_{1}(n) = \frac{3\max(n_1, n_2) - n}{3\max(n_1, n_2)}$, $f_{2}(n) = \frac{1 + n}{1 + 3n}$, $f_{3}(n) = \frac{1}{n^{2}}$). The results are shown in Fig.~\ref{fig:hp}. They show that RGM is not sensitive to the batch size, hidden sizes, or regularization function, since these cause only small fluctuations in the F1 score. Here $\gamma$ is the discount factor in Eq.~\ref{eq:ddqn} that balances the current reward against the long-term reward in RL; setting it too high or too low leads to relatively poor performance. RGM performs badly when the replay size is too small for experience replay. In general, RGM is relatively stable across a range of hyperparameters. \begin{figure}[tb!]
\centering \begin{tabular}{cc} \includegraphics[width=0.21\textwidth]{figures/case_synthetic_new_.pdf} & \includegraphics[width=0.21\textwidth]{figures/case_willow_new_.pdf} \\ \footnotesize{(a) Synthetic Dataset} & \footnotesize{(b) Willow Object Dataset} \end{tabular} \caption{Case study: performance comparison w.r.t.\ F1 score ($\uparrow$) for different numbers of initial matched seeds (i.e.\ node pairs).} \label{fig:cs} \end{figure} \subsubsection{Case Study: Seeded Graph Matching} As aforementioned, the core of RGM is to learn the back-end decision-making process for graph matching. More importantly, RGM can take supplementary information for graph matching, such as initial seeds. An initial seed means that one or several nodes in each of the original pairwise graphs are already matched by humans or other information sources. However, existing learning-based graph matching methods cannot utilize these valuable initial seeds, because they produce the permutation-matrix solution in one shot, unlike RGM, which solves the problem progressively. In implementation, we only need to set the initial seeds as the first several actions, and then let RGM execute normally. We conduct this case study on both the synthetic dataset and the Willow Object dataset, in which each image contains 10 inliers and 3 outliers. Fig.~\ref{fig:cs} shows that adding suitable initial seeds does improve performance, which can be useful when some matches can be provided by manual annotation at the beginning. \subsubsection{Ablation Study: Revocable Action vs. Local Rewrite} \label{sec:lr} We study the effectiveness of our revocable action scheme by comparing it with local rewrite (LR~\cite{Chen2019LearningTP}) under the same RL framework. LR is an influential mechanism in RL-based combinatorial optimization that tries to improve a given solution instead of generating one from scratch.
To some extent, LR can also reverse the applied actions through its local rewrite mechanism, and it is recognized as a state-of-the-art technique for improving RL. We conduct the comparative experiments on QAPLIB. We use LR to improve the solutions given by the baselines and by our RGM without the revocable framework (RGM w/o rev.), and compare the results with RGM. Table~\ref{tab:lr} shows the results. Our revocable framework RGM still performs best compared to all LR-boosted baselines. It turns out that LR does improve the original solutions, but its performance and efficiency are still below those of our revocable framework. \begin{table}[tb!] \centering \caption{Comparison with the local rewrite (LR)~\cite{Chen2019LearningTP} boosting technique on the QAPLIB dataset w.r.t.\ the optimal gap (the lower the better) and inference time in seconds, where ``+ LR'' denotes using local rewrite on the output of the original methods, and ``RGM w/o rev.'' denotes our method without the revocable mechanism described in Sec.~\ref{sec:rev}.} \resizebox{0.48\textwidth}{!} { \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|c|c|c|c} \hline & Time per & \multicolumn{3}{c}{QAPLIB} \\ \cline{3-5} & instance & mean & min & max \\ \hline SM~\cite{Leordeanu2005AST} & \textbf{0.01} & 181.22\% & 46.93\% & 949.94\% \\ \multicolumn{1}{r|}{+ LR} & 47.51 & 119.14\% & 44.21\% & 291.15\% \\ \cdashline{1-5}[1.5pt/2pt] RRWM~\cite{Cho2010ReweightedRW} & 0.15 & 169.50\% & 49.51\% & 432.94\% \\ \multicolumn{1}{r|}{+ LR} & 47.93 & 100.19\% & 46.25\% & 223.84\% \\ \cdashline{1-5}[1.5pt/2pt] SK-JA~\cite{Kushinsky2019SinkhornAF} & 563.44 & 93.23\% & \textbf{9.03\%} & 497.91\% \\ \multicolumn{1}{r|}{+ LR} & 623.18 & 73.87\% & \textbf{9.03\%} & 212.53\% \\ \cdashline{1-5}[1.5pt/2pt] NGM~\cite{Wang2019NeuralGM} & 15.72 & 62.35\% & 17.78\% & 129.75\% \\ \multicolumn{1}{r|}{+ LR} & 69.57 & 54.41\% & 16.54\% & 117.36\% \\ \hline RGM w/o rev.
& 47.24 & 41.49\% & 12.58\% & 119.75\% \\ \multicolumn{1}{r|}{+ LR} & 96.45 & 39.70\% & 11.65\% & 109.47\% \\ \cdashline{1-5}[1.5pt/2pt] RGM & 75.53 & \textbf{35.84\%} & 10.74\% & \textbf{101.13\%} \\ \bottomrule \end{tabular}% } \label{tab:lr}% \end{table}% \subsubsection{Ablation Study: Using Alternative RL Backbones} In RGM, we adopt Double Dueling DQN (D3QN) with prioritized experience replay as our backbone, for which we perform an ablation study against alternatives on Willow Object with 10 inliers and 3 outliers. Fig.~\ref{fig:rl} shows the mean F1 score over the five classes. We compare D3QN with popular backbones: A2C~\cite{mnih2016asynchronous}, ACER~\cite{Wang2017SampleEA}, TRPO~\cite{schulman2015trust}, PPO~\cite{schulman2017proximal}, the original DQN~\cite{Mnih2013PlayingAW}, and Double DQN~\cite{Hasselt2016DeepRL}. D3QN outperforms all other algorithms on Willow Object. We believe the main reason is that graph matching is a discrete decision-making problem, for which value-based methods (DQN, DDQN, D3QN) can be more suitable than policy-based methods, a view that is widely held~\cite{Sutton2005ReinforcementLA, ivanov2019modern, sutton2018reinforcement}. Besides, off-policy methods are generally more stable than on-policy methods, and thus the former are more suitable given that the environment changes during matching. \subsubsection{Inconsistency between Affinity Objective and F1} \label{subsec:limiation} Last but not least, we discuss a long-standing issue in graph matching, and possibly in other optimization tasks, regardless of the presence of outliers: the matching accuracy or F1 score is inconsistent with the value of the objective function. In our analysis, the reason is probably that the objective function cannot perfectly model the ultimate goal, due to limited modeling capacity, noise, etc.
For example, as shown in Table~\ref{tab:limitation}, when applying our RGM (any other method would serve equally well from the perspective of sampling solutions of different quality) to the real-image dataset Pascal VOC, the resulting quantile statistics of the objective score deliver an important message: 39.4\% of the sampled solutions achieve an objective score higher than one, which means these wrong solutions obtain an even higher score than the ground-truth matching. Note that the front-end CNN/GNN features and the affinity metric model are learned by the state-of-the-art supervised GM model BBGM; nevertheless, the learned affinity function still does not perfectly align with the F1 score. Despite the impressive results achieved in our paper, this also suggests a limitation of RL-based solvers that pursue a high objective score, which we leave for future work; one possible remedy is to involve multiple graphs~\cite{YanPAMI16,JiangPAMI20} to regularize the objective function. \begin{figure}[tb!] \centering \includegraphics[width=0.48\textwidth]{figures/rl__.pdf} \caption{Average F1 score ($\uparrow$) of different RL algorithms on five classes of Willow Object, with 10 inliers and 3 outliers on both sides for matching.} \label{fig:rl} \end{figure} \begin{table}[tb!] \centering \caption{Quantile of the objective score and F1 score of the solutions found by RGM on the bus category of the Pascal VOC dataset, which contains no outliers. The mismatch between F1 score and objective score is clear.
} \resizebox{0.48\textwidth}{!} { \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|c|c|c|c} \hline Objective range & (0.00, 0.99) & [0.99, 1.00) & = 1.00 & (1.00, +$\infty$) \\ \hline Proportion & 4.0\% & 11.6\% & 45.0\% & 39.4\% \\ F1 score & 40.6\% & 61.2\% & 100.0\% & 56.5\% \\ \bottomrule \end{tabular}} \label{tab:limitation} \end{table} \section{Conclusion and Outlook} \label{sec:con} In this paper, we have presented a deep RL based approach for graph matching, especially in the presence of outliers in both input graphs. The sequential decision scheme allows the agent to naturally select the inliers for matching and to stop before matching excessive outliers. To the best of our knowledge, this is the first work on RL for weighted graph matching that can be applied to its general QAP form. We further propose two important techniques to improve the robustness of our solver. The first is the revocable action mechanism, which is shown to be well suited to our complex constrained search procedure. The other is affinity regularization based on parametric function fitting, which is shown to effectively restrain the agent from matching excessive outliers when the number of inliers is unknown. Extensive experiments on both synthetic and real-world datasets show the cost-effectiveness of RGM. A few future directions are worth further investigation: i) exploring the generalization of our proposed revocable mechanism to other combinatorial learning problems in which the constraints are relatively heavy; ii) improving the scalability of our approach by exploiting its sequential decision nature, e.g., more dynamic and efficient graph embedding for the graph matching problem. \section*{Acknowledgement} This work was supported by the National Key Research and Development Program of China (2020AAA0107600), the National Natural Science Foundation of China (61972250, 72061127003), and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). \bibliographystyle{IEEEtran}
{ "timestamp": "2021-08-18T02:14:25", "yymm": "2012", "arxiv_id": "2012.08950", "language": "en", "url": "https://arxiv.org/abs/2012.08950" }
\section{Introduction} \label{sec:intro} The scattering of cosmological dark matter (DM) particles with stars has long been used as a tool in the quest to uncover the particle nature of DM. For a wide range of DM masses, collisions between ambient DM and the constituents of a star would result in sufficient energy loss for the DM to become gravitationally bound to the star, with important observational consequences. This provides a sensitive probe of dark matter scattering cross sections in a way that is highly complementary to terrestrial DM direct detection experiments. While much attention has been focused on capture in the Sun~\citep{Gould:1987ju,Gould:1987ir,Jungman:1995df,Busoni:2013kaa,Garani:2017jcj,Busoni:2017mhe}, capture in neutron stars (NSs)~\citep{Goldman:1989nd,Kouvaris:2007ay,Kouvaris:2010vv,deLavallaz:2010wp,McDermott:2011jp,Bell:2013xk,Bramante:2013nma,Bramante:2017xlb,Baryakhtar:2017dbj,Raj:2017wrv,Bell:2018pkk,Garani:2018kkd,Camargo:2019wou,Bell:2019pyc,Acevedo:2019agu,Joglekar:2019vzy,Joglekar:2020liw,Bell:2020jou,Ilie:2020vec,Dasgupta:2020dik,Bell:2020lmm} has a similar long history. The figure of merit for DM capture in NSs is the cross section for which the capture probability is of order 1. Because of the enormous NS target mass and density, this extreme condition is met when the DM-neutron scattering cross section is $\sigma \sim 10^{-45} \rm cm^2$. This is comparable to the sensitivity of DM-nucleon recoil experiments for those interactions for which direct detection is most sensitive, namely, unsuppressed spin-independent scattering of GeV scale DM. It is orders of magnitude more sensitive for high or low mass DM, spin-dependent interactions, or cross sections that are either velocity or momentum suppressed. In many cases, this translates to a reach well below the so-called neutrino floor, beyond which neutrino scattering presents an irreducible background to direct detection experiments. 
In recent years, there has been renewed interest in DM capture in NSs because of a number of key developments: (i) the realization that DM capture, and subsequent annihilation, may lead to appreciable NS heating within reach of future telescopes in the near-infrared~\cite{Baryakhtar:2017dbj}; (ii) capture of non-annihilating DM, such as asymmetric DM, may trigger black hole formation~\cite{Kouvaris:2010jy,McDermott:2011jp,Kouvaris:2011fi,Bell:2013xk,Garani:2018kkd,Dasgupta:2020mqg}; (iii) improved understanding of NSs through a variety of observational data, including gravitational waves from NS mergers~\cite{TheLIGOScientific:2017qsa,Abbott:2018wiz,Monitor:2017mdv,Radice:2018ozg}. However, up until recently, the treatment of DM capture in NSs has largely been adapted from that for capture in the Sun, without fully accounting for the extreme physics of a NS environment. Due to the great promise of NS techniques, it is imperative to develop more accurate evaluations of the capture rate. To that end, recent calculations have included a fully relativistic scattering treatment~\cite{Joglekar:2019vzy,Joglekar:2020liw,Bell:2020jou,Bell:2020lmm}, gravitational focusing~\cite{Bell:2020jou,Bell:2020lmm}, Pauli blocking~\cite{Garani:2018kkd,Joglekar:2019vzy,Joglekar:2020liw,Bell:2020jou,Bell:2020lmm}, the opacity of the star~\citep{Bell:2020jou} and multiple-scattering effects ~\cite{Bramante:2017xlb,Joglekar:2019vzy,Joglekar:2020liw,Ilie:2020vec,Bell:2020jou,Bell:2020lmm}. In addition, one should properly incorporate the NS internal structure by consistently calculating the radial profiles of the equation of state (EoS) dependent parameters~\citep{Bell:2020jou,Garani:2018kkd,Bell:2020lmm} and the general relativistic corrections~\citep{Bell:2020jou,Bell:2020lmm}, by solving the Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{Tolman:1939jz,Oppenheimer:1939ne}. 
However, despite these improvements, the current state-of-the-art calculations still miss important physical effects. In this Letter, we address two important features that are intrinsic to the physics of neutron stars. In all prior treatments, the nucleon form factors that relate DM-nucleon couplings to the underlying DM-quark couplings have been evaluated at zero momentum transfer. While this is a valid assumption for non-relativistic scattering in direct detection experiments, it is a very poor approximation for the scattering of quasi-relativistic DM in a NS. Moreover, the nucleon targets are typically treated as a free Fermi gas, neglecting the fact that there are strong many-body forces at play. We incorporate these effects for the first time, through (i) the use of momentum dependent form factors in the scattering matrix elements and (ii) effective masses to account for strongly interacting nucleons. \section{Capture of DM in neutron stars} Neutron stars are primarily composed of degenerate neutrons. The simplest approach to calculate the DM capture rate, accounting for Pauli blocking, NS internal structure and general relativistic (GR) corrections, is to assume that DM scatters off a Fermi sea of neutrons, neglecting baryon interactions.
Assuming that a single collision is sufficient for a DM particle to become gravitationally bound, the capture rate, $C$, is~\cite{Bell:2020jou} \begin{widetext} \begin{eqnarray} C &=& \frac{4\pi}{v_\star} \frac{\rho_\chi}{m_\chi} {\rm Erf}\left(\sqrt{\frac{3}{2}}\frac{v_\star}{v_d}\right)\int_0^{R_\star} r^2 \frac{\sqrt{1-B(r)}}{B(r)} \Omega^{-}(r) \, dr, \label{eq:captureM2} \\ \Omega^{-}(r) &=& \frac{\zeta(r)}{32\pi^3}\int dt dE_n ds \frac{|\overline{M}(s,t)|^2}{s^2-(m_n^2-m_\chi^2)^2} \frac{E_n}{m_\chi}\sqrt{\frac{B(r)}{1-B(r)}} \frac{s}{\gamma(s)} f_{\rm FD}(E_n,r)[1-f_{\rm FD}(E_n^{'},r)],\label{eq:intrateideal} \end{eqnarray} \end{widetext} where $r$ is the radial variable, $\Omega^{-}$ is the interaction rate, $B(r)$ is the time component of the Schwarzschild metric, $\rho_\chi$ is the local DM density, $v_\star$ is the NS velocity, $v_d$ is the DM velocity dispersion, $|\overline{M}|^2$ is the squared matrix element parametrized in terms of the Mandelstam variables $s$ and $t$, $E_n$ and $E_n^{'}$ are the initial and final neutron energies, respectively, the Fermi-Dirac distribution $f_{\rm FD}$ depends on $r$ through the neutron chemical potential, and \begin{eqnarray} \gamma(s) &=& \sqrt{(s-m_n^2-m_\chi^2)^2-4m_n^2m_\chi^2},\\ \zeta(r)&=&\frac{n_{n}(r)}{n_{free}(r)}, \label{eq:zeta} \end{eqnarray} where $n_{free}$ is the neutron number density in the ideal Fermi gas approximation. The integration intervals for $s$, $t$ and $E_n$ are given in Ref.~\cite{Bell:2020jou}. Evaluating Eqs.~\ref{eq:captureM2} and \ref{eq:intrateideal} requires the assumption of an equation of state (EoS) to determine realistic radial profiles for the neutron number density $n_n(r)$, the chemical potential and $B(r)$, by solving the general relativistic version of the equations of hydrostatic equilibrium, the TOV equations.
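The factor $f_{\rm FD}(E_n,r)[1-f_{\rm FD}(E_n^{'},r)]$ in Eq.~\ref{eq:intrateideal} encodes Pauli blocking: the initial neutron must occupy a filled state and the final neutron must scatter into an empty one. A minimal numerical sketch of this suppression in the degenerate limit (the chemical potential and temperature values are illustrative, not taken from an EoS):

```python
import math

def f_fd(E, mu, T):
    """Fermi-Dirac occupation; energies in GeV. The exponent is clipped to
    avoid overflow in the strongly degenerate (T -> 0) neutron-star limit."""
    x = (E - mu) / T
    if x > 500.0:
        return 0.0
    if x < -500.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def pauli_factor(E_i, E_f, mu, T):
    """Weight for scattering an occupied initial state into an empty final state."""
    return f_fd(E_i, mu, T) * (1.0 - f_fd(E_f, mu, T))

# Illustrative degenerate conditions: mu = 0.35 GeV, T = 10 keV = 1e-5 GeV.
mu, T = 0.35, 1.0e-5
blocked = pauli_factor(0.30, 0.32, mu, T)   # final state below mu: Pauli blocked
allowed = pauli_factor(0.30, 0.40, mu, T)   # final state above mu: allowed
```

Scattering with small energy transfer, which would leave the neutron below the Fermi surface, is thus exponentially suppressed, while transfers that lift the neutron above the Fermi surface proceed unhindered.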
The $\zeta(r)$ correction factor of Eq.~\ref{eq:zeta} was first introduced in Ref.~\cite{Garani:2018kkd} in order to retain the free Fermi gas approximation while using an EoS that accounts for nucleon interactions. Most earlier calculations in the literature neglect this effect entirely, as they do not adopt an EoS and instead use average quantities. \section{Nucleon Interactions} At the extremely high densities found in neutron stars, particularly in the core, the ideal Fermi gas approach is no longer a good approximation, since nucleons undergo strong interactions. In equations of state of nucleon-rich dense matter, these interactions are often described in terms of effective Lagrangians such as Skyrme forces~\cite{Skyrme:1959zz,Vautherin:1971aw} and relativistic mean field models~\cite{Serot:1984ey,Serot:1997xg}. In the presence of a Lorentz scalar mean field, baryons in general, and nucleons in particular, develop an effective mass, $m^{\rm eff}$, different from the rest mass in vacuum, which must be used to consistently express the energy spectrum of interacting nucleons. Properly incorporating this effective mass in the evaluation of the capture rate is a superior approach to the use of the $\zeta(r)$ correction factor of Eq.~\ref{eq:zeta}. To deal with the full range of NS masses, up to the maximum observed, a relativistic treatment is necessary. The constraints of $\beta$-equilibrium suggest that hyperons will appear at the high densities associated with stars above 1.6 $M_\odot$. We use the EoS corresponding to the quark-meson coupling (QMC) model~\cite{Guichon:2018uew}, as presented in Ref.~\cite{Motta:2019tjc}. This model first suggested that one could have stars with masses of order 2 $M_\odot$, even when hyperons appeared~\cite{RikovskaStone:2006ta}. 
This is a consequence of the repulsive three-body forces which arise naturally in that model~\cite{Guichon:2004xg} because of the self-consistent adjustment of the internal quark structure of the bound nucleons to the strong mean scalar field generated in the dense medium. In this model the energy of a neutron with momentum $\vec{p}_n$ is \begin{equation} E_n(p_n)= \sqrt{p^2_n +\left[m_n^{\rm eff}(n_b)\right]^2} +U_n(n_b), \label{Eq:NeutronE} \end{equation} where $n_b$ is the baryon number density and $U_n$ is the Lorentz vector potential felt by the single neutron. Note that the kinetic term contains the effect of the Lorentz scalar interactions and the total energy resembles that of free particles~\cite{Reddy:1997yr}. The calculation of the DM-nucleon interaction rate is then similar to that for an ideal Fermi gas, but using $m_n^{\rm eff}$ instead of the rest mass in Eq.~\ref{eq:intrateideal} and accounting for the available single neutron spectrum in the $f_{\rm FD}$ distribution with the kinetic part of Eq.~\ref{Eq:NeutronE}. The interaction rate becomes \begin{widetext} \begin{equation} \Omega^{-}(r) = \frac{1}{32\pi^3}\int dt dE_n ds \frac{|\overline{M}(s,t,m_n^{\rm eff})|^2}{s^2-[(m_n^{\rm eff})^2-m_\chi^2]^2} \frac{E_n}{m_\chi}\sqrt{\frac{B(r)}{1-B(r)}}\frac{s}{\gamma(s,m_n^{\rm eff})}f_{\rm FD}(E_n,r)[1-f_{\rm FD}(E_n^{'},r)]. \label{eq:intrate} \end{equation} \end{widetext} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{figs/mneff_r_QMC.pdf} \caption{Radial profile of the ratio of the neutron effective mass to the mass of isolated neutrons for different NS configurations of the QMC EoS, motivated by observations~\cite{Ozel:2016oaf,Antoniadis:2016hxz}.} \label{fig:mneff} \end{figure} The DM-neutron scattering rate, and the kinematically allowed phase space, now depend on the radius through the neutron effective mass (which depends on the baryon number density). 
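In code, the modified dispersion relation of Eq.~\ref{Eq:NeutronE} amounts to a one-line change relative to the free case. The following sketch uses illustrative values only; in practice $m_n^{\rm eff}$ and $U_n$ would be interpolated from the QMC EoS at the local baryon density.

```python
import math

M_N = 0.939  # vacuum neutron mass in GeV

def neutron_energy(p, m_eff, u_n):
    """Single-neutron energy of Eq. (NeutronE): a relativistic kinetic term
    built from the density-dependent effective mass m_eff (which carries the
    Lorentz scalar interactions), shifted by the vector potential u_n."""
    return math.sqrt(p * p + m_eff * m_eff) + u_n

def neutron_energy_free(p):
    """Free-neutron dispersion, recovered when m_eff -> m_n and u_n -> 0."""
    return neutron_energy(p, M_N, 0.0)
```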
In Fig.~\ref{fig:mneff}, we show radial profiles of $m_n^{\rm eff}$ for neutron stars of different masses with a QMC EoS. Note that the effective mass decreases with increasing density towards the NS centre, and is smaller for heavier NSs. \section{Nucleon Form Factors} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{figs/norm_diff_intrate_q_D1.pdf} \caption{Normalised differential DM-neutron interaction rate as a function of the DM energy loss $q_0$ for the scalar operator. We show the interaction rate calculated using a constant neutron coupling $c_n^S(0)$ (light blue line) and that accounting for the neutron form factor dependence on the transferred momentum (magenta line). We have set $m_\chi=1{\rm \,TeV}$, $B=0.5$, $\mu_{F,n}=0.4{\rm \,GeV}$ and $Q_0 =1{\rm \,GeV}$.} \label{fig:intrateqtr1} \end{figure} When the momentum transfer in the DM-nucleon scattering process is sufficiently large, nucleons cannot be treated as point particles. This important observation has been overlooked in all existing NS capture calculations. Given the large DM velocity induced upon infall to the NS, it is necessary to account for the momentum dependence of the nucleon couplings when evaluating the scattering rates.\footnote{Note, however, that the momentum transfer is not large enough for deep inelastic scattering to make a significant contribution to the total cross section.} To characterise these scattering rates, without loss of generality, we shall adopt an Effective Field Theory (EFT) framework to parametrize the coupling of DM to quarks and gluons. For fermionic DM, the lowest order operators arise at mass dimension 6 and have the form $(\overline{\chi}\Gamma\chi)(\overline{q}\Gamma q)$, where $\Gamma$ represents the Lorentz structure of the operator.
For scalar ($S$) and pseudoscalar ($P$) interactions, the operator coefficients are conventionally taken to scale as $y_q/\Lambda^2$, where $y_q$ are the quark Yukawa couplings and $\Lambda$ is the cutoff scale of the effective theory. For vector ($V$), axial ($A$) and tensor ($T$) interactions, the operator coefficients are independent of the Yukawa couplings. The DM-quark couplings induce nucleon level interactions with protons and neutrons~\cite{DelNobile:2013sia}. The latter are usually evaluated at zero momentum transfer, a limit which is valid for direct detection experiments. The squared effective neutron couplings in this limit are \begin{align} c_n^S(m_n^{\rm eff}) &= \frac{2 (m_n^{\rm eff})^2}{\Lambda^4 v^2 }\left[\sum_{q=u,d,s}f_{T_q}^{(n)}+\frac{2}{9}f_{T_G}^{(n)}\right]^2,\label{eq:scalarcoup}\\ c_n^P(m_n^{\rm eff}) &= \frac{2 (m_n^{\rm eff})^2}{\Lambda^4 v^2}\left[\sum_{q=u,d,s}\left(1-3\frac{\overline{m}}{m_q}\right)\Delta_q^{(n)}\right]^2,\label{eq:pseudoscalarcoup}\\ c_n^V &= \frac{9}{\Lambda^4}, \qquad c_n^A = \frac{1}{\Lambda^4}\left[\sum_{q=u,d,s}\Delta_q^{(n)}\right]^2, \end{align} where $v=246$ GeV is the electroweak vacuum expectation value, $\overline{m}\equiv(1/m_u+1/m_d+1/m_s)^{-1}$ and $f_{T_q}^{(n)}, f_{T_G}^{(n)}=1-\sum_{q=u,d,s} f_{T_q}^{(n)}$ and $\Delta_q^{(n)}$ are the hadronic matrix elements, determined via experiment or lattice QCD simulations. 
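As an illustration of how the squared couplings above are assembled, the following sketch evaluates Eq.~\ref{eq:scalarcoup}. The numerical matrix elements are placeholders of a plausible order of magnitude, not the lattice QCD inputs actually used; the point is simply to exhibit the $(m_n^{\rm eff})^2$ scaling of the scalar coupling.

```python
V_EW = 246.0  # electroweak vacuum expectation value in GeV

# Illustrative neutron matrix elements (placeholders, not the values used
# in the paper); f_TG is fixed by the sum rule quoted in the text.
F_TQ = {"u": 0.011, "d": 0.027, "s": 0.045}
F_TG = 1.0 - sum(F_TQ.values())

def c_n_scalar(m_eff, lam):
    """Squared scalar neutron coupling of Eq. (scalarcoup), with the vacuum
    mass replaced by the in-medium effective mass m_eff (GeV); lam is the
    EFT cutoff scale Lambda (GeV)."""
    bracket = sum(F_TQ.values()) + (2.0 / 9.0) * F_TG
    return 2.0 * m_eff ** 2 * bracket ** 2 / (lam ** 4 * V_EW ** 2)
```

Since $c_n^S \propto (m_n^{\rm eff})^2$, lowering the neutron mass in the core directly rescales the scalar interaction rate.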
\begin{figure*}[th] \centering \includegraphics[width=\textwidth]{figs/C_mDM_n_QMC_3ops_ratio.pdf} \caption{Capture rate in the optically thin limit for scalar (top), pseudoscalar (middle) and vector (bottom) operators as a function of the DM mass $m_\chi$, using the free Fermi gas approach with constant neutron couplings (dashed dark blue line) and momentum dependent couplings (orange line), and the interacting neutron approach for constant nucleon couplings (dashed light blue line) and momentum dependent couplings (magenta line), for NSs with a QMC EoS and $M_\star=1M_\odot$ (left), $M_\star=1.5M_\odot$ (middle) and $M_\star=1.9M_\odot$ (right). In the lower panels, we show the ratios of the capture rates with respect to that for the free Fermi gas calculation with constant hadronic form factors.} \label{fig:Cmdm} \end{figure*} In contrast to direct detection, in neutron stars the transferred momentum could be of order $10{\rm \,GeV}$, depending on the NS and DM masses~\cite{Bell:2020jou}, and hence the momentum dependence of the nucleon couplings~\cite{Thomas:2001kw} should be included. The squared neutron form factors read \begin{align} c_n^I(t) &= \frac{c_n^I}{(1-t/Q_0^2)^2},\quad I\in\{S,P,V,A,T\},\label{eq:tdep} \end{align} where we have introduced the dependence on the transferred momentum through the Mandelstam variable $t$, and $Q_0$ is an energy scale that depends on the specific hadronic form factor. For simplicity, we take $Q_0=1{\rm \,GeV}$ for all operators, a conservative choice. To illustrate the importance of correctly including this effect, we show in Fig.~\ref{fig:intrateqtr1} the normalised differential interaction rate for the scalar operator, with (magenta) and without (light blue) accounting for the momentum dependence. Here we set the neutron mass equal to the rest mass.
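The dipole suppression of Eq.~\ref{eq:tdep} is elementary but worth making explicit; recall that in the physical (space-like) region $t < 0$, so the denominator grows with the momentum transfer. A minimal sketch:

```python
def c_n_of_t(c_n0, t, q0=1.0):
    """Momentum-dependent squared coupling of Eq. (tdep):
    c_n(t) = c_n(0) / (1 - t / Q0^2)^2, with t in GeV^2 and the scale
    Q0 = 1 GeV adopted in the text."""
    return c_n0 / (1.0 - t / q0 ** 2) ** 2
```

At $t = -1{\rm \,GeV}^2$ the squared coupling is already suppressed by a factor of 4, and at $t = -9{\rm \,GeV}^2$ by a factor of 100, which is the origin of the large-$t$ cutoff behaviour visible in Fig.~\ref{fig:intrateqtr1}.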
We see that the momentum dependence of Eq.~\ref{eq:tdep} suppresses the interaction rate when the energy transfer is large, shifting the normalized spectrum towards lower energy transfer. \section{Results} \label{sec:results} In Fig.~\ref{fig:Cmdm} we illustrate the impact of effective masses, and momentum dependent couplings, on the DM capture rate in NSs of mass $1.0M_\odot$, $1.5M_\odot$ and $1.9M_\odot$, assuming the QMC EoS. Results are shown for three representative choices of EFT operators, namely scalar, pseudoscalar and vector; the exact expressions for their scattering amplitudes can be found in Table~2 of Ref.~\cite{Bell:2020jou}. Results are shown for the free Fermi gas approximation with neutron form factors at zero momentum transfer (dashed dark blue) and momentum dependent form factors (orange), and for the interacting neutron approach, characterised by $m_n^{\rm eff}$, with (magenta) and without (dashed light blue) momentum dependent form factors. From these figures, we can conclude the following: \begin{enumerate} \item The introduction of momentum-dependent form factors does not affect the capture rate when the DM mass is below $\sim0.2{\rm \,GeV}$, since the transferred momentum is small. (Compare orange line with dashed dark blue line, or magenta line with dashed light blue line for any operator or NS mass.) \item The momentum dependence of the couplings strongly suppresses the interaction rate at large DM mass, $m_\chi\gtrsim 1{\rm \,GeV}$. In fact, the form factors act as an effective cutoff on the kinematically available values of $t$ that are important for capture, removing large-$t$ contributions to the interaction rate (see Fig.~\ref{fig:intrateqtr1}). 
By including this correction in the free Fermi gas approximation, the capture rate is lowered by more than one order of magnitude (pseudoscalar operator) for the $1.5M_\odot$ NS (middle column) and more than two orders of magnitude for the heaviest NS considered here (compare dashed dark blue with orange lines). \item For DM masses below $\sim0.2{\rm \,GeV}$ the capture rate is scarcely affected by the interacting baryon approach in the case of the pseudoscalar (second row) and vector (bottom row) interactions. This low mass region corresponds to the Pauli blocking regime, where DM capture preferentially occurs close to the NS surface~\cite{Bell:2020jou} where neutrons are not strongly degenerate and $m_n^{\rm eff}\simeq m_n$ (see Fig.~\ref{fig:mneff}). For a scalar interaction (top panels), however, the use of $m_n^{\rm eff}$ results in an overall rescaling (light blue shaded region) because the interaction rate scales with the neutron mass as $m_n^2$. \item Nucleon interactions significantly affect the capture when $m_\chi \gtrsim m_n$. This is true for all EFT operators. For the $1.5M_\odot$ NS, the capture rate is reduced by up to $\sim 1$ order of magnitude (see the light blue or magenta shaded regions, for constant or momentum dependent couplings, respectively). This occurs because the capture rate depends on the DM-neutron reduced mass, which approaches the neutron mass when $m_\chi\gg m_n$. Unlike the capture of light DM, where Pauli blocking restricts the available neutron final states, DM of mass $m_\chi\gtrsim m_n$ can be captured deep in the star, where $m_n^{\rm eff}<m_n$ (see Fig.~\ref{fig:mneff}). Thus, a lower $m_n^{\rm eff}$ induces a smaller reduced mass and hence a suppression of the capture rate. \item The suppression of the capture rate due to the momentum dependence of the form factors (dashed dark blue line vs orange line) is larger than that due to neutron interactions (dashed dark blue line vs dashed light blue line).
When both effects are included (magenta line), the total reduction is smaller than the product of the two individual effects. The overall suppression is largest for the pseudoscalar operator and large NS mass, where the reduction reaches up to 3 orders of magnitude. \end{enumerate} Finally, we estimate the uncertainties in our results:\\ (i) Based on the known values of the hadronic matrix elements~\cite{Zanotti:2017bte,Alarcon:2017ivh}, a conservative choice that holds for all operators is to take $Q_0\sim0.9\pm0.1{\rm \,GeV}$. Comparing the capture rates for $Q_0 = 0.9{\rm \,GeV}$ and $Q_0 = 1{\rm \,GeV}$, we find that the smaller value of $Q_0$ results in a larger suppression in the capture rate by a factor of order 1.2--1.4, depending on the operator. In this sense, our choice of $Q_0 = 1{\rm \,GeV}$ is conservative.\\ (ii) We have compared our results for the QMC EoS with those for BSk24~\cite{Goriely:2013,Pearson:2018tkr,Chamel:2009yx} (a Skyrme-type EoS) and found little difference. For example, the absolute differences in the capture rates are within 2--10\% for a $1.5M_\odot$ NS, and within 10--30\% for a $1 M_\odot$ NS, depending on the operator. \section{Conclusion} \label{sec:conclusion} The capture of dark matter in neutron stars has the potential to provide a very sensitive probe of dark matter interactions with ordinary matter. However, all previous treatments of the DM capture rate have neglected two important effects that are inherent to the physics of neutron stars. First, treating nucleon targets as an ideal Fermi gas is a poor approximation when the capture occurs in the degenerate stellar interior, where strong interactions are expected to take place. In the interacting nucleon framework, the neutron mass is essentially replaced by an effective mass. This radially dependent single-neutron mass suppresses the capture of DM in the mass range $m_\chi \gtrsim m_n$. This effect is stronger in denser NSs, where the neutron degeneracy is higher.
Second, neutrons cannot be treated as point-like particles when calculating the DM-nucleon scattering cross section. Unlike dark matter direct detection experiments, where the limit of zero momentum transfer is always valid, dark matter scattering in a NS requires that the momentum dependence of the nucleon form factors be retained. This is necessary because the dark matter is accelerated to quasi-relativistic velocities upon infall to a NS, due to the large gravitational field, resulting in collisions with appreciable momentum transfer when $m_\chi \gtrsim m_n$. The suppression of the capture rate is most pronounced for heavier NSs, for which the gravitational fields are stronger. We have shown that the combination of these two effects can suppress the dark matter capture rate by up to 3 orders of magnitude for DM-neutron scattering. We note that these effects are also relevant for DM scattering from other hadronic NS constituents, such as protons or hyperons. In addition, these effects are expected to have an even greater impact in the very heavy DM mass regime. This is because the reduced energy transfer per collision (as illustrated in Fig.~\ref{fig:intrateqtr1}) implies that multi-scattering will be relevant at lower DM masses, and that a larger number of collisions will be required to achieve capture. \begin{acknowledgments} NFB, SR and AWT were supported in part by the Australian Research Council and MV by the Commonwealth of Australia. We thank Filippo Anzuini and Andrew Melatos for discussions. \end{acknowledgments}
\section{Introduction} \subsection{Description of the Results} In this work, we cover two main topics: \begin{itemize} \item[(1)] Stable (dg) operads, which are a new class of operads, and which are, in a particular sense, stable analogues of $\mathbb{E}_\infty$ operads. \item[(2)] Algebraic models of $p$-adic stable homotopy types using these operads. \end{itemize} First, let us recall the notion of an $\mathbb{E}_\infty$ cochain operad and how it gives rise to algebraic models of $p$-adic homotopy types. Let $\mathpzc{E}^\dagger$ be a model for the $\mathbb{E}_\infty$ cochain operad (throughout this work, we have used the symbol $\dagger$ to distinguish contexts with cochain complexes from contexts with chain complexes). Given a cochain complex $X$, the structure of an algebra over $\mathpzc{E}^\dagger$ encodes a homotopy coherent commutative, associative and unital multiplication. If we take the cohomology of the complex, killing the higher homotopies, we find that $\text{H}^\bullet(X)$ inherits a (graded) commutative algebra structure in the traditional sense. In fact, the cohomology inherits even more structure. It possesses certain cohomology operations $P^s$, $s \in \mathbb{Z}$, which satisfy an instability condition, and as a result becomes an unstable module over $\mathcal{B}$, the algebra of generalized Steenrod operations. \\ A particular case of such $\mathpzc{E}^\dagger$-algebras is given by the cochains $\text{C}^\bullet(X)$ on spaces $X$. In this case, the algebra structure on the cohomology is given by the cup product, while the operations are the Steenrod operations.
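To make the instability condition concrete, recall the classical case $p = 2$, where the relevant operations on the cohomology of a space are the Steenrod squares: for a class $x \in \text{H}^\bullet(X;\mathbb{F}_2)$ one has
\[
\mathrm{Sq}^s x = 0 \quad \text{for } s > |x|, \qquad \mathrm{Sq}^{|x|} x = x \smile x,
\]
so the top nonvanishing operation recovers the cup square. It is this vanishing range that constitutes the instability condition, and it is precisely what fails in the stable setting considered below.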
While the cochains, as a dg module, might not remember the homotopy type of a space, in~\cite{Mandell}, Mandell demonstrated that, if we take cochains with coefficients in $\overline{\mathbb{F}}_p$, the cochains functor \[ \text{C}^\bullet(-;\overline{\mathbb{F}}_p) \colon \mathsf{Spc}^{\text{op}} \to \widebar{\mathpzc{E}}^\dagger\text{-}\mathsf{Alg} \] to $\widebar{\mathpzc{E}}^\dagger$-algebras (where $\widebar{\mathpzc{E}}^\dagger$ is the $\mathbb{E}_\infty$ cochain operad over $\overline{\mathbb{F}}_p$) induces a full embedding of the homotopy category of spaces into the derived category of $\widebar{\mathpzc{E}}^\dagger$-algebras when we restrict to connected nilpotent $p$-complete spaces of finite $p$-type, which is to say that the $\widebar{\mathpzc{E}}^\dagger$-algebras remember the homotopy types of such spaces. Thus, while rational homotopy types admit algebraic models via CDGAs, when working $p$-adically, we can use $\widebar{\mathpzc{E}}^\dagger$-algebras. \\ Now we move on to stable operads and stable homotopy types. First of all, we show that the operad $\mathpzc{E}^\dagger$ possesses a stabilization map \[ \Psi \colon \Sigma \mathpzc{E}^\dagger \to \mathpzc{E}^\dagger \] from its operadic suspension to itself. Upon iteration, via an inverse limit, we produce a new operad, denoted $\mathpzc{E}_{\text{st}}^\dagger$, which is our stable operad. In fact, we have a new class of operads, the stable operads, in the sense that we are able to perform the above construction for multiple models of the $\mathbb{E}_\infty$ operad (we have a stable Barratt-Eccles operad, a stable McClure-Smith operad, and also a stable Eilenberg-Zilber operad, though the Eilenberg-Zilber is not quite an $\mathbb{E}_\infty$ operad).
The stable operad $\mathpzc{E}_{\text{st}}^\dagger$ appears to be of independent interest outside of its application, which we discuss below, to $p$-adic stable homotopy theory; for example, due to its homotopy additivity, which we also discuss below. \\ Having constructed the stable operad, we first demonstrate that one has homotopical control over $\mathpzc{E}_{\text{st}}^\dagger$ and the corresponding category of algebras $\mathpzc{E}_{\text{st}}^\dagger\text{-}\mathsf{Alg}$ in the following sense. \begin{Theorem}\label{thm:monadweakequiv} \textit{The monad $\mathbf{E}^\dagger_{\mathbf{st}}$ associated to $\mathpzc{E}_{\emph{st}}^\dagger$ preserves weak equivalences.} \end{Theorem} \begin{Theorem}\label{thm:semimodelstr} \textit{The category $\mathpzc{E}_{\text{st}}^\dagger\text{-}\mathsf{Alg}$ admits a Quillen semi-model structure where the weak equivalences and fibrations are the quasi-isomorphisms and degreewise epimorphisms.} \end{Theorem} (See Definition~\ref{def:category_semi_model_str} for the definition of a Quillen semi-model structure, a weakening of the better-known notion of a Quillen model structure.) Next, we develop a theory of cohomology operations for algebras over $\mathpzc{E}_{\text{st}}^\dagger$. We once again get operations $P^s$ for $s \in \mathbb{Z}$, but they now no longer satisfy the instability condition. \begin{Theorem}\label{thm:homops} \textit{We have the following:} \begin{itemize} \item[(i)] \textit{The cohomologies of $\mathpzc{E}_{\emph{st}}^\dagger$-algebras possess natural operations $P^s$ for $s \in \mathbb{Z}$ which satisfy the Adem relations.
These operations do not, however, satisfy the instability condition seen in the unstable case.} \item[(ii)] \textit{Given a cochain complex $X$, we have a natural isomorphism:} \[ \emph{H}^\bullet(\mathbf{E}^\dagger_{\normalfont{\textbf{st}}} X) \cong \widehat{\mathcal{B}} \otimes \emph{H}^\bullet(X) \] \end{itemize} \end{Theorem} Here $\widehat{\mathcal{B}}$ is a certain completion, with respect to a filtration by excess, of the algebra $\mathcal{B}$. Note that, in the unstable case of $\mathpzc{E}^\dagger$, in (ii) above, we would not only have to tensor with $\mathcal{B}$ to add in the operations, but also enforce the instability condition and take a polynomial algebra to add in products. In the case of $\mathpzc{E}_{\text{st}}^\dagger$, the instability of the operations and the products disappear. \\ Next, we justify the ``stable'' in ``stable operad''. This should of course be a statement about homotopy coherent, or $\infty$-, additivity, and this is exactly what we demonstrate. In particular, we demonstrate that the monad $\mathbf{E}_{\textbf{st}}^\dagger$ is homotopy coherent, or $\infty$-, additive, in the following sense. \begin{Theorem}\label{thm:stab} \textit{We have the following:} \begin{itemize} \item[(i)] \textit{For dg modules $X$ and $Y$, we have a natural quasi-isomorphism:} \[ \mathbf{E}^\dagger_{\normalfont{\textbf{st}}}(X \oplus Y) \sim \mathbf{E}^\dagger_{\normalfont{\textbf{st}}}(X) \oplus \mathbf{E}^\dagger_{\normalfont{\textbf{st}}}(Y) \] \item[(ii)] \textit{More generally, for cofibrant $\mathpzc{E}_{\emph{st}}^\dagger$-algebras $A$ and $B$, we have a natural quasi-isomorphism:} \[ A \amalg B \sim A \oplus B \] \end{itemize} \end{Theorem} For comparison, in the unstable case, in (ii), we have $A \otimes B$ in place of $A \oplus B$. \\ Finally, we move on to the application to $p$-adic stable homotopy types. For this, we need to fix a model for spectra.
We take the classical sequential model in the sense of Bousfield-Friedlander, with the exception that, rather than the ordinary suspension $- \wedge \mathbb{S}^1$, we use the Kan suspension of based simplicial sets. We then define an appropriate notion of spectral cochains (concrete in the sense that we get honest dg modules) and prove the following, providing another sense in which $\mathpzc{E}_{\text{st}}^\dagger$ is a stable analogue of $\mathpzc{E}^\dagger$. \begin{Theorem}\label{thm:specopaction} \textit{Given any spectrum $E$, the spectral cochains $\emph{C}^\bullet(E)$ naturally form an algebra over $\mathpzc{E}_{\emph{st}}^\dagger$.} \end{Theorem} Finally then, we get algebraic models for $p$-adic stable homotopy types in the following sense -- in the statement, the $\overline{\mathbb{F}}_p$ cochains functor $\overline{\text{C}}{}^\bullet$ is constructed from the $\mathbb{F}_p$ cochains functor $\text{C}^\bullet$ simply by tensoring with $\overline{\mathbb{F}}_p$, and similarly, the operad $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$, an operad over $\overline{\mathbb{F}}_p$, is constructed from $\mathpzc{E}_{\text{st}}^\dagger$ by tensoring with $\overline{\mathbb{F}}_p$. \begin{Theorem}\label{thm:modelsstable} \textit{The spectral cochains functor} \[ \overline{\emph{C}}{}^\bullet \colon \mathsf{Sp}^{\emph{op}} \to \widebar{\mathpzc{E}}_{\emph{st}}^\dagger\text{-}\mathsf{Alg} \] \textit{induces a full embedding of the stable homotopy category into the derived category of $\widebar{\mathpzc{E}}_{\emph{st}}^\dagger$-algebras when we restrict to bounded below $p$-complete spectra of finite $p$-type.} \end{Theorem} We mentioned above that rational homotopy types can be modelled by commutative DGAs, and also that $p$-adic homotopy types can be modelled by $\mathbb{E}_\infty$ DGAs. It is also well-known that rational stable homotopy types can be modelled by chain complexes. Our result for $p$-adic stable homotopy types then completes the following picture.
\vspace{5mm} \begin{center} \begin{tabular}{m{4.25cm} m{3.5cm} m{3.5cm}} \hline \textbf{Algebraic models of homotopy types} & unstable & stable \tabularnewline \hline \tabularnewline [-1em] rational & commutative DGAs & chain complexes \tabularnewline \tabularnewline [-1em] $p$-adic & $\mathbb{E}_\infty$ DGAs & $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$ DGAs \end{tabular} \end{center} We can make a remark on the structure that is captured by the operad $\mathpzc{E}_{\text{st}}^\dagger$. In both the unstable and stable $p$-adic cases, the models arise via cochains, with coefficients in $\overline{\mathbb{F}}_p$. Let us consider just $\mathbb{F}_p$ cochains. Given a space $X$, the mod $p$ cochains are given, as a spectrum, by $[\Sigma^\infty_+X, \text{H}\mathbb{F}_p]$. This object carries the following structure: \begin{itemize} \item[(i)] It is an $\text{H}\mathbb{F}_p$-module via ``pointwise'' scalar multiplication. \item[(ii)] It possesses an action, via postcomposition, by $[\text{H}\mathbb{F}_p, \text{H}\mathbb{F}_p]$. \item[(iii)] It is a ring spectrum via the multiplication of $\text{H}\mathbb{F}_p$. \end{itemize} These manifest in a dg context as follows: \begin{itemize} \item[(i$^\prime$)] The cochains can be modelled as a dg module. \item[(ii$^\prime$)] The cohomology inherits an action by $\mathcal{B}/(1-P^0) \cong \mathcal{A}$. \item[(iii$^\prime$)] The cochains form an $\mathbb{E}_\infty$ dg algebra. \end{itemize} In fact, we shall see that (ii$^\prime$) is a consequence of (iii$^\prime$). Now let us consider cochains on a spectrum $E$. The cochains are given, as a spectrum, by $[E, \text{H}\mathbb{F}_p]$. This object carries the structure described in (i) and (ii) above. It no longer possesses a ring structure as, although the multiplication of $\text{H}\mathbb{F}_p$ is still present, to define a ``pointwise'' multiplication, one needs a diagonal map, which general spectra, unlike spaces and their suspension spectra, do not possess.
The structure that is still present manifests, respectively, in our work as (i$^\prime$) above together with the following modified version of (ii$^\prime$): the cohomology inherits an action by $\widehat{\mathcal{B}}/(1-P^0) \cong \mathcal{A}$. We shall see that these operations in (ii$^\prime$) are a consequence of the $\mathpzc{E}_{\text{st}}^\dagger$-algebra structure, and so one can say that it is primarily these operations which this operad serves to encode. \subsection{Acknowledgments} The author would like to thank Igor Kriz, under whose supervision this project was completed at the University of Michigan. We also thank Paul Goerss, Mike Mandell and Peter May, who made many helpful comments on earlier versions of this work. \subsection{Notations, Terminology and Conventions}\label{sec:nots_convs} We list here some notations, terminology and conventions which are used throughout the work. We do this for ease of reference and because setting them in place now will allow us to be precise in our statements and constructions in later parts of the work. \newpage \textbf{General:} \begin{itemize} \item For each integer $n \ge 0$, $[n]$ denotes the poset $0 \to 1 \to \cdots \to n$. \item For each integer $n \ge 0$, $(n)$ denotes the set $\{1,\dots,n\}$; $(0)$ is the empty set. \item For $n \ge 0$, $\Sigma_n$ denotes the symmetric group on $n$ letters; $\Sigma_0$ is the trivial group, where the unique element is thought of as representing the unique isomorphism on the empty set. \end{itemize} \textbf{In relation to simplicial sets:} \begin{itemize} \item The category of simplicial sets will be denoted by $\mathsf{Spc}$, and that of based simplicial sets by $\mathsf{Spc}_*$. \item For each integer $d \ge 0$, $\Delta_d$ denotes the standard $d$-dimensional simplex as a simplicial set. \item Given a simplicial set $S$, $S_n^{\text{nd}}$ denotes the non-degenerate $n$-simplices of $S$. 
\end{itemize} \textbf{In relation to (co)chain complexes:} \begin{itemize} \item The term \textit{chain complex} will refer to a graded module equipped with a differential of degree $-1$ and the term \textit{cochain complex} will refer to a graded module equipped with a differential of degree $+1$; in addition, we shall let \textit{differential graded module} refer to either of these two possibilities. The phrase \textit{differential graded} is often shortened to \textit{dg}. By default, all dg modules are unbounded. The category of chain complexes over $k$, where $k$ is a field, is denoted by $\mathsf{Ch}_k$, and the category of cochain complexes over $k$ is denoted by $\mathsf{Co}_k$; the symbol $\mathsf{DG}_k$ will denote either of these two possibilities. \item Given a (co)chain complex $X$ over $k$, $X^\vee$ denotes the dual (co)chain complex of $X$. Thus $X^\vee := \text{F}(X,k[0])$ where $\text{F}$ denotes the internal hom of (co)chain complexes. These yield contravariant functors $\mathsf{Ch}_k \to \mathsf{Ch}_k$ and $\mathsf{Co}_k \to \mathsf{Co}_k$, which are involutions up to natural isomorphism, and which are both denoted by $(-)^\vee$. More concretely, unravelling the definition, we find that, given a dg module $X$ and a fixed degree $d$, $(X^\vee)_d$ consists of module maps $X_{-d} \to k$ in the chain case, and $X^{-d} \to k$ in the cochain case. The negative sign here ensures that the induced differential on $X^\vee$ has the same degree as the differential of the original dg module $X$. Note that if $X$ is concentrated in non-negative degrees, $X^\vee$ will be concentrated in non-positive degrees, and vice versa. \item Given a chain complex $X$ over $k$, $X^\dagger$ denotes the associated cochain complex, where $(X^\dagger)^p = X_{-p}$; similarly, if $X$ is a cochain complex over $k$, $X^\dagger$ denotes the associated chain complex, where $(X^\dagger)_p = X^{-p}$. 
These yield an inverse pair of functors $\mathsf{Ch}_k \to \mathsf{Co}_k$ and $\mathsf{Co}_k \to \mathsf{Ch}_k$, both of which are denoted by $(-)^\dagger$. As an example, note that, given any space, the cochains on the space are constructed from the chains by first dualizing via $(-)^\vee$, and then reindexing via $(-)^\dagger$. \item Given dg modules $X$ and $Y$, the tensor product $X \otimes Y$ is defined, as usual, by letting $x \otimes y$ have degree $|x| + |y|$, and the differential follows the standard sign convention: \[ \partial (x \otimes y) = \partial x \otimes y + (-1)^{|x|} x \otimes \partial y \] This tensor product on dg modules is always endowed with the symmetry $X \otimes Y \to Y \otimes X$ defined by: \[ x \otimes y \mapsto (-1)^{|x||y|}(y \otimes x) \] The internal hom $\text{F}(X,Y)$ is defined, in degree $d$, as the collection of degree $d$ graded module maps $f \colon X_\bullet \to Y_{\bullet + d}$ (no compatibility with the differential is required for these maps), and the differential on $\text{F}(X,Y)$ is given by: \[ \partial f = \partial \circ f - (-1)^{|f|} (f \circ \partial) \] \item Given a dg module $X$ and $n \in \mathbb{Z}$, we let $X[n]$ be the dg module defined by setting $X[n]_d = X_{d-n}$. Note that, if we let $k$ denote a ground field, $X[1] \cong X \otimes k[1]$ and $X[-1] \cong \text{F}(k[1],X)$. \item Let $k$ denote a field.
We shall often denote the chain or cochain complex \[ \cdots \leftarrow 0 \leftarrow \underset{\text{deg.} \: n}{k} \leftarrow 0 \leftarrow \cdots \] \[ \cdots \rightarrow 0 \rightarrow \underset{\text{deg.} \: n}{k} \rightarrow 0 \rightarrow \cdots \] by $\mathbb{S}^n$ and refer to them as \textit{sphere complexes}, and the chain or cochain complex \[ \cdots \leftarrow 0 \leftarrow \underset{\text{deg.} \: n-1}{k} \overset{\text{id}}\leftarrow \underset{\text{deg.} \:n}{k} \leftarrow 0 \leftarrow \cdots \] \[ \cdots \rightarrow 0 \rightarrow \underset{\text{deg.} \: n-1}{k} \overset{\text{id}}\rightarrow \underset{\text{deg.} \:n}{k} \rightarrow 0 \rightarrow \cdots \] by $\mathbb{D}^n$, and refer to them as \textit{disk complexes}. \item All (co)chains on spaces or spectra are taken to be normalized. \end{itemize} \textbf{In relation to operads and their (co)algebras:} \begin{itemize} \item A \textit{chain operad} is an operad in $\mathsf{Ch}_k$. A \textit{cochain operad} is an operad in $\mathsf{Co}_k$. A \textit{differential graded operad}, or \textit{dg operad}, is an operad in either one of $\mathsf{Ch}_k$ and $\mathsf{Co}_k$. \item The category of chain operads is denoted by $\mathsf{Op}(\mathsf{Ch}_k)$ and the category of cochain operads is denoted by $\mathsf{Op}(\mathsf{Co}_k)$. \item All operads are dg operads, and are symmetric. \item The notations, in the sense of the typeface, for operads and their corresponding monads and free algebra functors will follow the following rule: if $\mathpzc{P}$ denotes an operad, the corresponding monad and free algebra functor will both be denoted by $\mathbf{P}$. \item Given a dg operad $\mathpzc{P}$, the categories of $\mathpzc{P}$-algebras and $\mathpzc{P}$-coalgebras, respectively, are denoted by $\mathpzc{P}\text{-}\mathsf{Alg}$ and $\mathpzc{P}\text{-}\mathsf{Coalg}$. \item We have seen that we have a reindexing operator $(-)^\dagger$ on dg modules. We also have such an operator on dg operads. 
When this operation is applied aritywise to an operad, we in fact get another operad (an easy check shows that the reindexing operation is compatible with all the structure data in an operad). As such, if $\mathpzc{P}$ is an operad in $\mathsf{Op}(\mathsf{Ch}_k)$, we let $\mathpzc{P}^{\dagger}$ denote the \textit{associated operad in $\mathsf{Op}(\mathsf{Co}_k)$}, where $\mathpzc{P}^\dagger (n) := \mathpzc{P}(n)^\dagger$. Similarly, if $\mathpzc{Q}$ is an operad in $\mathsf{Op}(\mathsf{Co}_k)$, we let $\mathpzc{Q}^{\dagger}$ denote the \textit{associated operad in $\mathsf{Op}(\mathsf{Ch}_k)$}, where again $\mathpzc{Q}^\dagger (n) := \mathpzc{Q}(n)^\dagger$. These yield an inverse pair of functors $\mathsf{Op}(\mathsf{Ch}_k) \to \mathsf{Op}(\mathsf{Co}_k)$ and $\mathsf{Op}(\mathsf{Co}_k) \to \mathsf{Op}(\mathsf{Ch}_k)$, both of which are denoted by $(-)^\dagger$. Note: one can also apply the dualization operator $(-)^\vee$ aritywise to an operad, which yields, under suitable finiteness hypotheses, a co-operad, but we shall have no need for this. \item We have seen that we have a reindexing operator $(-)^\dagger$ on dg modules and dg operads. We also have such an operator on (co)algebras over dg operads. In fact, given a dg operad $\mathpzc{P}$ and a (co)algebra $A$ over $\mathpzc{P}$, an easy check shows that $A^\dagger$ is canonically a $\mathpzc{P}^\dagger$ (co)algebra. Moreover, we also have a dualization operator $(-)^\vee$ on (co)algebras over dg operads, though this requires some finiteness hypotheses. We mention the one case which we shall make use of: if $\mathpzc{P}$ is a dg operad which is such that each $\mathpzc{P}(n)$, for $n \ge 0$, is of finite dimension in each degree, and if $A$ is a $\mathpzc{P}$-coalgebra, then $A^\vee$ is canonically a $\mathpzc{P}$-algebra.
\end{itemize} \section{Stabilizations of $\mathbb{E}_\infty$ Operads}\label{sec:stabilizations} In this section, we shall construct stable analogues of the Eilenberg-Zilber, McClure-Smith and Barratt-Eccles operads. Note that only the latter two constitute stabilizations of $\mathbb{E}_\infty$ operads. In order to construct actions of the latter two on spectral cochains, however, as we will do later, it is convenient to also have a stable analogue of the Eilenberg-Zilber operad. Prior to constructing these stabilizations, we first need to discuss some general aspects of dg operads and their algebras, and some basic constructions on simplicial sets, which we shall also need in later chapters. For reference throughout this chapter: $p$ will denote a fixed but unspecified prime, and, when considering the aforementioned operads, the ground field will be taken to be $\mathbb{F}_p$. \subsection{Kan Suspensions and Moore Loopings of Simplicial Sets} As is standard, we let $\Delta$ denote the simplex category. We also let $\mathsf{Spc}$ denote the category of spaces, by which we mean simplicial sets, and let $\mathsf{Spc}_*$ denote the category of based spaces, by which we mean based simplicial sets. For each $d \ge 0$, we let $\Delta_d$ denote the standard $d$-simplex. For based simplicial sets, there is more than one possible choice of suspension functor. The most obvious one is perhaps $- \wedge \mathbb{S}^1$, where $\mathbb{S}^1 = \Delta_1/\partial\Delta_1$, but we will use a different one, the Kan suspension, which is weakly equivalent to $- \wedge \mathbb{S}^1$. Similarly, rather than $\text{F}(\mathbb{S}^1,-)$ for loopings, we will use a different, but weakly equivalent, looping functor, the Moore looping. We reserve the standard suspension and loops notations, $\Sigma$ and $\Omega$, for the Kan suspensions and Moore loopings.
\\ In order to define the Kan suspension, we first recall a cone functor $\text{C}(-)$ for based simplicial sets (see Chapter 3, Section 5 in~\cite{GoerssJardine} for details on this construction). Let $S$ be a based simplicial set. In degree $d$, we have that: \[ \text{C}(S)_d = S_d \vee S_{d-1} \vee \dots \vee S_{0} \] Moreover, the action of the simplicial operators is as follows. Consider some map $\theta \colon [d] \to [e]$ in $\Delta$. We want a function $S_e \vee S_{e-1} \vee \dots \vee S_{0} \to S_d \vee S_{d-1} \vee \dots \vee S_{0}$. Let $i \in \{0,1,\dots,e\}$. Our function will be a based one, so that we need to define, for each such $i$, a map $S_i \to S_d \vee S_{d-1} \vee \dots \vee S_{0}$. Consider the last $i+1$ elements of $[e]$. If the preimage under $\theta$ of these elements is empty, our map is just the constant one at the basepoint. Otherwise, we form the restricted map with source the preimage of the final $i+1$ elements of $[e]$ and target these final $i+1$ elements of $[e]$, and then reindex so that we have a map \begin{equation}\label{eq:theta_i} \theta(i) \colon [j] \to [i] \end{equation} for some $j \in \{0,1,\dots,d\}$. The desired map is then $\theta(i)^* \colon S_i \to S_j$ followed by the inclusion into $S_d \vee S_{d-1} \vee \dots \vee S_{0}$. \begin{Example}\label{examp:cones_Delta_k_+} For any $d \ge 0$, we have an isomorphism of based simplicial sets \[ \Delta_{d+1} \overset{\cong}\longrightarrow \text{C}(\Delta_{d+}) \] where $\Delta_{d+1}$ is based at $0$. The map is as follows. Consider some $\theta \colon [e] \to [d+1]$ in $(\Delta_{d+1})_e$. We have that $\text{C}(\Delta_{d+})_e = (\Delta_{d})_e \amalg \cdots \amalg (\Delta_{d})_0 \amalg *$. If $\theta$ does not map anything to the final $d+1$ elements of $[d+1]$, that is, if it maps everything to $0$, then we send it to $*$.
Otherwise, we get some new map $\theta(d) \colon [j] \to [d]$ (the notation here is as in (\ref{eq:theta_i})), for some $j \in \{0,1,\dots,d\}$, and $\theta$ is mapped to this element of $\text{C}(\Delta_{d+})_e$. An easy check shows that this does indeed define a map, in fact an isomorphism, of based simplicial sets. \end{Example} Now we proceed to discuss Kan suspensions of based simplicial sets. First, note that, given any based simplicial set $S$, we have a canonical inclusion map \begin{equation}\label{eq:X_to_cone} i \colon S \to \text{C}(S) \end{equation} which, in degree $d$, is just the inclusion $S_d \to S_d \vee S_{d-1} \vee \cdots \vee S_0$ of the $S_d$ summand (this is a map of based simplicial sets because the simplicial operators act on the wedge sums ``summand-wise''). \begin{Definition}\label{def:kan_susps} Given a based simplicial set $S$, its \textit{Kan suspension}, denoted $\Sigma S$, is defined by setting \[ \Sigma S := \text{C}(S)/S \] where the inclusion $S \to \text{C}(S)$ is as above. \end{Definition} Thus, given a based simplicial set $S$ and $d \ge 0$, we have: \[ (\Sigma S)_d \cong \left\{ \begin{array}{ll} S_{d-1} \vee \cdots \vee S_0 & \text{if $d \ge 1$} \\ * & \text{if $d = 0$} \end{array} \right. \] In particular, for the case of a disjointly based $S_+$, where $S$ is now an unbased simplicial set, we have: \[ (\Sigma S_+)_d \cong \left\{ \begin{array}{ll} S_{d-1} \amalg \cdots \amalg S_0 \amalg * & \text{if $d \ge 1$} \\ * & \text{if $d = 0$} \end{array} \right. \] \begin{Remark} The relation between the Kan suspension and the more usual smash suspension $- \wedge \mathbb{S}^1$, where $\mathbb{S}^1 = \Delta_1/\partial\Delta_1$, is that, for based simplicial sets $S$, there is a natural weak equivalence: \[ S \wedge \mathbb{S}^1 \to \Sigma S \] For a proof, see Proposition 2.17 in~\cite{MarcStephan}.
\hfill $\vert\vert$ \end{Remark} We now record some simple facts and definitions regarding the Kan suspension, which will be useful later. Given any simplicial set $S$, we let $S_d^{\text{nd}}$ denote the collection of non-degenerate $d$-simplices of $S$. Note that, for any $S$, $S_0^{\text{nd}} = S_0$. \begin{Proposition}\label{prop:nd_susp} Let $S$ be a based simplicial set. We have: \[ (\Sigma S)_d^{\emph{nd}} \cong \left\{ \begin{array}{ll} S_{d-1}^{\emph{nd}} & \emph{if $d \ge 2$} \\ S_0 \smallsetminus * & \emph{if $d = 1$} \\ * & \emph{if $d = 0$} \end{array} \right. \] In particular, for a disjointly based $S_+$, where $S$ is now an unbased simplicial set, we have: \[ (\Sigma S_+)_d^{\emph{nd}} \cong \left\{ \begin{array}{ll} S_{d-1}^{\emph{nd}} & \emph{if $d \ge 1$} \\ * & \emph{if $d = 0$} \end{array} \right. \] \end{Proposition} \begin{proof} The first part follows from a laborious but easy direct check using the definition above of the simplicial operators on cones. For the second part, note that $(S_+)_{d-1}^{\text{nd}} = S_{d-1}^{\text{nd}}$ for $d \ge 2$ and $(S_+)_0 \smallsetminus * = S_0$. \end{proof} \begin{Proposition}\label{prop:susp_monos} The Kan suspension $\Sigma$ preserves monomorphisms. \end{Proposition} \begin{proof} This is immediate from the fact that induced maps act ``summand-wise''. \end{proof} \begin{Definition}\label{def:susp_simps} Given a based simplicial set $S$ and a simplex $s \colon \Delta_d \to S$ of $S$, of dimension $d$, let $\Sigma s \colon \Delta_{d+1} \to \Sigma S$ denote the corresponding simplex of dimension $d+1$, given by inclusion into the first wedge summand, of $\Sigma S$. We call $\Sigma s$ the \textit{suspension} of $s$. \end{Definition} \begin{Proposition}\label{prop:susp_simps_maps} Let $S$ and $T$ be based simplicial sets, $f \colon S \to T$ a based map and $s$ a simplex of $S$. We have that $(\Sigma f)(\Sigma s) = \Sigma (f(s))$.
\end{Proposition} \begin{proof} This follows immediately from the fact that $\Sigma f$ acts ``summand-wise''. \end{proof} \begin{Proposition}\label{prop:faces_of_susp_simps} Let $S$ be a based simplicial set and $s$ a $d$-simplex of $S$. Then we have: \[ d_i(\Sigma s) = \left\{ \begin{array}{ll} \Sigma (d_{i-1}s) & i = 1,\dots,d+1 \\ * & i = 0 \end{array} \right. \] \end{Proposition} Note that the $\Sigma$'s here are used in the sense of Definition~\ref{def:susp_simps}, not as summation symbols. \begin{proof} Let $s \in S_d$ and consider some face map $d_i \colon S_d \to S_{d-1}$, with corresponding coface map $d^i \colon [d-1] \to [d]$. Consider the map $[d] \to [d+1]$ obtained by adjoining $0 \mapsto 0$ at the beginning (that is, we send $0$ to $0$ and each $l \ge 1$ to $d^i(l-1)+1$), and note that this is exactly $d^{i+1}$. By definition of the action of simplicial operators on cones and suspensions, we have that $d_{i+1}(\Sigma s) = \Sigma (d_i s)$. For the $d_0$ case, again, this follows from the definition of the action of the simplicial operators on cones and suspensions. \end{proof} \begin{Proposition}\label{prop:n_r_chains_susp_X} Let $S$ be a based simplicial set. We have a natural isomorphism of chain complexes: \[ \Phi \colon \emph{C}_\bullet(\Sigma S) \overset{\cong}\longrightarrow \emph{C}_\bullet(S)[1] \] \end{Proposition} The chains here may be taken to have any desired coefficients, and are, as always throughout this work, normalized (and of course reduced as our simplicial set is based). \begin{proof} This follows from Propositions~\ref{prop:nd_susp} and~\ref{prop:faces_of_susp_simps}. \end{proof} We now move on to a discussion of Moore loopings, which constitute the loops functor which is right adjoint to the Kan suspension defined above. For more detail on this loops functor, see, for example, Chapter 2, Section 6 of~\cite{Wu}. It is defined as follows. \begin{Definition} Let $S$ be a based simplicial set.
The \textit{Moore looping} of $S$ is defined by setting, for each $d \ge 0$: \[ (\Omega S)_d := \{s \in S_{d+1} \mid d_1 \cdots d_{d+1}(s) = *, d_0(s) = *\} \] We of course also need actions of the simplicial operators $d_i^d \colon (\Omega S)_d \to (\Omega S)_{d-1}$ and $s_i^d \colon (\Omega S)_d \to (\Omega S)_{d+1}$, and these are given by, respectively, $d^{d+1}_{i+1}$ and $s^{d+1}_{i+1}$; one can check that the simplicial identities do indeed hold. \end{Definition} \begin{Remark} On the action of the simplicial operators, more generally, given a map $\theta \colon [d] \to [e]$ in $\Delta$, to act on an element of $(\Omega S)_e$, we adjoin $0 \mapsto 0$ at the beginning to get a map $[d+1] \to [e+1]$ and then act. \hfill $\vert\vert$ \end{Remark} Prior to discussing the adjunction with the Kan suspension, as we did for suspensions, we compute the non-degenerate simplices in Moore loopings. \begin{Proposition}\label{prop:nd_loops} Let $S$ be a based simplicial set. Given any $d \ge 0$, we have that: \[ (\Omega S)_d^{\emph{nd}} \cong \left\{ \begin{array}{ll} S_{d+1}^{\emph{nd}} \cap (\Omega S)_d & \emph{if $d \ge 1$} \\ (S_1^{\emph{nd}} \cup *) \cap (\Omega S)_0 & \emph{if $d = 0$} \end{array} \right. \] \end{Proposition} \begin{proof} A laborious but easy direct check using the definition above of the action of the simplicial operators on the loopings. \end{proof} Finally, we record that $\Sigma$ and $\Omega$ form an adjunction, and note some facts about the unit and counit of this adjunction. \begin{Proposition}\label{prop:SigmaOmegadj} We have the following: \begin{itemize} \item[(i)] The Kan suspensions and Moore loopings constitute an adjunction where $\Sigma \dashv \Omega$. \item[(ii)] For all based simplicial sets $S$, the unit $S \to \Omega\Sigma S$ is an isomorphism. \item[(iii)] For all based simplicial sets $S$, the counit $\Sigma \Omega S \to S$ is a monomorphism.
\end{itemize} \end{Proposition} \begin{proof} (i): The necessary verifications are straightforward; for a written account, see Proposition 2.14 in~\cite{MarcStephan} -- our loop functor is dual to the one used there, but an entirely analogous argument carries through. \\ (ii): To demonstrate this, we explicitly describe the unit of the adjunction. It is given by maps $S \to \Omega \Sigma S$. We have $(\Omega \Sigma S)_d = \{s \in (\Sigma S)_{d+1} \mid d_0(s) = d_1 \cdots d_{d+1}(s) = *\} = \{s \in S_d \vee \cdots \vee S_0 \mid d_0(s) = d_1 \cdots d_{d+1}(s) = *\}$. Using the definition of the action of the simplicial operators on suspensions, we find that $d_0$ acts as the identity on each of the summands $S_d, \dots, S_0$, landing in the corresponding summand of the cone; in the quotient $\Sigma S$, the top summand is collapsed, so that the elements that go to $*$ under $d_0$ are exactly those in $S_d$. Moreover, the condition $d_1 \cdots d_{d+1}(s) = *$ is automatic for all simplices since $(\Sigma T)_0 = *$ for any $T$ (one can also directly check that, for $s \in S_d$, $d_1\cdots d_{d+1}(s) = d_0\cdots d_d(s)$). Thus $(\Omega \Sigma S)_d = S_d$. One can check that the unit of the adjunction is then just the identity on $S_d$ and hence an isomorphism. \\ (iii): It suffices (by, for example, the Eilenberg-Zilber lemma expressing degenerate simplices uniquely as iterated degeneracies of non-degenerate simplices) to show that the counit preserves non-degenerate simplices and that it is injective when restricted to the non-degenerate simplices. In dimension $d = 0$, this is clear since $(\Sigma T)_0 = *$ for any $T$. Let $d \ge 1$. We have that: \[ (\Sigma \Omega S)_d = (\Omega S)_{d-1} \vee \cdots \vee (\Omega S)_0 \] By Proposition~\ref{prop:nd_susp}, the non-degenerate simplices are exactly the elements which lie in the first summand, $(\Omega S)_{d-1}$ (excluding the basepoint if $d = 1$). Moreover, an easy check shows that the counit, restricted to this summand, is simply the inclusion into $S_d$. This map is then certainly injective on the non-degenerate simplices.
It remains to show that the non-degenerate simplices are preserved, and this follows by Proposition~\ref{prop:nd_loops}, which tells us that a non-degenerate element in $(\Omega S)_{d-1}$ is necessarily non-degenerate in $S_d$ (except possibly in the case $d = 1$, where the element may also be the basepoint, but as just mentioned above, in the case $d = 1$, the basepoint is to be excluded). \end{proof} \subsection{Operadic (De)suspensions}\label{sec:opsusp} We now discuss the second ingredient which we need prior to constructing stabilizations of $\mathbb{E}_\infty$ operads, that of operadic suspensions and desuspensions. For this purpose, we shall need to explicitly distinguish the case of chain complexes and cochain complexes, i.e., of chain operads and cochain operads. \\ Recall that, if $X$ is a (co)chain complex and $n \in \mathbb{Z}$, we let $X[n]$ be the (co)chain complex where $X[n]_d = X_{d-n}$. \begin{Definition}\label{def:operadic_susp_ch} Given a chain operad $\mathpzc{P}$ over $k$: \begin{itemize} \item The \textit{operadic suspension} $\Sigma\mathpzc{P}$ is defined by setting $\Sigma \mathpzc{P} := \mathpzc{P} \otimes_k \mathpzc{End}_{k[1]}$. \item The \textit{operadic desuspension} $\Sigma^{-1}\mathpzc{P}$ is defined by setting $\Sigma^{-1} \mathpzc{P} := \mathpzc{P} \otimes_k \mathpzc{End}_{k[-1]}$. \end{itemize} Given a cochain operad $\mathpzc{P}$ over $k$: \begin{itemize} \item The \textit{operadic suspension} $\Sigma\mathpzc{P}$ is defined by setting $\Sigma \mathpzc{P} := \mathpzc{P} \otimes_k \mathpzc{End}_{k[-1]}$. \item The \textit{operadic desuspension} $\Sigma^{-1}\mathpzc{P}$ is defined by setting $\Sigma^{-1} \mathpzc{P} := \mathpzc{P} \otimes_k \mathpzc{End}_{k[1]}$.
\end{itemize} \end{Definition} Above, writing $X$ for $k[1]$ or $k[-1]$, $\mathpzc{End}_{X}$ denotes the endomorphism operad on $X$ and $(\mathpzc{P} \otimes \mathpzc{End}_{X})(n) = \mathpzc{P}(n) \otimes \mathpzc{End}_{X}(n)$ (the symmetric group $\Sigma_n$ acts diagonally on these pieces, the operad identity is the tensor product of the identities of $\mathpzc{P}$ and $\mathpzc{End}_{X}$, and the composition structure maps are those which permute tensor factors and then apply the composition maps for $\mathpzc{P}$ and $\mathpzc{End}_{X}$). Note that we have $\mathpzc{End}_{k[1]}(n)_d = \{\text{maps $k[1]^{\otimes n} \to k[1]$ in $\mathsf{Gr}_k$ of degree $d$}\}$ where $\mathsf{Gr}_k$ denotes the category of $\mathbb{Z}$-graded $k$-modules. Thus $\mathpzc{End}_{k[1]}(n)_d$ is zero if $d \neq 1-n$ and is $k$ otherwise, so that $\mathpzc{End}_{k[1]}(n) = k[1-n]$. As a result, we find that: \begin{center} \begin{tabular}{m{6cm} m{6cm}} \hline Chain operad $\mathpzc{P}$ & Cochain operad $\mathpzc{P}$ \tabularnewline \hline \tabularnewline [-1em] $(\Sigma \mathpzc{P})(n) \cong \mathpzc{P}(n)[1-n]$ & $(\Sigma \mathpzc{P})(n) \cong \mathpzc{P}(n)[n-1]$ \tabularnewline \tabularnewline [-1em] $(\Sigma^{-1} \mathpzc{P})(n) \cong \mathpzc{P}(n)[n-1]$ & $(\Sigma^{-1} \mathpzc{P})(n) \cong \mathpzc{P}(n)[1-n]$ \end{tabular} \end{center} As in Section~\ref{sec:nots_convs}, we have a reindexing operator $(-)^\dagger$ between chain and cochain operads. We now discuss how this construction behaves with respect to operadic (de)suspensions. \begin{Proposition}\label{prop:opreindexsusp} Let $\mathpzc{P}$ be a chain or cochain operad over $k$. Then we have: \[ (\Sigma \mathpzc{P})^{\dagger} = \Sigma (\mathpzc{P}^{\dagger}) \hspace{0.5cm} \text{and} \hspace{0.5cm} (\Sigma^{-1} \mathpzc{P})^{\dagger} = \Sigma^{-1} (\mathpzc{P}^{\dagger}) \] \end{Proposition} \begin{proof} This is an easy direct check, one degree at a time.
Note that, for either identity, the notion of suspension on one side is that for chain operads, while on the other it is that for cochain operads. \end{proof} Next, we discuss (co)algebras over (de)suspended operads. Let $\mathpzc{P}$ be a dg operad over $k$. We wish to discuss the relation between (co)algebra structures over $\mathpzc{P}$ and (co)algebra structures over the (de)suspensions $\Sigma^r\mathpzc{P}$, where $r \in \mathbb{Z}$. Once again, we must explicitly distinguish between the chain and cochain cases. \begin{Proposition}\label{prop:chainopsuspalg} Let $\mathpzc{P}$ be a chain operad over $k$. Then, for each $r \in \mathbb{Z}$, we have functors as follows: \[ \mathpzc{P}\text{-}\mathsf{Alg} \overset{(\cdot)[r]}\longrightarrow \Sigma^r\mathpzc{P}\text{-}\mathsf{Alg} \hspace{1cm} \mathpzc{P}\text{-}\mathsf{Coalg} \overset{(\cdot)[-r]}\longrightarrow \Sigma^r\mathpzc{P}\text{-}\mathsf{Coalg} \] On the other hand, if $\mathpzc{P}$ is a cochain operad over $k$, then we have functors as follows: \[ \mathpzc{P}\text{-}\mathsf{Alg} \overset{(\cdot)[-r]}\longrightarrow \Sigma^r\mathpzc{P}\text{-}\mathsf{Alg} \hspace{1cm} \mathpzc{P}\text{-}\mathsf{Coalg} \overset{(\cdot)[r]}\longrightarrow \Sigma^r\mathpzc{P}\text{-}\mathsf{Coalg} \] \end{Proposition} \begin{proof} We shall describe the case of algebras over a chain operad; the other three cases are analogous. Let $A$ be an algebra over a chain operad $\mathpzc{P}$. We wish to show that $A[r]$ is an algebra over $\Sigma^r\mathpzc{P}$. To do so, we must construct maps: \[ (\Sigma^r\mathpzc{P})(n) \otimes_{\Sigma_n} A[r]^{\otimes n} \to A[r] \] These are as follows: \[ (\Sigma^r \mathpzc{P})(n) \otimes_{\Sigma_n} A[r]^{\otimes n} = \mathpzc{P}(n)[r-rn] \otimes_{\Sigma_n} A^{\otimes n}[rn] = (\mathpzc{P}(n) \otimes_{\Sigma_n} A^{\otimes n})[r] \to A[r] \] \end{proof} Finally, we discuss free algebras over (de)suspended operads.
Given a dg operad $\mathpzc{P}$, and a (de)suspension $\Sigma^r\mathpzc{P}$, we shall let $\Sigma^r\mathbf{P}$ denote the monad and free algebra functor associated to $\Sigma^r\mathpzc{P}$. Note though that we are not (de)suspending the monad, but rather the operad. \begin{Proposition}\label{prop:freealgsuspop} Let $\mathpzc{P}$ denote a chain operad over $k$. Then, for a chain complex $X$, we have: \[ (\Sigma^r \mathbf{P}) X = \mathbf{P}(X[-r])[r] \] On the other hand, if $\mathpzc{P}$ is a cochain operad over $k$, for a cochain complex $X$, we have: \[ (\Sigma^r \mathbf{P}) X = \mathbf{P}(X[r])[-r] \] \end{Proposition} \begin{proof} We describe the case of a chain operad; the cochain case is analogous. We have: \begin{align*} (\Sigma^r \mathbf{P}) X &= \bigoplus_{n \ge 0} (\Sigma^r \mathpzc{P})(n) \otimes_{\Sigma_n} X^{\otimes n} \\ &= \bigoplus_{n \ge 0} \mathpzc{P}(n)[r-rn] \otimes_{\Sigma_n} X^{\otimes n} \\ &= \bigoplus_{n \ge 0} (\mathpzc{P}(n) \otimes_{\Sigma_n} X^{\otimes n}[-rn])[r] \\ &= \left(\bigoplus_{n \ge 0} \mathpzc{P}(n) \otimes_{\Sigma_n} X[-r]^{\otimes n}\right)[r] \\ &= \mathbf{P}(X[-r])[r] \end{align*} \end{proof} \subsection{The Stable Eilenberg-Zilber Operad} We first recall the definition of the Eilenberg-Zilber operad, in both a chain and a cochain version. Let $\mathsf{Gr}_{\mathbb{F}_p}$ denote the category with objects the $\mathbb{Z}$-graded $\mathbb{F}_p$-modules and morphisms the homogeneous maps. Recall that $\mathsf{Spc}$ denotes the category of spaces, by which we mean simplicial sets. Fix some $n \ge 0$. For each $d \in \mathbb{Z}$, we set: \[ \mathpzc{Z}(n)_d := \{\text{nat trans, as functors $\mathsf{Spc} \to \mathsf{Gr}_{\mathbb{F}_p}$,} \: \text{C}_\bullet(-) \to \text{C}_{\bullet}(-)^{\otimes n} \: \text{of deg $d$}\} \] Thus $\mathpzc{Z}(n)_d$ consists of the degree $d$, $n$-ary co-operations on chains; note that, as always throughout this work, the chains here are normalized.
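For a concrete instance of such a co-operation, recall that the Alexander-Whitney diagonal is a natural, degree $0$, binary co-operation on chains, and so determines an element of $\mathpzc{Z}(2)_0$. The following Python sketch is our own illustration (not part of the construction): simplices are modelled as vertex tuples, coefficients are suppressed, and the function simply lists the front-face/back-face terms on a single simplex.

```python
def alexander_whitney(simplex):
    """Front face (x) back face decomposition of a d-simplex, given as its
    tuple of vertices: the terms (first i+1 vertices, last d-i+1 vertices)
    for i = 0, ..., d.  Total degree of each term is i + (d - i) = d."""
    d = len(simplex) - 1
    return [(simplex[: i + 1], simplex[i:]) for i in range(d + 1)]

# On the 2-simplex (0, 1, 2) this produces the three familiar terms:
# vertex (x) whole simplex, front edge (x) back edge, whole simplex (x) vertex.
terms = alexander_whitney((0, 1, 2))
```

Naturality in the simplicial set is what makes this a single element of $\mathpzc{Z}(2)_0$ rather than a family of maps chosen space by space.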
Given such a natural transformation $\alpha = \{\alpha_S \colon \text{C}_\bullet(S) \to \text{C}_\bullet(S)^{\otimes n}\}$ of degree $d$, we get a natural transformation $\partial\alpha$ of degree $d-1$ by setting, for a simplicial set $S$ and a non-degenerate simplex $s \in S$, $(\partial\alpha)_S(s)= \partial\alpha_S(s) - (-1)^{d}\alpha_S\partial(s)$. An easy check shows that this gives us a differential $\partial \colon \mathpzc{Z}(n)_d \to \mathpzc{Z}(n)_{d-1}$. The operad identity in $\mathpzc{Z}(1)$ is the identity transformation. The symmetric group $\Sigma_n$ acts on $\mathpzc{Z}(n)$ by permuting the tensor factors. Finally, the operad composition maps are analogous to those that occur in co-endomorphism operads. \begin{Definition}\label{def:ez_op} The \textit{Eilenberg-Zilber chain operad} is the operad $\mathpzc{Z}$ defined above; the \textit{Eilenberg-Zilber cochain operad} is the operad $\mathpzc{Z}^\dagger$. \end{Definition} Here $(-)^\dagger$ refers to the reindexing construction -- see Section~\ref{sec:nots_convs}. \begin{Remark}\label{rmk:cochainEZ} An easy check shows that, since we are working over a field and since co-operations on chains and operations on cochains are determined by their values on the standard simplices, we can equivalently define the Eilenberg-Zilber cochain operad by setting \[ \mathpzc{Z}^\dagger(n)_d = \{\text{nat trans, as functors $\mathsf{Spc}^{\text{op}} \to \mathsf{Gr}_{\mathbb{F}_p}$,} \: \text{C}^\bullet(-)^{\otimes n} \to \text{C}^{\bullet}(-) \: \text{of deg $d$}\} \] and setting the rest of the operad data in a manner analogous to that above for the chain operad. Thus $\mathpzc{Z}^\dagger(n)_d$ consists of the degree $d$, $n$-ary operations on cochains. \hfill $\vert\vert$ \end{Remark} In order to stabilize the Eilenberg-Zilber operad, we first need to introduce basepoints. 
Thus, we alter the above construction slightly, and consider instead the following operad, consisting of co-operations on chains on based simplicial sets. \begin{Definition} The \textit{reduced Eilenberg-Zilber chain operad}, denoted $\mathpzc{Z}_*$, is the operad constructed in the same manner as the Eilenberg-Zilber chain operad $\mathpzc{Z}$ except that the chains are to be taken on based simplicial sets (and so are of course reduced, as well as being normalized as always). Thus, for example, we have the following: \[ \mathpzc{Z}_*(n)_d := \{\text{nat trans, as functors $\mathsf{Spc}_* \to \mathsf{Gr}_{\mathbb{F}_p}$,} \: \text{C}_\bullet(-) \to \text{C}_{\bullet}(-)^{\otimes n} \: \text{of deg $d$}\} \] The \textit{reduced Eilenberg-Zilber cochain operad} is then defined to be $\mathpzc{Z}_*^\dagger$. \end{Definition} Next, we note that the operad $\mathpzc{Z}_{*}$ admits a stabilization map \[ \Psi \colon \Sigma\mathpzc{Z}_{*} \to \mathpzc{Z}_{*} \] where $\Sigma$ here denotes the operadic suspension of chain operads, found in Definition~\ref{def:operadic_susp_ch}. This map, or the slight variant of it in the case of $\mathpzc{Z}$, has also been noted, for example, in~\cite{BergerFresse}, \cite{Smirnov1} and \cite{Smirnov2}. There are many ways in which one can describe this map; we give a description via the Kan suspension which will be useful for us later when we discuss spectral cochains. To do this, we need to specify maps $\mathpzc{Z}_{*}(n)[1-n] \to \mathpzc{Z}_{*}(n)$. Consider some natural transformation $\alpha$ in $\mathpzc{Z}_{*}(n)[1-n]_d$. This is a natural transformation of graded modules $\text{C}_\bullet(-) \to \text{C}_\bullet(-)^{\otimes n}$ of degree $d+n-1$ over based simplicial sets. By precomposition with the Kan suspension, we get a natural transformation $\text{C}_\bullet(\Sigma -) \to \text{C}_\bullet(\Sigma -)^{\otimes n}$ over based simplicial sets.
From Proposition~\ref{prop:n_r_chains_susp_X}, we have a natural isomorphism $\text{C}_\bullet(\Sigma -) \cong \text{C}_\bullet(-)[1]$. Thus, by pre- and postcomposition, we get a natural transformation of graded modules $\text{C}_\bullet(-)[1] \to (\text{C}_\bullet(-)[1])^{\otimes n}$ of degree $d+n-1$, or equivalently a natural transformation $\text{C}_\bullet(-) \to \text{C}_\bullet(-)^{\otimes n}$ of degree $(d+n-1) + 1 - n = d$, which is to say an element, say $\alpha'$, of $\mathpzc{Z}_{*}(n)_{d}$. This correspondence $\alpha \mapsto \alpha'$ gives us a map $(\Sigma\mathpzc{Z}_{*})(n) \to \mathpzc{Z}_{*}(n)$. Moreover, one can check that this map is a chain map, and then that, assembling over $n$, we get an operad map $\Psi \colon \Sigma\mathpzc{Z}_{*} \to \mathpzc{Z}_{*}$, as desired. \\ We are now able to define our stabilization of the Eilenberg-Zilber operad. Upon iterating the above construction, we have maps $\Sigma^{k+1}\mathpzc{Z}_* \to \Sigma^{k}\mathpzc{Z}_*$ for each $k \ge 0$. We shall be somewhat loose in our notation and denote these also by $\Psi$. \begin{Definition} The \textit{stable Eilenberg-Zilber chain operad}, denoted $\mathpzc{Z}_{\text{st}}$, is the operad defined as follows: \[ \mathpzc{Z}_{\text{st}} := \underset{\longleftarrow}{\text{lim}}(\cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{Z}_{*} \overset{\Psi}\longrightarrow \Sigma\mathpzc{Z}_{*} \overset{\Psi}\longrightarrow \mathpzc{Z}_{*}) \] The \textit{stable Eilenberg-Zilber cochain operad} is the operad $\mathpzc{Z}^\dagger_{\text{st}}$.
\end{Definition} \begin{Remark} By Proposition~\ref{prop:opreindexsusp}, the operadic suspension $\Sigma$ and the reindexing operator $(-)^\dagger$ commute, so that we also have maps $\Sigma^{k+1}\mathpzc{Z}_*^\dagger \to \Sigma^{k}\mathpzc{Z}_*^\dagger$, and moreover, $(-)^\dagger$ clearly also commutes with inverse limits, so that one can also analogously construct the stable Eilenberg-Zilber cochain operad as the following inverse limit: \[ \mathpzc{Z}^\dagger_{\text{st}} := \underset{\longleftarrow}{\text{lim}}(\cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{Z}^\dagger_{*} \overset{\Psi}\longrightarrow \Sigma\mathpzc{Z}^\dagger_{*} \overset{\Psi}\longrightarrow \mathpzc{Z}^\dagger_{*}) \] \hfill $\vert\vert$ \end{Remark} Consider the chain complex $\mathpzc{Z}_{\text{st}}(n)$ in arity $n$ of the stable Eilenberg-Zilber chain operad. Since limits of operads are formed aritywise, $\mathpzc{Z}_{\text{st}}(n)$ is the limit, in chain complexes, of the diagram: \[ \cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{Z}_{*}(n) \overset{\Psi}\longrightarrow \Sigma\mathpzc{Z}_{*}(n) \overset{\Psi}\longrightarrow \mathpzc{Z}_{*}(n) \] That is, it is the limit of: \[ \cdots \overset{\Psi}\longrightarrow \mathpzc{Z}_*(n)[2-2n] \overset{\Psi}\longrightarrow \mathpzc{Z}_*(n)[1-n] \overset{\Psi}\longrightarrow \mathpzc{Z}_*(n) \] In degree $d \in \mathbb{Z}$ then, we have: \begin{equation}\label{eq:EZst_elts} \mathpzc{Z}_{\text{st}}(n)_d \subseteq \prod_{k \ge 0} \mathpzc{Z}_{*}(n)[k-kn]_d = \prod_{k \ge 0} \mathpzc{Z}_{*}(n)_{d+k(n-1)} \end{equation} More specifically, an element of $\mathpzc{Z}_{\text{st}}(n)$ in degree $d$ is a sequence $(\alpha_0,\alpha_1,\alpha_2,\dots)$ where $\alpha_0$ is a degree $d$ chain co-operation $\text{C}_\bullet(-) \to \text{C}_\bullet(-)^{\otimes n}$, $\alpha_1$ is a degree $d+n-1$ chain co-operation $\text{C}_\bullet(-) \to \text{C}_\bullet(-)^{\otimes n}$ such that $\Psi(\alpha_1) = \alpha_0$, and so on.
Similarly, the cochain complex $\mathpzc{Z}^\dagger_{\text{st}}(n)$ is the limit, in cochain complexes, of the diagram \[ \cdots \overset{\Psi}\longrightarrow \mathpzc{Z}^\dagger_*(n)[2n-2] \overset{\Psi}\longrightarrow \mathpzc{Z}^\dagger_*(n)[n-1] \overset{\Psi}\longrightarrow \mathpzc{Z}^\dagger_*(n) \] and, recalling what was said in Remark~\ref{rmk:cochainEZ}, an element of $\mathpzc{Z}_{\text{st}}^\dagger(n)$ in degree $d$ is a sequence $(\alpha_0,\alpha_1,\alpha_2,\dots)$ where $\alpha_0$ is a degree $d$ cochain operation $\text{C}^\bullet(-)^{\otimes n} \to \text{C}^\bullet(-)$, $\alpha_1$ is a degree $d-n+1$ cochain operation $\text{C}^\bullet(-)^{\otimes n} \to \text{C}^\bullet(-)$ such that $\Psi(\alpha_1) = \alpha_0$, and so on. \subsection{The Stable McClure-Smith Operad} In this section, as we did for the Eilenberg-Zilber operad, we stabilize the McClure-Smith operad, as always in both a chain form and a cochain form. First, we need to recall the definition of the McClure-Smith operad. This operad was used by McClure and Smith in~\cite{McClureSmith}, and is also discussed in~\cite{BergerFresse}, where it is called the surjection operad. In~\cite{McClureSmith}, it is shown that this operad is an $\mathbb{E}_\infty$ operad, so that the stabilization in this section will constitute our first stabilization of an $\mathbb{E}_\infty$ operad. \\ In order to recall the definition of the McClure-Smith operad, we require some preliminaries. For each integer $n \ge 0$, let $(n)$ denote the set $\{1,\dots,n\}$, where $(0) := \varnothing$. Given a set map $f \colon (m) \to (n)$, we will often view it as, and denote it by, the indexed sequence $(f(1),\dots,f(m))$. Suppose given $n \ge 0$ and a surjection $f \colon (m) \to (n)$. 
Say that $f$ is \textit{degenerate} if there exists some $l \in (m-1)$ such that $f(l) = f(l+1)$, and otherwise \textit{non-degenerate}; that is, call $f$ degenerate exactly when the sequence $(f(1),\dots,f(m))$ contains two equal adjacent entries. For each $n \ge 0$, let $\text{S}(n)$ be the graded $\mathbb{F}_p$-module freely generated by maps $f \colon (-) \to (n)$, where, if the source is $(m)$, the assigned degree is $m-n$. Let $\text{N}(n)$ denote the graded submodule generated by the non-surjective maps and $\text{D}(n)$ the graded submodule generated by the degenerate surjections. For each $n \ge 0$, set: \[ \mathpzc{M}(n) := \text{S}(n)/(\text{N}(n)+\text{D}(n)) \] \begin{Remark} Taken over $n \ge 0$, the above graded modules will be the underlying graded modules of the McClure-Smith chain operad. It is clear that $\mathpzc{M}(n)$ is the graded $\mathbb{F}_p$-module freely generated by the non-degenerate surjections $f \colon (-) \to (n)$, where, as above, if the source is $(m)$, the assigned degree is $m-n$. Moreover, $\mathpzc{M}(n)_d$ is zero if $d < 0$, and otherwise is freely generated by the non-degenerate surjections $(n+d) \to (n)$. \hfill $\vert\vert$ \end{Remark} We now endow the $\mathpzc{M}(n)$ with differentials. To keep track of signs, given a surjection $f \colon (m) \to (n)$ and an $i \in (m)$, we set: \[ \tau_f(i) = \#\{j \in (m) \mid f(j) < f(i), \: \text{or} \: f(j) = f(i) \: \text{and} \: j \le i\} \] The differential of $\mathpzc{M}(n)$ is then defined as follows.
Given a non-degenerate surjection $f \colon (m) \to (n)$, denoted also by $(f(1),\dots,f(m))$, we have that: \[ \partial(f) = \sum_{i=1}^m (-1)^{\tau_f(i)-f(i)} (f(1),\dots,\widehat{f(i)},\dots,f(m)) \] Here, for $i \in (m)$, the term $(f(1),\dots,\widehat{f(i)},\dots,f(m))$ on the righthand side denotes the map $(m-1) \to (n)$ whose images are, in order, exactly those that appear in the sequence $(f(1),\dots,\widehat{f(i)},\dots,f(m))$; if, upon omitting $f(i)$, this resulting map is no longer a surjection or is degenerate, that term is taken to be zero. The verification that this indeed defines a well-defined differential may be found in~\cite{McClureSmith}. \\ It remains to describe the operadic identity and composition data. The identity in $\mathpzc{M}(1)$ is the identity on $(1)$ and the action of $\Sigma_n$ on $\mathpzc{M}(n)$ is by postcomposition of surjections onto $(n)$. As for the composition maps, we shall specify these in the form of maps $\circ_r \colon \mathpzc{M}(n) \otimes \mathpzc{M}(m) \to \mathpzc{M}(n+m-1)$, for $n,m \ge 0$ and $r=1,\dots,n$. Let $f \colon (N) \to (n)$ and $g \colon (M) \to (m)$ be non-degenerate surjections. We need to define a composite $f \circ_r g$, which will be zero or a non-degenerate surjection $(N+M-1) \to (n+m-1)$. This composite can be described algorithmically as follows: \begin{itemize} \item In the sequence $(f(1),\dots,f(N))$, let $t$ be the number of occurrences of $r$. Let these occurrences be $f(i_1),\dots,f(i_t)$, where $i_1 < \cdots < i_t$. \item Fix a choice of $t+1$ indices $1 = j_0 \le j_1 \le \cdots \le j_{t-1} \le j_t = M$ inside $(M)$, the first being $1$ and the last being $M$. In the sequence $(f(1),\dots,f(N))$, replace $f(i_1)$ by the subsequence $(g(j_{0}),\dots,g(j_1))$, $f(i_2)$ by the subsequence $(g(j_{1}),\dots,g(j_2))$, and so on, with the final replacement being that of $f(i_t)$ by the subsequence $(g(j_{t-1}),\dots,g(j_t))$. Note that the resulting sequence has length $N-t+M+t-1 = N+M-1$.
Now alter this sequence as follows: (i) increase each newly inserted entry $g(j)$ by $r-1$; (ii) increase each remaining entry $f(i)$ with $f(i) > r$ by $m-1$. \item The resulting sequence gives a map $f_{(j_0,\dots,j_t)} \colon (N+M-1) \to (n+m-1)$; if it is not a surjection or is a degenerate surjection, replace it with zero. \item The composite $f \circ_r g$ is then the sum of all the resulting maps $f_{(j_0,\dots,j_t)}$, the sum being taken over the tuples $(j_0, j_1, \dots, j_t)$. \end{itemize} The verification that the above algorithmic procedure yields well-defined maps $\mathpzc{M}(n) \otimes \mathpzc{M}(m) \to \mathpzc{M}(n+m-1)$ for $r=1,\dots,n$, and yields an operad structure on the chain complexes $\mathpzc{M}(n)$, may be found in~\cite{BergerFresse}. \begin{Definition} The \textit{McClure-Smith chain operad} is the operad consisting of the chain complexes $\mathpzc{M}(n)$ and structural data specified above. Moreover, the \textit{McClure-Smith cochain operad} is then the operad $\mathpzc{M}^{\dagger}$. \end{Definition} Now we shall describe the stabilizations. We first discuss the case of the chain operad. As with the Eilenberg-Zilber chain operad, we first note that $\mathpzc{M}$ admits a stabilization map: \[ \Psi \colon \Sigma \mathpzc{M} \to \mathpzc{M} \] This map was also observed in~\cite{BergerFresse}. Here we have once again used the symbol $\Psi$, just as in the stabilization of $\mathpzc{Z}_{*}$. The context will always make it clear which map we intend by this symbol. Now, to define this map, we need to define, for each $n \ge 0$, a map $(\Sigma \mathpzc{M})(n) \to \mathpzc{M}(n)$, which is to say a map $\mathpzc{M}(n)[1-n] \to \mathpzc{M}(n)$. Consider a non-degenerate surjection $f \in \mathpzc{M}(n)[1-n]_d$. This is a non-degenerate surjection $f \colon (m) \to (n)$ where $m-n = d+n-1$ and so $m = d+2n-1$.
We define $\Psi(f)$ algorithmically as follows: \begin{itemize} \item If $(f(1),\dots,f(n))$ is not a permutation of $(1,\dots,n)$, $\Psi(f)$ is zero. \item If $(f(1),\dots,f(n))$ is a permutation of $(1,\dots,n)$, $\Psi(f)$ is represented by the map $(n+d) \to (n)$ given by the sequence $(f(n),\dots,f(d+2n-1))$. \end{itemize} One can check that the above algorithmic procedure does indeed yield an operad map $\Psi \colon \Sigma \mathpzc{M} \to \mathpzc{M}$ -- see~\cite{BergerFresse}. \begin{Definition} The \textit{stable McClure-Smith chain operad}, denoted $\mathpzc{M}_{\text{st}}$, is the operad defined as follows: \[ \mathpzc{M}_{\text{st}} := \underset{\longleftarrow}{\text{lim}}(\cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{M} \overset{\Psi}\longrightarrow \Sigma\mathpzc{M} \overset{\Psi}\longrightarrow \mathpzc{M}) \] The \textit{stable McClure-Smith cochain operad} is defined to be $\mathpzc{M}_{\text{st}}^\dagger$. \end{Definition} \begin{Remark} By Proposition~\ref{prop:opreindexsusp}, the operadic suspension $\Sigma$ and the reindexing operator $(-)^\dagger$ commute, so that we also have maps $\Sigma^{k+1}\mathpzc{M}^\dagger \to \Sigma^{k}\mathpzc{M}^\dagger$, and moreover, $(-)^\dagger$ clearly also commutes with inverse limits, so that one can also analogously construct the stable McClure-Smith cochain operad as the following inverse limit: \[ \mathpzc{M}_{\text{st}}^\dagger := \underset{\longleftarrow}{\text{lim}}(\cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{M}^\dagger \overset{\Psi}\longrightarrow \Sigma\mathpzc{M}^\dagger \overset{\Psi}\longrightarrow \mathpzc{M}^\dagger) \] \hfill $\vert\vert$ \end{Remark} Next, we also note a comparison between the stabilization maps for the McClure-Smith and Eilenberg-Zilber operads. We note that there is a map, in fact an embedding, from the McClure-Smith operad to the Eilenberg-Zilber operad. To describe this map, we need some preliminaries. 
Given a finite linearly ordered set $V$, an \textit{overlapping partition} $\mathcal{A}$ of $V$ with $m$ pieces is a collection of subsets $A_1,\dots, A_m$ of $V$, whose union is all of $V$, with the following properties: (i) if $i < j$, then each element of $A_i$ is $\leq$ each element of $A_j$. (ii) for $i < m$, $A_i \cap A_{i+1}$ has exactly one element. \begin{Definition}\label{defn:seqcoop} Suppose given $n \ge 0$ and a surjection $f \colon (m) \to (n)$. We then get a natural transformation, over simplicial sets, $\langle f \rangle \colon \text{C}_\bullet(-) \to \text{C}_\bullet(-)^{\otimes n}$, where given a simplicial set $S$ and $\sigma \colon \Delta_p \to S$, we have that \begin{equation}\label{eqn:AW_seq_ops} \langle f \rangle (\sigma) = \sum_{\mathcal{A}} \bigotimes_{i=1}^n \sigma|\amalg_{f(j)=i}A_j \end{equation} where $\mathcal{A}$ runs through the overlapping partitions of $[p] = \{0,1,\dots,p\}$ with $m$ pieces. We call these natural transformations the \textit{sequence co-operations}. Note that since $\sum_{\mathcal{A}} \bigotimes_{i=1}^n \sigma|\amalg_{f(j)=i}A_j$ has degree $\sum_{i=1}^n (|f^{-1}(i)|-1) = m - n$, the sequence co-operation $\langle f \rangle$ is homogeneous of degree $m-n$. \end{Definition} Note that, given $n \ge 0$ and a surjection $f \colon (m) \to (n)$, if $f$ is degenerate, $\langle f \rangle$ is the zero transformation. To see this, note that, if the surjection is degenerate, one of the tensor factors in the righthand side of (\ref{eqn:AW_seq_ops}) receives a repeated coordinate and so is zero as the chains are normalized. As a result, for each $n \ge 0$, we have a map $\text{AW}_n \colon \mathpzc{M}(n) \to \mathpzc{Z}(n)$. As in~\cite{BergerFresse}, we use the notation ``$\text{AW}_n$'' because the sequence co-operations generalize the classical Alexander-Whitney diagonal operation. As shown in~\cite{McClureSmith}, the $\text{AW}_n$ yield an operad map $\text{AW} \colon \mathpzc{M} \to \mathpzc{Z}$ and moreover are injective in each arity.
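The overlapping-partition sum in~(\ref{eqn:AW_seq_ops}) is concrete enough to sketch in code. In the following Python sketch (ours; the function names are hypothetical), the standard $p$-simplex is represented by its vertex list, a term of $\langle f \rangle(\sigma)$ by the tuple of vertex lists of its tensor factors, and terms containing a repeated adjacent vertex are discarded, mirroring the passage to normalized chains:

```python
from itertools import combinations_with_replacement

def overlapping_partitions(p, m):
    """All overlapping partitions of [p] = {0,...,p} with m pieces, encoded by
    cut points 0 <= c_1 <= ... <= c_{m-1} <= p: the pieces are the intervals
    [0, c_1], [c_1, c_2], ..., [c_{m-1}, p], so consecutive pieces overlap in
    exactly one element and the union is all of [p]."""
    for cuts in combinations_with_replacement(range(p + 1), m - 1):
        bounds = (0,) + cuts + (p,)
        yield [list(range(bounds[i], bounds[i + 1] + 1)) for i in range(m)]

def sequence_cooperation(f, p):
    """Terms of <f>(sigma) for sigma the standard p-simplex (0,...,p); f is a
    surjection (m) -> (n) given as the tuple (f(1),...,f(m)).  Each term is the
    n-tuple of vertex lists obtained by restricting sigma to the join of the
    pieces carrying a given value; terms with a repeated adjacent vertex in
    some factor vanish in the normalized chains and are dropped."""
    m, n = len(f), max(f)          # f is assumed surjective onto (n)
    terms = []
    for pieces in overlapping_partitions(p, m):
        factors = [[v for j in range(m) if f[j] == i for v in pieces[j]]
                   for i in range(1, n + 1)]
        if all(all(a != b for a, b in zip(fac, fac[1:])) for fac in factors):
            terms.append(tuple(tuple(fac) for fac in factors))
    return terms
```

For $f = (1,2)$ and $p = 2$ this returns the three terms of the classical Alexander-Whitney diagonal, and for any degenerate $f$ it returns no terms at all, matching the vanishing observation above.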
Thus the McClure-Smith operad $\mathpzc{M}$ embeds into the Eilenberg-Zilber operad $\mathpzc{Z}$. Upon applying $(-)^\dagger$, we also get an embedding $\text{AW}^\dagger \colon \mathpzc{M}^\dagger \to \mathpzc{Z}^\dagger$ of cochain operads. In fact, given a surjection $f \colon (m) \to (n)$, the sequence co-operations, as defined in Definition~\ref{defn:seqcoop}, yield a natural transformation $\text{C}_\bullet(-) \to \text{C}_\bullet(-)^{\otimes n}$ not only over simplicial sets, but also over based simplicial sets. To see this, let $S$ be a based simplicial set and let $\Delta_0 \to S$ classify the basepoint $*_S \in S_0$ of $S$. Then, since every piece of an overlapping partition of $[0]$ is just $\{0\}$, the only case in which each tensor factor in (\ref{eqn:AW_seq_ops}) will be non-degenerate is when the input surjection is a bijection $(n) \to (n)$. In that case the image is $*_S \otimes \cdots \otimes *_S$, and this tensor is zero in $\text{C}_\bullet(S)^{\otimes n}$ because the chains are reduced. As such, an easy check then shows that we have an operad map \[ \text{AW} \colon \mathpzc{M} \to \mathpzc{Z}_* \] which we denote by the same symbol, $\text{AW}$, as earlier.
Upon applying $(-)^\dagger$, we also get a map between the corresponding cochain operads: \[ \text{AW}^\dagger \colon \mathpzc{M}^\dagger \to \mathpzc{Z}_*^\dagger \] Moreover, these maps are compatible with the stabilizations of the two operads in the sense that the following squares commute: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{M}$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{M}$}; \node [below of = A] (C) {$\Sigma \mathpzc{Z}_{*}$}; \node [below of = B] (D) {$\mathpzc{Z}_{*}$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=east]{$\Sigma \text{AW}$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{AW}$}; \draw [->] (C) -- (D) node[midway,anchor=north]{$\Psi$}; \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{M}^\dagger$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{M}^\dagger$}; \node [below of = A] (C) {$\Sigma \mathpzc{Z}_{*}^\dagger$}; \node [below of = B] (D) {$\mathpzc{Z}_{*}^\dagger$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=east]{$\Sigma \text{AW}^\dagger$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{AW}^\dagger$}; \draw [->] (C) -- (D) node[midway,anchor=north]{$\Psi$}; \end{tikzpicture} \end{center} A laborious but not too difficult diagram chase confirms this -- see also~\cite{BergerFresse}.
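Both the differential on $\mathpzc{M}(n)$ and the stabilization map $\Psi$ act on basis surjections by explicit list manipulation, so they are easy to sketch and spot-check. The following Python sketch is ours (the function names are hypothetical, and $p = 3$ is an arbitrary illustrative choice of prime):

```python
def tau(f, i):
    """tau_f(i) for a surjection f = (f(1),...,f(m)); i is 1-based."""
    return sum(1 for j in range(1, len(f) + 1)
               if f[j - 1] < f[i - 1] or (f[j - 1] == f[i - 1] and j <= i))

def is_basis(f, n):
    """True if the tuple f is a non-degenerate surjection onto (n)."""
    return set(f) == set(range(1, n + 1)) and all(a != b for a, b in zip(f, f[1:]))

def boundary(f, n, p=3):
    """d(f) as a dict {basis surjection: coefficient mod p}; f is assumed to be
    a non-degenerate surjection onto (n).  Terms that fail to be surjective, or
    become degenerate, after omitting f(i) are dropped."""
    result = {}
    for i in range(1, len(f) + 1):
        g = f[:i - 1] + f[i:]                      # omit the i-th entry
        if is_basis(g, n):
            sign = (-1) ** (tau(f, i) - f[i - 1])  # exponent is >= 0 for surjections
            result[g] = (result.get(g, 0) + sign) % p
    return {g: c for g, c in result.items() if c}

def stabilize(f, n):
    """Psi on a basis surjection f: (m) -> (n): zero (None) unless the first n
    entries form a permutation of (1,...,n), in which case the class of the
    tail (f(n),...,f(m)) is returned (None again if that class is zero)."""
    if sorted(f[:n]) != list(range(1, n + 1)):
        return None
    tail = f[n - 1:]
    return tail if is_basis(tail, n) else None
```

For instance, `boundary((1, 2, 1), 2)` returns the class of $(2,1)-(1,2)$, reflecting that the cup-one element $(1,2,1)$ cobounds the difference of the two diagonals, and `stabilize((1, 2, 1), 2)` returns `(2, 1)`, in line with the bullet-point description of $\Psi$ above.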
As a result, we have commutative diagrams as follows: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{M}$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{M}$}; \node [below of = A] (C) {$\Sigma \mathpzc{Z}_{*}$}; \node [below of = B] (D) {$\mathpzc{Z}_{*}$}; \node [left of = A,xshift=-1.5cm] (E) {$\Sigma^2 \mathpzc{M}$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\Sigma^2 \mathpzc{Z}_{*}$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\Sigma \text{AW}$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{AW}$}; \draw [->] (C) -- (D) node[midway,anchor=south]{$\Psi$}; \draw [->] (F) -- (E) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (A) node[midway,anchor=south]{$\Psi$}; \draw [->] (H) -- (G) node[midway,anchor=south]{$\Psi$}; \draw [->] (G) -- (C) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\Sigma^2 \text{AW}$}; \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{M}^\dagger$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{M}^\dagger$}; \node [below of = A] (C) {$\Sigma \mathpzc{Z}_{*}^\dagger$}; \node [below of = B] (D) {$\mathpzc{Z}_{*}^\dagger$}; \node [left of = A,xshift=-1.5cm] (E) {$\Sigma^2 \mathpzc{M}^\dagger$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\Sigma^2 \mathpzc{Z}_{*}^\dagger$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\Sigma \text{AW}^\dagger$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{AW}^\dagger$}; \draw [->] (C) -- (D) node[midway,anchor=south]{$\Psi$}; \draw [->] (F) -- (E) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (A) 
node[midway,anchor=south]{$\Psi$}; \draw [->] (H) -- (G) node[midway,anchor=south]{$\Psi$}; \draw [->] (G) -- (C) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\Sigma^2 \text{AW}^\dagger$}; \end{tikzpicture} \end{center} From these, we get induced maps \begin{equation}\label{eq:can_MSst_to_EZst} \text{AW}_{\text{st}} \colon \mathpzc{M}_{\text{st}} \to \mathpzc{Z}_{\text{st}} \hspace{0.75cm} \text{AW}_{\text{st}}^\dagger \colon \mathpzc{M}_{\text{st}}^\dagger \to \mathpzc{Z}_{\text{st}}^\dagger \end{equation} from the stable McClure-Smith chain and cochain operads to the stable Eilenberg-Zilber ones. \subsection{The Stable Barratt-Eccles Operad} We have now constructed stabilizations of both the Eilenberg-Zilber and McClure-Smith operads. In this section, we present a third and final stabilization, that of the Barratt-Eccles operad. This will constitute a second stabilization of an $\mathbb{E}_\infty$ operad. First, we recall the definition of the Barratt-Eccles operad. In spaces, by which we mean simplicial sets, there exists an operad $\mathpzc{E}_{\text{spc}}$ where: \[ \mathpzc{E}_{\text{spc}}(n) = \text{E}\Sigma_n \] Here $\text{E}\Sigma_n$ denotes the total space of the universal $\Sigma_n$-bundle; in particular, in simplicial degree $d$, we have that $(\text{E}\Sigma_n)_d = \Sigma_n^{\times (d+1)}$. This is the operad originally defined by Barratt and Eccles in~\cite{BarrattEccles}. We are interested in dg operads and so we take chains on this operad. \begin{Definition}\label{def:BE_op} The \textit{Barratt-Eccles chain operad}, denoted $\mathpzc{E}$, is the dg operad over $\mathbb{F}_p$ defined by: \[ \mathpzc{E}(n) = \text{C}_\bullet(\text{E}\Sigma_n) \] Moreover, the \textit{Barratt-Eccles cochain operad} is then the operad $\mathpzc{E}^\dagger$. \end{Definition} Here $\text{C}_\bullet$ denotes normalized chains, and we get a dg operad upon taking these because the normalized chains functor is symmetric monoidal.
The chains here are of course taken with coefficients in $\mathbb{F}_p$, so that we get a dg operad over $\mathbb{F}_p$. Note that the Barratt-Eccles chain operad is concentrated in non-negative degrees, while the Barratt-Eccles cochain operad is concentrated in non-positive degrees. \\ As in the previous sections, we begin the stabilization by noting a stabilization map \[ \Psi \colon \Sigma \mathpzc{E} \to \mathpzc{E} \] which will yet again be denoted by $\Psi$ just as in the previous two stabilizations (the context will always make it clear which map we intend by this symbol). In order to define this map, we need to define, for each $n \ge 0$, a map $(\Sigma \mathpzc{E})(n) \to \mathpzc{E}(n)$, which is to say a map $\mathpzc{E}(n)[1-n] \to \mathpzc{E}(n)$. Consider a tuple $(\rho_0,\dots,\rho_{d+n-1})$ in $\mathpzc{E}(n)[1-n]_d = \mathpzc{E}(n)_{d+n-1}$, where each $\rho_i$ is a permutation in $\Sigma_n$. Then we define the image $\Psi((\rho_0,\dots,\rho_{d+n-1}))$ algorithmically as follows: \begin{itemize} \item If $(\rho_0(1),\dots,\rho_{n-1}(1))$ is not a permutation of $(1,\dots,n)$, $\Psi((\rho_0,\dots,\rho_{d+n-1}))$ is zero. \item If $(\rho_0(1),\dots,\rho_{n-1}(1))$ is a permutation of $(1,\dots,n)$, $\Psi((\rho_0,\dots,\rho_{d+n-1}))$ is the tuple $(\rho_{n-1},\dots,\rho_{d+n-1}) \in \mathpzc{E}(n)_{d}$. \end{itemize} One can check that the above algorithmic procedure does indeed yield an operad map $\Psi \colon \Sigma \mathpzc{E} \to \mathpzc{E}$ (see~\cite{BergerFresse}). \begin{Definition} The \textit{stable Barratt-Eccles chain operad}, denoted $\mathpzc{E}_{\text{st}}$, is the operad defined as follows: \[ \mathpzc{E}_{\text{st}} := \underset{\longleftarrow}{\text{lim}}(\cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{E} \overset{\Psi}\longrightarrow \Sigma\mathpzc{E} \overset{\Psi}\longrightarrow \mathpzc{E}) \] The \textit{stable Barratt-Eccles cochain operad} is then the operad $\mathpzc{E}_{\text{st}}^\dagger$. 
\end{Definition} \begin{Remark} By Proposition~\ref{prop:opreindexsusp}, the operadic suspension $\Sigma$ and the reindexing operator $(-)^\dagger$ commute, so that we also have maps $\Sigma^{k+1}\mathpzc{E}^\dagger \to \Sigma^{k}\mathpzc{E}^\dagger$, and moreover, $(-)^\dagger$ clearly also commutes with inverse limits, so that one can also analogously construct the stable Barratt-Eccles cochain operad as the following inverse limit: \[ \mathpzc{E}_{\text{st}}^\dagger := \underset{\longleftarrow}{\text{lim}}(\cdots \overset{\Psi}\longrightarrow \Sigma^2\mathpzc{E}^\dagger \overset{\Psi}\longrightarrow \Sigma\mathpzc{E}^\dagger \overset{\Psi}\longrightarrow \mathpzc{E}^\dagger) \] \hfill $\vert\vert$ \end{Remark} Next, we note a relation between the Barratt-Eccles and McClure-Smith operads. The two chain operads are related via a map: \[ \text{TR} \colon \mathpzc{E} \to \mathpzc{M} \] This map can be described algorithmically as follows: \begin{itemize} \item Consider some tuple $(\rho_0,\dots,\rho_d) \in \mathpzc{E}(n)_d = \text{C}_d(\text{E}\Sigma_n)$, where the $\rho_i$ are permutations of $(n)$. \item Let $r_0,\dots,r_d$ be any positive integers such that $r_0+ \cdots +r_d = n+d$. Note that each $r_i$ is necessarily $\le n$; moreover, each $r_0 + \cdots + r_i$ is necessarily $\le n+i$. Form a sequence $(\rho_0(1),\dots,\rho_0(r_0))$ of length $r_0$ using the first $r_0$ entries of the sequence given by $\rho_0$. Next, form a sequence of length $r_1$ using the first $r_1$ entries of the sequence given by $\rho_1$, but skipping any entry which has already occurred as a non-final entry of a previous sequence (there are $r_0-1$ such entries, and we have $n-(r_0-1)-r_1 = n-r_0-r_1+1 \ge 0$). Now repeat this process to construct sequences of length $r_2,\dots,r_d$. \item Concatenate the $d+1$ sequences constructed in the previous point to construct an indexed sequence of length $r_0+\cdots+r_d = n+d$. This yields a map $f_{(r_0,\dots,r_d)} \colon (n+d) \to (n)$.
If this map is not a surjection or is a degenerate surjection, replace it by zero. \item The image of $(\rho_0,\dots,\rho_d)$ under $\text{TR}_n$ is the sum $\sum f_{(r_0,\dots,r_d)}$ over the tuples $(r_0,\dots,r_d)$. \end{itemize} The verification that the above algorithmic procedure defines a well-defined map of operads may be found in~\cite{BergerFresse}. Moreover, in the same work, it is shown that the map $\text{TR}$ is onto in each arity. Thus we see that the McClure-Smith chain operad $\mathpzc{M}$ is a quotient of the Barratt-Eccles chain operad $\mathpzc{E}$. Moreover, upon reindexing, we also get a map between the corresponding cochain operads: \[ \text{TR}^\dagger \colon \mathpzc{E}^\dagger \to \mathpzc{M}^\dagger \] This map is of course also onto in each arity, so that the McClure-Smith cochain operad $\mathpzc{M}^\dagger$ is a quotient of the Barratt-Eccles cochain operad $\mathpzc{E}^\dagger$. \\ Finally, we note that the stabilization maps for the Barratt-Eccles and McClure-Smith operads are compatible in the sense that the following squares commute (see~\cite{BergerFresse}): \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{E}$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{E}$}; \node [below of = A] (C) {$\Sigma \mathpzc{M}$}; \node [below of = B] (D) {$\mathpzc{M}$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=east]{$\Sigma \text{TR}$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{TR}$}; \draw [->] (C) -- (D) node[midway,anchor=north]{$\Psi$}; \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{E}^\dagger$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{E}^\dagger$}; \node [below of = A] (C) {$\Sigma \mathpzc{M}^\dagger$}; \node [below of = B] (D) {$\mathpzc{M}^\dagger$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C)
node[midway,anchor=east]{$\Sigma \text{TR}^\dagger$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{TR}^\dagger$}; \draw [->] (C) -- (D) node[midway,anchor=north]{$\Psi$}; \end{tikzpicture} \end{center} Combining the above commutative squares with the ones earlier which compared the stabilization maps for the McClure-Smith and Eilenberg-Zilber operads, we have the following commutative diagrams: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{M}$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{M}$}; \node [below of = A] (C) {$\Sigma \mathpzc{Z}_{*}$}; \node [below of = B] (D) {$\mathpzc{Z}_{*}$}; \node [left of = A,xshift=-1.5cm] (E) {$\Sigma^2 \mathpzc{M}$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\Sigma^2 \mathpzc{Z}_{*}$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \node [above of = B] (I) {$\mathpzc{E}$}; \node [above of = A] (J) {$\Sigma \mathpzc{E}$}; \node [above of = E] (K) {$\Sigma^2 \mathpzc{E}$}; \node [above of = F] (L) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\Sigma \text{AW}$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{AW}$}; \draw [->] (C) -- (D) node[midway,anchor=south]{$\Psi$}; \draw [->] (F) -- (E) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (A) node[midway,anchor=south]{$\Psi$}; \draw [->] (H) -- (G) node[midway,anchor=south]{$\Psi$}; \draw [->] (G) -- (C) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\Sigma^2 \text{AW}$}; \draw [->] (L) -- (K) node[midway,anchor=south]{$\Psi$}; \draw [->] (K) -- (J) node[midway,anchor=south]{$\Psi$}; \draw [->] (J) -- (I) node[midway,anchor=south]{$\Psi$}; \draw [->] (K) -- (E) node[midway,anchor=west]{$\Sigma^2 \text{TR}$}; \draw [->] (J) -- (A) node[midway,anchor=west]{$\Sigma \text{TR}$}; \draw [->] (I) -- (B) node[midway,anchor=west]{$\text{TR}$}; \end{tikzpicture} 
\end{center} \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\Sigma \mathpzc{M}^\dagger$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{M}^\dagger$}; \node [below of = A] (C) {$\Sigma \mathpzc{Z}_{*}^\dagger$}; \node [below of = B] (D) {$\mathpzc{Z}_{*}^\dagger$}; \node [left of = A,xshift=-1.5cm] (E) {$\Sigma^2 \mathpzc{M}^\dagger$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\Sigma^2 \mathpzc{Z}_{*}^\dagger$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \node [above of = B] (I) {$\mathpzc{E}^\dagger$}; \node [above of = A] (J) {$\Sigma \mathpzc{E}^\dagger$}; \node [above of = E] (K) {$\Sigma^2 \mathpzc{E}^\dagger$}; \node [above of = F] (L) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\Psi$}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\Sigma \text{AW}^\dagger$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{AW}^\dagger$}; \draw [->] (C) -- (D) node[midway,anchor=south]{$\Psi$}; \draw [->] (F) -- (E) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (A) node[midway,anchor=south]{$\Psi$}; \draw [->] (H) -- (G) node[midway,anchor=south]{$\Psi$}; \draw [->] (G) -- (C) node[midway,anchor=south]{$\Psi$}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\Sigma^2 \text{AW}^\dagger$}; \draw [->] (L) -- (K) node[midway,anchor=south]{$\Psi$}; \draw [->] (K) -- (J) node[midway,anchor=south]{$\Psi$}; \draw [->] (J) -- (I) node[midway,anchor=south]{$\Psi$}; \draw [->] (K) -- (E) node[midway,anchor=west]{$\Sigma^2 \text{TR}^\dagger$}; \draw [->] (J) -- (A) node[midway,anchor=west]{$\Sigma \text{TR}^\dagger$}; \draw [->] (I) -- (B) node[midway,anchor=west]{$\text{TR}^\dagger$}; \end{tikzpicture} \end{center} From this, our canonical maps in (\ref{eq:can_MSst_to_EZst}) now extend to the following sequences of maps: \[ \mathpzc{E}_{\text{st}} \overset{\text{TR}_{\text{st}}}{\xrightarrow{\hspace{12mm}}} \mathpzc{M}_{\text{st}} 
\overset{\text{AW}_{\text{st}}}{\xrightarrow{\hspace{12mm}}} \mathpzc{Z}_{\text{st}} \hspace{0.75cm} \mathpzc{E}_{\text{st}}^\dagger \overset{\text{TR}_{\text{st}}^\dagger}{\xrightarrow{\hspace{12mm}}} \mathpzc{M}_{\text{st}}^\dagger \overset{\text{AW}_{\text{st}}^\dagger}{\xrightarrow{\hspace{12mm}}} \mathpzc{Z}_{\text{st}}^\dagger \] \subsection{The (Co)homologies, in Individual Arities, of the Stable Operads} We have constructed our stabilizations of $\mathbb{E}_\infty$ operads, and now begin a study of these stable operads. More specifically, we constructed two stabilizations of $\mathbb{E}_\infty$ operads, that of the McClure-Smith operad and that of the Barratt-Eccles operad. Henceforth, we shall fix the stable Barratt-Eccles operad as our model for a stable operad, though all results which we mention also hold for the stable McClure-Smith operad. As our first result, we compute the non-equivariant (co)homologies of the terms of the stable Barratt-Eccles operad. To begin, we have a result regarding the stabilization maps for the Barratt-Eccles operad. \begin{Proposition}\label{prop:stabmapontoML} For each $n \ge 0$, the towers \[ \cdots \to (\Sigma^{2}\mathpzc{E})(n) \to (\Sigma\mathpzc{E})(n) \to \mathpzc{E}(n) \hspace{1cm} \cdots \to (\Sigma^{2}\mathpzc{E}^\dagger)(n) \to (\Sigma\mathpzc{E}^\dagger)(n) \to \mathpzc{E}^\dagger(n) \] satisfy the Mittag-Leffler condition. In fact, if $n \ge 1$, the maps in the towers are onto. \end{Proposition} \begin{proof} We shall give a proof of the case of the chain operad; the case of the cochain operad follows by reindexing. First, suppose that $n = 0$. In this case, the Mittag-Leffler property holds because $(\Sigma^k\mathpzc{E})(0)$ is simply $\mathbb{F}_p[k]$, and so the stabilization maps are then necessarily zero maps. Now suppose that $n \ge 1$. We will prove surjectivity of the map $(\Sigma\mathpzc{E})(n) \to \mathpzc{E}(n)$; the surjectivity of the remaining maps is entirely analogous. Let $d \ge 0$.
Then $\mathpzc{E}(n)_d$ is generated by tuples $(\rho_0,\dots,\rho_d)$ where the $\rho_i$ are permutations in $\Sigma_n$. On the other hand, $(\Sigma\mathpzc{E})(n)_d = \mathpzc{E}(n)[1-n]_d = \mathpzc{E}(n)_{d+n-1}$ is generated by tuples $(\rho_0',\dots,\rho'_{d+n-1})$ where the $\rho'_i$ are once again permutations in $\Sigma_n$. Given a particular tuple $(\rho_0,\dots,\rho_d)$ in $\mathpzc{E}(n)_d$, we can of course find permutations $\rho'_1,\dots,\rho'_{n-1}$ in $\Sigma_n$ such that $(\rho'_1(1),\dots,\rho'_{n-1}(1),\rho_0(1))$ is a permutation of $(1,\dots,n)$. Then, by definition of the stabilization map, we have that $(\rho_0,\dots,\rho_d)$ is the image of $(\rho'_1,\dots,\rho'_{n-1},\rho_0,\dots,\rho_d)$, and so we have the desired surjectivity. \end{proof} The above result allows us to compute the non-equivariant (co)homology of the stable Barratt-Eccles operad. The result is that, non-equivariantly, the operads are simply zero (except that the unit is present in arity 1). Later, we shall contrast this with a result which shows that the equivariant (co)homologies, on the other hand, are highly non-trivial. \begin{Proposition}\label{prop:stabhom} We have the following: \[ \emph{H}_\bullet\mathpzc{E}_{\emph{st}}(n) \cong \left\{ \begin{array}{ll} 0 & n \neq 1 \\ \mathbb{F}_p[0] & n=1 \end{array} \right. \hspace{1cm} \emph{H}^\bullet\mathpzc{E}^\dagger_{\emph{st}}(n) \cong \left\{ \begin{array}{ll} 0 & n \neq 1 \\ \mathbb{F}_p[0] & n=1 \end{array} \right. \] \end{Proposition} \begin{proof} We shall give a proof of the case of the chain operad; the case of the cochain operad follows by reindexing. 
By definition, $\mathpzc{E}_{\text{st}}(n)$ is the limit of the following tower: \[ \cdots \to (\Sigma^2\mathpzc{E})(n) \to (\Sigma\mathpzc{E})(n) \to \mathpzc{E}(n) \] By Proposition~\ref{prop:stabmapontoML}, this tower satisfies the Mittag-Leffler condition, and so, for each $d \in \mathbb{Z}$, we have an induced short exact sequence as follows: \[ 0 \to {\lim_k}^1\, \text{H}_{d+1}((\Sigma^k\mathpzc{E})(n)) \to \text{H}_d(\mathpzc{E}_{\text{st}}(n)) \to \lim_k \text{H}_d((\Sigma^k\mathpzc{E})(n)) \to 0 \] Moreover, as $\mathpzc{E}$ is $\mathbb{E}_\infty$, $\text{H}_{d+1}((\Sigma^k\mathpzc{E})(n))$ is simply $\mathbb{F}_p$ if $d + 1 = k - kn$, and zero otherwise; hence the tower comprising the $\text{H}_{d+1}((\Sigma^k\mathpzc{E})(n))$ clearly satisfies the Mittag-Leffler condition itself, the $\lim^1$ term vanishes, and the induced map \[ \text{H}_d(\mathpzc{E}_{\text{st}}(n)) \to \lim_k \text{H}_d((\Sigma^k\mathpzc{E})(n)) \] is in fact an isomorphism, for each $d \in \mathbb{Z}$. The result now follows immediately from the fact that $\mathpzc{E}$ is $\mathbb{E}_\infty$ and $(\Sigma^k\mathpzc{E})(n) = \mathpzc{E}(n)[k-kn]$. \end{proof} \section{Homotopy Theory of Algebras over the Stable Operads} We have constructed the stable Barratt-Eccles operad. In this section, we shall develop a homotopy theory of algebras over this operad, in the sense of Quillen model structures. \subsection{Cell Algebras}\label{sec:cell_alg} In order to develop our homotopy theory, we first need to recall the standard notion of cell algebras over operads. This notion is as follows. \begin{Definition}\label{def:cell_alg} Let $\mathpzc{P}$ be a dg operad over $k$. A \textit{cell $\mathpzc{P}$-algebra} is a $\mathpzc{P}$-algebra $A$ such that there exists a cotower of $\mathpzc{P}$-algebras \[ A_0 \to A_1 \to A_2 \to \cdots \] and a colimiting map from this cotower to $A$, such that: \begin{itemize} \item $A_0$ is the initial $\mathpzc{P}$-algebra, namely $\mathpzc{P}(0)$.
\item For each $n \ge 0$, the map $A_n \to A_{n+1}$ fits into an algebra pushout square \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\mathbf{P}M$}; \node[below= of A](C){$\mathbf{P}\text{C}M$}; \node[right= of A](B){$A_n$}; \node[below= of B,yshift=0.5mm](D){$A_{n+1}$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(A)!.2!(D)$)] \draw +(0,-0.25) -- +(0,0) -- +(0.25,0); \end{scope} \end{tikzpicture} \end{center} where $M$ is a dg module which is degreewise free and has zero differentials. \end{itemize} \end{Definition} \begin{Remark} The condition that the dg module $M$ be degreewise free and have zero differentials is equivalent to the condition that it be a sum of copies of the standard sphere complexes. Moreover, the cone on such a complex, denoted $\text{C}M$ above, is then a sum of the standard disk complexes. (See Section~\ref{sec:nots_convs} for the sphere and disk complexes.) \hfill $\vert\vert$ \end{Remark} More generally, we also have cell maps as follows. \begin{Definition}\label{def:cell_map} Let $\mathpzc{P}$ be a dg operad over $k$. A \textit{cell map} $A \to B$ of $\mathpzc{P}$-algebras is a map such that there exists a cotower of $\mathpzc{P}$-algebras \[ A_0 \to A_1 \to A_2 \to \cdots \] and a colimiting map from this cotower to $B$, such that: \begin{itemize} \item $A_0 = A$ and the map $A_0 \to B$ is the given map $A \to B$.
\item For each $n \ge 0$, the map $A_n \to A_{n+1}$ fits into an algebra pushout square \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\mathbf{P}M$}; \node[below= of A](C){$\mathbf{P}\text{C}M$}; \node[right= of A](B){$A_n$}; \node[below= of B,yshift=0.5mm](D){$A_{n+1}$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(A)!.2!(D)$)] \draw +(0,-0.25) -- +(0,0) -- +(0.25,0); \end{scope} \end{tikzpicture} \end{center} where $M$ is a dg module which is degreewise free and has zero differentials. \end{itemize} \end{Definition} \begin{Remark} Looking at the definitions, we see that a $\mathpzc{P}$-algebra $A$ is a cell algebra if and only if the unique map $\mathpzc{P}(0) \to A$ is a cell map. \hfill $\vert\vert$ \end{Remark} We now describe well-known concrete models of cell algebras. Let $\mathpzc{P}$ be a dg operad over $k$ and let $A$ be a cell $\mathpzc{P}$-algebra. Let also \[ A_0 \to A_1 \to A_2 \to \cdots \] be a cell filtration of $A$ and fix some choices $M_1, M_2, \dots$ for the dg modules which appear in the attachment squares above. For each $n \ge 0$, let $N_n = \oplus_{i \le n} M_i$, where $N_0 = 0$, and let also $N = \oplus_{i \ge 1} M_i$. Then one can construct models for the $A_n$ and $A$, denoted say by $B_n$ and $B$, as follows. Let $\mathpzc{P}^\#$ denote the operad in graded modules formed by forgetting the differentials present in $\mathpzc{P}$. For $n \ge 0$, we have that, as a graded module \[ B_n = \mathbf{P}^\#(N_n[1]) = \bigoplus_{k \ge 0} \mathpzc{P}(k) \otimes_{\Sigma_k} (N_n[1])^{\otimes k} \] and the differentials of the $B_n$ are induced inductively, via the Leibniz rule, by the attachment maps $\mathbf{P}M_{n+1} \to A_n$, together with the operadic composition maps of $\mathpzc{P}$.
In the limit, we have that, as a graded module \[ B = \mathbf{P}^\#(N[1]) = \bigoplus_{k \ge 0} \mathpzc{P}(k) \otimes_{\Sigma_k} (N[1])^{\otimes k} \] and in this case the differential is of course induced by those of the $B_n$. The precise analogue of the statement that the $B_n$ and $B$ are models, respectively, for the $A_n$ and $A$ is that there exists a diagram of isomorphisms of $\mathpzc{P}$-algebras as follows \begin{center} \begin{tikzpicture} \node [] (A) {$A_0$}; \node [right of = A, xshift=1cm] (B) {$A_1$}; \node [right of = B, xshift=1cm] (C) {$A_2$}; \node [right of = C, xshift=1cm] (D) {$\cdots$}; \node [right of = D, xshift=1cm] (E) {}; \node [below of = A] (AA) {$B_0$}; \node [right of = AA, xshift=1cm] (BB) {$B_1$}; \node [right of = BB, xshift=1cm] (CC) {$B_2$}; \node [right of = CC, xshift=1cm] (DD) {$\cdots$}; \node [right of = DD, xshift=1cm] (EE) {}; \draw [->] (A) -- (B); \draw [->] (B) -- (C); \draw [->] (C) -- (D); \draw [->] (AA) -- (BB); \draw [->] (BB) -- (CC); \draw [->] (CC) -- (DD); \draw [->] (A) -- (AA) node[midway,anchor=west]{$\cong$}; \draw [->] (B) -- (BB) node[midway,anchor=west]{$\cong$}; \draw [->] (C) -- (CC) node[midway,anchor=west]{$\cong$}; \end{tikzpicture} \end{center} and this diagram induces, in the limit, an isomorphism $A \to B$. See, for example,~\cite{Mandell} and~\cite{FresseBook}. \subsection{Enveloping Operads}\label{sec:env_op} In the process of developing our homotopy theory, we shall have a need to consider algebra coproducts of the form $A \amalg \mathbf{P}X$ (where $\mathpzc{P}$ is an operad, $\mathbf{P}$ the associated free algebra functor and $A$ a $\mathpzc{P}$-algebra). Such coproducts can be formed with the help of a general construction on $A$, that of the enveloping operad of $A$ (this construction, as the name suggests, also has relations to other notions, such as that of representations of $A$). 
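Before giving the general definition, it may help to record a classical special case (a standard example, included only for orientation; the identification of $\mathpzc{U}^A(j)$ used here is a well-known fact and is not needed in the sequel). Writing $\mathpzc{Com}$ for the operad of unital commutative algebras, the enveloping operad of a $\mathpzc{Com}$-algebra $A$ has $\mathpzc{U}^A(j) \cong A$, with trivial $\Sigma_j$-action, for every $j \ge 0$, and the coproduct $A \amalg \mathbf{P}X$ then takes the familiar tensor product form:

```latex
% For P = Com and a commutative algebra A, one has U^A(j) ≅ A for all j ≥ 0, so
\[
  \mathbf{U}^A X
  \;=\; \bigoplus_{j \ge 0} \mathpzc{U}^A(j) \otimes_{\Sigma_j} X^{\otimes j}
  \;\cong\; \bigoplus_{j \ge 0} A \otimes \bigl(X^{\otimes j}\bigr)_{\Sigma_j}
  \;\cong\; A \otimes S(X),
\]
% where S(X) denotes the free graded-commutative algebra on X, that is, the free
% Com-algebra P(X); the right-hand side A ⊗ S(X) is the coproduct A ⨿ P(X) of
% commutative algebras, as expected.
```

This is the simplest instance of the general relation between the enveloping operad and algebra coproducts which we record below.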
\begin{Definition}\label{def:env_op} Let $\mathpzc{P}$ be a dg operad over $k$ and $A$ a $\mathpzc{P}$-algebra. The \textit{enveloping operad} of $A$, denoted $\mathpzc{U}^A$, is defined as follows. For each $j \ge 0$, the dg $k[\Sigma_j]$-module $\mathpzc{U}^A(j)$ is defined to be the dg module coequalizer \[ \bigoplus_{i \ge 0} \mathpzc{P}(i+j) \otimes_{\Sigma_i} (\mathbf{P}A)^{\otimes i} \rightrightarrows \bigoplus_{i \ge 0} \mathpzc{P}(i+j) \otimes_{\Sigma_i} A^{\otimes i} \to \mathpzc{U}^A(j) \] where one of the two parallel maps is induced by the $\mathpzc{P}$-algebra structure map $\mathbf{P}A \to A$ of $A$ and the other by the composition product of $\mathpzc{P}$. Moreover, the operadic structure maps of $\mathpzc{U}^A$ are induced by those of $\mathpzc{P}$. \end{Definition} We now summarise the facts about enveloping operads which we shall need: \begin{itemize} \item In operadic degree $0$, we have \[ \mathpzc{U}^A(0) \cong A \] where the universal coequalizer map is given by the $\mathpzc{P}$-algebra structure map of $A$, a map $\mathbf{P}A \to A$. \item In operadic degree $1$, as usual, $\mathpzc{U}^A(1)$ forms a unital associative algebra via the composition product $\mathpzc{U}^A(1) \otimes \mathpzc{U}^A(1) \to \mathpzc{U}^A(1)$. By definition, this algebra $\mathpzc{U}^A(1)$ is the \textit{enveloping algebra} of $A$ -- see~\cite{GinzburgKapranov} and~\cite{FresseBook}. \item In the construction of the $\mathpzc{U}^A(j)$, note that the two parallel maps preserve the $i=0$ summands, and moreover that the coequalizer of just these summands is simply $\mathpzc{P}(j)$.
It follows that, for each $A$ and $j \ge 0$, $\mathpzc{U}^{A}(j)$ is equipped with a canonical map $\mathpzc{P}(j) \to\mathpzc{U}^{A}(j)$; in fact, these assemble into a canonical operad map: \[ \mathpzc{P} \to \mathpzc{U}^{A} \] \item An easy check of the definitions and universal properties demonstrates that the enveloping operad $\mathpzc{U}^{\mathpzc{P}(0)}$ of the initial $\mathpzc{P}$-algebra $\mathpzc{P}(0)$ is simply $\mathpzc{P}$; that is, we have: \[ \mathpzc{U}^{\mathpzc{P}(0)} \cong \mathpzc{P} \] \item In the case of a free algebra $\mathbf{P}X$, in forming the enveloping operad, we can simply generate on $X$ rather than $\mathbf{P}X$ and dispense with the relations imposed by the parallel maps in the coequalizer, so that: \[ \mathpzc{U}^{\mathbf{P}X}(j) \cong \bigoplus_{i \ge 0} \mathpzc{P}(i+j) \otimes_{\Sigma_i} X^{\otimes i} \] \item We have an equivalence of categories $\mathpzc{U}^A\text{-}\mathsf{Alg} \simeq \mathpzc{P}\text{-}\mathsf{Alg}_{A/}$ between the category of $\mathpzc{U}^A$-algebras and the category of $\mathpzc{P}$-algebras under $A$. See~\cite{GetzlerJones},~\cite{Mandell} or~\cite{FresseBook} for details. For example, if $B$ is a $\mathpzc{U}^A$-algebra, it is also endowed with a $\mathpzc{P}$-algebra structure by pulling back across the canonical map $\mathpzc{P} \to \mathpzc{U}^{A}$ mentioned above, and moreover, $B$ carries a canonical map from the initial $\mathpzc{U}^A$-algebra, which we have seen above is $\mathpzc{U}^A(0) \cong A$. Note also that this equivalence of categories follows easily in the case of the enveloping operad $\mathpzc{U}^{\mathpzc{P}(0)}$ from the observation above that $\mathpzc{U}^{\mathpzc{P}(0)}$ is simply $\mathpzc{P}$. \item Given any dg module $X$, we have an isomorphism of $\mathpzc{P}$-algebras under $A$, natural in $X$, as follows: \[ \mathbf{U}^AX \cong A \amalg \mathbf{P}X \] See~\cite{Mandell} for this result.
The intuition here is that $\mathpzc{U}^A(j)$ incorporates the relations needed to preserve the algebra structure of $A$, and no relations need be imposed for the free algebra $\mathbf{P}X$. Another argument is as follows. By the equivalence $\mathpzc{U}^A\text{-}\mathsf{Alg} \simeq \mathpzc{P}\text{-}\mathsf{Alg}_{A/}$ above, the left hand side $\mathbf{U}^AX$, being the free $\mathpzc{U}^A$-algebra on $X$, is also the free $\mathpzc{P}$-algebra under $A$ on $X$. On the other hand, we can construct this left adjoint in steps, where we have a left adjoint from $\mathsf{DG}_k$ to $\mathpzc{P}\text{-}\mathsf{Alg}$ given by $\mathbf{P}$ and a left adjoint from $\mathpzc{P}\text{-}\mathsf{Alg}$ to $\mathpzc{P}\text{-}\mathsf{Alg}_{A/}$ given by $A \amalg -$. As such, we get the desired natural isomorphism $A \amalg \mathbf{P}X \cong \mathbf{U}^AX$. Note also that, in the case of enveloping operads of free algebras, this result is clear, since, as above, we have $\mathpzc{U}^{\mathbf{P}X}(j) \cong \oplus_{i \ge 0} \mathpzc{P}(i+j) \otimes_{\Sigma_i} X^{\otimes i}$, and so $\mathbf{U}^{\mathbf{P}X}Y \cong \bigoplus_{i, j \ge 0} \mathpzc{P}(i+j) \otimes_{\Sigma_i \times \Sigma_j} X^{\otimes i} \otimes Y^{\otimes j} \cong \mathbf{P}(X \oplus Y) \cong \mathbf{P}X \amalg \mathbf{P}Y$. \item There are well-known concrete models for the enveloping operad of a cell algebra, and a resulting filtration of such an enveloping operad -- see, for example,~\cite{Mandell}. We record these models and filtrations here, for future reference. Let $\mathpzc{P}$ be a dg operad over $k$ and let $A$ be a cell $\mathpzc{P}$-algebra. Let also $A_0 \to A_1 \to A_2 \to \cdots$ be a cell filtration of $A$ and fix some choices $M_1, M_2, \dots$ for the dg modules which appear in the attachment squares. For each $n \ge 0$, let $N_n = \oplus_{i \le n} M_i$, where $N_0 = 0$, and let also $N = \oplus_{i \ge 1} M_i$.
Then the models for the $\mathpzc{U}^{A_n}(j)$ and $\mathpzc{U}^A(j)$, for $n, j \ge 0$, are as follows. For each $n, j \ge 0$, we have that, as a $k[\Sigma_j]$-module: \[ \mathpzc{U}^{A_n}(j) = \bigoplus_{i \ge 0} \mathpzc{P}(i + j) \otimes_{\Sigma_i} (N_n[1])^{\otimes i} \] Similarly, for each $j \ge 0$, we have that, as a $k[\Sigma_j]$-module: \[ \mathpzc{U}^A(j) = \bigoplus_{i \ge 0} \mathpzc{P}(i + j) \otimes_{\Sigma_i} (N[1])^{\otimes i} \] The differentials on the $\mathpzc{U}^{A_n}(j)$ and on $\mathpzc{U}^A(j)$ are given by the Leibniz rule, the attachment maps and the operadic composition. Thus we see that the operad $\mathpzc{U}^A$ is filtered by the operads $\mathpzc{U}^{A_n}$. Now, as $A_0$ is the initial $\mathpzc{P}$-algebra $\mathpzc{P}(0)$, we have $\mathpzc{U}^{A_0} = \mathpzc{U}^{\mathpzc{P}(0)} = \mathpzc{P}$. The terms of the operad $\mathpzc{U}^{A_1}$ then arise from the terms of $\mathpzc{U}^{A_0}$ by attachment of cells; more generally, for $n \ge 1$, the terms of the operad $\mathpzc{U}^{A_n}$ arise from the terms of $\mathpzc{U}^{A_{n-1}}$ by attachment of cells. This allows us to define, for $n \ge 1$, a filtration on the terms of the operad $\mathpzc{U}^{A_n}$ as follows. Fix such an $n$.
For any $j \ge 0$, we let $\text{F}_m\mathpzc{U}^{A_n}(j)$, where $m \ge 0$, denote the sub graded module of $\mathpzc{U}^{A_n}(j)$ generated by the elements $\sigma \otimes a_1 \otimes \cdots \otimes a_i$ where at most $m$ of the factors $a_1,\dots,a_i \in N_n[1]$ project to a non-zero element in $M_n[1]$ (which constitutes the ``most recently added cells''). Note that, when computing the differential of $\sigma \otimes a_1 \otimes \cdots \otimes a_i$ via the Leibniz rule, any factor $a_r \in M_n[1]$ is mapped, via the attachment map $M_n \to A_{n-1}$, to the corresponding element of $\oplus_{i \ge 0} \mathpzc{P}(i) \otimes_{\Sigma_i} (N_{n-1}[1])^{\otimes i}$; hence the differential preserves the sub graded module $\text{F}_m\mathpzc{U}^{A_n}(j)$, so that we in fact have a sub dg module. Now, given $n \ge 1$ and $j \ge 0$, as graded right $k[\Sigma_j]$-modules, note that we have that \[ \resizebox{.95\hsize}{!}{$\mathpzc{U}^{A_n}(j) = \bigoplus_{i \ge 0} \mathpzc{P}(i + j) \otimes_{\Sigma_i} (N_n[1])^{\otimes i} = \bigoplus_{i \ge 0} \bigoplus_{l=0}^{i} \mathpzc{P}(i+j) \otimes_{\Sigma_{i-l} \times \Sigma_l} N_{n-1}[1]^{\otimes (i-l)} \otimes M_n[1]^{\otimes l}$} \] and, for any $m \ge 0$, the submodule $\text{F}_m\mathpzc{U}^{A_n}(j)$ is then given by: \[ \text{F}_m\mathpzc{U}^{A_n}(j) = \bigoplus_{i \ge 0} \bigoplus_{l=0}^{\min(i,m)} \mathpzc{P}(i+j) \otimes_{\Sigma_{i-l} \times \Sigma_l} N_{n-1}[1]^{\otimes (i-l)} \otimes M_n[1]^{\otimes l} \] Thus we see that, for $m \ge 1$, the inclusions $\text{F}_{m-1}\mathpzc{U}^{A_n}(j) \to \text{F}_m\mathpzc{U}^{A_n}(j)$ are, at the level of the underlying graded modules, split monomorphisms.
We also have that: \begin{align*} \text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j) &\cong \bigoplus_{i \ge m} \mathpzc{P}(i+j) \otimes_{\Sigma_{i-m} \times \Sigma_m} N_{n-1}[1]^{\otimes (i-m)} \otimes M_n[1]^{\otimes m} \\ &\cong \left(\bigoplus_{i \ge m} \mathpzc{P}(i+j) \otimes_{\Sigma_{i-m}} N_{n-1}[1]^{\otimes (i-m)}\right) \otimes_{\Sigma_m} M_n[1]^{\otimes m} \\ &= \left(\bigoplus_{i \ge 0} \mathpzc{P}(i+m+j) \otimes_{\Sigma_i} N_{n-1}[1]^{\otimes i}\right) \otimes_{\Sigma_m} M_n[1]^{\otimes m} \\ &= \mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m} M_n[1]^{\otimes m} \end{align*} Moreover, recall that when we compute the differential of $\sigma \otimes a_1 \otimes \cdots \otimes a_i$ via the Leibniz rule, any factor $a_r \in M_n[1]$ is mapped, via the attachment map $M_n \to A_{n-1}$, to the corresponding element of $\oplus_{i \ge 0} \mathpzc{P}(i) \otimes_{\Sigma_i} N_{n-1}[1]^{\otimes i}$, and so in particular to zero in the quotient $\text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j)$. It follows that the isomorphism \[ \text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j) \cong \mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m} M_n[1]^{\otimes m} \] is in fact one of dg modules, not only of graded modules. \end{itemize} \subsection{Admissibility and Semi-admissibility of Operads} In this section, we consider some generalities on the homotopy theories, in the sense of Quillen model structures, of operads and their algebras. Let $\mathpzc{P}$ be a dg operad over $k$. The most common way to place a model structure on $\mathpzc{P}\text{-}\mathsf{Alg}$ is to pull back the projective model structure on $\mathsf{DG}_k$ across the forgetful functor $\mathpzc{P}\text{-}\mathsf{Alg} \to \mathsf{DG}_k$. This motivates the following standard definition. \begin{Definition}\label{def:admissibility} Let $\mathpzc{P}$ be a dg operad over $k$.
We say that $\mathpzc{P}$ is \textit{admissible} if $\mathpzc{P}\text{-}\mathsf{Alg}$ admits a model structure where the weak equivalences and fibrations are the quasi-isomorphisms and degreewise epimorphisms, respectively. \end{Definition} For example, it is known that cofibrant $\mathbb{E}_\infty$ operads are admissible -- see~\cite{BergerMoerdijk}. \\ In the case of our stable operads, we do not quite have admissibility, but rather a weakened form of it. To describe it, we first need to describe a weakening of model structures, to semi-model structures, which have also been considered in~\cite{White},~\cite{Spitzweck} and~\cite{FresseBook}. In short, a semi-model category is exactly a model category except that the factorization $\rightarrow = \overset{\sim}\hookrightarrow\twoheadrightarrow$ and the lifting property $(\overset{\sim}\hookrightarrow) \boxslash (\twoheadrightarrow)$ (which is to say, the factorization and lifting properties that involve trivial cofibrations) are required to hold only in the case where the source is cofibrant. \begin{Definition}\label{def:category_semi_model_str} A \textit{Quillen semi-model category} is a category $\mathsf{E}$, together with three specified classes of morphisms, $\mathcal{W}$, $\mathcal{C}$ and $\mathcal{F}$, such that the following hold: \begin{itemize}[leftmargin=12mm] \item[SM1:] $\mathsf{E}$ is bicomplete. \item[SM2:] The class $\mathcal{W}$ satisfies 2-out-of-3. \item[SM3:] The classes $\mathcal{W}, \mathcal{C}$ and $\mathcal{F}$ are closed under retracts. \item[SM4:] Given a cofibration $i \colon A \hookrightarrow B$ and a fibration $p \colon X \twoheadrightarrow Y$, we have that $i \boxslash p$ if either $p$ is a weak equivalence, or $i$ is a weak equivalence and $A$ is cofibrant. \item[SM5:] Given any map $A \to B$, it can be factored as a cofibration followed by a trivial fibration, and, if $A$ is cofibrant, it can also be factored as a trivial cofibration followed by a fibration.
\end{itemize} We also add in the following requirement, which does not automatically follow from the above: \begin{itemize}[leftmargin=12mm] \item[SM6:] Fibrations are closed under composition, products and base change. \end{itemize} \end{Definition} With this definition, one can run through the standard arguments for model categories to verify that we can still perform analogous constructions of the derived category and derived functors, with appropriate modifications. In particular, one can construct the derived category via bifibrant replacements of cofibrant objects -- see Theorem 2.13 in~\cite{Mandell}. As for derived functors, the relevant result which we shall need later is the following, for which we refer to Theorems 2.14 and 2.15 in~\cite{Mandell}. \begin{Proposition}\label{prop:semimodelqadj} Let $L \colon \mathsf{E} \to \mathsf{M}$ and $R \colon \mathsf{M} \to \mathsf{E}$ be left and right adjoints between a semi-model category $\mathsf{E}$ and a model category $\mathsf{M}$. Then we have the following: \begin{itemize} \item[(i)] If $L$ preserves cofibrations between cofibrant objects and $R$ preserves fibrations, then the left derived functor of $L$ and the right derived functor of $R$ exist and are adjoint. Moreover, $L$ converts weak equivalences between cofibrant objects to weak equivalences, and the restriction of the left derived functor of $L$ to the cofibrant objects is naturally isomorphic to the derived functor of the restriction of $L$. \item[(ii)] Suppose that (i) holds and in addition for any cofibrant object $A$ in $\mathsf{E}$ and any fibrant object $Y$ in $\mathsf{M}$, a map $A \to RY$ is a weak equivalence if and only if the adjoint $LA \to Y$ is a weak equivalence. Then the left derived functor of $L$ and the right derived functor of $R$ are inverse equivalences.
\end{itemize} Moreover, we also have the following: \begin{itemize} \item[(iii)] The hypothesis in (i) above is equivalent to each of the following: \begin{itemize} \item $L$ preserves cofibrations between cofibrant objects and acyclic cofibrations between cofibrant objects. \item $R$ preserves fibrations and acyclic fibrations. \end{itemize} \end{itemize} \qed \end{Proposition} We can now define our weakening of admissibility for dg operads, which we call semi-admissibility. \begin{Definition}\label{def:semiadmissibility} Given a dg operad $\mathpzc{P}$ over $k$, we say that it is \textit{semi-admissible} if $\mathpzc{P}\text{-}\mathsf{Alg}$ admits a semi-model structure where the weak equivalences and fibrations are the quasi-isomorphisms and degreewise epimorphisms, respectively. \end{Definition} Finally, we mention criteria for the admissibility and semi-admissibility of a dg operad. See~\cite{Hinich1} for the admissibility criterion and~\cite{Mandell} for the semi-admissibility criterion (Mandell does not use the term ``semi-admissibility''; however, the results developed in Section 2 of~\cite{Mandell} are what we have codified in our definition of semi-model structures). Below, by ``disk complex'' we mean the complexes referred to as such in Section~\ref{sec:nots_convs}. \begin{Proposition}\label{prop:amenable_implies_admissible} Let $\mathpzc{P}$ be a dg operad over $k$ and $\mathbf{P}$ the associated free algebra functor. Then we have the following: \begin{itemize} \item[(i)] If, for any $\mathpzc{P}$-algebra $A$, the natural map \[ A \to A \amalg \mathbf{P}(\mathbb{D}^n) \] where $\mathbb{D}^n$ is a disk complex, is a quasi-isomorphism, then $\mathpzc{P}$ is admissible. \item[(ii)] If the above condition holds for any cell $\mathpzc{P}$-algebra, then $\mathpzc{P}$ is semi-admissible. \end{itemize} Moreover, in either case, the cofibrations are exactly the retracts of cell maps.
\qed \end{Proposition} \subsection{The Homotopy Theory of Algebras over the Stable Operads} We can now develop our homotopy theory for algebras over the stable Barratt-Eccles chain and cochain operads. First, we wish to show that we have some homotopical control over these operads by showing that the associated monads preserve quasi-isomorphisms. To show this, we have a couple of preliminary definitions and a lemma. \begin{Definition}\label{def:fin_mod} Given a ring $R$, say that a dg left $R$-module is \textit{finite} if it is bounded above and below and degreewise finitely generated. \end{Definition} \begin{Definition}\label{def:fin_flat} Given a ring $R$ and a dg right $R$-module $C$, say that $C$ is \textit{semi-flat} if, as a functor on dg left $R$-modules, $C \otimes_R -$ preserves quasi-isomorphisms between finite modules. \end{Definition} In the case of the $\mathbb{E}_\infty$ operads $\mathpzc{E}$ and $\mathpzc{E}^\dagger$, we have that, for each $n$, $\mathpzc{E}(n)$ and $\mathpzc{E}^\dagger(n)$ are flat over $\mathbb{F}_p[\Sigma_n]$, as they are free over $\mathbb{F}_p[\Sigma_n]$. In the case of our stable operads, we have the following. \begin{Lemma}\label{lem:finflatEst} For each $n \ge 0$, $\mathpzc{E}_{\emph{st}}(n)$ and $\mathpzc{E}^\dagger_{\emph{st}}(n)$ are semi-flat over $\mathbb{F}_p[\Sigma_n]$. \end{Lemma} \begin{proof} We shall demonstrate the case of the chain operad; the case of the cochain operad is entirely analogous. Fix $n \ge 0$. Let $Z$ be a finite chain complex over $\mathbb{F}_p[\Sigma_n]$. For each $k$, $(\Sigma^k\mathpzc{E})(n)$ is degreewise finitely generated over $\mathbb{F}_p[\Sigma_n]$. Moreover, $Z$ is degreewise finitely presented over $\mathbb{F}_p[\Sigma_n]$ (we have finite generation by assumption and then finite presentation follows because $\mathbb{F}_p[\Sigma_n]$ is Noetherian).
As a result, we can commute tensor product and inverse limit to conclude that: \[ \mathpzc{E}_{\text{st}}(n) \otimes_{\Sigma_n} Z = (\lim_k \: (\Sigma^k\mathpzc{E})(n)) \otimes_{\Sigma_n} Z = \lim_k \: ((\Sigma^k\mathpzc{E})(n) \otimes_{\Sigma_n} Z) \] Next, given a map $f \colon Z \to Z'$ between finite $\mathbb{F}_p[\Sigma_n]$-complexes $Z$ and $Z'$, we can write the induced map $\mathpzc{E}_{\text{st}}(n) \otimes_{\Sigma_n} Z \to \mathpzc{E}_{\text{st}}(n) \otimes_{\Sigma_n} Z'$ as the map induced on inverse limits by the maps $(\Sigma^k\mathpzc{E})(n) \otimes_{\Sigma_n} Z \to (\Sigma^k\mathpzc{E})(n) \otimes_{\Sigma_n} Z'$. If $f$ is a quasi-isomorphism, each of the latter maps $(\Sigma^k\mathpzc{E})(n) \otimes_{\Sigma_n} Z \to (\Sigma^k\mathpzc{E})(n) \otimes_{\Sigma_n} Z'$ is also a quasi-isomorphism. Thus we have the following diagram of quasi-isomorphisms \begin{center} \begin{tikzpicture}[node distance = 2.5cm] \node [] (A) {$(\Sigma \mathpzc{E})(n) \otimes_{\Sigma_n} Z$}; \node [right of = A,xshift=1.5cm] (B) {$\mathpzc{E}(n) \otimes_{\Sigma_n} Z$}; \node [below of = A,yshift=6mm] (C) {$(\Sigma \mathpzc{E})(n) \otimes_{\Sigma_n} Z'$}; \node [below of = B,yshift=6mm] (D) {$\mathpzc{E}(n) \otimes_{\Sigma_n} Z'$}; \node [left of = A,xshift=-1.5cm] (E) {$(\Sigma^2 \mathpzc{E})(n) \otimes_{\Sigma_n} Z$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$(\Sigma^2 \mathpzc{E})(n)\otimes_{\Sigma_n} Z'$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\sim$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\sim$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (F) -- (E) node[midway,anchor=south]{}; \draw [->] (E) -- (A) node[midway,anchor=south]{}; \draw [->] (H) -- (G) node[midway,anchor=south]{}; \draw [->] (G) -- (C) node[midway,anchor=south]{}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\sim$};
\end{tikzpicture} \end{center} and the map $\mathpzc{E}_{\text{st}}(n) \otimes_{\Sigma_n} Z \to \mathpzc{E}_{\text{st}}(n) \otimes_{\Sigma_n} Z'$ is the map induced on the limits of the towers by the vertical arrows in this diagram. Now, it follows easily from Proposition~\ref{prop:stabmapontoML} that both the upper and lower towers satisfy the Mittag-Leffler condition. Thus by the $\text{lim}{}^1$ short exact sequence and the five lemma, the map induced on the limits is itself a quasi-isomorphism. \end{proof} We can now show that the monads associated to the stable Barratt-Eccles chain and cochain operads preserve quasi-isomorphisms. \begin{Proposition}\label{prop:MS_st_pres_w_eqs} The monads $\mathbf{E}_{\normalfont{\textbf{st}}}$ and $\mathbf{E}_{\normalfont{\textbf{st}}}^\dagger$ associated to the stable Barratt-Eccles chain and cochain operads preserve quasi-isomorphisms. \end{Proposition} \begin{proof} We shall demonstrate the case of the chain operad; the case of the cochain operad is entirely analogous. First, recall that the monad $\mathbf{E}$, associated to the unstable Barratt-Eccles chain operad, preserves quasi-isomorphisms, which follows immediately from the fact that, for each $n \ge 0$, $\mathpzc{E}(n)$ is $\mathbb{F}_p[\Sigma_n]$-free. For each $k \ge 0$, let $\Sigma^k\mathbf{E}$ denote the monad associated to the operadic suspension $\Sigma^k\mathpzc{E}$ (note that, despite the notation, the monad is not being suspended, only the operad is). For the same reason as for $\mathbf{E}$, each $\Sigma^k\mathbf{E}$ also preserves quasi-isomorphisms. We first note that $\mathbf{E}_{\textbf{st}}$ preserves quasi-isomorphisms between finite $\mathbb{F}_p$-chain complexes. This follows from Lemma~\ref{lem:finflatEst} and the fact that if $X \to Y$ is a quasi-isomorphism between finite complexes over $\mathbb{F}_p$, then the induced map $X^{\otimes n} \to Y^{\otimes n}$ is a quasi-isomorphism between finite complexes over $\mathbb{F}_p[\Sigma_n]$.
It remains to show that $\textbf{E}_{\textbf{st}}$ preserves quasi-isomorphisms between not necessarily finite $\mathbb{F}_p$-chain complexes. To deduce this from the case of finite complexes, recall that any monad associated to an operad preserves filtered colimits (see~\cite{Rezk}) and also that filtered colimits of complexes are exact. Next, given any chain complex $X$, note that \[ X = \underset{S \subseteq_{\text{fin}} X}{\text{colim}} \: S \] where the colimit ranges over the category of finite subcomplexes $S \subseteq_{\text{fin}} X$ of $X$, which is clearly filtered. Let $f \colon X \to Y$ be a quasi-isomorphism, where $X$ and $Y$ are arbitrary $\mathbb{F}_p$-chain complexes. Because $\textbf{E}_{\textbf{st}}$ preserves filtered colimits, $\textbf{E}_{\textbf{st}}(f)$ is isomorphic to a map \[ \underset{S \subseteq_{\text{fin}} X}{\text{colim}} \: \textbf{E}_{\textbf{st}}(S) \to \underset{T \subseteq_{\text{fin}} Y}{\text{colim}} \: \textbf{E}_{\textbf{st}}(T) \] which we then need to show to be a quasi-isomorphism. We have that these colimits can be taken in chain complexes because filtered colimits of dg operad algebras are created in the category of dg modules. We wish to use the fact that filtered colimits of complexes are exact, but are unable to do so at the moment because there are no induced maps between the summands in the colimits; in fact, the indexing categories for the colimits are not even the same. We remedy this as follows. It is standard that, over $\mathbb{F}_p$, as over any field, any chain complex is isomorphic to a direct sum $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$, where $\mathbb{S}^n$ and $\mathbb{D}^n$ denote the standard sphere and disk complexes (see Section~\ref{sec:nots_convs}).
Note that, given the complex $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$, the homology is given exactly by the spherical summands $\bigoplus_{i \in I} \mathbb{S}^{n_i}$. We now split the proof into two cases. \\ \textit{Case 1:} Suppose that $X = (\bigoplus_{i \in I} \mathbb{S}^{n_i})$, $Y = (\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$ and $f$ is the inclusion $X \hookrightarrow Y$, which is obviously a quasi-isomorphism. Note that every subcomplex $T$ of $Y$ is necessarily a sum of the summands in $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$. For each finite subcomplex $T$ of $Y$, let $S_T$ denote the finite subcomplex of $X$ which contains only the spherical summands which occur in $T$. We thus have that, for each finite subcomplex $T$ of $Y$, $f$ restricts to a map $i_T \colon S_T \to T$ and that this map is itself a quasi-isomorphism. Moreover, we clearly have that \[ X = \underset{T \subseteq_{\text{fin}} Y}{\text{colim}} \: S_T \] as the change of index category simply causes some repeats in the summands. We have thus decomposed the map $f \colon X \to Y$ into the map induced on colimits by the maps $i_T$: \[ X = \underset{T \subseteq_{\text{fin}} Y}{\text{colim}} \: S_T \to \underset{T \subseteq_{\text{fin}} Y}{\text{colim}} \: T = Y \] Moreover, the map $\mathbf{E}_{\textbf{st}}X \to \mathbf{E}_{\textbf{st}}Y$ induced by $f$ is then decomposed as the following: \[ \mathbf{E}_{\textbf{st}}X = \underset{T \subseteq_{\text{fin}} Y}{\text{colim}} \: \textbf{E}_{\textbf{st}}(S_T) \to \underset{T \subseteq_{\text{fin}} Y}{\text{colim}} \: \textbf{E}_{\textbf{st}}(T) = \mathbf{E}_{\textbf{st}}Y \] Finally, this map induced on colimits is a quasi-isomorphism by what we have shown above in the case of finite complexes and the exactness of filtered colimits. 
\\ \textit{Case 2:} Now consider general $X$ and $Y$ and a quasi-isomorphism $f \colon X \to Y$. Let $X = (\bigoplus_{i \in I_1} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J_1} \mathbb{D}^{n_j})$ and let $Y = (\bigoplus_{i \in I_2} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J_2} \mathbb{D}^{n_j})$. Since $f$ is a quasi-isomorphism, it follows that $f$ must restrict to an isomorphism $\bigoplus_{i \in I_1} \mathbb{S}^{n_i} \to \bigoplus_{i \in I_2} \mathbb{S}^{n_i}$. We then get the following commutative square: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\bigoplus_{i \in I_1} \mathbb{S}^{n_i}$}; \node [right of = A,xshift=1.5cm] (B) {$\bigoplus_{i \in I_2} \mathbb{S}^{n_i}$}; \node [below of = A] (C) {$X$}; \node [below of = B] (D) {$Y$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\cong$}; \draw [->] (A) -- (C) node[midway,anchor=east]{$\subseteq$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\subseteq$}; \draw [->] (C) -- (D) node[midway,anchor=north]{$f$}; \end{tikzpicture} \end{center} Applying $\mathbf{E}_{\textbf{st}}$ to this square, the vertical maps become quasi-isomorphisms by Case 1 and the top map an isomorphism, so $\mathbf{E}_{\textbf{st}}(f)$ is a quasi-isomorphism, as desired. \end{proof} Next, to be able to do homotopy theory with algebras over our stable operads, we wish to show that the stable Barratt-Eccles chain and cochain operads $\mathpzc{E}_{\text{st}}$ and $\mathpzc{E}^\dagger_{\text{st}}$ are semi-admissible. We demonstrate this with the help of a few lemmas along the way. Due to the criterion in Proposition~\ref{prop:amenable_implies_admissible}, we are interested in the coproducts $A \amalg \mathbf{E}_{\text{st}}(\mathbb{D}^n)$ and $A \amalg \mathbf{E}_{\text{st}}^\dagger(\mathbb{D}^n)$ for cell algebras $A$. The construction of such coproducts via enveloping operads, which we saw earlier, will lead us to consider the enveloping operads of cell algebras $A$.
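To indicate how the lemmas below will be used (a sketch only, assembling facts already recorded above; the full argument occupies the remainder of this section), we can express the coproduct via the enveloping operad of $A$, so that semi-flatness of the terms $\mathpzc{U}^A(j)$ is exactly what is needed to make the higher summands acyclic:

```latex
% Sketch: by the coproduct formula U^A X ≅ A ⨿ P X from the subsection on
% enveloping operads, applied to the stable chain operad,
\[
  A \amalg \mathbf{E}_{\text{st}}(\mathbb{D}^n)
  \;\cong\; \mathbf{U}^A(\mathbb{D}^n)
  \;=\; \bigoplus_{j \ge 0} \mathpzc{U}^A(j) \otimes_{\Sigma_j} (\mathbb{D}^n)^{\otimes j}.
\]
% The j = 0 summand is U^A(0) ≅ A. For j ≥ 1, the map 0 → (D^n)^{⊗ j} is a
% quasi-isomorphism of finite F_p[Σ_j]-complexes (D^n being finite and acyclic),
% so once U^A(j) is known to be semi-flat over F_p[Σ_j], each higher summand is
% acyclic. Hence A → A ⨿ E_st(D^n) is a quasi-isomorphism, which is what the
% semi-admissibility criterion requires for cell algebras A.
```

The same outline applies verbatim to the cochain operad $\mathpzc{E}^\dagger_{\text{st}}$.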
\begin{Lemma}\label{lem:almost_splitunstable} Let $R$ be a ring and let $i \colon C \to D$ be a map of dg right $R$-modules which is split as a morphism of graded right $R$-modules. Then, if any two of $C$, $D$ and $D/C$ are semi-flat, so is the third. \end{Lemma} This fact also holds with semi-flat replaced by flat, with essentially the same proof -- see Proposition 13.12 in~\cite{Mandell}. \begin{proof} We shall consider the case of chain complexes, the case of cochain complexes differing only in some notations. Given any chain complex $P$ of left $R$-modules, the sequence \[ 0 \to C \otimes_R P \to D \otimes_R P \to (D/C) \otimes_R P \to 0 \] is exact, as tensor is always right exact and the given retraction $r \colon D \to C$ gives an induced retraction $r \otimes_R \text{id}_P \colon D \otimes_R P \to C \otimes_R P$ (at the level of graded modules). This yields a long exact sequence in homology: \[ \cdots \to \text{H}_n(C \otimes_R P) \to \text{H}_n(D \otimes_R P) \to \text{H}_n((D/C) \otimes_R P) \to \cdots \] Given any quasi-isomorphism $P \to Q$ between finite chain complexes of left $R$-modules, we get a morphism of these long exact sequences: \begin{center} \begin{tikzpicture}[node distance = 2cm] \node [] (A) {$\cdots$}; \node [right of = A,xshift=1cm] (B) {$\text{H}_n(C \otimes_R P)$}; \node [right of = B,xshift=1cm] (C) {$\text{H}_n(D \otimes_R P)$}; \node [right of = C,xshift=1cm] (D) {$\text{H}_n((D/C) \otimes_R P)$}; \node [right of = D,xshift=1cm] (E) {$\cdots$}; \node [below of = A] (a) {$\cdots$}; \node [below of = B] (b) {$\text{H}_n(C \otimes_R Q)$}; \node [below of = C] (c) {$\text{H}_n(D \otimes_R Q)$}; \node [below of = D] (d) {$\text{H}_n((D/C) \otimes_R Q)$}; \node [below of = E] (e) {$\cdots$}; \draw [->] (B) -- (b); \draw [->] (C) -- (c); \draw [->] (D) -- (d); \draw [->] (A) -- (B); \draw [->] (B) -- (C); \draw [->] (C) -- (D); \draw [->] (D) -- (E); \draw [->] (a) -- (b); \draw [->] (b) -- (c); \draw [->] (c) -- (d); \draw [->] (d) -- 
(e); \end{tikzpicture} \end{center} The result now follows by the five lemma. \end{proof} \begin{Lemma}\label{lem:change_of_ringsunstable} Let $m, n \ge 0$. Given a semi-flat dg right $\mathbb{F}_p[\Sigma_{m+n}]$-module $M$ and a finite dg left $\mathbb{F}_p[\Sigma_m]$-module $N$, $M \otimes_{\mathbb{F}_p[\Sigma_m]} N$ is semi-flat over $\mathbb{F}_p[\Sigma_n]$. \end{Lemma} This fact also holds with semi-flat replaced by flat (in which case $N$ needn't be finite), with essentially the same proof -- see Proposition 13.13 in~\cite{Mandell}. \begin{proof} Given a finite dg left $\mathbb{F}_p[\Sigma_n]$-module $P$, we have a natural isomorphism \[ (M \otimes_{\mathbb{F}_p[\Sigma_{m}]} N) \otimes_{\mathbb{F}_p[\Sigma_{n}]} P \cong M \otimes_{\mathbb{F}_p[\Sigma_{m+n}]} (\mathbb{F}_p[\Sigma_{m+n}] \otimes_{\mathbb{F}_p[\Sigma_m] \otimes_{\mathbb{F}_p} \mathbb{F}_p[\Sigma_n]} (N \otimes_{\mathbb{F}_p} P)) \] and from this the result follows immediately, noting that $\mathbb{F}_p[\Sigma_{m+n}]$ is flat over $\mathbb{F}_p[\Sigma_m] \otimes_{\mathbb{F}_p} \mathbb{F}_p[\Sigma_n]$, and that $\mathbb{F}_p[\Sigma_{m+n}] \otimes_{\mathbb{F}_p[\Sigma_m] \otimes_{\mathbb{F}_p} \mathbb{F}_p[\Sigma_n]} (N \otimes_{\mathbb{F}_p} P)$ is a finite complex over $\mathbb{F}_p[\Sigma_{m+n}]$ given the finiteness of $N$ and $P$. \end{proof} \begin{Lemma}\label{lem:Estenvopsflat} Let $A$ be a cell $\mathpzc{E}_{\emph{st}}$-algebra or a cell $\mathpzc{E}_{\emph{st}}^\dagger$-algebra. Let also $\mathpzc{U}^A$ denote the associated enveloping operad. Then, for all $j \ge 0$, $\mathpzc{U}^A(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$. \end{Lemma} Note that in the case of the (unstable) Barratt-Eccles operad, an analogous result holds with semi-flat replaced by flat, with a similar proof -- see Lemma 13.6 in~\cite{Mandell}. \begin{proof} We shall demonstrate the result in the case of the chain operad $\mathpzc{E}_{\text{st}}$, the case of the cochain operad being entirely analogous. 
Let \[ A_0 \to A_1 \to A_2 \to \cdots \] be a cell filtration of $A$ and fix some choices $M_1, M_2, \dots$ for the chain complexes which appear in the attachment squares. For each $n \ge 0$, let $N_n = \oplus_{i \le n} M_i$, so that $N_0 = 0$, and let also $N = \oplus_{i \ge 0} M_i$. As in Section~\ref{sec:env_op}, we have that, for each $j \ge 0$, as a graded right $\mathbb{F}_p[\Sigma_j]$-module: \[ \mathpzc{U}^A(j) = \bigoplus_{i \ge 0} \mathpzc{E}_{\text{st}}(i + j) \otimes_{\Sigma_i} (N[1])^{\otimes i} \] The differential on $\mathpzc{U}^A(j)$, we recall, is given by the Leibniz rule, the attachment maps and the operadic composition. Moreover, for each $n \ge 0$, and again for each $j \ge 0$, as a graded right $\mathbb{F}_p[\Sigma_j]$-module: \[ \mathpzc{U}^{A_n}(j) = \bigoplus_{i \ge 0} \mathpzc{E}_{\text{st}}(i + j) \otimes_{\Sigma_i} (N_n[1])^{\otimes i} \] Recall also, from Section~\ref{sec:env_op}, that we have filtrations $\text{F}_m\mathpzc{U}^{A_n}$ of the $\mathpzc{U}^{A_n}$. We shall prove the desired result by an induction. We show that, for each $m,j,n \ge 0$, $\text{F}_m\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$, and we will do this by inducting on $n$. In the case $n=0$, as in Section~\ref{sec:env_op}, we have that $\mathpzc{U}^{A_0} = \mathpzc{E}_{\text{st}}$, and moreover that $\text{F}_m\mathpzc{U}^{A_0}(j) = \mathpzc{E}_{\text{st}}(j)$ for all $m, j \ge 0$. The required semi-flatness then follows by Lemma~\ref{lem:finflatEst}. Suppose now that, for some $n \ge 1$, we have that $\text{F}_m\mathpzc{U}^{A_{n-1}}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$ for all $m,j \ge 0$. We wish to show that $\text{F}_m\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$ for all $m, j \ge 0$. We shall do this by inducting over $m$.
By definition of the filtration piece $\text{F}_0$, we have that, for each $j \ge 0$, $\text{F}_0\mathpzc{U}^{A_n}(j) = \mathpzc{U}^{A_{n-1}}(j) = \text{colim}_m\,\text{F}_m\mathpzc{U}^{A_{n-1}}(j)$ which, by invoking the inductive hypothesis for the induction over $n$ and passing to the colimit, we see is semi-flat over $\mathbb{F}_p[\Sigma_j]$. Next, suppose that for some $m \ge 1$, $\text{F}_{m-1}\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$. As in Section~\ref{sec:env_op}, we have that: \[ \text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j) \cong \mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m} M_n[1]^{\otimes m} \] Now, by invoking the inductive hypothesis for the induction over $n$ and passing to the colimit, we see that $\mathpzc{U}^{A_{n-1}}(m+j) = \text{colim}_{m'}\,\text{F}_{m'}\mathpzc{U}^{A_{n-1}}(m+j)$ is semi-flat over $\mathbb{F}_p[\Sigma_{m+j}]$. Moreover, by Lemma~\ref{lem:change_of_ringsunstable}, we have that $\mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m} M_n[1]^{\otimes m}$ is then semi-flat over $\mathbb{F}_p[\Sigma_j]$ so long as $M_n$ is finite. In fact, this holds for arbitrary $M_n$ as a non-finite $M_n$ can be written as a filtered colimit of its finite subcomplexes and both the tensor product $\mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m} -$ and the tensor power $(-)^{\otimes m}$ commute with filtered colimits. Next, recalling that the inclusion $\text{F}_{m-1}\mathpzc{U}^{A_n}(j) \to \text{F}_m\mathpzc{U}^{A_n}(j)$ is split at the level of the underlying graded modules (see Section~\ref{sec:env_op}), we may now invoke the inductive hypothesis for the induction over $m$ and apply Lemma~\ref{lem:almost_splitunstable} to conclude that $\text{F}_m\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$, as desired. This completes the induction over $m$ so that we have that $\text{F}_m\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$ for all $m,j \ge 0$.
This conclusion then completes the induction over $n$, so that we have that $\text{F}_m\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$ for all $m,j,n \ge 0$. Finally then, if we fix a $j \ge 0$, upon passing to the colimit, we have that $\mathpzc{U}^{A_n}(j) = \text{colim}_m\,\text{F}_m\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$, and then, passing to the colimit again, we have the desired result that $\mathpzc{U}^{A}(j) = \text{colim}_n\,\mathpzc{U}^{A_n}(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$, which completes the proof. \end{proof} We now use the above lemmas to demonstrate that the operads $\mathpzc{E}_{\text{st}}$ and $\mathpzc{E}_{\text{st}}^\dagger$ are semi-admissible. \begin{Proposition}\label{prop:E_adm} The Barratt-Eccles chain and cochain operads, $\mathpzc{E}_{\emph{st}}$ and $\mathpzc{E}^\dagger_{\emph{st}}$, are semi-admissible. \end{Proposition} \begin{proof} We shall demonstrate the case of the chain operad, the case of the cochain operad being entirely analogous. By Proposition~\ref{prop:amenable_implies_admissible}, it suffices to show that, if $A$ is a cell $\mathpzc{E}_{\text{st}}$-algebra, then for each $n \in \mathbb{Z}$, the canonical map \[ A \to A \amalg \mathbf{E}_{\mathbf{st}}(\mathbb{D}^n) \] is a quasi-isomorphism; here $\mathbb{D}^n$ is a disk complex (see Section~\ref{sec:nots_convs}). As per the facts about enveloping operads in Section~\ref{sec:env_op}, we have that, as an algebra under $A$: \[ A \amalg \mathbf{E}_{\mathbf{st}}(\mathbb{D}^n) \cong \mathbf{U}^A(\mathbb{D}^n) = \bigoplus_{j \ge 0} \mathpzc{U}^A(j) \otimes_{\Sigma_j} (\mathbb{D}^n)^{\otimes j} = A \oplus \left(\bigoplus_{j \ge 1} \mathpzc{U}^A(j) \otimes_{\Sigma_j} (\mathbb{D}^n)^{\otimes j}\right) \] Now, for $j \ge 1$, $(\mathbb{D}^n)^{\otimes j}$ has zero homology and is finite.
Moreover, by Lemma~\ref{lem:Estenvopsflat}, $\mathpzc{U}^A(j)$ is semi-flat over $\mathbb{F}_p[\Sigma_j]$, so that this zero homology is preserved by the tensor, which gives us the desired result. \end{proof} Recalling the definition of semi-admissibility, and the final part of Proposition~\ref{prop:amenable_implies_admissible}, we get the following. \begin{Corollary}\label{cor:stablesemimod} The categories of algebras $\mathpzc{E}_{\emph{st}}\text{-}\mathsf{Alg}$ and $\mathpzc{E}^\dagger_{\emph{st}}\text{-}\mathsf{Alg}$ possess a Quillen semi-model structure where: \begin{itemize} \item The weak equivalences are the quasi-isomorphisms. \item The fibrations are the surjective maps. \item The cofibrations are retracts of relative cell complexes, where the cells are the maps $\mathbf{E}_{\mathbf{st}}M \to \mathbf{E}_{\mathbf{st}}\emph{C}M$ in the chain case, and the maps $\mathbf{E}_{\mathbf{st}}^\dagger M \to \mathbf{E}_{\mathbf{st}}^\dagger\emph{C}M$ in the cochain case, where $M$ is a degreewise free complex with zero differentials. \end{itemize} \qed \end{Corollary} \section{Cohomology Operations for Algebras over the Stable Operads}\label{sec:cohom_ops} Given an algebra $A$ over the (unstable) Barratt-Eccles operad, the (co)homology of $A$ possesses natural operations. In the case of the chain operad, these are the generalized Dyer-Lashof operations, and in the case of the cochain operad, these are the generalized Steenrod operations. In this section, we shall demonstrate that algebras over the stable Barratt-Eccles operad also have natural (co)homology operations, though they are now stable operations in a precise sense, to be described below. In fact, for the sake of brevity, henceforth, we shall work with only the stable Barratt-Eccles cochain operad, though all that we say will have clear analogues for the stable Barratt-Eccles chain operad. Throughout this section, the ground field will be $\mathbb{F}_p$, for an unspecified but fixed prime $p$.
\subsection{The Cohomology Operations I}\label{sec:stabops} Let $A$ be an algebra over $\mathpzc{E}_{\text{st}}^\dagger$. Our aim is to show that there are induced natural operations \[ P^s \colon \text{H}^\bullet(A) \to \text{H}^{\bullet}(A) \] and also \[ \beta P^s \colon \text{H}^\bullet(A) \to \text{H}^{\bullet}(A) \] in the case $p > 2$. Recall that such operations exist in the case of the unstable operad $\mathpzc{E}^\dagger$ (for a construction of them, see~\cite{May}). In fact, in our stable case, we will have some other operations on $\text{H}^\bullet(A)$ as well. The general construction of all of them will be found in Section~\ref{sec:stabops2} below. Here, we wish to give a restricted but more concrete construction of them. We shall restrict ourselves in this section alone to the case $p = 2$; analogous explicit considerations in the $p > 2$ case are also possible, though more cumbersome. \\ To begin, recall that, as in~\cite{May}, the operations, when $p = 2$, in the case of the unstable operad are defined with the help of the arity $2$ part of the operad $\mathpzc{E}^\dagger$. As such, our first goal is to examine the arity $2$ part of the stable operad $\mathpzc{E}_{\text{st}}^\dagger$. Examining the definition of the Barratt-Eccles cochain operad, one finds that the cochain complex $\mathpzc{E}^\dagger(2)$ is isomorphic to the standard $\mathbb{F}_2[\Sigma_2]$-free resolution of $\mathbb{F}_2$, namely: \begin{equation}\tag*{$\mathpzc{E}^\dagger(2):$} \cdots \longrightarrow \underset{\text{deg}\, -2}{\mathbb{F}_2[\Sigma_2]} \overset{1+\tau}{\longrightarrow} \underset{\text{deg}\, -1}{\mathbb{F}_2[\Sigma_2]} \overset{1+\tau}{\longrightarrow} \underset{\text{deg}\, 0}{\mathbb{F}_2[\Sigma_2]} \longrightarrow 0 \longrightarrow \cdots \end{equation} Here, $\tau$ denotes the non-trivial permutation of $\{1,2\}$.
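To see that this complex is indeed a free resolution of $\mathbb{F}_2$ (a standard check, recorded here for convenience), note that for $a, b \in \mathbb{F}_2$ we have $(a + b\tau)(1+\tau) = (a+b)(1+\tau)$ in $\mathbb{F}_2[\Sigma_2]$, so that \[ \ker(1+\tau) = \mathbb{F}_2\{1+\tau\} = \operatorname{im}(1+\tau). \] Hence the complex is exact in all negative degrees, while in degree $0$ we have $\mathbb{F}_2[\Sigma_2]/\operatorname{im}(1+\tau) \cong \mathbb{F}_2$, generated by the image of $1$.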
To be more specific, the isomorphism between the above complex and $\mathpzc{E}^\dagger(2)$, in degree $-d$, for $d \ge 0$, sends $1$ to $(1, \tau, 1, \tau, \dots)$ (a sequence containing $d+1$ elements). Now let us see what the arity $2$ part of our stable operad looks like. \begin{Proposition}\label{prop:Est(2)} The cochain complex $\mathpzc{E}_{\emph{st}}^\dagger(2)$ is isomorphic to the following: \begin{equation}\tag*{$\mathpzc{E}^\dagger_{\text{st}}(2):$} \cdots \longrightarrow \mathbb{F}_2[\Sigma_2] \overset{1+\tau}{\longrightarrow} \mathbb{F}_2[\Sigma_2] \overset{1+\tau}{\longrightarrow} \mathbb{F}_2[\Sigma_2] \overset{1+\tau}{\longrightarrow} \mathbb{F}_2[\Sigma_2] \longrightarrow \cdots \end{equation} \end{Proposition} \begin{proof} We have a description of $\mathpzc{E}^\dagger(2)$ above. Moreover, for each $k \ge 0$, we have that $(\Sigma^k\mathpzc{E}^\dagger)(2) = \mathpzc{E}^\dagger(2)[k]$. Now, given as input a tuple $(\rho_0,\dots,\rho_d)$ of permutations $\rho_i \in \Sigma_2$, for some $d \ge 0$, by definition, the stabilization map $(\Sigma^{k+1}\mathpzc{E}^\dagger)(2) \to (\Sigma^k\mathpzc{E}^\dagger)(2)$ simply drops the first entry of the tuple.
It follows that the inverse limit is then the desired complex -- to see this, note that, if we write the complexes $(\Sigma^k\mathpzc{E}^\dagger)(2)$ vertically, the tower $\cdots \to (\Sigma^2\mathpzc{E}^\dagger)(2) \to (\Sigma\mathpzc{E}^\dagger)(2) \to \mathpzc{E}^\dagger(2)$ looks as follows: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node[](A){$\vdots$}; \node[below= of A](B){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of B](C){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of C](D){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of D](E){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of E](F){$0$}; \node[below= of F](G){$\vdots$}; \node[right= of A,xshift=5mm](AA){$\vdots$}; \node[below= of AA](BB){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of BB](CC){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of CC](DD){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of DD](EE){$0$}; \node[below= of EE,yshift=-1mm](FF){$0$}; \node[below= of FF](GG){$\vdots$}; \node[right= of AA,xshift=5mm](AAA){$\vdots$}; \node[below= of AAA](BBB){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of BBB](CCC){$\mathbb{F}_2[\Sigma_2]$}; \node[below= of CCC](DDD){$0$}; \node[below= of DDD,yshift=-1mm](EEE){$0$}; \node[below= of EEE,yshift=-1mm](FFF){$0$}; \node[below= of FFF](GGG){$\vdots$}; \node[left of= B,xshift=-5mm](BL){$\cdots$}; \node[left of= C,xshift=-5mm](CL){$\cdots$}; \node[left of= D,xshift=-5mm](DL){$\cdots$}; \node[left of= E,xshift=-5mm](EL){$\cdots$}; \node[left of= F,xshift=-5mm](FL){$\cdots$}; \draw[->] (A) -- (B) node[midway,anchor=west]{}; \draw[->] (B) -- (C) node[midway,anchor=west]{$1+\tau$}; \draw[->] (C) -- (D) node[midway,anchor=west]{$1+\tau$}; \draw[->] (D) -- (E) node[midway,anchor=west]{$1+\tau$}; \draw[->] (E) -- (F) node[midway,anchor=west]{}; \draw[->] (F) -- (G) node[midway,anchor=west]{}; \draw[->] (AA) -- (BB) node[midway,anchor=west]{}; \draw[->] (BB) -- (CC) node[midway,anchor=west]{$1+\tau$}; \draw[->] (CC) -- (DD) node[midway,anchor=west]{$1+\tau$}; \draw[->] (DD) -- (EE) 
node[midway,anchor=west]{}; \draw[->] (EE) -- (FF) node[midway,anchor=west]{}; \draw[->] (FF) -- (GG) node[midway,anchor=west]{}; \draw[->] (AAA) -- (BBB) node[midway,anchor=west]{}; \draw[->] (BBB) -- (CCC) node[midway,anchor=west]{$1+\tau$}; \draw[->] (CCC) -- (DDD) node[midway,anchor=west]{}; \draw[->] (DDD) -- (EEE) node[midway,anchor=west]{}; \draw[->] (EEE) -- (FFF) node[midway,anchor=west]{}; \draw[->] (FFF) -- (GGG) node[midway,anchor=west]{}; \draw[->] (B) -- (BB) node[midway,anchor=south]{$\tau$}; \draw[->] (BB) -- (BBB) node[midway,anchor=south]{$\tau$}; \draw[->] (C) -- (CC) node[midway,anchor=south]{$\tau$}; \draw[->] (CC) -- (CCC) node[midway,anchor=south]{$\tau$}; \draw[->] (D) -- (DD) node[midway,anchor=south]{$\tau$}; \draw[->] (DD) -- (DDD) node[midway,anchor=south]{}; \draw[->] (E) -- (EE) node[midway,anchor=south]{}; \draw[->] (EE) -- (EEE) node[midway,anchor=south]{}; \draw[->] (F) -- (FF) node[midway,anchor=south]{}; \draw[->] (FF) -- (FFF) node[midway,anchor=south]{}; \draw[->] (BL) -- (B) node[midway,anchor=west]{}; \draw[->] (CL) -- (C) node[midway,anchor=west]{}; \draw[->] (DL) -- (D) node[midway,anchor=west]{}; \draw[->] (EL) -- (E) node[midway,anchor=west]{}; \draw[->] (FL) -- (F) node[midway,anchor=west]{}; \end{tikzpicture} \end{center} \end{proof} \begin{Remark} We saw earlier, in Proposition~\ref{prop:stabhom}, that the non-equivariant homology of $\mathpzc{E}_{\text{st}}^\dagger(2)$ is zero. 
On the other hand, Proposition~\ref{prop:Est(2)} above and an easy calculation show that the arity $2$ equivariant homology of $\mathpzc{E}_{\text{st}}^\dagger$, by which we mean the homology of $\mathpzc{E}_{\text{st}}^\dagger(2)/\Sigma_2$, consists of exactly one $\mathbb{F}_2$ generator in each degree (indeed, each term of $\mathpzc{E}_{\text{st}}^\dagger(2)/\Sigma_2$ is a copy of $\mathbb{F}_2$, on which the induced differential $1+\tau$ acts as $1+1 = 0$): \begin{equation}\tag*{$\text{H}_\bullet(\mathpzc{E}_{\text{st}}^\dagger(2)/\Sigma_2):$} \hspace{-1.95mm} \cdots \hspace{1cm} \underset{\text{deg}\,-1}{\mathbb{F}_2} \hspace{1cm} \underset{\text{deg}\,0}{\mathbb{F}_2} \hspace{1cm} \underset{\text{deg}\,1}{\mathbb{F}_2} \hspace{1cm} \cdots \end{equation} For comparison, in the unstable case, via the description of $\mathpzc{E}^\dagger(2)$ above and another easy calculation, we have that the homology of $\mathpzc{E}^\dagger(2)/\Sigma_2$ consists of exactly one $\mathbb{F}_2$ generator in each non-positive degree: \begin{equation}\tag*{$\text{H}_\bullet(\mathpzc{E}^\dagger(2)/\Sigma_2):$} \cdots \hspace{1cm} \underset{\text{deg}\,-1}{\mathbb{F}_2} \hspace{1cm} \underset{\text{deg}\, 0}{\mathbb{F}_2} \hspace{1cm} \underset{\text{deg}\,1}{0} \hspace{1cm} \cdots \end{equation} In fact, we will see below that it is exactly the generators of these equivariant arity $2$ homologies that give rise to the cohomology operations, and moreover, that the existence of generators in positive degrees in the stable case results in a stability property of these operations. We will also see that the higher arity equivariant cohomologies correspond to iterated operations and that the total equivariant cohomology is highly non-trivial, despite the trivial non-equivariant cohomology. \hfill $\vert\vert$ \end{Remark} Let us set in place some standardized notations to work with the unstable and stable Barratt-Eccles operads.
For $d \le 0$, we define $e_{d}^{\text{un}}$ to be the element $(1, \tau, 1, \tau, \dots)$ (a sequence of $|d|+1$ elements) of $\mathpzc{E}^\dagger(2)^{d}$; if $d > 0$, we set $e_{d}^{\text{un}}$ to be zero in $\mathpzc{E}^\dagger(2)^{d}$. On the other hand, for each $d \in \mathbb{Z}$, we define $e_d^{\text{st}}$, an element of $\mathpzc{E}_{\text{st}}^\dagger(2)^{d}$, as follows: if $d \le 0$, $e_d^{\text{st}} := (e_d^{\text{un}}, e_{d+1}^{\text{un}} \cdot \tau, e_{d+2}^{\text{un}}, e_{d+3}^{\text{un}} \cdot \tau, \dots)$; if $d > 0$, it is the element $e_d^{\text{st}} := (0, \dots, 0, e_0^{\text{un}}, e_{1}^{\text{un}} \cdot \tau, e_{2}^{\text{un}}, e_{3}^{\text{un}} \cdot \tau, \dots)$, where there are $|d|$ leading zeros. \begin{Remark}\label{rmk:edtwice} By the definition of $\mathpzc{E}_{\text{st}}^\dagger$ as an inverse limit, we have a canonical map \[ \mathpzc{E}_{\text{st}}^\dagger \to \mathpzc{E}^\dagger \] from the stable Barratt-Eccles operad to the Barratt-Eccles operad, which, for any $n$, sends an infinite tuple in $\mathpzc{E}_{\text{st}}^\dagger(n)$ to its first entry. By the definition of the $e_d^{\text{un}}$ and $e_d^{\text{st}}$, we find that, in arity $2$, for each $d \in \mathbb{Z}$, this map sends $e_d^{\text{st}}$ to $e_d^{\text{un}}$. \hfill $\vert\vert$ \end{Remark} We can now define the cohomology operations for algebras over the stable Barratt-Eccles cochain operad, at least when $p = 2$. \begin{Proposition}\label{prop:opsalgEstconstrp=2} Given an algebra $A$ over $\mathpzc{E}_{\emph{st}}^\dagger$, for each $s \in \mathbb{Z}$ and $[a] \in \emph{H}^q(A)$, by setting \[ P^s([a]) = [(e_{s-q}^{\emph{st}})_*(a,a)] \] we get a well-defined graded map \[ P^s \colon \emph{H}^\bullet(A) \to \emph{H}^\bullet(A) \] which is linear over $\mathbb{F}_2$, of degree $s$ and natural in $A$. 
\end{Proposition} Here we use the notation $(e_{s-q}^{\text{st}})_*(a,a)$ for the image of $e_{s-q}^{\text{st}} \otimes a \otimes a$ under $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes A^{\otimes 2} \to A$. \begin{proof} First, let us check that the operations are well-defined. This follows from the following two facts, which we shall demonstrate: (i) given a cocycle $a$ in $A$, $e_d^{\text{st}} \otimes a \otimes a$ is, for any $d$, a cocycle in $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes_{\Sigma_2} A^{\otimes 2}$; (ii) if $a$ and $a'$ are cohomologous cocycles in $A$, then $e_d^{\text{st}} \otimes a \otimes a$ and $e_d^{\text{st}} \otimes a' \otimes a'$ are cohomologous cocycles in $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes_{\Sigma_2} A^{\otimes 2}$. In what follows, we abbreviate $e_d^{\text{st}}$ to $e_d$. Consider (i) first. This follows from the following identities, which hold in $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes_{\Sigma_2} A^{\otimes 2}$: \begin{align*} \partial (e_d \otimes a \otimes a) &= (e_{d+1} \cdot (1 + \tau)) \otimes a \otimes a \\ &= e_{d+1} \otimes ((1+\tau) \cdot a \otimes a) \\ &= e_{d+1} \otimes a \otimes a + e_{d+1} \otimes (\tau \cdot a \otimes a) \\ &= e_{d+1} \otimes a \otimes a + e_{d+1} \otimes a \otimes a = 0 \end{align*} Next, consider (ii). We need to show that $e_d \otimes a \otimes a - e_d \otimes a' \otimes a'$ is a coboundary in $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes_{\Sigma_2} A^{\otimes 2}$. By assumption, we know that $a - a'$ is a coboundary in $A$; let $a - a' = \partial b$. The desired result then follows from the following easily verifiable identity: \[ \partial (e_d \otimes a \otimes b + e_d \otimes b \otimes a' + e_{d+1} \otimes b \otimes b) = e_d \otimes a \otimes a - e_d \otimes a' \otimes a' \] Next, let us verify linearity over $\mathbb{F}_2$.
First, we have homogeneity as follows: \begin{align*} P^s(\lambda [a]) &= P^s([\lambda a]) \\ &= [(e_{s-q})_*(\lambda a, \lambda a)] \\ & = \lambda^2 [(e_{s-q})_*(a, a)] \\ &= \lambda [(e_{s-q})_*(a, a)] \\ &= \lambda P^s([a]) \end{align*} As for additivity, let $[a],[b] \in \text{H}^q(A)$. Consider $e_{s-q} \otimes (a + b) \otimes (a + b) - e_{s-q} \otimes a \otimes a - e_{s-q} \otimes b \otimes b$ as an element of $\mathpzc{E}^\dagger_{\text{st}}(2) \otimes_{\Sigma_2} A^{\otimes 2}$. It suffices to show that this element is a coboundary in $\mathpzc{E}^\dagger_{\text{st}}(2) \otimes_{\Sigma_2} A^{\otimes 2}$. We have that $e_{s-q} \otimes (a + b) \otimes (a + b) - e_{s-q} \otimes a \otimes a - e_{s-q} \otimes b \otimes b = e_{s-q} \otimes a \otimes b + e_{s-q} \otimes b \otimes a$, and then the result follows by the following: \begin{align*} e_{s-q} \otimes a \otimes b + e_{s-q} \otimes b \otimes a &= e_{s-q} \otimes a \otimes b + e_{s-q} \tau \otimes a \otimes b \\ &= e_{s-q} (\tau + 1) \otimes a \otimes b \\ &= \partial (e_{s-q-1} \otimes a \otimes b) \end{align*} (For the last equality, note that $a$ and $b$ are cocycles and that $\partial e_{s-q-1} = e_{s-q} \cdot (1 + \tau)$.) To see that $P^s$ is homogeneous of degree $s$, simply note that, given $[a] \in \text{H}^q(A)$, in $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes A^{\otimes 2}$, $e_{s-q} \otimes a \otimes a$ has degree $(s-q)+q+q = q+s$. Finally, we verify naturality. Let $f \colon A \to B$ be a map of algebras over $\mathpzc{E}_{\text{st}}^\dagger$.
We need to show that, for each $s \in \mathbb{Z}$, the following square commutes: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\text{H}^\bullet(A)$}; \node [below of = A] (B) {$\text{H}^\bullet(B)$}; \node [right of = A,xshift=1cm] (C) {$\text{H}^\bullet(A)$}; \node [right of = B,xshift=1cm] (D) {$\text{H}^\bullet(B)$}; \node [below right of = A,yshift=3mm,xshift=2mm] (E) {$\mathrel{\rotatebox[origin=c]{-90}{$\circlearrowright$}}$ ?}; \draw [->] (A) -- (B) node[midway,anchor=east]{$f_*$}; \draw [->] (A) -- (C) node[midway,anchor=south]{$P^s$}; \draw [->] (C) -- (D) node[midway,anchor=west]{$f_*$}; \draw [->] (B) -- (D) node[midway,anchor=north]{$P^s$}; \end{tikzpicture} \end{center} This follows from the commutativity of the following diagram: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (D) {$\text{H}_\bullet(\mathpzc{E}_{\text{st}}^\dagger(2) \otimes_{\Sigma_2} A^{\otimes 2})$}; \node [right of = D,xshift=4cm] (E) {$\text{H}_\bullet(A)$}; \node [below of = D] (DD) {$\text{H}_\bullet(\mathpzc{E}_{\text{st}}^\dagger(2) \otimes_{\Sigma_2} B^{\otimes 2})$}; \node [right of = DD,xshift=4cm] (EE) {$\text{H}_\bullet(B)$}; \draw [->] (D) -- (E) node[midway,anchor=south]{$\mathpzc{E}_{\text{st}}^\dagger$-action}; \draw [->] (DD) -- (EE) node[midway,anchor=north]{$\mathpzc{E}_{\text{st}}^\dagger$-action}; \draw [->] (D) -- (DD) node[midway,anchor=east]{$f_*$}; \draw [->] (E) -- (EE) node[midway,anchor=west]{$f_*$}; \end{tikzpicture} \end{center} \end{proof} We now make various remarks regarding these stable operations and how they compare with the corresponding unstable operations: \begin{itemize} \item In the case of the unstable Barratt-Eccles operad, the associated cohomology operations, which is to say the generalized Steenrod operations, are defined by exactly the same formula, $P^s([a]) = [(e_{s-q}^{\text{un}})_*(a,a)]$, except that we replace $e_{s-q}^{\text{st}}$ by $e_{s-q}^{\text{un}}$; see~\cite{May} for the 
construction of these operations. Note that since $e_d^{\text{un}}$ is zero for $d > 0$, these operations are zero when $s > q = |a|$. This property, that $P^s[a] = 0$ when $s > |a|$, is known as the \textit{instability} of the operations. In our stable case, we have non-zero generators $e_{d}^{\text{st}}$ even in positive degrees, and so our stable operations do not satisfy the instability property, as one would hope. In fact, we can see the disappearance of the instability of the operations in an iterative manner, as follows. We know that the operations in the case of algebras over $\mathpzc{E}^\dagger$ satisfy instability. By an analogous construction, or with the help of Proposition~\ref{prop:freealgsuspop}, one can show that one also has operations in the case of algebras over $\Sigma\mathpzc{E}^\dagger$ and moreover, that these satisfy a shifted instability condition, which says that $P^s[a] = 0$ when $s > |a|+1$. Similarly, one also has operations in the case of algebras over $\Sigma^2\mathpzc{E}^\dagger$, and these satisfy the shifted instability condition which says that $P^s[a] = 0$ when $s > |a|+2$. This continues, and eventually, in the limit, in the case of the stabilization $\mathpzc{E}^\dagger$, the instability disappears. \item Suppose that $A$ is an algebra over the unstable Barratt-Eccles operad $\mathpzc{E}^\dagger$. By pull back across the canonical map $\mathpzc{E}_{\text{st}}^\dagger \to \mathpzc{E}^\dagger$, $A$ is then also an algebra over the stable Barratt-Eccles operad $\mathpzc{E}^\dagger_{\text{st}}$. Each of the algebra structures induce operations on $\text{H}^\bullet(A)$, both of which are denote by $P^s$. The notation is consistent however, because, as noted in Remark~\ref{rmk:edtwice}, the canonical map $\mathpzc{E}_{\text{st}}^\dagger \to \mathpzc{E}^\dagger$ sends $e_d^{\text{st}}$ to $e_d^{\text{un}}$, for each $d \ge 0$. 
\item In the case of algebras over the unstable Barratt-Eccles operad, the cohomology operations satisfy the well-known Adem relations -- see~\cite{May}. These relations also hold for our operations above in the case of algebras over the stable Barratt-Eccles operad -- this follows, for example, from the computation of the cohomologies of free algebras over $\mathpzc{E}_{\text{st}}^\dagger$, given later in this section. \item In the case of an algebra $A$ over the unstable Barratt-Eccles operad, the cohomology $\text{H}^\bullet(A)$ not only possesses the operations $P^s$, but also a graded-commutative algebra structure -- see~\cite{May}. The definition of the products is given by the formula $[a] \cdot [b] = [(e_0^{\text{un}})_*(a,b)]$. In the case of algebras over the stable Barratt-Eccles operad, these products disappear; we only have the additive structure provided by the operations, as one would expect in a stable situation. To see why these products disappear, note that the obvious analogue of the above formula is $[a] \cdot [b] = [(e_0^{\text{st}})_*(a,b)]$. However, this does not make sense because, given cocycles $a$ and $b$, while $e_0^{\text{un}} \otimes a \otimes b$ is always a cocycle in $\mathpzc{E}^\dagger(2) \otimes A^{\otimes 2}$, $e_0^{\text{st}} \otimes a \otimes b$ is not always a cocycle in $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes A^{\otimes 2}$, as $e_0^{\text{st}}$ is itself not a cocycle in $\mathpzc{E}_{\text{st}}^\dagger(2)$ (see Proposition~\ref{prop:Est(2)}). Put another way, $\mathpzc{E}^\dagger(2)$ has cohomology $\mathbb{F}_2[0]$, generated by $e_0^{\text{un}}$, and this generator yields the products in the unstable case, but $\mathpzc{E}_{\text{st}}^\dagger(2)$ has zero cohomology (see Proposition~\ref{prop:stabhom}). This argument shows that the products at least cannot be defined by the obvious analogous formula, though it does not, of course, demonstrate that they necessarily do not exist.
The latter does follow, however, from the computation of the cohomology of free $\mathpzc{E}_{\text{st}}^\dagger$-algebras, to appear below. One can also consider the disappearance of products in an iterative manner, as follows. We know that the cohomologies of algebras over $\mathpzc{E}^\dagger$ have a product operation. By an entirely analogous construction, or with the help of Proposition~\ref{prop:freealgsuspop}, one can show that the cohomologies of algebras over $\Sigma\mathpzc{E}^\dagger$ have a shifted product operation where the product of a degree $r$ element with a degree $s$ element lies in degree $r+s+1$. Similarly, the cohomologies of algebras over $\Sigma^2\mathpzc{E}^\dagger$ have a shifted product operation where the product of a degree $r$ element with a degree $s$ element lies in degree $r+s+2$. This continues, and eventually, in the limit, in the case of the stabilization $\mathpzc{E}^\dagger_{\text{st}}$, the product disappears. \end{itemize} \subsection{The Completion $\widehat{\mathcal{B}}$ of the Algebra of Generalized Steenrod Operations} Our next goal is to compute the cohomology of a free algebra $\textbf{E}_{\textbf{st}}^\dagger X$ over the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$. In order to achieve this, we first need to recall the algebra $\mathcal{B}$ of generalized Steenrod operations and construct a certain completion $\widehat{\mathcal{B}}$ of it, which allows certain infinite sums of the iterated operations in $\mathcal{B}$. In the case of the unstable operad, it is well-known that $\text{H}^\bullet(\textbf{E}^\dagger X)$ is a free construction on $\text{H}^\bullet(X)$: one first forms the free unstable $\mathcal{B}$-module on $\text{H}^\bullet(X)$, and then applies a certain free algebra construction to this module to add in the products which exist in the unstable case (see~\cite{May}).
In our stable case, we know that the products disappear and that the instability of the operations disappears, and, below, we will prove that $\text{H}^\bullet(\textbf{E}_{\textbf{st}}^\dagger X)$ is the free $\widehat{\mathcal{B}}$-module on $\text{H}^\bullet(X)$. \\ Now, let us begin constructing the completion $\widehat{\mathcal{B}}$. We first need to recall the definition of $\mathcal{B}$. In fact, we have an algebra $\mathcal{B}$ for each value of the prime $p$. In order to define these algebras, we need to discuss multi-indices and some associated definitions, both of which shall vary depending on whether $p = 2$ or $p > 2$. \\ First suppose that $p = 2$. In this case, a \textit{multi-index} is a sequence $I = (i_1,\dots,i_k)$ of integers $i_j \in \mathbb{Z}$, where $k \ge 0$ (if $k = 0$, we have the empty sequence $()$). Given such a multi-index, we will associate to it the formal string $P^I = P^{i_1} \cdots P^{i_k}$. Given the multi-index $I = (i_1,\dots,i_k)$, we then set the following: \begin{itemize} \item The \textit{length} $l(I)$ is $k$; if $I = ()$, this is to be interpreted as $0$. \item The \textit{degree} $d(I)$ is $i_1 + \cdots + i_k$; if $I = ()$, this is to be interpreted as $0$. \item The \textit{excess} $e(I)$ is $i_1 - i_2 - \cdots - i_k$; if $I = ()$, this is to be interpreted as $-\infty$. \item We say that $I$ is \textit{admissible} if $i_j \ge 2i_{j+1}$ for each $j$; the empty multi-index is admissible. \end{itemize} Now suppose that $p > 2$. In this case, a \textit{multi-index} is a sequence $I = (\varepsilon_1, i_1,\dots, \varepsilon_k, i_k)$ of integers $i_j \in \mathbb{Z}$ and $\varepsilon_j \in \{0,1\}$, where $k \ge 0$ (if $k = 0$, we have the empty sequence $()$). Given such a multi-index, we associate to it the formal string $\beta^{\varepsilon_1} P^{i_1} \cdots \beta^{\varepsilon_k} P^{i_k}$, where $\beta^1$ here is to be interpreted as $\beta$ and $\beta^0$ as an empty symbol.
Given the multi-index $I = (\varepsilon_1,i_1,\dots,\varepsilon_k,i_k)$, we then set the following: \begin{itemize} \item The \textit{length} $l(I)$ is $k$; if $I = ()$, this is to be interpreted as $0$. \item The \textit{degree} $d(I)$ is $(2i_1(p-1)+\varepsilon_1) + \cdots + (2i_k(p-1)+\varepsilon_k)$; if $I = ()$, this is to be interpreted as $0$. \item The \textit{excess} $e(I)$ is $(2i_1+\varepsilon_1) - (2i_2(p-1)+\varepsilon_2) - \cdots - (2i_k(p-1)+\varepsilon_k)$; if $I = ()$, this is to be interpreted as $-\infty$. \item We say that $I$ is \textit{admissible} if $i_j \ge pi_{j+1}+\varepsilon_{j+1}$ for each $j$; the empty multi-index is admissible. \end{itemize} \begin{Remark}\label{rmk:excess_S} The excess of a multi-index $I$ is related to the instability of the cohomology operations on an algebra $A$ over $\mathpzc{E}^\dagger$, in that $P^I[a] = 0$ whenever $e(I) > |a|$ (this is an easy consequence of the definition of excess and the instability property for a single operation $P^s$). Note also that the excess, in the case $p = 2$, may also be written as $i_k - \sum_{j=2}^k(2i_j - i_{j-1})$, which gives a relation to the admissibility condition; moreover, it can also be written as $2i_1 - d(I)$, giving a relation to the degree. \hfill $\vert\vert$ \end{Remark} In order to construct $\mathcal{B}$, consider formal symbols $P^s$ and $\beta P^s$ for $s \in \mathbb{Z}$. We need to recall the Adem relations. If $p = 2$, the Adem relations consist of the relations \[ P^rP^s = \sum_{i \in \mathbb{Z}} \binom{s-i-1}{r-2i}P^{r+s-i}P^i \] for all $r, s$ such that $r < 2s$.
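By way of illustration, and as a consistency check against the classical mod $2$ Steenrod algebra (where $P^s$ plays the role of $Sq^s$), take $r = s = 2$: the coefficient $\binom{s-i-1}{r-2i} = \binom{1-i}{2-2i}$, all of whose relevant arguments here are non-negative, vanishes modulo $2$ for every $i$ except $i = 1$, so that \[ P^2 P^2 = \binom{0}{0} P^3 P^1 = P^3 P^1, \] recovering the familiar relation $Sq^2 Sq^2 = Sq^3 Sq^1$. Likewise, for $r = s = 1$, every coefficient $\binom{-i}{1-2i}$ vanishes, so $P^1 P^1 = 0$.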
If $p > 2$, the Adem relations consist of the relations \[ P^rP^s=\sum_{i \in \mathbb{Z}}(-1)^{r+i}\binom{(p-1)(s-i)-1}{r-pi}P^{r+s-i}P^i \] \[ \beta P^rP^s = \sum_{i \in \mathbb{Z}} (-1)^{r+i}\binom{(p-1)(s-i)-1}{r-pi}\beta P^{r+s-i}P^i \] for all $r, s$ such that $r < ps$, and also the relations \[ P^r \beta P^s = \sum_{i \in \mathbb{Z}} (-1)^{r+i}\left(\binom{(p-1)(s-i)}{r-pi}\beta P^{r+s-i}P^i - \binom{(p-1)(s-i)-1}{r-pi-1} P^{r+s-i}\beta P^i\right) \] \[ \beta P^r \beta P^s = \sum_{i \in \mathbb{Z}} (-1)^{r+i}\binom{(p-1)(s-i)-1}{r-pi-1}\beta P^{r+s-i}\beta P^i \] for all $r, s$ such that $r \le ps$. Now we can define the algebra $\mathcal{B}$. \begin{Remark} We can see that the admissibility of a multi-index is related to the Adem relations. For example, consider the case of $p = 2$, a multi-index $I = (i_1,\dots,i_k)$ and associated string $P^{i_1} \cdots P^{i_k}$. Certainly when $k = 2$, the term $P^{i_1}P^{i_2}$ admits an application of the Adem relations if and only if $i_1 < 2i_2$, which is to say if and only if $(i_1,i_2)$ is not admissible. More generally, the existence of the Cartan-Serre basis below implies that the relations apply to $P^{i_1} \cdots P^{i_k}$ if and only if $I$ is not admissible. \hfill $\vert\vert$ \end{Remark} \begin{Definition}\label{def:alg_S_B} The \textit{algebra of generalized Steenrod operations} $\mathcal{B}$ is defined as follows. If $p = 2$, where $p$ is our fixed prime, we have \[ \mathcal{B} := \mathbf{F}\{P^s \mid s \in \mathbb{Z}\}/I_{\text{Adem}} \] where $\mathbf{F}\{P^s \mid s \in \mathbb{Z}\}$ denotes the free graded algebra over $\mathbb{F}_2$ on the formal symbols $P^s$, for $s \in \mathbb{Z}$, where $P^s$ has degree $s$, and where $I_{\text{Adem}}$ denotes the two-sided ideal generated by the Adem relations. 
If $p > 2$, we have \[ \mathcal{B} := \mathbf{F}\{P^s, \beta P^s \mid s \in \mathbb{Z}\}/I_{\text{Adem}} \] where $\mathbf{F}\{P^s, \beta P^s \mid s \in \mathbb{Z}\}$ denotes the free graded algebra over $\mathbb{F}_p$ on formal symbols $P^s, \beta P^s$, for $s \in \mathbb{Z}$, where $P^s, \beta P^s$ have degrees $2s(p-1), 2s(p-1)+1$ respectively, and where $I_{\text{Adem}}$ denotes the two-sided ideal generated by the Adem relations. \end{Definition} Note that the Adem relations apply per degree (that is, the ideal generated by them is a homogeneous one), so that $\mathcal{B}$ inherits the grading of the free algebra. \\ We now record some well-known facts about $\mathcal{B}$: \begin{itemize} \item There exists a well-known basis for $\mathcal{B}$, the \textit{Cartan-Serre basis}. This is an $\mathbb{F}_p$-basis and consists of the monomials $P^I$ where $I$ is admissible. See, for example,~\cite{CohenLadaMay}. \item We will also need to consider certain quotients of $\mathcal{B}$. If $p = 2$, for each $k \in \mathbb{Z}$, we set \[ \mathcal{B}_{\le k} := \mathbf{F}\{P^s \mid s \in \mathbb{Z}\}/(I_{\text{Adem}} + I_{\text{exc} \, > \, k}) = \mathcal{B}/I_{> k} \] and if $p > 2$, for each $k \in \mathbb{Z}$, we set \[ \mathcal{B}_{\le k} := \mathbf{F}\{P^s, \beta P^s \mid s \in \mathbb{Z}\}/(I_{\text{Adem}} + I_{\text{exc} \, > \, k}) = \mathcal{B}/I_{> k} \] where, in either case, $I_{\text{exc} \, > \, k}$ denotes the two-sided ideal generated by monomials of excess $> k$. Moreover, we have $I_{> k} := (I_{\text{Adem}} + I_{\text{exc} \, > \, k})/I_{\text{Adem}}$. Sometimes, we will use the notations $I_{\text{exc} \, \ge \, k}$, $I_{\ge k}$ and $\mathcal{B}_{< k}$ in place of $I_{\text{exc} \, > \, k-1}$, $I_{> k-1}$ and $\mathcal{B}_{\le k-1}$. From the Cartan-Serre basis for $\mathcal{B}$, one can derive bases for these quotients; again, see~\cite{CohenLadaMay}. They are as follows.
For each $k \in \mathbb{Z}$, the algebra $\mathcal{B}_{\le k}$ has an $\mathbb{F}_p$-basis given by the monomials $P^I$ where $I$ is admissible and $e(I) \le k$. \item As is standard, we say that a $\mathcal{B}$-module $H$ is \textit{unstable} if $P^Ih = 0$ whenever $e(I) > |h|$. In the case of an algebra $A$ over the unstable Barratt-Eccles operad $\mathpzc{E}^\dagger$, we can sum up the cohomology operations and their instability property by noting that $\text{H}^\bullet(A)$ is then an unstable module over $\mathcal{B}$. For details, we refer the reader to~\cite{May}. \end{itemize} We have recalled the definition of, and some facts about, the algebra $\mathcal{B}$. We now move on to the construction of the completion $\widehat{\mathcal{B}}$ and shall begin by constructing the underlying graded module of $\widehat{\mathcal{B}}$. To define this graded module, consider functions: \[ f \colon \{\text{admissible multi-indices}\} \to \mathbb{F}_p \] We have an addition and a scalar multiplication for such functions, computed pointwise. We think of such a function as a potentially infinite sum, and so use the suggestive notation \[ \sum_{I \: \text{admissible}} a_IP^{I} \] where $a_I = f(I)$. Our graded module will consist of such sums, with particular finiteness properties in relation to the length and excess of multi-indices. Specifically, the underlying graded module of $\widehat{\mathcal{B}}$ is defined by setting that, in degree $d \in \mathbb{Z}$, the graded piece $\widehat{\mathcal{B}}_d$ is to consist of the potentially infinite sums above which satisfy the following requirements: \begin{itemize} \item For all $I$, if $a_I \neq 0$, then $d(I) = d$. \item The set of lengths $\{l(I) \mid a_I \neq 0\}$ is bounded above or, equivalently, finite. \item For any $k \in \mathbb{Z}$, the set $\{I \mid a_I \neq 0, \, e(I) < k\}$ is finite.
\end{itemize} \begin{Remark} Since, given any non-empty multi-index $I$ of degree $d$, we have $e(I) = 2i_1 - d(I) = 2i_1 - d$, where $I = (i_1,\dots,i_k)$, in the $p=2$ case and $e(I) = 2pi_1 + 2\varepsilon_1 - d(I) = 2pi_1 + 2\varepsilon_1 - d \le 2pi_1 + 2 - d$, where $I = (\varepsilon_1, i_1,\dots, \varepsilon_k, i_k)$, in the $p > 2$ case, we can rephrase the third condition as saying that, given any $k \in \mathbb{Z}$, there may exist at most finitely many $I$ with $a_I \neq 0$ which are such that the entry $i_1$ is smaller than $k$. We may then also, imprecisely though suggestively, package the condition as ``$i_1 \to +\infty$''. \hfill $\vert\vert$ \end{Remark} Note that we have an obvious embedding of graded modules: \[ \mathcal{B} \hookrightarrow \widehat{\mathcal{B}} \] An example of an element, one in degree $0$, which is present in the completion $\widehat{\mathcal{B}}$ but not in $\mathcal{B}$, is the following infinite sum: \[ \sum_{k \ge 0} P^{k}P^{-k} \] In fact, as the following proposition demonstrates, all elements of $\widehat{\mathcal{B}}$ which are not in $\mathcal{B}$ share the key features of this example: the initial entries of the multi-indices tend to $+\infty$, while the final entries tend to $-\infty$. \begin{Proposition}\label{prop:i1posinf_ikneginf} Let $\sum a_IP^I$ be an element of $\widehat{\mathcal{B}}$. We have the following: \begin{itemize} \item[(i)] Given any $k \in \mathbb{Z}$, for all but finitely many $I$, the initial entry is greater than $k$. \item[(ii)] Given any $k \in \mathbb{Z}$, for all but finitely many $I$, the final entry is less than $k$.
\end{itemize} \end{Proposition} Here, as before, in the case $p > 2$, where multi-indices take the form $(\varepsilon_1,i_1,\dots,\varepsilon_r,i_r)$, where the $i_j$ lie in $\mathbb{Z}$ while the $\varepsilon_j$ lie in $\{0,1\}$, the first entry is taken to be $i_1$, and the final entry, $i_r$, which is to say we disregard the $\varepsilon_j$ for this particular purpose. \begin{proof} (i): Let the given element lie in degree $d$. When $p =2$, the result follows by the fact that $e(I) = 2i_1 - d$ for any $I = (i_1,\dots,i_r)$ of degree $d$ and that, in the sum, there can only be finitely many elements of excess below a given bound. When $p > 2$, the result follows in a similar fashion, using instead the identity $e(I) = 2pi_1 + 2\varepsilon_1 - d$ for any $I = (\varepsilon_1,i_1,\dots,\varepsilon_r,i_r)$ of degree $d$. \\ (ii): Let the given element lie in degree $d$. Of course there can be only one length-one monomial which occurs in the sum. Consider then monomials of length $r \ge 2$. First consider the case where $p = 2$. Given a multi-index $I = (i_1,\dots,i_r)$, by admissibility, we have $i_1 \ge 2i_2 \ge 2^2i_3 \ge \cdots \ge 2^{r-1}i_r$. Put another way, we have that $i_j \ge 2^{r-j}i_r$ for $j = 1,\dots,r$. Thus, we have \begin{align*} d &= i_1 + (i_2 + \cdots + i_r) \\ &\ge i_1 + (2^{r-2}i_r + 2^{r-3}i_r + \cdots + i_r) \\ &= i_1 + Ci_r \end{align*} where $C = 1 + 2 + \cdots + 2^{r-2} > 0$. Thus we have $i_r \le \frac{1}{C}(d -i_1)$ and so the result follows by part (i). Now consider the case $p > 2$. In this case, given a multi-index $(\varepsilon_1,i_1,\dots,\varepsilon_r,i_r)$, admissibility gives us that $i_j \ge pi_{j+1} + \varepsilon_{j+1}$ for each $j = 1,\dots,r-1$, and so, in particular, $i_j \ge pi_{j+1}$ for each $j = 1, \dots, r-1$. It follows that $i_j \ge p^{r-j}i_r$ for $j = 1,\dots,r$. Moreover, we have that $d(I) = 2(p-1)(i_1+ \cdots + i_r) + \varepsilon_1 + \cdots + \varepsilon_r \le 2(p-1)(i_1+ \cdots + i_r) + r$.
The argument now is analogous to the one above for the $p = 2$ case. \end{proof} We now wish to endow our graded module $\widehat{\mathcal{B}}$ with an algebra structure. To do so, however, we first need some technical lemmas regarding the Adem relations. Given any multi-index $I$, via the Cartan-Serre basis which we mentioned above, we know that $P^I$ can be written uniquely as a sum \[ \sum a_KP^K \] where each $K$ is admissible. Let us call this the \textit{admissible monomials expansion} of $P^I$. Note that since the Adem relations either annihilate a monomial or preserve its length, any $K$ for which $a_K$ is non-zero must have the same length as $I$. \begin{Lemma}\label{lem:first_entry_down_last_up} Let $I$ be a non-empty multi-index. If $K$ is a multi-index which appears in the admissible monomials expansion of $P^I$, then the following hold: \[ (\emph{initial entry of} \: K) \ge (\emph{initial entry of} \: I) \hspace{0.5cm} (\emph{final entry of} \: K) \le (\emph{final entry of} \: I) \] \end{Lemma} As before, here we follow our convention that, in the case $p > 2$, where multi-indices take the form $(\varepsilon_1,i_1,\dots,\varepsilon_r,i_r)$, where the $i_j$ lie in $\mathbb{Z}$ while the $\varepsilon_j$ lie in $\{0,1\}$, the first entry is taken to be $i_1$, and the final entry, $i_r$, which is to say we disregard the $\varepsilon_j$ for this particular purpose. \begin{proof} We shall give a proof of the case where $p =2$; the case where $p > 2$ follows by a similar proof, upon appropriate modifications. Let $I = (i_1,\dots,i_n)$. In the case $n=1$, we already have admissibility and so the result is trivial. Consider the case where $n = 2$. Let $I = (r,s)$. If $r \ge 2s$, $P^rP^s$ is admissible and so the result is trivial. Suppose then that $r < 2s$. 
Then, by the Adem relations, we have that the admissible monomials expansion is: \[ \sum_i \binom{s-i-1}{r-2i} P^{r+s-i}P^i \] (An easy check of when the binomial coefficient is non-zero shows that the terms which appear on the right-hand side are indeed admissible.) The terms on the right-hand side which appear are those with index $i$ satisfying $r-s+1 \le i \le r/2$. Thus the minimum first entry, say $k_{\text{min init}}$, of the multi-indices $(r+s-i,i)$ which occur satisfies $k_{\text{min init}} \ge r + s - r/2 = r/2 + s > r/2 + r/2 = r$, giving us the desired result. On the other hand, the maximum second entry, say $k_{\text{max final}}$, of the multi-indices $(r+s-i,i)$ which occur satisfies $k_{\text{max final}} \le r/2 < s$, once again giving us the desired result. \\ Now let us consider the case $n \ge 3$. We have that there exists a finite sequence of terms, say $T_1, \dots, T_m$, $m \ge 1$, in the free algebra over $\mathbb{F}_2$ on the $P^i$, $i \in \mathbb{Z}$, which is such that $T_1 = P^{i_1} \cdots P^{i_n}$, $T_m = \sum P^K$ is the admissible monomials expansion of $P^{i_1} \cdots P^{i_n}$, and, for each $j \ge 2$, $T_j$ is constructed from $T_{j-1}$ by taking some monomial summand $P^J$ and replacing a sub-monomial $P^rP^s$ of $P^J$ with the equivalent $\sum_i \binom{s-i-1}{r-2i} P^{r+s-i}P^i$ provided by the Adem relations. Now, if the move which is made in transitioning from $T_{j-1}$ to $T_j$ is applied to a sub-monomial $P^rP^s$ where $P^r$ is not the initial entry of the corresponding monomial $P^J$, there is no change made to any initial entry in any monomial summand. If, on the other hand, the move is applied to a sub-monomial $P^rP^s$ where $P^r$ is the initial entry of $P^J$, by the argument in the $n=2$ case above, the minimum of all the initial entries in the resulting multi-indices in $T_j$ is bounded below by the original such minimum in $T_{j-1}$.
Thus, by a simple induction, we have that the minimum of the initial entries of all the multi-indices appearing in $\sum P^K$ is indeed bounded below by $i_1$. By an entirely analogous argument, considering instead the cases where $P^s$ is, or is not, the final entry of $P^J$, we have that the maximum of the final entries of all the multi-indices appearing in $\sum P^K$ is indeed bounded above by $i_n$. \end{proof} \begin{Lemma}\label{lem:excess_down} Let $I$ be a multi-index. If $K$ is a multi-index which appears in the admissible monomials expansion of $P^I$, then $e(K) \ge e(I)$. \end{Lemma} \begin{proof} The case of an empty $I$ is trivial, so suppose that it is non-empty. Suppose that $p = 2$. Let $I = (i_1, \dots, i_n)$ and $K = (k_1,\dots,k_n)$, where $n \ge 1$. We can write $e(I) = 2i_1 - d(I)$ and $e(K) = 2k_1 - d(K)$. The Adem relations preserve degree, so that $d(I) = d(K)$. The result then follows by Lemma~\ref{lem:first_entry_down_last_up}. Now suppose that $p > 2$. Let $I = (\varepsilon_1, i_1, \dots, \varepsilon_n, i_n)$ and $K = (\varepsilon'_1, k_1,\dots, \varepsilon'_n, k_n)$, where $n \ge 1$. We can write $e(I) = 2pi_1 + 2\varepsilon_1 - d(I)$ and $e(K) = 2pk_1 + 2\varepsilon'_1 - d(K)$. The Adem relations preserve degree, so that $d(I) = d(K)$. Moreover, an examination of the Adem relations shows that, if $\varepsilon_1 = 1$, then $\varepsilon'_1 = 1$, so that $\varepsilon_1 \le \varepsilon'_1$. The result now follows by Lemma~\ref{lem:first_entry_down_last_up}. \end{proof} \begin{Lemma}\label{lem:QIQJprop} Let $I$ and $J$ be admissible multi-indices. If $K$ is a multi-index which appears in the admissible monomials expansion of $P^IP^J$, then $e(K) \ge e(J)$. \end{Lemma} \begin{proof} We shall give a proof of the case where $p =2$; the case where $p > 2$ follows by a similar proof, upon appropriate modifications. The proof will be via three inductions. \\ Consider the case when $I$ has length $1$. Let $I = (r)$.
We will prove this case by induction on the length of $J$. If $J$ has length $0$, it is empty, the monomial in question is $P^r$, which is already admissible, and $e(J) = -\infty$, so that we have the desired result. Now suppose $J$ has length $1$. Let $J = (s)$. If $r \ge 2s$, the monomial in question, $P^rP^s$, is already admissible, and the excess is $r-s$, which is bounded below by $e(J) = s$ since $r \ge 2s$. On the other hand, if $r < 2s$, the admissible monomials expansion is given by \[ \sum_i \binom{s-i-1}{r-2i}P^{r+s-i}P^i \] where $r-s+1 \le i \le r/2$. The excess of a generic term on the right-hand side is given by $r+s-2i$ and this is bounded below by $r+s-2(r/2) = s = e(J)$, giving us the desired result. Now suppose that we have the desired result for $J$ of length $< n$, where $n \ge 2$. Consider $P^rP^{j_1} \cdots P^{j_n} = (P^rP^{j_1} \cdots P^{j_{n-1}})P^{j_n}$. Let $\sum P^K$ be the admissible monomials expansion of $P^rP^{j_1} \cdots P^{j_{n-1}}$. By the induction hypothesis, for each $K$, we have $e(K) \ge e(J) + j_n$. We now have $P^rP^{j_1} \cdots P^{j_n} = \sum P^KP^{j_n}$. By Lemma~\ref{lem:excess_down}, for a given $K$, any multi-index which appears in the admissible monomials expansion of the term $P^KP^{j_n}$ has excess bounded below by $e(K,j_n) = e(K) - j_n \ge e(J) + j_n - j_n = e(J)$, giving us the desired result. We have thus established, by induction on the length of $J$, the case in which $I$ has length $1$. \\ Now consider the case where $J$ has length $1$. Let $J = (s)$. We will prove this case by induction on the length of $I$. If $I$ has length zero, it is empty, the monomial in question is $P^s$, which is already admissible and so the desired result is trivial. Suppose that $I$ has length $1$. Let $I = (r)$. The monomial in question is then $P^rP^s$ and the desired result follows by exactly the same argument as the one above which was already made for this monomial.
Now suppose that we have the desired result for $I$ of length $< n$, where $n \ge 2$. Consider $P^{i_1} \cdots P^{i_n}P^s = P^{i_1}(P^{i_2} \cdots P^{i_n}P^{s})$. Let $\sum P^K$ be the admissible monomials expansion of $P^{i_2} \cdots P^{i_n}P^{s}$. By the induction hypothesis, for each $K$, we have $e(K) \ge s$. We now have $P^{i_1} \cdots P^{i_n}P^s = \sum P^{i_1}P^K$. By the result established by the previous induction, that of the case in which $I$ has length $1$, we have that, for a given $K$, any multi-index which appears in the admissible monomials expansion of the term $P^{i_1}P^K$ has excess bounded below by $e(K)$, which, as we saw already, is itself bounded below by $s$, giving us the desired result. We have thus established, by induction on the length of $I$, the case in which $J$ has length $1$. \\ We will now prove the general statement in the lemma by induction on the length of $J$. First suppose that $J$ has length $0$. Then the monomial in question is $P^I$, the admissible monomials expansion is also simply $P^I$ (as $I$ is assumed to be admissible), and we have $e(I) \ge e(J)$ for any $I$ since $e(J) = -\infty$. If $J$ has length $1$, we have the desired result by the second of the two previous inductions. Now suppose that we have the desired result for $J$ of length $< n$, where $n \ge 2$. Consider $P^IP^{j_1} \cdots P^{j_n} = (P^IP^{j_1} \cdots P^{j_{n-1}})P^{j_n}$. Let $\sum P^K$ be the admissible monomials expansion of $P^IP^{j_1} \cdots P^{j_{n-1}}$. By the induction hypothesis, for each $K$, we have $e(K) \ge e(J) + j_n$. We now have $P^IP^{j_1} \cdots P^{j_n} = \sum P^KP^{j_n}$. By Lemma~\ref{lem:excess_down}, for a given $K$, any multi-index which appears in the admissible monomials expansion of the term $P^KP^{j_n}$ has excess bounded below by $e(K,j_n) = e(K) - j_n \ge e(J) + j_n - j_n = e(J)$, giving us the desired result. We have thus established, by induction on the length of $J$, the completely general case.
\end{proof} We are now ready to equip our graded module $\widehat{\mathcal{B}}$ with an algebra structure. For each $d_1, d_2 \in \mathbb{Z}$, we must construct maps: \[ \widehat{\mathcal{B}}_{d_1} \otimes \widehat{\mathcal{B}}_{d_2} \to \widehat{\mathcal{B}}_{d_1+d_2} \] Consider two infinite sums, the product of which \[ \left( \sum_I a_IP^I \right) \cdot \left( \sum_I b_IP^I \right) \] we wish to construct, where we suppose that the only $a_I$ and $b_I$ which are non-zero are those for which the degree is $d_1$, $d_2$ respectively. Given any two admissible $I$ and $J$, let \[ P^IP^J = \sum_{K \: \text{admissible}} c^{I,J}_K P^K \] be the admissible monomials expansion of $P^IP^J$; note that, for any fixed $I$ and $J$, only finitely many of the $c^{I,J}_K$ may be non-zero. We then set: \begin{equation}\label{eqn:prodShat} \left( \sum_I a_IP^I \right) \cdot \left( \sum_I b_IP^I \right) := \sum_K \left(\sum_{I,J} a_Ib_Jc^{I,J}_K\right) P^K \end{equation} \begin{Proposition}\label{prop:prodwelldefScomplete} The product on $\widehat{\mathcal{B}}$ as above is well-defined and equips $\widehat{\mathcal{B}}$ with an algebra structure over $\mathbb{F}_p$. \end{Proposition} \begin{proof} We first show that the right-hand side of (\ref{eqn:prodShat}) is well-defined as an infinite sum. To do this, having fixed the degrees $d_1$ and $d_2$ as above, we need to ensure that the sum $\sum_{I,J} a_Ib_Jc^{I,J}_K$ is finite for any given $K$. Fix such a $K$, say $K_0$. By the definition of $\widehat{\mathcal{B}}$ as a graded module, we know that, for all but finitely many $I$, we have that $a_I = 0$ or $e(I) > e(K_0) + d_2$. Note that any $I$ satisfying $e(I) > e(K_0) + d_2$ is necessarily non-empty. Now, for such an $I$, where $e(I) > e(K_0) + d_2$, we have that, for any $J$ for which $b_J \neq 0$, $e(IJ) = e(I) - d_2 > e(K_0) + d_2 - d_2 = e(K_0)$, where $IJ$ denotes the concatenation of $I$ and $J$.
Thus, by Lemma~\ref{lem:excess_down}, there are only finitely many $I$ for which $a_I \neq 0$ and for which there exists a $J$ such that $c^{I,J}_{K_0} \neq 0$ (note that $c^{I,J}_{K_0} \neq 0$ amounts to saying that $P^{K_0}$ appears in the admissible monomials expansion of $P^IP^J$). Fix such an $I$, say $I_0$. We know that, for all but finitely many $J$, we have that $b_J = 0$ or $e(J) > e(K_0)$. Thus, by Lemma~\ref{lem:QIQJprop}, there are only finitely many $J$ for which $b_J \neq 0$ and $c^{I_0,J}_{K_0} \neq 0$, where the latter amounts to saying that $P^{K_0}$ appears in the admissible monomials expansion of $P^{I_0}P^J$. All told, we have demonstrated that, for any given $K_0$, there are only finitely many terms in both the infinite sums \[ \sum_I a_IP^I \qquad \text{and} \qquad \sum_I b_IP^I \] which make a non-zero contribution to the coefficient \[ \sum_{I,J} a_Ib_Jc^{I,J}_{K_0} \] of $P^{K_0}$. Thus the product is indeed well-defined, at least as an infinite sum. In fact, it is an element of $\widehat{\mathcal{B}}_{d_1+d_2}$ for the following reasons: (i) the degree condition is satisfied because the Adem relations preserve degree; (ii) the length condition is satisfied because the Adem relations either annihilate an element or preserve its length; (iii) the excess condition is satisfied by the same argument as above, which showed not only well-definedness as an infinite sum, but more strongly that all but finitely many pairings of the $P^I$ and $P^J$ yield monomials with an associated excess above any given bound. Finally, because any given coefficient arises from a product of finite sums, it is clear that the requisite associativity, identity and bilinearity follow from the fact that the definition yields the product of $\mathcal{B}$ when restricted to finite sums and that these properties do indeed hold for the product of $\mathcal{B}$. \end{proof} We have now constructed $\widehat{\mathcal{B}}$ as a graded algebra over $\mathbb{F}_p$.
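To make the rewriting process underlying the lemmas above concrete, here is a small self-contained computational sketch for the case $p = 2$ (the function names are ours): it repeatedly applies the Adem relation to the leftmost inadmissible pair of a word until the admissible monomials expansion is reached, working mod $2$, and allows one to spot-check Lemma~\ref{lem:excess_down} on examples.

```python
from math import comb

def adem_p2(r, s):
    # Adem relation at p = 2: P^r P^s = sum_i C(s-i-1, r-2i) P^{r+s-i} P^i, r < 2s;
    # only i with r-s+1 <= i <= r/2 can contribute a non-zero coefficient mod 2
    return {(r + s - i, i): 1
            for i in range(r - s + 1, r // 2 + 1)
            if comb(s - i - 1, r - 2 * i) % 2 == 1}

def expand(word):
    """Admissible monomials expansion of P^{i_1} ... P^{i_k} at p = 2, mod 2."""
    result, stack = {}, [tuple(word)]
    while stack:
        w = stack.pop()
        j = next((j for j in range(len(w) - 1) if w[j] < 2 * w[j + 1]), None)
        if j is None:  # already admissible: record it, accumulating coefficients mod 2
            result[w] = (result.get(w, 0) + 1) % 2
        else:          # rewrite the leftmost inadmissible pair via the Adem relation
            stack.extend(w[:j] + pair + w[j + 2:]
                         for pair in adem_p2(w[j], w[j + 1]))
    return {w: c for w, c in result.items() if c}

def excess(I):
    return float("-inf") if not I else I[0] - sum(I[1:])
```

For example, `expand((2, 3))` returns the admissible terms $P^5P^0$ and $P^4P^1$ (recall that the relations preserve length, so $P^0$ is not a unit here; in the classical Steenrod algebra, where $\mathrm{Sq}^0 = 1$, this recovers $\mathrm{Sq}^2\mathrm{Sq}^3 = \mathrm{Sq}^5 + \mathrm{Sq}^4\mathrm{Sq}^1$). Both terms have excess $5$ and $3$ respectively, bounded below by $e((2,3)) = -1$, as Lemma~\ref{lem:excess_down} predicts.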
Moreover, the embedding \[ \mathcal{B} \hookrightarrow \widehat{\mathcal{B}} \] is now clearly one of algebras. It remains to make precise in what sense $\widehat{\mathcal{B}}$ is a completion of $\mathcal{B}$. For $k \ge 0$, recall the quotients $\mathcal{B}_{< k}$ of $\mathcal{B}$ which we defined earlier. Filter each of these algebras by length, where $\text{F}_t\mathcal{B}_{< k}$ is spanned by those monomials $P^I$ satisfying the length bound $p^{l(I)} \le t$. Note that, for any $k \le l$, we have a canonical map \[ \mathcal{B}_{< l} \to \mathcal{B}_{< k} \] and that this map is a filtered map, in that, for each $t \ge 0$, we have induced maps: \[ \text{F}_t\mathcal{B}_{< l} \to \text{F}_t\mathcal{B}_{< k} \] Next, note that we can also filter $\widehat{\mathcal{B}}$ similarly by length by setting, for $t \ge 0$, $\text{F}_t\widehat{\mathcal{B}}$ to comprise the sums $\sum_{I \: \text{admissible}} a_IP^{I}$ which satisfy the degree, length and excess requirements in the definition of $\widehat{\mathcal{B}}$ and which also satisfy, more specifically regarding length, the bound $p^{l(I)} \le t$ for any $I$ where $a_I \neq 0$. For each $k \ge 0$, we have a map \[ \widehat{\mathcal{B}} \to \mathcal{B}_{< k} \] which projects an infinite sum to the sub-sum of the elements of excess $< k$, of which there are finitely many by definition of $\widehat{\mathcal{B}}$. Moreover, this map is a filtered map in that we have, for each $t \ge 0$, an induced map: \[ \text{F}_t\widehat{\mathcal{B}} \to \text{F}_t\mathcal{B}_{< k} \] These maps are compatible with the maps $\text{F}_t\mathcal{B}_{< l} \to \text{F}_t\mathcal{B}_{< k}$ above in that they yield a left cone on the tower \[ \cdots \to \text{F}_t\mathcal{B}_{< 2} \to \text{F}_t\mathcal{B}_{< 1} \to \text{F}_t\mathcal{B}_{< 0} \] and so yield a map $\text{F}_t\widehat{\mathcal{B}} \to \lim_{k \ge 0}\text{F}_t\mathcal{B}_{< k}$.
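To illustrate these projections concretely, consider again (taking $p = 2$) the degree-zero element $\sum_{k \ge 0} P^kP^{-k}$ of $\widehat{\mathcal{B}}$ from above. Each multi-index $(k,-k)$ has excess $2k$ and length $2$, so the element lies in $\text{F}_t\widehat{\mathcal{B}}$ for every $t \ge p^2 = 4$, and its image under the projection $\widehat{\mathcal{B}} \to \mathcal{B}_{< m}$ is the finite partial sum
\[ \sum_{\substack{k \ge 0 \\ 2k < m}} P^{k}P^{-k}. \]
These partial sums are compatible along the tower, and the original element is precisely the data of this compatible family.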
The precise statement then regarding in what sense $\widehat{\mathcal{B}}$ is a completion of $\mathcal{B}$ is the following. \begin{Proposition}\label{prop:FtBhatcomplete} The map $\emph{F}_t\widehat{\mathcal{B}} \to \lim_{k \ge 0}\emph{F}_t\mathcal{B}_{< k}$ constructed above is an isomorphism of graded modules. Moreover, as graded modules, we have that: \[ \widehat{\mathcal{B}} \cong \underset{t \ge 0}{\emph{colim}} \: \underset{k \ge 0}{\emph{lim}} \: \emph{F}_t\mathcal{B}_{< k} \] \end{Proposition} \begin{proof} We noted above that $\mathcal{B}_{< k}$ has a basis given by the admissible monomials of excess strictly smaller than $k$. Each $\text{F}_t\mathcal{B}_{< k}$ then has a basis given by the admissible monomials $P^I$ satisfying both $e(I) < k$ and the length bound $p^{l(I)} \le t$. Moreover, the map $\text{F}_t\mathcal{B}_{< k+1} \to \text{F}_t\mathcal{B}_{< k}$ simply kills those basis elements with excess exactly $k$. The first part of the result now follows by an easy verification of the necessary universal property. The second part then follows by noting that, under this established isomorphism, given $t \le t'$, the map induced on the limits by the natural maps $\text{F}_t\mathcal{B}_{< k} \to \text{F}_{t'}\mathcal{B}_{< k}$ corresponds exactly to the inclusion $\text{F}_t\widehat{\mathcal{B}} \to \text{F}_{t'}\widehat{\mathcal{B}}$. \end{proof} \subsection{Cohomologies of Free Algebras over the Stable Operads} We mentioned earlier that we wish to compute the cohomology of a free algebra $\textbf{E}_{\textbf{st}}^\dagger X$ over the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$. Having constructed the completion $\widehat{\mathcal{B}}$, we are now able to do this.
We shall show that, if $X$ is a cochain complex, the cohomology of the free $\mathpzc{E}_{\text{st}}^\dagger$-algebra on $X$ is precisely the free $\widehat{\mathcal{B}}$-module on $\text{H}^\bullet(X)$, namely $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$. In order to compare $\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X)$ and $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$, we shall introduce an intermediary construction. Thus, we define a functor $\text{A}$ on cochain complexes by setting: \[ \text{A}(X) := \underset{t \ge 0}{\text{colim}} \lim_{k \ge 0} \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X) \] Here the maps in the limiting tower come from the stabilization maps $\Sigma^{k+1}\mathpzc{E}^\dagger \to \Sigma^k\mathpzc{E}^\dagger$ and the filtration $\{\text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)\}_{t \ge 0}$ of the cohomology $\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ is as follows: \[ \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X) = \bigoplus_{n \le t} \text{H}^\bullet((\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \] We study this functor $\text{A}(-)$ by studying the steps in its construction, one at a time; recall that here $\Sigma^k\mathbf{E}^\dagger$ denotes the monad associated to the operadic suspension $\Sigma^k\mathpzc{E}^\dagger$, as opposed to a monadic suspension. Now let $X$ be a cochain complex, and let $\{c_i\}$ be a basis of $\text{H}^\bullet(X)$. By Proposition~\ref{prop:freealgsuspop} and the standard calculation of the cohomology of free algebras over the unstable Barratt-Eccles operad (see~\cite{CohenLadaMay}), we have that, for each $k \ge 0$, $\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ is isomorphic to a shift down by $k$ of the free graded-commutative algebra over $\mathbb{F}_p$ on the terms $P^Ic_i$ where $I$ is admissible and $e(I) < |c_i| + k$. Let $\mathcal{F}_k$ denote this object.
Then, via the maps $\Sigma^{k+1}\mathpzc{E}^\dagger \to \Sigma^k\mathpzc{E}^\dagger$, we have a commutative diagram as follows: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\mathcal{F}_1$}; \node [right of = A,xshift=1.5cm] (B) {$\mathcal{F}_0$}; \node [below of = A] (C) {$\text{H}^\bullet((\Sigma\mathbf{E}^\dagger) X)$}; \node [below of = B] (D) {$\text{H}^\bullet(\mathbf{E}^\dagger X)$}; \node [left of = A,xshift=-1.5cm] (E) {$\mathcal{F}_2$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\text{H}^\bullet((\Sigma^2 \mathbf{E}^\dagger) X)$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\cong$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\cong$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (F) -- (E) node[midway,anchor=south]{}; \draw [->] (E) -- (A) node[midway,anchor=south]{}; \draw [->] (H) -- (G) node[midway,anchor=south]{}; \draw [->] (G) -- (C) node[midway,anchor=south]{}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\cong$}; \end{tikzpicture} \end{center} \begin{Proposition}\label{prop:instab_prods} Let $X$ be a cochain complex. For each $k \ge 0$, the maps $\mathcal{F}_{k+1} \to \mathcal{F}_k$ kill any product, including the empty product, which is to say the multiplicative unit. \end{Proposition} One can sum up this proposition as ``the instability of products''. \begin{proof} Let us first deal with non-empty products. Recall our notation $e_d^{\text{un}}$, for $d \le 0$, for particular elements of $\mathpzc{E}^\dagger(2)$, where $e_d^{\text{un}}$ has degree $d$. Let $e_d^{\text{un}}[k]$ denote the corresponding element of $(\Sigma^k\mathpzc{E}^\dagger)(2) = \mathpzc{E}^\dagger(2)[k]$. Note that $e_d^{\text{un}}[k]$ has degree $d+k$. 
In particular $e_0^{\text{un}}[k]$ has degree $k$, and, since $(\Sigma^k\mathpzc{E}^\dagger)(2)$ is zero above degree $k$, the stabilization map $\Sigma^{k+1}\mathpzc{E}^\dagger \to \Sigma^k\mathpzc{E}^\dagger$ must map $e_0^{\text{un}}[k+1]$ to $0$. Now, fix $k \ge 0$ and consider some product $(P^I c) \cdot (P^{I'} c')$ in $\mathcal{F}_{k+1}$. Let $[\eta]$ and $[\xi]$, respectively, be the images of $P^I c$ and $P^{I'} c'$ under the isomorphism $\mathcal{F}_{k+1} \to \text{H}^\bullet((\Sigma^{k+1}\mathbf{E}^\dagger)X)$. Then, by definition of the products in the cohomologies (see~\cite{May}), $(P^I c) \cdot (P^{I'} c')$ maps, under this same isomorphism, to $[e_0^{\text{un}}[k+1]_*(\eta, \xi)]$, by which we mean the class of the image of $e_0^{\text{un}}[k+1] \otimes \eta \otimes \xi$ under the map $(\Sigma^{k+1}\mathpzc{E}^\dagger)(2) \otimes ((\Sigma^{k+1}\mathbf{E}^\dagger)X)^{\otimes 2} \to (\Sigma^{k+1}\mathbf{E}^\dagger)X$. Since, as noted above, $e_0^{\text{un}}[k+1]$ maps to $0$ under $\Sigma^{k+1}\mathpzc{E}^\dagger \to \Sigma^k\mathpzc{E}^\dagger$, we have that the product $(P^I c) \cdot (P^{I'} c')$ maps to zero under the composite $\mathcal{F}_{k+1} \to \text{H}^\bullet((\Sigma^{k+1}\mathbf{E}^\dagger)X) \to \text{H}^\bullet((\Sigma^{k}\mathbf{E}^\dagger)X)$, and this proves the required result for the map $\mathcal{F}_{k+1} \to \mathcal{F}_k$, except in the case of the empty product. For the empty product, the argument is similar: one notes that the multiplicative unit in $\mathcal{F}_{k+1}$ corresponds to the generator in degree $-k-1$ of $(\Sigma^{k+1}\mathpzc{E}^\dagger)(0)$, and this generator is mapped to zero in $(\Sigma^k\mathpzc{E}^\dagger)(0)$ since the latter is zero in all degrees except degree $-k$. \end{proof} We can also consider a filtered version of the diagram above. Consider again a cochain complex $X$ and let $\{c_i\}$ be a basis of $\text{H}^\bullet(X)$. 
By Proposition~\ref{prop:freealgsuspop} and the standard calculation of the cohomology of free algebras over the unstable Barratt-Eccles operad (again, see~\cite{CohenLadaMay}), given $t \ge 0$, $\text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ is isomorphic to a shift down by $k$ of the $\mathbb{F}_p$-submodule of $\mathcal{F}_k$ generated by the products $(P^{I_1}c_1) \cdots (P^{I_r}c_r)$, where $r \ge 0$, each $I_j$ is admissible, $e(I_j) < |c_j| + k$ and $p^{l(I_1)} + \cdots + p^{l(I_r)} \le t$. If we let $\text{F}_t\mathcal{F}_k$ denote this submodule of $\mathcal{F}_k$, we get a filtration $\{\text{F}_t\mathcal{F}_k\}_{t \ge 0}$ of $\mathcal{F}_k$. Moreover, we then have a commutative diagram as follows: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\text{F}_t\mathcal{F}_1$}; \node [right of = A,xshift=1.5cm] (B) {$\text{F}_t\mathcal{F}_0$}; \node [below of = A] (C) {$\text{F}_t\text{H}^\bullet((\Sigma\mathbf{E}^\dagger) X)$}; \node [below of = B] (D) {$\text{F}_t\text{H}^\bullet(\mathbf{E}^\dagger X)$}; \node [left of = A,xshift=-1.5cm] (E) {$\text{F}_t\mathcal{F}_2$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\text{F}_t\text{H}^\bullet((\Sigma^2 \mathbf{E}^\dagger) X)$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\cong$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\cong$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (F) -- (E) node[midway,anchor=south]{}; \draw [->] (E) -- (A) node[midway,anchor=south]{}; \draw [->] (H) -- (G) node[midway,anchor=south]{}; \draw [->] (G) -- (C) node[midway,anchor=south]{}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\cong$}; \end{tikzpicture} \end{center} We mentioned earlier that the products which appear in the cohomologies of algebras over $\mathpzc{E}^\dagger$ disappear in the cohomologies of algebras over
$\mathpzc{E}_{\text{st}}^\dagger$ (and saw that they at least are not definable in the same manner). We shall now make precise in what sense this is true in the case of free algebras; it will follow from the instability of the products described in Proposition~\ref{prop:instab_prods} above. Given a cochain complex $X$, a basis $\{c_i\}$ of $\text{H}^\bullet(X)$ and $\mathcal{F}_k$ as above, we let $\mathcal{F}^{+}_k$ be the submodule of $\mathcal{F}_k$ generated by the monomials $P^Ic_i$ where $e(I) < |c_i|+k$; that is, it is the submodule which omits all products, including the empty product, and is also a free graded module over $\mathbb{F}_p$ on the monomials $P^Ic_i$ where $I$ is admissible and $e(I) < |c_i|+k$. Just like $\mathcal{F}_k$, $\mathcal{F}^{+}_k$ is filtered, where $\text{F}_t\mathcal{F}^{+}_k$ denotes the submodule generated by the monomials $P^Ic_i$ which satisfy $e(I) < |c_i|+k$ and also the additional requirement that $p^{l(I)} \le t$. The inclusion $\mathcal{F}_k^{+} \to \mathcal{F}_k$ is then clearly a filtered one, in that, for each $t \ge 0$, we have an induced inclusion $\text{F}_t\mathcal{F}_k^{+} \to \text{F}_t\mathcal{F}_k$.
We now have commutative diagrams as follows: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\mathcal{F}_1$}; \node [right of = A,xshift=1.5cm] (B) {$\mathcal{F}_0$}; \node [below of = A] (C) {$\text{H}^\bullet((\Sigma\mathbf{E}^\dagger) X)$}; \node [below of = B] (D) {$\text{H}^\bullet(\mathbf{E}^\dagger X)$}; \node [left of = A,xshift=-1.5cm] (E) {$\mathcal{F}_2$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\text{H}^\bullet((\Sigma^2 \mathbf{E}^\dagger) X)$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \node [above of = B] (X) {$\mathcal{F}_0^{+}$}; \node [above of = A] (Y) {$\mathcal{F}_1^{+}$}; \node [above of = E] (Z) {$\mathcal{F}_2^{+}$}; \node [above of = F] (W) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\cong$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\cong$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (F) -- (E) node[midway,anchor=south]{}; \draw [->] (E) -- (A) node[midway,anchor=south]{}; \draw [->] (H) -- (G) node[midway,anchor=south]{}; \draw [->] (G) -- (C) node[midway,anchor=south]{}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\cong$}; \draw [->] (W) -- (Z); \draw [->] (Z) -- (Y); \draw [->] (Y) -- (X); \draw [right hook->] (X) -- (B); \draw [right hook->] (Y) -- (A); \draw [right hook->] (Z) -- (E); \end{tikzpicture} \end{center} \hspace{1cm} \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\text{F}_t\mathcal{F}_1$}; \node [right of = A,xshift=1.5cm] (B) {$\text{F}_t\mathcal{F}_0$}; \node [below of = A] (C) {$\text{F}_t\text{H}^\bullet((\Sigma\mathbf{E}^\dagger) X)$}; \node [below of = B] (D) {$\text{F}_t\text{H}^\bullet(\mathbf{E}^\dagger X)$}; \node [left of = A,xshift=-1.5cm] (E) {$\text{F}_t\mathcal{F}_2$}; \node [left of = E,xshift=-1.5cm] (F) {$\cdots$}; \node [left of = C,xshift=-1.5cm] (G) {$\text{F}_t\text{H}^\bullet((\Sigma^2 
\mathbf{E}^\dagger) X)$}; \node [left of = G,xshift=-1.5cm] (H) {$\cdots$}; \node [above of = B] (X) {$\text{F}_t\mathcal{F}_0^{+}$}; \node [above of = A] (Y) {$\text{F}_t\mathcal{F}_1^{+}$}; \node [above of = E] (Z) {$\text{F}_t\mathcal{F}_2^{+}$}; \node [above of = F] (W) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\cong$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\cong$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (F) -- (E) node[midway,anchor=south]{}; \draw [->] (E) -- (A) node[midway,anchor=south]{}; \draw [->] (H) -- (G) node[midway,anchor=south]{}; \draw [->] (G) -- (C) node[midway,anchor=south]{}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\cong$}; \draw [->] (W) -- (Z); \draw [->] (Z) -- (Y); \draw [->] (Y) -- (X); \draw [right hook->] (X) -- (B); \draw [right hook->] (Y) -- (A); \draw [right hook->] (Z) -- (E); \end{tikzpicture} \end{center} \begin{Proposition}\label{prop:proddisappear} If $X$ is a cochain complex, $\{c_i\}$ is a basis of $\emph{H}^\bullet(X)$ and $t \ge 0$, the induced maps \[ \lim_{k \ge 0} \mathcal{F}^{+}_k \to \lim_{k \ge 0} \emph{H}^\bullet((\Sigma^k \mathbf{E}^\dagger) X) \hspace{1cm} \lim_{k \ge 0} \emph{F}_t\mathcal{F}^{+}_k \to \lim_{k \ge 0} \emph{F}_t\emph{H}^\bullet((\Sigma^k \mathbf{E}^\dagger) X) \] are isomorphisms. \end{Proposition} \begin{proof} We shall demonstrate the case of the map $\lim_k \mathcal{F}^{+}_k \to \lim_k \text{H}^\bullet((\Sigma^k \mathbf{E}^\dagger) X)$; the filtered case is analogous. We need to show that the induced map $\lim_k \mathcal{F}^{+}_k \to \lim_k \mathcal{F}_k$ is an isomorphism, as we already know that the map $\lim_k \mathcal{F}_k \to \lim_k \text{H}^\bullet((\Sigma^k \mathbf{E}^\dagger) X)$ is an isomorphism. 
Injectivity is immediate from the standard description of inverse limits of towers via infinite tuples, and surjectivity follows from this description together with the instability of products described in Proposition~\ref{prop:instab_prods}. \end{proof} Thus, given a cochain complex $X$, we have a description of $\lim_{k \ge 0} \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ for each $t \ge 0$, and so, upon taking the colimit over $t$, a description of $\text{A}(X)$. This allows us to demonstrate the following a priori unexpected property of $\text{A}$. \begin{Proposition}\label{prop:ABadditive} The functor $\emph{A}$ is additive. \end{Proposition} \begin{proof} Let $X$ and $Y$ be cochain complexes, and let $\{c_i\}$ and $\{d_j\}$ be bases of their cohomologies $\text{H}^\bullet(X)$ and $\text{H}^\bullet(Y)$, respectively. We wish to show that the canonical map \[ \text{A}(X) \oplus \text{A}(Y) \to \text{A}(X \oplus Y) \] is an isomorphism. By definition, we have: \[ \text{A}(X) = \underset{t \ge 0}{\text{colim}} \lim_{k \ge 0} \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X) \] Moreover, by Proposition~\ref{prop:proddisappear}, for each $t \ge 0$, $\lim_{k \ge 0} \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ is isomorphic to $\lim_{k \ge 0} \text{F}_t\mathcal{F}^{+}_k$, where $\text{F}_t\mathcal{F}^{+}_k$ is the free graded module on the monomials $P^Ic_i$ which satisfy $e(I) < |c_i|+k$ and $p^{l(I)} \le t$. It follows that, in degree $d \in \mathbb{Z}$, say, $\text{A}(X)$ is isomorphic to the module which consists of infinite sums \[ \sum a_{I_i,c_i}(P^{I_i}c_i) \] by which we mean functions $f \colon \{(I_i, c_i) \mid I_i \:\: \text{admissible}\} \to \mathbb{F}_p$, where $f(I_i,c_i) = a_{I_i,c_i}$, satisfying the following requirements: \begin{itemize} \item For all $(I_i,c_i)$, if $a_{I_i,c_i} \neq 0$, $d(I_i) + |c_i| = d$.
\item The set of lengths $\{l(I_i) \mid a_{I_i,c_i} \neq 0\}$ is bounded above, or, equivalently, finite. \item For any $k \ge 0$, $\#\{(I_i, c_i) \mid a_{I_i,c_i} \neq 0, e(I_i) < |c_i| + k\}$ is finite. \end{itemize} Similarly, $\text{A}(Y)$ is isomorphic to the module which consists of infinite sums \[ \sum a_{I_j,d_j}(P^{I_j}d_j) \] satisfying the following requirements: \begin{itemize} \item For all $(I_j,d_j)$, if $a_{I_j,d_j} \neq 0$, $d(I_j) + |d_j| = d$. \item The set of lengths $\{l(I_j) \mid a_{I_j,d_j} \neq 0\}$ is bounded above, or, equivalently, finite. \item For any $k \ge 0$, $\#\{(I_j, d_j) \mid a_{I_j,d_j} \neq 0, e(I_j) < |d_j| + k\}$ is finite. \end{itemize} Moreover, as $\text{H}^\bullet(X \oplus Y) = \text{H}^\bullet(X) \oplus \text{H}^\bullet(Y)$ has basis $\{c_i\} \amalg \{d_j\}$, $\text{A}(X \oplus Y)$ is isomorphic to the module which consists of infinite sums \[ \sum a_{I_i,c_i}(P^{I_i}c_i) + \sum a_{I_j,d_j}(P^{I_j}d_j) \] satisfying the following requirements: \begin{itemize} \item For all $(I_i,c_i)$ and $(I_j,d_j)$, if $a_{I_i,c_i} \neq 0$, $d(I_i) + |c_i| = d$, and if $a_{I_j,d_j} \neq 0$, $d(I_j) + |d_j| = d$. \item Both sets of lengths $\{l(I_i) \mid a_{I_i,c_i} \neq 0\}$ and $\{l(I_j) \mid a_{I_j,d_j} \neq 0\}$ are bounded above, or, equivalently, finite. \item For any $k \ge 0$, both sets $\{(I_i, c_i) \mid a_{I_i,c_i} \neq 0, e(I_i) < |c_i| + k\}$ and $\{(I_j, d_j) \mid a_{I_j,d_j} \neq 0, e(I_j) < |d_j| + k\}$ are finite. \end{itemize} An easy check shows that, under the identifications provided by the above isomorphisms, the canonical map $\text{A}(X) \oplus \text{A}(Y) \to \text{A}(X \oplus Y)$ corresponds to the obvious inclusion of sets of infinite sums. The map is then clearly an isomorphism, as desired. \end{proof} We now describe how $\text{A}(X)$ is intermediate between $\text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X)$ and $\widehat{\mathcal{B}} \otimes \text{H}^\bullet (X)$. Let $X$ be a cochain complex.
Via the canonical maps $\mathpzc{E}_{\text{st}}^\dagger \to \Sigma^k\mathpzc{E}^\dagger$, we get canonical maps $\mathbf{E}_{\textbf{st}}^\dagger X \to (\Sigma^k\mathbf{E}^\dagger)X$, which are filtered in the sense that, upon passing to cohomology, they induce maps $\text{F}_t\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X) \to \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ for each $t \ge 0$. These maps are compatible with the maps $\text{F}_t\text{H}^\bullet((\Sigma^{k+1}\mathbf{E}^\dagger)X) \to \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$, so that we get an induced map $\text{F}_t\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X) \to \lim_{k \ge 0} \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$. Upon taking colimits over $t$, we get an induced natural map: \[ \Phi_1 \colon \text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X) \to \text{A}(X) \] Now consider $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$. We construct a natural map: \[ \Phi_2 \colon \widehat{\mathcal{B}} \otimes \text{H}^\bullet (X) \to \text{A}(X) \] This map is defined as follows; the idea is that the term $\lim_k \text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ which arises in the construction of $\text{A}(X)$ inherits an action by the term $\mathcal{B}_{< k}$ which arises in the construction of $\widehat{\mathcal{B}}$. Recall, as we saw in Proposition~\ref{prop:FtBhatcomplete}, that we have $\widehat{\mathcal{B}} = \text{colim}_{t \ge 0} \lim_{k \ge 0} \text{F}_t \mathcal{B}_{< k}$. For each $d_1, d_2$, we need to specify a map $\widehat{\mathcal{B}}_{d_1} \otimes \text{H}^{d_2} (X) = \text{colim}_{t \ge 0} (\text{F}_{t}\widehat{\mathcal{B}}_{d_1} \otimes \text{H}^{d_2}(X)) \to \text{colim}_{t \ge 0} \lim_{k \ge 0} \text{F}_t\text{H}^{d_1 + d_2}((\Sigma^k\mathbf{E}^\dagger)X)$. Fix some $t_0 \ge 0$.
For each $k \ge 0$, consider the following composite: \[ \text{F}_{t_0}\widehat{\mathcal{B}}_{d_1} \otimes \text{H}^{d_2}(X) \to \text{F}_{t_0}(\mathcal{B}_{< d_2+k})_{d_1} \otimes \text{F}_1\text{H}^{d_2}((\Sigma^k\mathbf{E}^\dagger)X) \to \text{F}_{t_0+1}\text{H}^{d_1+d_2} ((\Sigma^k\mathbf{E}^\dagger)X) \] Here the first map is induced by the canonical map $\text{F}_{t_0}\widehat{\mathcal{B}}_{d_1} \to \text{F}_{t_0}(\mathcal{B}_{< d_2+k})_{d_1}$ together with the unit of adjunction $X \to (\Sigma^k\mathbf{E}^\dagger)X$. The second map comes from Proposition~\ref{prop:freealgsuspop} and the standard computation of the cohomologies of free algebras over the unstable Barratt-Eccles operad (once again, see~\cite{CohenLadaMay}) -- that is, the map is given by the generalized Steenrod operations on such cohomologies. In short, given a degree $d$ cohomology class $c$ in $\text{H}^\bullet(X)$ and an infinite sum in $\widehat{\mathcal{B}}$, we pass from the former to a cohomology class $c'$ in $\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$, project the infinite sum onto the finite sub-sum comprising just the summands of excess less than $d+k$, and then act on $c'$ by the generalized Steenrod operations.
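In symbols, and merely restating the construction just given, if $c \in \text{H}^{d}(X)$ has image $c' \in \text{H}^{d}((\Sigma^k\mathbf{E}^\dagger)X)$ under the unit of adjunction, the composite acts by
\[ \Big( \sum_{I} a_I P^I \Big) \otimes c \longmapsto \sum_{e(I) < d + k} a_I \, P^I c', \]
where the sum on the right-hand side is finite, since an element of $\widehat{\mathcal{B}}$ has only finitely many summands of excess below any given bound.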
Now, an easy check shows that the above composite maps are compatible with the maps $\text{F}_{t_0+1}\text{H}^{d_1+d_2}((\Sigma^{k+1}\mathbf{E}^\dagger)X) \to \text{F}_{t_0+1}\text{H}^{d_1+d_2}((\Sigma^k\mathbf{E}^\dagger)X)$, so that we get an induced map: \[ \text{F}_{t_0}\widehat{\mathcal{B}}_{d_1} \otimes \text{H}^{d_2}(X) \to \lim_{k \ge 0} \text{F}_{t_0+1}\text{H}^{d_1+d_2} ((\Sigma^k\mathbf{E}^\dagger)X) \hookrightarrow \text{colim}_{t \ge 0} \lim_{k \ge 0} \text{F}_t\text{H}^{d_1 + d_2}((\Sigma^k\mathbf{E}^\dagger)X) \] Finally, we note that the above maps are compatible with the inclusions $\text{F}_t \widehat{\mathcal{B}}_{d_1} \hookrightarrow \text{F}_{t+1} \widehat{\mathcal{B}}_{d_1}$, so that we get the desired map $\widehat{\mathcal{B}}_{d_1} \otimes \text{H}^{d_2} (X) \to \underset{t \ge 0}{\text{colim}} \lim_{k \ge 0} \text{F}_t\text{H}^{d_1 + d_2}((\Sigma^k\mathbf{E}^\dagger)X)$, completing the construction of $\Phi_2$. \\ Given a cochain complex $X$, the maps $\Phi_1$ and $\Phi_2$ will allow us to relate $\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X)$ and $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$, via $\text{A}(X)$. First, we need two lemmas, the first of which has to do with the limiting tower that occurs in the construction of $\text{A}(X)$. \begin{Lemma}\label{lem:stabmapontocx} If $X$ is a cochain complex, the towers \[ \cdots \to (\Sigma^2\mathbf{E}^\dagger)X \to (\Sigma\mathbf{E}^\dagger)X \to \mathbf{E}^\dagger X \hspace{1cm} \cdots \to \emph{H}^\bullet((\Sigma^2\mathbf{E}^\dagger)X) \to \emph{H}^\bullet((\Sigma\mathbf{E}^\dagger)X) \to \emph{H}^\bullet(\mathbf{E}^\dagger X) \] satisfy the Mittag-Leffler condition. 
Moreover, for each $t \ge 0$, the towers \[ \cdots \to \emph{F}_t(\Sigma^2\mathbf{E}^\dagger)X \to \emph{F}_t(\Sigma\mathbf{E}^\dagger)X \to \emph{F}_t\mathbf{E}^\dagger X \hspace{1cm} \cdots \to \emph{F}_t\emph{H}^\bullet((\Sigma^2\mathbf{E}^\dagger)X) \to \emph{F}_t\emph{H}^\bullet((\Sigma\mathbf{E}^\dagger)X) \to \emph{F}_t\emph{H}^\bullet(\mathbf{E}^\dagger X) \] also satisfy the Mittag-Leffler condition. \end{Lemma} \begin{proof} We shall show that, for each $n \ge 0$ and any cochain complex $X$, the towers \[ \cdots \to (\Sigma^{2}\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n} \to (\Sigma\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n} \to \mathpzc{E}^\dagger(n) \otimes_{\Sigma_n} X^{\otimes n} \] \[ \cdots \to \text{H}^\bullet((\Sigma^{2}\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \to \text{H}^\bullet((\Sigma\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \to \text{H}^\bullet(\mathpzc{E}^\dagger(n) \otimes_{\Sigma_n} X^{\otimes n}) \] satisfy the Mittag-Leffler condition, from which the desired results follow immediately. The Mittag-Leffler property for the first of the two towers above follows immediately from the Mittag-Leffler property of the tower $\cdots \to (\Sigma^{2}\mathpzc{E})(n) \to (\Sigma\mathpzc{E})(n) \to \mathpzc{E}(n)$, which was established in Proposition~\ref{prop:stabmapontoML}. Consider then the second tower. Let $\{c_i\}$ denote a basis of $\text{H}^\bullet(X)$, and recall the identifications above, for each $k \ge 0$, of $\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ with $\mathcal{F}_k$ and $\text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X)$ with $\text{F}_t\mathcal{F}_k$. 
Inspecting the definitions of the filtrations, we have that $\text{H}^\bullet((\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n})$ is the $\mathbb{F}_p$-submodule of $\mathcal{F}_k$ generated by the terms $(P^{I_1}c_1) \cdots (P^{I_r}c_r)$ where $r \ge 0$, each $I_j$ is admissible, $e(I_j) < |c_j| + k$ and $p^{l(I_1)} + \cdots + p^{l(I_r)} = n$. Now, given a fixed $k$, the map $\text{H}^\bullet((\Sigma^{k+1}\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \to \text{H}^\bullet((\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n})$ sends any $P^Ic_i$ which satisfies not merely $e(I) < |c_i| + k + 1$ but the stronger condition $e(I) < |c_i| + k$ to itself, so that all monomials $P^Ic_i$ in $\mathcal{F}_k$ occur in the image. On the other hand, by Proposition~\ref{prop:instab_prods}, all products are killed, and in particular, no products occur in the image. We have thus identified the image of the map $\text{H}^\bullet((\Sigma^{k+1}\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \to \text{H}^\bullet((\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n})$, and, moreover, essentially the same argument shows that all maps $\text{H}^\bullet((\Sigma^{k'}\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \to \text{H}^\bullet((\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n})$, for $k' \ge k+1$, have this same image, which gives us the desired result. \end{proof} Next, given a cochain complex $X$, the following lemma offers a partial comparison of $\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X)$ and $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$ via $\text{A}(X)$, using the maps $\Phi_1$ and $\Phi_2$. It is only a partial comparison because it holds only in the case of finite cochain complexes $X$; recall, as per Definition~\ref{def:fin_mod}, that a finite $\mathbb{F}_p$-cochain complex is one which is bounded above and below, and of finite dimension in each degree.
\begin{Lemma}\label{lem:PhiPsILfinite} For finite cochain complexes $X$, the maps \[ \Phi_1 \colon \emph{H}^\bullet(\mathbf{E}_{\mathbf{st}}^\dagger X) \to \emph{A}(X) \] \[ \Phi_2 \colon \widehat{\mathcal{B}} \otimes \emph{H}^\bullet (X) \to \emph{A}(X) \] are isomorphisms. \end{Lemma} \begin{proof} Let $X$ be a finite cochain complex. We first consider the map $\Phi_1$. We have filtrations $\{\text{F}_t\mathbf{E}_{\textbf{st}}^\dagger X\}_{t \ge 0}$ and $\{\text{F}_t(\Sigma^k\mathbf{E}^\dagger) X\}_{t \ge 0}$ of the free algebras $\mathbf{E}_{\textbf{st}}^\dagger X$ and $(\Sigma^k\mathbf{E}^\dagger) X$ by setting: \[ \text{F}_t\mathbf{E}_{\textbf{st}}^\dagger X = \bigoplus_{n \le t} \mathpzc{E}_{\textbf{st}}^\dagger(n) \otimes_{\Sigma_n} X^{\otimes n} \] \[ \text{F}_t(\Sigma^k\mathbf{E}^\dagger) X = \bigoplus_{n \le t} (\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n} \] By taking cohomologies, we also get similar filtrations of $\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X)$ and $\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger) X)$. \\ Now, since $X$ is finite, we have: \begin{align*} \text{F}_t\mathbf{E}_{\text{st}}^\dagger X &= \bigoplus_{n \le t} \mathpzc{E}_{\text{st}}^\dagger(n) \otimes_{\Sigma_n} X^{\otimes n} \\ &= \bigoplus_{n \le t} \lim_{k \ge 0}((\Sigma^k\mathpzc{E}^\dagger)(n)) \otimes_{\Sigma_n} X^{\otimes n} \\ &\cong \bigoplus_{n \le t} \lim_{k \ge 0}((\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \\ &\cong \lim_{k \ge 0} (\bigoplus_{n \le t}(\Sigma^k\mathpzc{E}^\dagger)(n) \otimes_{\Sigma_n} X^{\otimes n}) \\ & = \lim_{k \ge 0} \text{F}_t(\Sigma^k\mathbf{E}^\dagger)X \end{align*} Here it is the first (non-identity) isomorphism which requires finiteness of $X$.
By Lemma~\ref{lem:stabmapontocx}, we have that \[ \text{F}_t\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X) \cong \lim_{k \ge 0} \text{F}_t\text{H}^\bullet((\Sigma^k\mathbf{E}^\dagger)X) \] and then, upon taking colimits over $t \ge 0$, we get that $\Phi_1 \colon \text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X) \to \text{A}(X)$ is an isomorphism, as desired. Now consider the map $\Phi_2$. It is standard that, over $\mathbb{F}_p$, as over any field, any cochain complex can be written as a direct sum $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$, where $\mathbb{S}^n$ and $\mathbb{D}^n$ denote the standard sphere and disk complexes (see Section~\ref{sec:nots_convs}). As $X$ is finite, both $I$ and $J$ here must be finite. Because, by Proposition~\ref{prop:ABadditive}, $\text{A}(-)$ is additive, and because $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(-)$ is of course also additive, we need only demonstrate the case where $X$ is a sphere complex $\mathbb{S}^n$ or a disk complex $\mathbb{D}^n$. The case of the disk complex follows immediately from the fact that the $\Sigma^k\mathbf{E}^\dagger$ preserve quasi-isomorphisms (as the operads $\Sigma^k\mathpzc{E}^\dagger$ are $\Sigma$-free). Thus, it remains to consider the case where $X = \mathbb{S}^n$ for some $n \in \mathbb{Z}$. Let $c_n$ denote the generator of $\text{H}^\bullet(X)$ in degree $n$.
By Proposition~\ref{prop:FtBhatcomplete}, we have that: \[ \widehat{\mathcal{B}} \otimes \text{H}^\bullet(X) = \underset{t \ge 0}{\text{colim}} \lim_{k \ge 0} \text{F}_t \mathcal{B}_{< k} \otimes \mathbb{F}_p\{c_n\} \] Using the bases which we saw earlier for the $\mathcal{B}_{< k}$ (those derived from the Cartan-Serre basis for $\mathcal{B}$), we see that $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$, in degree $d \in \mathbb{Z}$, say, is isomorphic to the module which consists of infinite sums \[ \sum a_{I}(P^{I}c_n) \] by which we mean functions $f \colon \{I \mid I \: \text{admissible}\} \to \mathbb{F}_p$, where $f(I) = a_I$, satisfying the following requirements: \begin{itemize} \item For all $I$, if $a_I \neq 0$, $d(I) + n = d$. \item The set of lengths $\{l(I) \mid a_I \neq 0\}$ is bounded above, or, equivalently, finite. \item For any $k \ge 0$, $\#\{I \mid a_I \neq 0, e(I) < k\}$ is finite. \end{itemize} On the other hand, as in the proof of Proposition~\ref{prop:ABadditive}, $\text{A}(X)$, in degree $d$, is isomorphic to the module which consists of infinite sums \[ \sum a_{I}(P^{I}c_n) \] satisfying the following requirements: \begin{itemize} \item For all $I$, if $a_I \neq 0$, $d(I) + n = d$. \item The set of lengths $\{l(I) \mid a_I \neq 0\}$ is bounded above, or, equivalently, finite. \item For any $k \ge 0$, $\#\{I \mid a_I \neq 0, e(I) < k+n\}$ is finite. \end{itemize} The only difference occurs in the third condition; however, the two conditions are equivalent, as one sees by shifting the parameter $k$ by $n$, and an easy check shows that the map $\Phi_2$, under these isomorphisms, corresponds simply to the identity, and so is itself an isomorphism.
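To illustrate that the completion genuinely admits infinite sums here, consider $p = 2$ with the usual conventions for admissible monomials of generalized Steenrod operations (operations $Sq^i$ for all $i \in \mathbb{Z}$, with $(i_1, i_2)$ admissible when $i_1 \ge 2i_2$, of degree $i_1 + i_2$ and excess $i_1 - i_2$). Then, for any fixed $m \ge 0$ and arbitrary coefficients $a_j \in \mathbb{F}_2$, the sum
\[ \sum_{j \ge 0} a_j \, Sq^{m+j} Sq^{-j} c_n \]
satisfies all three requirements in degree $d = m + n$: every term has degree $m + n$, every length equals $2$, and the excesses $m + 2j$ tend to infinity, so that only finitely many of them lie below any given bound.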
\end{proof} Finally, with the above lemmas in hand, given a cochain complex $X$, we shall now be able to use the maps $\Phi_1$ and $\Phi_2$ above to show that $\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger X)$ and $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$ are in fact naturally isomorphic, thus computing the cohomologies of free algebras over the stable Barratt-Eccles cochain operad $\mathpzc{E}_{\text{st}}^\dagger$. \begin{Proposition}\label{prop:stablefreehom} For cochain complexes $X$, we have a natural isomorphism: \[ \emph{H}^\bullet(\mathbf{E}_{\mathbf{st}}^\dagger X) \cong \widehat{\mathcal{B}} \otimes \emph{H}^\bullet (X) \] \end{Proposition} \begin{proof} Consider the maps \[ \Phi_1 \colon \text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X) \to \text{A}(X) \] \[ \Phi_2 \colon \widehat{\mathcal{B}} \otimes \text{H}^\bullet (X) \to \text{A}(X) \] which we recall are natural in $X$. We will demonstrate the desired result by showing that $\Phi_1$ and $\Phi_2$ are both injective and that they have the same image inside $\text{A}(X)$. As we have noted before, it is standard that, over $\mathbb{F}_p$, as over any field, any cochain complex is, up to isomorphism, a direct sum $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$, where $\mathbb{S}^n$ and $\mathbb{D}^n$ denote the standard sphere and disk complexes (see Section~\ref{sec:nots_convs}). Thus, by naturality, it suffices to show that we have the aforementioned injectivity and coincidence of images for the complexes $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$. In fact, we can restrict our attention even further. Given such a complex $(\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$, the inclusion $\bigoplus_{i \in I} \mathbb{S}^{n_i} \to (\bigoplus_{i \in I} \mathbb{S}^{n_i}) \oplus (\bigoplus_{j \in J} \mathbb{D}^{n_j})$ is a quasi-isomorphism. 
Note that, as a functor of $X$, by Proposition~\ref{prop:MS_st_pres_w_eqs}, $\text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X)$ maps quasi-isomorphisms to isomorphisms, and, similarly, as a functor of $X$, $\text{A}(X)$ maps quasi-isomorphisms to isomorphisms because the $\Sigma^k\mathpzc{E}^\dagger$ are $\Sigma$-free. Moreover, as a functor of $X$, $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$ maps quasi-isomorphisms to isomorphisms for obvious reasons. It follows, by naturality, that it suffices to demonstrate the aforementioned injectivity and coincidence of images for a complex $X$ of the form $\bigoplus_{i \in I} \mathbb{S}^{n_i}$. Let $X$ be such a complex and then let $\{c_i\}$ be a basis for $\text{H}^\bullet(X)$. Letting $\lambda$ be an ordinal in bijection with $I$, choose a filtration \[ X_0 \to X_1 \to \cdots \to X_{\alpha} \to \cdots \] of $X$, where each $X_{\alpha}$ is finite, and where, for each $\alpha < \lambda$, $\{c_i\}_{i < \alpha}$ gives a basis of $\text{H}^\bullet (X_\alpha)$.
Because $\text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X)$ and $\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$ commute with filtered colimits, and because, by Lemma~\ref{lem:PhiPsILfinite}, $\Phi_1$ and $\Phi_2$ are isomorphisms when the input is finite, we can factor $\Phi_1$ and $\Phi_2$ as follows: \begin{center} \begin{tikzpicture}[node distance = 1cm] \node [] (A) {$\text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X)$}; \node [below of = A] (B) {$\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X)$}; \node [right of = A,xshift=3cm] (C) {$\text{colim}_\alpha \text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger X_\alpha)$}; \node [right of = B,xshift=3cm] (D) {$\text{colim}_\alpha (\widehat{\mathcal{B}} \otimes \text{H}^\bullet(X_\alpha))$}; \node [right of = C,yshift = -5mm,xshift=3cm] (E) {$\text{colim}_\alpha \text{A}(X_\alpha)$}; \node [right of = E,xshift=2cm] (F) {$\text{A}(X)$}; \draw [->] (A) -- (C) node[midway,anchor=south]{$\cong$}; \draw [->] (B) -- (D) node[midway,anchor=north]{$\cong$}; \draw [->] (C) -- (E) node[midway,anchor=south]{$\cong$}; \draw [->] (D) -- (E) node[midway,anchor=north]{$\cong$}; \draw [->] (E) -- (F); \end{tikzpicture} \end{center} It follows that the images of $\Phi_1$ and $\Phi_2$ coincide, as desired. As for injectivity, by the above factorizations, it suffices to demonstrate injectivity of the map: \[ \text{colim}_\alpha \text{A}(X_\alpha) \longrightarrow \text{A}(X) \] As in the proof of Proposition~\ref{prop:ABadditive}, $\text{A}(X)$, in degree $d$, is isomorphic to the module which consists of infinite sums \[ \sum a_{I_i,c_i}(P^{I_i}c_i) \] by which we mean functions $f \colon \{(I_i,c_i) \mid I_i \: \text{admissible}\} \to \mathbb{F}_p$, where $f(I_i,c_i) = a_{I_i,c_i}$, satisfying the following requirements: \begin{itemize} \item For all $(I_i,c_i)$, if $a_{I_i,c_i} \neq 0$, $d(I_i) + |c_i| = d$. \item The set of lengths $\#\{l(I_i) \mid a_{I_i,c_i} \neq 0\}$ is bounded above, or, equivalently, finite. 
\item For any $k \ge 0$, $\#\{(I_i,c_i) \mid a_{I_i,c_i} \neq 0, e(I_i) < |c_i|+k\}$ is finite. \end{itemize} In the above, $i$ varies over all of $\lambda$. For each $\alpha$, under the above isomorphism, $\text{A}(X_\alpha)$ can then be identified with the subset of $\text{A}(X)$ comprising the sums which satisfy the additional requirement that $a_{I_i,c_i} = 0$ if $i \nless \alpha$. It follows that the map $\text{colim}_\alpha \text{A}(X_\alpha) \to \text{A}(X)$ is injective, with image consisting of the sums which satisfy the additional requirement that there exists some $\alpha < \lambda$ such that $a_{I_i,c_i} = 0$ for all $i \nless \alpha$. \end{proof} \begin{Remark}\label{rmk:eqhomnontrivial} We saw in Proposition~\ref{prop:stabhom} that the non-equivariant homologies, in individual arities, $\text{H}^\bullet(\mathpzc{E}^\dagger_{\text{st}}(n))$, $n \ge 0$, of the stable operad $\mathpzc{E}_{\text{st}}^\dagger$ are simply zero (except the unit in arity 1). On the other hand, the equivariant homologies, summed up, $\oplus_n \text{H}^\bullet(\mathpzc{E}^\dagger_{\text{st}}(n)/\Sigma_n)$, yield $\text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger\mathbb{F}_p[0])$. By Proposition~\ref{prop:stablefreehom} above, we have that this is exactly the completion $\widehat{\mathcal{B}}$ of the generalized Steenrod algebra: \[ \bigoplus_{n \ge 0} \text{H}^\bullet(\mathpzc{E}^\dagger_{\text{st}}(n)/\Sigma_n) \:\: \cong \:\: \widehat{\mathcal{B}} \] Thus, while the non-equivariant homologies are (almost) zero, the equivariant homologies $\text{H}^\bullet(\mathpzc{E}^\dagger_{\text{st}}(n)/\Sigma_n)$, $n \ge 0$, are highly non-trivial objects. \hfill $\vert\vert$ \end{Remark} \subsection{The Cohomology Operations II}\label{sec:stabops2} Let $A$ be an algebra over the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$. Earlier, at least in the case $p = 2$, we constructed cohomology operations for $A$, which could be viewed as elements of $\mathcal{B}$.
In fact, these operations do not account for all the operations which are naturally induced on the cohomology of $A$. The algebra of cohomology operations for $A$ is instead the completion $\widehat{\mathcal{B}}$, which as we have seen allows certain infinite sums. \begin{Proposition}\label{prop:(co)hom_ops_stab_op} Given an algebra $A$ over $\mathpzc{E}_{\emph{st}}^\dagger$, $\emph{H}^\bullet(A)$ is naturally an algebra over $\widehat{\mathcal{B}}$. \end{Proposition} \begin{proof} The map describing the action of $\widehat{\mathcal{B}}$ is the following composite: \[ \widehat{\mathcal{B}} \otimes \text{H}^\bullet(A) \overset{\cong}\longrightarrow \text{H}^\bullet(\mathbf{E}_{\textbf{st}}^\dagger A) \to \text{H}^\bullet(A) \] Here the first map is the isomorphism provided by Proposition~\ref{prop:stablefreehom}. The second map is that which arises by applying $\text{H}^\bullet(-)$ to the structure map $\alpha \colon \mathbf{E}_{\textbf{st}}^\dagger A \to A$ which defines the $\mathpzc{E}_{\text{st}}^\dagger$-algebra structure of $A$. The properties required by a $\widehat{\mathcal{B}}$-action follow from those required by an $\mathpzc{E}_{\text{st}}^\dagger$-action. For example, associativity can be derived as follows.
Letting $m \colon \mathbf{E}_{\textbf{st}}^\dagger \mathbf{E}_{\textbf{st}}^\dagger \Rightarrow \mathbf{E}_{\textbf{st}}^\dagger$ denote the monadic multiplication, we have the following commutative square: \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\mathbf{E}_{\textbf{st}}^\dagger \mathbf{E}_{\textbf{st}}^\dagger A$}; \node[below= of A](C){$\mathbf{E}_{\textbf{st}}^\dagger A$}; \node[right= of A](B){$\mathbf{E}_{\textbf{st}}^\dagger A$}; \node[below= of B,yshift=-0.75mm](D){$A$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$\mathbf{E}_{\textbf{st}}^\dagger\alpha$}; \draw[->] (A) -- (C) node[midway,anchor=east]{$m_A$}; \draw[->] (C) -- (D) node[midway,anchor=north]{$\alpha$}; \draw[->] (B) -- (D) node[midway,anchor=west]{$\alpha$}; \end{tikzpicture} \end{center} Upon applying $\text{H}^\bullet(-)$, and invoking Proposition~\ref{prop:stablefreehom}, we get the following commutative square: \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\widehat{\mathcal{B}} \otimes \widehat{\mathcal{B}} \otimes \text{H}^\bullet(A)$}; \node[below= of A](C){$\widehat{\mathcal{B}} \otimes \text{H}^\bullet(A)$}; \node[right= of A](B){$\widehat{\mathcal{B}} \otimes \text{H}^\bullet(A)$}; \node[below= of B,yshift=-0.75mm](D){$\text{H}^\bullet(A)$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \end{tikzpicture} \end{center} A diagram chase now yields associativity of the $\widehat{\mathcal{B}}$-action. \end{proof} \begin{Remark} Earlier, we constructed cohomology operations for algebras over $\mathpzc{E}_{\text{st}}^\dagger$ explicitly, at least in the case $p = 2$. 
If one unravels the general construction in Proposition~\ref{prop:(co)hom_ops_stab_op} above, one can see that, in the case $p = 2$, and in the case of the finite sums in $\widehat{\mathcal{B}}$, the operations coincide with those which we constructed earlier. In fact, the action of an infinite sum, in any given instance, reduces to an action by a finite sum in $\mathcal{B}$ (one projects to an appropriate finite sub-sum comprising iterated operations up to a certain upper bound on the excess). Moreover, in the earlier, more explicit construction of the operations for algebras over $\mathpzc{E}_{\text{st}}^\dagger$, we did not verify such things as the Adem relations. These now follow from the general construction above and from the definition of $\widehat{\mathcal{B}}$ itself. \hfill $\vert\vert$ \end{Remark} \subsection{Unstable Modules over $\widehat{\mathcal{B}}$} We saw earlier that, in the case of an algebra over the unstable operad $\mathpzc{E}^\dagger$, the cohomology is not only a $\mathcal{B}$-module, but an unstable $\mathcal{B}$-module, where instability here means that $P^Ix = 0$ when $e(I) > |x|$. We can entirely analogously define instability for $\widehat{\mathcal{B}}$-modules: say that a $\widehat{\mathcal{B}}$-module $H$ is \textit{unstable} if $P^Ix = 0$ whenever $e(I) > |x|$ (where $P^I$ is now really an infinite sum with all but one coefficient equal to zero). In the case of an algebra over the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$, however, the cohomology is in fact not an unstable $\widehat{\mathcal{B}}$-module -- we saw this earlier in the explicit construction of the operations in the case $p = 2$. \\ Now, given that $\widehat{\mathcal{B}}$-modules appear naturally in the stable case, it is natural to ask: why was $\widehat{\mathcal{B}}$ not seen in the unstable case?
The following proposition answers this question: restricting attention to unstable modules, we find that an unstable $\mathcal{B}$-module is canonically also an unstable $\widehat{\mathcal{B}}$-module, so that, in particular, in the case of the unstable operad, we could equivalently have said that the cohomologies are unstable $\widehat{\mathcal{B}}$-modules. \begin{Proposition}\label{prop:BBhatactions} Let $H$ be a graded module over $\mathbb{F}_p$. An unstable action by $\mathcal{B}$ on $H$ extends canonically to an unstable action by $\widehat{\mathcal{B}}$. \end{Proposition} \begin{proof} Suppose given an unstable $\mathcal{B}$-action on $H$. We define a $\widehat{\mathcal{B}}$-action on $H$ as follows: given any element $x$, an infinite sum is first projected to the finite sub-sum comprising operations of excess $< |x|$, which then acts according to the given action by $\mathcal{B}$. We must simply check that this does indeed yield a $\widehat{\mathcal{B}}$-action, as instability and compatibility with the $\mathcal{B}$-action are clear. The only thing that is non-obvious is associativity. This can be verified as follows. Let $\Sigma_1$ and $\Sigma_2$ denote two infinite sums in $\widehat{\mathcal{B}}$, and let $x$ be an element of $H$. Then $(\Sigma_1)(\Sigma_2 x)$ can be computed as follows: consider only those operations $P^J$ in $\Sigma_2$ where $e(J) < |x|$, and then, given such an operation $P^J$, consider only those operations $P^I$ in $\Sigma_1$ where $e(I) < |x| + d(J)$. On the other hand, $(\Sigma_1\Sigma_2)x$ can be computed as follows: consider only those terms in the product $\Sigma_1\Sigma_2$ coming from a product $P^I \cdot P^J$ which can contain a term of excess $< |x|$ (we don't need to restrict to exactly those terms with excess $< |x|$, since any extraneous terms of excess $\ge |x|$ will act by zero due to instability). Due to Lemma~\ref{lem:QIQJprop}, we can restrict to products $P^I \cdot P^J$ where $e(J) < |x|$.
Due to Lemma~\ref{lem:excess_down}, we can restrict further to those products which also satisfy $e(IJ) < |x|$, where $IJ$ denotes the concatenation of $I$ and $J$; if $I$ is non-empty, using $e(IJ) = e(I) - d(J)$, we can rephrase this condition as $e(I) < |x| + d(J)$, and moreover can do the same if $I$ is empty, as then both $e(I) < |x| + d(J)$ and $e(IJ) < |x|$ are true under the standing assumption that $e(J) < |x|$. These conditions, namely $e(J) < |x|$ and $e(I) < |x| + d(J)$, are exactly those which we saw in the consideration of $(\Sigma_1)(\Sigma_2 x)$, and so we have demonstrated that $(\Sigma_1)(\Sigma_2 x) = (\Sigma_1\Sigma_2)x$, as desired. \end{proof} \subsection{The Relation Between $\widehat{\mathcal{B}}$ and the Steenrod Algebra $\mathcal{A}$} We have seen that cohomologies of algebras over $\mathpzc{E}_{\text{st}}^\dagger$ yield modules over $\widehat{\mathcal{B}}$. We shall see later that spectral cochains yield algebras over $\mathpzc{E}_{\text{st}}^\dagger$, and as such, cohomologies of spectra possess actions by $\widehat{\mathcal{B}}$. In this special case of cochains on spectra, as in the case of cochains on spaces, we shall see that $P^0$ acts by the identity. As a result, we shall now consider the quotient $\widehat{\mathcal{B}}/(1-P^0)$. Our goal is to show that this quotient is isomorphic to the Steenrod algebra $\mathcal{A}$. First, recall that the Steenrod algebra may be defined as follows. If $p = 2$, where $p$ is our fixed prime, we have \[ \mathcal{A} = \mathbf{F}\{P^s \mid s \ge 0\}/(I_{\text{Adem}}, 1-P^0) \] where $\mathbf{F}\{P^s \mid s \ge 0\}$ denotes the free graded algebra over $\mathbb{F}_2$ on the formal symbols $P^s$, for $s \ge 0$, where $P^s$ has degree $s$, and where $I_{\text{Adem}}$ denotes the two-sided ideal generated by the Adem relations.
If $p > 2$, we have \[ \mathcal{A} = \mathbf{F}\{P^s, \beta P^s \mid s \ge 0\}/(I_{\text{Adem}}, 1-P^0) \] where $\mathbf{F}\{P^s, \beta P^s \mid s \ge 0\}$ denotes the free graded algebra over $\mathbb{F}_p$ on the formal symbols $P^s, \beta P^s$, for $s \ge 0$, where $P^s, \beta P^s$ have degrees $2s(p-1), 2s(p-1)+1$ respectively, and where $I_{\text{Adem}}$ denotes the two-sided ideal generated by the Adem relations. \begin{Remark} In the above constructions of $\mathcal{A}$, the Adem relations are to be understood as those which we have seen earlier in the construction of $\mathcal{B}$, except that the summation index is restricted so as to yield only operations of non-negative degree. \hfill $\vert\vert$ \end{Remark} The Steenrod algebra has a basis, the Cartan-Serre basis, which is similar to the Cartan-Serre basis which we described earlier for $\mathcal{B}$. This is an $\mathbb{F}_p$-basis and is given by the monomials $P^I$ where $I$ is admissible and, if $p = 2$, $I = (i_1,\dots,i_k)$ satisfies $i_j > 0$ for each $j$, and if $p > 2$, $I = (\varepsilon_1, i_1,\dots,\varepsilon_k, i_k)$ satisfies, once again, $i_j > 0$ for each $j$. See, for example,~\cite{Milnor}. \\ Now, note that we have a map \[ \widehat{\mathcal{B}} \to \mathcal{A} \] where, given the element $\sum a_IP^I$ of $\widehat{\mathcal{B}}$, we map it to the class of the sub-sum consisting of those multi-indices in which all entries are non-negative; this sub-sum is finite by Proposition~\ref{prop:i1posinf_ikneginf}. That this is indeed a map of algebras follows from the lemma below. \begin{Lemma}\label{lem:PIPJnegentry} Given admissible multi-indices $I$ and $J$, if either of them contains a negative entry, then all multi-indices in the admissible monomials expansion of $P^IP^J$ must contain a negative entry, or, equivalently by admissibility, must have a negative final entry.
\end{Lemma} Here, as before, in the case $p > 2$, where multi-indices take the form $(\varepsilon_1,i_1,\dots,\varepsilon_r,i_r)$, where the $i_j$ lie in $\mathbb{Z}$ while the $\varepsilon_j$ lie in $\{0,1\}$, the first entry is taken to be $i_1$, and the final entry, $i_r$, which is to say we disregard the $\varepsilon_j$ for this particular purpose. \begin{proof} We shall outline the case where $p = 2$; the $p > 2$ case is analogous. If $J$ contains a negative entry, or, equivalently by admissibility, has a negative final entry, the result follows by Lemma~\ref{lem:first_entry_down_last_up}. Assume then that it is $I$ that has a negative final entry. We shall prove the result by inducting on the length of $J$. If $J$ has length zero, it is empty and the result is trivial. Suppose that $J$ has length one, and say that it is equal to $(b)$, for some $b \in \mathbb{Z}$. Consider $P^IP^b$. We shall prove the result in this case by an induction on the length of $I$. As $I$ is required to contain a negative entry, it cannot have zero length. Suppose that $I$ has length one, and say that it is equal to $(a)$, where we must have $a < 0$. If $a \ge 2b$, the result is obvious as then $b < 0$. Suppose that $a < 2b$. Then the admissible monomials expansion of $P^aP^b$ is given by the Adem relations: \[ \sum \binom{b-i-1}{a-2i}P^{a+b-i}P^i \] In order for the binomial coefficient to be non-zero, we must have $i \le a/2$, and so we see that the final entry $i$ in the multi-index $(a+b-i,i)$ is always negative, as desired. Now suppose that we have the result for terms $P^IP^b$ where $I$ has length $< n$, for some $n \ge 2$. Given an admissible multi-index $I = (i_1,\dots,i_n)$ of length $n$, where $i_n < 0$, we have $P^IP^b = P^{i_1}(P^{i_2} \cdots P^{i_n}P^b)$.
Upon first forming the admissible monomials expansion of $P^{i_2} \cdots P^{i_n}P^b$, the inductive hypothesis for the induction on the length of $I$ and Lemma~\ref{lem:first_entry_down_last_up} give us the desired result. Now let us return to the induction on the length of $J$. We have demonstrated the result in the cases where $J$ has length zero or one. Now suppose that we have the result for terms $P^IP^J$ where $J$ has length $< n$, for some $n \ge 2$. Given an admissible multi-index $J = (j_1,\dots,j_n)$ of length $n$, we have $P^IP^J = (P^IP^{j_1} \cdots P^{j_{n-1}})P^{j_n}$. The result now follows by the inductive hypothesis and by the case where $J$ has length one. \end{proof} \begin{Proposition}\label{prop:BhatandA} We have the following: \begin{itemize} \item[(i)] The following sequence is exact: \[ 0 \longrightarrow \widehat{\mathcal{B}} \overset{1-P^0}\longrightarrow \widehat{\mathcal{B}} \longrightarrow \mathcal{A} \longrightarrow 0 \] \item[(ii)] The left ideal of $\widehat{\mathcal{B}}$ generated by $1-P^0$ coincides with the two-sided ideal, and the above map $\widehat{\mathcal{B}} \to \mathcal{A}$ induces an algebra isomorphism: \[ \widehat{\mathcal{B}}/(1-P^0) \cong \mathcal{A} \] \end{itemize} \end{Proposition} In the above sequence, the map denoted by $1-P^0$ is right multiplication by $1-P^0$. Note that, as shown in~\cite{Mandell}, analogues of both of these statements hold true if we replace $\widehat{\mathcal{B}}$ with $\mathcal{B}$. \begin{proof} Let us consider (i). Recall that, for each $k \ge 0$, we defined $\mathcal{B}_{\le k}$ as the quotient of $\mathcal{B}$ by $I_{\text{Adem}} + I_{\text{exc} \, > \, k}$, where $I_{\text{Adem}}$ denotes the ideal generated by the Adem relations and $I_{\text{exc} \, > \, k}$ the ideal generated by the monomials of excess $> k$. Let $\mathcal{A}_{\le k}$ denote the analogous quotients of $\mathcal{A}$.
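As a brief aside, the length-one base case of Lemma~\ref{lem:PIPJnegentry} above can also be checked mechanically. The following sketch is purely illustrative (the helper names are ours, and it assumes the usual convention for the generalized binomial coefficient: the polynomial formula in the upper argument, vanishing when the lower argument is negative); it expands an inadmissible product $P^aP^b$ modulo 2 via the displayed Adem relation and confirms that every resulting term has a negative final entry:

```python
from math import prod

def binom(m, k):
    # Generalized binomial coefficient: zero for k < 0, otherwise the
    # polynomial formula m(m-1)...(m-k+1)/k!, valid for negative m too.
    if k < 0:
        return 0
    return prod(m - j for j in range(k)) // prod(range(1, k + 1))

def adem_expand(a, b, window=50):
    # Mod-2 Adem expansion of the inadmissible product P^a P^b (a < 2b):
    #   P^a P^b = sum_i binom(b-i-1, a-2i) P^{a+b-i} P^i.
    # Only finitely many i contribute; a window around 0 suffices for
    # small |a| and |b|.
    assert a < 2 * b, "the relation applies to inadmissible products"
    return [(a + b - i, i) for i in range(-window, window + 1)
            if binom(b - i - 1, a - 2 * i) % 2 == 1]

# Classical sanity checks (p = 2): Sq^1 Sq^1 = 0 and Sq^1 Sq^2 = Sq^3.
assert adem_expand(1, 1) == []
assert adem_expand(1, 2) == [(3, 0)]

# The base case of the lemma: P^{-3} P^{2} = P^{3}P^{-4} + P^{1}P^{-2},
# and every term has a negative final entry.
terms = adem_expand(-3, 2)
print(terms)  # [(3, -4), (1, -2)]
assert all(i < 0 for (_, i) in terms)
assert all(s >= 2 * t for (s, t) in terms)  # each term is admissible
```

Here $P^{-3}P^{2}$ expands to $P^{3}P^{-4} + P^{1}P^{-2}$, both terms admissible with negative final entry, exactly as the lemma predicts.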
As per Proposition 12.5 in~\cite{Mandell}, for each $k \ge 1$, we have an exact sequence as follows: \[ 0 \longrightarrow \mathcal{B}_{\le k} \overset{1-P^0}\longrightarrow \mathcal{B}_{\le k} \longrightarrow \mathcal{A}_{\le k} \longrightarrow 0 \] For each $t \ge 0$, we can consider a filtered version of the above sequence, where the filtrations are by length. That is, we can consider the sequence \[ 0 \longrightarrow \text{F}_t\mathcal{B}_{\le k} \overset{1-P^0}\longrightarrow \text{F}_{t+1}\mathcal{B}_{\le k} \longrightarrow \text{F}_{t+1}\mathcal{A}_{\le k} \longrightarrow 0 \] and we note that this sequence is itself also exact: surjectivity at the righthand end is clear, injectivity at the lefthand end follows from the exactness of the previous sequence, and, just as in the case of the previous sequence, exactness in the middle follows by examination of the Cartan-Serre bases for $\mathcal{B}_{\le k}$ and $\mathcal{A}_{\le k}$ (these are the same as the bases for $\mathcal{B}$ and $\mathcal{A}$, made up of admissible multi-indices, together with the obvious restriction on the excess of the multi-indices). 
Upon taking limits over $k$, since the maps $\text{F}_t\mathcal{B}_{\le k+1} \to \text{F}_t\mathcal{B}_{\le k}$ are clearly onto, so that the corresponding $\lim^1$ vanishes, we get an exact sequence as follows: \[ 0 \longrightarrow \lim_{k \ge 1}\text{F}_t\mathcal{B}_{\le k} \longrightarrow \lim_{k \ge 1}\text{F}_{t+1}\mathcal{B}_{\le k} \longrightarrow \lim_{k \ge 1}\text{F}_{t+1}\mathcal{A}_{\le k} \longrightarrow 0 \] Upon taking colimits over $t$, by Proposition~\ref{prop:FtBhatcomplete}, we get an exact sequence as follows: \[ 0 \longrightarrow \widehat{\mathcal{B}} \longrightarrow \widehat{\mathcal{B}} \longrightarrow \underset{t \ge 0}{\text{colim}}\lim_{k \ge 1}\text{F}_{t+1}\mathcal{A}_{\le k} \longrightarrow 0 \] Moreover, an easy check shows that $\text{colim}_{t \ge 0}\lim_{k \ge 1}\text{F}_{t+1}\mathcal{A}_{\le k} \cong \mathcal{A}$ (in the case of $\mathcal{A}$, no infinite sums arise in this construction, unlike the case of $\mathcal{B}$; this follows by an argument similar to that in Proposition~\ref{prop:i1posinf_ikneginf}, where we saw that there must exist negative degree operations in all but finitely many terms of an infinite sum $\sum a_IP^I$ of iterated operations increasing in excess). Thus we have an exact sequence \[ 0 \longrightarrow \widehat{\mathcal{B}} \longrightarrow \widehat{\mathcal{B}} \longrightarrow \mathcal{A} \longrightarrow 0 \] and, unravelling the identifications which we have made, we see that the maps in this sequence are exactly those in the proposition statement. This completes the proof of (i). \\ Now let us consider (ii). By the exactness in (i), the kernel of the map $\widehat{\mathcal{B}} \to \mathcal{A}$ coincides with the image of the map $\widehat{\mathcal{B}} \to \widehat{\mathcal{B}}$ which we have denoted by $1-P^0$. This latter image is the left ideal of $\widehat{\mathcal{B}}$ generated by $1-P^0$.
By definition of the map $\widehat{\mathcal{B}} \to \mathcal{A}$, its kernel clearly contains the two-sided ideal of $\widehat{\mathcal{B}}$ generated by $1-P^0$, and so we have that this two-sided ideal is contained in the aforementioned left ideal, from which it follows that these two ideals coincide. Moreover, we have the isomorphism $\widehat{\mathcal{B}}/(1-P^0) \cong \mathcal{A}$ because the map $\widehat{\mathcal{B}} \to \mathcal{A}$ is onto and the kernel, as just established, is precisely the two-sided ideal generated by $1-P^0$. \end{proof} \section{The Homotopy Coherent, or $\infty$-, Additivity of the Stable Operads} In this section, we wish to demonstrate the homotopy coherent, or $\infty$-, additivity of the stable Barratt-Eccles operad, justifying the adjective ``stable'', in three successively more general forms. First, we will show that given free algebras $\mathbf{E}_{\textbf{st}}^\dagger X$ and $\mathbf{E}_{\textbf{st}}^\dagger Y$, the algebra coproduct $\mathbf{E}_{\textbf{st}}^\dagger X \amalg \mathbf{E}_{\textbf{st}}^\dagger Y$ is naturally quasi-isomorphic to the direct sum $\mathbf{E}_{\textbf{st}}^\dagger X \oplus \mathbf{E}_{\textbf{st}}^\dagger Y$. As $\mathbf{E}_{\textbf{st}}^\dagger$ is a left adjoint as a functor from dg modules to algebras, and so preserves colimits, we can also phrase this result as saying that $\mathbf{E}_{\textbf{st}}^\dagger$, as a monad on dg modules, is homotopy additive. Next, we shall generalize this result and show that, for cofibrant algebras $A$ and $B$ over $\mathpzc{E}_{\text{st}}^\dagger$, the coproduct $A \amalg B$ is naturally quasi-isomorphic to $A \oplus B$. Here the cofibrancy is in the sense of the Quillen semi-model structure provided by Corollary~\ref{cor:stablesemimod}.
Finally, we shall generalize this one step further and show that, given a diagram of algebras $A \leftarrow C \rightarrow B$, if $A$ and $B$ are cofibrant and $C \to A$ is a cofibration, then $A \amalg_C B$ is naturally quasi-isomorphic to $A \oplus_C B$. \subsection{Derived Coproducts of Algebras over the Stable Operads} We begin with the homotopy additivity of the monad $\mathbf{E}_{\textbf{st}}^\dagger$. \begin{Proposition}\label{prop:additivity_of_monad} If $X$ and $Y$ are cochain complexes, we have a natural quasi-isomorphism: \[ \mathbf{E}_{\normalfont{\textbf{st}}}^\dagger(X \oplus Y) \sim \mathbf{E}^\dagger_{\normalfont{\textbf{st}}}(X) \oplus \mathbf{E}^\dagger_{\normalfont{\textbf{st}}}(Y) \] \end{Proposition} \begin{proof} Given the cochain complexes $X$ and $Y$, we have a canonical map \[ \mathbf{E}^\dagger_{\textbf{st}} (X) \oplus \mathbf{E}^\dagger_{\textbf{st}} (Y) \to \mathbf{E}^\dagger_{\textbf{st}} (X \oplus Y) \] and we claim that this map is a quasi-isomorphism. If $X$ and $Y$ are finite (where, as before, by a finite complex over $\mathbb{F}_p$ we mean one which is bounded above and below and of finite dimension in each degree), Lemma~\ref{lem:PhiPsILfinite} gives us a computation of the necessary cohomologies via the functor $\text{A}$, and then the result follows by the additivity of $\text{A}$ as per Proposition~\ref{prop:ABadditive}. Now fix $X$ to be some finite complex, say $X_0$, and consider $\mathbf{E}^\dagger_{\textbf{st}}(X_0) \oplus \mathbf{E}^\dagger_{\textbf{st}}(-)$ and $\mathbf{E}^\dagger_{\textbf{st}}(X_0 \oplus -)$ as endofunctors on cochain complexes.
As $\mathbf{E}_{\textbf{st}}^\dagger$, as a functor to algebras, preserves colimits, and because filtered colimits of dg operad algebras are created in the category of dg modules, both $\mathbf{E}^\dagger_{\textbf{st}}(X_0) \oplus \mathbf{E}^\dagger_{\textbf{st}}(-)$ and $\mathbf{E}^\dagger_{\textbf{st}}(X_0 \oplus -)$, as endofunctors on cochain complexes, preserve filtered colimits. Since any complex $Y$ can be written as a filtered colimit of its finite subcomplexes, by naturality and exactness of filtered colimits of complexes, we have that the map $\mathbf{E}^\dagger_{\textbf{st}}(X_0) \oplus \mathbf{E}^\dagger_{\textbf{st}}(Y) \to \mathbf{E}^\dagger_{\textbf{st}}(X_0 \oplus Y)$ is a quasi-isomorphism for all $Y$. Now repeat this argument, fixing instead $Y$ and considering the terms as functors of $X$, to get the desired general result. \end{proof} As we noted above, Proposition~\ref{prop:additivity_of_monad} can be regarded as a statement about coproducts of free algebras. We now consider, more generally, coproducts of cofibrant algebras. As we saw earlier, coproducts of cell algebras may be computed via enveloping operads, and so we shall be led to consider the enveloping operads associated to the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$. First, however, we have the following lemma. \begin{Lemma}\label{lem:stabilityigorsense} For each $j \ge 2$ and each non-trivial partition $j = j_1 + \cdots + j_k$, we have that: \[ \mathpzc{E}_{\emph{st}}^\dagger(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \sim 0 \] \end{Lemma} Here non-trivial means that the partition is not indiscrete. Moreover, by $\mathpzc{E}^\dagger_{\text{st}}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k}$, we mean $\mathpzc{E}^\dagger_{\text{st}}(j) \otimes_{\Sigma_{j_1} \times \cdots \times \Sigma_{j_k}} (\mathbb{S}^0)^{\otimes j_1} \otimes \cdots \otimes (\mathbb{S}^0)^{\otimes j_k}$, where $\mathbb{S}^0$ is the complex $\mathbb{F}_p[0]$.
\begin{proof} By Proposition~\ref{prop:additivity_of_monad}, we have that the canonical map \[ \mathbf{E}^\dagger_{\textbf{st}}(\mathbb{S}^0) \oplus \cdots \oplus \mathbf{E}^\dagger_{\textbf{st}}(\mathbb{S}^0) \to \mathbf{E}^\dagger_{\textbf{st}}(\mathbb{S}^0 \oplus \cdots \oplus \mathbb{S}^0) \] where $\mathbb{S}^0$ denotes the complex $\mathbb{F}_p[0]$, and where we take $k$ copies of $\mathbb{S}^0$ in the case of partitions of size $k$, is a quasi-isomorphism. The result now follows from the fact that: \[ \mathbf{E}^\dagger_{\textbf{st}}(\mathbb{S}^0 \oplus \cdots \oplus \mathbb{S}^0) \cong \bigoplus_{j_1, \dots, j_k \ge 0} \mathpzc{E}_{\text{st}}^\dagger(j_1 +\cdots + j_k) \otimes_{\Sigma_{j_1} \times \cdots \times \Sigma_{j_k}} (\mathbb{S}^0)^{\otimes j_1} \otimes \cdots \otimes (\mathbb{S}^0)^{\otimes j_k} \] \end{proof} Now, given an algebra $A$ over $\mathpzc{E}_{\text{st}}^\dagger$, and the associated enveloping operad $\mathpzc{U}^A$, recall, as in Section~\ref{sec:env_op}, that we have a canonical map $\mathpzc{E}^\dagger_{\text{st}} \to \mathpzc{U}^A$. \begin{Lemma}\label{lem:UandVforcofibA} Given a cofibrant algebra $A$ over $\mathpzc{E}^\dagger_{\emph{st}}$, for each $j \ge 1$ and any partition $j = j_1 + \cdots + j_k$, the canonical map \[ \mathpzc{E}_{\emph{st}}^\dagger(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to \mathpzc{U}^A(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \] is a quasi-isomorphism. \end{Lemma} \begin{proof} Without loss of generality, we may take $A$ to be a cell $\mathpzc{E}_{\text{st}}^\dagger$-algebra. Following the notation of Section~\ref{sec:cell_alg}, let \[ A_0 \to A_1 \to A_2 \to \cdots \] be a cell filtration of $A$ and fix some choices $M_1, M_2, \dots$ for the cochain complexes which appear in the attachment squares. For each $n \ge 0$, let $N_n = \oplus_{i \le n} M_i$, where $N_0 = 0$, and let also $N = \oplus_{i \ge 0} M_i$. 
As per Section~\ref{sec:env_op}, we have that, for each $j \ge 0$, as a graded right $\mathbb{F}_p[\Sigma_j]$-module: \begin{equation}\label{eqn:cellU} \mathpzc{U}^A(j) = \bigoplus_{i \ge 0} \mathpzc{E}^\dagger_{\text{st}}(i + j) \otimes_{\Sigma_i} (N[1])^{\otimes i} \end{equation} The differential on $\mathpzc{U}^A(j)$, we recall, is given by the Leibniz rule, the attachment maps and the operadic composition. Moreover, for each $n \ge 0$, and again for each $j \ge 0$, as a graded right $\mathbb{F}_p[\Sigma_j]$-module: \begin{equation}\label{eqn:cellskelU} \mathpzc{U}^{A_n}(j) = \bigoplus_{i \ge 0} \mathpzc{E}_{\text{st}}^\dagger(i + j) \otimes_{\Sigma_i} (N_n[1])^{\otimes i} \end{equation} Moreover, from Section~\ref{sec:env_op}, recall that we have filtrations $\text{F}_m\mathpzc{U}^{A_n}$ of the $\mathpzc{U}^{A_n}$. For each $j \ge 0$, the map $\mathpzc{E}_{\text{st}}^\dagger(j) \to \mathpzc{U}^{A}(j)$ corresponds to the inclusion of the $i = 0$ summand in (\ref{eqn:cellU}), and, similarly, for each $n \ge 0$, the map $\mathpzc{E}_{\text{st}}^\dagger(j) \to \mathpzc{U}^{A_n}(j)$ corresponds to the inclusion of the $i = 0$ summand in (\ref{eqn:cellskelU}). It follows that the map $\mathpzc{E}_{\text{st}}^\dagger(j) \to \mathpzc{U}^{A}(j)$ factors through $\mathpzc{U}^{A_n}(j)$ for each $n \ge 0$ and, moreover, the maps $\mathpzc{E}_{\text{st}}^\dagger(j) \to \mathpzc{U}^{A_n}(j)$ factor through $\text{F}_m\mathpzc{U}^{A_n}(j)$ for each $m \ge 0$. We shall now prove the desired result via an induction. Specifically, we shall show that, for each $m,n \ge 0$, the map \[ \mathpzc{E}_{\text{st}}^\dagger(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to \text{F}_m\mathpzc{U}^{A_n}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \] is a quasi-isomorphism for all $j \ge 1$ and any partition $j = j_1 + \cdots + j_k$. The desired result then follows by passage to colimits. We will prove this statement by an induction on $n$. 
In the case $n = 0$, we have that $\text{F}_m\mathpzc{U}^{A_0}(j) = \mathpzc{E}_{\text{st}}^\dagger(j)$ for each $m \ge 0$ and $j \ge 1$, so that the result is obvious. Next suppose that, for some $n \ge 1$, the property holds for $\text{F}_m\mathpzc{U}^{A_{n-1}}$ for all $m \ge 0$. We shall show that this same property holds for $\text{F}_m\mathpzc{U}^{A_{n}}$ for $m \ge 0$, by an induction over $m$. We have that, for each $j \ge 1$, $\text{F}_0\mathpzc{U}^{A_n}(j) = \mathpzc{U}^{A_{n-1}}(j) = \text{colim}_m\,\text{F}_m\mathpzc{U}^{A_{n-1}}(j)$ which, by invoking the inductive hypothesis for the induction over $n$ and passing to the colimit, we see satisfies the required property (recall that filtered colimits of complexes are exact). Next, suppose that the required property holds for $\text{F}_{m-1}\mathpzc{U}^{A_n}(j)$ for some $m \ge 1$. Fix some $j \ge 1$ and a partition $j = j_1 + \cdots + j_k$. We wish to show that the map $\mathpzc{E}_{\text{st}}^\dagger(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to \text{F}_m\mathpzc{U}^{A_n}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k}$ is a quasi-isomorphism. Since we can factor this map as \[ \mathpzc{E}_{\text{st}}^\dagger(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to \text{F}_{m-1}\mathpzc{U}^{A_n}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to \text{F}_m\mathpzc{U}^{A_n}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \] it suffices, due to the inductive hypothesis for the induction over $m$, to show that the second map in this composition is a quasi-isomorphism. As in the proof of Lemma~\ref{lem:Estenvopsflat}, we have an exact sequence \[ 0 \to \text{F}_{m-1}\mathpzc{U}^{A_n}(j) \to \text{F}_m\mathpzc{U}^{A_n}(j) \to \text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j) \to 0 \] which is split, at the lefthand end, at the level of graded modules.
As in the proof of Lemma~\ref{lem:almost_splitunstable}, we thus have an induced exact sequence as follows: \begin{multline}\label{eqn:SES} 0 \to \text{F}_{m-1}\mathpzc{U}^{A_n}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to \text{F}_m\mathpzc{U}^{A_n}(j)/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \\ \to (\text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j))/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k} \to 0 \end{multline} By the long exact sequence in homology, it suffices to show that the righthand term, that is, the term $(\text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j))/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k}$, has zero homology. From Section~\ref{sec:env_op}, we have that: \[ \text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j) \cong \mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m} M_n[1]^{\otimes m} \] It follows that $(\text{F}_m\mathpzc{U}^{A_n}(j)/\text{F}_{m-1}\mathpzc{U}^{A_n}(j))/\Sigma_{j_1} \times \cdots \times \Sigma_{j_k}$ is isomorphic to: \[ \mathpzc{U}^{A_{n-1}}(m+j) \otimes_{\Sigma_m \times \Sigma_{j_1} \times \cdots \times \Sigma_{j_k}} M_n[1]^{\otimes m} \otimes (\mathbb{S}^0)^{\otimes j_1} \otimes \cdots \otimes (\mathbb{S}^0)^{\otimes j_k} \] By the inductive hypothesis for the induction over $n$, and by writing $M_n[1]$ (which is a sum of spheres) as a filtered colimit of its finite subcomplexes if it isn't already finite, we have that this term is quasi-isomorphic to: \[ \mathpzc{E}_{\text{st}}^\dagger(m+j) \otimes_{\Sigma_m \times \Sigma_{j_1} \times \cdots \times \Sigma_{j_k}} M_n[1]^{\otimes m} \otimes (\mathbb{S}^0)^{\otimes j_1} \otimes \cdots \otimes (\mathbb{S}^0)^{\otimes j_k} \] Moreover, since $m+j \ge 2$, this has zero homology by Lemma~\ref{lem:stabilityigorsense}. This completes the induction over $m$ and then also the induction over $n$. \end{proof} \begin{Proposition}\label{prop:coproducts_of_algebras} Let $A$ and $B$ be cofibrant algebras over $\mathpzc{E}^\dagger_{\emph{st}}$.
Then we have a natural quasi-isomorphism: \[ A \amalg B \sim A \oplus B \] \end{Proposition} \begin{proof} Without loss of generality, we may take $A$ and $B$ to be cell $\mathpzc{E}_{\text{st}}^\dagger$-algebras. Following the notation of Section~\ref{sec:cell_alg}, let \[ A_0 \to A_1 \to A_2 \to \cdots \] be a cell filtration of $A$ and fix some choices $M_1^A, M_2^A, \dots$ for the cochain complexes which appear in the attachment squares. Let also \[ B_0 \to B_1 \to B_2 \to \cdots \] be a cell filtration of $B$ and fix some choices $M_1^B, M_2^B, \dots$ for the cochain complexes which appear in the attachment squares. From Section~\ref{sec:env_op}, recall the formulae for $\mathpzc{U}^{A_n}(j)$ and $\mathpzc{U}^{B_n}(j)$, and also their filtration pieces $\text{F}_m\mathpzc{U}^{A_n}(j)$ and $\text{F}_m\mathpzc{U}^{B_n}(j)$. It suffices to show that the canonical map \[ A_n \oplus B_n \to A_n \amalg B_n \] is a quasi-isomorphism for each $n \ge 0$. In the case $n=0$, we get a map $\mathpzc{E}_{\text{st}}^\dagger(0) \oplus \mathpzc{E}_{\text{st}}^\dagger(0) \to \mathpzc{E}_{\text{st}}^\dagger(0) \amalg \mathpzc{E}_{\text{st}}^\dagger(0) \cong \mathpzc{E}_{\text{st}}^\dagger(0)$ (this isomorphism holds since $\mathpzc{E}_{\text{st}}^\dagger(0)$ is initial) and this map is necessarily a quasi-isomorphism since $\mathpzc{E}_{\text{st}}^\dagger(0)$ has zero homology as per~Proposition~\ref{prop:stabhom}. Suppose then that, for some $n \ge 1$, the map $A_{n-1} \oplus B_{n-1} \to A_{n-1} \amalg B_{n-1}$ is a quasi-isomorphism. Note that $A_n$, $B_n$ and $A_n \amalg B_n$ may be identified, respectively, with $\mathpzc{U}^{A_n}(0)$, $\mathpzc{U}^{B_n}(0)$, $\mathpzc{U}^{A_n \amalg B_n}(0)$.
Moreover, we have filtration pieces $\text{F}_m\mathpzc{U}^{A_n}(0), \text{F}_m\mathpzc{U}^{B_n}(0)$ and $\text{F}_m\mathpzc{U}^{A_n \amalg B_n}(0)$, and the map $A_n \oplus B_n \to A_n \amalg B_n$ is a filtered map, in that we have induced maps as follows: \[ \text{F}_m\mathpzc{U}^{A_n}(0) \oplus \text{F}_m\mathpzc{U}^{B_n}(0) \to \text{F}_m\mathpzc{U}^{A_n \amalg B_n}(0) \] Thus, from the map $A_n \oplus B_n \to A_n \amalg B_n$, we get an induced map of the strongly convergent spectral sequences associated to the aforementioned filtrations. Recalling, from Section~\ref{sec:env_op}, the computations of the associated graded pieces corresponding to the filtrations on the $\mathpzc{U}^{A_n}(j)$, we have that the map on the $E^1$-terms consists of the maps: \begin{multline*} \Big(\mathpzc{U}^{A_{n-1}}(m) \otimes_{\Sigma_m} (M^A_n[1])^{\otimes m}\Big) \oplus \Big(\mathpzc{U}^{B_{n-1}}(m) \otimes_{\Sigma_m} (M^B_n[1])^{\otimes m}\Big) \\ \longrightarrow \mathpzc{U}^{A_{n-1} \amalg B_{n-1}}(m) \otimes_{\Sigma_m} (M^A_n[1] \oplus M^B_n[1])^{\otimes m} \end{multline*} If $m = 0$, this map reduces to $A_{n-1} \oplus B_{n-1} \to A_{n-1} \amalg B_{n-1}$ and so is a quasi-isomorphism by the inductive hypothesis. Suppose then that $m \ge 1$. Then, by Lemma~\ref{lem:UandVforcofibA}, it suffices to show that the map \[ \Big(\mathpzc{E}_{\text{st}}^\dagger(m) \otimes_{\Sigma_m} (M^A_n[1])^{\otimes m}\Big) \oplus \Big(\mathpzc{E}_{\text{st}}^\dagger(m) \otimes_{\Sigma_m} (M^B_n[1])^{\otimes m}\Big) \longrightarrow \mathpzc{E}_{\text{st}}^\dagger(m) \otimes_{\Sigma_m} (M^A_n[1] \oplus M^B_n[1])^{\otimes m} \] is a quasi-isomorphism. If $m = 1$, this is obvious, so suppose that $m \ge 2$. 
We have that \[ \mathpzc{E}^\dagger_{\text{st}}(m) \otimes_{\Sigma_m} (M^A_n[1] \oplus M^B_n[1])^{\otimes m} \cong \bigoplus_{l = 0}^m \mathpzc{E}^\dagger_{\text{st}}(m) \otimes_{\Sigma_{m-l} \times \Sigma_l} (M^A_n[1])^{\otimes(m-l)} \otimes (M^B_n[1])^{\otimes l} \] By Lemma~\ref{lem:stabilityigorsense}, the only two summands which have non-zero homology are those corresponding to $l = 0, m$, and so we once more have a quasi-isomorphism, as desired. It follows that the aforementioned map of spectral sequences is an isomorphism from $E^2$ onwards, and so we have that the map $A_n \oplus B_n \to A_n \amalg B_n$ is a quasi-isomorphism, completing the induction. \end{proof} \subsection{Derived Pushouts of Algebras over the Stable Operads} We shall now consider pushouts of algebras over the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$. Our claim is that, for algebras $A$, $B$ and $C$ over $\mathpzc{E}^\dagger_{\text{st}}$, under suitable circumstances, we have that $A \amalg_C B \sim A \oplus_C B$. We shall compute the pushout via a bar construction. Given a diagram $A \leftarrow C \rightarrow B$ of algebras over $\mathpzc{E}_{\text{st}}^\dagger$, the bar construction $\beta_\bullet(A,C,B)$ is the simplicial $\mathpzc{E}_{\text{st}}^\dagger$-algebra that is given in simplicial degree $n$ as follows: \[ \beta_n(A, C, B) := A \amalg \underbrace{C \amalg \cdots \amalg C}_{n \: \text{factors}} \amalg B \] Given this simplicial algebra, we can consider its normalization $\text{N}(\beta_\bullet(A,C,B))$, which is once again an algebra over $\mathpzc{E}_{\text{st}}^\dagger$ via the shuffle map (see~\cite{KrizMay} for the normalization of a simplicial algebra and its algebra structure). 
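For concreteness, the simplicial structure on $\beta_\bullet(A,C,B)$ is the usual one for a two-sided bar construction; we spell it out under the standard conventions. Write $f \colon C \to A$ and $g \colon C \to B$ for the maps of the given diagram. The face map $d_0$ applies $f$ to the first copy of $C$ and then folds into $A$ via the codiagonal $A \amalg A \to A$; for $0 < i < n$, the face map $d_i$ folds the $i^{\text{th}}$ and $(i+1)^{\text{st}}$ copies of $C$ together via the codiagonal $C \amalg C \to C$; and $d_n$ applies $g$ to the last copy of $C$ and then folds into $B$. The degeneracy $s_i$ inserts a new copy of $C$ via the unique map from the initial algebra $\mathpzc{E}_{\text{st}}^\dagger(0)$. Schematically: \[ d_i \colon A \amalg C^{\amalg n} \amalg B \to A \amalg C^{\amalg (n-1)} \amalg B, \qquad s_i \colon A \amalg C^{\amalg n} \amalg B \to A \amalg C^{\amalg (n+1)} \amalg B \]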
Now, regarding $A \amalg_C B$ as a constant simplicial algebra, the canonical maps $A \amalg B \to A \amalg_C B$ and $C \to A \amalg_C B$ induce a map of simplicial algebras $\beta_\bullet(A, C, B) \to A \amalg_C B$ and therefore a map of algebras on their normalizations, $\text{N}(\beta_\bullet(A,C,B)) \to A \amalg_C B$. \begin{Proposition}\label{prop:pushout_normalization} Given a diagram $A \leftarrow C \rightarrow B$ of algebras over $\mathpzc{E}_{\emph{st}}^\dagger$, if each of $A$, $B$ and $C$ is cofibrant and $C \to B$ is a cofibration, then the canonical map \[ \emph{N}(\beta_\bullet(A,C,B)) \to A \amalg_C B \] is a quasi-isomorphism. \end{Proposition} In order to prove this result, we first need two lemmas. \begin{Lemma}\label{lem:norm_fin_flat} Given a diagram $A \leftarrow C \rightarrow B$ of algebras over $\mathpzc{E}_{\emph{st}}^\dagger$, if each of $A$, $B$ and $C$ is cofibrant, then the normalization $\emph{N}(\mathpzc{U}^{\beta_\bullet(A,C,B)}(j))$ is semi-flat as a dg right $\mathbb{F}_p[\Sigma_j]$-module. \end{Lemma} \begin{proof} Given a dg left $\mathbb{F}_p[\Sigma_j]$-module $X$, we have a natural isomorphism: \[ \text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B)}(j)) \otimes_{\Sigma_j} X \cong \text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B)}(j) \otimes_{\Sigma_j} X) \] The required flatness now follows from Lemma~\ref{lem:Estenvopsflat}. \end{proof} \begin{Lemma}\label{lem:Estweksforpowers} Let $n \ge 0$, let $P$ and $Q$ be semi-flat dg right $\mathbb{F}_p[\Sigma_n]$-modules, and let $P \to Q$ be a quasi-isomorphism. Then, for all dg $\mathbb{F}_p$-modules $X$, the induced map \[ P \otimes_{\Sigma_n} X^{\otimes n} \to Q \otimes_{\Sigma_n} X^{\otimes n} \] is a quasi-isomorphism. \end{Lemma} \begin{proof} We shall demonstrate this for finite $X$ (where finiteness over $\mathbb{F}_p$ means bounded above and below and of finite dimension in each degree). 
The case of a general $X$ then follows since any $X$ is a filtered colimit of its finite subcomplexes and filtered colimits commute with finite tensor powers and tensor products. In fact, we will show that, for any finite dg right $\mathbb{F}_p[\Sigma_n]$-module $Z$, the natural map \[ P \otimes_{\Sigma_n} Z \to Q \otimes_{\Sigma_n} Z \] is a quasi-isomorphism. To see this, let $Z_{\text{cof}} \to Z$ be a cofibrant approximation of $Z$, in the projective Quillen model structure on dg right $\mathbb{F}_p[\Sigma_n]$-modules, and then consider the following commutative square: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$P \otimes_{\Sigma_n} Z$}; \node[below= of A](C){$P \otimes_{\Sigma_n} Z_{\text{cof}}$}; \node[right= of A,xshift=1cm](B){$Q \otimes_{\Sigma_n} Z$}; \node[below= of B,yshift=0mm](D){$Q \otimes_{\Sigma_n} Z_{\text{cof}}$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (C) -- (A) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (D) -- (B) node[midway,anchor=west]{}; \end{tikzpicture} \end{center} The bottom horizontal map is then a map \[ P \otimes_{\Sigma_n}^{\mathbb{L}} Z \to Q \otimes_{\Sigma_n}^{\mathbb{L}} Z \] between the derived tensor products. To be precise, the derived functors are those of the functors $P \otimes_{\Sigma_n} -$ and $Q \otimes_{\Sigma_n} -$. Since, however, the two possible derived tensor products, achieved upon fixing one or the other variable, are naturally isomorphic, the above map can be identified with the image of $P \to Q$ under the derived functor of $- \otimes_{\Sigma_n} Z$. As such, as $P \to Q$ is a quasi-isomorphism, the above map is also a quasi-isomorphism. Now, if we can show that the vertical maps are also quasi-isomorphisms, we will have the desired result. The proofs for the two are identical, so we will describe just the one for the left-hand vertical map. 
Since $Z$ is bounded below, we can take $Z_{\text{cof}}$ to also be bounded below. Suppose that $Z$ is bounded above at degree $d$, in that $Z_{d'}$ is zero for $d' > d$. Consider the truncation $\tau_{\le d+1}Z_{\text{cof}}$, which is to say the complex which coincides with $Z_{\text{cof}}$ up to, and including, degree $d+1$, but is zero thereafter. More generally, we consider the truncations $\tau_{\le d+i}Z_{\text{cof}}$ for $i \ge 1$. Then we can write $Z_{\text{cof}}$ as the colimit of: \[ \tau_{\le d+1}Z_{\text{cof}} \to \tau_{\le d+2}Z_{\text{cof}} \to \cdots \] Moreover, since $Z$ is zero above degree $d$, we have maps $\tau_{\le d+i}Z_{\text{cof}} \to Z$, each of which is a quasi-isomorphism. Thus we get the following diagram: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\tau_{\le d+1}Z_{\text{cof}}$}; \node [right of = A,xshift=1.5cm] (B) {$\tau_{\le d+2}Z_{\text{cof}}$}; \node [right of = B,xshift=1.5cm] (E) {$\tau_{\le d+3}Z_{\text{cof}}$}; \node [right of = E,xshift=1.5cm] (F) {$\cdots$}; \node [below of = A] (C) {$Z$}; \node [below of = B] (D) {$Z$}; \node [right of = D,xshift=1.5cm] (G) {$Z$}; \node [right of = G,xshift=1.5cm] (H) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (B) -- (E) node[midway,anchor=south]{}; \draw [->] (E) -- (F) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\sim$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\sim$}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\sim$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (D) -- (G) node[midway,anchor=south]{}; \draw [->] (G) -- (H) node[midway,anchor=south]{}; \end{tikzpicture} \end{center} If we tensor this diagram with $P$, since $P$ is semi-flat, the vertical arrows remain quasi-isomorphisms. 
Moreover, the map induced on the colimits of the two cotowers in the resulting diagram is exactly the map $P \otimes_{\Sigma_n} Z_{\text{cof}} \to P \otimes_{\Sigma_n} Z$, so that we have our desired result, as sequential colimits are exact. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:pushout_normalization}] We may assume, without loss of generality, that $A$ is a cell algebra and that $C \to B$ is a relative cell map. We shall in fact prove the more general fact that the map \[ \text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B)}(j)) \to \mathpzc{U}^{A \amalg_C B}(j) \] is a quasi-isomorphism for each $j \ge 0$. The desired result is the case $j=0$. Let \[ B_0 \to B_1 \to B_2 \to \cdots \] be a factorization of $C \to B$ as a relative cell map, so that $B_0 = C$, and fix some choices $M_1, M_2, \dots$ for the chain complexes which appear in the attachment squares. By passage to colimits, it suffices to show that, for all $n \ge 0$, the map \[ \text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B_n)}(j)) \to \mathpzc{U}^{A \amalg_C B_n}(j) \] is a quasi-isomorphism for each $j \ge 0$. Now, in the case $n=0$, we get the map $\text{N}(\mathpzc{U}^{\beta_\bullet(A,C,C)}(j)) \to \mathpzc{U}^{A}(j)$. By a standard argument, the map of simplicial algebras $\beta_\bullet(A,C,C) \to A$ is a homotopy equivalence, and so, upon forming the arity $j$ parts of the enveloping operads, we have that the map $\mathpzc{U}^{\beta_\bullet(A,C,C)}(j) \to \mathpzc{U}^A(j)$ is a homotopy equivalence of simplicial dg right $\mathbb{F}_p[\Sigma_j]$-modules and, as simplicial homotopies induce chain homotopies on normalizations, upon taking normalizations, we get a chain homotopy equivalence of dg right $\mathbb{F}_p[\Sigma_j]$-modules. In particular, the map is a quasi-isomorphism, as desired. Now suppose that, for some $n \ge 1$, the map $\text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B_{n-1})}(j)) \to \mathpzc{U}^{A \amalg_C B_{n-1}}(j)$ is a quasi-isomorphism for each $j \ge 0$. 
Recall the filtrations on the enveloping operads of cell algebras, as in Section~\ref{sec:env_op}. The simplicial map $\mathpzc{U}^{\beta_\bullet(A,C,B_n)}(j) \to \mathpzc{U}^{A \amalg_C B_n}(j)$ is in fact a filtered map, in that, for each $m \ge 0$, we have an induced map $\text{F}_m\mathpzc{U}^{\beta_\bullet(A,C,B_n)}(j) \to \text{F}_m\mathpzc{U}^{A \amalg_C B_n}(j)$. We now take the normalization and consider the induced map on the strongly convergent spectral sequences associated to these filtrations. Recalling, from Section~\ref{sec:env_op}, the computations of the associated graded pieces corresponding to the filtrations on the enveloping operads, we have that the map on the $E^1$-terms consists of the following maps: \[ \text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B_{n-1})}(m+j)) \otimes_{\Sigma_m} (M_n[1])^{\otimes m} \to \mathpzc{U}^{A \amalg_C B_{n-1}}(m+j) \otimes_{\Sigma_m} (M_n[1])^{\otimes m} \] By Lemma~\ref{lem:norm_fin_flat}, Lemma~\ref{lem:Estenvopsflat}, the inductive hypothesis and Lemma~\ref{lem:Estweksforpowers}, this map is a quasi-isomorphism for all $m \ge 0$ (to see this, first write $M_n[1]$, which is a sum of sphere complexes, as a filtered colimit of its finite subcomplexes). It follows that the map of spectral sequences is an isomorphism from $E^2$ onwards, and so the map $\text{N}(\mathpzc{U}^{\beta_\bullet(A,C,B_{n})}(j)) \to \mathpzc{U}^{A \amalg_C B_{n}}(j)$ is a quasi-isomorphism. This completes the induction. \end{proof} Now, with the help of the computation of the pushout in Proposition~\ref{prop:pushout_normalization}, we can prove the desired result. 
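Before giving the statement, let us make the right-hand side explicit; this is just the standard computation of pushouts in dg modules. Here and below, $A \oplus_C B$ denotes the pushout in the underlying category of dg $\mathbb{F}_p$-modules which, writing $f \colon C \to A$ and $g \colon C \to B$ for the given maps, admits the familiar description as a cokernel: \[ A \oplus_C B \cong \operatorname{coker}\big(C \xrightarrow{(f,\,-g)} A \oplus B\big) \] In particular, when $g$ is injective, as is the case for a cofibration of complexes, the sequence $0 \to C \to A \oplus B \to A \oplus_C B \to 0$ is short exact.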
\begin{Proposition}\label{prop:pushouts_alg} Given a diagram $A \leftarrow C \rightarrow B$ of algebras over $\mathpzc{E}_{\emph{st}}^\dagger$, if each of $A$, $B$ and $C$ is cofibrant, and $C \to B$ is a cofibration, then we have that: \[ A \amalg_C B \sim A \oplus_C B \] \end{Proposition} \begin{proof} Let $\beta^{\text{dg}}_\bullet(A,C,B)$ denote the bar construction in dg modules, so that, in simplicial degree $n$, we have: \[ \beta_n^{\text{dg}}(A, C, B) := A \oplus \underbrace{C \oplus \cdots \oplus C}_{n \: \text{factors}} \oplus B \] Then we have a composite quasi-isomorphism: \[ \text{N}(\beta^{\text{dg}}_\bullet(A,C,B)) \overset{\sim}\to \text{N}(\beta_\bullet(A,C,B)) \overset{\sim}\to A \amalg_C B \] Here the first map is a quasi-isomorphism by Proposition~\ref{prop:coproducts_of_algebras}, and the second is a quasi-isomorphism by Proposition~\ref{prop:pushout_normalization}. Moreover, since cofibrations of algebras, being retracts of relative cell maps, are necessarily cofibrations of complexes (in the standard projective Quillen model structure on complexes), we have a natural quasi-isomorphism $\text{N}(\beta^{\text{dg}}_\bullet(A,C,B)) \sim A \oplus_C B$, and this gives us the desired result. \end{proof} \section{A Spectral Cochains Adjunction} Hitherto, we have dealt with algebras over the stable Barratt-Eccles operad $\mathpzc{E}_{\text{st}}^\dagger$ at a general level. In this section, we shall consider an application of the stable Barratt-Eccles operad, and show that cochains on spectra yield examples of such algebras. \subsection{Spectra and Their Model Structure} We begin by fixing what we mean by a spectrum, which is to say, a model for spectra. We adopt the following model. 
A spectrum $E$ is a sequence of based simplicial sets $E_0, E_1, E_2, \dots$ together with a collection of maps $\rho_n \colon \Sigma E_n \to E_{n+1}$, where the suspension is the Kan suspension, or equivalently, maps $\sigma_n \colon E_n \to \Omega E_{n+1}$, where the loop space is the Moore loop space. A map of spectra $f \colon E \to F$ is given by a collection of maps $f_n \colon E_n \to F_n$ which are compatible with the structure maps. We thus have a category of spectra, and we denote this category by $\mathsf{Sp}$. \\ We now also make some standard definitions regarding stable homotopy groups. Given a spectrum $E$, for $i \in \mathbb{Z}$, the $i^{\text{th}}$ stable homotopy group of $E$ is the colimit $\pi_i^{\text{st}}(E) := \text{colim}_{k \ge 0} \pi_{i+k}(|E_k|)$; here, given $k \ge 0$, the map $\pi_{i+k}(|E_k|) \to \pi_{i+k+1}(|E_{k+1}|)$ is that which sends the class of a based map $\mathbb{S}^{i+k}_{\text{top}} \to |E_k|$ to the class of the composite $\mathbb{S}^{i+k+1}_{\text{top}} \cong \Sigma \mathbb{S}^{i+k}_{\text{top}} \to |\Sigma E_k| \to |E_{k+1}|$ where we use the fact that, upon geometric realization, the Kan suspension can be identified, up to natural isomorphism, with the usual topological suspension, as shown for example in Proposition 2.16 in~\cite{MarcStephan}. Moreover, given a map $f \colon E \to F$ of spectra, we call it a stable weak homotopy equivalence if the induced maps $\pi_i^{\text{st}}(E) \to \pi_i^{\text{st}}(F)$ are isomorphisms for each $i \in \mathbb{Z}$. \begin{Proposition}\label{prop:spec_mod_str} On $\mathsf{Sp}$, there is a model structure such that: \begin{itemize} \item[(i)] A map $f \colon E \to F$ is a weak equivalence if and only if it is a stable weak homotopy equivalence. 
\item[(ii)] A map $f \colon E \to F$ is a cofibration if and only if the map $f_0 \colon E_0 \to F_0$ and the maps $E_{n+1} \, \amalg_{\Sigma E_n} \, \Sigma F_n \to F_{n+1}$, for $n \ge 0$, are cofibrations of based simplicial sets; they are, in particular, levelwise cofibrations. \item[(iii)] A map $f \colon E \to F$ is a fibration if and only if it has the right lifting property with respect to maps which are both cofibrations and weak equivalences; they are, in particular, levelwise fibrations. \end{itemize} \end{Proposition} \begin{proof} Everything except the fact that cofibrations are levelwise monomorphisms is immediate from Theorem 2.29 in~\cite{MarcStephan}. To see that cofibrations are monomorphisms, let $f \colon E \to F$ be a cofibration of spectra $E$ and $F$. By definition, $f_0$ is a monomorphism, and moreover, so is $E_{1} \, \scalebox{1.25}{$\amalg$}_{\Sigma E_0} \, \Sigma F_0 \to F_{1}$. Consider the following pushout square: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$\Sigma E_0$}; \node[below= of A](C){$E_{1}$}; \node[right= of A,xshift=1cm](B){$\Sigma F_0$}; \node[below= of B,yshift=0.5mm](D){$E_{1} \, \scalebox{1.25}{$\amalg$}_{\Sigma E_0} \, \Sigma F_0$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$\Sigma f_0$}; \draw[->] (A) -- (C) node[midway,anchor=east]{$\rho_0$}; \draw[->] (C) -- (D) node[midway,anchor=north]{$i$}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(A)!.2!(D)$)] \draw +(0,-0.25) -- +(0,0) -- +(0.25,0); \end{scope} \end{tikzpicture} \end{center} By Proposition~\ref{prop:susp_monos}, $\Sigma f_0$ is a monomorphism. Since in simplicial sets, pushouts of monomorphisms are once again monomorphisms, we see that the map $i$ is a monomorphism, and thus so is the composite $E_1 \to E_{1} \, \scalebox{1.25}{$\amalg$}_{\Sigma E_0} \, \Sigma F_0 \to F_1$, which is exactly $f_1$. Repeating this argument, we see by induction that each $f_n \colon E_n \to F_n$ is a monomorphism. 
\end{proof} \begin{Remark}\label{rmk:comparison_comb_sp} As per Bousfield and Friedlander in~\cite[\S 2.5]{BousfieldFriedlander} and Marc Stephan in~\cite{MarcStephan}, there is a Quillen equivalence \[ \text{Sp} \colon \mathsf{Sp} \rightleftarrows \mathsf{CSp} \colon \text{Ps} \] between our category of spectra and $\mathsf{CSp}$, the category of Kan's combinatorial spectra, equipped with the model structure of Brown in~\cite{Brown}. \hfill $\vert\vert$ \end{Remark} We can single out some special spectra with some standard definitions, as follows. Given a spectrum $E$, say that it is an $\Omega$-spectrum if each $E_n$ is a Kan complex and the maps $\sigma_n$ are weak homotopy equivalences of based simplicial sets. Given a spectrum $E$, say that it is a strict $\Omega$-spectrum if each $E_n$ is a Kan complex and the maps $\sigma_n$ are isomorphisms of based simplicial sets. Given a spectrum $E$, say that it is $\Sigma$-cofibrant if the maps $\rho_n$ are cofibrations of based simplicial sets. \begin{Remark}\label{rmk:htpy_grps_fib_sp} Given an $\Omega$-spectrum $E$, we clearly have that $\pi_i^{\text{st}}(E) \cong \pi_i(E_0)$ for $i \ge 0$, and $\pi_i^{\text{st}}(E) \cong \pi_0(E_{|i|})$ for $i < 0$. \hfill $\vert\vert$ \end{Remark} \begin{Proposition}\label{prop:fibcofibsp} We have the following: \begin{itemize} \item[(i)] The fibrant spectra are exactly the $\Omega$-spectra. The cofibrant spectra are exactly the $\Sigma$-cofibrant spectra. The strict $\Omega$-spectra are bifibrant. \item[(ii)] A map between $\Omega$-spectra is a weak equivalence if and only if it is a levelwise weak homotopy equivalence of based simplicial sets. \item[(iii)] A map between strict $\Omega$-spectra is a fibration if and only if it is a levelwise fibration of based simplicial sets. \end{itemize} \end{Proposition} \begin{proof} (i): The case of fibrant objects follows from Theorem 2.29 in~\cite{MarcStephan}. 
For the cofibrant objects, note that a map $f \colon E' \to E$ is a cofibration if and only if $E'_0 \to E_0$ and $E'_{n+1} \, \scalebox{1.25}{$\amalg$}_{\Sigma E'_n} \, \Sigma E_n \to E_{n+1}$ are cofibrations. Taking the $E'_n$ to be $*$, we are left with the map $* \to E_0$ and the maps $\Sigma E_n \to E_{n+1}$. The former is of course always a cofibration of based simplicial sets. Now consider the case of strict $\Omega$-spectra. If each map $E_n \to \Omega E_{n+1}$ is an isomorphism, the maps $\Sigma E_n \to E_{n+1}$ are monomorphisms as they may be written as $\Sigma E_n \to \Sigma \Omega E_{n+1} \to E_{n+1}$ where the first map is an isomorphism and the second is a monomorphism by part (iii) of Proposition~\ref{prop:SigmaOmegadj}. \\ (ii): See Lemma 2.28 in~\cite{MarcStephan}. \\ (iii): Let $E \to F$ be a levelwise fibration between strict $\Omega$-spectra $E$ and $F$. Consider the adjunction between spectra and Kan's combinatorial spectra in Remark~\ref{rmk:comparison_comb_sp} above. It is immediate from the definitions (in, e.g., \cite{MarcStephan}) that if $E$ is a strict $\Omega$-spectrum, then the unit of adjunction $E \to \text{Ps}\,\text{Sp}E$ is an isomorphism, and similarly for $F$. Thus the induced map $\text{Ps}\,\text{Sp}E \to \text{Ps}\,\text{Sp}F$ is a levelwise fibration. By Proposition 3.18 in~\cite{MarcStephan}, given a map $f$ between combinatorial spectra, $\text{Ps}(f)$ is a levelwise fibration if and only if it is a fibration. Thus $\text{Ps}\,\text{Sp}E \to \text{Ps}\,\text{Sp}F$ is a fibration and, as the units are isomorphisms, it follows that the map $E \to F$ is itself a fibration, as desired. \end{proof} Next, let us consider some examples of spectra, under our model for spectra. The most obvious is that of suspension spectra, which are spectra freely generated on a space. Given a based simplicial set $S$, the suspension spectrum $\Sigma^\infty S$ is defined by setting $(\Sigma^\infty S)_n = \Sigma^nS$, and the structure maps, in their suspension form, are identities. 
More generally, for each $n \ge 0$, we define $\Sigma^{\infty - n}S$ by setting $(\Sigma^{\infty - n}S)_m = \Sigma^{m-n}S$ for $m \ge n$, and $*$ for $m < n$, and the structure maps to be the obvious ones in suspension form. If $n = 0$, we recover $\Sigma^\infty S$. Note that the structure maps $\rho_n \colon \Sigma (\Sigma^\infty S)_n \to (\Sigma^\infty S)_{n+1}$ for the suspension spectrum $\Sigma^\infty S$ are identities. It follows that the structure maps $\sigma_n \colon (\Sigma^\infty S)_n \to \Omega (\Sigma^\infty S)_{n+1}$ are components of the unit of the $(\Sigma, \Omega)$-adjunction and so are, by Proposition~\ref{prop:SigmaOmegadj}, isomorphisms. The component simplicial sets $\Sigma^n S$, however, are not necessarily Kan complexes, even if $S$ is (e.g., consider the case of the $0$-sphere), and so the suspension spectrum is not necessarily a strict $\Omega$-spectrum, or an $\Omega$-spectrum at all for that matter. \\ Now let us consider Eilenberg-MacLane spectra in our model. Let $A$ be any abelian group. Then a standard model for the Eilenberg-MacLane space $\text{K}(A,n)$, for each $n \ge 0$, as a simplicial set, is the simplicial set whose $d$-simplices are given by the cocycles $\text{Z}^n(\Delta_d; A)$. Consider the simplicial set $\text{K}(A,n)$ as a based simplicial set, with zero as the basepoint. The based simplicial sets $\text{K}(A,0), \text{K}(A,1), \text{K}(A,2), \dots$ then assemble into a spectrum, and this is the Eilenberg-MacLane spectrum $\text{H}A$. To see this, we need structure maps: \[ \text{K}(A,n) \to \Omega \text{K}(A,n+1) \] These are as follows. Let $\alpha$ be an $n$-cocycle on $\Delta_d$. 
Then, we may act on the chains on $\Delta_{d+1}$ by stipulating that, given a simplex $[n+1] \to [d+1]$ in $\Delta_{d+1}$, if no entry maps to zero, we send it to zero, or if exactly one entry maps to zero, we drop $0$ from both the source and target and then reindex to get a map $[n] \to [d]$, a simplex in $\Delta_d$, and then act by $\alpha$. (We needn't concern ourselves with the case of maps $[n+1] \to [d+1]$ which send more than one entry to zero as those yield degenerate simplices of $\Delta_{d+1}$.) An easy check shows that this action on chains defines an $(n+1)$-cocycle $\beta$ on $\Delta_{d+1}$, that this cocycle lies in $\Omega \text{K}(A,n+1)$ and moreover that the assignment $\alpha \mapsto \beta$ yields a map of based simplicial sets $\text{K}(A,n) \to \Omega \text{K}(A,n+1)$, as desired. \\ We can also consider shifts of Eilenberg-MacLane spectra under our model. Let $A$ be an abelian group as above. Given any $k \in \mathbb{Z}$, the shifted Eilenberg-MacLane spectrum $\Sigma^k\text{H}A$ is the spectrum where $(\Sigma^k\text{H}A)_n = \text{K}(A,k+n)$. Here, for $n < 0$, we interpret $\text{K}(A,n)$ to be $*$, by which we mean the based $\Delta_0$. In general, we have that $(\Sigma^k\text{H}A)_{n,d} = \text{Z}^{k+n}(\Delta_d; A)$. \begin{Proposition}\label{prop:EM_spec_fib} Given any abelian group $A$, the Eilenberg-MacLane spectrum $\emph{H}A$, and more generally the shifted Eilenberg-MacLane spectrum $\Sigma^k\emph{H}A$ for any $k \in \mathbb{Z}$, is a strict $\Omega$-spectrum, and so is bifibrant. \end{Proposition} \begin{proof} The case for the shifted Eilenberg-MacLane spectra follows immediately from that of the unshifted $\text{H}A$. Moreover, we only need to verify that $\text{H}A$ is a strict $\Omega$-spectrum as the bifibrancy then follows by Proposition~\ref{prop:fibcofibsp}. That each $(\text{H}A)_n$ is a Kan complex follows by the fact that it is the underlying simplicial set of a simplicial group. 
It then remains to show that the structure maps $\text{K}(A,n) \to \Omega \text{K}(A,n+1)$ above are isomorphisms. First, note that the simplicial sets in question are degreewise abelian groups and that the structure maps are clearly maps of abelian groups in each degree. Thus, for injectivity, we can simply check that only zero maps to zero. Consider some $n$-cocycle on $\Delta_p$ which maps to the zero $(n+1)$-cocycle on $\Delta_{p+1}$. Given $[n] \to [p]$, abut $0 \mapsto 0$. On the resulting map $[n+1] \to [p+1]$, the image cocycle acts by the original one. Thus the original one must be zero. This demonstrates injectivity. Now we show surjectivity. Consider some $(n+1)$-cocycle on $\Delta_{p+1}$. We define an $n$-cocycle on $\Delta_p$ as follows: given any $[n] \to [p]$, abut $0 \mapsto 0$ and then act by the given cocycle. This is indeed a cocycle: given $[n+1] \to [p]$, if we take faces and then abut $0 \mapsto 0$, we get the same as first abutting to $[n+2] \to [p+1]$ and then taking the faces $d_i$ for $1 \le i \le n+2$, so that we must be getting the same final result as if we acted upon $d_0$ of the abutment to $[n+2] \to [p+1]$ by the original cocycle, but then this will map nothing to $0$ (since in the abutment only $0$ mapped to $0$), and thus the final result will be zero, as desired. Now we note that our given $(n+1)$-cocycle on $\Delta_{p+1}$ is exactly the image of this newly constructed $n$-cocycle on $\Delta_p$: it certainly is if exactly one entry maps to $0$; moreover, if no entry maps to $0$, it must be mapped to $0$ by our cocycle due to the $d_0 = *$ condition, and we can ignore the cases where more than one entry maps to zero since we are taking normalized cochains. 
\end{proof} \begin{Remark}\label{rmk:EM_htpy_grps} As a result of Proposition~\ref{prop:EM_spec_fib} above and Remark~\ref{rmk:htpy_grps_fib_sp}, we have that $\pi_0^{\text{st}}(\text{H}A) \cong \pi_0(\text{K}(A,0)) \cong A$, whereas $\pi_i^{\text{st}}(\text{H}A) \cong \pi_i(\text{K}(A,0)) \cong 0$ for $i > 0$ and $\pi_i^{\text{st}}(\text{H}A) \cong \pi_0(\text{K}(A,|i|)) \cong 0$ for $i < 0$. More generally, similar case-by-case considerations show that $\pi_k^{\text{st}}(\Sigma^k\text{H}A) \cong \pi_k(\text{K}(A,k)) \cong A$ whereas $\pi_i^{\text{st}}(\Sigma^k\text{H}A) \cong 0$ for $i \neq k$. \hfill $\vert\vert$ \end{Remark} \subsection{Spectral Cochains as Algebras over the Stable Operads}\label{subsec:spec_cochains} Our goal in this section is to construct explicit models for the mod $p$ spectral (co)chains for our model of spectra and show that these (co)chains possess an algebraic structure given by a (co)action of the stable Barratt-Eccles operad. Let $E$ be a spectrum, with structure maps $\rho_n \colon \Sigma E_n \to E_{n+1}$. If we apply the mod $p$ chains functor, we get maps $\text{C}_\bullet(E_n;\mathbb{F}_p)[1] \cong \text{C}_\bullet(\Sigma E_n;\mathbb{F}_p) \to \text{C}_\bullet(E_{n+1};\mathbb{F}_p)$ where the first map is the one from Proposition~\ref{prop:n_r_chains_susp_X}. Equivalently, we have a map: \[ \text{C}_\bullet(E_n;\mathbb{F}_p) \to \text{C}_\bullet(E_{n+1};\mathbb{F}_p)[-1] \] Moreover, upon applying the dualization operator $(-)^\vee$ from Section~\ref{sec:nots_convs}, we get maps $\text{C}_\bullet(E_{n+1};\mathbb{F}_p)^\vee[1] = \text{C}_\bullet(E_{n+1};\mathbb{F}_p)[-1]^\vee \to \text{C}_\bullet(E_n;\mathbb{F}_p)^\vee$, and so, upon applying the reindexing operator $(-)^{\dagger}$, and moving the shift from the source to the target, we get maps: \[ \text{C}^\bullet(E_{n+1};\mathbb{F}_p)[-1] \to \text{C}^\bullet(E_n;\mathbb{F}_p) \] The model for the mod $p$ spectral (co)chains is then as follows. Let $E$ be a spectrum. 
The chains on $E$ with coefficients in $\mathbb{F}_p$, denoted $\text{C}_\bullet(E;\mathbb{F}_p)$, are as follows: \[ \text{C}_\bullet(E;\mathbb{F}_p) := \text{colim}(\text{C}_\bullet(E_0;\mathbb{F}_p) \to \text{C}_\bullet(E_{1};\mathbb{F}_p)[-1] \to \text{C}_\bullet(E_{2};\mathbb{F}_p)[-2] \to \cdots) \] The cochains on $E$ with coefficients in $\mathbb{F}_p$, denoted $\text{C}^\bullet(E;\mathbb{F}_p)$, are as follows: \[ \text{C}^\bullet(E;\mathbb{F}_p) := \text{lim}(\cdots \to \text{C}^\bullet(E_2;\mathbb{F}_p)[-2] \to \text{C}^\bullet(E_1;\mathbb{F}_p)[-1] \to \text{C}^\bullet(E_0;\mathbb{F}_p)) \] \begin{Remark}\label{rmk:spec_chain_notation} We can relate the (co)chains above to those on combinatorial spectra. We have that the (co)chains on $E$ as described above are exactly the (co)chains, in the usual sense, on the associated combinatorial spectrum $\text{Ps}(E)$, where $\text{Ps}$ is as in Remark~\ref{rmk:comparison_comb_sp}. Moreover, we can describe the spectral (co)chains above more explicitly. Let $E$ be a spectrum, with structure maps $\rho_n \colon \Sigma E_n \to E_{n+1}$. Given a $d$-simplex $x$ in $E_n$, we have a corresponding $(d+1)$-simplex $\Sigma x$ in $\Sigma E_n$ (the notation $\Sigma x$ here is as in Definition~\ref{def:susp_simps}), and thus, upon applying $\rho$, a $(d+1)$-simplex $\rho(\Sigma x) \in E_{n+1}$. 
Pictorially: \[ x \in (E_n)_d \leadsto \Sigma x \in (\Sigma E_n)_{d+1} \leadsto \rho(\Sigma x) \in (E_{n+1})_{d+1} \] We then clearly have that: \[ \text{C}_\bullet(E;\mathbb{F}_p) = \bigoplus_{n \ge 0} \text{C}_\bullet(E_n;\mathbb{F}_p)[-n] \Big/ (x - \rho(\Sigma x)) \] To be even more explicit, an easy check, using Proposition~\ref{prop:faces_of_susp_simps}, shows that $\text{C}_d(E;\mathbb{F}_p)$ is the free $\mathbb{F}_p$-module on $\amalg_{e - n = d} (E_n)_e$, modulo the submodule generated by the basepoint, the degenerate simplices and the terms $x - \rho(\Sigma x)$ where $x \in (E_n)_e$ with $e - n = d$, so that $\rho(\Sigma x) \in (E_{n+1})_{e+1}$ and $(e+1)-(n+1) = d$ as well. To keep the bookkeeping precise, in the future, given an element $x \in \amalg_{e - n = d} (E_n)_e$, we shall let $[n,e,x]$ denote the corresponding element of $\text{C}_d(E;\mathbb{F}_p)$. Also, on the cochains: an easy degreewise check shows that the internal hom complex functor $\text{F}(-,\mathbb{F}_p[0])$ converts the colimit appearing in the definition of the chains into a limit, and it then follows that the cochains are exactly the cochain complex formed by application of $(-)^\dagger \circ (-)^\vee$ to the chains. \hfill $\vert\vert$ \end{Remark} Next, we wish to show that the spectral cochains yield algebras over the stable Barratt-Eccles cochain operad $\mathpzc{E}_{\text{st}}^\dagger$. We begin by showing that chains on spectra naturally form coalgebras over the stable Eilenberg-Zilber chain operad $\mathpzc{Z}_{\text{st}}$. \begin{Proposition}\label{prop:spec_chains_coalg} Let $E$ be a spectrum. The chains $\emph{C}_\bullet(E;\mathbb{F}_p)$ naturally form a coalgebra over the stable Eilenberg-Zilber chain operad $\mathpzc{Z}_{\emph{st}}$. \end{Proposition} \begin{proof} We wish to produce a coaction of the stable Eilenberg-Zilber operad on the chains, which, for brevity, we shall denote by $\text{C}_\bullet(E)$. Fix $k \ge 0$. 
We then want a map: \[ \mu \colon \mathpzc{Z}_{\text{st}}(k) \otimes \text{C}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k} \] To do so, we will construct a bilinear map $\bar\mu \colon \mathpzc{Z}_{\text{st}}(k) \times \text{C}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$ which preserves the degrees and then check that the resulting map on the tensor product commutes with the differentials. We first produce a map $\bar{\bar\mu} \colon \mathpzc{Z}_{\text{st}}(k) \times \text{S}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$ where $\text{S}_\bullet(E) = \mathbb{F}_p[\amalg_{n,d} (E_n)_d]$ (with degrees so that $\text{C}_\bullet(E)$ is a quotient of $\text{S}_\bullet(E)$, as in Remark~\ref{rmk:spec_chain_notation}). Let $\alpha = (\alpha_0,\alpha_1,\dots) \in \mathpzc{Z}_{\text{st}}(k)$ be of degree $d$ and let $x \in (E_n)_e$, which is of degree $e-n$ in $\text{S}_\bullet(E)$. We set $\bar{\bar\mu}(\alpha,x) := \alpha_n(x)$; or more precisely, we set $\bar{\bar\mu}(\alpha,x)$ to be the image of $x$ under the composite: \[ \text{S}_\bullet(E_n) \longrightarrow \text{C}_\bullet(E_n) \overset{\alpha_n}\longrightarrow \text{C}_\bullet(E_n)^{\otimes k} \longrightarrow \text{C}_\bullet(E)^{\otimes k} \] (Here $\text{S}_\bullet(E_n) = \mathbb{F}_p[\amalg_e(E_n)_e]$.) Next we extend this definition to all of $\mathpzc{Z}_{\text{st}}(k) \times \text{S}_\bullet(E)$ by linearity in the second variable, recalling that $\text{S}_\bullet(E)$ is the free $\mathbb{F}_p$-module on $\scalebox{1.25}{$\amalg$}_{n,e} (E_n)_e$. The map is then clearly bilinear. We need to check that $\bar{\bar\mu}(\alpha,x)$ is of degree $d+e-n$. Recall that $\alpha_n$ is of degree $d+n(k-1)$. As an element of $\text{S}_\bullet(E_n)$, $x$ has degree $e$, and then, after application of the first map in the composite above, degree $e$ once more. 
Upon application of $\alpha_n$, we get an element of degree $e+d+n(k-1) = d+e-n+nk$, and then, since the final map reduces degree by $n$ in each tensor factor, we get an element of degree $d+e-n+nk-nk = d+e-n$ as desired. (More precisely: prior to application of the final map, we have a sum of terms of the form $t_1 \otimes \cdots \otimes t_k \in (E_n)_{d_1} \otimes \cdots \otimes (E_n)_{d_k}$ where $d_1 + \cdots + d_k = d+e-n+nk$, and then after application of the final map, such terms have degree $(d_1-n) + \cdots + (d_k - n) = (d_1 + \cdots + d_k) - nk = d+e-n+nk - nk = d+e-n$.) Thus $\bar{\bar\mu}(\alpha,x)$ lies in $(\text{C}_\bullet(E)^{\otimes k})_{d+e-n}$, as desired. \\ Now we show that our map $\mathpzc{Z}_{\text{st}}(k) \times \text{S}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$ descends to a map $\mathpzc{Z}_{\text{st}}(k) \times \text{C}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$. Suppose first that $x \in (E_n)_e$ is degenerate, the basepoint or a degeneracy of a basepoint. Then, it will be killed by the first of the three maps in the composite above, and so will be killed by $\bar{\bar\mu}$. Next, we need $[n,e,x]$ and $[n+1,e+1,\rho_n(\Sigma x)]$ (the notations here are as in Remark~\ref{rmk:spec_chain_notation}) to be identified. That is, we need $\alpha_{n+1}(\rho_n(\Sigma x))$ to be equal to $\alpha_n(x)$. 
Or, more precisely, we need the image of $x$ under \begin{equation}\label{eq:comp1} \text{S}_\bullet(E_n) \longrightarrow \text{C}_\bullet(E_n) \overset{\alpha_n}\longrightarrow \text{C}_\bullet(E_n)^{\otimes k} \longrightarrow \text{C}_\bullet(E)^{\otimes k} \end{equation} to coincide with the image of $\rho_n(\Sigma x)$ under: \begin{equation}\label{eq:comp2} \text{S}_\bullet(E_{n+1}) \longrightarrow \text{C}_\bullet(E_{n+1}) \overset{\alpha_{n+1}}\longrightarrow \text{C}_\bullet(E_{n+1})^{\otimes k} \longrightarrow \text{C}_\bullet(E)^{\otimes k} \end{equation} Consider the following diagram, in which the square commutes by naturality of $\alpha_{n+1}$: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$\text{C}_\bullet(\Sigma E_n)$}; \node[below= of A](C){$\text{C}_\bullet(E_{n+1})$}; \node[right= of A,xshift=1cm](B){$\text{C}_\bullet(\Sigma E_n)^{\otimes k}$}; \node[below= of B,yshift=0mm](D){$\text{C}_\bullet(E_{n+1})^{\otimes k}$}; \node[right of = D,xshift=2cm](E){$\text{C}_\bullet(E)^{\otimes k}$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$(\alpha_{n+1})_{\Sigma E_n}$}; \draw[->] (A) -- (C) node[midway,anchor=east]{$\text{C}_\bullet(\rho_n)$}; \draw[->] (C) -- (D) node[midway,anchor=north]{$(\alpha_{n+1})_{E_{n+1}}$}; \draw[->] (B) -- (D) node[midway,anchor=west]{$\text{C}_\bullet(\rho_n)^{\otimes k}$}; \draw[->] (D) -- (E); \end{tikzpicture} \end{center} Start with $\Sigma x$ at the top left corner. Applying the sequence of maps given by $\downarrow, \rightarrow, \rightarrow$, we get the desired image of $\rho_n(\Sigma x)$. On the other hand, applying $\rightarrow$, because $\alpha_n = \Psi(\alpha_{n+1})$, we get, upon altering the degree assignment, $\alpha_n(x)$ (note that $\text{C}_\bullet(\Sigma E_n)^{\otimes k}$ and $\text{C}_\bullet(E_n)^{\otimes k}$ are exactly equal, except for the degrees); or more precisely, we get the image of $x$ under the first two of the three maps in the composite (\ref{eq:comp1}).
Letting this image be $\sum [\Sigma t_1] \otimes \cdots \otimes [\Sigma t_k]$, where the $t_i$ are simplices of $E_n$ of degrees say $d_i$ (and $\Sigma t_i$ denote the corresponding suspended simplices as in Definition~\ref{def:susp_simps}), the image under the entire composite is then $\sum [n,d_1,t_1] \otimes \cdots \otimes [n,d_k,t_k]$. Now, go back to $\sum [\Sigma t_1] \otimes \cdots \otimes [\Sigma t_k]$ and apply $\downarrow$. We get $\sum [\rho_n(\Sigma t_1)] \otimes \cdots \otimes [\rho_n(\Sigma t_k)]$. Next, applying $\rightarrow$, we get $\sum [n+1,d_1+1,\rho_n(\Sigma t_1)] \otimes \cdots \otimes [n+1,d_k+1,\rho_n(\Sigma t_k)]$. Because the square commutes, we have that the desired image of $\rho_n(\Sigma x)$ under the composite in (\ref{eq:comp2}) is this sum $\sum [n+1,d_1+1,\rho_n(\Sigma t_1)] \otimes \cdots \otimes [n+1,d_k+1,\rho_n(\Sigma t_k)]$. But in $\text{C}_\bullet(E)$, we have $[n+1,d_i+1,\rho_n(\Sigma t_i)] = [n,d_i,t_i]$, so that this desired image is $\sum [n,d_1,t_1] \otimes \cdots \otimes [n,d_k,t_k]$, and this was exactly also the desired image of $x$ under the composite (\ref{eq:comp1}), so that the two coincide, as was needed. \\ By the above, we see that $\bar{\bar\mu} \colon \mathpzc{Z}_{\text{st}}(k) \times \text{S}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$ descends to a bilinear map $\bar\mu \colon \mathpzc{Z}_{\text{st}}(k) \times \text{C}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$ which preserves the degrees, and we let the associated map $\mathpzc{Z}_{\text{st}}(k) \otimes \text{C}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes k}$ be $\mu$. It remains to check that $\mu$ commutes with the differentials. Consider $\alpha \in \mathpzc{Z}_{\text{st}}(k)_d$ and $x \in (E_n)_e$.
We have $\partial(\alpha \otimes [n,e,x]) = \partial \alpha \otimes [n,e,x] + (-1)^d \alpha \otimes \partial [n,e,x] = (\partial \alpha_0, \partial \alpha_1, \dots) \otimes [n,e,x] + (-1)^d\alpha \otimes (\sum_i [n,e-1,d_i(x)])$ which under $\mu$ has image \begin{align*} (\partial \alpha_n)(x) + (-1)^d \alpha_n(\sum_i d_i(x)) &= \partial_{\text{C}_\bullet(E)^{\otimes k}}(\alpha_n(x)) - (-1)^{d}\alpha_n(\sum_i d_i(x)) + (-1)^d\alpha_n(\sum_i d_i(x)) \\ &= \partial_{\text{C}_\bullet(E)^{\otimes k}}(\alpha_n(x)) \\ &= \partial_{\text{C}_\bullet(E)^{\otimes k}}(\mu(\alpha \otimes [n,e,x])) \end{align*} as desired. We now have the coaction maps $\mu$, and one can verify that the compatibility conditions required for an operad coaction are indeed satisfied. \\ Finally, we demonstrate the functoriality. Let $E$ and $F$ be spectra and $f \colon E \to F$ a map of spectra. We then have an induced map $f_* \colon \text{C}_\bullet(E) \to \text{C}_\bullet(F)$ of chain complexes as above, and we need to check that this map is compatible with the coaction of $\mathpzc{Z}_{\text{st}}$. Consider $[n,e,x]$ in $\text{C}_\bullet(E)$. Applying $f_*$ and then $\mu^F \colon \mathpzc{Z}_{\text{st}}(k) \otimes \text{C}_\bullet(F) \to \text{C}_\bullet(F)^{\otimes k}$, we get $\alpha_n(f_n(x))$, which by naturality of $\alpha_n$ is equal to $f_n^{\otimes k}(\alpha_n(x))$, and this is exactly the result upon instead first applying $\mu^E$ and then $f_*^{\otimes k}$. Thus $f_*$ is a map of $\mathpzc{Z}_{\text{st}}$-coalgebras, as required. \end{proof} \begin{Proposition}\label{prop:stabopactioncochains} Given a spectrum $E$, the cochain complex $\emph{C}^\bullet(E; \mathbb{F}_p)$ is naturally an algebra over the stable Barratt-Eccles cochain operad $\mathpzc{E}_{\emph{st}}^\dagger$. \end{Proposition} \begin{proof} By Proposition~\ref{prop:spec_chains_coalg}, the chains $\text{C}_\bullet(E;\mathbb{F}_p)$ are a coalgebra over $\mathpzc{Z}_{\text{st}}$.
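Let us spell out, for the reader's convenience, the standard dualization formula underlying the next two steps (with notation as in Section~\ref{sec:nots_convs}, and suppressing the Koszul signs): writing $\mu_k \colon \mathpzc{Z}_{\text{st}}(k) \otimes \text{C}_\bullet(E;\mathbb{F}_p) \to \text{C}_\bullet(E;\mathbb{F}_p)^{\otimes k}$ for the coaction maps, the dual $\text{C}_\bullet(E;\mathbb{F}_p)^\vee$ carries the action given by
\[ (\alpha \cdot (\phi_1 \otimes \cdots \otimes \phi_k))(c) := (\phi_1 \otimes \cdots \otimes \phi_k)(\mu_k(\alpha \otimes c)) \]
for $\alpha \in \mathpzc{Z}_{\text{st}}(k)$, $\phi_1, \dots, \phi_k \in \text{C}_\bullet(E;\mathbb{F}_p)^\vee$ and $c \in \text{C}_\bullet(E;\mathbb{F}_p)$.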
As in Section~\ref{sec:nots_convs}, if we apply $(-)^\vee$ to the chains, we get an algebra over $\mathpzc{Z}_{\text{st}}$. Thus, again as in Section~\ref{sec:nots_convs}, if we apply $(-)^\dagger \circ (-)^{\vee}$ to the chains, yielding the cochains as per Remark~\ref{rmk:spec_chain_notation}, we get an algebra over $\mathpzc{Z}_{\text{st}}^\dagger$. Finally, we get an $\mathpzc{E}_{\text{st}}^\dagger$-algebra structure by pulling across the map $\mathpzc{E}_{\text{st}}^\dagger \to \mathpzc{Z}_{\text{st}}^\dagger$ constructed earlier. \end{proof} As a result of the above, we can interpret cochains on spectra as a functor to algebras over $\mathpzc{E}_{\text{st}}^\dagger$: \[ \text{C}^\bullet \colon \mathsf{Sp}^{\text{op}} \to \mathpzc{E}_{\text{st}}^\dagger\text{-}\mathsf{Alg} \] \begin{Remark} We can in fact view the action of the stable operad on spectral cochains in an iterative manner, as follows. By definition, we have that \[ \text{C}_\bullet(E;\mathbb{F}_p) := \text{colim}(\text{C}_\bullet(E_0;\mathbb{F}_p) \to \text{C}_\bullet(E_{1};\mathbb{F}_p)[-1] \to \text{C}_\bullet(E_{2};\mathbb{F}_p)[-2] \to \cdots) \] \[ \text{C}^\bullet(E;\mathbb{F}_p) := \text{lim}(\cdots \to \text{C}^\bullet(E_2;\mathbb{F}_p)[-2] \to \text{C}^\bullet(E_1;\mathbb{F}_p)[-1] \to \text{C}^\bullet(E_0;\mathbb{F}_p)) \] As is well-known in the unstable case, we have that $\text{C}_\bullet(E_0;\mathbb{F}_p)$ and $\text{C}^\bullet(E_0;\mathbb{F}_p)$ form, respectively, a coalgebra over $\mathpzc{E}$ and an algebra over $\mathpzc{E}^\dagger$. Thus, by Proposition~\ref{prop:chainopsuspalg}, we have that the second terms, $\text{C}_\bullet(E_1;\mathbb{F}_p)[-1]$ and $\text{C}^\bullet(E_1;\mathbb{F}_p)[-1]$ form, respectively, a coalgebra over $\Sigma\mathpzc{E}$ and an algebra over $\Sigma\mathpzc{E}^\dagger$.
Similarly, we have that the third terms, $\text{C}_\bullet(E_2;\mathbb{F}_p)[-2]$ and $\text{C}^\bullet(E_2;\mathbb{F}_p)[-2]$ form, respectively, a coalgebra over $\Sigma^2\mathpzc{E}$ and an algebra over $\Sigma^2\mathpzc{E}^\dagger$. In the limit, we get (co)algebra structures over the stable operads. \hfill $\vert\vert$ \end{Remark} Now, by Proposition~\ref{prop:stabopactioncochains} and the work in the previous sections, the cohomologies $\text{H}^\bullet(E;\mathbb{F}_p)$ inherit operations $P^s$, and also $\beta P^s$ in the case $p > 2$. As in the case of spaces, they satisfy, as shown below, an important property which does not hold for general algebras. This property is of course well-known to hold; we include it as a demonstration of working with our explicit model. \begin{Proposition}\label{prop:P01sp} Given a spectrum $E$, the operation $P^0$ acts by the identity on $\emph{H}^\bullet(E;\mathbb{F}_p)$. \end{Proposition} \begin{proof} We shall outline the $p = 2$ case; the $p > 2$ case is analogous. For brevity, let $\text{C}^\bullet(E)$ denote the mod $p$ cochains. The operation $P^0$ is computed via images under the map $\mathpzc{E}_{\text{st}}^\dagger(2) \otimes \text{C}^\bullet(E)^{\otimes 2} \to \text{C}^\bullet(E)$. From Section~\ref{sec:cohom_ops}, recall the notations $e_d^{\text{un}}$ and $e_d^{\text{st}}$, for certain elements of $\mathpzc{E}^\dagger(2)$ and $\mathpzc{E}^\dagger_{\text{st}}(2)$ respectively. In particular, recall that: \begin{equation}\label{eqn:esteun} e_{d}^{\text{st}} = \left\{\begin{array}{ll} (e_{d}^{\text{un}},\tau e_{d+1}^{\text{un}},e_{d+2}^{\text{un}}, \tau e_{d+3}^{\text{un}}, \dots) & d \ge 0 \\ (0, \dots, 0, e_{0}^{\text{un}},\tau e_{1}^{\text{un}},e_{2}^{\text{un}}, \tau e_{3}^{\text{un}}, \dots) & d < 0 \end{array}\right. \end{equation} Here in the second case there are $|d|$ zeros. Consider some cocycle $\alpha$ in $\text{C}^{d}(E)$.
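It may help to record a concrete instance of \eqref{eqn:esteun} (this is merely an unpacking of the formula, not new material): for $d = -2$, it reads
\[ e_{-2}^{\text{st}} = (0, 0, e_{0}^{\text{un}}, \tau e_{1}^{\text{un}}, e_{2}^{\text{un}}, \tau e_{3}^{\text{un}}, \dots), \]
so that in all cases the entry of $e_{d}^{\text{st}}$ in spectral level $n$ is, up to the transposition $\tau$, the unstable element $e_{d+n}^{\text{un}}$ whenever $d+n \ge 0$, and is zero otherwise.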
Let $\beta \in \text{C}^d(E)$ be the image of $e^{\text{st}}_d \otimes \alpha \otimes \alpha$ under $\mathpzc{E}^\dagger_{\text{st}}(2) \otimes \text{C}^\bullet(E)^{\otimes 2} \to \text{C}^\bullet(E)$. By definition of $P^0$, $P^0\alpha$ is given by the class of $\beta$. Thus we need to show that $\beta = \alpha$. Consider a simplex $x \in (E_n)_{d'}$, where $d'-n = d$. As the $\mathpzc{E}_{\text{st}}^\dagger$-action on cochains is dual to an $\mathpzc{E}_{\text{st}}$-coaction on chains, $\beta(x)$ is given by $(\alpha \otimes \alpha)(y)$ where $y$ is the image of $e_d^{\text{st}} \otimes x$ under $\mathpzc{E}_{\text{st}}(2) \otimes \text{C}_\bullet(E) \to \text{C}_\bullet(E)^{\otimes 2}$. By the definition of the $\mathpzc{E}_{\text{st}}$-coaction on chains and by (\ref{eqn:esteun}) above, we have that, since $x$ lies in $E_n$, computation of $y$ reduces to a computation of the image of $e^{\text{un}}_{d'} \otimes x$ under the unstable coaction map $\mathpzc{E}(2) \otimes \text{C}_\bullet(E_n) \to \text{C}_\bullet(E_n)^{\otimes 2}$. An explicit computation of this coaction in the unstable case shows that $y = x \otimes x$. One possible method of doing this computation is to first pass, under the map $\mathpzc{E}^\dagger \to \mathpzc{M}^\dagger$ described earlier, from $e_{d'}^{\text{un}}$ to the corresponding surjection in the McClure-Smith operad, which one can check is the surjection $(d'+2) \to (2)$ given by the sequence $(1212 \cdots)$, and then to use the coaction formula for the McClure-Smith operad, given by the map $\text{AW}$ from Section~\ref{sec:stabilizations}. Now, as $y = x \otimes x$, $\beta(x) = (\alpha \otimes \alpha)(x \otimes x) = \alpha(x)^2 = \alpha(x)$, and so $\beta = \alpha$, as desired. \end{proof} \subsection{Change of Coefficients from $\mathbb{F}_p$ to $\overline{\mathbb{F}}_p$} Hitherto, we have worked with coefficients in $\mathbb{F}_p$, whether with the stable operads or with the spectral cochains.
In order to construct algebraic models of $p$-adic stable homotopy types, however, we must pass to the algebraic closure $\overline{\mathbb{F}}_p$. We describe the necessary modifications in this section. First, we define a new operad, one over $\overline{\mathbb{F}}_p$. \begin{Definition}\label{def:operadFpbar} The operad $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$, an operad in cochain complexes over $\overline{\mathbb{F}}_p$, is as follows: \[ \widebar{\mathpzc{E}}_{\text{st}}^{\dagger}(n) := \mathpzc{E}_{\text{st}}^{\dagger}(n) \otimes_{\mathbb{F}_p[\Sigma_n]} \overline{\mathbb{F}}_p[\Sigma_n] \] \end{Definition} We now have three tasks to complete, mirroring those already completed in the case of the operad $\mathpzc{E}_{\text{st}}^\dagger$: (i) show that one can do homotopy theory over $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$; (ii) compute the cohomology of free algebras over $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$ and develop a theory of cohomology operations; (iii) demonstrate homotopy additivity properties of $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$. Let us first consider (i). Our goal is to show that, just as for $\mathpzc{E}_{\text{st}}^\dagger$, the monad associated to $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$ preserves quasi-isomorphisms, and moreover, that $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$ is semi-admissible. As usual, we denote the monad, and also the free algebra functor, associated to the operad $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$, by $\overline{\mathbf{E}}_{\text{st}}^\dagger$. \begin{Proposition}\label{prop:Ebarmonad} We have the following: \begin{itemize} \item[(i)] The monad corresponding to the operad $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$ preserves quasi-isomorphisms.
\item[(ii)] The operad $\widebar{\mathpzc{E}}_{\text{st}}^{\dagger}$ is semi-admissible, which is to say the category of algebras $\widebar{\mathpzc{E}}_{\emph{st}}^\dagger\text{-}\mathsf{Alg}$ possesses a Quillen semi-model structure where: \begin{itemize} \item The weak equivalences are the quasi-isomorphisms. \item The fibrations are the surjective maps. \item The cofibrations are retracts of relative cell complexes, where the cells are the maps $\overline{\mathbf{E}}_{\emph{st}}^\dagger M \to \overline{\mathbf{E}}_{\emph{st}}^\dagger\emph{C}M$, where $M$ is a degreewise free $\overline{\mathbb{F}}_p$-complex with zero differentials. \end{itemize} \end{itemize} \end{Proposition} \begin{proof} (i): Given a cochain complex $X$ over $\mathbb{F}_p$, an easy check shows that: \begin{equation}\label{eqn:freealgfpbar} \overline{\mathbf{E}}_{\text{st}}^\dagger (\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} X) \cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} (\mathbf{E}_{\text{st}}^\dagger X) \end{equation} As in the proof of Proposition~\ref{prop:MS_st_pres_w_eqs}, it suffices to consider quasi-isomorphisms which are monomorphisms. 
Given a monomorphism $f \colon X \to Y$ of $\overline{\mathbb{F}}_p$-complexes, by an appropriate choice of bases, we can find $\mathbb{F}_p$-complexes $\underline{X}$ and $\underline{Y}$ together with a commutative square as follows: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$X$}; \node [right of = A,xshift=1cm] (B) {$\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \underline{X}$}; \node [below of = A] (C) {$Y$}; \node [right of = C,xshift=1cm] (D) {$\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \underline{Y}$}; \draw [->] (A) -- (B) node[midway,anchor=south]{$\cong$}; \draw [->] (A) -- (C) node[midway,anchor=east]{$f$}; \draw [->] (C) -- (D) node[midway,anchor=north]{$\cong$}; \draw [->] (B) -- (D) node[midway,anchor=west]{}; \end{tikzpicture} \end{center} The result now follows by Proposition~\ref{prop:MS_st_pres_w_eqs} and (\ref{eqn:freealgfpbar}) above. \\ (ii): Let $A$ be a cell $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebra. As in the proof of Proposition~\ref{prop:E_adm}, it suffices to show that, for each $j \ge 0$, $\mathpzc{U}^A(j)$ is semi-flat over $\overline{\mathbb{F}}_p[\Sigma_j]$; that is, it suffices to show that the obvious analogue of Lemma~\ref{lem:Estenvopsflat} holds. In fact, an easy inspection shows that, by entirely analogous proofs, the obvious analogues for all of Lemmas~\ref{lem:finflatEst},~\ref{lem:almost_splitunstable}, and~\ref{lem:change_of_ringsunstable} hold, and then as a result, so does the analogue of Lemma~\ref{lem:Estenvopsflat}, as desired. \end{proof} The semi-model structure constructed above also yields the derived category of $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebras. We now move on to the second task (ii), that of computing the cohomologies of free $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebras, and developing a theory of cohomology operations. The former is achieved via the following result and Proposition~\ref{prop:stablefreehom}.
\begin{Proposition}\label{prop:freealghomEbarpoint} Given an $\overline{\mathbb{F}}_p$-complex $X$, we have that: \[ \emph{H}^\bullet(\overline{\mathbf{E}}_{\emph{st}}^\dagger X) \cong \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \emph{H}^\bullet(X) \] \end{Proposition} Note that the tensor is over $\mathbb{F}_p$, not over $\overline{\mathbb{F}}_p$. \begin{proof} By a choice of basis, we can find an $\mathbb{F}_p$-complex $\underline{X}$ such that $X \cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \underline{X}$. We then have that: \begin{align*} \text{H}^\bullet(\overline{\mathbf{E}}_{\text{st}}^\dagger X) &\cong \text{H}^\bullet(\overline{\mathbf{E}}_{\text{st}}^\dagger (\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \underline{X})) \\ &\cong \text{H}^\bullet(\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \mathbf{E}_{\text{st}}^\dagger (\underline{X})) \\ &\cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \text{H}^\bullet(\mathbf{E}_{\text{st}}^\dagger \underline{X}) \\ &\cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \text{H}^\bullet(\underline{X}) \\ &\cong \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \text{H}^\bullet(X) \end{align*} \end{proof} Now we consider cohomology operations for $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebras. \begin{Proposition}\label{prop:cohom_ops_Fpbar} Given an algebra $A$ over $\widebar{\mathpzc{E}}_{\emph{st}}^\dagger$, the cohomology $\emph{H}^\bullet(A)$ possesses an $\mathbb{F}_p$-linear action by $\widehat{\mathcal{B}}$. \end{Proposition} Note that the operations are $\mathbb{F}_p$-linear, as opposed to $\overline{\mathbb{F}}_p$-linear. \begin{proof} As per Proposition~\ref{prop:freealghomEbarpoint}, we have an isomorphism $\text{H}^\bullet(\widebar{\mathbf{E}}_{\text{st}}^\dagger A) \cong \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \text{H}^\bullet(A)$.
The $\mathbb{F}_p$-linear action of $\widehat{\mathcal{B}}$ is then via the composite \[ \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \text{H}^\bullet(A) \overset{\cong}\longrightarrow \text{H}^\bullet(\widebar{\mathbf{E}}_{\text{st}}^\dagger A) \longrightarrow \text{H}^\bullet (A) \] where the second map is obtained by applying $\text{H}^\bullet(-)$ to the algebra structure map $\widebar{\mathbf{E}}_{\text{st}}^\dagger A \to A$ of $A$. \end{proof} Next, we consider task (iii), that of the homotopy additivity of $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$. \begin{Proposition}\label{prop:additivity_of_monad_Fpbar} We have the following: \begin{itemize} \item[(i)] Given $\overline{\mathbb{F}}_p$-complexes $X$ and $Y$, we have a natural quasi-isomorphism: \[ \overline{\mathbf{E}}_{\emph{st}}^\dagger(X \oplus Y) \sim \overline{\mathbf{E}}_{\emph{st}}^\dagger(X) \oplus \overline{\mathbf{E}}_{\emph{st}}^\dagger(Y) \] \item[(ii)] Given cofibrant $\widebar{\mathpzc{E}}^\dagger_{\emph{st}}$-algebras $A$ and $B$, we have a natural quasi-isomorphism: \[ A \amalg B \sim A \oplus B \] \item[(iii)] Given a diagram $A \leftarrow C \rightarrow B$ of $\widebar{\mathpzc{E}}^\dagger_{\emph{st}}$-algebras, if each of $A$, $B$ and $C$ is cofibrant, and $C \to B$ is a cofibration, then we have that: \[ A \amalg_C B \sim A \oplus_C B \] \end{itemize} \end{Proposition} \begin{proof} (i): Given the $\overline{\mathbb{F}}_p$-complexes $X$ and $Y$, we have a canonical map: \[ \overline{\mathbf{E}}_{\text{st}}^\dagger(X) \oplus \overline{\mathbf{E}}_{\text{st}}^\dagger(Y) \to \overline{\mathbf{E}}_{\text{st}}^\dagger(X \oplus Y) \] Upon choosing bases for $X$ and $Y$, we have $\mathbb{F}_p$-complexes $\underline{X}$ and $\underline{Y}$ such that $X \cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \underline{X}$ and $Y \cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} \underline{Y}$.
We of course then also have a basis for $X \oplus Y$ and an isomorphism $X \oplus Y \cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} (\underline{X} \oplus \underline{Y})$. It follows that the above canonical map can be constructed by tensoring the map \[ \mathbf{E}_{\text{st}}^\dagger(\underline{X}) \oplus \mathbf{E}_{\text{st}}^\dagger(\underline{Y}) \to \mathbf{E}_{\text{st}}^\dagger(\underline{X} \oplus \underline{Y}) \] with $\overline{\mathbb{F}}_p$. The result now follows by Proposition~\ref{prop:additivity_of_monad}. \\ (ii): We are trying to show that the obvious analogue of Proposition~\ref{prop:coproducts_of_algebras} holds. An easy inspection shows that, by entirely analogous proofs, the obvious analogues of Lemmas~\ref{lem:stabilityigorsense} and~\ref{lem:UandVforcofibA} hold, and then, as a result, so does the analogue of Proposition~\ref{prop:coproducts_of_algebras}, as desired. \\ (iii): We are trying to show that the obvious analogue of Proposition~\ref{prop:pushouts_alg} holds. An easy inspection shows that, by entirely analogous proofs, the obvious analogues of Proposition~\ref{prop:pushout_normalization} and Lemmas~\ref{lem:norm_fin_flat} and~\ref{lem:Estweksforpowers} hold, and then, as a result, so does the analogue of Proposition~\ref{prop:pushouts_alg}, as desired. \end{proof} We have now completed the transition from coefficients in $\mathbb{F}_p$ to coefficients in $\overline{\mathbb{F}}_p$ at the level of the operad. Next, we consider spectral cochains with coefficients in $\overline{\mathbb{F}}_p$. Henceforth, we let $\text{C}^\bullet(-)$ denote cochains with coefficients in $\mathbb{F}_p$. \begin{Definition}\label{def:speccochainsFpbar} Given a spectrum $E$, we set the following: \[ \overline{\text{C}}^\bullet(E) := \text{C}^\bullet(E) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p \] \end{Definition} Of course, one would hope that the cochains $\overline{\text{C}}^\bullet(E)$ yield algebras over $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$.
The following result confirms this. \begin{Proposition}\label{prop:speccochainsalgoverFpbar} Given a spectrum $E$, $\overline{\emph{C}}^\bullet(E)$ is naturally an algebra over $\widebar{\mathpzc{E}}_{\emph{st}}^\dagger$. \end{Proposition} \begin{proof} In general, if $X$ is an $\mathbb{F}_p$-complex which is an $\mathpzc{E}_{\text{st}}^\dagger$-algebra via a structure map $\mathbf{E}_{\text{st}}^\dagger X \to X$, then $\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} X$ is an $\overline{\mathbb{F}}_p$-complex which is an $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebra, via the structure map: \[ \overline{\mathbf{E}}_{\text{st}}^\dagger (\overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} X) \cong \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} (\mathbf{E}_{\text{st}}^\dagger X) \to \overline{\mathbb{F}}_p \otimes_{\mathbb{F}_p} X \] \end{proof} We have now completed the transition of all previous material to coefficients in $\overline{\mathbb{F}}_p$. \subsection{The Adjoint to Spectral Cochains} Consider the spectral cochains functor: \[ \overline{\text{C}}^\bullet \colon \mathsf{Sp}^{\text{op}} \to \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg} \] We shall construct an adjoint to this spectral cochains functor.
That is, we will construct an adjoint functor: \[ \text{U} \colon \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg} \to \mathsf{Sp}^{\text{op}} \] We define $\text{U}$ by setting, given an $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebra $A$, the following in spectral degree $n \ge 0$ and simplicial degree $d \ge 0$: \[ \text{U}(A)_{n,d} := \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\overline{\text{C}}^\bullet(\Sigma^{\infty-n}\Delta_{d+})) = \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A, \text{C}^\bullet(\Sigma^{\infty-n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \] With $A$ and $n$ fixed, $\text{U}(A)_{n,d}$ is clearly contravariantly functorial in $d$, so that we have a simplicial set $\text{U}(A)_n$; moreover, it becomes a based simplicial set upon endowing it with the zero map as a basepoint. We now want maps $\text{U}(A)_n \to \Omega\text{U}(A)_{n+1}$ or $\Sigma\text{U}(A)_n \to \text{U}(A)_{n+1}$. In dimension $d$, this means a map $\text{U}(A)_{n,d} \to \text{U}(A)_{n+1,d+1}$, such that $d_0$ and $d_1 \cdots d_{d+1}$ send simplices in the image to $*$.
Thus we want a map \[ \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \to \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \] such that every algebra map in the image of this map yields the zero map upon postcomposition either with the map $\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p \to \text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p$ induced by $d_0$, or instead with the map $\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p \to \text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{0+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p$ induced by $d_1 \cdots d_{d+1}$. We first note that we have an isomorphism of differential graded $\mathbb{F}_p$-modules \[ \text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \to \text{C}_\bullet(\Sigma \Delta_{d+}) \] of degree $n+1$ (note that here in the source we are taking chains on a spectrum while in the target we are taking chains on a based simplicial set). This isomorphism is given by sending $[n,e,x]$ (see Remark~\ref{rmk:spec_chain_notation} for this notation) in $\Delta_{d+}$, which is of degree $e-n$, to $[\Sigma x]$ in $\Sigma \Delta_{d+}$, which is of degree $e+1$; that this is an isomorphism follows from Proposition~\ref{prop:nd_susp}. Next, note that we have an isomorphism of differential graded $\mathbb{F}_p$-modules \[ \text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \to \text{C}_\bullet(\Delta_{d+1,+}) \] again of degree $n+1$. This isomorphism is given by sending $[n+1,e,x]$ in $\Delta_{d+1,+}$, which is of degree $e-n-1$, to $[x]$ in $\Delta_{d+1,+}$, which is of degree $e$.
Now, using Example~\ref{examp:cones_Delta_k_+} and the canonical map $\text{C}(X) \to \Sigma X$, we have a canonical map $\Delta_{d+1} \to \Sigma \Delta_{d+}$, yielding a map $\Delta_{d+1,+} \to \Sigma \Delta_{d+}$, and so, using the above isomorphisms, we get a composite map \[ \text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \to \text{C}_\bullet(\Delta_{d+1,+}) \to \text{C}_\bullet(\Sigma\Delta_{d+}) \to \text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \] which is a map of chain complexes since the degree is $(n+1) + 0 - (n+1) = 0$. Moreover, we claim that it is a map of $\mathpzc{E}_{\text{st}}$-coalgebras. Once we have this, by dualization, tensoring with $\overline{\mathbb{F}}_p$, and postcomposition, we get the desired map: \[ \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \to \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \] To see that the map is an $\mathpzc{E}_{\text{st}}$-coalgebra map, consider some element $[n+1,e,x]$ in the source chains $\text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+})$, where $x$ is some map $[e] \to [d+1]$. Let $x'$ be the corresponding map $[e'] \to [d]$ obtained by restricting to the preimage of the final $d+1$ elements (if this preimage is empty, or if it is all of $[e]$, we have $*$). The corresponding element of $\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+})$ is $[n+1,e',x']$ (we have $n+1$ here instead of $n$ since in the third map in the composition, the isomorphism maps to the $(n+1)^{\text{st}}$ level, instead of the $n^{\text{th}}$ level, of the spectrum $\Sigma^{\infty - n}\Delta_{d+}$).
Thus an element $\alpha = (\alpha_0,\alpha_1,\dots)$ of $\mathpzc{E}_{\text{st}}(k)$ coacts on both the element in the source and the element in the target by $\alpha_{n+1}$, yielding $\alpha_{n+1}(x)$ and $\alpha_{n+1}(x')$. Thus what we want is for the following square to commute: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$\text{C}_\bullet(\Delta_{d+1,+})$}; \node[below= of A](C){$\text{C}_\bullet(\Delta_{d+1,+})^{\otimes k}$}; \node[right= of A,xshift=1cm](B){$\text{C}_\bullet(\Sigma \Delta_{d+})$}; \node[below= of B,yshift=0mm](D){$\text{C}_\bullet(\Sigma \Delta_{d+})^{\otimes k}$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{$\alpha_{n+1}$}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{$\alpha_{n+1}$}; \end{tikzpicture} \end{center} This commutativity follows from the naturality of $\alpha_{n+1}$. \\ Next, we check the required condition above: that postcomposing maps in the image with the map $\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \to \text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d,+})$ induced by $d_0$, or instead with the map $\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \to \text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{0+})$ induced by $d_1 \cdots d_{d+1}$, gives the zero map. That is, first, given a map \[ A \to \text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \to \text{C}^\bullet(\Sigma^{\infty - n - 1}\Delta_{d+1,+}) \] we want the composite \[ A \to \text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \to \text{C}^\bullet(\Sigma^{\infty - n - 1}\Delta_{d+1,+}) \to \text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d,+}) \] where the final map is given by $d_0$, to be zero.
Here the composite of the latter two maps is the dual of the following composite: \begin{center} \begin{tikzpicture}[node distance = 1cm] \node [] (A) {$\text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+})$}; \node [below of = A,yshift=-5mm] (B) {$\text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+})$}; \node [right of = B,xshift=2.5cm] (C) {$\text{C}_\bullet(\Delta_{d+1,+})$}; \node [right of = C,xshift=2cm] (D) {$\text{C}_\bullet(\Sigma\Delta_{d+})$}; \node [right of = D,xshift=2cm] (E) {$\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+})$}; \draw [->] (A) -- (B) node[midway,anchor=east]{}; \draw [->] (B) -- (C) node[midway,anchor=north]{}; \draw [->] (C) -- (D) node[midway,anchor=north]{}; \draw [->] (D) -- (E) node[midway,anchor=north]{}; \end{tikzpicture} \end{center} Start with some $e$-simplex $[e] \to [d]$. Under the first map, it is postcomposed to a map $[e] \to [d+1]$ whose image does not contain $0$; under the second map, we get this same map again, but with a different degree; under the third map, we restrict to those entries which do not map to $0$, and so get back the original $[e] \to [d]$, which in the suspension is killed (mapped to $*$). Thus we will get zero in $A$ at the end of the composition. \\ On the other hand, if we had postcomposed instead with the map $\text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \to \text{C}^\bullet(\Sigma^{\infty - n-1}\Delta_{0+})$ induced by $d_1 \cdots d_{d+1}$, the vertical map above alone would change, and, proceeding as in the analysis above, we would start with the identity on $[0]$; this maps, under the map induced by $d_1 \cdots d_{d+1}$, to the inclusion $[0] \to [d+1]$ sending $0$ to $0$, which is killed by the map $\text{C}_\bullet(\Delta_{d+1,+}) \to \text{C}_\bullet(\Sigma\Delta_{d+})$ (see the definition of the map to the cone in Example~\ref{examp:cones_Delta_k_+}). \\ We have now constructed maps $\text{U}(A)_{n,d} \to \text{U}(A)_{n+1,d+1}$, i.e., maps $(\text{U}(A)_{n})_{d} \to (\Omega\text{U}(A)_{n+1})_{d}$.
Moreover, one can readily check that these maps commute with the simplicial operators, so that we have the desired simplicial set maps $\text{U}(A)_n \to \Omega \text{U}(A)_{n+1}$.\\ We now have a spectrum $\text{U}(A)$ associated to $A$. We next show that this construction is functorial in $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebras $A$. This is easily seen from the fact that \[ \text{U}(A)_{n,d} = \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \] as, given a map $A \to B$, we get, by precomposition, an induced map from $\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(B,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p)$ to $\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p)$. Considering these for a fixed $n$ but variable $d$, we get a simplicial set map $\text{U}(B)_n \to \text{U}(A)_n$ since the simplicial operators act by postcomposition and so commute with the precomposition maps $\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(B,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \to \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(A,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p)$. Moreover, one can immediately verify that these simplicial set maps are compatible with the structure maps of the spectra $\text{U}(A)$ and $\text{U}(B)$. 
Thus we have a functor: \[ \text{U} \colon \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg} \to \mathsf{Sp}^{\text{op}} \] \begin{Proposition}\label{prop:U_is_right_adjoint} The functor $\emph{U}$ is left adjoint to the cochains functor on spectra, so that we have an adjunction: \begin{center} \begin{tikzpicture}[node distance=3cm] \node[](A){$\mathsf{Sp}^{\emph{op}}$}; \node[right of = A](B){$\widebar{\mathpzc{E}}_{\emph{st}}^\dagger\text{-}\mathsf{Alg}$}; \draw[->,transform canvas={yshift=2mm}] (A) -- (B) node[midway,anchor=south]{$\overline{\emph{C}}^\bullet$}; \draw[->,transform canvas={yshift=-2mm}] (B) -- (A) node[midway,anchor=north]{$\emph{U}$} node[midway,anchor=south,yshift=-0.75mm]{$\top$}; \end{tikzpicture} \end{center} \end{Proposition} \begin{proof} Let $E$ be a spectrum and $A$ an $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebra. We wish to construct the natural isomorphism between $\mathsf{Sp}(E,\text{U}(A))$ and $\widebar{\mathpzc{E}}^\dagger_{\text{st}}\text{-}\mathsf{Alg}(A,\widebar{\text{C}}^{\bullet}(E))$. The required verifications are not difficult, though they are rather lengthy. We shall provide here the part of the correspondence which yields an $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$-algebra map $g = \{g_n\} \colon A \to \widebar{\text{C}}^\bullet(E)$ when given a spectrum map $f =\{f_n\} \colon E \to \text{U}(A)$. Fix such a spectrum map $f$. Consider $a \in A$, of degree say $n$. We desire a map $g_n(a) \colon \widebar{\text{C}}_n(E) \to \mathbb{F}_p$. Consider some $[m,e,x]$ in $(E_m)_e$ where $e-m=n$. We have an element $f_m(x) \in \text{U}(A)_{m,e}$, which is to say a map $A \to \widebar{\text{C}}^\bullet(\Sigma^{\infty - m}\Delta_{e+})$, and so, taking the image of $a$, a map $\widebar{\text{C}}_n(\Sigma^{\infty - m}\Delta_{e+}) \to \mathbb{F}_p$.
Let $g(a)([m,e,x])$ be the image under this map of $[m,e,\text{id}_{[e]}]$ (note that this cell is of degree $e-m = n$); that is, $g_n(a)([m,e,x]) = f_m(x)(a)([m,e,\text{id}_{[e]}])$. Linearity of $g_n$ follows from that of $f_m(x)$. Next, we must check that the differentials are preserved. Fix some $a \in A^n$. We have two $(n+1)$-cochains $g_{n+1}(\partial a), \partial g_n(a) \colon \widebar{\text{C}}_{n+1}(E) \to \mathbb{F}_p$, and we must show that they coincide. Consider some $[m,e,x]$ where $x \in (E_m)_e$ and $e-m=n+1$. The latter cochain first forms $\partial [m,e,x] = \sum_i [m,e-1,d_i(x)]$ and then sends this to $\sum_i f_m(d_i(x))(a)([m,e-1,\text{id}_{[e-1]}])$. On the other hand, the former cochain sends it to $f_m(x)(\partial a)([m,e,\text{id}_{[e]}])$. Now, since $f_m$ is a map of simplicial sets, we have $\sum_i f_m(d_i(x))(a)([m,e-1,\text{id}_{[e-1]}]) = \sum_i (d_if_m(x))(a)([m,e-1,\text{id}_{[e-1]}])$. For each $i$, the map $d_if_m(x)$ is the composite: \[ A \overset{f_m(x)}\longrightarrow \widebar{\text{C}}^\bullet(\Sigma^{\infty - m}\Delta_{e+}) \overset{d_i}\longrightarrow \widebar{\text{C}}^\bullet(\Sigma^{\infty - m}\Delta_{e-1,+}) \] It follows that $\sum_i (d_if_m(x))(a)([m,e-1,\text{id}_{[e-1]}]) = \sum_i (f_m(x))(a)([m,e-1,d_i])$. On the other hand, since $f_m(x)$ is a map of cochain complexes, we have that $f_m(x)(\partial a) = \partial f_m(x)(a)$. It follows that \begin{align*} f_m(x)(\partial a)([m,e,\text{id}_{[e]}]) &= (\partial f_m(x)(a))([m,e,\text{id}_{[e]}]) \\ &= f_m(x)(a)(\partial [m,e,\text{id}_{[e]}]) \\ &= f_m(x)(a)\left(\sum_i [m,e-1,d_i]\right) \\ &= \sum_i f_m(x)(a)([m,e-1,d_i]) \end{align*} Thus, as desired, the two cochains coincide. Next, we must check that our map $g = \{g_n\} \colon A \to \widebar{\text{C}}^\bullet(E)$ respects the actions by $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$. Let $\alpha = (\alpha_0,\alpha_1,\dots) \in \widebar{\mathpzc{E}}^\dagger_{\text{st}}(k)$.
Consider some $a_1, \dots, a_k \in A$, and assume without loss of generality (due to linearity), that the $a_i$ are homogeneous, say of degrees $n_1, \dots, n_k$. If we first act by $\alpha$ and then apply $g$, we get a cochain whose image at $[m,e,x]$, where $e-m = n_1 + \cdots + n_k$, is $f_m(x)(\alpha(a_1,\dots,a_k))([m,e,\text{id}_{[e]}])$. Since $f_m(x)$ is a map of algebras, this is equivalent to $\alpha(f_m(x)(a_1),\dots,f_m(x)(a_k))([m,e,\text{id}_{[e]}])$. On the other hand, if we apply $g$ first and then act by $\alpha$, we first get cochains $g_{n_1}(a_1), \dots, g_{n_k}(a_k)$ and then the cochain $\alpha(g_{n_1}(a_1), \dots, g_{n_k}(a_k))$. Let the coaction of $\alpha$ on $[m,e,\text{id}_{[e]}]$ be $\sum [m,e_1,\theta_1] \otimes \cdots \otimes [m,e_k,\theta_k]$, where $\theta_i$ is a map $[e_i] \to [e]$. Then, considering the map $\Delta_e \to E_m$ corresponding to the same $x \in (E_m)_e$ as above, and using naturality of $\alpha_m$, we find that the coaction of $\alpha$ on $[m,e,x]$ is given by $\sum [m,e_1,\theta_1^*x] \otimes \cdots \otimes [m,e_k,\theta_k^*x]$. 
Now, by definition of the $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$-action on spectral cochains, if we evaluate the cochain $\alpha(g_{n_1}(a_1), \dots, g_{n_k}(a_k))$ at $[m,e,x]$, we get \[ \alpha(g_{n_1}(a_1), \dots, g_{n_k}(a_k))([m,e,x]) = (g_{n_1}(a_1) \otimes \cdots \otimes g_{n_k}(a_k))(\alpha \cdot [m,e,x]) \] which amounts to: \[ (g_{n_1}(a_1) \otimes \cdots \otimes g_{n_k}(a_k))\left(\sum [m,e_1,\theta_1^*x] \otimes \cdots \otimes [m,e_k,\theta_k^*x]\right) \] On the other hand, we have: \[ \alpha(f_m(x)(a_1),\dots,f_m(x)(a_k))([m,e,\text{id}_{[e]}]) = (f_m(x)(a_1) \otimes \cdots \otimes f_m(x)(a_k))(\alpha \cdot [m,e,\text{id}_{[e]}]) \] which is to say \[ (f_m(x)(a_1) \otimes \cdots \otimes f_m(x)(a_k))\left(\sum [m,e_1,\theta_1] \otimes \cdots \otimes [m,e_k,\theta_k]\right) \] and this amounts to: \[ \sum f_m(x)(a_1)([m,e_1,\theta_1]) \otimes \cdots \otimes f_m(x)(a_k)([m,e_k,\theta_k]) \] In either case, we only have to worry about summands where $e_i - m = n_i$ for each $i$. In this case, the former becomes: \[ \sum f_m(\theta_1^*x)(a_1)([m,e_1,\text{id}_{[e_1]}]) \otimes \cdots \otimes f_m(\theta_k^*x)(a_k)([m,e_k,\text{id}_{[e_k]}]) \] which is to say: \[ \sum (\theta_1^*f_m)(x)(a_1)([m,e_1,\text{id}_{[e_1]}]) \otimes \cdots \otimes (\theta_k^*f_m)(x)(a_k)([m,e_k,\text{id}_{[e_k]}]) \] Now, for each $i$, $\theta_i^*f_m(x)$ is the composite: \[ A \overset{f_m(x)}\longrightarrow \widebar{\text{C}}^\bullet(\Sigma^{\infty - m}\Delta_{e+}) \overset{\theta_i}\longrightarrow \widebar{\text{C}}^\bullet(\Sigma^{\infty - m}\Delta_{e_i,+}) \] It follows that, for each $i$, $(\theta_i^*f_m)(x)(a_i)([m,e_i,\text{id}_{[e_i]}]) = f_m(x)(a_i)([m,e_i,\theta_i])$. Thus the two cochains coincide, as desired. \end{proof} Now we consider homotopical properties of the above spectral cochains adjunction.
\begin{Proposition}\label{prop:spec_coch_adj_qadj} The spectral cochains adjunction \begin{center} \begin{tikzpicture}[node distance=3cm] \node[](A){$\mathsf{Sp}^{\emph{op}}$}; \node[right of = A](B){$\widebar{\mathpzc{E}}_{\emph{st}}^\dagger\text{-}\mathsf{Alg}$}; \draw[->,transform canvas={yshift=2mm}] (A) -- (B) node[midway,anchor=south]{$\overline{\emph{C}}^\bullet$}; \draw[->,transform canvas={yshift=-2mm}] (B) -- (A) node[midway,anchor=north]{$\emph{U}$} node[midway,anchor=south,yshift=-0.75mm]{$\top$}; \end{tikzpicture} \end{center} is a Quillen adjunction. \end{Proposition} Note that here on the righthand side we have a Quillen semi-model category, as opposed to a Quillen model category. By a Quillen adjunction, we mean one which satisfies the conditions in Proposition~\ref{prop:semimodelqadj} (iii). \begin{proof} We first demonstrate that $\overline{\text{C}}^\bullet$ preserves fibrations, which is to say that $\overline{\text{C}}^\bullet$ sends a cofibration $i \colon E \to F$ of spectra to an epimorphism. Since $i$ is a cofibration, we have that $i_0 \colon E_0 \to F_0$ and, for $n \ge 0$, the maps \[ E_{n+1} \, \scalebox{1.25}{$\amalg$}_{\Sigma E_n} \, \Sigma F_n \to F_{n+1} \] are cofibrations of based simplicial sets, which is to say that they are injective in each simplicial degree. In particular, by Proposition~\ref{prop:spec_mod_str}, each $i_n \colon E_n \to F_n$, for $n \ge 0$, is a monomorphism. Since monomorphisms of simplicial sets preserve non-degenerate simplices, each $i_n$, for $n \ge 0$, preserves non-degenerate simplices, and of course also the basepoints and their degeneracies. Thus, upon taking chains $\text{C}_\bullet(-)$, we get a sequential colimit of monomorphisms, which is once again a monomorphism since sequential colimits are exact. As we are over a field, we have a split inclusion, so that, upon dualizing, reindexing and tensoring, we have the result for the cochains $\widebar{\text{C}}^\bullet(-)$. 
\\ Next, we shall show that $\text{U}$ preserves cofibrations. Given a cofibration $A \to B$ of algebras, we wish to show that $\text{U}(A) \to \text{U}(B)$ is a cofibration in the opposite category of spectra. We know that all cofibrations of algebras may be written as retracts of cell maps. As $\text{U}$ is a left adjoint and so preserves colimits, we need only show that $\text{U}$ maps the cofibrations $\widebar{\mathbf{E}}_{\textbf{st}}^\dagger M \to \widebar{\mathbf{E}}_{\textbf{st}}^\dagger \text{C}M$ to cofibrations. Here $M$ is an $\overline{\mathbb{F}}_p$-complex with zero differentials. In fact, since for such $M$ the map $M \to \text{C}M$ decomposes as a direct sum of maps $\mathbb{S}^n \to \mathbb{D}^{n+1}$ for various $n$, we need only consider this case of the map $\mathbb{S}^n \to \mathbb{D}^{n+1}$. We shall show, more generally, that if $X \to Y$ is an inclusion of complexes where $X$ and $Y$ are finite type (by which we mean that they are finite dimensional in each degree), then $\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger Y \to \text{U} \widebar{\mathbf{E}}_{\textbf{st}}^\dagger X$ is a fibration of spectra. \\ To begin, we claim that, given any complex $X$ of finite type, $\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X$ is a strict $\Omega$-spectrum. 
First, note that, in each spectral degree $n$ and simplicial degree $d$, we have that: \begin{align*} (\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_{n,d} &= \widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \\ &\cong \mathsf{Co}_{\overline{\mathbb{F}}_p}(X,\text{C}^\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \end{align*} As maps of complexes are closed under addition, we have that, for each $n \ge 0$, $(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_n$ is the underlying simplicial set of a simplicial abelian group, and so a Kan complex. Next, we will show that the maps $\text{U}(A)_n \to \Omega\text{U}(A)_{n+1}$ are bijections in each simplicial degree $d$. To see this, first note that, since $X$ is of finite type, we may dualize to find that \begin{align*} (\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_{n,d} \cong \mathsf{Ch}_{\overline{\mathbb{F}}_p}(\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p, \text{D}X) \cong \mathsf{Ch}_{\mathbb{F}_p}(\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}), \text{D}X) \end{align*} where $\text{D}X$ is the chain complex given by $(\text{D}X)_e = \text{Hom}_{\overline{\mathbb{F}}_p}(X_e,\overline{\mathbb{F}}_p)$. 
Under this isomorphism, the map $(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_n \to \Omega(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_{n+1}$ is given by sending a complex map $\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \to \text{D}X$ to the composite \begin{center} \begin{tikzpicture}[node distance = 1cm] \node [] (B) {$\text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+})$}; \node [right of = B,xshift=2.5cm] (C) {$\text{C}_\bullet(\Delta_{d+1,+})$}; \node [right of = C,xshift=2cm] (D) {$\text{C}_\bullet(\Sigma\Delta_{d+})$}; \node [right of = D,xshift=2cm] (E) {$\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+})$}; \node [right of = E,xshift=1.5cm] (F) {$\text{D}X$}; \draw [->] (B) -- (C) node[midway,anchor=north]{}; \draw [->] (C) -- (D) node[midway,anchor=north]{}; \draw [->] (D) -- (E) node[midway,anchor=north]{}; \draw [->] (E) -- (F) node[midway,anchor=north]{}; \end{tikzpicture} \end{center} where the first three maps are standard maps defined as part of the definition of $\text{U}$. Consider a map $\text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \to \text{D}X$ which is non-zero. Then there exists some simplex $\theta \colon [e] \to [d]$ in $\Delta_{d+}$ which is not mapped to zero. Consider now the map $\theta' \colon [e+1] \to [d+1]$ which maps $0$ to $0$ and maps $i$ to $\theta(i-1)+1$ for $i \ge 1$. This gives a simplex in $\Delta_{d+1,+}$ and so an element of the source of the composite above. Upon applying the first map, we get $\theta'$ again, and then upon applying the second map, we get the original $\theta \colon [e] \to [d]$ but in dimension $e+1$, then the original $\theta$, and then finally a non-zero element of $\text{D}X$ by our assumption above. This shows that $(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_n \to \Omega(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_{n+1}$ is injective in each simplicial degree. It remains to demonstrate surjectivity. The proof is similar.
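For reference, the assignment $\theta \mapsto \theta'$ used here (and again in the surjectivity argument below) can be written out explicitly:
\[
\theta' \colon [e+1] \to [d+1], \qquad
\theta'(i) =
\begin{cases}
0 & \text{if } i = 0, \\
\theta(i-1)+1 & \text{if } 1 \le i \le e+1,
\end{cases}
\]
which is monotone since $\theta$ is, and which sends only $0$ to $0$.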
Suppose we are given a map $f \colon \text{C}_\bullet(\Sigma^{\infty - n-1}\Delta_{d+1,+}) \to \text{D}X$ and suppose that it satisfies the ``$d_0 = d_1 \cdots d_{d+1} = *$'' condition required for membership in $\Omega(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_{n+1}$ in simplicial degree $d$. We then need to define a map $g \colon \text{C}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \to \text{D}X$. Given $\theta \colon [e] \to [d]$, we map it to $f(\theta')$ where $\theta'$ is defined as above. One can check directly that this is indeed a map of complexes, and we see that, upon precomposition with the first three maps above, we get the original map $f$ since, as before, $\theta' \mapsto \theta$ under the composite of the first three maps. This completes the proof that $\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X$ is a strict $\Omega$-spectrum. \\ Now, invoking Proposition~\ref{prop:fibcofibsp} (iii), it remains to show that if $X \to Y$ is an inclusion of complexes where $X$ and $Y$ are of finite type, then $\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger Y \to \text{U} \widebar{\mathbf{E}}_{\textbf{st}}^\dagger X$ is a levelwise fibration of spectra.
Thus, for each $n \ge 0$, we desire lifts of the following squares: \begin{equation}\label{eq:liftsqU1} \begin{tikzpicture}[baseline=(current bounding box.center), node distance = 1.5cm] \node [] (A) {$\Lambda_d^i$}; \node [right of = A,xshift=1cm] (B) {$(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger Y)_n$}; \node [below of = A] (C) {$\Delta_d$}; \node [right of = C,xshift=1cm] (D) {$(\text{U}\widebar{\mathbf{E}}_{\textbf{st}}^\dagger X)_n$}; \draw [->] (A) -- (B); \draw [->] (A) -- (C); \draw [->] (C) -- (D); \draw [->] (B) -- (D); \draw [->,dashed] (C) -- (B); \end{tikzpicture} \end{equation} Via the earlier identifications, this amounts to a lift of the square: \begin{equation}\label{eq:liftsqU2} \begin{tikzpicture}[baseline=(current bounding box.center), node distance = 1.5cm] \node [] (A) {$\Lambda_d^i$}; \node [right of = A,xshift=1cm] (B) {$(\text{V}(\text{D} Y))_n$}; \node [below of = A] (C) {$\Delta_d$}; \node [right of = C,xshift=1cm] (D) {$(\text{V}(\text{D} X))_n$}; \draw [->] (A) -- (B); \draw [->] (A) -- (C); \draw [->] (C) -- (D); \draw [->] (B) -- (D); \draw [->,dashed] (C) -- (B); \end{tikzpicture} \end{equation} Here $\text{V}$ is the functor $\mathsf{Ch}_{\overline{\mathbb{F}}_p} \to \mathsf{Sp}$ given by setting \[ \text{V}(Z)_{n,d} := \mathsf{Ch}_{\overline{\mathbb{F}}_p}(\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Delta_{d+}), Z) \] which we note is a right adjoint to the spectral chains functor taken as a functor to simply chain complexes, forgetting the coalgebra structure. 
Considering the composite adjunction \begin{center} \begin{tikzpicture}[node distance=3cm] \node[](A){$\mathsf{Sp}$}; \node[right of = A](B){$\mathsf{Ch}_{\overline{\mathbb{F}}_p}$}; \node[left of = A](C){$\mathsf{Spc}$}; \draw[->,transform canvas={yshift=2mm}] (A) -- (B) node[midway,anchor=south]{$\overline{\text{C}}_\bullet$}; \draw[->,transform canvas={yshift=-2mm}] (B) -- (A) node[midway,anchor=north]{$\text{V}$} node[midway,anchor=south,yshift=-0.5mm]{$\bot$}; \draw[->,transform canvas={yshift=2mm}] (C) -- (A) node[midway,anchor=south]{$\Sigma^{\infty - n}_+$}; \draw[->,transform canvas={yshift=-2mm}] (A) -- (C) node[midway,anchor=north]{$(-)_n$} node[midway,anchor=south,yshift=-0.5mm]{$\bot$}; \end{tikzpicture} \end{center} we see that the above lifting problem is equivalent to one of the following form in chain complexes: \begin{center} \begin{tikzpicture}[node distance = 1.5cm] \node [] (A) {$\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Lambda_{d+}^i)$}; \node [right of = A,xshift=1cm] (B) {$\text{D}Y$}; \node [below of = A] (C) {$\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Delta_{d+})$}; \node [right of = C,xshift=1cm] (D) {$\text{D}X$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C); \draw [->] (C) -- (D) node[midway,anchor=north]{}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\text{D}f$}; \draw [->,dashed] (C) -- (B); \end{tikzpicture} \end{center} Now, up to shifts, we have that $\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Delta_{d+}) \cong \overline{\text{C}}_\bullet(\Delta_{d+})$ and $\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Lambda_{d+}^i) \cong \overline{\text{C}}_\bullet(\Lambda_{d+}^i)$. As a result, $\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Delta_{d+})$ and $\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Lambda_{d+}^i)$ are acyclic chain complexes. 
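As an aside, this acyclicity is easy to verify by direct computation. The following sketch (in Python, outside the text's formalism; all names are ours) computes reduced mod-$p$ Betti numbers from the non-degenerate simplices, i.e., from the normalized chains, and confirms that the simplex $\Delta_d$ and the horns $\Lambda^i_d$ have vanishing reduced homology:

```python
from itertools import combinations

def rank_mod_p(rows, p):
    """Rank of a matrix (list of rows) over F_p, by Gaussian elimination; p prime."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                c = rows[i][col]
                rows[i] = [(x - c * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def reduced_betti_numbers(faces, p):
    """faces: downward-closed list of tuples of vertices (non-degenerate simplices).
    Returns [dim H~_k for k = 0..top] over F_p, via the augmented chain complex."""
    by_dim = {}
    for f in faces:
        by_dim.setdefault(len(f) - 1, []).append(tuple(sorted(f)))
    top = max(by_dim)
    basis = {k: sorted(by_dim.get(k, [])) for k in range(top + 1)}
    index = {k: {s: j for j, s in enumerate(basis[k])} for k in basis}
    def boundary_rank(k):
        # rank of d_k : C_k -> C_{k-1}, with C_{-1} = F_p via the augmentation
        if not basis.get(k):
            return 0
        rows = []
        for s in basis[k]:
            if k == 0:
                rows.append([1])
            else:
                row = [0] * len(basis[k - 1])
                for i in range(len(s)):
                    face = s[:i] + s[i + 1:]
                    row[index[k - 1][face]] = (row[index[k - 1][face]] + (-1) ** i) % p
                rows.append(row)
        return rank_mod_p(rows, p)
    # rank-nullity: dim H~_k = dim C_k - rank(d_k) - rank(d_{k+1})
    return [len(basis[k]) - boundary_rank(k) - boundary_rank(k + 1)
            for k in range(top + 1)]

def simplex_faces(d):
    return [c for r in range(1, d + 2) for c in combinations(range(d + 1), r)]

def horn_faces(d, i):
    full = tuple(range(d + 1))
    missing = tuple(v for v in full if v != i)
    return [f for f in simplex_faces(d) if f not in (full, missing)]
```

For instance, `reduced_betti_numbers(simplex_faces(3), 3)` and `reduced_betti_numbers(horn_faces(3, 0), 3)` both return lists of zeros, whereas the boundary of $\Delta_2$ (a circle) correctly shows a class in degree $1$.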
Moreover, $\overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Lambda_{d+}^i) \to \overline{\text{C}}_\bullet(\Sigma^{\infty - n}\Delta_{d+})$ is clearly an inclusion between free complexes. Thus the lefthand vertical map is a trivial cofibration in the standard projective model structure on chain complexes. On the other hand, since $X \to Y$ was an inclusion of complexes, $\text{D}Y \to \text{D}X$ is an epimorphism. Thus the lift exists, as desired. \end{proof} As a result of the above, we have a derived adjunction: \begin{center} \begin{tikzpicture}[node distance=2cm] \node[](A){$\mathsf{hSp}^{\text{op}}$}; \node[right of = A](B){$\mathsf{h}\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}$}; \draw[<-,transform canvas={yshift=2mm}] (B) -- (A) node[midway,anchor=south]{ }; \draw[<-,transform canvas={yshift=-2mm}] (A) -- (B) node[midway,anchor=north]{} node[midway,anchor=south,yshift=-0.5mm]{}; \end{tikzpicture} \end{center} \section{Algebraic Models of $p$-Adic Stable Homotopy Types} In this section, we shall provide an application of our stable operads to $p$-adic stable homotopy theory. We will do this using the derived spectral cochains adjunction \begin{center} \begin{tikzpicture}[node distance=2cm] \node[](A){$\mathsf{hSp}^{\text{op}}$}; \node[right of = A](B){$\mathsf{h}\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}$}; \draw[<-,transform canvas={yshift=2mm}] (B) -- (A) node[midway,anchor=south]{ }; \draw[<-,transform canvas={yshift=-2mm}] (A) -- (B) node[midway,anchor=north]{} node[midway,anchor=south,yshift=-0.5mm]{}; \end{tikzpicture} \end{center} constructed in the previous section. We have shown that cochains on spectra, appropriately defined, yield algebras over the stable Barratt-Eccles operad. We will now show that these cochains, endowed with this structure, yield algebraic models for $p$-adic stable homotopy types, where $p$ here is a fixed but unspecified prime, as in previous chapters. 
More specifically, we will show that, when restricted to the bounded below $p$-complete spectra of finite $p$-type, the functor $\mathsf{hSp}^{\text{op}} \to \mathsf{h}\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}$ is fully faithful. This is equivalent, for formal reasons, to showing that, for any such spectrum $E$, the ``unit'' of the derived adjunction $E \to (\text{der} \: \text{U}) \circ (\text{der} \: \overline{\text{C}}^\bullet(-)) (E)$ (``der'' indicates a derived functor) is an isomorphism. For this reason, following~\cite{Mandell}, we make the following definition. \begin{Definition}\label{def:res_sp} A spectrum $E$ is said to be \textit{resolvable} if the ``unit'' of the derived spectral cochains adjunction above, evaluated at this spectrum, is an isomorphism. \end{Definition} \subsection{The Case of the Generalized Eilenberg-MacLane Spectra $\Sigma^n\normalfont{\text{H}}\mathbb{F}_p$} To begin, we wish to prove that $\text{H}\mathbb{F}_p$, and more generally $\Sigma^n\text{H}\mathbb{F}_p$, $n \in \mathbb{Z}$, is resolvable. In order to do this, first, note that, due to Proposition~\ref{prop:EM_spec_fib}, in computing the derived cochains functor, we need not perform any replacement of $\Sigma^n\text{H}\mathbb{F}_p$, though we do still need to perform a cofibrant replacement of $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$. We will do this by constructing a cell model for $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$. Fix $n \in \mathbb{Z}$. Intuitively, one expects that $\Sigma^n\text{H}\mathbb{F}_p$ on the spectral side ought to correspond to $\overline{\textbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$, or something similar, on the algebraic side. As per Proposition~\ref{prop:cohom_ops_Fpbar}, we have an operation $P^0$ on the cohomology of the free algebra $\overline{\textbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$.
As per Proposition~\ref{prop:P01sp}, $P^0$ always acts by the identity on spectral cochains. Moreover, we shall see that this is the only special circumstance which we need to take into account, in that we shall be able to construct our cell model for $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ by forcing this operation $P^0$ to be the identity. \\ First, recall that $\overline{\text{C}}^\bullet(\text{H}\mathbb{F}_p)$ is given by applying $- \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p$ to the following: \[ \text{lim}(\cdots \to \text{C}^\bullet(\text{K}(\mathbb{F}_p,2))[-2] \to \text{C}^\bullet(\text{K}(\mathbb{F}_p,1))[-1] \to \text{C}^\bullet(\text{K}(\mathbb{F}_p,0))) \] Thus, in particular $\overline{\text{C}}^0(\text{H}\mathbb{F}_p)$ is given by applying $- \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p$ to the following: \[ \text{lim}(\cdots \to \text{C}^2(\text{K}(\mathbb{F}_p,2)) \to \text{C}^1(\text{K}(\mathbb{F}_p,1)) \to \text{C}^0(\text{K}(\mathbb{F}_p,0))) \] Next, recall that, for each $m \ge 0$, $\text{K}(\mathbb{F}_p,m)$ is the based simplicial set whose $d$-simplices are given by $\text{Z}^m(\Delta_d;\mathbb{F}_p)$ (note that this is $*$ when $d < m$ and $\mathbb{F}_p$ when $d = m$). For each $m \ge 0$, we have a canonical fundamental class given by the cocycle $k_m$ in $\text{C}^m\text{K}(\mathbb{F}_p,m)$ which sends $\alpha \in \text{Z}^m(\Delta_m;\mathbb{F}_p)$ to $\alpha(\text{id}_{[m]})$. Upon unravelling the definition of the structure maps for Eilenberg-MacLane spectra, we find that, for each $m \ge 0$, the map $\text{C}^{m+1}(\text{K}(\mathbb{F}_p,m+1)) \to \text{C}^m(\text{K}(\mathbb{F}_p,m))$ sends $k_{m+1}$ to $k_m$. As such, we have a well-defined canonical element $(\cdots, k_2, k_1, k_0)$ in the inverse limit and so a well-defined canonical element $h_0 = (\cdots, k_2, k_1, k_0) \otimes 1$ in $\overline{\text{C}}^0(\text{H}\mathbb{F}_p)$.
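The description of $\text{K}(\mathbb{F}_p,m)$ above admits a quick computational sanity check (a sketch in Python, outside the text's formalism; all names are ours): working with normalized cochains, the space $\text{Z}^m(\Delta_d;\mathbb{F}_p)$ of $m$-cocycles is trivial for $d < m$ and one-dimensional for $d = m$, matching the parenthetical remark.

```python
from itertools import combinations

def rank_mod_p(rows, p):
    """Rank of a matrix (list of rows) over F_p, by Gaussian elimination; p prime."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                c = rows[i][col]
                rows[i] = [(x - c * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def cocycle_dim(d, m, p):
    """dim_{F_p} Z^m(Delta_d; F_p), using normalized cochains: the non-degenerate
    m-simplices of Delta_d are the (m+1)-element subsets of {0, ..., d}."""
    Cm = list(combinations(range(d + 1), m + 1))    # basis of C^m
    Cm1 = list(combinations(range(d + 1), m + 2))   # basis of C^{m+1}
    if not Cm:
        return 0
    if not Cm1:
        return len(Cm)  # the coboundary is the zero map
    idx = {s: j for j, s in enumerate(Cm)}
    # matrix of delta: (delta c)(sigma) = sum_i (-1)^i c(d_i sigma)
    rows = []
    for sigma in Cm1:
        row = [0] * len(Cm)
        for i in range(len(sigma)):
            face = sigma[:i] + sigma[i + 1:]
            row[idx[face]] = (row[idx[face]] + (-1) ** i) % p
        rows.append(row)
    return len(Cm) - rank_mod_p(rows, p)  # rank-nullity
```

For instance, `cocycle_dim(m, m, p)` returns `1` for every $m$ (so $\text{Z}^m(\Delta_m;\mathbb{F}_p) \cong \mathbb{F}_p$), while `cocycle_dim(d, m, p)` returns `0` whenever $d < m$.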
More generally, with $n$ as above, consider again $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$, which is given by applying $- \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p$ to the following: \[ \text{lim}(\cdots \to \text{C}^\bullet(\text{K}(\mathbb{F}_p,n+2))[-2] \to \text{C}^\bullet(\text{K}(\mathbb{F}_p,n+1))[-1] \to \text{C}^\bullet(\text{K}(\mathbb{F}_p,n))) \] We have that, in particular, $\overline{\text{C}}^n(\Sigma^n\text{H}\mathbb{F}_p)$ is given by applying $- \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p$ to the following: \[ \text{lim}(\cdots \to \text{C}^{n+2}(\text{K}(\mathbb{F}_p,n+2)) \to \text{C}^{n+1}(\text{K}(\mathbb{F}_p,n+1)) \to \text{C}^n(\text{K}(\mathbb{F}_p,n))) \] Once again, upon unravelling the definition of the structure maps for generalized Eilenberg-MacLane spectra, we find that, for each $m \ge n$, the map $\text{C}^{m+1}(\text{K}(\mathbb{F}_p,m+1)) \to \text{C}^m(\text{K}(\mathbb{F}_p,m))$ sends $k_{m+1}$ to $k_m$. As such, we have a well-defined canonical element $(\cdots, k_{n+2}, k_{n+1}, k_n)$ in the inverse limit and so a well-defined canonical element \[ h_n = (\cdots, k_{n+2}, k_{n+1}, k_n) \otimes 1 \] in $\overline{\text{C}}^n(\Sigma^n\text{H}\mathbb{F}_p)$. Note that, for each $n$, $h_n$ is a cocycle, because each $k_m$, $m \ge 0$, is a cocycle. \\ Now, we shall construct our cell model for $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ by attaching a cell to the free algebra $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$ to set $P^0$ to act by the identity. Let $i_n$ denote the degree $n$ cocycle of $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$ given by the tensor $\text{id} \otimes 1$. 
Let $p_n$ be a representative of the class $(1-P^0)[i_n]$, and denote by the same symbol the map $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \to \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$ induced by the map $\overline{\mathbb{F}}_p[n] \to \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \colon 1 \mapsto p_n$. Now define an $\widebar{\mathpzc{E}}_{\text{st}}^\dagger$-algebra $J_n$ via the following pushout diagram: \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$}; \node[below= of A](C){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]$}; \node[right= of A](B){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$}; \node[below= of B,yshift=-0.75mm](D){$J_n$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$p_n$}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(A)!.2!(D)$)] \draw +(0,-0.25) -- +(0,0) -- +(0.25,0); \end{scope} \end{tikzpicture} \end{center} This algebra $J_n$ is our candidate cell model for $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$. In order to show that it is indeed a model for these cochains in an appropriate sense, we construct a comparison map $J_n \to \overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$. First, let $f \colon \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \to \overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ denote the map induced by the map $\overline{\mathbb{F}}_p[n] \to \overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \colon 1 \mapsto h_n$.
Next, let $q_n$ denote a degree $n+1$ element in $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ such that $\partial(q_n) = f(p_n)$ (such an element $q_n$ exists since, as per Proposition~\ref{prop:P01sp}, the class $(1-P^0)[h_n] = [f(p_n)]$ is zero). Denote by $g$ the map $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n] \to \overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ induced by the map $\text{C}\overline{\mathbb{F}}_p[n] \to \overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ which sends the degree $n$ and $n+1$ generators, respectively, to $f(p_n)$ and $q_n$. Now, by checking the images of $i_n$, we have that the following square commutes: \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$}; \node[below= of A](C){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]$}; \node[right= of A](B){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$}; \node[below= of B,yshift=-0.75mm](D){$\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$p_n$}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{$g$}; \draw[->] (B) -- (D) node[midway,anchor=west]{$f$}; \end{tikzpicture} \end{center} As such, we get an induced map: \[ a \colon J_n \to \overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \] The following result now makes precise that $J_n$ is a cell model for $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$. \begin{Proposition}\label{prop:cell_model_EM} For each $n \in \mathbb{Z}$, the map $a \colon J_n \to \overline{\emph{C}}{}^\bullet(\Sigma^n\emph{H}\mathbb{F}_p)$ above is a quasi-isomorphism.
\end{Proposition} \begin{proof} Consider the composite: \[ \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \oplus_{\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]} \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n] \longrightarrow J_n \overset{a}\longrightarrow \overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \] By Proposition~\ref{prop:pushouts_alg}, the first map is a quasi-isomorphism, and so it suffices to demonstrate that the composite, say $c$, is a quasi-isomorphism. Consider now instead the following composite: \[ \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \overset{b}\longrightarrow \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \oplus_{\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]} \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n] \overset{c}\longrightarrow \overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \] Here $b$ is the canonical map from the first summand in the pushout. We claim that, upon taking cohomology, both $b$ and $c \circ b$ are onto and have the same kernel. It suffices to do this, for then $c$ is necessarily a quasi-isomorphism.
Let $\iota$ denote the map $\overline{\mathbf{E}}_{\textbf{st}}^\dagger \overline{\mathbb{F}}_p[n] \to \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]$ and consider the following exact sequence: \[ 0 \to \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \overset{p_n-\iota}\longrightarrow \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \oplus \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n] \longrightarrow \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \oplus_{\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]} \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n] \to 0 \] By Proposition~\ref{prop:freealghomEbarpoint}, we can identify the cohomology of $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$ with $\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n]$, and by Propositions~\ref{prop:freealghomEbarpoint} and~\ref{prop:Ebarmonad}, we can identify the cohomology of $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \oplus \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]$ also with $\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n]$. Moreover, under this identification, the map corresponding to $p_n-\iota$ sends $1$ to $1-P^0$ and so, more generally, becomes right multiplication by $1-P^0$. Noting that this map is injective (which follows from the fact that the Adem relations preserve length), it follows from the long exact sequence in cohomology that, on cohomology, the map $b$ is onto with kernel the left ideal of $\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n]$ generated by $1-P^0$, which we note, by Proposition~\ref{prop:BhatandA}, coincides with the two-sided ideal generated by $1-P^0$. \\ Now consider the composite $c \circ b$.
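In summary, since right multiplication by $1-P^0$ is injective, the connecting maps in the long exact sequence in cohomology vanish, and the long exact sequence collapses in each degree to a short exact sequence
\[
0 \longrightarrow \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n] \overset{\cdot(1-P^0)}\longrightarrow \widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n] \longrightarrow \text{H}^\bullet\left(\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \oplus_{\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]} \overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]\right) \longrightarrow 0
\]
identifying the cohomology of the pushout with the quotient of $\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n]$ by the ideal generated by $1-P^0$.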
Upon identifying once more the cohomology of $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$ with $\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n]$, we have a map $\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n] \to \text{H}^\bullet(\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p))$. By Propositions~\ref{prop:P01sp} and~\ref{prop:BhatandA}, we get an induced map: \[ (\widehat{\mathcal{B}} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n])/(1-P^0) \cong \mathcal{A} \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p[n] \to \text{H}^\bullet(\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)) \] Noting that $1$ is mapped to the fundamental class $[h_n]$, by the standard calculation of the cohomology of Eilenberg-MacLane spectra, we have that this map is an isomorphism. As such, just as with $b$, at the level of cohomology, $c \circ b$ is onto with kernel the two-sided ideal generated by $1-P^0$, and this completes the proof. \end{proof} Having constructed our cofibrant replacement of $\overline{\text{C}}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$, we now need to consider how this replacement transforms under application of $\text{U}$. For this purpose, we have the following result. \begin{Proposition}\label{prop:Utosquare} We have the following: \begin{itemize} \item[(i)] $\emph{U}\overline{\mathbf{E}}_{\normalfont{\textbf{st}}}^\dagger\overline{\mathbb{F}}_p[n] \cong \Sigma^n\emph{H}\overline{\mathbb{F}}_p$ and, under the identification, $\emph{U}p_n$ induces on $\pi^{\emph{st}}_n$ the map $1 - \Phi$ where $\Phi$ is the Frobenius automorphism of $\overline{\mathbb{F}}_p$. \item[(ii)] $\emph{U}\overline{\mathbf{E}}_{\normalfont{\textbf{st}}}^\dagger\emph{C}\overline{\mathbb{F}}_p[n] \sim *$ or, more specifically, $\emph{U}\overline{\mathbf{E}}_{\normalfont{\textbf{st}}}^\dagger\emph{C}\overline{\mathbb{F}}_p[n]$ is a contractible Kan complex in each spectral degree.
\end{itemize} \end{Proposition} \begin{proof} (i): In spectral degree $m$ and simplicial degree $d$, we have: \begin{align*} (\text{U}\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n])_{m,d} &= \overline{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n],\overline{\text{C}}{}^\bullet(\Sigma^{\infty - m}\Delta_{d+})) \\ &\cong \mathsf{Co}_{\overline{\mathbb{F}}_p}(\overline{\mathbb{F}}_p[n], \overline{\text{C}}{}^\bullet(\Sigma^{\infty - m}\Delta_{d+})) \\ &\cong \text{Z}^n(\overline{\text{C}}{}^\bullet(\Sigma^{\infty - m}\Delta_{d+})) \\ &= \text{Z}^n(\text{C}^\bullet(\Sigma^{\infty - m}\Delta_{d+}) \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \\ &\cong \text{Z}^n(\text{C}^\bullet(\Delta_{d+})[-m] \otimes_{\mathbb{F}_p} \overline{\mathbb{F}}_p) \\ &\cong \text{Z}^{n+m}(\Delta_{d};\overline{\mathbb{F}}_p) \\ &= (\Sigma^n\text{H}\overline{\mathbb{F}}_p)_{m,d} \end{align*} One can readily verify directly that the actions of the simplicial operators coincide, as do the spectral structure maps. \\ By Proposition~\ref{prop:EM_spec_fib} and Remark~\ref{rmk:htpy_grps_fib_sp}, we can compute the $n^{\text{th}}$ stable homotopy group of $\Sigma^n\text{H}\overline{\mathbb{F}}_p$ via the $n^{\text{th}}$ unstable homotopy group of the space in spectral degree $0$.
Moreover, we find that, under the identification $\pi^{\text{st}}_n(\text{U}\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]) \cong \pi^{\text{st}}_n(\Sigma^n\text{H}\overline{\mathbb{F}}_p) \cong \overline{\mathbb{F}}_p$, an element $\lambda \in \overline{\mathbb{F}}_p$ corresponds to the class of the map \[ \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \longrightarrow \overline{\text{C}}{}^\bullet(\Sigma^{\infty}\Delta_{n+}) \cong \overline{\text{C}}{}^\bullet(\Delta_{n+}) \] which sends $i_n$ to the cochain $\alpha$ which sends $\text{id}_{[n]} \in (\Delta_n)_n$ to $\lambda$. To act by $\text{U}p_n$, we precompose with $p_n \colon \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \to \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$. Thus, we need to compute the image of $i_n$ under the following composite: \[ \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \overset{p_n}\longrightarrow \overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \longrightarrow \overline{\text{C}}{}^\bullet(\Sigma^{\infty}\Delta_{n+}) \cong \overline{\text{C}}{}^\bullet(\Delta_{n+}) \] To do this, we need to act by $1-P^0$ on $\alpha \in \overline{\text{C}}{}^\bullet(\Sigma^{\infty}\Delta_{n+})$. If we unravel the definition of the action of $1-P^0$ on the cochains on the spectrum $\Sigma^{\infty}\Delta_{n+}$, we find that this action reduces to the action of $1-P^0$ on the cochains on the space $\Delta_n$ and, moreover, in the proof of Proposition~\ref{prop:P01sp}, we saw that the action of $P^0$ on the cochains on a space sends a cochain $\beta$ to a cochain $\beta'$ such that $\beta'(s) = \beta(s)^p$ for all simplices $s$. Thus, under the composite above, $i_n$ maps to a cochain which sends $\text{id}_{[n]}$ to $\lambda-\lambda^p$, as desired.
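Before turning to part (ii), we record explicitly the arithmetic content of the computation just made. The map $1-\Phi$, that is $\lambda \mapsto \lambda - \lambda^p$, sits in an Artin--Schreier short exact sequence
\[ 0 \longrightarrow \mathbb{F}_p \longrightarrow \overline{\mathbb{F}}_p \overset{1-\Phi}\longrightarrow \overline{\mathbb{F}}_p \longrightarrow 0 \]
Indeed, the kernel consists of the roots of $X^p - X$, which are precisely the elements of the prime field $\mathbb{F}_p$, while surjectivity holds because, for any $\mu \in \overline{\mathbb{F}}_p$, the polynomial $X^p - X + \mu$ has a root $\lambda$ in the algebraically closed field $\overline{\mathbb{F}}_p$, and then $\lambda - \lambda^p = \mu$. It is this exactness that produces a single copy of $\mathbb{F}_p$ in the homotopy of the fibre considered below.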
\\ (ii): By Proposition~\ref{prop:Ebarmonad} (i), as $\text{C}\overline{\mathbb{F}}_p$ is acyclic, the canonical map $\overline{\mathpzc{E}}^\dagger_{\text{st}}(0) \to \overline{\mathbf{E}}^\dagger_{\textbf{st}}\text{C}\overline{\mathbb{F}}_p[n]$ is a quasi-isomorphism. Since $\overline{\mathbf{E}}^\dagger_{\textbf{st}}\text{C}\overline{\mathbb{F}}_p[n]$ is a cell algebra, by Propositions~\ref{prop:semimodelqadj} and~\ref{prop:spec_coch_adj_qadj}, we have that $\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}\text{C}\overline{\mathbb{F}}_p[n] \to \text{U}\overline{\mathpzc{E}}^\dagger_{\text{st}}(0)$ is a weak equivalence of spectra. As $\overline{\mathpzc{E}}^\dagger_{\text{st}}(0)$ is the initial algebra, by the definition of $\text{U}$, we have that $\text{U}\overline{\mathpzc{E}}^\dagger_{\text{st}}(0) = *$. Moreover, we saw in the proof of Proposition~\ref{prop:spec_coch_adj_qadj} that $\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}X$ is a strict $\Omega$-spectrum, and so a fibrant spectrum, when $X$ is a complex of finite type, so that $\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}\text{C}\overline{\mathbb{F}}_p[n]$ is a fibrant spectrum. Thus, by Proposition~\ref{prop:fibcofibsp} (ii), $\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}\text{C}\overline{\mathbb{F}}_p[n] \to \text{U}\overline{\mathpzc{E}}^\dagger_{\text{st}}(0)$ is a levelwise weak equivalence, from which the desired result immediately follows. \end{proof} We can now demonstrate the resolvability of $\Sigma^n\text{H}\mathbb{F}_p$. \begin{Proposition}\label{prop:HFp_resolv} For each $n \in \mathbb{Z}$, $\Sigma^n\emph{H}\mathbb{F}_p$ is resolvable.
\end{Proposition} \begin{proof} Consider again the pushout square \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$}; \node[below= of A](C){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]$}; \node[right= of A](B){$\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n]$}; \node[below= of B,yshift=-0.75mm](D){$J_n$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$p_n$}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(A)!.2!(D)$)] \draw +(0,-0.25) -- +(0,0) -- +(0.25,0); \end{scope} \end{tikzpicture} \end{center} and the map $J_n \to \overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p)$ which we constructed earlier in this section. Upon applying $\text{U}$ to the pushout square, as $\text{U}$ is a left adjoint and maps to the opposite category of spectra, we get a pullback square of spectra as follows: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$\text{U}J_n$}; \node[below= of A](C){$\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}\overline{\mathbb{F}}_p[n]$}; \node[right= of A](B){$\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}\text{C}\overline{\mathbb{F}}_p[n]$}; \node[below= of B,yshift=1mm](D){$\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}\overline{\mathbb{F}}_p[n]$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->>] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{$\text{U}p_n$}; \draw[->>] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(D)!.2!(A)$)] \draw +(-0.25,0) -- +(0,0) -- +(0,0.25); \end{scope} \end{tikzpicture} \end{center} Here the vertical maps are fibrations because $\overline{\mathbf{E}}_{\textbf{st}}^\dagger\overline{\mathbb{F}}_p[n] \to 
\overline{\mathbf{E}}_{\textbf{st}}^\dagger\text{C}\overline{\mathbb{F}}_p[n]$ is a cofibration between cell algebras and because, by Proposition~\ref{prop:spec_coch_adj_qadj}, $\text{U}$ maps cofibrations between cofibrant algebras to fibrations of spectra. By Proposition~\ref{prop:cell_model_EM}, the unit of the derived adjunction is represented by the composite \[ \Sigma^n\text{H}\mathbb{F}_p \to \text{U}\overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \to \text{U}J_n \] so that we need to show that this map is a weak equivalence. We saw in the proof of Proposition~\ref{prop:spec_coch_adj_qadj} that $\text{U}\overline{\mathbf{E}}^\dagger_{\textbf{st}}X$ is a strict $\Omega$-spectrum, and so a fibrant spectrum, when $X$ is a complex of finite type. This implies that all spectra in the square above are fibrant. By Proposition~\ref{prop:Utosquare} and the long exact sequence in stable homotopy groups, we have that $\pi_i^{\text{st}}(\text{U}J_n)$ is $\mathbb{F}_p$ when $i = n$, and zero otherwise. Thus, it suffices to show that the map $\Sigma^n\text{H}\mathbb{F}_p \to \text{U}J_n$, say $\eta$, is an isomorphism on $\pi^{\text{st}}_n$. This amounts to showing that the map \[ \mathsf{hSp}(\Sigma^\infty\mathbb{S}^n,\Sigma^n\text{H}\mathbb{F}_p) \to \mathsf{hSp}(\Sigma^\infty\mathbb{S}^n, \text{U}J_n) = \mathsf{hSp}^{\text{op}}(\text{U}J_n,\Sigma^\infty\mathbb{S}^n) \cong \mathsf{h}\widebar{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}(J_n,\overline{\text{C}}{}^\bullet(\Sigma^\infty\mathbb{S}^n)) \] induced by $\eta$ and the derived adjunction isomorphism is bijective, or equivalently, injective. For each $\lambda \in \mathbb{F}_p$, consider the map $\sigma_\lambda \colon \Sigma^\infty\mathbb{S}^n \to \Sigma^n\text{H}\mathbb{F}_p$ given by the map $\mathbb{S}^n \to \text{K}(\mathbb{F}_p,n)$ which sends the unique non-degenerate $n$-simplex to the $n$-cocycle $\alpha$ on $\Delta_n$ defined by $\alpha(\text{id}_{[n]}) = \lambda$.
The images of these maps under the localization functor $\gamma_{\mathsf{Sp}} \colon \mathsf{Sp} \to \mathsf{hSp}$ give the $p$ distinct maps in $\mathsf{hSp}(\Sigma^\infty\mathbb{S}^n,\Sigma^n\text{H}\mathbb{F}_p)$. Fix $\lambda \in \mathbb{F}_p$ and consider $\sigma_\lambda$. Unravelling the definition of the above map, the image of $\gamma_{\mathsf{Sp}}(\sigma_\lambda)$ is computed as follows: form the composite $J_n \to \overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \to \overline{\text{C}}{}^\bullet(\Sigma^\infty\mathbb{S}^n)$, where the second map is induced by $\sigma_\lambda$ and the first map is the adjoint of $\eta$, and then take the image of this map under the localization functor $\gamma_{\overline{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}} \colon \overline{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg} \to \mathsf{h}\overline{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}$. We have that, for different values of $\lambda$, the maps $\overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \to \overline{\text{C}}{}^\bullet(\Sigma^\infty\mathbb{S}^n)$ differ on cohomology, and thus so must the composite maps $J_n \to \overline{\text{C}}{}^\bullet(\Sigma^n\text{H}\mathbb{F}_p) \to \overline{\text{C}}{}^\bullet(\Sigma^\infty\mathbb{S}^n)$, and as a result the images of these maps under $\gamma_{\overline{\mathpzc{E}}_{\text{st}}^\dagger\text{-}\mathsf{Alg}}$ must be distinct. Thus the above map is injective, and so bijective, as desired. \end{proof} \subsection{Fibration Theorems} Above, we have demonstrated the resolvability of (generalized) Eilenberg-MacLane spectra. We now demonstrate results which will also allow us to induct up Postnikov towers for more general resolvability results.
\begin{Proposition}\label{prop:resolv_inv_limits} Let $E$ be a spectrum and suppose that it can be described as the inverse limit of a diagram \[ \cdots \to E_2 \to E_1 \to E_0 \] such that: \begin{itemize} \item Each map $E_{n+1} \to E_n$ is a fibration and $E_0$ is fibrant. \item The canonical map $\emph{colim}\,\overline{\emph{H}}{}^\bullet E_n \to \overline{\emph{H}}{}^\bullet E$ is an isomorphism. \end{itemize} Then $E$ is resolvable whenever the $E_n$, for all $n \ge 0$, are resolvable. \end{Proposition} \begin{proof} Suppose that the $E_n$ are resolvable. We can factor maps of $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$-algebras into relative cell inclusions followed by trivial fibrations. Applying this to the cotower $\overline{\text{C}}{}^\bullet E_0 \to \overline{\text{C}}{}^\bullet E_1 \to \overline{\text{C}}{}^\bullet E_2 \to \cdots$ we get a diagram of $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$-algebras as follows: \begin{center} \begin{tikzpicture}[node distance = 2cm] \node [] (A) {$\overline{\mathbb{F}}_p$}; \node [right of = A,xshift=0cm] (B) {$A_0$}; \node [below of = B] (C) {$\overline{\text{C}}{}^\bullet E_0$}; \node [right of = C] (D) {$\overline{\text{C}}{}^\bullet E_1$}; \node [right of = B,xshift=0cm] (E) {$A_1$}; \node [right of = E,xshift=0cm] (F) {$\cdots$}; \node [right of = D,xshift=0cm] (G) {$\cdots$}; \draw [right hook->] (A) -- (B) node[midway,anchor=south]{}; \draw [right hook->] (B) -- (E) node[midway,anchor=west]{}; \draw [right hook->] (E) -- (F) node[midway,anchor=west]{}; \draw [->] (C) -- (D) node[midway,anchor=west]{}; \draw [->] (D) -- (G) node[midway,anchor=west]{}; \draw [->>] (B) -- (C) node[midway,anchor=west]{$\sim$}; \draw [->>] (E) -- (D) node[midway,anchor=west]{$\sim$}; \end{tikzpicture} \end{center} Set $A = \text{colim}\,A_n$. 
Then, by the assumption that $\overline{\text{H}}{}^\bullet E \cong \text{colim}\,\overline{\text{H}}{}^\bullet E_n$, we have that the canonical map $A \to \overline{\text{C}}{}^\bullet E$ is a quasi-isomorphism. Applying $\text{U}$, we have that $\text{U}A$ is the inverse limit of the $\text{U}A_n$ and we have a commutative diagram as follows: \begin{center} \begin{tikzpicture}[node distance = 2cm] \node [] (A) {$E_1$}; \node [right of = A,xshift=0cm] (B) {$E_0$}; \node [below of = A] (C) {$\text{U}A_1$}; \node [below of = B] (D) {$\text{U}A_0$}; \node [left of = A,xshift=0cm] (E) {$\cdots$}; \node [left of = C,xshift=0cm] (G) {$\cdots$}; \draw [->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\sim$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\sim$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [->] (E) -- (A) node[midway,anchor=south]{}; \draw [->] (G) -- (C) node[midway,anchor=south]{}; \end{tikzpicture} \end{center} Here the vertical maps are the composites $E_n \to \text{U}\overline{\text{C}}{}^\bullet E_n \to \text{U}A_n$ which are quasi-isomorphisms as the $E_n$ are resolvable and these composites represent components of the unit of the derived adjunction. Moreover, since $\text{U}$ maps cofibrations between cofibrant algebras to fibrations of spectra, each map in the bottom row is a fibration and each of the $\text{U}A_n$ are fibrant. As weak equivalences between fibrant spectra are simply levelwise weak equivalences (see Proposition~\ref{prop:fibcofibsp} (ii)), by the usual argument for inverse limits of weak equivalences of spaces along towers of fibrations, we find that the induced map on limits $E \to \text{U}A$ is a weak equivalence. This map is the composite $E \to \text{U}\overline{\text{C}}{}^\bullet E \to \text{U}A$ and so represents the evaluation of the unit of the derived adjunction at $E$. This unit is thus an isomorphism at $E$, and so $E$ is resolvable, as desired. 
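One can also phrase the final limit comparison via the Milnor exact sequence: for a tower of fibrations of fibrant spectra $\cdots \to X_1 \to X_0$ with inverse limit $X$, there is a natural short exact sequence
\[ 0 \longrightarrow {\lim}^1\, \pi^{\text{st}}_{i+1}(X_n) \longrightarrow \pi^{\text{st}}_i(X) \longrightarrow \lim\, \pi^{\text{st}}_i(X_n) \longrightarrow 0 \]
so that a levelwise weak equivalence of such towers induces isomorphisms on the $\lim$ and ${\lim}^1$ terms, and hence a weak equivalence on inverse limits.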
\end{proof} Next, we wish to consider resolvability of fibre products. \begin{Proposition}\label{prop:resolv_fib_prods} Let $E$ be a spectrum and suppose that it can be written as a fibre product \begin{center} \begin{tikzpicture}[node distance=0.75cm] \node(A){$E$}; \node[below= of A](C){$E_1$}; \node[right= of A,xshift=1cm](B){$E_2$}; \node[below= of B,yshift=0mm](D){$F$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(D)!.2!(A)$)] \draw +(-0.25,0) -- +(0,0) -- +(0,0.25); \end{scope} \end{tikzpicture} \end{center} such that: \begin{itemize} \item $E_1, E_2$ and $F$ are fibrant and $E_1, E_2$ are of finite $p$-type. \item The righthand vertical map $E_2 \to F$ is a fibration. \item There exists an $N$ such that, for $n > N$, $(E_1)_n, (E_2)_n$ are connected and $F_n$ is simply connected. \end{itemize} Then $E$ is resolvable whenever $E_1, E_2, F$ are resolvable. \end{Proposition} \begin{proof} Suppose that $E_1, E_2$ and $F$ are resolvable. 
For the diagram $\overline{\text{C}}{}^\bullet(E_1) \leftarrow \overline{\text{C}}{}^\bullet(F) \rightarrow \overline{\text{C}}{}^\bullet(E_2)$, we take a cofibrant approximation: \begin{center} \begin{tikzpicture}[node distance = 2cm] \node [] (A) {$A$}; \node [right of = A,xshift=0cm] (B) {$C$}; \node [below of = A] (C) {$\overline{\text{C}}{}^\bullet(F)$}; \node [below of = B] (D) {$\overline{\text{C}}{}^\bullet(E_2)$}; \node [left of = A,xshift=0cm] (E) {$B$}; \node [left of = C,xshift=0cm] (G) {$\overline{\text{C}}{}^\bullet(E_1)$}; \draw [right hook->] (A) -- (B) node[midway,anchor=south]{}; \draw [->] (A) -- (C) node[midway,anchor=west]{$\sim$}; \draw [->] (B) -- (D) node[midway,anchor=west]{$\sim$}; \draw [->] (C) -- (D) node[midway,anchor=south]{}; \draw [left hook->] (A) -- (E) node[midway,anchor=south]{}; \draw [->] (C) -- (G) node[midway,anchor=south]{}; \draw [->] (E) -- (G) node[midway,anchor=west]{$\sim$}; \end{tikzpicture} \end{center} Suppose, for the time being, that we have shown that the induced map $B \amalg_A C \to \overline{\text{C}}{}^\bullet(E)$ is a quasi-isomorphism. Then, having formed a cofibrant replacement of the cochains $\overline{\text{C}}{}^\bullet(E)$, the unit of the derived adjunction, evaluated at $E$, is represented by the composite $E \to \text{U}\widebar{\text{C}}^\bullet (E) \to \text{U}(B \amalg_A C)$. 
Moreover, we have the following commutative diagram: \begin{center} \begin{tikzpicture}[node distance = 3cm] \node [] (A) {$E$}; \node [below of = A] (B) {$E_1$}; \node [right of = A] (C) {$E_2$}; \node [below of = C] (D) {$F$}; \node [below right of = A,yshift=1cm,xshift=3cm] (AA) {$\text{U}(B \amalg_A C)$}; \node [below of = AA] (BB) {$\text{U}B$}; \node [right of = AA] (CC) {$\text{U}C$}; \node [below of = CC] (DD) {$\text{U}A$}; \draw[->] (A) -- (B); \draw[->] (A) -- (C); \draw[->] (B) -- (D); \draw[->] (C) -- (D); \draw[->] (AA) -- (BB); \draw[->] (AA) -- (CC); \draw[->] (BB) -- (DD); \draw[->] (CC) -- (DD); \draw[->] (A) -- (AA); \draw[->] (B) -- (BB); \draw[->] (C) -- (CC); \draw[->] (D) -- (DD); \end{tikzpicture} \end{center} If $E_1, E_2, F$ are resolvable, each of the maps $E_1 \to \text{U}B$, $E_2 \to \text{U}C$ and $F \to \text{U}A$ is a weak equivalence. Moreover, each of the lefthand and righthand squares are pullback squares, and the maps $E_2 \to F$ and $\text{U}C \to \text{U}A$ are fibrations (the latter because, as in the proof of Proposition~\ref{prop:spec_coch_adj_qadj}, $\text{U}$ maps cofibrations to fibrations of spectra). It follows that $E \to \text{U}(B \amalg_A C)$ is also a weak equivalence, and this is exactly what is desired to show that $E$ is resolvable. \\ Due to the argument just described, it remains only to show that the map $B \amalg_A C \to \overline{\text{C}}{}^\bullet(E)$ is a quasi-isomorphism. Recall that the pushout may be computed via the bar construction \[ \text{Bar}_n(B,A,C) = B \amalg \underbrace{A \amalg \cdots \amalg A}_{n \: \text{factors}} \amalg \, C \] in that, by Proposition~\ref{prop:pushout_normalization} (or really the analogue of it for $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$), the induced map $\text{N}(\text{Bar}_\bullet(B,A,C)) \to B \amalg_A C$ from the normalization is a quasi-isomorphism. We wish to relate this pushout to the fibre product.
We first construct cochains on the fibre product via a cobar construction. The cobar construction is defined as follows: \[ \text{Cobar}^n(E_1,F,E_2) = E_1 \times \underbrace{F \times \cdots \times F}_{n \: \text{factors}} \times E_2 \] This gives a cosimplicial spectrum with coface maps induced by diagonal maps and codegeneracies by projections. Applying cochains, we get a simplicial cochain complex: \[ \overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2)) \] Considering $E$ as a constant cosimplicial spectrum, we have an induced map \[ \text{N}(\overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2))) \to \overline{\text{C}}{}^\bullet(E) \] and we claim that this is a quasi-isomorphism. Expressing, as in Section~\ref{subsec:spec_cochains}, spectral cochains as an inverse limit of space level cochains, we have that the map $\overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2)) \to \overline{\text{C}}{}^\bullet(E)$ is an inverse limit of the maps: \[ \overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet((E_1)_n,F_n,(E_2)_n))[-n] \to \overline{\text{C}}{}^\bullet(E_n)[-n] \] As in the proof of Lemma 5.2 in~\cite{Mandell} (a lemma proven in the course of demonstrating the unstable analogue of our result here), for sufficiently large $n$, upon normalization, these maps are quasi-isomorphisms. Moreover, the maps forming the inverse limit tower are epimorphisms since $E_1, E_2, F$ are fibrant. Thus, by a $\text{lim}^1$ argument, we have that the map $\overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2)) \to \overline{\text{C}}{}^\bullet(E)$ between the inverse limits is also a quasi-isomorphism. \\ Now we relate the bar and cobar constructions.
Using the various projection maps on $E_1 \times F \times \cdots \times F \times E_2$, we have maps: \[ B \amalg A \amalg \cdots \amalg A \amalg C \longrightarrow \overline{\text{C}}{}^\bullet(E_1) \amalg \overline{\text{C}}{}^\bullet(F) \amalg \cdots \amalg \overline{\text{C}}{}^\bullet(F) \amalg \overline{\text{C}}{}^\bullet(E_2) \longrightarrow \overline{\text{C}}{}^\bullet(E_1 \times F \times \cdots \times F \times E_2) \] These maps are quasi-isomorphisms because if we postcompose with the map \[ \overline{\text{C}}{}^\bullet(E_1 \times F \times \cdots \times F \times E_2) \longrightarrow \overline{\text{C}}{}^\bullet(E_1 \amalg F \amalg \cdots \amalg F \amalg E_2) \] induced by the canonical map \[ E_1 \amalg F \amalg \cdots \amalg F \amalg E_2 \to E_1 \times F \times \cdots \times F \times E_2 \] (given by a matrix with identity maps along the diagonal and zero maps elsewhere) and make the identification $\overline{\text{C}}{}^\bullet(E_1 \amalg F \amalg \cdots \amalg F \amalg E_2) \cong \overline{\text{C}}{}^\bullet(E_1) \amalg \overline{\text{C}}{}^\bullet(F) \amalg \cdots \amalg \overline{\text{C}}{}^\bullet(F) \amalg \overline{\text{C}}{}^\bullet(E_2)$, we get a quasi-isomorphism by definition of $A, B$ and $C$, and because the canonical map $E_1 \amalg F \amalg \cdots \amalg F \amalg E_2 \to E_1 \times F \times \cdots \times F \times E_2$ is a weak equivalence of spectra by the usual argument (coproducts and products of fibrant spectra are weakly equivalent). 
Now, it follows that we get a quasi-isomorphism of simplicial $\widebar{\mathpzc{E}}^\dagger_{\text{st}}$-algebras \[ \text{Bar}_\bullet(B,A,C) \to \overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2)) \] and so a quasi-isomorphism: \[ \text{N}(\text{Bar}_\bullet(B,A,C)) \to \text{N}(\overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2))) \] Finally, we can make use of this by noting that we have a commutative square as follows: \begin{center} \begin{tikzpicture}[node distance=1cm] \node(A){$\text{N}(\text{Bar}_\bullet(B,A,C))$}; \node[below= of A](C){$B \amalg_A C$}; \node[right= of A,xshift=1cm](B){$\text{N}(\overline{\text{C}}{}^\bullet(\text{Cobar}^\bullet(E_1,F,E_2)))$}; \node[below= of B,yshift=0.5mm](D){$\overline{\text{C}}{}^\bullet(E)$}; \draw[->] (A) -- (B) node[midway,anchor=south]{$\sim$}; \draw[->] (A) -- (C) node[midway,anchor=east]{$\sim$}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->] (B) -- (D) node[midway,anchor=west]{$\sim$}; \end{tikzpicture} \end{center} Here the bottom map is the aforementioned map, the map which we wished to show to be a quasi-isomorphism, and so we are done. \end{proof} \subsection{Proof of the Main Theorem} With the aid of the last two results, we can now extend our earlier resolvability result to include other spectra. \begin{Proposition}\label{prop:ZmodpmZphat} The Eilenberg-MacLane spectra $\Sigma^n\emph{H}A$, for $n \in \mathbb{Z}$, with $A = \mathbb{Z}/p^m$ for some $m \ge 1$ or $A = \mathbb{Z}_p^\wedge$ are resolvable.
\end{Proposition} \begin{proof} For $m \ge 1$ and $n \in \mathbb{Z}$, recall that we have well-known commutative squares as follows: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$\text{K}(\mathbb{Z}/p^m,n)$}; \node[below= of A](C){$\text{K}(\mathbb{Z}/p^{m-1},n)$}; \node[right= of A,xshift=1cm](B){$\text{P}\text{K}(\mathbb{Z}/p,n+1)$}; \node[below= of B,yshift=-1mm](D){$\text{K}(\mathbb{Z}/p,n+1)$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->>] (B) -- (D) node[midway,anchor=west]{}; \begin{scope}[shift=($(D)!.2!(A)$)] \draw +(-0.25,0) -- +(0,0) -- +(0,0.25); \end{scope} \end{tikzpicture} \end{center} (Here $\text{P}$ denotes a path space, and, given the description of the Eilenberg-MacLane spaces before, these maps can be given precise combinatorial descriptions.) An easy check shows that the maps in these squares in fact assemble together to yield maps of the Eilenberg-MacLane spectra, so that, for $m \ge 1$ and $n \in \mathbb{Z}$, we have commutative squares as follows: \begin{center} \begin{tikzpicture}[node distance=1.5cm] \node(A){$\Sigma^n\text{H}\mathbb{Z}/p^{m+1}$}; \node[below= of A](C){$\Sigma^n\text{H}\mathbb{Z}/p^m$}; \node[right= of A,xshift=1cm](B){$\text{P}\Sigma^{n+1}\text{H}\mathbb{Z}/p$}; \node[below= of B,yshift=0mm](D){$\Sigma^{n+1}\text{H}\mathbb{Z}/p$}; \draw[->] (A) -- (B) node[midway,anchor=south]{}; \draw[->] (A) -- (C) node[midway,anchor=east]{}; \draw[->] (C) -- (D) node[midway,anchor=north]{}; \draw[->>] (B) -- (D) node[midway,anchor=west]{}; \end{tikzpicture} \end{center} Moreover, the conditions of Proposition~\ref{prop:resolv_fib_prods} are satisfied, so that, by induction, we have the desired result for $\mathbb{Z}/p^m$ for $m \ge 1$. 
Next, Proposition~\ref{prop:resolv_inv_limits} gives us the case of $\Sigma^n\text{H}\mathbb{Z}_p^{\wedge}$ using the following tower: \[ \Sigma^n\text{H}\mathbb{Z}_p^{\wedge} = \lim(\cdots \to \Sigma^n\text{H}\mathbb{Z}/p^m \to \cdots \to \Sigma^n\text{H}\mathbb{Z}/p) \] \end{proof} We are now finally able to provide the desired algebraic models of $p$-adic stable homotopy types. \begin{Proposition}\label{prop:algmodels} All bounded below, $p$-complete spectra of finite $p$-type are resolvable. As a result, the cochains functor \[ \overline{\emph{C}}^\bullet \colon \mathsf{Sp}^{\emph{op}} \to \overline{\mathpzc{E}}_{\emph{st}}^\dagger\emph{-}\mathsf{Alg} \] induces a full embedding of the homotopy category of spectra into the derived category of $\widebar{\mathpzc{E}}_{\emph{st}}^\dagger$-algebras when we restrict to bounded below, $p$-complete spectra of finite $p$-type. \end{Proposition} \begin{proof} This follows by our resolvability results above, namely Propositions~\ref{prop:ZmodpmZphat},~\ref{prop:resolv_fib_prods} and~\ref{prop:resolv_inv_limits}, and the fact that bounded below, $p$-complete spectra of finite $p$-type admit Postnikov towers in which the fibres are $\Sigma^n\text{H}A$, for $n \in \mathbb{Z}$, with either $A = \mathbb{Z}/p^m$ for some $m \ge 1$ or $A = \mathbb{Z}_p^\wedge$. \end{proof} \nocite{*} \bibliographystyle{alpha}
\section{INTRODUCTION} \label{sec:intro} The gamma-ray sky is investigated through satellite and ground-based detectors. While satellite detectors are mainly limited by their small collection area (e.g., Fermi-LAT \cite{fermilat}, with $4\times4$ towers of area 40~cm$^2$ each), Imaging Atmospheric Cherenkov Telescopes (IACTs) provide a collection area of the order of a few km$^2$ by means of indirect detection. IACTs exploit the stereoscopic detection of the Cherenkov radiation emitted by the secondary particles during the development of the atmospheric shower, which makes it possible to reconstruct the direction and energy of the incoming photon. The currently operating IACT facilities are the High Energy Stereoscopic System (H.E.S.S.)\cite{hessicrc} in the Khomas Highland (Namibia), composed of 5 telescopes, the Major Atmospheric Gamma Imaging Cherenkov Telescope (MAGIC)\cite{magic} in La Palma (Spain), composed of 2 telescopes, and the Very Energetic Radiation Imaging Telescope Array System (VERITAS)\cite{veritas}, composed of 4 telescopes. These telescopes have been in operation since the 2000s and have already discovered a number of gamma-ray sources. Better spatial and time resolution and a broader energy range are necessary to investigate further the deepest and most interesting science cases in gamma-ray astronomy \cite{sciencewithcta}. The Cherenkov Telescope Array (CTA) is the next generation of ground-based gamma-ray observatory: it will provide energy coverage from 20 GeV to 300 TeV, a sensitivity improvement of an order of magnitude at 1 TeV in comparison to current instruments, and full-sky coverage, with sites in the two hemispheres (Cerro Paranal in Chile and La Palma in Spain). Three kinds of telescopes are planned to be used in the observatory to provide this energy coverage: the small-sized telescope (SST), the medium-sized telescope (MST) and the large-sized telescope (LST).
The SSTs are best suited to the highest energies, while the LSTs cover the lowest energy domain. A total of 99 telescopes are planned to be built in the southern site and 19 in the northern site \cite{sciencewithcta}. This huge step forward with respect to current instruments comes with a big challenge in designing, building and maintaining the telescopes. Here we propose a solution for monitoring the MST structure based on the Operational Modal Analysis (OMA) technique. The technique relies on vibration measurements, the estimation of the telescope eigenfrequencies and eigenmodes, and their monitoring over time. The method was applied to the MST prototype structure, which was used for a variety of system tests between 2014 and 2019 in Berlin, Germany. The hardware used in the data acquisition and the telescope are described in Section \ref{sec:hardware}. The method is introduced in Section \ref{sec:oma}, together with its pipeline applied to our case. The results of the monitoring are shown in Section \ref{sec:monitoring} and the proof of concept in Section \ref{sec:ropes}. Conclusions and prospects are given in Section \ref{sec:conclusion}. \section{HARDWARE} \label{sec:hardware} The MST is based on a modified Davies-Cotton design with a reflector diameter of 12 m \cite{markusicrc}. The telescope is composed of 4 parts: tower, dish, Camera Support Structure (CSS) and the Cherenkov camera. A prototype was built in Adlershof, Berlin, in 2013 for tests and serial-production preparation, mainly of the optical and mechanical systems. Fig. \ref{fig:mst} shows the prototype, in this snapshot without the Cherenkov camera. A dummy load is mounted to the camera frame to replace the camera weight. Both are supported by the CSS and its tensioning ropes. A weather station was also mounted near the telescope to monitor the environmental conditions. The condition monitoring system for the telescope structure is part of a larger monitoring system developed for the MST prototype.
Further monitoring concepts were developed, for instance the drive monitoring system and the bending model; their description is presented elsewhere \cite{victoricrc}. The measurement of pointing-model parameters and motor torques is possible without dedicated hardware. However, additional hardware is necessary to measure the structure vibration (for OMA) and the vibration of motor and gear components of the drive system. Force-balance accelerometers were selected for the vibration measurements. This kind of sensor is widely used for earthquake measurements and in monitoring systems of large civil-engineering structures. In the case of the telescope, it is important to use a very sensitive sensor (high dynamic range), because the exciting force, the wind, applies a rather weak force on the structure. The details of the sensor are given in Table \ref{table:sensor}. \begin{table}[] \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Vendor} & GeoSIG\cite{geosig} \\ \textbf{Model} & AC-73 \\ \textbf{Technology} & Force balance \\ \textbf{Frequency range} & DC to 200 Hz \\ \textbf{Acceleration range} & $\pm 2$ g \\ \textbf{Output signal} & 20 Vpp \\ \textbf{Operating temperature} & $-20\,^\circ$C to $70\,^\circ$C \\ \textbf{Enclosure rating} & IP68 Z \\ \hline \end{tabular} \end{center} \caption{Detailed information on the accelerometers selected for the MST structure monitoring.} \label{table:sensor} \end{table} The accelerometers are mounted at different locations along the CSS to maximize the effectiveness of the method, as shown in Fig. \ref{fig:mst}. The monitoring cabinet with the data acquisition is located inside the telescope tower. \begin{figure} \centering \includegraphics[scale=1]{images/mst.png} \caption{Defined positions of the structure accelerometers for the OMA.} \label{fig:mst} \end{figure} Data acquisition of the analog accelerometer output signal is performed with a commercial system from Gantner Instruments\cite{gantner}.
Gantner Q.bloxx modules are used for the anti-alias filtering and AD conversion of the analog sensor signals. A Gantner Q.station provides control over the data acquisition and an ethernet connection for the data transfer. Data taking for all measurements is controlled via the Alma Common Software (ACS) in dedicated monitoring runs; a special ACS component controls the Gantner Q.station. Data transfer is realized via the FTP protocol to a dedicated FTP server, which forwards the data to a noSQL database server running mongoDB. The initial configuration of the Q.station and the connected Q.bloxx modules was only possible using the Windows OS; a Linux version of the Gantner configuration environment is already available for a newer version of the module. \section{OPERATIONAL MODAL ANALYSIS (OMA)} \label{sec:oma} OMA is an analysis method which focuses on estimating the modal parameters of large structures, for instance a telescope. The estimation is based on the measurements of accelerometers mounted at pre-selected locations on the structure. Every rigid body has its own modal parameters, which depend basically on its geometry and stiffness. These modal parameters change whenever there is a change in the structure. Changes such as settlement, bending, loosening of screws, material fatigue and relaxation of the tension in the CSS ropes should be reflected in the modal parameters. Therefore, by monitoring the modal parameters one also monitors the condition of the structure \cite{victoricrc}. On the other hand, Finite Element Method (FEM) simulations provide the expected modal parameters for an ideal structure under predefined conditions. While an FEM simulation does not take into account inhomogeneities in the material, welds and issues during production, the OMA better reflects the real state of the structure, within the capabilities of the method.
Furthermore, the comparison between simulated and estimated modal parameters is important to ensure a good understanding of the structure behavior and, ultimately, the fulfillment of the requirements. \subsection{The method} \label{subsec:method} The prerequisites for the application of OMA are: \newline - The excitation of the structure shall be broadband and homogeneous. The input force, for instance the wind, must not be concentrated at specific frequencies but should rather cover a large range of frequencies. Furthermore, the force should be applied from different directions and on the structure as a whole; \newline - The accelerometers shall be spread over the structure in a way that covers the whole structure. They shall not be concentrated in specific regions or in substructures (such as a single beam); \newline - The accelerometers shall be sensitive enough to detect the vibration caused by the input force. The prerequisites above ensure that all the modes of the structure are excited and detected. The method assumes that the input force is white noise applied from every direction and at every position of the structure, which in reality is not quite true: the wind speed and direction vary on every timescale. This variation is nevertheless not a big problem, since it only influences the intensity, not the position, of the modal frequencies in the frequency domain, and the detection of higher, less important modal frequencies. To mitigate this effect two approaches were followed: data are taken over a longer period (about one hour), and data quality criteria based on the wind direction and speed were defined. Details follow in the subsequent sections. The larger the number of sensors available, the more modes (and the more complex ones) are detected. Consequently, changes in single complex modes could, for example, be interpreted as well-localized damage in the structure. Redundant information is automatically discarded by the analysis method.
On the other hand, there is a trade-off with the price of the system, since sensors sensitive enough to detect the vibration caused by wind can be quite expensive. Therefore, the second prerequisite in the list above is in practice limited by the price of the sensors. It was found that a configuration of three tri-axial sensors (9 channels, i.e.\ 9 degrees of freedom) is enough to derive the main modal parameters of the structure at a reasonable price. To monitor the modal parameters over time, one shall take data on the exact same structure, otherwise the results will be biased. In the case of the telescope, the exact same structure means that the same elevation and azimuth angles should be used during data taking: for other angle configurations the structure has a different geometry and, therefore, different modal parameters. An azimuth angle of 0° and an elevation angle of 0° were defined as the standard configuration for data taking with the MST prototype. Environmental conditions such as the seasonal temperature and humidity variations may also influence the estimation of the modal parameters, though on a very small scale compared to the expected change due to structural changes. \subsection{Rules of thumb} \label{subsec:rules} To maximize the quality of the results, the following rules of thumb are suggested, according to \cite{fdd} and references in \cite{svibs}: \newline - The sampling frequency should be larger than 2.5 times the maximum frequency of interest; \newline - The total acquisition time (in seconds) should be at least 1000 divided by the minimum frequency of interest (in Hz). According to the MST FEM simulations, the first frequencies of the structure are expected in the range from 1 to 10 Hz. Therefore, the minimum acquisition time should be 1000 seconds and the sampling frequency should be at least 25 Hz \cite{fdd}.
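As a minimal sketch (the helper function and its name are ours, not part of the analysis code), the two rules of thumb translate to:

```python
def acquisition_settings(f_min, f_max):
    """Rule-of-thumb OMA settings: sampling rate of at least 2.5x the
    maximum frequency of interest, and at least 1000/f_min seconds of data."""
    fs_min = 2.5 * f_max      # minimum sampling frequency in Hz
    t_min = 1000.0 / f_min    # minimum acquisition time in seconds
    return fs_min, t_min

# MST FEM simulations place the first modes between 1 and 10 Hz:
fs_min, t_min = acquisition_settings(f_min=1.0, f_max=10.0)
# -> 25 Hz sampling and 1000 s of data, as quoted in the text
```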
From tests on the prototype, it was concluded that a larger sampling frequency (oversampling) helps by better resolving the peaks, and a longer acquisition helps by lowering the noise level in the frequency domain. Therefore, the sampling frequency was set to 100 Hz (which gives a Nyquist frequency of 50 Hz) and the acquisition time to 3750 s. \subsection{Pipeline} \label{subsec:pipeline} The OMA applied for the condition monitoring of the MST is based on the Frequency Domain Decomposition (FDD) technique \cite{fdd}. The analysis code, developed in Python, is described here with example results for better understanding. The pipeline of the analysis code is the following. \newline - Data conversion: using the data sheet of the sensors, the data are converted from mA to g (and m/s²), where g is the gravitational acceleration on Earth. Fig. \ref{fig:input} shows an example of data taken on August 25th; Ch. 4 represents the Y-axis (horizontal movement) of the sensor on the Cherenkov camera frame. \\ \newline \begin{figure} \centering \includegraphics[scale=0.55]{images/2.jpeg} \caption{Calibrated data for one sensor channel on the camera frame. Data taken on August 25th 2019. The vibration is caused by the combination of all kinds of excitation, with wind being the main component. The baseline of the signal depends on the orientation of the sensor with respect to the gravity force. The systematic error of the sensor is of the order of 0.1 µm/s².} \label{fig:input} \end{figure} \newline - Data decimation: this is applied to reduce the Nyquist frequency from 50 Hz to the maximum frequency of interest (10 Hz). The advantage of oversampling and applying a decimation afterwards is that it better resolves the peaks in the frequency domain and reduces the noise level;\\ \newline - Cross Spectral Density (CSD) estimation: the CSD is estimated using Welch's method, i.e.
the time series of each channel is converted to the frequency domain by a Discrete Fourier Transform (DFT) and then individually multiplied by all the other channels' DFTs. The results can be gathered together to form a 9x9 matrix for each frequency bin;\\ \newline - Singular Value Decomposition (SVD): the 9x9 matrix from the last step is decomposed into three matrices according to the SVD method \cite{fdd}:\\ \begin{equation} \label{eq:csd} CSD = U\cdot S\cdot V, \end{equation} \noindent where CSD is the initial 9x9 matrix for a frequency bin; U is a 9x9 matrix, which contains the information about the mode shapes; S is a 9x9 diagonal matrix, which contains the singular values of the CSD in decreasing order; and V is, for a Hermitian CSD matrix, the conjugate transpose of U. If the first value in the diagonal matrix S is much larger than the subsequent singular values, the frequency is a modal frequency and the corresponding mode shape is the first column of the U matrix. Fig. \ref{fig:fdd} shows the result of the SVD for the data from the 3 sensors taken on August 25th 2019: the blue, orange and green curves are the first, second and third singular values, respectively. The peaks in the first singular value curve represent the potential modal frequencies. The peaks in the second singular value curve show whenever there is a change in the dominant mode; a peak in the further curves would indicate the degree of degeneracy of that frequency. \begin{figure} \centering \includegraphics[scale=0.35]{images/fdd.png} \caption{SVD result for data taken on August 28th, 2019. The blue, orange and green curves are respectively the first, second and third singular values.} \label{fig:fdd} \end{figure} - Peak selection: a simple algorithm is applied to the array of first singular values to extract the frequency bins in which the peaks are located. These peaks are the potential modal frequencies. Fig. \ref{fig:fddpeak} shows the selection of the peaks.
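The CSD-plus-SVD core of the FDD step can be sketched as follows, here with synthetic 3-channel data standing in for the real 9 accelerometer channels (the toy signal and all variable names are ours, not the pipeline's):

```python
import numpy as np
from scipy.signal import csd

# Synthetic stand-in for the accelerometer channels: a shared 1.2 Hz mode
# buried in independent noise on each channel.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 1.2 * t)
channels = [a * mode + 0.1 * rng.standard_normal(t.size)
            for a in (1.0, 0.7, 0.4)]

n_ch = len(channels)
nperseg = 1024
# CSD matrix for every frequency bin via Welch's method.
freqs, _ = csd(channels[0], channels[0], fs=fs, nperseg=nperseg)
G = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(channels[i], channels[j], fs=fs, nperseg=nperseg)

# SVD per frequency bin: U holds candidate mode shapes, s the singular values.
U, s, Vh = np.linalg.svd(G)
peak_bin = np.argmax(s[:, 0])       # bin with the dominant first singular value
mode_shape = U[peak_bin, :, 0]      # first column of U at the peak
```

At the modal bin the first singular value dominates the others, and the first column of U recovers the relative channel amplitudes, which is exactly the criterion used in the pipeline.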
It is not a problem if the algorithm picks a fake peak, for instance the first peak in the figure, because this kind of peak will be excluded later by the assurance criteria;\\ \begin{figure} \centering \includegraphics[scale=0.35]{images/fddpeak.png} \caption{Peak selection in the first singular value curve.} \label{fig:fddpeak} \end{figure} - Modal Assurance Criterion (MAC): the MAC value is a number between 0 and 1 which measures how linearly dependent two vectors are (0 being linearly independent). It is defined as the squared inner product between one mode shape (a 9-dimensional vector) and the other, divided by the product of their squared norms. The MAC value is calculated for every pair of potential modal frequencies found in the last step. The result is a MAC matrix showing the linear dependency among all the found peaks. Fig. \ref{fig:mac} shows the MAC matrix for all the selected peaks in the example case (August 25th, 2019); it shows that the first two peaks are strongly correlated. Many other correlations between the peaks were also identified and are treated in the next step;\\ \begin{figure} \centering \includegraphics[scale=0.45]{images/mac.png} \caption{MAC matrix for the selected peaks.} \label{fig:mac} \end{figure} - MAC selection: a MAC value of 0.8 is defined as the maximum accepted value for two mode shapes to be considered independent from one another. If the MAC value is larger than 0.8, the two mode shapes are considered linearly dependent and the one with the larger sum of MAC values with respect to all other vectors is discarded. Finally, the result is a new and cleaner MAC matrix together with the list of the structural modal frequencies.
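A minimal sketch of the MAC computation (the three-component shapes below are invented examples; the real pipeline compares 9-dimensional shapes):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two (possibly complex) mode shapes:
    squared inner product divided by the product of the squared norms."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

shapes = [np.array([1.0, 0.7, 0.4]),      # candidate peak 1
          np.array([1.05, 0.72, 0.38]),   # near-duplicate of peak 1
          np.array([0.4, -0.9, 0.2])]     # an independent shape
mac_matrix = np.array([[mac(a, b) for b in shapes] for a in shapes])
# With the 0.8 threshold, the first two candidates collapse into one mode,
# while the third survives as an independent modal frequency.
```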
Fig. \ref{fig:mac2} shows the new matrix.\\ \begin{figure} \centering \includegraphics[scale=0.45]{images/mac2.png} \caption{Cleaner MAC matrix after excluding correlated modes.} \label{fig:mac2} \end{figure} - Bell-shaped curve: for each modal frequency, a second MAC value is calculated between the central frequency bin, where the peak is located, and the nearby ones (using the first column vector of U in Eq. \ref{eq:csd}). Whenever the MAC value is larger than 0.95, the frequency bins are considered part of the same mode and the intensity of that bin is kept as before (the first singular value for that frequency bin); for all other frequency bins the intensity is set to zero. The result is a bell-shaped curve, shown in Fig. \ref{fig:bell} for the first modal frequency found before. The intensity unit is not important for the following analysis. \\ \begin{figure} \centering \includegraphics[scale=0.4]{images/bell.png} \caption{Bell-shaped curve around the first modal frequency.} \label{fig:bell} \end{figure} - Autocorrelation function: the autocorrelation function of each modal frequency is defined as the Inverse Fourier Transform (IFFT) of the bell-shaped curve. It is then normalized to maximum intensity 1 and average 0. The function represents the decay curve of the vibration at that modal frequency. Fig. \ref{fig:autocorr} shows the autocorrelation function for the first modal frequency;\\ \begin{figure} \centering \includegraphics[scale=0.4]{images/auto.png} \caption{Autocorrelation function for the first modal frequency. The blue curve is the decay curve, the red curve is the region of interest (between 0.3 and 0.7 a.u.) and the black crosses indicate the positions of the extremes.} \label{fig:autocorr} \end{figure} - Logarithmic decrement: the logarithmic decrement function is defined as twice the logarithm of the ratio between the intensity of the first extreme (peaks and valleys) and the n-th extreme of the autocorrelation curve.
The logdec factor $\xi$ is defined as the slope of this function as a function of the number of the extreme. The range considered in the linear fit is the region where the norm of the autocorrelation function is between 0.3 and 0.7. This region of interest avoids taking into consideration regions close to the start and end of the excitation, where non-linear effects could lead to larger uncertainties in the estimation. Fig. \ref{fig:logdec} shows the function, again for the first modal frequency.\\ \begin{figure} \centering \includegraphics[scale=2]{images/logdec.png} \caption{The logarithmic decrement function for the first modal frequency. Blue dots show the estimated function and the red line the linear fit within the region of interest.} \label{fig:logdec} \end{figure} Figs. \ref{fig:logdec} and \ref{fig:autocorr} show that the behavior of the oscillation is only linear within the first beating envelopes. - Damping rate estimation ($\partial$): finally, based on the logdec factor $\xi$, the damping rate of each modal frequency can be estimated through Eq. \ref{eq:logdec}:\\ \begin{equation} \partial = \dfrac{1}{\sqrt{1+\left(\dfrac{2\pi}{\xi}\right)^2}}. \label{eq:logdec} \end{equation} The damping rate is usually estimated with large uncertainties, since several assumptions enter the final estimated value. These uncertainties can be identified with the standard deviation of the damping rate of each modal frequency over time, if no change is expected in the structure (see Section \ref{sec:monitoring}). Table \ref{tab:results} summarizes the results for the August 25th data. A cross-check analysis with an independent commercial software (Artemis Modal from Svibs \cite{svibs}) delivered similar results. The estimated mode shapes can be better visualized with the commercial software Artemis Modal and are also listed in Table \ref{tab:results}.
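Eq. \ref{eq:logdec} can be applied directly once the slope of the logarithmic decrement function is fitted; a minimal sketch (the slope value below is invented for illustration, not a measured one):

```python
import math

def damping_rate(logdec):
    """Damping rate from the fitted logdec slope xi,
    following 1 / sqrt(1 + (2*pi/xi)**2)."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi / logdec) ** 2)

# A slope of 0.13 per extreme corresponds to roughly 2% damping,
# i.e. the order of magnitude of the first modes in the summary table.
zeta = damping_rate(0.13)
```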
It is not simple to interpret a mode shape as a simple translation/rotation, because the real structure is more complex than its CAD design. Effects such as small asymmetries, different screw torques, sagging and other possible deformations make this interpretation difficult, hence the third column of the table might be inaccurate. \begin{table}[] \begin{center} \begin{tabular} {|c|c|c|} \hline \textbf{Modal freq. (Hz)} & \textbf{Damping (\%)} & \textbf{Mode shape}\\ \hline 1.20 & 2.04 & Translation (Y-axis) \\ 1.40 & 1.40 & Translation (X-axis)\\ 2.22 & 0.36 & Rotation around Z \\ 3.29 & 0.19 & Rotation around X\\ 3.35 & 0.22 & Translation (Z-axis)\\ 3.42 & 0.22 & Translation (mostly X-axis, more complex mode)\\ 4.02 & 0.25 & Rotation around Z\\ 5.90 & 0.27 & Rotation around Y\\ 6.07 & 0.17 & Translation (X-axis)\\ 6.40 & 0.29 & Translation (Z-axis)\\ 6.59 & 0.30 & Rotation around X\\ 8.46 & 0.24 & Rotation around Y\\ \hline \end{tabular} \end{center} \caption{Summary of the OMA results for data taken on August 25th. The X-axis is the gravity direction, the Y-axis the horizontal direction and the Z-axis the telescope axis.} \label{tab:results} \label{table:freq} \end{table} The advantage of the analysis code presented here over the commercial software is its automation and customisation, which are important features for tracking the modal parameters of a large number of MST telescopes over long time scales. \section{LONG TERM MONITORING} \label{sec:monitoring} The OMA results shown in Section \ref{subsec:pipeline} are for data taken on August 25th, 2019. The main purpose of the analysis is to develop a monitoring system which tracks the modal parameters over time and detects changes in the structure. In this section the results of the automated analysis code for the tracking are presented. Data were taken automatically every day at 6:00 a.m. at the MST prototype in Adlershof, Berlin.
The data acquisition is defined according to Section \ref{sec:hardware}: sampling rate of 100 Hz, acquisition time of 3750 s, telescope at 0° elevation and 0° azimuth. The three tri-axial sensors and the whole system worked properly from August 16th, 2019 until December 26th, 2019, except between August 26th and August 28th, when there was an update of the system. The analysis also ran automatically every day. To improve the reliability of the results, it is important to apply selection criteria to the data, cutting off all the days in which the excitation force was either too weak or too strong: when the wind is weak, fewer modes are excited, and when it is strong, new and more complex modes are excited. When the wind comes mostly from one direction, there is the risk that not all the telescope modes are excited; therefore a criterion on the wind direction variance was also applied. Based on the data acquired by the weather station, the quality criteria were defined as shown in Table \ref{table:criteria}. The criteria were applied to the time window of the data acquisition (6h-7h) on each day in which data were acquired. \begin{table}[] \begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{Minimum mean wind speed (m/s)} & \textbf{Maximum mean wind speed (m/s)} & \textbf{Wind direction variance ($^\circ$)} \\ \hline 1.1 & 2.5 & 120 \\ \hline \end{tabular} \end{center} \caption{Quality criteria.} \label{table:criteria} \end{table} Fig. \ref{fig:wind} shows the wind information for 3 different days in which the wind passed the quality criteria. From upper to lower panel: a day with strong wind, a rather normal day and a weak-wind day.
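The quality cut can be sketched as follows (the function and variable names are ours; the thresholds follow Table \ref{table:criteria}, and for simplicity this sketch ignores the 0°/360° wrap-around of the wind direction):

```python
import numpy as np

def passes_quality_cut(wind_speed, wind_direction,
                       v_min=1.1, v_max=2.5, max_dir_var=120.0):
    """Quality cut on one hour of weather-station samples.

    wind_speed in m/s, wind_direction in degrees; accepted if the mean
    speed lies between v_min and v_max and the direction variance stays
    below max_dir_var.
    """
    mean_v = np.mean(wind_speed)
    dir_var = np.var(wind_direction)
    return bool(v_min <= mean_v <= v_max and dir_var <= max_dir_var)

# Synthetic example: one sample per second during the 6h-7h window.
rng = np.random.default_rng(1)
speed = rng.normal(1.8, 0.3, 3600)        # moderate wind, within limits
direction = rng.normal(200.0, 8.0, 3600)  # stable direction in degrees
ok = passes_quality_cut(speed, direction)
```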
\begin{figure} \centering \begin{subfigure}{\linewidth} \centering \includegraphics[scale=1]{images/wind1.png} \label{fig:wind1} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=1]{images/wind2.png} \label{fig:wind2} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[scale=1]{images/wind3.png} \label{fig:wind3} \end{subfigure} \caption{Wind diagram for three different days: 2019 October 12th (upper panel), 2019 November 26th (middle panel), 2019 November 26th (lower panel). The wind strength is given by the length of the red lines, which originate at the center, where the telescope is located. The direction is given over the full $360^\circ$.} \label{fig:wind} \end{figure} The modal frequencies are shown in Fig. \ref{fig:monitor}. The black lines connect correlated frequencies on consecutive days, i.e. a MAC value is calculated between every frequency on one day and every frequency on the following day, and a correlation is established whenever the MAC value is larger than 0.95. Some of the dots are not connected, despite being in the same position as on the day before, because there was not enough information available for the MAC value to be above the threshold. This can happen for two reasons: either there was a lack of excitation, or the mode is of a higher order and difficult to excite. The more channels (i.e. sensors) available, the more accurate the MAC value is and the more linearly independent (higher) modes are tracked. \begin{figure} \centering \includegraphics[scale=1.8]{images/frequency.png} \caption{Tracking of the modal frequencies over the days. The red dots are the modal frequencies on each day and a black line is drawn whenever there is a correlation between modes on consecutive days.} \label{fig:monitor} \end{figure} The damping rate of each modal frequency is also tracked and plotted in Fig. \ref{fig:damping}.
As mentioned before, the uncertainties are larger than for the eigenfrequency estimation; therefore, there is a relatively larger standard deviation for correlated modes in comparison to the frequency tracking. \begin{figure} \centering \includegraphics[scale=1.8]{images/damping.png} \caption{Tracking of the damping rates over the days. The red dots are the damping rates on each day and a black line is drawn whenever there is a correlation between modes on consecutive days.} \label{fig:damping} \end{figure} In Figs. \ref{fig:monitor} and \ref{fig:damping} one can identify a special day, October 12th, when the first modal frequency is smaller than on the days before and the damping is higher. On the following days, the frequency and damping of the first mode came back to their usual values, indicating that this was not a change in the structure itself. The reason for this change lies in the excitation force: the wind on October 12th between 6h and 7h was on average within the quality limits, but there must have been a spike in the wind speed, since in the upper panel of Fig. \ref{fig:wind} the maximum value reaches up to 7.6 m/s, which is unusual on other days. Therefore, there is room for improvement in the definition of the quality criteria. Furthermore, information about rain rate, temperature, humidity and pressure could have an influence on the results, but was not used in the quality criteria, since these are not the primary sources of excitation. The last step of the monitoring analysis is to define variables which can suggest whether there was a change in the structure or not. These variables can be defined in many different ways, either in the frequency or in the time domain \cite{damage}. We defined three different variables to be monitored: 1) square frequency change (Eq. \ref{eq:chifreq}): the mean of the squared differences between the correlated modal frequencies on subsequent days; 2) square damping change (Eq.
\ref{eq:chidamp}): the same as 1) for the damping; and 3) the number of correlations established between subsequent days (black lines in Figs. \ref{fig:monitor} and \ref{fig:damping}). \begin{equation} \Delta f_i ^2 = \dfrac{\Sigma^{N}_{j}(f_{i+1,j} - f_{i,j})^2}{N} \label{eq:chifreq} \end{equation} \begin{equation} \Delta \partial_i ^2 = \dfrac{\Sigma^{N}_{j}(\partial_{i+1,j} - \partial_{i,j})^2}{N} \label{eq:chidamp} \end{equation} \noindent where $i$ denotes the day, $j$ the $j$-th correlation between days $i$ and $i+1$, and $N$ is the total number of correlations between those days. The first two frequencies in Fig. \ref{fig:monitor} correspond to the two highest dampings in Fig. \ref{fig:damping}. These modes are contaminated by a tiny rotation around the horizontal and vertical axes, especially visible under strong wind, when the width of the two peaks increases significantly. To overcome this natural barrier for the monitoring, one could define a stricter range for the allowed wind speed or, alternatively, not take these two peaks into consideration when calculating the monitoring parameters. Nevertheless, for the further analysis we stuck to the quality criteria defined in Table \ref{table:criteria} and kept those two peaks. Figs. \ref{fig:chifreq} and \ref{fig:chidamp} show the monitoring of the defined variables $\Delta f_i ^2$ and $\Delta \partial_i ^2$ for the same period. There is a clear correlation between a change in the modal frequencies and a change in the damping rate, which shows the robustness of the analysis. \begin{figure} \centering \includegraphics[scale=1.4]{images/chisquarefreq.png}\\ \hfill \caption{The square frequency change indicates how much the modal frequencies of the same mode shape changed between one day and the next. A large peak indicates a change in the structure.
For the period analysed, the structure is not expected to change.} \label{fig:chifreq} \end{figure} \begin{figure} \centering \includegraphics[scale=1.4]{images/chisqdampoing.png}\\ \hfill \caption{The square damping change indicates how much the damping rates of the same mode shape changed between one day and the next. A large peak indicates a change in the structure. For the period analysed, the structure is not expected to change.} \label{fig:chidamp} \end{figure} In spite of the good correlation between the parameters, one cannot at first be sure which values these parameters take when a change in the structure occurs, as long as no such change has happened and its effect has been investigated in depth. A threshold must be defined for the parameters of Figs. \ref{fig:chifreq} and \ref{fig:chidamp}; whenever the calculated parameters exceed this threshold, a change most probably occurred in the structure. In the prototype system developed in Adlershof, an automatic email was sent every day to the persons in charge, with the main plots, tables and a message telling the reader whether there was a change or not. \section{PROOF OF CONCEPT} \label{sec:ropes} More important than monitoring the modal parameters is being able to investigate what happens when the structure changes. On November 26th and 27th 2019, an experiment was conducted at the prototype to study the influence of the tension in the CSS ropes on the modal parameters. Special tools were rented from an elevator manufacturer: one to apply the tension and another to measure it. Dedicated runs were conducted to extract the modal parameters of the structure during the experiment. To optimize the experiment duration, the dedicated runs lasted only 15 minutes instead of the normal 1 hour.
The procedure was the following: \bigskip \noindent- Before tightening: a dedicated 15-minute data-taking run to extract the initial modal parameters;\\ - A tension of 7.85 kN was applied to each rope while the telescope was pointing towards 90° elevation;\\ - A second dedicated 15-minute data-taking run;\\ - Relaxation: the telescope stood still for about an hour;\\ - A dedicated 15-minute data-taking run;\\ - After operation: the telescope was moved in random directions for about 10 minutes;\\ - A dedicated 15-minute data-taking run;\\ - Two cables had their tension intentionally released to 3 kN;\\ - A dedicated run was taken to see the effects of the release;\\ - The nominal tension of 10 kN was applied to every rope while the telescope was pointing towards 90° elevation;\\ - A dedicated run was taken to see the effects of the tightening;\\ - Relaxation: the telescope stood still until the next morning;\\ - The last dedicated run. \bigskip \begin{figure} \centering \includegraphics[scale=1.2]{images/ropes.png} \caption{Monitoring of the peak frequencies during the experiment conducted on November 26th and 27th. See the text for more information.} \label{fig:ropes} \end{figure} Fig. \ref{fig:ropes} shows the results of the experiment. The MAC criteria for comparing the different peaks were relaxed with respect to the value used in Section \ref{sec:monitoring}; therefore more peaks could be tracked, but with larger uncertainty. A line is drawn whenever there is a correlation between the mode shapes of subsequent datasets. As explained in the last section, the first two frequencies are not modal frequencies. The peak around 2.4 Hz is present during the whole experiment, although in the long-term monitoring it is no longer identified as an independent frequency after November 27th. The three peaks around 3 Hz merged together when tightening the cables to 7.85 kN, kept the same values during the relaxations and split during the release of the cables.
The frequencies in the region from 3.2 to 3.5 Hz were not strong enough before the first tightening of the cables. During the tightening of the cables to 10 kN, these frequencies slightly shifted to higher values, indicating that the structure became stiffer. The peaks around 4 Hz also merged together when tightening the cables and split when releasing them. In general, the relaxation from 10 kN was also felt by the peaks with frequencies larger than 3.5 Hz, since they decreased, meaning that the structure became less stiff. \section{CONCLUSION} \label{sec:conclusion} A condition monitoring system was developed and tested at the MST prototype in Berlin. Given the large number of telescopes planned for the CTA Observatory, monitoring of the structures is key to avoiding big failures and minimizing maintenance. The system is based on Operational Modal Analysis and on accelerometer measurements. The system proved to be functional, since the changes in the rope tension were identified as changes in the modal frequencies. Whenever the tension in the cables is released, some of the frequencies shift towards lower values and others split into more peaks. The opposite is also true, i.e. the modal frequencies shift towards higher values when the tension is higher, and some frequencies tend to merge. Under relaxation, some peaks shift towards lower frequencies. With these conclusions, one can monitor the tension in the cables without the need to measure it, which is a big advantage when dealing with dozens of telescopes in the desert. Based on the results of the experiment conducted in November, it is also possible to conclude that the first three modal frequencies in Fig. \ref{fig:ropes} do not belong to the CSS modes but rather to the structure as a whole, since they did not show any shift during the tightening/releasing of the cables.
Identifying specific modes to track for specific problems is also important for the further development of the system. More structural changes could have been tested, but this was not feasible due to time limitations. The system is expected to detect overall changes, such as changes in the tension of the ropes, settlement of the ground, ruptures and long-term fatigue of the material. It is not expected to detect minor, localized issues and dynamic effects such as the loosening of screws or the level of lubrication of the gears. Although the system is functional, the two parameters defined to monitor the health of the telescope (Eqs.~\ref{eq:chifreq} and \ref{eq:chidamp}) did not identify any change in the structure before and after the experiment conducted on November 26th. Due to the sum, the effect seen in some frequencies was smoothed out by the other frequencies, in which there was no shift. This shows that the definition of the parameter to monitor is important and can be improved for the automatic detection of failures. Although these two parameters were unable to identify the change in the structure, the third one, the number of correlations between consecutive days, could give a hint of a change. It is clear from Figs.~\ref{fig:monitor} and \ref{fig:damping} that there was a very low number of correlations between November 26th and November 28th. Furthermore, the long sequence of correlations of the 2.15 Hz peak disappeared and a new sequence around 2.9 Hz showed up. To take advantage of all the information the data provide and to build a robust automatic condition monitoring system, one should define a collection of parameters to be monitored. A change in any of these parameters would indicate a change in the structure. The scope of this work was not to build such a complete system but to test the system at a prototype and prove that it is effective.
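The smoothing effect of the summed parameter can be reproduced in a toy example (the frequencies below are hypothetical, and the averaged relative shift only mimics the spirit of Eq.~\ref{eq:chifreq}, not its exact definition):

```python
import numpy as np

# Hypothetical baseline and follow-up modal frequencies (Hz): only one of
# ten tracked modes shifts, by 5%.
f_ref = np.linspace(2.0, 6.5, 10)
f_new = f_ref.copy()
f_new[3] *= 1.05

# A summed/averaged parameter in the spirit of Eq. (chifreq): the mean
# relative shift over all tracked modes.
chi_sum = np.mean(np.abs(f_new - f_ref) / f_ref)

# A per-mode parameter: the largest individual relative shift.
chi_max = np.max(np.abs(f_new - f_ref) / f_ref)

print(chi_sum)  # ~0.005: the 5% shift is diluted by the nine unshifted modes
print(chi_max)  # ~0.05: tracking modes individually preserves the signal
```

A single shifted mode is diluted by a factor equal to the number of monitored modes, which is why a per-mode (or per-group) parameter flags the November experiment while the summed one does not.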
There is room for improvement, which will be achieved during the construction phase of the MSTs planned for the coming years.
\section{Introduction} Unitarity of scattering amplitudes has long been used to constrain the masses and couplings of thermal relic dark matter (DM) particles \cite{Griest:1989wd,Hedri:2014mua,vonHarling:2014kha,Cahill-Rowley:2015aea,Kahlhoefer:2015bea,Baldes:2017gzw,ElHedri:2017nny,ElHedri:2018atj,Harz:2018csl,Hektor:2019ote,Kannike:2019mzk,Alanne:2020jwx,Fuks:2020tam,Espinoza:2020qyf,Espinoza:2020kut}. More generally, it is applied to constrain new physics Beyond the Standard Model, such as $Z^\prime$ couplings \cite{Hosch:1996wu,Shu:2007wg,Babu:2011sd,Kahlhoefer:2015bea,Fuks:2020tam} and, most often (and most relevant for this work), scalar couplings \cite{Lee:1977eg,Casalbuoni:1986hy,Casalbuoni:1987eg,Maalampi:1991fb,Kanemura:1993hm,Cynolter:2004cq,Schuessler:2007av,SchuesslerThesis,Kang:2013zba,Betre:2014fva,Costa:2014qga,Ginzburg:2003fe,Akeroyd:2000wc,Horejsi:2005da,Khan:2016sxm,Aoki:2007ah,Hartling:2014zca,DiLuzio:2016sur,DiLuzio:2017tfn,Goodsell:2018tti,Goodsell:2018fex,Krauss:2018orw,Espinoza:2018itz,Arhrib:2018sbz,Abbas:2018pfp,Cheng:2018mkc,Chen:2018uim,Hektor:2019ote,Mondal:2019ldc,Kannike:2019mzk,Dubinin:2019dtb,Capdevilla:2020qel,Domenech:2020yjf,Alanne:2020jwx,Espinoza:2020qyf,Espinoza:2020kut} (including some one-loop calculations \cite{Grinstein:2015rtl,Cacchio:2016qyh,Murphy:2017ojk,Cheng:2018mkc}). Unitarity famously limits the maximum possible cross-section for dark-matter annihilation, and thus gives an upper bound on the mass of DM particles. The classic bound of ref.~\cite{Griest:1989wd} is derived for on-shell scattering momentum and represents a true all-orders bound, whereas standard constraints evaluated at large scattering momentum provide a complementary probe of the theory. Since the latter are usually evaluated at tree level, they should really be regarded as a measure of the breakdown of perturbativity of the theory.
To illustrate the relationship between the two, consider $2\rightarrow 2$ scattering processes from states $a \equiv (i,j)$ to $b \equiv(k,l)$ with matrix elements $\mathcal{M}_{ba}$ and centre-of-mass momenta $p_a, p_b$. We decompose them into partial waves with \begin{align} a_J^{ba} \equiv& \frac{1}{32\pi} \sqrt{\frac{4 |\mathbf{p}_{a}||\mathbf{p}_{b}|}{ 2^{\delta_a} 2^{\delta_b} s}} \int d z\, P_J (z)\, \mathcal{M}_{ba} (z), \end{align} where $\delta_{a}$ ($\delta_b$) is $1$ for identical particles $i=j$ ($k=l$) and $0$ otherwise, and $z$ is the cosine of the angle between the three-momenta $\mathbf{p}_{a}, \mathbf{p}_{b}$. Then, using unitarity of the corresponding S-matrix $S \sim 1 + i \mathcal{M}$, we find \begin{align} \frac{1}{2i} (a_J - a_J^\dagger)^{ba} \ge \sum_c \overline{a}_J^{cb} a_J^{ca} \quad \forall a,b,J. \end{align} Since the matrix $a_J^{ba}$ is normal, we can diagonalise both sides simultaneously, so the same equation holds for the eigenvalues $a_J^{i}$; the typical ``perturbative'' unitarity constraints then yield \begin{align} |\mathrm{Re} (a_J^{i})| \le \frac{1}{2}. \end{align} To derive the limits of ref.~\cite{Griest:1989wd} we can invert the partial-wave decomposition and insert it into the expression for the scattering cross-section $\sigma^{ba} = \sum_J \sigma_J^{ba}$ for states $a \rightarrow b$ to obtain \begin{align} \sigma_J^{ba} =& 4\pi\frac{2J + 1}{p_a^2} 2^{\delta_a} |a_J^{ba}|^2. \end{align} Then we have \begin{align} \mathrm{Im} (a_J^{aa}) \ge |a_J^{aa}|^2 + |a_J^{ba}|^2 \longrightarrow |a_J^{ba}|^2 \le \frac{1}{4} \end{align} and this leads to an ``absolute'' bound\footnote{There are possible exceptions, such as in the presence of poles.} of \begin{align} \sigma_J^{ba} \le \pi\frac{2J + 1}{p_a^2} 2^{\delta_a} . \end{align} When limiting the dark matter mass, the factor of $ 2^{\delta_a}$ for non-identical particles is compensated by having two different species.
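As a numerical sanity check of this normalisation, the projection integral can be evaluated with Gauss--Legendre quadrature; for a constant matrix element with an identical-particle initial state it reproduces the tree-level value $a_0 = -\lambda/(16\pi\sqrt{2})$ of the toy model discussed next (the function and argument names below are ours):

```python
import numpy as np

def a_J(M, s, pa, pb, J=0, identical_a=False, identical_b=False, n=64):
    """Partial-wave projection of a 2 -> 2 matrix element M(z), z = cos(theta),
    with the normalisation used in the text."""
    z, w = np.polynomial.legendre.leggauss(n)         # nodes/weights on [-1, 1]
    PJ = np.polynomial.legendre.Legendre.basis(J)(z)  # Legendre polynomial P_J
    integral = np.sum(w * PJ * np.asarray([M(zi) for zi in z]))
    da, db = int(identical_a), int(identical_b)
    return np.sqrt(4 * pa * pb / (2**da * 2**db * s)) * integral / (32 * np.pi)

# High-energy limit of the toy process S S -> X Xbar with constant M = -lambda:
# identical initial-state particles, |p_a| ~ |p_b| ~ sqrt(s)/2, which gives
# a_0 = -lambda / (16 pi sqrt(2)).
lam, s = 1.0, 1.0e6
a0 = a_J(lambda z: -lam, s, np.sqrt(s) / 2, np.sqrt(s) / 2, identical_a=True)
```

Imposing $|\mathrm{Re}(a_0)| \le 1/2$ on this value immediately gives the tree-level coupling bound quoted below.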
These bounds should be contrasted with the typical ``perturbative'' ones; for example, consider a toy-model dark matter candidate $S$ with a $\mathbb{Z}_2$ symmetry that annihilates to a charged scalar $X$ via a quartic interaction: \begin{align} \mathcal{L}_{\rm toy} \supset - \frac{1}{2} \lambda_{\rm toy} S^2 |X|^2. \end{align} If we consider high-energy scattering as $s\rightarrow \infty$ then we obtain $a_0^{ba} = -\frac{\lambda_{\rm toy}}{16\pi \sqrt{2}}$, and we find the bound $\lambda_{\rm toy} < 8\pi \sqrt{2}$ \emph{at tree level}. This leads to the bound \begin{align} \sigma_0 \le 8\pi \frac{|\mathbf{p}_b|}{|\mathbf{p}_a| s}. \end{align} Consider now non-relativistic annihilation of the singlet $S$ into relativistic $X$, so that $|\mathbf{p}_a| \approx m_S v$, $|\mathbf{p}_b| \approx m_S$, $s \approx 4 m_S^2$; then we have the perturbative bound \begin{align} \sigma_0 \le \frac{2\pi}{m_S^2 v} \end{align} compared to the ``absolute'' bound of \begin{align} \sigma_0 \le \frac{2\pi}{m_S^2 v^2}. \end{align} Clearly, even in this trivial case, for $v \ll 1$ the perturbative bound is stronger and will lead to a more stringent upper limit on the DM mass, since we have taken the bound on $\lambda_{\rm toy} $ at $s\rightarrow \infty$ and applied it at small $s$. Crucially, though, this bound is really a measure of the \emph{perturbativity} of the theory, since we only derived it with tree-level information, so it is entirely possible that a theory would saturate the ``absolute'' bound in the non-perturbative regime. In our toy example, we included for simplicity only a quartic coupling and took $s\rightarrow \infty$. This is rather typical of unitarity-constraint calculations in the literature, which ignore the contributions from, in particular, scalar trilinear couplings -- couplings that have enormous implications for dark matter phenomenology, since they are responsible for all $s/t/u$-channel interactions.
However, a framework within the package {\tt SARAH}\xspace \cite{Staub:2008uz,Staub:2013tta} for \emph{automatically} calculating the constraints on scalar trilinears was introduced in ref.~\cite{Goodsell:2018tti}, which can automatically scan over the scattering momentum to find the best limit on the couplings of the theory. This has since been applied in e.g. refs.~\cite{Goodsell:2018fex,Krauss:2018orw,Espinoza:2018itz,Hektor:2019ote,Mondal:2019ldc,Kannike:2019mzk,Alanne:2020jwx,Espinoza:2020qyf,Espinoza:2020kut}. As we saw above even in a trivial example, this will generally lead to stronger bounds on the dark matter mass than in ref.~\cite{Griest:1989wd}. However, the calculation in ref.~\cite{Goodsell:2018tti} was until now limited to \emph{colour-neutral} scalars. In this paper we shall describe the extension in {\tt SARAH}\xspace\ {\tt v4.14.4} to \emph{colourful} scalars, where all group theory factors are automatically calculated, and use this to place constraints on scalar trilinear couplings that are relevant for a simple dark matter model with colourful mediators. Unitarity, however, is not the only constraint on trilinear couplings: they can also lead to alternative vacua, which in the case of charged fields means charge- or colour-breaking minima of the potential. These are offset by having larger quartic couplings to stabilise the vacuum at the origin in field space. The typical approach to constraining a new model with such scalars, therefore, would be to use vacuum stability to constrain the size of the cubic couplings, which in turn pushes the theory to large quartic couplings; large-scattering-momentum unitarity to give an upper bound on the quartic couplings; and the dark matter annihilation cross-section is then limited by the values of both (since annihilation can proceed via both quartic and $s/t/u$-channel interactions).
This reasoning is reinforced, as discussed for example in ref.~\cite{Goodsell:2018tti}, by the fact that for a single neutral scalar field with both cubic and quartic couplings, the full bounds from unitarity on the cubic coupling are generally \emph{less} constraining than those from vacuum stability plus the upper limit on the quartic from unitarity. On the other hand, this naive picture does not necessarily hold for models with colourful states, or with more scalars, but up until now there was no simple way of deriving the unitarity constraints for such theories. To our knowledge, such bounds had only been applied in a model with a colour octet in refs.~\cite{Cao:2013wqa,He:2013tla,Cheng:2018mkc}\footnote{We thank Junjie Cao for bringing the first of these to our attention after the first version of this paper.} (in the large scattering momentum limit only); in the (N)MSSM in refs.~\cite{Schuessler:2007av,SchuesslerThesis} (with a scan over scattering momentum as discussed here); and in ref.~\cite{Staub:2018vux} (using an earlier version of the code described in this paper). In the latter reference, a comparison of unitarity and vacuum stability bounds was performed for the Higgs-squark sector, where the conclusion was that the unitarity constraints on the trilinear and quartic couplings between scalars were irrelevant in the MSSM (where the quartic couplings are given only by gauge and Yukawa couplings) but were \emph{complementary} to the vacuum stability constraint in the NMSSM. However, in those models the colourful scalar sectors interact only with the Higgs scalars, which cannot provide a dark matter candidate. We also point to ref.~\cite{Baker:2020vkh}, which makes use of the routines described here to constrain models of radiative fermion mass generation.
In this paper we shall investigate in detail the (genuine) complementarity of the requirements of (full) unitarity including finite-momentum scattering, vacuum stability and the relic density, in order to place an upper bound on a scalar dark matter model with colourful mediators for the first time; this will allow us to put an upper bound on the dark matter mass well below the Griest-Kamionkowski limit. In section \ref{SEC:Model} we describe our model and how we have calculated vacuum stability bounds for it; in section~\ref{SEC:ColourfulUnit} we describe the automation of the group theory calculations as implemented in {\tt SARAH}\xspace v4.14.4; in section~\ref{SEC:Results} we describe the procedure that we used to investigate the parameter space of our model and show the results, giving an upper bound on the mass of the dark matter particle. \section{A model of colourful mediators} \label{SEC:Model} To illustrate the new capabilities in {\tt SARAH}\xspace and test the idea of a maximum dark matter mass, we shall take a model with colourful scalar mediators, but where the dark matter candidate is the usual scalar singlet $S$ with a $\mathbb{Z}_2$ symmetry. The scalar mediator fields $Q_E$ and $Q_O$ both have quantum numbers $(3,1)_{-1/3}$ under $(SU(3), SU(2))_Y$; the difference between them is that $Q_E$ is even under the $\mathbb{Z}_2$, while $Q_O$ is odd. The most general Lagrangian in which the hidden sector respects CP symmetry is then \begin{align} \mathcal{L} =& \mathcal{L}_{SM} - \frac{1}{2} m_S^2 S^2 - m_E^2 |Q_E|^2 - m_O^2 |Q_O|^2 - \lambda_S S^4 - \frac{1}{2} \lambda_{HS} S^2 |H|^2 - \lambda_{3} |H|^2 |Q_E|^2 - \lambda_{4} |H|^2 |Q_O|^2 \nonumber\\ & - \frac{1}{2} \lambda_1 S^2 |Q_O|^2 - \frac{1}{2} \lambda_2 S^2 |Q_E|^2 - \lambda_5 |Q_E|^4 - \lambda_6 |Q_O|^4 - \lambda_7 |Q_O|^2 |Q_E|^2 - \lambda_8 |Q_O Q_E^*|^2 \nonumber\\ & - \bigg[ \kappa_1 S Q_E Q_O^* + Y_Q^{ij} Q_E q_i q_j + \frac{1}{4} \lambda_C (Q_E Q_O^*)^2 + h.c.
\bigg] \end{align} Here $q_i$ are the $(3,2)_{1/6}$ Weyl fermions representing the left-handed SM quarks. This model has several interesting features. The first, which is the main point of considering it, is the trilinear coupling $\kappa_1$: this entirely controls the $s/t/u$-channel processes for dark-matter annihilation and is crucial for the unitarity and vacuum stability analysis. The next is the baryonic coupling $Y_Q^{ij}$: the mediators carry baryon number, which is respected by the model (perturbatively). It also means that the state $Q_E$ decays to pairs of quarks; we shall take it to couple predominantly to the third generation, i.e. to decay to a $tb$ pair. It is therefore somewhat hard to search for at the LHC, being constrained mainly by $t \overline{t} b \overline{b}$ searches, for which no BSM reanalysis is yet possible, so we expect its mass to be constrained only to be larger than $1$ TeV (rather than $2$ TeV and above, as for other colourful scalars that decay to the first two generations of quarks). This choice also makes the model somewhat safe from direct detection constraints (provided that the Higgs portal coupling $\lambda_{HS}$ is small). In this work we shall in any case be considering much larger masses, so collider and direct searches are not relevant. Another interesting feature is that the state $Q_O$ can only decay to the singlet plus $Q_E$, requiring it to be heavier than the singlet. In addition, there are three operators containing two pairs of $Q_O, Q_E$, namely the $\lambda_7, \lambda_8$ and $\lambda_C$ terms. It is now possible within {\tt SARAH}\xspace to specify all of these and for them to be properly taken into account in the unitarity constraints; however, for our analysis we shall only consider $\lambda_7$ and take $\lambda_C, \lambda_8$ to be zero. This is mildly relevant for the unitarity and vacuum stability constraints -- but not at all for the dark matter density.
Since we are considering heavy dark matter that has little interaction via the Higgs portal, the relevant part of the scalar potential for this model involves the fields $S, Q_E$ and $Q_O$. These can develop expectation values, and a colour-breaking minimum, if $\kappa$ is large enough; however, finding the minimum of the potential involves solving coupled cubic equations and is not analytically tractable except at the point where the masses and couplings are equal. To find possible true minima we wrote a small {\tt Python} code which we briefly describe in appendix \ref{APP:VacStab}. This uses {\tt HOM4PS2} \cite{hom4ps2} to quickly find \emph{all} solutions of the set of coupled minimisation conditions for our chosen field directions. We found this simpler than installing the no-longer-supported {\tt Vevacious} \cite{Camargo-Molina:2013sta}, especially since there is a potentially large separation of scales between our dark matter sector and the Higgs sector; note also that we are only interested in the \emph{tree-level} minima, because we are explicitly searching for points with large trilinear couplings, where perturbativity may break down. \section{Colourful unitarity bounds} \label{SEC:ColourfulUnit} Unitarity bounds on colourful scattering amplitudes in the MSSM were considered in \cite{SchuesslerThesis}, where a derivation of the colour factors was given case by case for the different representations and amplitudes present. Here we shall give a description of the general procedure that we use, which applies to the scattering of \emph{any} states. Let us suppose that our initial (or final) states can be labelled $A_i, B_j$ and transform non-trivially under a non-Abelian group, say with dimensions $d_A, d_B$. This means that the number of rows that the pair takes up in the scattering matrix is multiplied by $d_A \times d_B$.
Clearly, however, we can break this product into irreducible representations: \begin{align} d_A \times d_B = \sum_{C=1}^{n} d_C , \end{align} where $n$ is the total number of irreducible representations. Obviously the scattering matrix will only be non-zero when the incoming and outgoing pairs are in the same irrep, so we need to apply a unitary transformation on the $d_A d_B$ states to split them into $n$ blocks; these are given by (generalised) Clebsch-Gordan coefficients. These can be built from invariant tensors, that is, mappings $A\otimes B \otimes C^* \rightarrow \mathbf{1}$; we can denote such a tensor as $(t_C)^{ij}_{a}$, so that $A_i B_j \overline{C}^{a} (t_C)^{ij}_{a}$ is invariant under group transformations. By considering infinitesimal transformations it is easy to see that contracting different invariant tensors together makes another invariant tensor, and since the only invariant with just one representation and its conjugate is a Kronecker delta, we must have\footnote{See also ref.~\cite{Cao:2013wqa} for explicit Clebsch-Gordan coefficients for a model with octets.} \begin{align} (t_C)^{ij }_{a} (\overline{t}_C)_{ij }^{b} \propto& \delta_{a}^{b}. \end{align} However, there could be more than one copy of a given representation in the decomposition above -- the most relevant example here being the product of two octet representations, for which \begin{align} \mathbf{8} \times \mathbf{8} = & \mathbf{1} + \mathbf{27} + \mathbf{10} + \mathbf{\overline{10}} + 2 \times \mathbf{8}, \end{align} where the relevant feature is the appearance of two $\mathbf{8}$ reps; this is more familiarly understood as the existence of two invariants, $d^{abc}$ and $f^{abc}$, which contract the symmetric and antisymmetric combinations respectively.
Hence, if we have two or more copies of a given representation, we can label them $C$ and $D$ and have \begin{align} (t_C)^{ij }_{a} (\overline{t}_D)_{ij }^{b} =& g^{CD} \delta_{a}^{b}, \qquad \bigg(g^{CD} = 0 \ \mathrm{if\ reps}\ C,D\ \mathrm{not\ identical} \bigg). \end{align} Now we are free to diagonalise the basis of invariants and normalise them appropriately. Since the scattering matrix is an isomorphism of the initial onto the final colour rep, by Schur's lemma it is proportional to the identity; each block will then just be $d_C$ copies of this along the diagonal. We therefore need to apply a unitary transformation $R^{ij}_{a}$ to the scattering matrix to split it into blocks. For it to be unitary, we need \begin{align} R^{ij}_{a} \overline{R}_{ij}^{b} =& \delta^{b}_{a} \delta_{C D}, \qquad a \in C, b \in D\label{EQ:rconds1}\\ \sum_C \sum_{a \in C } R^{ij}_{a} \overline{R}_{kl}^{a} =& \delta^{i}_k \delta^{j}_l \label{EQ:rconds2}\end{align} Note that the second line involves \emph{the sum over all representations present}. From the above, it is clear that we can construct these matrices from our diagonalised basis of invariants, and the first condition means that we must take $g^{CD} = \delta^{CD}$ and $R^{ij}_{a} = \oplus_C (t_C)^{ij }_{a}$. Translating this to amplitudes, for $i,j \rightarrow k,l$ we have a scattering matrix $\mathcal{M}^{kl}_{\ \ ij}$, or equivalently $(a_{0})^{kl}_{\ \ ij}$, upon which we are free to make unitary transformations of the states to get \begin{align} (\overline{t}_C)_{kl }^{b} (a_0)_{\ \ ij}^{kl} (t_C)^{ij }_{a} \equiv \delta_a^b a_0^{(C)}, \end{align} since outgoing states are equivalent to conjugated incoming ones. So, once we have constructed the invariants, we contract them with our scattering matrices to obtain a block-diagonal form.
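For the $\mathbf{3} \otimes \overline{\mathbf{3}}$ decomposition used in the examples that follow, the conditions (\ref{EQ:rconds1})--(\ref{EQ:rconds2}) can be verified numerically with explicit Gell-Mann matrices; a sketch assuming the standard normalisation $\mathrm{tr}(T^a T^b) = \frac{1}{2}\delta^{ab}$:

```python
import numpy as np

# Explicit Gell-Mann matrices; the SU(3) generators are T^a = lambda^a / 2.
gm = np.zeros((8, 3, 3), dtype=complex)
gm[0][0, 1] = gm[0][1, 0] = 1
gm[1][0, 1] = -1j; gm[1][1, 0] = 1j
gm[2][0, 0] = 1;   gm[2][1, 1] = -1
gm[3][0, 2] = gm[3][2, 0] = 1
gm[4][0, 2] = -1j; gm[4][2, 0] = 1j
gm[5][1, 2] = gm[5][2, 1] = 1
gm[6][1, 2] = -1j; gm[6][2, 1] = 1j
gm[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = gm / 2
d = np.eye(3)

# SU(N) completeness (Fierz) identity for N = 3:
# delta_i^l delta_k^j = (1/N) delta_i^j delta_k^l + 2 (T^a)_i^j (T^a)_k^l
lhs = np.einsum('il,kj->ijkl', d, d)
rhs = np.einsum('ij,kl->ijkl', d, d) / 3 + 2 * np.einsum('aij,akl->ijkl', T, T)
assert np.allclose(lhs, rhs)

# Projectors for 3 x 3bar: singlet (1/sqrt(3)) delta, octet sqrt(2) T^a.
t1 = d / np.sqrt(3)
t8 = np.sqrt(2) * T

# Orthonormality, eq. (rconds1): unit norm per rep, zero cross-contractions.
assert np.isclose(np.einsum('ij,ij->', t1, t1.conj()), 1)
assert np.allclose(np.einsum('aij,bij->ab', t8, t8.conj()), np.eye(8))
assert np.allclose(np.einsum('ij,aij->a', t1, t8.conj()), np.zeros(8))

# Completeness over both representations, eq. (rconds2):
full = (np.einsum('ij,kl->ijkl', t1, t1.conj())
        + np.einsum('aij,akl->ijkl', t8, t8.conj()))
assert np.allclose(full, np.einsum('ik,jl->ijkl', d, d))
```

Every assertion passes, confirming that the singlet and octet invariants, normalised in this way, form exactly the unitary transformation $R$ required above.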
We now have a choice of how to extract $a_0^{(C)}$: we can take the trace over the remaining indices $a, b$, pick one example, or construct $a_0 a_0^\dagger$ on colour space and take the square root of the diagonal entries. In {\tt SARAH}\xspace we take the simplest choice and put $a = b =1$ as constraints in the evaluation of the amplitudes, as this is by far the least computationally expensive. However, it should be noted that, if some of the couplings/invariants are specified by the user in a different basis, then there could in principle be a rotation between the incoming and outgoing states, which would then yield incorrect results here. \subsection{Examples} \label{SEC:ColourExamples} The general technique that we use here is different from the approach in ref.~\cite{SchuesslerThesis}, and so it is instructive to give some simple examples. We did cross-check all of the colour factors produced by {\tt SARAH}\xspace in the (N)MSSM with the results there. However, since the colour representations available in those models are no different from ours, we instead give examples directly in the model considered here and in appendix \ref{APP:Storage}. Consider first our dark matter annihilation channel $S, S \rightarrow (Q_E)_i, (\overline{Q}_E)^j$. We can decompose the final state into a singlet and an octet, but here only the singlet representation appears. To find the projectors we can consider the $SU(N)$ identity \begin{align} \delta_{i}^{j'} \delta_{i'}^{j} =& \frac{1}{N} \delta_i^j \delta^{j'}_{i'} + 2 (T^a)_i^j (T^a)_{i'}^{j'}. \end{align} The projectors for the singlet and octet are then $\frac{1}{\sqrt{3}} \delta_j^i$ and $\sqrt{2} (T^a)_j^i$, chosen so that the above equation takes the form of equation (\ref{EQ:rconds2}). In our model we only have $t/u$-channel annihilation via $Q_O$ exchange, so the diagram is proportional to $\kappa_1^2 \delta_i^j $ and so \begin{align} a_0^{(0)} (S S \rightarrow Q_E \overline{Q}_E) \propto \kappa_1^2 \frac{1}{\sqrt{3}} \times 3 = \sqrt{3} \kappa_1^2.
\end{align} Similarly, the $t/u$-channel elastic amplitude $Q_E \overline{Q}_E \rightarrow Q_E \overline{Q}_E$ is $\propto 3 \kappa_1^2$. Consider now the quartic interaction with coupling $\lambda_5$ in the scattering of $Q_E, \overline{Q}_E$ pairs. The vertex in this case is $- 2\lambda_5 (\delta_i^j \delta_k^l + \delta_i^l \delta_k^j)$. So for this diagram, via the singlet and octet channels we have \begin{align} a_0^{(0)} (Q_E \overline{Q}_E \rightarrow Q_E \overline{Q}_E) \underset{s\rightarrow \infty}{=} & - 2\lambda_5 \frac{2}{32\pi} \frac{1}{3} ( 9+ 3) = - \frac{\lambda_5}{2\pi} \\ a_0^{(8)} (Q_E \overline{Q}_E \rightarrow Q_E \overline{Q}_E) \underset{s\rightarrow \infty}{=} & - 2\lambda_5 \frac{2}{32\pi} 2 \mathrm{tr}(T^1 T^1) = - \frac{\lambda_5}{8\pi} \end{align} Hence in the $s\rightarrow \infty$ limit the strongest limit comes from the singlet representation, giving $\lambda_5 \le \pi$; the same limit applies to $\lambda_6$. If we consider $Q_E, Q_E$ scattering then we can use the same vertex, but now we decompose the product of representations into $\overline{\mathbf{3}} + \mathbf{6}.$ The projector for the antisymmetric combination can be taken to be $\frac{1}{\sqrt{2}} (\delta_{1i} \delta_{2k} -\delta_{2i} \delta_{1k} )$ for incoming states (and $(i\leftrightarrow j, k \leftrightarrow l)$ for outgoing), and for the symmetric one we can just take $\delta_{1i} \delta_{1k}$ or equivalently $\frac{1}{\sqrt{2}} (\delta_{1i} \delta_{2k} +\delta_{2i} \delta_{1k} ).$ These lead to \begin{align} a_0^{(\overline{\mathbf{3}})} (Q_E Q_E \rightarrow Q_E Q_E) \underset{s\rightarrow \infty}{=} & 0 \\ a_0^{(6)} (Q_E Q_E \rightarrow Q_E Q_E) \underset{s\rightarrow \infty}{=} & - \frac{\lambda_5}{4\pi}. \end{align} Hence again these give weaker bounds than the singlet representation. \section{Limiting the dark matter mass} \label{SEC:Results} Now that we have assembled the relevant machinery, in this section we will finally search for an upper bound on the dark matter mass in our model.
To do this we use the {\tt SPheno}\xspace \cite{Porod:2003um,Porod:2011nf} code generated by {\tt SARAH}\xspace for our model to calculate the spectrum, decays and unitarity constraints; we use the vacuum stability code described in appendix \ref{APP:VacStab} to determine whether the colour-preserving vacuum is stable; and we use {\tt micrOMEGAs\ 5.2.1} \cite{Belanger:2018ccd,Belanger:2020gnr} to calculate the dark matter relic density and direct detection cross-sections. Since we are interested in the \emph{allowed} parameter space of the model, we simply require that the dark matter relic density not exceed the Planck value $\Omega h^2 = 0.120(3)$ \cite{Aghanim:2018eyx}. All constraints on the parameter space are listed in table~\ref{tab:Constraints}. \begin{table}[h] \centering \begin{tabular}{ll} \toprule Dark matter density & $\Omega_{\rm DM} h^2 \leq 0.120(3)$ \\ Vacuum stability & $S\equiv x, Q_E \equiv \frac{1}{\sqrt{2}}y, Q_O \equiv \frac{1}{\sqrt{2}}z, \quad x, y, z \in \mathbb{R}$ \\ Mass hierarchy and cubic coupling & $\kappa, m_E \leq m_S \leq m_O, \quad \text{where}\ m_S \lesssim \mathcal{O}(300)\,\text{TeV}$ \\ Quartic couplings & $\Lambda \equiv \lambda_5 = \lambda_6 = 4\lambda_S \leq 3.5$ \\ Only decay of $Q_E$ to top--bottom pair & $Y_Q^{33} = 1$, all other entries are 0 \\ [0.5ex] \bottomrule \end{tabular} \caption{Constraints on the allowed parameter space.} \label{tab:Constraints} \end{table} However, finding the maximal dark matter mass with these constraints in our model, with three heavy scalars, a cubic coupling and several quartic couplings, involves a search over a multidimensional parameter space. We are interested in the mass hierarchy $m_O > m_S > m_E$ and in exploring ranges of $m_S$ up to $\mathcal{O} (300)$ TeV. Moreover, the quartic couplings should naively be bounded by $\lambda_S \le \frac{2 \pi}{3}, \lambda_{5,6} \le \pi $.
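The naive bound $\lambda_{5,6} \le \pi$ can be cross-checked by diagonalising the colour structure of the $\lambda_5$ contact term directly on the nine-dimensional $Q_E \overline{Q}_E$ colour space; the eigenvalue multiplicities identify the singlet and octet channels of section \ref{SEC:ColourExamples} (the index conventions below are ours):

```python
import numpy as np

d = np.eye(3)
lam5 = 1.0

# Colour operator O[(j,l),(i,k)] of the lambda_5 contact term for
# Q_E Qbar_E -> Q_E Qbar_E: an "annihilation" piece in which the incoming
# and outgoing pairs are separately colour-traced, plus a "transmission"
# piece in which colour flows straight through.
O = (np.einsum('ik,jl->jlik', d, d)       # delta_i^k delta_j^l
     + np.einsum('ij,kl->jlik', d, d))    # delta_i^j delta_k^l
M = O.reshape(9, 9)

# s -> infinity partial waves: a_0 = -2 lam5 * (2 / (32 pi)) * eigenvalue.
a0 = -2 * lam5 * (2 / (32 * np.pi)) * np.linalg.eigvalsh(M)
# Colour eigenvalue 4 (once: the singlet) gives a_0 = -lam5/(2 pi), and
# eigenvalue 1 (eight times: the octet) gives a_0 = -lam5/(8 pi); imposing
# |Re a_0| <= 1/2 on the singlet then yields lam5 <= pi.
```

The multiplicities $1 + 8 = 9$ confirm the singlet/octet decomposition without constructing the projectors explicitly.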
However, as seen in \cite{Goodsell:2018tti}, cancellations between the contributions from quartic and cubic couplings, and the effect of a finite momentum cutoff, could in principle allow somewhat larger values. Therefore, in a series of Markov Chain Monte Carlo (MCMC) scans we explored larger values, with the final, finer scan of one million points having an upper limit of $\lambda_{5,6} \le 3.5$. We chose this rather generous upper limit (instead of, say, $\lambda_{5,6} \le 3.2$) to make sure that there are no unexpected phenomena in a theoretically excluded range. These are the most important quartic couplings, since they control the overall stability of the potential. As described in appendix \ref{APP:VacStab}, for our vacuum stability determination we consider field directions along $S \equiv x, Q_E \equiv \frac{1}{\sqrt{2}} y, Q_O \equiv \frac{1}{\sqrt{2}} z $, where $x,y,z $ are real. We see that taking $\lambda_5 = \lambda_6 = 4\lambda_S$ renders the potential symmetric in $x,y,z$ at large field values (when the other couplings vanish), so for simplicity we impose this condition in our search, which leaves us with a scan over \begin{align} \kappa, m_E \le m_S, m_O \ge m_S, \Lambda \equiv \lambda_5 = \lambda_6 = 4 \lambda_S, \end{align} and we simply set the other quartic couplings to zero, except for $\lambda_7$, which we, quite arbitrarily, set to 0.1, although this has no impact on the search beyond perhaps a very slight influence on vacuum stability. In other words, we are allowing self-couplings of the mediators and the singlet, respectively, and some coupling between the mediators. On the other hand, we are ignoring quartic couplings between the singlet and the mediators, and those involving the Higgs boson, since we are interested in the model with a $t/u$-channel mediator and not in the quartic-coupling channel -- nor as a Higgs-portal model, which has been extensively studied in the literature and has larger direct detection prospects.
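Along such a symmetric direction ($x = y = z = \phi$ with equal masses), the stationarity conditions collapse to a single cubic equation, so all tree-level extrema can be found with an ordinary polynomial root finder instead of {\tt HOM4PS2}. A minimal sketch with illustrative coefficients (the exact numerical factors depend on the normalisation used in appendix \ref{APP:VacStab}):

```python
import numpy as np

# Tree-level potential along the equal-field direction x = y = z = phi in the
# fully symmetric limit m_S = m_E = m_O = m, Lambda = lambda_5 = lambda_6 =
# 4*lambda_S.  The numerical coefficients below are illustrative only.
def V(phi, m, kappa, lam):
    return 1.5 * m**2 * phi**2 + kappa * phi**3 + 3 * lam * phi**4

def stationary_points(m, kappa, lam):
    # dV/dphi = 3 m^2 phi + 3 kappa phi^2 + 12 lam phi^3 = 0
    coeffs = [12 * lam, 3 * kappa, 3 * m**2, 0.0]  # descending powers
    return sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)

def origin_is_global_minimum(m, kappa, lam):
    """True if no colour-breaking extremum lies below V = 0."""
    return all(V(p, m, kappa, lam) >= -1e-9
               for p in stationary_points(m, kappa, lam))
```

For small $\kappa$ the origin is the only real stationary point, while once $\kappa$ is large compared to the masses a deeper colour-breaking minimum appears -- exactly the behaviour the stability cut in the scan is designed to catch.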
Moreover, for simplicity we take $Y_Q^{33} = 1$ and zero for the other baryonic couplings, so that our $Q_E$ field only decays to a top and bottom quark pair. This leaves us with five parameters, four of which are dimensionful. In principle $\lambda_2$ would also have an important impact on the annihilation of singlets to mediators, while changing the relationships of the quartic couplings may have some impact on the stability results. In future it would be interesting to perform a more sophisticated scan allowing for a higher-dimensional parameter space. To explore our parameter space, we performed a series of scans, starting from a uniform grid and then implementing several parallel Markov Chain Monte Carlo scans via the Metropolis-Hastings algorithm, distributed across multiple cores on a cluster. Since we are interested only in the upper bound on the singlet mass, we construct a likelihood function $\mathcal{L}$ as a product: \begin{align} \mathcal{L} \equiv \mathcal{L}_{\rm upper} (\Omega h^2, 0.120, 0.001 ) \times \mathcal{L}_{\rm upper} (a_0, 0.5, 0.001) \times \mathcal{L}_{\rm upper} (\delta_{\rm stability}, 1, 0.2) \times \mathcal{L}_{\rm bias} (m_S, \overline{m}_S, 0.2) \end{align} where the first three likelihoods are sigmoid functions that cut off smoothly above the upper limit: \begin{align} \mathcal{L}_{\rm upper} (x, \overline{x}, s) \equiv \frac{1}{1 + \exp((x-\overline{x})/s)} \end{align} and $\delta_{\rm stability}$ is $1$ for a stable vacuum and $0$ otherwise. This amounts to a fixed large bias for stable over unstable points\footnote{In principle we could check for metastability and assign a likelihood based on a tunnelling probability. However, other than adding a significant complication, this is not very meaningful for this model, since such points would correspond to large trilinear couplings and a loss of perturbativity.} without categorically excluding unstable points.
The second term of the combined likelihood corresponds to the unitarity constraint. The last term is a bias on the dark matter mass, forcing the scan to probe heavier singlets: \begin{align} \mathcal{L}_{\rm bias} (m_S, \overline{m}_S, s) = \left( \frac{m_S}{\overline{m}_S} \right)^{s}. \end{align} The value of $\overline{m}_S$ differs depending on the scan. After completing the MCMC scans, we select the points of the sample that strictly satisfy our constraints, which are therefore imposed as ``hard cuts''. Employing MCMC scans has the advantage that valid parameter points are proposed more efficiently than with a grid or random scan, because the chains focus on regions that are allowed and avoid wasting computational resources on regions that are clearly excluded. In all MCMC scans, we select the largest partial-wave amplitude when deciding whether a point is ``good''. Figure~\ref{fig:onedAms} shows the distribution of the singlet mass after our scan, including only those points which passed all cuts. In table~\ref{tab:Cutflow}, we list the number of points that remain after each combination of cuts. Here, the cut on the mass hierarchy ensures that $m_S \leq m_O$ and that $\lambda_S \geq 0.5$. There is a clear cutoff at around $m_S\simeq 47$ TeV, after which we found no more valid points. This implies that a considerable amount of this mass range could be covered by a 100 TeV collider. This is also the central result of this paper. \begin{table}[h] \centering \begin{tabular}{ll} \toprule Cut & Number of points \\ \midrule Mass hierarchy & 508918 \\ Dark matter density (D) & 252098 \\ Unitarity (U) & 359274 \\ Vacuum stability (V) & 101365 \\ U + D & 140163 \\ U + V & 70056 \\ D + V & 10568 \\ All & 3963 \\ \bottomrule \end{tabular} \caption{Points left over after each cut. The raw sample contained one million points. D refers to the cut on the dark matter density, U to that on unitarity, and V to that on vacuum stability.
See the text for details.} \label{tab:Cutflow} \end{table} \begin{figure} \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/onedAms.jpg} \caption{} \label{fig:onedAms} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/onedAkappabyms.jpg} \caption{} \label{fig:onedAkappabyms} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/onedAkappabym.jpg} \caption{} \label{fig:onedAkappabym} \end{subfigure} \caption{Left: distribution of points as a function of $m_S$. There is a clear cutoff at $m_S\sim 47$ TeV. Middle: the ratio of the coupling $\kappa$ to $m_S$. While it peaks at around 3.5, there are also values around 9. Right: the ratio of $\kappa$ to the largest mass (either $m_S$ or $m_{O}$). There is a clear cutoff at about 3.5, and a peak around 2.5. The $y$-axis shows, in all three plots, how many of one million scan points made it through all cuts. } \label{fig:onedA} \end{figure} We show in figures~\ref{fig:onedUDVms}--\ref{fig:onedUDVlams} the effect of the separate cuts on the parameters $m_S$, $\kappa$ and $\Lambda$. We see that $\Lambda$ is bounded by the naive unitarity constraint of $\pi$. As was expected when setting up the model, we find a fairly clear relation between the strength of the coupling $\kappa$ and the masses of the involved particles. This can be seen in figure~\ref{fig:onedAkappabyms}. There is a clear peak around 3.5 for $\kappa/m_S$, although there are some outliers towards higher values. If we instead take $\kappa$ relative to the largest mass of each data point, i.e., the larger of $m_S$ and $m_{O}$, the outliers disappear (figure~\ref{fig:onedAkappabym}). Instead, we find a peak at a ratio of about 2.5.
\begin{figure} \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/onedUDVms.jpg} \caption{} \label{fig:onedUDVms} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/onedUDVkappa.jpg} \caption{} \label{fig:onedUDVkappa} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{figures/onedUDVlams.jpg} \caption{} \label{fig:onedUDVlams} \end{subfigure} \caption{Left: distribution of $m_S$ after various cuts. Middle: the same for $\kappa$. Right: the same for $\Lambda$. One can see that the cutoff at $\Lambda\sim \pi$ is due to unitarity. The $y$-axis shows, in all three plots, how many of one million scan points made it through each cut, respectively. In contrast to figure~\ref{fig:onedA}, these plots do not contain any information about which points make it through two or more cuts. } \label{fig:onedUDV} \end{figure} Figure~\ref{fig:twodUDVkappams} shows the valid points in the $\kappa-m_S$-plane after each individual cut. One can see a clear correlation between the two, and the peak of $\kappa / m_S$ at 3.5 (figure~\ref{fig:onedAkappabyms}) is manifest. The outliers with a higher $\kappa / m_S$ ratio tend to be concentrated around the lower end of the distribution, where $\kappa$ is around 50 TeV and $m_S$ is below 10 TeV. One can see that vacuum stability constrains the allowed area from the bottom, i.e., the valid points are situated above the diagonal passing through $(\kappa, m_S) = (100 \mathrm{TeV},\ 10 \mathrm{TeV})$ and $(150 \mathrm{TeV},\ 30 \mathrm{TeV})$. Likewise, the dark matter criterion constrains the allowed area from the top, i.e., the valid points are below the diagonal passing through $(50\mathrm{TeV},\ 25 \mathrm{TeV})$ and $(150 \mathrm{TeV},\ 45 \mathrm{TeV})$.
In figure~\ref{fig:twodUDVlamsms}, one can see the distribution of valid points in the $\Lambda-m_S$-plane after every cut. Vacuum stability eliminates points with low values of $\Lambda$ or $m_S$. The dark matter cut by itself does not have much impact on the shape of the distribution. As expected, the cutoff $\Lambda\leq\pi$ is ensured by unitarity (third panel of figure~\ref{fig:twodUDVlamsms}). After all cuts, the points in the lower $m_S$ range are excluded, as expected, but also those above the diagonal passing through $(\Lambda, m_S) = (1.5,\ 25 \mathrm{TeV})$ and $(3.0,\ 50 \mathrm{TeV})$. The latter is a compound effect of the cuts on dark matter and unitarity, which shows that the dark matter cut does play a role after all. Finally, figure~\ref{fig:twodUDVkappamqm} shows the distribution of valid points in the $\kappa-m_O$-plane. After each of the individual cuts, the resulting shape is bordered by three diagonals: an almost vertical one on the low-$\kappa$ end, and two more or less parallel ones going from the bottom left to the top right of the respective panel. The distribution of valid points after all cuts can be deduced almost directly from the overlap of the distributions after the three individual cuts.
\begin{figure} \centering \vspace{-1.0cm} \begin{subfigure}{\textwidth} \centering \hspace{-6mm} \includegraphics[width=0.24\linewidth]{figures/twodVkappams.jpg} \includegraphics[width=0.24\linewidth]{figures/twodDkappams.jpg} \includegraphics[width=0.24\linewidth]{figures/twodUkappams.jpg} \includegraphics[width=0.24\linewidth]{figures/twodAkappams.jpg} \caption{} \label{fig:twodUDVkappams} \end{subfigure} \begin{subfigure}{\textwidth} \centering \hspace{-6mm} \includegraphics[width=0.24\linewidth]{figures/twodVlamsms.jpg} \includegraphics[width=0.24\linewidth]{figures/twodDlamsms.jpg} \includegraphics[width=0.24\linewidth]{figures/twodUlamsms.jpg} \includegraphics[width=0.24\linewidth]{figures/twodAlamsms.jpg} \caption{} \label{fig:twodUDVlamsms} \end{subfigure} \begin{subfigure}{\textwidth} \centering \hspace{-6mm} \includegraphics[width=0.24\linewidth]{figures/twodVkappamqm.jpg} \includegraphics[width=0.24\linewidth]{figures/twodDkappamqm.jpg} \includegraphics[width=0.24\linewidth]{figures/twodUkappamqm.jpg} \includegraphics[width=0.24\linewidth]{figures/twodAkappamqm.jpg} \caption{} \label{fig:twodUDVkappamqm} \end{subfigure} \caption{Left column: selected two-dimensional planes of the parameter space after cutting for vacuum stability. Second column: same, but cut for dark matter. Third column: same, but cut for unitarity. Right column: same, but after all three cuts. Top: distribution of $\kappa$ against $m_S$. Middle: distribution of $\Lambda$ against $m_S$; as in figure~\ref{fig:onedUDVlams}, there is a clear cutoff at $\Lambda\simeq \pi$ due to unitarity. Bottom: distribution of $\kappa$ against $m_O$, after each cut. The coloured regions indicate the regions where there were valid points after each cut, normalised for each plot (so a direct comparison of the colours between plots is not possible).
} \label{fig:twodUDV} \end{figure} Finally, figure~\ref{fig:twodUDVAkappabymlams} shows the distribution of $\kappa / m_\text{max}$ as a function of $\Lambda$ after various cuts. One can see that vacuum stability imposes $\kappa / m_\text{max} \lesssim \Lambda+1$. Unitarity cuts away some of the higher values of $\Lambda$ and $\kappa/ m_\text{max}$, and enforces the cutoff at $\Lambda \leq \pi$. \begin{figure} \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.45\linewidth]{figures/twodVkappabymlams.jpg} \hspace{5mm} \includegraphics[width=0.45\linewidth]{figures/twodDkappabymlams.jpg} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.45\linewidth]{figures/twodUkappabymlams.jpg} \hspace{5mm} \includegraphics[width=0.45\linewidth]{figures/twodAkappabymlams.jpg} \end{subfigure} \caption{Distribution of $\kappa$ divided by the largest mass (of $m_S$ and $m_{O}$) against $\Lambda$. Top left: valid points after the vacuum stability cut. Top right: same after dark matter cut. Bottom left: same after unitarity cuts. Bottom right: same after all cuts. For values above $\sim2$, one observes a linear relationship where $\kappa / m_\text{max}\simeq\Lambda$. The $z$-axis shows how many of one million scan points made it through the cut(s). The colour code is again normalised for each plot separately. } \label{fig:twodUDVAkappabymlams} \end{figure} \subsection{Highest singlet mass} The point with the heaviest singlet that we found, which is the main result of this section, has \begin{align} m_S = 47.354\, \ensuremath{\mathrm{TeV}}\xspace, \quad m_O = 53.8747\, \ensuremath{\mathrm{TeV}}\xspace, \quad m_E = 39.0254\, \ensuremath{\mathrm{TeV}}\xspace, \quad \kappa = 174.121\, \ensuremath{\mathrm{TeV}}\xspace, \quad \Lambda = 3.05993.
\end{align} The dark matter relic density is $\Omega h^2 = 0.122$ for this point and the maximal $a_0 = 0.49$ is from the scattering matrix corresponding to the singlet representation (as might be expected from the earlier discussion), evaluated at $\sqrt{s} = 141\, \ensuremath{\mathrm{TeV}}\xspace,$ well away from any poles. This point is on the cusp of being ruled out by the unitarity calculation, which is dominated by the coupling $\kappa;$ we find that decreasing the coupling $\Lambda$ changes $a_0$ very little at this point but leads to an unstable vacuum already at $\Lambda=3,$ while increasing $\kappa$ to $180$ \ensuremath{\mathrm{TeV}}\xspace leads to $a_0 > 0.5$ (and also an unstable vacuum). \subsection{Trilinears excluded by unitarity alone} Finally we wish to highlight that, although most points conformed to the naive expectation that we could apply the limit $\Lambda < \pi$ from unitarity and constrain $\kappa$ just from vacuum stability, there are exceptions that underline the complementarity of the unitarity calculation. For example, \begin{align} m_S = 26.07\,\ensuremath{\mathrm{TeV}}\xspace, m_O = 28.8\, \ensuremath{\mathrm{TeV}}\xspace, m_E = 7.21\, \ensuremath{\mathrm{TeV}}\xspace,\kappa = 73.6\,\ensuremath{\mathrm{TeV}}\xspace, \Lambda =2.645 . \end{align} This point has $\Omega h^2 = 0.12$ and maximum $a_0 =0.51$ (again from the singlet submatrix) and the vacuum stability equations \emph{have no other solutions than the origin}. In fact, this point is typical of a whole branch of points where $ m_S \sim m_O \gg m_E$ for which this is true -- these points are excluded by unitarity because of the size of $\kappa$, but we would not have seen this either from classic unitarity bounds where $s \rightarrow \infty$ or from the vacuum stability constraints. 
\section{Conclusions} \label{SEC:CONCLUSIONS} We have described the calculation and implementation of constraints from unitarity of scattering for $2 \rightarrow 2$ processes involving scalars of any representation under the strong gauge group, at finite scattering momentum. Since these unitarity constraints automatically constrain \emph{all} the scalar couplings of a theory, they are now very straightforward to include for a whole new class of models. We also illustrated the utility of these routines and the complementarity of the information that they provide for studying dark matter models compared to vacuum stability, naive infinite-momentum perturbative unitarity constraints, and the ``absolute'' bound of ref.~\cite{Griest:1989wd}. We showed that there are points for which vacuum stability and ``naive'' unitarity are insufficient, i.e.\ the full perturbative unitarity calculation is indispensable. We introduced a toy model with a baryonic coupling and colourful mediators that decay in an interesting way to a top-bottom quark pair, which is a very simple example of the sort of models that can now be explored with these constraints. It would be very interesting to explore models with more complicated gauge representations. This work also paves the way for several further extensions: additional unbroken gauge groups; fermions and/or vectors in the scattering matrix; and loop corrections. Moreover, our dark matter model had a maximum mass of $47\, \ensuremath{\mathrm{TeV}}\xspace,$ and coupled to colourful states, so much of the allowed parameter space would be accessible to a future $100\, \ensuremath{\mathrm{TeV}}\xspace$ collider. It would therefore be interesting to consider dark matter--collider complementarity in terms of both its signatures at such a collider and the low-mass bounds at the LHC, since it could be searched for in the $t\overline{t} b \overline{b}$ channel.
\section*{Acknowledgments} MDG acknowledges support from the grant \mbox{``HiggsAutomator''} of the Agence Nationale de la Recherche (ANR) (ANR-15-CE31-0002). He thanks Florian Staub for collaboration on including the colour factors in the unitarity routines in {\tt SARAH}\xspace. MDG also thanks Michael Baker for correspondence about those routines, and helping to identify bugs in the beta-version.
\section{Introduction} Optimization problems under uncertain constraints are pervasive in engineering applications. In the paradigm of {\it chance-constrained programs} (CCPs), uncertain parameters are treated as random variables and the uncertain constraints are required to be satisfied with a high probability. However, the feasibility set of a CCP is in general non-convex \cite{nemirovski2006convex}. Furthermore, although the probability of constraint violation is required to be small, the magnitude of constraint violation could potentially be unbounded, which is not desirable. Consequently, recent approaches model uncertain constraints via coherent risk measures that preserve analytical tractability, specifically the conditional value-at-risk (CVaR) \cite{rockafellar2000optimization,artzner1999coherent}. In contrast with chance constraints, (i) CVaR preserves the convexity of the feasibility set, (ii) it requires the magnitude of constraint violation to be bounded in expectation (to be made more precise in Section \ref{subsec:cc-approx}), and (iii) CVaR constraints provide a convex inner approximation of chance constraints \cite{nemirovski2006convex}. Accordingly, CVaR-constrained programs (referred to as {\it risk-constrained programs (RCPs)}) have seen widespread applications in financial engineering \cite{krokhmal2002portfolio}, stochastic optimal control \cite{van2016distributionally,PS-MS-PP:19-ecc, singh2018framework}, safety-critical control applications \cite{samuelson2018safety}, robotics \cite{hakobyan2019risk} and energy systems \cite{summers2015stochastic}. In order to solve stochastic optimization problems in general and CCPs and RCPs in particular, the decision maker needs to know the probability distribution of uncertain parameters. In practice, this information is often unavailable; instead, the decision maker has access to data about the uncertainty in the form of samples.
Accordingly, recent work has focused on constructing a family of probability distributions, or an {\it ambiguity set}, from the observed samples, followed by solving the uncertain optimization problem in a worst-case sense over all distributions in the ambiguity set. This approach is referred to as {\it distributionally robust} optimization. Within this paradigm, ambiguity sets defined via the Wasserstein distance (see Section \ref{sec:prelims} for the definition) have been shown to have desirable out-of-sample performance and analytical tractability \cite{gao2016wasserstein,esfahani2018data,hota2018data}. Motivated by these attractive features, several recent works have proposed approximations and finite-dimensional reformulations of Wasserstein distributionally robust chance- and CVaR-constrained programs \cite{hota2018data,xie2018drccp,kuhn2018drccp,fatma2020drccp}. This class of problems has also been studied in the context of statistical learning \cite{shafieezadeh2019regularization}, data-driven control \cite{yang2017convex,coulson2020distributionally}, and optimal power flow \cite{poolla2020wasserstein}, among others. Note that the Wasserstein ambiguity set is defined directly in terms of the available samples that are drawn from an underlying data-generating distribution. Consequently, the distributionally robust problem instance is a random instance of the original CCP (or RCP) defined in terms of the underlying distribution. Therefore, in addition to analytical tractability and finite-sample guarantees, it is desirable to analyze how well the optimal solution of the (random) distributionally robust program approximates the optimal solution of the original CCP (or RCP), particularly in the regime when the number of samples grows to infinity. This property is termed {\it asymptotic consistency} in stochastic programming.
While {\it asymptotic consistency} has been established for Wasserstein distributionally robust optimization problems \cite{esfahani2018data}, analogous results for chance- and risk-constrained programs have not been explored in prior work. In this paper, we show under suitable assumptions that if the samples are drawn from an underlying distribution $\mathbb{P}$, then the optimal value and optimizers of the distributionally robust CCP or RCP converge to the corresponding quantities of the CCP or RCP (defined with respect to $\mathbb{P}$), as the number of samples increases and the size of the ambiguity set shrinks. We show that the convergence of the optimal values is from above if the rate at which the ambiguity set shrinks is chosen carefully. Our results provide the much-needed asymptotic theoretical justification for Wasserstein distributionally robust constrained optimization programs. \noindent {\bf Notation:} The sets of real, positive real, non-negative real, and natural numbers are denoted by $\mathbb{R}$, $\mathbb{R}_{>0}$, $\mathbb{R}_{\ge 0}$, and $\mathbb{N}$, respectively. The extended reals are $\overline{\real} = \mathbb{R} \cup \{+ \infty, - \infty \}$. For $N \in \mathbb{N}$, we let $[N] := \{1,2,\ldots,N\}$. For brevity, we denote $\max(x,0)$ by $x_+$. The closure of a set $\SS$ is denoted by $\operatorname{cl}(\SS)$. For a set $\SS$ and $N \in \mathbb{N}$, we denote the $N$-fold Cartesian product as $\SS^N := \Pi_{i=1}^N \SS$. Similar notation holds for the $N$-fold product of any probability distribution. \section{Technical Preliminaries}\label{sec:prelims} Here we formally define the notion of CVaR, the Wasserstein distance, and data-driven ambiguity sets. \subsubsection{(Conditional) Value-at-Risk}\label{subsec:cc-approx} Let $Y$ be a (real-valued) random variable with distribution $\mathbb{P}$.
For a tolerance level $\alpha \in (0,1)$, the value-at-risk (VaR) of $Y$ at level $\alpha$ is \begin{equation}\label{eq:def_var} \VaR{\mathbb{P}}_\alpha(Y) := \inf \setdef{y \in \mathbb{R}}{\mathbb{P}(Y \leq y) \geq 1-\alpha}. \end{equation} That is, it is the $(1-\alpha)$-quantile of the distribution of $Y$. The conditional value-at-risk (CVaR) of $Y$ at level $\alpha$ is \begin{align} \CVaR{\mathbb{P}}_{\alpha}(Y):= \inf_{t \in \mathbb{R}} \, \{ \alpha^{-1} \mathbb{E}_{\mathbb{P}}[(Y+t)_+] - t\} . \label{eq:CVaR-def} \end{align} If $Y$ has a continuous distribution, then $\CVaR{\mathbb{P}}_{\alpha}(Y) = \mathbb{E}_{\mathbb{P}}[Y | Y \geq \VaR{\mathbb{P}}_\alpha(Y)]$, i.e., it is the conditional expectation of $Y$ given that $Y$ exceeds $\VaR{\mathbb{P}}_\alpha(Y)$. \subsubsection{Wasserstein ambiguity sets}\label{subsec:wasserstein} Assume $\Xi \subseteq \mathbb{R}^m$ and $d$ to be a complete metric on $\Xi$. Let $\mathcal{B}({\Xi})$ and $\mathcal{P}(\Xi)$ be the Borel $\sigma$-algebra and the set of Borel probability measures on $\Xi$, resp. Let $\mathcal{P}_1(\Xi) \subseteq \mathcal{P}(\Xi)$ be the set of measures with finite first moment. Following \cite{esfahani2018data}, the $1$-Wasserstein distance between any two measures $\mu, \nu \in \mathcal{P}_1(\Xi)$ is \begin{equation}\label{eq:def_wasserstein} W_1(\mu,\nu) := \min_{\gamma \in \mathcal{H}(\mu,\nu)} \left\{\int_{\Xi \times \Xi} d(\xi,\omega) \gamma(d\xi,d\omega) \right\}, \end{equation} where $\mathcal{H}(\mu,\nu)$ is the set of all distributions on $\Xi \times \Xi$ with marginals $\mu$ and $\nu$. The minimum in \eqref{eq:def_wasserstein} is attained because the metric $d$ is continuous \cite{gao2016wasserstein}. We consider ambiguity sets containing distributions close to the empirical distribution induced by the observed samples. 
Specifically, let $\widehat{\Pb}_N := \frac{1}{N}\sum^N_{i=1} \delta_{\widehat{\xi}_i}$ be the empirical distribution constructed from samples $\{\widehat{\xi}_i\}_{i \in [N]}$, where $\delta_{\widehat{\xi}_i}$ is the unit point mass at $\widehat{\xi}_i$. We define the data-driven Wasserstein ambiguity set as \begin{equation}\label{eq:wasserstein-set} \mathcal{M}^\theta_N := \setdef{ \mu \in \mathcal{P}_1(\Xi)}{ W_1(\mu,\widehat{\Pb}_N) \leq \theta}, \end{equation} which contains all distributions with finite first moment that are within a distance $\theta \geq 0$ of $\widehat{\Pb}_N$. In~\cite{pichler2017quantitative}, it was shown that $\mathcal{M}^\theta_N$ is a weakly-compact subset of $\mathcal{P}_1(\Xi)$. \section{Distributionally robust risk-constrained programs and their consistency}\label{sec:drrcp} In this section, we introduce risk-constrained programs and their distributionally robust counterparts. We consider ambiguity sets defined by the Wasserstein metric and the empirical distribution as discussed above. Our main result establishes that as the number of samples increases, the optimizers and the optimal value of the distributionally robust problems converge, in an appropriate sense, to the corresponding quantities of the original (with respect to the true data-generating distribution) risk-constrained problem. Throughout we consider $\Xi \subseteq \mathbb{R}^m$ and $d$ to be a complete metric. 
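As a quick numerical illustration of these two preliminaries, the following minimal Python sketch (assuming equally weighted, one-dimensional empirical distributions) evaluates the infimum in~\eqref{eq:CVaR-def} by scanning the kinks $t = -y_i$ of the convex piecewise-linear objective, and computes the one-dimensional $1$-Wasserstein distance between two same-size samples by matching sorted values:

```python
def cvar(samples, alpha):
    """Empirical CVaR at level alpha via inf_t { alpha^{-1} E[(Y+t)_+] - t }.
    For an empirical distribution the objective is convex piecewise linear
    in t with kinks at t = -y_i, so scanning those candidates is exact."""
    n = len(samples)
    def obj(t):
        return sum(max(y + t, 0.0) for y in samples) / (n * alpha) - t
    return min(obj(-y) for y in samples)

def w1_empirical(a, b):
    """1-Wasserstein distance between two equally weighted empirical
    distributions of the same size on the real line; the optimal
    coupling matches sorted samples."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

For instance, `cvar([1, 2, 3, 4], 0.5)` returns `3.5`, the average of the upper half of the samples, consistent with the conditional-expectation interpretation of CVaR, and `w1_empirical([0, 1], [1, 2])` returns `1.0`.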
A canonical CVaR or {\it risk-constrained program} (RCP) is of the form \begin{equation}\label{eq:def_rcp} \begin{aligned} \underset{x\in X}{\min} & \quad c^\intercal x \\ \operatorname{s.t.} & \quad \CVaR{\mathbb{P}}_{\alpha}(F(x,\xi)) \leq 0, \end{aligned} \end{equation} where $X \subseteq \mathbb{R}^n$ is a closed convex set (potentially defined via deterministic constraints), $c \in \mathbb{R}^n$, $\alpha \in (0,1)$, $\mathbb{P} \in \mathcal{P}(\Xi)$ is the distribution of the uncertain parameter $\xi$ (see Section~\ref{subsec:wasserstein} for notation), and $\map{F}{\mathbb{R}^n \times \Xi}{\mathbb{R}}$ is called the {\it constraint function}. Using~\eqref{eq:CVaR-def}, we can equivalently write the RCP as \begin{equation}\label{eq:def_equiv_rcp} \begin{aligned} \underset{x\in X, t \in \mathbb{R}}{\min} & \quad c^\intercal x \\ \operatorname{s.t.} & \quad \mathbb{E}_{\mathbb{P}}[(F(x,\xi)+t)_+] - t \alpha \leq 0. \end{aligned} \end{equation} By equivalent, we mean that $x$ is a feasible point for~\eqref{eq:def_rcp} if and only if there exists $t$ such that $(x,t)$ is feasible for~\eqref{eq:def_equiv_rcp}. The distributionally robust version of the RCP~\eqref{eq:def_rcp}, which we term the {\it distributionally robust risk-constrained program} (DRRCP), is given by \begin{equation}\label{eq:cvar-drccp} \begin{aligned} \underset{x \in X}{\min} & \quad c^\intercal x \\ \operatorname{s.t.} & \quad \sup_{\mathbb{Q} \in \mathcal{M}_N^{\theta}} \inf_{t \in \mathbb{R}} \mathbb{E}_{\mathbb{Q}} [(F(x,\xi)+t)_+ - t\alpha] \le 0, \end{aligned} \end{equation} where $\mathcal{M}_N^{\theta}$ is the data-driven Wasserstein ambiguity set defined in \eqref{eq:wasserstein-set}.
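As a toy illustration of the equivalence between~\eqref{eq:def_rcp} and~\eqref{eq:def_equiv_rcp}, the following hypothetical Python sketch (not part of the formal development) solves a one-dimensional instance with $F(x,\xi) = \xi x - 1$, $c = -1$, $X = [0, 10]$, and an empirical distribution in place of $\mathbb{P}$; since the samples are non-negative, the feasible set in $x$ is an interval and bisection on the constraint boundary recovers the optimizer:

```python
def cvar_constraint(x, samples, alpha):
    """inf_t { E[(F(x,xi)+t)_+] - t*alpha } for F(x,xi) = xi*x - 1, with the
    empirical expectation; the infimum is attained at a kink t = -F(x, xi_i)
    because the objective is convex piecewise linear in t.  The CVaR
    constraint holds at x iff this value is <= 0."""
    vals = [xi * x - 1.0 for xi in samples]
    n = len(samples)
    def obj(t):
        return sum(max(v + t, 0.0) for v in vals) / n - t * alpha
    return min(obj(-v) for v in vals)

def solve_rcp(samples, alpha, lo=0.0, hi=10.0, iters=60):
    """Maximise x (i.e. minimise c^T x with c = -1) subject to the CVaR
    constraint, by bisection; assumes lo is feasible and hi is not."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if cvar_constraint(mid, samples, alpha) <= 0.0:
            lo = mid
        else:
            hi = mid
    return lo
```

With samples $\{1,2,3,4\}$ and $\alpha = 0.5$ the empirical CVaR of $\xi$ is $3.5$, so the constraint boundary sits at $x = 1/3.5 = 2/7$, which the bisection recovers.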
In other words, we require the CVaR constraint to hold for all distributions that are within a distance $\theta \geq 0$ from the empirical distribution $\widehat{\mathbb{P}}_N := \frac{1}{N} \sum^N_{i=1} \delta_{\widehat{\xi}_i}$ induced by the samples $\{\widehat{\xi}_i\}_{i \in [N]}$, drawn independently from $\mathbb{P}$. This problem is of interest when the decision-maker does not know the distribution $\mathbb{P}$ of the uncertain parameters and instead has access to samples. Thus, the optimal solution of \eqref{eq:cvar-drccp} is robust with respect to a family of distributions that are likely to have given rise to the observed samples. We now present a set of general assumptions. \begin{assumption}\longthmtitle{General assumptions on DRRCP}\label{ass:main1} The following hold: \begin{enumerate} \item the function $F: \mathbb{R}^n \times \Xi \to \mathbb{R}$ is continuous, \label{ass:m-1} \item for every $\xi \in \Xi$, $x \mapsto F(x,\xi)$ is convex on $X$, \label{ass:m-2} \item for every $x \in X$, $\xi \mapsto F(x,\xi)$ is bounded on $\Xi$, and \label{ass:m-3} \item $F$ is uniformly Lipschitz over the set $X$, that is, there exists $L > 0$ such that \begin{align*} \abs{F(x,\xi) - F(x,\xi')} \le L \norm{\xi - \xi'}, \end{align*} for all $\xi, \xi' \in \Xi$ and all $x \in X$. \label{ass:m-4} \end{enumerate} \end{assumption} We first reformulate~\eqref{eq:cvar-drccp} into a form similar to~\eqref{eq:def_equiv_rcp}. The result below shows that the $\inf$ and the $\sup$ operators in the constraint defining the DRRCP~\eqref{eq:cvar-drccp} can be interchanged. The proof is an application of the min-max theorem due to \cite{shapiro2002minimax}, stated as Theorem~\ref{thm:min-max-shapiro} in the appendix. The result holds under the continuity, convexity, and boundedness conditions in Assumption~\ref{ass:main1}, and the proof is presented in the appendix.
\begin{lemma}\longthmtitle{Min-max equality for the constraint function}\label{lemma:drccp_minmax} Suppose Assumption~\ref{ass:main1}~\ref{ass:m-1}-\ref{ass:m-3} hold. Then, for every $x \in X$, we have \begin{align} \underset{\mathbb{Q} \in \mathcal{M}^\theta_N}{\sup} \, \underset{t \in \mathbb{R}}{\inf} & \, \mathbb{E}_{\mathbb{Q}} [(F(x,\xi)+t)_+ -t\alpha] \nonumber \\ = & \underset{t \in \mathbb{R}}{\inf} \, \underset{\mathbb{Q} \in \mathcal{M}^\theta_N}{\sup} \, \mathbb{E}_{\mathbb{Q}} [(F(x,\xi)+t)_+ -t\alpha]. \label{eq:min-max-equality} \end{align} \end{lemma} As a consequence of the above result, we can write the DRRCP~\eqref{eq:cvar-drccp} equivalently as \begin{equation}\label{eq:cvar-equiv-drccp} \begin{aligned} \underset{x \in X, t \in \mathbb{R}}{\min} & \quad c^\intercal x \\ \operatorname{s.t.} & \quad \sup_{\mathbb{Q} \in \mathcal{M}_N^{\theta}} \mathbb{E}_{\mathbb{Q}} [(F(x,\xi)+t)_+ - t\alpha] \le 0. \end{aligned} \end{equation} That is, $x$ is a feasible point for~\eqref{eq:cvar-drccp} if and only if there exists $t$ such that $(x,t)$ is feasible for~\eqref{eq:cvar-equiv-drccp}. Having reformulated the DRRCP into~\eqref{eq:cvar-equiv-drccp}, we move on to the consistency analysis. We require the following assumption throughout. \begin{assumption}\longthmtitle{Sequence of finite-sample guarantees}\label{as:fs-guarantee} Sequences $\setr{\beta_N} \subset (0,1)$ and $\setr{\epsilon_N} \subset (0,\infty)$ are such that $\sum_{N=1}^\infty \beta_N < \infty$, $\lim_{N \to \infty} \epsilon_N = 0$, and the following finite-sample guarantee holds for each $N \in \mathbb{N}$, \begin{align}\label{eq:fs-beta} \mathbb{P}^N (W_1(\mathbb{P},\widehat{\Pb}_N) \le \epsilon_N) \ge 1-\beta_N. \end{align} \end{assumption} The above assumption imposes that as the number of samples increases, the distance between the data-generating distribution and the empirical distribution becomes vanishingly small with higher confidence.
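For instance, a sketch of one admissible choice, assuming a light-tailed data-generating distribution ($\mathbb{E}_{\mathbb{P}}[\exp(\norm{\xi}^a)] < \infty$ for some $a > 1$) and $m \neq 2$, and borrowing the measure-concentration bound invoked in~\cite{esfahani2018data} (the constants $c_1, c_2 > 0$ depend on $a$, the tail bound, and $m$, and are not reproduced here): for all sufficiently small $\epsilon > 0$,
\begin{align*}
\mathbb{P}^N \bigl( W_1(\mathbb{P},\widehat{\Pb}_N) \ge \epsilon \bigr) \le c_1 \exp\bigl(-c_2 N \epsilon^{\max(m,2)}\bigr),
\end{align*}
so the pair
\begin{align*}
\beta_N = N^{-2}, \qquad \epsilon_N = \Bigl( \frac{\log(c_1 N^2)}{c_2 N} \Bigr)^{1/\max(m,2)}
\end{align*}
satisfies $\sum_{N=1}^\infty \beta_N < \infty$, $\epsilon_N \to 0$, and the guarantee~\eqref{eq:fs-beta}.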
Recent works have indeed established the existence of such sequences~\cite{esfahani2018data}. We start our analysis with some preliminary lemmas. \begin{lemma}\longthmtitle{Uniform convergence of distributions~\cite[Lemma 3.7]{esfahani2018data}}\label{le:peyman_conv_dist} Under Assumption~\ref{as:fs-guarantee}, we have \begin{align*} \mathbb{P}^\infty \Bigl( \lim_{N \to \infty} \sup_{\mathbb{Q} \in \mathcal{M}_N^{\epsilon_N}} W_1(\mathbb{Q}, \mathbb{P}) = 0 \Bigr) = 1. \end{align*} \end{lemma} The proof is analogous to the proof of~\cite[Lemma 3.7]{esfahani2018data} and is omitted in the interest of space. The above result shows that if the Wasserstein radius decreases to zero in a carefully chosen manner, then any sequence of distributions drawn from the ambiguity sets converges to the true distribution. \begin{remark}\longthmtitle{Comparison with~\cite{esfahani2018data}} Following the above lemma, \cite{esfahani2018data} proves asymptotic consistency of the optimal value and optimizers of distributionally robust expected cost minimization programs under suitable boundedness and continuity assumptions on the cost function. While constrained optimization programs can be written equivalently as expected cost minimization problems via an indicator function on the feasibility set, the consistency results from \cite{esfahani2018data} do not directly apply as the indicator function is not bounded for points that violate the constraints. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} We now show that as the number of samples increases, the constraint function of the DRRCP's equivalent form~\eqref{eq:cvar-equiv-drccp} converges uniformly to that of the RCP \eqref{eq:def_equiv_rcp}. 
We first define \begin{align} v(x,t) & := \mathbb{E}_\mathbb{P} [(F(x,\xi)+t)_+ - t\alpha], \label{eq:vxt_def} \\ \widehat{v}_N(x,t) & := \sup_{\mathbb{Q} \in \mathcal{M}_N^{\epsilon_N}} \mathbb{E}_{\mathbb{Q}} [(F(x,\xi)+t)_+ - t\alpha], \label{eq:vNxt_def} \end{align} where we note that $\widehat{v}_N$ is a random function as the ambiguity set depends on the samples. We now establish uniform $\mathbb{P}^\infty$-almost sure convergence of $\widehat{v}_N$ from above to $v$. For this, we require the constraint function to be uniformly Lipschitz continuous as stated in Assumption~\ref{ass:main1}. \begin{lemma}\longthmtitle{Uniform convergence of $\widehat{v}_N$ from above to $v$}\label{le:unif-conv-v} Let Assumption~\ref{ass:main1}~\ref{ass:m-1},~\ref{ass:m-2} and~\ref{ass:m-4} hold. Further, suppose Assumption~\ref{as:fs-guarantee} holds. Then, the following hold: \begin{align*} \mathbb{P}^\infty \Bigl( v(x,t) \le \widehat{v}_N(x,t) \text{ for all sufficiently large $N$} \Bigr) & = 1, \\ \mathbb{P}^\infty \Bigl( \lim_{N \to \infty} \sup_{x \in X, t \in \mathbb{R}} |\widehat{v}_N(x,t) - v(x,t)| = 0 \Bigr) & = 1, \end{align*} where the first equality is satisfied for all $(x,t) \in X \times \mathbb{R}$. \end{lemma} \begin{proof} Fix any $(x,t) \in X \times \mathbb{R}$. From~\eqref{eq:fs-beta}, we deduce that the following inequality holds with probability at least $1-\beta_N$, \begin{align*} \mathbb{E}_\mathbb{P} [(F(x,\xi) + t)_+ - t \alpha ] \le \sup_{\mathbb{Q} \in \mathcal{M}_N^{\epsilon_N}} \mathbb{E}_\mathbb{Q} [(F(x,\xi) + t)_+ - t \alpha]. \end{align*} That is, $\mathbb{P}^N (v(x,t) \le \widehat{v}_N(x,t) ) \ge 1-\beta_N$, for all $N \in \mathbb{N}$. Since $\sum_{N=1}^\infty \beta_N < \infty$, the Borel--Cantelli Lemma~\cite[Theorem 2.3.6]{durrett2010book} yields the first assertion.
From the uniform Lipschitz condition on $F$ stated in Assumption~\ref{ass:main1}~\ref{ass:m-4}, we deduce that for any fixed $(x,t) \in X \times \mathbb{R}$ and any $\xi, \xi' \in \Xi$, \begin{align*} & \abs{\bigl((F(x,\xi) + t)_+ - t \alpha \bigr) - \bigl( (F(x,\xi') + t)_+ - t \alpha \bigr)} \\ & \qquad \qquad \qquad= \abs{(F(x,\xi) + t)_+ - (F(x,\xi') + t)_+} \\ & \qquad \qquad \qquad \le \abs{F(x,\xi) - F(x,\xi')} \le L \norm{\xi - \xi'}, \end{align*} where the first inequality holds because the operator $( \cdot )_+$ is Lipschitz with constant unity. The above reasoning implies that the map $\xi \mapsto (F(x,\xi) + t)_+ - t \alpha$ is Lipschitz with constant $L$, uniformly over $(x,t) \in X \times \mathbb{R}$. Using this fact in the dual form of the definition of the Wasserstein metric \cite{esfahani2018data}, we conclude that \begin{align} \big| \mathbb{E}_{\mathbb{P}_1}[(F(x,\xi) + t)_+ - t \alpha] - & \mathbb{E}_{\mathbb{P}_2}[(F(x,\xi) + t)_+ - t \alpha] \big| \notag \\ & \le L W_1(\mathbb{P}_1,\mathbb{P}_2),\label{eq:lip-wass} \end{align} for any two distributions $\mathbb{P}_1$ and $\mathbb{P}_2$. Consider now a sequence of positive real numbers $\delta_N$, $N\in\mathbb{N}$, such that $\lim_{N \to \infty} \delta_N = 0$. For each $(x,t) \in X \times \mathbb{R}$, let $\mathbb{Q}_N^{(x,t)} \in \mathcal{M}_N^{\epsilon_N}$ be a $\delta_N$-optimal distribution such that \begin{align}\label{eq:v-ineq} \mathbb{E}_{\mathbb{Q}_N^{(x,t)}} & [(F(x,\xi)+t)_+ - t\alpha] \le \notag \\ & \widehat{v}_N(x,t) \leq \mathbb{E}_{\mathbb{Q}_N^{(x,t)}} [(F(x,\xi)+t)_+ - t\alpha] + \delta_N. \end{align} Such a distribution exists by the definition of the supremum in~\eqref{eq:vNxt_def}.
Next, we have \begin{align} |\widehat{v}_N(x,t) - v(x,t)| & \leq |\mathbb{E}_{\mathbb{Q}_N^{(x,t)}} [(F(x,\xi)+t)_+ - t\alpha] \notag \\ & \qquad - \mathbb{E}_{\mathbb{P}} [(F(x,\xi)+t)_+ - t\alpha]| + \delta_N \notag \\ & \leq L W_1(\mathbb{Q}_N^{(x,t)},\mathbb{P}) + \delta_N \notag \\ & \leq L \sup_{\mathbb{Q} \in \mathcal{M}_N^{\epsilon_N}} W_1(\mathbb{Q},\mathbb{P}) + \delta_N. \label{eq:series-q} \end{align} The first inequality above uses~\eqref{eq:v-ineq}, the second follows from~\eqref{eq:lip-wass}, and the last holds because $\mathbb{Q}_N^{(x,t)} \in \mathcal{M}_N^{\epsilon_N}$. Since the right-hand side of~\eqref{eq:series-q} is independent of $(x,t)$, we have \begin{align*} \!\! \sup_{(x,t) \in X \times \mathbb{R}} \!\!\! |\widehat{v}_N(x,t) - v(x,t)| \leq L \!\!\sup_{\mathbb{Q} \in \mathcal{M}_N^{\epsilon_N}} \!\!\!W_1(\mathbb{Q},\mathbb{P}) + \delta_N. \end{align*} The proof then concludes by invoking Lemma~\ref{le:peyman_conv_dist}. \end{proof} We note here that the convergence from above of $\widehat{v}_N$ to $v$ is due to the summability of $\beta_N$ in Assumption~\ref{as:fs-guarantee}. If one only needs convergence, then $\epsilon_N$ tending to zero is sufficient. We now present our main result. We denote by $\Jrcp$ the optimal value of~\eqref{eq:def_rcp} and, for a given $N$, we let $\mathsf{J}^{\mathtt{DRRCP}}_N$ and $x^{\mathtt{DRRCP}}_N$ denote the optimal value and an optimizer of~\eqref{eq:cvar-drccp}, respectively, where $\theta$ is set to $\epsilon_N$ satisfying Assumption~\ref{as:fs-guarantee}, i.e., $\epsilon_N$ is chosen depending on $N$ and $\beta_N$ satisfying~\eqref{eq:fs-beta}. \begin{theorem}\longthmtitle{Asymptotic consistency of the DRRCP~\eqref{eq:cvar-drccp}}\label{th:asymp-cvar} Let Assumptions~\ref{ass:main1} and~\ref{as:fs-guarantee} hold.
Assume that the feasibility set of~\eqref{eq:def_rcp} has a nonempty interior and that the optimizers of~\eqref{eq:def_rcp} belong to a compact set $\mathcal{Y} \subset X$. Moreover, assume that for sufficiently large $N$ and any sequence of i.i.d.\ samples $\{\widehat{\xi}_i\}_{i=1}^N$, optimizers of~\eqref{eq:cvar-drccp} with $\theta$ replaced with $\epsilon_N$ belong to $\mathcal{Y}$. Then, the following statements hold $\mathbb{P}^\infty$-almost surely: \begin{enumerate} \item $\Jrcp \le \mathsf{J}^{\mathtt{DRRCP}}_N$ for all sufficiently large $N$, \item $\mathsf{J}^{\mathtt{DRRCP}}_N \to \Jrcp$ as $N \to \infty$, and \item any accumulation point of any sequence of optimizers $\{x^{\mathtt{DRRCP}}_N\}_{N\in\mathbb{N}}$ is an optimal solution of the problem \eqref{eq:def_rcp}. \end{enumerate} \end{theorem} \begin{proof} The first statement follows from the first assertion of Lemma~\ref{le:unif-conv-v}. For the next two statements, the proof strategy is to show that the optimal values and optimizers of~\eqref{eq:cvar-equiv-drccp} approach those of~\eqref{eq:def_equiv_rcp}. All convergence statements in this proof involving random quantities hold $\mathbb{P}^\infty$-almost surely, and we omit restating this fact for the sake of brevity. Denote the feasibility sets of~\eqref{eq:def_equiv_rcp} and~\eqref{eq:cvar-equiv-drccp} as $\FF^{\mathtt{RCP}}$ and $\FF^{\mathtt{DRRCP}}_N$, respectively. Then, $\FF^{\mathtt{RCP}} = \setdef{(x,t) \in X \times \mathbb{R}}{v(x,t) \le 0}$ and $\FF^{\mathtt{DRRCP}}_N = \setdef{(x,t) \in X \times \mathbb{R}}{\widehat{v}_N(x,t) \le 0}$. Recall that the set $\FF^{\mathtt{DRRCP}}_N$ is random. \emph{Step 1: Defining $\mathcal{W}$:} Since $F$ is continuous, $\mathcal{Y}$ is compact, and $F(x,\cdot)$ is bounded over $\Xi$ for every $x \in \mathcal{Y}$, we deduce that the set $\setdefb{t}{\mathbb{E}_{\mathbb{P}} [(F(x,\xi) +t)_+] - t\alpha \le 0, x \in \mathcal{Y}}$ is compact.
Recall that optimizers of~\eqref{eq:def_rcp} belong to $\mathcal{Y}$. Thus, there exists a compact set $\mathcal{T} \subset \mathbb{R}$ such that optimizers of~\eqref{eq:def_equiv_rcp} belong to the set $\mathcal{W} := \mathcal{Y} \times \mathcal{T}$. Similarly, for all sufficiently large $N$ and all sequences of $N$ i.i.d.\ samples, the set of optimizers of~\eqref{eq:cvar-equiv-drccp} (with $\theta$ replaced with $\epsilon_N$) belongs to the set $\mathcal{W}$. Since the intersection of $\mathcal{Y}$ and the feasibility set of~\eqref{eq:def_rcp} has a nonempty interior, one can assume, without loss of generality, that $\mathcal{W} \cap \FF^{\mathtt{RCP}}$ has a nonempty interior. \emph{Step 2: Establishing $\FF^{\mathtt{DRRCP}}_N \to \FF^{\mathtt{RCP}}$:} Following Lemma~\ref{le:unif-conv-v}, we know that $\widehat{v}_N$ converges uniformly $\mathbb{P}^\infty$-almost surely to $v$. Using this fact, one can establish convergence, defined in an appropriate sense, of $\FF^{\mathtt{DRRCP}}_N$ to $\FF^{\mathtt{RCP}}$. Specifically, we will show \begin{align}\label{eq:conv-dist-set} \lim_{N \to \infty} \sup_{(x,t) \in \FF^{\mathtt{DRRCP}}_N \cap \mathcal{W} } \operatorname{dist}((x,t), \FF^{\mathtt{RCP}}) = 0, \end{align} where $\operatorname{dist}( (x,t) , \FF^{\mathtt{RCP}})$ is the distance of the point $(x,t)$ to the set $\FF^{\mathtt{RCP}}$, that is, $\operatorname{dist}( (x,t), \FF^{\mathtt{RCP}}) = \inf_{(x',t') \in \FF^{\mathtt{RCP}}} \norm{(x,t) - (x',t')}$. We proceed with a contradiction argument to show~\eqref{eq:conv-dist-set}. Recall that the claim is that~\eqref{eq:conv-dist-set} holds $\mathbb{P}^\infty$-almost surely.
Now, for the sake of contradiction, assume that there exists a set of sequences of i.i.d.\ samples \begin{align*} \mathcal{H} := \setdefb{ \{\widehat{\xi}_N (\sigma) \}_{N \in \mathbb{N}}}{\sigma \in \Sigma} \end{align*} that has positive measure under the distribution $\mathbb{P}^\infty$ and such that each element of $\mathcal{H}$ violates the limit~\eqref{eq:conv-dist-set}. Here, $\Sigma$ is some uncountable index set. To be more precise, $\mathcal{H}$ gives rise to a set of sequences $\setdef{\{\FF^{\mathtt{DRRCP}}_N(\sigma)\}_{N \in \mathbb{N}}}{ \sigma \in \Sigma}$ such that each element in this set violates~\eqref{eq:conv-dist-set}. This in turn implies that for each $\sigma \in \Sigma$, one can assign a sequence $\setr{(x_N(\sigma),t_N(\sigma)) \in \FF^{\mathtt{DRRCP}}_N(\sigma) \cap \mathcal{W}}_{N \in \mathbb{N}}$ and a constant $\gamma_\sigma > 0$ such that \begin{align}\label{eq:contra-dist} \operatorname{dist} \big( (x_N(\sigma),t_N(\sigma)), \FF^{\mathtt{RCP}} \big) > \gamma_\sigma, \quad \forall \, N \in \mathbb{N}. \end{align} Since $\mathcal{W}$ is compact, there exists a subsequence of $\{(x_N(\sigma),t_N(\sigma))\}$ that converges to some $(\bar{x}(\sigma),\bar{t}(\sigma)) \in \mathcal{W}$. We denote this subsequence by $\{(x_N(\sigma),t_N(\sigma))\}$ for convenience. Then, due to the continuity of $v$, for any $\epsilon > 0$, there exists $N_1(\sigma) \in \mathbb{N}$ such that \begin{align*} \abs{v(x_N(\sigma), t_N(\sigma)) - v(\bar{x}(\sigma),\bar{t}(\sigma))} & \le \epsilon/2 \end{align*} for all $N \ge N_1(\sigma)$. Moreover, by the $\mathbb{P}^\infty$-almost sure uniform convergence of $\widehat{v}_N$ to $v$, for any $\epsilon > 0$ and almost all $\sigma \in \Sigma$, there exists $N_2(\sigma) \in \mathbb{N}$ such that \begin{align*} \abs{\widehat{v}_N (x_N (\sigma), t_N (\sigma)) - v(x_N(\sigma), t_N(\sigma))} & \le \epsilon/2, \end{align*} for all $N \ge N_2(\sigma)$.
Using the above two inequalities, we conclude that for almost all $\sigma \in \Sigma$ and any $\epsilon > 0$, there exists $\bar{N}(\sigma)$ such that \begin{align*} \abs{\widehat{v}_N(x_N(\sigma),t_N(\sigma)) - v(\bar{x}(\sigma),\bar{t}(\sigma))} \le \epsilon, \quad \forall N \ge \bar{N}(\sigma). \end{align*} This implies that $\lim_{N \to \infty} \widehat{v}_N(x_N(\sigma),t_N(\sigma)) = v(\bar{x}(\sigma),\bar{t}(\sigma))$ for almost all $\sigma$. Since $\widehat{v}_N(x_N(\sigma),t_N(\sigma)) \le 0$ for all $N$, we get $v(\bar{x}(\sigma),\bar{t}(\sigma)) \le 0$, that is, $(\bar{x}(\sigma),\bar{t}(\sigma)) \in \FF^{\mathtt{RCP}}$ for almost all $\sigma \in \Sigma$. This contradicts~\eqref{eq:contra-dist}. Hence, we have established~\eqref{eq:conv-dist-set}. \emph{Step 3: Convergence of optimizers and optimal values:} Now let $(x_N,t_N) \in \FF^{\mathtt{DRRCP*}}_N$ for all $N$, where $\FF^{\mathtt{DRRCP*}}_N$ is the set of optimal solutions of~\eqref{eq:cvar-equiv-drccp}. Since the sequence $\setr{(x_N,t_N)}$ is contained in the compact set $\mathcal{W}$, by passing to a subsequence (not relabeled), we may assume that $(x_N,t_N) \to (\bar{x},\bar{t})$ for some $(\bar{x},\bar{t}) \in \mathcal{W}$. Since $\FF^{\mathtt{RCP}}$ is closed and~\eqref{eq:conv-dist-set} holds, we get $(\bar{x},\bar{t}) \in \FF^{\mathtt{RCP}}$. By continuity, \begin{align}\label{eq:first-v-ineq-1} \lim_{N \to \infty} c^\intercal x_N = c^\intercal \bar{x} \ge \Jrcp, \end{align} where $\Jrcp$ is the optimal value of~\eqref{eq:def_rcp}. Now, let $(x^*,t^*) \in \FF^{\mathtt{RCP*}}$, where $\FF^{\mathtt{RCP*}}$ is the set of optimal solutions of \eqref{eq:def_equiv_rcp}. Since $\FF^{\mathtt{RCP}}$ is convex and its interior is nonempty, there exists a sequence $\setr{(x_k,t_k)}_{k \in \mathbb{N}}$ belonging to the interior of $\FF^{\mathtt{RCP}}$ such that $(x_k,t_k) \to (x^*,t^*)$.
This implies that for any $\epsilon > 0$, there exists $\bar{k}$ satisfying \begin{align}\label{eq:t-exp} c^\intercal x_{\bar{k}} - \Jrcp = c^\intercal x_{\bar{k}} - c^\intercal x^* \le \epsilon. \end{align} Since $\setr{(x_k,t_k)}$ belongs to the interior of $\FF^{\mathtt{RCP}}$ and $\widehat{v}_N$ converges to $v$ uniformly over $X \times \mathbb{R}$, we deduce that $(x_{\bar{k}},t_{\bar{k}}) \in \FF^{\mathtt{DRRCP}}_N$ for all sufficiently large $N$. For such $N$, optimality of $x_N$ implies that $c^\intercal x_{\bar{k}} \ge c^\intercal x_N$. Using this fact in~\eqref{eq:t-exp}, we get $\Jrcp \ge c^\intercal x_{\bar{k}} - \epsilon \ge c^\intercal x_N - \epsilon$. Taking $N \to \infty$ gives $\Jrcp \ge c^\intercal \bar{x} - \epsilon$. Since $\epsilon$ can be chosen arbitrarily small, we obtain $\Jrcp \ge c^\intercal \bar{x}$. Combined with~\eqref{eq:first-v-ineq-1}, we conclude $c^\intercal \bar{x} = c^\intercal x^*$ and hence $\bar{x} \in \FF^{\mathtt{RCP*}}$. Finally, the argument holds for any convergent subsequence of $\setr{(x_N,t_N)}$. The convergence of the optimum values then follows by continuity. \end{proof} The first part of our result, that $\Jrcp \le \mathsf{J}^{\mathtt{DRRCP}}_N$ for all sufficiently large $N$, signifies that the solution of the DRRCP is a conservative approximation of the solution of the RCP in the asymptotic regime. \begin{remark}\longthmtitle{Discussion on assumptions of Theorem~\ref{th:asymp-cvar}} Our assumption on the interior of the feasibility set of \eqref{eq:def_rcp} being nonempty is a fairly standard assumption in consistency analysis, e.g.,~\cite[Theorem 5.5 and Proposition 5.30]{shapiro2009lectures}. This ensures that the sample-based optimization problem (problem stated in \eqref{eq:cvar-drccp}) is feasible for large $N$. A sufficient condition for this assumption to hold is the existence of $x \in X$ such that $F(x,\xi) < 0$ for all $\xi \in \Xi$; this condition can be checked without knowing $\mathbb{P}$ or samples. 
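The remark's mention of the empirical distribution suggests a quick numerical check of the mechanism behind Lemma~\ref{le:peyman_conv_dist}: the empirical distribution approaches $\mathbb{P}$ in the $1$-Wasserstein metric as the sample size grows. The sketch below is one-dimensional with synthetic Gaussian data; `w1_empirical` is an illustrative helper (not from the paper) based on the inverse-CDF representation of $W_1$ on the real line.

```python
import numpy as np

def w1_empirical(samples_a, samples_b, grid=10_000):
    """Approximate 1-Wasserstein distance between two empirical 1-D
    distributions via the quantile (inverse-CDF) representation:
    W_1 = integral over u in (0,1) of |F_a^{-1}(u) - F_b^{-1}(u)|."""
    qs = (np.arange(grid) + 0.5) / grid
    return np.mean(np.abs(np.quantile(samples_a, qs) - np.quantile(samples_b, qs)))

rng = np.random.default_rng(0)
reference = rng.normal(size=200_000)   # stand-in for the true distribution P
d_small = w1_empirical(rng.normal(size=100), reference)
d_large = w1_empirical(rng.normal(size=10_000), reference)
```

With more samples the empirical measure is closer to the reference in $W_1$, which is exactly the concentration phenomenon that Assumption~\ref{as:fs-guarantee} quantifies through $\epsilon_N$ and $\beta_N$.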
Similarly, our assumption on the existence of a compact set $\mathcal{Y} \subset X$ containing the optimizers of \eqref{eq:def_rcp} and \eqref{eq:cvar-drccp} is also a standard one for consistency analysis \cite[Theorem 5.3 and Proposition 5.3]{shapiro2009lectures}, and is required to establish the convergence $\FF^{\mathtt{DRRCP}}_N \to \FF^{\mathtt{RCP}}$. It is trivially satisfied if $X$ is compact. If $X$ is unbounded, this assumption holds if $x \mapsto \mathbb{E}_{\mathbb{P}} [F(x,\xi)]$ and $x \mapsto \mathbb{E}_{\mathbb{Q}}[F(x,\xi)]$ are coercive for some distribution $\mathbb{Q} \in \mathcal{M}^{\theta}_N$ (e.g., the empirical distribution). \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} Next, we analyze the consistency of distributionally robust chance-constrained programs. \section{Distributionally robust chance-constrained programs and their consistency}\label{sec:drccp} Consider the \emph{chance-constrained program} (CCP), \begin{equation}\label{eq:def_ccp} \begin{aligned} \underset{x\in X}{\min} & \quad c^\intercal x \\ \operatorname{s.t.} & \quad \mathbb{P}( F(x,\xi) \le 0 ) \geq 1-\alpha, \end{aligned} \end{equation} where we borrow the notation from Section~\ref{sec:drrcp}. In comparison with the RCP~\eqref{eq:def_rcp}, here we require the uncertain constraint $F(x,\xi) \le 0$ to hold with high probability, i.e., at least $1-\alpha$. Note that this constraint is equivalent to $\VaR{\mathbb{P}}_\alpha(F(x,\xi)) \leq 0$ and, in general, the set of points satisfying the constraint is non-convex.
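The chance constraint in~\eqref{eq:def_ccp} can be checked empirically once samples are available: the fraction of samples with $F(x,\widehat{\xi}_i) \le 0$ approximates $\mathbb{P}(F(x,\xi) \le 0)$. A minimal sketch, with a placeholder constraint $F(x,\xi) = \xi - x$ and synthetic Gaussian samples (neither taken from the paper's setup):

```python
import numpy as np

def empirical_chance_constraint(F, x, samples, alpha):
    """Sample-average check of P(F(x, xi) <= 0) >= 1 - alpha.
    Returns the empirical satisfaction probability and a feasibility flag."""
    frac = np.mean(F(x, samples) <= 0.0)
    return frac, frac >= 1.0 - alpha

def F(x, xi):
    # Placeholder affine constraint: F(x, xi) = xi - x, so
    # P(F(x, xi) <= 0) = P(xi <= x) = Phi(x) for xi ~ N(0, 1).
    return xi - x

rng = np.random.default_rng(1)
samples = rng.normal(size=50_000)
frac, feasible = empirical_chance_constraint(F, 2.0, samples, alpha=0.1)
```

For $x = 2$ the true satisfaction probability is $\Phi(2) \approx 0.977 \ge 0.9$, so the constraint holds at $\alpha = 0.1$; for $x = 0$ it is $0.5$ and the constraint fails, illustrating the thresholding (and hence non-convex) nature of the feasibility set.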
The distributionally robust version of the CCP~\eqref{eq:def_ccp}, which we term the \emph{distributionally robust chance-constrained program} (DRCCP), is given as \begin{equation}\label{eq:def_drccp} \begin{aligned} \underset{x\in X}{\min} & \quad c^\intercal x \\ \operatorname{s.t.} & \quad \inf_{\mathbb{Q} \in \mathcal{M}^\theta_N} \mathbb{Q}( F(x,\xi) \le 0 ) \geq 1-\alpha, \end{aligned} \end{equation} where $\mathcal{M}^\theta_N$ is the ambiguity set defined in~\eqref{eq:wasserstein-set}. We next present the consistency analysis for the DRCCP. As explained before, the chance constraint can render the feasibility set non-convex. Therefore, consistency requires the following conditions, which differ from those in Assumption~\ref{ass:main1}. \begin{assumption}\label{as:regularity-ccp} \longthmtitle{Regularity of CCP} The map $F$ is uniformly continuous and either of the following holds: \begin{enumerate} \item The distribution $\mathbb{P}$ satisfies \begin{align*} \mathbb{P}(\setdef{\xi \in \Xi}{F(x,\xi) = 0}) = 0, \quad \text{ for all } x \in X. \end{align*} \item The set-valued map $H(x) := \setdef{\xi \in \Xi}{F(x,\xi) \le 0}$ is convex-valued and continuous over $X$ (continuity here meaning both inner and outer semicontinuity of the set-valued map), and for any $x \in X$, $\mathbb{P}(\mathrm{bd}\, H(x)) = 0$, where $\mathrm{bd}\, H(x)$ denotes the boundary of the set $H(x)$. \end{enumerate} \end{assumption} We have the following consistency result. The proof is largely based on results from~\cite{guo2017convergence}, where the consistency analysis was done for ambiguity sets that are not random. A key step in the proof is to establish almost sure convergence of the feasibility set of the DRCCP to the feasibility set of the CCP, which requires the constraint function to be continuous.
Assumption \ref{as:regularity-ccp}, inspired by \cite{guo2017convergence}, states complementary sufficient conditions which ensure this; \cite[Example 4.3]{guo2017convergence} illustrates how Assumption \ref{as:regularity-ccp} (ii) holds in cases where Assumption \ref{as:regularity-ccp} (i) fails.\footnote{The assumption is satisfied for several classes of functions. For example, if the constraint function has an affine separable form $F(x,\xi) = Ax + B\xi$, $B$ has full column rank, and $\mathbb{P}$ has a continuous distribution, then $\mathbb{P}(F(x,\xi) = 0) = 0$ for any $x$.} \begin{theorem}\longthmtitle{Asymptotic consistency of the DRCCP~\eqref{eq:def_drccp}}\label{theorem:asymp-drccp} Let Assumptions~\ref{as:fs-guarantee} and~\ref{as:regularity-ccp} hold. Assume that there exists a compact set $\mathcal{Y} \subset X$ such that the optimizers of~\eqref{eq:def_ccp} belong to $\mathcal{Y}$. Suppose there exists an optimizer $x^*$ of~\eqref{eq:def_ccp} that belongs to the closure of the set $\setdef{x \in X}{\mathbb{P}(F(x,\xi) \le 0) > 1-\alpha}$. Moreover, assume that for sufficiently large $N$ and any sequence of i.i.d.\ samples $\{\widehat{\xi}_i\}_{i=1}^N$, optimizers of~\eqref{eq:def_drccp} with $\theta$ replaced with $\epsilon_N$ belong to $\mathcal{Y}$. Then, the following hold $\mathbb{P}^\infty$-almost surely: \begin{enumerate} \item $\Jccp \le \mathsf{J}^{\mathtt{DRCCP}}_N$ for all sufficiently large $N$, \item $\mathsf{J}^{\mathtt{DRCCP}}_N \to \Jccp$ as $N \to \infty$, and \item any accumulation point of any sequence of optimizers $\{x^{\mathtt{DRCCP}}_N\}_{N\in\mathbb{N}}$ is an optimizer of the problem \eqref{eq:def_ccp}. \end{enumerate} Here, $\Jccp$ is the optimal value of~\eqref{eq:def_ccp} and, for a given $N$, $\mathsf{J}^{\mathtt{DRCCP}}_N$ and $x^{\mathtt{DRCCP}}_N$ are the optimal value and an optimizer of~\eqref{eq:def_drccp}, respectively, where $\theta$ is set to $\epsilon_N$.
\end{theorem} \begin{proof} In view of the compactness assumption, one can take $X = \mathcal{Y}$ without loss of generality. Define \begin{align} v^{\mathtt{CCP}}(x) & := \mathbb{P} (F(x,\xi) \le 0), \label{eq:vccp_def} \\ \widehat{v}^{\mathtt{DRCCP}}_N(x) & := \inf_{\mathbb{Q} \in \mathcal{M}_N^{\epsilon_N}} \mathbb{Q}(F(x,\xi) \le 0), \label{eq:vdrccp_def} \end{align} where $\{\epsilon_N\}_{N\in\mathbb{N}} \subset (0,\infty)$ is a sequence satisfying Assumption~\ref{as:fs-guarantee}. Using Assumption~\ref{as:fs-guarantee} and following a reasoning similar to that in the proof of Lemma~\ref{le:unif-conv-v}, we conclude that for any $x \in X$, \begin{align*} \mathbb{P}^\infty \Bigl(\widehat{v}^{\mathtt{DRCCP}}_N(x) \le v^{\mathtt{CCP}}(x) \text{ for all sufficiently large $N$} \Bigr) = 1. \end{align*} Consequently, $\Jccp \le \mathsf{J}^{\mathtt{DRCCP}}_N$ for all sufficiently large $N$. Regarding the convergence statements, note that from~\cite[Theorem 4.9]{guo2017convergence}, Assumption~\ref{as:regularity-ccp} implies continuity of $v^{\mathtt{CCP}}$. Further, from Lemma~\ref{le:peyman_conv_dist}, we deduce that $\mathcal{M}_N^{\epsilon_N}$ converges weakly to $\mathbb{P}$ almost surely. That is, almost surely, any sequence $\setr{\mathbb{P}_N \in \mathcal{M}_N^{\epsilon_N}}$ converges weakly to $\mathbb{P}$. Thus, from~\cite[Propositions 5.2 and 5.3 and Theorem 3.2]{guo2017convergence}, we obtain $\mathbb{P}^\infty \Bigl( \lim_{N \to \infty} \sup_{x \in X} |\widehat{v}^{\mathtt{DRCCP}}_N(x) - v^{\mathtt{CCP}}(x)| = 0 \Bigr) = 1$. The proof concludes by applying~\cite[Theorem 3.4]{guo2017convergence}. \end{proof} \section{Conclusion}\label{sec:conclusions} We have studied the asymptotic consistency of data-driven distributionally robust risk-constrained (with risk captured by the CVaR) and chance-constrained optimization under Wasserstein ambiguity sets.
As a consequence, under suitable assumptions on the problem data, the distributionally robust versions of the problems can be used as ``robust approximators'' of the original problems. In the future, we plan to analyze the rate of convergence of the consistency arguments. In particular, we wish to obtain confidence intervals for the optimizers of the original problems using the solutions of the distributionally robust counterparts. \section*{Appendix} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thetheorem}{A.\arabic{theorem}} The following result aids us in proving Lemma \ref{lemma:drccp_minmax}. \begin{theorem}\longthmtitle{Stochastic min-max equality \cite{shapiro2002minimax}}\label{thm:min-max-shapiro} Let $\mathcal{M}$ be a nonempty and weakly compact set of probability measures on $(\Xi,\mathcal{B}(\Xi))$. Consider a function $g:\mathbb{R}^n \times \Xi \to \mathbb{R}$. Let $T \subseteq \mathbb{R}^n$ be a closed convex set. Assume that there exists a convex neighborhood $V$ of $T$ such that for all $t \in V$, the function $g(t,\cdot)$ is measurable, integrable with respect to all $\mathbb{P} \in \mathcal{M}$, and $\sup_{\mathbb{P} \in \mathcal{M}} \mathbb{E}_{\mathbb{P}} [g(t,\xi)] < \infty$. Further assume that $g(\cdot,\xi)$ is convex on $V$ for all $\xi \in \Xi$. Let $\bar{t} \in \operatorname{argmin}_{t \in T} \sup_{\mathbb{P} \in \mathcal{M}} \mathbb{E}_{\mathbb{P}}[g(t,\xi)]$. Assume that for every $t$ in a neighborhood of $\bar{t}$, the function $g(t,\cdot)$ is bounded and upper semicontinuous on $\Xi$ and the function $g(\bar{t},\cdot)$ is bounded and continuous on $\Xi$. Then, \begin{equation*} \inf_{t \in T} \sup_{\mathbb{P} \in \mathcal{M}} \mathbb{E}_{\mathbb{P}}[g(t,\xi)] = \sup_{\mathbb{P} \in \mathcal{M}} \inf_{t \in T} \mathbb{E}_{\mathbb{P}}[g(t,\xi)]. \end{equation*} \end{theorem} \medskip \noindent {\bf Proof of Lemma \ref{lemma:drccp_minmax}:} We suppress the variable $x$ in the proof for better readability.
We verify that the hypotheses of the min-max theorem (Theorem \ref{thm:min-max-shapiro}) hold. Drawing a parallel in notation between our case and Theorem~\ref{thm:min-max-shapiro}, note that here $\mathbb{R}$ plays the role of both $T$ and $V$; $\mathcal{M}^\theta_N$ that of $\mathcal{M}$; and the function $g$ is $g(t,\xi):=(F(\xi)+t)_+ - t\alpha$. Recall that $\mathcal{M}^\theta_N$ is weakly compact. Note that $g$ is continuous since $F$ is. Further, since $F$ is bounded, for every $t \in \mathbb{R}$, the function $\xi \mapsto g(t,\xi)$ is bounded and $\sup_{\mathbb{Q} \in \mathcal{M}^\theta_N} \mathbb{E}_{\mathbb{Q}} [g(t,\xi)] < \infty$. Finally, for every $\xi \in \Xi$, $t \mapsto g(t,\xi)$ is convex. Thus, to conclude the proof it remains to show that the infimum on the right-hand side of~\eqref{eq:min-max-equality} is attained. To this end, define the function $h(t):= \underset{\mathbb{Q} \in \mathcal{M}^\theta_N}{\sup} \mathbb{E}_{\mathbb{Q}} [(F(\xi)+t)_+ -t\alpha]$. Now, for any $\mathbb{Q} \in \mathcal{M}^\theta_N$, the function $t \mapsto \mathbb{E}_{\mathbb{Q}} [(F(\xi)+t)_+ - t \alpha]$ is convex and real-valued. Since $h$ is the supremum of a family of such functions, $h$ is convex and lower semicontinuous~\cite[Proposition 2.1.2]{JBHU-CL:04}. Further, for any $\xi$, $(F(\xi)+t)_+ - t \alpha \to \infty$ as $\abs{t} \to \infty$. This fact, along with the boundedness of $F$, implies $h(t) \to \infty$ as $\abs{t} \to \infty$. Thus, the infimum of $h$ over $\mathbb{R}$ is attained. \hfill $\blacksquare$
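The coercivity claim at the end of the proof can be spelled out. Assuming $\abs{F(\xi)} \le M$ for all $\xi \in \Xi$ (boundedness of $F$) and $\alpha \in (0,1)$, the following bounds are uniform in $\xi$, and hence transfer to $h$:

```latex
% Coercivity of g(t,\xi) = (F(\xi)+t)_+ - t\alpha, for |F(\xi)| \le M and 0 < \alpha < 1:
t \ge M:\quad (F(\xi)+t)_+ - t\alpha \;=\; F(\xi) + (1-\alpha)\,t \;\ge\; -M + (1-\alpha)\,t \;\xrightarrow[t \to +\infty]{}\; \infty,
\qquad
t \le -M:\quad (F(\xi)+t)_+ - t\alpha \;=\; -\alpha\,t \;=\; \alpha\,\abs{t} \;\xrightarrow[t \to -\infty]{}\; \infty.
```

In the first case $F(\xi)+t \ge 0$ so the positive part is inactive, while in the second case $F(\xi)+t \le 0$ so the positive part vanishes; both limits diverge precisely because $0 < \alpha < 1$.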
\section{Introduction} The primary application areas for copyspace detection include generation of email banners, hero-pages, and call-to-actions, most often performed manually, with notable advances in generation \cite{vempati2019enabling,luban_2018,Hua_2018}. The process of graphic asset development is time-consuming, requiring designers to curate and manipulate media and vector graphics, build up layer stacks, and finally place and format text, all while balancing style, brand consistency and tone in the design. \section{Data and Methods} A set of $20\,000$ license-free images \cite{unsplash} was labeled by a team of experts. High-level design principles and compositional rules from graphic design theory were explicitly encoded within the labels. Figure \ref{fig:results} shows how we further divide images into four categories of ascending complexity of design. Additional label synthesis was performed using randomly decorated polygons to augment our training data. \begin{figure*}[ht!] \subfloat[\label{genworkflow}]{ \includegraphics[trim=0 0 0 0, clip, width=0.27\textwidth]{Images/NeurIPS_big.png}} \hspace{\fill} \subfloat[\label{synthetic_images}]{ \includegraphics[trim=0 -60 0 0, clip, width=0.35\textwidth]{Images/NeurIPS_synthetic.png}} \hspace{\fill} \subfloat[\label{mt-simtask}]{ \includegraphics[trim=0 -160 0 -80, clip, width=0.31\textwidth]{Images/perplex.png}}\\ \subfloat[\label{genworkflowcoco}]{ \includegraphics[trim=0 -50 0 0, clip, width=0.38\textwidth]{Images/coco.png}} \hspace{\fill} \subfloat[\label{pyramidprocess}]{ \includegraphics[trim=0 0 0 0, clip, width=0.59\textwidth]{Images/res.png}} \caption{Ground truth and predicted regions are rendered green and magenta respectively.
(a) Class 1--4 images indicate increasing difficulty; (b) Human-annotated and synthetic labels; (c) Good predictions are sometimes disjoint from annotations; (d) Examples of copyspace detection applied to the Coco data set~\cite{lin2015microsoft}; (e) Varying the copyspace algorithm parameters yields multiple ad generations for a single image.} \label{fig:results} \end{figure*} We explore the copyspace problem utilizing frameworks for object detection \citep{DBLP:journals/corr/RenHG015,DBLP:journals/corr/RedmonDGF15,glenn_jocher_2020_3983579}. The Yolov5 Github repository is cited in lieu of a corresponding publication on arXiv, a controversy we will not delve into beyond providing more model inter-comparison results. \section{Results and Discussion} Table \ref{table:1} shows results of a copyspace detection sample intercomparison; among this set of models, regression-based Yolo models generally show higher mAP and IoU performance with fewer parameters. One way copyspace detection is distinct from object detection is that there is no single concrete copyspace from which to draw rectangular bounds. When analyzing typical metrics for copyspace, including IoU and mAP, it must be taken into account that a reasonable candidate copyspace might not be in the limited set of annotations. Table \ref{table:2} shows results for four classes of image complexity. Because the data set is highly imbalanced toward class 1, lower-complexity images, we see the mAP results are preferentially biased. Figure \ref{fig:results} shows inference on Coco images \cite{lin2015microsoft}. In this limited exploration of copyspace detection we find favorable initial results using object-detection frameworks. Machine learning approaches can supplement this nuanced and lower-level task in the designer workflow, and allow focus on higher-skilled tasks.
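The IoU metric discussed above can be stated concretely; the sketch below is a standard axis-aligned box IoU (the box format and example coordinates are illustrative, not from the paper's data). It also exhibits the caveat noted in the figure caption: a plausible copyspace prediction that happens to be disjoint from every annotation scores an IoU of zero.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Illustrative boxes: a prediction half-overlapping the ground truth.
gt = (0.0, 0.0, 2.0, 2.0)
pred = (1.0, 0.0, 3.0, 2.0)
```

Here `iou(gt, pred)` is 1/3 (intersection 2, union 6), while a perfectly valid copyspace elsewhere in the image would score 0 against this annotation.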
Copyspace detection is a component of a generative system, and further refinements to this task will directly improve complexity and variation of candidate design generations. \begin{table} \begin{tabular}{ c c c c c c c } \hline\hline & \textbf{Input Size} & \textbf{Parameters} & \textbf{Layers} & \textbf{mAP @ 0.5} & \textbf{mAP @ 0.5:0.95} & \textbf{IoU}\\ \hline\hline \textit{Yolo v4} & 416$\times $416 & 2.7e7 & 162 & 20.3 & - & 85.1\\ \textit{Faster RCNN} & 640$\times $640 & 6.4e7 & 290 & 26.4 & 16.3 & 82.1\\ \textit{Yolo v5s} & 640$\times $640 & 7.3e6 & 191 & 30.1 & 23.3 & \textbf{88}\\ \textit{Yolo v5l} & 640$\times $640 & 4.6e7 & 335 & 32 & 24.4 & 73.9\\ \textit{Yolo v5x} & 640$\times $640 & 8.8e7 & 407 & \textbf{34.2} & \textbf{27.2} & 64.4\\ \hline\hline\\ \end{tabular} \caption{Comparison of top-performing copyspace detectors.} \label{table:1} \end{table} \begin{table} \begin{tabular}{ c c c c } \hline\hline & \textbf{N} & \textbf{mAP @ 0.5} & \textbf{mAP @ 0.5:0.95} \\ \hline\hline \textit{Class 1} & 1.69e3 & 76.2 & 73.5\\ \textit{Class 2} & 80 & 34.3 & 19.9 \\ \textit{Class 3} & 69 & 17.3 & 11.2 \\ \textit{Class 4} & 73 & 9 & 4.3\\ \hline\hline\\ \end{tabular} \caption{Copyspace detection results for Yolov5x by image complexity class.} \label{table:2} \end{table} \section{Ethics} Machine learning is a powerful tool that can abstract away a worker's tasks, in this case potentially automating a step in a designer's workflow in asset generation including art, advertisements, etc. In building out potential tools it is imperative to examine the potential societal impacts. In medicine there is a saying that a physician should practice at the top of their license. Abstracting away low-level tasks allows time to be shifted to higher-level tasks. In this case, the designer may focus more effort on the copy language and font properties that achieve an overarching effect. While it is not possible to say what the future will yield, there are numerous examples, e.g.
ATMs, where automation provided increased efficiency and jobs. \bibliographystyle{ieee_fullname}
\section{Introduction} Efficient collective action to curb the spread of epidemics in general, and of the current Covid-19 pandemic in particular, requires input from a variety of disciplinary fields, from microscale fluid dynamics (to understand the propagation of virus or bacteria-laden droplets \cite{mittal2020flow,bourouiba2020fluid}) to macroscale epidemiology. At present, the weak link between these two scales hinders the prediction of how the SARS-CoV-2 virus at the origin of the pandemic spreads in a given crowd. Gatherings of people are encountered both in enclosed spaces (such as restaurants, offices, private accommodation or fitness centers), where statistical data may be insightful \emph{a posteriori} from an epidemiological standpoint \cite{chang2020mobility,hu2020risk,leclerc2020settings,galmiche2021etude}, as well as in non-confined environments. Most Covid-19 outbreaks are certainly associated with indoor settings \cite{bulfone2021outdoor}, but nonetheless a minority of infections -- at least a few percent, as a tentative estimate \cite{shen2020community,leclerc2020settings,weed2020rapid,galmiche2021etude} -- reportedly originate outdoors (e.g., in beer gardens \cite{express2020beer} and other outdoor gatherings \cite{bulfone2021outdoor}, during casual friendly encounters \cite{galmiche2021etude}, or while jogging \cite{guardian2020coronavirus}) or in mixed indoor/outdoor settings (e.g., on building sites \cite{EuropeanCDC2020August}). Despite their secondary role, viral transmissions amidst outdoor crowds may change the fate of an epidemic when the effective reproduction number $R_\mathrm{eff}$ is close to unity. Moreover, they raise a specific challenge because they are inherently hard to trace and document, but also hard to circumscribe, as they bring together unrelated people\footnote{As of March 2021, more than one third of new cases in France were wholly unaware of how they were infected \cite{galmiche2021etude}.}.
These difficulties are a hurdle to the control of outbreaks. Accordingly, recommendations to wear a face covering outside have been issued far and wide. Some cities in China, France, part of Germany, Italy, Poland, Portugal, Singapore, South Korea, Spain, some Swiss cantons, and a number of US states, among others, have put in place mask mandates for some, or all, outdoor activities. Let us say from the outset that mask-related policies may have a broader impact than their chief purpose of physically warding off infections \cite{leung2020respiratory,bahl2020face,SAGE_Britain}: Widespread usage of face coverings attracts everyone's attention to the health situation and may thus promote stronger vigilance and abiding by other sanitary measures. Furthermore, these policies are constrained by the legal possibilities in place in each country, the indirect consequences of the measures, public perception, and an imperative of simplicity. On the downside, there is some discomfort associated with wearing a mask and it might still be too early to assess the psychological impact of being surrounded by covered faces. Thus, a proper assessment of the risks of infections incurred by maskless crowds in diverse non-confined spaces is much needed, especially at a time when mask mandates are about to be, or have been, lifted. Not only can it provide more objective foundations to public policies, but it is instrumental in better targeting the situations where risks are highest and masks are most crucial, thus justifying stricter local control, and determining if (and where) it makes sense to restrict pedestrian mobility on streets and sidewalks. It will also be central to the safe revival of outdoor mass events, which are currently suspended in various countries, notably European ones.
For the time being, there remains a gap between the thriving experimental and computational studies focused on measuring the emission and propagation of respiratory droplets \cite{morawska2009size,bourouiba2014violent,asadi2019aerosol,chen2020short,abkarian2020speech,bourouiba2020fluid,bao2020transmission,li2020mask,feng2020influence} and the investigations of disease spread at larger scales \cite{ferguson2020report,ferretti2020quantifying,harweg2020agent}. The former have shed light on the complex, turbulent dynamics that take place inside the exhaled puff and called into question both the binary distinction between falling droplets and tinier airborne aerosols \cite{bourouiba2020fluid,chong2021extended}, and the scientific basis of the 2-meter social distancing rule \cite{morawska2020time,Jonesm3223,yang2020towards,chong2021extended}. However, the translation of these results into epidemiological predictions relevant for policy-making \cite{vuorinen2020modelling,poydenot2021risk} is difficult, and lags behind. Poles apart from these microscopic studies, risk assessments at the scale of a facility or venue by means of agent-based pedestrian simulations \cite{xiao2020modeling,harweg2020agent,romero2020covid} or large-scale experiments \cite{moritz2020risk} resort to particularly crude assumptions with regard to viral transmission. Often, an individual is considered exposed to the disease when he or she comes within a given radius (e.g., 2 meters) around an infected person, regardless of their orientations, overlooking that their head orientations control the direction in which respiratory droplets are expelled. Moreover, pedestrian dynamics models are hardly designed to reproduce fine observables such as precise inter-pedestrian spacings with any reliability, neither in usual times nor in times of pandemic, when pedestrian behaviors and trajectories are altered to reduce infection risks \cite{pouw2020monitoring,ronchi2020risk}.
To overcome these strong limitations, we collected detailed field data\footnote{The processed field data are freely available on the Zenodo public repository, with the DOI: 10.5281/zenodo.4527462. The Python scripts used to analyze the data can be obtained by request to the authors.}, during the pandemic in France, about pedestrian separations and orientations in diverse situations (hereafter called scenarios), either outdoors or in large, ventilated indoor facilities, and we developed a mathematically sound method to infer the rate of new infections in each scenario from our partial observations. The method rests on simple \emph{ad hoc} models for viral transmission, which we introduce and fit to droplet emission data and existing exposure studies. While these models are individually only coarse approximations of reality, they all converge towards a fairly robust ranking of the scenarios in terms of infection risks. The proposed framework is also useful to quantify the effect of enhanced physical distancing and to assess the mitigation efficiency of redesigning certain premises (one-way footpaths, queues, etc.). \section{Methods} While one can rely on estimated numbers of contacts between people to model the spread of an epidemic at regional or national scales \cite{ferguson2020report,ferretti2020quantifying}, more detailed information about the viral transmission route and the interactions between people is required to gauge how a virus will propagate in a given crowd. Although small respiratory droplets may evaporate into airborne residues that can accumulate in the air and potentially travel long distances, short-range $(\sim 1\,\mathrm{m})$ transmission is widely believed to prevail in non-confined environments, at least for influenza and the coronavirus \cite{freeman2020covid,chen2020short,bao2020transmission}.
These droplets are exhaled when breathing, talking, shouting, panting, coughing or sneezing, mostly through the mouth but also through the nose \cite{asadi2019aerosol,morawska2009size}; the focus must thus be put on their transport. \subsection{Modeling viral transmission via respiratory droplets} \hfill\\ In principle, the instantaneous transmission rate due to droplets emitted at $t_e$ by a contagious individual $E$ and inhaled at $t_r>t_e$ by a `receiver' $R$ reads \begin{equation} \nu(t_e,t_r)=T_0^{-1}\,\tilde{\nu}\Big[r,\theta^{E}(t_e),\theta^{R}(t_r),\,t_r-t_e,\,\mathrm{ambient\ flows},\,\mathrm{activity}(t_e)\Big] \end{equation} where the characteristic time for infection $T_0 \propto n_{\mathrm{inf}}/c_v$ at a given distance in front of the index patient is related to the specifics of the disease (namely, the viral titer $c_v$ in the respiratory fluid and the minimal infectious dose $n_{\mathrm{inf}}$), whereas the function $\tilde{\nu}$ accounts for the fluid dynamics of droplet emission and transport. Its parameters $r$, $\theta^E$, and $\theta^R$ will be clarified in the following. Unfortunately, an accurate derivation of $\tilde{\nu}$ from the fluid dynamics of droplet and aerosol propagation in these scenarios would not only be extremely demanding computationally, but would also hinge on very specific information that is neither available \cite{rosti2020fluid} nor transferable between situations, e.g., ambient air flows \cite{Jonesm3223,bhagat2020effects}, wind speed \cite{feng2020influence}, humidity \cite{chong2021extended}, and speech details \cite{asadi2019aerosol}. \begin{figure}[h] \begin{centering} \includegraphics[width=0.7\textwidth]{figure_transmission_rate3} \par\end{centering} \caption{ \label{fig:sketch_transmission_rate} (Color online) Spatially resolved model of disease transmission via virus-laden respiratory droplets.
The transmission rate (Eq.~\ref{eq:nu_expression}) depends on the direction of the emitter's head, the distance between the emitter and the inhaler, and the latter's head orientation; these dependencies are all modeled with a decaying function, $f_{1}$, $f_{2}$, or $f_{3}$. Optimistic parameters combined with $f_{3}$ (\emph{ModOpt}$_3$ model, see Table~\ref{tab:Parameter-sets}) are used in the illustration. } \end{figure} Therefore, we opted for the development of coarser-grained, \emph{ad hoc} models that notably overlook propagation delays and ambient air flows. (Relaxing the former hypothesis does not alter our main findings, as we show in Appendix~D by bringing back into play more realistic transmission dynamics (see Fig.~S5); on the other hand, ambient air flows may only be neglected if there is very little or no wind, in which case the exhaled puff is expected to be fairly similar indoors and outdoors in the short range.) With the insight into droplet propagation gained from computational studies as well as experiments on expiration and inhalation \cite{abkarian2020speech,li2020assessing,feng2020influence,bourouiba2020fluid,morawska2009size,yang2020towards}, the disease transmission rate was thus reduced to a function $\nu_{ER}(t)=\nu\Big[r(t),\theta^{E}(t),\theta^{R}(t)\Big]$ that decreases with the horizontal distance $r$ between the individuals' heads and the orientations $\theta^{E}$ and $\theta^{R}$ of the emitter's and receiver's heads relative to the direction of the vector that connects them (note that droplet emission and inhalation are not symmetric \cite{abkarian2020speech}).
More precisely, assuming that these variables can be decoupled, we write the instantaneous transmission rate as \begin{eqnarray} \nu(r,\theta^{E},\theta^{R}) & = & \frac{1}{\tilde{T}_{0}}\bar{f}\left(\frac{r}{r_{0}}\right)\cdot\bar{f}\left(\frac{\theta^{E}}{\theta_{0}^{E}}\right)\cdot\bar{f}\left(\frac{\theta^{R}}{\theta_{0}^{R}}\right),\label{eq:nu_expression} \end{eqnarray} where $\bar{f}(x)$ is a function such that $\bar{f}(x\approx0)=1$ and $\bar{f}(x)$ decays rapidly for $x\gg1$. To be concrete, we tested the following three functions, \[ f_{1}(x)=\exp\left(\frac{1-x^{2}}{2}\right),\qquad f_{2}(x)=|x|^{-m},\qquad f_{3}(x)=\exp(1-|x|). \] Because of the limited accuracy of our positional measurements and the uncertainties about very near field transmission, the peaks of these functions are leveled off at $x\to0$, viz., $\bar{f}_{k}=\min(1,f_{k})$ for $k=1,\,2,\,3$. We then defined a family of plausible parameter sets for the $\nu$ functions, namely, $r_{0}$, $\tilde{T}_{0}$, $\theta_{0}^{E}$, and $\theta_{0}^{R}$ in Eq.~\ref{eq:nu_expression}, as well as $m$ in $f_{2}$. The parameters were bounded using a broad scope of available empirical data, which suggest a characteristic distance $r_{0}\leqslant1\,\mathrm{m}$, an infection time (at $r_{0}=0.5\,\mathrm{m}$) $T_{0}=\tilde{T}_{0}/\bar{f}(\frac{0.5}{r_{0}})$ between a dozen minutes and an hour, and an exponent $m\approx2$--$2.5$ (more details are given in Appendix~B); within these plausible bounds, we explored different sets of values, listed in Table~\ref{tab:Parameter-sets}. The parameter sets thus span the entire spectrum from highly contagious (`pessimistic') to only mildly contagious (`optimistic'). An example of such a function is depicted in Fig.~\ref{fig:sketch_transmission_rate}. The spatial decay of the transmission function is such that $\nu(r,\theta^{E},\theta^{R})$ becomes negligible past a few meters, except for the worst-case model describing uncovered sneezes \cite{bourouiba2014violent}.
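For concreteness, the capped kernel of Eq.~\ref{eq:nu_expression} can be sketched in a few lines of Python. This is an illustrative implementation only, not the analysis code used in the study; the default values are assumptions matching the `Standard' parameter set of Table~\ref{tab:Parameter-sets} (with $\tilde{T}_{0}=600\,\mathrm{s}$, so that $T_{0}=10$ min at $r=0.5\,\mathrm{m}$ for $f_{3}$), and all angles are in radians.

```python
import numpy as np

# The three candidate decay kernels tested in the text.
def f1(x):
    return np.exp((1 - x**2) / 2)

def f2(x, m=2.0):
    return np.abs(x) ** (-m)   # capped below, so the x = 0 singularity is harmless

def f3(x):
    return np.exp(1 - np.abs(x))

def nu(r, theta_E, theta_R, f=f3, T0_tilde=600.0,
       r0=0.5, theta0_E=np.radians(30.0), theta0_R=np.radians(60.0)):
    """Instantaneous transmission rate (per second), Eq. (nu_expression).

    Each factor is capped at 1 (f_bar = min(1, f)), reflecting the limited
    positional accuracy in the very near field.  Distances in meters,
    angles in radians; defaults mimic the 'Standard' parameter set.
    """
    fbar = lambda x: np.minimum(1.0, f(x))
    return (fbar(r / r0) * fbar(theta_E / theta0_E)
            * fbar(theta_R / theta0_R)) / T0_tilde
```

With these defaults, a close face-to-face contact yields $\nu=1/\tilde{T}_{0}$, and the rate decays steeply past a couple of meters or once either head turns away from the connecting line.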
The typical time for infection in the event of a close ($r=50\,\mathrm{cm}$) face-to-face contact lies between 5 minutes (an extremely pessimistic estimate) and 30 minutes (except for sneezes), consistent with the epidemiological literature on SARS-CoV-2, outbreak reports and exposure case studies \cite{SAGE_Britain,EuropeanCDC2020August, SantePubliqueFrance2020December, burke2020enhanced, heinzerling2020transmission, chu2020physical,yang2020towards}. \begin{table} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline & $T_{0}$ (min) & $r_{0}$ (m) & $\theta_{0}^{E}$ & $\theta_{0}^{R}$ \tabularnewline \hline \hline Optimistic & 30 & 0.3 & 22.5$^{\circ}$ & 45$^{\circ}$\tabularnewline \hline Moderately optimistic & 15 & 0.5 & 30$^{\circ}$ & 60$^{\circ}$\tabularnewline \hline Standard & 10 & 0.5 & 30$^{\circ}$ & 60$^{\circ}$\tabularnewline \hline Pessimistic & 10 & 0.75 & 45$^{\circ}$ & 60$^{\circ}$\tabularnewline \hline Very pessimistic & 7.5 & 1 & 45$^{\circ}$ & 60$^{\circ}$\tabularnewline \hline Extremely pessimistic & 5 & 1 & 45$^{\circ}$ & 90$^{\circ}$\tabularnewline \hline Uncovered sneezes & 1.7 & 1.5 & 22.5$^{\circ}$ & 60$^{\circ}$\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:Parameter-sets}Parameter sets used in the transmission model of Eq.~\ref{eq:nu_expression}. The angular values $\theta_{0}^{E}$ and $\theta_{0}^{R}$ correspond to the half-angles of the emission and inhalation cones in the horizontal plane.
The model denominations have been chosen to avoid putting an emphasis on a particular parameter of the disease transmission process.} \end{table} The resulting models are representative of a disease that may be transmitted to multiple individuals within an hour in unfavorable cases \cite{shen2020community,li2020evidence}, but generally requires close and prolonged contacts for transmission: Even in households, reported attack rates lie in the range 5\%--30\% \cite{bar2020quantitative}, often around 15\% \cite{park2020early,park2020contact,burke2020active}; moreover, casual episodic contacts at work or in the community, even face to face, do not necessarily trigger an outbreak \cite{park2020early,burke2020enhanced}. More quantitatively, the spatial decay described by the fairly optimistic models agrees well with the decay of the concentration of droplets emitted by a coughing subject \cite{li2020assessing} (see Fig.~S3). The reliability of the transmission models is further tested by mimicking a journey aboard a Chinese train (overlooking its confined nature), where transmission risks for passengers who sat close to an infected traveler were recently assessed using trip records \cite{hu2020risk}. Overall, the simulated results, detailed in Appendix~B, are compatible with the empirical data; the models featuring the most \emph{optimistic} parameters display the best concordance. Finally, let us mention that the order of magnitude of the parameters (especially the most optimistic ones) is consistent with the putative minimal infectious dose of SARS-CoV-2, estimated to be of order 100 particles \cite{basu2020close}, granted that most exhaled droplets contain 0 or 1 viral copy \cite{poon2020soft} (see Appendix~B). Although these pieces of evidence favor the optimistic end of the parameter spectrum, our study is conducted with the whole gamut of plausible model parameters.
This variety reflects not only our current uncertainty with regard to the transmission of SARS-CoV-2, but also the established inter-individual and inter-case variability \cite{morawska2009size,asadi2019aerosol}, depending on physiology, talking characteristics \cite{buonanno2020quantitative}, viral mutations, etc. \subsection{Field measurements and inference of the rate of new infections} \hfill\\ We used a privacy-respectful setup to film pedestrian flows and crowds in diverse scenarios in a discreet and passive way, with a TomTom Bandit camera covered with a thin plastic layer (to degrade the image quality) and installed in a zenithal position. This setup was approved by the Data Protection Officer of the French National Centre for Scientific Research (CNRS). After some pre-processing with the FFMpeg software to correct the `fish-eye' effect, downsample the video, and select the area of interest (from $8\,\mathrm{m^2}$ to $100\,\mathrm{m^2}$), the positions and head orientations of all pedestrians were manually tracked at a rate of 2 frames per second (fps) with the help of a dedicated Python program; linear interpolation then increased the rate to 10 fps. The estimated experimental error on the positions is typically below or around 20~cm, while that on the head orientations is lower than $20^{\circ}$ in most videos. Because of the limited field of view, some interactions with off-camera people were missed, especially at large separations. We compensated for this by appraising the fraction of interactions thus lost as a function of their range, under the assumption of homogeneous density, and reweighting the detected contacts accordingly. We checked that this rescaling method successfully restores the genuine contact distribution, up to contact distances close to the size of the field of view (Fig.~S2 in Appendix~A).
In addition, we were able to identify and keep track of groups of pedestrians (i.e., co-walkers who appear to be relatives, co-workers, or friends) by visual inspection. Overall, close to 5,000 pedestrian trajectories were thus obtained. Pixel coordinates were converted into real-world coordinates with a geometric formula whose parameters were fitted using predefined calibration points at pedestrian height (see Appendix~A). For each scenario, the time- and space-resolved pedestrian measurements are then coupled to the above transmission models. This directly yields the instantaneous rate, denoted by $\nu_{ij}(t)$, at which a supposedly infected index patient $i$, whom we will call Iago, transmits the disease to other pedestrians \emph{j} around him via virus-laden droplets. Under the independent action hypothesis (IAH) \cite{druett1952bacterial,zwart2009experimental}, each inhaled virus is equally likely to lead to an infection, with no cooperation or antagonism between viruses. It follows that, over the time interval $[t_{0},t_{0}+\tau_{i}]$ over which Iago was filmed, he infected a number $C_{i}^{(\tau_{i})}$ of other people $j$ (leaving aside his fellow group members $G_{i}$, whose possible infection is not specifically related to the scenario, except at the caf\'es), given by a Wells-Riley-like equation \cite{sze2010review}, viz. \begin{equation} C_{i}^{(\tau_{i})}=\sum_{j\notin G_i}S_{j}^{0}\cdot\left(1-e^{-N_{ij}}\right),\label{eq:C_tau_i} \end{equation} where $S_{j}^{0}$ is the probability that $j$ is susceptible (i.e., \emph{not already} infected) at the beginning of the observation interval, and $N_{ij}=\int_{t_{0}}^{t_{0}+\tau_{i}}\nu_{ij}(t)\,dt$ is the cumulative transmission risk \cite{tupper2020event}.
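Once the pairwise rates $\nu_{ij}(t)$ have been tabulated on the trajectory time grid, the estimator of Eq.~\ref{eq:C_tau_i} reduces to a short routine. The sketch below is a minimal illustration, not the study's actual pipeline; it assumes a regular time step and that Iago's group members have already been excluded from the columns of the input array.

```python
import numpy as np

def expected_new_infections(nu_ij, dt, S0=None):
    """Expected number of people infected by the index patient, Eq. (C_tau_i).

    nu_ij : array of shape (n_steps, n_others), instantaneous transmission
            rate (per second) towards each bystander j at each time step;
            group members are assumed to have been removed beforehand.
    dt    : time step of the trajectory data (s).
    S0    : probability that each j is initially susceptible (defaults to 1).
    """
    # Cumulative dose N_ij = integral of nu_ij over the observation window.
    N_ij = nu_ij.sum(axis=0) * dt
    if S0 is None:
        S0 = np.ones_like(N_ij)
    # Wells-Riley-like saturation: each j is infected with prob. 1 - exp(-N_ij).
    return float(np.sum(S0 * (1.0 - np.exp(-N_ij))))
```

For a static scenario, one would instead rescale $N_{ij}$ by $\Delta T/\tau_i$ before exponentiating, as done later in the text.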
\begin{figure*} \begin{centering} \includegraphics[width=1\textwidth]{figure_histograms_4} \par\end{centering} \caption{\label{fig:Ranking_of_scenarios} Number of new infections over a time interval $T_0$ in the scenarios under study, estimated with four different parameter sets for the transmission model. The values taken for the time for infection $T_0$ depend on the model, as detailed in Table~\ref{tab:Parameter-sets}; in reality, they vary with the index patient, the current activity, the stage of the disease, etc. Except in the static scenarios (caf\'es and queue), infections within groups have been dismissed and the error bars span the interval between the estimated lower bound $\underline{C}^{(T_0)}$ and upper bound $\bar{C}^{(T_0)}$, while the filled bars represent $\frac{1}{2}\left(\underline{C}^{(T_0)}+\bar{C}^{(T_0)}\right)$. Refer to Table~S1 for details about the investigated scenarios.} \end{figure*} It is worth noting that the IAH implies that infections are a stochastic process without threshold: Any encounter can potentially result in a new case, and multiple short interactions with various people are as risky as, or even riskier than, a single long one because, once infected, agent \emph{j} can no longer be infected again, viz., $S_{j}^{0}:\,1\to0$. This saturation of the risk complicates the evaluation of $C_{i}^{(\tau_{i})}$ even if we assume that all pedestrians \emph{except Iago's group $G_{i}$ of co-walkers} are initially susceptible, because our videos only record part of Iago's wandering and may thus miss earlier off-camera interactions. Nonetheless, rigorous upper and lower bounds on $C_{i}^{(\tau_{i})}$ can be set by noticing that, on the one hand, $S_{j}^{0}\leqslant1$ and that, on the other hand, $\sum_{j\notin G_{i}}(1-S_{j}^{0})$ cannot be larger than the number of people actually infected by Iago, which is related to $C_{i}^{(\tau_{i})}$ (see Appendix~C).
Finally, for comparison purposes, $C_{i}^{(\tau_{i})}$ is recast into an hourly rate of new infections $C_{i}\equiv C_{i}^{(\Delta T)}=\frac{\Delta T}{\tau_{i}}C_{i}^{(\tau_{i})}$ with $\Delta T=1\,\mathrm{h}$, assuming that the recorded videos are representative. As this rate is very sensitive to the chosen characteristic time for infection $T_0$, which exhibits great variability, we will mostly present results rescaled by $T_0$, which is equivalent to setting $\Delta T=T_0$ and showing the number of new cases over a time interval $\Delta T$. Static scenarios -- namely, the caf\'es and waiting lines -- are handled slightly differently because Iago's neighbors then do not change significantly as time passes; in this case, we set $N_{ij}=\frac{\Delta T}{\tau_{i}}\int_{t_{0}}^{t_{0}+\tau_{i}}\nu_{ij}(t)\,dt$ and $S_{j}^{0}=1$ in Eq.~\ref{eq:C_tau_i}. In a nutshell, the proposed framework enables us to quantitatively translate patchy observations, with undetected contacts, into an estimated global risk of viral spread. \section{Results} \subsection{Ranking of scenarios by the risks of new infections} \hfill\\ Inserting the collected field data into this framework, we obtain upper and lower bounds on the mean rate $C=\left\langle C_{i}\right\rangle _{i}$ of new infections per hour for each scenario and each transmission model. Figure~\ref{fig:Ranking_of_scenarios} presents a sample of results for four of these models (also see Figs.~S6--S8 for the results obtained with other models). These results confirm the efficiency of the proposed bound-setting method, as the bounds are found to confine $C$ to a narrow interval. Most importantly, the ranking of the different scenarios turns out to be robust, that is to say, largely preserved across models. This is our first major result.
Pursuing the analysis of Fig.~\ref{fig:Ranking_of_scenarios}, we observe that street caf\'es present the highest risks in terms of the mean number of new infections per hour, even though their tables were more widely spaced when the videos were shot than before the pandemic. These infections at caf\'es are easily rationalized by the close, face-to-face interactions between people sharing a table, not to mention the increased emission of droplets associated with lively discussions and eating, which is overlooked here. This result is in line with case reports of high risks of viral transmission while dining and drinking (indoors or outdoors, unspecifically) \cite{chang2020mobility,li2020evidence,tupper2020event,galmiche2021etude}. Next in line among the observed scenarios comes the outdoor market alley. Despite its high average density $\rho\simeq0.5\,\mathrm{ped/m^{2}}$, this scenario never matches the level of risk at caf\'es, except with the very pessimistic parameters corresponding to high contagiousness. Further down the list, crowd density explains the considerably higher risks at train and metro stations ($\rho\simeq0.25\,\mathrm{ped/m^{2}}$) than on fairly busy streets in Lyon ($\rho\simeq0.05$--$0.1\,\mathrm{ped/m^{2}}$) and, to an even larger extent, than on the riverbank walkway that we filmed. Somewhat intriguingly, the estimated infection rate may be as large, or even larger, at the observed testing site in Lyon than it is on these streets, although the overall density there is low and attendees were strictly asked to stay 2 meters apart from each other; yet, their relative proximity was prolonged over considerable time and, besides, they tended to turn and pace around a bit while waiting. One should however bear in mind that our models estimate the risks of viral transmission if no face mask is worn, whereas everybody was wearing a mask at the testing site that we filmed.
\subsection{Rates of new infections} \hfill\\ Besides the robustness of this qualitative ranking of scenarios, largely maintained across parameter sets, on a more quantitative note we observe that the infection rates are always (except with the worst-case models) at least 10 times lower in the investigated streets than at caf\'es, even without taking into account that talking and eating augment droplet emissions. In addition, the pessimistic estimates are generally at most a factor 10 larger than the (possibly more relevant) most optimistic ones. Thus, it is reasonable to conclude from these estimated values that a contagious Iago will, \emph{roughly speaking}, infect of order one person if he sits at a caf\'e for one hour, whereas he would probably cause \emph{significantly} fewer than $\sim0.1$ new infections if he spent this time walking on a fairly busy street. Nonetheless, these average rates brush aside the variety of pedestrian contacts in the different scenarios, which is better reflected in the box plots of Fig.~\ref{fig:boxplot}. The figure shows that, while the scenarios involving a moving crowd cause fewer infections than caf\'es on average, their rates of infections $C_{i}$ are more dispersed and, unlike caf\'es, they feature many values that significantly deviate from the bulk, both at $C_{i}\simeq0$ and at high $C_{i}$, the latter corresponding to pedestrians who fortuitously turn into super-spreaders because of their pattern of on-street contacts. As we shall see below, the blame does not necessarily rest on the pedestrian, but rather on the ebbs and flows of crowding in each observed situation. Prior to that, let us remark that accounting for the directionality of droplet propagation and describing the orientations of pedestrians' heads had a marked effect on our results.
Indeed, not only does an isotropic transmission model overestimate risks in crowds by a factor of at least 10 in comparison to its directional counterpart, but it also alters the ranking of scenarios: It predicts considerably more infections at the outdoor market than at the caf\'es (Fig.~S8). Otherwise, such an inversion (along with high risk estimates) is only found for our worst-case transmission models, in particular the model that we introduced to mimic the effect of a contagious patient sneezing every few minutes without covering his or her sneezes. On the other hand, allowing infections within groups, as we did for the caf\'es, does not dramatically change the picture, even though it substantially heightens the risks associated with sparse situations, for instance, the riverbank walkway. This is not surprising because in these situations close contacts mostly occur between group members. \begin{figure} \begin{centering} \includegraphics[width=0.5\textwidth]{boxplot_optimistic_exp_T0} \par\end{centering} \caption{\label{fig:boxplot}(Color online) Number of new infections $C_i^{(T_0)}$ over a time interval $T_0$ caused by the different pedestrians $i$ in each scenario, as estimated with \emph{ModOpt}$_{3}$. The dashed red lines represent mean values, solid black lines are medians and open symbols are outliers. } \end{figure} \subsection{Key determinants of the transmission rate} \hfill\\ To better understand the observed disparities, we need to identify the key variables that determine the level of risk. Figure~\ref{fig:risk_vs_density} confirms the intuition that the instantaneous pedestrian density $\rho(t)$ is a major determinant of the rate of viral transmission $\nu(t)\equiv\left\langle \sum_{j\notin G_i}\nu_{ij}(t)\right\rangle _{i}$ (where the average is taken over all pedestrians observed at time \emph{t}), in that it controls how close each pedestrian is to their counterparts.
(Note that all time-dependent variables have been averaged over intervals of two seconds, to reduce the statistical noise.) The variation of $\nu$ with $\rho$ looks similar across scenarios, but is not strictly identical, which indicates that other scenario-dependent variables affect the transmission rate $\nu$. Furthermore, these variations become more muddled as one turns to more optimistic parameter sets, which is consistent with the idea that one then probes the configuration of the crowd at finer length-scales, owing to the shorter transmission range. The total pedestrian flow rate could in principle play a role; however, we found that $\nu$ does not follow any clear trend with this flow rate at fixed density $\rho$ (Fig.~S9). On the other hand, head orientations naturally have some bearing on the risks of infection, as evinced by the failure of isotropic transmission models to reproduce our results\footnote{We think that this is largely due to the discrepancy between the face-to-face orientations at caf\'es (which facilitate transmission) and the more or less random orientations, e.g., at a market.}, but we now show that in non-static scenarios these orientations can be practically inferred using only the trajectories. To do so, we notice that the head orientations of walking pedestrians (speed $v>0.3\,\mathrm{m\cdot s^{-1}}$) are approximately normally distributed around their walking direction, with a standard deviation of about $26^{\circ}$, and decorrelate over one second. Therefore, we ascribe to walkers head orientations randomly drawn from this normal distribution, while their stationary counterparts ($v\leqslant0.3\,\mathrm{m\cdot s^{-1}}$) are considered purely randomly oriented; the random values are refreshed every second.
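This reconstruction rule can be summarized by the following sketch. It is a hypothetical implementation using the thresholds and standard deviation quoted above; the per-pedestrian array layout and the function name are our own choices for the example.

```python
import numpy as np

def reconstruct_headings(velocities, dt, rng, sigma_deg=26.0, v_walk=0.3):
    """Ascribe head orientations (radians) to one pedestrian from velocities.

    velocities : (n_steps, 2) array of the pedestrian's velocity (m/s).
    dt         : time step of the trajectory (s).
    Walkers (v > 0.3 m/s) get a heading drawn around their walking
    direction (Normal, sigma = 26 degrees); stationary pedestrians get a
    uniformly random heading.  Draws are refreshed once per second, since
    measured orientations decorrelate over roughly that timescale.
    """
    refresh = max(1, int(round(1.0 / dt)))   # number of steps per second
    headings = np.empty(len(velocities))
    for k, v in enumerate(velocities):
        if k % refresh == 0:                 # redraw the random values
            noise = rng.normal(0.0, np.radians(sigma_deg))
            uniform = rng.uniform(-np.pi, np.pi)
        if np.hypot(*v) > v_walk:            # walker: align with velocity
            headings[k] = np.arctan2(v[1], v[0]) + noise
        else:                                # stationary: random heading
            headings[k] = uniform
    return headings
```

Within each one-second window the drawn heading is held fixed, mimicking the short-term persistence of real head orientations.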
Quite interestingly, this simple reconstruction of head orientations yields mean infection rates $C$ per hour that agree very well with the values computed with the \emph{bona fide} orientations, with a relative difference generally lower than 15\%, regardless of the transmission model. The correspondence between the individual $C_{i}$ values for each pedestrian is of course imperfect with this method, but overall the differences are not extremely large (Fig.~S10). These observations are particularly relevant to bolster risk assessments based on observed or (reliably) simulated pedestrian datasets in which head orientations are missing, as they most often are. \section{Discussion} \begin{figure} \begin{centering} \includegraphics[width=0.5\textwidth]{figure_risk_density_asymmetry} \par\end{centering} \caption{\label{fig:risk_vs_density}Dependence of the rate of viral transmission $\nu(t)$ on the instantaneous density $\rho(t)$, in different scenarios, using the \emph{ModOpt}$_{3}$ parameter set. Refer to Fig.~\ref{fig:Ranking_of_scenarios} for the color code. In the \emph{insets}, the rates are distinguished depending on the directionality of the flow in the (\emph{bottom right}) outdoor market scenario and (\emph{top left}) corridor flow experiments of the BaSiGo project \cite{cao2017fundamental}. The error bars and envelopes represent standard errors (i.e., $\pm\frac{\mathrm{std}}{\sqrt{n}}$, where $n$ is the number of uncorrelated data points).} \end{figure} \subsection{Insight into the risk in the scenarios under study} \hfill \\ The spatial resolution of our empirical data and models provides deeper insight into the circumstances of infection in the above scenarios; it can contribute to the debate about what physical distance should be prescribed between pedestrians in non-confined environments, and whether 2 meters or `6 feet' are enough from an epidemiological perspective \cite{Jonesm3223,morawska2020time}.
Admittedly, the answer will depend heavily on the transmission model (which was here established in an \emph{ad hoc} way), but the statistics of inter-pedestrian contacts in the scenario also play a large role. Using the cumulative rate of transmission $\int\nu(t)\,dt$ as a proxy for the incurred risk, we find that its dominant contribution comes from interactions within a distance of 1 meter (for instance, 70\% at Bellecour metro station, with \emph{ModOpt}$_{3}$), whereas transmission beyond 2 meters, albeit possible, accounts for only a few percent of the risk (5\% at Bellecour), at most. Importantly, here and throughout the paper, risks have been quantified by the number of new cases expected in each setting; this choice is relevant at the \emph{collective} scale, for policy-making, but not for the evaluation of the risks incurred by an \emph{individual} in the crowd. \subsection{Mitigation efficiency of redesigns} \hfill \\ Beyond this debate, the framework introduced here opens the door to evaluating the mitigation efficiency of hypothetical redesigns of streets and venues, consisting, e.g., of enforcing one-way circulation on footpaths, a sitting plan at caf\'es, or increased spacing in queues. Since circulation plans have flourished during the pandemic, let us first explore the impact of one-way \emph{vs.} two-way foot traffic on sidewalks and pedestrian streets. To avoid potential situational biases, the question is investigated by separating, in each given scenario, the periods of time (binned in two-second intervals) when the flow was unidirectional from those when there were pedestrians going in both directions; for the market alley, the comparison is shown in the bottom right inset of Fig.~\ref{fig:risk_vs_density}.
Since the transmission risks were found to depend on density, but not on the total flow rate (i.e., the sum of the directional flow rates across sections perpendicular to the main flow), we perform a comparison at fixed density. Our data (inset of Fig.~\ref{fig:risk_vs_density}) reveal little benefit to switching from two-way to one-way traffic in our wide-path scenarios. To further test this somewhat surprising finding, we exploit the \emph{controlled} experiments performed a few years ago by the German BaSiGo team\footnote{These extensive datasets are openly available at: https://ped.fz-juelich.de/db/doku.php} to study unidirectional and bidirectional pedestrian flows in 4- to 5-meter-wide corridors \cite{cao2017fundamental}; head orientations are reconstructed as explained above. The results, shown in the top left inset of Fig.~\ref{fig:risk_vs_density}, are in line with the aforementioned finding: In wide walkways, switching from two-way flow to one-way flow seems to (at best) reduce the risks only moderately, probably because head-on `collisions' are rare in these self-organized flows. Next, we turn to queues and study how their arrangement affects transmission risks. On the basis of our observations at a testing site, we modeled a queue as a line of more or less equally spaced people, swaying in a $50\,\mathrm{cm}\times50\,\mathrm{cm}$ square around their central spot, and whose head orientations are normally distributed (with a standard deviation of $22^{\circ}$) around the queuing axis 75\% of the time and purely random for the remaining 25\% of the time, owing to people turning around or looking around. Both the positions and orientations are refreshed every second.
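For illustration, one configuration of this synthetic queue can be drawn as follows. The helper is a sketch under the stated assumptions; in particular, the aligned-versus-random alternation is sampled independently for each person, and the queue axis is taken along $x$ (both are our own simplifications for the example).

```python
import numpy as np

def sample_queue(n_people, spacing, rng, sway=0.5, sigma_deg=22.0,
                 p_aligned=0.75):
    """Draw one configuration of the model queue.

    People stand along the x-axis, `spacing` meters apart, and sway inside
    a 50 cm x 50 cm square around their nominal spot.  Heads point along
    the queuing axis (Normal, sigma = 22 degrees) with probability 0.75,
    and in a uniformly random direction otherwise.
    Returns (positions in m, shape (n, 2)) and (headings in rad, shape (n,)).
    """
    nominal = np.arange(n_people) * spacing
    x = nominal + rng.uniform(-sway / 2, sway / 2, n_people)
    y = rng.uniform(-sway / 2, sway / 2, n_people)
    aligned = rng.random(n_people) < p_aligned
    theta = np.where(aligned,
                     rng.normal(0.0, np.radians(sigma_deg), n_people),
                     rng.uniform(-np.pi, np.pi, n_people))
    return np.column_stack([x, y]), theta
```

Redrawing such a configuration every second and feeding the positions and headings into the transmission kernel reproduces the queue simulations described in the text, up to the simplifications stated above.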
Albeit simplistic\footnote{The observed scenario is significantly more complex than its reconstruction: It actually features two different, not strictly linear queues, one outdoors and one indoors, as well as a few people who are not queuing.}, the reconstructed queuing scenario is comparable with our actual observations as far as the estimated infection rates are concerned; these estimates with and without reconstruction differ by less than 50\% for any of our models -- except the most optimistic ones, which bestow special importance on rare contact events that are overlooked in the reconstitution. Figures~\ref{fig:queue} and S7 illustrate the extent to which predicted infection rates vary when the spacing between queuing people or the queuing geometry is modified. Naturally, the risks are minimal in the case of the linear queue with the largest spacing between individuals, whereas they are maximized for winding, S-shaped queues with very close rows and individuals standing right behind each other within each row. Since people cannot be expected to keep their heads strictly in the direction of the queue, there appears to be no clever arrangement that achieves both minimal risks and a very compact queue; given the results of Figs.~\ref{fig:queue} and S7, a simple practical recommendation for S-shaped queues is to keep a little more space between rows than the actual spacing between individuals in each row. \begin{figure} \begin{centering} \includegraphics[width=0.5\textwidth]{figure_queuing_risks_optimistic_exp} \par\end{centering} \caption{\label{fig:queue}Hourly rate of new infections in a linear queue (top row) and a winding queue, depending on the spacing between pedestrians and lines, as estimated with the \emph{ModOpt}$_{3}$ parameter set. People move one spot forward every other minute.
Note the logarithmic color scale on the graph.} \end{figure} \subsection{Current limitations and perspectives} \hfill \\ \revise{}{ In summary, the foregoing risk assessments in non-confined environments can guide public decisions in times of pandemic, in that (irrespective of the transmission parameters that are used) they confirm the risks of infection incurred at caf\'es \cite{chang2020mobility} and underline the key role of pedestrian density in determining the rate of viral transmission in moving crowds without masks. Fairly busy streets, with densities up to $\rho\simeq0.1\,\mathrm{ped/m^{2}}$, are found to present risks that are not completely negligible, but comparatively quite low, and these risks will clearly be even lower for less busy streets. This suggests that the scant reports of outbreaks in outdoor walking crowds are not only due to the intricacy of tracing back these infections (due to unidentified contacts, recall biases, etc.), but also to the limited transmission of the virus in these conditions, even without face coverings. Nevertheless, this remark does not apply to very crowded settings such as markets or metro and train stations, which deserve particular attention. Furthermore, our model-based approach has enabled us to explore the efficiency of street and venue redesigns in mitigating the viral spread. For wide walkways, we have not found clear benefits to switching from two-way foot traffic to one-way traffic. For queues, increasing the space between individuals naturally reduces transmission risks; in the case of an S-shaped queue, a simple rule that could be enforced in practice is to keep a little more space between rows than between individuals in each row. } Given that our study pioneers the coupling of empirical crowd data to spatial models of viral transmission at mesoscales, it undoubtedly suffers from some limitations. To start with, the empirical data could be extended to include more scenarios and longer footage. 
Perhaps more crucially, the transmission model should be refined. The models used in this study are admittedly overly simple, although we partly mitigated this problem by ensuring the robustness of our qualitative conclusions across diverse model variants, including a spatio-temporal model. More sophisticated models, which may differentiate transmission rates as a function of people's activity (reflecting known variations in droplet emission \cite{asadi2019aerosol,abkarian2020speech}) and account for the effect of the wind \cite{feng2020influence} and ambient air flows, will afford more accurate estimates of the rate of new infections. In addition, fluid dynamics simulations of long-range aerosol propagation would make it possible to study enclosed spaces with poor ventilation, where our current models, which discard the airborne transmission route, can only provide lower bounds on infection risks. Another task is to generalize the transmission models to people who are (adequately or inadequately) wearing a mask \cite{leung2020respiratory}, in order to determine how serious an issue very crowded streets really are in current times. It would be straightforward to account for the particle filtration efficiency of masks in the present framework, by simply multiplying the transmission rates by a reduction factor (say, $\sim20\%$ for cloth masks, $\sim10$\% for surgical masks and $\ll5\%$ for N95 masks \cite{li2020assessing,bar2020quantitative}, if \emph{only the emitters} have their face covered), but masks are probably even more efficient, because they also reduce the reach of the exhaled puff \cite{bahl2020face,bhagat2020effects}, thereby shortening the range of transmission of droplets \cite{li2020assessing}. 
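The mask correction described above amounts to a single multiplication of the transmission rate. A hypothetical sketch using the reduction factors quoted in the text (emitter masked only; the dictionary and function names are illustrative):

```python
# Illustrative filtration factors from the text, applied when only the
# emitter wears a mask; the exact values and this API are a sketch only.
MASK_FACTOR = {"none": 1.00, "cloth": 0.20, "surgical": 0.10, "n95": 0.05}

def masked_rate(base_rate_per_hour, emitter_mask="none"):
    """Scale a transmission rate by the emitter's mask filtration factor."""
    return base_rate_per_hour * MASK_FACTOR[emitter_mask]
```

As noted in the text, this neglects the additional reduction of the exhaled puff's reach, so it should be read as a conservative correction.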
\section*{Acknowledgments} We are grateful to Isabelle Sabran (Ville de Lyon), Jessica Magraner (Cour d'appel de Lyon), and Fr\'ed\'eric Laurent (Hospices Civils de Lyon) for facilitating our collection of data with our privacy-respecting cameras and to the French MODCOV19 initiative for supporting part of this work. We also thank C\'ecile Appert-Rolland for lending us some material and Marina Nicolas for proofreading. This work was funded by Agence Nationale de la Recherche (ANR-20-COV1-0003) under project name \emph{SeparationsPietons}. The setup of the CFD simulations discussed in Appendix D was designed collectively, with P. B\'enard, G. Lartigue, V. Moureau (CORIA Rouen, France), G. Balarac, P. B\'egou (LEGI Grenoble, France), Y. Dubief (Univ. Vermont, USA) and R. Mercier (Safran Tech, France). CFD simulations were performed using HPC resources from GENCI-TGCC. SM also acknowledges the support of Agence Nationale de la Recherche (ANR-21-CO15-0002, TransporTable). \newpage \setcounter{equation}{0} \renewcommand{\theequation}{S\arabic{equation}} \setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}}%
\section{Introduction} \hspace*{1em} Image segmentation is an important task in image processing, which assigns regions with similar features to the same cluster. In computer vision, labeled datasets established for image segmentation, such as PASCAL VOC 2012~\cite{pascal-voc-2012} and BSD~\cite{martin2001database}, have been applied to several fields. Encouraged by this, convolutional neural networks (CNNs)~\cite{krizhevsky2012imagenet} \cite{simonyan2014very} have been successfully applied to image segmentation for autonomous driving or augmented reality. The advantage of CNN-based classifier systems is that they do not require manually designed, complex image features and provide a fully automated classifier. These methods~\cite{long2015fully} \cite{zheng2015conditional} \cite{badrinarayanan2017segnet} have proven useful in supervised image segmentation. However, the large amount of unlabeled images and videos has not been leveraged effectively. \hspace*{1em} Recent unsupervised image segmentation methods can be roughly divided into two categories: model-based and learning-based. K-means~\cite{krishna1999genetic} and graph-based segmentation~\cite{felzenszwalb2004efficient} are two popular model-based methods. Although they are fast and simple, their performance is still limited. Learning-based methods extract features with deep neural network models. Unsupervised domain adaptation (UDA)~\cite{song2020unsupervised} \cite{zhang2019self} is typically proposed to adapt a model trained on a dataset with no identity annotations. In order to improve segmentation and labeling accuracy, researchers have extended the basic UDA framework. \cite{kanezaki2018unsupervised} and \cite{kim2020unsupervised} propose a novel end-to-end network for unsupervised image segmentation that consists of normalization and an argmax function for differentiable clustering. Their results show better accuracy than existing methods. 
\hspace*{1em} However, previous studies on unsupervised image segmentation exhibit two important problems. First, instability: model performance depends largely on the random initialization of the model parameters, which usually produces unstable results, so the models often need to be trained several times to obtain a good result. Second, the lack of a limit on the number of clusters: it is very difficult for a model to learn the number of clusters without prior knowledge, so a hyperparameter, the cluster number $k$, must be supplied to the model to limit the number of clusters after image segmentation. \hspace*{1em}To address these problems, we introduce an unsupervised Mutual Mean-Teaching (MMT) framework to effectively perform image segmentation, inspired by \cite{ge2020mutual}. Specifically, for an arbitrary input image, cluster labels of pixels are obtained by training two identical networks in an unsupervised manner. These two collaborative networks also generate labels, via their predictions, for training each other. The labels generated in this way are noisy. To avoid training error amplification, the temporally averaged model of each network is used to produce reliable soft labels for supervising the other network in a collaborative training strategy. Furthermore, since the pixel labels from the two models are not matched, a label alignment algorithm based on the Hungarian algorithm is proposed to match the cluster labels. \hspace*{1em} In this paper, we make the following contributions: \begin{itemize} \item [1)] The MMT framework is introduced into the unsupervised image segmentation task to produce stable segmentation results. \item [2)] A label alignment algorithm based on the Hungarian algorithm is proposed to match the cluster labels. \item [3)] Experimental results demonstrate that the proposed method achieves better accuracy than existing methods while maintaining stable results. 
\end{itemize} \section{Related Work} \hspace*{1em} Image segmentation is the task of assigning every pixel a class label. Recently, image segmentation using deep neural networks has made great breakthroughs. A fully convolutional network (FCN)~\cite{long2015fully} has been proposed for end-to-end training and has outperformed conventional segmentation methods such as K-means~\cite{krishna1999genetic}, which clusters the feature vectors to obtain improved classification scores. Unfortunately, FCN results suffer from low spatial resolution and blurring effects. This problem was addressed by adding a CRF layer in \cite{zheng2015conditional} while maintaining the end-to-end architecture. However, there is a large amount of unlabeled images in the real world. Unsupervised image segmentation has therefore attracted a lot of attention from researchers, and several related methods have been proposed. \cite{kanezaki2018unsupervised} and \cite{kim2020unsupervised} proposed a joint learning approach that learns optimal CNN parameters for image pixel clustering. This approach is practical because CNN filters are known to be effective for texture recognition and segmentation~\cite{cimpoi2015deep} \cite{He_2019_ICCV}. But due to the random initialization of the filters, unstable results are obtained from their models. There are two different solutions to this problem. The first is to fix the parameters of the CNN filters, as in transfer learning~\cite{oquab2014learning} \cite{cao2018partial}. The second is to create consistent training supervisions for labeled/unlabeled data via the predictions of different models, as in teacher-student models~\cite{laine2016temporal} \cite{zhang2018deep}. This paper chooses the latter solution. 
\hspace*{1em}Recently, \cite{ge2020mutual} proposed Mutual Mean-Teaching (MMT) to learn better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner. This elegant work shows that the MMT structure can be used in unsupervised learning. However, the most time-consuming component of MMT is that each iteration requires label alignment using k-means, and the number of labels is fixed in the original MMT structure. This constraint is unsuitable for our task, so we select a label alignment method that does not fix the number of labels instead of using k-means. \cite{fan2012cluster} and \cite{zhou2006clusterer} use the overlap similarity rate as the coefficient matrix, with the Hungarian and implicit-enumeration algorithms as basic algorithms, to solve the label alignment problem. It is shown that the algorithm converges at a rate that is effective in practice. \section{Proposed Method} \begin{figure*} \centering \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{structure.png} \\[\abovecaptionskip] \small (a) The pipeline of the proposed model based on the MMT framework. \end{tabular} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{model.png} \\[\abovecaptionskip] \small (b) The details of model 1 and model 2. \end{tabular} \caption{Overall architecture of the proposed model for unsupervised image segmentation. (a) The pipeline of the proposed model based on the MMT framework: it consists of two collaborative networks, model 1 and model 2, which adopt the same structure and are optimized under the supervision of off-line refined labels and on-line refined labels. Total variation and label-alignment cross entropy loss functions are adopted. (b) The details of model 1 and model 2, consisting of a feature extraction module and mean-model processing. }\label{network} \end{figure*} \hspace*{1em} Image segmentation can be formulated as a classification problem of pixels with labels. 
For simplicity, let $\{\cdot\}$ denote $\{\cdot\}^N_{n=1}$ unless otherwise noted, where $N$ denotes the number of pixels in an input color image $I=\{x_{n} \in R^3\}$. Let $y_{n}=f(x_{n},\theta)$ denote the feature extraction process, where $\theta$ denotes the current network parameters, $f:R^3 \rightarrow R^q$ is a feature extraction function and $\{y_{n} \in R^q \}$ is the set of $q$-dimensional feature vectors of the image pixels. Cluster labels $\{c_{n} \in Z\}$ are assigned to all pixels by $c_{n} = g(y_{n})$, where $g:R^{q} \rightarrow Z$ denotes a mapping function; $g$ can be an assignment function that returns the label of the cluster centroid closest to $y_{n}$. A detailed interpretation can be found in \cite{kim2020unsupervised}. When $f$ and $g$ are fixed, $\{c_{n}\}$ are obtained by the above process. Conversely, if $f$ and $g$ are trainable whereas $\{c_{n}\}$ are specified, then the parameters of $f$ and $g$ can be optimized by gradient descent. Good results have been achieved with architectures meeting the following three criteria: pixels of similar features should be assigned the same label; spatially continuous pixels should be assigned the same label; and the number of unique cluster labels should be large. However, the random initialization of the filter parameters has a crucial influence on the final performance. How can the filter parameters be initialized randomly while still producing stable results? \hspace*{1em}We are motivated by the design of MMT: conduct label refinery in the target domain by optimizing the neural networks with off-line refined hard pseudo labels and on-line refined soft pseudo labels in a collaborative training manner. This framework was proposed to tackle the problem of label noise significantly affecting performance, especially in unsupervised learning. 
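The formulation above (features $y_n = f(x_n,\theta)$, labels $c_n = g(y_n)$ via an argmax over channels) can be sketched with a toy stand-in for the network; the random linear map below is purely illustrative, not the paper's CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_labels(y):
    """c_n = g(y_n): label each pixel by the argmax over the q feature channels."""
    # y: (N, q) feature vectors for N pixels
    return np.argmax(y, axis=1)

# Toy stand-in for y_n = f(x_n, theta): a random linear feature map.
N, q = 6, 4
x = rng.random((N, 3))       # N pixels, RGB values in R^3
theta = rng.random((3, q))   # "network parameters" (illustrative)
y = x @ theta                # f : R^3 -> R^q
c = assign_labels(y)         # cluster labels in {0, ..., q-1}
```

In the actual method, $f$ is a small CNN and this argmax is what makes the clustering differentiable end to end.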
However, the disadvantage is that the number of output labels is fixed, which does not meet our task requirements. We therefore select the cluster label alignment algorithm of \cite{zhou2006clusterer}, which does not fix the number of labels, instead of using k-means. Our structure is shown in Fig. 1. \subsection{Network architecture} \hspace*{1em} In model 1, the input image $x_{n}$, with each pixel value normalized, is fed to a network trained to extract features via the transformation function $y_{n}^{1}=f_{1}(x_{n},\theta_{1})$. Subsequently, each pixel is assigned a label by $C_{n}^{1} = argmax\{y_{n}^{1}, y_{n} \in R^q\}$. Similarly, model 2 generates labels with the same network under a different initialization; we denote its feature transformation function $y_{n}^{2}=f_{2}(x_{n},\theta_{2})$ and its corresponding label classifier as $C_{n}^{2}$. If these two collaborative networks generated labels for training each other directly from their current predictions, the labels would be very noisy. In order to mitigate the label noise, the temporally averaged model of each network is used instead of the current model to generate the labels. Specifically, the parameters of the temporally averaged models of the two networks at the current iteration $T$ are denoted as $\theta_{mean1}^{T}$ and $\theta_{mean2}^{T}$ respectively, which can be calculated as $$\theta_{1}^{T} = \theta_{1}^{T-1} +\eta\nabla \theta_{1} $$ $$\theta_{2}^{T} = \theta_{2}^{T-1}+\eta\nabla \theta_{2} $$ $$\theta_{mean1}^{T} = \alpha\theta_{mean1}^{T-1} + (1-\alpha)\theta_{1}^{T} $$ $$\theta_{mean2}^{T} = \alpha\theta_{mean2}^{T-1} + (1-\alpha)\theta_{2}^{T}$$ where $\theta_{mean1}^{T-1}$, $\theta_{mean2}^{T-1}$ indicate the temporal average parameters of the two networks in the previous iteration $(T-1)$, the initial temporal average parameters are $\theta_{mean1}^{0} = \theta_{1}^{0}$, $\theta_{mean2}^{0} = \theta_{2}^{0}$, $\alpha \in [0,1)$, and $\eta$ is the learning rate. 
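The temporal-average update above is a standard exponential moving average over the network parameters; a minimal sketch (representing the parameters as a dict of values is our own simplification):

```python
def ema_update(theta_mean_prev, theta_new, alpha=0.999):
    """Temporal average: theta_mean^T = alpha * theta_mean^{T-1} + (1 - alpha) * theta^T.

    theta_mean_prev, theta_new: dicts mapping parameter names to values
    (scalars or arrays); alpha in [0, 1) controls how slowly the mean model
    tracks the current model.
    """
    return {k: alpha * theta_mean_prev[k] + (1.0 - alpha) * theta_new[k]
            for k in theta_new}
```

With $\alpha$ close to 1 the mean model changes slowly, which is what makes its pseudo labels less noisy than the current model's predictions.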
\hspace*{1em}However, due to the different initializations of the networks, the two sets of labels generated by the feature channels of the different networks differ considerably. As assumed above, $C_{n}^{1} = \{c_{1}^{1}, c_{2}^{1},..., c_{n}^{1}\}$ and $C_{n}^{2} = \{c_{1}^{2}, c_{2}^{2},..., c_{n}^{2}\}$ are each partitioned into $k$ clusters, which can be expressed by the label vectors $\lambda^{1} = [\lambda_{1}^{1},\lambda_{2}^{1},...,\lambda_{n}^{1}]$ and $\lambda^{2} = [\lambda_{1}^{2},\lambda_{2}^{2},...,\lambda_{n}^{2}]$, where $\lambda_{i}^{1},\lambda_{i}^{2} \in \{1,2,...,k\}$ are the cluster labels from the different networks. To align the labels, the overlap similarity matrix $S$ is constructed, where $S_{i,j}$ counts the items appearing in both $\lambda_{i}^{1}$ and $\lambda_{j}^{2}$. Then the pair of clusters with the largest number of overlapping data items is matched, i.e., denoted by the same label. This process is repeated until all the clusters are matched. The programming model is expressed as follows: $$ \max Z = \mathrm{sum} \ S\bigodot D_{dec} = \sum_{i=1}^n\sum_{j=1}^n s_{i,j}\times d_{i,j} $$ $$ s.t. \ \ \sum_{i=1}^n d_{i,j} = 1 \quad (j=1,2,\dots,n) $$ $$ \sum_{j=1}^n d_{i,j} = 1 \quad (i=1,2,\dots,n) $$ $$ d_{i,j} \in \{0, 1\} $$ where $S$ is the overlap similarity matrix, $D_{dec}$ is the decision-making matrix, and $\bigodot$ denotes elementwise (Hadamard) multiplication. Briefly, we use $h$ to denote the label alignment process, $$C_{n}^{2'} = h_{2}(C_{n}^{1})$$ $$ C_{n}^{1'} = h_{1}(C_{n}^{2})$$ where $h_{1}$ denotes the map from the label space of $C_{n}^{2}$ to that of $C_{n}^{1}$ and $h_{2}$ denotes the map from the label space of $C_{n}^{1}$ to that of $C_{n}^{2}$. 
\begin{algorithm}[htb] \caption{Unsupervised image segmentation using MMT} \label{algoevent} \LinesNumbered \KwIn{$I = \{x_{n} \in R^3\}$} \KwOut{$L=\{c_{n} \in Z\}$} $Init() \rightarrow \theta_{1},\theta_{2}$ \; $\theta_{mean1} = \theta_{1}, \theta_{mean2} = \theta_{2}$ \; \For{t=2:T}{ $y_{n}^{1}=GetFeats_{1}(x_{n})$\; $y_{n}^{mean2}=GetFeats_{mean2}(x_{n})$\; $C_{n}^{mean2} = argmax\{y_{n}^{mean2}\}$\; $C_{n}^{1'} = h_{1}(C_{n}^{mean2})$ \quad // Label alignment\; $L_{sim}^{1} = L_{sim}(y_{n}^{1},C_{n}^{1'})$, $L_{tv}^{1} = L_{tv}(y_{n}^{1})$\; $L^{1} = L_{sim}^{1} + \beta L_{tv}^{1}$ \; $\theta_{1}^{t} = \theta_{1}^{t-1} +\eta \nabla \theta_{1}$, $\theta_{mean1}^{t} = \alpha \theta_{mean1}^{t-1} + (1-\alpha)\theta_{1}^{t}$ \; $y_{n}^{2} = GetFeats_{2}(x_{n})$\; $y_{n}^{mean1} = GetFeats_{mean1}(x_{n})$ \; $C_{n}^{mean1} = argmax\{y_{n}^{mean1}\}$\; $C_{n}^{2'} = h_{2}(C_{n}^{mean1})$ \quad // Label alignment\; $L_{sim}^{2} = L_{sim}(y_{n}^{2},C_{n}^{2'})$, $L_{tv}^{2} = L_{tv}(y_{n}^{2})$ \; $L^{2} = L_{sim}^{2} + \beta L_{tv}^{2}$\; $\theta_{2}^{t} = \theta_{2}^{t-1} +\eta\nabla \theta_{2}$, $\theta_{mean2}^{t} = \alpha \theta_{mean2}^{t-1} + (1-\alpha)\theta_{2}^{t}$\; } \end{algorithm} \subsection{Loss Functions} \hspace*{1em} \textbf{TV Loss}. To encourage each pixel's cluster label to agree with those of its neighboring pixels, a total variation regularizer is applied to the horizontal and vertical differences of the output feature map; here we follow prior work \cite{shibata2017misalignment}. The spatial continuity loss $L_{tv}$ is defined as follows: $$ L_{tv}(y_{n}) = \sum_{i=1}^{W-1}\sum_{j=1}^{H-1}(||y_{i+1,j}-y_{i,j}||_{1} + ||y_{i,j+1}-y_{i,j}||_{1}) $$ where $W$ and $H$ represent the width and height of the input image, $|| \cdot ||_{1}$ represents the $L_1$ norm, and $y_{i,j}$ represents the pixel value at $(i,j)$ in the feature map $y$. \hspace*{1em}\textbf{Feature Loss}. 
The cluster labels generated by the averaged model 2 are obtained by applying the $argmax$ function to the feature map $y_{mean2}$. The cluster labels $c^{mean2}_{n}$ are further utilized as pseudo targets, and then aligned with model 1 via the mapping function $h_{1}$. The feature loss penalizes the output labels when they deviate in content from the target $C_{n}^{mean2}$. To achieve this goal, the following cross entropy loss is calculated: $$ L_{sim}(y_{n},h_{1}(C^{mean2}_{n})) = \sum_{n=1}^{N}\sum_{j=1}^{q}(-\delta(j-h_{1}(C^{mean2}_{n}))\ln{y}^{n,j}) $$ where $$ \delta(t)=\left\{ \begin{array}{rcl} 1 & & {if \quad t = 0}\\ 0 & & {otherwise} \end{array} \right. $$ \subsection{Overall Loss and Algorithm} \hspace*{1em} To accomplish image segmentation, we assign labels to match the feature content representation of $y_{n}$ and spatial smoothness simultaneously. Thus we jointly minimise the distance of the content feature representations and the spatial smoothness term. The loss function we minimise is $$L_{total} = L_{sim}+\beta L_{tv}$$ where $\beta$ is the weighting factor for spatial smoothness. \hspace*{1em}The detailed optimization procedure is summarized in Algorithm 1. Compared to traditional unsupervised algorithms, the labels are generated by off-line refinement instead of by an on-line algorithm. We keep model 2 and mean model 2 constant during the model 1 training phase, as model 1 learns to segment the image using labels generated by mean model 2, after which the labels of the outputs of model 1 and model 2 are aligned. The parameters $\theta_{1}$ and $\theta_{mean1}$ are then updated. Similarly, we keep model 1 and mean model 1 constant during the model 2 training phase; model 2 learns to segment the image according to labels generated by mean model 1, and then the parameters $\theta_{2}$ and $\theta_{mean2}$ are updated. 
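A minimal NumPy sketch of the two losses and their combination; note that, to make the cross entropy well defined on raw features, we softmax the features here, which is an assumption beyond the formula in the text:

```python
import numpy as np

def tv_loss(y):
    """L_tv: sum of L1 differences between vertically and horizontally
    adjacent entries of the feature map y, shape (H, W, q)."""
    return (np.abs(y[1:, :, :] - y[:-1, :, :]).sum()
            + np.abs(y[:, 1:, :] - y[:, :-1, :]).sum())

def sim_loss(y, targets):
    """L_sim: cross entropy between softmaxed features y (N, q) and
    aligned pseudo-labels targets (N,)."""
    z = y - y.max(axis=1, keepdims=True)            # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(targets)), targets]).sum()

def total_loss(y_map, targets, beta=1.0):
    """L_total = L_sim + beta * L_tv for a feature map of shape (H, W, q)."""
    H, W, q = y_map.shape
    return sim_loss(y_map.reshape(H * W, q), targets) + beta * tv_loss(y_map)
```

Here `beta` plays the role of the spatial-smoothness weight $\beta$ in the overall loss.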
After several steps of training, if model 1, mean model 1, model 2 and mean model 2 have enough capacity, they will reach a point at which all four models are stable. \section{Experimental Results} \begin{figure}[h] \centering \includegraphics[width=0.71in]{2010.jpg}\vspace{1pt} \includegraphics[width=0.71in]{itoutput8.png}\vspace{1pt} \includegraphics[width=0.71in]{itoutput14.png}\vspace{1pt} \includegraphics[width=0.71in]{itoutput18.png} \centering \includegraphics[width=0.71in]{itblank.png}\vspace{1pt} \includegraphics[width=0.71in]{itour1.jpg}\vspace{1pt} \includegraphics[width=0.71in]{itour2.jpg}\vspace{1pt} \includegraphics[width=0.71in]{itour3.jpg} \centering \caption{Results of the proposed method and \cite{kanezaki2018unsupervised}, generated by training three times without fine-tuning any parameters. In the first row, the first image is the input image; the other three images are segmentation results, with different segments shown in different colors. The second row shows three images generated by our method.} \label{fig2} \end{figure} \begin{figure*}[htb] \centering img \includegraphics[width=1.3in]{im101027.jpg} \includegraphics[width=1.3in]{im196062.jpg} \includegraphics[width=1.3in]{im108004.jpg} \includegraphics[width=1.3in]{im306052.jpg} \includegraphics[width=0.6in]{im253092.jpg} \centering gt\ \ \ \ \includegraphics[width=1.3in]{gt01027.jpg} \includegraphics[width=1.3in]{gt96062.jpg} \includegraphics[width=1.3in]{gt08004.jpg} \includegraphics[width=1.3in]{gt06052.jpg} \includegraphics[width=0.6in]{gt53092.jpg} \centering \cite{kanezaki2018unsupervised} \includegraphics[width=1.3in]{ot101027.png} \includegraphics[width=1.3in]{ot196062.png} \includegraphics[width=1.3in]{ot108004.png} \includegraphics[width=1.3in]{ot306052.png} \includegraphics[width=0.6in]{ot253092.png} \centering ours \includegraphics[width=1.3in]{our1.jpg} \includegraphics[width=1.3in]{our2.jpg} \includegraphics[width=1.3in]{our5.jpg} 
\includegraphics[width=1.3in]{our4.jpg} \includegraphics[width=0.6in]{our3.jpg} \caption{ Comparison of our method and \cite{kanezaki2018unsupervised} on unsupervised image segmentation. The first two rows show the original images and ground-truth image segmentations. The third row shows good results selected over many training runs of \cite{kanezaki2018unsupervised}; in this process, failed results were discarded since their method is not stable. The last row shows the results of our proposed method, obtained without training many times. Different segments are shown in different colors.} \label{fig1} \end{figure*} \hspace*{1em} In this section, we conduct experiments to verify the effectiveness of the proposed stable image segmentation approach in unsupervised learning. 200 test images from the Berkeley Segmentation Dataset (BSD500) are chosen to evaluate the proposed method. We trained the proposed model with T = 1000 iterations and fixed q = 100 for the number of channels of the output feature map. The number of convolutional layers M was set to 3 and the temporal ensemble momentum $\alpha$ was set to 0.999. The model parameters are optimized by stochastic gradient descent (SGD) with a learning rate fixed to 0.1 and a momentum of 0.9. For comparison, we reproduce the baseline image segmentation of \cite{kanezaki2018unsupervised}. Figure \ref{fig2} illustrates the improved stability of our structure: the first row, for which we repeated training 3 times without fine-tuning any parameters following \cite{kim2020unsupervised}, exhibits unstable performance, whereas our method shows a significant improvement in stability. 
\begin{figure*}[htb] \centering img \includegraphics[width=0.58in]{2007.jpg} \includegraphics[width=1.03in]{2008.jpg} \includegraphics[width=1.13in]{2009.jpg} \includegraphics[width=0.59in]{2010.jpg} \includegraphics[width=0.9in]{2011.jpg} \includegraphics[width=1.17in]{2012.jpg} \centering gt\ \ \ \ \ \includegraphics[width=0.58in]{2007GT.jpg} \includegraphics[width=1.08in]{2008GT.jpg} \includegraphics[width=1.13in]{2009GT.jpg} \includegraphics[width=0.59in]{2010GT.jpg} \includegraphics[width=0.9in]{2011GT.jpg} \includegraphics[width=1.17in]{2012GT.jpg} \centering \cite{ji2019invariant} \includegraphics[width=0.58in]{2007IIC.jpg} \includegraphics[width=1.05in]{2008IIC.jpg} \includegraphics[width=1.13in]{2009IIC.jpg} \includegraphics[width=0.59in]{2010IIC.jpg} \includegraphics[width=0.9in]{2011IIC.jpg} \includegraphics[width=1.17in]{2012IIC.jpg} \centering \cite{kanezaki2018unsupervised} \includegraphics[width=0.58in]{2007SS.png} \includegraphics[width=1.03in]{2008SS.png} \includegraphics[width=1.13in]{2009SS.png} \includegraphics[width=0.59in]{2010SS.png} \includegraphics[width=0.9in]{2011SS.png} \includegraphics[width=1.17in]{2012SS.png} \centering \cite{kim2020unsupervised} \includegraphics[width=0.58in]{2007CC.png} \includegraphics[width=1.03in]{2008CC.png} \includegraphics[width=1.13in]{2009CC.png} \includegraphics[width=0.59in]{2010CC.png} \includegraphics[width=0.9in]{2011CC.png} \includegraphics[width=1.17in]{2012CC.png} \centering ours \includegraphics[width=0.58in]{our2007.jpg} \includegraphics[width=1.03in]{our2008.jpg} \includegraphics[width=1.13in]{our2009.jpg} \includegraphics[width=0.59in]{our2010.jpg} \includegraphics[width=0.9in]{our2011.jpg} \includegraphics[width=1.17in]{our2012.jpg} \centering \caption{Comparison of our method and IIC\cite{ji2019invariant}, \cite{kanezaki2018unsupervised}, \cite{kim2020unsupervised} on six images from PASCAL VOC 2012. The first two rows show the original images and ground-truth image segmentations. 
The third row is generated by the IIC method. The fourth and fifth rows show the segmentations produced by the previous state-of-the-art systems of \cite{kanezaki2018unsupervised} and \cite{kim2020unsupervised}; their failed results were discarded since these methods are not stable. The last row shows the output of our highest performing structure. } \label{fig3} \end{figure*} \hspace*{1em}Examples of unsupervised image segmentation results on BSD500 and PASCAL VOC 2012 are shown in Figure \ref{fig1} and Figure \ref{fig3}, respectively. Figure \ref{fig1} shows a comparison on five images from BSD500. The first two rows show the original images and ground-truth segmentations, the third row shows the predicted segmentation results of \cite{kanezaki2018unsupervised}, for which we picked out relatively good results over many training runs, and the last row shows our results. These also demonstrate that our method performs better and more consistently, indicating that our model is able to learn features that are stable over a large range of intensity variations. The segmentation results comparing our method with IIC\cite{ji2019invariant}, \cite{kanezaki2018unsupervised} and \cite{kim2020unsupervised} are shown in Figure \ref{fig3}. Note that our method results in very smooth image segmentations that are closer to the ground-truth segmentation. The images in Figure \ref{fig3} offer compelling evidence that our segmentation algorithm performs well on a variety of images from different domains. \hspace*{1em}To evaluate the segmentation results, we compare the segmentations produced by different algorithms: k-means clustering, the graph-based segmentation method (GS)\cite{felzenszwalb2004efficient}, \cite{kanezaki2018unsupervised} and \cite{kim2020unsupervised}. 
The parameters of \cite{kanezaki2018unsupervised} and \cite{kim2020unsupervised} are the same as those of our proposed method. The best parameters of the k-means clustering and graph-based segmentation algorithms were experimentally determined from $\{2,5,8,11,14,17,20 \}$ and $\{100,500,1000,1500,2000 \}$, respectively. We compute accuracy as the number of true positives divided by the sample size, following the standard protocol of finding the best one-to-one permutation mapping between learned and ground-truth clusters using linear assignment\cite{kuhn2005hungarian}. Table 1 reports this evaluation on BSD500. \begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & Accuracy \\ \hline\hline Kmeans\cite{krishna1999genetic} & 0.3639 \\ GS\cite{felzenszwalb2004efficient} & 0.4661 \\ Kanezaki\cite{kanezaki2018unsupervised} & 0.5269\\ Kim\cite{kim2020unsupervised} & 0.5023\\ ours & 0.5384 \\ \hline \end{tabular} \label{tab111} \end{center} \caption{ Comparison of pixel accuracy of unsupervised segmentation methods on BSD500. } \end{table} \section{Conclusions} \hspace*{1em} In this paper, we proposed an unsupervised image segmentation model based on MMT, using an overlap-similarity method for label alignment. The model consists of convolutional filters for feature extraction and differentiable processes for feature clustering, which enables end-to-end network training. We applied this method to unsupervised image segmentation, where we achieve comparable performance and drastically improved stability compared to existing methods. \hspace*{1em}In future work, we hope to explore the use of other loss functions for unsupervised image segmentation tasks, such as semantic segmentation loss. We also plan to investigate the use of self-attention mechanisms for unsupervised image segmentation. \bibliographystyle{unsrt}
\section{Introduction} Over the last few years the development of high-precision temperature sensing techniques has attracted considerable interest due to broad and important applications ranging from medicine and biology \cite{Kinkert2009} to quantum information processing and quantum thermodynamics \cite{Mehboudi2019,Pasquale2016,Gemmer2004}. A quantum thermometer generally consists of a system, called the probe, which is brought into thermal equilibrium with a sample of interest. Various quantum optical systems can be used as temperature probes, including for example quantum dots \cite{Seilmeier2014,Haupt2014,Sabin2014}, color centers in nanodiamonds \cite{Neumann2013,Kucsko2013,Toyli2013}, micromechanical resonators \cite{Brunelli2011,Brunelli2012} and trapped ions \cite{Meekhof1996,Robnagel2015,Gebert2016,Wan2015,Levy2020}. An accurate strategy for temperature determination can be executed by measuring the populations in the energy basis of the quantum probe system \cite{Paris2016,Marzolino2013,Correa2015,Campbell2017,Campbell2018}. Indeed, it turns out that this strategy is \emph{optimal}, with the smallest temperature statistical uncertainty, which saturates the fundamental Cram\'er-Rao bound for temperature estimation of any equilibrium system. However, energy measurements are in general challenging, as in the case of a probe consisting of a quantum harmonic oscillator, where the number of occupied basis states is typically large at thermal equilibrium, which limits the achievable temperature precision. An alternative approach is to use additional ancillary qubits coupled coherently to the probe. The temperature information is then transferred to the qubit states, which can be read out with high efficiency at the end of the interaction \cite{Brunelli2011,Brunelli2012,Ivanov2019}. 
Although this strategy is experimentally more convenient, the statistical uncertainty of the temperature determination is usually higher than the optimal one given by the fundamental quantum Cram\'er-Rao bound. In this work we propose an optimal adiabatic method for phonon temperature detection using trapped ions. Our technique relies on global laser radiation which couples the internal spin states of the ions to the vibrational mode via a red-sideband transition. This collective interaction is described in general by a non-linear Jaynes-Cummings type model. We show that by engineering a time-dependent detuning and spin-motion coupling one can adiabatically transfer the relevant temperature information encoded in the phonon distribution of the vibrations onto the collective spin excitations. Such time-dependent control of the spin-phonon interaction has been extensively studied in the creation of entangled spin and motion states \cite{Linington2008,Linington2008_1,Hume2009,Toyoda2011}. Here we show that each Fock state of the harmonic oscillator is adiabatically mapped onto a respective spin-excitation configuration. Thus the temperature determination is carried out by performing projective measurements of the spin populations at the end of the adiabatic transition. We show that our adiabatic sensing technique can be operated in and beyond the Lamb-Dicke limit and is therefore suitable for measuring a broad range of temperatures, including the low-temperature limit with mean thermal phonon excitation $\bar{n}\ll1$ as well as the high-temperature regime with $\bar{n}\gg 1$. We quantify the sensitivity of the temperature estimation using the classical Fisher information. We show that projective measurements in the original spin basis lead to an \emph{equality} between the classical and quantum Fisher information for a quantum harmonic oscillator at thermal equilibrium.
Therefore, our quantum thermometry is optimal in the sense that the uncertainty of the temperature estimation saturates the quantum Cram\'er-Rao bound. Moreover, we show that our adiabatic motion sensing technique can be applied to the detection of various other quantum states, such as coherent and squeezed motional states. In particular, we discuss the detection of the phase of a coherent cat state via state-projective measurements, which can be used for ultra-sensitive force measurement with Heisenberg-limited precision \cite{Maiwald2009,Munro2002}. The paper is organized as follows: In Sec. \ref{background} we provide the general theoretical framework on the sensitivity of the temperature estimation. In Sec. \ref{realization} we discuss the physical realization of the adiabatic temperature estimation technique using trapped ions. The adiabatic method relies on a time-dependent red-sideband interaction which transfers the relevant temperature information onto the collective spin states. We show that state-projection measurements in the original basis lead to an equality between the classical and quantum Fisher information, and thus the temperature uncertainty saturates the quantum Cram\'er-Rao bound. In Sec. \ref{imperfections} we investigate the effects of physical imperfections on the sensitivity of our adiabatic quantum thermometer. In Sec. \ref{cat} we discuss the application of the sensing technique to measuring the relative phase of a coherent cat state. We show that the phase can be determined by performing spin projective measurements with Heisenberg-limited precision. Finally, the conclusions are presented in Sec. \ref{conclusions}.
\section{Principle of Quantum Thermometry}\label{background} We begin by considering a probe system represented by a simple quantum harmonic oscillator with Hamiltonian $\hat{H}=\hbar\omega\hat{a}^{\dag}\hat{a}$, where $\hat{a}^{\dag}$ and $\hat{a}$ are the creation and annihilation operators of a bosonic excitation with frequency $\omega$. We assume that the harmonic oscillator is prepared at thermal equilibrium and is described by a canonical Gibbs state with density matrix $\hat{\rho}_{T}=e^{-\beta\hat{H}}/Z=\sum_{n=0}^{\infty}p_{n}|n\rangle\langle n|$. Here $|n\rangle$ is the $n$th Fock state of the harmonic oscillator with eigenenergy $E_{n}=n\hbar\omega$, $p_{n}=Z^{-1}e^{-\beta E_{n}}$ are the corresponding thermal state probabilities, $Z={\rm Tr}(e^{-\beta\hat{H}})$ is the associated partition function, and $\beta=1/k_{\rm B}T$, with $k_{\rm B}$ being the Boltzmann constant and $T$ the temperature, the parameter we wish to estimate. Since the temperature is not a direct observable, its value can be extracted only by performing suitable measurements of other experimentally accessible observables. For this goal, consider a discrete set of measurements defined in terms of its corresponding positive-operator valued measure (POVM) $\{\hat{\Pi}_{n}\}$, with $\sum_{n}\hat{\Pi}_{n}=\mathbb{I}$. The corresponding classical Fisher information, which quantifies the amount of information on the temperature of the system, is given by \cite{Paris2009} \begin{equation} F_{\rm C}(T)=\sum_{n}\frac{\left(\partial_{T}P_{n}(T)\right)^{2}}{P_{n}(T)},\label{CFI} \end{equation} where $P_{n}(T)={\rm Tr}(\hat{\Pi}_{n}\hat{\rho}_{T})$ is the probability to obtain outcome $n$ from the performed measurement. Furthermore, the uncertainty $\delta T$ of the temperature estimator is bounded by the Cram\'er-Rao inequality \begin{equation} \delta T\ge \frac{1}{\sqrt{\nu F_{\rm C}(T)}}, \end{equation} where $\nu$ is the number of experimental repetitions.
The optimal strategy to measure the value of the temperature is, however, associated with a privileged observable which maximizes the classical Fisher information and thus allows one to determine the temperature with ultimate precision. Indeed, it is possible to show that the classical Fisher information is upper bounded by $F_{\rm C}(T)\le F_{\rm Q}(T)$, where $F_{\rm Q}(T)={\rm Tr}(\hat{\rho}_{T}\hat{L}^{2})$ is the quantum Fisher information (QFI). Here $\hat{L}$ is the symmetric logarithmic derivative (SLD) operator, which satisfies the operator equation $\partial_{T}\hat{\rho}_{T}=(\hat{\rho}_{T}\hat{L}+\hat{L}\hat{\rho}_{T})/2$. Thus, the ultimate achievable precision of the temperature determination, optimized over all possible measurements, is quantified by the quantum Cram\'er-Rao bound \begin{equation} \delta T\ge \frac{1}{\sqrt{\nu F_{\rm Q}(T)}}.\label{QRB} \end{equation} The eigenstates of the SLD operator $\hat{L}$ define the optimal measurement basis in which the quantum Cram\'er-Rao bound can be saturated. It is straightforward to show that for the Gibbs state $\hat{\rho}_{T}$ the SLD operator can be written as $\hat{L}=\sum_{n}\{(E_{n}-\langle \hat{H}\rangle)/k_{\rm B}T^{2}\}|n \rangle\langle n|$, where $\langle \hat{H}\rangle={\rm Tr}(\hat{H}\hat{\rho}_{T})$ is the average energy \cite{Paris2016}. The result emphasizes that the optimal temperature measurement is achieved in the Fock basis $|n\rangle$ of the harmonic oscillator, e.g., by measuring the probabilities $p_{n}$. Finally, the QFI for the harmonic oscillator at thermal equilibrium can be written as \begin{equation} F_{\rm Q}(T)=\frac{\hbar^{2}\omega^{2}}{4k_{\rm B}^{2}T^{4}}{\rm csch}^{2}\left(\frac{\hbar\omega}{2k_{\rm B}T}\right).\label{QFI} \end{equation} A question that arises is whether it is possible to saturate the fundamental quantum Cram\'er-Rao bound by performing a different set of discrete measurements rather than measuring the thermal state probabilities.
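As a numerical sanity check of Eq. (\ref{QFI}), the classical Fisher information of the Fock-basis populations $p_n$ can be computed directly and compared with the closed form; a minimal sketch in units $\hbar=k_{\rm B}=1$ (all parameter values are illustrative):

```python
import numpy as np

# Numerical check, in units hbar = k_B = 1 and omega = 1: the classical
# Fisher information of the Fock-state populations p_n of a thermal
# oscillator equals the closed-form quantum Fisher information.
omega, T, nmax = 1.0, 0.8, 400   # illustrative values

def thermal_probs(temp):
    """Normalised thermal occupation probabilities, truncated at nmax."""
    n = np.arange(nmax)
    p = np.exp(-omega * n / temp)
    return p / p.sum()

# Classical Fisher information via a central finite difference in T
dT = 1e-5
dp = (thermal_probs(T + dT) - thermal_probs(T - dT)) / (2 * dT)
F_C = np.sum(dp**2 / thermal_probs(T))

# Closed-form QFI: F_Q = (omega^2 / 4 T^4) csch^2(omega / 2 T)
F_Q = omega**2 / (4 * T**4 * np.sinh(omega / (2 * T))**2)
print(F_C, F_Q)   # the two agree to finite-difference accuracy
```

This illustrates that measuring the populations $p_n$ already saturates the quantum Cram\'er-Rao bound, as stated in the text.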
For this goal we consider an auxiliary quantum system of $N$ spin-$1/2$ particles which interacts coherently with the quantum harmonic oscillator. Using a time-dependent unitary evolution one can map the temperature information onto the respective spin-state populations. We show that by performing state-projection measurements one can saturate the fundamental quantum Cram\'er-Rao bound and thus determine the temperature with the ultimate precision given by Eq. (\ref{QRB}). \section{Ion Trap Realization of Quantum Thermometry}\label{realization} In the following we discuss the ion-trap-based quantum thermometer, which is able to perform an optimal measurement of the phonon temperature by detecting the ions' spin populations. We consider a linear ion crystal of $N$ ions confined in a Paul trap along the $z$ axis with trap frequencies $\omega_{\chi}$ ($\chi=x,y,z$). We assume that the transverse frequencies are much larger than the axial trap frequency, $\omega_{x,y}\gg\omega_{z}$, which leads to the formation of a linear ion crystal in which the ions occupy equilibrium positions $z_{k}^{0}$ along the trap axis. The position operator of the $l$th ion can be expressed as $\hat{r}_{l}=\delta \hat{r}_{x,l}\vec{e}_{x}+\delta \hat{r}_{y,l}\vec{e}_{y}+(z_{l}^{0}+\delta \hat{r}_{z,l})\vec{e}_{z}$, where $\delta \hat{r}_{\chi,l}$ are the displacement operators around the ion's equilibrium position, which can be written in terms of collective phonon modes as $\delta \hat{r}_{\chi,l}=\sum_{k=1}^{N}M^{\chi}_{l,k}\sqrt{\frac{\hbar}{2m\omega_{\chi,k}}}(\hat{a}^{\dag}_{\chi,k}+\hat{a}_{\chi,k})$ \cite{James1998}. Here $\hat{a}^{\dag}_{\chi,k}$ and $\hat{a}_{\chi,k}$ are the creation and annihilation operators of the collective phonons with frequency $\omega_{\chi,k}$ along the spatial direction $\chi$. The element $M^{\chi}_{l,k}$ is the amplitude of the normal mode $k$ on ion $l$.
We assume that each ion has two metastable internal levels $\left|\downarrow\right\rangle$ and $\left|\uparrow\right\rangle$ with transition frequency $\omega_{0}$. Then the interaction-free Hamiltonian describing the linear ion crystal is given by \begin{equation} \hat{H}_{0}=\hbar\omega_{0}\hat{S}_{z}+\hbar\sum_{k=1}^{N}\sum_{\chi=x,y,z}\omega_{\chi,k}\hat{a}^{\dag}_{\chi,k}\hat{a}_{\chi,k},\label{H0} \end{equation} where $\hat{S}_{z}=\frac{1}{2}\sum_{l=1}^{N}\sigma^{z}_{l}$ and $\hat{S}^{+}=\sum_{l=1}^{N}\sigma^{+}_{l}$ ($\hat{S}^{-}=(\hat{S}^{+})^{\dag}$) are the collective spin operators, with $\sigma_{l}^{z}$ being the Pauli operator for the $l$th spin and $\sigma^{+}_{l}=\left|\uparrow_{l}\right\rangle\left\langle\downarrow_{l}\right|$ the spin raising operator. \begin{figure} \includegraphics[width=0.45\textwidth]{fig1.eps} \caption{(Color online) Linkage pattern of the collective states of a string of two ions driven by a red-sideband laser. The spins are initially prepared in their electronic ground state and the vibrational center-of-mass mode is in a thermal state. a) The state $\left|\downarrow\downarrow\right\rangle\left|0\right\rangle$ is not affected by the collective red-sideband interaction. b) and c) The states $\left|\downarrow\downarrow\right\rangle\left|1\right\rangle$ and $\left|\downarrow\downarrow\right\rangle\left|n\right\rangle$ ($n>1$) are coupled to the manifolds with the same number of total excitations.} \label{fig1} \end{figure} After Doppler cooling of the linear ion crystal each collective vibrational mode is in a thermal state of motion with mean thermal phonon excitation $\bar{n}_{\chi,k}$. Since the oscillations of the ions in the three directions are decoupled, one can determine the temperature of each vibrational mode independently \cite{Meekhof1996}. For concreteness we consider the temperature estimation of the collective center-of-mass mode along the transverse spatial direction $x$.
This mode has the highest vibrational frequency $\omega_{x,1}=\omega_{x}$, and in it the ions oscillate in phase with equal amplitude. The total Hilbert space is spanned by the basis $\{|S,m\rangle\otimes|n\rangle\}$, where $|n\rangle$ is the Fock state of the center-of-mass vibrational mode with $n$ phonons. The states $|S,m\rangle$ are the common eigenvectors of the two commuting operators $\hat{S}^{2}$ and $\hat{S}_{z}$, with $\hat{S}^{2}|S,m\rangle=S(S+1)|S,m\rangle$ and $\hat{S}_{z}|S,m\rangle=m|S,m\rangle$ ($m=-S,\ldots,S$), where $S=\frac{N}{2}$. In the computational basis the state $\left|D_{l}\right\rangle=|S,-S+l\rangle$ with $l$ spin excitations ($l=0,1,\ldots, 2S$) can be expressed as \begin{equation} \left|D_{l}\right\rangle=\sqrt{\frac{l!(2S-l)!}{(2S)!}}\sum_{x}P_{x}\left|\uparrow_{1}\ldots\uparrow_{l}\downarrow_{l+1}\ldots\downarrow_{N}\right\rangle, \end{equation} where the sum over $x$ runs over all distinct permutations $P_{x}$ of the ions' internal states with $l$ spins in the excited state $\left|\uparrow\right\rangle$ and, respectively, $N-l$ in the ground state $\left|\downarrow\right\rangle$. In order to create a coupling between the collective vibrations and the ion spin states we assume that the linear ion crystal is globally addressed by a laser field with wave vector $\vec{k}$ pointing along the $x$ direction ($|\vec{k}|=k_{x}$) and laser frequency $\omega_{\rm L}(t)=\omega_{0}-\omega_{x}+\Delta(t)$ tuned near the center-of-mass red-sideband resonance with a time-dependent detuning $\Delta(t)$ ($\omega_{x}\gg\Delta(t)$).
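For concreteness, the Dicke states $|D_l\rangle$ can be constructed explicitly as equal-weight superpositions; a small illustrative sketch (spin-up encoded as bit $1$, normalisation $\binom{N}{l}^{-1/2}$):

```python
import numpy as np
from itertools import combinations
from math import comb

def dicke_state(N, l):
    """|D_l>: equal-weight superposition of all N-qubit basis states with
    l excitations (|up> encoded as bit 1), normalised by C(N, l)^(-1/2)."""
    psi = np.zeros(2**N)
    for ups in combinations(range(N), l):
        idx = sum(1 << q for q in ups)   # bit pattern of the excited ions
        psi[idx] = 1.0
    return psi / np.sqrt(comb(N, l))

psi = dicke_state(4, 2)
print(np.isclose(psi @ psi, 1.0))            # normalised
print(np.count_nonzero(psi) == comb(4, 2))   # C(4,2) = 6 equal components
```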
After performing an optical rotating-wave approximation, the interaction Hamiltonian becomes \cite{Wineland1998,Haffner2008,Schneider2012} \begin{eqnarray} \hat{H}_{I}(t)&=&\hbar\Omega(t)\sum_{l=1}^{N}\{\sigma^{+}_{l}e^{i\sum_{k=1}^{N}\eta^{x}_{l,k}(\hat{a}^{\dag}_{x,k}e^{i\omega_{x,k}t}+\hat{a}_{x,k}e^{-i\omega_{x,k}t})}\notag\\ &&\times e^{i(\omega_{x}t-\int_{t_{i}}^{t}\Delta(\tau)d\tau)}+{\rm h.c.}\},\label{HI} \end{eqnarray} where $\Omega(t)$ is the time-dependent Rabi frequency and $\eta^{x}_{l,k}=k_{x}\sqrt{\frac{\hbar}{2m\omega_{x,k}}}M^{x}_{l,k}$ is the Lamb-Dicke parameter. Moreover, since the laser frequency is close to the red-sideband resonance of the center-of-mass mode, one can perform a vibrational rotating-wave approximation in which the contribution of the other, spectator phonon modes is neglected. Transforming the Hamiltonian into the rotating frame with respect to $\hat{U}_{\rm R}=e^{i\int_{t_{i}}^{t}\Delta(\tau)d\tau \hat{S}_{z}}$, such that $\hat{H}_{\rm nJC}(t)=\hat{U}_{\rm R}^{\dag}\hat{H}_{I}(t)\hat{U}_{\rm R}-i\hbar \hat{U}_{\rm R}^{\dag}\partial_{t}\hat{U}_{\rm R}$, we arrive at \begin{equation} \hat{H}_{\rm nJC}(t)=\hbar \Delta(t)\hat{S}_{z}+\hbar \lambda(t)(\hat{S}^{+}\hat{F}(\hat{n})\hat{a}+\hat{S}^{-}\hat{a}^{\dag}\hat{F}(\hat{n})),\label{nHJC} \end{equation} \begin{figure} \includegraphics[width=0.45\textwidth]{fig2.eps} \caption{(Color online) Lowest eigenfrequencies of Hamiltonian (\ref{HJC}) for three spins and for different phonon numbers $n$ as a function of time. We assume that the time-dependent detuning and spin-phonon coupling are given by Eq. (\ref{couplings}). In the adiabatic limit each of the initial states $|\psi_{n}(t_{i})\rangle=\left|\downarrow\downarrow\downarrow\right\rangle\left|n\right\rangle$ ($n=0,1,2,3$) is transformed into $|\psi_{n}(t_{i})\rangle\rightarrow|D_{n}\rangle|0\rangle$.
} \label{fig2} \end{figure}where $\lambda(t)=\Omega(t)\eta^{x}_{l,1}$ is the time-dependent spin-phonon coupling, with $\eta^{x}_{l,1}=\eta$ being the Lamb-Dicke parameter for the center-of-mass vibrational mode, and $\hat{a}^{\dag}$ and $\hat{a}$ are respectively the phonon creation and annihilation operators corresponding to an oscillator with frequency $\omega_{x}$. The Hamiltonian (\ref{nHJC}) describes the non-linear Jaynes-Cummings (nJC) model, where the non-linear operator function can be expressed as \cite{Vogel1995} \begin{equation} \hat{F}(\hat{n})=e^{-\eta^{2}/2}\sum_{n=0}^{\infty}\frac{(-\eta^{2})^{n}}{n!(n+1)!}\hat{a}^{\dag n}\hat{a}^{n}.\label{F} \end{equation} Assuming the Lamb-Dicke limit $\eta\langle(\hat{a}^{\dag}+\hat{a})^{2}\rangle^{1/2}\ll 1$, in which the amplitudes of the oscillations of the ions around their equilibrium positions are small compared to the optical wavelength, one can approximate the Hamiltonian (\ref{nHJC}) by \begin{equation} \hat{H}_{\rm JC}(t)=\hbar \Delta(t)\hat{S}_{z}+\hbar \lambda(t)(\hat{S}^{+}\hat{a}+\hat{S}^{-}\hat{a}^{\dag}),\label{HJC} \end{equation} which describes the linear Jaynes-Cummings (JC) model. We note that the Lamb-Dicke approximation is justified for low temperatures and small $\eta\ll1$. However, with increasing temperature one needs to consider the nJC Hamiltonian (\ref{nHJC}), as the effect of the non-linear term (\ref{F}) becomes significant. \begin{figure} \includegraphics[width=0.45\textwidth]{fig3.eps} \caption{(Color online) a) Average $\langle \hat{S}_{z}\rangle$ at $t_{\rm max}$ as a function of the thermal phonon excitation. We compare the result derived from the Hamiltonian $\hat{H}_{\rm JC}$ with the analytical solution (\ref{Sz}) (solid line) for $S=6$ (blue circles), $S=13/2$ (purple triangles) and $S=7$ (red squares). The other parameters are set to $\lambda_{0}/2\pi=5$ kHz, $\Delta_{0}/2\pi=22$ kHz, and $\gamma/2\pi=5.5$ kHz. b) The variance $\Delta \hat{S}_{z}$ at $t_{\rm max}$ for $S=6$.
The blue circles are the exact solution and the solid line is the analytical expression (\ref{varS}).} \label{fig3} \end{figure} Since a collective spin excitation can be created (annihilated) by the absorption (emission) of a collective center-of-mass phonon, the linear as well as the non-linear Jaynes-Cummings Hamiltonian commutes with the operator of the total number of excitations, defined by $\hat{N}=\hat{S}_{z}+\hat{a}^{\dag}\hat{a}$. Consequently, the Hilbert space decomposes into subspaces with a well defined total number of excitations $N=n_{\rm s}+n$, with $n_{\rm s}=0,1,\ldots,2S$ being the number of spin excitations. \subsection{Temperature sensing protocol} The temperature estimation scheme begins by preparing the system in the product state $\hat{\rho}_{i}=\hat{\rho}_{\rm spin}\otimes\hat{\rho}_{\rm th}$, where $\hat{\rho}_{\rm th}=\sum_{n=0}^{\infty}p_{n}\left|n\right\rangle\left\langle n\right|$ is the thermal state density operator for the center-of-mass mode, with $p_{n}=\frac{\bar{n}^{n}}{(1+\bar{n})^{n+1}}$ and $\bar{n}=(e^{\beta\hbar\omega_{x}}-1)^{-1}$ being the average number of thermal excitations. We assume that the spins are initially polarized along the $z$-direction in a pure state with density matrix $\hat{\rho}_{\rm spin}=\left|D_{0}\right\rangle\left\langle D_{0}\right|$. Therefore, the initial total number of excitations is determined by the number of center-of-mass phonons $n$, namely $N=n$ ($n=0,1,2,\ldots$). Then the system evolves according to the time-dependent red-sideband interaction such that the relevant temperature information is distributed over, and stored in, the collective spin degrees of freedom. In Fig. \ref{fig1} the linkage pattern of the collective states of a linear crystal of two ions is shown, where for concreteness we assume the linear JC interaction described by Hamiltonian (\ref{HJC}). As can be seen, a collective spin excitation can only be created by the annihilation of a center-of-mass phonon and vice versa.
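The conservation of the total excitation number can be verified directly on small matrices; a minimal sketch building the linear JC Hamiltonian (\ref{HJC}) for three spins ($S=3/2$) and a truncated oscillator (the values of $\Delta$ and $\lambda$ are arbitrary):

```python
import numpy as np

# Spin-S operators in the |S, m> basis (m = S, S-1, ..., -S) and a
# truncated oscillator of dimension d; check that [H_JC, N] = 0 with
# N = S_z + a^dag a.
S, d = 1.5, 6            # three spins -> collective spin S = 3/2
m = np.arange(S, -S - 1, -1)
Sz = np.diag(m)
Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)  # S^+ ladder
a = np.diag(np.sqrt(np.arange(1, d)), 1)                     # annihilation
Is, Ib = np.eye(len(m)), np.eye(d)

Delta, lam = 0.3, 1.0    # arbitrary detuning and coupling (units of hbar)
H = Delta * np.kron(Sz, Ib) + lam * (np.kron(Sp, a) + np.kron(Sp.T, a.T))
Ntot = np.kron(Sz, Ib) + np.kron(Is, a.T @ a)
print(np.allclose(H @ Ntot - Ntot @ H, 0.0))   # True
```

The commutator vanishes exactly even in the truncated space, since $\hat{S}^{+}\hat{a}$ lowers the phonon number by exactly the amount by which it raises the spin excitation.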
Thus the motional ground state is not affected by the red-sideband interaction, while states with $n>0$ phonons are coupled to the manifolds with the same number of total excitations. Since we deal with thermal motional states, each of these three independent transitions is realized with probability $p_{n}$. \subsection{Adiabatic Transition} \begin{figure} \includegraphics[width=0.45\textwidth]{fig4.eps} \caption{(Color online) Classical Fisher information for the observables $P_{s_{1},\ldots,s_{N}}$ at $t_{\rm max}$ as a function of the temperature for an ion chain with four ions. The numerical result for different transverse trap frequencies $\omega_{x}$ is compared with the QFI (\ref{QFI}) (solid lines). The other parameters are set to $\lambda_{0}/2\pi=5$ kHz, $\Delta_{0}/2\pi=25$ kHz, and $\gamma/2\pi=5.5$ kHz. } \label{fig4} \end{figure} Our goal is to determine the probabilities $p_{n}$ to observe a Fock state $\left|n\right\rangle$ by executing projective spin-dependent measurements. First, we emphasize that, due to off-resonant transitions, the application of a $\pi$ laser pulse is not capable of distinguishing the probabilities $p_{n}$ by measuring the spin populations \cite{Wineland1998,Haffner2008,Schneider2012}. For this reason we adopt an adiabatic technique for detecting $p_{n}$, which is slower but more robust with respect to parameter fluctuations. In Fig. \ref{fig2} we show the lowest eigenfrequencies of Hamiltonian (\ref{HJC}) for three spins and different phonon numbers $(n=0,1,2,3)$. Assume that at the initial moment the magnitude of the laser detuning is much larger than the spin-phonon coupling, $|\Delta(t_{i})|\gg\lambda(t_{i})$, with $\Delta(t_{i})<0$. Then the state vectors $\left|\psi_{n}(t_{i})\right\rangle=\left|D_{0}\right\rangle\left|n\right\rangle$ are eigenstates of Hamiltonian (\ref{HJC}), such that $\hat{H}_{\rm JC}(t_{i})\left|\psi_{n}(t_{i})\right\rangle=-\hbar S\Delta(t_{i})\left|\psi_{n}(t_{i})\right\rangle$.
We adiabatically vary the detuning $\Delta(t)$ such that we end up with $\Delta(t_{f})\gg\lambda(t_{f})$ and $\Delta(t_{f})>0$. In the adiabatic limit, the system remains in the same eigenstate of the Hamiltonian (\ref{HJC}) at all times. Since the total number of excitations is preserved, the initial state $|\psi_{n}(t_{i})\rangle$ is adiabatically transformed into the final state $|\psi_{n}(t_{f})\rangle=|D_{n}\rangle|0\rangle$, where we assume $n\le 2S$. Since the maximal number of spin excitations is $n_{\rm s}=2S$, in which case all spins are in the excited level, an initial state $|\psi_{n}(t_{i})\rangle$ with $n>2S$ adiabatically evolves into $|\psi_{n}(t_{f})\rangle=|D_{2S}\rangle|n-2S\rangle$. Therefore, for a state with $N$ spins and a thermal motional state this implies the transition \begin{equation} \hat{\rho}_{i}\rightarrow\hat{\rho}_{f}=\sum_{l=0}^{2S}p_{l}|D_{l}\rangle\langle D_{l}|\otimes|0\rangle\langle0|+\hat{\rho}_{\rm res}.\label{rho} \end{equation} Hence, the mixed thermal motional state is adiabatically mapped onto a mixed spin state, in which one observes the state $|D_{l}\rangle$ with probability $p_{l}$. Finally, the residual density matrix in (\ref{rho}) is given by \begin{equation} \hat{\rho}_{\rm res}=|D_{2S}\rangle\langle D_{2S}|\otimes\sum_{n=2S+1}^{\infty}p_{n}|n-2S\rangle\langle n-2S|.\label{res} \end{equation} A convenient choice of the time-dependent detuning and spin-phonon coupling which can be used to drive the adiabatic transition is \begin{equation} \Delta(t)=\Delta_{0}\sin\left(\frac{\gamma t}{2}\right),\quad \lambda(t)=\lambda_{0}\cos^{2}\left(\frac{\gamma t}{2}\right)\label{couplings}, \end{equation} where $\Delta_{0}>0$ and $\lambda_{0}>0$, and $\gamma$ is a characteristic parameter which controls the adiabaticity of the transition.
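The mapping (\ref{rho}) can be illustrated numerically: the thermal weights $p_n$ become Dicke-state populations, with all weight from $n\ge 2S$ accumulating in $|D_{2S}\rangle$; a small sketch (parameter values illustrative):

```python
import numpy as np

# Thermal phonon weights p_n = nbar^n / (1 + nbar)^(n+1), written via
# q = nbar / (1 + nbar) to avoid overflow at large n.
nbar, S = 2.0, 3.0        # mean phonon number; collective spin for N = 6 ions
nmax = 2000               # truncation of the thermal tail
n = np.arange(nmax)
q = nbar / (1 + nbar)
p = q**n / (1 + nbar)

twoS = int(2 * S)
p_spin = np.zeros(twoS + 1)
p_spin[:twoS] = p[:twoS]          # |D_l> observed with probability p_l
p_spin[twoS] = p[twoS:].sum()     # all n >= 2S end up in |D_{2S}> (residual)
Sz_final = np.sum(p_spin * (np.arange(twoS + 1) - S))
print(p_spin.sum())   # ~1: the spin populations exhaust the thermal weight
print(Sz_final)       # final spin magnetization after the adiabatic map
```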
The interaction time varies in $t\in[-t_{\rm max} ,t_{\rm max}]$ with $t_{\rm max}=\pi/\gamma$, which ensures that $|\Delta(-t_{\rm max})|\gg \lambda(-t_{\rm max})$ and, respectively, $\Delta(t_{\rm max})\gg \lambda(t_{\rm max})$. \begin{figure} \includegraphics[width=0.45\textwidth]{fig5.eps} \caption{(Color online) Classical Fisher information as a function of the thermal phonon excitation $\bar{n}$. The numerical result for $\omega_{x}/2\pi=6$ MHz and different numbers of ions is compared with the QFI (\ref{QFI}) (dashed lines).} \label{fig5} \end{figure} In Fig. \ref{fig3}(a) we show the exact result for the average spin magnetization $\langle\hat{S}_{z}(t_{f})\rangle={\rm Tr}(\hat{\rho}_{f}\hat{S}_{z})$ compared with the analytical result given by \begin{equation} \langle \hat{S}_{z}(t_{f})\rangle=\bar{n}-S-\left(\frac{\bar{n}}{1+\bar{n}}\right)^{2S+1}(\bar{n}+1)\label{Sz}, \end{equation} where very good agreement is observed. We see that the time-dependent red-sideband interaction rotates the initial spin magnetization, which varies with the thermal phonon excitation, and thus the observable $\langle \hat{S}_{z}(t_{f})\rangle$ can be used for detecting the temperature. Indeed, the shot-noise-limited sensitivity of the temperature estimation from the measured signal $\langle \hat{S}_{z}(t_{f})\rangle$ is $\delta T^{2}=(\nu F_{S_{z}})^{-1}$, where $F_{S_{z}}=\frac{1}{\langle\Delta \hat{S}_{z}\rangle^{2}}\left(\frac{\partial\langle \hat{S}_{z}\rangle}{\partial T}\right)^{2}$ is the moment-based (error-propagation) sensitivity \cite{Pezze2018} and $\langle\Delta \hat{S}_{z}\rangle^{2}=\langle \hat{S}^{2}_{z}\rangle-\langle \hat{S}_{z}\rangle^{2}$ is the variance of $\hat{S}_{z}$. Using (\ref{rho}) it is straightforward to show that (see Fig.
\ref{fig3}(b)) \begin{eqnarray} \langle\Delta \hat{S}_{z}(t_{f})\rangle^{2}&=&\frac{\bar{n}}{(1+\bar{n})^{4S+2}}\{(1+\bar{n})^{4S+3}-\bar{n}^{4S+1}(1+S+\bar{n})^{2}\notag\\ &&-\bar{n}^{2S}(1+\bar{n})^{2S+1}[1+\bar{n}+S(4+3S+2\bar{n})]\}.\label{varS} \end{eqnarray} However, a more convenient approach for temperature estimation is to detect the spin populations $P_{s_{1},\ldots,s_{N}}={\rm Tr}(\hat{\rho}_{f}\hat{\Pi}_{s_{1},\ldots,s_{N}})$, where $\hat{\Pi}_{s_{1},\ldots,s_{N}}=|s_{1},\ldots,s_{N}\rangle\langle s_{1},\ldots,s_{N}|$ is the projection operator with $s_{l}=\uparrow_{l},\downarrow_{l}$. Indeed, the magnetization of each spin after the adiabatic transition can be measured by illuminating the ions with global laser radiation and collecting the state-dependent fluorescence on a camera. \begin{figure} \includegraphics[width=0.45\textwidth]{fig6.eps} \caption{(Color online) (a) Fidelity (\ref{fidelity}) at $t_{\rm max}$ for different characteristic rates $\gamma$. We integrate numerically the Liouville equation with Hamiltonian (\ref{nHJC}). The other parameters are set to $\lambda_{0}/2\pi=5$ kHz, $\Delta_{0}/2\pi=22$ kHz, $\eta=0.2$ and $N=6$. (b) The same, but varying the detuning $\Delta_{0}$ for $\gamma/2\pi=2.5$ kHz. } \label{fig6} \end{figure} In Fig. \ref{fig4} we show the exact result for the classical Fisher information (\ref{CFI}) of the spin probabilities $P_{s_{1},\ldots,s_{N}}$ compared with the QFI (\ref{QFI}). We see that the classical Fisher information associated with the observables $P_{s_{1},\ldots,s_{N}}$ is equal to the quantum Fisher information (\ref{QFI}) of the quantum harmonic oscillator at thermal equilibrium. Therefore, the detection of the orientation of each spin is optimal for the temperature estimation, in the sense that the temperature uncertainty saturates the quantum Cram\'er-Rao bound (\ref{QRB}). In Fig.
\ref{fig5} we show the numerical result for the classical Fisher information for different numbers of ions at high temperature. As the mean thermal phonon excitation increases, the residual density matrix term $\hat{\rho}_{\rm res}$ (\ref{res}) limits the temperature sensitivity. Indeed, the probability to observe a collective state with all spins in the excited level is not equal to $p_{2S}$, but rather also contains contributions from the more highly excited thermal phonon states with probabilities $p_{n}$ ($n>2S$), which spoils the optimal temperature estimation. However, as we can see from Fig. \ref{fig5}, the effect of the residual term can be suppressed by increasing the number of ions. Indeed, for a larger number of ions the probability to observe all spins in the excited state after the adiabatic transition becomes negligibly small, so that the effect of the residual term $\hat{\rho}_{\rm res}$ is suppressed, which ultimately improves the temperature sensitivity. In the following we examine the effect of the non-adiabatic transitions which limit the efficiency of the temperature determination. We discuss the red-sideband interaction beyond the Lamb-Dicke approximation by including the non-linear terms (\ref{F}), which become significant in the high temperature limit. Since the nJC Hamiltonian (\ref{nHJC}) preserves the total number of excitations, the adiabatic transition (\ref{rho}) still holds. We show that the effect of the non-linear terms is merely to modify the adiabaticity of the transition. \section{Physical Imperfections}\label{imperfections} \begin{figure} \includegraphics[width=0.45\textwidth]{fig7.eps} \caption{(Color online) a) Collective spin populations as a function of time. We assume that the system is prepared in a motional coherent state with density matrix $\hat{\rho}_{\alpha}=|\alpha\rangle\langle\alpha|$ with $\alpha=1.2$.
The other parameters are set to $\lambda_{0}/2\pi=5$ kHz, $\Delta_{0}/2\pi=20$ kHz and $\gamma/2\pi=4.5$ kHz. b) Classical Fisher information for the estimation of a very weak force $\epsilon$ for an initial coherent cat state as a function of the displacement amplitude $\alpha$. The spin observables are measured at $t_{\rm max}$. We numerically integrate the Liouville equation with Hamiltonian (\ref{HJC}) for different numbers of ions. The other parameters are set to $\Delta_{0}/2\pi=22$ kHz and $\gamma/2\pi=2.2$ kHz. } \label{fig7} \end{figure} As a figure of merit for the efficiency of the adiabatic transition we use the fidelity between two density matrices, defined by \cite{Gu2010} \begin{equation} F(\hat{\rho}_{f},\hat{\rho}(t))=\frac{{\rm Tr}(\hat{\rho}_{f}\hat{\rho}(t))}{\sqrt{{\rm Tr}(\hat{\rho}_{f}^{2}){\rm Tr}(\hat{\rho}(t)^{2})}}.\label{fidelity} \end{equation} Here $\hat{\rho}_{f}$ is the desired density matrix (\ref{rho}) and $\hat{\rho}(t)$ is the actual one. In Fig. \ref{fig6}(a) we show the numerical result for the fidelity (\ref{fidelity}) as a function of the control parameter $\gamma$, using the nJC Hamiltonian (\ref{nHJC}). As the temperature increases the Lamb-Dicke approximation is no longer fulfilled, and thus one needs to include the higher-order terms in the Lamb-Dicke expansion given by Eq. (\ref{F}). We observe that, on the one hand, the non-adiabatic transitions become stronger for higher values of $\bar{n}$, and the fidelity decreases slightly as $\bar{n}$ increases toward the high-temperature limit. On the other hand, the adiabaticity is improved for lower values of $\gamma$ and thus longer interaction times. For example, assuming a mean thermal phonon excitation $\bar{n}=15$ and $\gamma/2\pi=2.4$ kHz, such that the total interaction time is $\tau=2 t_{\rm max}\approx 417$ $\mu$s, we estimate a fidelity $F(\hat{\rho}_{f},\hat{\rho}(t_{\rm max}))>0.99$. Increasing the interaction time improves the fidelity until random noise compromises the signal.
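The overlap fidelity (\ref{fidelity}) is straightforward to implement; a minimal sketch (note this is the normalised Hilbert-Schmidt overlap of \cite{Gu2010}, not the Uhlmann fidelity):

```python
import numpy as np

def overlap_fidelity(rho_f, rho_t):
    """F = Tr(rho_f rho_t) / sqrt(Tr(rho_f^2) Tr(rho_t^2))."""
    num = np.trace(rho_f @ rho_t).real
    den = np.sqrt(np.trace(rho_f @ rho_f).real * np.trace(rho_t @ rho_t).real)
    return num / den

rho_pure = np.diag([1.0, 0.0])    # |0><0|
rho_mix = np.diag([0.5, 0.5])     # maximally mixed qubit
print(overlap_fidelity(rho_pure, rho_pure))  # 1.0 for identical states
print(overlap_fidelity(rho_pure, rho_mix))   # reduced overlap with the mixture
```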
For example, electric-field fluctuations of the trap electrodes affect the motional phonon population during the adiabatic transition. Consider a heating rate $\langle \dot{n}\rangle=1/t_{\rm dec}$, where $t_{\rm dec}$ is the characteristic decoherence time; we require $t_{\rm dec}\gg \tau$. For a heating rate of $0.1$ ${\rm ms}^{-1}$ \cite{Chiaverini2014}, which is typical for linear Paul traps, and an interaction time of order $\tau\approx 0.4$ ms, this condition is satisfied. Other possible sources of error are spontaneous spin flips from the excited state during the adiabatic transition and magnetic field fluctuations, which cause spin dephasing. Usually the spontaneous decay of the upper level occurs on a long timescale, of order 1 s, and thus it can be neglected. The coherence time is often limited by ambient magnetic field fluctuations, which can be suppressed by using magnetic-field-insensitive transitions \cite{Lee2005}. In Fig. \ref{fig6}(b) we show the fidelity as a function of the detuning $\Delta_{0}$ for fixed $\gamma$. On the one hand, as can be seen, increasing $\Delta_{0}$ improves the adiabaticity of the transition, which leads to higher fidelity. On the other hand, in order to resolve the vibrational center-of-mass mode, the energy splitting to the energetically nearest rocking mode with frequency $\omega_{\rm roc}=\sqrt{\omega_{x}^{2}-\omega_{z}^{2}}$ has to be sufficiently large compared to the spin-phonon coupling $\lambda_{0}$ and the laser detuning $\Delta_{0}$, namely $\Delta_{\rm gap}\gg \lambda_{0},\Delta_{0}$, where $\Delta_{\rm gap}=\omega_{x}-\omega_{\rm roc}$. Increasing the number of ions, however, brings the vibrational modes closer together, such that laser addressability of the center-of-mass mode imposes a restriction on $N$. Moreover, for a given aspect ratio $\omega_{z}/\omega_{x}$ there is a maximal number of ions beyond which the system undergoes a structural phase transition to a zigzag configuration.
Thus the energy gap scales with the number of ions as $\Delta_{\rm gap}/\omega_{x}\approx 0.6228 \ln(6N)/N^{2}$; see \cite{Ivanov2013} for more details. Considering $N=12$ and $\omega_{x}/2\pi=8$ MHz, we find $\Delta_{\rm gap}/2\pi\approx148$ kHz. For $\gamma/2\pi=2.3$ kHz, $\bar{n}=6$ and $\Delta_{0}/2\pi=15$ kHz we estimate a fidelity $F(\hat{\rho}_{f},\hat{\rho}(t_{\rm max}))>0.99$. \section{Detection of the relative phase of the coherent cat state}\label{cat} Let us extend the discussion by considering various initial motional states. In Fig. \ref{fig7}(a) we show the time evolution of the collective-spin states for an initial coherent state. The adiabatic evolution drives the system into the final density matrix given by Eq. (\ref{rho}), where now the Fock state distribution is given by $p_{n}=e^{-|\alpha|^{2}}|\alpha|^{2n}/n!$. Thus, the relevant information about the magnitude of the displacement amplitude is mapped onto the collective spin excitations and can thereby be measured by detecting the spin populations at the end of the adiabatic transition. Furthermore, our adiabatic technique can also be applied to detecting the relative phase of a coherent cat state. Consider a motional density matrix $\hat{\rho}_{\rm cat}=|\psi_{\rm cat}\rangle\langle\psi_{\rm cat}|$, where $|\psi_{\rm cat}\rangle=(|\alpha\rangle+\left|-\alpha\right\rangle)/\sqrt{2}$ is a coherent cat state ($\alpha\gg 1$), such that $\hat{\rho}_{i}=|D_{0}\rangle\langle D_{0}|\otimes\hat{\rho}_{\rm cat}$. We assume that a time-varying force is applied which is resonant with the frequency of the center-of-mass mode. The effect of the force is to displace the motional state by a small amplitude via $\hat{D}(\epsilon)=e^{i\epsilon(\hat{a}^{\dag}-\hat{a})}$, where $\epsilon$ is the parameter we wish to estimate.
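For the initial coherent state the mapped Dicke populations follow the Poisson law quoted above; a short numerical sketch of $p_n$ (the value $\alpha=1.2$ matches Fig. \ref{fig7}(a)):

```python
import numpy as np
from math import factorial

# Poisson phonon distribution of a coherent state |alpha>:
# p_n = exp(-|alpha|^2) |alpha|^(2n) / n!
alpha = 1.2
n = np.arange(20)
fact = np.array([float(factorial(k)) for k in n])
p = np.exp(-abs(alpha)**2) * abs(alpha)**(2 * n) / fact
print(p.sum())            # ~1: the first 20 Fock states carry nearly all weight
print(int(np.argmax(p)))  # most probable phonon number, near |alpha|^2
```

This makes explicit why larger $\alpha$ populates more Fock states and therefore requires more ions (larger $2S$) to avoid piling weight into the residual term.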
The information about $\epsilon$ ($\epsilon\ll1$) is imprinted in the relative phase of the coherent cat state, namely $|\psi_{\rm cat}\rangle\approx(e^{i\theta}|\alpha\rangle+e^{-i\theta}\left|-\alpha\right\rangle)/\sqrt{2}$, where $\theta=\alpha\epsilon$ \cite{Munro2002}. Then the system evolves according to the time-dependent detuning $\Delta(t)$ and spin-phonon coupling $\lambda(t)$ (\ref{couplings}), such that at $t_{\rm max}$ the spin populations are measured. In Fig. \ref{fig7}(b) we show the exact result for the classical Fisher information for estimating $\epsilon$ as a function of the initial displacement amplitude $\alpha$ and for different numbers of ions. Crucially, using a coherent cat state, the precision in estimating $\epsilon$ grows quadratically with $\alpha$, which corresponds to the Heisenberg limit \cite{Munro2002}. In order to achieve such precision, $\delta\epsilon^{2}\ge1/\alpha^{2}$, one needs to perform a state-dependent measurement on the spin states at the end of the adiabatic transition. As shown in Fig. \ref{fig7}(b), increasing $\alpha$ results in more phonon states being populated, which in turn requires an increase of the number of ions. \section{Conclusions}\label{conclusions} We have proposed an efficient adiabatic method for temperature measurement with trapped ions which can be operated beyond the Lamb-Dicke limit. The technique is based on an adiabatic evolution which transfers the relevant phonon temperature information onto the spin populations, which can be measured by state-dependent fluorescence at the end of the adiabatic transition with high efficiency. We have characterized the amount of temperature information which can be extracted from such a spin detection in terms of the classical Fisher information. We have shown that the state-projection measurements lead to equality between the classical and quantum Fisher information for harmonic oscillators at thermal equilibrium.
Thus the temperature is determined with the ultimate precision given by the quantum Cram\'er-Rao bound. Furthermore, we have discussed the application of our method to the detection of the relative phase of a coherent cat state. Such a phase can be generated by the application of a very weak time-varying force which displaces the initial motional coherent cat state. We have shown that by executing a state-projective measurement one can determine the unknown displacement with Heisenberg-limited precision. \section*{Acknowledgments} A. V. K. and P. A. I. acknowledge support by the ERyQSenS project, Bulgarian Science Fund Grant No. DO02/3. W. L. acknowledges support from the EPSRC through grant No. EP/R04340X/1 via the QuantERA project “ERyQSenS”.
\section{} NASA's TESS mission \citep{tess} has delivered high-precision photometry of millions of stars to the community. The majority of TESS observations have a duration of $\approx$27 days, corresponding to a single observation during a TESS sector. A small subset of TESS targets are observed for multiple sectors, with approximately 1-2\% of targets falling in the Continuous Viewing Zone (CVZ) during the prime mission \citep{yield}, where targets are observed continuously for a year. These targets are highly valuable for extracting long rotation periods, which can be linked to stellar ages. The TESS spacecraft orbits the Earth every $\approx$14 days, after which there is a short data downlink. Typical observations last 11-13 days between downlinks/pointing changes. TESS experiences significant scattered light during each orbit. These factors cause a systematic signal in the TESS data, which can make it difficult to 1) ``stitch'' time-series data together across data gaps between sectors and after downlinks, and 2) identify long-period trends in the data, particularly at periods of $\gtrsim$ 28 days. We present a method to create a Lomb-Scargle periodogram while simultaneously detrending these TESS systematics. This method is similar to the work presented in \cite{sip} for detrending systematics in the NASA Kepler/K2 dataset, and we direct readers to that work for a full discussion. Similarly to \cite{sip}, we refer to this periodogram as a Systematics-insensitive Periodogram (SIP). Our implementation of SIP is a simple linear model, consisting of regressors to remove instrument systematics, and a sinusoid component to fit a power spectrum as a function of period (i.e. a periodogram). The principles behind our linear model are presented simply and efficiently in \cite{themagic}. We build our systematic regressors using the same principles as the K2 \texttt{EVEREST} pipeline \citep{everest}.
Our method uses the following procedure: 1) using the TESS Target Pixel File data, and apertures assigned by the TESS Pipeline \citep{spoc, spoc2}, we build Simple Aperture Photometry (SAP) light curves of the target, including all scattered light contributions (i.e. not using the pipeline-provided background correction). 2) we build an estimate of the background using the first 3 principal components of the pixel time series outside of the aperture. This creates a 3 by $t$ matrix, where $t$ is the number of time points in the dataset. We then include a column of ones to account for mean offsets, making a 4 by $t$ matrix. These are the systematics regressors. 3) We duplicate the regressors for every 14-day time-series segment between data downlinks. For each segment, we then set the values in the regressors to zero in all \textbf{other} segments. This creates a sparse matrix of size $4s$ by $t$, where $s$ is the number of segments. This matrix has block-diagonal structure, and has values only during each 14-day segment. 4) We create a simple sine and cosine curve at a given period, evaluated at all time points, using \texttt{astropy}'s \texttt{LombScargle} module. 5) Using \texttt{lightkurve}'s \texttt{RegressionCorrector} framework, we fit the systematics regressors and sinusoid components simultaneously to the SAP time-series flux data, including regularization terms to avoid overfitting. This process is run for every period of interest. The ``power'' in the periodogram is defined as the amplitude of the sinusoid at each period. TESS data from the CVZ can contain 100,000+ data points, and calculating the SIP can become expensive in memory. To avoid this, we rely on \texttt{lightkurve}'s \texttt{SparseDesignMatrix} class to keep memory usage low \citep{lk}. An example of the results of our SIP method is shown in Figure 1.
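The linear model in steps 1)-5) can be condensed into a short numpy sketch. This is an illustration of the block-diagonal design and the amplitude-as-power definition, not the released \texttt{TESS-SIP} code; it uses a plain unregularized least-squares solve and made-up variable names:

```python
import numpy as np

def sip_power(time, flux, regressors, breaks, periods):
    """Systematics-insensitive periodogram power at each trial period.

    regressors: (t, m) matrix of systematics columns (e.g. background
    principal components plus a column of ones); breaks: segment edges.
    """
    seg = np.searchsorted(breaks, time)          # segment index per cadence
    # Duplicate the regressors per segment -> block-diagonal design matrix
    X_sys = np.hstack([np.where((seg == s)[:, None], regressors, 0.0)
                       for s in np.unique(seg)])
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        w = 2.0 * np.pi / p
        X = np.column_stack([X_sys, np.sin(w * time), np.cos(w * time)])
        coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
        power[i] = np.hypot(coef[-2], coef[-1])  # sinusoid amplitude
    return power
```

The released tool additionally applies regularization and \texttt{lightkurve}'s sparse design matrices to keep memory usage manageable.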
The tool used to generate this SIP is available online\footnote{\href{https://doi.org/10.5281/zenodo.4300754}{doi.org/10.5281/zenodo.4300754}}\footnote{\href{https://github.com/christinahedges/TESS-SIP}{github.com/christinahedges/TESS-SIP}} as a pip-installable Python tool. The method presented in this note can be easily modified to stitch together TESS CVZ observations using 1) different systematics models (e.g. models for the telescope jitter), or 2) different models for stellar variability (e.g. a simple basis-spline model for non-periodic variability). \begin{figure}[h!] \begin{center} \includegraphics[scale=0.85,angle=0]{demo.png} \caption{Example of the Systematics-insensitive Periodogram generated for target TIC 150428135 (TOI-700). Black shows the detrended light curve, with the best-fit systematics at the maximum-power period removed. This target has a significant long-period rotation signal at 52 days, longer than the duration of a TESS sector ($\approx$27 days). Without using these strategies to mitigate instrument systematics, it is difficult to recover rotation rates at periods close to or greater than the TESS orbital period.} \end{center} \end{figure} \acknowledgments The SIP project was developed in part at the ``online.tess.science'' meeting, which took place globally in 2020 September. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration, 2018). This research made use of Astropy,\footnote{\href{http://www.astropy.org}{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{astropy:2013, astropy:2018}.
\section{Introduction} In \cite{FeffermanPhongConf82, Fefferman83} Fefferman and Phong established the following inequality for $p > 1$: \begin{equation}\label{eq:feff_phong_ineq} \int_{\mathbf{R}^n} V(x) \psi(x)^2 \mathop{}\mathrm{d} x \leq C_{n,p} N_p(V) \int_{\mathbf{R}^n} |\nabla \psi(x)|^2 \mathop{}\mathrm{d} x, \end{equation} where $\psi$ is a compactly supported smooth function, $V$ is non-negative and locally integrable, $C_{n,p}$ is a constant depending only on the dimension and $p$, and $N_p$ is the Morrey norm: \begin{equation} N_p(V) =\sup_{\substack{x\in \mathbf{R}^n\\ r > 0}} \left( r^{2p-n} \int_{B(x,r)} |V(y)|^p \mathop{}\mathrm{d} y\right)^{1/p}. \end{equation} Such an inequality yields a positivity condition for the Schrödinger operator $H = \Delta - V$ (with $\Delta = -\sum_{i = 1}^n \partial_i^2$), namely that if $N_p(V) \leq 1/C_{n,p}$, then $H$ is a positive operator. In fact the following estimates on the lower bound of the spectrum of $H$, $\lambda_1(H)$, were also given: \begin{equation}\label{eq:lower bound estimate R^n} \sup_{\substack{x \in \mathbf{R}^n\\ r > 0}} \left( C_1 r^{-n}\int_{B(x,r)} V \mathop{}\mathrm{d} y - r^{-2} \right) \leq -\lambda_1(H) \leq \sup_{\substack{x \in \mathbf{R}^n\\ r > 0}} \left( C_p \left(r^{-n} \int_{B(x,r)} V^p \mathop{}\mathrm{d} y\right)^{1/p} - r^{-2} \right). \end{equation} The conditions for such inequalities (though with a constant that does not necessarily depend on the Morrey norm) to hold in $\mathbf{R}^n$ have been studied extensively; see for example \cite{ChangWilsonWolff85, KermanSawyer86,Maz'yaVerbitsky95}. In \cite{Maz'yaVerbitsky02}, Maz'ya and Verbitsky establish necessary and sufficient conditions for \eqref{eq:feff_phong_ineq} to hold with complex-valued $V$. That being the case, it seems interesting to study to what extent, and under which geometric hypotheses, those results extend to other spaces, such as Riemannian manifolds.
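As a concrete instance of \eqref{eq:feff_phong_ineq} (a standard example recalled here for orientation, not taken from the works cited above): in $\mathbf{R}^n$ with $n \geq 3$, the potential $V(x) = |x|^{-2}$ has finite Morrey norm for $1 < p < n/2$. Indeed, writing $c_n$ and $\omega_n$ for dimensional constants,

```latex
\[
  r^{2p-n}\int_{B(x,r)} |y|^{-2p}\mathop{}\mathrm{d} y \leq
  \begin{cases}
    \dfrac{c_n\, 3^{n-2p}}{n-2p}, & |x| \leq 2r \quad (\text{since } B(x,r)\subset B(0,3r)),\\[1ex]
    \omega_n\, 2^{2p}, & |x| > 2r \quad (\text{since } |y| \geq |x|/2 \text{ on } B(x,r)),
  \end{cases}
\]
```

so $N_p(|x|^{-2}) < \infty$, and \eqref{eq:feff_phong_ineq} recovers the classical Hardy inequality $\int_{\mathbf{R}^n} \psi^2 |x|^{-2}\mathop{}\mathrm{d} x \leq C \int_{\mathbf{R}^n} |\nabla\psi|^2 \mathop{}\mathrm{d} x$.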
The first aim of this article is to generalize the initial result of Fefferman and Phong to a weighted Riemannian manifold $M$. A natural way to do that would be to use the Poincaré inequality: $\int_{B(x,r)} | f - f_{B(x,r)}|\mathop{}\mathrm{d}\mu \leq C r \int_{B(x,\kappa r)} |\nabla f|\mathop{}\mathrm{d}\mu$, $f \in \Smooth{\kappa B}$, for any $x\in M$, $r > 0$, with $\kappa > 1$ and $f_B = \frac{1}{\mu(B)}\int_B f \mathop{}\mathrm{d}\mu$. It turns out that the result still holds under a weaker hypothesis. Our proof will follow the general idea used by Schechter in \cite{Schechter89}, namely that \eqref{eq:feff_phong_ineq} follows from the inequality (which holds in $\mathbf{R}^n$ by a result of Muckenhoupt and Wheeden \cite{MuckenhouptWheeden74}): \begin{equation}\label{eq:Riesz_Max_L2} \left\|I_1 f\right\|_{L^2} \leq C \|M_1 f\|_{L^2}, \end{equation} \noindent with $I_1 f(x) = c_n \int_{\mathbf{R}^n} \frac{f(y)}{|x-y|^{n-1}} \mathop{}\mathrm{d}\mu(y)$ and $M_1 f(x) = \sup_{r> 0} r^{1-n} \int_{B(x,r)}|f(y)|\mathop{}\mathrm{d} y$, and that \eqref{eq:lower bound estimate R^n} is proved using similar estimates, this time on $(\Delta + \lambda^2)^{-1/2}$. The proof of the generalisation of \eqref{eq:lower bound estimate R^n} will naturally yield weak versions of \eqref{eq:feff_phong_ineq}, which hold under weaker hypotheses. \subsection{Definitions and Notations} A weighted Riemannian manifold $(M,g,\mu)$, or simply a weighted manifold, is the data of a smooth manifold $M$, a smooth Riemannian metric $g$ on $M$, and a Borel measure $\mathop{}\mathrm{d}\mu = \sigma^2 \mathop{}\mathrm{d} v_g$ on $M$, with $\sigma$ a smooth positive function on $M$ and $v_g$ the Riemannian volume measure associated with the metric $g$.
We define the (weighted) Dirichlet Laplace operator as the Friedrichs extension of the operator on $\SmoothComp{M}$ defined by $\Delta_\mu f = -\sigma^{-2} \div(\sigma^2 \nabla f)$, with associated quadratic form $Q(\psi) = \int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu$. We will usually write the Dirichlet Laplace operator simply as $\Delta$. On a metric space $(X,d)$, for $x\in X$, $r > 0$, the ball of center $x$ and radius $r$ is the set $B(x,r) = \set*{y:\; d(x,y) < r}$. If $B = B(x,r)$ is a ball and $\theta \in \mathbf{R}$, then $\theta B$ refers to the set $B(x, \theta r)$. For $p \geq 1$, we let $\|\cdot\|_p$ be the $L^p$ norm on $(M, \mu)$. We recall $\|f\|_p = \left( \int_M |f|^p \mathop{}\mathrm{d}\mu \right)^{1/p}$. For $T$ a bounded operator on $L^p$, we use $\|T\|_{L^p \rightarrow L^p}$ or $\|T\|_p$ to refer to its operator norm, i.e.\ $\|T\|_p = \sup_{\substack{\psi \in L^p\\ \psi \neq 0}} \frac{\|T\psi\|_p}{\|\psi\|_p}$. For an open $U \subset M$, $\lambda_1(U)$ refers to the first Dirichlet eigenvalue of $\Delta_\mu$ on $U$: \begin{equation} \lambda_1(U) = \inf_{\substack{\psi\in\SmoothComp{U}\\ \psi \neq 0}} \frac{\|\nabla \psi\|^2_2}{\|\psi\|_2^2}. \end{equation} When $H$ is an operator defined on smooth functions with compact support, $\lambda_1(H)$ is similarly defined to be \begin{equation} \lambda_1(H) = \inf_{\substack{\psi\in\SmoothComp{M}\\ \psi \neq 0}} \frac{\left\langle H\psi, \psi \right\rangle}{\|\psi\|_2^2}. \end{equation} On a weighted manifold $(M,g,\mu)$, for $p > 0$ we define the Morrey norms $N_p$ as follows: if $V$ is a non-negative, locally integrable function, we let \begin{equation}\label{eq:Morrey} N_p(V) = \sup_{\substack{x \in M \\ r > 0}} \left( r^{2p} \fint_{B(x,r)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p}, \end{equation} \noindent where $\fint_B f \mathop{}\mathrm{d}\mu = \frac{1}{\mu(B)} \int_B f \mathop{}\mathrm{d}\mu$ is the mean of $f$ over $B$.
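The variational definition of $\lambda_1(U)$ above can be illustrated numerically in the simplest Euclidean case; the following finite-difference sketch is an illustration added here (not part of the paper's argument), for $U = (0,1) \subset \mathbf{R}$ with Lebesgue measure, where $\lambda_1(U) = \pi^2$:

```python
import numpy as np

# Finite-difference approximation of the first Dirichlet eigenvalue of
# -d^2/dx^2 on U = (0, 1), whose exact value is pi^2.
n = 200                       # number of interior grid points
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
lam1 = np.linalg.eigvalsh(A)[0]   # smallest eigenvalue
print(lam1)                       # close to pi^2 ~ 9.8696
```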
We also define the Morrey norm taken on balls of radius less than $R > 0$: \begin{equation}\label{eq:def morrey norm} N_{p,R}(V) = \sup_{\substack{x \in M \\ 0 < r < R}} \left( r^{2p} \fint_{B(x,r)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p}. \end{equation} For our generalization to hold, it is important that $(M,g,\mu)$ admit a \emph{relative Faber-Krahn inequality}, $\FK{}$, defined as follows: \begin{definition} A weighted Riemannian manifold $(M, g, \mu)$ admits a relative Faber-Krahn inequality if there are constants $b, \eta > 0$ such that for all $x \in M$, $r > 0$, and any open subset $U$ of the ball $B(x,r)$ which is relatively compact in $B(x,r)$, we have \begin{equation}\label{eq:FK} \lambda_1(U) \geq \frac{b}{r^2} \left( \frac{\Vol{x}{r}}{\mu(U)}\right)^\frac{2}{\eta}. \end{equation} It admits instead a relative Faber-Krahn inequality at scale $R$, $\FK{R}$, if \eqref{eq:FK} holds only for $0 \leq r \leq R$. \end{definition} In what follows, we will call $b$, $\eta$ in either $\FK{}$ or $\FK{R}$ the \emph{Faber-Krahn constants} of the manifold. \subsection{Statements of the results} \begin{theorem}\label{th:Fefferman-Phong generalized} Let $(M, g, \mu)$ be a weighted Riemannian manifold satisfying $\FK{}$. Then for any $p > 1$ there is a constant $C_p$, which depends only on the Faber-Krahn constants and on $p$, such that for any $V\in L^1_{loc}(M)$, $V \geq 0$, and any $\psi \in \SmoothComp{M}$, we have \begin{equation}\label{eq:Fefferman_Phong generalized} \int_M V\psi^2 \mathop{}\mathrm{d}\mu \leq C_p N_{p}(V) \int_M |\nabla\psi|^2 \mathop{}\mathrm{d} \mu. \end{equation} \end{theorem} If only $\FK{R}$ holds, then we still have the following weaker result: \begin{theorem}\label{th:Weak_positivity} Let $(M,g,\mu)$ be a weighted Riemannian manifold such that, for some $R > 0$, $\FK{R}$ holds.
Then for any $p > 1$ there is a constant $C_p > 0$, which depends only on the Faber-Krahn constants and on $p$, such that for any $V\in L^1_{loc}(M)$, $V \geq 0$, and any $\psi \in \SmoothComp{M}$, \begin{equation}\label{eq:Weak_majoration} \int_M V\psi^2 \mathop{}\mathrm{d}\mu \leq C_p N_{p, R}(V) \left(\int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu + \frac{1}{R^2} \int_M \psi^2 \mathop{}\mathrm{d}\mu \right). \end{equation} \end{theorem} From this inequality we can generalize the Fefferman-Phong estimate on the lower bound of the spectrum of the operator $H = \Delta - V$. Indeed, if $\FK{}$ holds, then for any $R > 0$, $\FK{R}$ is satisfied. Thus \eqref{eq:Weak_majoration} is true for any $R$, and the following theorem follows easily: \begin{theorem}\label{th:lower_bound estimates} Let $(M,g,\mu)$ be a complete non-compact weighted Riemannian manifold satisfying $\FK{}$. Then for any $p > 1$ there are two constants $C_1, C_p > 0$, which depend only on the Faber-Krahn constants (and, for $C_p$, on $p$), such that, for any $V \in L^1_{loc}(M)$, $V \geq 0$: \begin{equation} \sup_{\substack{x\in M\\ \delta > 0}} \left( C_1 \fint_{B(x,\delta)} V \mathop{}\mathrm{d} \mu - \delta^{-2} \right) \leq -\lambda_1(\Delta_\mu - V) \leq \sup_{\substack{x\in M\\\delta > 0}} \left(C_p \left( \fint_{B(x,\delta)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p} - \delta^{-2} \right). \end{equation} \end{theorem} In addition, if $\lambda_1(M) > 0$, then we can strengthen \eqref{eq:Weak_majoration} and obtain the following result, giving a condition for $\Delta - V$ to be positive: \begin{theorem}\label{th:Positive lambda_1} Let $(M,g,\mu)$ be a complete non-compact weighted Riemannian manifold such that $\FK{R}$ holds for some $R > 0$.
If, in addition, $\lambda_1(M) > 0$, then for any $p > 1$ there is a constant $C_p > 0$, depending only on the Faber-Krahn constants, such that for any $V\in L^1_{loc}(M)$, $V \geq 0$, and any $\psi \in \SmoothComp{M}$, \begin{equation}\label{eq:Positive lambda_1} \int_M V\psi^2 \mathop{}\mathrm{d}\mu \leq C_p N_{p, R}(V)\frac{1 + \lambda_1(M)R^2}{\lambda_1(M)R^2} \left(\int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu + \frac{\lambda_1(M)}{2}\int_M \psi^2 \mathop{}\mathrm{d}\mu \right). \end{equation} \end{theorem} \subsection{$L^2$ Hardy inequality} Notice that the inequality \eqref{eq:Fefferman_Phong generalized} is, for potentials $V$ with $N_p(V) < +\infty$, nothing more than a generalized $L^2$ Hardy inequality. Thus, on manifolds for which theorem \ref{th:Fefferman-Phong generalized} holds, the classical Hardy inequality is true whenever $N_p(d(o,\cdot)^{-2})$ is finite for all points $o\in M$. For this, we must make an additional assumption on the measure $\mu$. \begin{definition} A measured metric space $(X,d,\mu)$ satisfies the reverse doubling property $\rd{}$ of order $\nu$ if there is some constant $a > 0$ such that for all $x\in X$, $0 < r \leq r'$, \begin{equation} a \parenfrac{r'}{r}^\nu \leq \frac{\Vol{x}{r'}}{\Vol{x}{r}}. \end{equation} \end{definition} \begin{theorem}\label{th:Hardy} Let $(M,g,\mu)$ be a weighted Riemannian manifold for which $\FK{}$ holds, and whose measure $\mu$ satisfies the reverse doubling property $\rd{}$ of order $\nu$, with $\nu > 2$. For an arbitrary $o \in M$, let $\rho(x) = d(o,x)$. Then there is some constant $C > 0$, which depends only on the Faber-Krahn and reverse doubling constants, such that for any $\psi \in \SmoothComp{M}$ we have \begin{equation}\label{eq:Hardy} \int_M \frac{\psi(x)^2}{\rho(x)^2} \mathop{}\mathrm{d}\mu(x) \leq C \int_M |\nabla \psi|^2 \mathop{}\mathrm{d}\mu. \end{equation} \end{theorem} We can compare this to the results of V. Minerbe \cite{Minerbe09} or G.
Grillo \cite{Grillo03}, who proved $L^p$ Hardy inequalities assuming a Poincaré inequality and a doubling measure. While we only get an $L^2$ inequality, it holds true under the weaker hypothesis of a relative Faber-Krahn inequality. Cao, Grigor'yan and Liu \cite{Cao2020HardysIA} proved Hardy inequalities as a consequence of volume doubling, reverse doubling, and certain estimates on either the Green function or the heat kernel. Their results are far more general than those of this article. \subsection{Examples of manifolds satisfying relative Faber-Krahn inequalities} We give various cases of manifolds which satisfy a relative Faber-Krahn inequality (at scale $R$), so that our results apply to them. \subsubsection{Complete manifolds with Ricci curvature bounded from below} By Li and Yau \cite{LiYau86}, the heat kernel of a complete manifold $(M,g,\mu)$ of dimension $n$, with $\mu$ here being the Riemannian volume measure, whose Ricci curvature is bounded from below by $-K$ for a constant $K \geq 0$, admits the following diagonal estimate: \begin{equation*} p_t(x,x) \leq \frac{C_0}{\Vol{x}{\sqrt{t}}} e^{C_1 K t}. \end{equation*} Also, as a consequence of the Bishop-Gromov volume comparison theorem (see \cite{CheegerGromovTaylor82,ChavelBook06, SaloffCosteBook02} for example), we get that, for any $0 < r \leq r'$: \begin{equation*} \frac{\Vol{x}{r'}}{\Vol{x}{r}} \leq \parenfrac{r'}{r}^n \exp\left(\sqrt{(n-1)K}\,r' \right). \end{equation*} These two conditions imply (see for example \cite{SaloffCosteBook02, HebischSaloff-Coste01}) that there is some $R > 0$ such that $M$ satisfies $\FK[n]{R}$. If the Ricci curvature is non-negative, then we also have $\FK[n]{}$. \subsubsection{Manifolds satisfying Faber-Krahn inequalities outside a compact set} We consider a complete weighted manifold $M$, and remove from it a compact set $K$ with smooth boundary.
We let $E_1, \ldots, E_k$ be the connected components of $M\setminus K$, and suppose that each $E_i$ is the exterior of a compact set with smooth boundary in a complete manifold $M_i$. A simple example of such a manifold is the connected sum of two (or more) copies of $\mathbf{R}^n$. It admits $\FK[n]{}$, but it is known that such a manifold does not satisfy a Poincaré inequality (see for example \cite{BenjaminiItaiChavel96}). Using \cite{GrigoryanSaloffCoste16}, we get that if each $M_i$ satisfies $\FK{}$, then there is some $R > 0$ such that $M$ satisfies $\FK{R}$. \paragraph{Acknowledgements} I thank G. Carron for his advice and remarks, which helped shape this article into its present form, and L. Guillopé for his comments on the manuscript. I also thank the Centre Henri Lebesgue \textbf{ANR-11-LABX-0020-01} for creating an attractive mathematical environment. I was partially supported by the ANR grant \textbf{ANR-18-CE40-0012}: RAGE. \section{Some techniques of harmonic analysis}\label{sec:Harmonic analysis} \begin{remark} We will often use $C$ or $c$ for generic constants whose values might change from line to line. When we need to make it clear on which parameters the constant depends, new constant factors will be written out when they appear, before being folded into this generic constant. \end{remark} \subsection{Dyadic cubes} In $\mathbf{R}^n$, the natural decomposition of the space into cubes of side length $2^k$, $k \in \mathbf{Z}$, is a very powerful tool. It turns out that families of open sets satisfying properties similar to those of the dyadic cubes in Euclidean space can be constructed in a more general setting. See for example the third part of \cite{Christ90}. We will use the construction of such ``dyadic cubes'' given by E. Sawyer and R. L. Wheeden in \cite{SawyerWheeden92}.
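As a reminder of the Euclidean model being generalized, the standard 1-D dyadic intervals $[\alpha 2^k, (\alpha+1)2^k)$ are pairwise either disjoint or nested; the following sketch (purely illustrative, not part of the construction of \cite{SawyerWheeden92}) checks this property exhaustively on a small range of generations:

```python
def dyadic_interval(k, a):
    """The 1-D dyadic 'cube' of generation k and offset a: [a*2^k, (a+1)*2^k)."""
    return (a * 2.0**k, (a + 1) * 2.0**k)

def relation(I, J):
    """Classify a pair of intervals; dyadic intervals are never partially overlapping."""
    if I[1] <= J[0] or J[1] <= I[0]:
        return "disjoint"
    if J[0] <= I[0] and I[1] <= J[1]:
        return "nested"       # I inside J
    if I[0] <= J[0] and J[1] <= I[1]:
        return "nested"       # J inside I
    return "overlap"          # never occurs for dyadic intervals
```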
Though it remains true in a more general setting, for our purposes it can be stated as follows: \begin{theorem}\label{cubes} Let $(X, d)$ be a separable metric space. Then there is $\rho > 1$ ($\rho = 8$ works) such that for any (large negative) integer $m$, there are points $\set*{x_\alpha^k}$ and a family $\collection{D}_m = \set*{\mathcal{E}_\alpha^k}$ of Borel sets for $k = m,\, m+1,\,\ldots$, $\alpha = 1, 2, \ldots$, such that \begin{itemize} \item $B(x_\alpha^k, \rho^k) \subset \mathcal{E}_\alpha^k \subset B(x_\alpha^k, \rho^{k+1})$. \item For each $k = m, m+1, \ldots$, the family $\set*{\mathcal{E}_\alpha^k}_\alpha$ is pairwise disjoint in $\alpha$ and $X = \bigcup_\alpha \mathcal{E}_\alpha^k$. \item If $m \leq k < l$, then either $\mathcal{E}_\alpha^k \cap \mathcal{E}_\beta^l = \emptyset$ or $\mathcal{E}_\alpha^k \subset \mathcal{E}_\beta^l$. \end{itemize} \end{theorem} Given such a family $\collection{D}_m$, the sets $\mathcal{E}_\alpha^k$ will be called \emph{dyadic cubes} of $M$, or simply \emph{cubes}. The ball $B(x_\alpha^k, \rho^{k+1})$ is called the containing ball of the cube $\mathcal{E}_\alpha^k$. For any cube $Q$, the containing ball is denoted by $B(Q)$, and $\rho$ will be called the sidelength constant of the dyadic cubes. The length of a cube $Q$ is the radius of $\rho^{-1}B(Q)$, written $\ell(Q)$. \subsection{Properties of doubling measures} We start by recalling the definitions and some standard properties of doubling measures, covering at the same time the $R$-doubling case. \begin{definition}\label{def:def doubling} A measured metric space $(X,d,\mu)$ satisfies the doubling property $\doubling{}$ of order $\eta$ if there is some constant $A > 0$ such that for all $x\in X$, $0 < r \leq r'$, \begin{equation}\label{eq:def doubling} \frac{\Vol{x}{r'}}{\Vol{x}{r}} \leq A \parenfrac{r'}{r}^\eta. \end{equation} We call $A$ the doubling constant, and $\eta$ the doubling order.
We will also say ``the doubling constants'' to refer to both $A$ and $\eta$ at the same time. The property $\doubling{}$ is equivalent to the fact that for some constant $A > 0$, for any ball $B \subset X$: \begin{equation}\label{eq:def doubling alt} \mu(2B) \leq A \mu(B). \end{equation} \end{definition} The proof of the equivalence is the same as that of the $R$-doubling case given after definition \ref{def:R doubling} (with $R = \infty$). A note on the constants: \eqref{eq:def doubling alt} implies \eqref{eq:def doubling} with $\eta = \log_2 A$ (and the same $A$ in both inequalities), while conversely, \eqref{eq:def doubling} implies \eqref{eq:def doubling alt} with constant $2^\eta A$. We state again the reverse doubling property: \begin{definition}\label{def:def reverse doubling} A measured metric space $(X,d,\mu)$ satisfies the reverse doubling property $\rd{}$ of order $\nu$ if there is some constant $a > 0$ such that for all $x\in X$, $0 < r \leq r'$, \begin{equation}\label{eq:def reverse doubling} a \parenfrac{r'}{r}^\nu \leq \frac{\Vol{x}{r'}}{\Vol{x}{r}}. \end{equation} We call $a$ the reverse doubling constant, and $\nu$ the reverse doubling order. The property $\rd{}$ is equivalent to the fact that for some constant $a \in (0,1)$, for any ball $B \subset X$: \begin{equation}\label{eq:def reverse doubling alt} \mu(B) \leq a\mu(2B). \end{equation} \end{definition} \begin{proof}[Proof that \eqref{eq:def reverse doubling alt} implies \eqref{eq:def reverse doubling}] Let $x\in X$, $0 < r \leq r'$. Writing $\lfloor t \rfloor$ for the integer part of $t\in \mathbf{R}$, let $k = \left\lfloor \log_2 \frac{r'}{r}\right\rfloor$. Then: \begin{align*} \Vol{x}{r} &\leq a^k\Vol{x}{2^kr} \\ &\leq a^k \Vol{x}{r'} \\ &\leq a^{-1 + \log_2 \frac{r'}{r}} \Vol{x}{r'}\quad (a \leq 1) \\ &\leq \frac{1}{a} \parenfrac{r'}{r}^{-\nu} \Vol{x}{r'}, \end{align*} with $\nu = -\log_2 a$.
Thus \begin{equation*} a \parenfrac{r'}{r}^\nu \leq \frac{\Vol{x}{r'}}{\Vol{x}{r}}. \end{equation*} \end{proof} \begin{proposition}\label{pr:doubling different centers} Let $(X, d, \mu)$ satisfy $\doubling{}$. Then for any $x,y \in X$ and $r, r' > 0$ such that $B(y, r) \subset B(x,r')$, we have \begin{equation}\label{eq:doubling different centers} \frac{\Vol{x}{r'}}{\Vol{y}{r}} \leq A^2 \parenfrac{r'}{r}^\eta. \end{equation} \end{proposition} This is a classical result. The proof is similar to what we will do to prove proposition \ref{pr:R doubling different centers}. \begin{definition}\label{def:R doubling} A measured metric space $(X,d,\mu)$ satisfies the $R$-doubling property $\doubling{R}$ if there is some constant $A > 0$ such that \eqref{eq:def doubling} holds for all $x \in X$ and $0 < r \leq r' \leq 2R$. This is equivalent to \eqref{eq:def doubling alt} being true for all balls $B$ of radius less than $R$. \end{definition} We define in the same way the $R$-reverse doubling property $\rd{R}$. We will write $A_R$ for the doubling constant when it is important to specify which $R$ the constant is associated with. Some care is needed to obtain precisely these maximal radii. That \eqref{eq:def doubling alt} follows from \eqref{eq:def doubling} is immediate. \begin{proof}[Proof that \eqref{eq:def doubling alt} implies \eqref{eq:def doubling}] Suppose that there is some constant $A$ such that for all balls $B$ of radius less than $R$ we have $\mu(2B) \leq A\mu(B)$. Let $r \leq r' \leq 2R$, and set $k = \left\lfloor \log_2 \frac{r'}{r} \right\rfloor$. We have $2^{-k-1}r' < r \leq 2^{-k}r'$, and, using repeatedly $\Vol{x}{\rho} \leq A \Vol{x}{\rho/2}$, valid for all $\rho \leq 2R$, we have \begin{align*} \Vol{x}{r'} &\leq A^{k+1}\Vol{x}{2^{-k - 1} r'} \\ &\leq A^{k + 1} \Vol{x}{r} \\ &\leq A e^{\left(\log A \log \frac{r'}{r}\right)/ \log 2} \Vol{x}{r} \\ &\leq A \left(\frac{r'}{r}\right)^\eta \Vol{x}{r}, \end{align*} with $\eta = \log_2 A$.
\end{proof} \begin{proposition}\label{pr:R doubling different centers} Let $X$ satisfy $\doubling{R}$. Then for all $x, y \in X$ and $r, r' > 0$ such that $B(y, r) \subset B(x,r')$ with $r' < R$, we have, for $\eta = \log_2 A$: \begin{equation}\label{eq:R doubling different centers} \frac{\Vol{x}{r'}}{\Vol{y}{r}} \leq A^2 \left(\frac{r'}{r}\right)^\eta. \end{equation} If in addition $X$ satisfies $\rd{R}$, then we also have, for some constant $c > 0$, that for all $0 < r, r' < R$ with $B(y,r) \subset B(x,r')$, \begin{equation}\label{eq:reverse doubling different centers} c \parenfrac{r'}{r}^\nu \leq \frac{\Vol{x}{r'}}{\Vol{y}{r}}. \end{equation} \end{proposition} \begin{proof}For the first part, we simply use $B(x,r') \subset B(y, 2r')$ and then apply \eqref{eq:def doubling}. For the second part, since $B(x,r') \subset B(y,2r')$, we can use \eqref{eq:doubling different centers} and we get \begin{align*} \frac{\Vol{x}{r'}}{\Vol{y}{r}} &= \frac{\Vol{y}{r'}}{\Vol{y}{r}}\frac{\Vol{x}{r'}}{\Vol{y}{r'}} \\ &\geq a\parenfrac{r'}{r}^\nu\frac{\Vol{x}{r'}}{\Vol{y}{2r'}} \\ &\geq a A^{-2} 2^{-\eta} \parenfrac{r'}{r}^\nu. \end{align*} \end{proof} We now suppose that $(X, d)$ is a \emph{path metric space}, i.e.\ that the distance $d(x,y)$ is realised as the infimum of the lengths of continuous paths with endpoints $x$ and $y$. We will keep making this assumption in everything that follows (though some results are still true in a more general setting). \begin{proposition} Let $X$ satisfy $\doubling{R}$, and suppose that $X \setminus B(x,3R/4)$ is non-empty for all $x\in X$. Then there is some $\nu > 0$ such that $X$ satisfies $\rd{R/2}$. \end{proposition} \begin{proof} Let $x \in X$, $r < R/2$. We take $y \in X$ such that $d(x,y) = \frac{3}{2}r$. Then $B(y, r/2) \subset B(x, 2r)\setminus B(x,r)$. Thus $\Vol{x}{2r} \leq A^2 4^\eta \Vol{y}{r/2} = A^4 \Vol{y}{r/2}$, and therefore $(1 + A^{-4})\mu(B(x,r)) \leq \mu(B(x,2r))$.
From this we deduce $\rd{R/2}$ in a way similar to the argument following definition \ref{def:R doubling}. \end{proof} The $R$-doubling property also implies an upper bound on the volume of balls of large radius. The two following propositions, and their proofs, come from \cite{HebischSaloff-Coste01}. \begin{proposition}\label{pr:annuli_bound} If $(X,d, \mu)$ satisfies $\doubling{R}$, then there is some $C > 0$, depending only on the doubling constant and order, such that for any $r > 0$ and $R' \leq R$: \begin{equation} \Vol{x}{r + R'/4} \leq C \Vol{x}{r}. \end{equation} \end{proposition} \begin{proof} The case $r \leq R$ is obvious by the doubling property. For $r > R$, let $\set*{x_i}_i$ be a maximal family in $B(x, r - R'/4)$ such that for any $i \neq j$, $d(x_i, x_j) > R'/2$. Then the balls $B(x_i, R'/4) \subset B(x,r)$ are disjoint, and the balls $B(x_i, R')$ cover $B(x,r+R'/4)$, since a point of $B(x,r+R'/4)$ is at distance at most $R'/2$ from $B(x,r - R'/4)$ (because $(X,d)$ is a path metric space). Thus $\Vol{x}{r+R'/4} \leq \sum_i \Vol{x_i}{R'} \leq A^2 \sum_i \Vol{x_i}{R'/4} \leq A^2 \Vol{x}{r}$. \end{proof} \begin{proposition}\label{pr:exp doubling} If $(X,d, \mu)$ satisfies $\doubling{R}$, then there is a $D > 0$, depending only on the doubling constant and doubling order, such that for any $r > 0$ we have \begin{equation} \Vol{x}{r} \leq e^{D\frac{r}{R}} \mu(B(x,R)). \label{eq:quasi_doubling} \end{equation} \end{proposition} \begin{proof} Let $r > R$ and $k = \left\lfloor 4\frac{r - R}{R}\right\rfloor$; then we have $\Vol{x}{r} \leq \Vol{x}{R + (k+1)R/4}$. Thus by proposition \ref{pr:annuli_bound}, $\Vol{x}{r} \leq C^{k+1}\Vol{x}{R}$. Moreover, $k + 1 \leq 4\frac{r}{R} - 3 \leq 4\frac{r}{R}$, and so \begin{equation*} \Vol{x}{r} \leq \exp\left(4 \ln\left(C\right) \frac{r}{R} \right) \Vol{x}{R}. \end{equation*} Thus we get \eqref{eq:quasi_doubling} with $D = 4 \ln(C)$.
If $r \leq R$, then $\mu(B(x,r)) \leq \mu(B(x,R)) \leq e^{D\frac{r}{R}} \mu(B(x,R))$ and thus \eqref{eq:quasi_doubling} still holds. \end{proof} Similarly to how we always use $A$ for the doubling constant, $D$ will always be used for this constant, $D = 8 \log A$. \begin{proposition}\label{pr:volume different centers} Let $X$ satisfy $\doubling{R}$ and let $r \leq R$. Then there exists a constant $C > 0$, depending only on the doubling constant and order, such that for any $x, y \in X$, $\Vol{x}{r} \leq Ce^{D\frac{d(x,y)}{r}}\Vol{y}{r}$. \end{proposition} \begin{proof} We have $B(x,r) \subset B(y, r + d(x,y))$. Since $r \leq R$, $\mu$ also satisfies $\doubling{r}$ with the same constant, so by proposition \ref{pr:exp doubling} applied at scale $r$:
\begin{equation*} \Vol{x}{r} \leq \Vol{y}{r + d(x,y)} \leq e^{D\frac{r + d(x,y)}{r}} \Vol{y}{r} = e^D e^{D\frac{d(x,y)}{r}}\Vol{y}{r}, \end{equation*} which gives the result with $C = e^D = A^8$. \end{proof} \begin{proposition}\label{pr:bigger R} If $(X,d, \mu)$ satisfies $\doubling{R}$, then it also satisfies $\doubling{R'}$ for any $R' > 0$, with a doubling constant $A_{R'} = A_R$ if $R' \leq R$, and $A_{R'} = e^{2D\frac{R'}{R}}$ if $R' > R$. \end{proposition} \begin{proof} The case $R' \leq R$ is obvious. Thus assume $R' > R$, and let $r \leq R'$. If $r \leq R$ then the result is trivial since $A_{R} \leq A_{R'}$. If $r > R$, then by proposition \ref{pr:exp doubling}, $\Vol{x}{2r} \leq e^{D\frac{2r}{R}} \Vol{x}{R} \leq e^{D\frac{2r}{R}} \Vol{x}{r}$, and $e^{2D\frac{r}{R}} \leq e^{2D\frac{R'}{R}}$. Thus $\mu$ is $R'$-doubling, with a doubling constant $A_{R'} = e^{2D\frac{R'}{R}}$. \end{proof} With this we can generalise proposition \ref{pr:volume different centers} to any $r > R$: we use the $r$-doubling and apply proposition \ref{pr:volume different centers} with it. The corresponding constants are $A_r = e^{2D\frac{r}{R}}$, $D_r = 8\log A_r = 16 D \frac{r}{R}$ and $A_r^8 = e^{16 D \frac{r}{R}}$.
Then we have, for any $x,y\in X$ and $r \geq R$:
\begin{equation} \Vol{x}{r} \leq e^{16 D \frac{r + d(x,y)}{R}} \Vol{y}{r}. \end{equation} \begin{proposition}\label{pr:reverse_doubling bigger R} Let $(X,d,\mu)$ be a measured metric space that satisfies $\doubling{R}$. If it also satisfies $\rd{R}$, then for any $\kappa > 1$, it satisfies $\rd{\kappa R}$ with a different reverse doubling constant, which depends only on the doubling and reverse doubling constants and orders, and on $\kappa$. \end{proposition} The notable part of this proposition is that the reverse doubling order is preserved. \begin{proof} By proposition \ref{pr:bigger R}, $\mu$ is $\kappa R$-doubling for all $\kappa$, with doubling constant $A_\kappa$ and doubling order $\eta = \eta(\kappa)$ (enlarging $\eta$ if necessary, we may assume $\eta \geq \nu$). We take a point $x\in X$, and $r, r'$ with $0 < r \leq r' \leq \kappa R$. We want to prove that there is some constant $a_\kappa$ such that, for any such $x, r, r'$:
\begin{equation}\label{eq:reverse_kappa_doubling} \frac{\Vol{x}{r'}}{\Vol{x}{r}} \geq a_\kappa \left(\frac{r'}{r}\right)^\nu \end{equation} If $0 < r \leq r' \leq R$, then there is nothing to do but apply $\rd{R}$. If $0 < r \leq R < r' \leq \kappa R$, then:
\begin{equation*} \frac{\Vol{x}{r'}}{\Vol{x}{r}} \geq \frac{\Vol{x}{R}}{\Vol{x}{r}} \geq a \left(\frac{R}{r}\right)^\nu \geq a \kappa^{-\nu} \left(\frac{r'}{r}\right)^\nu \end{equation*} Finally, when $R < r \leq r' \leq \kappa R$, then:
\begin{align*} \frac{\Vol{x}{r'}}{\Vol{x}{r}} &\geq \frac{\Vol{x}{r'}}{\Vol{x}{R}}\frac{\Vol{x}{R}}{\Vol{x}{r}} \\ &\geq a\kappa^{-\nu} \left(\frac{r'}{R}\right)^\nu A_\kappa^{-1} \left(\frac{R}{r}\right)^\eta \\ &\geq a A_\kappa^{-1} \kappa^{-\nu} \left(\frac{R}{r}\right)^{\eta -\nu} \left(\frac{r'}{r}\right)^\nu \\ &\geq a A_\kappa^{-1} \kappa^{-\eta} \left(\frac{r'}{r}\right)^\nu \end{align*} Thus \eqref{eq:reverse_kappa_doubling} holds for $a_\kappa= \min\left(a, a\kappa^{-\nu}, aA_\kappa^{-1}\kappa^{-\eta}\right) = a A_\kappa^{-1}\kappa^{-\eta}$.
\end{proof} \begin{proposition}\label{p_covering_card} Let $(X,d,\mu)$ satisfy $\doubling{R}$. Take $x \in X$, $r > 0$, and let $B = B(x,r)$. Let $\delta$ be such that $0 < \delta \leq \min(r, R)$, and let $\set{x_i}_{i \in I} \subset B$ be a family of points such that the balls $B_i = B(x_i, \delta)$ form a covering of $B$ and such that for any $i \neq j$, $\frac{1}{2}B_i \cap \frac{1}{2} B_j = \emptyset$. Then there are constants $C, c > 0$, depending only on the doubling constant, such that
\begin{equation} \card(I) \leq C e^{c\frac{r}{\delta}} \end{equation} \end{proposition} \begin{proof} For any $i$, $B_i \subset B(x, r+ \delta)$, and since $\delta \leq \min(r,R)$, applying proposition \ref{pr:annuli_bound} four times with $R' = \delta$ gives $\Vol{x}{r + \delta} \leq C \Vol{x}{r}$. Now, by proposition \ref{pr:exp doubling}, $\Vol{x}{r} \leq e^{D\frac{r}{\delta}}\Vol{x}{\delta}$ ($\delta \leq R$, so $\mu$ is $\delta$-doubling with the same doubling constant as that of the $R$-doubling). Moreover, by proposition \ref{pr:volume different centers}, $\Vol{x}{\delta} \leq C e^{D\frac{d(x,x_i)}{\delta}}\Vol{x_i}{\delta} \leq C e^{D\frac{r}{\delta}}\mu(B_i)$, since $x_i \in B$ and thus $d(x,x_i) \leq r$. Hence $\mu(B(x,r)) \leq C e^{2D\frac{r}{\delta}} \mu(B_i)$ for every $i$; up to this point the constant $C$ depends only on the doubling constants. Summing over $i \in I$, and using that the balls $\frac{1}{2}B_i$ are pairwise disjoint and contained in $B(x, r+\delta)$:
\begin{align*} \left(\card{I} \right)\Vol{x}{r + \delta} &\leq Ce^{2D\frac{r}{\delta}} \sum_{i \in I} \mu(B_i) \\ &\leq ACe^{2D\frac{r}{\delta}} \sum_{i \in I} \mu\left(\frac{1}{2}B_i\right) \\ &\leq Ce^{2D\frac{r}{\delta}} \Vol{x}{r+ \delta} \end{align*} Thus $\card(I) \leq Ce^{2D\frac{r}{\delta}}$, and the constant $C$ depends only on the doubling constants. \end{proof} \begin{remark} For any ball $B$, such a covering always exists: take for $\set{x_i}_{i \in I} \subset B$ a maximal family with $d(x_i, x_j) \geq \delta$ for any $i\neq j$. By maximality the balls $B(x_i, \delta)$ cover $B$, and the balls $B(x_i, \delta/2)$ are pairwise disjoint.
\end{remark} \begin{proposition} Let $M_R$ be the centered maximal function defined by:
\begin{equation} \forall f\in L_{loc}^1(M),\; M_R f(x) = \sup_{r < R} \fint_{B(x,r)} |f| \mathop{}\mathrm{d} \mu \end{equation} Then, if $\mu$ satisfies $\doubling{R}$, $M_{R/2}$ is bounded on $L^p$ for all $p \in (1, +\infty]$, and the operator norm is bounded by a constant that depends only on the doubling constant $A$ and on $p$. \end{proposition} We will use the following classical results. \begin{lemma}[Vitali's covering lemma]\label{Vitali lemma} Let $(X,d)$ be a separable metric space, and $\set{B_j}_{j\in J}$ a collection of balls such that $\sup_j r(B_j) < \infty$. Then for any $c > 3$ there exists a subcollection $\set{B_{j_n}}_{n\in \mathbf{N}} \subset \set{B_j}_{j\in J}$ such that the $B_{j_n}$ are pairwise disjoint and $\bigcup_{j\in J} B_j \subset \bigcup_{n\in \mathbf{N}} c B_{j_n}$. \end{lemma} \begin{theorem}[Marcinkiewicz interpolation theorem]\label{th:Marcinkiewicz} Let $(X,\mu)$ be a measure space and $T$ a sublinear operator acting on functions, i.e.\ there is a $\kappa > 0$ such that for any measurable $f, g$, the functions $Tf, Tg$ are measurable and $T(f+g)(x) \leq \kappa \left(Tf(x) + Tg(x)\right)$ for almost every $x \in X$. Assume that for $1 \leq p < r < \infty$:
\begin{align*} \forall f \in L^p,\; \mu\set*{x\in X:\; Tf(x) > \lambda} &\leq \frac{A}{\lambda^p}\|f\|_p^p,\\ \forall f \in L^r,\; \mu\set*{x\in X:\; Tf(x) > \lambda} &\leq \frac{B}{\lambda^r}\|f\|_r^r, \end{align*} or, for $1 \leq p < r = \infty$, that the second line is replaced by: $\forall f \in L^\infty,\; \|Tf\|_\infty \leq B\|f\|_\infty$. Then, for every $s \in (p,r)$ and all $f \in L^s$, we have $Tf\in L^s$ and:
\begin{equation} \|Tf\|_s \leq C(A,B, p, r, s, \kappa) \|f\|_s \end{equation} \end{theorem} \begin{proof}[Proof of the proposition] We have, for any $f \in L^\infty(M)$, $\|M_{R/2} f\|_\infty \leq \|f\|_\infty$.
If $f \in L^1(M)$, then for any $\lambda > 0$, define $E_\lambda = \set*{x\in M:\; M_{R/2}f(x) > \lambda}$. If $x \in E_\lambda$, then there is some $r_x$ with $0 < r_x < R/2$ such that $\lambda < \fint_{B(x,r_x)} |f| \mathop{}\mathrm{d} \mu$. Then $\mu(B(x,r_x)) \leq \lambda^{-1} \int_{B(x,r_x)} |f| \mathop{}\mathrm{d}\mu$. We have $E_\lambda \subset \bigcup_x B(x,r_x)$, thus by Vitali's covering lemma (with $c = 4$), there is a subcollection $\set*{x_n}$ such that the $B(x_n, r_n)$ are pairwise disjoint and $E_\lambda \subset \bigcup_n B(x_n, 4 r_n)$. Also, since $r_n < R/2$ and $\mu$ is $R$-doubling, we have $\Vol{x_n}{4r_n} \leq A^2 \Vol{x_n}{r_n}$. Then:
\begin{align*} \mu(E_\lambda) &\leq \sum_n \mu(B(x_n, 4r_n)) \\ &\leq A^2 \sum_n \mu(B(x_n, r_n)) \\ &\leq A^2 \lambda^{-1} \sum_n \int_{B(x_n, r_n)} |f| \mathop{}\mathrm{d}\mu \\ &\leq A^2 \frac{\|f\|_1}{\lambda}. \end{align*} And so by the Marcinkiewicz interpolation theorem, for any $p \in (1, +\infty)$, $M_{R/2}$ is bounded on $L^p$ with an operator norm $\|M_{R/2}\|_{p\rightarrow p} \leq C_p$, with $C_p$ depending only on $A$ and $p$. \end{proof} \begin{remark} Of course, since $\doubling{R}$ implies $\doubling{R'}$ for all $R' > R$, the maximal function $M_R$ itself is also bounded, but with the constant $C_p$ now depending on the constant for $\doubling{2R}$; and so are all the $M_{R'}$ with $R' > R$, with the constant $C_p$ depending on $p$, the $R$-doubling constant, and the ratio $R'/R$. \end{remark} \begin{proposition}\label{pr:uncentered equiv centered} Let $\tilde{M}_R$ be the uncentered maximal function defined by: for all $f\in L_{loc}^1(M)$,
\begin{equation} \tilde{M}_Rf(x) = \sup_{\substack{x \in B,\\r(B) \leq R}} \fint_B |f|\mathop{}\mathrm{d}\mu \end{equation} where the supremum is over all balls $B$ satisfying the given conditions, and $r(B)$ denotes the radius of $B$. Then, if $\mu$ is $R$-doubling, there exists some constant $C > 0$ such that $M_R \leq \tilde{M}_R \leq C M_{2R}$.
\end{proposition} \begin{proof} Since a ball centered at $x$ is in particular a ball containing $x$, the inequality $M_R \leq \tilde{M}_R$ is obvious. Now, for any ball $B= B(y,r)$ containing $x$ with $r \leq R$, we have $B \subset B(x, 2r)$, and proposition \ref{pr:R doubling different centers} (applied with $\doubling{2R}$) gives $\Vol{x}{2r} \leq C\mu(B)$, so that:
\begin{equation*} \fint_B |f| \mathop{}\mathrm{d} \mu \leq \frac{\Vol{x}{2r}}{\mu(B)}\fint_{B(x,2r)} |f| \mathop{}\mathrm{d} \mu \leq C M_{2R} f(x) \end{equation*} \end{proof} \begin{proposition}\label{p_dyad_max} Let $(X,d,\mu)$ be a separable, measured metric space, and $\collection{D}_m$ a chosen construction of dyadic cubes on $X$. Define the associated dyadic maximal function $M_{d,m}$ by:
\begin{equation} M_{d,m}f(x) = \sup_{\substack{Q \in \collection{D}_m \\ x \in Q}} \fint_Q |f| \mathop{}\mathrm{d} \mu \end{equation} Then for any $p > 1$ there is a constant $C_p$ such that for any $f\in L^p$, $\|M_{d,m}f\|_p \leq C_p \|f\|_p$. As a consequence, $M_{d,m,l}$, the maximal function defined in the same way but with the supremum restricted to cubes of sidelength at most $l$, is also bounded on $L^p$ for all $p > 1$. \end{proposition} \begin{proof} Let $f \in L^1(X)$ and $\lambda > 0$; we define $E_\lambda = \set*{x\in X:\; M_{d,m}f(x) > \lambda}$. If $x \in E_\lambda$, then there is a cube $Q\in \collection{D}_m$ containing $x$ such that $\fint_Q |f| \mathop{}\mathrm{d}\mu > \lambda$, and so $Q \subset E_\lambda$. Then there are two possibilities. First, there is a maximal dyadic cube $P$ containing $x$ such that $\fint_P |f| \mathop{}\mathrm{d}\mu > \lambda$; then $P \subset E_\lambda$. Second, there is no such maximal cube; then $\Omega = \bigcup_{\substack{Q\in \collection{D}_m \\ x\in Q}} Q \subset E_\lambda$, and we have $\mu(\Omega) \leq \lambda^{-1} \int_\Omega |f|\mathop{}\mathrm{d}\mu < \infty$.
Then take $\set{Q_i}_i$ to be the family of all the maximal dyadic cubes such that $\fint_{Q_i} |f|\mathop{}\mathrm{d}\mu > \lambda$, and $\set{\Omega_j}_j$ the family of all the regions $\Omega_j = \bigcup_k Q_k^j$, where $\set{Q_k^j}_k$ is an infinite increasing sequence of cubes with $\fint_{Q_k^j} |f|\mathop{}\mathrm{d}\mu > \lambda$. The $Q_i, \Omega_j$ are pairwise disjoint: it is clear that the $Q_i$ are. Now, if for a cube $Q$ we have $Q \cap \Omega_j \neq \emptyset$, then there is a cube $P\subset \Omega_j$ such that $P \cap Q \neq \emptyset$, thus we have either $P\subset Q$ or $Q \subset P$. In both cases, $Q\subset \Omega_j$, since $\Omega_j$ is the union of all cubes containing $P$. This means both that $Q_i \cap \Omega_j = \emptyset$ for all $i,j$, and that $\Omega_j \cap \Omega_l = \emptyset$ for $j\neq l$. Thus, we have the disjoint union:
\begin{equation*} E_\lambda = \bigcup_i Q_i \cup \bigcup_j \Omega_j. \end{equation*} Then $\mu(Q_i) < \lambda^{-1}\int_{Q_i} |f| \mathop{}\mathrm{d}\mu$, and $\mu(\Omega_j) \leq \lambda^{-1}\int_{\Omega_j}|f|\mathop{}\mathrm{d}\mu$. Summing over all cubes and all regions, $\mu(E_\lambda) \leq \lambda^{-1}\int_{E_\lambda} |f| \mathop{}\mathrm{d}\mu \leq \lambda^{-1} \|f\|_1$. Thus:
\begin{equation}\label{eq:dyadic_max weak type} \mu\left( \set*{x\in X:\; M_{d,m}f(x) > \lambda}\right) \leq \frac{\|f\|_1}{\lambda} \end{equation} Moreover, for $f\in L^\infty(X)$, we clearly have $M_{d,m}f(x) \leq \|f\|_\infty$. Then by the Marcinkiewicz interpolation theorem, there is a constant $C_p > 1$ such that $\|M_{d,m}f\|_p \leq C_p \|f\|_p$. \end{proof} \subsection{Estimates of operator norms by that of a maximal function} We refer to the work of C. Pérez and R.L. Wheeden \cite{PerezWheeden03} for a more general approach. In what follows, we let $(X,d,\mu)$ be a separable $R$-doubling measured metric space.
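A guiding example for what follows is the Riesz-type operator
\begin{equation*}
Tf(x) = \int_X \frac{d(x,y)^s}{\Vol{x}{d(x,y)}} f(y) \mathop{}\mathrm{d}\mu(y),
\end{equation*}
for which the estimates of this subsection can be stated in terms of the fractional maximal function $M_s f(x) = \sup_{r > 0} r^{s} \fint_{B(x,r)} |f| \mathop{}\mathrm{d} \mu$; see corollary \ref{cor:riesz_bounded} below.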
We take $T$ an operator given by a kernel $K:X\times X\setminus \mathrm{Diag} \rightarrow \mathbf{R}$, i.e.\
\begin{equation}\label{T_def} Tf(x) = \int_X f(y)K(x,y)\mathop{}\mathrm{d}\mu(y) \end{equation} We say that $T$, or $K$, satisfies the condition $\Kcond{}$ if $K$ is non-negative and if there are constants $C_1, C_2 > 1$ such that:
\begin{equation}\label{K_cond} \begin{aligned} d(x', y) \leq C_2 d(x,y) &\Rightarrow K(x,y) \leq C_1 K(x', y), \\ d(x, y') \leq C_2 d(x,y) &\Rightarrow K(x,y) \leq C_1 K(x, y'). \end{aligned} \end{equation} For each $m \in \mathbf{Z}$, $X$ admits a decomposition into dyadic cubes. We take $\rho > 1$ to be as in theorem \ref{cubes}. We define $\varphi$ as the following functional on balls
\begin{equation} \varphi(B) = \sup_{\substack{x,y \in B\\ d(x,y) \geq \frac{1}{2\rho}r(B)}} K(x,y), \end{equation} and $M_\varphi$ to be the following maximal function:
\begin{equation} M_\varphi f(x) = \sup_{\substack{x \in B}} \varphi(B) \int_{B} |f|\mathop{}\mathrm{d}\mu \end{equation} For $T$ satisfying $\Kcond{}$, it is shown in $(4.3)$ of \cite{SawyerWheedenZhao96} that $\varphi$ is decreasing in the following sense: \begin{proposition}\label{p_phi_larger_ball} There is a constant $\alpha$, which depends only on $C_1$, $C_2$ and $\rho$, such that for any balls $B \subset B'$, $\varphi(B') \leq \alpha \varphi(B)$. \end{proposition} \begin{proof} First we prove that if \eqref{K_cond} holds, then for any integer $k \geq 1$, $d(x', y) \leq C_2^k d(x,y)$ implies $K(x,y) \leq C_1^k K(x',y)$ (and similarly with $y'$ in the second variable). We proceed by induction, the case $k = 1$ being \eqref{K_cond} itself. Let $k \geq 2$ and $x, x', y \in X$ be such that $d(x', y) \leq C_2^k d(x,y)$, and suppose that $d(x',y) \leq C_2^{k-1} d(x,y)$ implies $K(x,y) \leq C_1^{k-1}K(x',y)$. If $d(x',y) \leq C_2^{k-1}d(x,y)$, the result holds and there is nothing to prove.
If $d(x',y) > C_2^{k-1} d(x,y)$, then, since $X$ is a path metric space, there is a continuous path from $y$ to $x'$, and by continuity a point $z$ on this path such that $d(y,z) = C_2^{k-1} d(x,y)$. But then:
\begin{equation*} d(x',y) \leq C_2^k d(x,y) = C_2 d(z,y), \end{equation*} and thus $K(z,y) \leq C_1 K(x',y)$, while the induction hypothesis gives $K(x,y) \leq C_1^{k-1} K(z,y)$. Combining the two, we get that $K(x,y) \leq C_1^k K(x',y)$ for all $x, x', y$ with $d(x',y) \leq C_2^k d(x,y)$. This generalizes slightly: for any $C_2 > 1$ there exists a $C_1 > 1$ such that \eqref{K_cond} holds. Now we can prove the proposition proper. Let $x', y' \in B'$ and $x,y \in B$ be such that $d(x',y') \geq c r(B')$ and $d(x,y) \geq c r(B)$, with $c = \frac{1}{2\rho}$. We can suppose that $d(x,y') \geq d(x,x')$ (if not, we exchange $x'$ and $y'$). Then $cr(B') \leq d(x',y') \leq d(x', x) + d(x, y') \leq 2d(x,y')$. Moreover, since $B \subset B'$, $d(x,y') \leq 2 r(B')$, and thus:
\begin{equation*} d(x,y') \leq \frac{2}{c} d(x',y') \end{equation*} Thus by \eqref{K_cond} there is a constant $c_1 > 1$ such that $K(x',y') \leq c_1 K(x,y')$. Moreover $d(x,y) \leq d(x,y') + d(y', y) \leq d(x,y') + 2r(B') \leq (1 + 4/c) d(x,y')$. Thus by \eqref{K_cond} there is a constant $c_2 > 1$ such that $K(x,y') \leq c_2 K(x,y)$. Thus
\begin{equation*} K(x',y') \leq c_1 c_2 K(x,y) \end{equation*} and taking the supremum over $x', y'$, $\varphi(B') \leq c_1 c_2 \varphi(B)$. \end{proof} We further assume that $\varphi$ satisfies the following condition: there is some $\varepsilon > 0$ and some constant $L > 0$ such that for any balls $B_1, B_2$ with $B_1\subset B_2$, we have:
\begin{equation}\label{condition_phi} \varphi(B_1)\mu(B_1) \leq L \left(\frac{r(B_1)}{r(B_2)}\right)^\varepsilon \varphi(B_2)\mu(B_2) \end{equation} \begin{theorem}[C. Pérez and R.L. Wheeden \cite{PerezWheeden03}]\label{th_perez_wheeden} Let $(X,d,\mu)$ be a metric space with a doubling measure $\mu$.
For $T$ an operator defined by \eqref{T_def} satisfying $\Kcond{}$, with $\varphi$ satisfying \eqref{condition_phi}, and for $1 < p < \infty$, there is some constant $C$, depending only on the doubling constant and $p$, such that for any $f:X \rightarrow \mathbf{R}$:
\begin{equation}\label{eq:perez_wheeden} \|Tf\|_p \leq C \|M_\varphi f\|_p \end{equation} \end{theorem} In addition, for the operator $Tf(x) = \int_X \frac{d(x,y)^s}{\Vol{x}{d(x,y)}} f(y) \mathop{}\mathrm{d}\mu(y)$, we can replace $M_\varphi$ by the maximal function defined by $M_s f(x) = \sup_{r > 0} r^{s} \fint_{B(x,r)} |f| \mathop{}\mathrm{d}\mu$; see corollary \ref{cor:riesz_bounded}. We will also show a variant of this theorem. We consider the operator $T_\delta$, $\delta < R$, with kernel $K_\delta(x,y) = K(x,y) \chi_{\set{d(x,y) < \delta}}$, and we want to compare its $L^p$ operator norm to that of the maximal function $M_{\varphi,\delta}$ defined by:
\begin{equation}\label{eq:def M_phi_delta} M_{\varphi, \delta} f(x) = \sup_{\substack{x \in B\\ r(B) < \delta}} \varphi(B) \int_B |f| \mathop{}\mathrm{d}\mu. \end{equation} The idea of the proof will be essentially the same as that of theorem \ref{th_perez_wheeden} given in \cite{PerezWheeden03}, but some care must be taken to account for the different hypotheses, and thus we give the details in what follows. The hypotheses needed to prove $\|T_\delta f\|_p \leq C\|M_{\varphi, \delta} f\|_p$ can be weakened compared to those of theorem \ref{th_perez_wheeden}. A key point is that proposition \ref{p_phi_larger_ball} has to hold at least for balls of radius at most $2\delta$; looking at the proof of the proposition, this is true as long as \eqref{K_cond} holds for $C_2 \leq 1 + 8\rho$ and $d(x,y) \leq 4\delta$. We thus take $(X, d, \mu)$ an $R$-doubling space and $T$ an operator defined by a kernel $K$.
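Although it will not be used in the sequel, let us sketch, under the additional assumptions that $\rd{R}$ holds with $s \leq \nu$ and restricting to $d(x,y), d(x,y') \leq R/2$, why the Riesz-type kernel $K(x,y) = \frac{d(x,y)^s}{\Vol{x}{d(x,y)}}$ mentioned above satisfies the second inequality of \eqref{K_cond} at these scales. Writing $r = d(x,y)$ and $r' = d(x,y')$ with $r' \leq C_2 r$:
\begin{equation*}
\frac{K(x,y)}{K(x,y')} = \left(\frac{r}{r'}\right)^{s} \frac{\Vol{x}{r'}}{\Vol{x}{r}} \leq
\begin{cases}
a^{-1} \parenfrac{r'}{r}^{\nu - s} \leq a^{-1} & \text{if } r' \leq r,\\
A C_2^{\eta} & \text{if } r \leq r' \leq C_2 r,
\end{cases}
\end{equation*}
using the reverse doubling in the first case and the doubling in the second, so that $K(x,y) \leq C_1 K(x,y')$ with $C_1 = \max\left(a^{-1}, AC_2^{\eta}\right)$. The first inequality of \eqref{K_cond}, where the center of the volume changes, can be treated similarly with the help of proposition \ref{pr:R doubling different centers}.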
We say that $T$, or $K$, verifies the condition $\Kcond{\delta}$ if there exist constants $C_1 > 1$, $C_2 \geq 1 + 8 \rho$, such that for any $x,y$ with $d(x,y) \leq 4\delta$, we have:
\begin{equation}\label{eq:K_cond_cutoff} \begin{aligned} \forall x'\in X,\quad d(x', y) \leq C_2 d(x,y) &\Rightarrow K(x,y) \leq C_1 K(x', y),\\ \forall y'\in X,\quad d(x, y') \leq C_2 d(x,y) &\Rightarrow K(x,y) \leq C_1 K(x, y'). \end{aligned} \end{equation} Property $\Kcond{\delta}$ ensures that proposition \ref{p_phi_larger_ball} holds for balls of radius less than $2\delta$. Since we will end up considering balls of radius slightly larger than $\delta$, the following proposition will be useful. \begin{proposition}\label{pr:M_phi_p_bound} Let $(X,d,\mu)$ satisfy $\doubling{2(2\kappa+1)\delta}$ for some $\delta > 0$ and $\kappa > 1$, and let $T$ be an operator satisfying $\Kcond{4(2\kappa +1)\delta}$ and such that the associated functional $\varphi$ satisfies \eqref{condition_phi} when $r(B_1), r(B_2) \leq 2(2\kappa +1) \delta$. Then for any $p \in (1, \infty]$, there is some constant $C$, which depends only on $p$, $\kappa$, the doubling constants, and the constants $\alpha$, $L$, $\varepsilon$ of proposition \ref{p_phi_larger_ball} and of \eqref{condition_phi}, such that for any non-negative $f$, $\|M_{\varphi, \kappa \delta} f\|_p \leq C \|M_{\varphi, \delta} f\|_p$.
\end{proposition} \begin{proof} We have:
\begin{align*} M_{\varphi, \kappa \delta} f(x) &\leq M_{\varphi, \delta} f(x) + \sup_{\substack{x\in B,\\ \delta < r(B) \leq \kappa\delta}} \varphi(B) \int_B |f|\mathop{}\mathrm{d}\mu \\ &\leq M_{\varphi,\delta} f(x) + C \sup_{\substack{x\in B,\\ r(B) = \kappa\delta}} \varphi(B) \int_{B(x,2\kappa\delta)} |f|\mathop{}\mathrm{d}\mu, \end{align*} using that for $x \in B$, $B\subset B(x,2r(B)) \subset B(x,2\kappa\delta)$, and that for any ball $B$ with radius greater than $\delta$, by \eqref{condition_phi} (on balls of radius at most $\kappa \delta$), we have $\varphi(B) \leq AL \kappa^\eta \varphi\left(\frac{\kappa\delta}{r(B)}B\right)$. Now fix a ball $B$ containing $x$ with radius equal to $\kappa \delta$. For $y \in B(x,2\kappa\delta)$, consider the ball $Q(y) = B(y,\delta)$. We have $Q(y) \subset B(x,(2\kappa + 1)\delta)$, thus using $\doubling{2(2\kappa +1)\delta}$, we have that $\mu(B(x,2\kappa\delta)) \leq A^2(2\kappa + 1)^\eta \mu(Q(y))$. For $z \in B(x,(2\kappa +1)\delta)$, we also have that $B \subset B(z, 2(2\kappa +1)\delta)$, thus using \eqref{condition_phi} (for balls of radius at most $2(2\kappa +1)\delta$), $\doubling{2(2\kappa +1)\delta}$ and $\Kcond{4(2\kappa +1)\delta}$, we get that $\varphi(B) \leq A^2 \parenfrac{2(2\kappa +1)}{2\kappa}^\eta \alpha\, \varphi(B(z,\delta))$.
Putting all this together, we get:
\begin{align*} \varphi(B) \int_{B(x,2\kappa\delta)} |f|\mathop{}\mathrm{d}\mu &= \varphi(B) \fint_{B(x,2\kappa\delta)} \mu(B(x,2\kappa\delta)) |f|\mathop{}\mathrm{d}\mu \\ &\leq C \varphi(B) \fint_{B(x,2\kappa\delta)} \mu(Q(y)) |f(y)| \mathop{}\mathrm{d}\mu(y) \\ &\leq C \fint_{B(x,2\kappa\delta)} \varphi(B) \int_{Q(y)} \mathop{}\mathrm{d}\mu(z) |f(y)|\mathop{}\mathrm{d}\mu(y) \\ &\leq C\frac{1}{\Vol{x}{2\kappa\delta}}\int_{B(x,(2\kappa +1)\delta)} \varphi(B) \int_{B(x,2\kappa\delta)\cap B(z,\delta)} |f(y)|\mathop{}\mathrm{d}\mu(y) \mathop{}\mathrm{d}\mu(z) \\ &\leq C A\parenfrac{2\kappa +1}{2\kappa}^\eta \fint_{B(x,(2\kappa+1)\delta)} \varphi(B(z,\delta)) \int_{B(z,\delta)} |f(y)| \mathop{}\mathrm{d}\mu(y) \mathop{}\mathrm{d}\mu(z) \\ &\leq C \fint_{B(x,(2\kappa+1)\delta)} M_{\varphi, \delta} f \mathop{}\mathrm{d}\mu, \end{align*} where the constant $C$ depends only on the doubling constants, $L$, $\alpha$ and $\kappa$. Then we have:
\begin{equation} M_{\varphi,\kappa\delta} f(x) \leq M_{\varphi,\delta} f(x) + C M_{(2\kappa +1) \delta}\left(M_{\varphi, \delta}f\right)(x), \end{equation} and the proposition follows from the boundedness of the classical maximal function $M_{(2\kappa +1)\delta}$ on any $L^p$, $p > 1$, under $\doubling{2(2\kappa +1)\delta}$. \end{proof} \begin{theorem}\label{cutoff_kernel} Let $\delta > 0$, and let $\rho > 1$ be the sidelength constant of dyadic cubes. Suppose that $(X,d, \mu)$ satisfies $\doubling{2(6\rho +1) \delta}$. Assume that $K$ satisfies $\Kcond{4(6\rho +1)\delta}$, and that $\varphi$ satisfies \eqref{condition_phi} for balls of radius at most $2(6\rho+1)\delta$. Let $p \geq 1$.
Then there is a constant $C > 0$ (which depends only on the doubling constants, $\rho$, $p$ and the constants in \eqref{condition_phi} and \eqref{K_cond}) such that we have:
\begin{equation}\label{eq p_bound} \int_X |T_\delta f|^p \mathop{}\mathrm{d}\mu \leq C \int_X (M_{\varphi, \delta} f)^p \mathop{}\mathrm{d}\mu \end{equation} \end{theorem} \begin{proof} We will show that there exists some constant $C > 0$ such that for any non-negative function $f$, we have $\int_X |T_\delta f|^p \mathop{}\mathrm{d}\mu \leq C \int_X (M_{\varphi, 3\rho\delta} f)^p \mathop{}\mathrm{d}\mu$. The theorem then follows by proposition \ref{pr:M_phi_p_bound}, applied with $\kappa = 3\rho$. To prove this, we define, for any $m\in \mathbf{Z}$, the operator $T_m$ by:
\begin{equation*} T_mf(x) = \int_{d(x,y) > \rho^m} K_\delta (x,y) f(y) \mathop{}\mathrm{d}\mu(y) \end{equation*} Suppose that for any $m \in \mathbf{Z}$ and any non-negative measurable functions $f,g$, we have:
\begin{equation}\label{eq p_dual_bound} \int_X T_m f g\mathop{}\mathrm{d}\mu = \int_{d(x,y) > \rho^m} K_\delta (x,y) f(y) g(x) \mathop{}\mathrm{d}\mu(x,y) \leq C\|M_{\varphi, 3\rho\delta} f\|_p \|g\|_{p'}. \end{equation} Then by the monotone convergence theorem, taking $m\rightarrow -\infty$, the same inequality holds with $T_m$ replaced by $T_\delta$, and by duality, the bound $\int_X |T_\delta f|^p \mathop{}\mathrm{d}\mu \leq C \int_X (M_{\varphi, 3\rho\delta} f)^p \mathop{}\mathrm{d}\mu$ follows. Take $m \in \mathbf{Z}$, and let $f, g$ be non-negative measurable functions. Let $\collection{D}_m = \set*{\mathcal{E}_\alpha^k}_{\alpha \in \mathbf{N}^*}^{k \geq m}$ be a decomposition of $X$ into dyadic cubes given by theorem \ref{cubes}, with sidelengths $\rho^k$. If $x,y \in X$ are such that $d(x,y) > \rho^m$, we take the integer $l \geq m$ such that $\rho^l < d(x,y) \leq \rho^{l+1}$. Let $Q$ be the cube of length $\rho^l$ containing $x$, and $B(Q) = B\left(c_Q, \rho^{l+1}\right)$ the containing ball. We recall that we have $\rho^{-1} B(Q) \subset Q \subset B(Q)$. Since $d(c_Q, y) \leq d(c_Q, x) + d(x,y) \leq 2\rho^{l+1}$, we have $y \in 2B(Q)$.
Since $d(x,y) > \rho^l = \frac{1}{2\rho} r\left(2B(Q)\right)$, we have by definition of $\varphi$, $K(x,y) \leq \varphi(2B(Q)) \leq \alpha\varphi(B(Q))$ by proposition \ref{p_phi_larger_ball} (which needs to hold for balls of radius $2\rho\delta$, thus we need $\Kcond{4\rho\delta}$). And if $\delta \leq \rho^l = \ell(Q)$, then $d(x,y) > \delta$ and $K_\delta(x,y) = 0$. We have proved that if $Q$ is the cube of length comparable with $d(x,y)$ containing $x$, then $y \in 2B(Q)$ and:
\begin{equation*} K_\delta(x,y) \leq C\varphi(B(Q))\chi_{\set*{R \in \collection{D}_m,\, \ell(R) < \delta}}(Q)\chi_{Q}(x) \chi_{2B(Q)}(y) \end{equation*} If $r$ is the largest integer such that $\rho^r < \delta$, define $\collection{D}_m^r = \set*{\mathcal{E}_\alpha^k;\; m \leq k \leq r}$. For any $x, y \in X$ with $d(x,y) > \rho^m$, there is at least one cube $Q\in \collection{D}_m$ such that the previous inequality holds, and since its right-hand side vanishes when $\ell(Q) \geq \delta$, we have, for any $x,y \in X$:
\begin{equation*} K_\delta(x,y) \leq \sum_{Q \in \collection{D}_m^r} C \varphi(B(Q))\chi_{Q}(x) \chi_{2B(Q)}(y) \end{equation*} And so, for any $f, g \geq 0$:
\begin{equation*} \int_X T_mf g \mathop{}\mathrm{d}\mu\leq C\sum_{Q \in \collection{D}_m^r} \varphi(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d}\mu \int_Q g \mathop{}\mathrm{d}\mu \end{equation*} But for any fixed integer $k \geq m$, the cubes of length $\rho^k$, $\set*{\mathcal{E}_\alpha^k}_\alpha$, are pairwise disjoint, and $X = \bigcup_\alpha \mathcal{E}_\alpha^k$.
Then using this decomposition for $k = r$:
\begin{equation*} \int_X T_mf g \mathop{}\mathrm{d}\mu\leq C \sum_{\alpha \geq 1} \sum_{\substack{Q \in \collection{D}_m^r\\ Q \subset \mathcal{E}_\alpha^r}} \varphi(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d}\mu \int_Q g \mathop{}\mathrm{d}\mu \end{equation*} Then for a constant $\gamma \geq 1$ to be determined, for any $\alpha \geq 1$ and $n \in \mathbf{Z}$, define:
\begin{equation} \collection{C}_\alpha^n = \set*{Q \in \collection{D}_m^r, Q \subset \mathcal{E}_\alpha^r;\; \gamma^n < \frac{1}{\mu(B(Q))}\int_Q g \mathop{}\mathrm{d} \mu \leq \gamma^{n+1}} \end{equation} We let $n_\alpha$ be the unique integer such that $\mathcal{E}_\alpha^r \in \collection{C}_\alpha^{n_\alpha}$. Notice that $\set*{\collection{C}_\alpha^{n}}_{n\in \mathbf{Z}}$ is a partition of $\set{Q \in \collection{D}_m^r;\, Q \subset \mathcal{E}_\alpha^r}$. Then we have:
\begin{equation*} \int_X T_mf g \mathop{}\mathrm{d}\mu \leq C \sum_{\alpha \geq 1} \sum_{n\in \mathbf{Z}} \gamma^{n+1} \sum_{Q \in \collection{C}_\alpha^n} \varphi(B(Q)) \mu(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d}\mu \end{equation*} For any $\alpha \geq 1$, we let $\set*{Q_{j,\alpha}^n}_{j\in J_n}$, for some index set $J_n$, be the collection of the maximal dyadic cubes $Q \subset \mathcal{E}_{\alpha}^r$ such that $\gamma^n < \frac{1}{\mu\left(B\left(Q\right)\right)} \int_{Q} g \mathop{}\mathrm{d} \mu$. If $n \leq n_\alpha$, then there is exactly one such maximal cube: $\mathcal{E}_\alpha^r$ itself.
Also, the map $(n, Q) \mapsto Q$ is an injection from the set of couples $(n, Q)$ with $n \leq n_\alpha$, $Q \in \collection{C}_\alpha^n$, into $\set*{Q\in \collection{D}_m^r: Q \subset \mathcal{E}_\alpha^r}$, thus:
\begin{equation*} \sum_{n \leq n_\alpha} \gamma^{n+1} \sum_{Q\in \collection{C}_\alpha^n} \varphi(B(Q))\mu(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d}\mu \leq \gamma^{n_\alpha + 1} \sum_{\substack{Q\in \collection{D}_m^r\\ Q \subset \mathcal{E}_\alpha^r}} \varphi(B(Q))\mu(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d} \mu \end{equation*} If $n > n_\alpha$, then any $Q_{j,\alpha}^n$ is a strict subset of $\mathcal{E}_\alpha^r$. For such a maximal cube $\mathcal{F}$, we let $P$ be its dyadic parent, i.e.\ the only cube of length $\rho \ell(\mathcal{F})$ containing $\mathcal{F}$. We have $P \subset \mathcal{E}_\alpha^r$, and using the maximality of $\mathcal{F}$, the inclusion $B(\mathcal{F}) \subset 2B(P)$, and the $\rho \delta$-doubling ($B(P)$ has radius less than $\rho \delta$):
\begin{equation}\label{eq_max_cubes_gamma} \gamma^n < \frac{1}{\mu(B(\mathcal{F}))} \int_\mathcal{F} g \mathop{}\mathrm{d} \mu \leq \frac{\mu(B(P))}{\mu(B(\mathcal{F}))} \frac{1}{\mu(B(P))}\int_P g \mathop{}\mathrm{d} \mu \leq C \rho^\eta \gamma^n = \kappa \gamma^n, \end{equation} the constant $\kappa$ depending only on $\rho$ and on the doubling constant. Then choosing $\gamma > \kappa$, we have $\frac{1}{\mu(B(\mathcal{F}))} \int_\mathcal{F} g \mathop{}\mathrm{d} \mu \leq \gamma^{n+1}$, thus $\mathcal{F} \in \collection{C}_\alpha^n$. Thus for a fixed $n > n_\alpha$, every cube in $\collection{C}_\alpha^n$ is contained in a (unique) $Q_{j,\alpha}^n$, and the $Q_{j,\alpha}^n$ are disjoint in $j$ by maximality.
Thus, writing $Q_{j,\alpha}^{n_\alpha}$ for $\mathcal{E}_\alpha^r$, we have:
\begin{equation*} \int_X \left(T_mf\right) g \mathop{}\mathrm{d}\mu \leq C \sum_{\alpha \geq 1} \sum_{n \geq n_\alpha} \gamma^{n+1} \sum_{j \in J_n} \sum_{\substack{Q \in \collection{D}_m^r\\Q\subset Q_{j,\alpha}^n}} \varphi(B(Q)) \mu(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d}\mu \end{equation*} Now we use the following lemma (see lemma~6.1 of \cite{PerezWheeden03}): \begin{lemma} Let $(X,d,\mu)$ satisfy $\doubling{\delta}$, and let $\varphi$ be a functional on balls that satisfies \eqref{condition_phi} for balls of radius at most $\rho \delta$. Then there is a constant $C$, which depends only on the constant $L$ of \eqref{condition_phi} and on the doubling constant, such that for any $f \geq 0$ and any dyadic cube $Q_0 \in \collection{D}_m^r$, with $\rho^r \leq \delta$,
\begin{equation} \sum_{\substack{Q\in \collection{D}_m \\ Q \subset Q_0}} \varphi(B(Q))\mu(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d} \mu \leq C \varphi(B(Q_0))\mu(B(Q_0)) \int_{3B(Q_0)} f \mathop{}\mathrm{d} \mu \end{equation} \end{lemma} \begin{proof} By \eqref{condition_phi}, we have:
\begin{align} \sum_{\substack{Q\in \collection{D}_m \\ Q \subset Q_0}} \varphi(B(Q))\mu(B(Q)) \int_{2B(Q)} f \mathop{}\mathrm{d} \mu &\leq L \varphi(B(Q_0))\mu(B(Q_0)) \sum_{\substack{Q\in \collection{D}_m \\ Q \subset Q_0}} \left(\frac{\ell(Q)}{\ell(Q_0)}\right)^\varepsilon \int_{2B(Q)} f \mathop{}\mathrm{d} \mu \nonumber \\ &\leq L \varphi(B(Q_0))\mu(B(Q_0)) \sum_{l= 0}^{+\infty} \rho^{-\varepsilon l} \sum_{\substack{Q\in \collection{D}_m \\ Q \subset Q_0 \\ \ell(Q) = \rho^{-l}\ell(Q_0)}} \int_{2B(Q)} f \mathop{}\mathrm{d}\mu. \label{eq:lemma eq 1} \end{align} Then for $Q \in \collection{D}_m$, $Q \subset Q_0$ and $\ell(Q) \leq \ell(Q_0)$, we have $2B(Q) \subset 3B(Q_0)$. Indeed, if $y \in 2B(Q)$, then:
\begin{align*} d(y,x_{Q_0}) &\leq d(y, x_{Q}) + d(x_Q, x_{Q_0}) \\ &\leq 2 r(B(Q)) + r(B(Q_0)) \\ &\leq 3 r(B(Q_0)).
\end{align*} Thus, the right-hand side of \eqref{eq:lemma eq 1} is at most:
\begin{equation*} L \varphi(B(Q_0))\mu(B(Q_0)) \int_{3B(Q_0)} f(x) \sum_{l=0}^\infty \rho^{-\varepsilon l} \sum_{\substack{Q\in \collection{D}_m \\ Q \subset Q_0 \\ \ell(Q) = \rho^{-l}\ell(Q_0)}} \chi_{2B(Q)}(x) \mathop{}\mathrm{d} \mu(x). \end{equation*} It then suffices to show that for each $l$, any $x \in 3B(Q_0)$ is in at most $N$ of the $2B(Q)$ with $\ell(Q) = \rho^{-l}\ell(Q_0)$, with $N$ independent of the choices of $x$ and $Q_0$. For $l = 0$ there is only one such $Q$, namely $Q_0$ itself, and the claim is clear. Now fix $l \geq 1$, let $x\in X$, and let $Q$ be a cube of sidelength $\rho^{-l}\ell(Q_0)$ such that $x\in 2B(Q)$. We write $\ell = \ell(Q) \leq \rho^{-1}\delta$. Then for $y \in Q$, $d(x, y) \leq d(x, x_Q) + d(y, x_Q) \leq 3\rho \ell\leq 3\delta$, and we have $B(x_Q, \ell) \subset Q \subset B(x, 3\rho\ell)$. By the same argument as in proposition \ref{p_covering_card}, there can be at most $N$ disjoint balls of radius $\ell \leq \delta$ with center in a ball of radius $3\rho\ell$, with the constant $N$ depending only on $\rho$ and on the $\delta$-doubling constant. Thus
\begin{equation*} \sum_{l=0}^\infty \rho^{-\varepsilon l} \sum_{\substack{Q\in \collection{D}_m \\ Q \subset Q_0 \\ \ell(Q) = \rho^{-l}\ell(Q_0)}} \chi_{2B(Q)}(x) \leq N \frac{1}{1- \rho^{-\varepsilon}}, \end{equation*} and the lemma follows. \end{proof} Then applying the lemma:
\begin{equation*} \int_X \left(T_mf\right) g \mathop{}\mathrm{d}\mu \leq C \sum_{\alpha \geq 1} \sum_{n \geq n_\alpha} \gamma^{n+1} \sum_{j \in J_n} \varphi\left(B\left(Q_{j,\alpha}^n\right)\right) \mu\left(B\left( Q_{j,\alpha}^n \right) \right) \int_{3B\left( Q_{j,\alpha}^n \right)} f \mathop{}\mathrm{d} \mu.
\end{equation*} Thus, since $Q_{j,\alpha}^n \in \collection{C}_\alpha^n$, $\gamma^n \leq \frac{1}{\mu\left(B(Q_{j,\alpha}^n)\right)} \int_{Q_{j,\alpha}^n} g \mathop{}\mathrm{d}\mu$, and so \begin{equation*} \int_X \left(T_mf\right) g \mathop{}\mathrm{d}\mu \leq C \gamma \sum_{\alpha \geq 1} \sum_{n \geq n_\alpha} \sum_{j \in J_n} \varphi\left(B\left(Q_{j,\alpha}^n\right)\right) \int_{3B\left( Q_{j,\alpha}^n \right)} f \mathop{}\mathrm{d} \mu \int_{Q_{j,\alpha}^n} g \mathop{}\mathrm{d} \mu, \end{equation*} and we have \begin{equation} \int_X \left(T_mf\right) g \mathop{}\mathrm{d} \mu \leq c \sum_{\alpha, n, j} \varphi\left(B\left(Q_{j,\alpha}^n\right)\right) \mu\left(Q_{j,\alpha}^n\right) \int_{3B(Q_{j,\alpha}^n)} f \mathop{}\mathrm{d} \mu \frac{1}{\mu(Q_{j,\alpha}^n)} \int_{Q_{j,\alpha}^n} g \mathop{}\mathrm{d} \mu. \end{equation} Then, using Hölder's inequality and the fact that by \eqref{condition_phi} there is some constant $c$ depending only on $\alpha, A, L, \varepsilon$ such that $ \varphi(B) \leq c\varphi(3B)$ (for balls of radius at most $3\rho \delta$), we get: \begin{samepage} \begin{multline*} \int_X \left(T_mf\right) g \mathop{}\mathrm{d} \mu \leq C\left(\sum_{\alpha, n, j}\mu\left(Q_{j,\alpha}^n \right) \left( \varphi\left(B\left(3Q_{j,\alpha}^n\right)\right) \int_{3B(Q_{j,\alpha}^n)} f \mathop{}\mathrm{d} \mu \right)^p \right)^\frac{1}{p} \\ \left(\sum_{\alpha, n, j}\mu\left(Q_{j,\alpha}^n\right) \left( \frac{1}{\mu(Q_{j,\alpha}^n)} \int_{Q_{j,\alpha}^n} g \mathop{}\mathrm{d} \mu \right)^{p'} \right)^\frac{1}{p'} \end{multline*} \end{samepage} It now remains to bound $\mu(Q_{j,\alpha}^n)$ by a constant times the measure of a set $E_{j,\alpha}^n$, with the sets $E_{j,\alpha}^n$ pairwise disjoint in $j, n, \alpha$.
For this, define $\Omega_\alpha^n$ by \begin{equation} \Omega_\alpha^n = \set*{ x \in \mathcal{E}_\alpha^r;\; \sup_{\substack{ Q \in \collection{D}_m^r\\ x \in Q}} \frac{1}{\mu(B(Q))} \int_Q g \mathop{}\mathrm{d} \mu > \gamma^n} = \bigcup_{j\in J_n} Q_{j,\alpha}^n \end{equation} and define the set $E_{j,\alpha}^n = Q_{j,\alpha}^n \setminus \Omega_\alpha^{n+1}$. We have $E_{j,\alpha}^n \subset \Omega_\alpha^n \setminus \Omega_\alpha^{n+1}$, and the sets $E_{j,\alpha}^n$ are pairwise disjoint in $j, n, \alpha$. Now we want to show that for $\gamma$ chosen large enough, $\mu(Q_{j,\alpha}^n) \leq 2\mu(E_{j,\alpha}^n)$. First, $Q_{j,\alpha}^n \cap \Omega_{\alpha}^{n+1} = \bigcup_i \left(Q_{j,\alpha}^n \cap Q_{i,\alpha}^{n+1}\right)$. But we have $\frac{1}{\mu\left(B\left( Q_{i,\alpha}^{n+1} \right)\right)} \int_{Q_{i,\alpha}^{n+1}} g \mathop{}\mathrm{d} \mu > \gamma^{n+1} > \gamma^n$, thus by maximality of $Q_{j,\alpha}^{n}$, and by the properties of dyadic cubes, either $Q_{i,\alpha}^{n+1} \subset Q_{j,\alpha}^n$ or $Q_{j,\alpha}^n \cap Q_{i,\alpha}^{n+1} = \emptyset$. Hence \begin{equation*} \mu\left( Q_{j,\alpha}^n \cap \Omega_\alpha^{n+1} \right) = \sum_{i: Q_{j,\alpha}^n \cap Q_{i,\alpha}^{n+1} \neq \emptyset} \mu \left( Q_{j,\alpha}^n \cap Q_{i,\alpha}^{n+1}\right) = \sum_{i: Q_{i,\alpha}^{n+1} \subset Q_{j,\alpha}^n} \mu\left(Q_{i,\alpha}^{n+1}\right) \end{equation*} But \begin{equation*} \mu\left(Q_{i,\alpha}^{n+1}\right) \leq \mu\left(B\left(Q_{i,\alpha}^{n+1}\right)\right) \leq \gamma^{-n - 1} \int_{Q_{i,\alpha}^{n+1}} g \mathop{}\mathrm{d} \mu. \end{equation*} And since the cubes $Q_{i,\alpha}^{n+1}$ considered are disjoint subsets of $Q_{j,\alpha}^n$, \begin{equation*} \mu(Q_{j,\alpha}^n \cap \Omega_\alpha^{n+1}) \leq \gamma^{-n-1} \int_{Q_{j,\alpha}^n} g \mathop{}\mathrm{d} \mu \leq \kappa \gamma^{-1} \mu(B(Q_{j,\alpha}^n)), \end{equation*} where $\kappa$ is the constant in \eqref{eq_max_cubes_gamma}.
But we have \begin{equation*} \mu(Q_{j,\alpha}^n) = \mu(E_{j,\alpha}^n) + \mu(Q_{j,\alpha}^n\cap \Omega_{\alpha}^{n+1}), \end{equation*} and so, choosing $\gamma = 2\kappa$, it follows that \begin{equation*} \mu\left(Q_{j,\alpha}^n\right) \leq \frac{\gamma}{\gamma - \kappa} \mu\left(E_{j,\alpha}^n \right) = 2\mu\left(E_{j,\alpha}^n \right) \end{equation*} Thus we have \begin{samepage} \begin{multline*} \int_X \left(T_mf\right) g \mathop{}\mathrm{d} \mu \leq 2C\left(\sum_{\alpha, n, j}\mu\left(E_{j,\alpha}^n \right) \left( \varphi\left(B\left(3Q_{j,\alpha}^n\right)\right) \int_{3B(Q_{j,\alpha}^n)} f \mathop{}\mathrm{d} \mu \right)^p \right)^\frac{1}{p}\\ \left(\sum_{\alpha, n, j}\mu\left(E_{j,\alpha}^n\right) \left( \frac{1}{\mu(Q_{j,\alpha}^n)} \int_{Q_{j,\alpha}^n} g \mathop{}\mathrm{d} \mu \right)^{p'} \right)^\frac{1}{p'}. \end{multline*} \end{samepage} But since $E_{j,\alpha}^n \subset Q_{j,\alpha}^n$, $\mu\left(E_{j,\alpha}^n \right) \left( \varphi\left(B\left(3Q_{j,\alpha}^n\right)\right) \int_{3B\left(Q_{j,\alpha}^n\right)} f \mathop{}\mathrm{d} \mu \right)^p \leq \int_{E_{j,\alpha}^n} \left(M_{\varphi, 3\rho^{r+1}} f\right)^p \mathop{}\mathrm{d}\mu$, and a similar inequality holds for the integral of $g$. In addition, using that the $E_{j,\alpha}^n$ are pairwise disjoint and that $\rho^r < \delta$, we get \begin{equation} \int_{X} \left(T_mf\right) g \mathop{}\mathrm{d} \mu \leq 2C\left( \int_{X} \left(M_{\varphi, 3\rho\delta } f\right)^p \mathop{}\mathrm{d}\mu\right)^\frac{1}{p} \left( \int_X \left( M_{d, \delta} g \right)^{p'} \mathop{}\mathrm{d}\mu \right)^\frac{1}{p'}; \end{equation} then, using Proposition \ref{p_dyad_max}, for all $f, g \geq 0$ there is a constant $C$ which depends only on $p, A, \alpha, \varepsilon$ (specifically, it depends on the constants for the $\rho \delta$-doubling) such that \begin{equation*} \int_{X} \left(T_mf\right) g \mathop{}\mathrm{d} \mu \leq C \left\|M_{\varphi, 3\rho\delta} f \right\|_p \|g\|_{p'}.
\end{equation*} This holds under $\doubling{r\delta}$, $\Kcond{2\rho\delta}$ and the fact that \eqref{condition_phi} holds for balls of radius at most $3\rho\delta$. These stronger hypotheses are what we need to apply Proposition \ref{pr:M_phi_p_bound}, which gives us \begin{equation} \int_{X} \left(T_mf\right) g \mathop{}\mathrm{d} \mu \leq C \left\|M_{\varphi, \delta} f \right\|_p \|g\|_{p'}, \end{equation} which proves the theorem. \end{proof} Finally we have: \begin{corollary}\label{cor:riesz_bounded} Let $\mu$ be a measure satisfying $\doubling{R}$ and $\rd{R}$, for $R > 0$ and $\eta \geq \nu > 0$ (the inequality $\eta \geq \nu$ is automatic). Let $s \leq \nu$ and let $\delta \leq R$. If $K(x,y) = \frac{d(x,y)^s}{\Vol{x}{d(x,y)}}$, then the associated operator $T_\delta$ satisfies the hypotheses of Theorem \ref{cutoff_kernel}. Moreover, the theorem still holds with $M_{\varphi, \delta}f$ replaced by the following maximal function: \begin{equation} M_{s, \delta} f(x) = \sup_{0 < r < \delta} r^{s}\fint_{B(x,r)} |f|\mathop{}\mathrm{d}\mu . \end{equation} \end{corollary} \begin{proof} First, take some $b > 1$; by Proposition \ref{pr:reverse_doubling bigger R}, $\mu$ is $bR$-reverse doubling of order $\nu$. Next, we must verify that $K$ satisfies the hypotheses of Theorem \ref{cutoff_kernel}. Let $d(x,y) \leq R$ and $d(x, y') \leq b d(x,y)$; then we have, by doubling and reverse doubling, \begin{align*} \frac{1}{\Vol{x}{d(x,y)}} &\leq \frac{1}{\Vol{x}{d(x,y')}} \frac{\Vol{x}{b d(x,y)}}{\Vol{x}{d(x,y)}} \frac{\Vol{x}{d(x,y')}}{\Vol{x}{b d(x,y)}}, \\ &\leq C b^{\eta-\nu} \left( \frac{d(x,y')}{d(x,y)}\right)^\nu \frac{1}{\Vol{x}{d(x,y')}}. \end{align*} Thus, provided that $s \leq \nu$, \begin{equation*} K(x,y) \leq C b^{\eta - \nu} \left(\frac{d(x,y')}{d(x,y)}\right)^{\nu -s} K(x,y') \leq C b^{\eta - s} K(x,y').
\end{equation*} Furthermore, if $d(x',y) \leq \alpha d(x,y)$, by doubling there are $c, C$ such that $c \Vol{y}{d(x',y)} \leq \Vol{x'}{d(x',y)} \leq C \Vol{y}{d(x',y)}$, and so, doing the same calculations, we have \begin{equation*} K(x,y) \leq C b^{\nu - s} K(x',y). \end{equation*} Hence there are $C_1, C_2 > 1$ such that \eqref{K_cond} is satisfied. Then, using the definition of $\varphi$ and doubling, $c \frac{r(B)^s}{\mu(B)} \leq \varphi(B) \leq C \frac{r(B)^s}{\mu(B)}$ for some constants that depend only on $s, \rho$ and the doubling constant. Then, since for $B_1 \subset B_2$ we have $r(B_1)^s \leq 2^s r(B_2)^s$, we easily verify that $\varphi$ satisfies \eqref{condition_phi} with $\varepsilon = s$. It is then enough to prove that the centered and uncentered versions of the maximal function $M_{s,\delta}$ are equivalent in $L^p$ norm. This follows from the same argument as that of Proposition \ref{pr:uncentered equiv centered}. \end{proof} \section{Relative Faber-Krahn inequality and estimates on the heat kernel and the Riesz and Bessel potentials}\label{sec:Faber Krahn} \subsection{Faber-Krahn and doubling} The results from this subsection are due to A.A. Grigor'yan \cite{Grigoryan94, Grigor'yanBook09}, or are slight adaptations of his results to the $R$-doubling case. \begin{theorem}\cite{Grigor'yanBook09} Let $(M,g,\mu)$ be a weighted manifold, and let $\set*{B(x_i, r_i)}_{i\in I}$ be a family of relatively compact balls in $M$, where $I$ is an arbitrary index set. Assume that, for any $i\in I$, the Faber-Krahn inequality holds: \begin{equation} \lambda_1(U) \geq a_i \mu(U)^{-2/\eta}, \end{equation} for any open set $U \subset B(x_i,r_i)$, where $a_i > 0$. Let $\Omega = \bigcup_{i\in I} B\left(x_i, \frac{r_i}{2}\right)$.
Then for all $x,y \in \Omega$ and $t \geq t_0 > 0$: \begin{equation} p_t(x,y) \leq \frac{C(\eta) \left(1 + \frac{d(x,y)^2}{t}\right)^{\eta/2} \exp\left(-\frac{d(x,y)^2}{4t} - \lambda_1(M) (t - t_0)\right)}{\left(a_i a_j \min\left(t_0, r_i^2\right) \min\left(t_0, r_j^2\right)\right)^{\eta/4}}, \end{equation} where $i, j$ are indices such that $x\in B\left(x_i, \frac{r_i}{2}\right)$ and $y \in B\left(x_j, \frac{r_j}{2}\right)$. \end{theorem} On a manifold which admits $\FK{R}$, applying this theorem with the family $\set*{B(x,r)}_{\substack{x\in M,\\ 0 < r \leq R}}$, $a_{x,r} = \frac{b}{r^2} \Vol{x}{r}^{2/\eta}$, $t_0 = t$, and $r = \sqrt{t}$ when $t \leq R^2$, we get \begin{align*} p_t(x,y) &\leq C(\eta) \frac{ \left(1 + \frac{d(x,y)^2}{t}\right)^{\eta/2} e^{-\frac{d(x,y)^2}{4t}}}{\left(a_{x,\sqrt{t}}\,a_{y,\sqrt{t}}\, t^2 \right)^{\eta/4}}, \\ &\leq \frac{C(\eta)}{b^{\eta/2}} \frac{ e^{-\frac{d(x,y)^2}{ct}}}{\Vol{x}{\sqrt{t}}^{1/2} \Vol{y}{\sqrt{t}}^{1/2}}. \end{align*} If $t > R^2$, we do the same thing, but with $r = R$. Thus we obtain the following: \begin{theorem}\label{th:heat_kernel_estimate} Let $(M,g,\mu)$ be a weighted Riemannian manifold, and suppose that there is $R > 0$ such that $M$ satisfies $\FK{R}$. Then $\mu$ satisfies $\doubling{R}$, and for any $c > 4$ there is some constant $C > 0$ such that the heat kernel satisfies the upper bounds \begin{align} p_t(x,y) &\leq \frac{C}{\Vol{x}{\sqrt{t}}^{1/2}\Vol{y}{\sqrt{t}}^{1/2}} e^{-\frac{d(x,y)^2}{ct}},\quad t \leq R^2 \\ p_t(x,y) &\leq \frac{C}{\Vol{x}{R}^{1/2}\Vol{y}{R}^{1/2}} e^{-\frac{d(x,y)^2}{ct}},\quad t > R^2. \end{align} The constant $C$ depends only on $b$ and $\eta$ in the Faber-Krahn inequality and on the chosen $c > 4$. \end{theorem} The estimate on the heat kernel follows from Theorem 5.2 of \cite{Grigoryan94}. The $R$-doubling follows from the proof of Proposition 5.2 of the same article.
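For completeness, the simplification of the denominator in the computation above can be checked directly: with $a_{x,\sqrt{t}} = \frac{b}{t} \Vol{x}{\sqrt{t}}^{2/\eta}$, \begin{equation*} \left(a_{x,\sqrt{t}}\, a_{y,\sqrt{t}}\, t^2\right)^{\eta/4} = \left(b^2\, \Vol{x}{\sqrt{t}}^{2/\eta}\, \Vol{y}{\sqrt{t}}^{2/\eta}\right)^{\eta/4} = b^{\eta/2}\, \Vol{x}{\sqrt{t}}^{1/2}\, \Vol{y}{\sqrt{t}}^{1/2}, \end{equation*} while the prefactor $\left(1 + \frac{d(x,y)^2}{t}\right)^{\eta/2} e^{-\frac{d(x,y)^2}{4t}}$ is absorbed into $C e^{-\frac{d(x,y)^2}{ct}}$ for any fixed $c > 4$, since $(1+u)^{\eta/2} e^{-u/4} \leq C(\eta, c)\, e^{-u/c}$ for all $u \geq 0$.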
Conversely, we have: \begin{proposition}\cite{Grigoryan94} \label{pr:heat + doubling implies FKR} Let $(M, g, \mu)$ be a complete, weighted Riemannian manifold. Suppose that $\mu$ satisfies $\doubling{R}$, that for any $x\in M$, $M\setminus B\left(x, \frac{3}{4}R\right) \neq \emptyset$, and that there is some constant $B$ such that the heat kernel satisfies \begin{equation}\label{eq:diagonal estimate} p_t(x,x) \leq \frac{B}{\Vol{x}{\sqrt{t}}}, \end{equation} for all $x \in M$ and all $0 < t \leq R^2$. Then there is some constant $\kappa \in (0, 1)$, depending only on the doubling constants and on $B$, such that $M$ admits a relative Faber-Krahn inequality at scale $\kappa R$, with $\eta$ the doubling order and $b$ depending only on $A$ and $B$. \end{proposition} \subsection{An estimate on the heat kernel} \begin{proposition}\label{pr:estimate on the heat kernel} Let $(M, g, \mu)$ be a weighted manifold satisfying $\FK{R}$ for $R > 0$. Then for any $c > 4$ and $\gamma \in (0, 1)$ there exist constants $C > 0$, $\hat{c} > 1$ such that for any $\lambda > 0$ with $R\lambda > \hat{c}$, we have \begin{equation}\label{eq:heat kernel estimate lambda} \begin{aligned} p_t(x,y) &\leq \frac{C}{\Vol{x}{\sqrt{t}}} e^{-\frac{d(x,y)^2}{ct}}, &\sqrt{t} \leq \lambda^{-1}\\ p_t(x,y) &\leq \frac{C}{\Vol{x}{\lambda^{-1}}} e^{(1 - \gamma)\lambda^2 t} e^{-\frac{d(x,y)^2}{ct}}, &\sqrt{t} > \lambda^{-1}, \end{aligned} \end{equation} with $C$ depending only on $b$, $\eta$ and $c$, and $\hat{c}$ depending on $b, \eta, c$ and $\gamma$. \end{proposition} \begin{proof} Let $c > 4$, $\gamma \in (0, 1)$.
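Throughout the proof, the factor $e^{\frac{D}{2}\frac{d(x,y)}{\sqrt{t}}}$ coming from comparing volumes at different centers is absorbed by completing the square: for $u \geq 0$ and $\kappa > 1$, \begin{equation*} \frac{D}{2}u - \frac{\kappa}{c}u^2 = -\frac{1}{c}u^2 + \left( \frac{D}{2}u - \frac{\kappa - 1}{c}u^2 \right) \leq -\frac{1}{c}u^2 + \frac{cD^2}{16(\kappa - 1)}, \end{equation*} since the quadratic $u \mapsto \frac{D}{2}u - \frac{\kappa - 1}{c}u^2$ attains its maximum $\frac{cD^2}{16(\kappa - 1)}$ at $u = \frac{cD}{4(\kappa - 1)}$.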
If $\sqrt{t} \leq \lambda^{-1} < R$, then applying Theorem \ref{th:heat_kernel_estimate}, we have, for any $\kappa > 1$ such that $c/\kappa > 4$, \begin{align*} p_t(x,y) &\leq \frac{C}{\Vol{x}{\sqrt{t}}^{1/2}\Vol{y}{\sqrt{t}}^{1/2}} e^{-\frac{\kappa d(x,y)^2}{ct}} \\ &\leq \frac{C}{\Vol{x}{\sqrt{t}}} e^{\frac{D}{2}\frac{d(x,y)}{\sqrt{t}} - \frac{\kappa}{c}\frac{d(x,y)^2}{t}} \\ &\leq \frac{C}{\Vol{x}{\sqrt{t}}} e^{-\frac{d(x,y)^2}{ct}}, \end{align*} using Proposition \ref{pr:volume different centers}. If $\sqrt{t} > \lambda^{-1}$, then similarly \begin{equation*} p_t(x,y) \leq \frac{C}{\Vol{x}{\lambda^{-1}}}e^{\frac{D d(x,y) }{2R} - \frac{\kappa d(x,y)^2}{ct}}. \end{equation*} Then we have $\frac{Dd(x,y)}{2R} - \frac{(\kappa - 1)d(x,y)^2}{ct} - (1-\gamma)\lambda^2 t \leq \left(\frac{cD^2}{16 (\kappa - 1) R^2} - (1 - \gamma)\lambda^2 \right) t$, for all $t > 0$, $x, y \in M$. Thus, for $\hat{c} = \sqrt{\frac{c}{(1-\gamma)(\kappa - 1)}} \frac{D}{4}$ and all $\lambda$ such that $\lambda R \geq \hat{c}$, we have \begin{equation} p_t(x,y) \leq \frac{C}{\Vol{x}{\lambda^{-1}}}e^{(1-\gamma)\lambda^2 t} e^{-\frac{d(x,y)^2}{ct}} \end{equation} \end{proof} \begin{remark} If $d(x,y) \leq R$ then we can actually do better: $e^{Dd(x,y)/2R} \leq C$ for a constant which does not depend on $d(x,y)$, and then \begin{equation}\label{eq:heat estimate without lambda} p_t(x,y) \leq \frac{C}{\Vol{x}{\min(\sqrt{t}, \lambda^{-1})}}e^{-\frac{d(x,y)^2}{ct}} \end{equation} \end{remark} \subsection{Estimation of the Riesz potential} Let $s > 0$. Define the Riesz potential to be the operator $I_s = \Delta^{-s/2}$ on $L^2(M,\mu)$.
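The spectral formula below rests on the elementary Gamma-function identity: for any $\theta > 0$, \begin{equation*} \theta^{-s/2} = \frac{1}{\Gamma\left(\frac{s}{2}\right)} \int_0^\infty t^{s/2 - 1} e^{-\theta t} \mathop{}\mathrm{d} t, \end{equation*} which follows from the substitution $u = \theta t$; applied to the spectral decomposition of $\Delta$, it expresses $\Delta^{-s/2}$ through the heat semigroup $e^{-t\Delta}$.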
We have, by the spectral theorem, for $f$ positive and measurable, \begin{align*} I_s f(x) &= \frac{1}{\Gamma\left(\frac{s}{2}\right)} \int_0^\infty t^{s/2 - 1} e^{-t\Delta}f(x) \mathop{}\mathrm{d} t \\ &= \frac{1}{\Gamma\left(\frac{s}{2}\right)} \int_M f(y)\int_0^\infty t^{s/2 - 1} p_t(x,y) \mathop{}\mathrm{d} t \mathop{}\mathrm{d}\mu(y) \\ &= \int_M i_s(x,y) f(y) \mathop{}\mathrm{d}\mu, \end{align*} with the ``kernel'' $i_s$ defined by \begin{equation} i_s(x,y) = \frac{1}{\Gamma\left(\frac{s}{2}\right)} \int_0^\infty t^{s/2 - 1} p_t(x,y)\mathop{}\mathrm{d} t. \end{equation} \begin{proposition}\label{pr:riesz_potential} Let $(M,g,\mu)$ be a manifold satisfying $\FK{}$ and $\rd{}$ with $\nu > 0$. Then for any $s < \nu$, there is a constant $C$, depending only on the Faber-Krahn and reverse doubling constants, such that \begin{equation} i_s(x,y) \leq C\frac{d(x,y)^s}{\Vol{x}{d(x,y)}} \end{equation} \end{proposition} \begin{proof} If $M$ admits a relative Faber-Krahn inequality, then there are constants $C > 0$, $c > 4$ such that $p_t(x,y) \leq \frac{C}{\Vol{x}{\sqrt{t}}} e^{-\frac{d(x,y)^2}{c t}}$, for all $x,y \in M$, $t > 0$. Thus \begin{equation*} i_s(x,y) \leq C_s \int_0^\infty \frac{t^{s/2 - 1}}{\Vol{x}{\sqrt{t}}} e^{-\frac{d(x,y)^2}{ct}} \mathop{}\mathrm{d} t \end{equation*} We integrate separately over $(0, d^2)$ and $(d^2, +\infty)$, and using the doubling and reverse doubling properties of the measure we get \begin{multline*} i_s(x,y) \leq C \frac{d^\eta}{\Vol{x}{d(x,y)}} \int_0^{d^2} t^{s/2 - \eta/2 - 1} e^{-\frac{d(x,y)^2}{ct}} \mathop{}\mathrm{d} t \\ + C \frac{d^\nu}{\Vol{x}{d(x,y)}}\int_{d^2}^\infty t^{s/2 - \nu/2 - 1} e^{-\frac{d(x,y)^2}{ct}} \mathop{}\mathrm{d} t \end{multline*} When $\nu > s$, the second integral is convergent and less than $\frac{2}{\nu - s} d^{s - \nu}$. For the first integral, through the change of variables $t = d^2/cu$ there is some constant $c_s$ such that it is equal to $c_s d^{s - \eta} \int_{1/c}^{\infty} u^{\eta/2 - s/2 - 1} e^{-u} \mathop{}\mathrm{d} u$, and this new integral is convergent if $\eta > s$, which holds since $\eta \geq \nu > s$; it is thus equal to a constant depending only on $\eta$, $s$ and $c$. Putting all of this together, we have \begin{equation*} i_s(x,y) \leq C \frac{d(x,y)^s}{\Vol{x}{d(x,y)}} \end{equation*} with the constant depending only on $s, \eta, \nu$ as well as the constants of the relative Faber-Krahn inequality. \end{proof} \subsection{Estimation of the Bessel potential} Define the Bessel potential, for $\lambda > 0$ and $s > 0$, to be the operator $G_{s,\lambda} = \left( \Delta + \lambda^2 \right)^{-s/2}$ on $L^2(M, \mu)$. We have, by the spectral theorem, \begin{equation} G_{s,\lambda} = \frac{1}{\Gamma\left(\frac{s}{2}\right)}\int_0^\infty t^{s/2 - 1} e^{-\lambda^2 t} e^{-t \Delta} \mathop{}\mathrm{d} t \end{equation} Similarly to the previous section, we have for positive $f$ \begin{equation} G_{s,\lambda}f(x) = \int_M g_{s,\lambda}(x,y) f(y) \mathop{}\mathrm{d}\mu(y) , \end{equation} with $g_{s,\lambda}$ defined by \begin{equation} g_{s,\lambda}(x,y) = \frac{1}{\Gamma\left(\frac{s}{2}\right)} \int_0^\infty t^{s/2-1} e^{-\lambda^2 t} p_t(x,y) \mathop{}\mathrm{d} t \end{equation} \begin{proposition}\label{prop_sep_g_lambda} There is a constant $\hat{c}$ such that if $(M,g,\mu)$ is a weighted manifold satisfying $\FK{R}$ and $\rd{R}$ for some $R > 0$ and $\nu > 1$, then for any $\lambda$ such that $\lambda R > \hat{c}$ and any $s < \nu$, there are constants $C > 0$ and $\gamma \in (0, 1)$, depending only on the Faber-Krahn and reverse doubling constants, such that \begin{equation} g_{s,\lambda}(x,y) \leq C \left( \frac{d(x,y)^s}{\Vol{x}{d(x,y)}}\chi_{\set*{\lambda d(x,y)\leq 1}} + \frac{\lambda^{-s}}{\Vol{x}{\lambda^{-1}}}\left(\chi_{\set*{\lambda d(x,y) > 1}}\right) \right) e^{-\gamma
\lambda d(x,y)}\label{separation_g_lambda} \end{equation} \end{proposition} \begin{proof} It is enough to show the proposition for $\lambda = 1$ and $R > \hat{c}$. Indeed, $G_{s,\lambda} = \lambda^{-s} \left( \frac{\Delta}{\lambda^2} + 1 \right)^{-s/2}$, and $\Delta/\lambda^2$ is the Laplacian $\Delta'$ for $(M, g', \mu')$ with $g'= \lambda^2 g$ and $\mathop{}\mathrm{d}\mu' = \lambda^n \mathop{}\mathrm{d}\mu$. The geodesic distance $d'$ associated with the metric $g'$ is simply $d' = \lambda d$, and if $(M,g,\mu)$ admits a relative Faber-Krahn inequality at scale $R$, then $(M, g', \mu')$ admits a relative Faber-Krahn inequality, with the same constants, at scale $\lambda R$. Then, using that $g_{s,\lambda}(x,y) = \lambda^{-s} g'_{s, 1}(x,y)$, with $g'_{s, 1}$ the kernel of $\left( \Delta' + 1 \right)^{-s/2}$, it follows that \eqref{separation_g_lambda} being true for $\lambda = 1$ and all $R > \hat{c}$ implies \eqref{separation_g_lambda} for all $(\lambda, R)$ such that $R \lambda > \hat{c}$. We have \begin{equation*} g_{s,1}(x,y) = \frac{1}{\Gamma\left(\frac{s}{2}\right)} \left( \int_0^{1} t^{s/2-1} e^{-t} p_t(x,y) \mathop{}\mathrm{d} t + \int_{1}^\infty t^{s/2-1} e^{- t} p_t(x,y) \mathop{}\mathrm{d} t \right) \end{equation*} Let $J_0 = \int_0^1 t^{s/2-1} e^{-t} p_t(x,y) \mathop{}\mathrm{d} t$ and $J_\infty = \int_1^\infty t^{s/2-1} e^{-t} p_t(x,y) \mathop{}\mathrm{d} t$. To simplify the notation we will write $d = d(x,y)$ until the end of this section. \begin{lemma}There is some constant $\hat{c} > 1$ such that for $R \geq \hat{c}$ there is some $\gamma > 0$ such that for any $s < \nu$ we have \begin{equation} \int_0^1 t^{s/2 - 1} e^{-t} p_t(x,y) \mathop{}\mathrm{d} t \leq c \left( \frac{d^s}{\Vol{x}{d}} \chi_{\set{ d \leq 1}} + \frac{1}{\Vol{x}{1}}\chi_{\set{d > 1}} \right) e^{-\gamma d}. \end{equation} \end{lemma} \begin{proof} We treat the cases $d \leq 1$ and $d > 1$ separately.
When $d(x,y) \leq 1$ we write $J_0 = J_{0,1} + J_{0,2}$ with \begin{equation*} J_{0,2} = \int_{d^2}^{1} t^{s/2-1} e^{- t} p_t(x,y) \mathop{}\mathrm{d} t. \end{equation*} By Proposition \ref{pr:estimate on the heat kernel}, there is a constant $\hat{c}$ such that if $R \geq \hat{c}$, then for all $t \leq 1$ we have $p_t(x,y) \leq \frac{C}{\Vol{x}{\sqrt{t}}} e^{-\frac{d(x,y)^2}{ct}}$. For such $R \geq \hat{c} > 1$, using the $R$-reverse doubling, we have for any $t \in (d^2, 1)$ that $d \leq \sqrt{t} \leq 1 < R$, and thus $\Vol{x}{\sqrt{t}} \geq a \left(\frac{\sqrt{t}}{d}\right)^\nu \Vol{x}{d}$. Using all this we get \begin{align*} J_{0,2} &\leq C \int_{d^2}^{1} \frac{t^{s/2 - 1} e^{-t} e^{-\frac{d^2}{ ct}}}{\Vol{x}{\sqrt{t}}} \mathop{}\mathrm{d} t, \\ &\leq C a^{-1} \frac{d^\nu}{\Vol{x}{d}} \int_{d^2}^{1} t^{s/2 - \nu/2 - 1} e^{-t} e^{-\frac{d^2}{ ct }} \mathop{}\mathrm{d} t, \\ &\leq C e^{1/4}\frac{ d^{\nu} e^{-d}}{\Vol{x}{d}} \int_{d^2}^1 t^{s/2 - \nu/2 - 1} \mathop{}\mathrm{d} t; \quad \text{ since } e^{- t} \leq e^{- d^2},\;e^{- d^2} \leq e^{1/4} e^{-d}, \\ &\leq C \frac{d^\nu e^{-d}}{\Vol{x}{d}} \frac{2}{\nu - s}\left( d^{s - \nu} - 1\right), \\ &\leq C \frac{ d^{s}}{\Vol{x}{d}} e^{- d}, \end{align*} since we have $\nu > s$. Now we estimate $J_{0, 1}$: \begin{align*} J_{0,1} &\leq C \int_0^{d^2} \frac{t^{s/2 - 1} e^{-t} e^{-\frac{d^2}{ct}}}{\Vol{x}{\sqrt{t}}} \mathop{}\mathrm{d} t, \\ &\leq AC \frac{d^\eta}{\Vol{x}{d}} \int_0^{d^2} t^{s/2 - \eta/2 - 1} e^{- t} e^{-\frac{d^2}{ct }} \mathop{}\mathrm{d} t, \\ &\leq C \frac{d^s}{\Vol{x}{d}} \int_{1/c}^\infty u^{\eta/2 - s/2 - 1} e^{- d^2/cu} e^{-u} \mathop{}\mathrm{d} u,\quad \text{ change of variables } t = d^2/cu, \\ &\leq C \frac{d^s}{\Vol{x}{d}} \int_{1/c}^\infty u^{\eta/2 - s/2 - 1} e^{-d^2/cu - u/2} e^{-u/2} \mathop{}\mathrm{d} u \\ \end{align*} We use that $e^{-d^2/cu - u/2} \leq C e^{-\gamma_1 d}$ for some constant $\gamma_1$ which depends on $c$.
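This last bound is an instance of the inequality between arithmetic and geometric means: for $u > 0$, \begin{equation*} \frac{d^2}{cu} + \frac{u}{2} \geq 2\sqrt{\frac{d^2}{cu}\cdot\frac{u}{2}} = \sqrt{\frac{2}{c}}\, d, \end{equation*} so that $e^{-d^2/cu - u/2} \leq e^{-\gamma_1 d}$ with $\gamma_1 = \sqrt{2/c}$; here one may even take $C = 1$.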
Then $\int_{1/c}^\infty u^{\eta/2 - s/2 - 1} e^{-u/2} \mathop{}\mathrm{d} u$ converges to a constant, and \begin{equation*} J_{0,1} \leq C \frac{d^s}{\Vol{x}{d}} e^{-\gamma_1 d} \end{equation*} Thus, for a constant $C$ depending only on $s, c$ and the doubling and reverse doubling constants, we have \begin{equation*} J_0\chi_{\set{d(x,y) \leq 1}} \leq C \frac{d^s}{\Vol{x}{d}} e^{-\gamma_1 d}\chi_{\set{d(x,y) \leq 1}}. \end{equation*} If $d(x,y) > 1$, then we have \begin{align*} J_0 &\leq C \int_0^{1} \frac{t^{s/2 - 1} e^{- t} e^{-\frac{d^2}{ct}}}{\Vol{x}{\sqrt{t}}}\mathop{}\mathrm{d} t, \\ &\leq AC \frac{1}{\Vol{x}{1}} \int_0^{1} t^{s/2 - \eta/2 - 1} e^{- t} e^{-\frac{d^2}{c t}} \mathop{}\mathrm{d} t, \\ &\leq C \frac{1}{\Vol{x}{1}} e^{-d^2/2c} \int_0^1 t^{s/2 - \eta/2 -1} e^{-\frac{1}{2ct}}\mathop{}\mathrm{d} t, \quad \text{ since } d > 1, \\ &\leq C \frac{1}{\Vol{x}{1}} e^{-\gamma_2 d}, \end{align*} since the integral converges to a constant depending only on $s, \eta, c$, and $e^{-d^2/2c} \leq e^{-\gamma_2 d}$ because $e^{-ax^2} \leq e^{-ax}$ for $x > 1$. Then, for $\gamma = \min(\gamma_1, \gamma_2)$ and a constant $C$ which depends only on $s, c$ and the doubling and reverse doubling constants, we have \begin{equation*} J_0 \leq c \left( \frac{d^s}{\Vol{x}{d}} \chi_{\set{ d \leq 1}} + \frac{1}{\Vol{x}{1}}\chi_{\set{d > 1}} \right) e^{-\gamma d} \end{equation*} \end{proof} \begin{lemma} There is some $\hat{c}$ such that if $R \geq \hat{c}$, there is some constant $\gamma > 0$ such that for any $s < \nu$ we have \begin{equation} \int_1^\infty t^{s/2 - 1} e^{-t} p_t(x,y) \mathop{}\mathrm{d} t \leq C \left(\frac{d^s}{\Vol{x}{d}} \chi_{\set{d \leq 1}} + \frac{1}{\Vol{x}{1}} \chi_{\set{d > 1}} \right) e^{-\gamma d} \end{equation} \end{lemma} \begin{proof} From Proposition \ref{pr:estimate on the heat kernel}, it follows, since $R \geq \hat{c}$, that \begin{equation*} J_\infty \leq \frac{C}{\Vol{x}{1}} \int_{1}^\infty t^{s/2-1}e^{-\frac{d^2}{ct}} e^{-\gamma_0 t}\mathop{}\mathrm{d} t
\end{equation*} For any $\alpha \in (0,1)$, the function $e^{-\alpha t - \frac{ d^2}{ct}}$ attains its maximum when $\alpha t = \frac{d^2}{c t}$, and so is at most $e^{-2 \sqrt{\frac{\alpha}{c}} d}$. Take $\gamma_1 \in (0, 1)$ such that $\alpha = \frac{c}{4}\gamma_1^2 < \gamma_0$; then we have \begin{equation*} J_\infty \leq C\left(\int_1^\infty t^{s/2 - 1} e^{(\alpha - \gamma_0)t} \mathop{}\mathrm{d} t \right) \frac{1}{\Vol{x}{1}} e^{-\gamma_1 d} \end{equation*} Thus there is $\gamma \in (0,1)$ depending on $\gamma_0, c$, and a constant $C$ depending only on $s, c, \gamma_0$ and the doubling constants, such that \begin{equation*} J_\infty \leq C \frac{1}{\Vol{x}{1}} e^{-\gamma d} \end{equation*} When $d \leq 1$, we also have $\frac{1}{\Vol{x}{1}} \leq a^{-1} \frac{d^\nu}{\Vol{x}{d}} \leq a^{-1} \frac{d^s}{\Vol{x}{d}}$, using $\rd{R}$ and $s < \nu$. Hence, for $d(x,y) \leq 1$ we have \begin{equation*} J_\infty \leq C\frac{d^s}{\Vol{x}{d}}e^{-\gamma_1 d}. \end{equation*} Thus there is a constant $C$ such that \begin{equation*} J_\infty \leq C \left(\frac{d^s}{\Vol{x}{d}} \chi_{\set{d \leq 1}} + \frac{1}{\Vol{x}{1}} \chi_{\set{d > 1}} \right) e^{-\gamma d}. \end{equation*} \end{proof} Thus there is $c_0 > 0$, which depends on $s, \gamma_0, c$ and the doubling and reverse doubling constants, and $\gamma \in (0, 1)$, depending on $c$ and $\gamma_0$, such that \begin{equation} g_{s,1}(x,y) \leq c_0 \left( \frac{d(x,y)^s}{\Vol{x}{d(x,y)}}\chi_{\set*{ d(x,y) \leq 1}} + \frac{1}{\Vol{x}{1}}\left(\chi_{\set*{d(x,y) > 1}}\right) \right) e^{-\gamma d} \end{equation} \end{proof} \section{Proof of the main results}\label{sec:Proof} Let $(M,g,\mu)$ be a weighted Riemannian manifold and let $V\in L^1_{loc}(M, \mathop{}\mathrm{d}\mu)$ with $V \geq 0$. For any $R > 0$ and $p \geq 1$, we define $N_p(V)$ and $N_{p,R}(V)$ as in \eqref{eq:Morrey} and \eqref{eq:def morrey norm}. Notice that $N_p(V) = M_{2p}(V^p)^{1/p}$.
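In terms of the maximal functions $M_{s,\delta}$ used throughout (so that, at scale $R$, $N_{p,R}(V) = \sup_{x} M_{2p,R}\left(V^p\right)(x)^{1/p}$), this identity unwinds as \begin{equation*} N_{p,R}(V) = \sup_{x \in M}\, \sup_{0 < r < R} \left( r^{2p} \fint_{B(x,r)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p} = \sup_{x \in M}\, \sup_{0 < r < R} r^{2} \left( \fint_{B(x,r)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p}, \end{equation*} a Morrey-type quantity in $V$.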
Though we can deduce Theorem \ref{th:Fefferman-Phong generalized} as a special case of Theorem \ref{th:Weak_positivity}, we start by giving a separate, simpler proof of it. The general idea behind the proofs of both theorems is the same, but in the case of Theorem \ref{th:Weak_positivity} much more care is required in establishing the bounds on the norms of certain operators. \subsection{Proof of Theorem \ref{th:Fefferman-Phong generalized}} We first make the technical hypothesis that $\mu$ satisfies the reverse doubling property $\rd{}$ with reverse doubling order $\nu > 1$, and we assume that $M$ admits $\FK{}$. Let $\psi \in \SmoothComp{M}$ and define $\varphi = \Delta^{1/2} \psi$, so that $\psi = \Delta^{-1/2} \varphi$. We have, using that $\Delta^{-1/2}\left(V^{1/2}\cdot\right)$ is the adjoint of $V^{1/2} \Delta^{-1/2}$, \begin{align*} \left\langle V\psi,\psi \right\rangle &= \left\|V^{1/2} \Delta^{-1/2} \varphi \right\|^2 \\ &\leq \left\|V^{1/2}\Delta^{-1/2} \right\|^2_{L^2\rightarrow L^2} \left\|\varphi\right\|^2 \\ &\leq \left\|\Delta^{-1/2}\left( V^{1/2}\cdot \right) \right\|^2_{L^2\rightarrow L^2} \left\|\Delta^{1/2}\psi\right\|^2 \\ &\leq \left\|\Delta^{-1/2}\left( V^{1/2}\cdot \right) \right\|^2_{L^2\rightarrow L^2} \left\|\nabla\psi\right\|^2 \end{align*} But, by Proposition \ref{pr:riesz_potential} and Theorem \ref{th_perez_wheeden}, we have that $\|\Delta^{-1/2} f\|_2 \leq C\|M_1 f \|_2$. Moreover, for $q = 2p$ we have \begin{align*} M_{1}\left(V^{1/2} f\right)(x) &\leq M_q \left(V^{q/2}\right)(x)^{1/q} M_0(|f|^{q'})(x)^{1/q'} \\ &\leq N_p(V)^{1/2} M_0(|f|^{q'})(x)^{1/q'}, \\ \end{align*} using that $N_p(V) = M_{2p}\left(V^p\right)^{1/p}$.
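The first inequality in the last display is Hölder's inequality on each ball: for $B = B(x,r)$ and $q = 2p$, \begin{equation*} r \fint_B V^{1/2} |f| \mathop{}\mathrm{d}\mu \leq \left( r^{q} \fint_B V^{q/2} \mathop{}\mathrm{d}\mu \right)^{1/q} \left( \fint_B |f|^{q'} \mathop{}\mathrm{d}\mu \right)^{1/q'}, \end{equation*} and taking the supremum over $r$ gives the stated bound; note that $M_q\left(V^{q/2}\right)(x)^{1/q} = M_{2p}\left(V^p\right)(x)^{1/(2p)} \leq N_p(V)^{1/2}$ precisely because $q = 2p$.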
Then, using the fact that for any $r > 1$, $M_0$ is bounded on $L^r$, we have \begin{align*} \left\|M_1 \left(V^{1/2} f \right)\right\|_2 &\leq N_p(V)^{1/2} \left\| M_0(|f|^{q'}) \right\|_{2/q'}^{1/q'} \\ &\leq C N_p(V)^{1/2} \left\| |f|^{q'} \right\|_{2/q'}^{1/q'} \\ &\leq C N_p(V)^{1/2} \left\| f \right\|_2 \end{align*} Thus we can estimate the operator norm of $\Delta^{-1/2}\left(V^{1/2}\cdot\right)$, and we get \begin{equation} \int_M V \psi^2 \mathop{}\mathrm{d}\mu \leq C N_p(V) \left\| \nabla\psi \right\|^2 \end{equation} \subsection{Proof of Theorem \ref{th:Weak_positivity}} We first prove the following weaker version of Theorem \ref{th:Weak_positivity}. The more general result will follow by removing the technical hypothesis of $\rd{R}$ with $\nu > 1$. \begin{theorem}\label{th:c_lambda_estimates} Let $(M, g, \mu)$ be a weighted Riemannian manifold satisfying $\FK{R}$ for some $R > 0$, and $\rd{R}$ for some $\nu > 1$. Then for any $p > 1$, there are positive constants $\hat{c}, C_p$, with $\hat{c} > 1$, depending only on the Faber-Krahn and doubling constants (and, for $C_p$, on $p$), such that for any $\lambda > \hat{c}R^{-1}$, any $V \in L^1_{loc}(M, \mathop{}\mathrm{d}\mu)$ with $V \geq 0$, and any $\psi \in \SmoothComp{M}$, \begin{equation}\label{eq:c_lambda_estimates} \int_M V \psi^2 \mathop{}\mathrm{d}\mu \leq C_p N_{p,\lambda^{-1}}(V) \left( \int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu + \lambda^2 \int_M \psi^2 \mathop{}\mathrm{d}\mu \right) \end{equation} \end{theorem} \subsubsection{Proof of Theorem \ref{th:c_lambda_estimates}} Let $(M,g,\mu)$ be a weighted manifold satisfying $\FK{R}$ and $\rd{R}$, with $R > 0$ and $\nu > 1$. For $\hat{c}$ the constant given in Proposition \ref{pr:estimate on the heat kernel}, take $\lambda > 2\frac{\hat{c}}{R}$.
For $s \geq 0$ and $\delta > 0$ we recall that $M_{s, \delta}$ is the maximal function defined by \begin{equation} M_{s,\delta} f(x) = \sup_{r < \delta} r^s \fint_{B(x,r)} f \mathop{}\mathrm{d}\mu \end{equation} For a given $\lambda > 0$ and $p \geq 1$ we will write $K_p = N_{p, \lambda^{-1}}(V) = \sup_x M_{2p,\lambda^{-1}} \left(V^p\right)(x)^{1/p}$. If $K_p$ is infinite, then inequality \eqref{eq:c_lambda_estimates} is trivially true, so we may suppose that $K_p < \infty$. We have: \begin{lemma}Let $(M, g, \mu)$ be a weighted manifold. Then for $V \geq 0$ locally integrable and $\lambda \geq 0$ we have \begin{equation}\label{eq:C_lambda bounded L2 bessel} \left\langle V\psi,\psi\right\rangle \leq \left\|G_{1,\lambda} \left( V^{1/2} \cdot \right) \right\|_{L^2\rightarrow L^2}^2 \left( \|\nabla\psi\|^2 + \lambda^2 \|\psi\|^2 \right) \end{equation} \end{lemma} \begin{proof} Let $\psi \in \mathcal{C}_0^\infty(M)$, and define $\varphi = \left( \Delta + \lambda^2 \right)^{1/2} \psi$. Then $\psi = G_{1,\lambda} \varphi$ and, using the fact that $G_{1,\lambda}\left(V^{1/2}\cdot \right)$ is the adjoint of $V^{1/2} G_{1,\lambda}$, \begin{align*} \left\langle V\psi,\, \psi \right\rangle &= \left\langle V^{1/2}G_{1,\lambda} \varphi,\, V^{1/2}G_{1,\lambda}\varphi \right\rangle \\ &= \left\|V^{1/2} G_{1,\lambda} \varphi \right\|_2^2 \\ &\leq \left\| V^{1/2} G_{1,\lambda} \right\|_{L^2 \rightarrow L^2}^2 \|\varphi \|_2^2 \\ &\leq \left\| G_{1,\lambda} \left(V^{1/2} \cdot \right) \right\|_{L^2 \rightarrow L^2}^2 \left\| \left( \Delta + \lambda^2\right)^{1/2} \psi \right\|_2^2 \\ &\leq \left\| G_{1,\lambda} \left(V^{1/2} \cdot \right) \right\|_{L^2 \rightarrow L^2}^2 \left( \left\|\nabla \psi \right\|_2^2 + \lambda^2 \|\psi\|_2^2 \right), \end{align*} which is what we wanted to show.
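The last step is in fact an equality, by the spectral theorem and integration by parts on $\SmoothComp{M}$: \begin{equation*} \left\| \left( \Delta + \lambda^2 \right)^{1/2} \psi \right\|_2^2 = \left\langle \left( \Delta + \lambda^2 \right)\psi,\, \psi \right\rangle = \left\| \nabla\psi \right\|_2^2 + \lambda^2 \left\| \psi \right\|_2^2. \end{equation*}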
\end{proof} Now, since $M$ satisfies $\FK{R}$ and $\rd{R}$ with $\nu > 1$, we can apply Proposition \ref{prop_sep_g_lambda}: for $R\lambda > \hat{c}$ we have $G_{1,\lambda} \leq c_0(T_1 + T_2)$, with \begin{equation}\label{eq:T_1 and T_2} \begin{aligned} T_1 f(x) &= \int_{\lambda d(x,y) \leq 1} \frac{d(x,y)}{\Vol{x}{d(x,y)}} e^{-\gamma \lambda d(x,y)} f(y) \mathop{}\mathrm{d}\mu (y) \\ T_2 f(x) &= \frac{\lambda^{-1}}{\Vol{x}{\lambda^{-1}}} \int_{\lambda d(x,y) > 1} e^{-\gamma \lambda d(x,y)} f(y) \mathop{}\mathrm{d}\mu(y) \end{aligned} \end{equation} Thus we have $\left\|G_{1,\lambda}\left(V^{1/2} \cdot \right) \right\|_2 \leq c_0\left( \left\|T_1\left(V^{1/2} \cdot \right) \right\|_2 + \left\|T_2\left(V^{1/2} \cdot \right) \right\|_2\right)$, and it remains to estimate these two operator norms. \begin{lemma}Let $(M, g, \mu)$ be a weighted Riemannian manifold and let $\lambda > 0$. Assume $\doubling{R}$ and $\rd{R}$ for $R \geq \lambda^{-1}$. Then for $T_1$ defined as in \eqref{eq:T_1 and T_2} and $V \geq 0$ locally integrable, there is some constant $C_{1,p}$, which depends only on $p$, $\gamma$ and the reverse doubling and doubling constants, such that \begin{equation} \left\|T_1\left(V^{1/2}\cdot \right)\right\|_2 \leq C_{1,p} K_p^{1/2} \end{equation} \end{lemma} \begin{proof}We can apply Corollary \ref{cor:riesz_bounded}: for any $p \geq 1$ and any locally integrable $f$, we have $\|T_1 f\|_p \leq c_p \| M_{1,\lambda^{-1}}f\|_p$.
Then, for any $\psi \in \mathcal{C}_0^\infty(M)$, with $q = 2p$ and $q' = q/(q-1)$, Hölder's inequality gives \begin{align*} M_{1,\lambda^{-1}}\left(V^{1/2} \psi\right)(x) &\leq \left(M_{2p,\lambda^{-1}}\left(V^{p}\right)(x)\right)^{1/2p}\left( M_{0,\lambda^{-1}}\left(\psi^{q'}\right)(x)\right)^{1/q'} \\ &\leq K_p^{1/2} M_{0,\lambda^{-1}}\left(\psi^{q'}\right)(x)^{1/q'}, \end{align*} hence \begin{align*} \left\|T_1\left(V^{1/2}\psi \right) \right\|_2 &\leq c_p K_p^{1/2} \left\| M_{0,\lambda^{-1}} \left(\psi^{q'}\right) \right\|_{2/q'}^{1/q'} \\ &\leq c_p \tilde{c}_{2/q'} K_p^{1/2} \left\| \psi \right\|_2, \end{align*} where we used that $\|M_{0,\lambda^{-1}} f\|_r \leq \tilde{c}_r \|f\|_r$ for any $f\in L^r$, $r\in (1,\infty]$. Thus \begin{equation} \left\|T_1\left(V^{1/2}\, \cdot \, \right) \right\|_{L^2 \rightarrow L^2} \leq C_{1,p} K_p^{1/2}. \end{equation} \end{proof} \begin{lemma}Let $(M, g, \mu)$ be a weighted Riemannian manifold and let $\lambda > 0$. Assume $\doubling{R}$ and $\rd{R}$ for some $R \geq \lambda^{-1}$. Then for $T_2$ defined as in \eqref{eq:T_1 and T_2} and $V \geq 0$ locally integrable, there is a constant $C_{2,p}$, which depends only on $p$, $\gamma$ and the doubling and reverse doubling constants, such that \begin{equation} \left\|T_2\left(V^{1/2}\cdot \right)\right\|_{L^2 \rightarrow L^2} \leq C_{2,p} K_p^{1/2}. \end{equation} \end{lemma} \begin{proof}We bound $T_2\left(V^{1/2}\cdot\right)$ from above by an operator to which we can apply the Schur test.
We have \begin{align*} T_2 f(x) &= \frac{\lambda^{-1}}{\Vol{x}{\lambda^{-1}}} \int_{\lambda d > 1} e^{-\gamma \lambda d(x,y)} f(y) \mathop{}\mathrm{d}\mu(y) \\ &= \gamma \frac{1}{\Vol{x}{\lambda^{-1}}} \int_{\lambda^{-1}}^\infty e^{-\gamma \lambda r} \int_{\lambda^{-1} < d < r} f(y) \mathop{}\mathrm{d}\mu(y)\mathop{}\mathrm{d} r. \end{align*} Then, for $\psi \in \mathcal{C}_0^\infty(M)$, $q = 2p$, $q' = q/(q-1)$, by Hölder's inequality, \begin{equation} T_2\left( V^{1/2} \psi \right)(x) \leq \frac{\gamma}{\Vol{x}{\lambda^{-1}}} \int_{\lambda^{-1}}^{\infty} e^{-\gamma\lambda r} \left( \int_{\lambda^{-1} < d < r} V^{p} \mathop{}\mathrm{d}\mu \right)^{1/2p} \left( \int_{\lambda^{-1} < d < r} \psi^{q'} \mathop{}\mathrm{d} \mu \right)^{1/q'} \mathop{}\mathrm{d} r. \end{equation} We now cover the annulus $B(x,r) \setminus B(x,\lambda^{-1})$ by balls $B_i = B(x_i, \lambda^{-1})$, $x_i \in B(x,r)$, such that for $i\neq j$, $\frac{1}{2}B_i \cap \frac{1}{2} B_j = \emptyset$. We have \begin{align*} \int_{\lambda^{-1} < d < r} V^p \mathop{}\mathrm{d} \mu &\leq \sum_i \int_{B_i} V^{p} \mathop{}\mathrm{d} \mu \leq \sum_i \lambda^{2p} K_p^p\mu(B_i) \\ &\leq A^2 \lambda^{2p} K_p^p \sum_i \mu\left(\frac{1}{2}B_i\right) \\ &\leq C \lambda^{2p} K_p^p \Vol{x}{r+ \frac{1}{2\lambda}}. \end{align*} Then \begin{equation} T_2\left( V^{1/2} \psi \right)(x) \leq \gamma K_p^{1/2} \lambda \int_{\lambda^{-1}}^\infty e^{-\gamma\lambda r} \frac{\Vol{x}{r + \frac{1}{2\lambda}}^{1/2p}}{\Vol{x}{\lambda^{-1}}} \left( \int_{\lambda^{-1}< d < r} \psi^{q'} \mathop{}\mathrm{d} \mu \right)^{1/q'} \mathop{}\mathrm{d} r. \end{equation} Since the measure is $R$-doubling, with $R > 2 \hat{c} \lambda^{-1}$, it is also $R'$-doubling for all $R' \leq R$, with the same constants.
For $1 < \rho < 2$, let $R' = \rho \hat{c}\lambda^{-1}$; then $\lambda^{-1} < R' \leq R$, and by Propositions \ref{pr:annuli_bound} and \ref{pr:exp doubling}, \begin{align*} \frac{\Vol{x}{r+\frac{1}{2\lambda}}}{\Vol{x}{\lambda^{-1}}} &\leq C \frac{\Vol{x}{r}}{\Vol{x}{\lambda^{-1}}} \\ &\leq C \frac{\Vol{x}{r}}{\Vol{x}{R'}} \frac{\Vol{x}{R'}}{\Vol{x}{\lambda^{-1}}} \\ &\leq C e^{D\frac{r}{R'}}, \end{align*} with $C$ depending only on $\rho$, $\hat{c}$ and the doubling constant. Thus \begin{equation} T_2\left(V^{1/2}\psi \right) (x) \leq C K_p^{1/2} \lambda \int_{\lambda^{-1}}^\infty e^{\left( \frac{D}{2p R'} - \gamma\lambda \right) r} \left( \frac{1}{\Vol{x}{\lambda^{-1}}}\int_{\lambda^{-1} < d < r} \psi^{q'} \mathop{}\mathrm{d}\mu \right)^{1/q'} \mathop{}\mathrm{d} r, \end{equation} where the constant $C$ depends on $p$, $b$, $\eta$ and on the chosen arbitrary parameters. Finally, for $\rho = \frac{D}{(1 - \theta)2p\gamma\hat{c}}$ with $\theta \in (0,1)$, we get $\frac{D}{2pR'} - \gamma \lambda = \left( \frac{D}{2p\rho \hat{c}} - \gamma \right)\lambda = -\theta\gamma\lambda$, and thus \begin{equation} T_2\left(V^{1/2}\psi \right) (x) \leq C K_p^{1/2} \lambda \int_{\lambda^{-1}}^\infty e^{-\theta\gamma\lambda r} \left( \frac{1}{\Vol{x}{\lambda^{-1}}}\int_{\lambda^{-1} < d < r} \psi^{q'} \mathop{}\mathrm{d}\mu \right)^{1/q'} \mathop{}\mathrm{d} r. \end{equation} Note that we can indeed suppose $\rho = \frac{D}{(1-\theta)2p\gamma\hat{c}}$: by the proof of Proposition \ref{pr:estimate on the heat kernel}, we have $\frac{D}{\hat{c}} = 4\sqrt{\frac{(1 - \gamma)(\kappa - 1)}{c}}$, with $1 < \kappa < \frac{1}{4}c$, and so $\rho = \frac{\sqrt{(1 - \gamma)(\kappa - 1)}}{\gamma} \frac{2}{(1 - \theta) p \sqrt{c}}$.
Since the choice of $c > 4$ in the estimate on the heat kernel is arbitrary, and since $\gamma$ can always be taken arbitrarily small, we can choose them so that $\frac{D}{(1-\theta)2p\gamma\hat{c}}$ is equal to the chosen $\rho$. Then by Hölder's inequality, \begin{align*} T_2\left(V^{1/2}\psi \right) &\leq C K_p^{1/2} \lambda\left( \int_{\lambda^{-1}}^\infty e^{-\theta{\gamma\lambda}r} \mathop{}\mathrm{d} r \right)^{1/q} \left( \int_{\lambda^{-1}}^\infty \frac{e^{-\theta{\gamma\lambda}r}}{\Vol{x}{\lambda^{-1}}} \int_{\lambda^{-1} < d < r} \psi^{q'} \mathop{}\mathrm{d}\mu \mathop{}\mathrm{d} r \right)^{1/q'} \\ &\leq C K_p^{1/2} \lambda \left(\frac{1}{\theta\gamma\lambda} e^{-{\theta\gamma}} \right)^{1/q} \left(\frac{1}{\theta\gamma\lambda} \int_{\lambda d > 1} e^{-\theta\gamma\lambda d(x,y)} \frac{\psi^{q'}(y)}{\Vol{x}{\lambda^{-1}}}\mathop{}\mathrm{d}\mu(y) \right)^{1/q'} \\ &\leq \frac{C K_p^{1/2}}{\theta\gamma} e^{-\theta\gamma/q} \left(\int_{\lambda d > 1} e^{-\theta{\gamma\lambda}d(x,y)} \frac{\psi^{q'}(y)}{\Vol{x}{\lambda^{-1}}}\mathop{}\mathrm{d}\mu(y) \right)^{1/q'}. \end{align*} We will now show that there is a $\theta \in (0, 1)$ such that the operator $S$ defined by \begin{equation} S\psi = \int_{\lambda d > 1} e^{-\theta{\gamma\lambda}d(x,y)} \frac{\psi(y)}{\Vol{x}{\lambda^{-1}}}\mathop{}\mathrm{d}\mu(y) \end{equation} is bounded on $L^p$ for every $p\in [1, \infty]$. We use the Schur test: $S$ is given by the kernel $K(x,y) = \frac{e^{-\theta\gamma \lambda d(x,y)}}{\Vol{x}{\lambda^{-1}}} \chi_{\set{\lambda d(x,y) > 1}}$; if for some constant $L > 0$ we have $\int_M K(x,y) \mathop{}\mathrm{d}\mu(y) \leq L$ for almost every $x \in M$ and $\int_M K(x,y) \mathop{}\mathrm{d}\mu(x) \leq L$ for almost every $y \in M$, then $S$ is bounded on $L^p$ for all $1 \leq p \leq +\infty$, with all the operator norms at most $L$.
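For the reader's convenience, we recall why this form of the Schur test holds (a standard argument): the row bound gives boundedness on $L^\infty$, the column bound gives boundedness on $L^1$, and the Riesz--Thorin interpolation theorem covers the intermediate exponents,
\begin{equation*} \|S\|_{L^\infty \rightarrow L^\infty} \leq \sup_x \int_M K(x,y) \mathop{}\mathrm{d}\mu(y) \leq L, \qquad \|S\|_{L^1 \rightarrow L^1} \leq \sup_y \int_M K(x,y) \mathop{}\mathrm{d}\mu(x) \leq L, \end{equation*}
so that $\|S\|_{L^p \rightarrow L^p} \leq \|S\|_{L^1 \rightarrow L^1}^{1/p}\, \|S\|_{L^\infty \rightarrow L^\infty}^{1 - 1/p} \leq L$ for all $1 \leq p \leq \infty$.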
We have \begin{align*} \int_{d\lambda > 1} \frac{e^{-\theta\gamma \lambda d(x,y)}}{\Vol{x}{\lambda^{-1}}} \mathop{}\mathrm{d}\mu(y) &= \frac{1}{\Vol{x}{\lambda^{-1}}} \int_{d\lambda > 1} e^{-\theta \gamma \lambda d(x,y)} \mathop{}\mathrm{d}\mu(y) \\ &\leq \frac{1}{\Vol{x}{\lambda^{-1}}} \theta \gamma \lambda \int_{\lambda^{-1}}^\infty e^{-\theta \gamma \lambda r} \Vol{x}{r} \mathop{}\mathrm{d} r \\ &\leq \theta \gamma \lambda\frac{\Vol{x}{R'}}{\Vol{x}{\lambda^{-1}}} \int_{\lambda^{-1}}^\infty e^{-\theta\gamma \lambda r}\frac{\Vol{x}{r}}{\Vol{x}{R'}} \mathop{}\mathrm{d} r \\ &\leq C \lambda A \left(\rho \hat{c} \right)^\eta \int_{\lambda^{-1}}^\infty e^{\left( \frac{D}{R'} - \theta\gamma\lambda\right)r} \mathop{}\mathrm{d} r \\ &\leq C \lambda \int_{\lambda^{-1}}^\infty e^{\left( 2(1 - \theta)p \gamma - \theta \gamma \right) \lambda r} \mathop{}\mathrm{d} r \\ &\leq \tilde{c}_1, \end{align*} for any $\theta$ such that $2(1 - \theta)p \gamma - \theta \gamma \leq -\frac{\gamma}{2}$; the constant $\tilde{c}_1$ depends on $\theta, \gamma, b, \eta, \rho$, but not on $\lambda$ or $R$.
We also have \begin{align*} \int_{d\lambda > 1} \frac{e^{-\theta\gamma \lambda d(x,y)}}{\Vol{x}{\lambda^{-1}}} \mathop{}\mathrm{d}\mu(x) &= \theta\gamma\lambda \int_{\lambda^{-1}}^\infty e^{-\theta\gamma\lambda r} \int_{B(y,r)} \frac{\mathop{}\mathrm{d}\mu(x)}{\Vol{x}{\lambda^{-1}}} \mathop{}\mathrm{d} r \\ &\leq C \lambda \int_{\lambda^{-1}}^\infty e^{-\theta\gamma\lambda r} C e^{D\frac{r}{R'}} \frac{\Vol{y}{r}}{\Vol{y}{\lambda^{-1}}} \mathop{}\mathrm{d} r\\ &\leq C \lambda \int_{\lambda^{-1}}^\infty e^{-\theta\gamma\lambda r} e^{2D\frac{r}{R'}} \mathop{}\mathrm{d} r \\ &\leq C \lambda \int_{\lambda^{-1}}^\infty e^{\left(4(1 - \theta) p \gamma - \theta \gamma \right) \lambda r}\mathop{}\mathrm{d} r \\ &\leq \tilde{c}_2, \end{align*} where we take $\theta$ such that $4(1 - \theta)p \gamma - \theta \gamma = -\frac{\gamma}{2}$, i.e.\ $\theta = \frac{\frac{1}{2} + 4p}{1 + 4p} \in (0, 1)$; the constant $\tilde{c}_2$ does not depend on $\lambda$ or $R$. With this $\theta$ we also have $2(1 - \theta)p \gamma - \theta\gamma \leq -\frac{\gamma}{2}$. Thus by the Schur test, $S$ is bounded on $L^p$ for all $1 \leq p \leq \infty$ with an operator norm that does not depend on $\lambda$ or $R$. Since $T_2 \left(V^{1/2}\psi \right) \leq C \left(S\left(\psi^{q'}\right)\right)^{1/q'}$, we get \begin{align*} \left\|T_2 \left(V^{1/2}\psi \right) \right\|_2^2 &\leq C \left\| S \left(\psi^{q'}\right) \right\|_{2/q'}^{2/q'} \\ &\leq C \left\|\psi\right\|_2^2. \end{align*} We conclude that there is some constant $C_{2,p}$, which depends only on $p, b, \eta$ and the $\gamma, c$ that we chose in the estimate on the heat kernel, such that \begin{equation} \left\|T_2\left( V^{1/2} \cdot \right) \right\|_{L^2 \rightarrow L^2} \leq C_{2,p} K_p^{1/2}. \end{equation} \end{proof} Applying the three lemmas, we obtain $\left\| G_{1,\lambda} \left(V^{1/2} \cdot \right) \right\|_{L^2 \rightarrow L^2}^2 \leq c_0^2\left(C_{1,p}+ C_{2,p}\right)^2 K_p$.
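For the record, here is the elementary algebra behind the choice of $\theta$ in the proof above: dividing $4(1-\theta)p\gamma - \theta\gamma = -\gamma/2$ by $\gamma$ gives
\begin{equation*} 4p + \tfrac{1}{2} = \theta\,(4p + 1), \qquad \text{i.e.} \qquad \theta = \frac{\frac{1}{2} + 4p}{1 + 4p} \in (0,1), \end{equation*}
and then $2(1-\theta)p\gamma - \theta\gamma \leq 4(1-\theta)p\gamma - \theta\gamma = -\gamma/2$, since $(1-\theta)p\gamma \geq 0$.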
Thus there are constants $\hat{c}$ and $C_p$, depending only on the doubling constants (and, for $C_p$, on $p$), such that for $R\lambda > \hat{c}$ and $V \geq 0$ locally integrable, \begin{equation*} \int_M V \psi^2 \mathop{}\mathrm{d}\mu \leq C_p N_{p,\lambda^{-1}}(V) \left( \int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu + \lambda^2 \int_M \psi^2 \mathop{}\mathrm{d}\mu \right). \end{equation*} We will now prove Theorems \ref{th:Weak_positivity}, \ref{th:lower_bound estimates} and \ref{th:Positive lambda_1} when $\rd{R}$ with $\nu > 1$ holds. \subsubsection{Proof of Theorem \ref{th:Weak_positivity}} Since Theorem \ref{th:c_lambda_estimates} holds only for $\lambda > \hat{c}R^{-1}$, with $\hat{c} > 1$, we immediately get \eqref{eq:Weak_majoration} only for $R' \leq \frac{1}{\hat{c}}R$. A little more work is needed to get it for $R$ itself. \begin{proof}[Proof of Theorem \ref{th:Weak_positivity}] By Theorem \ref{th:c_lambda_estimates}, for any $p > 1$ and any $\lambda > 0$ such that $\lambda R > \hat{c}$, \begin{equation*} \left\langle V\psi, \psi \right\rangle \leq C_p N_{p, \lambda^{-1}}(V) \left( \|\nabla \psi\|^2 + \lambda^2 \|\psi\|^2\right). \end{equation*} In fact, this inequality also holds for $\lambda = \frac{\hat{c}}{R}$, by letting $\lambda$ decrease to $\frac{\hat{c}}{R}$. We apply the inequality with this value of $\lambda$, and use that the function $r \mapsto N_{p,r}(V)$ is non-decreasing together with $\lambda^{-1} < R$. Thus \begin{equation} \left\langle V\psi, \psi \right\rangle \leq C_p N_{p,R}(V) \left( \|\nabla\psi\|^2 + \frac{\hat{c}^2}{R^2} \|\psi\|^2\right) \leq C_p \hat{c}^2 N_{p,R}(V) \left( \|\nabla\psi\|^2 + \frac{1}{R^2}\|\psi\|^2 \right). \end{equation} \end{proof} \subsubsection{Proof of Theorem \ref{th:Positive lambda_1}} We now suppose that $\lambda_1(M) > 0$. Then the previous results can be strengthened to prove Theorem \ref{th:Positive lambda_1}.
\begin{proof} We apply Theorem \ref{th:Weak_positivity}, and use that $\lambda_1(M) \int_M \psi^2 \mathop{}\mathrm{d}\mu \leq \int_M |\nabla\psi|^2\mathop{}\mathrm{d}\mu$. Then \begin{equation*} \left\langle V\psi,\psi\right\rangle \leq C_p N_{p,R}(V) \left(1 + \frac{1}{\lambda_{1}(M)R^2}\right) \int_M |\nabla\psi|^2\mathop{}\mathrm{d}\mu, \end{equation*} which rearranges to \begin{equation*} \frac{\lambda_1(M) R^2}{C_p N_{p,R}(V)(1 + \lambda_1(M) R^2)} \int_M V\psi^2 \mathop{}\mathrm{d}\mu \leq \int_M |\nabla \psi|^2 \mathop{}\mathrm{d}\mu. \end{equation*} Averaging this with the spectral gap inequality $\lambda_1(M)\int_M \psi^2 \mathop{}\mathrm{d}\mu \leq \int_M |\nabla\psi|^2\mathop{}\mathrm{d}\mu$ gives \begin{equation*} \frac{\lambda_1(M) R^2}{2C_p N_{p,R}(V)(1 + \lambda_1(M) R^2)} \int_M V\psi^2 \mathop{}\mathrm{d}\mu + \frac{\lambda_1(M)}{2} \int_M \psi^2 \mathop{}\mathrm{d}\mu \leq \int_M |\nabla \psi|^2 \mathop{}\mathrm{d}\mu. \end{equation*} Then, for any $V$, we have \begin{equation} \left\langle V\psi, \psi\right\rangle \leq \frac{2C_p N_{p,R}(V)(1 + \lambda_1(M)R^2)}{\lambda_1(M)R^2} \left( \|\nabla\psi\|^2 - \frac{\lambda_1(M)}{2} \|\psi\|^2 \right), \end{equation} which is \eqref{eq:Positive lambda_1}. \end{proof} \subsection{Proof of Theorem \ref{th:lower_bound estimates}} Let $C_p$ be the constant of Theorem \ref{th:Weak_positivity}. We let \begin{equation} L = \sup_{x, \delta} \left(2C_p \left(\fint_{B(x,\delta)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p} - \delta^{-2} \right). \end{equation} Then we have \begin{align*} \left(\fint_{B(x,\delta)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p} &\leq \frac{L + \delta^{-2}}{2C_p}, \\ \left(M_{2p,\delta} (V^p)(x) \right)^{1/p} &\leq \frac{\delta^2 L + 1}{2C_p}. \end{align*} Take $\delta = L^{-1/2}$; then $N_{p,\delta}(V) \leq \frac{1}{C_p}$, and by Theorem \ref{th:Weak_positivity} we have \begin{equation} \left\langle V\psi, \psi\right\rangle - \|\nabla\psi\|_2^2 \leq L\|\psi\|^2, \end{equation} thus \begin{equation} - \lambda_1(\Delta - V) \leq \sup_{x, \delta} \left(2C_p \left(\fint_{B(x,\delta)} V^p \mathop{}\mathrm{d}\mu \right)^{1/p} - \delta^{-2} \right).
\end{equation} Conversely, let $r > 0$ with $2r \leq R$, and define $f_r : [0, \infty) \rightarrow [0, +\infty)$ by $f_r(t) = r$ if $t \leq r$, $f_r(t) = 2r - t$ if $t \in (r, 2r]$ and $f_r(t) = 0$ if $t > 2r$. For $o \in M$, set $\psi(x) = f_r(d(o,x))$. Then $\psi$ is a Lipschitz function with compact support, and we have, by $\doubling{R}$, \begin{align*} \lambda_1(\Delta - V) &\leq \frac{\|\nabla \psi\|^2 - \int_M V\psi^2 \mathop{}\mathrm{d}\mu}{\|\psi\|^2} \\ &\leq \frac{\Vol{o}{2r}}{r^2\Vol{o}{r}} - \fint_{B(o,r)} V\mathop{}\mathrm{d}\mu \\ &\leq A r^{-2} - \fint_{B(o,r)} V\mathop{}\mathrm{d}\mu \\ &\leq (r/\sqrt{A})^{-2} - A^{-1 - \eta/2} \fint_{B(o,r/\sqrt{A})} V \mathop{}\mathrm{d}\mu, \end{align*} and this for all such $r$. Thus \begin{equation} -\lambda_1(\Delta - V) \geq \sup_{x,\delta} \left( A^{-1 - \eta/2} \fint_{B(x,\delta)} V \mathop{}\mathrm{d}\mu - \delta^{-2}\right). \end{equation} \subsection{Removing the dependency on reverse doubling} Let $M$ be a manifold that satisfies $\FK{}$. We consider $\tilde{M} = \mathbf{R}\times M$, with $(\tilde{M},\tilde{g}, \tilde{\mu})$ the product Riemannian manifold: $\tilde{g} = \mathop{}\mathrm{d} x^2 + g$, $\mathop{}\mathrm{d}\tilde{\mu} = \mathop{}\mathrm{d} x \mathop{}\mathrm{d}\mu$. For $V \in L_{loc}^1(M)$ we define $\tilde{V}(x,m) = V(m)$. We write $\tilde{\Delta}$ for the Laplacian on $(\tilde{M},\tilde{g},\tilde{\mu})$, and $\Delta$ for the Laplacian on $(M,g,\mu)$. The Morrey norm in $\tilde{M}$ is written $\tilde{N}_{p,R}$. We have: \begin{proposition} $(\tilde{M},\tilde{g},\tilde{\mu})$ satisfies the following properties: \begin{enumerate} \item If $\mu$ is $R$-doubling, then $\tilde{\mu}$ is $R$-doubling, and $R$-reverse doubling with order $\nu > 1$. \item The heat kernel of $\tilde{M}$ is $\tilde{p}_t((x,m),(y,n)) = \frac{1}{\sqrt{4\pi t}} e^{-\frac{|x - y|^2}{4t}} p_t(m,n)$. \item If $M$ satisfies $\FK{R}$, then there is some $\theta \in (0,1)$ such that $\tilde{M}$ satisfies $\FK{\theta R}$.
$\theta$ depends only on the Faber--Krahn constants. \item $\lambda_1(\tilde{\Delta} - \tilde{V}) = \lambda_1(\Delta - V)$. \item If $\mu$ is $R$-doubling, then there are two constants $c, C$, which depend only on the doubling constant, such that $cN_{p,R}(V) \leq \tilde{N}_{p,R}(\tilde{V}) \leq C N_{p,R}(V)$. \end{enumerate} \end{proposition} \begin{proof}\ $1.$ For $E\subset \mathbf{R}$ measurable, we denote by $|E|$ the Lebesgue measure of $E$. We have \begin{equation}\label{eq:volume comparison} |(-r/2,r/2)|\mu(B(m, r/2)) \leq \tilde{\mu}(\tilde{B}((x,m), r)) \leq |(-r, r)|\mu(B(m,r)). \end{equation} From this, with $r \leq R$, we immediately get $\tilde{\mu}(\tilde{B}((x,m), 2r)) \leq 4 A^2 \tilde{\mu}(\tilde{B}((x,m), r))$, with $A$ the $R$-doubling constant of $\mu$. Moreover, since $\mu$ is $R$-doubling, it is $R$-reverse doubling, with some reverse doubling order $\nu > 0$. Then we have, for $r < r' < \theta R$, \begin{align*} \frac{\tilde{\mu}(\tilde{B}((x,m),r'))}{\tilde{\mu}(\tilde{B}((x,m),r))} &\geq \frac{r'}{2r}\frac{ \mu(B(m,r'/2))}{\mu(B(m,r))} \\ &\geq \frac{1}{2A} \frac{r'}{r} \frac{\Vol{m}{r'}}{\Vol{m}{r}} \\ &\geq \frac{a}{2A} \parenfrac{r'}{r}^{1+\nu}. \end{align*} Thus $\tilde{\mu}$ is reverse doubling of order $\tilde{\nu} = 1 + \nu > 1$. $2., 4.$ We have $\tilde{\Delta} = -\frac{\mathop{}\mathrm{d}^2}{\mathop{}\mathrm{d} x^2} + \Delta$. Thus $\tilde{p}_t((x,m),(y,n)) = \frac{1}{\sqrt{4\pi t}} e^{-\frac{|x - y|^2}{4t}} p_t(m,n)$, and the spectrum of $\tilde{\Delta} - \tilde{V}$ is \begin{equation*} Sp(\tilde{\Delta} - \tilde{V}) = \set*{\lambda + \lambda';\quad \lambda \in Sp(\Delta - V), \lambda' \geq 0}. \end{equation*} Thus the infimum of the spectrum of $\tilde{\Delta} - \tilde{V}$ equals the infimum of the spectrum of $\Delta - V$. $3.$ We use Proposition \ref{pr:heat + doubling implies FKR}. $5.$ We use \eqref{eq:volume comparison}.
Using that $\int_{\tilde{B}} \tilde{V}^p \mathop{}\mathrm{d}\tilde{\mu} \leq 2r \int_B V^p\mathop{}\mathrm{d}\mu$, we have \begin{equation*} \frac{r^{2p}}{\tilde{\mu}(\tilde{B}((x,m),r))} \int_{\tilde{B}} \tilde{V}^p \mathop{}\mathrm{d}\tilde\mu \leq \frac{r^{2p}}{(r/2) \Vol{m}{r/2}} 2r \int_B V^p \mathop{}\mathrm{d}\mu. \end{equation*} Then by $R$-doubling, $\tilde{N}_{p,R}(\tilde{V}) \leq 4A N_{p,R}(V)$. The other inequality is obtained in a similar way. \end{proof} \begin{proof}[Proof of Theorem \ref{th:Weak_positivity}] By points $1.$ and $3.$ of the above proposition, if $(M,g,\mu)$ is a manifold that satisfies $\FK{R}$, then there is some $\theta \in (0, 1)$, depending only on the Faber--Krahn constants, such that $(\tilde{M},\tilde{g},\tilde{\mu})$ satisfies $\FK{\theta R}$ and $\rd{R}$ with $\nu > 1$. We can therefore apply the already established case of Theorem \ref{th:Weak_positivity} to $\tilde{M}$: there is a constant $\tilde{C}_p$ such that if $\tilde{C}_p \tilde{N}_{p,R}(\tilde{V}) \leq 1$, then $\lambda_1(\tilde{\Delta} - \tilde{V}) \geq -\frac{1}{\theta^2 R^2}$. Using $5.$, there is a constant $C_p > 0$ such that $C_p N_{p,R}(V) \geq \tilde{C}_p \tilde{N}_{p,R}(\tilde{V})$. Then, since $\lambda_1(\Delta - V) = \lambda_1(\tilde{\Delta} - \tilde{V})$, if $C_p N_{p,R}(V) \leq 1$, then $\lambda_1(\Delta - V) \geq -\frac{1}{\theta^2 R^2}$. For an arbitrary $V \geq 0$, locally integrable, with $N_{p,R}(V) < +\infty$, we can apply the above to $V/(C_p N_{p,R}(V))$; then for any $\psi \in \SmoothComp{M}$, \begin{equation} \frac{1}{C_p N_{p,R}(V)}\int_M V \psi^2 \mathop{}\mathrm{d} \mu \leq \frac{1}{\theta^2} \int_M \left( |\nabla \psi|^2 + \frac{1}{ R^2} \psi^2 \right) \mathop{}\mathrm{d}\mu, \end{equation} \noindent which is \eqref{eq:Weak_majoration}.
\end{proof} \section{Hardy inequality}\label{sec:Hardy} For some point $o \in M$, the $L^2$ Hardy inequality \begin{equation} \forall \psi \in \SmoothComp{M},\; \int_M \frac{\psi(x)^2}{d(o,x)^2} \mathop{}\mathrm{d}\mu(x) \leq C \int_M |\nabla \psi(x)|^2 \mathop{}\mathrm{d}\mu(x) \end{equation} is equivalent to $\Delta - V \geq 0$, with $V(x) = \frac{1}{C}d(o,x)^{-2}$. Moreover, we have: \begin{proposition} Let $(M, g, \mu)$ be a weighted Riemannian manifold, $R \in (0, \infty]$. If $\mu$ satisfies $\doubling{R}$ and $\rd{R}$ with $\nu > 1$, then for any $p \in (1, \nu/2)$, there is a constant $K_p < \infty$ such that for all $r < R$ we have \begin{equation}\label{eq:hardy_good_potential} r^2 \left( \fint_{B(x,r)} d(o,y)^{-2p} \mathop{}\mathrm{d} \mu \right)^{1/p} \leq K_p. \end{equation} \end{proposition} \begin{proof} We let $\rho(y) = d(o,y)$ and $B= B(x,r)$, for $r < R$. If $r \leq \rho(x)/2$, then for $y \in B(x,r)$, $\rho(y) \geq \rho(x) - r \geq \rho(x)/2 \geq r$. Then \begin{equation*} \int_B \rho(y)^{-2p} \mathop{}\mathrm{d}\mu \leq r^{-2p} \mu(B). \end{equation*} If $r > \rho(x)/2$, then $B(x,r) \subset B(o,3r)$, and \begin{align*} \int_B \rho^{-2p} \mathop{}\mathrm{d}\mu &\leq \int_{B(o,3r)} \rho^{-2p} \mathop{}\mathrm{d}\mu \\ &\leq \int_0^\infty 2p\, t^{-2p - 1} \Vol{o}{\min(t, 3r)} \mathop{}\mathrm{d} t \\ &\leq \int_0^{3r} a^{-1}\, 2p\, t^{\nu - 2p - 1} (3r)^{-\nu} \Vol{o}{3r} \mathop{}\mathrm{d} t + r^{-2p} \Vol{o}{3r} \\ &\leq \left( \frac{1}{3^{2p}a} \frac{2p}{\nu - 2p} + 1\right) r^{-2p}\Vol{o}{3r} \\ &\leq C_p r^{-2p}\Vol{x}{r}, \end{align*} since $\nu > 2p$, with the constant $C_p$ depending only on $p$ and the doubling and reverse doubling constants.
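The second inequality in the chain above is the layer-cake formula: writing $\rho(y)^{-2p} = \int_{\rho(y)}^\infty 2p\, t^{-2p-1} \mathop{}\mathrm{d} t$ (note that the coefficient in front of $t^{-2p-1}$ is exactly $2p$) and applying Fubini's theorem,
\begin{equation*} \int_{B(o,3r)} \rho^{-2p} \mathop{}\mathrm{d}\mu = \int_0^\infty 2p\, t^{-2p-1}\, \mu\left( B(o,3r) \cap \{\rho < t\} \right) \mathop{}\mathrm{d} t \leq \int_0^\infty 2p\, t^{-2p-1}\, \Vol{o}{\min(t, 3r)} \mathop{}\mathrm{d} t. \end{equation*}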
\end{proof} Then, applying Theorems \ref{th:Weak_positivity} and \ref{th:Fefferman-Phong generalized}, we immediately obtain: \begin{corollary} If $(M,g,\mu)$ satisfies $\FK{R}$ and $\rd{R}$ with $\nu > 2$, then there is a constant $C$ such that for any $\psi \in \mathcal{C}_0^\infty(M)$ and $o\in M$, \begin{equation} \int_M \frac{\psi(x)^2}{d(o,x)^2} \mathop{}\mathrm{d}\mu(x) \leq C \left(\|\nabla \psi\|_2^2 + \frac{1}{R^2}\|\psi\|_2^2 \right). \end{equation} \end{corollary} \begin{corollary} If $(M,g,\mu)$ satisfies $\FK{}$ and $\rd{}$ with $\nu > 2$, then there is a constant $C$ such that \begin{equation}\label{eq:Hardy_C} \forall \psi \in \SmoothComp{M},\; \int_M \frac{\psi(x)^2}{d(o,x)^2} \mathop{}\mathrm{d}\mu(x) \leq C \int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu. \end{equation} \end{corollary} The second corollary is Theorem \ref{th:Hardy}. This time the condition on the reverse doubling order is not merely a technical hypothesis: it is in fact a necessary condition for the Hardy inequality to hold if we assume the measure $\mu$ to be doubling. \begin{proposition} Let $(M,g,\mu)$ be a weighted Riemannian manifold, with $\mu$ a doubling measure, and assume that there is a constant $\nu > 2$ such that for any $o \in M$ and $\psi \in \SmoothComp{M}$, $M$ admits the Hardy inequality \begin{equation}\label{eq:Hardy_nu} \parenfrac{\nu-2}{2}^2 \int_M \frac{\psi(x)^2}{d(o,x)^2} \mathop{}\mathrm{d}\mu(x) \leq \int_M |\nabla\psi|^2 \mathop{}\mathrm{d}\mu. \end{equation} Then $\mu$ satisfies $\rd{}$. \end{proposition} Note that we can always write a Hardy inequality \eqref{eq:Hardy_C} in the form \eqref{eq:Hardy_nu} simply by choosing $\nu = 2 + 2\sqrt{1/C}$. Using a method from \cite{Carron16, LiWang01}, we have: \begin{proof} Take $0 < r < R$ and define $f(t) = r^{-\frac{\nu- 2}{2}}$ for $0 \leq t \leq r$, $f(t) = t^{-\frac{\nu-2}{2}}$ for $r \leq t \leq R$, $f(t) = 2 R^{-\frac{\nu - 2}{2}} - R^{-\frac{\nu}{2}} t$ for $R \leq t \leq 2R$ and $f(t) = 0$ for $t \geq 2R$.
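A quick check that the cutoff $f$ is well defined, i.e.\ continuous at the break points $t = R$ and $t = 2R$:
\begin{equation*} 2 R^{-\frac{\nu - 2}{2}} - R^{-\frac{\nu}{2}}\, R = 2 R^{-\frac{\nu - 2}{2}} - R^{-\frac{\nu - 2}{2}} = R^{-\frac{\nu - 2}{2}}, \qquad 2 R^{-\frac{\nu - 2}{2}} - R^{-\frac{\nu}{2}}\cdot 2R = 0, \end{equation*}
matching the values $t^{-\frac{\nu-2}{2}}\big|_{t=R}$ and $0$ respectively.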
When $r \leq t \leq R$, we have $f'(t)^2 = \left(\frac{\nu - 2}{2}\right)^2 \frac{f(t)^2}{t^2}$. For some point $o \in M$, choose $\phi(x) = f(d(o,x))$; the Hardy inequality applied to $\phi$ leads to \begin{equation} \left(\frac{\nu-2}{2}\right)^2\int_{B(o,r)} \frac{\phi(x)^2}{d(o,x)^2} \mathop{}\mathrm{d}\mu(x) \leq \int_{B(o,2R)\setminus B(o, R)} |\nabla \phi|^2 \mathop{}\mathrm{d}\mu(x), \end{equation} hence \begin{equation} \left(\frac{\nu-2}{2}\right)^2 r^{-\nu} \Vol{o}{r} \leq R^{-\nu} \mu(B(o,2R)\setminus B(o,R)) \leq A R^{-\nu} \Vol{o}{R}, \end{equation} using that $\mu$ is doubling. Thus there is some constant $a > 0$ such that \begin{equation} a \left(\frac{R}{r}\right)^\nu \leq \frac{\Vol{o}{R}}{\Vol{o}{r}}, \end{equation} and $\mu$ is reverse doubling of order $\nu > 2$. \end{proof}
\section{Introduction} The recent COVID-19 pandemic broke down geographical boundaries and led to an \textit{infodemic} of fake news and conspiracy theories \cite{zhou2020recovery}. Evidence-based fact verification (so far English-only) has been studied as a weapon against fake news and disinformation \cite{thorne2018fact}. Conspiracy theories and disinformation can propagate from one language to another, and some languages, such as English, are richer in evidence. During the US 2020 elections, evidence of online Spanish-language disinformation aimed at Latino-American voters was reported \cite{rogers2020}. Polyglotism is not uncommon: according to a 2017 Pew Research study, $91\%$ of European students learn English in school\footnote{\url{https://www.pewresearch.org/fact-tank/2020/04/09/most-european-students-learn-english-in-school/}}. Furthermore, recent machine translation advances are increasingly bringing down language barriers \cite{liu2020multilingual, johnson2017google}. Disinformation can be defined as intentionally misleading information \cite{fallis2015disinformation, fetzer2004disinformation}. The ``good cop'' of the Internet \cite{cohen2018conspiracy}, Wikipedia has become a source of ground truth, as seen in the recent literature on evidence-based fact verification. There are more than 6 million English Wikipedia articles\footnote{\url{https://meta.wikimedia.org/wiki/List_of_Wikipedias}}, but resources are scarcer in other language editions, such as Romanian (about 400K articles). As a case study, we evaluate a claim about Ion Mihai Pacepa, a former agent of the Romanian secret police during communism and author of books on disinformation \cite{pacepa2013disinformation, pacepa1987red}.
Related conspiracy theories can be found on internet platforms, such as rumors about his death \cite{impact2020}, or Twitter posts in multiple languages with strong for-or-against language, for example in English and Portuguese\footnote{\url{https://twitter.com/MsAmericanPie_/status/1287969874036379649}} or in English and Polish\footnote{\url{https://twitter.com/hashtag/Pacepa}}. Strong language has been associated with propaganda and fake news \cite{zhou2020survey}. In the following sections we review the relevant literature, present our methodology, experimental results and the case study resolution, and conclude with final notes. We make code, datasets, API, and trained models available\footnote{\url{https://github.com/D-Roberts/multilingual_nli_ECIR2021}}. \section{Related Work} The literature review touches on three topics: online disinformation, multilingual NLP, and evidence-based fact verification. \textbf{Online Disinformation.} Previous disinformation studies focused on election-related activity on social media platforms such as Twitter, on botnet-generated hyperpartisan news, and on the 2016 US presidential election \cite{brachten2017strategies, bastos2019brexit, bessi2016social, grinberg2019fake}. To combat online disinformation, one must retrieve reliable evidence at scale, since fake news tends to be more viral and to spread faster \cite{silverman, schroepfer2019creating, zhou2020survey, vosoughi2018spread}. \textbf{Multilingual NLP Advances.} Recent multilingual applications leverage pre-training of massive language models that can be fine-tuned for multiple tasks. For example, the cased multilingual BERT (mBERT) \cite{DBLP:journals/corr/abs-1810-04805}, \footnote{\url{https://github.com/google-research/bert/blob/master/multilingual.md}} is pre-trained on a corpus of the top 104 Wikipedia languages, with 12 layers, 768 hidden units, 12 heads and 110M parameters.
Cross-lingual transfer learning has been evaluated for tasks such as natural language inference \cite{conneau2018xnli, artetxe2019massively}, document classification \cite{schwenk2018corpus}, question answering \cite{clark2020tydi}, and fake Indic-language tweet detection \cite{kar2020no}. \textbf{English-Only Evidence Retrieval and Fact Verification.} Fact-based claim verification is framed as a natural language inference (NLI) task coupled with retrieval of the supporting evidence. An annotated dataset was shared \cite{thorne2018fever} and a task \cite{thorne2018fact} was set up to retrieve evidence from Wikipedia documents and predict claim verification status. Recently published SotA results rely on pre-trained BERT flavors or XLNet \cite{yang2019xlnet}. DREAM \cite{zhong2019reasoning}, GEAR \cite{zhou2019gear} and KGAT \cite{liu2020fine} achieved SotA with graphs. Dense Passage Retrieval \cite{karpukhin2020dense} is used in RAG \cite{lewis2020retrieval} in an end-to-end approach for fact verification. \section{Methodology} The system depicted in Fig. \ref{fig1} is a pipeline with a multilingual evidence retrieval component and a multilingual fact verification component. Based on an input claim $c_{l_i}$ in language $l_i$, the system retrieves evidence $E_{l_j}$ from the Wikipedia edition in language $l_j$ and supports, refutes, or abstains (not enough info). We employ English and Romanian as sample languages. \begin{figure} \includegraphics[width=\textwidth]{system_diagram_final.eps} \caption{Overview of the multilingual evidence retrieval and fact verification system.} \label{fig1} \end{figure} We use all the annotated $110K$ verifiable claims provided in the initial FEVER task \cite{thorne2018fever} for training the end-to-end system in Fig. \ref{fig1}.
\textbf{Multilingual Document Retrieval.} To retrieve the top $n_l$ Wikipedia documents $D_{c, n_l}$ per claim for each evidence language $l$, we employ an ad-hoc entity linking system \cite{hanselowski2018ukp} based on the named entity recognition in \cite{cucerzan2007large}. Entities are parsed from the (English) claim $c$ using the AllenNLP \cite{gardner2018allennlp} constituency parser. For each claim, we search for the entities using the MediaWiki API\footnote{\url{https://www.mediawiki.org/wiki/API:Main_page}} and retrieve 7 English \cite{hanselowski2018ukp} and 1 Romanian Wikipedia pages (a higher number of Romanian documents did not improve performance). Due to the internationally recognized nature of the claim entities, 144.9K out of 145.5K training claims have Romanian Wikipedia search results. \textbf{Multilingual Sentence Selection.} All sentences $\cup_{n_l}\{S_{D_{c , n_l}}\}$ from each retrieved document are supplied as input to the sentence selection model. We removed diacritics from Romanian sentences \cite{sennrich2016edinburgh} and prepended evidence sentences with the page title to compensate for missing co-reference pronouns \cite{soleimani2020bert, yoneda2018ucl}. We frame multilingual sentence selection as a two-way classification task \cite{hanselowski2018ukp, sakata2019faq}. One training example is a pair of an evidence sentence and the claim \cite{zhou2019gear, yoneda2018ucl}. The annotated evidence sentence-claim pairs from FEVER are given the True label. We randomly sample 32 sentences per claim from the retrieved documents as negative sentence-claim pairs (False label). We have 2 flavors of the fine-tuned models: EnmBERT only includes English negative sentences, and EnRomBERT includes 5 English and 27 Romanian negative evidence sentences. The architecture includes an mBERT encoder $E_r(\cdot)$ \cite{wolf2019huggingface} \footnote{\url{https://github.com/huggingface/transformers}} and an MLP classification layer $\phi(\cdot)$.
During training, all the parameters are fine-tuned and the MLP weights are trained from scratch. The encoding of the first token (\texttt{[CLS]}) is supplied to the MLP classification layer. For each claim, the system outputs all the evidence sentence-claim pairs ranked in the order of the predicted probability of success $P(\mathbf{y}=1|\mathbf{x}) = \phi(E_{r}(\mathbf{x}))$ (pointwise ranking \cite{cao2007learning}). \textbf{Multilingual Fact Verification.} The fact verification step (NLI) training takes as input the 110K training claims paired with each of the 5 selected evidence sentences (English only for EnmBERT; En and Ro for EnRomBERT), and fine-tunes the three-way classification of pairs using the architecture in Fig. \ref{fig1}. We aggregate the predictions made for each of the 5 evidence sentence-claim pairs based on logic rules \cite{malon2018team} (see Fig. \ref{fig1}) to get one prediction per claim. Training of both the sentence selection and fact verification models employed the Adam optimizer \cite{kingma2014adam}, a batch size of 32, a learning rate of $2\times 10^{-5}$, cross-entropy loss, and 1 and 2 epochs of training, respectively. \textbf{Alternative Conceptual End-to-End Multilingual Retrieve-Verify System.} The entity linking approach to document retrieval makes strong assumptions about the presence of named entities in the claim. Furthermore, the employed constituency parser \cite{gardner2018allennlp} assumes that claims are in English. To tackle these limitations, we propose a conceptual end-to-end multilingual evidence retrieval and fact verification approach inspired by the English-only RAG~\cite{lewis2020retrieval}. The system automatically retrieves relevant evidence passages in language $l_j$ from a multilingual corpus corresponding to a claim in language $l_i$. In Fig. \ref{fig1}, the 2-step multilingual evidence retrieval is replaced with a multilingual version of dense passage retrieval (DPR) \cite{karpukhin2020dense} with an mBERT backbone.
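To make the aggregation step concrete, the snippet below sketches one plausible set of logic rules for combining the five per-pair NLI predictions into a single claim-level verdict. The exact rules are those of \cite{malon2018team}; the priority order used here (any supporting pair wins, then any refuting pair, otherwise abstain) and the function name are illustrative assumptions, not a transcription of that work.

```python
from collections import Counter
from typing import List

def aggregate(pair_predictions: List[str]) -> str:
    """Combine per evidence-pair NLI predictions into one claim-level label.

    Hypothetical rule sketch: a single SUPPORTS or REFUTES prediction
    outweighs any number of NOT ENOUGH INFO predictions, with SUPPORTS
    taking precedence over REFUTES.
    """
    counts = Counter(pair_predictions)
    if counts["SUPPORTS"] > 0:
        return "SUPPORTS"
    if counts["REFUTES"] > 0:
        return "REFUTES"
    return "NOT ENOUGH INFO"
```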
The retrieved documents form a latent probability distribution. The fact verification step conditions on the claim $x_{l_i}$ and the latent retrieved documents $z$ to generate the label $y$, $P(y|x_{l_i}) = \sum_{z \in D_{\text{top-}k, l_j}} p(z|x_{l_i})p(y|x_{l_i}, z)$. The multilingual retrieve-verify system is jointly trained and the only supervision is at the fact verification level. We leave this promising avenue for future experimental evaluation. \section{Experimental Results} In the absence of equivalent end-to-end multilingual fact verification baselines, we compare performance to English-only systems using the official FEVER scores\footnote{\url{https://github.com/sheffieldnlp/fever-scorer}} on the original FEVER datasets \cite{thorne2018fever}. Furthermore, the goal of this work is to use multilingual systems trained in evidence-rich languages to combat disinformation in evidence-poor languages. To this end, we evaluate the transfer learning ability of the trained verification models on an English-Romanian translated dataset. We translated 10 supported and 10 refuted claims (from the FEVER development set) together with 5 evidence sentences each (retrieved by the EnmBERT system) and combined them into a mix-and-match development set of 400 examples. \textbf{Calibration results on FEVER development and test sets.} In Table \ref{tab1} and Fig. \ref{fig2} we compare EnmBERT and EnRomBERT verification accuracy (LA-3) and evidence recall on the fair FEVER development (dev) set, the test set, and a golden-forcing dev set. The fair dev set includes all the claims in the original FEVER dev set and all the sentences from the retrieved documents (English and/or Romanian). The golden-forcing dev set forces all ground truth evidence into the sentence selection step input, effectively giving perfect document retrieval recall \cite{liu2020fine}. 
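The official FEVER score couples label accuracy with evidence retrieval; a minimal sketch of its per-claim logic follows. The authoritative implementation is the fever-scorer repository referenced above; the data representation here is illustrative.

```python
# Sketch of the per-claim FEVER score logic: a claim counts only if the
# predicted label matches and, for non-NEI claims, at least one complete
# gold evidence set appears among the predicted (top-5) sentences.
# Evidence is represented as (page, sentence_id) tuples (illustrative).

def fever_point(pred_label, gold_label, pred_evidence, gold_evidence_sets):
    if pred_label != gold_label:
        return False
    if gold_label == "NOT ENOUGH INFO":
        return True
    pred = set(pred_evidence)
    return any(set(gold) <= pred for gold in gold_evidence_sets)

# Right label but incomplete evidence set: no FEVER point.
assert fever_point("SUPPORTS", "SUPPORTS",
                   [("PageA", 1)],
                   [[("PageA", 1), ("PageB", 2)]]) is False
# Right label and a fully covered gold evidence set: FEVER point.
assert fever_point("SUPPORTS", "SUPPORTS",
                   [("PageA", 1), ("PageB", 2)],
                   [[("PageA", 1)]]) is True
```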
On the fair dev set, the EnmBERT system reaches within $5\%$ accuracy of English-only BERT-based systems such as \cite{soleimani2020bert} (LA-3 of 67.63\%). We also reach within $5\%$ evidence recall (Table \ref{tab1}, $88.60\%$) of the English-only KGAT \cite{liu2020fine} and exceed that of \cite{soleimani2020bert}. Note that any of the available English-only systems with a BERT backbone, such as KGAT \cite{liu2020fine} and GEAR \cite{zhou2019gear}, can be employed with an mBERT (or another multilingual pre-trained) backbone to lift the multilingual system performance. \begin{figure} \includegraphics[width=\textwidth, height=5cm]{fig1.eps} \caption{Error analysis per class. `LA-2' is Accuracy for `Supports' $\&$ `Refutes' Claims}\label{fig2} \end{figure} \begin{table} \centering \caption{Calibration evaluation of the models using the official FEVER scores ($\%$) \cite{thorne2018fever}.}\label{tab1} \begin{tabular}{l|l|llll} \hline Dataset & Model & Prec@5 & Rec@5 & FEVER & LA-3 Acc \\ \hline Fair-Dev & EnmBERT-EnmBERT & 25.54 & $\mathbf{88.60}$ & 64.62 & $\mathbf{67.63}$ \\ Fair-Dev & EnRomBERT-EnRomBERT & 25.20 & 88.03 & 61.16 & 65.20 \\ \hline Test & EnmBERT-EnmBERT & 25.27 & 87.38 & 62.30 & 65.26 \\ Test & EnRomBERT-EnRomBERT & 24.91 & 86.80 & 58.78 & 63.18 \\ \hline \end{tabular} \end{table} To better understand the strengths and weaknesses of the system and the impact of including Romanian evidence in training EnRomBERT, we present a per-class analysis in Fig.~\ref{fig2}. We also calculate accuracy scores for only `SUPPORTS' and `REFUTES' claims (FEVER-2). The English-only SotA label accuracy (LA-2) on FEVER-2 is currently given in RAG \cite{lewis2020retrieval} at $89.5\%$ on the fair dev set and our EnRomBERT system reaches within $5\%$. We postulate that the noise from including Romanian sentences in training improves the FEVER-2 score (see Fig. 
\ref{fig2}), with EnRomBERT coming within $5\%$ of the English-only FEVER-2 SotA of $92.2\%$ \cite{thorne2020avoiding} on the golden-forcing dev set. In the per-class analysis of the `SUPPORTS' and `REFUTES' classes in Fig. \ref{fig2}, EnRomBERT outperforms EnmBERT on both the fair and golden-forcing dev sets. To boost the NEI class performance, future research may evaluate the inclusion of all claims, including NEI, in training. Furthermore, retrieval in multiple languages may alleviate the absence of relevant evidence for NEI claims. \textbf{Transfer Learning Performance} Table~\ref{tab2} shows the EnmBERT and EnRomBERT transfer learning ability evaluated directly in the fact verification step using the previously retrieved and manually translated 400 mixed claim-evidence pairs. We report the classification accuracy on all 400 mixed examples, and separately for En-En (English evidence and English claims), En-Ro, Ro-En and Ro-Ro pairs. EnmBERT's zero-shot accuracy on Ro-Ro is $85\%$ as compared to $95\%$ for En-En, better than EnRomBERT's. EnmBERT also outperforms EnRomBERT for Ro-En and En-Ro pairs. We recall that Romanian evidence sentences were only included in EnRomBERT training as negative evidence in the sentence retrieval step. If selected in the top 5 evidence sentences, Romanian sentences were given the NEI label in the fact verification step. Hence, EnRomBERT likely learned that Romanian evidence sentences are NEI, which led to a model bias against Romanian evidence. \textbf{Disinformation Case Study} We employ EnmBERT to evaluate the claim ``Ion Mihai Pacepa, the former Securitate general, is alive''. The document retriever retrieves Wikipedia documents in English, Romanian and Portuguese. Page summaries are supplied to the EnmBERT sentence selector, which selects the top 5 evidence sentences (1$\times$En, 2$\times$Ro, 2$\times$Pt). Based on the retrieved evidence, the EnmBERT fact verification module predicts the `SUPPORTS' status for the claim. 
For illustration purposes, the system is exposed as an API\footnote{\url{https://github.com/D-Roberts/multilingual_nli_ECIR2021}}. \begin{table} \centering \caption{Fact Verification Accuracy ($\%$) for Translated Parallel Claim - Evidence Sentences.}\label{tab2} \begin{tabular}{l|l l l l l} \hline Model & Mixed & En-En & En-Ro & Ro-En & Ro-Ro \\ \hline EnmBERT & 95.00 & 95.00 & 50.00 & 65.00 & 85.00 \\ EnRomBERT & 95.00 & 95.00 & 25.00 & 0.00 & 50.00 \\ \hline \end{tabular} \end{table} \section{Final Notes} In this article we present a first approach to building multilingual evidence retrieval and fact verification systems to combat global disinformation. Evidence-poor languages may be at increased risk of online disinformation, and, in a context of polyglotism, multilingual systems built upon evidence-rich languages can be an effective weapon. To this end, our trained EnmBERT system shows cross-lingual transfer learning ability for the fact verification step on the original FEVER-related claims. This work opens future lines of research into end-to-end multilingual retrieve-verify systems for disinformation-suspect claims, in multiple languages, with multiple reliable evidence retrieval sources available in addition to Wikipedia. \bibliographystyle{splncs04}
\section{Introduction} In many applications, the relevant processes are accurately described by a partial differential equation (PDE) of reaction-advection-diffusion type coupled to a Boltzmann-type kinetic equation that models a distribution of particles in position-velocity space. Examples of such applications include bacterial chemotaxis~\cite{rousset2013bacterialchemotaxis}, rarefied gas dynamics~\cite{sun2004hybridrarifiedgas}, and plasma physics~\cite{reiter2005eirene,stangeby2000plasmaboundary,viola2014snowflakevsconv}. The simulation of such coupled models is computationally challenging due to the different dimensionality of both parts of the model. This work focuses on particle-tracing methods for the Boltzmann-BGK kinetic equation, which are implemented in the neutral particle transport codes EIRENE~\cite{reiter2005eirene} and DEGAS2~\cite{stotler1994degas2} for nuclear fusion applications. These codes are coupled to PDE solvers for the plasma, such as the deterministic B2~\cite{reiter2005eirene} or UEDGE~\cite{fenstermacher1995uedgedegas} codes or the stochastic EMC3 code~\cite{feng2002emc3}. Such coupled codes are used to perform plasma edge simulations to evaluate and design fusion reactors and operational conditions. As fusion reactor designs evolve, the Monte Carlo simulations often become the computational bottleneck because of highly heterogeneous and highly reactive conditions~\cite{krasheninnikov2017physicsdetachement}. These conditions therefore require careful selection of the most suitable Monte Carlo estimation procedure, or (potentially) the use of different estimation procedures in different parts of the space-time domain. This forms the motivation for this paper, in which we study these particle tracing methods at length. In this paper, we consider a prototypical model of this type that appears in plasma edge simulations in nuclear fusion reactors, such as ITER \cite{wiesen2015SOLPSITER}. 
The neutral model contains the interactions with the plasma and during these interactions, mass, momentum, and energy are exchanged between plasma and neutral particles. The exchanges between neutrals and plasma are modeled as source terms in the plasma equations and estimating them is the aim of the neutral model simulation. We will discuss this neutral model, together with three different unbiased simulation strategies and four different source term estimators. Combined, this yields eleven different relevant procedures to estimate the source terms for the plasma equation. Different source term estimation procedures result in different statistical behaviour and a different computational cost. Selecting the best estimator forms a fundamental way of reducing the variance and simulation cost of the Monte Carlo procedure~\cite{kahn1956MCapplications}. Currently, only a few works are available that compare the performance of these estimation procedures, usually in a very restrictive setting, leaving the choice of estimation procedure to the preference or experience of the user. A first important comparison of the estimation procedures was conducted in~\cite{macmillan1966comparisonneutrMC}, where an invariant imbedding methodology~\cite{bellman1975ii} is used to derive ODEs for the statistical error of a limited set of estimation procedures in a one-dimensional forward scattering scenario. In this degenerate scenario, neutrals always have the same velocity and do not change direction, leading to very simple particle paths. Indira~\cite{indira1988analyticalleakageest} performed a similar study that included forward-backward scattering, but limited the study to estimation procedures for leakage, the number of particles that leave the domain. Both the setting in \cite{macmillan1966comparisonneutrMC} and \cite{indira1988analyticalleakageest} allowed for significant simplifications, resulting in ODEs for the statistical error with a comprehensive analytical solution. 
We also refer to the work of Lux for approximate formulas for the variance of the most commonly used estimators \cite{lux1979vareff_techrep}, and for sufficient conditions for one estimator to outperform another \cite{lux1978standardvarred}. While useful, Lux's results do not capture the highly non-trivial behaviour at high scattering rates and low absorption rates, where the paths are generally the most complex. For completeness, we also refer to \cite{spanier1970analyticvarred, sarkar1979scoremomentumIS, indira1986weightmomentseqexpandscattangl, indira1989optimization, solomon2011scoremomentumWW}, in which analytical calculations of the variance are presented to optimize importance sampling. This paper forms a next step in the systematic study of source term estimators in coupled finite-volume/Monte-Carlo methods. In particular, we discuss the different procedures that can be used for the Monte Carlo estimation of the mass and momentum source terms, and find which estimation procedure attains the lowest statistical error and computational cost as a function of the plasma background parameters. We first consider mass source estimation in a simplified backward-forward scattering model problem, for which analytical expressions for the statistical error and the computational cost can be found via the invariant imbedding methodology. This problem, although simplified, represents a significant complication over previous work. We extend our analytical work with experiments that cover both mass and momentum source estimation in a multi-speed setting where an invariant imbedding procedure would become infeasible. This numerical extension shows the relevance of the highly simplified forward-backward scattering model for selecting the mass source estimation procedure, but also the large difference between the optimal choice of estimation procedure for mass source estimation and that for momentum source estimation. 
In Chapters~2 and~3 of~\cite{mortier2020phdthesis}, a more elaborate version of the results presented here is given. The remainder of this paper is organized as follows. In Section~\ref{sec:iif_modelsim}, we briefly describe the model problem and the three simulation types we consider. Then, in Section~\ref{sec:iif_est}, we present the four estimator types and derive the eleven relevant source term estimation procedures we arrive at. In Section~\ref{sec:iif_bestest_1D0D}, we present the simplified problem and the invariant imbedding procedure that enables us to calculate the statistical properties of each of these estimation procedures. With these calculated statistical properties, we identify the best estimation procedure throughout the parameter domain and we present the potential gain in doing so. In Section~\ref{sec:ii_res_1D1D}, we expand on this by considering mass and momentum estimation in a more relevant but more complex setting, for which we conduct and present numerical experiments. In Section~\ref{sec:iif_conclusion}, we finally present our conclusions and hint at future work. \section{Kinetic neutral model and its Monte Carlo simulation\label{sec:iif_modelsim}} In this section, we briefly review the kinetic neutral model and the three Monte Carlo discretizations we consider. A detailed description and derivation of these methods can be found in~\cite[Chapter 2]{mortier2020phdthesis}. The neutrals in a fusion reactor originate according to a source term $\VAnsource(x,v)$ and then move with their velocity until they undergo a collision. This collision can either be with the wall or with plasma particles. Generally, a collision with a plasma particle can either be an ionization (absorption), at which the neutral disappears from the neutral simulation, or a charge-exchange interaction (scattering), which is modeled by the particle receiving a new velocity from the post-collisional velocity distribution $\VApostcolveldistrvr$. 
With absorption rate $\VAratea(x)$ and scattering rate $\VArates(x)$, this behaviour can be written as the following stationary kinetic equation for the neutral particle distribution $\phi_\text{n}(x,v)$: \begin{equation} \underbrace{\vphantom{\int}{v}\cdot\nabla\phi_\text{n}(x,v)}_\text{transport}=\underbrace{\VAnsource(x,v)}_{\substack{\text{source from}\\ \text{the plasma}}}-\underbrace{\vphantom{\int}\VAratea({x})\phi_\text{n}(x,v)}_{\substack{\text{sink due to}\\ \text{absorption}}} \underbrace{-\vphantom{\int}\VArates({x})\phi_\text{n}(x,v)+\int\!\!\VArates({x})\phi_\text{n}(x,v')\VApostcolveldistrvr\text{d}{v}'}_{\substack{\text{velocity redistribution}\\ \text{due to scattering}}}\,,\label{eq:iif_modelsim_model_kinetic} \end{equation} where we ignored boundary hits. We consider three types of simulation: \begin{itemize} \item \textbf{Analog simulation.} A first method to obtain samples according to Equation~\eqref{eq:iif_modelsim_model_kinetic} is to model particle paths $(x(t),v(t))$ that adhere to the underlying particle model that gave rise to the kinetic equation. For that reason, this simulation is called \emph{analog} and referred to by \texttt{a}. We simulate particles by letting them undergo collisions with the plasma, which can be either an absorption or a scattering. Upon scattering, particles change velocity. Particles disappear from the simulation as soon as they undergo an absorption event. A potential weakness of this method is that, with a finite number of particles, possibly none of them penetrate highly absorbing regions in the domain. This caveat is resolved by the two subsequent simulation methods. \item \textbf{Non-analog collision type simulation.} A second method to obtain samples according to Equation~\eqref{eq:iif_modelsim_model_kinetic} also simulates particles by sampling from $\VAnsource(x,v)$ and letting them undergo collisions with the plasma at the total rate $R_\text{a}(x)+R_\text{s}(x)$. 
However, instead of executing either an absorption or a scattering effect, the collision-type simulation \emph{always} executes a scattering event, and takes care of absorption by introducing a particle weight that is updated at each collision. This no longer follows the underlying particle model of absorption exactly, but instead modifies the collisions, hence the name \emph{non-analog collision type simulation}, which we refer to by \texttt{nac}. \item \textbf{Non-analog track-length type simulation.} Finally, a third method takes care of absorption by continuously adjusting the particle weight along the trajectory. The only collision events that remain are then scattering events, which occur with the rate $R_\text{s}(x)$. This method is called the \emph{non-analog track-length type simulation} and referred to by \texttt{natl}. \end{itemize} On top of collisions, particles can also undergo boundary hits. A usual boundary condition is partial reflection, which can be implemented either as a probability of disappearing from the simulation or as a weight update. The rates $R_\text{a}(x)$ and $R_\text{s}(x)$ depend on the plasma state, which is usually given as a piecewise constant function on a finite volume grid. To keep track of the local rates as the particle moves, we include grid cell crossings as a third type of event. These will be of practical importance for some estimators presented in Section~\ref{sec:iif_est}. \paragraph{Equations of motion.} The particle state is represented as a function $t\in[0,T_\text{end}]\mapsto(x(t),v(t),w(t))$, with $x(t)$ the time-dependent position, $v(t)$ the velocity and $w(t)$ the particle weight. At the initial time $T_0=0$, the initial state $(x_0,v_0)$ is sampled from $\VAnsource(x,v)$ and $w(0)=1$. From there, the particle moves with a constant velocity that potentially changes at every event. 
\begin{align} (x_0,v_0)&\sim\VAnsource(x,v)\,,\\ x(0)&=x_0\,,\\ \frac{\text{d}x(t)}{\text{d}t}&=v(t)\,,\\ v(t)&=v_\VAeventno\text{ for }t\in[T_\VAeventno,T_{\VAeventno+1}),\ \ k\in\{0,\dots,\VAnofevents-1\}\,. \end{align} The $T_\VAeventno$ form the event times and $T_\VAnofevents=T_\text{end}$ is the time instant at which the particle disappears from the simulation. This final event can either be an absorption collision or the particle leaving the domain through the boundary. The velocity at this final time does not change, ${v}(T_\VAnofevents)=\VAdiscreteveleend_{\VAnofevents-1}$. We now first look into the calculation of the event times $T_k$, after which we discuss the generation of a new velocity and the updating of the particle weight. \paragraph{Event times.} If we ignore the possibility of another event taking place first, the next time at which an event of each of the three kinds (collision with the plasma, boundary hit, or grid cell crossing) takes place can readily be found from $(x(T_{\VAeventno}),v(T_{\VAeventno}))$. We now discuss each of the three event types separately and then find the actual next event as the one that occurs first. The next collision time is sampled by inverting \begin{equation} \int_{T_\VAeventno}^{T_{\VAeventno+1}^c}\VArate_{\ast}\left(x(T_\VAeventno)+(t-T_\VAeventno)v(T_\VAeventno)\right)\text{d}t=\epsilon\,,\text{ with }\epsilon\sim\mathcal{E}(1)\,,\label{eq:single_particle_scatabsT} \end{equation} where $\VArate_{\ast}(x)$ represents the total rate of collisions with the plasma occurring in the simulation. In the analog and non-analog collision type simulations this is $\VAratet(x)=\VAratea(x)+\VArates(x)$; in the non-analog track-length type simulation, absorption is taken care of along the particle trajectory, and not during collisions, so the total rate of collisions is only $\VArates(x)$. 
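For a collision rate that is constant along the path segment, inverting Equation~\eqref{eq:single_particle_scatabsT} has the closed form $T_{\VAeventno+1}^c=T_\VAeventno+\epsilon/\VArate_{\ast}$; the piecewise constant case applies this cell by cell. A minimal sketch (function name illustrative):

```python
# Sketch of sampling the next collision time for a constant total
# collision rate R_ast along the path: R_ast * (T - T_k) = eps, with
# eps ~ E(1), gives T = T_k + eps / R_ast.
import random

def next_collision_time(T_k, R_ast, rng=random):
    eps = rng.expovariate(1.0)   # unit-rate exponential sample
    return T_k + eps / R_ast     # inverse of the (constant-rate) integral
```

Sampling many such times for rate $\VArate_{\ast}=2$ reproduces the expected mean inter-collision time $1/\VArate_{\ast}$.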
A boundary hit occurs at the first instance the boundary is reached, hence \begin{equation} T_{\VAeventno+1}^b=T_\VAeventno+\text{min}(\tau|\tau>0,x(T_{\VAeventno})+\tau v(T_{\VAeventno})\in\partial\mathcal{D})\,, \end{equation} with $\partial\mathcal{D}$ the boundary of the domain. Similarly to a boundary hit, a grid cell crossing occurs at the first instance a grid cell boundary is reached. With the grid cells denoted by the set $\left\{\VAgridcelldefgridcellno\right\}_{\VAgridcellno=1}^\VAnofgridcells$, the grid cell boundary can be written as $\cup_\VAgridcellno\partial\VAgridcelldefgridcellno$ and the grid cell crossing time, not considering other events, is \begin{equation} T_{\VAeventno+1}^g=T_\VAeventno+\text{min}(\tau|\tau>0,x(T_{\VAeventno})+\tau v(T_{\VAeventno})\in\cup_\VAgridcellno\partial\VAgridcelldefgridcellno\setminus\partial\mathcal{D})\,. \end{equation} The next event that actually occurs is that for which $T_{\VAeventno+1}^\star$, $\star\in\{c,b,g\}$, is the lowest. To encode the nature of the $({\VAeventno+1})$-th event, we use the variables $c_{\VAeventno+1}$ for a collision with the plasma, $b_{\VAeventno+1}$ for a boundary hit, and $g_{\VAeventno+1}$ for a grid cell crossing, such that \begin{align} (T_{\VAeventno+1},c_{\VAeventno+1},b_{\VAeventno+1},g_{\VAeventno+1})=\left\{\begin{array}{lll} (T_{\VAeventno+1}^c,1,0,0) & \text{ if }&T_{\VAeventno+1}^c=\text{min}(T_{\VAeventno+1}^c,T_{\VAeventno+1}^b,T_{\VAeventno+1}^g)\,,\\ (T_{\VAeventno+1}^b,0,1,0) & \text{ else if }&T_{\VAeventno+1}^b=\text{min}(T_{\VAeventno+1}^c,T_{\VAeventno+1}^b,T_{\VAeventno+1}^g)\,,\\ (T_{\VAeventno+1}^g,0,0,1) & \text{ else if }&T_{\VAeventno+1}^g=\text{min}(T_{\VAeventno+1}^c,T_{\VAeventno+1}^b,T_{\VAeventno+1}^g)\,. \end{array}\right. \end{align} Now that the next event time and the nature of the next event are established, we discuss how each event is simulated. 
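The selection of the actual next event and its encoding into the indicator triple $(c,b,g)$ can be sketched as follows (a minimal illustration; the function name is not taken from the codes discussed above):

```python
# Sketch of next-event selection: the event that actually occurs is the
# earliest of the three candidate times, with ties resolved in the order
# collision, boundary hit, grid cell crossing (as in the case distinction
# above).

def next_event(T_c, T_b, T_g):
    T_next = min(T_c, T_b, T_g)
    if T_next == T_c:
        return T_next, (1, 0, 0)   # collision with the plasma
    if T_next == T_b:
        return T_next, (0, 1, 0)   # boundary hit
    return T_next, (0, 0, 1)       # grid cell crossing

assert next_event(0.4, 1.0, 0.7) == (0.4, (1, 0, 0))
assert next_event(2.0, 1.5, 1.8) == (1.5, (0, 1, 0))
```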
\paragraph{Collision events in analog simulation.} In the non-analog collision type and non-analog track-length type simulations, a collision event with the plasma is always executed as a scattering event. In the analog simulation, a collision is either an absorption (with probability $\frac{\VAratea(x(T_\VAeventno))}{\VAratet(x(T_\VAeventno))}$) or a scattering event (with the complementary probability $\frac{\VArates(x(T_\VAeventno))}{\VAratet(x(T_\VAeventno))}$). In an analog simulation, the nature of this collision is determined by a Bernoulli distributed random number \begin{equation} a_\VAeventno\sim\mathcal{B}\left(\frac{\VAratea(x(T_\VAeventno))}{\VAratet(x(T_\VAeventno))}\right)\,.\label{eq:iif_modelsim_absorption} \end{equation} If $a_\VAeventno$ equals one at a plasma collision, this signifies absorption; if it is zero, a scattering occurs. If the event is an absorption, it is the last event, with event index \begin{equation} \VAa{\VAnofevents}=\text{min}(\VAeventno|a_\VAeventno c_\VAeventno=1)\,.\label{eq:iif_modelsim_Kabsorption} \end{equation} If it is a scattering event, a new velocity is sampled as \begin{equation} {v}_{\VAeventno}^s\sim \VApostcolveldistr({v}|{x}(T_{\VAeventno}))\,. \end{equation} \paragraph{Collision events in a non-analog collision type simulation.} In the non-analog collision type simulation, the absorption events are not executed as such, but instead all collisions with the plasma are executed as scattering collisions. To remove the resulting bias, a weight loss is introduced, where the particles lose a fraction $\frac{\VAratea(x(T_\VAeventno))}{\VAratet(x(T_\VAeventno))}$ of their weight at every collision with the plasma. One way of interpreting this different simulation type is by considering every particle as representing an infinite number of particles moving together. 
At every collision, a fraction $\frac{\VAratea(x(T_\VAeventno))}{\VAratet(x(T_\VAeventno))}$ of this infinite amount is absorbed and the remaining fraction scatters together and moves further. For non-analog collision type simulations, we therefore update the particle weight as \begin{align} {w}(t)&={\VAdiscreteweight}_\VAeventno \text{ for } t\in[{T}_\VAeventno,{T}_{\VAeventno+1}[,\text{ }\VAeventno\in\{0,\dots,{\VAnofevents}-1\}\,,\label{eq:single_particle_w1}\\ {\VAdiscreteweight}_{\VAeventno+1}&={\VAdiscreteweight}_\VAeventno\left(1-{c}_{\VAeventno+1}\dfrac{\VAratea({{x}}({T}_{\VAeventno+1}))}{\VAratet({{x}}({T}_{\VAeventno+1}))}\right)\,.\label{eq:single_particle_reweigh_nac} \end{align} \paragraph{Absorption in non-analog track-length type simulation.} The track-length simulation strategy removes the absorption collisions with the plasma entirely and instead performs absorption continuously along the particle path. This means that during every interval $\text{d}t$, the expected absorbed fraction $\VAratea(x(t))\text{d}t$ is removed. By adopting the viewpoint of every particle representing an infinite set of particles, this can be interpreted as absorbing, during every interval $\text{d}t$, the expected fraction of particles. The particle behaviour is described as a function $t\in[0,{T}_\text{end}]\mapsto ({{x}}(t),{{v}}(t),{w}(t))$, where the weight is now a continuous function, determined by Equation~\eqref{eq:single_particle_w_natl}: \begin{align} \dfrac{\text{d}{w}(t)}{\text{d}t}=-\VAratea({{x}}(t)){w}(t)\,,\label{eq:single_particle_w_natl}\qquad {w}(0)=1\,. \end{align} In practical fusion simulations, the neutral particles of the Monte Carlo simulation usually move against a piecewise constant plasma background. This facilitates sampling of the next event times by transforming the integral in Equation~\eqref{eq:single_particle_scatabsT} into a sum. 
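Assuming a constant absorption rate between events, the \texttt{nac} and \texttt{natl} weight updates can be sketched as follows (function names are illustrative; the \texttt{natl} factor is the solution of Equation~\eqref{eq:single_particle_w_natl} over a track of length $d_\VAeventno$ traversed at speed $|v_\VAeventno|$):

```python
# Sketch of the two non-analog weight updates, for constant rates between
# events: nac removes the expected absorbed fraction R_a/R_t at each
# plasma collision; natl decays the weight continuously along the track.
import math

def reweigh_nac(w_k, R_a, R_t):
    # Survival fraction at a collision: 1 - R_a/R_t = R_s/R_t.
    return w_k * (1.0 - R_a / R_t)

def reweigh_natl(w_k, R_a, d_k, speed):
    # Exponential decay over the track length d_k traversed at |v_k|.
    return w_k * math.exp(-R_a * d_k / speed)

assert abs(reweigh_nac(1.0, R_a=0.25, R_t=1.0) - 0.75) < 1e-12
assert abs(reweigh_natl(1.0, R_a=0.5, d_k=2.0, speed=1.0) - math.exp(-1.0)) < 1e-12
```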
For both the analog simulation (\texttt{a}) and the non-analog collision type simulation (\texttt{nac}), this is the only simplification with respect to the general plasma model. For the non-analog track-length type simulation, the weight function ${w}(t)$ also simplifies. Since we take grid cell crossings to be events, the plasma background is always constant in between two events. Consequently, with $\VAratea^{{\VAgridcellno}(\VAeventno)}$ the absorption rate in between events $\VAeventno$ and $\VAeventno+1$, it follows from Equation~\eqref{eq:single_particle_w_natl} that the weight function becomes \begin{equation} {w}(t)={w}_\VAeventno e^{-\VAratea^{{\VAgridcellno}(\VAeventno)}(t-{T}_{\VAeventno})}\text{ for }t\in[{T}_\VAeventno,{T}_{\VAeventno+1}],\ \VAeventno\in\{0,\dots,{\VAnofevents}-1\}\,.\label{eq:iif_modelsim_natl_pwconst_weight} \end{equation} Therefore, the non-analog track-length type simulation can also be performed by a simple weight update at events. With ${d}_\VAeventno=|{\VAdiscreteposeend}_{\VAeventno+1}-{\VAdiscreteposeend}_\VAeventno|$ the distance between events $\VAeventno$ and $\VAeventno+1$, the weight update can be written as \begin{equation} {\VAdiscreteweight}_{\VAeventno+1}=e^{-\VAratea^{{\VAgridcellno}(\VAeventno)}\frac{{d}_\VAeventno}{|{\VAdiscreteveleend}_\VAeventno|}}{\VAdiscreteweight}_\VAeventno\,. \end{equation} Table~\ref{tab:overview_weightchanges} provides an overview of how the different simulation types reweigh at the $\VAeventno$-th event when the reaction rates are constant in between events. 
\begin{table}[H] \centering \begin{tabular}{cccc} Simulation & Scattering event & Boundary hit & Grid cell crossing\\ \hline \texttt{a} & 1 & 1 & 1\\ \texttt{nac} & $1-\VAratea/\VAratet$ & 1 & 1\\ \texttt{natl} & $e^{-\frac{\VAratea}{|{\VAdiscreteveleend}_{\VAeventno}|}{d}_\VAeventno}$ & $e^{-\frac{\VAratea}{|{\VAdiscreteveleend}_{\VAeventno}|}{d}_\VAeventno}$ & $e^{-\frac{\VAratea}{|{\VAdiscreteveleend}_{\VAeventno}|}{d}_\VAeventno}$ \end{tabular} \caption{$\VAdiscreteweight_{\VAeventno+1}/\VAdiscreteweight_{\VAeventno}$, the post-event reweighing factor at event $\VAeventno$ in the different simulations, with $d_\VAeventno=|\VAdiscreteposeend_{\VAeventno+1}-\VAdiscreteposeend_{\VAeventno}|$, and $\VAratea$, respectively $\VAratet$, the constant absorption rate, respectively total reaction rate, between events $\VAeventno$ and $\VAeventno+1$.} \label{tab:overview_weightchanges} \end{table} \paragraph{Boundary hit.} At the boundary, the particle can either be absorbed (with probability $\alpha(x)$) or reflected (with probability $1-\alpha(x)$). 
To select the type, we again generate a Bernoulli distributed variable: \begin{equation} \beta_{\VAeventno}\sim\mathcal{B}(\alpha(\VAdiscreteposeend_{\VAeventno}^b))\,.\label{eq:single_particle_refl} \end{equation} If $b_{\VAeventno}=1$ and $\beta_{\VAeventno}=1$, the particle leaves the domain, as is expressed by \begin{align} \VAnofevents_\text{out}&=\min\left(\VAeventno|b_{\VAeventno}\beta_{\VAeventno}=1\right)\,.\label{eq:single_particle_nout} \end{align} Otherwise, if $b_{\VAeventno}=1$ and $\beta_{\VAeventno}=0$, the particle is reflected, which leads to a new deterministic velocity based on its current velocity and the normal to the boundary at the reflection location, via the equation \begin{equation} {v}^r_{\VAeventno+1}=\VAreflectionfunction(\VAdiscreteveleend_{\VAeventno})\,,\label{eq:single_particle_reflV} \end{equation} with $\VAreflectionfunction({v})\equiv -{v}$ a deterministic function that represents perfect reflection at the wall. \paragraph{Grid cell crossing.} At a grid cell crossing, essentially nothing happens, i.e., the particle continues to move with its current velocity. \paragraph{New velocity.} Depending on the event type, the new velocity differs. This can be described as \begin{equation} v_{\VAeventno+1}=\underbrace{(1-a_{\VAeventno+1})c_{\VAeventno+1}v_{\VAeventno+1}^s}_\text{scattering collision} +\underbrace{(1-\beta_{\VAeventno+1})b_{\VAeventno+1}v_{\VAeventno+1}^r}_\text{boundary reflection} +\underbrace{(a_{\VAeventno+1}c_{\VAeventno+1}+\beta_{\VAeventno+1}b_{\VAeventno+1}+g_{\VAeventno+1})v_\VAeventno}_\text{absorption, exit at boundary, or grid cell crossing}\,, \end{equation} where the velocity at the final event does not change. 
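The velocity update above is a pure selection by indicator variables; a minimal one-dimensional sketch with perfect reflection $\VAreflectionfunction(v)=-v$ (function name illustrative):

```python
# Sketch of the velocity update: the indicators (a, c, b, beta, g) select
# the scattered velocity v_s, the reflected velocity v_r = -v_k (perfect
# wall reflection), or the unchanged v_k (absorption, boundary exit, or
# grid cell crossing).

def new_velocity(v_k, v_s, a, c, b, beta, g):
    v_r = -v_k   # perfect reflection at the wall
    return ((1 - a) * c * v_s
            + (1 - beta) * b * v_r
            + (a * c + beta * b + g) * v_k)

assert new_velocity(1.0, -0.5, a=0, c=1, b=0, beta=0, g=0) == -0.5  # scattering
assert new_velocity(1.0, 0.0, a=0, c=0, b=1, beta=0, g=0) == -1.0   # reflection
assert new_velocity(1.0, 0.0, a=0, c=0, b=0, beta=0, g=1) == 1.0    # grid crossing
```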
\paragraph{End time.} The particle path ends when it leaves the domain (or at the first absorption event for an analog simulation), hence \begin{align} \VAnofevents&=\min(\VAnofevents_\text{a},\VAnofevents_\text{out})\,,\label{eq:iif_modelsim_a_K}\\ T_\text{end}&=T_\VAnofevents=\min\left(T_{\VAnofevents_\text{a}},T_{\VAnofevents_\text{out}}\right)\,.\label{eq:iif_modelsim_a_Tend} \end{align} \paragraph{Particle trajectory.} The entire particle trajectory can be determined by the set of events with the variables that encode the event types: \begin{equation} \VAparticlepath=\{\VAdiscreteposeend_\VAeventno,\VAdiscreteveleend_\VAeventno,c_\VAeventno,b_\VAeventno,g_\VAeventno,a_\VAeventno,\beta_\VAeventno\}_{\VAeventno=0}^\VAnofevents\,. \end{equation} For non-analog collision type and non-analog track-length type simulations, the $a_\VAeventno$ have no impact. \section{Source term estimation procedures\label{sec:iif_est}} The simulation techniques of Section~\ref{sec:iif_modelsim} provide particle trajectories sampling the Boltzmann-BGK equation, Equation~\eqref{eq:iif_modelsim_model_kinetic}. The quantities of interest are the stationary rates of mass, momentum, and energy transfer from the neutral particles to the plasma in each of the $\VAnofgridcells$ grid cells $\{\VAgridcelldefgridcellno\}_{\VAgridcellno=1}^\VAnofgridcells$. These quantities feature as source terms in the plasma equations. To find these source terms, the simulation methods need to be combined with source term estimators, which we introduce in this section. 
In their general form, the stationary rates of mass, momentum, and energy transfer can be written as \begin{equation} \VAgeneriek{S}^{{\VAgridcellno}}= \int \left(\VAratea({x})\VAageneriek{ \VAsourceatev}({v})+\VArates({x})\int \VAsgeneriek{\VAsourceatev}({v}\rightarrow{v}')\VApostcolveldistrvpr\text{d}{v}'\right)\VAgridcellinddefgridcellno({x}) \phi_\text{n}({x},{v})\text{d}{v}\text{d}{x}\,, \label{eq:sources_fromcols_gen_FV} \end{equation} with $\VAgridcellinddefgridcellno({x})$ the characteristic function of the grid cell $\VAgridcelldefgridcellno$ and with $\VAageneriek{\VAsourceatev}({v})$ and $\VAsgeneriek{\VAsourceatev}({v}\rightarrow {v}')$ the exchanges due to a particle with velocity $v$ that is absorbed and due to a particle with velocity $v$ that is scattered into a particle with velocity $v'$, respectively. In this series of papers, we study the mass and momentum source extensively; the source contributions for these are \begin{align} \VAamass{\VAsourceatev}({v})&=1\,, & &\VAsmass{\VAsourceatev}({v}\rightarrow {v}')=0\,,\label{eq:sources_sourceatev_mass}\\ \VAamom{\VAsourceatev}({v})&={v}\,, & &\VAsmom{\VAsourceatev}({v}\rightarrow {v}')={v}-{v}'\,.\label{eq:sources_sourceatev_mom} \end{align} A Monte Carlo approximation replaces Equation~\eqref{eq:sources_fromcols_gen_FV} by contributions at events, averaged over $\VAnofparticles$ particle paths $\{\VAparticlepath_\VAparticleno\}_{\VAparticleno=1}^\VAnofparticles$. In general, each of the $\VAnofevents^\VAparticleno+1$ events of a particle path can result in a \emph{score} for the source term estimator in the grid cell $\VAgridcellno$. Depending on the event type and the estimator type, the scoring can differ.
We denote the score at an event as $\VAsourceatev^{\VAgridcellno,\VAplaceholderest}_{\text{e},\VAplaceholderquant}$, with the superscript $\VAplaceholderest$ serving as a placeholder for the estimator type, the $\VAplaceholderev$ for the event type, and the subscript $\VAplaceholderquant$ for the estimated quantity. The resulting Monte Carlo estimator then reads \begin{multline} \VAMCapprox{S}^{{\VAgridcellno},\VAplaceholderest}_\VAplaceholderquant=\frac{1}{\VAnofparticles}\sum_{\VAparticleno=1}^\VAnofparticles\sum_{\VAeventno=0}^{\VAnofevents^\VAparticleno} \left(\vphantom{\VAsourceatev^{\VAgridcellno,\VAplaceholderest}_k}\right.\underbrace{\VAageneriek{\VAsourceatev}^{\VAgridcellno,\VAplaceholderest}(\VAparticlepath^\VAparticleno,\VAeventno)c_\VAeventno^\VAparticleno a_\VAeventno^\VAparticleno}_\text{absorption}+\underbrace{\VAsgeneriek{\VAsourceatev}^{\VAgridcellno,\VAplaceholderest}(\VAparticlepath^\VAparticleno,\VAeventno)c_\VAeventno^\VAparticleno(1-a_\VAeventno^\VAparticleno)}_\text{scattering}\left.\vphantom{\VAsourceatev^{\VAgridcellno,\VAplaceholderest}_k}\right.\\ \left.+\underbrace{\VAegeneriek{\VAsourceatev}^{\VAgridcellno,\VAplaceholderest}(\VAparticlepath^\VAparticleno,\VAeventno)b_\VAeventno^\VAparticleno\beta_\VAeventno^\VAparticleno}_\text{exit at boundary}+\underbrace{\VArgeneriek{\VAsourceatev}^{\VAgridcellno,\VAplaceholderest}(\VAparticlepath^\VAparticleno,\VAeventno)b_\VAeventno^\VAparticleno(1-\beta_\VAeventno^\VAparticleno)}_\text{boundary reflection}+\underbrace{\VAggeneriek{\VAsourceatev}^{\VAgridcellno,\VAplaceholderest}(\VAparticlepath^\VAparticleno,\VAeventno)g_\VAeventno^\VAparticleno}_\text{grid cell crossing}\right)\,, \label{eq:sources_frompaths_general_an} \end{multline} where all the symbols are introduced in Section~\ref{sec:iif_modelsim}. The Monte Carlo approximation in Equation~\eqref{eq:sources_frompaths_general_an} is for analog particle paths.
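The double sum over particles and events can be sketched as follows, using the event-type indicator variables introduced in the previous section (a hypothetical sketch: the dictionary layout of an event and the \texttt{score\_*} callables, which stand in for the estimator-specific scores, are our own assumptions):

```python
def estimate_source(paths, score_a, score_s, score_e, score_r, score_g):
    """Average the event scores over all particle paths (analog version).

    Each path is a list of events; each event is a dict holding the indicator
    variables a (absorbed), c (collision), b (boundary hit), beta (exit at
    boundary), and g (grid cell crossing)."""
    total = 0.0
    for path in paths:
        for k, ev in enumerate(path):
            total += (score_a(path, k) * ev["c"] * ev["a"]             # absorption
                      + score_s(path, k) * ev["c"] * (1 - ev["a"])     # scattering
                      + score_e(path, k) * ev["b"] * ev["beta"]        # exit at boundary
                      + score_r(path, k) * ev["b"] * (1 - ev["beta"])  # reflection
                      + score_g(path, k) * ev["g"])                    # grid cell crossing
    return total / len(paths)
```

For non-analog paths the score callables would additionally carry the appropriate weights, as discussed below.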
For non-analog particle paths, the scores should be multiplied by the appropriate weights. Most of the different estimators can be used for the three types of simulations we provided in Section~\ref{sec:iif_modelsim}. Combined, a simulation type and source term estimator type form a \emph{source term estimation procedure}. In the remainder of this section, we will present four types of estimators: the analog estimator (Section~\ref{subsec:iif_est_a}), collision estimator (Section~\ref{subsec:ii_est_c}), next-event estimator (Section~\ref{subsec:ii_est_ne}), and track-length estimator (Section~\ref{subsec:ii_est_tl}). For each of these estimators, we will present the source contributions at every event for the analog simulation, discuss how to apply it to the two non-analog simulation types, and present the fundamental cases of estimating the expected number of absorption events, with \begin{equation} \VAaabs{\VAsourceatev}({v})=1\quad \text{and}\quad\VAsabs{\VAsourceatev}({v}\rightarrow {v}')=0\,,\label{eq:iif_est_nofabssources} \end{equation} and the expected number of scattering events, with \begin{equation} \VAascat{\VAsourceatev}({v})=0\quad\text{and}\quad\VAsscat{\VAsourceatev}({v}\rightarrow {v}')=1\,,\label{eq:iif_est_nofscatsources} \end{equation} for the practically relevant case of a piecewise constant plasma. All other potential quantities of interest can be easily found by multiplying with the actual $\VAageneriek{\VAsourceatev}({v})$ and $\VAsgeneriek{\VAsourceatev}({v}\rightarrow {v}')$ given in Equation~\eqref{eq:sources_sourceatev_mass} for mass and~\eqref{eq:sources_sourceatev_mom} for momentum. Then, in Section~\ref{subsec:iif_est_overview}, we present an overview of how to execute the resulting source term estimation procedures in the practically relevant case of a piecewise constant plasma. 
As for the simulation techniques, we will restrict ourselves to a short review of the estimators and not derive them in full, which has been done in~\cite[Chapter 2]{mortier2020phdthesis}. \subsection{Analog estimator\label{subsec:iif_est_a}} The most straightforward estimator for $\VAgeneriek{S}^{{\VAgridcellno}}$ is the analog estimator. An analog estimator scores according to the physical event that is being simulated. In an analog simulation, an analog estimator scores \begin{equation} \VAageneriek{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath^\VAparticleno,\VAeventno)=\VAageneriek{\VAsourceatev}(v_\VAeventno)\VAgridcellinddefgridcellno({x}_\VAeventno^\VAparticleno) \label{eq:ii_est_a_scoreatev_abs} \end{equation} at each absorption collision and \begin{equation} \VAsgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath^\VAparticleno,\VAeventno)=\VAsgeneriek{\VAsourceatev}(v_\VAeventno\rightarrow v_{\VAeventno+1})\VAgridcellinddefgridcellno({x}_\VAeventno^\VAparticleno) \label{eq:ii_est_a_scoreatev_scat} \end{equation} at each scattering collision. The superscript \texttt{a} denotes the analog estimator. At other events, there is no physical exchange with the plasma, hence $\VAegeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath^\VAparticleno,\VAeventno)=\VArgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath^\VAparticleno,\VAeventno)=\VAggeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath^\VAparticleno,\VAeventno)=0$. \paragraph{Extension to non-analog simulations.} Non-analog simulations are not relevant for an analog estimator, since the events lose their physical meaning. 
\paragraph{Fundamental cases.} The source term estimator for the expected number of absorption events is readily found by using Equation~\eqref{eq:iif_est_nofabssources} in Equations~\eqref{eq:ii_est_a_scoreatev_abs} and~\eqref{eq:ii_est_a_scoreatev_scat}, giving $\VAaabs{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath,\VAeventno)=\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)$ and $\VAsabs{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath,\VAeventno)=0$. Similarly, for the expected number of scattering events with Equation~\eqref{eq:iif_est_nofscatsources}, we find $\VAascat{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath,\VAeventno)=0$ and $\VAsscat{\VAsourceatev}^{\VAgridcellno,\texttt{a}}(\VAparticlepath,\VAeventno)=\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)$. The plasma being piecewise constant has no impact on what has to be scored here. We denote the analog estimator for the number of absorption events in an analog simulation by \texttt{a\_a\_abs} and for the number of scattering events by \texttt{a\_a\_sc}. \subsection{Collision estimator\label{subsec:ii_est_c}} A first alternative estimator does not distinguish between absorption and scattering collisions, but scores the expected contribution due to a collision, \begin{equation} \VAcgeneriek{\VAsourceatev}({x},{v})=\frac{\VAratea({x})}{\VAratet({x})}\VAageneriek{\VAsourceatev}({v})+\frac{\VArates({x})}{\VAratet({x})}\int \VAsgeneriek{\VAsourceatev}({v}\rightarrow{v}')\VApostcolveldistrvpr\text{d}{v}' \label{eq:ii_est_c_sourceatcol} \end{equation} at every collision with the plasma in the considered grid cell, i.e., $\VAcgeneriek{\VAsourceatev}({x},{v})\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno^\VAparticleno)$ in general. This estimator is called the collision estimator and we use \texttt{c} to refer to it.
Note that the integral over the post-collisional velocity ${v}'$ in Equation~\eqref{eq:ii_est_c_sourceatcol} is not difficult to compute: since $\VAsgeneriek{\VAsourceatev}({v}\rightarrow{v}')$ is a simple polynomial in ${v}'$, the solution of the inner integral is a combination of moments of $\VApostcolveldistr^{\,{\VAgridcellno}}({v}')$, which are known from the plasma simulation. For instance, if the momentum is the estimated quantity, $\VAamom{\VAsourceatev}({v})={v}$ and $\VAsmom{\VAsourceatev}({v}\rightarrow {v}')={v}-{v}'$ and Equation~\eqref{eq:ii_est_c_sourceatcol} becomes \begin{equation} \VAcmom{\VAsourceatev}({x},{v})=\frac{\VAratea({x})}{\VAratet({x})}{v}+\frac{\VArates({x})}{\VAratet({x})}({v}-\VAplasmaspeed({x}))\,,\label{eq:ii_est_c_sourceatcol_mom} \end{equation} with $\VAplasmaspeed({x})$ the expected post-collisional velocity at position ${x}$, a quantity that is computed by the plasma simulation. With $\VAcgeneriek{\VAsourceatev}({x},{v})$ as defined in Equation~\eqref{eq:ii_est_c_sourceatcol}, the scores for a collision estimator can be written as \begin{equation} \VAageneriek{\VAsourceatev}^{\VAgridcellno,\texttt{c}}(\VAparticlepath,\VAeventno)=\VAsgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{c}}(\VAparticlepath,\VAeventno) =\VAcgeneriek{\VAsourceatev}(\VAdiscreteposeend_\VAeventno,\VAdiscreteveleend_\VAeventno)\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno^\VAparticleno)\,,\label{eq:ii_est_c_scoreatev} \end{equation} and $\VAegeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{c}}(\VAparticlepath,\VAeventno)=\VArgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{c}}(\VAparticlepath,\VAeventno)=\VAggeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{c}}(\VAparticlepath,\VAeventno)=0$. \paragraph{Extension to non-analog simulations.} For the non-analog collision type simulation, the only change consists in multiplying the score at each collision event $k$ with the weight at the collision, being ${\VAdiscreteweight}_{\VAeventno-1}$. 
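The momentum score in Equation~\eqref{eq:ii_est_c_sourceatcol_mom} can be evaluated directly; a sketch (the function and variable names are ours, and we assume here, as in the model, that the total rate is the sum of the absorption and scattering rates):

```python
def collision_score_momentum(v, rate_a, rate_s, u_plasma):
    """Expected momentum exchange scored at a collision:
    (rate_a/rate_t) * v + (rate_s/rate_t) * (v - u_plasma),
    assuming rate_t = rate_a + rate_s; u_plasma is the expected
    post-collisional velocity supplied by the plasma simulation."""
    rate_t = rate_a + rate_s
    return (rate_a / rate_t) * v + (rate_s / rate_t) * (v - u_plasma)
```

The integral over the post-collisional velocity has disappeared: only the first moment of the post-collisional distribution, `u_plasma`, is needed.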
In the non-analog track-length type simulation, the score at each collision event $k$ has to be multiplied by ${\VAdiscreteweight}_\VAeventno\frac{\VAratet({x})}{\VArates({x})}$, where the second factor compensates for the fact that the absorption collisions with the plasma do not occur. \paragraph{Fundamental cases.} When the plasma is piecewise constant, the reaction rates are constants $\VAratea^\VAgridcellno$, $\VArates^\VAgridcellno$, and $\VAratet^\VAgridcellno$ in each grid cell $\VAgridcellno$ and the post-collisional velocity distribution is a position-independent function $\VApostcolveldistr^{\,{\VAgridcellno}}({v}')$. Then, we can rewrite the score at a collision as \begin{equation} \VAcgeneriek{\VAsourceatev}({x},{v})\VAgridcellinddefgridcellno({x})=\VAcgeneriek{\VAsourceatev}^\VAgridcellno({v})\VAgridcellinddefgridcellno({x})=\left(\frac{\VAratea^\VAgridcellno}{\VAratet^\VAgridcellno}\VAageneriek{\VAsourceatev}({v})+\frac{\VArates^\VAgridcellno}{\VAratet^\VAgridcellno}\int \VAsgeneriek{\VAsourceatev}({v}\rightarrow{v}')\VApostcolveldistr^{\,\VAgridcellno}({v}')\text{d}{v}'\right)\VAgridcellinddefgridcellno({x})\,. \label{eq:ii_est_c_sourceatcol_piecewiseconst} \end{equation} When we want to estimate the number of absorption and scattering events, this means that $\frac{\VAratea^\VAgridcellno}{\VAratet^\VAgridcellno}$ and $\frac{\VArates^\VAgridcellno}{\VAratet^\VAgridcellno}$, respectively, have to be scored at every collision. When applying these two estimators to the same simulation, the results only differ by a constant factor. Consequently, the statistical properties of a collision estimator for the expected number of absorption events and for the expected number of scattering events are identical, and shared by a collision estimator for the expected total number of collisions. We thus restrict ourselves to only one of these fundamental cases, namely a collision estimator for the total number of collisions.
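Summarising, the collision-estimator score for the total number of collisions depends only on the simulation type; a sketch (the \texttt{sim\_type} labels follow the text's \texttt{a}/\texttt{nac}/\texttt{natl} shorthand, the remaining names are ours):

```python
def collision_score_total(sim_type, w_prev, w_curr, rate_s, rate_t):
    """Score at a collision for a collision estimator of the expected total
    number of collisions in a piecewise constant grid cell.

    w_prev: weight just before the collision (used for "nac"),
    w_curr: weight at the collision (used for "natl")."""
    if sim_type == "a":      # analog: every simulated collision scores 1
        return 1.0
    if sim_type == "nac":    # non-analog collision type
        return w_prev
    if sim_type == "natl":   # non-analog track-length type: compensate for the
        return w_curr * rate_t / rate_s  # absorption collisions that never occur
    raise ValueError(f"unknown simulation type: {sim_type}")
```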
Since we can apply the collision estimator to all three simulation types, we obtain three estimation procedures: \texttt{a\_c}, \texttt{nac\_c}, and \texttt{natl\_c}. \subsection{Next-event estimator\label{subsec:ii_est_ne}} The next-event estimator looks one step further into the future. As the name indicates, the estimator looks ahead as far as the next event and scores, at the beginning of every flight path, the expected contribution of that event (collision with the plasma, boundary hit, or grid cell crossing). In the general case, this type of estimator requires the computation of an integral at every scoring instant, rendering this estimator very expensive. In the relevant case of a piecewise constant plasma, however, this integral has a simple analytical solution. In general, the expected contribution at $x$ for a particle with velocity $v$ during the infinitesimal time interval $\text{d}\tau$ is \begin{equation} \left(\VAratea({x})\VAageneriek{\VAsourceatev}({v})+\VArates({x})\int \VAsgeneriek{\VAsourceatev}({v}\rightarrow{v}')\VApostcolveldistrvpr\text{d}{v}'\right)\text{d}\tau=\VAratet(x)\VAcgeneriek{\VAsourceatev}({x},{v})\text{d}\tau\,. \end{equation} If a particle starts a flight at $(x_\VAeventno,v_\VAeventno)$, it will retain its velocity for the entirety of the flight path and it can thus only reach positions that are on the line \begin{equation} \VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}d\,,\qquad 0\leq d\,.
\end{equation} Since a grid cell crossing represents an event, the maximal distance the particle could potentially travel is the distance to the edge of the current grid cell, \begin{equation} D_\VAeventno^\VAgridcellno=D^\VAgridcellno(\VAdiscreteposeend_\VAeventno,\VAdiscreteveleend_\VAeventno)=\max\left(d|\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}d\in\VAgridcelldefgridcellno\right)\,, \label{eq:est_ne_bigD} \end{equation} when $\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)=1$. For convenience, we assume here that the domain boundaries are also grid cell boundaries. Hence, all the positions from $\VAdiscreteposeend_\VAeventno$ to $\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}D_\VAeventno^\VAgridcellno$ can be reached in this flight. However, each position $\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}d$, $0\leq d\leq D_\VAeventno^\VAgridcellno$, is only reached by the fraction of particles that did not collide with the plasma beforehand, which is an expected fraction \begin{equation} e^{-\int_0^d\VAratet\left(\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}\ell\right)\frac{\text{d}\ell}{|\VAdiscreteveleend_\VAeventno|}}\, \end{equation} in analog simulations.
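In one spatial dimension, the distance $D_\VAeventno^\VAgridcellno$ to the grid cell edge reduces to a one-line expression; a sketch (variable names are ours):

```python
def distance_to_cell_edge(x, v, cell_left, cell_right):
    """Maximal distance a particle at position x with velocity v can travel
    before leaving the 1D grid cell [cell_left, cell_right]."""
    # moving right: distance to the right edge; moving left: to the left edge
    return (cell_right - x) if v > 0.0 else (x - cell_left)
```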
Combining the above information leads to the following expected contribution in the grid cell $\VAgridcelldefgridcellno$ from $(\VAdiscreteposeend_\VAeventno,\VAdiscreteveleend_\VAeventno)$ until the next event: \begin{multline} \VAsgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)=\VArgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)=\VAggeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)\\ =\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)\int_0^{D_\VAeventno^\VAgridcellno}\VAratet\left(\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}d\right)\VAcgeneriek{\VAsourceatev}\left(\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}d,\VAdiscreteveleend_\VAeventno\right)e^{-\int_0^d\VAratet\left(\VAdiscreteposeend_\VAeventno+\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}\ell\right)\frac{\text{d}\ell}{|\VAdiscreteveleend_\VAeventno|}}\frac{\text{d}d}{|\VAdiscreteveleend_\VAeventno|}\,\label{eq:iif_est_ne_scorewithint} \end{multline} and $\VAageneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)=\VAegeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)=0$. The remaining integral in Equation~\eqref{eq:iif_est_ne_scorewithint} becomes tractable when the plasma background is of a simple form. 
For instance, in the practically relevant case with piecewise constant plasma, Equation~\eqref{eq:iif_est_ne_scorewithint} simplifies to \begin{equation} \VAsgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)=\VArgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)=\VAggeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{ne}}(\VAparticlepath,\VAeventno)= \VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)\VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)\left(1-e^{-\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}D_\VAeventno^\VAgridcellno}\right)\,,\label{eq:iif_est_ne_score_const} \end{equation} with $\VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)$ as in Equation~\eqref{eq:ii_est_c_sourceatcol_piecewiseconst}. \paragraph{Extension to non-analog simulations.} In an \texttt{nac} or \texttt{natl} simulation, the only difference lies in applying the correct weight at all the scattering collisions, reflections, and grid cell crossings, namely ${\VAdiscreteweight}_\VAeventno$. \paragraph{Fundamental cases.} In the two fundamental cases of estimating the expected number of absorption and scattering events for a piecewise constant plasma, we can again restrict ourselves to estimating the expected total number of collisions (with $\VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)=1$), since the cases only differ by a constant factor. Applying a next-event estimator for the total number of collisions to the three simulation types leads to three different estimation procedures: \texttt{a\_ne}, \texttt{nac\_ne}, and \texttt{natl\_ne}. \subsection{Track-length estimator\label{subsec:ii_est_tl}} In contrast to the three previous estimator types, the track-length estimator only works if the plasma background has a simple shape. This is true for the piecewise constant plasma background we consider here.
The track-length estimator then scores \begin{multline} \VAsgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)= \VArgeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)= \VAggeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)\\ =\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)\VAgridcellinddefgridcellno(\VAdiscreteposeend_{\VAeventno+1})\VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}|\VAdiscreteposeend_{\VAeventno+1}-\VAdiscreteposeend_\VAeventno|\,,\label{eq:ii_est_tl_sourceatev} \end{multline} and \begin{equation} \VAageneriek{\VAsourceatev}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)=\VAegeneriek{\VAsourceatev}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)=0\,. \end{equation} The name track-length arises from the fact that the score is related to the travelled length $|\VAdiscreteposeend_{\VAeventno+1}-\VAdiscreteposeend_{\VAeventno}|$. The factor by which the travelled length is multiplied, \begin{equation} \VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}\,,\label{eq:iif_est_tl_expectedscoreperlength} \end{equation} thus expresses the expected contribution to the source per unit travelled length. Using the (mixed) probability distribution for the travelled length $d_\VAeventno=|\VAdiscreteposeend_{\VAeventno+1}-\VAdiscreteposeend_{\VAeventno}|$, \begin{equation} \text{P}(d_\VAeventno|x_\VAeventno,v_\VAeventno)=\left\{\begin{array}{ll} \frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|} e^{-\VAratet^\VAgridcellno \frac{d_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}}\text{d}d_\VAeventno & \text{if }0\leq d_\VAeventno<D_\VAeventno^\VAgridcellno\,, \\ e^{-\VAratet^\VAgridcellno\frac{D_\VAeventno^\VAgridcellno}{|v_\VAeventno|}} & \text{if }d_\VAeventno=D_\VAeventno^\VAgridcellno\,, \end{array}\right.
\end{equation} the estimator from Equation~\eqref{eq:ii_est_tl_sourceatev} can be seen to be unbiased by taking the expected value over all possible distances $d_\VAeventno$: \begin{align} \mathbb{E}\left[\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}d_\VAeventno\right]&=\int_0^{D_\VAeventno^\VAgridcellno}\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}d\, \frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}e^{-\VAratet^\VAgridcellno\frac{d}{|\VAdiscreteveleend_\VAeventno|}}\text{d}d+\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}D_\VAeventno^\VAgridcellno e^{-\VAratet^\VAgridcellno\frac{D_\VAeventno^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}}\,,\\ &=1-\left(1+\VAratet^\VAgridcellno\frac{D_\VAeventno^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}\right)e^{-\VAratet^\VAgridcellno\frac{D_\VAeventno^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}}+\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}D_\VAeventno^\VAgridcellno e^{-\VAratet^\VAgridcellno\frac{D_\VAeventno^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}}\,,\\ &=1-e^{-\VAratet^\VAgridcellno\frac{D_\VAeventno^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}}\,, \end{align} which equals the corresponding factor in Equation~\eqref{eq:iif_est_ne_score_const}, proving unbiasedness. \paragraph{Extension to non-analog simulations.} For a non-analog estimation procedure, the travelled length in a grid cell should be multiplied by the corresponding weight. In an \texttt{nac} simulation, this weight is constant during a single flight path and equals the weight at the beginning. In an \texttt{natl} simulation, the weight changes continuously along the path.
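The unbiasedness argument above can also be checked numerically: averaging the track-length score over sampled flight lengths reproduces the next-event factor $1-e^{-R_t D/|v|}$ (a self-contained sketch; all names and the sampling setup are ours):

```python
import math
import random

def mean_tracklength_score(rate_t, speed, D, n=200_000, seed=1):
    """Empirical mean of the track-length score (rate_t/speed)*d, with the
    flight length d exponentially distributed (rate rate_t/speed per unit
    length) and capped at the cell-edge distance D."""
    rng = random.Random(seed)
    rate_per_length = rate_t / speed    # expected collisions per unit length
    total = 0.0
    for _ in range(n):
        d = min(rng.expovariate(rate_per_length), D)  # flight length in cell
        total += rate_per_length * d                  # track-length score
    return total / n
```

For example, with `rate_t = speed = D = 1` the empirical mean is close to $1-e^{-1}\approx 0.632$, the corresponding next-event score.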
When the plasma is constant during that flight, the weight during the time interval $[{T}_\VAeventno,{T}_{\VAeventno+1}]$ is expressed, according to Equation~\eqref{eq:iif_modelsim_natl_pwconst_weight}, as \begin{equation} {w}_\VAeventno e^{-\VAratea^\VAgridcellno\frac{\ell}{|{\VAdiscreteveleend}_\VAeventno|}}\,, \end{equation} with $\ell\in[0,{d}_\VAeventno]$ and ${d}_\VAeventno=|{\VAdiscreteposeend}_{\VAeventno+1}-{\VAdiscreteposeend}_\VAeventno|$. Using this correct weight during every infinitesimal travelled length $\text{d}\ell$ gives \begin{equation} \int_0^{{d}_\VAeventno}{w}_\VAeventno e^{-\VAratea^\VAgridcellno\frac{\ell}{|{\VAdiscreteveleend}_\VAeventno|}}\text{d}\ell={w}_\VAeventno\frac{|{\VAdiscreteveleend}_\VAeventno|}{\VAratea^\VAgridcellno}\left(1-e^{-\VAratea^\VAgridcellno\frac{{d}_\VAeventno}{|{\VAdiscreteveleend}_\VAeventno|}}\right) \end{equation} as a weighted travelled distance. By multiplying with the expected score per travelled length from Equation~\eqref{eq:iif_est_tl_expectedscoreperlength}, we find the correct score for an \texttt{natl\_tl} estimation procedure to be \begin{equation} \VAsgeneriek{{\VAsourceatev}}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)=\VArgeneriek{{\VAsourceatev}}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)=\VAggeneriek{{\VAsourceatev}}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)= \VAgridcellinddefgridcellno({\VAdiscreteposeend}_\VAeventno)\VAgridcellinddefgridcellno({\VAdiscreteposeend}_{\VAeventno+1})\VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)\VAnatl{\VAdiscreteweight}_\VAeventno\frac{\VAratet^\VAgridcellno}{\VAratea^\VAgridcellno}\!\!\left(\!1-e^{-\frac{\VAratea^\VAgridcellno|{\VAdiscreteposeend}_{\VAeventno+1}-{\VAdiscreteposeend}_\VAeventno|}{|{\VAdiscreteveleend}_{\VAeventno}|}}\right) \label{eq:ii_est_tl_sourceatev_natl}\,
\end{equation} and \begin{equation} \VAageneriek{{\VAsourceatev}}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)=\VAegeneriek{{\VAsourceatev}}^{\VAgridcellno,\texttt{tl}}(\VAparticlepath,\VAeventno)=0\,. \end{equation} \paragraph{Fundamental cases.} As for the collision and next-event estimators, changing the estimated quantity only modifies the factor $\VAcgeneriek{\VAsourceatev}^\VAgridcellno(\VAdiscreteveleend_\VAeventno)$ in Equation~\eqref{eq:ii_est_tl_sourceatev} (or similar equations for \texttt{nac} and \texttt{natl}). The track-length estimators for the expected number of absorption and scattering events thus have statistical properties identical to those of a track-length estimator for the expected total number of collisions. We consequently only have to consider three different estimation procedures for the expected total number of collisions: \texttt{a\_tl}, \texttt{nac\_tl}, and \texttt{natl\_tl}. \subsection{Overview of the estimation procedures applied to the fundamental cases\label{subsec:iif_est_overview}} In the previous four sections we have discussed the estimation procedures that are studied in this paper series. We have paid particular attention to the practically relevant case of a piecewise constant plasma and the estimation of the expected number of absorption and scattering events, since these fundamental cases, combined appropriately with $\VAageneriek{ \VAsourceatev}({v})$ and $\VAsgeneriek{\VAsourceatev}({v}\rightarrow{v}')$, lead to the quantities of interest. We found in the previous sections that these fundamental cases result in eleven estimation procedures with different statistical behaviour. As we have discussed in Sections~\ref{subsec:ii_est_c}--\ref{subsec:ii_est_tl}, the statistical properties of estimating the expected number of absorption and scattering events for collision, next-event, and track-length estimators are identical to each other and to those of estimators for the expected total number of collisions.
For that reason, we only consider the latter. For the analog estimator, however, we do consider the estimators for the expected number of absorption events and for the expected number of scattering events separately, since they have different statistical properties. Table~\ref{tab:overview_scores} provides an overview of what should be scored at every event by each of these fundamental estimation procedures. The estimation procedures are applied to estimate the expected total number of collisions, except for the analog estimators, which estimate the total number of absorption collisions (\texttt{a\_a\_abs}) and the total number of scattering collisions (\texttt{a\_a\_sc}), respectively. Together with Table~\ref{tab:overview_weightchanges}, the table in this section provides a full overview of how to practically implement these estimation procedures. \begin{table}[H] \centering \begin{tabular}{ccccc} Estimator & $\VAa{\VAsourceatev}^\VAgridcellno$ & $\VAs{\VAsourceatev}^\VAgridcellno$ & $\VAr{\VAsourceatev}^\VAgridcellno=\VAg{\VAsourceatev}^\VAgridcellno$ & $\VAe{\VAsourceatev}^\VAgridcellno$ \\ \hline \texttt{a\_a\_abs} & 1 & 0 & 0 & 0 \\ \texttt{a\_a\_sc} & 0 & 1 & 0 & 0 \\ \texttt{a\_c} & 1 & 1 & 0 & 0 \\ \texttt{nac\_c} & / & ${\VAdiscreteweight}_{\VAeventno-1}$ & 0 & 0 \\ \texttt{natl\_c} & / & ${\VAdiscreteweight}_\VAeventno\frac{\VAratet^\VAgridcellno}{\VArates^\VAgridcellno}$ & 0& 0 \\ \texttt{a\_tl} & 0 & $\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_{\VAeventno}|} d_\VAeventno$ & $\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_{\VAeventno}|} d_\VAeventno$ &0\\ \texttt{nac\_tl} & / & ${\VAdiscreteweight}_{\VAeventno}\frac{\VAratet^\VAgridcellno}{|{\VAdiscreteveleend}_{\VAeventno}|} {d}_\VAeventno$ & ${\VAdiscreteweight}_{\VAeventno}\frac{\VAratet^\VAgridcellno}{|{\VAdiscreteveleend}_{\VAeventno}|} {d}_\VAeventno$ & 0\\ \texttt{natl\_tl} & / &
${\VAdiscreteweight}_{\VAeventno}\frac{\VAratet^\VAgridcellno}{\VAratea^\VAgridcellno}\left(1-e^{-\frac{\VAratea^\VAgridcellno}{|{\VAdiscreteveleend}_{\VAeventno}|}{d}_\VAeventno}\right)$ & ${\VAdiscreteweight}_{\VAeventno}\frac{\VAratet^\VAgridcellno}{\VAratea^\VAgridcellno}\left(1-e^{-\frac{\VAratea^\VAgridcellno}{|{\VAdiscreteveleend}_{\VAeventno}|}{d}_\VAeventno}\right)$& 0 \\ \texttt{a\_ne} & 0 & $1-e^{-\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}D_\VAeventno^\VAgridcellno}$ & $1-e^{-\frac{\VAratet^\VAgridcellno}{|\VAdiscreteveleend_\VAeventno|}D_\VAeventno^\VAgridcellno}$ & 0 \\ \texttt{nac\_ne} & / & ${\VAdiscreteweight}_\VAeventno\left(1-e^{-\frac{\VAratet^\VAgridcellno}{|{\VAdiscreteveleend}_\VAeventno|}{D}_\VAeventno^\VAgridcellno}\right)$ & ${\VAdiscreteweight}_\VAeventno\left(1-e^{-\frac{\VAratet^\VAgridcellno}{|{\VAdiscreteveleend}_\VAeventno|}{D}_\VAeventno^\VAgridcellno}\right)$& 0 \\ \texttt{natl\_ne} & / &${\VAdiscreteweight}_\VAeventno\left(1-e^{-\frac{\VAratet^\VAgridcellno}{|{\VAdiscreteveleend}_\VAeventno|}{D}_\VAeventno^\VAgridcellno}\right)$ & ${\VAdiscreteweight}_\VAeventno\left(1-e^{-\frac{\VAratet^\VAgridcellno}{|{\VAdiscreteveleend}_\VAeventno|}{D}_\VAeventno^\VAgridcellno}\right)$& 0 \end{tabular} \caption{An overview of how the different estimation procedures score at events in grid cell $\VAgridcellno$ with $d_\VAeventno=|\VAdiscreteposeend_{\VAeventno+1}-\VAdiscreteposeend_{\VAeventno}|$ and $D_\VAeventno^\VAgridcellno=\max(d|\VAdiscreteposeend_{\VAeventno}+d\frac{\VAdiscreteveleend_\VAeventno}{|\VAdiscreteveleend_\VAeventno|}\in\VAgridcelldefgridcellno)$. The actual scores are the factors in this table multiplied by the factor $\VAgridcellinddefgridcellno(\VAdiscreteposeend_\VAeventno)$, and by an additional factor $\VAgridcellinddefgridcellno(\VAdiscreteposeend_{\VAeventno+1})$ for the track-length estimators.
These were left out for clarity.} \label{tab:overview_scores} \end{table} \section{Best estimation procedure in a simplified setting\label{sec:iif_bestest_1D0D}} To estimate the different source terms, each of the eleven different source term estimation procedures introduced in the previous sections can be used. The main question this paper tries to answer is which of these should be selected, given knowledge of the background plasma parameters such as the event rates and post-collisional velocity distribution. To do so, we study the statistical and computational properties of each of the estimation procedures and determine, based on these, which estimation procedure performs best. This supports a well-founded selection of the estimation procedure given the problem setting, and even enables the selection of different estimation procedures for different regions of the problem domain. This can be beneficial when the background is very heterogeneous or when the background changes during the simulation, for instance due to iterations of the coupled FV/MC system or due to grid refinement. In this section, we first consider a simplified forward-backward scattering setting, which admits closed sets of ordinary differential equations (ODEs) for the statistical error and the computational cost of the eleven estimation procedures under study. With these results, we will compare the different estimation procedures and indicate the best estimation procedure throughout the parameter space. In Section~\ref{sec:ii_res_1D1D} we will provide several numerical extensions to mass and momentum source term estimation in a more realistic setting. In Section~\ref{subsec:iif_bestest_1D0D_1D0D}, we first present the simplified setting for which we have analytical results. Then, in Section~\ref{subsec:iif_bestest_1D0D_ii}, we present the invariant imbedding methodology that will yield closed sets of ODEs.
By evaluating these ODEs, we find the best estimation procedure depending on the model parameters. The resulting partition of the parameter space based on the best mass source term estimation procedure is presented in Section~\ref{subsec:iif_bestest_1D0D_bestest}, both when considering the statistical error and when considering the computational cost. \subsection{Simplified 1D0D simulation\label{subsec:iif_bestest_1D0D_1D0D}} To facilitate an analytical study of the performance of the different estimation procedures, we simplify the one-dimensional model to be spatially homogeneous and to only have forward-backward scattering. Concretely, we restrict the velocity to being $\pm 1$, making the model zero-dimensional in velocity. Since it is still one-dimensional in space, we refer to this model as \emph{1D0D}. The post-collisional velocity in the {1D0D} setting is completely determined by one parameter value: the probability of going right after a collision, $\VAPr$. This reduces the post-collision velocity distribution to \begin{equation} \VApostcolveldistr^\text{1D0D}(v)=\VAPr\delta(v-1)+(1-\VAPr)\delta(v+1)\,, \end{equation} with $\delta$ the Dirac delta. The constant size of the velocity and the space-independence of the rates result in constant values $\VAcsa=\frac{\VAratea}{|v|}$, $\VAcss=\frac{\VArates}{|v|}$, and $\VAcst=\frac{\VAratet}{|v|}$. We call these quantities the cross-sections and use them as parameters in the sequel. We use as an initial condition that all particles enter from the left, \begin{equation} \VAnsourceveldistr^\text{1D0D}(v)=\delta(v-1)\,. \end{equation} Furthermore, we take the probability of being reflected to be 0 at each end of the domain, hence $\alpha(r)\equiv 1$.
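As a concrete illustration, the 1D0D dynamics just described can be simulated directly. The following is a minimal sketch (function and parameter names are ours, not part of any existing code base) of an analog 1D0D simulation that records both a collision-estimator score and a track-length-estimator score for the number of collisions in the slab:

```python
import random

def simulate_1d0d_analog(n_particles, L, rate_t, rate_a, p_right, seed=0):
    """Analog 1D0D simulation: speed |v| = 1, velocity +1 or -1.
    Returns per-particle (collision estimator, track-length estimator)
    scores for the total number of collisions in the slab [0, L]."""
    rng = random.Random(seed)
    p_absorb = rate_a / rate_t  # probability that a collision is an absorption
    scores = []
    for _ in range(n_particles):
        x, v = 0.0, 1.0          # all particles enter from the left, going right
        coll_score, track = 0.0, 0.0
        while True:
            d = rng.expovariate(rate_t)   # free path; |v| = 1
            x_new = x + v * d
            if x_new < 0.0 or x_new > L:  # particle leaks out of the slab
                track += abs((0.0 if x_new < 0.0 else L) - x)
                break
            track += d
            coll_score += 1.0             # collision estimator scores 1 per collision
            if rng.random() < p_absorb:   # absorbed: path ends
                break
            v = 1.0 if rng.random() < p_right else -1.0
            x = x_new
        scores.append((coll_score, rate_t * track))  # track-length score R_t * d
    return scores
```

Averaging either score component over the particles yields an unbiased estimate of the same quantity, the expected number of collisions in the slab, which is the property the comparisons in this section exploit.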
This reduced problem has three remaining dimensions to its parameter space: the survival probability at a collision $\VAcss/\VAcst=\VArates/\VAratet$, the nondimensionalized total collision rate $\VAcst\VAdomlength=\frac{\VAratet}{|v|}\VAdomlength$ and the post-collisional parameter $\VAPr$. \subsection{Invariant imbedding procedure\label{subsec:iif_bestest_1D0D_ii}} For the simplified 1D0D model described in Section~\ref{subsec:iif_bestest_1D0D_1D0D}, we can derive ODEs for the statistical properties of the different estimation procedures with an invariant imbedding procedure. The invariant imbedding procedure~\cite{chandrasekhar1950radtransii, bellman1975ii} consists of expressing a moment of a quantity, the score for example, for a slab of length $\VAiidomainlengthvar+\Delta\VAiidomainlengthvar$ as a function of moments of quantities for a slab of length $\VAiidomainlengthvar$ and its extension $\Delta \VAiidomainlengthvar$. By taking the limit $\Delta \VAiidomainlengthvar\rightarrow 0$, an ODE is formed with the domain length as the integration variable. For the other quantities that arise, a similar procedure can be followed until the set of ODEs is closed. For each of the eleven mass source estimation procedures, this invariant imbedding procedure leads to a system of up to 22 ODEs for the statistical properties. In each of these ODE systems, the quantities $(\VAcsa,\VAcss,\VAPr)$ feature as parameters that can be fixed, and the domain length $\VAiidomainlengthvar$ is the integration variable. We note that in this 1D0D problem, the constant size of the velocity has as an additional effect that the energy of the particles always remains proportional to their mass. This means that the performance results for mass source estimation that we attain by this method hold for energy source estimation as well.
To illustrate this procedure, we include part of the invariant imbedding procedure for the non-analog collision type track-length estimation procedure (\texttt{nac\_tl}). The full derivation for each mass source estimation procedure can be found in~\cite{mortier2020iiappendix}. \subsection{Invariant imbedding applied to the track-length estimator on a non-analog collision type simulation\label{subsec:ii2_ii_example}} In a non-analog collision type simulation (\texttt{nac}), every collision is executed as a scattering event, at which the particle weight is multiplied by the factor $\VAcss/\VAcst$ to keep the simulation unbiased, see Section~\ref{sec:iif_modelsim}. The total factor by which the particle weight has changed after the particle passed through a domain of length $\VAiidomainlengthvar$ in an \texttt{nac} simulation is denoted by $\VAiiWnac(\VAiidomainlengthvar)$. During this passage, multiple collisions might have taken place. A track-length estimator for the expected number of collisions scores $\VAcst d$, with $d$ the distance travelled and $\VAcst$ the total cross-section, see Section~\ref{subsec:ii_est_tl}. To estimate the expected number of a certain type of collisions, $\VAcst$ is replaced by $\VAcsa$ for absorption, respectively by $\VAcss$ for scattering events. To distinguish the role of the cross-section in the score from its role as the probability of colliding per travelled length, we denote the score for a travelled distance $d$ as $\VAcs d$. The total score of an \texttt{nac\_tl} estimation procedure over a single particle path through a domain of length $\VAiidomainlengthvar$ is denoted by $\VAiitlnac(\VAiidomainlengthvar)$. Here, we focus on the second moment of an \texttt{nac\_tl} estimation procedure sample, an indispensable quantity for computing the statistical properties of the estimation procedure. We only consider the contribution by paths that enter and leave the domain from the left. Other outcomes are treated similarly.
We denote the outcome by a subscript of two letters, of which the first denotes the place of entry and the second the place of exit. The probability of the outcome that a path enters and leaves from the left is denoted by $\VAiiPnacll(\VAiidomainlengthvar)$. In this example, we focus on the quantity $\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar)]$, which is the contribution to the second moment of a track-length estimator in a non-analog collision type simulation in a domain of length $\VAiidomainlengthvar$ by the particle paths that enter and leave from the left. We will express $\VAiiPnacll(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)]$ as a function of quantities over $\VAiidomainlengthvar$. To do so, we condition the paths that enter and leave the domain of length $\VAiidomainlengthvar+\Delta\VAiidomainlengthvar$ from the left by their behaviour in the part of length $\Delta\VAiidomainlengthvar$. Since our aim is to arrive at an ODE for $\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar)]$, we can neglect contributions to $\VAiiPnacll(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)]$ of order $o(\Delta\VAiidomainlengthvar),\ \Delta\VAiidomainlengthvar\rightarrow0$. In particular, we can neglect paths that collide more than once in the $\Delta\VAiidomainlengthvar$ part of the domain: when travelling a length $\Delta\VAiidomainlengthvar$, the probability of colliding is of order $\VAcst\Delta\VAiidomainlengthvar$, so the probability of more than one collision is of order $\mathcal{O}(\Delta\VAiidomainlengthvar^2),\ \Delta\VAiidomainlengthvar\rightarrow0$.
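This neglect can be made precise. Collisions along a path occur with probability $\VAcst$ per unit travelled length, so over a segment of length $\Delta\VAiidomainlengthvar$ the number of collisions is Poisson distributed with mean $\VAcst\Delta\VAiidomainlengthvar$, and
\begin{align*}
\mathbb{P}(\text{no collision})&=e^{-\VAcst\Delta\VAiidomainlengthvar}=1-\VAcst\Delta\VAiidomainlengthvar+\mathcal{O}(\Delta\VAiidomainlengthvar^2)\,,\\
\mathbb{P}(\text{exactly one collision})&=\VAcst\Delta\VAiidomainlengthvar\,e^{-\VAcst\Delta\VAiidomainlengthvar}=\VAcst\Delta\VAiidomainlengthvar+\mathcal{O}(\Delta\VAiidomainlengthvar^2)\,,\\
\mathbb{P}(\text{more than one collision})&=1-(1+\VAcst\Delta\VAiidomainlengthvar)e^{-\VAcst\Delta\VAiidomainlengthvar}=\mathcal{O}(\Delta\VAiidomainlengthvar^2)\,,\quad\Delta\VAiidomainlengthvar\rightarrow0\,.
\end{align*}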
When restricting the outcome of the paths to entering and leaving from the left, and to having at most one collision in the $\Delta\VAiidomainlengthvar$ part of the domain, five options remain for how the particle can behave within the part of length $\Delta\VAiidomainlengthvar$, which are shown in Figure~\ref{fig:ii_illustration}. \begin{figure}[H] \centering \resizebox{.6\columnwidth}{!}{\input{figuren//ii_outcomes/ii_outcomes.tikz}} \caption{The possible paths in a domain of length $\VAiidomainlengthvar+\Delta\VAiidomainlengthvar$ that start and end on the left side in a non-analog simulation and have a probability of at most first order in $\Delta\VAiidomainlengthvar$. The symbol \protect\input{figuren//ii_outcomes/ii_scattersymbol.tikz} in the part of length $\Delta\VAiidomainlengthvar$ identifies that a collision took place there. The dashed lines in the $\VAiidomainlengthvar$ part of the domain signify that the behaviour there is irrelevant, as long as the outcome is correct: entering and leaving the $\VAiidomainlengthvar$ part from the left.
The figure is copied from~\cite{mortier2020iiappendix}.} \label{fig:ii_illustration} \end{figure} With $\VAiicondition{\VAiiPnac}{ll,j}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)$ and $\mathbb{E}[\VAiicondition{\VAiitlnac}{ll,j}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)]$ the probability of the $j$-th case of Figure~\ref{fig:ii_illustration}, respectively the second moment of $\VAiitlnacll(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)$ conditioned on the $j$-th case of Figure~\ref{fig:ii_illustration}, we can write \begin{equation} \VAiiPnacll(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]=\sum_{j=1}^5\VAiicondition{\VAiiPnac}{ll,j}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiicondition{\VAiitlnac}{ll,j}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]\,,\label{eq:ii_ii_exa_PllTll_conditioned} \end{equation} by the law of total expectation. We will now explicitly elaborate $\VAiicondition{\VAiiPnac}{ll,1}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)$ and $\mathbb{E}[\VAiicondition{\VAiitlnac}{ll,1}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)]$, after which we will provide all five terms and show how this leads to an ODE. In the first case of Figure~\ref{fig:ii_illustration}, the particle does not collide in the $\Delta\VAiidomainlengthvar$ part of the domain after its entry, returns to the $\Delta\VAiidomainlengthvar$ part as the outcome of its passage through the $\VAiidomainlengthvar$ part of the domain and, again, does not collide in the $\Delta\VAiidomainlengthvar$ part, after which it exits the domain of length $\VAiidomainlengthvar+\Delta\VAiidomainlengthvar$.
The probability of this case is thus \begin{equation} \VAiicondition{\VAiiPnac}{ll,1}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)=e^{-\VAcst\Delta\VAiidomainlengthvar}\VAiiPnacll(\VAiidomainlengthvar)e^{-\VAcst\Delta\VAiidomainlengthvar}\,, \end{equation} the probability of not colliding when passing through the $\Delta\VAiidomainlengthvar$ part for the first time, times the probability of returning to the $\Delta\VAiidomainlengthvar$ part, times the probability of not colliding in the second passage through the $\Delta\VAiidomainlengthvar$ part. Since we can neglect all contributions of $o(\Delta\VAiidomainlengthvar),\ \Delta\VAiidomainlengthvar\rightarrow0$, we write \begin{equation} \VAiicondition{\VAiiPnac}{ll,1}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)=\VAiiPnacll(\VAiidomainlengthvar)(1-2\VAcst\Delta\VAiidomainlengthvar)+\mathcal{O}(\Delta\VAiidomainlengthvar^2),\ \Delta\VAiidomainlengthvar\rightarrow0\,.\label{eq:ii_ii_exa_Pnacll1} \end{equation} The score by a particle conditioned on the first case equals \begin{equation} \VAiicondition{\VAiitlnac}{ll,1}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)=\VAcs\Delta\VAiidomainlengthvar+\VAiitlnacll(\VAiidomainlengthvar)+\VAiiWnacll(\VAiidomainlengthvar)\VAcs\Delta\VAiidomainlengthvar\,,\label{eq:ii2_ii_exa_Etlnacll1_-1} \end{equation} with $\Delta\VAiidomainlengthvar$ the travelled length during the first passage through the $\Delta\VAiidomainlengthvar$ part of the domain, and consequently $\VAcs\Delta\VAiidomainlengthvar$ the corresponding track-length score, where a general cross-section is used which is to be replaced by $\VAcst$ for estimating the total number of collisions.
Then, the particle moves through the $\VAiidomainlengthvar$ part of the domain; the point of entry and exit for that passage is the left side, meaning that the score contribution is represented by $\VAiitlnacll(\VAiidomainlengthvar)$ and the weight change during this passage is the factor $\VAiiWnacll(\VAiidomainlengthvar)$. Finally, during the second passage through the $\Delta\VAiidomainlengthvar$ part of the domain, with a reduced weight $\VAiiWnacll(\VAiidomainlengthvar)$, the score contribution is $\VAiiWnacll(\VAiidomainlengthvar)\VAcs\Delta\VAiidomainlengthvar$. From Equation~\eqref{eq:ii2_ii_exa_Etlnacll1_-1}, we readily find the second moment of $\VAiicondition{\VAiitlnac}{ll,1}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)$, conditioned on the first case, to equal \begin{multline} \mathbb{E}\left[(\VAiitlnacll(\VAiidomainlengthvar)+\VAcs\Delta\VAiidomainlengthvar+\VAiiWnacll(\VAiidomainlengthvar)\VAcs\Delta\VAiidomainlengthvar)^2\right]=\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)+2\VAiitlnacll(\VAiidomainlengthvar)\VAcs\Delta\VAiidomainlengthvar+2\VAiitlnacll(\VAiidomainlengthvar)\VAiiWnacll(\VAiidomainlengthvar)\VAcs\Delta\VAiidomainlengthvar\right]\\ +\mathcal{O}(\Delta\VAiidomainlengthvar^2),\ \Delta\VAiidomainlengthvar\rightarrow0\,,\label{eq:ii_ii_Etlnacll1_0} \end{multline} \begin{multline} \hphantom{\mathbb{E}\left[(\VAiitlnacll(\VAiidomainlengthvar)+\VAcs\Delta\VAiidomainlengthvar+\VAiiWnacll(\VAiidomainlengthvar)\VAcs\Delta\VAiidomainlengthvar)^2\right]}=\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]+2\VAcs\Delta\VAiidomainlengthvar\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\right]+2\VAcs\Delta\VAiidomainlengthvar\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\VAiiWnacll(\VAiidomainlengthvar)\right]\\ +\mathcal{O}(\Delta\VAiidomainlengthvar^2),\ \Delta\VAiidomainlengthvar\rightarrow0\,.\label{eq:ii_ii_Etlnacll1} \end{multline} The new quantities on the right hand side of Equation~\eqref{eq:ii_ii_Etlnacll1}
($\mathbb{E}[\VAiitlnacll(\VAiidomainlengthvar)]$, $\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar)]$, and $\mathbb{E}[\VAiitlnacll(\VAiidomainlengthvar)\VAiiWnacll(\VAiidomainlengthvar)]$) are all at most second-order in the path variables ($\VAiitlnacll(\VAiidomainlengthvar)$ and $\VAiiWnacll(\VAiidomainlengthvar)$), as is the quantity we are after, $\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar+\Delta \VAiidomainlengthvar)]$. This feature of the invariant imbedding procedure holds for all our derivations and allows us to find a closed set of ODEs, since the number of second-order moments we can take is finite and each of them depends on moments of at most second order. With Equations~\eqref{eq:ii_ii_exa_Pnacll1} and~\eqref{eq:ii_ii_Etlnacll1} we can find the first term of the right hand side of Equation~\eqref{eq:ii_ii_exa_PllTll_conditioned} as \begin{multline} \VAiicondition{\VAiiPnac}{ll,1}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiicondition{\VAiitlnac}{ll,1}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]=\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]-2\VAcst\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\Delta\VAiidomainlengthvar\\ +2\VAcs\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\right]\Delta\VAiidomainlengthvar+2\VAcs\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\VAiiWnacll(\VAiidomainlengthvar)\right]\Delta\VAiidomainlengthvar+\mathcal{O}(\Delta\VAiidomainlengthvar^2),\ \Delta\VAiidomainlengthvar\rightarrow0\,. \end{multline} For the other four cases of Figure~\ref{fig:ii_illustration}, a similar method yields \begin{align}
\VAiicondition{\VAiiPnac}{ll,2}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiicondition{\VAiitlnac}{ll,2}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]&=\VAPr\frac{\VAcss^2}{\VAcst}\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\Delta\VAiidomainlengthvar+\mathcal{O}\left(\Delta\VAiidomainlengthvar^2\right),\ \Delta\VAiidomainlengthvar\rightarrow0\\ \VAiicondition{\VAiiPnac}{ll,3}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiicondition{\VAiitlnac}{ll,3}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]&=0+\mathcal{O}\left(\Delta\VAiidomainlengthvar^2\right),\ \Delta\VAiidomainlengthvar\rightarrow0\\ \VAiicondition{\VAiiPnac}{ll,4}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiicondition{\VAiitlnac}{ll,4}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]&=(1-\VAPr)\VAcst\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\Delta\VAiidomainlengthvar+\mathcal{O}\left(\Delta\VAiidomainlengthvar^2\right),\ \Delta\VAiidomainlengthvar\rightarrow0\\ \begin{split} \VAiicondition{\VAiiPnac}{ll,5}(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}\left[\VAiicondition{\VAiitlnac}{ll,5}^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\right]&=\VAPr\VAcst\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\!\Delta\VAiidomainlengthvar\\ &\quad+2\VAPr\VAcss\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\right]\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiiWnacll(\VAiidomainlengthvar)\VAiitlnacll(\VAiidomainlengthvar)\right]\!\Delta\VAiidomainlengthvar\\ 
&\quad+\VAPr\frac{\VAcss^2}{\VAcst}\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiiWnacll^2(\VAiidomainlengthvar)\right]\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\!\Delta\VAiidomainlengthvar+\mathcal{O}\left(\Delta\VAiidomainlengthvar^2\right),\ \Delta\VAiidomainlengthvar\rightarrow0 \end{split} \end{align} Substituting these five terms in Equation~\eqref{eq:ii_ii_exa_PllTll_conditioned} gives an expression for $\VAiiPnacll(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar+\Delta\VAiidomainlengthvar)]$. Subtracting $\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar)]$ from that result, dividing by $\Delta\VAiidomainlengthvar$ and taking the limit $\Delta\VAiidomainlengthvar\rightarrow0$ results in the ordinary differential equation for $\VAiiPnacll\mathbb{E}[\VAiitlnacll^2]$, \begin{multline} \frac{\text{d}\left(\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}[\VAiitlnacll^2(\VAiidomainlengthvar)]\right)}{\text{d}\VAiidomainlengthvar}=-2\VAcst\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right] +2\VAcs\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\right]+2\VAcs\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\VAiiWnacll(\VAiidomainlengthvar)\right]\\ +\VAPr\frac{\VAcss^2}{\VAcst}\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right] +(1-\VAPr)\VAcst\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\\ +\VAPr\VAcst\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right] +2\VAPr\VAcss\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll(\VAiidomainlengthvar)\right]\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiiWnacll(\VAiidomainlengthvar)\VAiitlnacll(\VAiidomainlengthvar)\right]\\
+\VAPr\frac{\VAcss^2}{\VAcst}\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiiWnacll^2(\VAiidomainlengthvar)\right]\VAiiPnacll(\VAiidomainlengthvar)\mathbb{E}\left[\VAiitlnacll^2(\VAiidomainlengthvar)\right]\,,\label{eq:ii2_exa_result} \end{multline} which equals Equation~(66) in~\cite{mortier2020iiappendix}. The initial value for this ODE is 0: the probability of turning around in a slab of length zero is zero, hence so is $\VAiiPnacll(0)$, and the travelled length in a slab of length zero is zero as well, so $\mathbb{E}[\VAiitlnacll^2(0)]=0$. The same procedure can be applied for each of the new terms in Equation~\eqref{eq:ii2_exa_result}, leading to a closed system of ODEs. The full details of this, and all other derivations for the other estimation procedures, are included in~\cite{mortier2020iiappendix}. To evaluate the performance of the different estimation procedures over the space spanned by the non-dimensional parameters introduced in Section~\ref{subsec:iif_bestest_1D0D_1D0D} ($\VAcss/\VAcst$, $\VAcst\VAdomlength$, $\VAPr$), the ODE systems are to be integrated up to $\VAdomlength$. In this integration process, the results for smaller values of $\VAdomlength$ are also found. Hence, to find the measure of performance of the estimation procedures on a fine mesh of parameter values, we integrate these ODEs for the desired values of $\VAcss/\VAcst$ and $\VAPr$, with the step length and final value of the ODE integration determined by the desired values of $\VAcst\VAdomlength$.
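In practice, any standard initial-value solver suffices for these systems. The following minimal fixed-step sketch shows the integration pattern; the right-hand side at the bottom is a simple two-component stand-in with the same structure (fixed parameters, domain length as integration variable), not one of the actual moment systems:

```python
def rk4_integrate(rhs, y0, x_max, n_steps):
    """Fixed-step RK4 integration over the domain length x.
    Returns the mesh and the solution at every mesh point, so the results
    for all intermediate domain lengths come out of a single run."""
    h = x_max / n_steps
    xs, ys = [0.0], [list(y0)]
    x, y = 0.0, list(y0)
    for _ in range(n_steps):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = rhs(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = rhs(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
        xs.append(x)
        ys.append(list(y))
    return xs, ys

# Illustrative stand-in system with zero initial values (the actual
# moment systems have up to 22 components).
sigma_t, p = 2.0, 0.7
rhs = lambda x, y: [1.0 - sigma_t * y[0], p * y[0] - sigma_t * y[1]]
xs, ys = rk4_integrate(rhs, [0.0, 0.0], 5.0, 500)
```

Because the solver stores the state at every mesh point, a single integration up to the largest $\VAcst\VAdomlength$ of interest delivers the performance measures for all smaller domain lengths at once, exactly as exploited in the text.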
\subsection{Best estimation procedure for the mass source in the 1D0D model\label{subsec:iif_bestest_1D0D_bestest}} For each of the estimation procedures described in Sections~\ref{sec:iif_modelsim} and~\ref{sec:iif_est}, the method described in Sections~\ref{subsec:iif_bestest_1D0D_ii} and~\ref{subsec:ii2_ii_example} provides us with the statistical properties for the entire ($\VAcss/\VAcst$,$\VAcst\VAdomlength$,$\VAPr$) parameter space, the details of which can be found in~\cite{mortier2020iiappendix}. These statistical properties are discussed at length in \cite[Chapter 3]{mortier2020phdthesis}, and some partial results are presented in~\cite{mortier2017invimb}. In this section, we use the obtained statistical properties to derive a partition of the ($\VAcss/\VAcst$,$\VAcst\VAdomlength$,$\VAPr$) parameter space based on which estimation procedure performs best. In Section~\ref{subsubsec:iif_bestest_1D0D_bestest_var}, we use the statistical error for a fixed number of simulated particles as a measure of performance, and in Section~\ref{subsubsec:iif_bestest_1D0D_bestest_cost}, we consider the computational cost for a given statistical error as a measure of performance. The two measures of performance apply to different settings: when the number of particles is considered fixed, the goal of the estimation procedure selection is to minimize the statistical error, which is achieved via the partition of Section~\ref{subsubsec:iif_bestest_1D0D_bestest_var}. Alternatively, when the number of simulated particles is not fixed, but a certain statistical error is aimed for, the computational cost for that error is to be minimized, as is achieved in Section~\ref{subsubsec:iif_bestest_1D0D_bestest_cost}. In Section~\ref{sec:ii_res_1D1D}, we will expand on these results by also considering momentum sources and a more realistic 1D1D setting. In that setting, only numerical results are available.
\subsubsection{Best estimation procedure for the mass source in the 1D0D model based on statistical error\label{subsubsec:iif_bestest_1D0D_bestest_var}} The first measure of performance we consider is the expected statistical error on the result, given the number of simulated particles. This error is proportional to the standard deviation of the contribution due to a single particle, and ODEs for this quantity are derived in~\cite{mortier2020iiappendix}. Figure~\ref{fig:best_1D0D_mass_stdv_bestest} shows the best 1D0D mass source estimation procedure for a large part of the parameter domain. This figure shows there are three competitive estimation procedures: \texttt{nac\_ne}, \texttt{natl\_ne}, and \texttt{natl\_tl}. At this point, it is relevant to point to two analytical results by Lux: \cite[Theorem 5.22, p. 267]{lux1991MCPT} and~\cite[Theorem 5.24, p. 276]{lux1991MCPT}, which were originally provided in~\cite{lux1978standardvarred}. These state that the next-event estimator for the number of collisions (and thus for the mass source) never has a larger variance than the collision estimator for the same simulation, and that collision type survival biasing never has a larger variance than the corresponding analog simulation, provided that the estimator used remains the same. These results are visible in Figure~\ref{fig:best_1D0D_mass_stdv_bestest} by the absence of any collision estimator and of any estimation procedure with an analog simulation. The dominating estimation procedure is a next-event estimator with a non-analog collision type simulation: at low survival probability ($\VAcss/\VAcst$), most of the score is established at the first event in such a simulation, which is characteristic of a next-event estimator, while at higher survival probabilities and not too low $\VAPr$, the higher number of scoring events in a non-analog collision type simulation is beneficial.
At low $\VAPr$ and significant collisionality, the neutral that enters from the left will nearly always leave the domain to the left. The main variation arises from when exactly this occurs. Shorter paths have a small \texttt{ne} contribution, but this is balanced by their larger weight, and vice versa for longer paths. This balance of the resulting score reduces the variance. The \texttt{natl\_tl} estimation procedure becomes optimal when scattering is trivial ($\VAPr=1$), since its variance is then zero: the \texttt{natl\_tl} score only depends on the travelled length, which is always exactly $\VAdomlength$ for an \texttt{natl} simulation with $\VAPr=1$. Track-length aspects also become beneficial when the particle trajectories become more complex (high survival probability $\VAcss/\VAcst$), because the estimator takes the more global `path length' into account, and when the total collisionality is low, the situation for which track-length estimation was originally designed. The precise cut-offs are non-trivial in each of the cases. In the rest of the 1D0D parameter space --- being the part with $\VAcst\VAdomlength=\frac{\VAratet}{|v|}\VAdomlength>10$ --- the picture remains intact. There remain significant portions where the \texttt{natl\_ne} estimator has the upper hand, and the \texttt{natl\_tl} estimation procedure remains optimal for a small fraction near $\VAcss/\VAcst=1$ and $\VAPr=1$.
\begin{figure}[H]\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_var_v1_Psurv4_1D0D_anal_paper_nodots.pdf}{1}{1}{$\dfrac{\VAcss}{\VAcst}=0.75$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_var_v1_Psurv5_1D0D_anal_paper_nodots.pdf}{0}{1}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_var_v1_Psurv6_1D0D_anal_paper_nodots.pdf}{0}{1}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneE \end{tabular} \caption{A partition of the parameter domain based on the 1D0D mass source estimation procedure with the lowest statistical error.}\label{fig:best_1D0D_mass_stdv_bestest} \end{figure} In the EIRENE code, the default mass source estimation procedure is the \texttt{a\_tl} estimation procedure, which is not one of the three competitive estimation procedures found from the comparison based on statistical error in the 1D0D case. In Figure~\ref{fig:best_1D0D_mass_stdv_gain}, we present the factor by which the standard deviation of the \texttt{a\_tl} estimation procedure is higher than the lowest possible over all estimation procedures. As can be seen in the figure, except at very low collisionality, or when the survival probability is high and the collisionality is not too high, the potential gain from using the optimal estimation procedure is very large. Before we draw conclusions, we must first note that these results are preliminary, since they apply to the simplified 1D0D case and only consider the statistical error. In a fusion setting, survival probability and collisionality are typically very high, meaning that the top part of the left-most panel of Figure~\ref{fig:best_1D0D_mass_stdv_gain} should be considered. There, these preliminary results indicate that the current \texttt{a\_tl} choice is not optimal when considering the statistical error in a 1D0D setting.
\begin{figure}[H]\centering \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_var_Psurv0k75_1D0D_anal_paper.pdf}{1}{1}{$\dfrac{\VAcss}{\VAcst}=0.75$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_var_Psurv0k5_1D0D_anal_paper.pdf}{0}{1}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_var_Psurv0k25_1D0D_anal_paper.pdf}{0}{1}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The factor by which the standard deviation increases when, in a 1D0D setting, the standard mass source estimation procedure choice, \texttt{a\_tl}, is used instead of the best estimation procedure as depicted in Figure~\ref{fig:best_1D0D_mass_stdv_bestest}.} \label{fig:best_1D0D_mass_stdv_gain} \end{figure} \subsubsection{Best estimation procedure for the mass source in the 1D0D model based on computational cost\label{subsubsec:iif_bestest_1D0D_bestest_cost}} A more relevant measure of performance than the statistical error is the computational cost for a given statistical error. The number of particles required to obtain a certain statistical error is proportional to the variance of a single particle contribution, $\sigma^2$, and we approximate the simulation cost as being proportional to the number of scattering collisions. Combined, this means that the total computational cost for a given statistical error is proportional to \begin{equation} \sigma^2\mathbb{E}[\text{collisions per path}]\,. \end{equation} The number of collisions per path is independent of the chosen estimator (\texttt{a\_abs}, \texttt{a\_sc}, \texttt{c}, \texttt{tl}, \texttt{ne}), but solely depends on the simulation type (\texttt{a}, \texttt{nac}, \texttt{natl}).
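This figure of merit is straightforward to evaluate from simulation output. A small illustrative helper (the function and argument names are ours), assuming per-particle samples of the score and of the number of collisions:

```python
def cost_figure_of_merit(scores, collisions):
    """Computational cost for a fixed statistical error, up to a constant:
    the single-particle variance of the score times the expected number
    of collisions per path."""
    n = len(scores)
    mean_score = sum(scores) / n
    variance = sum((s - mean_score) ** 2 for s in scores) / (n - 1)
    mean_collisions = sum(collisions) / n
    return variance * mean_collisions
```

Comparing this quantity across estimation procedures run on the same background reproduces the selection criterion used in this subsection.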
Figure~\ref{fig:best_1D0D_mass_cost} shows the same results as Figures~\ref{fig:best_1D0D_mass_stdv_bestest} and~\ref{fig:best_1D0D_mass_stdv_gain}, but now with the computational cost as measure of performance. The connection with the previous measure is that the standard deviation is squared and multiplied by the simulation-dependent expected number of collisions. Hence, had there been a border between two estimation procedures with the same simulation type in Figure~\ref{fig:best_1D0D_mass_stdv_bestest}, that border would be unchanged now. The number of collisions is always lowest for \texttt{a} and highest for \texttt{nac}. These effects are visible when comparing the first row of Figure~\ref{fig:best_1D0D_mass_cost} to Figure~\ref{fig:best_1D0D_mass_stdv_bestest}: \texttt{natl\_tl} and \texttt{natl\_ne} take over parts of the domain that went to \texttt{nac\_ne} in Figure~\ref{fig:best_1D0D_mass_stdv_bestest}. On top of that, a procedure with an analog simulation becomes competitive: \texttt{a\_ne}. For small values of the survival fraction and not too low values of $\VAcst\VAdomlength$, non-analog particles can undergo many collisions without having a significant contribution to the estimation. Those long-lived non-analog particles constitute a computational cost, but have nearly no impact on the computed value. This is the reason analog simulations can also be the best option. When increasing $\VAcst\VAdomlength$ beyond 10, the effect on the expected number of scattering collisions also has to be taken into account. A simple calculation shows that this number remains finite as $\VAcst\VAdomlength\rightarrow\infty$ when $\VAPr<0.5$. For values of $\VAPr$ larger than $0.5$, an analog estimation procedure, \texttt{a\_ne} for the most part, becomes optimal when $\VAcst\VAdomlength$ increases, except of course for $\VAPr=1$, since \texttt{natl\_tl} then provides zero variance and hence also a zero computational cost measure.
When $\VAPr\leq0.5$, there remains a significant fraction of the parameter domain where \texttt{natl\_ne} and \texttt{nac\_ne} are optimal, no matter how large $\VAcst\VAdomlength$ is. When we now compare the best estimation procedure as determined by the first row of Figure~\ref{fig:best_1D0D_mass_cost} with the default choice \texttt{a\_tl}, which is shown in the second row of Figure~\ref{fig:best_1D0D_mass_cost}, we find it is now also a reasonable choice in a high-collisional isotropic situation when the survival probability is high. This is the most important regime in fusion research, meaning \texttt{a\_tl} is a proper choice there. There will, however, be many different parameter values throughout the domain, and as is visible in the second row of Figure~\ref{fig:best_1D0D_mass_cost}, for many parameter values \texttt{a\_tl} is not a reasonable choice. It might be that selecting a single estimation procedure for all parameter values, or, in the context of a fusion simulation, for all grid cells, gives decent results nonetheless. This is however not the case. No matter which estimation procedure is picked, there is always a parameter set for which another estimation procedure performs infinitely better. Indeed, when the statistical error is regarded, the \texttt{natl\_tl} procedure is the only one to attain zero relative error when $\VAPr=1$, so it would be infinitely better than all others. On the other hand, the \texttt{nac\_ne} procedure outperforms it in a large part of the domain, and it can outperform it unboundedly. Namely, when the survival probability is very low, a \texttt{nac\_ne} contribution will be nearly deterministic, resulting in a nearly zero standard deviation and cost. If furthermore $\VAPr$ is low, the variance of \texttt{natl\_tl} can become relatively high.
Furthermore, as discussed above, when $\VAPr\ge 0.5$ and $\VAcst\VAdomlength$ increases beyond 10, \texttt{a\_ne} becomes infinitely better in terms of cost than any estimation procedure with a non-analog simulation type. Since, depending on the parameter set, there can be an unbounded difference in performance between estimation procedures, it can be beneficial to use different estimation procedures in different parts of the domain. Doing so is allowed, since each of the estimators is unbiased and each of the simulation types corresponds to the same model. \begin{figure}[H]\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_cost_v1_Psurv4_1D0D_anal_paper_nodots.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.75$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_cost_v1_Psurv5_1D0D_anal_paper_nodots.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_cost_v1_Psurv6_1D0D_anal_paper_nodots.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneB \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_cost_Psurv0k75_1D0D_anal_paper.pdf}{1}{1}{}} & \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_cost_Psurv0k5_1D0D_anal_paper.pdf}{0}{1}{}} & \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_cost_Psurv0k25_1D0D_anal_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D0D mass source term estimation procedure and the potential gain when considering computational cost for a given statistical error.
The first row presents a partition of the parameter domain based on the estimation procedure with the lowest computational cost. The second row presents the factor by which the computational cost increases when the standard mass choice, \texttt{a\_tl}, is used instead of the best estimation procedure.} \label{fig:best_1D0D_mass_cost} \end{figure} In Section~\ref{sec:ii_res_1D1D}, we will see how well these mass source term estimation results hold up in a 1D1D setting, and we will consider momentum source term estimation. \section{Numerical extensions to a realistic one-dimensional grid cell and momentum estimation\label{sec:ii_res_1D1D}} In this section, we extend our analytical results in two ways: by considering continuous velocities and by considering momentum source estimation. In a setting with continuous velocities, the invariant imbedding method used in Section~\ref{sec:iif_bestest_1D0D} becomes intractable. We thus resort to numerical experiments in which an MC estimation is repeated multiple times for each estimation procedure and each parameter value, and the statistical properties are computed. Because these results are obtained numerically, the parameter values lie on a much coarser mesh in parameter space than in Section~\ref{sec:iif_bestest_1D0D}. The extension to momentum is also only available numerically, both for 1D0D and 1D1D, and shows the importance of using different estimation procedures for the different quantities in most settings. In Section~\ref{subsec:iif_1D1D_1D1D}, we explain the more realistic 1D1D simulation and present a mapping from 1D1D to 1D0D to allow a comparison of results in both settings. In Section~\ref{subsec:iif_1D1D_mass} we then present the numerical 1D1D mass source estimation results, and in Section~\ref{subsec:iif_1D1D_mom} we present the numerical extension to momentum source estimation.
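The statistical post-processing of such repeated MC estimations can be sketched as follows. Here `estimate_once` is a hypothetical stand-in for one full estimation run returning an estimate together with its collision count, and the cost measure is the squared relative error times the expected number of collisions, as used throughout this paper.

```python
import random
import statistics

def estimation_statistics(estimate_once, repeats):
    """Summarise a repeated MC estimation.

    `estimate_once` is a hypothetical callable returning one full
    estimate together with the number of collisions simulated for it.
    Returns the mean, the relative standard deviation, and the cost
    measure (squared relative error times expected collisions).
    """
    results = [estimate_once() for _ in range(repeats)]
    estimates = [e for e, _ in results]
    mean = statistics.fmean(estimates)
    rel_std = statistics.stdev(estimates) / abs(mean)
    mean_collisions = statistics.fmean(c for _, c in results)
    return mean, rel_std, rel_std**2 * mean_collisions

# toy estimator standing in for one full MC estimation run
rng = random.Random(3)
def toy_estimate():
    return 1.0 + rng.gauss(0.0, 0.1), 1 + rng.randrange(5)

mean, rel_std, cost = estimation_statistics(toy_estimate, 5000)
```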
\subsection{A setting with continuous velocities: 1D1D simulation\label{subsec:iif_1D1D_1D1D}} The 1D1D setting on which we experiment is identical to the 1D0D setting described in Section~\ref{subsec:iif_bestest_1D0D_1D0D}, except for the post-collisional velocity distribution. In a general setting, the post-collisional velocity distribution can take any form, but in most applications, a Maxwellian distribution is a proper first approximation. We will consequently focus on a Maxwellian with mean $\mu_v$ and variance $\sigma_v^2$: \begin{equation} \VApostcolveldistr^\text{1D1D}(v)=\frac{1}{\sqrt{2\pi}\sigma_v}e^{-\frac{(v-\mu_v)^2}{2\sigma_v^2}}\,. \end{equation} We further retain the plasma background homogeneity (constant $\mu_v$, $\sigma_v$, $\VArates$, and $\VAratea$), fully absorbent walls, and $(x_0,v_0)=(0,1)$. Now the velocity can have different magnitudes, meaning the cross-section ($\VAcs=\VArate/|v|$) is no longer constant. We will however still present our results in terms of the 1D0D parameters $\VAcst\VAdomlength$, $\VAcss/\VAcst$, and $\VAPr$, by using an appropriate mapping. The mapping we use from the 1D1D to the 1D0D parameter space is such that the first and second moments of the post-collisional velocity distribution match. This is achieved by choosing the parameter $\VAPr$ appropriately and by rescaling the velocity, which is implemented by rescaling the cross-sections to obtain an equivalent result. The survival probability is not modified. Such a mapping is illustrated in Figure~\ref{fig:ii_1D0D1D1D_sketch_veldistr}, where a 1D0D and a 1D1D post-collisional velocity distribution are shown with identical first and second moments. By changing the post-collisional velocity distribution, the neutral particle behaviour changes. Figures~\ref{fig:ii_1D0D1D1D_sketch_1D0Dpath} and~\ref{fig:ii_1D0D1D1D_sketch_1D1Dpath} illustrate some of the effects by showing a neutral path with forward-backward scattering in Figure~\ref{fig:ii_1D0D1D1D_sketch_1D0Dpath} and a neutral path with scattering according to a normal distribution with equal first two moments in Figure~\ref{fig:ii_1D0D1D1D_sketch_1D1Dpath}. Besides the matching of the first two moments of the distributions, the paths have also been sampled in a correlated manner, resulting in equal collision times and similar but not equal velocities. In these figures we have also illustrated the post-collisional velocity distribution by the greyscale squares and lines, respectively.
These show the probability distribution of the next collision, given that the collision time and the previous collision position are known. This visualisation lays bare the most prevalent difference in the neutral behaviour between the two cases, namely the additional stochasticity on the travelled length. One derived effect is the interaction with the boundaries: in a 1D1D simulation, crossing the boundary as a next event is always possible (indicated by the (sometimes extremely faint) small squares outside of the domain), which is not the case for a 1D0D simulation, where, with given collision times, the probability of leaving the domain is only non-zero close to the boundaries. Another aspect that can impact the variance associated with estimation procedures is the dependence of many scoring procedures on the velocity. Slow parts of the path may not contribute much to the distance travelled, but have a much larger impact on the score and weight than their faster counterparts. The velocity of the particle is visible in Figure~\ref{fig:ii_1D0D1D1D_sketch_1D1Dpath} through the slope of the lines, but we have emphasized the importance of the slow parts by making the thickness of the lines proportional to $1/v_\VAeventno$. This effect was crucial in the prevalence of \texttt{natl\_tl} in Section~\ref{subsubsec:iif_bestest_1D0D_bestest_var}.
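The moment-matching map from the Maxwellian post-collisional distribution to the 1D0D forward-backward distribution, described in this subsection, can be written in closed form if we assume the parametrisation (for illustration; not necessarily the exact convention used in our implementation) that the 1D0D post-collisional velocity takes the value $+c$ with probability $p$ and $-c$ with probability $1-p$: matching the first two moments gives $c=\sqrt{\mu_v^2+\sigma_v^2}$ and $p=(1+\mu_v/c)/2$.

```python
import math

def match_moments(mu_v, sigma_v):
    """Map a Maxwellian (mu_v, sigma_v) to forward-backward scattering.

    Assumed 1D0D parametrisation (for illustration): post-collisional
    velocity +c with probability p and -c with probability 1 - p.
    Matching E[v] and E[v^2] of the Maxwellian gives c and p in
    closed form.
    """
    c = math.hypot(mu_v, sigma_v)      # sqrt(mu_v**2 + sigma_v**2)
    p = 0.5 * (1.0 + mu_v / c)
    return c, p

c, p = match_moments(mu_v=1.0, sigma_v=0.5)
mean = c * (2.0 * p - 1.0)     # first moment, equals mu_v
second = c * c                 # second moment, equals mu_v**2 + sigma_v**2
```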
\begin{figure}[H] \centering \begin{subfigure}{.28\columnwidth} \centering \oneDandzeroDpostcolveldistrplot \caption{The velocity distributions.} \label{fig:ii_1D0D1D1D_sketch_veldistr} \end{subfigure} \begin{subfigure}{.28\columnwidth} \centering \zeroDpathplot \caption{A 1D0D path.} \label{fig:ii_1D0D1D1D_sketch_1D0Dpath} \end{subfigure} \begin{subfigure}{.28\columnwidth} \centering \oneDpathplot \caption{A 1D1D path.} \label{fig:ii_1D0D1D1D_sketch_1D1Dpath} \end{subfigure} \caption{A sketch illustrating the difference between forward-backward scattering (blue) and more realistic scattering (red) by plotting two correlated paths.} \label{fig:ii_1D0D1D1D_sketch} \end{figure} These new effects can impact the different estimation procedures profoundly, as we will investigate in the next section. \subsection{Mass source estimation in 1D1D\label{subsec:iif_1D1D_mass}} The effect of using continuous velocities (1D1D) instead of forward-backward scattering (1D0D) is clear when we compare the best mass source estimation procedure throughout the parameter domain based on statistical error for 1D1D in the first row of Figure~\ref{fig:ii_best_1D1D_mass_var} with the results for 1D0D from Figure~\ref{fig:best_1D0D_mass_stdv_bestest}. The \texttt{natl\_tl} estimation procedure, which benefited from the connection between travelled distance and score, loses ground to other estimation procedures. This is most notable in the region close to $\VAPr=0.5$, which corresponds to a high variance of the post-collisional velocity, and consequently additional variation on the travelled distance compared to the 1D0D case. Of special interest is the appearance of another estimation procedure, \texttt{nac\_tl}, under select circumstances. Further, the boundary between \texttt{natl\_ne} and \texttt{nac\_ne} shifts marginally in favour of \texttt{natl\_ne}.
The second row of Figure~\ref{fig:ii_best_1D1D_mass_var} shows the potential loss if the default estimation procedure, \texttt{a\_tl}, is used instead of the best estimation procedure, and is to be compared with Figure~\ref{fig:best_1D0D_mass_stdv_gain} for 1D0D. Both sets of figures are very similar, with the exception of the isotropic situation, where the variance on the post-collisional velocity is much larger. There, \texttt{a\_tl} has deteriorated compared to the 1D0D situation. This deterioration can be understood in the same way as the deterioration of \texttt{natl\_tl}, since both \texttt{a\_tl} and \texttt{natl\_tl} benefit from the, now diminished, connection between travelled distance and scoring. \begin{figure}[H]\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_var_v1_Psurv4_1D1D_num_paper.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.75$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_var_v1_Psurv5_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_var_v1_Psurv6_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneA \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_var_Psurv4_1D1D_num_paper.pdf}{1}{1}{}} & \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_var_Psurv5_1D1D_num_paper.pdf}{0}{1}{}} & \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_var_Psurv6_1D1D_num_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D1D mass source term estimation procedure and the potential gain when considering the
statistical error. The first row presents a partition of the parameter domain based on the estimation procedure with the lowest statistical error. The dots represent the exact position in parameter space where the simulation occurred, crosses indicate inconclusive results. The second row presents the factor by which the standard deviation increases when the standard mass choice, \texttt{a\_tl}, is used instead of the best estimation procedure.} \label{fig:ii_best_1D1D_mass_var} \end{figure} The results when considering cost (variance times number of collisions) are shown in Figure~\ref{fig:iif_1D1D_mass_cost}. As was the case for 1D0D in Figure~\ref{fig:best_1D0D_mass_cost}, going from variance as a measure of performance to cost is advantageous for \texttt{a} and \texttt{natl} simulations. The most notable effect is the increased domain of \texttt{a\_ne} in the region of large variance on the post-collisional velocity (around $\VAPr=0.5$). When the variance on the post-collisional velocity is larger, the positive effect of more scoring events on the variance is somewhat mitigated, so executing an absorption is less detrimental. The important conclusion from the second row of Figure~\ref{fig:best_1D0D_mass_cost}, that the \texttt{a\_tl} estimation procedure is sufficiently good in the relevant high-collisional isotropic case when the survival probability is high, is retained in the results of Figure~\ref{fig:iif_1D1D_mass_cost}. The only large difference with the 1D0D results is the deterioration of the \texttt{a\_tl} estimation procedure in the low-collisional isotropic case. This follows from the similar deterioration of the standard deviation visible when comparing Figure~\ref{fig:best_1D0D_mass_stdv_gain} and the second row of Figure~\ref{fig:ii_best_1D1D_mass_var}.
\begin{figure}[H]\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_cost_v1_Psurv4_1D1D_num_paper.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.75$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_cost_v1_Psurv5_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mass_cost_v1_Psurv6_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneB \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_cost_Psurv4_1D1D_num_paper.pdf}{1}{1}{}} & \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_cost_Psurv5_1D1D_num_paper.pdf}{0}{1}{}} & \vspace{-.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_tl_mass_cost_Psurv6_1D1D_num_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D1D mass source term estimation procedure and the potential gain when considering computational cost for a given statistical error. The first row presents a partition of the parameter domain based on the estimation procedure with the lowest computational cost. The dots represent the exact position in parameter space where the simulation occurred, crosses indicate inconclusive results. The second row presents the factor by which the computational cost increases when the standard mass choice, \texttt{a\_tl}, is used instead of the best estimation procedure.} \label{fig:iif_1D1D_mass_cost} \end{figure} An important additional conclusion from this section is the strong parallel between the results for the 1D0D case and for the 1D1D case.
The conclusion that the \texttt{a\_tl} estimation procedure functions well in the fusion-relevant case is retained, as well as most of the trade-off between \texttt{a\_tl} and the best estimation procedure. Consequently, the conclusion that different methods should be used in regions of the domain with deviating parameter values is also retained. \subsection{Momentum source estimation in 1D1D\label{subsec:iif_1D1D_mom}} We have numerically extended our analysis to momentum source estimation. The results presenting the best momentum estimation procedure for the 1D1D setting are shown in Figure~\ref{fig:iif_1D1D_mom_var} for statistical error and in Figure~\ref{fig:iif_1D1D_mom_cost} for computational cost. The default momentum source estimation procedure in the EIRENE code is currently \texttt{a\_c}, which was selected via trial and error. When we compare the best momentum estimation procedure to the best mass estimation procedure, we note the presence of a collision estimator, \texttt{nac\_c}, the much larger prevalence of the \texttt{nac\_ne} estimator when considering statistical error in the first row of Figure~\ref{fig:iif_1D1D_mom_var}, and that of the \texttt{a\_ne} estimator when considering computational cost in the first row of Figure~\ref{fig:iif_1D1D_mom_cost}. As can be seen in the second row of Figure~\ref{fig:iif_1D1D_mom_cost}, the default momentum estimation procedure, \texttt{a\_c}, now functions very well for the largest part of the parameter domain when considering the computational cost, with notable exceptions when the background is highly anisotropic ($\VAPr\approx0$ or $\VAPr\approx1$) or for very low collisionality.
\begin{figure}\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_var_v1_Psurv1_1D1D_num_paper.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.98$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_var_v1_Psurv5_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_var_v1_Psurv6_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneC\\ \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_var_Psurv1_1D1D_num_paper.pdf}{1}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_var_Psurv5_1D1D_num_paper.pdf}{0}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_var_Psurv6_1D1D_num_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D1D momentum source term estimation procedure and the potential gain when considering the statistical error. The first row presents a partition of the parameter domain based on the estimation procedure with the lowest statistical error. The dots represent the exact position in parameter space where the simulation occurred, crosses indicate inconclusive results. 
The second row presents the factor by which the standard deviation increases when the standard momentum choice, \texttt{a\_c}, is used instead of the best estimation procedure.} \label{fig:iif_1D1D_mom_var} \end{figure} \begin{figure}\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_cost_v1_Psurv2_1D1D_num_paper.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.94$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_cost_v1_Psurv5_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_cost_v1_Psurv6_1D1D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneB\\ \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_cost_Psurv2_1D1D_num_paper.pdf}{1}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_cost_Psurv5_1D1D_num_paper.pdf}{0}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_cost_Psurv6_1D1D_num_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D1D momentum source term estimation procedure and the potential gain when considering computational cost for a given statistical error. The first row presents a partition of the parameter domain based on the estimation procedure with the lowest computational cost. The dots represent the exact position in parameter space where the simulation occurred, crosses indicate inconclusive results.
The second row presents the factor by which the computational cost increases when the standard momentum choice, \texttt{a\_c}, is used instead of the best estimation procedure.} \label{fig:iif_1D1D_mom_cost} \end{figure} To evaluate the representativeness of the 1D0D results as predictors for the 1D1D results, we have performed the same experiments for the 1D0D case, the results of which are shown in Figures~\ref{fig:iif_1D0D_mom_var} and~\ref{fig:iif_1D0D_mom_cost}. When we compare Figures~\ref{fig:iif_1D0D_mom_var} and~\ref{fig:iif_1D0D_mom_cost} with Figures~\ref{fig:iif_1D1D_mom_var} and~\ref{fig:iif_1D1D_mom_cost}, we note remarkable agreement, but with several shifts in which estimator performs best, and the addition of \texttt{a\_c} as a potentially competitive estimator in the 1D0D case. \begin{figure}\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_var_v1_Psurv1_1D0D_num_paper.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.98$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_var_v1_Psurv5_1D0D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_var_v1_Psurv6_1D0D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneC\\ \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_var_Psurv1_1D0D_num_paper.pdf}{1}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_var_Psurv5_1D0D_num_paper.pdf}{0}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_var_Psurv6_1D0D_num_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D0D momentum source
term estimation procedure and the potential gain when considering the statistical error. The first row presents a partition of the parameter domain based on the estimation procedure with the lowest statistical error. The dots represent the exact position in parameter space where the simulation occurred, crosses indicate inconclusive results. The second row presents the factor by which the standard deviation increases when the standard momentum choice, \texttt{a\_c}, is used instead of the best estimation procedure.} \label{fig:iif_1D0D_mom_var} \end{figure} \begin{figure}\centering \begin{tabular}{m{.25\textwidth}m{.25\textwidth}m{.25\textwidth}m{.2\textwidth}} \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_cost_v1_Psurv2_1D0D_num_paper.pdf}{1}{0}{$\dfrac{\VAcss}{\VAcst}=0.94$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_cost_v1_Psurv5_1D0D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.5$}} & \resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/mom_cost_v1_Psurv6_1D0D_num_paper.pdf}{0}{0}{$\dfrac{\VAcss}{\VAcst}=0.25$}} & \localcolorlegendoneD\\ \end{tabular}\\ \begin{tabular}{b{.25\textwidth}b{.25\textwidth}b{.25\textwidth}b{.2\textwidth}} \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_cost_Psurv2_1D0D_num_paper.pdf}{1}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_cost_Psurv5_1D0D_num_paper.pdf}{0}{1}{}} & \vspace{-0.5cm}\resizebox{.24\textwidth}{!}{\maakmooieticks{figuren/improvement/improvement_a_c_mom_cost_Psurv6_1D0D_num_paper.pdf}{0}{1}{}} & \resizebox{!}{.24\textwidth}{\maakmooietickscb{figuren/improvement/improvement_colorbar.pdf}{}} \end{tabular} \caption{The best 1D0D momentum source term estimation procedure and the potential gain when considering computational cost for a given statistical error.
The first row presents a partition of the parameter domain based on the estimation procedure with the lowest computational cost. The dots represent the exact position in parameter space where the simulation occurred, crosses indicate inconclusive results. The second row presents the factor by which the computational cost increases when the standard momentum choice, \texttt{a\_c}, is used instead of the best estimation procedure.} \label{fig:iif_1D0D_mom_cost} \end{figure} The appearance of collision estimators (\texttt{nac\_c} and \texttt{a\_c}) as best estimation procedures is to be contrasted with the fact, proven by Lux~\cite{lux1991MCPT}, that collision estimators can never outperform next-event estimators for mass source estimation. This is now clearly shown not to hold for momentum source estimation. In this section we have shown that the optimal estimation procedures for momentum source estimation differ from those for mass source estimation. Typically, we estimate both with the same simulation. Since the noise on the higher moments is typically larger, the selection of the momentum estimation procedure can be given a higher priority. The simulation type for the mass estimation procedure is then fixed, but the estimator can still be chosen freely. \section{Conclusion and future prospects\label{sec:iif_conclusion}} We compared the performance of a coherent set of source term estimation procedures for a particle tracing Monte Carlo method for the Boltzmann-BGK kinetic equation. We first determined the best estimation procedure using the analytical results for mass source estimation in a simplified 1D0D setting with forward-backward scattering, which were obtained via the invariant imbedding methodology in~\cite{mortier2020iiappendix}.
Our results show that three estimation procedures are competitive for mass source estimation in a 1D0D setting when variance is considered (\texttt{natl\_tl}, \texttt{nac\_ne}, and \texttt{natl\_ne}) and four when cost is considered (\texttt{a\_ne}, \texttt{natl\_tl}, \texttt{nac\_ne}, and \texttt{natl\_ne}). We have also quantified the potential profit of using the optimal estimation procedure instead of the standard choice and discussed the unboundedness of the potential gain when the background parameters are unknown. Numerical experimentation showed that our analysis for the mass source term in a 1D0D setting remains relevant in a multi-velocity setting (1D1D): the same estimation procedures remain optimal, but there are some shifts in the regions in which they are optimal due to the changed neutral model. The mass source estimation results also clearly show that the currently used default estimation procedure (\texttt{a\_tl}) in the EIRENE code is a valid choice in the regime that is relevant for the application. This is to be expected, since this estimation procedure has been selected by trial and error. A next-event estimator, or a more flexible, region-dependent choice of estimation procedure, might provide significant improvements. As an additional numerical extension, we compared the different estimation procedures for momentum source estimation. Compared with the results for mass estimation, the optimal estimation procedure changes more drastically, especially when cost is considered as a measure of performance. Five and seven estimation procedures are now optimal when variance and cost, respectively, are considered. Again, the current default momentum source estimation option, \texttt{a\_c}, is shown to be a proper choice according to our study. Our results provide an indicative tool to select the proper estimation procedure without having to resort to trial and error.
This enables the selection of different estimation procedures for different regions of the domain, which can be extremely relevant when the domain is highly heterogeneous or when grid refinement drastically changes the total collisionality per grid cell. Due to the unbiasedness of the estimators and the consistency of the different simulation types, there is no restriction on using different estimation procedures, even conditionally on the velocity at entry into the grid cell. Higher dimensions and energy source term estimation are possible extensions of this work, which we leave for future work. Since neutrals are transported mostly along lines, the one-dimensional analysis is expected to carry over to higher dimensions, definitely on the qualitative level and perhaps even on the quantitative level. Our comparison has been restricted to a single grid cell, hence another logical extension of our results is to multiple grid cells. This will entail non-local effects due to the simulation types: in an analog simulation the probability of reaching a given grid cell is restricted compared to non-analog simulations, which, in turn, experience adverse effects due to their weight distribution. Non-analog simulations produce small-weight particles that have little impact on the estimate, but are as costly to simulate as particles with a higher weight. Evidently, non-analog simulation types hold the possibility of the lowest variance, but in terms of the cost measure they are expected to lose performance relative to analog estimation procedures compared to the single-grid-cell situation. \section*{Acknowledgements} The first author is funded by a PhD fellowship of the Research Foundation Flanders (FWO) under fellowship number 1189919N.\\ This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053.
The views and opinions expressed herein do not necessarily reflect those of the European Commission.\\ Parts of the work are supported by the Research Foundation Flanders (FWO) under project grant G078316N.\\ The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation Flanders (FWO) and the Flemish Government, department EWI. \bibliographystyle{abbrv}
\section{Introduction} Two-dimensional transition metal dichalcogenides (2D-TMDs) have recently attracted enormous scientific attention for their distinctive optical properties. Like graphene, TMD materials consist of layers held together only by van der Waals forces. 2D TMDs (\textit{e.g.} MoS$_2$, WS$_2$, MoSe$_2$, WSe$_2$ \textit{etc.}) are atomically thin semiconductors, with a transition from an indirect to a direct bandgap in the few-layer limit \cite{Mak_MoS2monofirst_PhysRevLett_2010}. Due to their high binding energy, electron-hole pairs form stable excitons even at room temperature. A pseudospin can be attributed to each of the two valleys that 2D TMDs possess, making it possible to address them selectively with circularly polarized light \cite{Cao_TMDCcircular_NatCom_2012, Xu_TMDCspins_NatPhys_2014, Zhu_WS2bilayerValleyPolarization_PNAS_2014}. The electronic and optical properties of these 2D semiconductors, for instance the excitons and the valley pseudospin, make them an interesting platform for opto-electronics \cite{Zhang_TMDCtransistor_science_2014, Wang_TMDCelectronics_NatNano_2012} and valleytronics \cite{Mak_TMDCvalleyHall_science_2014, Schaibley_valleytronics_NatRev_2016, Irina_2020} applications. One way of creating 2D TMD monolayer samples is by exfoliation from bulk \cite{Irina_2020}. An alternative method to produce layered TMD materials is Chemical Vapor Deposition (CVD) \cite{Song_CVDgrownWS2_ACSNano_2013, Zhang_CVDgrownWS2_ACSNano_2013, Cong_CVDgrownWS2_AdvOptMat_2014, Orofeo_CVDgrownWS2_APL_2014, Thangaraja_WS2crystals_MatLett_2015, Liu_CVDgrownWS2_NanoscResLett_2017}. While CVD can reproduce horizontal layers as found in naturally occurring TMDs, adjusting the growth conditions enables the growth of nanostructures with exciting properties and applications in nanotechnology, for instance vertical walls, flowers and pyramids \cite{Sabrya2020}.
Under certain CVD growth parameters, TMD materials form pyramid-like structures \cite{Zhang_MoS2spirals_NanoLett_2014, Sarma_spiralWS2_RSCAdv_2016} that have many active adsorption sites, useful for applications in hydrogen sensors \cite{Agrawal_MoS2pyramids_JEnergy_2020} and water disinfection \cite{Cheng_MoS2pyramids_MatInterf_2018}, while also possessing interesting electronic properties such as ferromagnetism \cite{Zhou_MoS2pyramid_Nanoscale_2018} and high mobility \cite{Zheng_MoS2pyramids_AdvMat_2017, Chen_WSe2pyramids_ACSNano_2014}. Moreover, pyramid-like TMD structures have applications in non-linear optics \cite{Fan_SHGpyramid_ACSNano_2017, Zheng_MoS2pyramids_AdvMat_2017, Lin_SHGpyramids_ACSNano_2018}: owing to their increased thickness they exhibit a higher non-linear optical conversion efficiency than monolayers, while demonstrating a much larger non-linear optical response than multilayer TMDs. It is interesting to note that only a limited number of studies report the photoluminescence of these spiral- or pyramid-like structures \cite{Zhang_MoS2spirals_NanoLett_2014, Sarma_spiralWS2_RSCAdv_2016, Zheng_MoS2pyramids_AdvMat_2017, Cheng_MoS2pyramids_MatInterf_2018}. In some, the non-linear optical response is studied \cite{Fan_SHGpyramid_ACSNano_2017, Zheng_MoS2pyramids_AdvMat_2017, Lin_SHGpyramids_ACSNano_2018}. In most, the TMD spirals and pyramids are studied using Raman spectroscopy \cite{Chen_WSe2pyramids_ACSNano_2014, Zhang_MoS2spirals_NanoLett_2014, Sarma_spiralWS2_RSCAdv_2016, Zheng_MoS2pyramids_AdvMat_2017, Fan_SHGpyramid_ACSNano_2017, Zhou_MoS2pyramid_Nanoscale_2018, Cheng_MoS2pyramids_MatInterf_2018, Agrawal_MoS2pyramids_JEnergy_2020}. However, comparing the measured optical response of pyramid-like structures across different studies needs to be done with caution, as the terms spiral flake and pyramid are used for nanostructures of different thicknesses, geometries and sizes. 
Raman spectroscopy is a powerful tool to study 2D TMDs, as knowledge of the vibrational modes of the layers provides insight into their structure \cite{Lee_MoS2Ramanfirst_ACSNano_2010, Zhao_RamanTMDlinear_Nanoscale_2013, Berkdemir_RamanWS2_ScientRep_2013}. Commonly studied are the characteristic vibrational modes of TMDs, the E$^1_{2g}$ mode, which corresponds to the in-plane displacement of the atoms, and the A$_{1g}$ mode, which corresponds to the out-of-plane displacement of the chalcogenide atoms, as well as the longitudinal acoustic phonon LA(M) \cite{Berkdemir_RamanWS2_ScientRep_2013,Zhao_RamanTMDlinear_Nanoscale_2013, Mitioglu_RamanWS2LAmode_PRB_2014, Molas_RamanWS2_ScientRep_2017}. Studying the Raman response as a function of temperature \cite{Peimyoo_temperatureRamanWS2_NanoRes_2015, Gaur_temperatureRamanWS2_PhysChemC_2015, Li_MoS2Raman_pressureshift_APL_2016, Li_RamanWS2_defectsT_JPhysChemC_2019, Fan_resonanceRamanTMD_JApplPhys_2014}, excitation polarization \cite{Mitioglu_RamanWS2LAmode_PRB_2014, Zhao_RamanTMDlinear_Nanoscale_2013, Chen_helicityRamanTMD_NanoLett_2015} and excitation wavelength \cite{Carvalho2015, Corro_resonantRamanTMD_NanoLetters_2016, McDonnell_resonantRamanWS2_NanoLetters_2018} provides information on structural properties such as the number of layers, strain and defect density. It is important to note that the existence of exciton resonances in 2D TMDs has large implications for their Raman response: Raman features are greatly enhanced when the excitation is in resonance with an excitonic transition \cite{Zucker_resonantExciton_PhysRevLett_1983, Berkdemir_RamanWS2_ScientRep_2013,Zhao_RamanTMDlinear_Nanoscale_2013, Carvalho2015, Corro_resonantRamanTMD_NanoLetters_2016, McDonnell_resonantRamanWS2_NanoLetters_2018}. 
Resonance Raman spectroscopy on 2D TMD materials results in the excitation of higher-order phononic resonances \cite{Golasa_multiphononMoS2_APL_2014}, which yields rich Raman spectra with many more modes than the two characteristic modes mentioned above. Many questions arise about the nature of the optical response of complex CVD-grown pyramid-like TMD nanostructures. It is unknown how the nanogeometry of TMD pyramids influences their photoluminescence and Raman response, and how these depend on temperature and polarization. For potential applications, this knowledge is paramount. \begin{figure*}[htp] \centering \includegraphics[width = 0.8\linewidth] {pyramids_intro.pdf} \caption{\textcolor{black}{\textbf{Hollow WS$_2$ pyramids}} \\ \textcolor{black}{\textbf{a.} SEM image of the hollow WS$_2$ pyramid. The lines along the sides indicate the single stair steps, whereas the darker region in the middle is the crater. \textit{inset.} Schematic representation of the hollow pyramid. \textbf{b.} AFM image of the hollow WS$_2$ pyramid. The blue line indicates the position of the AFM crosscut depicted in \textbf{c}, ranging from \textbf{A} the pyramid crater to \textbf{B} the substrate. \textbf{d.} SEM image of a WS$_2$ monolayer on the same substrate. \textbf{e.} Photoluminescence spectrum of the WS$_2$ monolayer (black dotted line), and spectra of a pyramid obtained with a \SI{595}{nm} excitation (orange) and a \SI{561}{nm} excitation (green). When converting the x-axis to wavenumber $\Delta\nu$ in \textbf{f}, the spectral response for the two different lasers overlaps, indicating that the collected light originates from Raman processes rather than photoluminescence.}} \label{fig_intro} \end{figure*} Here, we study the Raman and photoluminescence response of CVD-grown hollow WS$_2$ pyramids, comparing it to the optical response of WS$_2$ monolayers. 
Even though the WS$_2$ monolayers and pyramids are grown on the same substrate and under the same conditions, their measured optical response is completely different. We find, surprisingly, that the pyramids exhibit a strongly reduced photoluminescence (PL) with respect to horizontal layers. The reduced PL enables us to study the Raman signal of the hollow WS$_2$ pyramids, which contains both the characteristic Raman peaks of flat layers and a great number of higher-order phononic resonances. In contrast with the monolayers, the measured optical response of the hollow WS$_2$ pyramids is non-uniform over the nanostructures. Annular dark-field (ADF) scanning transmission electron microscopy (STEM) measurements confirm position-dependent variations in atomic arrangement. \section{Results and discussion} \subsection{Hollow WS$_2$ pyramids} \textbf{Figure \ref{fig_intro}a} presents an SEM image of a CVD-grown hollow WS$_2$ pyramid (the substrate is a SiN film on Si, see Methods). The WS$_2$ crystallizes in the 3R phase (see Fig.\ref{fig_TEM-morp-and-3R}c-d and Section A in the Supplementary Materials). The clear lines along the pyramid sides indicate single steps (see Fig.\ref{fig_microscope}c and Fig.\ref{fig_TEM-morp-and-3R}a,b in the Supplementary Materials). The geometry of the darker middle becomes clearer when examining the AFM image in Fig.\ref{fig_intro}b. The height profile measured with the AFM along the blue line is presented in Fig.\ref{fig_intro}c. The bottom of the crater in the middle is roughly \SI{5.6}{nm} high with respect to the substrate, whereas the pyramid sides reach a height of \SI{44}{nm}. The inset of Fig.\ref{fig_intro}a displays a schematic representation of the hollow pyramid, depicting the stair-like sides in white and the crater with a bottom of finite thickness in the middle. 
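The measured AFM heights can be translated into rough layer counts. The sketch below assumes an interlayer spacing of about \SI{0.62}{nm} for WS$_2$; this spacing is a commonly quoted literature value, not a number taken from this work, so the resulting layer numbers are order-of-magnitude estimates only.

```python
# Rough estimate of WS2 layer numbers from the AFM heights in Fig. 1c.
# The interlayer spacing of ~0.62 nm is an assumed literature value,
# not a quantity measured in this work.
INTERLAYER_SPACING_NM = 0.62

def layers_from_height(height_nm, spacing_nm=INTERLAYER_SPACING_NM):
    """Convert a measured step height to an approximate layer count."""
    return round(height_nm / spacing_nm)

crater_layers = layers_from_height(5.6)   # crater bottom vs. substrate
side_layers = layers_from_height(44.0)    # pyramid sides vs. substrate
print(crater_layers, side_layers)         # roughly 9 and 71 layers
```

This back-of-envelope count is consistent with the statement that the crater bottom is "only a few nanometers thin" yet already well beyond the monolayer limit.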
\mbox{Figure \ref{fig_intro}e} presents optical spectra obtained by exciting a WS$_2$ pyramid \textcolor{black}{at the stair-like side} with either a \SI{595}{nm} laser (orange) or a \SI{561}{nm} laser (green). The spectra contain a similar sequence of peaks, but their spectral positions do not overlap in wavelength. However, these peaks are located at the same frequency shift relative to the excitation laser, as depicted in Fig.\ref{fig_intro}f. The clear overlap of the larger peak positions in the spectra from the two different lasers indicates that the collected light originates from an inelastic Raman process rather than from photoluminescence. \mbox{Figure \ref{fig_intro}d} presents an SEM image of a single horizontal layer (monolayer) WS$_2$ grown on the same substrate (besides hollow pyramids and monolayers, fully grown pyramids are also present on this substrate, see Supplementary Materials, Fig.\ref{fig_microscope}a, Fig.\ref{fig_ratio} and Fig.\ref{fig_peak_pos}). A comparison in Fig.\ref{fig_intro}e of the optical response of the monolayer (black dotted line) with that of the pyramid makes apparent that the pyramid exhibits a strongly reduced photoluminescence with respect to monolayer WS$_2$. A small background under the Raman modes is visible in the spectrum of the pyramid around the PL wavelength of \SI{630}{nm}. If this is remnant PL emerging from the hollow WS$_2$ pyramid, it has an intensity of at most 1\% of the WS$_2$ monolayer (see Supplementary Materials Fig.\ref{fig_background}). The immense reduction of the photoluminescence intensity from the hollow pyramids is unexpected. Even though the pyramid crater is only a few nanometers thick, the pyramid spectra do not resemble spectra of standard few-layered horizontal WS$_2$ (see Supplementary Materials Fig.\ref{fig_background}f). 
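The two-laser check described above amounts to a simple unit conversion: a Raman feature sits at a fixed shift $\Delta\nu = 10^7\,(1/\lambda_{\mathrm{exc}} - 1/\lambda)$ cm$^{-1}$ (wavelengths in nm) but at an excitation-dependent wavelength, whereas PL stays at a fixed wavelength. A minimal sketch of this conversion; the detection wavelengths in the comments are computed illustrations, not measured values:

```python
def raman_shift_cm1(lambda_exc_nm, lambda_det_nm):
    """Raman shift (cm^-1) of light detected at lambda_det for a given excitation."""
    return 1e7 * (1.0 / lambda_exc_nm - 1.0 / lambda_det_nm)

def detection_wavelength_nm(lambda_exc_nm, shift_cm1):
    """Wavelength at which a Raman mode of a given shift appears."""
    return 1e7 / (1e7 / lambda_exc_nm - shift_cm1)

# The first WS2 Raman feature at ~350 cm^-1 appears at different
# wavelengths for the two excitation lasers used here:
print(detection_wavelength_nm(595.0, 350.0))  # ~607.7 nm
print(detection_wavelength_nm(561.0, 350.0))  # ~572.2 nm
```

Peaks that line up on the $\Delta\nu$ axis for both lasers, as in Fig.~1f, are therefore Raman features, while genuine PL would line up on the wavelength axis instead.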
As the WS$_2$ thickness increases from monolayer to bulk, the material transitions from a direct to an indirect bandgap semiconductor \cite{Mak_MoS2monofirst_PhysRevLett_2010}. It is important to note that, in contrast to the hollow pyramids, the reduced PL from the direct bandgap can be easily measured for few-layered WS$_2$ \cite{Irina_2020, Mak_MoS2monofirst_PhysRevLett_2010}. Moreover, the hollow WS$_2$ pyramids do not exhibit any luminescence at the known wavelength of the indirect bandgap, as would have been expected from both bulk and few-layer WS$_2$ (see Supplementary Materials Fig.\ref{fig_background}f). \textcolor{black}{Therefore we conclude that the increase of the layer number from monolayer WS$_2$ to the pyramid crater cannot explain the reduction of the PL intensity.} CVD-grown horizontal 2D TMD flakes exhibit a similar amount of photoluminescence as exfoliated samples. Hence, the reduction of the PL intensity of the hollow pyramids by at least two orders of magnitude cannot merely be explained as being intrinsic to the CVD growth process, \textit{e.g.} through an increase in defect density. Furthermore, the 3R-WS$_2$ nanostructure cannot explain the PL intensity reduction, as 3R-WS$_2$ exhibits the same photoluminescence as the naturally occurring 2H-WS$_2$ \cite{Yang_WS2RamanPL3R_Nanotech_2019, Du_3RcircularPL_PRB_2019}. We conclude that our hollow WS$_2$ pyramids have a lower quantum efficiency than WS$_2$ flakes: assuming that the optical absorption of a WS$_2$ monolayer and a pyramid is the same, the quantum efficiency of these pyramids is at least two orders of magnitude lower than that of a monolayer WS$_2$. Given that the pyramids have a thickness of \SI{5}{nm} - \SI{44}{nm}, it is safe to assume that a pyramid actually absorbs more than a monolayer, so the quantum efficiency is likely lower by at least another order of magnitude. 
We attribute the decrease in quantum efficiency to the increase in non-radiative loss channels caused by the many edges in the structure of these pyramids. \textcolor{black}{This increase in non-radiative loss channels is the main factor that} leads to a severe quenching of the exciton photoluminescence, without influencing the Raman modes. The reduction of the PL enables us to study the Raman response of the hollow WS$_2$ pyramids in more detail, as many Raman features are usually obscured by the PL spectrum. The pyramid spectra obtained with the \SI{595}{nm} excitation exhibit roughly 10-12 Raman features, three of which have not been reported before for either horizontal WS$_2$ layers or WS$_2$ nanostructures (see Fig.\ref{fig_determine}). For the \SI{561}{nm} excitation, fewer Raman features are visible in the spectra, and these features have a lower intensity. This can be attributed to the fact that the \SI{595}{nm} excitation light is close to the A-exciton resonance, whereas the \SI{561}{nm} excitation is out of resonance with both the A- and B-excitons. Raman modes of TMDs can be greatly enhanced when they are excited in resonance with an excitonic transition \cite{Zucker_resonantExciton_PhysRevLett_1983, Berkdemir_RamanWS2_ScientRep_2013,Zhao_RamanTMDlinear_Nanoscale_2013, Corro_resonantRamanTMD_NanoLetters_2016, McDonnell_resonantRamanWS2_NanoLetters_2018}. \subsection{\textcolor{black}{Structural characterization}} \begin{figure*}[htp] \centering \includegraphics[width = \linewidth] {pyramids_TEM.pdf} \caption{\textcolor{black}{\textbf{ADF-STEM of a hollow WS$_2$ pyramid}} \\ Atomic resolution annular dark-field scanning transmission electron microscopy images taken in \textbf{a-b} the middle region, and \textbf{c.} the side of a hollow WS$_2$ pyramid. Subtle variations in the atomic arrangement can be observed. 
The corresponding FFTs, given as insets in each of the panels, highlight this feature further. Changes in the Bragg reflections occur between them, one of which is a reduction of the intensity of one of the Bragg reflections in the middle region of the hollow pyramid, as marked by the yellow circles in \textbf{b}.} \label{fig_TEM_mt} \end{figure*} In order to help interpret the \textcolor{black}{studied} optical response, we perform a detailed structural characterization of the hollow WS$_2$ pyramids by means of transmission electron microscopy (TEM) measurements. \textbf{Figure \ref{fig_TEM_mt}} displays annular dark-field (ADF) scanning transmission electron microscopy (STEM) images taken in different regions of these WS$_2$ nanostructures. Figures \ref{fig_TEM_mt}a and \ref{fig_TEM_mt}b display atomic resolution ADF-STEM images of two different locations corresponding to the middle region of the hollow pyramid, while Fig.\ref{fig_TEM_mt}c corresponds to the pyramid side. By comparing these three images, we can clearly observe differences in the atomic arrangement. In Figure \ref{fig_TEM_mt}a, the atomic distribution displays a well defined hexagonal shape, and this symmetry is also highlighted by the corresponding fast Fourier transform (see inset in Fig.\ref{fig_TEM_mt}a). As we can observe from Fig.\ref{fig_TEM_mt}b, the atomic arrangements at this location are somewhat different from those observed in Fig.\ref{fig_TEM_mt}a. This structural variation is also reflected in the corresponding FFT (inset in Fig.\ref{fig_TEM_mt}b), where one of the Bragg reflections exhibits a reduced intensity (marked with yellow circles to facilitate its visualization). Structural variation is also observed in the side region of the hollow pyramid. The difference in contrast in Fig.\ref{fig_TEM_mt}c corresponds to two stair-like steps in the pyramid side. 
By comparing their relative atomic arrangement, we can determine that, while the external step exhibits a clear hexagonal honeycomb structure, this arrangement is lost in the subsequent layer. Note also that, as in the middle region of the nanostructures, the atomic arrangement can also vary slightly across the sides (see also Figure \ref{fig_additional_TEM} in the Supplementary Materials). These subtle variations of the atomic arrangement might be induced by the local presence of strain, which in turn results in a slight change of the orientation of the flake. Importantly, the level of structural disorder is more marked in the middle of the pyramids as compared to the sides (see Figure \ref{fig_additional_TEM}) due to the additional presence of free-standing WS$_2$ flakes arising from the walls of the hollow pyramid. \begin{figure*}[htp] \centering \includegraphics[width = 0.6\linewidth] {pyramids_determination.pdf} \caption{\textbf{Characterization of Raman peaks} \\ \textcolor{black}{\textbf{a.} The optical response of the hollow pyramids (orange) can be fitted by eleven Lorentzian lineshapes (red) and two Gaussians (grey). The Raman features are indicated by arrows with their spectral position in cm$^{-1}$. The last three features (in blue) have not been reported before. \textbf{b.} Part of the features can be explained as being multiphonon resonances involving the LA(M) phonon. The blue line depicts the higher order resonances of $A_{1g}$+n*LA(M). The red line depicts the higher order resonances of n*LA(M). }} \label{fig_determine} \end{figure*} \subsection{Characterization of vibrational modes} In \textbf{Figure \ref{fig_determine}a} we present a WS$_2$ pyramid spectrum\textcolor{black}{, acquired at the pyramid side,} in which all Raman features are indicated with arrows. Commonly, only three Raman modes are measured, on horizontal TMD layers as well as on nanostructures. 
We measure 10-12 Raman features, three of which have not been reported previously (indicated in blue in Fig.\ref{fig_determine}a). The other modes can be attributed following a limited number of previous investigations (see also Table \ref{table_peaks}). In order to analyze the spectra in more detail, we fit the overall spectrum with a collection of eleven Lorentzian lineshapes (red) and a background consisting of two Gaussians (grey) (see Supplementary Materials Section 1 for a discussion on the background). This way we are able to attribute the new Raman modes from hollow WS$_2$ pyramids as multiphonon resonances involving the LA(M) phonon, adopting the methodology for high frequency Raman features in MoS$_2$ \cite{Golasa_multiphononMoS2_APL_2014}. The blue line in Fig.\ref{fig_determine}b depicts the higher order resonances of $A_{1g}$+n*LA(M). The peak at \SI{580}{cm^{-1}} is commonly attributed to $A_{1g}$+LA(M) \cite{Molas_RamanWS2_ScientRep_2017, Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015, Gaur_temperatureRamanWS2_PhysChemC_2015}, and the peak at \SI{769}{cm^{-1}} is attributed to $A_{1g}$+2LA(M) by Molas \textit{et al.} \cite{Molas_RamanWS2_ScientRep_2017}. Thus we attribute the newly observed peak at \SI{1128}{cm^{-1}} (n=4) to $A_{1g}$+4LA(M). The red line in Fig.\ref{fig_determine}b depicts the higher order resonances of n*LA(M). The peak at \SI{702}{cm^{-1}} is commonly attributed to 4LA(M) \cite{Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015}, which is twice the frequency of the first 2LA(M) Raman peak at \SI{350}{cm^{-1}}. Therefore we attribute the newly observed small shoulder of the last peak around \SI{1057}{cm^{-1}} to 6LA(M). 
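The fitting procedure described above (a sum of Lorentzian peaks on a Gaussian background) can be sketched in miniature. The following is an illustrative toy version on synthetic data, using two Lorentzians and one Gaussian instead of the eleven-plus-two model applied to the real spectra; all peak parameters below are invented for the demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    # Peak value amp at center x0, half-width gamma
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def gaussian(x, amp, x0, sigma):
    return amp * np.exp(-((x - x0)**2) / (2 * sigma**2))

def model(x, a1, c1, g1, a2, c2, g2, ab, cb, sb):
    # Two Lorentzian Raman peaks on a broad Gaussian background
    return (lorentzian(x, a1, c1, g1) + lorentzian(x, a2, c2, g2)
            + gaussian(x, ab, cb, sb))

# Synthetic "spectrum" with peaks near the E2g/2LA(M) and A1g positions
x = np.linspace(300.0, 500.0, 400)
true_params = (1.0, 350.0, 5.0, 0.8, 417.0, 5.0, 0.3, 400.0, 80.0)
rng = np.random.default_rng(1)
y = model(x, *true_params) + rng.normal(0.0, 0.01, x.size)

p0 = (0.8, 345.0, 8.0, 0.8, 420.0, 8.0, 0.2, 400.0, 100.0)
popt, _ = curve_fit(model, x, y, p0=p0)
peak_ratio = popt[0] / popt[3]  # analogue of the E2g/A1g intensity ratio
print(popt[1], popt[4], peak_ratio)
```

The fitted amplitudes of the first two Lorentzians give the kind of peak-intensity ratio used later for the E$_{2g}$/A$_{1g}$ maps; separating the Gaussian background from the Lorentzian peaks is what makes that ratio robust against the broad defect-state background.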
The expected resonance at 3LA(M) (red line in Fig.\ref{fig_determine}b) would spectrally overlap with the first silicon peak at \SI{520}{cm^{-1}}, and the expected resonance at n=3 (blue line in Fig.\ref{fig_determine}b) would overlap with the second silicon peak at \SI{955}{cm^{-1}}, so these possible features cannot be distinguished from the substrate response. The expected 5LA(M) (red line in Fig.\ref{fig_determine}b) would be around \SI{880}{cm^{-1}}, but is too dim to be distinguished clearly from the background. The peak at \SI{833}{cm^{-1}} is 2*$A_{1g}$, and the peak at \SI{475}{cm^{-1}} is commonly attributed to LA(M)+2ZA(M) (see Table \ref{table_peaks}). We conclude that the observed high-frequency Raman modes are multiphonon resonances involving the LA(M) phonon, excited because the \SI{595}{nm} laser is in resonance with the A-exciton. The highest frequency Raman modes have not been reported before on horizontal WS$_2$ layers (indicated in blue in Fig.\ref{fig_determine}a). A possible explanation for not observing them on monolayers is that investigating Raman modes on horizontal WS$_2$ layers is experimentally challenging because of the presence of photoluminescence, which is much brighter than any Raman feature. An intriguing alternative hypothesis is that the nanogeometry of the hollow pyramid plays a role in exciting the higher order Raman modes more resonantly, \textit{e.g.} through a higher phonon density of states. 
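The combination-mode assignments above can be cross-checked arithmetically: each attributed feature should lie near an integer sum of the fundamentals. A sketch using LA(M) $\approx$ \SI{175}{cm^{-1}} (half of the measured 2LA(M) feature) and the measured A$_{1g}$ at \SI{417}{cm^{-1}}; deviations of order \SI{10}{cm^{-1}} are to be expected from anharmonicity and phonon dispersion, so this is a consistency check, not a precise prediction:

```python
# Fundamentals taken from the measured spectrum (Table 1):
LA_M = 175.0   # half of the 2LA(M) feature at ~350 cm^-1
A_1G = 417.0

# Measured feature positions (cm^-1) and their proposed assignments
measured = {
    "4LA(M)": 702.0,
    "A1g+2LA(M)": 769.0,
    "2A1g": 833.0,
    "6LA(M)": 1057.0,
    "A1g+4LA(M)": 1128.0,
}

predicted = {
    "4LA(M)": 4 * LA_M,
    "A1g+2LA(M)": A_1G + 2 * LA_M,
    "2A1g": 2 * A_1G,
    "6LA(M)": 6 * LA_M,
    "A1g+4LA(M)": A_1G + 4 * LA_M,
}

for name in measured:
    # deviation between measured position and simple harmonic sum
    print(name, predicted[name], measured[name] - predicted[name])
```

Under this naive harmonic model all assigned combination modes land within roughly \SI{15}{cm^{-1}} of their measured positions, supporting the multiphonon interpretation.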
\begin{table*}[htp] \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline position & std (cm$^{-1}$) & brightness & attributed to & possibly & literature \\ \hline 350 cm$^{-1}$ & 1.6 & ++ & E$^1_{2g}$ / 2LA(M) & & \cite{Berkdemir_RamanWS2_ScientRep_2013,Zhao_RamanTMDlinear_Nanoscale_2013, Mitioglu_RamanWS2LAmode_PRB_2014, Corro_resonantRamanTMD_NanoLetters_2016, Peimyoo_temperatureRamanWS2_NanoRes_2015, Thripuranthaka_temperatureRamanTMD_APL_2014, Molas_RamanWS2_ScientRep_2017} \\ \hline 417 cm$^{-1}$ & 1.5 & ++ & A$_{1g}$ & & \cite{Berkdemir_RamanWS2_ScientRep_2013,Zhao_RamanTMDlinear_Nanoscale_2013, Corro_resonantRamanTMD_NanoLetters_2016, Peimyoo_temperatureRamanWS2_NanoRes_2015, Thripuranthaka_temperatureRamanTMD_APL_2014, Molas_RamanWS2_ScientRep_2017} \\ \hline 475 cm$^{-1}$ & 4.7 & - - & LA + 2ZA & & \cite{Molas_RamanWS2_ScientRep_2017} \\ & & & or E''(M) + TA(M) & & \cite{McDonnell_resonantRamanWS2_NanoLetters_2018} \\ \hline 520 cm$^{-1}$ & 0.8 & ++ & Si & 3LA(M) & \cite{Parker_Silicon_PhysRev_1967, Uchinokura_Silicon_SolStateComm_1972, Weinstein_Silicon_SolStateComm_1972} \\ \hline 580 cm$^{-1}$ & 1.0 & +- & A$_{1g}$ + LA(M) & & \cite{Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015, Thripuranthaka_temperatureRamanTMD_APL_2014,Molas_RamanWS2_ScientRep_2017,Gaur_temperatureRamanWS2_PhysChemC_2015} \\ \hline 702 cm$^{-1}$ & 1.4 & + & 4LA(M) & 2E$^1_{2g}$ & \cite{Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015, Thripuranthaka_temperatureRamanTMD_APL_2014} (\cite{Molas_RamanWS2_ScientRep_2017}) \\ \hline 769 cm$^{-1}$ & 1.0 & + & A$_{1g}$ + E$^1_{2g}$ & A$_{1g}$ + 2LA(M) & \cite{Molas_RamanWS2_ScientRep_2017} \\ \hline 833 cm$^{-1}$ & 1.2 & + & 2A$_{1g}$ & & \cite{Molas_RamanWS2_ScientRep_2017} \\ \hline 880 cm$^{-1}$ & & - - & & 5LA(M) & \\ \hline 958 cm$^{-1}$ & 3.3 & +- & Si & A$_{1g}$ + 3LA(M) & \cite{Parker_Silicon_PhysRev_1967, Uchinokura_Silicon_SolStateComm_1972, 
Weinstein_Silicon_SolStateComm_1972} \\ \hline 1057 cm$^{-1}$ & 1.0 & - - & & 6LA(M) & \\ \hline 1128 cm$^{-1}$ & 4.0 & + & & A$_{1g}$ + 4LA(M) & \\ \hline \end{tabular} \end{center} \caption{\textcolor{black}{Overview of the measured Raman features on the hollow pyramid (see Fig.\ref{fig_determine}) at room temperature. The first column gives the peak position determined by fitting, with the statistical standard deviation in the second column and the brightness in the third. The fourth column indicates known Raman modes, for which the last column gives a few references. The fifth column indicates possible explanations, either taken from a single article or representing our hypothesis.} } \label{table_peaks} \end{table*} \begin{figure*}[htp] \centering \includegraphics[width = 0.9\linewidth] {pyramids_position.pdf} \caption{\textbf{Position dependence of intensity and shape of spectra} \\ \textcolor{black}{\textbf{a.} Intensity map of the first WS$_2$ Raman feature around \SI{350}{cm^{-1}}, with a step size of around \SI{1.5}{\mu m}. The stars indicate the positions of the spectra in \textbf{d-e}. Note that the x and y axes in the map are slightly skewed due to experimental constraints (see Experimental Section). \textbf{b.} Intensity map of the Si Raman peak at \SI{520}{cm^{-1}}. \textbf{c.} Optical image of a hollow WS$_2$ pyramid. Note the white colour at the sides, indicating a clear increase in scattering from the sides with respect to the top. \textbf{d,e.} Pyramid spectra at \SI{300}{K} and \SI{4}{K}. The substrate spectrum (green) shows the two Si peaks at 520 and \SI{955}{cm^{-1}}. The spectrum at the hollow part of the pyramid (red) has an overall higher intensity than at the side (black). \textbf{f.} Map of the ratio between the first two Raman features in the spectra (E$_{2g}$/A$_{1g}$). 
}} \label{fig_position} \end{figure*} \subsection{Position dependence of spectral features} The hollow WS$_2$ pyramids contain two distinct regions: the crater in the middle and the stair-like sides. We find that these regions exhibit a different spectral response. This is in contrast to the more homogeneous spectral response for horizontal WS$_2$ flakes (see Supplementary Materials, Fig.\ref{fig_ratio} and Fig.\ref{fig_peak_pos}). \textbf{Figure \ref{fig_position}a} presents an intensity map of the first Raman peak (E$_{2g}$,2LA). To create this map, the maximum value of the fitted peak is used (see Fig.\ref{fig_determine}a). The peak intensity is higher and the peaks are more pronounced at the pyramid crater than at the stair-like sides. For the other WS$_2$ Raman peaks, as well as at different temperatures and using different excitation wavelengths, this intensity distribution looks similar (see Supplementary Materials Fig.\ref{fig_intensity}). \mbox{Figure \ref{fig_position}b} presents an intensity map for the silicon peak at \SI{520}{cm^{-1}}. As expected, there is a constant intensity for the substrate next to the pyramid. It is interesting to note that the intensity of this substrate peak also decreases on the pyramid edges. We hypothesise that light scatters strongly from the stair-like pyramid sides, as seen from the bright white colour of the sides in the optical image (Fig.\ref{fig_position}c). This both reduces the excitation light available to excite Raman modes and, through scattering of the emitted Raman light, reduces the detected signal, including the Raman response of the silicon substrate. \mbox{Figures \ref{fig_position}d-e} depict spectra at three different positions: on the hollow pyramid middle (red), on the substrate (green) and on the pyramid side (black) (indicated by stars in Fig.\ref{fig_position}a,b). 
In both the room temperature spectra in Fig.\ref{fig_position}d and \SI{4}{K} spectra in Fig.\ref{fig_position}e (temperature dependence will be discussed later), the two silicon peaks at 520 and \SI{955}{cm^{-1}} can be clearly distinguished (in green). The spectrum at the pyramid crater (red) clearly has a higher overall intensity. In addition to the Raman peaks, there is also a background visible. Especially at \SI{4}{K}, this background signal becomes extremely high, turning the signal from all the higher frequency Raman modes into mere shoulders. Based on its spectral position, we attribute this background to intermediate gap states or defect states (see Supplementary Materials Fig.\ref{fig_background}). It is interesting to note that this background is significantly higher in the pyramid crater than on the sides, which might originate from the presence of crystallographic defects. \mbox{Figure \ref{fig_position}f} presents a map of the intensity ratio between the first two Raman peaks (E$_{2g}$/A$_{1g}$). Just like the intensity of the individual peaks, this intensity ratio is also non-uniform over the WS$_2$ pyramid. In the pyramid crater, the peak ratio is approximately 1.0, whereas on the stair-like sides, the A$_{1g}$ peak is higher. The difference in peak ratio between the pyramid crater and sides is also present, and much larger, for the \SI{561}{nm} excitation (see Supplementary Materials Fig.\ref{fig_ratio}). Note that the spectral features of a fully grown pyramid are very similar to the spectral response from the hollow pyramid sides (see Supplementary Materials Fig.\ref{fig_ratio}). We conclude that both the peak intensity and the overall shape of the spectra are non-uniform along the WS$_2$ pyramid, behaving differently at the hollow pyramid crater and the stair-like sides. This indicates a difference in the atomic arrangement between the two parts of the pyramid. 
\begin{figure*}[htp] \centering \includegraphics[width = 0.6\linewidth] {pyramids_spectralshift.pdf} \caption{\textbf{Position dependence of spectral position} \\ \textbf{a.} Map at \SI{4}{K} of the spectral position of the first WS$_2$ Raman feature (2LA(M),E$_{2g}$)\textcolor{black}{, with a step size of around \SI{1.5}{\mu m}}. The spectral position is fairly homogeneous over most of the pyramid, except for the edge, where it is red shifted down to \SI{320}{cm^{-1}}. The stars indicate the position of the spectra in \textbf{b.} Three gray lines are drawn as guides to the eye. Comparing the position of the Si peak of the three spectra with the line at \SI{520}{cm^{-1}} indicates that the spectral position of this peak does not shift. Comparing the position of the first WS$_2$ peak with the line at \SI{355}{cm^{-1}} does indicate the large shift of the spectrum on the pyramid edge (light blue) with respect to the rest of the nanostructure.} \label{fig_shift} \end{figure*} We also find a spatial non-uniformity in the spectral position of the peaks. \textbf{Figure \ref{fig_shift}a} depicts the spectral position of the first WS$_2$ Raman peak (E$_{2g}$,2LA) as a function of position. This spectral position is fairly uniform over most of the nanostructure, except for the right edge, where the spectral position is red shifted significantly, down to \SI{320}{cm^{-1}}. Moreover, a small blue shift of the peak is seen on the left pyramid edge. The second WS$_2$ Raman peak exhibits a similar spectral shift (not shown), as do spectra obtained with a \SI{561}{nm} excitation. This spectral shift of peaks becomes more evident when comparing the spectra at the positions of the blue stars (Fig.\ref{fig_shift}b). The second line indicates the position of the silicon peak at \SI{520}{cm^{-1}}, which is clearly constant in all three spectra. 
However, when comparing the line at \SI{355}{cm^{-1}} with the position of the first Raman peak, it is clear that the WS$_2$ peaks in the edge spectrum are red shifted. The spectral position of Raman modes in TMD materials is known to depend on the number of layers \cite{Lee_MoS2Ramanfirst_ACSNano_2010, Zhao_RamanTMDlinear_Nanoscale_2013, Berkdemir_RamanWS2_ScientRep_2013, Zheng_MoS2pyramids_AdvMat_2017, Zhou_MoS2pyramid_Nanoscale_2018}. One might therefore have expected a gradual spectral shift along the stair-like sides of the pyramid, because of their gradual increase in WS$_2$ layer thickness. Unfortunately, the diffraction-limited laser spot of size \SI{450}{nm} (see Methods) is much bigger than the width of the individual terraces. Therefore, if the size of the steps is one or even a few layers, we do not have the resolution to distinguish thickness-dependent changes in the Raman response of individual steps. The changes in the Raman response with the number of layers N are largest for low N: the reported difference in spectral position for different layer thicknesses is at most \SI{5}{cm^{-1}} (between a monolayer and a bilayer), much less than the shift of \SI{30}{cm^{-1}} that we observe at the edge of this pyramid. Therefore this spectral shift cannot be explained by a thickness increase alone. \begin{figure*}[htp] \centering \includegraphics[width = 0.6\linewidth] {pyramids_temperature.pdf} \caption{\textbf{Temperature dependence of spectral features} \\ \textcolor{black}{\textbf{a-b.} Spectra obtained as a function of temperature with a \SI{595}{nm} excitation on the stair-like pyramid sides (black star in Fig.\ref{fig_position}) and on the hollow pyramid middle (red star in Fig.\ref{fig_position}), respectively. \textbf{c.} Spectra obtained with a \SI{561}{nm} excitation of the hollow pyramid middle as a function of temperature. 
The background visible at higher frequencies overlaps in wavelength with the background under the \SI{595}{nm} spectra in Fig.\ref{fig_position}b (see Supplementary Materials Fig.\ref{fig_background}). \textbf{d.} Temperature dependence of the intensity ratio of the first two Raman features, the fingerprint of WS$_2$ material. The intensity ratio is presented for spectra upon \SI{595}{nm} excitation (circles), and for spectra at the pyramid crater (stars) and pyramid sides (diamonds) upon \SI{561}{nm} excitation.}} \label{fig_temperature} \end{figure*} The spectral position of Raman modes in TMD materials does not depend only on the number of layers; it is also known to be influenced by the defect density \cite{Mignuzzi_MoS2Raman_defectshift_PhysRevB_2015, Parkin_MoS2Raman_defectshift_ACSNano_2016}, strain \cite{Rice_MoS2Raman_strainshift_PRB_2013, Yang_MoS2Raman_strainshift_ScieRep_2014} and pressure \cite{Li_MoS2Raman_pressureshift_APL_2016}. The reported shift due to strain is 2-\SI{3}{cm^{-1}} \cite{Rice_MoS2Raman_strainshift_PRB_2013, Yang_MoS2Raman_strainshift_ScieRep_2014} and due to an increased defect density is 5-\SI{10}{cm^{-1}} \cite{Mignuzzi_MoS2Raman_defectshift_PhysRevB_2015, Parkin_MoS2Raman_defectshift_ACSNano_2016}. For both strain and defects, the A$_{1g}$ peak is much less affected than the E$_{2g}$ peak. The reported shift due to pressure is up to \SI{40}{cm^{-1}} for pressures up to \SI{20}{GPa} \cite{Li_MoS2Raman_pressureshift_APL_2016}. Since our measurements were performed in either ambient (room temperature) or vacuum (low temperature) conditions, we do not expect a spectral shift due to pressure. It is not unlikely that a large defect density and/or strain is present in the hollow pyramids. 
Interestingly, spectra taken on fully grown WS$_2$ pyramids with curved edges do exhibit small shifts in the spectral peak position along the edges with highest curvature, where a higher stress or strain is expected (see Supplementary Materials Fig.\ref{fig_peak_pos}). Therefore we assume that the origin of the large spectral shift in the hollow WS$_2$ pyramid in Fig.\ref{fig_shift} lies in a combination of the defect-density and strain or stress effects mentioned above. Having said that, the previously reported shifts, even when added, are much lower than the \SI{30}{cm^{-1}} that we observe on the edge of the hollow pyramid, so we cannot exclude other, unknown causes related to the specific nanogeometry of the hollow pyramid. In this context, it is interesting to note that we measure an average spectral position of the first Raman peak on a WS$_2$ monolayer of \SI{357}{cm^{-1}}, which is higher than the average of \SI{350}{cm^{-1}} of this and other WS$_2$ pyramids (see Supplementary Materials Fig.\ref{fig_peak_pos}). Given that the first WS$_2$ Raman feature is a combination of the E$_{2g}$ and 2LA(M) phonon, we hypothesise that the first Raman feature in the monolayer has a larger E$_{2g}$ contribution than the same feature in the hollow pyramid spectra. We conclude that the spectral features of the hollow pyramids, namely intensity, peak ratio and spectral peak position, vary in space over the nanostructures. This is in contrast with the homogeneous distributions of these spectral features on a WS$_2$ monolayer. Moreover, the spectral position of the first WS$_2$ Raman feature is different for a WS$_2$ monolayer than for a hollow WS$_2$ pyramid, indicating, in the monolayer, a larger contribution from the E$_{2g}$ than from the 2LA(M) phonon. \subsection{Temperature dependence of spectral features} Studying the temperature dependence of spectral features provides insight into the structural properties of the WS$_2$ pyramids.
\textbf{Figure \ref{fig_temperature}a} presents spectra at four temperatures at the pyramid side (black star in Fig.\ref{fig_position}a), obtained with a \SI{595}{nm} excitation. With decreasing temperature, the Raman modes become more pronounced. Note for instance the three features at 702, 769 and \SI{833}{cm^{-1}}. The intensity of both the Raman features and the background increases with decreasing temperature. This intensity increase of the background is even clearer in Fig.\ref{fig_temperature}b-c, which present the spectra from the hollow pyramid middle (red star in Fig.\ref{fig_position}a) obtained upon either \SI{595}{nm} or \SI{561}{nm} excitation. The background also seems to exhibit a spectral blue shift with decreasing temperature, with its maximum moving from \SI{635}{nm} at room temperature to \SI{620}{nm} at \SI{4}{K}. Based on the temperature dependence of its spectral position, we attribute this background to intermediate gap states or defect states rather than excitons, trions or an indirect bandgap response (see Supplementary Materials Section 3). \mbox{Figure \ref{fig_temperature}d} presents the temperature dependence of the average intensity ratio of the first two Raman peaks (E$_{2g}$/A$_{1g}$). As shown in Fig.\ref{fig_position}f, this ratio is not uniform over the pyramid, but is higher at the stair-like sides than at the hollow pyramid middle. This non-uniformity is most evident in the spectra obtained with \SI{561}{nm} excitation (diamonds in Fig.\ref{fig_temperature}d), as the A$_{1g}$ peak in the spectra from the pyramid sides almost completely disappears at room temperature (see Supplementary Materials Fig.\ref{fig_ratio}). For a \SI{595}{nm} excitation, at room temperature the first Raman peak ($E_{2g}$,2LA(M)) is 1.5 times higher than the second Raman peak ($A_{1g}$) and at \SI{4}{K} this ratio is exactly inverted (circles in Fig.\ref{fig_temperature}d).
The peak ratio upon \SI{561}{nm} excitation of spectra taken at the pyramid middle follows a similar temperature-dependent behaviour (stars in Fig.\ref{fig_temperature}d). The temperature-dependent intensity increase of TMD Raman peaks has been reported previously for horizontal TMD layers, and is attributed to an increase in phonon thermal population \cite{Fan_resonanceRamanTMD_JApplPhys_2014, Gaur_temperatureRamanWS2_PhysChemC_2015, Peimyoo_temperatureRamanWS2_NanoRes_2015}. The difference in intensity ratio between the E$_{2g}$ and A$_{1g}$ for the different excitation frequencies can be explained by the more resonant \SI{595}{nm} laser exciting the Raman peaks differently than the out-of-resonance \SI{561}{nm} laser \cite{Carvalho2015, Corro_resonantRamanTMD_NanoLetters_2016}. \textcolor{black}{The strength of the exciton-phonon interaction, and therefore the resonance condition, is different for the in-plane E$_{2g}$ than for the out-of-plane A$_{1g}$ Raman modes \cite{Mastrippolito_excitonPhonon_Nanoscale_2020, Corro_resonantRamanTMD_NanoLetters_2016}. This explains why the ratio E$_{2g}$/A$_{1g}$ is higher for a \SI{561}{nm} excitation, \textit{i.e.}, out of resonance with the excitonic transition, than for the resonant \SI{595}{nm} excitation. The Raman intensity ratio also depends on the layer thickness of the material \cite{Zhao_RamanTMDlinear_Nanoscale_2013}. However, the main difference in intensity ratio is reported between a monolayer and a bilayer, whereas we observe different relative Raman intensities between the few-layer pyramid crater and the thick pyramid edge.} Moreover, the temperature-dependent behaviour of the Raman intensity also depends on the defect density in the sample \cite{Li_RamanWS2_defectsT_JPhysChemC_2019}.
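For reference, the phonon thermal population invoked above enters the Stokes and anti-Stokes intensities through the standard Bose--Einstein occupation factor (a textbook expression, quoted here for orientation rather than taken from the cited works),
\[
n(\tilde{\nu},T)=\frac{1}{\exp\!\left(hc\tilde{\nu}/k_{\mathrm{B}}T\right)-1},
\]
with the Stokes intensity $\propto n+1$ and the anti-Stokes intensity $\propto n$. For the first Raman feature at $\tilde{\nu}\approx\SI{352}{cm^{-1}}$ one has $hc\tilde{\nu}/k_{\mathrm{B}}\approx\SI{506}{K}$, so $n$ decreases from $\approx 0.23$ at room temperature to essentially zero at \SI{4}{K}; the resonance conditions and defect-related scattering therefore act on top of this occupation factor.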
These factors are not mutually independent: \textit{e.g.}, \textcolor{black}{the WS$_2$ thickness influences the exciton-phonon interaction,} and the resonance of the excitation affects the influence of phonons and defects on the Raman intensity. The temperature dependence of the Raman peak intensities, excited with two different frequencies, is therefore an interplay between the phonon thermal population, the resonance conditions for the different phonon peaks and the defect density in the structure. It is interesting to note in this context the intensity ratio of the first two Raman peaks on a WS$_2$ monolayer, excited at \SI{561}{nm}. This ratio is 1.4 at room temperature, much lower than the corresponding ratios at both the hollow pyramid middle and the stair-like sides, excited at \SI{561}{nm} (see Supplementary Materials Fig.\ref{fig_ratio}). Unexpectedly, this indicates a clear difference in structure and/or thickness between the WS$_2$ monolayer and the hollow pyramid. The large difference in intensity ratio between the hollow pyramid middle and the stair-like sides suggests a difference in atomic arrangement, indicating that nanogeometrical differences induce spectral modifications. \section{Conclusions} We have studied the optical response of hollow WS$_2$ pyramids, comparing them with WS$_2$ monolayers grown on the same substrate. The optical response of these nanostructures is completely different, as the hollow WS$_2$ pyramids exhibit a strongly reduced photoluminescence with respect to WS$_2$ monolayers. This enables us to study the rich variety of Raman peaks that the pyramids exhibit as a result of the resonant excitation. Following the hypothesis of a multiphonon excitation involving the longitudinal acoustic phonon LA(M), we are able to explain the origin of all 10-12 observed Raman resonances. In contrast with monolayers, the measured optical response of the pyramids is non-uniform in intensity, intensity ratio between peaks, spectral shape and spectral position.
We attribute the spectral differences between the hollow pyramid middle and the stair-like sides to differences in both nanogeometry and atomic arrangement. ADF-STEM measurements confirm variations in the atomic arrangement, where the level of disorder is more marked in the pyramid crater than on the sides. In addition to this positional dependence, we measured the temperature-dependent behaviour of the spectral response of the hollow WS$_2$ pyramids. With decreasing temperature, the spectra change in intensity and shape. We see clear differences between spectra obtained with a resonant and an out-of-resonance excitation laser. Since the optical response of the photoluminescent WS$_2$ monolayers is completely different, we conclude that we have fabricated a platform of structures with tunable optical properties. Both nanostructures offer exciting possibilities, with applications ranging from opto-electronics to non-linear optics. \section{Methods} \label{methods} The WS$_2$ hollow pyramids are directly grown on a microchip using chemical vapour deposition (CVD) techniques. This microchip is composed of a silicon frame with nine windows over which a continuous silicon nitride (Si$_3$N$_4$) film is spanned. Preceding the CVD growth procedure \cite{Sabrya2020}, tungsten trioxide (WO$_3$) is deposited onto the microchip. This deposition is achieved by dispersing \SI{50}{mg} of WO$_3$ in \SI{1}{mL} of isopropanol, after which the dispersion is deposited with a pipette. The subsequent sulfurization process \cite{Sabrya2020} is carried out in a gradient tube furnace from Carbolite Gero. The microchip is placed in the middle zone and a crucible holding \SI{400}{mg} of sulfur is placed upstream from it. The middle zone is heated to 750\textcelsius{} and kept at this reaction temperature for 1 hour, after which the system is naturally cooled down to room temperature. Sulfurization is carried out under an argon flow of \SI{150}{sccm}.
The zone containing the sulfur is heated to 220\textcelsius{}. The optical measurements are performed using a home-built spectroscopy set-up. We placed the sample in a Montana cryostation S100, using Attocube ANPxy101/RES piezo scanners to perform the presented raster scans. The small cross-coupling between the in-plane and out-of-plane piezo scanners causes a skew between the x and y axes of the spectral-feature maps depicted in this work. Comparing the maps with the optical and SEM images of the pyramids, we can correlate the position of certain spectral features with the position on the pyramid. The cryostation is cooled down from room temperature to \SI{200}{K}, \SI{100}{K} and \SI{4}{K}. The sample is illuminated through a \SI{0.85}{NA} Zeiss 100x objective. Two continuous wave lasers are used, one with a wavelength of \SI{595}{nm} and a power of \SI{1.6}{mW/mm^2}, and one with a wavelength of \SI{561}{nm} and a power of \SI{3.6}{mW/mm^2}. To avoid the consequences of tight focussing on (circular) polarization, a \SI{2}{mm} laser diameter is used, slightly underfilling the objective in the excitation path. \textcolor{black}{This results in an excitation spot size of approximately \SI{450}{nm}}. A linear (vertical) polarization is used for the laser light. The sample emission is collected in reflection through the same objective as in excitation, and projected onto a CCD camera (Princeton Instruments ProEM 1024BX3) and spectrometer (Princeton Instruments SP2358) \textit{via} a 4f lens system. The excitation light is filtered out using colour filters. The transmission electron microscopy (TEM) measurements were carried out using an ARM200F Mono-JEOL microscope with probe Cs correction. The microscope was operated at \SI{200}{kV} in both TEM and STEM modes, with the monochromator on and a slit of \SI{2}{\mu m} inserted. For the atomic resolution ADF-STEM measurements, an objective aperture of \SI{30}{\mu m} and a camera length of \SI{12}{cm} were used.
The convergence semi-angle was \SI{23}{mrad}. \section{Acknowledgements} The authors acknowledge funding from ERC Starting Grant “TESLA” No. 805021. The authors acknowledge Dr Martin Caldarola and Dr Filippo Alpeggiani for their help in the data analysis. \clearpage \section*{Supplementary Materials} \subsection{WS$_2$ hollow pyramid sample} \begin{figure*}[htp] \centering \includegraphics[width = 0.6\linewidth] {SEM_microscope.pdf} \caption{\textbf{Hollow WS$_2$ pyramid sample} \\ \textbf{a.} Wide-field optical image of a part of the sample, with hollow pyramids (red), WS$_2$ monolayer flakes (green) and full pyramids (blue). The black squares are windows in the silicon frame over which a silicon nitride film is spanned (see Methods). Under these growth conditions, many hollow pyramids arise with sizes of 10 - \SI{25}{\mu m}, comparable to the one presented in this work. \textbf{b,c.} SEM images of the hollow WS$_2$ pyramid studied in the main text. In \textbf{b}, the black triangle in the middle is the bottom of the pyramid crater. The top rim of the pyramid can also be distinguished around the triangle of the pyramid crater. The steps in the stair-like sides can be easily recognized in \textbf{c}. } \label{fig_microscope} \end{figure*} Figure \ref{fig_microscope}a presents a wide-field optical image of a part of the sample. Under these growth conditions, hollow pyramids (red), monolayer flakes (green) and full pyramids (blue) are all created. Note that there are many hollow pyramids with a size comparable to the one studied in this work (\SI{15}{\mu m}). Using the same CVD growth conditions (see Methods) yields similar samples with the same distribution of pyramid-like structures. To provide better insight into the morphology of the hollow WS$_2$ pyramids, Fig.\ref{fig_microscope}b,c depict higher magnification SEM images of the hollow pyramid depicted in Fig.1a in the main text.
The black triangle in the middle of Fig.\ref{fig_microscope}b is the bottom of the pyramid crater. The top rim can also be distinguished around the crater triangle. The steps of the stair-like sides can clearly be recognized in Fig.\ref{fig_microscope}c. \begin{figure*}[hbp] \centering \includegraphics[width = 0.7\linewidth] {Morphology-and-crystal-structure.png} \caption{\textbf{Morphology and crystal structure of the hollow WS$_2$ pyramids} \\ \textbf{a-b.} Low-magnification ADF-STEM images of a hollow WS$_2$ pyramid. The step-like nature of the pyramid side is clearly visible from the changes in contrast with every step. \textbf{c.} Atomic resolution image corresponding to the side of the hollow pyramid (left panel) and the ADF intensity profile (right panel) acquired along the black outlined region in the atomic resolution image. \textbf{d.} Schematic atomic model of the top-view (upper panel) and side-view (lower panel) of the crystalline structure associated with the 3R-WS$_2$ phase.} \label{fig_TEM-morp-and-3R} \end{figure*} In addition, Transmission Electron Microscopy (TEM) measurements are performed to gain access to the atomic structure of the hollow pyramids. Figure \ref{fig_TEM-morp-and-3R}a and \ref{fig_TEM-morp-and-3R}b display low-magnification annular dark-field (ADF) scanning transmission electron microscopy (STEM) images of the side of a hollow WS$_2$ pyramid. The variations in contrast clearly visualise the step-like nature of the hollow pyramid side. Figure \ref{fig_TEM-morp-and-3R}c presents an atomic-resolution ADF-STEM image corresponding to the side of the pyramid. Each bright spot corresponds to an atomic column that is composed of alternating tungsten (W) and sulfur (S) atoms. Using an ADF linescan, extracted from the atomic resolution image across six lattice points, we confirm that the WS$_2$ within the hollow pyramid crystallizes in a 3R crystal phase (see Fig.\ref{fig_TEM-morp-and-3R}d).
\subsection{Spectral background} The hollow pyramid spectra exhibit a number of Raman modes plus a background (see Fig.3, Fig.6 in the main text). Using the temperature-dependent spectral position of this background, we can infer its origin. \mbox{Figure \ref{fig_background}} depicts the spectral response of the hollow pyramid upon a \SI{595}{nm} excitation (orange) and a \SI{561}{nm} excitation (green), comparing this with the photoluminescence of a monolayer exciton (black dotted line). The spectral response of the hollow pyramid to a \SI{595}{nm} excitation exhibits a background under the higher order Raman modes, whereas the spectral response to a \SI{561}{nm} excitation exhibits a background that is separated from the Raman modes but overlaps spectrally with the former background (see Fig.\ref{fig_background}\textcolor{black}{c-d}). The spectral position of the photoluminescence peak is completely different (\mbox{610 - \SI{630}{nm}} in the temperature range \SI{4}{K} - \SI{300}{K}). Figure \ref{fig_background}\textcolor{black}{e} depicts the spectral position of the pyramid background, determined from the spectra of \SI{561}{nm} excitation (in green), and the spectral position of the exciton PL (black dotted line). Both the pyramid background and the exciton PL peak blue-shift with decreasing temperature; however, their spectral positions differ. At room temperature (Fig.\ref{fig_background}a), the spectral position of the PL peak (at \SI{630}{nm}) is very close to the background of the pyramid spectra acquired with a \SI{595}{nm} excitation (around \SI{645}{nm}). At \SI{4}{K} (Fig.\ref{fig_background}\textcolor{black}{d}), the PL peak (at \SI{615}{nm}) overlaps roughly with the first few Raman features of the pyramid spectra acquired with a \SI{595}{nm} excitation (around \SI{620}{nm}).
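To compare the Raman shifts (given in \si{cm^{-1}}) with these wavelength positions, recall the standard conversion (with $\tilde{\nu}$ expressed in inverse length units consistent with $\lambda$),
\[
\lambda_{\mathrm{det}}=\left(\frac{1}{\lambda_{\mathrm{exc}}}-\tilde{\nu}\right)^{-1},
\]
so that under \SI{595}{nm} excitation the \SI{352}{cm^{-1}} feature appears at $\lambda_{\mathrm{det}}\approx\SI{608}{nm}$ and the \SI{702}{cm^{-1}} feature at $\approx\SI{621}{nm}$, consistent with the overlaps described above.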
However, at \SI{200}{K} and especially \SI{100}{K} (Fig.\ref{fig_background}\textcolor{black}{b,c}), the PL peak overlaps neither with the first Raman features nor with the background of the pyramid spectra acquired with a \SI{595}{nm} excitation. Therefore we conclude that the background of the hollow pyramid spectra is not photoluminescence from the direct bandgap of WS$_2$. Another potential explanation for the background in the pyramid spectra is emission from the indirect bandgap. Few-layer WS$_2$ samples exhibit a combination of direct and indirect bandgap emission \cite{Mak_MoS2monofirst_PhysRevLett_2010}. Figure \ref{fig_background}f compares the room-temperature spectra of a WS$_2$ trilayer (grey) and five WS$_2$ layers (blue) (exfoliated on a Si substrate) with a spectrum from the hollow WS$_2$ pyramid. The room-temperature spectral position of the indirect bandgap ranges from \SI{700}{nm} for a bilayer to \SI{850}{nm} for multilayers \cite{SuHyun_2018}. This constitutes a large spectral separation from the measured spectrum (compare Fig.\ref{fig_background}a). Moreover, with decreasing temperature, the indirect bandgap is reported to exhibit a red shift, \textit{i.e.}, away from the exciton position \cite{Molas_PLindirectTemp_Nanoscale_2017, Zhao_PLIndirectTemp_NanoLett_2013}, whereas the pyramid background exhibits a blue shift similar to that of the exciton PL (see Fig.\ref{fig_background}\textcolor{black}{e}). An alternative explanation would be emission from a charged exciton or trion. However, the reported spectral position of the trion lies closer to the exciton than the background in the hollow pyramid spectra does \cite{Plechinger_WS2trionsTemp_PSS_2015, Kato_WS2trions_ACSNano_2016}.
We conclude that this background originates in intermediate gap states or defect states, which are reported to lie in a range of spectral positions further away from the exciton PL than the trion \cite{Jadczak_TMDtrionsTemp_Nanotech_2017, Kato_WS2trions_ACSNano_2016, He_WS2defect_ACSNano_2016}, but closer than the indirect bandgap \cite{SuHyun_2018, Molas_PLindirectTemp_Nanoscale_2017, Zhao_PLIndirectTemp_NanoLett_2013}. It is interesting to note that the spectral background from these intermediate gap states is more prominent in the pyramid crater than on the pyramid sides. We propose that this might originate from the higher density of crystallographic defects there. \begin{figure*}[htp] \centering \includegraphics[width = \linewidth] {background.pdf} \caption{\textbf{Comparison pyramid, monolayer and few-layer spectra} \\ \textcolor{black}{\textbf{a-d.} Spectral response of a pyramid upon a \SI{595}{nm} excitation (orange) and a \SI{561}{nm} excitation (green), and the spectral response of a monolayer upon \SI{595}{nm} excitation (black dotted line), at temperatures between room temperature and \SI{4}{K} (the spectra are re-scaled for easier comparison, see legends). The spectral response of the monolayer upon a \SI{561}{nm} excitation (green dotted line in \textbf{a}) has a lower intensity than that upon the \SI{595}{nm} excitation, but appears at the same wavelength. \textbf{e.} The background under the higher order Raman features in the spectra of the \SI{595}{nm} excitation, and the spectral position of the exciton PL are both blue-shifting with decreasing temperature. \textbf{f.} In contrast to the spectra of a WS$_2$ trilayer (in grey) and five layers of WS$_2$ (in blue) (exfoliated on a Si substrate), the spectrum of the WS$_2$ pyramid (in orange) does not exhibit light from an indirect bandgap at 800 - \SI{850}{nm} wavelength.
Moreover, although reduced with respect to the monolayer, the PL from the direct bandgap of the few layers of WS$_2$ is clearly distinguishable from the background. \textbf{g.} Comparison of the spectral response of the hollow pyramid (in orange) and of five layers of WS$_2$ (in blue), acquired at \SI{4}{K} with a \SI{595}{nm} excitation. }} \label{fig_background} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width = 0.9\linewidth] {peak_intensity.pdf} \caption{\textbf{Position dependence of Raman intensity} \\ \textbf{a-d} Intensity map of the Raman features of the hollow WS$_2$ pyramid \textbf{a.} around \SI{350}{cm^{-1}} (2LA,E$_{2g}$), \textbf{b.} around \SI{417}{cm^{-1}} (A$_{1g}$), \textbf{c.} around \SI{702}{cm^{-1}} (4LA) and \textbf{d.} around \SI{833}{cm^{-1}} (2A$_{1g}$), taken at room temperature upon a \SI{561}{nm} excitation. \textbf{e-h} Intensity map of the Raman features of the hollow WS$_2$ pyramid \textbf{e.} around \SI{350}{cm^{-1}} (2LA,E$_{2g}$), \textbf{f.} around \SI{417}{cm^{-1}} (A$_{1g}$), \textbf{g.} around \SI{770}{cm^{-1}} (A$_{1g}$+2LA) and \textbf{h.} around \SI{1120}{cm^{-1}} (6LA), taken at \SI{4}{K} upon a \SI{561}{nm} excitation. Note that in all cases (compare with Fig.4a,b in the main text) the intensity of the Raman features from the pyramid crater is significantly higher than the Raman intensity from the stair-like sides.} \label{fig_intensity} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width = 0.9\linewidth] {peak_ratio.pdf} \caption{\textbf{Position dependence of peak ratio} \\ \textbf{a-c} Room-temperature map of the ratio between the first two Raman features in the spectra (E$_{2g}$/A$_{1g}$) of \textbf{a} the monolayer, \textbf{b} the hollow pyramid and \textbf{c} a full WS$_2$ pyramid (compare SEM images) (blue stars indicate the positions of the spectra in \textbf{d-f}).
\textbf{a.} The peak ratio on the monolayer is homogeneous along the full flake, 1.4 on average (upon a \SI{561}{nm} excitation). \textbf{b.} The peak ratio on the hollow pyramid (upon a \SI{561}{nm} excitation) is much higher along the stair-like pyramid sides (namely 4-7) than on the pyramid crater (roughly 1.0). \textbf{c.} The peak ratio on the full pyramid is significantly higher along the upper side. Note in the presented SEM image that this side of the full WS$_2$ pyramid is curved, potentially inducing strain. \textbf{d.} Raman spectrum of the WS$_2$ monolayer. \textbf{e.} Spectrum of the side of the WS$_2$ hollow pyramid upon \SI{561}{nm} excitation. Note that the A$_{1g}$ feature is merely a shoulder on the E$_{2g}$,2LA(M) feature, which explains the high peak ratio in \textbf{b}. \textbf{f.} Spectrum of the full WS$_2$ pyramid (\SI{595}{nm} excitation), which is comparable to the spectra of the hollow WS$_2$ pyramid presented in the main text.} \label{fig_ratio} \end{figure*} \textcolor{black}{\subsection{Spectral features of pyramids, monolayer and few-layer WS$_2$}} Figure \ref{fig_intensity} presents maps of the intensity of different Raman features in spectra from the hollow WS$_2$ pyramid, taken at room temperature (Fig.\ref{fig_intensity}a-d) and at \SI{4}{K} (Fig.\ref{fig_intensity}e-h) upon a \SI{561}{nm} excitation. Note that for all the Raman features the intensity from the pyramid crater is higher than that from the pyramid sides, as was the case for the Raman features depicted in Fig.4a,b in the main text (\SI{595}{nm} excitation, \SI{200}{K}). We conclude that the intensity distribution of the Raman features is independent of temperature and excitation frequency. As alluded to in the main text, we hypothesise that light scatters from the stair-like pyramid sides, reducing the excitation light available to excite Raman modes, or that scattering of the resulting Raman response reduces the amount of light detected.
\textcolor{black}{Figure \ref{fig_background}g depicts a comparison between the spectral response of the pyramid (in orange) and five layers of exfoliated WS$_2$ (in blue) at \SI{4}{K} and upon a \SI{595}{nm} excitation. The spectral response of the few-layer WS$_2$ exhibits a combination of Raman modes and PL from the excitonic resonance. Due to the high PL intensity, the higher-order Raman modes are much less clear in the few-layer WS$_2$ than in the pyramid spectrum. In both spectra, the A$_{1g}$ mode has a higher intensity than the E$_{2g}$ mode. The ratio of E$_{2g}$/A$_{1g}$ is 0.60 for few-layer WS$_2$, whereas it is on average 0.79 for the pyramid spectra (see Fig.6b in the main text). } Figure \ref{fig_ratio}a-c present room-temperature maps of the ratio between the first two Raman peaks in the spectra (E$_{2g}$/A$_{1g}$) of, respectively, the WS$_2$ monolayer, the hollow pyramid and a full WS$_2$ pyramid (compare the presented SEM images). In contrast with the position-dependent peak ratio of the hollow pyramids, the peak ratio on the monolayer is homogeneous along the full flake (see Fig.\ref{fig_ratio}a). This ratio is around 1.4, as can be observed in the Raman spectrum in Fig.\ref{fig_ratio}d. The Raman peak ratio of the hollow pyramids upon \SI{561}{nm} excitation is much higher on the pyramid sides than in the pyramid crater (see Fig.\ref{fig_ratio}b). This difference in ratio was already apparent upon \SI{595}{nm} excitation, as presented in Fig.4f in the main text, but the contrast between the pyramid sides and crater is much larger upon \SI{561}{nm} excitation. The Raman peak ratio at the pyramid sides is as high as 4-7, since the A$_{1g}$ feature is reduced to merely a shoulder on the E$_{2g}$,2LA(M) feature. This becomes apparent in the spectrum in Fig.\ref{fig_ratio}e. The SEM image under Fig.\ref{fig_ratio}c depicts a full WS$_2$ pyramid grown on the same substrate. This pyramid exhibits a clear curvature.
Interestingly, the Raman peak ratio of the spectra from this pyramid (upon a \SI{595}{nm} excitation) exhibits a difference on the side with the largest curvature, potentially induced by strain. Figure \ref{fig_ratio}f presents a spectrum of the full WS$_2$ pyramid, which is similar to the spectra of the hollow WS$_2$ pyramid in the main text. We conclude that the Raman peak ratio provides information on differences in the atomic structure between different nanostructures, as well as information on differences in both atomic structure and strain or stress within the same nanostructure. \begin{figure*}[htp] \centering \includegraphics[width = 0.9\linewidth] {peak_position.pdf} \caption{\textbf{Position dependence of spectral position} \\ \textbf{a-b} Room temperature map of the first WS$_2$ Raman feature around \SI{350}{cm^{-1}} on \textbf{a} the monolayer and \textbf{b} a full WS$_2$ pyramid (compare SEM images). Comparing \textbf{a} and \textbf{b} and Fig.5 in the main text, it becomes apparent that the Raman peak position on the monolayer is significantly different from that on the pyramids. \textbf{c} depicts the average spectral position of the first Raman peak on all measured pyramids and the monolayer. Note that the bar does not indicate errors, but rather the spread of the peak position along the pyramids. The first Raman feature on the monolayer is around \SI{357}{cm^{-1}}, whereas on all measured pyramids it is around \SI{350}{cm^{-1}}\textcolor{black}{, and around \SI{348}{cm^{-1}} on five layers of WS$_2$}. Possibly this Raman peak has a larger E$_{2g}$ contribution relative to the 2LA(M) phonon in the monolayer than in the pyramid spectra.} \label{fig_peak_pos} \end{figure*} Another position-dependent spectral feature alluded to in the main text is the spectral position of the Raman peaks. Figure \ref{fig_peak_pos} presents maps of the spectral position of the first Raman peak on the WS$_2$ monolayer and on the full WS$_2$ pyramid, respectively.
The peak position is fairly homogeneous along both nanostructures compared to the hollow pyramid (Fig.5a in the main text). Figure \ref{fig_peak_pos}c depicts the average spectral position of the first Raman peak, as measured on different WS$_2$ pyramids and the monolayer (the bar does not indicate errors, but rather the spread of the peak position along the pyramids). The peak position on the monolayer is around \SI{357}{cm^{-1}}, which is significantly different from the peak position of around \SI{350}{cm^{-1}} on all the pyramid structures \textcolor{black}{and \SI{348}{cm^{-1}} for five layers of exfoliated WS$_2$}. We hypothesise that this first Raman peak has a larger E$_{2g}$ contribution relative to the 2LA(M) phonon in the monolayer than in the pyramid spectra. In conclusion, the different nanogeometry of the WS$_2$ pyramids induces spectral changes with respect to monolayer WS$_2$.\\ \subsection{Structural characterisation} \begin{figure*}[htp] \centering \includegraphics[width = 0.8\linewidth] {Additional-TEM_V4.pdf} \caption{\textbf{ADF-STEM of a hollow WS$_2$ pyramid} \\ \textbf{a.} Atomic resolution ADF-STEM image taken at the side of a hollow WS$_2$ pyramid. \textbf{inset} FFTs are taken within two visually different areas of this region, highlighting a clear difference between the two areas. \textcolor{black}{\textbf{b-d.}} Low-magnification ADF images of the same hollow pyramid. The characteristic morphology of the hollow pyramids can clearly be observed, and is also extracted from the line profile over the white line given within \textbf{b} (see inset). \textcolor{black}{Each of these images shows free-standing flakes which arise from the walls of the hollow pyramid. The free-standing flake shown in \textbf{d} suggests a growth mechanism where layer-by-layer stacking and screw-dislocation-driven growth mechanisms co-exist. \textbf{inset} A schematic visualises this co-existence.
Here a free-standing flake initiates its growth from a central nucleation point via layer-by-layer growth (blue), after which a separate nucleation event occurs, leading to screw-dislocation-driven growth (green) on top.}} \label{fig_additional_TEM} \end{figure*} Figure \ref{fig_additional_TEM} displays additional results obtained in the structural characterization measurements by means of Transmission Electron Microscopy (TEM). Figure \ref{fig_additional_TEM}a presents a high-resolution ADF-STEM image taken at the side of a hollow WS$_2$ pyramid. Within Figure 2 of the main text it could already be observed that the atomic arrangements present in the sides of the WS$_2$ pyramid and in the middle are not the same. Figure \ref{fig_additional_TEM}a depicts how even within a given region the atomic arrangement can vary. This is highlighted by the differences in the FFTs taken in the lower left corner (marked by the green rectangle in Fig.\ref{fig_additional_TEM}a) and the top right corner (marked by the blue rectangle in Fig.\ref{fig_additional_TEM}a) of this region. Where the FFT in the blue area shows two nicely arranged hexagonal patterns, \textit{i.e.}, an inner and an outer hexagon, the green area presents one hexagon. These subtle variations of the atomic arrangement might be induced by the local presence of strain, which in turn results in a slight change of the orientation of the flake. Figures \ref{fig_additional_TEM}b \textcolor{black}{to \ref{fig_additional_TEM}d} depict low-magnification ADF images of the hollow WS$_2$ pyramid. Within Fig.\ref{fig_additional_TEM}b the hollow nature of the pyramid can clearly be observed, and it is illustrated even more clearly in the line profile taken along the white line given in Fig.\ref{fig_additional_TEM}b. In addition, \textcolor{black}{Fig.\ref{fig_additional_TEM}b-d} depict how free-standing WS$_2$ flakes seem to arise from the walls of the hollow pyramid.
These increase the level of structural disorder in the middle of the pyramids as compared to the sides. \textcolor{black}{The standing flake depicted in the low-magnification ADF image of Fig.\ref{fig_additional_TEM}d highlights the possible growth mechanism leading to the hollow WS$_2$ pyramids. In this figure it can be observed that both layer-by-layer stacking and screw-dislocation-driven growth mechanisms contribute to the overall growth mechanism of the hollow WS$_2$ pyramids.} \clearpage \bibliographystyle{unsrt}
\section{ISSUES TO BE ADDRESSED} \begin{enumerate} \item The development of elementary particle theory in the period from roughly 1955 to 1980 elucidates the manner in which theoretical physicists chose to follow a given line of research. \item Elementary particle theory and quantum field theory in this period provide a good laboratory in which to ask the question, since there were a great many directions investigated, although in some cases a direction may have lasted only a few years. It therefore has the advantage of an overview of ``many generations'' compressed into a short period of time [Fruit-fly analogy]. \item For the same reason, it may appear from the outside that particle physics was often a chaotic, ill-motivated discipline, subject to rapidly changing fashions. Again, from the outside it often seemed that some theorists behaved as ambulance chasers, and that when they arrived on the scene, a feeding frenzy occurred. The pack then seemed to move on quickly, leaving a few stragglers to pick over the remaining bones. In fact, it is my contention that the shifts in direction in particle theory were highly motivated, highly structured, and the result of the interplay of several complex factors. \end{enumerate} \section{MOTIVATING FACTORS TO BE CONSIDERED} \begin{enumerate} \item A crucial calculation may cause a large number of theorists to join or to leave a topic. There are examples of both. \item A crucial experiment can also be a strong motivating factor in a change of direction. Could it be that crucial calculations played a greater role than crucial experiments for the period in question? \item The role of dominant figures is an extremely important, but not necessarily overriding, consideration. The most interesting situations occur when there exists more than one dominant figure at a given time, with divergent or conflicting views. Then how do members of the community choose their line of research?
To ask this question necessarily places emphasis on the behavior of the highly competent, active theorists who are not the dominant figures, since given conflicting views, an active theorist must make a choice. The dominant personalities may be de-emphasized in making this choice. If so, then how does a theorist choose a research direction? Examples of crucial calculations that motivated joining a direction: \begin{enumerate} \item Adler--Weisberger $\longrightarrow$ current algebra \cite{PhysRevLett.14.1051,PhysRevLett.14.1047} \item Green--Schwarz anomaly calculation $\longrightarrow$ superstrings \cite{Green:1984sg} \item Renormalizability of non-Abelian gauge theories $\longrightarrow$ QCD and the Standard Model \cite{PhysRevD.5.823,THOOFT1972189} \item Asymptotic freedom $\longrightarrow$ the Standard Model \cite{tHooft:1971qjg,PhysRevLett.30.1343,PhysRevLett.30.1346} \end{enumerate} A crucial calculation that motivated leaving a direction: Coleman--Mandula $\longrightarrow$ leave relativistic SU(6) \cite{PhysRev.159.1251}. Crucial experiments that motivated a new direction: \begin{enumerate} \item Discovery of the Omega-minus $\longrightarrow$ SU(3) as a symmetry of the strong interactions \cite{Samios:1980vh} \item Weak neutral currents $\longrightarrow$ the Standard Model \cite{Erler_2013} \item Discovery of the J/$\psi$ $\longrightarrow$ acceptance of the quark model \cite{PhysRevLett.33.1404,PhysRevLett.33.1406} \end{enumerate} A crucial experiment that motivated leaving a direction: deep inelastic scattering $\longrightarrow$ leave hadronic strings \cite{PhysRevLett.23.930}. Baroque explanations: Regge poles as fundamental to a description of the strong interactions \cite{PhysRevLett.7.394}. Strategic retreat: from the strong interactions as fundamental, post-1960. \item A Search for a Simple Explanation. When a particular line of research involves a formulation which is ``too baroque'', theorists will either abandon the line completely, or seek to embed it in a broader, simpler description.
Usually this does not involve a crucial calculation (or experiment), but rather the desire of the community for a simple theory. \item Strategic Retreat. Sometimes a crucial calculation will close off avenues of research without alternatives being available at the time. The theory community must then reorganize its thinking along new lines. How is this done? Examples exist where this is a community effort, not just that of a major figure. \item Philosophical Underpinnings. Some people do not leave a line of research. Foolhardy or courageous? This most frequently occurs with ``baroque'' theories [S-matrix vs.\ string theories], or may occur with simpler theories that have little experimental support. It is very difficult to judge before and during a period of research. But things moved so rapidly in particle theory that some judgements could be made. Philosophical underpinnings were not the issue for QM vs.\ Einstein, with QM accepted. Chew, in S-matrix theory \cite{PhysRevLett.7.394}, had strong philosophical motivations, but the direction was ultimately unprofitable. Green--Schwarz in string theory \cite{Green:1984sg} also had strong philosophical motivations, and that direction has paid off in unexpected ways. Usually one must wait and see, with possible surprises. In that context, string theory has provided new insights into black hole issues. \item Theories with No Experimental Support. Why did theorists persist in pursuing theories with no visible (experimental) support, much to the puzzlement (or scorn) of their experimental colleagues? They sought to embed baroque theories into simpler structures, and hoped to achieve clarification of first principles [SUSY, SUGRA, superstrings, quantum gravity]. Hope springs eternal that a crucial calculation or experiment will establish the theory in some broad context (not necessarily definitive). \item Outcomes. Have the strategies employed by the particle physics community been successful and economical of effort?
I claim that there were very few dead ends. Essentially all of the mainstream ideas proposed have been incorporated or subsumed into later theories. However, the actual contribution of old ideas to newer theories may sometimes have taken unexpected forms. Judging the efficiency of the effort is complex, but I believe it has been shown to be highly successful and well motivated, in a rapidly moving community. \end{enumerate} Specific Examples (more or less in chronological order, biased by personal experience) \begin{enumerate} \item Disaster [A crucial calculation ends a subject]. In September 1955 I arrived at the University of Rochester as a first-year graduate student. At that time, John Greene had just defended his thesis attempting to explain the binding energy of the deuteron as an n--p bound state bound by pion exchanges, using quantum field theory as understood then \cite{jgreene}. Greene investigated pseudo-scalar and pseudo-vector pion-nucleon couplings, pair-suppression theories, intermediate-coupling theories, etc., all to two-loop order, i.e., up to three pion exchanges. All these attempts to explain deuteron binding were failures. In retrospect he was studying the wrong problem, with the wrong degrees of freedom, and with the wrong methods. But that would not become clear until roughly 10 years later. At that time Rochester was a leading center of high energy physics, and Greene's calculations had parallels in work found by others. To the best of my knowledge, this was the last field theoretic calculation in the strong interactions carried out in Rochester until many years later. It also marked the recognition that Nuclear Physics and Particle Physics were distinct disciplines. There were several possible issues raised by these failures. The dominant possibilities considered were: \begin{enumerate} \item The field theoretic description of the strong interactions was correct, but perturbation theory was inadequate for the task.
\item Field theory, per se, might be incorrect at the short distances probed by the nuclear forces, so that a fundamental reformulation of the theory was required. Nobody suggested that one was working with the wrong degrees of freedom, as quarks and gluons appeared much later. \end{enumerate} \item Strategic Retreat and Reconstruction. There were two prevailing schools of thought, with two sets of dominant figures: \begin{enumerate} \item Dispersion relations (championed by Chew and collaborators) \cite{smatrix} to organize, understand, and eventually predict the strong interactions. \item Phenomenology of the weak interactions and the role of group theory (with a leading role played by Gell-Mann \cite{osti_4008239}, and later by Weinberg \cite{PhysRevD.8.605,PhysRevD.8.4482}, especially with current algebra). Both points of view involved treating physics at short distances as a black box. Which was more fundamental, i.e., which one would lead to a more fundamental reformulation? The betting in 1956 was that the strong interactions were more fundamental. In any case, theorists generally chose one or the other of the two approaches. To be more specific, let us look briefly at the choices faced by a young theorist in choosing a direction for his research. \item Dispersion relations. The forward dispersion relations for pi-N scattering tested nothing but locality, microscopic causality, and unitarity (conservation of probability). Causality required that field operators satisfy space-like commutation relations. The forward pi-N scattering amplitudes could rigorously be shown to satisfy a relation analogous to the Kramers--Kronig relation of optics. A test of the relation seemed to indicate its failure. A CRISIS. The first research problem that I worked on resolved the question in favor of the dispersion relation \cite{PhysRev.112.1802}. This was fundamental stuff, right? I cast my lot with the dispersion relation crowd.
WRONG. Dispersion relations did lead to S-matrix theory and Regge poles, but played a diminished role in the development of the standard model. \item Weak Interactions and Symmetries. Using methods of group theory, one studied the weak interactions phenomenologically. For the most part, one used group theory to relate different (homologous) amplitudes, and searched for Lie groups to classify the known hadrons, which came to populate an ever larger particle zoo. At the outset, this seemed far removed from the first principles being probed by dispersion theory. Right? Wrong? This line of research gradually led to the V--A theory of the weak interactions \cite{sudarshan1957proc,PhysRev.109.193}, the SU(3) classification of the strong interactions \cite{osti_4008239}, pions as Nambu-Goldstone bosons \cite{PhysRev.117.648,goldstone}, PCAC \cite{PhysRev.111.354}, soft pions, current algebra \cite{PhysRevD.8.605,PhysRevD.8.4482}, effective chiral Lagrangians, partons \cite{Feynman:1969wa,PhysRev.185.1975}, and the quark model \cite{osti_4008239}. The effort to make the effective chiral Lagrangian (which embodied all the information of current algebra) compatible with the quark model \cite{GELLMANN1964214,zweigcern,PhysRevLett.13.598,PhysRev.139.B1006,PhysRevD.2.1285} led to the standard model of the strong interactions (quarks and gluons), to the SU(2)$\times$U(1) electroweak theory \cite{PhysRevD.8.605,PhysRevD.8.4482,PhysRev.184.1625,Glashow:1961tr}, and finally to the standard model itself. There was no a priori way of anticipating the right direction in 1956. By the time current algebra had reached center stage in the mid-to-late 1960s, Weinberg had become one of the dominant figures. At that time, there was little competition left from alternate points of view. The reconstruction had taken place, and there was a long period of verification of the standard model. \end{enumerate} \item A Crucial Calculation vs.
a Baroque Theory. The crucial calculation that led most theorists to accept current algebra was the discovery and verification of the Adler--Weisberger relation. The ingredients, reviewed above, were \begin{enumerate} \item the strong, electromagnetic, and weak currents were identical (up to scale factors) \item the validity of equal-time current algebra \item PCAC \item the validity of the forward pi-N dispersion relations [see above] \end{enumerate} (Note that similar dispersion relations played a parallel role in the development of other analogous sum rules testing current algebra, and later the parton model and quark model.) Although Weinberg was a dominant figure in the development of current algebra, he did not lead theorists away from S-matrix theory, Regge poles, hadron strings, etc. The success of current algebra need not have led to the abandonment of S-matrix theory, since there was no obvious conflict, at least at the time that fashion shifted. What did? \begin{enumerate} \item Challenged by the success of current algebra, Mandelstam \cite{PhysRev.184.1625} and others attempted to incorporate PCAC and current algebra into S-matrix theory, with no success [a crucial calculation, as a failure]. \item S-matrix phenomenology had become too baroque: too many Regge poles, and too many free parameters needed to fit data. How could this be a fundamental description? Apparently it could not. \item Hadronic string theory had fundamental difficulties of principle: ghosts, tachyons, anomalies. \item Hadronic string theory predicted that scattering cross-sections would fall exponentially with momentum transfer, since the theory was ``soft'' at short distances. Deep inelastic scattering experiments showed that the fall-off was much more gradual (power-law), giving evidence of hard fundamental constituents at short distances. This gave rise to the parton model and the quark-gluon model of the strong interactions \cite{salam}.
\end{enumerate} Thus, the shift in fashion from S-matrix theory to current algebra, and eventually to the standard model, presented the individual theorist with the interplay of many complex issues to consider: \begin{enumerate} \item a shift in dominant figures, \item a crucial calculation supporting current algebra (Adler--Weisberger), \item a crucial calculation failure (Mandelstam and PCAC in S-matrix theory), \item a baroque phenomenology of the strong interactions vs.\ the desire for simplicity, \item a crucial experiment (deep inelastic scattering). \end{enumerate} Later, the definitive crucial experiment that convinced theorists that quarks were concrete degrees of freedom of the underlying theory, and not mathematical bookkeeping devices for the SU(3) classification of hadrons, was the discovery of the J/$\psi$ by the Ting and Richter groups \cite{PhysRevLett.33.1404,PhysRevLett.33.1406}. \item History Repeats? \begin{enumerate} \item Is the standard model the baroque theory of the present era? Too many free parameters (masses, coupling constants, etc.). \item The effort to enlarge the theory so as to ``simplify'' the description of the standard model, in order to unify the strong with the electroweak interactions and maybe gravity, has led to a wide variety of theories with no present experimental support: technicolor, supersymmetry, GUTS, supergravity, superstrings. These theories are philosophically well motivated, which, as we have remarked, is not a guarantee of success. \item Black hole unitarity is a fundamental issue. \item Is the cosmological constant a Pandora's box? \item New theoretical attempts have roots in many lines of thought in particle theory. Many issues of principle need to be clarified, such as the compatibility of gravity with quantum theory. \item Experimental support for ``new'' ideas?
\end{enumerate} \end{enumerate} Summary \begin{enumerate} \item The post-war history of particle theory involved many complex and competing issues, leading to complicated decisions for a choice of research program by individual theorists. \item Directions changed rapidly as old lines of research became fully mature, or were closed off by crucial calculations or experiments. People generally did not stick with apparently unpromising directions. \item New directions were opened up by new crucial calculations, which were often not by the dominant figures. \item Dominant figures played an important, but not exclusive, role in the particle physics community. \item We were in an era where issues of principle and crucial calculations played a greater role than experimental information. Indications are that this may now have shifted back to a primary position for experiment. \item It is very difficult for the individual theorist to choose the most fruitful direction to pursue, based on the information available at the time. Luck can be an important part of the choice. \item Outcomes of previous lines of thought are almost always to be found in present research areas. \end{enumerate} \section{CONTEMPORARY PARALLELS?} Presently, the search for physics ``Beyond the Standard Model'' is a theme which has not yet yielded concrete results. Clues may be coming from neutrino experiments, since neutrino masses point to ``new physics''. However, at the moment this search is dominated by experimental physics, not theory. LIGO and related gravitational wave searches, as well as significant results from astrophysics, have focused attention on the inclusion of gravity as necessary for a unified point of view. In this context, the interplay between black holes and unitarity is a prominent issue of theoretical physics, with a final unification not yet in sight.
In my view, we are awaiting results from crucial experiments, with crucial calculations playing a subsidiary role, in contrast to the ``golden age'' of elementary particle physics. \acknowledgments We are grateful to Isaac Cohen and Jonathan Harper for their aid in preparing the manuscript. \vspace{\baselineskip} \vspace{\baselineskip} \noindent The choice of references is not intended to be comprehensive; they are chosen to be illustrative of the main issues of the text. \bibliographystyle{JHEP}
\section{Introduction} \label{sec:introduction} Companies such as SpaceX, Amazon, and Telesat are building large, low earth orbit (LEO) satellite networks that provide global Internet access without the need for terrestrial fiber. The Starlink constellation is already partly deployed in a test phase~\cite{Pultarova2015-ml}. \begin{figure} \centering \captionsetup[subfigure]{justification=centering} \begin{subfigure}{.5\columnwidth} \centering \includegraphics[height=3.5cm]{Graphs/topology_hierarchical.pdf} \caption{Hierarchical, tiered topology} \label{fig:topology_hierarchical} \end{subfigure}% \begin{subfigure}{.5\columnwidth} \centering \includegraphics[height=3.5cm]{Graphs/topology_sat.pdf} \caption{Distributed, dynamic topology} \label{fig:topology_sat} \end{subfigure}% \caption{While requests currently traverse hierarchically arranged tiers of networks (Figure~\ref{fig:topology_hierarchical}), all global clients will be able to communicate directly in the shared wide-area satellite network (Figure~\ref{fig:topology_sat}).} \label{fig:topology} \vspace{-0.25cm} \end{figure} These new satellite access networks differ tremendously from the traditional tiered topology of the Internet, as we illustrate in Figure~\ref{fig:topology}. Currently, users connect to the Internet via Tier 3 networks provided by their local ISPs, which connect to Tier 2 networks and the backbone of only a handful of Tier 1 networks. Operators of content delivery networks (CDNs) exploit this hierarchical topology: by placing their points-of-presence (PoP) within Tier 3 networks that group clients in the same vicinity, they can serve many clients with low access latency by replicating web content close to its consumers~\cite{AkamaiTechnologiesundated-ro,Sivasubramanian2004-eo}. In satellite access networks, all consumers have direct or near-direct access to the global satellite backhaul network.
Additionally, as the individual satellites are not geostationary, they continuously connect to different ground stations as they orbit over the earth. The distributed and dynamic network presents a significant challenge for CDN operators as their placement of PoPs can no longer exploit a hierarchical network topology. While CDNs for satellite networks have been proposed as potential use-cases for orbital edge computing~\cite{Bhosale2020-aa,Bhattacherjee2020-kr}, to the best of our knowledge, PoP placement has so far not been investigated. In this paper, we propose four novel approaches to PoP placement in large LEO satellite constellations and evaluate them through simulation. To this end, we make the following core contributions: \begin{itemize} \item We propose four different PoP selection strategies for satellite access networks (Section~\ref{sec:strategies}). \item We present a simulation environment for web requests in satellite networks and use it to evaluate our four strategies (Section~\ref{sec:evaluation}). \item We discuss their implications for future development of large LEO satellite networks (Section~\ref{sec:discussion}). \end{itemize} \section{Background} \label{sec:background} In this section, we briefly introduce and describe the state of the art in large LEO satellite communication networks and CDNs. The remainder of this paper is based on this terminology. \subsection{Large LEO Satellite Communication Networks} While satellite-backed Internet access using geostationary satellites at an altitude of 35,000km has been in operation for decades, the induced communication latency makes it infeasible for most use cases~\cite{Clarke1945-qb,Iida2000-il}. Currently, however, SpaceX, Amazon, and others are designing new, large LEO satellite communication networks that comprise thousands of satellites that orbit the earth at a much lower altitude of less than 600km to provide global Internet access~\cite{Pultarova2015-ml}. 
Rather than only relaying signals between two ground stations, these satellites feature inter-satellite links (ISL) that facilitate network connection between satellites. Because they operate in vacuum, ISLs can leverage a speed of light roughly 50\% faster than in fiber cables, and the path from ingress to egress satellite is more direct than traversing the traditional tiers of the global Internet. Consequently, two ground stations can not only communicate directly through this satellite network but can also do so with reduced latency compared to terrestrial fiber~\cite{Khan2015-wf}. SpaceX in particular markets this new ``space Internet'' not just as an option for locations without access to terrestrial fiber but as an alternative that outperforms conventional connection methods~\cite{Del_Portillo2019-al,Giuliari2020-pj,Klenze2018-og,Bhattacherjee2018-vc,Bhattacherjee2019-jz}. \begin{figure} \centering \includegraphics[width=\linewidth]{Graphs/starlink.pdf} \caption{Phase~\textrm{I} of the planned Starlink constellation developed by SpaceX with 24 planes of 66 satellites each, orbit inclination of 53°, and altitude of 550km. To show the +Grid ISLs we mark the links for one satellite as an example~\cite{Bhattacherjee2019-jz,Mark_Handley2018-de}.} \label{fig:constellation} \vspace{-0.25cm} \end{figure} In these large LEO satellite constellations, satellites are arranged in planes evenly spaced around the earth, with each plane comprising an evenly-spaced group of satellites in the same orbit. To give an example, phase \textrm{I} of the planned SpaceX constellation comprises a total of 1,584 satellites, with 24 planes of 66 satellites each (Figure~\ref{fig:constellation}). The orbits each have an inclination of 53° and an altitude of 550km~\cite{Bhattacherjee2019-jz,Mark_Handley2018-de}. An orbit's inclination describes the angle of its plane compared to the earth's equatorial plane, while the orbit's altitude is its distance to the surface of the earth~\cite{nasa-dq}.
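For circular orbits like these, the orbital period follows directly from the altitude via Kepler's third law. The following sketch is ours, not part of the paper's simulation code; the Earth constants are standard textbook values:

```python
import math

# Standard values (assumed, not taken from the paper):
MU_EARTH = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0         # mean Earth radius, km

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit via Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km  # semi-major axis of a circular orbit, km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# The 550 km Starlink phase I altitude gives a period of roughly 95--96 minutes.
print(round(orbital_period_minutes(550.0), 1))
```

At the 550km altitude quoted above this evaluates to roughly 95.5 minutes, which is why every satellite circles the earth about 15 times per day and no ground station keeps the same serving satellite for long.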
As the constellation uses only circular orbits, a satellite's altitude remains constant over its orbital period. Here, the ISLs are likely to be arranged in a neighbor-grid, or \emph{+Grid} pattern, where each satellite keeps links to its successor and predecessor in its plane in addition to two cross-plane links to neighboring satellites in both adjacent planes. The constellation imposes a connected grid over the globe, covering each ground point with access to the network and enabling routing between any two ground terminals over the satellite network~\cite{Bhattacherjee2019-jz,Klenze2018-og,Mark_Handley2018-de,Handley2019-ce}. A data item sent from a server to a client over the satellite network is first passed from the server to a satellite uplink. Data centers are likely to be equipped with one or more of these uplink dishes to provide fast and direct access to the satellite network to serve clients better. Second, the data item is sent to the optimal satellite. This might be the nearest satellite with the best visibility or connection, or it could be the one closest to the target location of the request, to optimize latency. Third, it passes through the satellite network over ISLs to the satellite with a connection to the target ground station. Finally, the item is passed from the satellite network to a satellite dish on the ground, from where it reaches the target computer. An as-yet unknown aspect is how these ground stations will be deployed: while a Starlink satellite dish, for example, is too big for typical consumer hardware such as a mobile phone, it could be installed on private houses, covering only a few tens of devices within that house; as an uplink for a radio tower providing network access for hundreds of devices at a time; or even on a city or communal level for thousands of connected devices. Even a hybrid of these installation options is possible, depending on the needs and available resources in different areas.
The tradeoff between managing only a few high-bandwidth connections or many low-bandwidth connections per satellite is likely to be handled differently on a provider-by-provider basis~\cite{Handley2018-ay,Bhattacherjee2018-vc,Bhattacherjee2019-jz}. \subsection{Replicating Web Content in Content Delivery Networks} CDNs replicate and distribute web documents of different web sites at well-chosen locations close to clients and redirect client requests to these locations. These locations at the edge of the Internet are referred to as points-of-presence (PoPs) of the CDN~\cite{Tanenbaum2016-jp,Sivasubramanian2004-eo}. \begin{figure} \includegraphics[width=\linewidth]{Graphs/tiered_pops.pdf} \caption{CDN PoPs in the tiered Internet. While most clients are served from PoPs within their ISP access networks, additional PoPs serve networks where no capacity has been allocated and are used for tiered distribution of content~\cite{AkamaiTechnologiesundated-ro}.} \label{fig:cdnpops} \vspace{-0.25cm} \end{figure} Placing these PoPs close to clients reduces access latency and minimizes the required bandwidth in the network. Simultaneously, to save on operational costs, the CDN operator benefits from operating as few PoPs as possible, i.e., PoPs should also cover many potential clients at the same time. As a result, PoPs are often placed in Tier 3 networks that directly serve end-users. We show an example of PoP placement in the tiered Internet infrastructure in Figure~\ref{fig:cdnpops}. For fault tolerance and tiered distribution, additional PoPs can also be installed in Tier 2 or Tier 1 networks~\cite{AkamaiTechnologiesundated-ro}. For example, Akamai, one of the largest CDN operators, operates approximately 300,000 servers in more than 130 countries~\cite{akamai-rn}. Placing compute and storage PoPs at the edge of the networks is also the basis of today's edge and fog computing~\cite{paper_bermbach_fog_vision}.
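The +Grid pattern described above can be made concrete with a small index computation. The (plane, slot) indexing scheme and the wrap-around at the seams are our illustrative assumptions; the 24-plane, 66-satellite dimensions are the Starlink phase I figures quoted earlier:

```python
NUM_PLANES = 24        # planes in Starlink phase I (from the text)
SATS_PER_PLANE = 66    # satellites per plane (from the text)

def plus_grid_neighbors(plane: int, slot: int):
    """The four +Grid ISL neighbours of a satellite: predecessor and successor
    in its own plane, plus the same slot in both adjacent planes.
    All indices wrap around, so the grid closes over the globe."""
    return [
        (plane, (slot - 1) % SATS_PER_PLANE),   # intra-plane predecessor
        (plane, (slot + 1) % SATS_PER_PLANE),   # intra-plane successor
        ((plane - 1) % NUM_PLANES, slot),       # neighbour in previous plane
        ((plane + 1) % NUM_PLANES, slot),       # neighbour in next plane
    ]

# Every satellite has exactly four ISLs, so the topology is a 4-regular torus:
print(plus_grid_neighbors(0, 0))  # → [(0, 65), (0, 1), (23, 0), (1, 0)]
```

This torus structure is what makes routing between any two ground terminals possible: a path simply moves along planes and slots until it reaches the satellite above the target ground station.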
\section{Discussion} \label{sec:discussion} Our simulation shows that there are indeed benefits to PoP placement within satellite constellations. In this section, we discuss implications of our work as well as threats to validity. \subsection{Practical Feasibility of Satellite PoPs} As briefly mentioned in the previous section, adding storage capabilities to satellites is a non-trivial task. Nevertheless, such storage resources are necessary to provide satellite PoPs within a LEO constellation, as we propose with our SAT, SAT-TTL, and SAT-REP strategies. While the idea of storage servers in satellites seems fantastical, it is feasible, as early research has shown. Bhattacherjee et al.~\cite{Bhattacherjee2020-kr} use a Starlink satellite and commodity HPE 64-core servers to analyze the costs and challenges for such a deployment and find the results encouraging. Neither weight, volume, nor radiation is an issue given the hardware setup and orbit altitude. Power consumption, especially by cooling, warrants further analysis but should also not be prohibitive. From a cost perspective, such a satellite server is estimated to be only 3x the cost of a server in a data center on earth. Given these promising first findings and the overall low average storage requirements per satellite as determined in our evaluation, we conclude that allocating storage resources for satellite PoPs is certainly challenging yet not unreasonable. \subsection{Impact on Request Latency} In our evaluation, we focus on the tradeoff between two metrics: storage and bandwidth. Request latency is a third aspect that is important for CDNs. We omit it for two reasons: First, latency and bandwidth are not independent of each other. We use request hops as an estimator for bandwidth in our evaluation, but this could also be used to estimate latency. When a request requires fewer hops, it will also incur lower latency.
Second, we have argued that one of the main advantages of satellite-backed Internet is lower request latency, as endpoints communicate over a direct connection and as ISLs benefit from a 50\% faster light propagation in a vacuum than in fiber. For our use case, where users download web content, the latency induced by network communication even over large distances is still lower than the 80ms that humans can perceive, thus demanding no optimization~\cite{Handley2018-ay,Mohan2020-cn}. \subsection{CDN Scalability} Another challenge of a satellite-based CDN is scalability~\cite[p.21]{book_cloud_service_benchmarking}. First, there is the individual PoP's scalability. Each PoP is deployed with a fixed amount of local storage. For both the GST and SAT strategies, this fixed amount of local storage cannot be extended as it is inaccessible, either because the equipment is managed by the end-user or because it is attached to a LEO satellite. Compared to PoPs in terrestrial data centers, scalability could become an issue if the CDN operations show that more storage is required at specific PoPs. The lifetime of user equipment and satellites of only a few years, however, still presents opportunities to adapt local PoP storage. Second, there is the overall scalability of a CDN in LEO satellite constellations. For instance, we consider the flash crowd phenomenon, or ``slashdot effect,'' where a sudden spike in popularity for a particular content occurs. In a satellite network without integrated CDN PoPs, this would lead to all clients requesting this data from the origin servers simultaneously, a significant strain on the uplink between this server and the satellite that serves its ground terminal. Through replication within the network, however, this bandwidth use can be limited. The extent of this optimization depends on the number of PoPs needed to satisfy the demand for this popular data item. 
With millions of ground stations fetching this item from the origin server to serve it from the local store on subsequent requests, as is the case with the GST strategy we propose, the server still has to answer millions of requests. In the SAT, SAT-TTL, and SAT-REP strategies, however, only a handful of satellites that serve those millions of ground stations need to fetch the requested content, which is more akin to a tiered distribution. Consequently, the load on the origin server is limited by the satellite PoPs, even in the face of this flash crowd event. Here, the CDN leads to better scalability of the network. And third, we must consider that a LEO satellite constellation may evolve over time, with more satellites added to increase throughput and coverage. We have seen that the time between handoffs in the SAT-TTL and SAT-REP strategies decreases with the number of satellites, which leads to more frequent purges or more data transfer, respectively. There may be a point at which adding additional satellites actually decreases the CDN's performance. Depending on the size of the constellation, it can thus make sense to designate only a subset of all satellites as PoPs and take additional hops to fetch replicas over ISLs. \subsection{Satellite Movement Prediction} To anticipate ground station handover for the SAT-TTL and SAT-REP strategies, we use the satellites' orbital periods, which we take to be constant, and assume even spacing of satellites within a plane. These assumptions cannot entirely hold true in the real world, as the earth is not perfectly spherical and its gravitational field is not evenly distributed. Gravity, atmospheric drag, and the need to dodge obstacles such as debris all influence the flight paths of satellites; their orbits and spacing may thus change over time~\cite{Blitzer1956-cz,navipedia-wu}.
To counteract this, Starlink satellites are equipped with thrusters to manipulate their flight trajectories and to de-orbit them at the end of their lifetime~\cite{spacex-js}. We abstract from all these factors in our simulation as we have found them to have only minor impacts on the performance of the different PoP selection strategies. \subsection{Data Consistency of Content} A further challenge is data consistency of content~\cite{paper_bermbach_consistency}. In our simulation, our workload is based on static content that is not updated. In practice, content can be updated at any time, which requires invalidating local replicas at PoPs or proactively pushing replicas to these PoPs. While the details depend on the specifics of the replication algorithms in use, we note that this is a problem that grows with the number of PoPs. This is another disadvantage of the GST strategy compared to the SAT strategies. Compared to the number of servers managed by Akamai, for instance, invalidating content on a few thousand satellite PoPs should be manageable. \subsection{Dynamic Content} When we conclude a significant improvement in bandwidth use in the network through the operation of PoPs, we can, of course, only consider content that can be replicated by CDNs. This content is essentially static, such as images, chunked videos, scripts, or static web pages. All other web traffic is unaffected by our optimizations as dynamically generated or user-specific content cannot be served from CDNs. However, we observe a recent trend towards static web content, e.g., the JAMStack~\cite{Biilmann2019-bc}, which specifically aims to make additional content available through CDNs. Furthermore, Amazon Web Services already allows developers to run simple serverless functions with Lambda@Edge in some of their CloudFront PoPs, which allows dynamic content to be distributed over their CDN~\cite{Amazon_Web_Services2020-im}.
If more content providers adopt such paradigms, we could see improvements to the web's overall performance, especially in satellite networks using our proposed CDN PoP selection strategies. \subsection{Security and Privacy} In our paper, we did not address security aspects as data delivered by CDNs tends to be (semi-)public in practice. If there is sensitive data, however, the GST strategy is potentially much more vulnerable as the PoP may be physically accessible. The other three strategies open up interesting questions regarding privacy legislation since satellites fall under the jurisdiction of the state where they are registered~\cite{noauthor_1967-mk}. \section{Related Work} \label{sec:relwork} The idea of serving web content within satellite networks is not entirely new. Galluccio et al.~\cite{Galluccio2012-ct,DOro2014-dk} present \emph{SatCache}, a scheme for content caching in information-centric satellite networks. SatCache, however, considers only ground stations for caching. Wu et al.~\cite{Wu2016-rm} extend SatCache and develop a two-layer model that integrates caching within satellites. They show how this second cache layer can reduce bandwidth usage when several ground stations share a single satellite. Both papers, however, focus on static geostationary satellites rather than highly dynamic LEO constellations. ESA's SHINE project~\cite{Romano2018-lr,shine-zv} combines satellite backhaul networks and edge distribution networks for secure distribution of multimedia content. In the scope of the project, Luglio et al.~\cite{Luglio2018-vp} present different caching strategies and delivery models for such multimedia content. As the project has only recently started, results are still pending. Neither SHINE nor the related SCORSESE project~\cite{scorsese-mq} considers satellite PoPs, and both assume static geostationary satellites. Liu et al.~\cite{Liu2018-xx} propose local caches in LEO satellites.
In contrast to our work, however, they present an algorithm for replicating files to satellites, while we focus on identifying the network components that should serve as PoPs. Furthermore, their proposed algorithm starts with a random distribution of files across all satellites before swapping files between satellites as more information about client preferences becomes available. While their approach lowers access latency compared to SatCache, the authors ignore bandwidth impacts on ISLs. These, however, will be severe with satellites swapping files as frequently as is necessary in dynamic LEO constellations. Both Bhosale et al.~\cite{Bhosale2020-aa} and Bhattacherjee et al.~\cite{Bhattacherjee2020-kr} propose CDN replicas in LEO satellites as potential use-cases for orbital edge computing, i.e., placing compute resources in the satellite network. They confirm that a CDN web service is feasible on satellite servers and show how it could be implemented. We build upon this work and investigate how PoP placement for such a CDN could work. Only a handful of research publications address the challenges of large LEO satellite networks from an Internet and network engineering perspective: Bhattacherjee et al.~\cite{Bhattacherjee2018-vc,Bhattacherjee2019-jz}, Klenze et al.~\cite{Klenze2018-og}, Handley~\cite{Handley2018-ay,Handley2019-ce}, Giuliari et al.~\cite{Giuliari2020-pj}, and Papa et al.~\cite{Papa2020-ny} discuss network topology and routing in LEO satellite networks. Furthermore, Dai et al.~\cite{Dai2020-xf} emphasize the mobility of clients and user terminals and the resulting challenge of handover between access satellites. Although content replication in satellite networks is a novel topic, data replication in dynamically changing networks in general has received widespread research attention in the context of mixed cloud/fog/edge environments~\cite{paper_bermbach_fog_vision,paper_bonomi_fog}.
For example, FBase~\cite{paper_hasenburg_towards_fbase,techreport_hasenburg_2019} proposes application-controlled replica placement in fog environments through the \emph{keygroup} abstraction that can be adapted at runtime. This could be used to schedule data replication in accordance with the orbital mechanics of large LEO satellite constellations. Similarly, the Global Data Plane~\cite{Zhang2015-cb}, Nebula~\cite{Ryden2014-ow}, or a combination of IPFS and RozoFS~\cite{Confais2017-bc} have been proposed as storage middleware for fog systems, yet their design makes assumptions about the underlying networks that do not hold in satellite networks. Finally, our experiments relied on simulation as physical access to, e.g., the Starlink network, is not possible. Here, systems such as MockFog~\cite{paper_hasenburg_mockfog,hasenburg2020mockfog} could be adapted to emulate the satellite infrastructure. This would allow the research community to go beyond simulation and to evaluate LEO edge systems through system experiments. \section{Conclusion \& Future Work} \label{sec:conclusion} In this paper, we have presented how the novel topology of Internet networks backed by large LEO satellite constellations challenges the assumptions made by CDN operators today. Where PoPs are currently placed in Tier 3 access networks to serve groups of clients with homogeneous interests and in close physical proximity, the global, converged access and backhaul network developed by companies such as SpaceX and Amazon requires re-thinking PoP placement. We propose four strategies for PoP placement, both in ground stations and satellites. Through simulation of a satellite constellation, we find that satellite PoPs can significantly reduce the bandwidth required to fulfill client requests without high storage requirements within the individual satellites.
The traditional CDN is not the only use-case for resource allocation in satellite networks: with the advent of edge computing, where compute resources are available at the edge of the network, it makes sense to examine the possibilities of offloading compute tasks to Internet satellites as well. \section{Strategies for PoP Selection} \label{sec:strategies} Deploying a CDN for LEO satellite-based Internet can achieve significant savings in bandwidth usage as well as improved access latency for clients. The critical question is where to put the CDN's data, i.e., how to choose the PoPs. In this section, we introduce four novel strategies for selecting such CDN PoPs in networks that rely on large LEO satellite constellations: ground station PoPs (\emph{GST}), simple satellite PoPs (\emph{SAT}), satellite PoPs with time-to-live (\emph{SAT-TTL}), and satellite PoPs with internal replication (\emph{SAT-REP}). \subsection{Ground Station PoPs (GST)} As ground station hardware is too large for end-user devices, a ground station will usually act as a gateway for several devices. Given that all devices served by a single ground station are located in the same general area, they are likely to also access similar content~\cite{Hasenburg2020-xi,Hasenburg2020-gf,DOro2014-dk,paper_hasenburg_geobroker}. These two factors make ground stations suitable PoPs when serving web content via the satellite network. Furthermore, deploying hardware, i.e., storage devices, on the ground poses no particular challenge and is comparatively cheap. In GST, content is requested from the local ground station and, if available, served from there. If the content is not already stored locally, it is fetched from the origin location over the satellite network, and a copy is stored in the ground station's local store. This PoP placement strategy decreases bandwidth usage and is most efficient when there is a high ratio of devices to ground stations.
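The GST fetch-on-miss behavior described above can be sketched as follows. This is a minimal illustration, not the simulator's actual implementation; the class and parameter names are our own.

```python
class GroundStationPoP:
    """Fetch-on-miss PoP store at a ground station (illustrative sketch)."""

    def __init__(self):
        self.store = {}          # item id -> item size in bytes
        self.origin_fetches = 0  # requests that traversed the satellite network

    def request(self, item_id, fetch_from_origin):
        # Serve from the local store if a replica exists;
        # otherwise fetch over the satellite network and keep a copy.
        if item_id not in self.store:
            self.store[item_id] = fetch_from_origin(item_id)
            self.origin_fetches += 1
        return self.store[item_id]


# Three requests for the same item cause only a single origin fetch.
pop = GroundStationPoP()
for _ in range(3):
    pop.request("logo.png", lambda item: 40_000)
```

The higher the number of devices behind a ground station, the more requests hit an already-populated store, which is why GST is most efficient at a high device-to-ground-station ratio.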
\subsection{Simple Satellite PoPs (SAT)} \label{subsec:localsatellite} With many ground stations deployed and few devices per ground station, e.g., with every household using their own satellite dish to connect to the network, placing PoPs in the ground stations may not offer significant advantages over on-device caching. In such scenarios, request paths from multiple ground stations, and thus from multiple end-user devices, intersect on a satellite, so that we must consider satellites as possible PoPs for content replication. Of course, deploying storage hardware within these satellites is a considerable challenge compared to deploying that same hardware at ground stations. We further discuss this challenge in Section~\ref{sec:discussion}. The naive approach to satellite PoPs keeps a local copy of data items on the first satellite that the requesting ground station connects to. Subsequent requests can then be served from this PoP. This does not make use of any ISLs and has no negative impact on ISL bandwidth consumption. The main challenge of this approach is that satellites are not geostationary: A satellite storing a local copy of a data item requested on one side of the planet will move to the opposite side of the earth within an hour, and the data item may be of no use there. If the item is popular enough that it is requested several times within a few minutes, or if the requested item is relevant to large geographic areas, this could still provide a sufficient increase in efficiency. Moreover, when the satellite has orbited the planet once, its locally stored data becomes relevant again. 
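The SAT strategy differs from GST in that the store travels with the satellite rather than staying at a fixed location. A minimal sketch, with illustrative names that are not the simulator's API:

```python
# Every satellite keeps its own fetch-on-miss store; a request is served
# by whichever satellite the ground station is currently connected to.
satellite_stores = {}   # satellite id -> {item id: item size}
origin_fetches = []     # log of fetches that traversed the full path

def request(access_satellite, item_id, item_size):
    store = satellite_stores.setdefault(access_satellite, {})
    if item_id not in store:
        # Miss: fetch over the full path to the origin ground station.
        origin_fetches.append((access_satellite, item_id))
        store[item_id] = item_size
    return store[item_id]   # hit: served after a single uplink hop

# The same ground station connects to different satellites over time,
# so the replica follows the satellite, not the location.
request("sat-17", "news.html", 12_000)
request("sat-17", "news.html", 12_000)  # served from sat-17's local store
request("sat-18", "news.html", 12_000)  # after handoff: fetched again
```

The third call illustrates the core problem of the naive approach: after a handoff, the new access satellite has an empty store and the item must be fetched from the origin again.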
\subsection{Satellite PoPs with Time-to-Live (SAT-TTL)} \begin{figure*} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Graphs/handoff_1.pdf} \caption{initial constellation} \label{fig:flying1} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Graphs/handoff_2.pdf} \caption{handoff within a plane} \label{fig:flying2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Graphs/handoff_3.pdf} \caption{handoff across planes} \label{fig:flying3} \end{subfigure} \caption{Handoff between satellites as the satellites orbit around the earth and the earth rotates underneath the constellation. Once a ground station loses the connection with a satellite, it connects to that satellite's successor in the same plane or to a neighbor in the adjacent plane.} \label{fig:flying} \vspace{-0.25cm} \end{figure*} To avoid the issue of a satellite ``carrying'' a locally replicated data item to another part of the world where different content is required and thereby unnecessarily blocking its limited storage, we introduce a time-to-live (TTL). This TTL does not apply to individual data items but rather to the complete local PoP storage. The local store of a satellite PoP should be purged as soon as a ground station that it serves connects to a new satellite so that only replicas for a single location are kept on a satellite. We show this handoff in Figures~\ref{fig:flying1} and~\ref{fig:flying2}. As the satellites are evenly spaced within a plane, this handoff happens at a constant interval -- the satellites' orbital period divided by the number of satellites within one plane. Additionally, a ground station periodically connects to a new plane as the earth rotates underneath the satellite constellation (Figure~\ref{fig:flying3}).
This duration is derived from the time one full rotation of the earth takes, i.e., one day or 86,400s, and the number of planes in the constellation. However, a full orbit of a LEO satellite is faster, on the order of one to two hours. Consequently, if the number of planes in the constellation is on the same order of magnitude as the number of satellites per plane (or even smaller, as is the case with the phase~\textrm{I} Starlink constellation), it is sufficient to consider the in-plane handoff for the TTL. \begin{equation} \label{eq:ttl} T_{TTL} = \frac{T_{orbit}}{\# satellites / plane} \end{equation} We thus determine this $T_{TTL}$ as shown in Equation~\ref{eq:ttl}. In the case of Starlink with an orbital period of 5,730s and 66 satellites within every plane, $T_{TTL}$ would be 86.8s. In comparison, a cross-plane handoff with the 24 planes in the constellation happens only once every 3,600s. Compared to the SAT strategy, this PoP strategy leads to less storage required at each satellite, yet also requires more bandwidth. After each duration of $T_{TTL}$, if a data item is requested, it has to be fetched from the origin servers again to keep replicas ready for subsequent requests. Assuming a strictly location-based demand for data items, the outcome of SAT-TTL should be comparable to LRU caching, where clients frequently connect to a new cache.
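The two handoff intervals can be checked directly with the phase I Starlink numbers given in the text (5,730s orbital period, 66 satellites per plane, 24 planes):

```python
# Handoff intervals for the phase I Starlink constellation.
T_orbit = 5730           # orbital period in seconds
T_ttl = T_orbit / 66     # in-plane handoff interval (Equation for T_TTL)
T_cross = 86_400 / 24    # cross-plane handoff interval (one earth rotation)

print(round(T_ttl, 1))   # 86.8 s
print(round(T_cross))    # 3600 s
```

The in-plane handoff thus occurs over forty times as often as the cross-plane handoff, which justifies basing the TTL on the in-plane interval alone.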
As every satellite has a direct ISL to its successor and predecessor within its plane, this propagation requires only a single hop, whereas fetching data items from the origin server again could require multiple hops. In this strategy, each satellite PoP removes the locally stored data items after the TTL expires, yet first propagates its entire local store to the next satellite. We illustrate this for handoffs within a plane in Figures~\ref{fig:crossreplica1} and~\ref{fig:crossreplica2}. Hence, a ground station is always connected to a satellite PoP that holds local copies of the data items it requires, without that data having to be fetched from the origin location again. \begin{figure*} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Graphs/propagation_1.pdf} \caption{$T_{0}$} \label{fig:crossreplica1} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Graphs/propagation_2.pdf} \caption{$T_{intra}$} \label{fig:crossreplica2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Graphs/propagation_3.pdf} \caption{$T_{cross}$} \label{fig:crossreplica3} \end{subfigure} \caption{Replication of data in the SAT-REP strategy as the satellites orbit around the earth and the earth rotates underneath the constellation. When a satellite is connected to a ground station at $T_{0}$ and serves that ground station's requests from its local replica, it propagates its local store to either its successor in the same plane at $T_{intra}$ or to its neighbor in the adjacent plane at $T_{cross}$, depending on earth rotation.} \label{fig:crossreplica} \vspace{-0.25cm} \end{figure*} The additional challenge here, however, is the rotation of the earth underneath the satellite constellation. Preemptively propagating data items within a single plane works only as long as that plane is the closest to the ground station.
As the earth rotates, the ground station slowly moves towards the next plane of satellites. Consequently, an additional cross-plane propagation has to be initiated periodically, which we show in Figure~\ref{fig:crossreplica3}. While we argue that this is negligible for the SAT-TTL strategy, it does make a difference here as the local replica set is long-lived. \begin{subequations} \begin{equation} \label{eq:propagation_intra} T_{intra} = \frac{T_{orbit}}{\# satellites / plane} \end{equation} \begin{equation} \label{eq:propagation_cross} T_{cross} = \frac{86400s}{\#planes} \end{equation} \end{subequations} We thus consider two points in time for propagation of the local PoP store to a different satellite: $T_{intra}$ for intra-plane propagation and $T_{cross}$ for cross-plane propagation, as shown in Equations~\ref{eq:propagation_intra} and~\ref{eq:propagation_cross}, respectively. $T_{intra}$ is calculated in the same way as $T_{TTL}$ in the SAT-TTL strategy, by dividing the orbital period among the evenly spaced satellites of a plane. The interval for cross-plane propagation, $T_{cross}$, is the duration of one full rotation of the earth, 24 hours, divided by the number of planes in the constellation. For example, in the phase~\textrm{I} Starlink constellation with 24 evenly spaced planes, such a cross-plane replication would occur every 3,600s, or every hour, while intra-plane propagation would occur every 86.8s. Compared to the SAT strategy, this approach uses less storage as replicas of data items are only stored in satellites connected to the ground stations that request these items. Proactively propagating local storage also means that fewer items have to be fetched from the origin servers compared to both the SAT-TTL and the SAT strategy. However, the frequent propagation requires more bandwidth than in the SAT and SAT-TTL strategies, where every satellite's local replica store acts independently.
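The SAT-REP handoff itself amounts to pushing the whole local store one ISL hop and then purging the source. A minimal sketch under illustrative names (the simulator's actual data structures may differ):

```python
# SAT-REP handoff: before its TTL expires, a satellite pushes its entire
# local store one ISL hop to the satellite that will serve the ground
# station next, then purges its own copy.
def propagate(stores, src, dst):
    stores[dst] = dict(stores.get(src, {}))  # single-hop ISL transfer
    stores[src] = {}                         # purge after propagation

stores = {("plane-0", "slot-0"): {"news.html": 12_000}}
# Intra-plane handoff every T_intra: successor in the same plane.
propagate(stores, ("plane-0", "slot-0"), ("plane-0", "slot-1"))
# Cross-plane handoff every T_cross: neighbor in the adjacent plane.
propagate(stores, ("plane-0", "slot-1"), ("plane-1", "slot-1"))
```

In contrast to SAT-TTL, the replica set survives each handoff, so no renewed fetch from the origin is needed; the cost is the periodic ISL transfer of the store.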
In this way, every strategy is a tradeoff decision, which we explore through simulation in the next section. \section{Evaluation} \label{sec:evaluation} To evaluate our four proposed CDN PoP selection strategies in addition to the default strategy where only the origin server keeps content replicas, we simulate a full day of web requests in a large LEO satellite network. We briefly describe our simulation environment, introduce our simulation tool, and present the results. \subsection{Simulation Scenario} We run two experiments in which we simulate the phase~\textrm{I} Starlink constellation as described in~\cite{Bhattacherjee2019-jz}, which comprises 24 planes of 66 satellites each and provides Internet access to consumers in the form of ground stations on earth. Over 24 hours, clients request items from an origin location in one-second intervals. These requests are first sent to their ground station's nearest satellite. They are then routed to the satellite closest to the origin server's ground station. This request can be intercepted by a ground station or satellite with a local copy of that data item, depending on the PoP selection strategy which we evaluate. We show an overview of the parameters of our two experiments in Table~\ref{tab:parameters}. \input{Tables/parameters} Please note that our work aims to evaluate PoP selection strategies; hence, we need to abstract from the orthogonal problem of cache replacement algorithms in local stores of the PoPs. We thus assume in our simulation experiments that a local store can grow infinitely at every PoP. While this is unrealistic in real deployments, it helps us understand the number of different data items a node has to handle without simulating different store sizes and cache replacement algorithms. It can be argued that choosing the correct store size and replacement algorithm for a PoP depends on the number of data items a PoP has to manage, i.e., it is a result of our work.
We run our simulation at the country level, i.e., in every simulation we use a list of the cities of a single country and their respective populations. The assumption is that the requested items are similar within a single country~\cite{DOro2014-dk,Hasenburg2020-xi,Hasenburg2020-gf}. To compare the impact of country size, we use data sets of two countries that we simulate separately. We use a dataset of cities in the US with a population larger than 40,000, generated from the R \texttt{maps} data set\footnote{\url{https://github.com/adeckmyn/maps}}. Here, cities are spread over a large landmass, with most of the population, i.e., most clients, on either coast of the country. For comparison, we use a set of all cities and towns in Switzerland, which is a significantly smaller country. This dataset is based on \emph{OpenStreetMaps} data\footnote{\url{https://openstreetmaps.org}}. As our data origin, we choose a single ground station within the country we simulate. When simulating our proposed strategies, we assume a tiered content delivery; the individual PoPs, for example the satellites in the SAT strategy, pull data from this single ground station, save a copy locally, and later serve this copy. We also consider different client numbers for ground stations as this influences the GST strategy's effectiveness: First, we assume 10,000 clients per ground terminal, i.e., a neighborhood or university campus sharing a ground station. Second, we run the simulation with only 100 clients per terminal, which corresponds to a cell tower or larger building. Third, we assume ten clients sharing one ground station, such as a single household or small business. We show individual results as GST-10000, GST-100, and GST-10, respectively. \subsection{Simulation Tool} We extend the \emph{SILLEO-SCNS} routing simulator presented in~\cite{Kempton2020-qx}, which is written in Python3 and uses the \texttt{PyAstronomy} and \texttt{python-igraph} packages.
Our improved and extended version is available as open source\footnote{\url{https://github.com/pfandzelter/LLEOSCN-CDN-Sim}}. First, we added a workload generator that uses the distribution functions for requests and data in a small item cache in a CDN as identified in~\cite{Shafiq2016-kj}. The analyzed CDN serves terrestrial users, yet we have no reason to believe that satellite Internet is used differently from its terrestrial counterpart. The workload generator creates a set of data items in a specified size range and with specific popularities, and then generates requests from given client locations. Second, our simulation tool produces a list of traces for each time step, where each trace is a client request with client ground station, ISL path, server ground station, and the item size, which defines the required bandwidth for that request. Third, we added a separate CDN replication step that takes the request traces and simulates where, how, and when replicas would be stored for the different PoP strategies. From these traces, we derive our final results by calculating the storage and bandwidth requirements for the different PoP strategies. \subsection{Results} \label{sec:results} We present our results in two parts: the bandwidth usage in the network and the storage requirements at the PoPs. Although both simulation experiments simulate a full day, we present only a small excerpt of 10 minutes in each graph: After an initial ramp-up period, the patterns seen over the experiment duration are mostly constant, and this smaller view is useful for a more detailed analysis. \textbf{Bandwidth:} The amount of bandwidth used is the main target of optimization through CDN PoPs in our case. Bandwidth usage caused by a request depends on two factors: the requested item's size and the number of hops that the request needs.
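The per-time-step traces described above might be modeled as follows; the field names and hop-counting convention are our assumptions for illustration, not the tool's actual trace format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RequestTrace:
    """One client request in a simulation time step (illustrative)."""
    client_gst: str       # requesting ground station
    isl_path: List[str]   # satellites traversed between the two stations
    server_gst: str       # ground station of the origin server
    item_size: int        # bytes, i.e., bandwidth needed for this request

    @property
    def hops(self) -> int:
        # Hops through the satellite network only, counted between the
        # two ground stations: uplink, the ISL hops, and the downlink.
        return len(self.isl_path) + 1

trace = RequestTrace("gst-zurich", ["sat-3", "sat-4"], "gst-bern", 40_000)
```

The replication step can then replay such traces per strategy, truncating each path at the first node that holds a replica to compute the actually used bandwidth.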
As the size of each item does not change with different PoP selection strategies, we look at the number of hops needed by requests to estimate the strain on the network, which we show in Figure~\ref{fig:hops}. For hops, we consider only those that pass through the satellite network, i.e., between two ground stations, and disregard additional hops between user devices and ground terminals. Note that the hop count is also proportional to the end-user latency for accessing data items if we assume comparable network latency for all hops, which is realistic for the ISLs of the simulated LEO constellation. \begin{figure*} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Graphs/us-hops.pdf} \caption{US locations} \label{fig:hops_us} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Graphs/ch-hops.pdf} \caption{Swiss locations} \label{fig:hops_swiss} \end{subfigure} \caption{Average number of hops per request with different strategies.} \label{fig:hops} \vspace{-0.25cm} \end{figure*} For the simulation without any PoPs within the network, we observe that the average number of hops per request fluctuates between 12 and 16 for the US location set and between 18 and 33 for the Swiss ground station set. This fluctuation and the unexpectedly high hop count, especially for the small Swiss data set, are caused by the complex network topology of the satellite networks, as described by Handley~\cite{Handley2018-ay}. Over any location covered by the LEO constellation, planes from opposing sides of the planet cross over, with one moving in a North-East direction and the other in a South-East direction. As a ground station connects to the closest satellite, it is possible that two ground stations that are physically close to one another connect to opposing planes and thus have a considerable network distance.
For the GST strategy, we observe that 99\% of all requests are served from the ground station's local store after a short period. Here, the average hop count is close to zero after the initial ramp-up period. After a few requests to the most popular items have been made, less than 1\% of items have to be fetched from the origin location and complete the full path through the network. Given the items' popularity, this happens quickly after the start of the simulation. At this point, a majority of requests can be served from the ground station PoPs. To give an example from the simulation experiment with locations in Switzerland, 99\% of the requests are consistently served from PoPs after a simulated duration of 4s, 274s, and 2,680s with 10,000, 100, and 10 clients per ground station, respectively. While this is a positive effect for hop counts, it has adverse effects on the local store at the ground station PoPs, as we show in the next part. For the SAT, SAT-TTL, and SAT-REP strategies, we also see high PoP hit ratios between 95\% and 100\%, yet as each request has to be sent to a satellite first, the average item request takes one hop. The exception is the SAT-TTL strategy at the expiration of the satellite local stores' TTL every 87s. Here, as the local store on every satellite is emptied, the full path from client ground stations to origin server ground stations has to be traversed to fetch the requested data items. After these items have been fetched and replicas are stored in the new satellites' stores, requests can again be served from the satellite PoPs. In the SAT-REP strategy, additional bandwidth is required to propagate the local satellite stores to other satellites. As local satellite stores are small, however, we find this impact to be negligible compared to the constant bandwidth used by the links between satellites and ground stations.
\textbf{Storage:} Storing more data items leads to higher requirements for the individual replica servers, which results in a higher investment for the CDN operator. As such, storage requirements at every CDN PoP should be as low as possible. Furthermore, in a real deployment, storage would of course be limited -- storage requirements that exceed this capacity limit cannot be fulfilled, so the corresponding data would need to be requested from the origin server. Thus, the storage requirements are a proxy for how often end-users will benefit from the existence of the CDN. Again, please note that we do not limit storage in our experiments, which allows us to separate the evaluation of PoP selection strategies from cache eviction policies. The average storage used across all PoPs in the network is shown in Figure~\ref{fig:store}. For our baseline test without any PoPs within the satellite network, no storage is necessary, as no data is replicated within the network. For the GST strategy, we observe that the storage used per node is highly dependent on the granularity of ground stations: The more clients share a ground station (and, consequently, the fewer ground stations exist in total), the higher the local replicas' average size. We can explain this effect with the distribution of requests. The full set of requests is distributed equally across all ground stations. When the number of ground stations increases, fewer requests are made from each station, which leads to a lower impact of infrequently requested items. In other words, the more clients a ground station serves, the higher the number of requests from that ground station and, consequently, the larger the set of unique items present in these requests.
Another intriguing effect here is that the average storage amount per ground station shows almost no change after the initial ramp-up period, as almost all content is served from the ground station PoPs at this time and only a small percentage of new data items is requested and added to the local stores. This results in an average storage amount per node that is orders of magnitude higher than with the SAT, SAT-TTL, and SAT-REP strategies. Additionally, the GST strategy also leads to a significantly higher number of PoPs with this higher average storage requirement (millions of ground stations compared to at most 1,584 satellites), so the total amount of used storage in our CDN is higher as well. \begin{figure*} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Graphs/us-store.pdf} \caption{US locations} \label{fig:store_us} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=1\linewidth]{Graphs/ch-store.pdf} \caption{Swiss locations} \label{fig:store_swiss} \end{subfigure}% \caption{Storage use averaged over all PoPs in the constellation with different strategies.} \label{fig:store} \vspace{-0.25cm} \end{figure*} In the SAT strategy simulation, while the average size of the local store within a satellite PoP does not change noticeably, the number of satellites that store information increases continuously. After 49,675s of simulated time, no satellites with empty local stores remain in the constellation in the simulation with US locations. For the smaller set of locations in Switzerland, this duration is higher at 85,117s as ground stations are closer to each other and connect to a smaller total number of satellites at a single point in time. After this period, all 1,584 satellites have an average store size of 10MB or 1MB in the simulation with US and Swiss locations, respectively.
This fact is important when comparing the SAT-TTL and SAT-REP strategies (SAT-REP values in parentheses): Here, the average storage per node is about 100kB for both strategies in the US simulation and 10kB (100kB) in the Switzerland simulation. However, the number of nodes storing data is significantly lower than with the SAT strategy. On average, only 55 (99) satellites store some data locally in the US simulation. In the smaller Switzerland simulation, this is the case for only 3 (16) satellites. \subsection{Implications: Choosing a Strategy} When selecting PoPs for a CDN network, a tradeoff between optimizing bandwidth usage and allocating storage at PoPs has to be made. In our case, we have shown two extremes: in our baseline tests without PoPs, no storage is required, yet we see the highest strain on all network links. In the GST strategy, regardless of ground station granularity, we see that the most popular items are replicated to all ground stations after a short ramp-up period. This leads to a higher storage requirement but lower bandwidth use, as most requests can be served from the local store. For the CDN operator, managing this tradeoff comes down to a business decision, as both network and storage have costs attached to them. If network hops are cheaper than storage, sending all content over the network instead of using any PoPs can be the best option. On the other hand, if storage is cheaper than using bandwidth in the network, reducing hops through many PoPs at ground stations, as with the GST strategy, is better. While there are many unknowns regarding costs in satellite networks, we expect that a solution somewhere in the middle will be optimal. Each satellite has a limited routing and bandwidth capacity that cannot be scaled easily, and upgrading a satellite's networking capabilities requires launching a replacement satellite and phasing out the old one, which is costly. Nevertheless, deploying storage in the network is also a considerable investment.
Low-cost storage hardware in ground stations is feasible, yet the massive scale has to be considered: in our US simulation with 100 clients per ground station, a total of 1.2 million PoPs, one at each ground station, would have to be deployed. In the simulation, we find that each of those PoPs handles unique content on the order of 1GB, so a considerable chunk of storage would need to be allocated to each PoP. At a fixed count of 1,584, the number of satellite-based PoPs is thus more manageable. However, we recognize that deploying storage hardware to space is not a trivial task, given the additional space and power requirements, and the additional maintenance overhead. Furthermore, in our simulation with the SAT-REP strategy, we find that the average storage requirement per node is significantly smaller than with the GST strategy -- on the order of 100kB. In practice, we envision the most cost-efficient middle-ground strategy to be a combination of SAT-REP with a reasonable cache eviction policy as in SAT-TTL. While replication can avoid the spikes of bandwidth use after TTL expiration, purging infrequently accessed data items leads to lower storage requirements.
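The business tradeoff described above can be illustrated with a toy cost model. All unit costs, hop counts, and traffic volumes below are hypothetical illustration values, not measurements from our simulation:

```python
# Toy cost model for the bandwidth-vs-storage tradeoff: total cost is the
# cost of all request hops plus the cost of storage provisioned at every PoP.
# All numbers are hypothetical illustration values.

def total_cost(requests, hops_per_request, bytes_per_pop, n_pops,
               cost_per_hop=1e-6, cost_per_byte=1e-9):
    bandwidth_cost = requests * hops_per_request * cost_per_hop
    storage_cost = bytes_per_pop * n_pops * cost_per_byte
    return bandwidth_cost + storage_cost

# Baseline (no PoPs): every request crosses many links, nothing is stored.
baseline = total_cost(requests=1e9, hops_per_request=12, bytes_per_pop=0, n_pops=0)
# GST-like: few hops, ~1GB at each of ~1.2 million ground stations.
gst = total_cost(requests=1e9, hops_per_request=2, bytes_per_pop=1e9, n_pops=1.2e6)
# SAT-REP-like: moderate hops, ~100kB at each of 1,584 satellites.
sat_rep = total_cost(requests=1e9, hops_per_request=5, bytes_per_pop=1e5, n_pops=1584)
```

With these particular (made-up) unit prices, the middle-ground strategy wins; shifting the price ratio between hops and bytes flips the ordering, which is exactly the business decision described above.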
\section*{Introduction} Entity linking (Entity Normalization) is the task of mapping entity mentions in text documents to standard entities in a given knowledge base. For example, the word ``Paris'' is \emph{ambiguous}: It can refer either to the capital of France or to a hero of Greek mythology. Now given the text ``Paris is the son of King Priam'', the goal is to determine that, in this sentence, the word refers to the Greek hero, and to link the word to the corresponding entity in a knowledge base such as YAGO \cite{suchanek2007yago} or DBpedia \cite{auer2007dbpedia}. In the biomedical domain, entity linking maps mentions of diseases, drugs, and measures to normalized entities in standard vocabularies. It is an important ingredient for automation in medical practice, research, and public health. Different names of the same entities in Hospital Information Systems seriously hinder the integration and use of medical data. If a medication appears with different names, researchers cannot study its impact, and patients may erroneously be prescribed the same medication twice. The particular challenge of biomedical entity linking is not the ambiguity: a word usually refers to only a single entity. Rather, the challenge is that the surface forms vary markedly, due to abbreviations, morphological variations, synonymous words, and different word orderings. For example, \textit{``Diabetes Mellitus, Type 2''} is also written as \textit{``DM2''} and \textit{``lung cancer''} is also known as \textit{``lung neoplasm malignant''}. In fact, the surface forms vary so much that all the possible expressions of an entity cannot be known upfront. This means that standard disambiguation systems cannot be applied in our scenario, because they assume that all forms of an entity are known. One may think that variation in surface forms is not such a big problem, as long as all variations of an entity are sufficiently close to its canonical form. Yet, this is not the case. 
For example, the phrase \textit{``decreases in hemoglobin''} could refer to at least 4 different entities in MedDRA, which all look alike: \textit{``changes in hemoglobin''}, \textit{``increase in hematocrit''}, \textit{``haemoglobin decreased''}, and \textit{``decreases in platelets''}. In addition, biomedical entity linking cannot rely on external resources such as alias tables, entity descriptions, or entity co-occurrence, which are often used in classical entity linking settings. For this reason, entity linking approaches have been developed particularly for biomedical entity linking. Many methods use deep learning: the work of \citet{li2017cnn} casts biomedical entity linking as a ranking problem, leveraging convolutional neural networks (CNNs). More recently, the introduction of BERT has advanced the performance of many NLP tasks, including in the biomedical domain \cite{huang2019clinicalbert,lee2020biobert,ji2020bert}. BERT creates rich pre-trained representations on unlabeled data and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures. However, considering the number of parameters of pre-trained BERT models, the improvements brought by fine-tuning them come with a heavy computational cost and memory footprint. This is a problem for energy efficiency, for smaller organizations, or in poorer countries. In this paper, we introduce a very lightweight model that achieves a performance statistically indistinguishable from the state-of-the-art BERT-based models. The central idea is to use an alignment layer with an attention mechanism, which can capture the similarity and difference of corresponding parts between candidate and mention names. Our model is 23x smaller and 6.4x faster than BERT-based models on average, and less than half the size and more than twice the speed of the lightweight BERT models. Yet, as we show, our model achieves comparable performance on all standard benchmarks.
Further, we can show that adding more complexity to our model is not necessary: the entity-mention priors, the context around the mention, or the coherence of extracted entities \cite[as used, e.g., in][]{hoffart2011robust} do not improve the results any further.\footnote{All data and code are available at \url{https://github.com/tigerchen52/Biomedical-Entity-Linking}.} \section*{Related Work} In the biomedical domain, much early research focuses on capturing string similarity of mentions and entity names with rule-based systems~\cite{dogan2012inference, kang2013using, d2015sieve}. Rule-based systems are simple and transparent, but researchers need to define rules manually, and these rules are bound to a specific application. To avoid manual rules, machine-learning approaches learn suitable similarity measures between mentions and entity names automatically from training sets~\cite{leaman2013dnorm, dougan2014ncbi, ghiasvand2014r, leaman2016taggerone}. However, one drawback of these methods is that they cannot recognize semantically related words. Recently, deep learning methods have been successfully applied to different NLP tasks, based on pre-trained word embeddings, such as word2vec \cite{mikolov2013distributed} and Glove \cite{pennington2014glove}. \citet{li2017cnn} and \citet{wright2019normco} introduce a CNN and an RNN, respectively, with pre-trained word embeddings, which cast biomedical entity linking as a ranking problem. However, traditional methods for learning word embeddings allow for only a single context-independent representation of each word. Bidirectional Encoder Representations from Transformers (BERT) addresses this problem by pre-training deep bidirectional representations from unlabeled text, jointly conditioning on both the left and the right context in all layers.
\citet{ji2020bert} proposed a biomedical entity normalization architecture by fine-tuning the pre-trained BERT / BioBERT / ClinicalBERT models \cite{devlin2018bert,huang2019clinicalbert,lee2020biobert}. Extensive experiments show that their model outperforms previous methods and advances the state of the art for biomedical entity linking. A shortcoming of BERT is that it needs high-performance machines. \section*{Our Approach} Formally, our inputs are (1) a \emph{knowledge base} (KB), i.e., a list of entities, each with one or more names, and (2) a \emph{corpus}, i.e., a set of text documents in which certain text spans have been tagged as entity mentions. The goal is to link each entity mention to the correct entity in the KB. To solve this problem, we are given a training set, i.e., a part of the corpus where the entity mentions have already been linked to the correct entities in the KB. Our method proceeds in 3 steps: \begin{description} \item[\textbf{Preprocessing.}] We preprocess all mentions in the corpus and entity names in the KB to bring them to a uniform format. \item[\textbf{Candidate Generation.}] For each mention, we generate a set of candidate entities from the KB. \item[\textbf{Ranking Model.}] For each mention with its candidate entities, we use a ranking model to score each pair of mention and candidate, outputting the top-ranked result. \end{description} \noindent Let us now describe these steps in detail. \subsection*{Preprocessing} We preprocess all mentions in the corpus and all entity names in the KB by the following steps: \textbf{Abbreviation Expansion.} Like previous work~\cite{ji2020bert}, we use the Ab3p Toolkit~\cite{sohn2008abbreviation} to expand medical abbreviations. The Ab3p tool outputs a probability for each possible expansion, and we use the most probable expansion. For example, Ab3p knows that ``DM'' is an abbreviation of ``Diabetes Mellitus'', and so we replace the abbreviation with its expanded term.
We also expand mentions using the first matching entry from an abbreviation dictionary constructed in previous work \cite{d2015sieve}, and manually supplement 20 biomedical abbreviations (such as ``HbA1c'' for glycated hemoglobin). Our dictionary is available in the supplementary material and online. \textbf{Numeral Replacement.} Entity names may contain numerals in different forms (Arabic, Roman, spelled out in English, etc.). We replace all forms with spelled-out English numerals. For example, ``type \uppercase\expandafter{\romannumeral2} diabetes mellitus'' becomes ``type two diabetes mellitus''. For this purpose, we manually compiled a dictionary of numerals from the corresponding Wikipedia pages. Finally, we remove all punctuation, and convert all words to lowercase. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{picture/model.pdf} \caption{The architecture of our ranking model, with the input mention ``decreases in hemoglobin'' and the input entity candidate ``haemoglobin decreased''.} \label{fig:architecture} \end{figure*} \textbf{KB Augmentation.} We augment the KB by adding all names from the training set to the corresponding entities. For example, if the training set links the mention ``GS'' in the corpus to the entity ``Adenomatous polyposis coli'' in the KB, we add ``GS'' to the names of that entity in the KB. \subsection*{Candidate Generation}\label{sec:cand} Our ranking approach is based on a deep learning architecture that can compute a similarity score for each pair of a mention in the corpus and an entity name in the KB. However, it is too slow to apply this model to all combinations of all mentions and all entities. Therefore, we generate, for each mention $M$ in the corpus, a set $C_M$ of candidate entities from the KB. Then we apply the deep learning method only to the set $C_M$.
To generate the candidate set $C_M$, we calculate a score for $M$ and each entity in the KB, and return the top-$k$ entities with the highest score as the candidate set $C_M$ (in our experiments, $k=20$). As each entity has several names, we calculate the score of $M$ and all names of the entity $E$, and use the maximum score as the score of $M$ and the entity $E$. To compute the score between a mention $M$ and an entity name $S$, we split each of them into tokens, so that we have $M=\{m_{1}, m_{2},..., m_{|M|}\}$ and $S=\{s_{1}, s_{2},..., s_{|S|}\}$. We represent each token by a vector taken from a pre-trained embedding matrix $\mathbf V \in \mathbb{R}^{d\times | V |}$, where $d$ is the dimension of word vectors and $V$ is a fixed-sized vocabulary (details in the section of \nameref{sec:experimental setting}). To take into account the possibility of different token orderings in $M$ and $S$, we design the \emph{aligned cosine similarity} (\textit{ACos}), which maps a given token $m_i \in M$ to the most similar token $s_j \in S$ and returns the cosine similarity to that token: \begin{equation} \textit{ACos}(m_{i}, S) = \max \{ \cos(m_{i}, s_{j}) \mid s_{j} \in S \} \end{equation} \noindent The similarity score is then computed as the sum of the aligned cosine similarities. To avoid favoring long names, and to make the metric symmetric, we normalize by the total length and add the similarity scores in the other direction as well, yielding: \begin{multline} \textit{sim}(M,S) = \frac{1}{\left| M \right| + \left| S \right|} (\sum_{m_{i} \in M} \textit{ACos}(m_{i}, S) \\ + \sum_{s_{j} \in S} \textit{ACos}(s_{j},M)) \end{multline} \noindent We can now construct the candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$ where $E_i$ is the id of the entity, and $S_i$ is the chosen name of the entity. This set contains the top-$k$ ranked entity candidates for each mention $M$.
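As a sketch of the candidate scoring above (with a mention and an entity name given as lists of NumPy token vectors standing in for the paper's pre-trained embeddings):

```python
import numpy as np

def acos(m_i, S):
    """Aligned cosine similarity ACos(m_i, S): cosine of m_i with its most
    similar token vector in S."""
    cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(cos(m_i, s_j) for s_j in S)

def sim(M, S):
    """Symmetric, length-normalized similarity between the token-vector lists
    M and S, summed in both directions."""
    total = sum(acos(m_i, S) for m_i in M) + sum(acos(s_j, M) for s_j in S)
    return total / (len(M) + len(S))
```

The top-$k$ candidates are then simply the $k$ entities whose best-scoring name maximizes this similarity.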
Specifically, if this set contains candidates whose score is equal to 1, we filter out all candidates whose score is less than 1. \subsection*{Ranking Model} Given a mention $M$ and its candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$, the ranking model computes a score for each pair of the mention and an entity name candidate $S_i$. Figure~\ref{fig:architecture} shows the corresponding neural network architecture. Let us first describe the base model. This model relies exclusively on the text similarity of mentions and entity names. It ignores the context in which a mention appears, or the prior probability of the target entities. To compute the text similarity, we crafted the neural network following the candidate generation: it determines, for each token in the mention, the most similar token in the entity name, and vice versa. Different from the candidate generation, we also take into account character-level information here and use an alignment layer to capture the similarity and difference of correspondences between mention and entity names. \paragraph{Representation Layer.} As mentioned in \nameref{sec:cand}, we represent a mention $M$ and an entity name $S$ by the set of the embeddings of its tokens in the vocabulary $V$. However, not all tokens exist in the vocabulary $V$. To handle out-of-vocabulary words, we adopt a recurrent neural network (RNN) to capture character-level features for each word. This has the additional advantage of learning the morphological variations of words. We use a Bi-directional LSTM (BiLSTM), running a forward and a backward LSTM on a character sequence \cite{graves2013speech}. We concatenate the last output states of these two LSTMs as the character-level representation of a word.
To use both word-level and character-level information, we represent each token of a mention or entity name as the concatenation of its embedding in $V$ and its character-level representation. \paragraph{Alignment Layer.} To counter the problem of different word orderings in the mention and the entity name, we want the network to find, for each token in the mention, the most similar token in the entity name. For this purpose, we adapt the attention mechanisms that have been developed for machine comprehension and answer selection~\cite{chen2016enhanced,wang2016compare}. Assume that we have a mention $M = \{\bar{m}_{1},$ $\bar{m}_{2},$ $..., \bar{m}_{|M|}\}$ and an entity name $S = \{\bar{s}_{1},$ $\bar{s}_{2},$ $..., \bar{s}_{|S|}\}$, which were generated by the Representation Layer. We calculate a $|M|\times|S|$-dimensional weight matrix $W$, whose element $w_{ij}$ indicates the similarity between the token $i$ of the mention and the token $j$ of the entity name, $w_{ij} = \bar{m}_{i}^{T} \bar{s}_{j}$. Thus, the $i^{th}$ row in $W$ represents the similarity between the $i^{th}$ token in $M$ and each token in $S$. We apply a softmax function on each row of $W$ to normalize the values, yielding a matrix $W'$. We can then compute a vector $\tilde{m}_i$ for the $i^{th}$ token of the mention, which is the sum of the vectors of the tokens of $S$, weighted by their similarity to $\bar{m}_i$: \begin{equation} \tilde{m}_{i} = \sum_{j=1}^{|S|} w_{ij}' \bar{s}_{j} \end{equation} \noindent This vector ``reconstructs'' $\bar{m}_i$ by adding up suitable vectors from $S$, using mainly those vectors of $S$ that are similar to $\bar{m}_i$. If this reconstruction succeeds (i.e., if $\bar{m}_i$ is similar to $\tilde{m}_i$), then $S$ contained tokens which, together, contain the same information as $\bar{m}_i$.
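In NumPy terms (with the token vectors $\bar{m}_i$ and $\bar{s}_j$ stacked as matrix rows), the row-wise softmax and the reconstruction above can be sketched as:

```python
import numpy as np

def reconstruct(M_bar, S_bar):
    """Alignment step: W holds the dot-product similarities w_ij, a row-wise
    softmax turns each row into weights, and each mention token is then
    'reconstructed' as a weighted sum of entity-name token vectors."""
    W = M_bar @ S_bar.T                       # |M| x |S| matrix, w_ij = m_i . s_j
    W = W - W.max(axis=1, keepdims=True)      # stabilize the exponentials
    W_prime = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)  # row-wise softmax
    return W_prime @ S_bar                    # row i is tilde_m_i
```

Applying the same function with the arguments swapped (i.e., a softmax over the columns of $W$) yields the reconstructions $\tilde{s}_{j}$ for the entity-name tokens.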
To measure this similarity, we could use a simple dot-product. However, this reduces the similarity to a single scalar value, which erases precious element-wise similarities. Therefore, we use the following two comparison functions \cite{tai2015improved,wang2016compare}: \begin{equation} \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}) = (\bar{m}_{i}-\tilde{m}_{i}) \odot (\bar{m}_{i}-\tilde{m}_{i}) \end{equation} \begin{equation} \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) = \bar{m}_{i} \odot \tilde{m}_{i} \end{equation} \noindent where the operator $\odot$ denotes element-wise multiplication. Intuitively, the functions \textit{sub} and \textit{mul} represent subtraction and multiplication, respectively. The function \emph{sub} has similarities to the Euclidean distance, while \emph{mul} has similarities to the cosine similarity -- while preserving the element-wise information.
Finally, we obtain a new representation of each token $i$ of the mention by concatenating $\bar{m}_{i}, \tilde{m}_{i}$ and their difference and similarity: \begin{equation} \hat{m}_{i} = [\bar{m}_{i}, \tilde{m}_{i}, \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}), \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) ] \end{equation} \noindent By applying the same procedure on the columns of $W$, we can compute analogously a vector $\tilde{s}_{j}$ for each token vector $s_j$ of $S$, and obtain the new representation for the $j^{th}$ token of the entity name as \begin{equation} \hat{s}_{j} = [\bar{s}_{j}, \tilde{s}_{j}, \textit{sub}(\bar{s}_{j}, \tilde{s}_{j}), \textit{mul}(\bar{s}_{j}, \tilde{s}_{j}) ] \end{equation} \noindent This representation augments the original representation $\bar{s}_{j}$ of the token by the ``reconstructed'' token $\tilde{s}_{j}$, and by information about how similar $\tilde{s}_{j}$ is to $\bar{s}_{j}$. \paragraph{CNN Layer.} We now have rich representations for the mention and the entity name, and we apply a one-layer CNN on the mention $[\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]$ and the entity name $[\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]$. We adopt the CNN architecture proposed by \citet{kim2014convolutional} to extract n-gram features of each text: \begin{equation} f_{M} = \textit{CNN}([\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]) \end{equation} \begin{equation} f_{E} = \textit{CNN}([\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]) \end{equation} \noindent We concatenate these to a single vector $f_{\textit{out}} = [ f_{M}, f_{E} ]$. \paragraph{Output Layer.} We are now ready to compute the final output of our network using a two-layer fully connected neural network: \begin{equation} \Phi ( M, E ) = \textit{sigmoid} (W_{2}~~\textit{ReLU}(W_{1}~f_{\textit{out}} + b_{1} ) + b_{2} ) \end{equation} \noindent where $W_{2}$ and $W_{1}$ are learned weight matrices, and $b_1$ and $b_2$ are bias values.
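A minimal NumPy sketch of the two comparison functions and the concatenated token representation:

```python
import numpy as np

def sub(a, b):
    """Element-wise squared difference, akin to a Euclidean distance."""
    return (a - b) * (a - b)

def mul(a, b):
    """Element-wise product, akin to a cosine similarity."""
    return a * b

def augment(bar, tilde):
    """Enriched token representation [bar; tilde; sub; mul] fed to the CNN layer."""
    return np.concatenate([bar, tilde, sub(bar, tilde), mul(bar, tilde)])
```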
This constitutes our base model, which relies solely on string similarity. We will now see how we can add prior, context, and coherence features. \subsection*{Extra Features}\label{sec:extra} \paragraph{Mention-Entity Prior.} Consider an ambiguous case such as \textit{``You should shower, let water flow over wounds, pat dry with a towel.''} appearing in hospital discharge instructions. In this context, the disease name \textit{``wounds''} is much more likely to refer to \textit{``surgical wound''} than \textit{``gunshot wound''}. This prior probability is called the \emph{mention-entity prior}. It can be estimated, e.g., by counting in Wikipedia how often a mention is linked to the page of an entity~\cite{hoffart2011robust}. Unlike DBpedia and YAGO, biomedical knowledge bases generally do not provide links to Wikipedia. Hence, we estimate the mention-entity prior from the training set, as: \begin{equation} \textit{prior}(M,E) = \log \textit{count}(M, E) \end{equation} \noindent where $\textit{count}(M, E)$ is the frequency with which the mention $M$ is linked to the target entity $E$ in the training dataset. To reduce the effect of overly large values, we apply the logarithm. This prior can be added easily to our model by concatenating it in $f_{\textit{out}}$: \begin{equation} f_{\textit{out}} = [ f_{M}, f_{E}, \textit{prior}(M,E) ] \end{equation} \paragraph{Context.} The context around a mention can provide clues on which candidate entity to choose. We compute a context score that measures how relevant the keywords of the context are to the candidate entity name. We first represent the sentence containing the mention by pre-trained word embeddings. We then run a Bi-directional LSTM on the sentence to get a new representation for each word. In the same way, we apply a Bi-directional LSTM on the entity name tokens to get the entity name representation $cxt_{E}$.
To select keywords relevant to the entity while ignoring noise words, we adopt an attention strategy to assign a weight to each token in the sentence. Then we use a weighted sum to represent the sentence as $cxt_{M}$. The context score is then computed as the cosine similarity between both representations: \begin{equation} \textit{context}(M, E) = \cos (cxt_{M}, cxt_{E}) \end{equation} As before, we concatenate this score to the vector $f_{\textit{out}}$. \paragraph{Coherence.} Certain entities are more likely to occur together in the same document than others, and we can leverage this tendency to aid entity linking. To capture the co-occurrence of entities, we pre-train entity embeddings in such a way that entities that often co-occur together have a similar distributed representation. We train these embeddings with Word2Vec~\cite{mikolov2013distributed} on a collection of PubMed abstracts\footnote{ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/}. Since the entities in this corpus are not linked to our KB, we consider every occurrence of an exact entity name as a mention of that entity. Given a mention $M$ and a candidate entity $E$, we compute a coherence score to measure how often the candidate entity co-occurs with the other entities in the document. We first select the mentions around $M$. For each mention, we use the first entity candidate (as given by the candidate selection). This gives us a set of entities $P_{M} = \{ {p}_{1}, {p}_{2},..., {p}_{k}\}$, where each element is a pre-trained entity vector. Finally, the coherence score is computed as: \begin{equation} \textit{coherence}(M, E) = \frac{1}{k} \sum_{i=1}^{k} \cos(p_{i},p_{E}) \end{equation} \noindent where $p_{E}$ is the pre-trained vector of the entity candidate $E$. This score measures how close the candidate entity $E$ is, on average, to the other presumed entities in the document. As before, we concatenate this score to the vector $f_{\textit{out}}$.
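The mention-entity prior and the coherence score can be sketched as follows; the context score is omitted, as it requires the BiLSTM encoders described above, and the plain NumPy arrays here merely stand in for the pre-trained entity embeddings:

```python
import numpy as np

def log_prior(count):
    """Mention-entity prior: log of how often M was linked to E in training."""
    return float(np.log(count))

def coherence(P_M, p_E):
    """Average cosine similarity between the candidate's entity vector p_E and
    the vectors P_M of the other presumed entities in the document."""
    cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sum(cos(p_i, p_E) for p_i in P_M) / len(P_M)
```

Both scalar scores are simply concatenated to $f_{\textit{out}}$ before the output layer.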
More precisely, we pre-trained separate entity embeddings for the three datasets and used the mean value of all entity embeddings to represent missing entities. \subsection*{NIL Problem} The NIL problem occurs when a mention does not correspond to any entity in the KB. We adopt a traditional threshold method, which considers a mention unlinkable if its score is less than a threshold $\tau$. This means that we map a mention to the highest-scoring entity if that score exceeds $\tau$, and to NIL otherwise. The threshold $\tau$ is learned from a training set. For datasets that do not contain unlinkable mentions, we set the threshold $\tau$ to zero. \subsection*{Training} For training, we adopt a triplet ranking loss function to make the score of the positive candidates higher than the score of the negative candidates. The objective function is: \begin{multline} \theta ^{*} = \mathop{\arg\min}_{\theta} \sum_{D \in \mathcal{D}}\sum_{M \in D}\sum_{E^{-} \in C_M} \\ \max (0, \gamma + \Phi ( M, E^{-} ) - \Phi ( M, E^{+} )) \end{multline} \noindent where $\theta$ stands for the parameters of our model. $\mathcal{D}$ is a training set containing a certain number of documents and $\gamma$ is the margin parameter. $E^{+}$ and $E^{-}$ represent a positive entity candidate and a negative entity candidate, respectively. Our goal is to find an optimal $\theta$, which makes the score difference between positive and negative entity candidates as large as possible. For this, we need triplets of a mention $M$, a positive example $E^+$ and a negative example $E^-$. The positive example can be obtained from the training set. The negative examples are usually chosen by random sampling from the KB. In our case, we sample the negative example from the candidates that were produced by the candidate generation phase (excluding the correct entity).
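A sketch of the hinge-style objective for one mention: a negative candidate is penalized whenever its score comes within the margin $\gamma$ of the positive candidate's score (plain floats stand in for the network scores $\Phi(M,E)$):

```python
def triplet_loss(score_pos, score_negs, gamma=0.1):
    """Sum of hinge terms: zero once the positive candidate outscores every
    negative candidate by at least the margin gamma."""
    return sum(max(0.0, gamma + s_neg - score_pos) for s_neg in score_negs)
```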
This choice makes the negative examples very similar to the positive example, and forces the process to learn what distinguishes the positive candidate from the others. \section*{Experiments} \begin{table}[b!] \small \begin{tabu} {p{1.3cm} X[c] X[c] X[c] X[c] X[c] X[c]} \toprule &\multicolumn{2}{c}{ShARe/CLEF} &\multicolumn{2}{c}{NCBI} &\multicolumn{2}{c}{ADR} \\ &train &test &train &test &train &test \\ \midrule documents &199 &99 &692 &100 &101 &99 \\ mentions &5816 &5351 &5921 &964 &7038 &6343 \\ NIL &1641 &1750 &0 &0 &47 &18 \\ \midrule concepts &\multicolumn{2}{c}{88140} &\multicolumn{2}{c}{9656} &\multicolumn{2}{c}{23668} \\ synonyms &\multicolumn{2}{c}{42929} &\multicolumn{2}{c}{59280} &\multicolumn{2}{c}{0}\\ \bottomrule \end{tabu} \caption{Dataset Statistics}\label{tab:datasets} \end{table} \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule DNorm \cite{leaman2013dnorm}&-&82.20$\pm$3.09&-\cr UWM \cite{ghiasvand2014r} &89.50$\pm$1.02&-&-\cr Sieve-based Model \cite{d2015sieve}&\cellcolor{lightgray!50}90.75$\pm$0.96&84.65$\pm$3.00&-\cr TaggerOne \cite{leaman2016taggerone}&-&\cellcolor{lightgray!50}88.80$\pm$2.59&-\cr Learning to Rank \cite{xu2017uth_ccb}&-&-&92.05$\pm$0.84\cr CNN-based Ranking \cite{li2017cnn}&\cellcolor{lightgray!50}90.30$\pm$1.00&86.10$\pm$2.79&-\cr BERT-based Ranking \cite{ji2020bert}&\cellcolor{lightgray!50}{\bf91.06$\pm$\bf0.96}&\cellcolor{lightgray!50}89.06$\pm$2.63&\cellcolor{lightgray!50}{\bf93.22$\pm$\bf0.79}\cr Our Base Model &\cellcolor{lightgray!50}90.10$\pm$1.00 &\cellcolor{lightgray!50}89.07$\pm$2.63&\cellcolor{lightgray!50}92.63$\pm$0.81\cr Our Base Model + Extra Features &\cellcolor{lightgray!50}90.43$\pm$0.99 &\cellcolor{lightgray!50}{\bf89.59$\pm$2.59}&\cellcolor{lightgray!50}92.74$\pm$0.80\cr \bottomrule \end{tabular} \end{threeparttable} \caption{Performance of different models. Results in gray are not statistically different from the top result.}\label{tab:performance_comparison} \end{table*} \subsection*{Datasets and Metrics} We evaluate our model on three datasets (shown in Table~\ref{tab:datasets}). The \textbf{ShARe/CLEF} corpus~\cite{pradhan2013task} comprises 199 medical reports for training and 99 for testing. As Table~\ref{tab:datasets} shows, $28.2\%$ of the mentions in the training set and $32.7\%$ of the mentions in the test set are unlinkable. The reference knowledge base used here is the SNOMED-CT subset of the UMLS 2012AA~\cite{bodenreider2004unified}.
The \textbf{NCBI} disease corpus~\cite{dougan2014ncbi} is a collection of 793 PubMed abstracts partitioned into 693 abstracts for training and development and 100 abstracts for testing. We use the July 6, 2012 version of MEDIC~\cite{davis2012medic}, which contains 9,664 disease concepts. The TAC 2017 Adverse Reaction Extraction (\textbf{ADR}) dataset consists of a training set of 101 labeled documents and a test set of 99 labeled documents. The mentions have been mapped manually to the MedDRA 18.1 KB, which contains 23,668 unique concepts. Following previous work, we adopt accuracy to compare the performance of different models. \subsection*{Experimental Settings} \label{sec:experimental setting} We implemented our model using Keras, and trained it on a single Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz, using less than 10GB of memory. Each token is represented by a 200-dimensional word embedding computed on the PubMed and MIMIC-III corpora~\cite{zhang2019biowordvec}. As for the character embeddings, we use a random matrix initialized as proposed in \citet{he2015delving}, with a dimension of $128$. The dimension of the character LSTM is $64$, which yields $128$-dimensional character feature vectors. In the CNN layer, the number of feature maps is $32$, and the filter windows are $[1, 2, 3]$. The dimension of the context LSTM and entity embedding is set to $32$ and $50$ respectively. We adopt a grid search on a hold-out set from the training samples to select the value of $\tau$, and find that $\tau = 0.75$ is optimal. During the training phase, we select at most $20$ entity candidates per mention, and the margin parameter $\gamma$ of the triplet ranking loss is $0.1$. For the optimization, we use Adam with a learning rate of $0.0005$ and a batch size of $64$. To avoid overfitting, we adopt a dropout strategy with a dropout rate of $0.1$.
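The grid search for the NIL threshold $\tau$ can be sketched as follows; the hold-out data layout (top candidate score, predicted entity, and gold entity, with \texttt{None} for unlinkable mentions) is a hypothetical simplification:

```python
def choose_threshold(holdout, taus):
    """Pick the tau maximizing hold-out accuracy: a mention is mapped to its
    top candidate if the score reaches tau, and to NIL (None) otherwise.
    holdout: list of (top_score, predicted_entity, gold_entity_or_None)."""
    def accuracy(tau):
        correct = sum((pred if score >= tau else None) == gold
                      for score, pred, gold in holdout)
        return correct / len(holdout)
    return max(taus, key=accuracy)
```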
\subsection*{Competitors} We compare our model to the following competitors: \textbf{DNorm} \cite{leaman2013dnorm}; \textbf{UWM} \cite{ghiasvand2014r}; \textbf{Sieve-based Model} \cite{d2015sieve}; \textbf{TaggerOne} \cite{leaman2016taggerone}; a model based on \textbf{Learning to Rank} \cite{xu2017uth_ccb}; \textbf{CNN-based Ranking} \cite{li2017cnn}; and \textbf{BERT-based Ranking} \cite{ji2020bert}. \section*{Results} \subsection*{Overall Performance} During the candidate generation, we generate 20 candidates for each mention. The recall of the correct entity among these candidates on the ShARe/CLEF, NCBI, and ADR test datasets is 97.79\%, 94.27\%, and 96.66\%, respectively. We thus conclude that our candidate generation does not eliminate too many correct candidates. Table~\ref{tab:performance_comparison} shows the performance of our model and the baselines. Besides accuracy, we also compute a binomial confidence interval for each model (at a significance level of $0.02$), based on the total number of mentions and the number of correctly mapped mentions. The best results are shown in bold text, and all performances that are within the error margin of the best-performing model are shown in gray. We first observe that, for each dataset, several methods perform within the margin of the best-performing model. However, only two models are consistently within the margin across all datasets: the BERT-based ranking and our method. Adding extra features (prior, context, coherence) to our base model yields a small increase on all three datasets. However, overall, even our base model achieves a performance that is statistically indistinguishable from the state of the art.
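The error margins can be reproduced with a standard normal-approximation (Wald) binomial interval. The exact interval construction is not spelled out in the text, so the sketch below is an illustrative assumption.

```python
from statistics import NormalDist

# Hedged sketch: a normal-approximation (Wald) binomial interval for
# linking accuracy. The exact construction used for the table is an
# assumption.
def accuracy_interval(correct, total, alpha=0.02):
    """Return (accuracy, half_width) of a (1 - alpha) confidence interval."""
    p = correct / total
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return p, z * (p * (1 - p) / total) ** 0.5
```

Under this reading, two models are "statistically indistinguishable" when one model's accuracy falls inside the other's interval.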
\begin{table}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule - Character Feature &-1.21&-0.31&-0.30\cr - Alignment Layer & \underline{-3.80}&\underline{-4.06}&\underline{-3.17}\cr - CNN Layer &-1.87&-0.93&-0.35\cr \rowcolor{lightgray!50} Our Base Method &90.10&89.07&92.63\cr + Mention-Entity Prior &+0.33&+0.04&+0.03\cr + Context &-0.09&+0.21&-0.24\cr + Coherence &-0.02&+0.27&+0.11\cr \bottomrule \end{tabular} \caption{Ablation study}\label{tab:ablation} \end{threeparttable} \end{table} \begin{table*}[!t] \centering \begin{threeparttable} \begin{tabular}{ccccccc} \toprule \multirow{1}{*}{Model}&Original ADR&10\%&30\%&50\%&70\%&90\%\cr \midrule + Ordering Change &92.63&92.20&92.18&91.95&92.31&92.05\cr + Typo &92.63&92.03&91.61&91.38&91.41&91.13\cr \bottomrule \end{tabular} \caption{Performance in the face of typos: Simulated ADR Datasets}\label{tab:simulate} \end{threeparttable} \end{table*} \ignore{ \begin{table*}[!t] \begin{threeparttable} \begin{tabular}{cccc} \toprule &Sieve-based Model&BERT-based Ranking&Our Base Model\cr \midrule Parameter Numbers &-&110M/340M&6.5M/4.9M/2.3M\cr Abbreviation Expansion Tool &$\checkmark$&$\checkmark$&$\checkmark$\cr Abbreviation Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Numeral Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Synonym Dictionary&$\checkmark$&$\times$&$\times$\cr Spelling Check Dictionary&$\times$&$\checkmark$&$\times$\cr Stemming Tool&$\checkmark$&$\checkmark$&$\times$\cr Information Retrieval Tool &$\times$&$\checkmark$&$\times$\cr \bottomrule \end{tabular} \caption{Model parameter numbers and external resources used.} \end{threeparttable} \end{table*} } \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccccccc} \toprule Model&Parameters&\multicolumn{2}{c}{ShARe/CLEF}&\multicolumn{2}{c}{NCBI}&\multicolumn{2}{c}{ADR}&Avg&Speedup\cr &&CPU &GPU &CPU &GPU &CPU &GPU && \cr \midrule BERT 
(large)&340M&2230s&1551s&353s&285s&2736s&1968s&1521s&12.3x\cr BERT (base)&110M&1847s&446s&443s&83s&1666s&605s&848s&6.4x\cr TinyBERT$_{6}$&67M&1618s&255s&344s&42s&2192s&322s&796s&6.0x\cr MobileBERT (base)&25.3M&1202s&330s&322s&58s&1562s&419s&649s&4.7x\cr ALBERT (base)&12M&836s&\textbf{129s}&101s&24s&1192s&170s&409s&2.6x\cr Our Base Model&4.6M&\textbf{181s}&131s&\textbf{38s}&\textbf{22s}&\textbf{196s}&\textbf{116s}&\textbf{114s}&-\cr \bottomrule \end{tabular} \caption{Number of model parameters and observed inference time} \label{tab:running time} \end{threeparttable} \end{table*} \subsection*{Ablation Study} To understand the effect of each component of our model, we measured the performance when individual components are removed or added. The results of this ablation study on all three datasets are shown in Table~\ref{tab:ablation}. The gray row is the accuracy of our base model. The removal of the components of the base model is shown above the gray line; the addition of extra features (see the section \nameref{sec:extra}) below. If we remove the Alignment Layer (underlined), the accuracy drops the most, by up to 4.06 percentage points. This indicates that the alignment layer can effectively capture the similarity of the corresponding parts of mentions and entity names. The CNN Layer extracts the key components of the names, and removing this part causes a drop of up to 1.87 percentage points. The character-level feature captures morphological variations, and removing it results in a decrease of up to 1.21 percentage points. Therefore, we conclude that all components of our base model are necessary. Let us now turn to the effect of the extra features of our model. The Mention-Entity Prior can bring a small improvement, because it helps with ambiguous mentions, which occupy only a small portion of the dataset. The context feature, likewise, can achieve a small increase on the NCBI dataset.
On the other datasets, however, the feature has a negative impact. We believe that this is because the documents in the NCBI dataset are PubMed abstracts, which have more relevant and informative contexts. The documents in the ShARe/CLEF and ADR datasets, in contrast, are more like semi-structured text with a lot of tabular data. Thus, the context around a mention in these documents is less helpful. The coherence feature brings only slight improvements. This could be because our method of estimating co-occurrence is rather coarse-grained, and the naive string matching we use may generate errors and omissions. In conclusion, the extra features do bring a small improvement, and they are thus an interesting direction of future work. However, our simple base model is fully sufficient to achieve state-of-the-art performance already. \subsection*{Performance in the Face of Typos} To probe the robustness of our base model, we further evaluate it on simulated ADR datasets. We generate two simulated datasets by randomly adding typos and changing word orderings of mention names. As Table~\ref{tab:simulate} shows, the accuracy does not drop much as we gradually add typos: adding typos to 90\% of the mentions results in a drop of only 1.5 percentage points. This shows that our model deals well with morphological variations of biomedical names. Moreover, ordering changes have almost no effect on our base model, which shows that it captures the correspondences between mention and entity names. \subsection*{Parameters and Inference Time} To measure the simplicity of our base model, we analyze two dimensions: the number of model parameters and the practical inference time. In Table~\ref{tab:running time}, we compare our model with BERT models, including three popular lightweight models: ALBERT~\cite{lan2019albert}, TinyBERT~\cite{jiao2019tinybert}, and MobileBERT~\cite{sun2020mobilebert}.
Although ALBERT's size is close to that of our model, its performance is still 2.2 percentage points lower than the BERT$_{\textit{BASE}}$ model on average. The second column in the table shows the number of parameters of the different models. Our model uses an average of only 4.6M parameters across the three datasets, which is 1.6x to 72.9x smaller than the other models. Columns three through ten show the practical inference time of the models on the CPU and GPU. The CPU is the one described in the section \nameref{sec:experimental setting}, and the GPU we used is a single NVIDIA Tesla V100 (32\,GB). Our model is consistently the fastest across all three datasets, both for CPU and GPU (except for the GPU time on ShARe/CLEF). On average, our model is 6.4x faster than the other BERT models, and the speedup is even more pronounced on the CPU. \subsection*{Model Performance as Data Grows} In this section, we study how our model performs with an increasing amount of training samples, by subsampling the datasets. As shown in Figure~\ref{fig:smalldata}, the performance of our base model keeps growing when we gradually increase the number of training samples. When using 50\% of the training samples, the accuracies on the ShARe/CLEF, NCBI, and ADR datasets are already $0.8342, 0.8747,$ and $0.9106$, respectively. More data leads to better performance, and thus our model is not limited by its expressivity, even though it is very simple. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{picture/model_efficiency.pdf} \caption{Model efficiency on a small amount of data.} \label{fig:smalldata} \end{figure} \section*{Conclusion} In this paper, we propose a simple and lightweight neural model for biomedical entity linking. Our experimental results on three standard evaluation benchmarks show that the model is very effective, and achieves a performance that is statistically indistinguishable from the state of the art.
BERT-based models, e.g., have 23 times more parameters and require 6.4 times more computing time for inference. Future work to improve the architecture can explore \emph{1)} automatically assigning a weight to each word in the mentions and entity names to capture the importance of each word, depending, e.g., on its grammatical role; \emph{2)} Graph Convolutional Networks (GCNs) \cite{kipf2016semi,wu2020dynamic} to capture graph structure across mentions and improve our notion of entity coherence. \goodbreak \section*{Acknowledgments} This project was partially funded by the DirtyData project (ANR-17-CE23-0018-01). \section*{Introduction} Entity linking (also known as entity normalization) is the task of mapping entity mentions in text documents to standard entities in a given knowledge base. For example, the word ``Paris'' is \emph{ambiguous}: It can refer either to the capital of France or to a hero of Greek mythology. Given the text ``Paris is the son of King Priam'', the goal is to determine that, in this sentence, the word refers to the Greek hero, and to link the word to the corresponding entity in a knowledge base such as YAGO \cite{suchanek2007yago} or DBpedia \cite{auer2007dbpedia}. In the biomedical domain, entity linking maps mentions of diseases, drugs, and measures to normalized entities in standard vocabularies. It is an important ingredient for automation in medical practice, research, and public health. Different names for the same entities in hospital information systems seriously hinder the integration and use of medical data. If a medication appears with different names, researchers cannot study its impact, and patients may erroneously be prescribed the same medication twice. The particular challenge of biomedical entity linking is not the ambiguity: a word usually refers to only a single entity. Rather, the challenge is that the surface forms vary markedly, due to abbreviations, morphological variations, synonymous words, and different word orderings.
For example, \textit{``Diabetes Mellitus, Type 2''} is also written as \textit{``DM2''} and \textit{``lung cancer''} is also known as \textit{``lung neoplasm malignant''}. In fact, the surface forms vary so much that all the possible expressions of an entity cannot be known upfront. This means that standard disambiguation systems cannot be applied in our scenario, because they assume that all forms of an entity are known. One may think that variation in surface forms is not such a big problem, as long as all variations of an entity are sufficiently close to its canonical form. Yet, this is not the case. For example, the phrase \textit{``decreases in hemoglobin''} could refer to at least four different entities in MedDRA, which all look alike: \textit{``changes in hemoglobin''}, \textit{``increase in hematocrit''}, \textit{``haemoglobin decreased''}, and \textit{``decreases in platelets''}. In addition, biomedical entity linking cannot rely on external resources such as alias tables, entity descriptions, or entity co-occurrence, which are often used in classical entity linking settings. For this reason, dedicated entity linking approaches have been developed for the biomedical domain. Many methods use deep learning: the work of \citet{li2017cnn} casts biomedical entity linking as a ranking problem, leveraging convolutional neural networks (CNNs). More recently, the introduction of BERT has advanced the performance of many NLP tasks, including in the biomedical domain \cite{huang2019clinicalbert,lee2020biobert,ji2020bert}. BERT creates rich pre-trained representations on unlabeled data and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures. However, considering the number of parameters of pre-trained BERT models, the improvements brought by fine-tuning them come with a heavy computational cost and memory footprint.
This is a problem for energy efficiency, for smaller organizations, or in poorer countries. In this paper, we introduce a very lightweight model that achieves a performance statistically indistinguishable from the state-of-the-art BERT-based models. The central idea is to use an alignment layer with an attention mechanism, which can capture the similarities and differences of corresponding parts of candidate and mention names. Our model is 23x smaller and 6.4x faster than BERT-based models on average; and it is less than half the size, and more than twice the speed, of even the lightweight BERT models. Yet, as we show, our model achieves comparable performance on all standard benchmarks. Further, we can show that adding more complexity to our model is not necessary: the entity-mention priors, the context around the mention, or the coherence of extracted entities \cite[as used, e.g., in][]{hoffart2011robust} do not improve the results any further.\footnote{All data and code are available at \url{https://github.com/tigerchen52/Biomedical-Entity-Linking}.} \section*{Related Work} In the biomedical domain, much early research focuses on capturing string similarity of mentions and entity names with rule-based systems~\cite{dogan2012inference, kang2013using, d2015sieve}. Rule-based systems are simple and transparent, but researchers need to define rules manually, and these rules are bound to a specific application. To avoid manual rules, machine-learning approaches learn suitable similarity measures between mentions and entity names automatically from training sets~\cite{leaman2013dnorm, dougan2014ncbi, ghiasvand2014r, leaman2016taggerone}. However, one drawback of these methods is that they cannot recognize semantically related words. Recently, deep learning methods have been successfully applied to different NLP tasks, based on pre-trained word embeddings, such as word2vec \cite{mikolov2013distributed} and GloVe \cite{pennington2014glove}.
\citet{li2017cnn} and \citet{wright2019normco} introduce a CNN and an RNN, respectively, with pre-trained word embeddings, casting biomedical entity linking as a ranking problem. However, traditional methods for learning word embeddings allow for only a single context-independent representation of each word. Bidirectional Encoder Representations from Transformers (BERT) addresses this problem by pre-training deep bidirectional representations from unlabeled text, jointly conditioning on both the left and the right context in all layers. \citet{ji2020bert} proposed a biomedical entity normalization architecture by fine-tuning the pre-trained BERT / BioBERT / ClinicalBERT models \cite{devlin2018bert,huang2019clinicalbert,lee2020biobert}. Extensive experiments show that their model outperforms previous methods, advancing the state of the art for biomedical entity linking. A shortcoming of BERT is that it needs high-performance machines. \section*{Our Approach} Formally, our inputs are (1) a \emph{knowledge base} (KB), i.e., a list of entities, each with one or more names, and (2) a \emph{corpus}, i.e., a set of text documents in which certain text spans have been tagged as entity mentions. The goal is to link each entity mention to the correct entity in the KB. To solve this problem, we are given a training set, i.e., a part of the corpus where the entity mentions have been linked already to the correct entities in the KB. Our method proceeds in three steps: \begin{description} \item[\textbf{Preprocessing.}] We preprocess all mentions in the corpus and entity names in the KB to bring them to a uniform format. \item[\textbf{Candidate Generation.}] For each mention, we generate a set of candidate entities from the KB. \item[\textbf{Ranking Model.}] For each mention with its candidate entities, we use a ranking model to score each pair of mention and candidate, outputting the top-ranked result. \end{description} \noindent Let us now describe these steps in detail.
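At a high level, the three steps can be sketched as follows. All function names are hypothetical placeholders for the components described in the following sections.

```python
# Minimal sketch of the three-step pipeline. `preprocess`,
# `generate_candidates`, and `score` are hypothetical placeholders for
# the components described in the following sections.
def link_mentions(mentions, kb, preprocess, generate_candidates, score):
    links = {}
    for mention in mentions:
        m = preprocess(mention)
        candidates = generate_candidates(m, kb)   # top-k entity ids
        # rank the candidates and keep the best-scoring entity
        links[mention] = max(candidates, key=lambda e: score(m, e))
    return links
```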
\subsection*{Preprocessing} We preprocess all mentions in the corpus and all entity names in the KB by the following steps: \textbf{Abbreviation Expansion.} Like previous work~\cite{ji2020bert}, we use the Ab3p Toolkit~\cite{sohn2008abbreviation} to expand medical abbreviations. The Ab3p tool outputs a probability for each possible expansion, and we use the most probable expansion. For example, Ab3p knows that ``DM'' is an abbreviation of ``Diabetes Mellitus'', and so we replace the abbreviation with its expanded term. We also expand mentions with the first matching entry from an abbreviation dictionary constructed in previous work~\cite{d2015sieve}, which we manually supplement with 20 biomedical abbreviations (such as ``HbA1c'' for glycated hemoglobin). Our dictionary is available in the supplementary material and online. \textbf{Numeral Replacement.} Entity names may contain numerals in different forms (e.g., Arabic numerals, Roman numerals, or numerals spelled out in English). We replace all forms with spelled-out English numerals. For example, ``type \uppercase\expandafter{\romannumeral2} diabetes mellitus'' becomes ``type two diabetes mellitus''. For this purpose, we manually compiled a dictionary of numerals from the corresponding Wikipedia pages. Finally, we remove all punctuation, and convert all words to lowercase. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{picture/model.pdf} \caption{The architecture of our ranking model, with the input mention ``decreases in hemoglobin'' and the input entity candidate ``haemoglobin decreased''.} \label{fig:architecture} \end{figure*} \textbf{KB Augmentation.} We augment the KB by adding all names from the training set to the corresponding entities. For example, if the training set links the mention ``GS'' in the corpus to the entity ``Adenomatous polyposis coli'' in the KB, we add ``GS'' to the names of that entity in the KB.
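The last preprocessing steps (numeral replacement, punctuation removal, lowercasing) can be sketched as below. The `NUMERALS` dictionary is a tiny illustrative stand-in for the one compiled from Wikipedia, and abbreviation expansion with Ab3p is an external step not shown here.

```python
import string

# Hedged sketch of the final preprocessing steps. NUMERALS is a tiny
# illustrative stand-in for the dictionary compiled from Wikipedia;
# abbreviation expansion with Ab3p is an external step not shown.
NUMERALS = {"2": "two", "ii": "two", "3": "three", "iii": "three"}

def normalize(name):
    # strip punctuation, lowercase, then map numerals to English words
    stripped = name.translate(str.maketrans("", "", string.punctuation))
    return " ".join(NUMERALS.get(t, t) for t in stripped.lower().split())
```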
\subsection*{Candidate Generation}\label{sec:cand} Our ranking approach is based on a deep learning architecture that can compute a similarity score for each pair of a mention in the corpus and an entity name in the KB. However, applying this model to all combinations of all mentions and all entities would be too slow. Therefore, we generate, for each mention $M$ in the corpus, a set $C_M$ of candidate entities from the KB. Then we apply the deep learning method only to the set $C_M$. To generate the candidate set $C_M$, we calculate a score for $M$ and each entity in the KB, and return the top-$k$ entities with the highest score as the candidate set $C_M$ (in our experiments, $k=20$). As each entity has several names, we calculate the score of $M$ and all names of the entity $E$, and use the maximum score as the score of $M$ and the entity $E$. To compute the score between a mention $M$ and an entity name $S$, we split each of them into tokens, so that we have $M=\{m_{1}, m_{2},..., m_{|M|}\}$ and $S=\{s_{1}, s_{2},..., s_{|S|}\}$. We represent each token by a vector taken from a pre-trained embedding matrix $\mathbf V \in \mathbb{R}^{d\times | V |}$, where $d$ is the dimension of the word vectors and $V$ is a fixed-size vocabulary (details in the section \nameref{sec:experimental setting}). To take into account the possibility of different token orderings in $M$ and $S$, we design the \emph{aligned cosine similarity} (\textit{ACos}), which maps a given token $m_i \in M$ to the most similar token $s_j \in S$ and returns the cosine similarity to that token: \begin{equation} \textit{ACos}(m_{i}, S) = \max \{ \cos(m_{i}, s_{j}) \mid s_{j} \in S \} \end{equation} \noindent The similarity score is then computed as the sum of the aligned cosine similarities.
To avoid favoring long texts, and to make the metric symmetric, we add the similarity scores in the other direction as well, yielding: \begin{multline} \textit{sim}(M,S) = \frac{1}{\left| M \right| + \left| S \right|} (\sum_{m_{i} \in M} \textit{ACos}(m_{i}, S) \\ + \sum_{s_{j} \in S} \textit{ACos}(s_{j},M)) \end{multline} \noindent We can now construct the candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$, where $E_i$ is the id of the entity, and $S_i$ is the chosen name of the entity. This set contains the top-$k$ ranked entity candidates for each mention $M$. If this set contains candidates whose score is exactly 1, we filter out all candidates with a lower score. \subsection*{Ranking Model} Given a mention $M$ and its candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$, the ranking model computes a score for each pair of the mention and an entity name candidate $S_i$. Figure~\ref{fig:architecture} shows the corresponding neural network architecture. Let us first describe the base model. This model relies exclusively on the text similarity of mentions and entity names. It ignores the context in which a mention appears, or the prior probability of the target entities. To compute the text similarity, we crafted the neural network following the candidate generation: it determines, for each token in the mention, the most similar token in the entity name, and vice versa. In contrast to the candidate generation, we also take into account character-level information here, and use an alignment layer to capture the similarities and differences of corresponding parts of mention and entity names. \paragraph{Representation Layer.} As mentioned in the section \nameref{sec:cand}, we represent a mention $M$ and an entity name $S$ by the sets of the embeddings of their tokens in the vocabulary $V$.
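The candidate-generation score defined above can be sketched with numpy. In this simplified illustration, the token-to-vector lookup in the embedding matrix is assumed to have happened already, so names are given as lists of vectors.

```python
import numpy as np

# Sketch of the candidate-generation score; token-to-vector lookup is
# assumed to have happened already.
def acos(m, S):
    """Aligned cosine similarity: best cosine match of token m in S."""
    return max(float(np.dot(m, s) / (np.linalg.norm(m) * np.linalg.norm(s)))
               for s in S)

def sim(M, S):
    """Symmetric, length-normalized aligned similarity of two names."""
    total = sum(acos(m, S) for m in M) + sum(acos(s, M) for s in S)
    return total / (len(M) + len(S))
```

Because every token is matched to its best counterpart, swapping the token order of a name leaves the score unchanged.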
However, not all tokens exist in the vocabulary $V$. To handle out-of-vocabulary words, we adopt a recurrent neural network (RNN) to capture character-level features for each word. This has the additional advantage of learning the morphological variations of words. We use a Bi-directional LSTM (BiLSTM), running a forward and a backward LSTM on a character sequence \cite{graves2013speech}. We concatenate the last output states of these two LSTMs as the character-level representation of a word. To use both word-level and character-level information, we represent each token of a mention or entity name as the concatenation of its embedding in $V$ and its character-level representation. \paragraph{Alignment Layer.} To counter the problem of different word orderings in the mention and the entity name, we want the network to find, for each token in the mention, the most similar token in the entity name. For this purpose, we adapt the attention mechanisms that have been developed for machine comprehension and answer selection~\cite{chen2016enhanced,wang2016compare}. Assume that we have a mention $M = \{\bar{m}_{1},$ $\bar{m}_{2},$ $..., \bar{m}_{|M|}\}$ and an entity name $S = \{\bar{s}_{1},$ $\bar{s}_{2},$ $..., \bar{s}_{|S|}\}$, which were generated by the Representation Layer. We calculate a $|M|\times|S|$-dimensional weight matrix $W$, whose element $w_{ij}$ indicates the similarity between token $i$ of the mention and token $j$ of the entity name, $w_{ij} = \bar{m}_{i}^{T} \bar{s}_{j}$. Thus, the $i^{th}$ row in $W$ represents the similarity between the $i^{th}$ token in $M$ and each token in $S$. We apply a softmax function on each row of $W$ to normalize the values, yielding a matrix $W'$.
We can then compute a vector $\tilde{m}_i$ for the $i^{th}$ token of the mention, which is the sum of the vectors of the tokens of $S$, weighted by their similarity to $\bar{m}_i$: \begin{equation} \tilde{m}_{i} = \sum_{j=1}^{|S|} w_{ij}' \bar{s}_{j} \end{equation} \noindent This vector ``reconstructs'' $\bar{m}_i$ by adding up suitable vectors from $S$, using mainly those vectors of $S$ that are similar to $\bar{m}_i$. If this reconstruction succeeds (i.e., if $\bar{m}_i$ is similar to $\tilde{m}_i$), then $S$ contained tokens which, together, contain the same information as $\bar{m}_i$. To measure this similarity, we could use a simple dot product. However, this reduces the similarity to a single scalar value, which erases precious element-wise similarities.
Therefore, we use the following two comparison functions \cite{tai2015improved,wang2016compare}: \begin{equation} \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}) = (\bar{m}_{i}-\tilde{m}_{i}) \odot (\bar{m}_{i}-\tilde{m}_{i}) \end{equation} \begin{equation} \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) = \bar{m}_{i} \odot \tilde{m}_{i} \end{equation} \noindent where the operator $\odot$ denotes element-wise multiplication. Intuitively, the functions \textit{sub} and \textit{mul} represent subtraction and multiplication, respectively. The function \textit{sub} is related to the squared Euclidean distance, and \textit{mul} to the cosine similarity, but both preserve the element-wise information. Finally, we obtain a new representation of each token $i$ of the mention by concatenating $\bar{m}_{i}, \tilde{m}_{i}$ and their difference and similarity: \begin{equation} \hat{m}_{i} = [\bar{m}_{i}, \tilde{m}_{i}, \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}), \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) ] \end{equation} \noindent By applying the same procedure on the columns of $W$, we can analogously compute a vector $\tilde{s}_{j}$ for each token vector $s_j$ of $S$, and obtain the new representation for the $j^{th}$ token of the entity name as \begin{equation} \hat{s}_{j} = [\bar{s}_{j}, \tilde{s}_{j}, \textit{sub}(\bar{s}_{j}, \tilde{s}_{j}), \textit{mul}(\bar{s}_{j}, \tilde{s}_{j}) ] \end{equation} \noindent This representation augments the original representation $\bar{s}_{j}$ of the token by the ``reconstructed'' token $\tilde{s}_{j}$, and by information about how similar $\tilde{s}_{j}$ is to $\bar{s}_{j}$. \paragraph{CNN Layer.} We now have rich representations for the mention and the entity name, and we apply a one-layer CNN on the mention $[\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]$ and the entity name $[\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]$.
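The alignment layer and the two comparison functions described above can be written in a few lines of numpy. This is a hedged sketch, not the actual Keras implementation: `M_bar` and `S_bar` stand for the $|M|\times d$ and $|S|\times d$ token matrices produced by the representation layer.

```python
import numpy as np

# Hedged numpy sketch of the alignment layer and comparison functions
# (not the actual Keras implementation).
def align(M_bar, S_bar):
    W = M_bar @ S_bar.T                           # w_ij = m_i^T s_j
    W = np.exp(W - W.max(axis=1, keepdims=True))  # stable row-wise softmax
    W_prime = W / W.sum(axis=1, keepdims=True)
    M_tilde = W_prime @ S_bar                     # "reconstructed" tokens
    sub = (M_bar - M_tilde) ** 2                  # (x - y) element-wise squared
    mul = M_bar * M_tilde                         # x ⊙ y
    return np.concatenate([M_bar, M_tilde, sub, mul], axis=1)
```

Applying the same function with the arguments swapped, `align(S_bar, M_bar)`, gives the column-wise counterpart for the entity-name tokens.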
We adopt the CNN architecture of \citet{kim2014convolutional} to extract n-gram features of each text: \begin{equation} f_{M} = \textit{CNN}([\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]) \end{equation} \begin{equation} f_{E} = \textit{CNN}([\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]) \end{equation} \noindent We concatenate these into a single vector $f_{\textit{out}} = [ f_{M}, f_{E} ]$. \paragraph{Output Layer.} We are now ready to compute the final output of our network using a two-layer fully connected neural network: \begin{equation} \Phi ( M, E ) = \textit{sigmoid} (W_{2}~~\textit{ReLU}(W_{1}~f_{\textit{out}} + b_{1} ) + b_{2} ) \end{equation} \noindent where $W_{1}$ and $W_{2}$ are learned weight matrices, and $b_1$ and $b_2$ are bias terms. This constitutes our base model, which relies solely on string similarity. We will now see how we can add prior, context, and coherence features. \subsection*{Extra Features}\label{sec:extra} \paragraph{Mention-Entity Prior.} Consider an ambiguous case such as \textit{``You should shower, let water flow over wounds, pat dry with a towel.''} appearing in hospital discharge instructions. In this context, the disease name \textit{``wounds''} is much more likely to refer to \textit{``surgical wound''} than \textit{``gunshot wound''}. This prior probability is called the \emph{mention-entity prior}. It can be estimated, e.g., by counting in Wikipedia how often a mention is linked to the page of an entity~\cite{hoffart2011robust}. Unlike DBpedia and YAGO, biomedical knowledge bases generally do not provide links to Wikipedia. Hence, we estimate the mention-entity prior from the training set, as: \begin{equation} \textit{prior}(M,E) = \log \textit{count}(M, E) \end{equation} \noindent where $\textit{count}(M, E)$ is the frequency with which the mention $M$ is linked to the target entity $E$ in the training dataset. To reduce the effect of overly large values, we apply the logarithm.
This prior can be added easily to our model by concatenating it to $f_{\textit{out}}$: \begin{equation} f_{\textit{out}} = [ f_{M}, f_{E}, \textit{prior}(M,E) ] \end{equation} \paragraph{Context.} The context around a mention can provide clues on which candidate entity to choose. We compute a context score that measures how relevant the keywords of the context are to the candidate entity name. We first represent the sentence containing the mention by pre-trained word embeddings. We then run a Bi-directional LSTM on the sentence to get a new representation for each word. In the same way, we apply a Bi-directional LSTM on the entity name tokens to get the entity name representation $cxt_{E}$. To select keywords relevant to the entity while ignoring noise words, we adopt an attention strategy to assign a weight to each token in the sentence. Then we use a weighted sum to represent the sentence as $cxt_{M}$. The context score is then computed as the cosine similarity between both representations: \begin{equation} \textit{context}(M, E) = \cos (cxt_{M}, cxt_{E}) \end{equation} As before, we concatenate this score to the vector $f_{\textit{out}}$. \paragraph{Coherence.} Certain entities are more likely to occur together in the same document than others, and we can leverage this tendency to help the entity linking. To capture the co-occurrence of entities, we pre-train entity embeddings in such a way that entities that often co-occur together have a similar distributed representation. We train these embeddings with Word2Vec~\cite{mikolov2013distributed} on a collection of PubMed abstracts\footnote{ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/}. Since the entities in this corpus are not linked to our KB, we consider every occurrence of an exact entity name as a mention of that entity. Given a mention $M$ and a candidate entity $E$, we compute a coherence score to measure how often the candidate entity co-occurs with the other entities in the document.
We first select the mentions around $M$. For each mention, we use the first entity candidate (as given by the candidate selection). This gives us a set of entities $P_{M} = \{ {p}_{1}, {p}_{2},..., {p}_{k}\}$, where each element is a pre-trained entity vector. Finally, the coherence score is computed as: \begin{equation} \textit{coherence}(M, E) = \frac{1}{k} \sum_{i=1}^{k} \cos(p_{i},p_{E}) \end{equation} \noindent where $p_{E}$ is the pre-trained vector of the entity candidate $E$. This score measures how close the candidate entity $E$ is, on average, to the other presumed entities in the document. As before, we concatenate this score to the vector $f_{\textit{out}}$. More precisely, we pre-train separate entity embeddings for the three datasets and use the mean value of all entity embeddings to represent missing entities. \subsection*{NIL Problem} The NIL problem occurs when a mention does not correspond to any entity in the KB. We adopt a traditional threshold method, which considers a mention unlinkable if its score is less than a threshold $\tau$. This means that we map a mention to the highest-scoring entity if that score exceeds $\tau$, and to NIL otherwise. The threshold $\tau$ is learned from a training set. For datasets that do not contain unlinkable mentions, we set the threshold $\tau$ to zero. \subsection*{Training} For training, we adopt a triplet ranking loss function to make the score of the positive candidates higher than the score of the negative candidates. The objective function is: \begin{multline} \theta ^{*} = \mathop{\arg\min}_{\theta} \sum_{D \in \mathcal{D}}\sum_{M \in D}\sum_{E^{-} \in C} \\ \max (0, \gamma - \Phi ( M, E^{+} ) + \Phi ( M, E^{-} )) \end{multline} \noindent where $\theta$ stands for the parameters of our model, $\mathcal{D}$ is a training set containing a certain number of documents, and $\gamma$ is the margin parameter.
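The coherence score is a plain mean of cosine similarities, which can be sketched directly; the toy vectors below are illustrative only.

```python
import numpy as np

def coherence(P_M, p_E):
    """Mean cosine similarity between the candidate entity embedding p_E
    and the pre-trained vectors of the surrounding presumed entities P_M."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return sum(cos(p, p_E) for p in P_M) / len(P_M)

P_M = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # toy neighbour entities
score = coherence(P_M, np.array([1.0, 0.0]))         # (1 + 0) / 2
```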
$E^{+}$ and $E^{-}$ represent a positive entity candidate and a negative entity candidate, respectively. Our goal is to find an optimal $\theta$, which makes the score difference between positive and negative entity candidates as large as possible. For this, we need triplets of a mention $M$, a positive example $E^+$ and a negative example $E^-$. The positive example can be obtained from the training set. The negative examples are usually chosen by random sampling from the KB. In our case, we sample the negative example from the candidates that were produced by the candidate generation phase (excluding the correct entity). This choice makes the negative examples very similar to the positive example, and forces the process to learn what distinguishes the positive candidate from the others. \section*{Experiments} \begin{table}[b!] \small \begin{tabu} {p{1.3cm} X[c] X[c] X[c] X[c] X[c] X[c]} \toprule &\multicolumn{2}{c}{ShARe/CLEF} &\multicolumn{2}{c}{NCBI} &\multicolumn{2}{c}{ADR} \\ &train &test &train &test &train &test \\ \midrule documents &199 &99 &692 &100 &101 &99 \\ mentions &5816 &5351 &5921 &964 &7038 &6343 \\ NIL &1641 &1750 &0 &0 &47 &18 \\ \midrule concepts &\multicolumn{2}{c}{88140} &\multicolumn{2}{c}{9656} &\multicolumn{2}{c}{23668} \\ synonyms &\multicolumn{2}{c}{42929} &\multicolumn{2}{c}{59280} &\multicolumn{2}{c}{0}\\ \bottomrule \end{tabu} \caption{Dataset Statistics}\label{tab:datasets} \end{table} \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule DNorm \cite{leaman2013dnorm}&-&82.20$\pm$3.09&-\cr UWM \cite{ghiasvand2014r} &89.50$\pm$1.02&-&-\cr Sieve-based Model \cite{d2015sieve}&\cellcolor{lightgray!50}90.75$\pm$0.96&84.65$\pm$3.00&-\cr TaggerOne \cite{leaman2016taggerone}&-&\cellcolor{lightgray!50}88.80$\pm$2.59&-\cr Learning to Rank \cite{xu2017uth_ccb}&-&-&92.05$\pm$0.84\cr CNN-based Ranking \cite{li2017cnn}&\cellcolor{lightgray!50}90.30$\pm$1.00&86.10$\pm$2.79&-\cr BERT-based Ranking
\cite{ji2020bert}&\cellcolor{lightgray!50}{\bf91.06$\pm$\bf0.96}&\cellcolor{lightgray!50}89.06$\pm$2.63&\cellcolor{lightgray!50}{\bf93.22$\pm$\bf0.79}\cr Our Base Model &\cellcolor{lightgray!50}90.10$\pm$1.00 &\cellcolor{lightgray!50}89.07$\pm$2.63&\cellcolor{lightgray!50}92.63$\pm$0.81\cr Our Base Model + Extra Features &\cellcolor{lightgray!50}90.43$\pm$0.99 &\cellcolor{lightgray!50}{\bf89.59$\pm$2.59}&\cellcolor{lightgray!50}92.74$\pm$0.80\cr \bottomrule \end{tabular} \end{threeparttable} \caption{Performance of different models. Results in gray are not statistically different from the top result.}\label{tab:performance_comparison} \end{table*} \subsection*{Datasets and Metrics. } We evaluate our model on three datasets (shown in Table~\ref{tab:datasets}). The \textbf{ShARe/CLEF} corpus~\cite{pradhan2013task} comprises 199 medical reports for training and 99 for testing. As Table~\ref{tab:datasets} shows, $28.2\%$ of the mentions in the training set and $32.7\%$ of the mentions in the test set are unlinkable. The reference knowledge base used here is the SNOMED-CT subset of the UMLS 2012AA~\cite{bodenreider2004unified}. The \textbf{NCBI} disease corpus~\cite{dougan2014ncbi} is a collection of 793 PubMed abstracts partitioned into 693 abstracts for training and development and 100 abstracts for testing. We use the July 6, 2012 version of MEDIC~\cite{davis2012medic}, which contains 9,664 disease concepts. The TAC 2017 Adverse Reaction Extraction (\textbf{ADR}) dataset consists of a training set of 101 labels and a test set of 99 labels. The mentions have been mapped manually to the MedDRA 18.1 KB, which contains 23,668 unique concepts. Following previous work, we adopt accuracy to compare the performance of different models. \subsection*{Experimental Settings} \label{sec:experimental setting} We implemented our model using Keras, and trained our model on a single Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz, using less than 10Gb of memory. 
Each token is represented by a 200-dimensional word embedding computed on the PubMed and MIMIC-III corpora~\cite{zhang2019biowordvec}. As for the character embeddings, we use a random matrix initialized as proposed in \citet{he2015delving}, with a dimension of $128$. The dimension of the character LSTM is $64$, which yields $128$-dimensional character feature vectors. In the CNN layer, the number of feature maps is $32$, and the filter windows are $[1, 2, 3]$. The dimensions of the context LSTM and the entity embeddings are set to $32$ and $50$, respectively. We adopt a grid search on a hold-out set from the training samples to select the threshold $\tau$, and find an optimal value of $\tau = 0.75$. During the training phase, we select at most $20$ entity candidates per mention, and the margin parameter of the triplet ranking loss is $0.1$. For the optimization, we use Adam with a learning rate of $0.0005$ and a batch size of $64$. To avoid overfitting, we adopt a dropout strategy with a dropout rate of $0.1$. \subsection*{Competitors} We compare our model to the following competitors: \textbf{DNorm} \cite{leaman2013dnorm}; \textbf{UWM} \cite{ghiasvand2014r}; \textbf{Sieve-based Model} \cite{d2015sieve}; \textbf{TaggerOne} \cite{leaman2016taggerone}; a model based on \textbf{Learning to Rank} \cite{xu2017uth_ccb}; \textbf{CNN-based Ranking} \cite{li2017cnn}; and \textbf{BERT-based Ranking} \cite{ji2020bert}. \section*{Results} \subsection*{Overall Performance} During the candidate generation, we generate 20 candidates for each mention. The recall of correct entities on the ShARe/CLEF, NCBI, and ADR test datasets is 97.79\%, 94.27\%, and 96.66\%, respectively. We thus conclude that our candidate generation does not eliminate too many correct candidates. Table~\ref{tab:performance_comparison} shows the performance of our model and the baselines.
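The grid search for the NIL threshold $\tau$ amounts to picking the grid value that maximizes hold-out accuracy. The sketch below assumes a simplified interface (one top candidate score per mention plus a gold linkable/NIL flag), which is not the paper's actual code.

```python
def tune_threshold(holdout, grid):
    """Pick the tau maximizing hold-out accuracy: a mention is predicted
    linkable iff its best candidate score exceeds tau.

    `holdout` is a hypothetical list of (best_score, gold_is_linkable) pairs."""
    def accuracy(tau):
        return sum((s > tau) == linkable for s, linkable in holdout) / len(holdout)
    return max(grid, key=accuracy)

# Toy hold-out set: high scores correspond to linkable mentions.
holdout = [(0.9, True), (0.8, True), (0.3, False), (0.2, False)]
tau = tune_threshold(holdout, [0.1, 0.5, 0.95])
```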
Besides accuracy, we also compute a binomial confidence interval for each model (at a confidence level of 0.02), based on the total number of mentions and the number of correctly mapped mentions. The best results are shown in bold text, and all performances that are within the error margin of the best-performing model are shown in gray. We first observe that, for each dataset, several methods perform within the margin of the best-performing model. However, only two models are consistently within the margin across all datasets: BERT and our method. Adding extra features (prior, context, coherence) to our base model yields a small increase on the three datasets. However, overall, even our base model achieves a performance that is statistically indistinguishable from the state of the art. \begin{table}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule - Character Feature &-1.21&-0.31&-0.30\cr - Alignment Layer & \underline{-3.80}&\underline{-4.06}&\underline{-3.17}\cr - CNN Layer &-1.87&-0.93&-0.35\cr \rowcolor{lightgray!50} Our Base Method &90.10&89.07&92.63\cr + Mention-Entity Prior &+0.33&+0.04&+0.03\cr + Context &-0.09&+0.21&-0.24\cr + Coherence &-0.02&+0.27&+0.11\cr \bottomrule \end{tabular} \caption{Ablation study}\label{tab:ablation} \end{threeparttable} \end{table} \begin{table*}[!t] \centering \begin{threeparttable} \begin{tabular}{ccccccc} \toprule \multirow{1}{*}{Model}&Original ADR&10\%&30\%&50\%&70\%&90\%\cr \midrule + Ordering Change &92.63&92.20&92.18&91.95&92.31&92.05\cr + Typo &92.63&92.03&91.61&91.38&91.41&91.13\cr \bottomrule \end{tabular} \caption{Performance in the face of typos: Simulated ADR Datasets}\label{tab:simulate} \end{threeparttable} \end{table*} \ignore{ \begin{table*}[!t] \begin{threeparttable} \begin{tabular}{cccc} \toprule &Sieve-based Model&BERT-based Ranking&Our Base Model\cr \midrule Parameter Numbers &-&110M/340M&6.5M/4.9M/2.3M\cr Abbreviation 
Expansion Tool &$\checkmark$&$\checkmark$&$\checkmark$\cr Abbreviation Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Numeral Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Synonym Dictionary&$\checkmark$&$\times$&$\times$\cr Spelling Check Dictionary&$\times$&$\checkmark$&$\times$\cr Stemming Tool&$\checkmark$&$\checkmark$&$\times$\cr Information Retrieval Tool &$\times$&$\checkmark$&$\times$\cr \bottomrule \end{tabular} \caption{Model parameter numbers and external resources used.} \end{threeparttable} \end{table*} } \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccccccc} \toprule Model&Parameters&\multicolumn{2}{c}{ShARe/CLEF}&\multicolumn{2}{c}{NCBI}&\multicolumn{2}{c}{ADR}&Avg&Speedup\cr &&CPU &GPU &CPU &GPU &CPU &GPU && \cr \midrule BERT (large)&340M&2230s&1551s&353s&285s&2736s&1968s&1521s&12.3x\cr BERT (base)&110M&1847s&446s&443s&83s&1666s&605s&848s&6.4x\cr TinyBERT$_{6}$&67M&1618s&255s&344s&42s&2192s&322s&796s&6.0x\cr MobileBERT (base)&25.3M&1202s&330s&322s&58s&1562s&419s&649s&4.7x\cr ALBERT (base)&12M&836s&\textbf{129s}&101s&24s&1192s&170s&409s&2.6x\cr Our Base Model&4.6M&\textbf{181s}&131s&\textbf{38s}&\textbf{22s}&\textbf{196s}&\textbf{116s}&\textbf{114s}&-\cr \bottomrule \end{tabular} \caption{Number of model parameters and observed inference time} \label{tab:running time} \end{threeparttable} \end{table*} \subsection*{Ablation Study} To understand the effect of each component of our model, we measured the performance of our model when individual components are removed or added. The results of this ablation study on all three datasets are shown in Table~\ref{tab:ablation}. The gray row is the accuracy of our base model. The removal of the components of the base model is shown above the gray line; the addition of extra features (see the section of \nameref{sec:extra}) below. If we remove the Alignment Layer (underlined), the accuracy drops the most, with up to 4.06 percentage points. 
This indicates that the alignment layer can effectively capture the similarity of the corresponding parts of mentions and entity names. The CNN Layer extracts the key components of the names, and removing this part causes a drop of up to 1.87 percentage points. The character-level feature captures morphological variations, and removing it results in a decrease of up to 1.21 percentage points. Therefore, we conclude that all components of our base model are necessary. Let us now turn to the effect of the extra features of our model. The Mention-Entity Prior can bring a small improvement, because it helps with ambiguous mentions, which occupy only a small portion of the dataset. The context feature, likewise, can achieve a small increase on the NCBI dataset. On the other datasets, however, the feature has a negative impact. We believe that this is because the documents in the NCBI datasets are PubMed abstracts, which have more relevant and informative contexts. The documents in the ShARe/CLEF and ADR datasets, in contrast, are more like semi-structured text with a lot of tabular data. Thus, the context around a mention in these documents is less helpful. The coherence feature brings only slight improvements. This could be because our method of estimating co-occurrence is rather coarse-grained, and the naive string matching we use may generate errors and omissions. In conclusion, the extra features do bring a small improvement, and they are thus an interesting direction of future work. However, our simple base model is fully sufficient to achieve state-of-the-art performance already. \subsection*{Performance in the Face of Typos} To reveal how our base model works, we further evaluate it on simulated ADR datasets. We generate two simulated datasets by randomly adding typos and changing word orderings of mention names. 
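A minimal version of the two corruption procedures might look as follows; the exact corruption scheme behind Table~\ref{tab:simulate} is not fully specified, so the single-character substitution below is an assumption.

```python
import random

def add_typo(name, rng):
    """Substitute one randomly chosen character of a mention
    (may occasionally reproduce the original character)."""
    if not name:
        return name
    i = rng.randrange(len(name))
    return name[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + name[i + 1:]

def change_ordering(name, rng):
    """Randomly permute the word order of a mention."""
    words = name.split()
    rng.shuffle(words)
    return " ".join(words)

rng = random.Random(0)  # seeded for reproducibility
typo = add_typo("myocardial infarction", rng)
reordered = change_ordering("myocardial infarction", rng)
```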
As Table~\ref{tab:simulate} shows, the accuracy degrades only slowly as we gradually add typos: adding typos to 90\% of the mentions results in a drop of only 1.5 percentage points. This shows that our model deals well with morphological variations of biomedical names. Moreover, ordering changes have almost no effect on our base model, which means that it can capture correspondences between mention and entity names. \subsection*{Parameters and Inference Time} To measure the simplicity of our base model, we analyze two dimensions: the number of model parameters and the practical inference time. In Table~\ref{tab:running time}, we compare our model with BERT models, including three popular lightweight models: ALBERT~\cite{lan2019albert}, TinyBERT~\cite{jiao2019tinybert}, and MobileBERT~\cite{sun2020mobilebert}. Although ALBERT's size is close to our model's, its performance is still 2.2 percentage points lower than the BERT$_{\textit{BASE}}$ model on average. The second column of the table shows the number of parameters of the different models. Our model uses an average of only 4.6M parameters across the three datasets, which is 1.6x to 72.9x smaller than the other models. The third through tenth columns show the practical inference time of the models on the CPU and GPU. The CPU is described in \nameref{sec:experimental setting}, and the GPU we used is a single NVIDIA Tesla V100 (32G). Our model is consistently the fastest across all three datasets, both on CPU and GPU (except in the fourth column). On average, our model is 6.4x faster than the other BERT models, and it is particularly lightweight on the CPU. \subsection*{Model Performance as Data Grows} In this section, we study how our model performs with an increasing amount of training samples, by subsampling the datasets. As shown in Figure~\ref{fig:smalldata}, the performance of our base model keeps growing when we gradually increase the number of training samples.
When using 50\% of the training samples, the accuracies on the ShARe/CLEF, NCBI, and ADR datasets are already $0.8342$, $0.8747$, and $0.9106$, respectively. More data leads to better performance, and thus our model is not limited by its expressivity, even though it is very simple. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{picture/model_efficiency.pdf} \caption{Model efficiency on a small amount of data.} \label{fig:smalldata} \end{figure} \section*{Conclusion} In this paper, we propose a simple and lightweight neural model for biomedical entity linking. Our experimental results on three standard evaluation benchmarks show that the model is very effective, and achieves a performance that is statistically indistinguishable from the state of the art. BERT-based models, e.g., have 23 times more parameters and require 6.4 times more computing time for inference. Future work to improve the architecture can explore \emph{1)} automatically assigning a weight to each word in the mentions and entity names to capture the importance of each word, depending, e.g., on its grammatical role; and \emph{2)} Graph Convolutional Networks (GCNs) \cite{kipf2016semi,wu2020dynamic} to capture graph structure across mentions and improve our notion of entity coherence. \goodbreak \section*{Acknowledgments} This project was partially funded by the DirtyData project (ANR-17-CE23-0018-01).
\section{Introduction}\label{sect:introduction} Neural ordinary differential equations (Neural ODEs) \cite{neuralODEs}, which are analogous to a continuous-depth version of deep residual networks \cite{he2016deep}, exhibit considerable computational efficiency on time-series modeling tasks. Although Neural ODEs do not necessarily improve the performance of contemporary deep models, they enable the rich theory and tools from the field of differential equations to be applied to deep models. Examples include a better characterization of Neural ODEs \cite{rubanova2019latent,dupont2019augmented,durkan2019neural,jia2019neural}, and a better understanding of their robustness \cite{yan2020robustness}, stability \cite{yang2020dynamical}, and controllability \cite{quaglino2019snode,holl2020learning,kidger2020neural}. As the use of Neural ODEs on real-world applications increases \cite{finlay2020train,lechner2020neural,erichson2020lipschitz,lechner2020learning,hasani2020natural}, so does the importance of ensuring their safety through the use of verification techniques. In this paper, we establish a theoretical foundation for the verification of Neural ODE networks. In particular, we introduce \emph{Stochastic Lagrangian Reachability} (SLR), a new analysis technique with provable convergence and conservativeness guarantees for Neural ODEs $\partial_t x\,{=}\,f$, with field $f(x,x(0),t,\theta)$, hidden states $x(t)$, and parameters $\theta$. (SLR works in fact for any nonlinear system defined by a set of nonlinear differential equations.) At the core of SLR is the translation of the reachability problem to a global optimization problem, at every time step $t$. The latter is solved globally, by uniformly sampling states $x$ from an initial ball $\mathcal{B}_0$, and locally, by computing a local minimum via gradient descent from $x$. SLR avoids gradient descent if $x$ is within a spherical-cap around a previously sampled state or its corresponding local minimum. 
The radius of the cap is derived from the interval computation of the local Lipschitz constant of the objective function within the cap. The minimum computed by SLR at time $t$ stochastically defines an as-tight-as-possible ellipsoid covering all states reached at $t$ by the solution starting in $\mathcal{B}_0$, with tolerance $\mu$ and confidence $1\,{-}\,\gamma$, for given values of $\mu$ and $\gamma$. See Figure~\ref{fig:Notation}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{Figures_notation.pdf} \caption{The conservative reachset $\mathcal{B}_j$ at time $t_j$ computed using Lagrangian reachability and global optimization, for a Neural ODE starting from the ball $\mathcal{B}_0$ at time $t_0$.} \label{fig:Notation} \end{figure} Since SLR employs interval arithmetic only locally to compute the spherical-caps (also called safety or tabu regions), it avoids the infamous wrapping effect \cite{lohnerOrig} of deterministic reachability methods (see Table~\ref{tab:related_works}), which prevents them from being deployed in practice. Consequently, our approach scales up to large-scale, real-life Neural ODEs. To the best of our knowledge, none of the available tools has been successfully applied to Neural ODEs. We also introduce a novel forward formulation of the adjoint sensitivity method \cite{pontryagin2018mathematical} to compute the loss gradients in the optimization flow. This enables us to improve the time complexity of the optimization process compared to similar methods \cite{neuralODEs,zhuang2020adaptive}. \noindent\textbf{Summary of results.} In this work, we present a thorough theoretical approach to the problem of providing safety guarantees for the class of time-continuous neural networks formulated as Neural ODEs. As the main result, we develop SLR, a differentiable stochastic Lagrangian reachability framework, formulated as a global optimization problem. 
In particular, we prove that SLR converges (Theorem~\ref{thm:convergence guarantee}) to tight ellipsoidal safe regions (Theorem~\ref{thm:safety region radius}), within $\mathcal{O}(-\ln\gamma\, (\delta_0/r_{bound})^{2n})$ iterations (Theorem~\ref{thm:convergence rate}). This implies that for a given confidence parameter $\gamma$, our algorithm terminates according to the proposed rate, which leads to the important conclusion that the problem of constructing an ellipsoid abstraction of the true reachsets with probabilistic guarantees for Neural ODEs is decidable (the computed abstraction is conservative with confidence $1\,{-}\,\gamma$). We summarize our key contributions as follows: \begin{itemize} \itemsep0em \item We introduce a theoretical framework for the verification of Neural ODEs by restating the reachability problem as a set of global-optimization problems. \item We solve each optimization problem globally, via uniform sampling, and locally, through gradient descent (GD), thereby avoiding costly Hessian computations in the process. \item GD is avoided in spherical-caps around the start/end states of previous searches. The cap radius is derived from its local Lipschitz constant, computed via interval arithmetic. \item We design a forward-mode GD algorithm based on the adjoint sensitivity method for (Neural) ODEs. \item We prove convergence properties of SLR, its safety guarantees, and discuss its time and space complexity. \end{itemize} \section{Related Work} \noindent\textbf{Global optimization.} The literature on global optimization for continuous problems is vast and includes many different approaches depending on the smoothness assumptions made about the objective function. Evolutionary strategies like those based on covariance matrix adaptation \cite{cma,cmaes} work for general continuous objectives.
Deterministic interval-based branch-and-bound methods \cite{neumaier_2004,globintervals} work for differentiable objectives, and Lipschitz global optimization \cite{piyavskii,schubert,lipschitz} for objectives satisfying the Lipschitz condition. Our work is closest to the BRST algorithm \cite{brst,brst2,brst3}, which, for smooth objectives, uses Hessians to compute the basins of attraction of local minima as ellipsoidal bounds. Such basins define tabu regions; the algorithm provides a final estimate for the global minimum together with confidence bounds.\\[1mm] \noindent\textbf{Stochastic reachability.} Existing work is mainly concerned with the verification of safety guarantees for stochastic hybrid systems with continuous dynamics (ODEs) in each mode. Stochasticity is introduced in several ways: uncertainty in the model parameters \cite{sreach,sisat,probreach}, uncertainty in the discrete jumps between modes \cite{e71eb65e23844408b72fe95a84f88cb6}, and uncertainty in the initial state \cite{10.1145/3126508}. The work of \cite{reliablecomput} focuses on the probabilistic verification of continuous-time ODEs with uncertainty in parameters and initial states. \noindent\textbf{Reachability for continuous dynamical systems.} Most of the relevant techniques are deterministic and based on interval arithmetic. We provide a qualitative summary of existing reachability methods for continuous-time systems in Table~\ref{tab:related_works}.
\begin{table*}[t] \scriptsize \centering \caption{A Perspective on Related Work} \vspace{-2mm} \begin{tabular}{l|c|c|c|c} \toprule \textbf{Technique} & \textbf{Deterministic} & \textbf{Parallelizable (single step)} & \textbf{Basis} & \textbf{wrapping effect} \\ \midrule LRT \cite{Cyranka2017} & yes & no & Infinitesimal strain theory & yes \\ CAPD \cite{CAPD}& yes & no & Lohner algorithm & yes \\ Flow-star \cite{flowstar}& yes & no & Taylor models & yes \\ $\delta$-reachability \cite{deltadecidable} & yes & no & approximate satisfiability & yes\\ C2E2 \cite{c2e2} & yes & no & discrepancy function & yes\\ LDFM \cite{fansimul}& yes & yes & simulation, matrix measures & no\\ TIRA \cite{tira} & yes & yes & second-order sensitivity & no\\ Isabelle/HOL \cite{isabelle} & yes & no & proof-assistant & yes\\ Breach \cite{breach,donze}& yes & yes & simulation, sensitivity & no\\ PIRK \cite{pirk} & yes & yes & simulation, contraction bounds & no\\ HR \cite{hr} & yes & no & hybridization & yes\\ ProbReach \cite{probreach2} & no & no & $\delta$-reachability, probability interval & yes \\ VSPODE \cite{reliablecomput} & no &no & p-boxes & yes \\ GP \cite{gp} & no & no & Gaussian process& no \\ SLR \textbf{Ours} & no & yes & stochastic Lagrangian reachability & no\\ \bottomrule \end{tabular} \caption*{\footnotesize \textbf{Note:} Deterministic refers to approaches that provide an overapproximation of the reach-set without any uncertainties. A ``No” in the deterministic column indicates a stochastic approach that yields a reach-set with a corresponding confidence interval.} \label{tab:related_works} \vspace{-4mm} \end{table*} \section{Setup}\label{sect:preliminaries} In this section, we introduce our notation, preliminary concepts, and definitions required to construct our theoretical setup for the verification of Neural ODEs. 
\noindent\textbf{Neural ODE.} The derivative of the hidden states $x$ is computed by a neural network $f$ parameterized by $\theta$ as follows \cite{neuralODEs}: \begin{equation} \partial_t x = f(x,x(0),t,\theta), x_0 \in \mathcal{B}_0 \label{neuralode} \end{equation} We require that the Neural ODE is Lipschitz-continuous and forward-complete. The solution to this initial-value problem can be computed by numerical ODE solvers, from any initial system state $x(0)\,{=}\,x_0$. Consequently, the numerical solution can be trained by reverse-mode automatic differentiation \cite{rumelhart1986learning}, either through the solver, by a vanilla backpropagation algorithm \cite{hasani2020liquid}, or by treating the solver as a blackbox and using the adjoint sensitivity method \cite{pontryagin2018mathematical}. \noindent\textbf{Geometrical deformation in time by a flow $\chi$.} To describe the optimization problem, we use Eulerian and Lagrangian coordinates from classical continuum mechanics. We regard the set of initial states, which is the ball $\mathcal{B}_0\,{=}\,B(x_0,\delta_0)$, as a body that is being deformed in time by a flow $\chi$. Given a point $x\,{\in}\,\mathcal{B}_0$ in Eulerian coordinates (the undeformed configuration), there is at every time $t_j\,{>}\,t_0$ the representation $x(t_j)\,{=}\,\chi_{t_0}^{t_j}(x)$ of that point in Lagrangian coordinates (the configuration deformed by $\chi$). The deformation of $\mathcal{B}_0$ in time is related to the Neural ODE, where $\chi$ is defined as the solution flow of Eq.~\eqref{neuralode}. \noindent\textbf{Reachset.} A reachset is the set of all states reached at a target time $t$, given the initial states and a flow. 
More formally: \begin{definition} Given a set of initial states $\mathcal{B}_0$ at time $t_0$, the target time $t_j\,{\ge}\,t_0$, and the flow $\chi$ of the Neural ODE~\eqref{neuralode}, we call $\mathcal{B}_j(\mathcal{B}_0)\,{\subset}\,\mathbb{R}^n$ a conservative \emph{reachset} enclosure if $\chi_{t_0}^{t_j}(x)\,{\in}\, \mathcal{B}_j(\mathcal{B}_0)$, for all $x\,{\in}\, \mathcal{B}_0$; i.e., the reachset bounds all state-trajectories of the Neural ODE. \end{definition} Whenever the initial set $\mathcal{B}_0$ is known from the context, we simply refer to the reachset as \textit{the Reachset at time $t_j$}, or $\mathcal{B}_j$. \noindent\textbf{Reachtube.} A reachtube is a series of reachsets within a determined time-horizon. Formally: \begin{definition}\label{def:Reachtube} Given a set of initial states $\mathcal{B}_0$ at time $t_0$, and a time horizon $T$, we use $B(\mathcal{B}_0,T)$ to denote a sequence of time-stamped reachsets $\mathcal{B}_1$, $\dots$, $\mathcal{B}_k$ with $t_0 \,{\le}\, t_1 \,{\le}\,{\dots} \,{\le}\, t_k\,{=}\,T$. \end{definition} Whenever the initial set, time horizon, and flow are known from the context, we use the term \emph{reachtube over-approximation} or $\mathcal{B}$, for that sequence of reachsets. \begin{definition}[Ellipsoid]\label{def:ellipsoid} Given $A_j, M_j \in \mathbb{R}^{n\times n}$, $M_j\succ 0$ with $A_j^T A_j = M_j$ and $\lVert x \rVert_{M_j} = \sqrt{x^TM_jx}$, we call $B_{M_j}(x_0,\delta)$ a ball in metric $M_j$ (or an ellipsoid) with center $x_0$ and radius $\delta$ if $\lVert x-x_0\rVert_{M_j}\le\delta$ for all $x\,{\in}\, B_{M_j}(x_0,\delta)$. \end{definition} \noindent\textbf{Reachability as an optimization problem.} Given a time horizon $T$, an initial ball $\mathcal{B}_0 \,{=}\, B_I(x_0, \delta_0)$ with center $x_0$ and radius $\delta_0$, and Euclidean metric $M_0\,{=}\,I$, our goal is to find a tight reachtube $\mathcal{B}$, bounding all state-trajectories of the Neural ODE~\eqref{neuralode}. 
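Definition~\ref{def:ellipsoid} translates directly into a membership test; a small numpy sketch (the diagonal metric is an illustrative choice):

```python
import numpy as np

def in_ellipsoid(x, center, M, delta):
    """Test ||x - center||_M <= delta with ||v||_M = sqrt(v^T M v),
    for a positive-definite metric M (Definition of an ellipsoid)."""
    v = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    return float(np.sqrt(v @ M @ v)) <= delta

M = np.diag([4.0, 1.0])  # ellipsoid with semi-axes 1/2 and 1 at delta = 1
```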
We capture the reachsets of $\mathcal{B}$ by ellipsoids $\mathcal{B}_j\,{=}\, B_{M_j}(\chi_{t_0}^{t_j}(x_0),\delta_j)$ with center $\chi_{t_0}^{t_j}(x_0)$, radius $\delta_j$, and metric $M_j$. At every time $t_j$, we use as the center $\chi_{t_0}^{t_j}(x_0)$, the numerical integration of $x_0$, and as the metric $M_j$, the optimal metric in $\chi_{t_0}^{t_j}(x_0)$ minimizing the volume of the ellipsoid, as proposed in~\cite{gruenbacher2020lagrangian}. Thus, our goal is to find at every time step $t_j$, a radius $\delta_j$ which (stochastically) guarantees that $\mathcal{B}_j$ is a conservative reachset. I.e., at each $t_j$, we want to find the maximal distance of all $\chi_{t_0}^{t_j}(x)$ to center $\chi_{t_0}^{t_j}(x_0)$ in metric $M_j$ for $x\,{\in}\,\mathcal{B}_0$, and define $\delta_j$ as this distance. Thus the optimization problem can be defined as follows: \begin{align} \delta_j &\ge \max_{x\in\mathcal{B}_0} \left\lVert \chi_{t_0}^{t_j}(x) - \chi_{t_0}^{t_j}(x_0)\right\rVert_{M_j}\label{eq:optim1} = \max_{x\in\mathcal{B}_0} \dist\left(\chi_{t_0}^{t_j}(x)\right) \end{align} where we use $\dist(\chi_{t_0}^{t_j}(x))$ to describe the distance in Eq.~\eqref{eq:optim1} when metric $M_j$ and starting point $x_0$ are known. As we require Lipschitz-continuity and forward-completeness, the map $x \mapsto \chi_{t_0}^{t_j}(x)$ is a homeomorphism and commutes with closure and interior operators. In particular, the image of the boundary of the set $\mathcal{B}_0$ is equal to the boundary of the image $\chi_{t_0}^{t_j}(\mathcal{B}_0)$. Thus, Eq.~\eqref{eq:optim1} has its optimum on the surface of the initial ball $\mathcal{B}_0^S = \textrm{surface}(\mathcal{B}_0)$, and we will only consider points on the surface. 
In order to be able to optimize this problem, we describe the points on the surface with ($n$-dimensional) polar coordinates such that every point $x\,{\in}\,\mathcal{B}_0^S$ is represented by a tuple $(\delta_0,\varphi)$, with angles $\varphi \,{=}\, (\varphi_1,\dots,\varphi_{n-1})$ and center $x_0$, having a conversion function $x((\delta_0,\varphi),x_0)$ from polar to Cartesian coordinates, defined as follows: \begin{equation}\label{eq:polar} \begin{aligned} &x((\delta_0,\varphi),x_0) = \\ &\begin{pmatrix} x_{0,1} + \delta_0 \cos(\varphi_1)\\ \vdots\\ x_{0,n-1} + \delta_0 \sin(\varphi_1)\cdot\ldots\cdot\sin(\varphi_{n-2})\cos(\varphi_{n-1})\\ x_{0,n} + \delta_0 \sin(\varphi_1)\cdot\ldots\cdot\sin(\varphi_{n-2})\sin(\varphi_{n-1})\\ \end{pmatrix} \end{aligned} \end{equation} Whenever the center $x_0$ and the radius $\delta_0$ of the initial ball $\mathcal{B}_0$ are known from the context, we will use the following notation: $x(\varphi)$ for the conversion from polar to Cartesian coordinates and $\varphi(x)$ for Cartesian to polar. Using polar coordinates, we restate the optimization problem~\eqref{eq:optim1} as follows: \begin{align}\label{eq:optim2} \delta_j &= \max_{x\in\mathcal{B}_0} \left\lVert \chi_{t_0}^{t_j}(x) - \chi_{t_0}^{t_j}(x_0)\right\rVert_{M_j}\nonumber\\ &= \max_{\varphi\in \mathbb{R}^{n-1}} \underbrace{\left\lVert \chi_{t_0}^{t_j}(x(\varphi)) - \chi_{t_0}^{t_j}(x_0)\right\rVert_{M_j}} _{=-L(\varphi)}\nonumber\\ &=-\min_{\varphi\in \mathbb{R}^{n-1}} L(\varphi) = -m^\star. \end{align} We call $L$ the \emph{loss function} in polar coordinates at time $t_j$ that we would like to minimize. Note that $L$ also depends on the initial radius $\delta_0$ and initial center $x_0$; as these are fixed inputs, we do not consider them in the notation. \section{Main Results} In this section, we present our verification framework for Neural ODEs, which we call \textbf{Stochastic Lagrangian Reachability (SLR)}.
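The conversion $x(\varphi)$ of Eq.~\eqref{eq:polar}, used throughout this section, can be sketched directly (NumPy assumed; the function name is ours). Every returned point lies on the sphere of radius $\delta_0$ around $x_0$:

```python
import numpy as np

def polar_to_cartesian(delta0, phi, x0):
    """Convert (delta0, phi) with n-1 angles phi to a Cartesian point on the
    sphere of radius delta0 around x0, following Eq. (polar):
    x_k = x0_k + delta0 * sin(phi_1)...sin(phi_{k-1}) * cos(phi_k),
    and the last coordinate uses the full product of sines."""
    n = len(phi) + 1
    x = np.array(x0, dtype=float)   # copy so x0 is not modified
    sin_prod = 1.0
    for k in range(n - 1):
        x[k] += delta0 * sin_prod * np.cos(phi[k])
        sin_prod *= np.sin(phi[k])
    x[n - 1] += delta0 * sin_prod
    return x
```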
As the main results of this paper, we show that the algorithm guarantees safety and converges to the tightest ellipsoid, almost surely, in the limit of the number of samples. We then compute the convergence rate and discuss space and time complexities. \begin{algorithm}[t] \caption{Finding the local minimum} \label{algorithm:gradient descent} \begin{algorithmic}[1] \REQUIRE target time $t_j$, termination tolerance $\epsilon > 0$, learning rate $\gamma > 0$, initial guess $\varphi\in\mathbb{R}^{n-1}$, loss function $L$, gradient of loss $\nabla_\varphi L$ \STATE $l \leftarrow L(\varphi)$, $l_{prev} \leftarrow \infty$ \WHILE{$|l-l_{prev}|/|l_{prev}| > \epsilon$} \STATE \textbf{compute} $\nabla_\varphi L$\label{line:loss gradient} \STATE $\varphi \leftarrow \varphi - \gamma \nabla_\varphi L$ \STATE $l_{prev} \leftarrow l$ \STATE $l \leftarrow L(\varphi)$ \ENDWHILE \RETURN $\varphi, l$ \end{algorithmic} \end{algorithm} \subsection{Gradient Computation} Our algorithm uses gradient descent locally when solving the global optimization problem of Eq.~\eqref{eq:optim2}. Gradient descent is started from uniformly sampled points, which are not contained in already constructed safety regions. Uniform sampling is used to repeatedly select an initial point from the surface of the ball $\mathcal{B}_0$. Gradient descent is used from this point to find a local minimum. SLR is inspired by the \emph{gradient-only tabu-search} (GOTS) proposed in~\cite{stepanenko}. Instead of tabu regions, we use \emph{safety radii} $r(\varphi)$ to construct an area around already visited points $\varphi$, inside which a certified lower bound on the loss is known. In the following, we describe the computational steps of the loss's gradient for the main SLR algorithm in greater detail.
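The descent loop of Algorithm~\ref{algorithm:gradient descent} with its relative-improvement stopping rule can be sketched as follows; the `loss` and `grad_loss` callables stand in for $L$ and $\nabla_\varphi L$, and the iteration cap is our addition to keep the sketch safe on ill-conditioned losses:

```python
import numpy as np

def local_minimum(phi0, loss, grad_loss, gamma=0.05, eps=1e-6, max_iter=10000):
    """Gradient descent with a relative termination tolerance, as in
    Algorithm 1: stop when the loss stops improving relative to itself."""
    phi = np.asarray(phi0, dtype=float)
    l = loss(phi)
    for _ in range(max_iter):
        phi = phi - gamma * grad_loss(phi)   # descent step with learning rate gamma
        l_new = loss(phi)
        if abs(l_new - l) <= eps * abs(l):   # relative-change stopping rule
            return phi, l_new
        l = l_new
    return phi, l
```

On the quadratic toy loss below the iterate converges to the minimizer; for the true loss of Eq.~\eqref{eq:optim2}, `grad_loss` would be supplied by the chain-rule computation of the next subsection.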
Given the target time $t_j$, termination tolerance $\epsilon \,{>}\, 0$, learning rate $\gamma\,{>}\, 0$, initial guess $\varphi\in\mathbb{R}^{n-1}$, and loss function $L$, we seek to compute the gradient of loss $\nabla_\varphi L$. We introduce a new framework to compute the loss's gradient, which is needed in Line~\ref{line:loss gradient} of Algorithm~\ref{algorithm:gradient descent} to find the local minimum. Using the chain rule, we can express the gradient $\nabla_\varphi L$ as follows: \begin{equation} \label{eq:derivatives} \begin{split} &\frac{\partial L(\cdot)}{\partial \varphi}(\varphi) = -\left.\frac{\partial \dist \circ \chi_{t_0}^{t_j} \circ x (\cdot)}{\partial\varphi}\right.\\ & = - \underbrace{\left.\frac{\partial \dist}{\partial y}\right|_{y=\chi_{t_0}^{t_j}( x(\varphi))}}_{(a)} \cdot \underbrace{\left.\frac{\partial \chi_{t_0}^{t_j}}{\partial x}\right|_{x=x(\varphi)}}_{(c)} \cdot \underbrace{\frac{\partial x(\cdot)}{\partial \varphi}}_{(b)} \end{split} \end{equation} \textbf{Part (a) - loss gradient wrt $y$:} The differentiation of the loss function defined in Eq.~\eqref{eq:optim1} can be expressed as \begin{align}\label{eq:loss gradient x} \partial_y \dist (y) = \left(y-\chi_{t_0}^{t_j}(x_0)\right)^{T} M_j\, \dist(y)^{-1}, \end{align} with $M_j = A_j^T A_j$ as in Def.~\ref{def:ellipsoid}, the metric in $\chi_{t_0}^{t_j}(x_0)$ minimizing the volume of the ellipsoid~\cite{gruenbacher2020lagrangian}. \textbf{Part (b) - polar gradient:} $ x(\varphi)$ describes the transformation from polar coordinates to Cartesian coordinates, as given in Eq.~\eqref{eq:polar}.
The differentiation with respect to $\varphi$ is straightforward to obtain using the product rule and the derivatives of $\sin$ and $\cos$: \begin{equation}\label{eq:polar gradient} \begin{aligned} &\partial_\varphi x(\varphi) = \\ &\begin{pmatrix} -\delta_0 \sin(\varphi_1)\\ \delta_0 \left( \cos(\varphi_1)\cos(\varphi_2) - \sin(\varphi_1)\sin(\varphi_2) \right)\\ \vdots\\ \end{pmatrix} \end{aligned} \end{equation} \textbf{Part (c) - gradient of the flow:} The partial derivative $\partial_x \chi_{t_0}^{t_j} (x)$ in $x$ of the Neural ODE solution flow $\chi$ with respect to the initial condition is called the gradient of the flow or \emph{deformation gradient} in~\cite{linTE,contMec}, and the \emph{sensitivity matrix} in~\cite{breach,donze}. Let $I$ be the identity matrix in $\mathbb{R}^{n\times n}$. As we now show, the sensitivity matrix $\partial_x \chi_{t_0}^{t_j}(x)$ is a solution of the \emph{variational equations} associated with~\eqref{neuralode}: \begin{equation} \label{eq:variational} \begin{aligned}[c] \partial_x\chi_{t_0}^{t_j}(x) = F(t_j,x)\hspace{12ex}\\ \partial_t F(t,x) = (\partial_x f)(\chi_{t_0}^{t}(x)) F(t,x),\quad F(t_0,x)=I \end{aligned} \end{equation} \noindent \emph{Proof sketch}: By interchanging the differentiation order, we obtain $\partial_t(\partial_x \chi_{t_0}^{t}(x))\,{=}\,\partial_x (\partial_t \chi_{t_0}^{t}(x))$. Since $\chi_{t_0}^{t}(x)$ is a solution of Eq.~\eqref{neuralode}, $\partial_x (\partial_t \chi_{t_0}^{t} (x))\,{=}\,\partial_x (f(\chi_{t_0}^{t}(x)))$. By the chain rule, we get $\partial_t (\partial_x \chi_{t_0}^{t} (x))\,{=}\,(\partial_x f)(\chi_{t_0}^{t}(x)) \partial_x \chi_{t_0}^{t}(x)$. 
\begin{algorithm}[t] \caption{Computation of $\nabla_\varphi L$} \label{algorithm:computing gradient} \begin{algorithmic}[1] \REQUIRE target time $t_j$, initial value $\varphi\in \mathbb{R}^{n-1}$, Neural ODE $f$, gradients $\partial_y \dist$ and $\partial_\varphi x$ \STATE $b \leftarrow x(\varphi), F \leftarrow I$ \STATE $[b,F] \leftarrow$ solve\_ivp($[f(b,t),(\partial_b f)(b)\cdot F],[0,t_j],[b,F])$ \STATE $\nabla_\varphi L \leftarrow -\partial_y \dist(y) \cdot F \cdot \partial_\varphi x$ \RETURN $\nabla_\varphi L$ \COMMENT{Required in line~\ref{line:loss gradient} of algorithm~\ref{algorithm:gradient descent}} \end{algorithmic} \end{algorithm} \noindent\textbf{Forward-mode use of adjoint sensitivity method.} Eq.~\eqref{eq:variational} has the same form as the auxiliary ODE used for reverse-mode automatic differentiation of Neural ODEs, when optimized by the adjoint sensitivity method \cite{neuralODEs}, with one exception. In contrast to \cite{neuralODEs}, which requires one to run the adjoint equation backward and have access to the termination time of the flow, our approach enjoys a simultaneous forward-mode use of the adjoint equation. This is due to the way we determine the loss function in the ODE space. In effect, this enables us to obtain the gradients of the loss at the current state-computation step. This property enables us to improve the optimization runtime by 50\%, compared to the optimization scheme used in \cite{neuralODEs}: we save half of the time because we do not have to go backward to compute the loss. More precisely, solving Eq.~\eqref{eq:variational} until target time $t_j$ requires knowledge of $\chi_{t_0}^{t}(x)$ for all $t\in[t_0,t_j]$. By integrating the state and $F(t,x)$ simultaneously, the value of $\chi_{t_0}^{t}(x)$ is already available whenever it is needed to compute the right side of Eq.~\eqref{eq:variational}.
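The core step of Algorithm~\ref{algorithm:computing gradient}, jointly integrating the state and the variational equation~\eqref{eq:variational}, can be sketched with SciPy's `solve_ivp`; the Jacobian callable `jac_f` is an assumption of this sketch (e.g., obtained by automatic differentiation of $f$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow_and_sensitivity(f, jac_f, x, t0, tj):
    """Integrate d/dt [x, F] = [f(x), (d_x f)(x) F] with F(t0) = I, i.e.,
    the Neural ODE together with its variational equation (Eq. variational).
    Returns chi_{t0}^{tj}(x) and the sensitivity matrix F(tj, x)."""
    n = len(x)

    def augmented(t, z):
        b, F = z[:n], z[n:].reshape(n, n)
        return np.concatenate([f(b), (jac_f(b) @ F).ravel()])

    z0 = np.concatenate([x, np.eye(n).ravel()])
    sol = solve_ivp(augmented, (t0, tj), z0, rtol=1e-9, atol=1e-9)
    zT = sol.y[:, -1]
    return zT[:n], zT[n:].reshape(n, n)
```

For a linear right-hand side $f(x) = Ax$, the sensitivity equals the matrix exponential $e^{A(t_j-t_0)}$, which gives a convenient correctness check.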
Algorithm~\ref{algorithm:computing gradient} demonstrates the computation of the gradient $\nabla_\varphi L$ of the loss function. \subsection{Safety-Region Computation}\label{sec:TR and TD} With our global search strategy, we are covering the feasible region $\mathcal{B}_0^S$ with already visited points $\mathcal{V}$. Consequently, we have access to the minimum of the loss over all visited points: \begin{align}\label{eq:local minimum} \bar{m} = \min_{\varphi\in\mathcal{V}}L(\varphi) \end{align} with $\bar{m}\ge m^\star$, where $m^\star$ is the global minimum of Eq.~\eqref{eq:optim2}. We now identify safety regions for a Neural ODE flow and describe how this is incorporated in the SLR algorithm. \begin{definition}[Safety Region]\label{def:TR} Let $\varphi_i\,{\in}\,\mathcal{V}\subseteq\mathbb{R}^{n-1}$ be an already-visited point. A safety-radius $r_{\varphi_i}\,{=}\,r(\varphi_i)$ defines a \emph{safe spherical-cap} $B(\varphi_i,r_{\varphi_i})^S \,{=}\, B(x(\varphi_i),r_{\varphi_i}) \cap \mathcal{B}_0^S$ if $L(\psi)\ge\mu\cdot\bar{m}$ for all $\psi$ s.t.\ $x(\psi)\in B(\varphi_i,r_{\varphi_i})^S$. \end{definition} Our objective is to use the local Lipschitz constants to define a radius $r_\varphi$ around an already visited point $\varphi$ s.t.\ we can guarantee that $B(\varphi,r_\varphi)^S$ is a safety region. \begin{definition}[Lipschitz]\label{def:lipschitz} The local Lipschitz constant (LLC) of a function $L$ in a region $A$ is defined as a $\lambda_A \ge 0$ with \begin{align*} \|L(x)-L(y)\|\le \lambda_A \|x-y\| \quad\forall x,y\in A. \end{align*} \end{definition} In the following theorem, we use the LLC to define the radius $r_\varphi$ of the safety (or tabu) region $ B(\varphi,r_\varphi)^S$ around an already-visited point $\varphi\in\mathcal{V}$. \begin{theorem}[Radius of Safety Region]\label{thm:safety region radius} At target time $t_j$, let $\bar{m}$ be the current global minimum, as in Eq.~\eqref{eq:local minimum}.
Let $\varphi\in\mathcal{V}$ be an already-visited point with value $L(\varphi)$ ($\ge \bar{m}$), and let $r_\varphi$ and $ B(\varphi,r_\varphi)^S$ be defined as follows with $\mu\ge 1$: % \begin{align}\label{eq:safety radius} r_{\varphi} = \lambda_{\Sigma_\varphi}^{-1}\left(L(\varphi)-\mu\cdot\bar{m}\right) \end{align} % with $\lambda_{\Sigma_\varphi} = \max_{x(\psi)\in\Sigma_\varphi}\lVert \partial_x \chi_{t_0}^{t_j}(x(\psi)) \rVert_{M_{0,j}}$. If $\Sigma_\varphi$ is chosen s.t.\ $\Sigma_\varphi\supseteq B(\varphi,r_\varphi)^S$, then it holds that: % \begin{align}\label{eq:safety radius result} L(\psi)\ge \mu\cdot\bar{m}\quad\forall x(\psi)\in B(\varphi,r_\varphi)^S \end{align} % \end{theorem} The full proof is provided in the Appendix. \emph{Proof sketch:} The Lipschitz constant defines a relation between the values in the domain and the ones in the range of the function. \begin{algorithm}[t] \caption{Computing the Radius of the Safety Region} \label{algorithm:radius for safety region} \begin{algorithmic}[1] \REQUIRE target time $t_j$, visited point $\varphi$, termination tolerance $\epsilon\,{>}\,0$, initial ball $\mathcal{B}_0$ with radius $\delta_0$, minimum of visited points $\bar{m}$, loss function $L$, tolerance $\mu \,{\ge}\,1$, region $\Sigma_\varphi$ in which to compute the LLC $\lambda$. \vspace*{2mm} \STATE $\Sigma_\varphi \leftarrow \mathcal{B}_0, s\leftarrow \delta_0$ \STATE $\lambda \leftarrow$ computeLipschitz($\Sigma_\varphi$)\label{line:compute lipschitz} \STATE $r \leftarrow 1/\lambda \cdot (L(\varphi)-\mu\cdot\bar{m})$ \WHILE{$|r-s|/r > \epsilon$ \textbf{or} $s<r$} \STATE \textbf{set} $s \leftarrow r + |s-r|/2$ \STATE $\Sigma_\varphi \leftarrow B(\varphi,s)^S$ \STATE $\lambda \leftarrow$ computeLipschitz($\Sigma_\varphi$) \STATE $r \leftarrow 1/\lambda \cdot (L(\varphi)-\mu\cdot\bar{m})$ \ENDWHILE \RETURN $r$ \end{algorithmic} \end{algorithm} Theorem~\ref{thm:safety region radius} says that areas around already-visited samples are safe. 
The size of the safety areas increases if we have a better current global minimum. Therefore, the theorem demonstrates that we can improve the convergence rate if we optimize the loss, by possibly finding a better current global minimum. This justifies the use of gradient descent together with a more global search strategy. Algorithm~\ref{algorithm:radius for safety region} computes the radius in Eq.~\eqref{eq:safety radius} as a fixpoint of the choice of $\Sigma_\varphi$. For an over-approximation of the LLC in Line~\ref{line:compute lipschitz}, we use the triangle inequality and the mean value inequality with a change in metric~\cite[Lemma 2]{Cyranka2017}. We then solve Eq.~\eqref{eq:variational} using interval arithmetic to obtain an interval gradient matrix $[\mathcal{F}_t]\owns\partial_x\chi_{t_0}^{t_j}(x)$ $\forall\,x\in\,\Sigma_\varphi$, and take the maximum singular value of $[\mathcal{F}_t]$, as proposed in~\cite{gruenbacherArch19}. Depending on the Neural ODE, it is presumably faster to pick $s\,{=}\,\delta_0$, and to always use the LLC $\lambda_{\mathcal{B}_0}$ of the entire initial ball. As a result of the way we select $r_\varphi$ in Theorem~\ref{thm:safety region radius}, we are able to increase the radii $r_{\varphi}$ as soon as a new region with a smaller local minimum than the previous ones is discovered. Thus: $\bar{m}\,{\le}\, \bar{m}_{prev}\Rightarrow L(\varphi)\,{-}\,\mu\cdot\bar{m} \ge L(\varphi)\,{-}\,\mu\bar{m}_{prev}\Rightarrow r_\varphi \,{\ge}\, r_{\varphi, prev}$.
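The fixpoint iteration of Algorithm~\ref{algorithm:radius for safety region} can be sketched as follows; `lipschitz_on` is a placeholder returning an over-approximated LLC on the cap of radius `s`, and the iteration cap is our addition:

```python
def safety_radius(L_phi, m_bar, mu, delta0, lipschitz_on, eps=1e-3, max_iter=100):
    """Fixpoint iteration for the safety radius of Eq. (safety radius):
    r = (L(phi) - mu * m_bar) / lambda_Sigma, shrinking the region Sigma
    (here parameterized only by its radius s) until it certifiably covers r."""
    s = delta0
    r = (L_phi - mu * m_bar) / lipschitz_on(s)
    for _ in range(max_iter):
        if abs(r - s) / r <= eps and s >= r:
            break                        # s covers r: the radius is certified
        s = r + abs(s - r) / 2           # move the region radius towards r
        r = (L_phi - mu * m_bar) / lipschitz_on(s)
    return r
```

With a constant Lipschitz bound the radius is fixed from the first evaluation and the loop merely contracts `s` onto it; a region-dependent bound makes `r` itself move between iterations.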
\begin{algorithm}[t] \caption{Stochastic Lagrangian Reachability} \label{algorithm:SLR} \begin{algorithmic}[1] \REQUIRE time horizon $T$, sequence of timesteps $t_j$ ($t_0\le t_1\le\dots\le t_k=T$), tolerance $\mu\,{\ge}\,1$, confidence level $\gamma\,{\in}\, (0,1)$, loss function $L$, gradient of loss $\nabla_\varphi L$ \vspace*{2mm} \FOR{$(j=1; j\le k; j=j+1)$}\label{line:for loop Reachsets} \STATE $\mathcal{V}, \mathcal{U} \leftarrow \{\}$ \quad(list of visited and random points) \STATE $\mathcal{S}\leftarrow \{\}$ \quad (total covered area) \STATE $\bar{p}\leftarrow 0$, $\bar{m} \leftarrow 0$ \WHILE{$\bar{p} < 1 - \gamma$} \STATE \textbf{sample} $\varphi \in \mathbb{R}^{n-1}$ \STATE $\mathcal{V}\leftarrow \mathcal{V} \cup \{\varphi\}$ \STATE $\mathcal{U}\leftarrow \mathcal{U} \cup \{\varphi\}$ \IF{$x(\varphi) \notin \mathcal{S}$} \STATE $\varphi_{min} \leftarrow$ local minimum starting at $\varphi$ using gradient descent with $\nabla_\varphi L$\label{line:gradient descent} \STATE $\mathcal{V}\leftarrow \mathcal{V} \cup \{\varphi_{min}\}$ \STATE $m \leftarrow L(\varphi_{min})$ \ELSE \STATE $m \leftarrow L(\varphi)$ \ENDIF \IF{$m \le \bar{m}$} \STATE $\bar{m} \leftarrow m$ \STATE \textbf{set} $\mathcal{S}\leftarrow\{\}$ \FORALL{$\varphi_i\in \mathcal{V}$} \STATE \textbf{compute} new radius $r=r(\varphi_i)$ such that $L(\psi)\ge\mu\cdot\bar{m}$, \quad $\forall\psi\colon x(\psi)\in B(\varphi_i,r)^S$\label{line:increase radii} \STATE \textbf{set} $\mathcal{S}\leftarrow \mathcal{S}\cup B(\varphi_i,r)^S$ \ENDFOR \ELSE \STATE \textbf{compute} radius $r=r(\varphi)$ only for current $\varphi$ such that $L(\psi)\ge\mu\cdot\bar{m}$, \quad $\forall\psi\colon x(\psi)\in B(\varphi,r)^S$ \STATE \textbf{set} $\mathcal{S}\leftarrow \mathcal{S}\cup B(\varphi,r)^S$ \ENDIF \STATE $\bar{p} \leftarrow \mathbb{P}(\mu \cdot \bar{m} \le m^\star)$ with $\mu\cdot\bar{m}\le\min_{\varphi\in\mathcal{S}} L(\varphi)$ \ENDWHILE \STATE $\delta_j\leftarrow -\bar{m}$ \ENDFOR \RETURN $(\delta_1,\dots,\delta_k)$
\end{algorithmic} \end{algorithm} \subsection{Stochastic Lagrangian Reachability} By using local gradient computation, global uniform sampling, and safety regions as in Algorithm~\ref{algorithm:radius for safety region}, we present our SLR verification technique, as outlined in Algorithm~\ref{algorithm:SLR}. Given a Neural ODE as in Eq.~\eqref{neuralode} and a set of initial states $\mathcal{B}_0$, we start by specifying a confidence level $\gamma\in(0,1)$ and a tolerance $\mu\,{\ge}\,1$ for the entire Reachtube. The algorithm returns radii $\delta_j$, $j\,{\in}\,\{1,\dots,k\}$, and the stochastic guarantee stating that $\mathcal{B}_j\,(=B_{M_j}(\chi_{t_0}^{t_j}(x_0),\delta_j))$ over-approximates the true conservative Reachsets, up to tolerance $\mu$, with probability higher than $1\,{-}\,\gamma$. This holds also for the whole Reachtube, as it is defined by a series of Reachsets (Def.~\ref{def:Reachtube}). As we reinitialize the variables at the beginning of every new timestep $t_j$, and apply gradient descent to the loss function of the initial polar coordinates $\varphi$ at time $t_0$, we do not accumulate errors from one timestep to the next one. This is a prominent advantage compared to methods using interval arithmetic, which accumulate the wrapping effect, e.g.~\cite{capdTheory, CyrankaCDC18,fansimul}. Another advantage is that we can compute the for-loop in line~\ref{line:for loop Reachsets} of Algorithm~\ref{algorithm:SLR} (thus the Reachsets of the Reachtube) in parallel. At every timestep $t_j$, we sample random points and construct safety regions around them until we reach the desired probability $1\,{-}\,\gamma$ of being inside the tolerance region defined by $\mu$. After sampling a new point, we check if this point is already in the covered area. If not, then we apply gradient descent to find a local minimum and compare this local minimum to the smallest value $\bar{m}$.
Otherwise, if the sampled point is already in the covered area and thus in at least one safety region, we already know the lower bounds for that region and do not look for the local minimum again. This approach is similar to using basins of attraction, but is more scalable because we do not require Hessian computation. In line~\ref{line:increase radii}, we recompute the radii of the safety regions when we find a new smallest value $\bar{m}$. By computing the current probability $\bar{p}$ of having reached the desired confidence level, we check whether we have to resample more points or whether we are able to finish that timestep and save the radius $\delta_j$ of the stochastic Reachset at time $t_j$. \subsection{Stochastic Guarantees of Reachsets}\label{stochastic} In this section, we derive the stochastic convergence guarantees and convergence bounds for finding the global minimum of Eq.~\eqref{eq:optim2} using SLR at every timestep $t_j$. \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{Spherical_cap.pdf} \caption{Illustration of a safety region $B(\varphi,r_\varphi)^S$, which is a spherical cap $\Ystar$. In this figure, the area of cap $\Ystar$ (in light blue) is greater than the volume of an $(n{-}1)$-dimensional ball (in dark blue) with radius $\rho(r_\varphi)$, which is used in the convergence rate. } \label{fig:Probability1} \end{figure} Let $\bar{m}\,{=}\, \min_{\varphi\in\mathcal{V}}L(\varphi)$ be defined as in Eq.~\eqref{eq:local minimum}, and let $m^\star\,{=}\,\min_{\varphi\in \mathbb{R}^{n-1}} L(\varphi)$ be the global minimum and $\varphi^\star$ an argument s.t.\ $L(\varphi^\star)=m^\star$.
We start by defining the probability of $\Bphi$ covering $x(\varphi^\star)$: \begin{align} &\mathbb{P}(\Bphi\owns x(\varphi^\star))\nonumber\\ &= \mathbb{P}\left(\| x(\varphi^\star)- x(\varphi)\|_2\le \rmu\right)\nonumber\\ &= \mathbb{P}(x(\varphi)\in \Ystar) = \mathbb{P}(\Ystar)\label{eq:probability region} \end{align} with $\rmu$ as defined in Eq.~\eqref{eq:safety radius} and $\Ystar = B(\varphi^\star,r_\varphi)^S$ being the spherical-cap in Fig.~\ref{fig:Probability1}. By using the area of the spherical cap $\Ystar$ and the area of the initial ball's surface $\mathcal{B}_0^S$, the probability defined by Eq.~\eqref{eq:probability region} can be described as follows: \begin{align} \mathbb{P}(\Ystar) = \frac{\area(\Ystar)}{\area(\mathcal{B}_0^S)} \end{align} The area of $\Ystar$ can be computed using the formulas in~\cite{hypersphericalCap}. Next we derive some probabilities: \begin{align} \mathbb{P}(\Bphi[\varphi_j]\not\owns\varphi^\star) &= 1 - \mathbb{P}(\Ystar[\varphi_j])\nonumber\\ \mathbb{P}(\forall \varphi\in\mathcal{U}\colon \Bphi[\varphi]\not\owns\varphi^\star) &= \prod_{\varphi\in\mathcal{U}} \left( 1 - \mathbb{P}(\Ystar[\varphi]) \right)\nonumber\\ \mathbb{P}(\exists \varphi\in\mathcal{U}\colon\Bphi[\varphi]\owns\varphi^\star) &= 1 - \prod_{\varphi\in\mathcal{U}} \left( 1 - \mathbb{P}(\Ystar[\varphi]) \right)\label{eq:probability of finding min} \end{align} Using Theorem~\ref{thm:safety region radius}, if $\varphi^\star\in\Bphi[\varphi]$ for some $\varphi\in\mathcal{U}$, then $\mu\cdot\bar{m}\le L(\varphi^\star) = m^\star$ holds, and thus: \begin{align}\label{eq:probability of mu} \begin{split} &\mathbb{P}(\mu\cdot\bar{m}\le m^\star) \ge\\ & \mathbb{P}(\exists \varphi\in\mathcal{U}\colon\Bphi[\varphi]\owns\varphi^\star) \end{split} \end{align} \begin{theorem}[Convergence Guarantees]\label{thm:convergence guarantee} Given $\gamma\in(0,1)$, $\mu\ge 1$, local Lipschitz constant $\lambda_{\mathcal{B}_0^S}$ and $N = |\mathcal{U}|$, where N is the number of
uniformly sampled points during the global search process. Let $\bar{m}\,{=}\, \min_{\varphi\in\mathcal{V}}L(\varphi)$ as defined in Eq.~\eqref{eq:local minimum}, $m^\star\,{=}\,\min_{\varphi\in \mathbb{R}^{n-1}} L(\varphi)$ the global minimum, and $\varphi^\star$ an argument s.t.\ $L(\varphi^\star)=m^\star$. Then: \begin{align} \lim_{N\rightarrow\infty}\mathbb{P}(\mu\cdot\bar{m}_N\le m^\star) = 1\label{eq:convergence} \end{align} and thus \begin{align} \forall\gamma\in(0,1),\exists N\in\mathbb{N}\textrm{ s.t. } \mathbb{P}(\mu\cdot\bar{m}_N\le m^\star) \ge 1-\gamma \end{align} \end{theorem} \vspace{2ex} The full proof is provided in the Appendix. \emph{Proof sketch:} By creating a lower bound $r_{bound}$ for all $\rmu$, s.t.\ $\mathbb{P}(\mathcal{C}(\rmu))\ge\mathbb{P}(\mathcal{C}(r_{bound}))$, we underestimate Eq.~\eqref{eq:probability of finding min} by $1-(1-\mathbb{P}(\mathcal{C}(r_{bound})))^N$. Using this bound and Eq.~\eqref{eq:probability of mu}, we show that the convergence guarantee holds. Theorem~\ref{thm:convergence guarantee} shows that in the limit of the number of samples, the reachset constructed by Algorithm~\ref{algorithm:SLR} converges with probability~1 to the smallest ellipsoid that encloses the true reachable set. Note that the algorithm cannot converge to the true reachable set because we approximate the reachset by ellipsoids, while the true reachset might be of arbitrary geometrical shape. Nonetheless, we proved that it provides the smallest possible ellipsoid that contains the true reachset. Moreover, although Theorem~\ref{thm:convergence guarantee} shows that we achieve the tightest elliptical reachsets, it does not determine whether the algorithm can terminate or not, as the theorem is proven in the case of infinite samples. We now prove that SLR indeed converges at a reasonable rate. \subsection{Convergence Rate for SLR} Theorem~\ref{thm:convergence rate} computes a convergence rate for Algorithm~\ref{algorithm:SLR}.
\begin{theorem}[Convergence Rate]\label{thm:convergence rate} Given $\gamma\in(0,1)$, $\mu\ge 1$, local Lipschitz constant $\lambda_{\mathcal{B}_0^S}$, and dimension $n$, let $\varphi_1$ be the first random sample point. We can guarantee that $\mathbb{P}(\mu\cdot\bar{m}\le m^\star)\ge 1-\gamma$ if we perform at most $N_{max}$ iterations of the SLR Algorithm~\ref{algorithm:SLR}, with \begin{equation} \begin{split} &N_{max} = \\ &\ln{\gamma}\left/\ln\left(1-\frac1{2\sqrt{\pi}}\frac{\Gamma(n/2)}{\Gamma((n+1)/2)}\left(\frac{\rho(r_{bound})}{\delta_0}\right)^{n-1}\right)\right.\label{eq:guarantee maximum N} \end{split} \end{equation} and asymptotically it holds that \begin{align} N_{max} = \mathcal{O}\left(-\ln\gamma \left(\frac{\delta_0}{r_{bound}}\right)^{2n}\right),\label{eq:guarantee asymptotically} \end{align} with $r_{bound} = \lambda_{\mathcal{B}_0^S}^{-1}(1-\mu)L(\varphi_1)$ and $\rho(r_{bound}) = r_{bound} \cdot\sin(\pi/2 - \arcsin(r_{bound}/(2\delta_0)))$. \end{theorem} The full proof is provided in the Appendix. \emph{Proof sketch:} As the radius $\rmu$ of the spherical cap is very small, we underestimate the area of the cap by removing the curvature and using the volume of an $n-1$ dimensional ball with radius $\rho(r_{bound})$ as shown in Fig.~\ref{fig:Probability1}. Thus, after finishing our global search strategy for timestep $t_j$, we have the stochastic guarantee that the functional values of every $\varphi\in\mathbb{R}^{n-1}$ are greater than or equal to $\mu\cdot\bar{m}$. This implies that we should initiate the search with a relatively large $\mu=\mu_1$, obtaining for every $\varphi$ a relatively large value of $r_{\varphi,\mu_1}$ and therefore obtain a faster coverage of the search space. Subsequently, we can investigate whether the reachset $\mathcal{B}_j$ with radius $\delta_j=-\mu_1\cdot\bar{m}$ intersects with a region of bad (unsafe) states. If this is not the case, we can proceed to the next timestep $t_{j+1}$.
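Eq.~\eqref{eq:guarantee maximum N} is straightforward to evaluate numerically; a sketch using `math.lgamma` for the Gamma-function ratio (the function name is ours, and the cap probability computed inside is the lower bound from the proof sketch, not the exact cap area):

```python
import math

def n_max(gamma, n, rho, delta0):
    """Sample bound of Eq. (guarantee maximum N): gamma is the confidence
    level, n the state dimension, rho = rho(r_bound), delta0 the radius of
    the initial ball."""
    # lower bound on the probability mass of one safety cap
    p = (1.0 / (2.0 * math.sqrt(math.pi))
         * math.exp(math.lgamma(n / 2.0) - math.lgamma((n + 1) / 2.0))
         * (rho / delta0) ** (n - 1))
    return math.ceil(math.log(gamma) / math.log(1.0 - p))
```

For $n = 2$ the Gamma ratio is $\Gamma(1)/\Gamma(3/2) = 2/\sqrt{\pi}$, so the cap probability collapses to $\rho/(\pi\delta_0)$; shrinking $\gamma$ only grows the bound logarithmically, in line with Eq.~\eqref{eq:guarantee asymptotically}.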
Otherwise, we reduce $\mu$ to $\mu_2 < \mu_1$, which reduces the safety regions $\Bphi$ and thus the already-covered-set $\mathcal{S}$. This means that we continue with our search strategy until the desired probability $1-\gamma$ is reached again for a smaller radius $\delta_j=-\mu_2\cdot\bar{m}$. Accordingly, we can find a first radius for $\mathcal{B}_j$ faster and refine it as long as $\mathcal{B}_j$ intersects with the region of bad states. Theorem~\ref{thm:convergence rate} guarantees convergence of the algorithm. It shows that for a given confidence level $\gamma$, our algorithm terminates after at most $N_{max}$ steps. Essentially, the theorem leads us to the significant result that constructing an ellipsoidal abstraction of the true reachset, with probabilistic guarantees, is a terminating procedure for Neural ODEs. Additionally, the theorem assumes that we know the local Lipschitz constant, which is a reasonable assumption for proving convergence. In practice, one can safely replace the true Lipschitz constant by an upper-bound. \subsection{Computational Complexity} The complexity of Algorithm~\ref{algorithm:gradient descent} depends on the geometry of the loss surface. In particular, Algorithm~\ref{algorithm:gradient descent} may terminate after one iteration in case of a flat surface, whereas an exponential number of iterations may be needed for ill-posed problems, as is well known from convergence analyses of gradient descent \cite{nagy2003steepest, drori2017exact}. The runtime of Algorithm~\ref{algorithm:computing gradient} is determined by the complexity of the ODE solver for simulating the given differential equation. For example, given the number of integration steps (implicitly interpreted as the number of layers in a deep model) $L$, and the time horizon of the simulation $T$, Algorithm~\ref{algorithm:computing gradient} runs in time $\mathcal{O}(L \times T)$ and constant memory cost $\mathcal{O}(1)$ for each layer of a neural network $f$.
The complexity of Algorithm~\ref{algorithm:radius for safety region} depends on the local Lipschitz constant and the smoothness of the flow. Computing the true Lipschitz constant of a neural network is known to be NP-complete \cite{virmaux2018lipschitz}. However, Algorithm~\ref{algorithm:radius for safety region} operates correctly when we replace the true Lipschitz constant by an easier-to-compute upper bound, obtained for instance by means of interval arithmetic. Algorithm~\ref{algorithm:SLR} implements the main routine of our framework. Its complexity for a given confidence score $\gamma$ equals the convergence rate $N_{max}$ proven in Theorem~\ref{thm:convergence rate}, Eq.~\eqref{eq:guarantee asymptotically} for every Reachset. In particular, the runtime of Algorithm~\ref{algorithm:SLR} depends exponentially on the dimension of the given Neural ODE and logarithmically on the confidence score. \section{Conclusions and Future Work}\label{conclusions} In this paper, we considered the verification problem for Neural ODEs. We introduced the SLR verification scheme, which is based on solving a global optimization problem. We designed a forward formulation of the adjoint method for the gradient descent algorithm. We also established strong convergence guarantees for SLR, showing that it can establish tight ellipsoidal bounds for the Neural ODE under consideration, at an arbitrary time horizon. An important future direction will be to improve the current convergence rate, which is exponential in the dimensionality of the Neural ODE network. Existing statistical verification methods are mostly concerned with the verification of (hybrid) dynamical systems having various uncertainties in model parameters, discrete jumps between modes, and/or initial states. We emphasize that reachability computation for Neural ODEs developed at scale will require dedicated methods tailored for that specific purpose. 
\section*{Acknowledgements} The authors would like to thank the reviewers for their insightful comments. RH and RG were partially supported by Horizon-2020 ECSEL Project grant No. 783163 (iDev40). RH was partially supported by Boeing. ML was supported in part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award). SG was funded by FWF project W1255-N23. JC was partially supported by NAWA Polish Returns grant PPN/PPO/2018/1/00029. SS was supported by NSF awards DCL-2040599, CCF-1918225, and CPS-1446832.
\section{Introduction} \label{intro} Many investigations have been conducted in the past half century to understand materials that, upon cooling, do not simply transition from an amorphous fluid state to an ordered solid state. These materials instead go through a glass transition wherein they maintain a disordered arrangement of molecules like a liquid, but have macroscopic physical properties akin to solids with a more crystalline structure. In particular, the viscosity of these materials grows exponentially as the molecular dynamics slow due to a relatively minor change in temperature. As the glass transition is approached, the molecular dynamics become spatially heterogeneous \cite{kob97,donati98,poole98,karmakar14,richert02}, a prevalent characteristic of the glassy slowing down in these materials. Specifically, at any moment there will be regions within the material where the particles exhibit slower mobility, compared to an average particle in the system, while other regions have relatively faster dynamics \cite{donati99,berthier04}. The sizes of the low and high mobility regions grow as the system is cooled further \cite{donati98}. It has long been theorized that the local structure of the material plays a significant role in determining which areas would tend towards slower kinetics as the system evolves \cite{kivelson94,shintani06,coslovich11,royall15,schoenholz16}. In 2004 Widmer-Cooper {\it et al}.~\cite{widmercooper04} provided evidence that the structure of the system was linked to its dynamics by introducing the concept of propensity. The idea is to simulate a system of particles many times over, having the particles always start from the exact same initial spatial configuration. This creates an iso-configurational (IC) ensemble of simulations.
The difference between the simulations is that the initial velocities of the particles are randomized, consistent with the expected distribution of velocities given the temperature of the system. In this way, they observed the trajectory of each particle many times starting from the same spatial configuration, but with no other memory of the previous state. They then defined propensity as \begin{equation} p_i=\langle \Delta r_i^2 \rangle_{IC} \label{propensity} \end{equation} where $\Delta r_i$ is defined as the distance particle $i$ traveled over a specific time, and the averaging is done for the same particle over the iso-configurational ensemble. In particular, the specific time chosen is typically $\tau_\alpha$, the relaxation time scale over which the higher mobility particles have (relatively) large displacements. Widmer-Cooper {\it et al.} found that some particles have low propensity: these particles tend to travel a shorter distance than the system-wide average. Likewise, other particles have a high propensity value, and are more likely to move a large distance -- to rearrange. Their conclusion is that indeed some aspect of the dynamics must be linked to the spatial structure. However, this method does not identify exactly which details of the structure matter, and so subsequent work looked for structural quantities correlated with propensity. The early studies were done with bidisperse systems: mixtures of two particle sizes, chosen to prevent crystallization. Not surprisingly, particles belonging to the smaller species in the bidisperse mixture dominated the high propensity population \cite{widmercooper05}. Later, several promising candidates for structural signals (free volume, size composition of neighbors, and initial potential energy) were found to show no significant relationship with propensity \cite{widmercooper05,widmercooper06,matharoo06}. Other work found that links between dynamics and local structures may be system dependent in binary atomic mixtures \cite{hocky14}.
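In code, the propensity of eq.~(\ref{propensity}) is simply a per-particle average of the squared displacement over the ensemble axis. A minimal NumPy sketch (our own illustration, not the authors' code; periodic boundary wrapping is ignored for brevity), assuming the final positions of each run are stacked into one array:

```python
import numpy as np

def propensity(initial, final):
    """Propensity p_i = <Delta r_i^2>_IC from an iso-configurational ensemble.

    initial : (n_particles, 3) starting configuration shared by all runs.
    final   : (n_runs, n_particles, 3) positions after a chosen lag time
              (typically tau_alpha).
    Returns an array of length n_particles.
    """
    dr = final - initial[np.newaxis, :, :]           # displacement in each run
    return np.mean(np.sum(dr**2, axis=-1), axis=0)   # average |dr|^2 over runs
```

A particle that moves by unit distance in every run has propensity 1, regardless of the direction of motion in each run.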
In a study of supercooled water, specific tetrahedral structures were found to correlate with low propensity regions \cite{malaspina09}. A study of a Lennard-Jones system found a connection between propensity and the number of neighbors of various types of particles \cite{razul11}. A collection of research began to focus on the fact that propensity was spatially correlated and that regions of high and low propensity were present throughout the system \cite{widmercooper04,widmercooper07}. That is, areas existed with a significant portion of high propensity particles, and these regions were therefore more likely to undergo rearrangement on $\tau_\alpha$ timescales. These domains could be found by coarse-graining the propensity values in the system, and good results could be obtained at low temperature with IC ensembles consisting of as few as 50--100 simulations \cite{widmercooper07}. Of course this does not mean that such a region will always relax, as in order to do so the particles in such a region would have to move in a coordinated manner \cite{appignanesi07}. Indeed it was found that the propensity calculated for motions over $\tau_\alpha$ timescales was correlated with the Debye-Waller factor, which is a measure of the variability in particle position during the $\beta$-relaxation time scale (a shorter time scale than $\tau_\alpha$) \cite{widmercooper06}. Thus, longer term rearrangements are signaled by shorter term, coordinated motion in high propensity areas. Some studies examined the connection between high-propensity regions and the potential energy landscape. Propensity measured over a time corresponding to the maximum non-Gaussian parameter (also a $\beta$-relaxation timescale) was found to correlate well with motions over localized, unstable saddles in the potential energy landscape \cite{coslovich06}.
Other work showed that one can use the potential energy function to identify low-frequency, quasilocalized normal modes of vibration, and that the locations of these vibrational modes match well with higher propensity regions \cite{widmercooper08}. Finally, the evolution of propensity itself occurs on intermediate time scales as meta-basin transitions occur \cite{appignanesi09,appignanesi09a}. All of these prior investigations have been simulation based, due to the fact that in order to measure propensity an iso-configurational ensemble has to be created: the initial positions of all particles need to be realized many times. Creating such an ensemble experimentally would present great challenges. If those challenges were overcome, however, the calculation of propensity would be no more difficult than the calculation of other dynamical measures. A good candidate for attempting such an experiment is colloids. These systems consist of small ($\sim 1$~$\mu$m diameter) solid particles in a liquid, which undergo Brownian motion \cite{hunter12rpp}. Rather than controlling temperature, one controls the volume fraction, the fraction of volume occupied by the solid particles \cite{pusey86}. As this is increased toward the glass transition volume fraction, the slowing down of the dynamics has a great many similarities with the glass transition in systems of small molecules or polymers \cite{hunter12rpp}. A wide variety of investigations have shown that these materials are model glass formers \cite{pusey86,vanmegen91,rosenberg89,marshall90,bartsch92,vanmegen94,hartl01}. In particular, colloidal samples exhibit dynamical heterogeneity near the glass transition \cite{marcus99,kegel00,weeks00,latka09,mazoyer09}, the key quantity probed by propensity measurements.
One could imagine initializing a set of colloidal particles, letting the system evolve long enough for it to have irreversibly rearranged itself, and then using laser tweezers to bring the particles back to their original configuration. The Brownian motion of the solvent would ensure that the initial movement away from this set configuration is random. This would achieve physically what has only ever been simulated. It would likely require a quasi-two-dimensional experiment, which is fairly common with colloids \cite{marcus99,cicuta03,konig07,ebert08,chen10,zhang11,vivek17,illing17}. In particular, experiments by the Ganapathy group have successfully used laser tweezers to manipulate quasi-2D colloidal glass systems \cite{gokhale14,himanagamanasa15}, although only to pin particles rather than move them to specific locations. One physical reality cannot be avoided: experimental colloids, unlike their simulated counterparts, are polydisperse. As physical particles, they do not all have the same size. The polydispersity is typically quantified by the standard deviation of the particle size distribution divided by the mean value, and it is often about 5\% \cite{bosma02,poon12,kurita12}. In this paper, we seek to understand how the measurement of propensity would be affected by such diversity in particle size. We do this by simulating the well-characterized bidisperse Kob-Andersen glass-forming system \cite{kob95a,kob95b}, altered by introducing polydispersity. The polydispersity does not dramatically change the system, although it does slightly enhance the propensity signal (the variability between the lowest and highest observed propensities). We then demonstrate that to reconstruct the initial state, it is not sufficient to merely bring back a similar size particle to an initial position; it is crucial to bring back the same particle to the initial position.
Returning to the original motivating question behind propensity \cite{widmercooper04}, this demonstrates that the structure-dynamics link must include particle size as part of what is meant by local structure. This is fairly obvious for glasses composed of mixtures of small molecules or atoms with distinct and identical sizes (such as metallic glasses \cite{inoue11,zhang14,zhang15,kim15}), and a bit more intriguing and nontrivial for mildly polydisperse colloidal glasses. \section{Methods} \subsection{Simulations} \label{simulations} \begin{figure}[t] \centerline{ \includegraphics[width=6cm]{showSize3}} \caption{One of the ten initial configurations used to generate ensembles for $T=0.475$ and $\delta=1\%$. Purple shades are for species A particles while greens represent the smaller species B. Darker hues represent the plus variants of each species while lighter shades indicate the minus variants.} \label{showsize} \end{figure} We start with the standard Kob-Andersen bidisperse glass former in 3 dimensions \cite{kob95a}. This mixture is governed by the Lennard-Jones (LJ) potential \cite{lennardjones31}, which is of the form \begin{equation} \label{lj} V_{LJ}=4\epsilon \bigg{[}\Big{(}\frac{\sigma}{r}\Big{)}^{12}-\Big{(}\frac{\sigma}{r}\Big{)}^{6}\bigg{]}. \end{equation} The interactions for the two species in the system, denoted $A$ and $B$, are set by the parameters $\sigma_{AA}=1.0, \epsilon_{AA}=1.0, \sigma_{BB}=0.88, \epsilon_{BB}=0.5, \sigma_{AB}=0.8,$ and $\epsilon_{AB}=1.5$. Our mixture consists of $N_A=800$ and $N_B=200$ particles in a cube with periodic boundary conditions and side length $9.4$, which matches the density used in Ref.~\cite{kob95a}. All distances are given in terms of $\sigma_{AA}$, the time step is set to 0.005 for all simulations, time is given in reduced units of $(m\sigma_{AA}^2/\epsilon_{AA})^{1/2}$, and temperature is given in reduced units of $\epsilon_{AA}/k_B$.
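For reference, eq.~(\ref{lj}) in code, along with the two properties worth remembering: $V_{LJ}(\sigma)=0$ and the minimum sits at $r=2^{1/6}\sigma$ with depth $-\epsilon$. This is a standalone illustration, independent of the simulation package:

```python
def v_lj(r, sigma=1.0, epsilon=1.0):
    """Lennard-Jones pair potential, V = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)
```

For instance, `v_lj(1.0)` vanishes and `v_lj(2 ** (1 / 6))` returns the well depth $-\epsilon = -1$ in reduced units.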
Simulations are conducted with the LAMMPS \footnote{http://lammps.sandia.gov} software package, which uses the Verlet algorithm, and are run in the NVT ensemble~\cite{lammps1}. For temperatures of $T=1.0$ and $T=0.475$, we initialize 10 different particle configurations by running for $\tau=5\times 10^4$. We can be confident that all systems were equilibrated at this point, as $\tau_\alpha \approx 5500$ is the structural relaxation time for our coldest system. \begin{table*}[t] \centering \begin{tabular}{r@{\hspace{8pt}}l@{\hspace{20pt}}r@{\hspace{8pt}}l@{\hspace{20pt}}r@{\hspace{8pt}}l} $\sigma_{AA} = 1.00$ & $\epsilon_{AA} = 1.00$ & $\sigma_{BB} = 0.88$ & $\epsilon_{BB} = 0.50$ & $\sigma_{AB} = 0.80$ & $\epsilon_{AB} = 1.50$ \\ \hline $\sigma_{A_+A_+} = 1.01$ & $\epsilon_{A_+A_+} = 1.01$ & $\sigma_{B_+B_+} = 0.8888$ & $\epsilon_{B_+B_+} = 0.5050$ & $\sigma_{A_+B_+} = 0.8076$ & $\epsilon_{A_+B_+} = 1.51425$ \\ $\sigma_{A_-A_-} = 0.99$ & $\epsilon_{A_-A_-} = 0.99$ & $\sigma_{B_-B_-} = 0.8712$ & $\epsilon_{B_-B_-} = 0.4950$ & $\sigma_{A_-B_-} = 0.7924$ & $\epsilon_{A_-B_-} = 1.48575$ \\ $\sigma_{A_+A_-} = 1.00$ & $\epsilon_{A_+A_-} = 1.00$ & $\sigma_{B_+B_-} = 0.8800$ & $\epsilon_{B_+B_-} = 0.5000$ & $\sigma_{A_+B_-} = 0.7724$ & $\epsilon_{A_+B_-} = 1.44825$ \\ \hline $\sigma_{A_+A_+} = 1.05$ & $\epsilon_{A_+A_+} = 1.05$ & $\sigma_{B_+B_+} = 0.9240$ & $\epsilon_{B_+B_+} = 0.5250$ & $\sigma_{A_+B_+} = 0.8380$ & $\epsilon_{A_+B_+} = 1.57125$ \\ $\sigma_{A_-A_-} = 0.95$ & $\epsilon_{A_-A_-} = 0.95$ & $\sigma_{B_-B_-} = 0.8360$ & $\epsilon_{B_-B_-} = 0.4750$ & $\sigma_{A_-B_-} = 0.7620$ & $\epsilon_{A_-B_-} = 1.42875$ \\ $\sigma_{A_+A_-} = 1.00$ & $\epsilon_{A_+A_-} = 1.00$ & $\sigma_{B_+B_-} = 0.8800$ & $\epsilon_{B_+B_-} = 0.5000$ & $\sigma_{A_+B_-} = 0.6620$ & $\epsilon_{A_+B_-} = 1.24125$ \\ & & & & $\sigma_{A_-B_+} = 0.9380$ & $\epsilon_{A_-B_+} = 1.75875$ \\ \end{tabular} \caption{Lennard-Jones interaction parameters.
The top row is for the Kob-Andersen binary \cite{kob95a}. The other sections are for the quartet system with $\delta=1\%$ (middle) and $\delta = 5\%$ (lower). The left section of this table shows interactions between A type particles, the middle section is for B types, and the right section is for mixed AB interactions.} \label{ljparam} \end{table*} \subsection{Introduction of Polydispersity} To study the effects of polydispersity, the standard Kob-Andersen bidisperse system must be altered so that there are more than just two particle sizes. While it would be ideal to draw particle sizes from a continuous distribution, if $n$ distinct sizes exist in our system we would need to define $n(n+1)/2$ LJ potentials. As a means of keeping computational times reasonable, we use a quartet system that contains a larger ($+$) and smaller ($-$) variant for each of the two original particle sizes. The magnitude of the variation is controlled by the parameter $\delta$. With four different particle sizes, the interactions of the system are now governed by 10 distinct particle combinations, and new LJ parameters $\sigma$ and $\epsilon$ can be calculated using the following equations. For $X, Y \in \{A,B\}$ and $i,j \in \{+,-\}$ we define: \begin{eqnarray} \label{ssame} \sigma_{X_+X_+} &=& (1+\delta)\sigma_{XX}, \\ \sigma_{X_+X_-} &=& \sigma_{XX}, \nonumber \\ \sigma_{X_-X_-} &=& (1-\delta)\sigma_{XX} \nonumber, \end{eqnarray} \begin{equation} \label{sdiff} \sigma_{A_iB_j} = 2\big(\sigma_{B_jB_j}+0.02\big)-\sigma_{A_iA_i}, \end{equation} \begin{equation} \label{epsilon} \epsilon_{X_iY_j} = \epsilon_{XY}\frac{\sigma_{X_iY_j}}{\sigma_{XY}}. \end{equation} The form of eq.~\ref{sdiff} was chosen because it reproduces the scaling used in the original bidisperse system for forces between $A$ and $B$ type particles. The rather favorable $AB$ interactions produced by these parameters discourage the original system from crystallizing.
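These equations can be checked numerically against the tabulated parameters. The following sketch (our own, with dictionary keys of our choosing) regenerates the quartet entries from $\delta$ and the parent $\sigma_{XY}$, $\epsilon_{XY}$ values:

```python
# Parent Kob-Andersen parameters: (sigma, epsilon) per species pair.
BASE = {("A", "A"): (1.00, 1.00), ("B", "B"): (0.88, 0.50), ("A", "B"): (0.80, 1.50)}

def quartet_params(delta):
    """Sigma and epsilon for the quartet system, following eqs. (3)-(5)."""
    sigma = {}
    # Same-species pairs, eq. (3): scale the parent sigma by (1 +/- delta).
    for X in "AB":
        s0, _ = BASE[(X, X)]
        sigma[(X, "+", X, "+")] = (1 + delta) * s0
        sigma[(X, "+", X, "-")] = s0
        sigma[(X, "-", X, "-")] = (1 - delta) * s0
    # Mixed pairs, eq. (4): sigma_AiBj = 2*(sigma_BjBj + 0.02) - sigma_AiAi.
    for i in "+-":
        for j in "+-":
            sigma[("A", i, "B", j)] = (2 * (sigma[("B", j, "B", j)] + 0.02)
                                       - sigma[("A", i, "A", i)])
    # Eq. (5): epsilon scales with sigma relative to the parent pair.
    eps = {}
    for (X, i, Y, j), s in sigma.items():
        s0, e0 = BASE[(X, Y)]
        eps[(X, i, Y, j)] = e0 * s / s0
    return sigma, eps
```

For example, $\delta=1\%$ gives $\sigma_{A_+B_+} = 2(0.8888+0.02)-1.01 = 0.8076$ and $\epsilon_{A_+B_+} = 1.5 \times 0.8076/0.8 = 1.51425$, matching the table.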
Equation~\ref{epsilon} is the result of requiring the Lennard-Jones force, $F=-\frac{dV}{dr}$, evaluated at the distance where the potential energy $V_{LJ}=0$, to be held constant within each column of Table~\ref{ljparam}. This corresponds to changing the parent's $\epsilon$ value by the same percentage as the parent's $\sigma$ value was changed. A sample configuration at equilibrium is shown in fig.~\ref{showsize}. Here we see that the system is well mixed, in that the different sizes are randomly distributed over the simulation volume. Table~\ref{ljparam} shows the values of $\sigma$ and $\epsilon$ for the binary system, a quartet with $\delta=1\%$, and a quartet with $\delta=5\%$. The masses of all the particles are kept fixed at $m=1$, which is the same choice as the original Kob-Andersen system. For comparison with colloids, this is a reasonable choice, as colloids are sufficiently small that their mass is not an important parameter for their dynamics, which are purely Brownian and thus mass-independent. A useful consequence of eq.~\ref{ssame} is that the colloidal polydispersity of the $A$ and $B$ species in our mixture (defined as the standard deviation of sizes $\sigma$ divided by the mean size \cite{poon12}) is exactly equal to $\delta$. Of course, there is a qualitative difference between the bi-modal distribution that we are using for each species and a continuous distribution, but the numerical value of the polydispersity is the same. \begin{figure} \includegraphics[width=8cm]{dynStruc1.eps} \caption{\textbf{(a)} Mean squared displacement. Color indicates a $\delta$ value of $0\%$, $1\%$, $2\%$, $3\%$, and $5\%$ for dark blue, light blue, green, orange, and red respectively. \textbf{(b)} Self part of the intermediate scattering function with the same coloring scheme as panel (a). The wavevector used to calculate each $F(\Delta t;T,\delta)$ curve is taken by finding the $k$ value where the maximum $S(k)$ is observed for the corresponding $T$ and $\delta$ [see fig.~\ref{statst}(b) inset].
Values for $k$ range from 7.1 to 7.3. For both (a) and (b), all curves represent the geometric average value of their respective function over the 10 different initial configurations we use.} \label{dynst} \end{figure} Before we consider propensity measurements, we first wish to confirm that our quartet system is qualitatively similar to the original Kob-Andersen bidisperse system in terms of structure and dynamics. Conceptually, we mimic how analysis would be done for an experimental system where polydispersity was ignored: we treat the A and B particles separately, but do not distinguish between the $+$ and $-$ variants. In an experiment with a nominally bidisperse system, the A and B particles are presumed distinguishable, but the differences between particles of a given type are not \cite{cates15}. The results below focus on the A particles (both $A_{+}$ and $A_{-}$), as is often done for this system. We initialize 10 separate systems for each value of $\delta$ and bring them to equilibrium as described above. Each system is then run for $\tau = 10^4$ in order to determine structural and dynamical quantities. We start with dynamics; in particular, we calculate the mean squared displacement and the self part of the intermediate scattering function for each system. Figure~\ref{dynst} shows the results for all $A$ type particles. For both of these functions, we calculate the geometric mean for the set of 10 systems representing each $T$ and $\delta$ combination, and it is these means which are displayed in fig.~\ref{dynst}. The geometric mean is used because for $T=0.475$ with $\delta=5\%$, there is a high level of variation in both functions among the individual configurations. By determining when $F(\Delta t; T,\delta)$ decays to $e^{-1}$ we define the relaxation time $\tau_\alpha(T,\delta)$. In our subsequent analysis, when treating a system with a given $T$ and $\delta$, we always use that system's specific $\tau_\alpha$.
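The threshold definition of $\tau_\alpha$ just described can be sketched as a simple crossing search with linear interpolation. This is an illustrative helper of our own (not the authors' code), assuming $F$ has already been tabulated on a grid of lag times:

```python
import numpy as np

def tau_alpha(t, F):
    """Relaxation time: first time the self-ISF F(t) decays through 1/e.

    t, F : 1-D arrays of lag times and ISF values; F is assumed to start
           above 1/e. Interpolates linearly between the bracketing samples.
    """
    target = np.exp(-1.0)
    below = np.nonzero(F < target)[0]
    if below.size == 0:
        raise ValueError("F(t) never decays below 1/e on this time window")
    k = below[0]
    # Linear interpolation between samples k-1 and k.
    frac = (F[k - 1] - target) / (F[k - 1] - F[k])
    return t[k - 1] + frac * (t[k] - t[k - 1])
```

For a purely exponential decay $F = e^{-t/\tau}$ this recovers $\tau_\alpha = \tau$ up to the grid resolution.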
Again, the motivation here is to mimic what would be done in a colloidal experiment, where one would measure $\tau_\alpha$ of the actual system rather than considering a hypothetical system where each member of a particle species has the exact same size. Additionally, we can now compare our new quartet system to the original binary. For both panels in fig.~\ref{dynst}, all data lie nearly on top of the functions plotted for the binary case, which is shown in dark blue. The lone exception is when $\delta = 5\%$ and $T=0.475$. Under these conditions we see significant deviations, indicating that the dynamics of the system have slowed considerably. To characterize static structure, fig.~\ref{statst}(a) shows the pairwise correlation function and fig.~\ref{statst}(b) shows the static structure factor; the data are for $A$ type particles and at all values of $\delta$ examined. Again, no distinction is made between the two variants of the $A$ particles, in order to mimic calculations done in colloidal experiments. Similar to the dynamical measures, these static quantities show little variation from the binary system except when the temperature is low and $\delta$ is at $5\%$. One possible explanation for why we see changes in the static and dynamical functions at high levels of $\delta$ is that as polydispersity increases, the difference between $A$ and $B$ type particles starts to become blurred. Notice that for $\delta=1\%$ in Table~\ref{ljparam}, the interaction distances $\sigma$ for all possible $AA$ interactions are greater than those for any of the possible $BB$ interactions. These in turn are greater than any of the $\sigma$ values for mixed species ($AB$) interactions, which implies that $A$ and $B$ particles prefer to be neighbors -- mixing is favorable. At higher values of $\delta$ the relative ordering of the sizes changes. At $\delta=3\%$, $\sigma_{A_-B_+} > \sigma_{B_-B_-}$ (0.8828 as compared to 0.8536).
At $\delta=5\%$, we have $\sigma_{A_-B_-} = 0.7620$ and $\sigma_{A_+B_-} = 0.6620$, meaning that these pairs of particle types have an even stronger preference to be neighbors, and potentially create regions of slower dynamics -- the cost of separating these pairs, in terms of requiring additional volume, is more severe. We also have $\sigma_{A_+A_+}=1.050$, $\sigma_{A_-A_-} = 0.950$, and $\sigma_{B_+B_+}=0.924$, so the distinction between the $A_+$ and $A_-$ particles is stronger than the distinction between $A_-$ and $B_+$ particles. In a colloidal setting this would represent the inability to distinguish a smaller variant of a `large' particle from a large variant of a `small' particle. \begin{figure} \includegraphics[width=8cm]{statStruc2.eps} \caption{\textbf{(a)} Radial distribution function for species $A$ particles; both $+$ and $-$ variants where applicable. Color indicates a $\delta$ value of $0\%$, $1\%$, $2\%$, $3\%$, and $5\%$ for dark blue, light blue, green, orange, and red respectively. \textbf{(b)} Static structure factor for all variants of species A particles. Coloring scheme is the same as panel (a). In both of the main panels the functions for $T=1.0$ have had 2 added to them so that they are separated from the lower temperature plots. The insets show the first peak in $g(r)$ and $S(k)$ respectively. For the insets, the addition of 2 has been removed.} \label{statst} \end{figure} On the basis of figs.~\ref{dynst} and \ref{statst}, we conclude that the quartet system ($\delta > 0$) is reasonably similar to the original bidisperse system, with the possible exception of $\delta = 5\%$, which has slower dynamics. All of these systems are reasonable glass-formers. \section{Propensity Results} \subsection{Simulation of Polydispersity} \begin{figure}[b] \includegraphics[width=8cm]{pdist.eps} \caption{Probability distributions of propensities for type A particles at the structural relaxation times for various systems as labeled.
For all series, the data from each of the 10 ensembles is combined and normalized. \textbf{(a)} Normalization is done by dividing by the mean across all 10 ensembles that make up each individual distribution. The coefficient of variation for these distributions is 0.30 for dashed, 0.61 for dots, and 0.76 for solid. \textbf{(b)} The solid line is a reproduction of the solid line distribution in panel (a). The other series represent a breakdown of that low temperature/high polydispersity distribution into the two variant sizes ($A_+$ and $A_-$). Here each distribution is normalized by dividing by the mean value of the distribution that contains all size variants. The coefficient of variation for the dash-dot line is $0.59$ and for the dashed line is $0.86$. } \label{pdist} \end{figure} After all systems are equilibrated, we construct an iso-configurational ensemble of 100 runs for each system. From this we are able to calculate the propensity $p_i$ of each particle (eq.~\ref{propensity}). While the value of propensity for any given particle indicates its own ability to move independently of the initial kinetics of the system, we are more interested in the distribution of propensities across the entire system. Figure~\ref{pdist}a displays the probability distribution of propensities for $A$ type particles under several conditions. Each condition is modeled using 10 separate ensembles, and the propensity values for all ensembles are combined into a single distribution. In order to make comparisons between different conditions easier, the propensities are shown normalized by the mean value of the distribution. If the distribution is narrow, with all particles having nearly identical values, then measuring propensity is not conveying much information about the initial structures in the system. However, if there is a broad distribution of propensities, then it should be easier to find structures that either impede or facilitate mobility.
The simulations at $T=0.475$ show a wider variety of outcomes than those at the higher temperature. At this low temperature, it becomes less probable to observe particles with propensities near the mean value of the system and more probable to find ones with both extremely low and extremely high propensities; propensity conveys more information. \begin{figure}[t] \includegraphics[width=8cm]{spdvd.eps} \caption{Each filled symbol represents the average value of the coefficient of variation in propensity of type $A$ particles at the structural relaxation time across the 10 different initial configurations, at temperatures as indicated. The downward pointing open triangles correspond to the $c_v$ calculated just for the $A_-$ particles, and the upward pointing open triangles correspond to the $A_+$ particles. In these cases $c_v$ was calculated by computing the standard deviation of the propensity of the individual particle type, and then dividing by the mean propensity for all $A$ particles. Separate $\tau_\alpha$ are calculated for each $T$ and $\delta$ combination.} \label{spdvd} \end{figure} In order to quantify the width of any given distribution, and thereby the heterogeneity of observed propensities, we calculate the coefficient of variation, $c_v$, of the propensity probability distributions. $c_v$ is defined as the standard deviation of a distribution divided by its mean value: $\sigma_{p_i}/\langle p_i\rangle$. A larger $c_v$ means that propensity is more ``interesting'' -- there is more difference between the low and high propensity particles. When calculating $c_v$ we only consider the $A$ type particles, though both the plus and minus variants are included. The solid symbols in fig.~\ref{spdvd} show the coefficient of variation for various values of polydispersity at both $T=1.0$ and $T=0.475$.
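The width measure used here is straightforward to compute. A sketch of our own, including the variant-restricted version used for the open symbols, where the standard deviation of a subset is still divided by the mean over all $A$ particles:

```python
import numpy as np

def cv(p, subset=None):
    """Coefficient of variation of propensities, sigma_p / <p>.

    p : propensity values for all A particles.
    subset : optional boolean mask (e.g. selecting the A_- particles); the
             standard deviation of the subset is normalized by the mean over
             *all* A particles, matching the variant-restricted c_v.
    """
    mean_all = np.mean(p)
    values = p if subset is None else p[subset]
    return np.std(values) / mean_all
```

For instance, propensities $\{1, 3\}$ have mean 2 and standard deviation 1, giving $c_v = 0.5$.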
For the higher temperature, the introduction of polydispersity does not significantly change the propensity distribution width. This is unsurprising given the measurements shown in fig.~\ref{dynst} for this temperature. The MSD curve does not display a glassy plateau, suggesting that the dynamics are relatively spatially homogeneous across the system. We can conclude from this that the local structure does not play a significant role in determining the dynamics, and so modifying that structure by adding variation to the particle sizes should not have much effect on the dynamics or the heterogeneity of those dynamics. However, at the lower temperature fig.~\ref{spdvd} shows that there is generally an increase in the coefficient of variation of the propensity as the polydispersity is increased. The bidisperse system is more glassy at this temperature, as evidenced by the plateau in the MSD curve, and so we would expect that altering the local structures at this temperature would affect the spatial heterogeneity of the dynamics. It appears that with the higher values of $\delta$ the system has become more glassy, in that the MSD plateau is longer and more extreme propensity values are observed. The solid line in fig.~\ref{pdist}a shows the distribution for $T=0.475$ and $\delta=5\%$, which has a notably long tail on the right, with some particles displaying over five times the mean propensity value. In general we find that these high propensity tails are dominated by the $A_-$ variants, and that as $\delta$ increases, these smaller variants make up a larger percentage of all high propensity particles. Figure~\ref{pdist}(b) shows a breakdown of the propensity distribution for $T=0.475$ and $\delta=5\%$ by variant size, and the open symbols of fig.~\ref{spdvd} show the $c_v$ for the variant distributions.
It is important to note that for these variant distributions, the normalization and the calculation of $c_v$ were done by dividing by the average value of all $A$ type particles, which makes for a more useful comparison. Figure~\ref{pdist}(b) indicates that the high propensity tail of the distribution consisting of all $A$ type particles is dominated by the $-$ variants. Correspondingly, fig.~\ref{spdvd} shows that the $c_v$ value for distributions containing only $-$ variants (downward triangles) is higher at all $\delta$ values than the values for the $+$ variants (upward triangles). We are unable to explain the drop in the coefficient of variation observed from $\delta=0\%$ to $\delta=1\%$ for the $T=0.475$ data. To check the results we created a second set of 10 ensembles for $\delta=0\%$, for which we get a statistically similar value to the one plotted in fig.~\ref{spdvd}. Additionally, we created 10 ensembles where $\delta=0.001\%$. For this set of ensembles we found that $c_v=0.56$, which is also below the presented value for $\delta=0\%$. Lastly, the general trend observed is not sensitive to the time interval over which propensity was calculated. A similar trend existed at all late time scales, including the time of the maximum non-Gaussian parameter for displacements ($t^*$) \cite{kob97}. We do note that the same sequence can be observed in the self part of the intermediate scattering functions (ISF) shown in fig.~\ref{dynst}b. At late timescales, there is a slight drop in the ISF from $\delta=0\%$ to $1\%$ and $2\%$, but then a rise for $3\%$ and $5\%$. This correlation suggests a relationship between the two measurements (ISF and $c_v$). \subsection{Simulation of Human Error} One of the larger challenges that would have to be overcome if a propensity measurement were to be made on a colloidal system is the fact that this would involve resetting the physical system to its initial configuration many times, for example by using laser tweezers.
We envision this to be a difficult process, in part because of the polydispersity of the particles. We want to determine how measurements of propensity would be affected if the system is reset without regard to the specific size of each particle. That is, if an $A$ type particle exists at a specific location in the initial configuration, how important is it that the exact same particle is placed back at that location, compared to an $A$ type with a slightly different size? To investigate this, we alter the procedure that creates the iso-configurational ensembles. For each member of the ensemble, not only are the velocities of each particle randomized, but additionally a random subset of size $S$ is selected to be swapped with a different variant of the same species. That is, $A_+ \leftrightarrow A_-$ and $B_+ \leftrightarrow B_-$. Swaps between species were not examined. In each case the swaps are distributed with a $4:1$ ratio between $A$ and $B$ type particles, in keeping with the ratio of those particles in the system. As an example, if $S=5$, then 5 pairs of particles will have their size variants swapped, with four pairs being $A$ type particles and one pair being $B$ types; thus ten particles will be ``incorrect'' in the reconstruction. The pairs that are chosen are random across each of the 100 simulations in the ensemble, so that for $S=5$ each particle would on average be chosen to be part of a swapped pair once in the ensemble ($100\times 5\times 2/1000 = 1$). Ensembles are created with $S = 15, 30, 60, 125, 250$. When 250 swaps are made, exactly half of the particles in the system are involved in a swap, as $N=1000$, making this the condition under which we achieve the highest level of random changes to the system. The inclusion of this maximal level of randomization simulates an experimenter that attempts to construct an iso-configurational ensemble ignoring any difference in particle size due to polydispersity.
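The swap construction can be sketched as follows. This is our own illustrative implementation (data layout and names are ours), assuming each particle carries a $+$/$-$ variant flag; each swap exchanges the flags of one $+$ and one $-$ particle of the same species, so $2S$ particles end up ``incorrect'':

```python
import random

def apply_swaps(variants, a_ids, b_ids, S, rng=None):
    """Swap the +/- size variants of S randomly chosen pairs, 4:1 A:B ratio.

    variants : dict mapping particle id -> '+' or '-'
    a_ids, b_ids : ids of the A and B particles
    Returns a new variant assignment with 2*S particles changed.
    """
    rng = rng if rng is not None else random.Random(0)
    out = dict(variants)
    n_a = (4 * S) // 5          # 4:1 split, matching the 800:200 composition
    n_b = S - n_a
    for ids, n in ((a_ids, n_a), (b_ids, n_b)):
        plus = [i for i in ids if out[i] == "+"]
        minus = [i for i in ids if out[i] == "-"]
        for p, m in zip(rng.sample(plus, n), rng.sample(minus, n)):
            out[p], out[m] = "-", "+"   # exchange the two particles' sizes
    return out
```

Pairing a $+$ with a $-$ of the same species keeps the overall composition of the system unchanged; only the assignment of sizes to positions is scrambled.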
Assembling the ensemble in such a manner would be considerably easier, and we wish to understand whether such a shortcut could be justified. \subsection{Results from Introduction of Error} Figure~\ref{spdvs} shows the effect that the number of swaps has on the coefficient of variation of propensity across the system. The top panel shows this for the higher temperature of $T=1.0$. Similar to the results of increasing polydispersity, increasing the number of swaps made at this temperature only causes a slight decrease in the coefficient of variation. Thus the values of propensity observed in these systems appear to be little changed by the introduction of swaps, even for larger amounts of polydispersity (larger $\delta$). Of course, the low $c_v$ for the $\delta=0$ system indicates that at this temperature the dynamics of the system are fairly homogeneous. There is little information to be found in the local structures of the system to give insight into the dynamics. Thus there is little information to be lost, even when many structural changes occur due to the swapping of highly polydisperse particles. \begin{figure}[t] \includegraphics[width=8cm]{spdvs.eps} \caption{Coefficient of variation of propensity values (for $A$ particles and at $\tau_\alpha$) as a function of the number of swaps. Color and shape indicate $\delta$ values of $1\%$, $2\%$, $3\%$, and $5\%$ for blue triangles, green squares, orange pentagons, and red circles respectively. } \label{spdvs} \end{figure} Data could not be collected for $S$ values of 125 and 250 for the highest polydispersity level of $\delta=5\%$. Given the higher temperature, particles can be found farther from the Lennard-Jones potential minimum while the system is at equilibrium, and in particular some particle pairs are closer together than the distance that minimizes their potential energy.
When a large number of swaps occur, it becomes very probable that a $+$ variant is moved to a position where a $-$ variant had strayed far from the potential minimum and close to a $+$ variant. The resulting force on this $++$ particle pair jumps several orders of magnitude, producing extraordinary velocities and eventually causing the software to lose track of at least one of the particles. As it became probable that multiple simulations in the ensemble would lose particles, we only present data for the lowest four values of $S$ for $\delta=5\%$. Data for the missing points could possibly be collected if the time step of the simulations were made smaller, but this seems unlikely to affect the conclusions. We assume that in a colloidal system, mistakes causing such an increase of force would be readily apparent and could be corrected. \begin{figure}[t] \includegraphics[width=8cm]{pdist1} \caption{Probability distributions of propensity of type $A$ particles and the effect of introducing errors into the construction of the iso-configurational ensembles. The distributions for no errors, $S=0$, with $\delta=0\%$ and $\delta=5\%$ are reproduced from fig.~\ref{pdist}a as the dotted line and the solid dark blue line respectively. As the number of errors increases, the distributions become narrower, indicating a decrease in the ability of the initial structure to clearly predict future dynamics. } \label{pdist1} \end{figure} \begin{figure}[b] \centerline{\includegraphics[width=8cm]{3dpair4}} \caption{The color of each particle indicates the particle's propensity as compared to the system mean propensity, for a system with zero swaps (left) and 250 swaps (right). The redder a particle appears, the higher its level of propensity, while blue indicates lower propensity particles. White particles experience the mean level of propensity.
Both systems have $T=0.475$, $\delta=1\%$, and have the exact same particle configuration, highlighting how accidental mistakes (the swaps) decrease the propensity signal present in the zero-mistakes system (left).} \label{3dpair} \end{figure} \begin{figure*} \includegraphics[width=17cm]{cvvt.eps} \caption{The coefficient of variation for propensity as a function of time. All panels represent simulations done at $T=0.475$, but for the four different levels of polydispersity $\delta$. The solid circles in each plot show the function for the original KA binary mixture. The dark blue solid line in each panel shows the $S=0$ data for simulations in which no swaps were done when constructing the iso-configurational ensemble. In (a), the number of swaps $S$ increases as 15, 30, 60, 125, and 250 as the curves decrease below the dark blue solid line ($S=0$); the colors and line styles are identical in the other panels. The open circles are for a binary mixture at $T=1.0$.} \label{cvvt} \end{figure*} The lower panel of fig.~\ref{spdvs} shows results for a temperature of $T=0.475$. Here we see that as the number of swaps is increased, the coefficient of variation of the propensity decreases dramatically. This is true for all polydispersity levels, but there is a greater effect for larger values of $\delta$. Figure~\ref{pdist1} shows the probability distribution of propensities for $T=0.475$, $\delta=5\%$, and all amounts of swaps tested. For comparison, the distribution for the same temperature in the original Kob-Andersen binary ($\delta=0$) is reproduced here. As the number of mistakes increases at this level of polydispersity, the distribution of propensities narrows: both high and low propensity particles are lost. Thus the dynamics of the system are (apparently) becoming more uniform as we introduce mistakes in assembling the iso-configurational ensemble.
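The quantities plotted above can be computed directly from the ensemble displacements; a minimal sketch, where the array layout (runs $\times$ particles) and the synthetic displacement statistics are assumptions for illustration:

```python
import numpy as np

def propensity_cv(displacements):
    """displacements: (n_runs, n_particles) array of |r_i(t) - r_i(0)|
    over the iso-configurational ensemble.  The propensity of particle i
    is its mean displacement across the runs; c_v measures the spread of
    those propensities relative to their mean."""
    p = displacements.mean(axis=0)      # propensity of each particle
    return p.std() / p.mean()

rng = np.random.default_rng(1)
# Heterogeneous dynamics: per-particle mean displacements differ widely.
hetero = rng.normal(loc=rng.uniform(0.5, 2.0, 500), scale=0.1, size=(100, 500))
# Homogeneous dynamics: every particle shares the same mean displacement.
homo = rng.normal(loc=1.0, scale=0.1, size=(100, 500))
assert propensity_cv(hetero) > propensity_cv(homo)
```

A large $c_v$ thus signals heterogeneous dynamics with a clear propensity signal, while narrowed distributions (as produced by swapping) drive $c_v$ toward zero.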
Note that the presence of these mistakes could cause the relationship between temperature and the coefficient of variation to be viewed incorrectly. In fig.~\ref{spdvd} we see that at all levels of $\delta$, $c_v$ is larger for lower temperature systems: propensity becomes more significant as $T$ approaches the glass transition $T_g$. However, $c_v = 0.24$ for $(T=1.0, \delta=5\%, S=60)$, while $c_v=0.21$ for the same conditions at $T=0.475$. All points with $\delta \ge 3\%$ and $S \ge 60$ also show this inverted relationship between $c_v$ and $T$. The uniformity of the dynamics caused by large numbers of swaps can also be seen in fig.~\ref{3dpair}, where color represents $p_i/\langle p_i \rangle$. Both renderings are for systems at $T=0.475$, $\delta=1\%$, and the ensembles which produced both data sets stem from the same initial configuration. The system on the left has had no swaps made when determining propensity, while 250 swaps were made for the system on the right, thus achieving maximum randomness when constructing the iso-configurational ensemble. Keep in mind that the propensities are calculated from an ensemble of 100 simulations, and that in each of those simulations particles were chosen at random to be swapped. So for the image on the right, each particle had a 50/50 chance of being selected for a swap in each simulation run. The uniformity in color for the swapped system, in comparison to the non-swapped system, clearly shows that the measured propensity (right) is far more uniform throughout the system than the true propensity (left). Figure~\ref{cvvt} shows the coefficient of variation as a function of time for the simulations conducted at $T=0.475$, with each of the four panels representing a different level of polydispersity. These data were collected by calculating the propensity of the particles over 100 different time scales, while previously we have only discussed propensity at $\tau_\alpha$.
In each plot, the different lines represent simulations where different numbers of swaps occurred. Panel (a) shows the data for $\delta=1\%$. Notice that there is a small jump in the initial value of $c_v$ for all values of $S$, indicating that when even a small number of mistakes occur during reassembly, there is a wider variety of displacements happening in the first step of the simulation. We assume that this is due to the fact that any time swaps occur, the equilibrium of the system is disturbed and we are straying from the iso-configurational ensemble. It is possible that the inherent structure \cite{stillinger83} of the system has been changed, though we did not calculate this. However, at this low level of polydispersity, all of the curves follow a similar shape and so we conclude that the evolution of the material is similar in nature to a pure binary mixture. As more swaps are introduced, displacements become more uniform at longer timescales. \begin{figure}[htbp] \centerline{\includegraphics[width=6cm]{p250vp0.eps}} \caption{Panels (a) and (b) show plots which compare a particle's propensity when it is in an ensemble with 250 swaps (maximal random errors) to an ensemble with 0 swaps (the true propensity). The solid line indicates $x=y$, marking particles that experience no effect from the swaps. The dashed line indicates a line of best fit to the data. The levels of polydispersity are different with $\delta=1\%$ in (a) and $\delta=5\%$ in (b). The last panel, (c), also contains data with $\delta=1\%$ but here the true propensity from one ensemble is compared to the true propensity from a second ensemble of equal size, demonstrating the variability in propensity due to the finite number (100) of configurations generated for the isoconfigurational ensemble. 
$T=0.475$ in all three plots.} \label{p250vp0} \end{figure} Of course, as we introduce more size variation and simulate more errors in reconstructing the ensemble, the initial rise in $c_v$ becomes more pronounced. Under more extreme conditions, such as $\delta \geq 3\%$ with $S \geq 125$, we see very large rises in $c_v$ at early time scales as the swapping causes some particles to be very near each other, resulting in large force values. It is these extreme outliers that raise the $c_v$. Additionally, the general shape of the time evolution has changed completely, as the secondary rise in $c_v$ is suppressed or even non-existent. We conclude that in these cases the swapping is enough to render the idea of an iso-configurational ensemble mostly meaningless, as displacements across the ensemble become more uniform. We examine this further in fig.~\ref{p250vp0} by comparing the propensity value for each particle in systems where no swaps occur to the propensity of the same particle in ones where many swaps occur. The data in panel (a) are for simulations at $T=0.475$ with $\delta=1\%$, just as in fig.~\ref{3dpair}, except here we include data from all 10 initial configurations and do not normalize by the mean. The horizontal axis is the propensity for a given particle when no swaps are made -- the true propensity that we desire to measure. The vertical axis is the propensity for the particle that exists at the same location in the iso-configurational ensemble, but for the situation using $250$ swaps; in other words, when the maximum number of random mistakes is realized.
The plot shows that there is a relatively high level of correlation between the two data sets: particles that have higher levels of propensity in the original system tend to still have a high level of propensity in the swapped system -- keeping in mind that in the swapped system, a ``particle'' with a certain propensity now corresponds to the {\it position} of a particle prior to the swapping, as the literal particle may or may not stay in that location. This indicates that the structural information is still present despite the large number of swaps that were made, though variation in the plot indicates that this information may be harder to discover. As a control, fig.~\ref{p250vp0}(c) is a similar plot, but here the true propensity derived from one ensemble is plotted against the true propensity from a second ensemble where both sets had the same temperature and level of polydispersity. The correlation between the two sets is 0.93 and this gives a sense of the amount of spread present in the data due to the inherent fluctuations from constructing iso-configurational ensembles with randomized initial velocities. If one were to construct more than 100 realizations in the iso-configurational ensemble, the variability in these data would converge toward the $x=y$ line. The relationship between the true propensity and propensity measured with maximum swaps is much more muddled in fig.~\ref{p250vp0}(b), which is the same type of plot but for a system with $\delta=5\%$. Here there is very little correlation between the propensities in the original and the swapped systems, indicating that any structural information conveyed by the propensity in the original system has been lost. It seems that while some particles have higher or lower propensities in the system with swaps, that has no correlation with having higher or lower true propensity when there are no swaps. 
The measured propensity is apparently due to the swapping algorithm itself, rather than the original structure. For example, as discussed above, if a swapped particle moves into a position where it experiences a dramatically larger force immediately after the swap, that might result in a larger displacement and thus (every time it is swapped) a larger measured propensity. But the presence of this sort of swap-error-induced propensity tells us nothing about the intrinsic dynamics of the original system; this is not the structure-dynamics relationship we are looking for. We wish to quantify how polydispersity $\delta$, swapping errors $S$, and temperature $T$ interact. To do this, lines of best fit are found for the data in panels (a) and (b) of fig.~\ref{p250vp0}, which are shown as dashed lines, as well as for other values of $\delta$, $S$, and $T$. A slope of 1, depicted in the plots as solid lines, would indicate good predictability between the true propensity and the measured propensity. However, these plots show that the slope is less than 1, indicating less correlation between the measured and true propensity values. The summary of all our data is shown in fig.~\ref{swapslopes}, which has the slopes for all systems. Again, we see that at $T=1.0$ (dashed lines), not much structural information is lost even for large numbers of swaps. For $\delta=3\%$, $S=250$, and $T=1.0$, the lowest slope value found is $\approx 0.6$. For the lower temperature data (solid lines), we see the drop-off can be quite severe, with the worst case (corresponding to panel (b) of fig.~\ref{p250vp0}) having a slope of $\approx 0.1$. At this point nearly all of the apparent propensity is fictitious, with little correlation to the propensity one wishes to measure. While a drop in slope value indicates that structural information is scrambled by the introduction of errors, it is also clear that these mistakes cause the measured propensity values to increase in general.
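The slope extraction amounts to an ordinary least-squares fit of the swapped-ensemble propensities against the true ones; a sketch with synthetic data, where the compression toward a uniform elevated level is our own stand-in for the swap effect described above:

```python
import numpy as np

def swap_slope(p_true, p_swapped):
    """Least-squares slope of swapped-ensemble propensity against true
    propensity; a slope near 1 means the swapped ensemble preserves the
    true propensity values."""
    slope, _intercept = np.polyfit(p_true, p_swapped, 1)
    return slope

rng = np.random.default_rng(2)
p_true = rng.uniform(0.1, 1.0, 1000)
# Swapping compresses the signal toward a uniform, elevated level:
p_swapped = 0.4 + 0.3 * p_true + rng.normal(0, 0.05, 1000)
assert swap_slope(p_true, p_swapped) < 1.0
```

A perfect reconstruction gives slope 1 along the $y=x$ line, while the compressed synthetic data recover a slope near 0.3, mimicking the degraded low-temperature cases.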
In both fig.~\ref{p250vp0}(a) and (b), the data lie above the $y=x$ line. Compare this with the plot in fig.~\ref{p250vp0}(c), where the data appear equally above and below the line. This general increase in the measured propensity values appears to correlate with the level of polydispersity, as the increase is clear but mild in fig.~\ref{p250vp0}(a) where $\delta=1\%$, but is quite pronounced in fig.~\ref{p250vp0}(b) where $\delta=5\%$. If a high value of propensity is supposed to indicate a region in the material that is likely to be the site of a re-arrangement in the near future, then an experiment conducted with high levels of polydispersity, at low temperature, and with an assumption of indistinguishability between members of each particle species would incorrectly suggest that a majority of the system is prone to reorganization. We reiterate that swapping generally causes all particles to have moderate to high values of propensity. This is why the slopes of the best fit lines plotted in fig.~\ref{swapslopes} are all below 1: the true low propensity particles have their propensities increased by swapping, while the true high propensity particles generally keep a high propensity value. \begin{figure}[htb] \includegraphics[width=8cm]{swapslopes.eps} \caption{The slope values obtained when propensity with a given number of swaps is plotted against the propensity with zero swaps. Each line is a particular $T$ and $\delta$ combination. All solid lines are $T=0.475$ and all dashed lines are $T=1.0$. $\delta=$ 1\%, 2\%, 3\%, and 5\% are respectively dark blue, light blue, green, and red.} \label{swapslopes} \end{figure} \section{Conclusions} The most challenging part of attempting a propensity measurement with a colloidal system will inevitably be determining a way to produce an iso-configurational ensemble.
The difficulties faced in re-assembly will only be made worse by the fact that real particles exhibit polydispersity. We have shown that the intrinsic polydispersity of colloids will not be an insurmountable obstacle as long as care is taken to minimize mistakes when reassembling the system. At a temperature of $T=0.475$ we see that even a mistake rate of $3\%$ ($S=30$) would result in the coefficient of variation of the propensity dropping from 0.76 to 0.31 for a system with $\delta=5\%$. Indeed, at a polydispersity level of $5\%$, the idea of an iso-configurational ensemble appears to be broken by even the occasional error in reconstruction. There are two potential goals one could investigate in a colloidal experiment. One may want to show that propensity becomes more significant as the glass transition is approached, and as we have argued above, a way to do this is to measure $c_v$, the coefficient of variation, where large values indicate propensity has more ``signal.'' Another goal would be to look for structure-dynamics relations, in which case one needs to measure individual propensity values accurately. For both of these goals, there are at least two potential solutions. First, one could use colloidal particles of low polydispersity, although those are difficult to find \cite{poon12}. Our data indicate $\delta < 3$\% is necessary. Second, one can work hard to achieve $S=0$: ensuring that every particle is returned to its specific initial position. Given the choice, the second option is the better one. Our results show that even a polydisperse system ($\delta=5$\%) has reasonable dynamics and a clearly measurable propensity, so long as one avoids reconstruction errors. \begin{acknowledgement} This work was supported by the National Science Foundation (Grant No. DMR-1609763). \end{acknowledgement} Author contributions: C.D. and E.R.W. designed the project; C.D. conducted the simulations, analyzed the data, and prepared the figures; C.D.
and E.R.W. wrote the paper.
\section{Introduction} Unmanned aerial vehicles (UAVs) are expected to be widely deployed in future networks to enable various new image acquisition applications such as live broadcast, virtual reality, and so on, by leveraging their advantages of low cost, high mobility, as well as flexible deployment \cite{Zeng1}. To maximize the efficiency of UAV-assisted image acquisition, it is of paramount importance to properly design the UAV's three-dimensional (3D) trajectory such that the images of ground targets (GTs) can be captured with minimum traveling distance, while guaranteeing satisfactory image quality. However, unlike the widely studied UAV-enabled communications, this problem still remains unaddressed in the literature, to the best of the authors' knowledge. Particularly, there are two challenging issues that need to be resolved in designing the 3D trajectory for UAV-assisted image acquisition. First, prior studies on UAV-assisted image acquisition mostly consider the vertical photography (VP) model, where the UAV-mounted camera is assumed to have a fixed shooting angle which is perpendicular to the ground. As such, the UAV needs to fly above each GT for image acquisition to ensure that the GT is displayed at the center of the captured image \cite{Sai}. However, this strategy may result in long traveling distances and high energy consumption, especially for image acquisition of multiple GTs that are far apart from each other \cite{Mavrinac}. Fortunately, this issue has been resolved by the latest UAV-mounted cameras, which are able to flexibly adjust their oblique shooting angle according to the GT's location \cite{Hohle}. However, to the best of our knowledge, a tractable model characterizing the quality of the images captured by such an angle-rotatable camera is still lacking. Moreover, the trajectory design for UAV-assisted image acquisition is substantially different from that in other missions such as UAV-assisted data collection.
Specifically, the UAV should approach each GT such that the captured image can satisfy the resolution requirement, for which the feasible region is generally a complicated function of the GT's location. This makes the trajectory design for taking images of multiple GTs fundamentally different from that for collecting data from multiple ground users (e.g., \cite{Zeng2, Lyu}), where the feasible region for the UAV to meet the communication requirement is generally a cylindrical shape with the ground plane centered at the user. Motivated by the above, we propose in this correspondence a novel \emph{oblique photography (OP)} model to characterize the resolution of the captured image. Based on this model, we study the 3D UAV trajectory optimization problem to minimize its traveling distance for taking images of multiple GTs, while guaranteeing a minimum resolution requirement of each captured image. The formulated problem is shown to be a variant of the traveling salesman problem with neighbourhoods (TSPN) \cite{Dumitrscu}, where the neighbourhood represents the feasible region for the UAV to capture the image with satisfactory resolution. Note that although TSPN has been studied in \cite{Zeng2, Lyu} for the two-dimensional (2D) case under disk-shaped neighbourhood, or \cite{Yuan, Isler1} for the 3D case under other regular-shaped neighbourhood, these algorithms cannot be applied to our problem with an irregular neighbourhood region. To tackle this challenging problem, we simplify the UAV trajectory as line segments connected by multiple waypoints, each corresponding to the image-taking location of one GT. Then, we propose an alternating optimization algorithm for finding a suboptimal solution to this simplified problem, by alternately optimizing the waypoint locations and the GT visiting order. Numerical results show that the proposed scheme outperforms various benchmark schemes in terms of the traveling distance, while meeting the image resolution requirement. 
\vspace{-2mm} \section{System Model and Problem Formulation} We consider a UAV-assisted image acquisition system with one UAV being dispatched to take images of $K$ GTs, denoted by the set $\mathcal{K}{\rm{=}}\{1,...,K\}$. In the following, we first propose a novel UAV-assisted OP model which is tailored for the angle-rotatable camera, and then formulate the 3D UAV trajectory optimization problem. \vspace{-3mm} \subsection{UAV-Assisted OP Model} In Fig. \ref{F1}(a), we present a UAV-assisted OP model where rectangle {\small $A'B'C'D'$} is the camera's image plane whose coverage region on the ground is an isosceles trapezoid (i.e., {\small $ABCD$}). Points {\small $T'$} and {\small $T$} represent the centers of the image plane and GT$_k$, respectively. Let $[{{\bf{w}}_k^T,0}]$ in meter (m) denote the 3D coordinate of GT$_{k}$ where ${{\bf{w}}_k}\!\in\!{\mathbb{R}^{2 \times 1}}$ represents its horizontal coordinate. For simplicity, we assume that GT$_{k}$ is a disk with a known radius $r_k$ in\;m. Let ${d_{u, k}}$ denote the distance from the UAV to GT$_{k}$ in\;m, i.e., {\small $|T'T|$}, which is given by \begin{figure}[htbp!] \setlength{\belowcaptionskip}{-6mm} \centering \includegraphics[width=0.48\textwidth]{f1.pdf} \caption{Illustration of UAV-assisted OP model.} \label{F1} \end{figure} \begin{equation}\label{E1} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} {d_{u, k}} = \sqrt {{{\left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}^{\rm{2}}}{\rm{ + }}{z_k^2}}, \end{equation}where ${{\textbf{q}}_k}\!\in\!{\mathbb{R}^{2 \times 1}}$ and $z_k$ denote the UAV's horizontal and vertical coordinate when taking image of GT$_k$, respectively. In the conventional VP model, the image resolution is characterized by ground sample distance (GSD) that each pixel can represent. 
However, with the angle-rotatable camera, the GSDs that the pixels can represent are different, thus rendering the image resolution representation with GSD inapplicable to the OP model. Therefore, we redefine the image resolution as the ratio of GT$_k$'s area to the camera's coverage area. Specifically, we denote by $f_0$ the camera's focal length, and $w_0$ and $l_0$ the width and length of the image plane, respectively. Let ${\theta_k}$ denote the camera's oblique angle (i.e., {\small $\angle {OO'T}$} in Fig. \ref{F1}(a)) and we have $\cos {\theta_k}{\rm{ = }}\frac{{{z_k}}}{{{d_{u, k}}}}$ and $\tan {\theta_k}{\rm{=}}\frac{{\|{{{\textbf{q}}_k} - {{\textbf{w}}_k}}\|}}{{{z_k}}}$.{\footnote{The UAV-mounted camera can rotate its oblique shooting angle towards the GT with the angle $\theta_k$ as shown in Fig. \ref{F1}(a).}} As such, the camera’s coverage area, denoted by $S_k^c$, can be expressed as below with the detailed derivation presented in Appendix {\ref{A}}: \begin{equation}\label{E2} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} S_k^c = S_k^v\times\phi\left( {{\theta_k}} \right) = \frac{{4z_k^2}}{{b_1b_2}} \times \frac{1}{{{{( {1 - \frac{{1}}{{b_1^2}}{{\tan }^2}{\theta_k}})}^2}{{\cos }^3}{\theta_k}}}, \end{aligned} \end{equation} where $b_1{\rm{=}}\frac{2f_0}{w_0}$ and $b_2{\rm{=}}\frac{2f_0}{l_0}$ are constants determined by the camera's parameter setting. Note that $S_k^v$ is the camera's coverage area when taking the image right above GT$_k$, which is proportional to $z_k^2$, and $\phi \left( {{\theta_k}} \right)$ is defined as the {\emph{coverage scaling factor}} which is monotonically increasing with respect to (w.r.t.) ${\theta_k}$. It is worth mentioning that the proposed OP model reduces to the conventional VP model when ${\theta_k} = 0$. 
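As a numerical illustration of (\ref{E2}), the sketch below evaluates $S_k^c$ and checks the reduction to the VP case at ${\theta_k}=0$; the camera parameters are illustrative and not taken from any specific hardware:

```python
import math

def coverage_area(z, horiz_dist, f0, w0, l0):
    """Camera ground-coverage area S_k^c from (2).  horiz_dist is
    ||q_k - w_k||; it must satisfy horiz_dist < b1 * z (the focal
    length requirement) so the scaling factor stays finite."""
    b1, b2 = 2 * f0 / w0, 2 * f0 / l0
    tan_t = horiz_dist / z                      # tan(theta_k)
    cos_t = z / math.hypot(horiz_dist, z)       # cos(theta_k)
    s_v = 4 * z**2 / (b1 * b2)                  # vertical-shot area S_k^v
    phi = 1 / ((1 - tan_t**2 / b1**2)**2 * cos_t**3)
    return s_v * phi

# At theta_k = 0 (UAV directly above the GT) the OP model reduces to VP:
z, f0, w0, l0 = 100.0, 0.035, 0.024, 0.036
assert math.isclose(coverage_area(z, 0.0, f0, w0, l0), z**2 * w0 * l0 / f0**2)
# Tilting the camera enlarges the footprint, i.e., phi(theta_k) > 1:
assert coverage_area(z, 30.0, f0, w0, l0) > coverage_area(z, 0.0, f0, w0, l0)
```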
Based on the above, the image resolution, denoted by ${{\cal I}_k}$, can be characterized by the UAV's 3D image-taking location as \begin{equation}\label{E4} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{small} {{\cal I}_k} = \frac{S_{GT_k}}{S_k^c} = \frac{{a_k{{( {z_k^2 -\frac{1}{{{b_1^2}}}{{{\left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}^{\rm{2}}}}})}^2}}}{{{{( {{{\left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}^{\rm{2}}}{\rm{ + }}z_k^2})}^{\frac{{\rm{3}}}{{\rm{2}}}}}z_k^{\rm{3}}}}, \end{small} \end{equation} where $a_k{=}\frac{{b_1b_2\pi r_k^2}}{{4}}$. In this correspondence, we consider a minimum resolution requirement for each GT$_k$ denoted by ${\overline{\cal{I}}}_k$, thus the UAV's image-taking location $[{\bf{q}}_k^T,z_k]$ should satisfy ${\cal{I}}_k{\geq} \overline{{\cal{I}}}_k$. Moreover, to let each GT$_k$ be completely projected in the camera's coverage region, $[{\bf{q}}_k^T,z_k]$ should also satisfy $r_k{\leq}\min(d_{1,k},d_{2,k})$, where $d_{1,k}$ and $d_{2,k}$ represent the distances from point $T$ to $AD$ and $BC$ (see Fig. \ref{F1}(b)), which are defined in Appendix B. It is also worth noting that $[{\bf{q}}_k^T, z_k]$ should satisfy $b_1z_k{-}\|{\bf{q}}_k{-}{\bf{w}}_k\|{\geq}0$ to meet the focal length requirement of the camera, as explained in Appendix A. \vspace{-5mm} \subsection{Problem Formulation} \begin{figure}[htbp!] \setlength{\belowcaptionskip}{-7mm} \centering \includegraphics[width=0.48\textwidth]{f2.pdf} \caption{Demonstration of the neighbourhood.} \label{F2} \end{figure} We aim to optimize the 3D UAV trajectory for capturing the images of the $K$ GTs, to minimize the UAV's traveling distance from given initial to final points denoted by $[{\bf{w}}_I^T,z_I]$ and $[{\bf{w}}_F^T,z_F]$, respectively, while ensuring a sufficiently high image resolution for all GTs. 
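The resolution constraint ${{\cal I}_k} \ge {\overline{\cal{I}}}_k$ can be evaluated directly from (\ref{E4}); a numerical sketch with illustrative camera and GT parameters, confirming that moving the UAV away from the GT degrades the resolution:

```python
import math

def resolution(z, horiz_dist, r, f0, w0, l0):
    """Image resolution I_k from (3): the ratio of the GT's disk area
    to the camera's coverage area.  horiz_dist is ||q_k - w_k|| and r
    is the GT radius; all parameter values below are illustrative."""
    b1, b2 = 2 * f0 / w0, 2 * f0 / l0
    a = b1 * b2 * math.pi * r**2 / 4            # the constant a_k
    num = a * (z**2 - horiz_dist**2 / b1**2)**2
    den = (horiz_dist**2 + z**2)**1.5 * z**3
    return num / den

# Backing the UAV away from the GT lowers the achievable resolution:
f0, w0, l0, r = 0.035, 0.024, 0.036, 5.0
assert resolution(50.0, 10.0, r, f0, w0, l0) > resolution(100.0, 10.0, r, f0, w0, l0)
```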
Note that since the UAV trajectory is continuous, this problem involves an infinite number of variables, thus making the problem difficult to solve. To simplify the trajectory design, we assume that the UAV trajectory consists of $K{\rm{+1}}$ consecutive line segments in a similar manner to \cite{Zhang}, with $K$ \emph{waypoints} each denoting the UAV's image-taking location for one GT. Let $\psi (k)\in\mathcal{K}$ denote the index of the $k$-th visited GT and $[{{\bf{q}}_{\psi (k)}^T},{z_{\psi (k)}}]$ denote the location of the waypoint at which the UAV takes the image of GT$_{\psi(k)}$. For consistency, we define $[{{\bf{q}}_{\psi (0)}^T},{z_{\psi (0)}}] = [{{\bf{w}}_I^T},{z_I}]$ and $[{{\bf{q}}_{{\psi (K+1)}}^T},{z_{{\psi (K+1)}}}]=[{{\bf{w}}_F^T},{z_F}]$. For notational convenience, we define ${\bf{\Psi}}\overset{\Delta}{=}[\psi(1),{...},\psi(K)]$, ${\bf{Q}}\overset{\Delta}{=}[{\bf{q}}_{\psi(1)}^T,...,{\bf{q}}_{\psi(K)}^T]$, and ${\bf{Z}}\overset{\Delta}{=}[z_{\psi(1)},...,z_{\psi(K)}]$. The UAV's traveling distance is thus given by \begin{equation}\label{E6} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \!\!\!\!\!\!D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})}{\rm{=}}\!\!\sum\limits_{k = 0}^K\!\!{\sqrt{{{\|{{\bf{q}}_{\psi (k + 1)}}{-}{{\bf{q}}_{\psi (k)}}\|}}^2{+}{{({z_{\psi (k{+}1)}}{-}{z_{\psi (k)}})}}^2}}. \end{equation} Let ${\overline{{\cal{I}}}}_{\psi(k)}$ denote the resolution requirement of GT$_{\psi(k)}$.
Then, under the given resolution constraints, the 3D UAV trajectory optimization problem can be formulated as \begin{subequations} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{align} ({\rm{P1}})\mathop {\min }\limits_{\scriptstyle {{{\bf{Q}}}},{\bf{Z}},{\bf{\Psi}} \hfill\atop \scriptstyle}\;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})}\notag \\ {\rm{s}}{\rm{.t}}{\rm{.}}\;\;\;\;\;&\psi (k) \in {\cal K},~~~~~~~~~~~~&\forall k \in {\cal K},\label{E9a}\\ &\mathop \cup \limits_{k = 1}^K \psi (k) = {\cal K},&\label{E9b}\\ &{{\cal I}_{\psi (k)}} \ge {\overline {\cal I} _{\psi (k)}},&\forall k \in {\cal K},\label{E9c}\\ &{r_{\psi(k)}} \le \min ( {d_{1,{\psi (k)}},d_{2, {\psi (k)}}}),\!\!&\forall k \in {\cal K},\label{E9d}\\ &b_1{z_{{{\psi (k)}}}}{\rm{-}}\|{{\bf{q}}_{{{\psi (k)}}}}{\rm{-}}{{\bf{w}}_{\psi (k)}}\| {\rm{\geq}} 0,&\forall k\in{\cal K},\label{E9e} \end{align} \end{subequations} where (\ref{E9a})-(\ref{E9b}) specify the feasible set of the GT visiting order ${\bf{\Psi}}$, and (\ref{E9c})-(\ref{E9e}) specify the feasible region of the UAV's image-taking locations for each GT, which is termed the ``neighbourhood'' for simplicity. In Fig. {\ref{F2}}, we illustrate the neighbourhood of the GT located at the origin with a minimum resolution requirement of ${\overline {\cal I}} = 0.4$, where the 3D view of the neighbourhood is depicted in Fig. {\ref{F2}}(a), which appears to be a {\emph{spherical sector}}, and the vertical profile of the neighbourhood is shown in Fig. {\ref{F2}}(b), which has the shape of a {\emph{crescent moon}}. Note that unlike UAV-assisted data collection, where the communication quality generally increases as the UAV approaches the ground user, the UAV needs to maintain a certain distance from the GT to ensure the image quality, due to the non-convexity of the neighbourhood region as shown in Fig. {\ref{F2}}. It is also observed from Fig.
{\ref{F2}}(b) that the image resolution gradually increases from the outside to the inside of the neighbourhood, which intuitively indicates that the UAV may prefer to take the image at the surface of the neighbourhood for reducing its traveling distance. Notice that (P1) can be shown to be a modified 3D TSPN problem {\cite{Dumitrscu}}, where the neighbourhood of each GT is a complicated function of the UAV's 3D location as well as the image resolution requirement. It is worth noting that such a 3D TSPN problem is generally NP-hard, and is more involved as compared to the 2D TSPN problem studied in e.g., {\cite{Zeng2}}. In the following, we propose an efficient iterative algorithm for finding a high-quality suboptimal solution to (P1). \vspace{-4mm} \section{Proposed Solution to (P1)} In this section, we propose an iterative optimization algorithm for solving (P1) by alternately optimizing one between the 3D UAV waypoints and the GT visiting order with the other set of variables being fixed at each time. Specifically, we first introduce the two subproblems, and then present the overall algorithm and analyze its computation complexity. \vspace{-4mm} \subsection{3D Waypoint Location Optimization} First, we present the subproblem for optimizing the 3D waypoint locations with given ${\bf{\Psi}}$, which is formulated as \begin{equation} \begin{aligned} ({\rm{P2.1}})\;\mathop {\min }\limits_{{\bf{Q}}, {\bf{Z}}} \;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})}\notag\\ {\rm{s}}{\rm{.t}}{\rm{.}}\;\;\;\;&(\ref{E9c}){\rm{-}}(\ref{E9e}). \notag \end{aligned} \end{equation} Since ${\bf{Q}}$ and ${\bf{Z}}$ are coupled with each other and the constraints (\ref{E9c})-(\ref{E9d}) are non-convex, (P2.1) is a non-convex optimization problem, which is difficult to solve in general. To make the problem more tractable, we make some transformations to (\ref{E9c})-(\ref{E9d}). 
For simplicity, we define ${\boldsymbol{\ell}}_{\psi(k)}\overset{\Delta}{=}{\bf{q}}_{\psi(k)}-{\bf{w}}_{\psi(k)}, \forall k\in \mathcal{K}$, and ${\bf{L}}\overset{\Delta}{=}[{\boldsymbol{\ell}}_{\psi(1)}^T,...,{\boldsymbol{\ell}}_{\psi(K)}^T]$. By taking logarithm of both sides of (\ref{E9c}), we obtain \begin{equation}\label{E10} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \!\!\!\!\!{\rm-}\frac{3}{2}\ln({{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}\!{\rm{+}}z_{\psi (k)}^2})\!{\rm{-}}\!3\!\ln {z_{\psi (k)}}\!{\rm{\ge}}f_1(z_{\psi(k)},\!{\boldsymbol{\ell}}_{\psi(k)},\!r_{\psi(k)}), \end{equation}where \begin{equation} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{array}{l} f_1(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}) \triangleq \ln {\frac{\overline {\cal I} _{\psi (k)}}{a_{\psi(k)}}} - 2\ln( {z_{\psi (k)}^2 {\rm-} \frac{1}{{b_1^2}}{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}}) \notag \end{array} \end{equation} is a convex function w.r.t. $z_{\psi (k)}$ and $\|{\boldsymbol{\ell}}_{\psi(k)}\|$, respectively. 
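Since the logarithm is monotone, the transformed constraint (\ref{E10}) defines exactly the same feasible set as the original resolution constraint (\ref{E9c}); a quick numerical check of this equivalence, with illustrative parameter values chosen to satisfy the focal length condition (\ref{E9e}):

```python
import math

def constraint_original(z, l, r, b1, b2, i_min):
    """Resolution constraint I_k >= I_min, with I_k from (3);
    l denotes ||q_k - w_k||.  All values are illustrative."""
    a = b1 * b2 * math.pi * r**2 / 4
    i_k = a * (z**2 - l**2 / b1**2)**2 / ((l**2 + z**2)**1.5 * z**3)
    return i_k >= i_min

def constraint_log(z, l, r, b1, b2, i_min):
    """Log-transformed constraint (5): lhs >= f_1."""
    a = b1 * b2 * math.pi * r**2 / 4
    lhs = -1.5 * math.log(l**2 + z**2) - 3 * math.log(z)
    f1 = math.log(i_min / a) - 2 * math.log(z**2 - l**2 / b1**2)
    return lhs >= f1

# The two forms agree at every point with b1*z > l (so the logs exist):
b1, b2, r, i_min = 2.9, 1.9, 5.0, 0.05
for z in (40.0, 60.0, 90.0):
    for l in (0.0, 10.0, 30.0):
        assert constraint_original(z, l, r, b1, b2, i_min) == \
               constraint_log(z, l, r, b1, b2, i_min)
```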
Next, by squaring both sides of (\ref{E9d}), we equivalently transform it into \begin{equation}\label{E11} \begin{small} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} \hspace{-2mm} \!\!\!z_{\psi (k)}^4{\rm{+}} 2z_{\psi (k)}^2{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}{\rm{+}} {\|{\boldsymbol{\ell}}_{\psi(k)}\|^4} {\rm{\ge}} f_2(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}), \end{aligned} \end{small} \end{equation} where \begin{equation} \begin{small} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} \hspace{-3mm} &f_2(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}) \triangleq\\ &r_{\psi (k)}^2\max({{{( {b_1{z_{\psi (k)}}{\rm{+}}{\|{\boldsymbol{\ell}}_{\psi(k)}\|}})}^2}, b_2^2z_{\psi (k)}^2{\rm{+}} (1{\rm{+}}b_2^2){{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}}}) \notag \end{aligned} \end{small} \end{equation} is convex w.r.t. each of $z_{\psi (k)}$ and $\|{\boldsymbol{\ell}}_{\psi(k)}\|$. In the following, we apply the block coordinate descent (BCD) technique to decouple the joint optimization of ${\bf{Q}}$ and ${\bf{Z}}$ into two subproblems, one for ${\bf{Q}}$ and one for ${\bf{Z}}$, each of which is sub-optimally solved by using the convex approximation technique as in {\cite{You1}}. \subsubsection{Optimizing ${\bf{Z}}$ with given ${\bf{Q}}$} With given ${\bf{Q}}$ and hence ${\bf{L}}$, (P2.1) reduces to the following optimization problem over the altitudes of the $K$ waypoints in ${\bf{Z}}$: \begin{align} ({\rm{P2.1.1}})\;\mathop {\min }\limits_{{\bf{Z}}} \;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})}\notag\\ {\rm{s}}{\rm{.t}}{\rm{.}}\;\;\;&(\ref{E9e}), (\ref{E10}),(\ref{E11})\notag. \end{align} Problem (P2.1.1) is still hard to solve due to the non-convex constraints (\ref{E10})-(\ref{E11}). However, note that with given ${\bf{L}}$, the first term of the left-hand side (LHS) of (\ref{E10}) is convex w.r.t.
$z_{\psi (k)}^2$, and so is the second term w.r.t. ${z_{\psi (k)}}$. As such, we can apply the convex approximation technique to approximate the two terms by their lower bounds as follows by using the first-order Taylor expansion at the given local point $z_{\psi (k)}^{(i)}$ of the $i$-th iteration: {\begin{small} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{align} &\!\!\!\!\!\!-\!\!\frac{3}{2}\ln ( {{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}{\rm{ + }}z_{\psi (k)}^2})\!\!\ge\!\! {\varphi _1}({z_{\psi (k)}})\!\! \triangleq \!\!-\!\frac{3}{2}\!\!\ln ( {{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}{\rm{ + (}}z_{\psi (k)}^{(i)}{)^2}}) \notag \\ &\!\!\!\!\!\! -\!\!\frac{3}{{2( {{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}{\rm{ + (}}z_{\psi (k)}^{(i)}{)^2}})}}({z_{\psi (k)}^2{\rm{-}} {{{\rm{(}}z_{\psi (k)}^{(i)})}^2}}),\label{E12}\\ &\!\!\!\!\!\!-3\ln {z_{\psi (k)}}{\rm{\ge}}{\varphi _2}({z_{\psi (k)}}){\rm{\triangleq}}{\rm{-}}3\ln z_{\psi (k)}^{(i)}{\rm{-}}\frac{3}{{z_{\psi (k)}^{(i)}}}({z_{\psi (k)}}{\rm{-}}z_{\psi (k)}^{(i)}),\label{E13} \end{align} \end{small}}where the equality holds at the point $z_{\psi (k)}=z_{\psi (k)}^{(i)}$. With (\ref{E12}) and (\ref{E13}), we approximate (\ref{E10}) with the following constraint: \begin{equation}\label{E14} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{3pt} \begin{array}{l} {\varphi _1}({z_{\psi (k)}}) + {\varphi _2}({z_{\psi (k)}}) \ge f_1(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}). \end{array} \end{equation} For the constraint (\ref{E11}), it can be shown that its first and second terms on the LHS are both convex w.r.t. ${z_{\psi (k)}}$. 
This allows us to lower-bound the two terms as follows: \begin{align} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} &\!\!\!\!z_{\psi (k)}^4 \ge {\varphi _3}({z_{\psi (k)}})\!\triangleq\!{{\rm{(}}z_{\psi (k)}^{(i)}{\rm{)}}^4}{\rm{ + 4(}}z_{\psi (k)}^{(i)}{{\rm{)}}^3}({z_{\psi (k)}}{\rm{-}}z_{\psi (k)}^{(i)}), \label{E15}\\ &\!\!2z_{\psi (k)}^2{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2} \ge {\varphi _4}({z_{\psi (k)}}) \notag \\ &\!\!\triangleq 2{{\rm{(}}z_{\psi (k)}^{(i)}{\rm{)}}^2}{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}{\rm{+}}4z_{\psi (k)}^{(i)}{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}({z_{\psi (k)}}{\rm{-}}z_{\psi (k)}^{(i)}), \label{E16} \end{align} where the equality holds at the point $z_{\psi (k)} = z_{\psi (k)}^{(i)}$. Therefore, we approximate (\ref{E11}) by replacing its LHS with its lower bound as the following constraint: \begin{equation}\label{E17} \begin{small} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{array}{l} \!\!\!\!\!\!\!\!\!\!{\varphi _3}({z_{\psi (k)}}){\rm{+}}{\varphi _4}({z_{\psi (k)}}){\rm{+}}{\|{\boldsymbol{\ell}}_{\psi(k)}\|^4}\!\!\ge\!\!f_2(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}). \end{array} \end{small} \end{equation} As such, (P2.1.1) can be reformulated into an approximate form given below, with the LHSs of (\ref{E10}) and (\ref{E11}) replaced by their respective lower bounds: \begin{align} ({\rm{P2.1.2}})\;\mathop {\min }\limits_{\bf{Z}} \;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})} \notag \\ {\rm{s}}{\rm{.t}}{\rm{.}}~~\;&(\ref{E9e}), (\ref{E14}),(\ref{E17}).\notag \; \end{align} (P2.1.2) is a convex optimization problem, which can be efficiently solved via existing software, e.g., CVX. Moreover, it can be shown that the optimal solution to (P2.1.2) is guaranteed to be a feasible solution for (P2.1.1). 
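Although not part of the paper's algorithm, the minorization claims behind (\ref{E12}) and (\ref{E15}) are easy to sanity-check numerically: the first-order Taylor expansions must touch the original convex terms at the local point $z_{\psi(k)}^{(i)}$ and lie below them everywhere else. A small illustrative sketch (variable names hypothetical):

```python
import math
import random

def phi_1(z, l2, z_i):
    # (E12)-style bound: Taylor expansion of -(3/2)*ln(l2 + z^2) in z^2 at z_i,
    # where l2 plays the role of ||l_{psi(k)}||^2
    return (-1.5 * math.log(l2 + z_i ** 2)
            - 1.5 / (l2 + z_i ** 2) * (z ** 2 - z_i ** 2))

def phi_3(z, z_i):
    # (E15)-style bound: Taylor expansion of z^4 at z_i
    return z_i ** 4 + 4 * z_i ** 3 * (z - z_i)

random.seed(0)
for _ in range(1000):
    l2 = random.uniform(0.1, 10.0)   # squared horizontal distance (fixed here)
    z_i = random.uniform(0.5, 5.0)   # local point of the i-th iteration
    z = random.uniform(0.5, 5.0)
    # global lower bounds, by convexity in z^2 and in z, respectively
    assert -1.5 * math.log(l2 + z ** 2) >= phi_1(z, l2, z_i) - 1e-9
    assert z ** 4 >= phi_3(z, z_i) - 1e-9
    # equality holds at the expansion point z = z_i
    assert abs(phi_1(z_i, l2, z_i) + 1.5 * math.log(l2 + z_i ** 2)) < 1e-12
    assert abs(phi_3(z_i, z_i) - z_i ** 4) < 1e-12
```

The same check applies verbatim to the bounds $\varphi_2$ and $\varphi_4$, which linearize $-3\ln z_{\psi(k)}$ and $2z_{\psi(k)}^2\|{\boldsymbol{\ell}}_{\psi(k)}\|^2$ around the same point.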
\subsubsection{Optimizing ${\bf{Q}}$ with given ${\bf{Z}}$} With given ${\bf{Z}}$, (P2.1) reduces to the following problem for optimizing the horizontal waypoint locations ${\bf{Q}}$ (or equivalently ${\bf{L}}$): \begin{equation} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} ({\rm{P2.1.3}})\;\mathop {\min }\limits_{{{\bf{L}}}} \;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})} \notag \\ {\rm{s}}{\rm{.t}}{\rm{.}}~~\;&(\ref{E9e}),(\ref{E10}),(\ref{E11}).\notag\;\; \end{aligned} \end{equation} Since the first term on the LHS of (\ref{E10}) is convex w.r.t. $\|{\boldsymbol{\ell}}_{\psi(k)}\|^2$, it is lower-bounded by the first-order Taylor expansion at the given local point ${\boldsymbol{\ell}}_{\psi (k)}^{(i)}$ of the $i$-th iteration as \begin{equation}\label{E18} \begin{small} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} \hspace{-2mm} &\!\!\!\!\!\!\!\!-\!\frac{3}{2}\ln( {{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2}{\rm{ + }}z_{\psi (k)}^2})\!\ge\!{\vartheta_1}({{{{\boldsymbol{\ell}}}}_{\psi (k)}})\!\!\triangleq\!\!-\!\frac{3}{2}\ln({{\|{\boldsymbol{\ell}}_{\psi(k)}^{(i)}\|^2}{\rm{ + }}z_{\psi (k)}^2})\\ &\!\!\!\!-\!\frac{3}{{2( {{\|{\boldsymbol{\ell}}_{\psi(k)}^{(i)}\|}^2{\rm{ + }}z_{\psi (k)}^2})}}( {{\|{\boldsymbol{\ell}}_{\psi(k)}\|^2} - {\|{\boldsymbol{\ell}}_{\psi(k)}^{(i)}\|}^2}), \end{aligned} \end{small} \end{equation}where the equality holds at the point ${\boldsymbol{\ell}}_{\psi (k)}{\rm{=}}{\boldsymbol{\ell}}_{\psi (k)}^{(i)}$. Then, (\ref{E10}) can be approximated as \begin{equation}\label{E19} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{array}{l} {\vartheta _1}({{{{\boldsymbol{\ell}}}}_{\psi (k)}}) - 3\ln {z_{\psi (k)}} \ge f_1(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}). 
\end{array} \end{equation} Similarly, we can derive the lower bounds of the second and third terms on the LHS of (\ref{E11}) as follows by using the first-order Taylor expansion: \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{align} &2z_{\psi (k)}^2{\|\boldsymbol{\ell}_{\psi(k)}\|^2}\ge {\vartheta _2}({{\boldsymbol{\ell}}_{\psi (k)}}) \notag \\ &\triangleq 2z_{\psi (k)}^2{\|{\boldsymbol{\ell}}_{\psi(k)}^{(i)}\|^2}{\rm{+}}4z_{\psi (k)}^2{({{\boldsymbol{\ell}}_{\psi(k)}^{(i)}})^T}({\boldsymbol{\ell}}_{\psi(k)}-{\boldsymbol{\ell}}_{\psi(k)}^{(i)}),\label{E20} \\ &{\|{\boldsymbol{\ell}}_{\psi(k)}\|^4}\ge{\vartheta _3}({{\boldsymbol{\ell}}_{\psi (k)}}) \notag \\ &\triangleq {\|{\boldsymbol{\ell}}_{\psi(k)}^{(i)}\|^4}{\rm{+ }}4{\|{\boldsymbol{\ell}}_{\psi(k)}^{(i)}\|^2}{({{\boldsymbol{\ell}}_{\psi(k)}^{(i)}})^T}({\boldsymbol{\ell}}_{\psi(k)}-{\boldsymbol{\ell}}_{\psi(k)}^{(i)}). \label{E21} \end{align} With (\ref{E20}) and (\ref{E21}), the constraint in ({\ref{E11}}) is approximated as follows by replacing its LHS with its lower bound: \begin{equation}\label{E22} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{array}{l} \!\!\!\!z_{\psi (k)}^4{\rm{ + }}{\vartheta _2}({{\boldsymbol{\ell}}_{\psi (k)}}){\rm{+}} {\vartheta _3}({{\boldsymbol{\ell}}_{\psi (k)}})\!\ge\!f_2(z_{\psi(k)}, {\boldsymbol{\ell}}_{\psi(k)}, r_{\psi(k)}). \end{array} \end{equation} As a result, (P2.1.3) can be reformulated into the following approximate form: \begin{equation} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} ({\rm{P2.1.4}})\;\mathop {\min }\limits_{{{\bf{L}}}} \;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})} \notag \\ {\rm{s}}{\rm{.t}}{\rm{.}}\;\;\;\;&(\ref{E9e}), (\ref{E19}),(\ref{E22}).\notag\;\;\; \end{aligned} \end{equation} Since the constraints in (\ref{E19}) and (\ref{E22}) are convex, (P2.1.4) is a convex optimization problem, which can be efficiently solved via existing software, e.g., CVX. 
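The bounds (\ref{E20}) and (\ref{E21}) admit the same kind of numerical sanity check as their altitude counterparts: since $2z^2\|{\boldsymbol{\ell}}\|^2$ and $\|{\boldsymbol{\ell}}\|^4$ are convex in ${\boldsymbol{\ell}}$, their first-order Taylor expansions at ${\boldsymbol{\ell}}^{(i)}$ are global under-estimators. An illustrative sketch (not part of the paper's algorithm):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def theta_2(l, l_i, z):
    # (E20)-style bound: Taylor expansion of 2*z^2*||l||^2 around l_i (z fixed)
    diff = [a - b for a, b in zip(l, l_i)]
    return 2 * z ** 2 * dot(l_i, l_i) + 4 * z ** 2 * dot(l_i, diff)

def theta_3(l, l_i):
    # (E21)-style bound: Taylor expansion of ||l||^4 around l_i
    n_i2 = dot(l_i, l_i)
    diff = [a - b for a, b in zip(l, l_i)]
    return n_i2 ** 2 + 4 * n_i2 * dot(l_i, diff)

random.seed(1)
for _ in range(1000):
    l = [random.uniform(-5, 5), random.uniform(-5, 5)]
    l_i = [random.uniform(-5, 5), random.uniform(-5, 5)]
    z = random.uniform(0.5, 5.0)
    n2 = dot(l, l)
    # both linearizations are global lower bounds of the convex originals
    assert 2 * z ** 2 * n2 >= theta_2(l, l_i, z) - 1e-9
    assert n2 ** 2 >= theta_3(l, l_i) - 1e-9
```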
With the convex approximation technique, the objective value of (P2.1) can be shown to be non-increasing over the iterations similarly as in \cite{You1}, which is also lower-bounded by a finite value. Therefore, the proposed algorithm for optimizing the 3D waypoint locations is guaranteed to converge. \vspace{-3mm} \subsection{Visiting Order Optimization} With given waypoint locations $({\bf{Q}},{\bf{Z}})$, the subproblem for optimizing the GT visiting order ${\bf{\Psi}}$ is recast as \begin{align} ({\rm{P2.2}})\;\mathop {\min }\limits_{{\bf{\Psi}}} \;\;\;\;&D{({\bf{Q}},{\bf{Z}},{\bf{\Psi}})} \notag \\ \;\;\;\;\;\;\;{\rm{s}}{\rm{.t}}{\rm{.}}\;\;\;\;&(\ref{E9a}){\rm{,}}(\ref{E9b}) \notag. \end{align} (P2.2) is equivalent to a classic TSP, for which a high-quality suboptimal solution can be found with low computational complexity via binary integer programming \cite{Miller}. \vspace{-3.5mm} \subsection{Overall Algorithm and Computational Complexity} The proposed algorithm for (P1) is summarized as follows.{\footnote{Note that the proposed iterative algorithm can be generally extended to solve the TSPN problems in e.g., \cite{Lyu, Yuan, Isler1} by modifying the neighbourhood-related constraints in (P1).}} First, we initialize the visiting order $\bf{\Psi}$ by solving the TSP in (P2.2) based on ${\bf{q}}_k={\bf{w}}_k, z_k=0, \forall k\in \mathcal{K}$, i.e., considering waypoint locations at the GTs. Then, we iteratively optimize the 3D waypoint locations and GT visiting order based on Sections III-A and III-B, respectively. Next, \vspace{-1mm} \begin{figure}[htbp!] \setlength{\belowcaptionskip}{-4mm} \centering \includegraphics[width=0.49\textwidth]{f3.pdf} \caption{Comparison of the optimized UAV trajectories and traveling distances by different schemes.} \label{F3} \end{figure}with given $\bf{Q}$ and $\bf{Z}$, $\bf{\Psi}$ is optimized by solving (P2.2). 
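For reference, the $\mathcal{O}(2^KK^2)$ cost associated with an exact solution of the classic TSP corresponds to the Held--Karp dynamic program; a minimal self-contained sketch is given below (illustrative only -- the paper instead uses the binary-integer-programming formulation of \cite{Miller}, and the distances here are hypothetical):

```python
from itertools import combinations

def held_karp(dist):
    """Exact cost of a shortest closed tour through all K nodes,
    starting and ending at node 0, in O(2^K * K^2) time."""
    K = len(dist)
    # dp[(S, j)]: min cost of a path from node 0 visiting exactly the
    # node set S (a bitmask containing 0 and j) and ending at node j
    dp = {(1, 0): 0.0}
    for size in range(1, K):
        for subset in combinations(range(1, K), size):
            S = 1 | sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)
                dp[(S, j)] = min(dp[(prev, i)] + dist[i][j]
                                 for i in range(K) if (prev, i) in dp)
    full = (1 << K) - 1
    # close the tour back to node 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, K))

# 4-node example: the optimal tour 0-1-3-2-0 has cost 10+25+30+15 = 80
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
assert held_karp(dist) == 80
```

In the paper's setting, each node corresponds to a waypoint and the tour is anchored at the UAV's common initial/final location, so the exact approach is only practical for small $K$; heuristic TSP solvers trade optimality for scalability.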
The proposed algorithm terminates when no better solution can be found within a prescribed precision requirement, or when a maximum number of iterations is reached. The overall complexity of the proposed algorithm is analyzed as follows. For the waypoint location optimization subproblem, ${\bf{Q}}$ and ${\bf{Z}}$ are iteratively optimized using convex solvers based on the interior-point method, each with complexity $\mathcal{O}(K^{3.5})$. Letting $I_1$ denote the number of iterations of the BCD method, the total computational complexity for optimizing the waypoint locations is $\mathcal{O}(K^{3.5}I_1)$. For the visiting order optimization subproblem, the complexity of solving the classic TSP with the algorithm in \cite{Miller} is $\mathcal{O}(2^KK^2)$. As such, the overall complexity is $\mathcal{O}\left((2^KK^2{\rm{+}}K^{3.5}I_1)I_2\right)$, where $I_2$ denotes the number of inter-subproblem iterations. \vspace{-3mm} \section{Numerical Results} In this section, we provide numerical results to show the effectiveness of the proposed OP model as well as the corresponding 3D trajectory design. The parameters are set as $f_0{=}0.035$\;{m}, $w_0{=}0.0156$\;m, $l_0{=}0.0235$\;m {\cite{Sun}}, and $[{{\bf{w}}_I^T},{z_I}]{\rm{=}}[{{\bf{w}}_F^T},{z_F}]{\rm{=}}[0, 0, 0]$\;m. We consider $30$ GTs with the same radius of $r_k{\rm{=}}20$\;m, $k\!\in\!\mathcal{K}$, which are randomly distributed in a square area of $300{\rm{\times}}300$\;m$^2$. The resolution requirement of each GT, i.e., $\overline{\mathcal{I}}_k$, $k\!\in\!\mathcal{K}$, is independently and randomly set within $[0.01,0.4]$. Two benchmark schemes are considered: 1) 2D UAV trajectory under the conventional VP model, and 2) 2D UAV trajectory under the proposed OP model.
For the two benchmark schemes, the UAV altitude is fixed (i.e., $z_k = 100$\;m, $\forall k \in {\cal K}$) to ensure that at least one feasible waypoint can be found for each GT to satisfy the image resolution requirement. Specifically, in benchmark scheme 1, the image-taking waypoint for each GT is right above the GT, and thus the UAV trajectory can be obtained by solving the TSP based on these waypoints. In benchmark scheme 2, the UAV trajectory is obtained via the iterative algorithm proposed in Section III by alternately optimizing the horizontal waypoint locations in ${\bf{Q}}$ and the GT visiting order in ${\bf{\Psi}}$. Figs. \ref{F3}(a)-(c) show the UAV trajectories obtained by different schemes. It is observed from Fig. \ref{F3}(b) that compared to the UAV trajectory under the conventional VP model, the UAV's horizontal flight range can be greatly reduced by adopting the proposed OP model, even with the fixed altitude (as in benchmark scheme 2). Moreover, it is observed from Fig. \ref{F3}(c) that under the proposed OP model, the 3D UAV trajectory in general has a lower altitude than the 2D UAV trajectory. This is because the UAV can still satisfy the resolution requirement when taking the image of a GT at a lower altitude by exploiting a larger horizontal distance from the GT, thus resulting in a shorter traveling distance, which can be inferred from Fig. {\ref{F2}}(b). Fig. \ref{F3}(d) shows the traveling distances versus the number of iterations for all schemes. Specifically, the circled dots denote the traveling distances obtained by optimizing the GT visiting order, while the other dots correspond to the traveling distances obtained by optimizing the horizontal/vertical waypoint locations in ${\bf{Q}}$ or ${\bf{Z}}$.
It is observed that the proposed OP model with 2D trajectory yields much shorter distance than the conventional VP model with 2D trajectory, which is further reduced by the proposed OP model with 3D trajectory due to additional degrees-of-freedom in the vertical trajectory optimization (see Fig. \ref{F3}(c)). \vspace{-3mm} \section{Conclusions} In this correspondence, we proposed a novel OP model to characterize the resolution of images captured by an angle-rotatable camera mounted on a UAV. Under the proposed OP model, we formulated a 3D UAV trajectory optimization problem to minimize the UAV's traveling distance while maintaining a given resolution requirement for the images taken from multiple GTs. The formulated problem was shown to be a modified 3D TSPN problem, for which we proposed an iterative algorithm for finding an efficient solution, by alternately optimizing the image-taking waypoints and the visiting order of the GTs. Numerical results were presented to show the effectiveness of the proposed scheme compared to other benchmark schemes. \vspace{-2mm} \begin{appendices} \section{}\label{A} For ease of explanation, we illustrate in Fig. \ref{F1}(c) the 2D profile of Fig. \ref{F1}(a). Specifically, we have {\small $|E'F'|{\rm{=}}w_0$}, {\small $|T'O'|{\rm{=}}f_0$}, and {\small $\angle {OO'T}{\rm{=}}{\theta_k}$}. According to the {\emph{triangle similarity theorem}} (TST), we have {\small $\frac{{|{O'{E''}}|}}{{|O'O|}}{\rm{=}}\frac{{|E'E''|}}{{|{{O}E}|}}$} where {\small $|{{O'{O}}}|{\rm{=}}({{d_{u, k}}{\rm{-}}{f_0}})\cos {\theta_k}$, $|{{O'{E''}}}|{\rm{=}}{f_0}\cos {\theta_k}{\rm{-}} \frac{{{w_0}}}{2}\sin {\theta_k}$}, and {\small $|{{E'{E''}}}|{\rm{=}}{f_0}\sin {\theta_k}{\rm{+}}\frac{{{w_0}}}{2}\!\cos {\theta_k}$}. 
To ensure {\small $|{O'E''}|{\rm{\geq}}0$, ${\theta_k}$} should satisfy that $0{\rm{\leq}}{\theta_k}{\rm{\leq}}\arctan\frac{{2f_0}}{w_0}$, which is equivalent to \begin{small} \begin{equation}\label{E23} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} b_1z_k-{\left\| {{{\textbf{q}}_k}-{{\bf{w}}_k}} \right\|}\geq 0, \end{equation} \end{small}where $b_1=\frac{2f_0}{w_0}$. Then, {\small ${{|OE|}}$} can be obtained as \begin{small} \begin{equation}\label{E24} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} |{{OE}}| = \left( {{d_{u,k}} - {f_0}} \right)\cos {\theta_k} \times \frac{{{f_0}\sin {\theta_k} + \frac{{{w_0}}}{2}\cos {\theta_k}}}{{{f_0}\cos {\theta_k} - \frac{{{w_0}}}{2}\sin {\theta_k}}}. \end{equation} \end{small}Similar to {\small $|{OE}|$}, {\small $|{OF}|$} is given by \begin{small} \begin{equation}\label{E25} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} |{{OF}}| = \left( {{d_{u,k}} - {f_0}} \right)\cos {\theta_k} \times \frac{{{f_0}\sin {\theta_k} - \frac{{{w_0}}}{2}\cos {\theta_k}}}{{{f_0}\cos {\theta_k} + \frac{{{w_0}}}{2}\sin {\theta_k}}}. \end{equation} \end{small}Since {\small $|{EF}|\!=\!|{OE}|\!-\!|{OF}|$}, {\small $|{EF}|$} can be derived as \begin{small} \begin{equation}\label{E26} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} |{{EF}}| = \frac{{{f_0}{w_0}\left( {{d_{u,k}} - {f_0}} \right)\cos {\theta_k}}}{{f_0^2{{\cos }^2}{\theta_k} - \frac{{w_0^2}}{4}{{\sin }^2}{\theta_k}}}. \end{equation} \end{small}Similarly, {\small $|{AD}|$} and {\small $|{BC}|$} can be obtained as follows with the derivation omitted for brevity. 
\begin{small} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{align} \!\!\!\!\!\!|{{BC}}| = \frac{{{l_0}\left( {{d_{u,k}} - {f_0}} \right)\cos {\theta_k}}}{{{f_0}\cos {\theta_k} - \frac{{{w_0}}}{2}\sin {\theta_k}}}, |{{AD}}| = \frac{{{l_0}({{d_{u,k}} - {f_0}})\cos {\theta_k}}}{{{f_0}\cos {\theta_k} + \frac{{{w_0}}}{2}\sin {\theta_k}}}. \label{E28} \end{align} \end{small} According to the trapezoid area formula, i.e., {\small $S_k^c = \frac{1}{2} \cdot |{EF}| \cdot (|{AD}| + |{BC}|)$}, the camera's coverage area is given by \begin{small} \begin{equation}\label{E29} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} S_k^c = \frac{{4{({d_{u,k}} - {f_0})}^2}}{{b_1b_2}} \times \frac{1}{{{{( {1 - \frac{{1}}{{b_1^2}}{{\tan }^2}{\theta_k}})}^2}{{\cos }}{\theta_k}}}, \end{equation} \end{small}where $b_2 = \frac{2f_0}{l_0}$. Since {$f_0\ll{d_{u,k}}$}, we have {${\left( {{d_{u,k}}-{f_0}} \right)^2}\approx d_{u,k}^2 = \frac{{z_k^2}}{{{{\cos }^2}{\theta_k}}}$}. Thus, (\ref{E29}) can be rewritten as \begin{small} \begin{equation}\label{E30} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} S_k^c \approx \frac{{4z_k^2}}{{b_1b_2}} \times \frac{1}{{{{( {1 - \frac{{1}}{{b_1^2}}{{\tan }^2}{\theta_k}})}^2}{{\cos }^3}{\theta_k}}}. \end{equation} \end{small} \vspace{-2mm} \section{}\label{B} As shown in Fig. \ref{F1}(b), it is required that both {\small $|{FT}|$} (denoted by {\small $d_{1, k}$}) and {\small $|{HT}|$} (denoted by {\small $d_{2, k}$}) derived in the following should be no smaller than $r_k$. According to the TST, we have {\small $\frac{|{AD}|}{|{BC}|}=\frac{|{FT}|}{|{ET}|}$} where {\small $|{ET}|+|{FT}|=|{EF}|$}. Therefore, we obtain \begin{small} \begin{equation}\label{E31} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} d_{1, k}=|{FT}|=\frac{|{AD}|\times|{EF}|}{|{AD}|+|{BC}|}. 
\end{equation} \end{small}With {\small $|{AD}|$}, {\small $|{BC}|$}, and {\small $|{EF}|$} given in Appendix {\ref{A}}, {\small $d_{1,k}$} can be approximated as \begin{small} \begin{equation}\label{E32} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} d_{1, k} \approx \frac{{z_k^2 + {{\left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}^{\rm{2}}}}}{{b_1{z_k} + \left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}}. \end{equation} \end{small} The area of {\small ${ABCD}$} is given by \begin{small} \begin{equation}\label{E33} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} \begin{aligned} &\!\!S_{A\!B\!C\!D}=\frac{1}{2}\times(|{AD}|+|{BC}|)\times |{EF}|\\ &\!\!\!\!=\frac{1}{2}|{AD}|\times |{FT}|\!+\!\frac{1}{2}|{BC}|\times|{ET}|\!+\!2\times\frac{1}{2} |{AB}|\times |{TH}|, \end{aligned} \end{equation} \end{small} where \begin{equation}\label{E34} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} |{AB}| = \sqrt{{(|{BC}|-|{AD}|)^2}/{4}+|{EF}|^2}. \end{equation}Therefore, according to (\ref{E33})-(\ref{E34}), $d_{2, k}$ can be obtained as \begin{small} \begin{equation}\label{E35} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} d_{2, k} {\rm{=}} |{HT}| = \frac{|{AD}|\times|{BC}|\times|{EF}|}{(|{AD}|+|{BC}|)\times|{AB}|}. \end{equation} \end{small}With {\small $|{AD}|$}, {\small $|{BC}|$}, and {\small $|{EF}|$} given in Appendix {\ref{A}}, $d_{2, k}$ can be approximated as \begin{small} \begin{equation}\label{E36} \setlength{\abovedisplayskip}{2pt} \setlength{\belowdisplayskip}{2pt} d_{2, k} \approx \frac{{z_k^2 + {{\left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}^{\rm{2}}}}}{{{{( {b_2^2z_k^2 + (1 + b_2^2){{\left\| {{{\textbf{q}}_k} - {{\textbf{w}}_k}} \right\|}^2}})}^{\frac{1}{2}}}}}. \end{equation} \end{small}Note that, as in Appendix {\ref{A}}, we approximate {\small ${d_{u,k}}{\rm{-}}{f_0}$ as ${d_{u,k}}$} in (\ref{E32}) and (\ref{E36}) for simplicity.
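The closed-form expressions in the appendices lend themselves to a quick numerical cross-check: the trapezoid area computed directly from (\ref{E26}) and (\ref{E28}) should reproduce (\ref{E29}) exactly, and the exact $d_{1,k}=|FT|$ from (\ref{E31}) should match the approximation (\ref{E32}) up to the ${d_{u,k}}{\rm{-}}{f_0}\approx{d_{u,k}}$ step. An illustrative sketch using the parameter values of Sect.~IV and a hypothetical camera pose:

```python
import math

f0, w0, l0 = 0.035, 0.0156, 0.0235   # camera parameters from Sect. IV
b1, b2 = 2 * f0 / w0, 2 * f0 / l0
d, th = 100.0, 0.3                   # hypothetical distance d_{u,k} and tilt angle
c, s = math.cos(th), math.sin(th)

D = (d - f0) * c
EF = f0 * w0 * D / (f0 ** 2 * c ** 2 - w0 ** 2 / 4 * s ** 2)   # (E26)
BC = l0 * D / (f0 * c - w0 / 2 * s)                            # (E28)
AD = l0 * D / (f0 * c + w0 / 2 * s)                            # (E28)

# Trapezoid area vs. the closed form (E29): should agree to machine precision
S_trap = 0.5 * EF * (AD + BC)
S_closed = (4 * (d - f0) ** 2 / (b1 * b2)
            / ((1 - math.tan(th) ** 2 / b1 ** 2) ** 2 * c))
assert abs(S_trap - S_closed) < 1e-9 * S_closed

# Exact d_{1,k} = |FT| from (E31) vs. the approximation (E32),
# with z_k = d*cos(th) and ||q_k - w_k|| = d*sin(th)
d1_exact = AD * EF / (AD + BC)
z, h = d * c, d * s
d1_approx = (z ** 2 + h ** 2) / (b1 * z + h)
assert abs(d1_exact - d1_approx) < 1e-2 * d1_exact
```

The relative gap between `d1_exact` and `d1_approx` is on the order of $f_0/d_{u,k}$, which justifies the approximation for the distances of interest.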
\end{appendices} \vspace{-2mm}
\section{Introduction}\label{sec:intro} Induction is a fundamental proof method in dependently typed interactive theorem proving, and so in proof assistants such as Coq and Lean, the induction tactic is usually among the first tactics a novice encounters. Yet despite this elevated position, current induction tactics exhibit a number of usability issues. Experts have grown accustomed to these dark corners and routinely perform the busywork required to get around them. Newcomers to dependently typed theorem proving, however, often struggle to connect the workings of the induction tactic with their informal understanding of proof by induction. \redacted{Jasmin Blanchette} and colleagues noticed this when designing the \redacted{Logical Verification} course at \redacted{Vrije Universiteit Amsterdam}, which uses Lean to teach interactive theorem proving. They had to devote significant space and time to technical issues with Lean's standard induction tactic, which distracted from more fundamental topics. To shield novices (and experts) from these distractions, we need induction tactics which minimise the gap between formal and informal proof by induction. This paper describes an attempt at such a tactic. In particular, it addresses three usability issues with existing induction tactics. \paragraph{Indexed inductive types.} The standard induction tactics of Coq and Lean sometimes produce counterintuitive goals when used with indexed inductive types. Consider the type |fin|~|n| of natural numbers strictly less than |n|, which can be encoded as an indexed inductive type. Given this encoding, it is trivial on paper to prove by induction (or even mere case distinction) that |fin|~|0| is uninhabited. But when we apply the standard induction tactics of Coq and Lean to this goal, they produce a goal which asks us, unhelpfully, to prove |false| in an empty context.
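For concreteness, one possible Lean~3 encoding of such a type is sketched below (the name |fin'| and its constructor names are invented for illustration; the encoding used elsewhere may differ):

\begin{lstlisting}
-- A hypothetical indexed encoding of the naturals below n:
inductive fin' : ℕ → Type
| zero : ∀ {n}, fin' (n + 1)
| succ : ∀ {n}, fin' n → fin' (n + 1)

-- Both constructors target an index of the form n + 1, so case
-- analysis on an element of fin' 0 closes the goal outright:
example (x : fin' 0) : false := by cases x
\end{lstlisting}

Here |cases| succeeds because neither constructor's index unifies with |0|.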
Similar problems occur regularly when formalising programming language metatheory, which tends to use indexed inductive types extensively to define inductive predicates and relations. Coq and Lean both provide case splitting tactics which prove that |fin|~|0| is uninhabited, but which cannot deal with goals that require induction. While this is obviously useful, it is pedagogically unfortunate: case splitting should produce the same goals as induction, only without the induction hypotheses. This connection is obscured if case splitting and induction tactics handle indexed types differently. Coq (but not Lean) also provides |dependent| |induction|, an alternative induction tactic which handles indexed inductive types better. It proves our lemma about |fin|~|0| immediately. But this tactic creates a new issue: it often produces unnecessarily complex induction hypotheses. In one of the examples I discuss below, |dependent| |induction| creates the induction hypothesis \begin{lstlisting} ∀ S' t', (while (λ _, true) S, t) = (while (λ _, true) S', t') → false \end{lstlisting} which is obviously equivalent to |false|. My induction tactic works like |dependent| |induction|, but simplifies the induction hypotheses to eliminate redundant arguments. This combination yields intuitive goals in many situations involving indexed inductive types. \paragraph{Overly specific induction hypotheses.} Standard induction tactics tend to produce overly specific induction hypotheses. Consider this humble injectivity lemma: \begin{lstlisting} ∀ (n m : ℕ), n + n = m + m → n = m \end{lstlisting} If we perform induction on |n| after introducing |n|, |m| and the equation, we get an induction hypothesis about a fixed |m|: \begin{lstlisting} n + n = m + m → n = m \end{lstlisting} Unfortunately, this induction hypothesis does not help us make progress with the proof.
Instead, we need an induction hypothesis which generalises over all |m|: \begin{lstlisting} ∀ m, n + n = m + m → n = m \end{lstlisting} Convincing Coq or Lean to generate the more general induction hypothesis is not hard. We can introduce |m| after the induction (rather than before) or use special syntax offered by the standard induction tactics. However, this still presents a problem for novices: they must first recognise that the original induction hypothesis is not helpful and that generalising |m| is the appropriate remedy. Neither of these insights will be obvious to someone not yet familiar with the mechanics of interactive theorem proving. That is why my induction tactic does the opposite of the standard tactics: rather than generating the most specific induction hypotheses by default, it generates the most general ones. In effect, it generalises every hypothesis like we generalised |m|, with some restrictions to avoid obviously useless generalisations. This sometimes produces overly general induction hypotheses, but for novices, that is a better problem to have. They can simply specialise an induction hypothesis to recover the more specific version. \paragraph{Naming.} Induction tactics generate many new hypotheses and these hypotheses must be named. This may seem like a trivial concern, but Lean's standard induction tactic shows that it is not. 
Consider this simple lemma about a particular formulation of the transitive closure |tc|~|r| of a binary relation |r| (which in Lean would usually live in the universe |Prop| of propositions, but for simplicity I pretend throughout that there is only one universe |Type|): \begin{lstlisting} ∀ α (r : α → α → Type) (a b c : α) (h₁ : tc r a b) (h₂ : tc r b c), tc r a c \end{lstlisting} When we use Lean's standard induction tactic to perform induction on |h₁|, it generates a rather forbidding goal: \begin{lstlisting} α : Type r : α → α → Type c a b h₁_x h₁_y h₁_z : α h₁_hr : r h₁_x h₁_y h₁_ht : tc r h₁_y h₁_z h₁_ih : tc r h₁_z c → tc r h₁_y c h₂ : tc r h₁_z c ⊢ tc r h₁_x c \end{lstlisting} The new hypotheses' names are clearly too cumbersome: experts and novices alike will want to immediately rename almost everything. Worse, the names are misleading because the first element of the transitive chain, |a|, is now |h₁_x| while |b| has become |h₁_z|. Novices will struggle to make the connection between the old and new hypotheses, and thus to understand how this goal is connected to the lemma they wanted to prove. Coq's standard induction tactic generates more helpful names than Lean's, but it too misses this connection. To prevent the resulting confusion, my induction tactic uses a number of heuristics to generate names that reflect common intuitions about how induction works. For the |tc| example, it generates this goal: \begin{lstlisting} α : Type r : α → α → Type a y b c : α hr : r a y h₁ : tc r y b ih : ∀ c, tc r b c → tc r y c h₂ : tc r b c ⊢ tc r a c \end{lstlisting} Omitting the |h₁| prefixes and preserving the names |a|, |b| and |c| makes for a much more reasonable-looking goal. \medskip The new induction tactic, which addresses the above usability issues, is described in Sect.~\ref{sec:impl}, after I lay some terminological groundwork in Sect.~\ref{sec:induction}. 
The description proceeds chronologically through every step the new induction tactic takes when processing a goal. The new induction tactic could be implemented in any dependently typed proof assistant with indexed inductive types and a suitable metaprogramming framework. (Parts of it do, however, make use of the somewhat controversial axiom~K.) An implementation for Lean~3 is available in mathlib~\cite{mathlib}, Lean's de facto standard library.\footnote{The version of mathlib which contains the implementation described in this paper is available online at \url{https://github.com/leanprover-community/mathlib/tree/d36af184d154f2e99f60fec5cd71bb3e53899d5c}. The relevant source file is \texttt{src/\allowbreak tactic/\allowbreak induction.lean}. An archive of this code is available at \url{https://doi.org/10.5281/zenodo.4327209}.} The tactic was also used in the 2020 edition of the \redacted{Logical Verification} course, where it replaced Lean's standard induction tactic. This allowed the course authors to significantly simplify the lecture notes and accompanying code, since the tricks experts use to make Lean's standard induction tactic work did not need to be taught any more. The new induction tactic is among the larger tactics written in Lean's metaprogramming framework~\cite{ebner2017}. This provides an opportunity to evaluate how the framework fares on a moderately complex task. In Sect.~\ref{sec:meta}, I describe some problems I encountered while implementing the tactic, as well as possible workarounds and suggestions for improvements. This case study will hopefully be useful to aspiring Lean metaprogrammers and may guide the design of the metaprogramming framework in the upcoming fourth version of Lean. In summary, I make the following contributions: \begin{itemize} \item I describe an induction tactic that is more ergonomic than the state of the art. The design is geared particularly towards novice users, but experts should also find it easier to work with. 
\item I provide an implementation of this tactic in Lean~3, which previously lacked a convenient induction tactic. \item I give an experience report about Lean's metaprogramming framework, pointing out some pitfalls and suggesting improvements. \end{itemize} \section{Induction in Dependent Type Theory}\label{sec:induction} Induction in dependent type theories is intimately connected with indexed inductive types~\cite{dybjer1994}, a fundamental concept of most modern dependently typed theorem provers. Indexed inductive types generalise non-indexed inductive types such as natural numbers and lists and are often used to define inductive predicates and relations. A typical example of this use is the transitive closure of a binary relation, which can be encoded in Lean as follows: \begin{lstlisting} inductive tc {α : Type} (r : α → α → Type) : α → α → Type | base : ∀ x y (hr : r x y), tc x y | step : ∀ x y z (hr : r x y) (ht : tc y z), tc x z \end{lstlisting} The above defines a type family |tc| of type \begin{lstlisting} ∀ {α : Type} (r : α → α → Type), α → α → Type \end{lstlisting} The first argument of |tc|, |α|, will be left implicit, as indicated by the curly braces. The second argument is the relation |r| whose transitive closure we are taking. The third and fourth arguments are elements of |α| that are related by |tc|~|r|. The transitive closure is inductively generated by two rules corresponding to the two constructors of |tc|. The |base| constructor says that if two elements |x| and |y| are related by |r|, then they are also related by |tc|~|r|. The |step| constructor says that if |r| relates |x| and |y|, and |tc|~|r| relates |y| and |z|, then |tc|~|r| relates |x| and |z|. In the type of |tc|, we distinguish between two kinds of arguments: \emph{parameters} and \emph{indices}. Arguments that appear before the colon, here |α| and |r|, are parameters of |tc|. 
Parameters are implicitly quantified over in the types of |tc|'s constructors, and whenever |tc| appears in a constructor type, it is implicitly applied to the parameters. Thus, the full type of |base| is \begin{lstlisting} ∀ {α} {r : α → α → Type} (x y : α), r x y → tc r x y \end{lstlisting} The arguments of |tc| after the colon are its indices. Unlike parameters, these may vary freely in the constructor types, and indeed our constructors instantiate the indices of |tc| with different expressions. Each inductive type has an associated induction principle, the (dependent) \emph{recursor}, which reflects the fact that every closed element of the inductive type consists of finitely many constructor applications. In Lean, a recursor is added as an axiom whenever we define an inductive type~\cite{carneiromsc}. For |tc|, we get the recursor |tc.rec|, whose type appears in Fig.~\ref{fig:tc.rec}. This type is derived from |tc| as follows: \begin{itemize} \item The first two arguments, |α| and |r|, are the parameters of |tc|. \item |M| is the type we are constructing, also known as the \emph{motive} of the induction. It is a predicate over elements of |tc| (and its indices). \item |Base| and |Step| are \emph{minor premises} corresponding to the constructors of |tc|. They ask users to give one proof of |M| for the case where |tc|~|r|~|x|~|y| was proved by |base| and one for the case where |tc|~|r|~|x|~|y| was proved by |step|. In the |step| case, we may assume a proof of |M| for the recursive constructor argument |ht|. \item From all this data, |tc.rec| concludes |M|~|x|~|y|~|e| for an arbitrary element |e| of |tc|~|r|~|x|~|y|. We call |e| the \emph{major premise} of the induction. This is the hypothesis on which we perform induction. 
\end{itemize} \begin{figure} \begin{tabular}{c} \begin{lstlisting} ∀ α (r : α → α → Type) (M : ∀ x y, tc r x y → Type) (Base : ∀ x y (hr : r x y), M x y (base x y hr)) (Step : ∀ x y z (hr : r x y) (ht : tc r y z), M y z ht → M x z (step x y z hr ht)) x y (e : tc r x y), M x y e \end{lstlisting} \end{tabular} \caption{The type of \texttt{tc.rec}}\label{fig:tc.rec} \end{figure} To perform induction in Lean, then, means to apply the recursor of an inductive type. For an example, we return to the transitivity of the transitive closure: \noindent \begin{minipage}{\linewidth} \begin{lstlisting} ∀ α (r : α → α → Type) (x y z : α) (hxy : tc r x y) (hyz : tc r y z), tc r x z \end{lstlisting} \end{minipage} We first fix |α|, |r|, |x|, |y| and |z|. The proof then proceeds by recursion on |hxy|, so this is our major premise. The parameters, |α| and |r|, are already determined by this choice, as are the major premise indices |x| and |y|. For the motive, we choose \begin{lstlisting} M := λ (x y : α) (_ : tc r x y), tc r y z → tc r x z \end{lstlisting} Substituting this motive in the minor premises, we are left with one proof obligation for each constructor, corresponding to the cases of the induction. The proof of our lemma then reads: \begin{lstlisting} λ α r x y z (hxy : tc r x y), tc.rec α r (λ x y _, tc r y z → tc r x z) <proof of Base minor premise> <proof of Step minor premise> x y hxy \end{lstlisting} An induction tactic helps with this rather arduous exercise by automating much of it. Ideally, users do not have to contend with motives, parameters or indices and are presented only with one intuitive new goal for each constructor. The next section explains how to achieve this in many cases. \section{Implementation of the Induction Tactic}\label{sec:impl} The following subsections describe each step the new induction tactic takes to perform an induction, in chronological order.
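Before turning to the implementation, it may help to see the two elided minor premise proofs of the transitivity example spelled out. The following is only an illustrative sketch, written against the recursor type in Fig.~\ref{fig:tc.rec}; the binders of the |Step| case are renamed to |a|, |b| and |c| to avoid shadowing the fixed |z|:
\begin{lstlisting}
λ α r x y z (hxy : tc r x y),
tc.rec α r (λ x y _, tc r y z → tc r x z)
  (λ a b (hr : r a b) (hbz : tc r b z),
    step a b z hr hbz)
  (λ a b c (hr : r a b) (ht : tc r b c)
     (ih : tc r c z → tc r b z) (hcz : tc r c z),
    step a b z hr (ih hcz))
  x y hxy
\end{lstlisting}
In the |Base| case, a single application of |step| suffices; in the |Step| case, the induction hypothesis |ih| turns |hcz| into a proof of |tc|~|r|~|b|~|z|, which |step| then extends to a proof of |tc|~|r|~|a|~|z|.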
\subsection{Generalisation of Complex Indices}\label{sec:index-generalisation} The first problem our induction tactic must solve is the treatment of complex index arguments in a major premise. To see what this means, consider a typical task in programming language metatheory: a simple lemma about the big-step semantics of a toy imperative language. The abstract syntax of our toy language is defined by the (non-indexed) inductive type |stmt| in Fig.~\ref{fig:stmt}. Its constructors represent, from top to bottom, a no-op statement; variable assignment; sequencing of statements; and a while loop. The |state| type mentioned by some constructors represents the current program heap as a map from variable names to their current values (which, for simplicity, are always natural numbers). The loop condition of a while loop is given as a predicate on the heap state. \begin{figure} \begin{tabular}{c} \begin{lstlisting} inductive stmt : Type | skip : stmt | assign : string → (state → ℕ) → stmt | seq : stmt → stmt → stmt | while : (state → Type) → stmt → stmt \end{lstlisting} \end{tabular} \caption{Syntax of a toy imperative language}\label{fig:stmt} \end{figure} The language's big-step semantics are given by the indexed inductive type |big_step| in Fig.~\ref{fig:big_step}, omitting some constructors. This type defines a relation between a program |S|, an initial state |s| and a final state |t|. If |big_step| |(S,|~|s)|~|t| is derivable, then |S|, when executed in heap state |s|, terminates in heap state |t|. We write |(S,|~|s)|~|⇒|~|t| for |big_step| |(S,|~|s)|~|t|. To enable this notation in Lean~3, and following standard informal practice, the first argument of |big_step| is a pair type (so |big_step| is partially uncurried). 
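For instance, the judgment that |skip| terminates immediately in its initial state is witnessed by a single constructor application. The following is a sketch; in actual Lean code the constructor names may need qualification (e.g.\ |stmt.skip| versus |big_step.skip|) to disambiguate:
\begin{lstlisting}
example (s : state) : (skip, s) ⇒ s :=
big_step.skip
\end{lstlisting}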
\begin{figure} \begin{tabular}{c} \begin{lstlisting} inductive big_step : stmt × state → state → Type | skip {s} : big_step (skip, s) s | while_true {b : state → Type} {S s t u} (hcond : b s) (hbody : big_step (S, s) t) (hrest : big_step (while b S, t) u) : big_step (while b S, s) u | while_false {b : state → Type} {S s} (hcond : ¬ b s) : big_step (while b S, s) s | ... \end{lstlisting} \end{tabular} \caption{Big-step semantics of the toy language}\label{fig:big_step} \end{figure} Now we want to prove that the infinite loop does not terminate. In Lean, this means solving the following goal: \begin{lstlisting} S : stmt s t : state h : (while (λ _, true) S, s) ⇒ t ⊢ false \end{lstlisting} Above the turnstile appear the local hypotheses of our goal, most importantly |h|, which says that the infinite loop steps to some state |t| and thus terminates. Right of the turnstile is our target, the canonical empty type |false|. On paper, this goal is easily proven by induction on the derivation of |h|. Lean and Coq's default induction tactics, however, fail us. Applying them yields unprovable subgoals. This is because in the type of |h|, |big_step| has a \emph{complex} index. A term is complex if it is anything other than a local hypothesis. If such a term appears as an index of an inductive type, trouble ensues. Here, the offending complex index is the first argument of |big_step|, \begin{lstlisting} (while (λ _, true) S, s) \end{lstlisting} A naive induction tactic now proceeds as follows.
Our target is |false|, which depends neither on the hypothesis |h| nor its indices, so the motive of the induction is the constant function \begin{lstlisting} M := λ (x : stmt × state) (t : state) (p : x ⇒ t), false \end{lstlisting} Constructing the type of |big_step|'s recursor according to the schema from Sect.~\ref{sec:induction}, we get the following minor premise for the |skip| constructor: \begin{lstlisting} ∀ (s : state), M (skip, s) s skip \end{lstlisting} Yet this gives a plainly unprovable goal if we substitute the motive |M|: \begin{lstlisting} ∀ (s : state), false \end{lstlisting} The root cause of this issue is that by applying the recursor like we did, we effectively forgot that the first index of the major premise involved a |while|, not a |skip|. As a result, we cannot recognise that the major premise could not have been constructed by |big_step|'s |skip| constructor. This deficiency of induction tactics in the presence of complex indices is well known. The traditional solution, in the context of dependent type theory, is due to McBride~\cite{mcbride2002}. The remainder of this section describes a variant of his procedure, with one major change. McBride's tactic analyses arbitrary elimination principles, determines which of their arguments lead to problems similar to our complex index problems and generalises those arguments. In contrast, we only support the standard recursors of inductive types, for which we know that issues like the one we have seen are only caused by complex indices. This makes our tactic less general, but it also considerably simplifies the implementation (and its presentation below). In an educational setting, where custom elimination principles are rarely used, this seems like an acceptable trade-off. Coq's |dependent| |induction| uses a very similar restricted variant of McBride's approach (which is, to my knowledge, not described in the literature). 
Coq does support custom elimination principles with |dependent| |induction|, but in this case the tactic still only generalises complex indices. This capability could also be added, with moderate engineering effort, to our tactic. McBride's solution to the complex index problem is to replace any complex index |i| with a new hypothesis |Hi|, called an \emph{index placeholder}, and to add an \emph{index equation} |Hi|~|=|~|i| to the target. This ensures that we do not lose information about the value of the index. Applying this transformation yields an equivalent goal: \begin{lstlisting} S : stmt s t : state Hi : stmt × state h : Hi ⇒ t ⊢ Hi = (while (λ _, true) S, s) → false \end{lstlisting} Then we proceed as before. But since our goal now depends on the first index of |h|, we generate a different motive for the induction: \begin{lstlisting} λ (x : stmt × state) (t : state) _, x = (while (λ _, true) S, s) → false \end{lstlisting} The minor premise for |skip| changes accordingly, leaving us with this goal for the |skip| case: \begin{lstlisting} S : stmt s s' : state ieq : (skip, s') = (while (λ _, true) S, s) ⊢ false \end{lstlisting} This goal is provable because the equation |ieq|, which is derived from the index equation, is contradictory. We have, in effect, remembered that the index of |big_step| was a |while|, not a |skip|. To make this index generalisation procedure work for more complex goals, we must address two technical complications. First, when we replace a complex index in the major premise, we generally want to also replace it in the target and in the types of other hypotheses, to make sure that the goal remains type-correct. This is a somewhat crude heuristic since the replacement may itself introduce type errors, but it is right more often than wrong. However, we never replace the index in hypotheses that occur in the type of the major premise. A second complication arises when there are dependencies between the indices of an inductive family. 
Consider, for example, the family \begin{lstlisting} F : ∀ (x : X) (y : Y x), Type \end{lstlisting} where |x| and |y| are indices, and suppose that we want to perform induction on the hypothesis |h|~|:|~|F|~|t|~|u| (with |t| and |u| complex terms). The index generalisation procedure then replaces |t| with a new hypothesis |Ht|~|:|~|X| such that |Ht|~|=|~|t| and |u| with a new hypothesis |Hu|~|:|~|Y|~|Ht| such that |Hu|~|=|~|u|. But this last equation is not well-typed: |u| has type |Y|~|t|, not |Y|~|Ht|. In this situation, we must use a \emph{heterogeneous} equation, written |Hu|~|==|~|u|, where the two sides may have different types. At this point, one might become concerned for novice users of the induction tactic: would they not get overwhelmed with index placeholders and index equations? Fortunately, Sect.~\ref{sec:index-unification} shows that the new hypotheses can usually be eliminated automatically after we have applied the recursor, so our users do not get to see them. \subsection{Generalisation of Induction Hypotheses}\label{sec:ih-generalisation} One of the more arcane aspects of Coq and Lean's existing induction tactics is that they ask their users to specify which hypotheses can vary during the induction and which are fixed. This leads to counterintuitive behaviour even in simple cases. Pierce~\cite{sf1} illustrates the problem with the following injectivity lemma: \begin{lstlisting} ∀ (n m : ℕ), n + n = m + m → n = m \end{lstlisting} An unsuspecting novice will mechanically introduce |n|, |m| and the equation, then perform induction on |n|. This produces the following goal for the successor case: \begin{lstlisting} n m : ℕ ih : n + n = m + m → n = m h : n + n + 2 = m + m ⊢ n + 1 = m \end{lstlisting} Unfortunately, this gives us an induction hypothesis, |ih|, that is not applicable: it presumes |n|~|=|~|m| when we have, according to |h|, |n| = |m|~|-|~|1|. 
The solution is to let |m| vary during the induction instead of keeping it fixed, which gives a more sensible goal: \begin{lstlisting} n m : ℕ ih : ∀ m, n + n = m + m → n = m h : n + n + 2 = m + m ⊢ n + 1 = m \end{lstlisting} Now |ih| can be instantiated with |m|~|-|~|1| to close the goal. Generalising the induction hypothesis in this manner is not difficult, and existing tactics provide special syntax for it. But to a novice, it will be far from obvious that this is why our first proof attempt gets stuck. Novices often have trouble recognising that a goal is unprovable in the first place, and when they do, they may suspect any number of errors on their part. A novice-friendly induction tactic should therefore not fix every hypothesis by default, as the existing tactics do, but rather \emph{generalise} every hypothesis. This sometimes leads to an overly general induction hypothesis, but that is much less harmful: the user does not get stuck but merely has to apply the induction hypothesis to some additional arguments. Our new tactic also offers a convenient syntax to fix some or all hypotheses; if all hypotheses are fixed, the tactic behaves like the existing induction tactics. Implementing this automatic generalisation is straightforward in most cases. We simply revert (\enquote*{unintroduce}) all hypotheses before applying the recursor. However, there are three classes of hypotheses that should not be reverted: \begin{enumerate} \item Hypotheses which the user has explicitly fixed and their dependencies, i.e.\ those hypotheses which occur in the type of a fixed hypothesis. If we were to revert a dependency of a fixed hypothesis, we would also have to revert the fixed hypothesis. \item Hypotheses on which the major premise depends. Such hypotheses cannot be reverted without also reverting the major premise. \item Hypotheses which would not make the induction hypotheses more general if we were to revert them. 
\end{enumerate} The last class deserves further analysis. Usually, when we perform an induction, all hypotheses are relevant to the proof, so generalising a hypothesis leads to a more general induction hypothesis. However, that is not always the case. In longer proofs, it is occasionally convenient to prove a helper lemma (by induction) inline, without leaving the proof environment. These lemmas may involve only some of the hypotheses, but if we follow our generalise-everything approach, we also generalise all the other hypotheses in the context which have nothing to do with the helper lemma. This gives us an induction hypothesis with additional redundant arguments. Consider this example: \begin{lstlisting} x : X n m : ℕ ⊢ n + m = m + n \end{lstlisting} The first hypothesis, |x|, has nothing to do with the rest of the goal --- perhaps we were in the middle of a proof involving |x| and decided to prove commutativity of addition inline as a helper lemma. To that end, we perform induction on |n|. The naive generalisation algorithm would now revert both |x| and |m|, yielding an induction hypothesis with an obviously redundant argument: \begin{lstlisting} ∀ (x : X) (m : ℕ), n + m = m + n \end{lstlisting} To prevent this, we revert a hypothesis |h| only if it meets at least one of the following criteria: \begin{enumerate} \item |h| occurs in the target. Recall that the motive of the induction, which determines the induction hypothesis, is derived from the target. So if the target has the form |∀|~|x,|~|P|~|x| instead of |P|~|h|, we get a more general induction hypothesis. In the commutativity example, |m| fulfils this criterion, so it is generalised. \item |h| depends on the major premise, or on any of the dependencies of the major premise. Then, |h| is likely to be a property of the major premise that is relevant to the induction. 
For instance, if we have a major premise |n|~|:|~|ℕ|, a hypothesis |h|~|:| |n|~|>|~|0| and a target |P|~|n|, the motive of the induction should be derived from the generalised target |n|~|>|~|0|~|→| |P|~|n|. Otherwise we would not be able to use the fact that |n|~|>|~|0| during the induction, and in particular we would not be able to discharge the case for |n|~|=|~|0| by noting that |0|~|≯|~|0|. The same reasoning also applies to dependencies of the major premise: performing the induction may give us additional information about these dependencies, so hypotheses mentioning them may be of interest. \end{enumerate} Conversely, any hypothesis |h| that does not meet either criterion --- such as |x| in the commutativity example --- should not be generalised. Such hypotheses have no connection to either the target or the major premise, so generalising them only adds redundant arguments to the induction hypothesis. Of course, our criteria only prevent the most obvious forms of over-generalisation: for the commutativity proof, |m| does not need to be generalised either. \medskip This step concludes the preprocessing, so now our tactic applies the recursor. To do so, we would usually have to generate a motive, which involves solving a higher-order unification problem. Luckily, Lean has a built-in heuristic that generates correct motives most of the time, so we do not have to concern ourselves with this issue here. By applying the recursor, we generate one new goal for each case of the induction (i.e.\ each minor premise). The next steps are applied to each of these goals individually. \subsection{Unification of Index Equations}\label{sec:index-unification} In Sect.~\ref{sec:index-generalisation}, we introduced placeholders for the complex indices of the major premise and index equations to remember what the placeholders stand for. We can usually eliminate these equations again after the recursor has been applied, using McBride's |Qnify| tactic~\cite{mcbride1996}.
|Qnify| implements a form of first-order unification. It works on a queue of equations which initially contains the equations for each index, starting with the first. The order is important when indices depend on each other since unification of earlier index equations may simplify later ones. The two sides of each equation are unified by applying the following set of rules until no rule applies any more: \paragraph{Substitution.} For an equation |eq|~|:| |x|~|=|~|t| or |eq|~|:| |t|~|=|~|x|, where |x| is a local hypothesis and |t| is a term in which |x| does not occur, delete |eq| and replace |x| with |t| everywhere in the goal. \paragraph{Injection.} For |eq|~|:| |C|~|t₁|~|...| |tₙ|~|=| |C|~|u₁|~|...| |uₙ|, where |C| is a constructor of an inductive type, delete |eq| and add new equations |tᵢ|~|=|~|uᵢ|. The new equations are added to the front of the queue, so they are processed immediately after this step. Some of the equations may have to be heterogeneous. \paragraph{Conflict.} For |eq|~|:| |C|~|t₁|~|...| |tₙ|~|=| |D|~|u₁|~|...| |uₘ|, where |C| and |D| are distinct constructors, solve the goal since |eq| is contradictory. \paragraph{Deletion.} For |eq|~|:| |t|~|=|~|u|, where |t| and |u| are definitionally equal, delete |eq|. \paragraph{Cycle.} For |eq|~|:| |x|~|=|~|t| (or symmetric), where |x| appears under constructors in |t|, solve the goal since |eq| is contradictory. The previous condition means that |t| must be of the form \begin{lstlisting} C₁ ... (C₂ ... (Cₙ ... x ...) ...) ... \end{lstlisting} where the |Cᵢ| are all constructors of the same inductive type and |n| is not zero. For example, this rule would match the equation |x|~|=| |succ|~|(succ|~|(succ|~|x))|, where |succ| is the successor constructor of |ℕ|. \paragraph{Homogenisation.} For |eq|~|:| |t|~|==|~|u|, where |t|~|:|~|T|, |u|~|:|~|U| and |T| is definitionally equal to |U|, replace |eq| with the equivalent homogeneous equation |t|~|=|~|u|. 
This rule typically applies because the types |T| and |U| were initially distinct --- hence the heterogeneous equation |t|~|==|~|u| --- but became definitionally equal during the unification of earlier equations. \medskip Recall the example from Sect.~\ref{sec:index-generalisation}. After generalising complex indices, we ended up with this goal in one of the cases of the induction: \begin{lstlisting} S : stmt s s' : state ieq : (skip, s') = (while (λ _, true) S, s) ⊢ false \end{lstlisting} Applying |Qnify| to the index equation |ieq|, we first use the injection rule since both sides of the equation are applications of the pair constructor |(_,_)|. This gives us new equations: \begin{lstlisting} ieq₁ : skip = while (λ _, true) S ieq₂ : s' = s \end{lstlisting} We then apply the conflict rule to |ieq₁| since |skip| and |while| are different constructors, solving the goal. Thus, users of our tactic never get to see this case of the induction. The homogenisation rule, which deals with heterogeneous equations, is only valid in certain type theories, namely those in which Streicher's axiom~K~\cite{hofmann1994} is derivable. This includes Lean, but excludes some other popular proof assistants, particularly those that seek to be compatible with the univalence axiom of homotopy type theory~\cite{hottbook}, which is incompatible with axiom~K. Induction tactics for such type theories would not use McBride's index generalisation method but rather that of Cockx et al.~\cite{cockx2014, cockx2018}, who show how to achieve a similar effect without using axiom K. Implementing the unification procedure is straightforward except for the cycle rule. To prove that an equation \begin{lstlisting} eq : x = C₁ (... (Cₙ x) ...) \end{lstlisting} is contradictory, we use a size measure |sizeof| which counts the number of constructors in a term. Lean generates this measure for every inductive type. 
Applying it to both sides of the equation yields an equation in |ℕ|: \begin{lstlisting} eq : sizeof x = sizeof x + n \end{lstlisting} For positive |n|, this can be discharged by applying an appropriate lemma. \subsection{Simplification of Induction Hypotheses}\label{sec:ih-simp} The index placeholders and index equations introduced in Sect.~\ref{sec:index-generalisation} also occur as additional arguments to the induction hypotheses generated by the recursor application. But like the equations themselves, these arguments can often be trivially eliminated. Prior work does not address this issue: Coq's |dependent| |induction| tactic makes no attempt to simplify the induction hypotheses, and Lean's |cases| tactic, which also uses McBride's index generalisation technique, does not generate induction hypotheses in the first place. Let us again consider the |big_step| example from Sect.~\ref{sec:index-generalisation}, but now we focus on the first inductive case. After the index equations have been eliminated, we get the goal shown in Fig.~\ref{fig:before-ih-simp}, corresponding to the |while_true| constructor of |big_step|. \begin{figure*} \begin{tabular}{c} \begin{lstlisting} S : stmt s t u : state h₁ : (S, s) ⇒ t h₂ : (while (λ _, true) S, t) ⇒ u ih₁ : ∀ S' t', (S, s) = (while (λ _, true) S', t') → false ih₂ : ∀ S' t', (while (λ _, true) S, t) = (while (λ _, true) S', t') → false ⊢ false \end{lstlisting} \end{tabular} \caption{A goal before simplification of induction hypotheses}\label{fig:before-ih-simp} \end{figure*} The induction hypotheses, |ih₁| and |ih₂|, have been generalised over two index placeholders, |S'| and |t'|, and an index equation. But in |ih₂|, these are all redundant. We can only hope to apply |ih₂| if we instantiate |S'| with |S| and |t'| with |t|; any other instantiation (modulo propositional equality) would not satisfy the index equation. This is a common case, so we postprocess the induction hypotheses to eliminate such redundant arguments.
To do so, we first replace each index placeholder in the type of |ih₂| with a fresh metavariable: \begin{lstlisting} (while (λ _, true) S, t) = (while (λ _, true) ?S', ?t') → false \end{lstlisting} We then iterate through the index equations --- here only one --- and unify the left-hand side of each with the right-hand side, using Lean's built-in unification procedure. If unification finds a unique solution, the metavariables are assigned accordingly: \begin{lstlisting} ?S' := S ?t' := t \end{lstlisting} Now we specialise the induction hypothesis, applying it to the terms we assigned to the metavariables: \begin{lstlisting} (while (λ _, true) S, t) = (while (λ _, true) S, t) → false \end{lstlisting} Finally, we delete any index equation whose left-hand side is definitionally equal to its right-hand side. This leaves us with a pleasantly simple induction hypothesis: \begin{lstlisting} ih₂ : false \end{lstlisting} This procedure does not always succeed in eliminating the index placeholders and index equations. If we apply it to the first induction hypothesis, |ih₁|, it instantiates the |t'| index placeholder with |s|, but it does not find a unique solution for |S'|. The induction hypothesis thus remains unwieldy: \begin{lstlisting} ih₁ : ∀ S', (S, s) = (while (λ _, true) S', s) → false \end{lstlisting} This is pedagogically unfortunate, as students are unlikely to fully understand why an equation appears in |ih₁|. Simplifying the equation to eliminate the |s| on both sides would help a little, but I have not encountered enough such situations in practice to justify complicating the implementation. Besides redundant index equations, an induction hypothesis can also contain contradictory index equations, e.g. \verb|skip|~\verb|=| |while|~|b|~|S|. The induction hypothesis can then never be applied and should be deleted. 
Unfortunately, my induction tactic currently does not do this, due to a limitation of Lean's built-in unification procedure, which does not allow us to distinguish between terms that are certainly unequal, such as |skip| and |while|~|b|~|S|, and terms that might be propositionally equal, such as |S| and |while|~|b|~|?S'|. The |Qnify| procedure from Sect.~\ref{sec:index-unification} could be adapted to this use case. \subsection{Naming of Constructor Arguments}\label{sec:naming} A surprisingly large portion of the new induction tactic is dedicated to naming. Finding intuitive names is important, particularly in an educational setting. When the names are chosen poorly, novices (and occasionally experts) may have trouble understanding how the new goals relate to the old goal. Lean's standard induction tactic uses a simple, predictable naming scheme, but the generated names are plainly too cumbersome for use in education. Consider again the fact that the transitive closure operator from Sect.~\ref{sec:induction} is transitive: \begin{lstlisting} ∀ α (r : α → α → Type) (a b c : α) (h₁ : tc r a b) (h₂ : tc r b c), tc r a c \end{lstlisting} Performing induction on |h₁|, Lean's induction tactic generates a rather intimidating goal in the inductive case, shown in Fig.~\ref{fig:naming:lean-old}.
The goal illustrates a number of common problems: \begin{figure*} \centering \begin{subfigure}{.3\textwidth} \begin{lstlisting} α : Type r : α → α → Type c a b h₁_x h₁_y h₁_z : α h₁_hr : r h₁_x h₁_y h₁_ht : tc r h₁_y h₁_z h₁_ih : tc r h₁_z c → tc r h₁_y c h₂ : tc r h₁_z c ⊢ tc r h₁_x c \end{lstlisting} \caption{Lean's standard induction tactic}\label{fig:naming:lean-old} \end{subfigure} \begin{subfigure}{.3\textwidth} \begin{lstlisting} α : Type r : α -> α -> Type c x y z : α hr : r x y h₁ : tc r y z IHh₁ : tc r z c -> tc r y c h₂ : tc r z c ⊢ tc r x c \end{lstlisting} \caption{Coq's standard induction tactic}\label{fig:naming:coq} \end{subfigure} \begin{subfigure}{.3\textwidth} \begin{lstlisting} α : Type r : α → α → Type a y b c : α hr : r a y h₁ : tc r y b ih : ∀ c, tc r b c → tc r y c h₂ : tc r b c ⊢ tc r a c \end{lstlisting} \caption{The new induction tactic}\label{fig:naming:lean-new} \end{subfigure} \caption{Goals for the same proof produced by three different induction tactics}\label{fig:naming} \end{figure*} \begin{itemize} \item All new hypotheses generated by the induction tactic are prefixed with |h₁|. This clarifies their origin, but it also makes the goal hard to understand at a glance. Names like |h₁_x| are simply too long compared to a plain |x|. \item The first and middle elements of the transitive chain, which were called |a| and |b| in the lemma statement, are now called |x| and |z| (disregarding the |h₁| prefix). One could hardly blame a novice for being confused about how the old and new hypotheses relate to each other. \item As if to make the previous problem worse, Lean's standard induction tactic does not remove the hypotheses |a| and |b|, even though they are now redundant and have been effectively replaced by |h₁_x| and |h₁_z|. \end{itemize} Coq's standard induction tactic fares better, producing the goal in Fig.~\ref{fig:naming:coq}. The tactic drops the |h₁| prefixes and correctly removes the redundant hypotheses. 
Yet it, too, renames |a| to |x| and |b| to |z|. The new induction tactic fixes this last issue, producing the goal in Fig.~\ref{fig:naming:lean-new}. It recognises the connection between old and new hypotheses and names the new ones accordingly. The name |y| is not ideal, but other than that, no name would be out of place in an informal proof. To achieve this effect, we employ the following algorithm. Suppose we are in the case of the induction corresponding to a constructor |C|. Then we need to name the following new hypotheses: one hypothesis for each of |C|'s arguments; one induction hypothesis for each of |C|'s recursive arguments; and any index placeholders and index equations we have introduced and not subsequently eliminated. For the last category, a simple schema suffices. Index placeholders and index equations are usually eliminated anyway, but if they remain in the goal, we name them |index_i| and |induction_eq_i| for some |i|. Naming the induction hypotheses is also fairly straightforward. If there is only a single induction hypothesis, we name it |ih|. Otherwise, we use names like |ih_e|, where |e| is the hypothesis to which this induction hypothesis applies (meaning the hypothesis corresponding to the recursive constructor argument which gives rise to |ih_e|). For example, if we perform induction on some expression type, we may get subexpressions |e₁| and |e₂| and induction hypotheses |ih_e₁| and |ih_e₂|. Coq uses a similar scheme. Lean's standard induction tactic simply numbers the generated induction hypotheses, which is usually less helpful. Finally, we consider the constructor arguments, where the naming problem becomes interesting. Suppose we want to name the hypothesis corresponding to an argument |a|~|:|~|A| of constructor |C|. Then we try each rule from the following list, stopping at the first one that applies. 
These heuristics seem to yield intuitive names in many cases --- though humans use so many different heuristics that trying to incorporate them all would be a fool's errand. When our heuristics fail, users can of course give their own names. In larger developments, one should usually give explicit names anyway to make the proof script more robust. Still, having the induction tactic generate sensible names makes for a better user experience in the experimental phase of proof development, when the proof script is not yet polished. \paragraph{Recursion.} If |a| is a recursive argument, it is named after the major premise. So if we eliminate a natural number |n|, the number in the inductive case is also called |n|; if we eliminate an expression |e|, its subexpressions are called |e|, |e_1|, etc. These are likely to be good names since the subexpressions are of the same type as the parent expression. In Fig.~\ref{fig:naming:lean-new}, |h₁| is derived from a recursive argument in this way. Coq also uses this rule. \paragraph{Index association.} If |a| is associated with an index argument, it is named after that index argument. This is the rule responsible for our improvement over Coq in the example from Fig.~\ref{fig:naming}. We say that the argument |x| of |tc|'s |step| constructor is associated with the first index of |tc|. In the hypothesis |h₁|, which we are performing induction on, that first index is instantiated with |a|, so the hypothesis corresponding to |x| is named |a|. Capturing this situation in general requires a somewhat involved criterion. Suppose we are naming an argument \verb|a|~\verb|:|~\verb|A| of a constructor |C| whose return type is |F|~|j₁|~|...|~|jₙ|, where |F| is an inductive family with |n| indices. We say that |a| is associated with the |i|th index if it occurs in |jᵢ|. Now suppose our major premise is |e|~|:| |F|~|k₁|~|...|~|kₙ|. Consider those |kᵢ| such that |a| is associated with the |i|th index. 
If these |kᵢ| are all the same hypothesis |h|, and if the type of |h| is definitionally equal to the type of |a|, then the hypothesis corresponding to |a| is named after |h|. The stipulation about definitionally equal types exists to prevent confusion when a constructor argument is associated with an index of a different type. In such cases, it is usually better not to name the argument after the index, since names are often related to the types of the named entities. For instance, if an argument |a|~|:|~|α| is associated with an index |as|~|:|~|list α|, we do not want the hypothesis corresponding to |a| to be called |as|. The restriction could perhaps be relaxed to allow, for instance, an argument of type |list|~|α| to be associated with an index of type |list|~|β|, but I have found no need for this in practice. \paragraph{Named arguments.} If |a| is named in the definition of the constructor |C|, that name is used. In our example, the |Step| constructor has an argument called |hr|, so the corresponding hypothesis is also called |hr|. Coq also uses this rule. \paragraph{Type-based naming.} If |a|'s type, |A|, is associated with a list of typical variable names, we use these. Such an association is given by an instance of the type class |variable_names| for |A|, which contains a list of names. Later, when the tactic looks for a name for |a|, it performs a type class instance search for |variable_names|~|A|. If it finds an instance, it uses the first unused name from the associated list. We give such instances for some standard types (associating, for example, the names |n| and |m| with the type |ℕ|), but users can override these with their own higher-priority instances. The type class mechanism also allows us to give natural names for data structures such as lists: if a type |A| is associated with the variable name |x|, |list|~|A| is by default associated with the name |xs|. 
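For illustration, such an instance might be declared as follows (a sketch: the exact shape of the |variable_names| structure is assumed here, not quoted from mathlib):
\begin{lstlisting}
-- Hypothetical: associate the names n and m with ℕ.
-- A user-defined, higher-priority instance for ℕ would
-- override this default during instance search.
instance : variable_names ℕ := ⟨[`n, `m]⟩
\end{lstlisting}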
This mechanism was developed for the new induction tactic, but can be used by other tactics as well. \paragraph{Fallback.} If none of the above rules apply, |a| receives a default name: |h| if |A| is a proposition and |x| otherwise. \medskip The first three rules are ordered somewhat arbitrarily. I have found the given order to be the one that most often matches common naming preferences, but there are many examples where a different order would fit better. The example from Fig.~\ref{fig:naming} would arguably be improved if |h₁| was called |ht| instead, using the name from the constructor declaration rather than the recursion rule. But switching the priority of these rules would also change other names for the worse. If the name chosen for |a|, say |n|, is already in use, we fall back to |n_1|, |n_2|, etc. This complicates the implementation because many of the induction tactic's processing steps may remove hypotheses, so we only know which names are in use after all steps have finished. To address this issue, we initially give the introduced hypotheses temporary names, then run the naming algorithm as the last step of the tactic to obtain the final names. \section{Evaluation of Lean's Metaprogramming Framework}\label{sec:meta} I have implemented the tactic described in the previous section in the metaprogramming framework~\cite{ebner2017} of Lean~3. This provides an opportunity to evaluate how the framework fares on a moderately complex task. \subsection{Overview of the Framework}\label{sec:meta-overview} Like other modern metaprogramming approaches such as Mtac2~\cite{kaiser2018} or Idris's elaborator reflection~\cite{christiansen2016}, Lean metaprograms are written in Lean itself rather than its implementation language C++. They are marked with the |meta| keyword, which signifies a stage separation: |meta| definitions may refer to non-|meta| ones, but not the other way around. 
Metaprograms can therefore be inconsistent (e.g.\ they need not terminate) without compromising the consistency of the non-|meta| fragment. At the same time, metaprograms have access to all the data structures and functions defined in non-|meta| Lean, avoiding duplicate effort. Most metaprograms are \emph{tactics}, which means they have type |tactic|~|α| for some |α|. The |tactic| type family is a Haskell-style monad which provides an imperative embedded domain-specific language for writing tactics. Tactics operate on a tactic state with zero or more goals. A goal has a local context, containing the current list of hypotheses, and a target type; the objective of a tactic is usually to construct an element of the target type (represented as an abstract syntax tree). To do this, tactics can make use of a large number of built-in tactics which manipulate hypotheses and the target, add and remove goals, query and add definitions, unify expressions, check whether two expressions are definitionally equal, and more. This framework generally works well and leads to a remarkably seamless integration between regular programs and metaprograms. Still, while implementing the new induction tactic, I encountered some situations where it was less helpful or clear than it could be. The next subsections discuss these cases, which will hopefully be useful to prospective Lean metaprogrammers as well as designers of similar metaprogramming systems. \subsection{Tracking of Hypotheses}\label{sec:tracking} As mentioned, most tactics operate within a local context containing the hypotheses that are currently available. Internally, these hypotheses are represented as expressions identified by a unique name. They also have an external name which is shown to the user and is not necessarily unique in the context. Many tactics manipulate the context in some way, e.g.\ by adding or removing hypotheses or changing the types of existing hypotheses. 
The trouble with this is that any such modification changes the unique names of any affected hypotheses. As a result, any expression involving the changed hypotheses becomes invalid: it refers to a hypothesis that, to Lean, does not exist any more. As an example, consider the unification procedure from Sect.~\ref{sec:index-unification}. It operates on a queue of index equations, unifying each in turn. Naturally, we would want to represent this queue as a list of expressions, with each expression identifying one equation hypothesis. But this does not work. Unifying the first equation may change the types of subsequent equations and thus their unique names. When we then turn to the next equation in the queue, Lean will rightfully point out that the context contains no hypothesis with that unique name. The entire tail of the queue has been potentially invalidated. I encountered this issue multiple times --- it occurs whenever one needs to track hypotheses across calls to potentially context-altering tactics, which are numerous and do not always document the fact that they may invalidate hypotheses. One could imagine various workarounds for this issue. For the unification procedure, I ended up identifying hypotheses not by unique name but by external name. For this to work, the external names must be unique in the context, which in this case can be ensured since the induction tactic controls these names. This workaround is less applicable when dealing with preexisting hypotheses, whose external names may not be unique. Another possible approach would be to have context-altering tactics report a mapping from old unique names to new unique names, which would allow callers to update any stored expressions. This would, however, require changes to many tactics and callers would still have to manually perform the update. 
Perhaps the most convenient solution to this issue would be the introduction of yet another name for hypotheses: a \emph{stable name} which would remain unchanged when a hypothesis is modified. This would most closely reflect the tactic writer's intuition that changing the type of a hypothesis does not make it a different hypothesis. \subsection{Definitional Equality}\label{sec:defeq} Any author of tactics for a dependently typed proof assistant must contend with definitional equality: different expressions that are equal up to computation. For instance, |ℕ| and |let|~|T|~|:=| |ℕ|~|in|~|T| are definitionally equal types. Many tactics should treat them as interchangeable, though this depends on the tactics' use cases and user expectations. Checking for definitional equality, which involves partially normalising expressions, carries a sometimes considerable performance cost, so it would be too much to ask for a metaprogramming framework that fully abstracts over definitional equality. Still, Lean additionally complicates the matter in two ways. First, it lacks a comprehensive programming interface for pattern-matching on expressions up to definitional equality. Tactic authors must manually normalise expressions as much as necessary, using relatively rudimentary normalisation tactics, if they want to take definitional equality into account. Novice metaprogrammers can hardly be expected to do this accurately, and experts may be tempted to cut corners and ignore the issue. This shifts the burden onto tactic users, who need to make sure that their goals have just the right shape. While implementing the new induction tactic, I added the beginnings of an up-to-definitional-equality matching framework to mathlib, but so far this covers only a few constructions. The second way in which Lean complicates definitional equality is by introducing an additional notion of transparency. 
Each definition is marked with one of several transparency values, which indicate how eagerly the definition should be unfolded during normalisation. This is reasonable, and perhaps necessary: some definitions should indeed be unfolded almost always, others almost never. However, the programming interface around transparency encourages mistakes. Most tactics which take a transparency argument make this argument optional, so if one does not supply an explicit transparency, a default value is used. This makes it easy to make subtle mistakes (and I have made a few) where transparency is not propagated or the wrong transparency is used. It does not help that different tactics have different default transparency values. \subsection{Elaboration}\label{sec:elaboration} When writing a tactic, one must often construct expressions of some specific form. Lean provides essentially two ways to do this: directly, by writing out the abstract syntax tree of an expression (perhaps as a syntactically more pleasant quotation), or by elaborating a pre-expression. Pre-expressions are an abstract syntax representation of the expressions that users write in Lean's surface syntax. They are turned into regular expressions in a process called elaboration, which fills in many details --- mainly implicit and instance arguments --- that users may thankfully omit. Lean also allows us to use elaboration in tactics, which can be convenient. When writing expressions directly, we have to fill in many implicit arguments as well as universe parameters, a somewhat obscure feature of the type theory that one would usually prefer not to think about. Elaboration can do this for us, making tactics more readable and maintainable since they need only specify the main parts of an expression. Unfortunately, Lean's programming interface again makes using elaboration more difficult than it needs to be. 
This is mostly due to easily avoidable limitations: many functions that construct or deconstruct expressions operate only on fully elaborated expressions, even though they could also work with pre-expressions. As a result, Lean encourages its users to elaborate early, before an expression is fully constructed. But then the elaboration algorithm lacks information about the context in which a partially constructed expression will be used, so it can infer fewer implicit arguments. Due to these limitations, I have usually found it more convenient to write out the fully elaborated expression after all. \subsection{Nested and Mutual Inductive Types}\label{sec:ginductive} Lean takes the dictum that a proof system's kernel should be as small as possible more seriously than most dependently typed theorem provers. One ramification of this philosophy is that Lean's kernel does not support nested and mutual (collectively: generalised) inductive types. Instead, when a user writes a generalised inductive type, Lean compiles it to an equivalent non-generalised inductive type during elaboration. This approach makes the kernel smaller and thus more trustworthy. However, Lean's implementation also illustrates a major disadvantage: hiding the internal compilation process from users of the system requires much engineering effort. Lean does this imperfectly, and as a result, generalised inductive types are a leaky abstraction. In metaprograms, the abstraction is in fact nonexistent: metaprograms only get to see the internal representation of a generalised inductive type. Thus, if a metaprogram wants to, for example, report an error about a particular generalised inductive type to the user, it has to reverse-engineer that generalised inductive type from its internal representation. This illustrates a more general issue with tactics which act on the kernel language rather than the source language: details about the user-level program invariably get lost in translation. 
The induction tactic suffers from this in a small way. One of the naming rules from Sect.~\ref{sec:naming} checks whether a constructor argument is named in the definition of the constructor. But in the kernel language, all arguments are named, so if a user writes the constructor type |X|~|→|~|Y|, our tactic sees |∀|~|(a|~|:|~|X),|~|Y|. Thus, when the tactic encounters a nondependent argument with name |a| (or |a_1| etc.), it assumes that this argument was not explicitly named --- but that assumption can be mistaken. Perhaps the user really wrote |∀|~|(a|~|:|~|X),|~|Y| in the hope that our naming rule would pick up the argument name |a|. (The community edition of Lean~3 recently changed the elaborator so that the default name for an unnamed argument is not |a| but $\breve{\alpha}$, assuming that no user would choose that character as a variable name. This mitigates the issue, but $\breve{\alpha}$ now occasionally shows up in goals to confuse Lean neophytes.) Perhaps the most effective way to prevent such loss of information would be to associate to each kernel expression the surface expression from which it was elaborated. For an inductive type, this would be the |inductive| declaration the user wrote; for a top-level definition, the equations given to the equation compiler. In the extreme, elaboration would become reversible, so tactics would be able to reconstruct the full program text. Lean's treatment of generalised inductive types in metaprograms also illustrates another issue. Effectively the only primitive metaprogram that is aware of generalised inductive types is a normalisation procedure, users of which can choose whether constructors of generalised inductive types should be unfolded to their internal representation. As it happens, this is sufficient for the purposes of our induction tactic. 
But it shows how a feature that was supposed to be dealt with during elaboration still permeates large parts of the system: every tactic that uses normalisation must decide what to do about constructors of generalised inductive types. Given such complications, one might wonder whether it would have been preferable to put generalised inductive types in the kernel language after all. \subsection{Open Expressions}\label{sec:open} While the previous sections have been critical of some parts of the metaprogramming framework, this section discusses a reasonable design choice that may nevertheless be surprising to novice tactic writers: the handling of open expressions. An expression is open when it contains at least one free variable. Such expressions occur naturally when we deconstruct terms with binders. For example, consider the following type: \begin{lstlisting} ∀ (n : ℕ) (f : fin n), P n f \end{lstlisting} We can deconstruct this type into argument types |ℕ| and |fin|~\verb|#0| and result type |P|~\verb|#1|~\verb|#0|. The \verb|#0| and \verb|#1| are free variables, represented as De Bruijn indices, which refer to the variable bound by, respectively, the first and second preceding binder. Lean lets us construct such open expressions, but it does not let us do much with them since most built-in tactics only work on closed expressions. Open expressions cannot, for example, be type-checked or unified. Instead, Lean's metaprogramming framework encourages users to treat expressions as \emph{locally nameless}~\cite{mcbride2004}. This means we effectively use hypotheses as free variables: while deconstructing an expression, we immediately replace any free variables with fresh hypotheses of the appropriate type. Our above example, so deconstructed, has argument types |ℕ| and |fin|~|cn| and result type |P|~|cn|~|cf|, where |cn|~|:|~|ℕ| and |cf|~|:| |fin|~|cn| are fresh hypotheses. 
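As a small illustration, mathlib's |mk_local_pis| performs exactly this replacement (a sketch, assuming its usual signature |expr → tactic (list expr × expr)|, and using a closed variant of the example type so that it elaborates):
\begin{lstlisting}
run_cmd do
  t ← tactic.to_expr ``(∀ (n : ℕ) (f : fin n), n = n),
  -- Replace each ∀-bound variable with a fresh hypothesis;
  -- hyps are the fresh local constants, concl mentions them
  -- instead of De Bruijn indices and is therefore closed.
  (hyps, concl) ← tactic.mk_local_pis t,
  tactic.trace hyps,
  tactic.trace concl
\end{lstlisting}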
This representation is considerably easier to work with, not only because Lean prefers it but also because we do not have to track the contexts of each expression as closely. Observe, for example, that in the first decomposition of our example, the \verb|#0| in the second argument and the \verb|#0| in the result type refer to different arguments. The locally nameless representation avoids such confusion. There is one downside to this representation: Lean makes no particular effort to optimise the construction and deconstruction of locally nameless expressions, so these operations can be somewhat inefficient. \section{Conclusion}\label{sec:conclusion} I have shown how to build a user-friendly induction tactic which is particularly suited to an educational setting. The tactic liberates its users from some of the technical, nonessential difficulties with existing induction tactics. It automatically generalises complex indices, ensuring that information contained in the indices of a hypothesis is not lost. It simplifies the resulting induction hypotheses, which would otherwise be obscured by redundant arguments. It automatically generalises induction hypotheses as much as possible so that users do not get stuck with an overly specific induction hypothesis. And it uses various heuristics to generate suitable names for all the new hypotheses it introduces. These usability improvements may spare experts some of the tedium of pre- and postprocessing their goals, and they should lift a considerable cognitive burden from novices. I have also discussed some issues with Lean's metaprogramming framework which I encountered while implementing the new induction tactic. Some of these are easily fixable (but impactful) limitations of the programming interface; others point to deeper issues with aspects of the framework's design. I hope that this discussion will help make Lean's already pleasant metaprogramming even better. 
\begin{acks} Jasmin Blanchette helped determine what features a novice-friendly induction tactic should have, provided many test cases and commented in great detail on drafts of this paper. Anne Baanen, Floris van Doorn, Gabriel Ebner, Rob Lewis and the anonymous reviewers gave detailed and insightful feedback on drafts of this paper and on the underlying code. The Lean Zulip community, particularly Mario Carneiro and Gabriel Ebner, patiently answered my many questions about Lean metaprogramming. Many thanks! This project was funded by the \grantsponsor{NWO}{NWO}{https://www.nwo.nl/en} under the Vidi programme (project No.~\grantnum{NWO}{016.Vidi.189.037}, Lean Forward). \end{acks} \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} We investigate an unsuspected connection between non-harmonious logical connectives, such as Prior's {\it tonk}, and quantum computing. We argue that non-harmonious connectives model the information erasure, the non-reversibility, and the non-determinism that occur, among other places, in quantum measurement. More concretely, we introduce a propositional logic with a non-harmonious connective $\odot$ (read: ``sup'', for ``superposition'') and show that its proof language forms the core of a quantum programming language. \subsection{Insufficient, harmonious, and excessive connectives} In natural deduction, to prove a proposition $C$, the elimination rule of a connective $\vartriangle$ requires a proof of $A \vartriangle B$ and a proof of $C$ using, as extra hypotheses, exactly the premises needed to prove the proposition $A \vartriangle B$, with the introduction rules of the connective $\vartriangle$. This principle of inversion, or of harmony, has been introduced by Gentzen \cite{Gentzen} and developed, among others, by Prawitz \cite{Prawitz} and Dummett \cite{Dummett} in natural deduction, by Miller and Pimentel \cite{MillerPimentel} in sequent calculus, and by Read \cite{Read04,Read10,Read} for the rules of equality. For example, to prove the proposition $A \wedge B$, the introduction rule, in the usual additive style, of the conjunction requires proofs of $A$ and $B$ $$\irule{\Gamma \vdash A & \Gamma \vdash B} {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}}$$ Hence, to prove a proposition $C$, the generalized elimination rule of the conjunction \cite{SchroederHeister,Parigot,NegriPlato} requires a proof of $A\wedge B$ and one of $C$, using, as extra hypotheses, the propositions $A$ and $B$ $$\irule{\Gamma \vdash A \wedge B & \Gamma, A, B \vdash C} {\Gamma \vdash C} {\mbox{$\wedge$-e}}$$ Here we say that the extra hypotheses $A$ and $B$ are {\em provided} by the elimination rule, as they appear in the left-hand side of the premise. 
In the same way, the propositions $A$ and $B$ are {\em required} by the introduction rule, as they appear in the right-hand side of the premises. This principle of inversion can thus be formulated as the fact that the propositions required by the introduction rule are the same as those provided by the elimination rule. It enables the definition of a reduction process where the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} & \irule{\pi_3}{\Gamma, A, B \vdash C}{} } {\Gamma \vdash C} {\mbox{$\wedge$-e}}$$ reduces to $(\pi_1/A,\pi_2/B)\pi_3$, that is, the proof $\pi_3$ where the use of the axiom rule with the propositions $A$ and $B$ has been replaced with the proofs $\pi_1$ and $\pi_2$. In the same way, to prove the proposition $A \vee B$, the introduction rules of the disjunction require a proof of $A$ or a proof of $B$ $$\irule{\Gamma \vdash A} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i1}} \qquad\qquad \irule{\Gamma \vdash B} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i2}}$$ hence, to prove a proposition $C$, the elimination rule of the disjunction requires a proof of $A\vee B$ and two proofs of $C$, one using, as extra hypothesis, the proposition $A$ and the other the proposition $B$ $$\irule{\Gamma \vdash A \vee B & \Gamma, A \vdash C & \Gamma, B \vdash C} {\Gamma \vdash C} {\mbox{$\vee$-e}}$$ and a proof reduction process can be defined in a similar way. The property that the elimination rule provides exactly the propositions required by the introduction rules can be split into two properties: that it provides no more, and that it provides no less (called ``harmony'' and ``reversed harmony'' in \cite{JacintoReadSL17}). We can also imagine connectives that do not verify this inversion principle, either because the elimination rule provides propositions not required by the introduction rule, or because the introduction rule requires propositions not provided by the elimination rule, or both. 
When the propositions provided by the elimination rule are not all required by the introduction rule, we call the connective {\em insufficient}. When the propositions provided by the elimination rule are required by the introduction rule, but some propositions required by the introduction rule are not provided by the elimination rule, we call it {\em excessive}. An example of an {\em insufficient} connective is Prior's {\it tonk} \cite{Prior} whose introduction rule requires the proposition $A$, but whose elimination rule provides the proposition $B$, which is not required by the introduction rule $$\irule{\Gamma \vdash A} {\Gamma \vdash A~\mbox{\it tonk}~B} {\mbox{{\it tonk}-i}} \qquad \qquad \irule{\Gamma \vdash A~\mbox{\it tonk}~B & \Gamma, B \vdash C} {\Gamma \vdash C} {\mbox{{\it tonk}-e}}$$ Because of this insufficiency, the following proof cannot be reduced $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} } {\Gamma \vdash A~\mbox{\it tonk}~B} {\mbox{{\it tonk}-i}} & \irule{\pi_2}{\Gamma, B \vdash C}{} } {\Gamma \vdash C} {\mbox{{\it tonk}-e}}$$ An example of an {\em excessive} connective is the connective $\bullet$ whose introduction rule requires the propositions $A$ and $B$, but whose elimination rule provides the proposition $A$, but not $B$, although both are required by the introduction rule $$\irule{\Gamma \vdash A & \Gamma \vdash B} {\Gamma \vdash A \bullet B} {\mbox{$\bullet$-i}} \qquad \qquad \irule{\Gamma \vdash A \bullet B & \Gamma, A \vdash C} {\Gamma \vdash C} {\mbox{$\bullet$-e}}$$ This connective has the same introduction rule as conjunction, but a different elimination rule. In terms of the more common projection-style elimination rules of conjunction, it amounts to a conjunction equipped with only one of its two elimination rules. 
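The insufficiency of {\it tonk} is precisely what makes it infamous: chaining its introduction and elimination rules derives an arbitrary proposition $B$ from any provable proposition $A$, even though the resulting proof cannot be reduced
$$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{}}
               {\Gamma \vdash A~\mbox{\it tonk}~B}
               {\mbox{{\it tonk}-i}}
        & \irule{}{\Gamma, B \vdash B}{\mbox{ax}}}
       {\Gamma \vdash B}
       {\mbox{{\it tonk}-e}}$$
Excessive connectives such as $\bullet$, in contrast, do not compromise consistency in this way.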
For such connectives, a proof reduction process can be defined, for example, the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \bullet B} {\mbox{$\bullet$-i}} & \irule{\pi_3}{\Gamma, A \vdash C}{} } {\Gamma \vdash C} {\mbox{$\bullet$-e}}$$ can be reduced to $(\pi_1/A)\pi_3$. Another example is the connective $\odot$ that has the introduction rule of the conjunction and the elimination rule of the disjunction $$\irule{\Gamma \vdash A & \Gamma \vdash B} {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}} \qquad\qquad \irule{\Gamma \vdash A \odot B & \Gamma, A \vdash C & \Gamma, B \vdash C} {\Gamma \vdash C} {\mbox{$\odot$-e}}$$ In this case also, proofs can be reduced. Moreover, several proof reduction processes can be defined, exploiting, in different ways, the excess of the connective. For example, the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}} & \irule{\pi_3} {\Gamma, A \vdash C}{} & \irule{\pi_4}{\Gamma, B \vdash C} {} } {\Gamma \vdash C} {\mbox{$\odot$-e}}$$ can be reduced to $(\pi_1/A)\pi_3$; it can be reduced to $(\pi_2/B)\pi_4$; or it can be reduced, non-deterministically, to either $(\pi_1/A)\pi_3$ or $(\pi_2/B)\pi_4$. 
Finally, to keep both proofs, we can add a rule ``parallel'' $$\irule{\Gamma \vdash A & \Gamma \vdash A} {\Gamma \vdash A} {\mbox{par}}$$ and reduce it to $$\irule{\irule{(\pi_1/A)\pi_3} {\Gamma \vdash C} {} & \irule{(\pi_2/B)\pi_4} {\Gamma \vdash C} {} } {\Gamma \vdash C} {\mbox{par}}$$ A final example is the quantifier $\faex$, which has the introduction rule of the universal quantifier and the elimination rule of the existential quantifier $$\irule{\Gamma \vdash A} {\Gamma \vdash \faex x~A} {\mbox{$\faex$-i $x$ not free in $\Gamma$}} \qquad\quad \irule{\Gamma \vdash \faex x~A & \Gamma, A \vdash C} {\Gamma \vdash C} {\mbox{$\faex$-e $x$ not free in $\Gamma, C$}}$$ \longversion{The proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} } {\Gamma \vdash \faex x~A} {\mbox{$\faex$-i}} & \irule{\pi_2}{\Gamma, A \vdash C}{} } {\Gamma \vdash C} {\mbox{$\faex$-e}}$$ can be reduced, non-deterministically, to $((t/x)\pi_1/A)(t/x)\pi_2$, for any term $t$. } The quantifier $\nabla$ \cite{MillerTiu}, defined in sequent calculus rather than natural deduction, may also be considered as an excessive quantifier, as it has the right rule of the universal quantifier and the left rule of the existential one. But it involves a clever management of variable scoping, which we do not address here. \subsection{Information loss} \label{informationloss} With harmonious connectives, when a proof is built with an introduction rule, the information contained in the proofs of the premises of this rule is preserved. For example, the information contained in the proof $\pi_1$ is {\em present} in the proof $\pi$ $$\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}}$$ in the sense that $\pi_1$ is a subproof of $\pi$. But it is moreover accessible. 
We say that a subproof $\pi'$ at tree-position $p$ in $\pi$ is {\em accessible}, if there exists a context $\kappa$, such that for all proofs $\pi''$, putting the proof $\pi[\pi'']_p$ where $\pi''$ is grafted at tree-position $p$ in $\pi$, in the context $\kappa$ yields a proof $\kappa[\pi[\pi'']_p]$ that reduces to $\pi''$. Indeed, putting the proof $$\vcenter{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}}} \qquad \textrm{in the context} \qquad \vcenter{\irule{\irule{[~]} {\Gamma \vdash A \wedge B} {} & \irule{}{\Gamma, A, B \vdash A}{\mbox{ax}} } {\Gamma \vdash A} {\mbox{$\wedge$-e}}}$$ yields the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} & \irule{}{\Gamma, A, B \vdash A}{\mbox{ax}} } {\Gamma \vdash A} {\mbox{$\wedge$-e}}$$ that reduces to $\pi_1$. And the same holds for the proof $\pi_2$. The situation is different with an excessive connective: the excess of information, required by the introduction rule, and not returned by the elimination rule in the form of an extra hypothesis in the required proof of $C$ is lost. For example, the information contained in the proof $\pi_2$ is present in the proof $\pi$ $$\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \bullet B} {\mbox{$\bullet$-i}}$$ but it is inaccessible as there is no context such that, for all $\pi_2$, putting the proof $$\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \bullet B} {\mbox{$\bullet$-i}}$$ in that context yields a proof that reduces to $\pi_2$. 
The information contained in the proofs $\pi_1$ and $\pi_2$ is present in the proof $$\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}}$$ but its accessibility depends on the way we decide to reduce the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}} & \irule{\pi_3} {\Gamma, A \vdash C}{} & \irule{\pi_4}{\Gamma, B \vdash C} {} } {\Gamma \vdash C} {\mbox{$\odot$-e}}$$ If we reduce it systematically to $(\pi_1/A)\pi_3$, then the information contained in $\pi_1$ is accessible, but that contained in $\pi_2$ is not. If we reduce it systematically to $(\pi_2/B)\pi_4$, then the information contained in $\pi_2$ is accessible, but not that contained in $\pi_1$. If we reduce it non-deterministically to $(\pi_1/A)\pi_3$ or to $(\pi_2/B)\pi_4$, then the information contained in both $\pi_1$ and $\pi_2$ is accessible, but non-deterministically. If we reduce it to $$\irule{\irule{(\pi_1/A)\pi_3} {\Gamma \vdash C} {} & \irule{(\pi_2/B)\pi_4} {\Gamma \vdash C} {} } {\Gamma \vdash C} {\mbox{par}}$$ then the information contained in both $\pi_1$ and $\pi_2$ is inaccessible. Indeed, the information contained in the proof $\pi_1$ is present in the proof $$\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash A}{} } {\Gamma \vdash A} {\mbox{par}}$$ but it is inaccessible as there is no context such that for all $\pi_1$ putting the proof $$\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash A}{} } {\Gamma \vdash A} {\mbox{par}}$$ in that context yields a proof that reduces to $\pi_1$. The same holds for $\pi_2$. 
Note that, when the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}} & \irule{\pi_3} {\Gamma, A \vdash C}{} & \irule{\pi_4}{\Gamma, B \vdash C} {} } {\Gamma \vdash C} {\mbox{$\odot$-e}}$$ is reduced, non-deterministically, to $(\pi_1/A)\pi_3$ or to $(\pi_2/B)\pi_4$, the information contained in $\pi_2$ or in $\pi_1$, respectively, is erased: it is not even present in the reduct. When it is reduced to $$\irule{\irule{(\pi_1/A)\pi_3} {\Gamma \vdash C} {} & \irule{(\pi_2/B)\pi_4} {\Gamma \vdash C} {} } {\Gamma \vdash C} {\mbox{par}}$$ then the information is inaccessible, but it remains present in the proof. So, while harmonious connectives model information preservation, reversibility, and determinism, excessive connectives model information erasure, non-reversibility, and non-determinism. Such information erasure, non-reversibility, and non-determinism occur, for example, in quantum physics, where the measurement of the superposition of two states does not yield both states back. The introduction rules alone do not define the meaning of such non-harmonious connectives, and neither do the elimination rules alone. The discrepancy between the meaning conferred by the introduction rules and the elimination rules, and the information loss it implies, are part of the meaning of such connectives. \subsection{Quantum physics and quantum languages} Several programming languages have been proposed to express quantum algorithms, for example \cite{AltenkirchGrattageLICS05,SelingerValironMSCS06,ZorziMSCS16,Lineal,ZXBook17,LambdaS,DiazcaroGuillermoMiquelValironLICS19}. The design of such quantum programming languages raises two main questions. The first is to take into account the linearity of unitary operators, for instance to avoid cloning, and the second is to express the information erasure, non-reversibility, and non-determinism of measurement.
The $\odot$ connective gives a new solution to this second problem. Qubits can be seen as proofs of the proposition $\top \odot \top$, in contrast with bits, which are proofs of $\top\vee\top$, and measurement can be easily expressed with the elimination rule of $\odot$ (Section \ref{measure}). In previous work, we have attempted to formalize superposition in the $\lambda$-calculus. The calculus Lambda-$\mathcal S$~\cite{LambdaS} contains a primitive constructor $+$ and a primitive measurement symbol $\pi$, together with a rule reducing $\pi (t + u)$ non-deterministically to $t$ or to $u$. The superposition $t + u$ can be considered as the pair $\pair{t}{u}$. Hence, it should have the type $A \wedge A$. In other words, it is a proof-term of the proposition $A \wedge A$. In System I \cite{SystemI}, various type-isomorphisms have been introduced, in particular the commutativity isomorphism $A \wedge B \equiv B \wedge A$, hence $t + u \equiv u + t$. In such a system, where $A \wedge B$ and $B \wedge A$ are identical, it is not possible to define the two elimination rules as the two usual projection rules $\pi_1$ and $\pi_2$ of the $\lambda$-calculus. They were replaced with a single projection parametrized by a proposition $A$: $\pi_A$, such that if $t:A$ and $u:B$ then $\pi_A(t + u)$ reduces to $t$ and $\pi_B(t + u)$ to $u$. When $A = B$, so that $t$ and $u$ both have type $A$, the proof-term $\pi_A(t + u)$ reduces, non-deterministically, to $t$ or to $u$, like a measurement operator. These works on Lambda-${\cal S}$ and System I brought to light that the pair superposition / measurement, in a quantum programming language, behaves like a pair introduction / elimination, for some connective, in a proof language, as the succession of a superposition and a measurement yields a term that can be reduced. In System I, this connective was assumed to be a commutative conjunction, with a modified elimination rule, leading to a non-deterministic reduction.
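The behaviour of this type-indexed projection can be sketched executably. This is a hypothetical encoding of ours, not System I's syntax: each component of a superposition carries its proposition, and non-determinism is again modelled by the set of possible reducts.

```python
def proj(ty, sup):
    """All possible reducts of the projection pi_ty applied to a
    superposition sup = ((t, type_of_t), (u, type_of_u))."""
    return {term for (term, typ) in sup if typ == ty}

# Types differ: the projection is deterministic.
assert proj("A", (("t", "A"), ("u", "B"))) == {"t"}
assert proj("B", (("t", "A"), ("u", "B"))) == {"u"}

# A = B: the projection is non-deterministic, like a measurement.
assert proj("A", (("t", "A"), ("u", "A"))) == {"t", "u"}
```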
But, as the measurement of the superposition of two states does not yield both states back, this connective should probably be excessive. Moreover, as, to prepare the superposition $a \ket 0 + b \ket 1$, we need both $\ket 0$ and $\ket 1$ and the measurement, in the basis $\ket 0, \ket 1$, yields either $\ket 0$ or $\ket 1$, this connective should have the introduction rule of the conjunction, and the elimination rule of the disjunction. Hence, it should be the connective $\odot$. In this paper, we present a propositional logic with the connective $\odot$, a language of proof-terms, the $\odot$-calculus (read: ``the sup-calculus''), for this logic, and we prove a proof normalization theorem (Section \ref{seclogic}). We then extend this calculus, introducing scalars to quantify the propensity of a proof to reduce to another (Section \ref{secquantifying}) and show (Section \ref{secquantum}) that its proof language forms the core of a quantum programming language. A vector $\left(\begin{smallmatrix}a\\b\end{smallmatrix}\right)$ will be expressed as the proof $a.* + b.*$ of $\top \odot \top$, where $*$ is the symbol corresponding to the introduction rule of $\top$, $+$ that of $\odot$, and $a$ and $b$ are scalars. So, although propositional logic with $\odot$ is not a logic to reason about quantum programs, some of its propositions can be seen as types of quantum programs. 
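To illustrate the intended reading, here is a hypothetical Python sketch (the tuple encoding and function names are ours): the vector is the proof $a.* + b.*$, and measurement is an $\odot$-elimination whose branch is chosen with a propensity that we take, by assumption, proportional to $|a|^2$ and $|b|^2$.

```python
import random

def qubit(a, b):
    """The vector (a, b) as the proof a.* + b.* of T (.) T, tuple-encoded."""
    return ("+", ("scal", a, "*"), ("scal", b, "*"))

def measure(q, rng=random.random):
    """Sup-elimination read as measurement: choose the left or right
    branch with propensity |a|^2 vs |b|^2; return the outcome 0 or 1."""
    _, (_, a, _), (_, b, _) = q
    p0 = abs(a) ** 2 / (abs(a) ** 2 + abs(b) ** 2)
    return 0 if rng() < p0 else 1

# Basis states are measured deterministically.
assert measure(qubit(1, 0)) == 0
assert measure(qubit(0, 1)) == 1
```

A proper superposition such as `qubit(3, 4)` yields 0 or 1 non-deterministically, with only one of the two components surviving, as in the discussion of information erasure above.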
\section{Propositional logic with \texorpdfstring{$\odot$}{O}} \label{seclogic} \begin{figure}[t] \[ \irule{A\in\Gamma} {\Gamma \vdash A} {\mbox{ax}} \qquad \irule{\Gamma \vdash A & \Gamma \vdash A} {\Gamma \vdash A} {\mbox{par}} \qquad \irule{} {\Gamma \vdash \top} {\mbox{$\top$-i}} \qquad \irule{\Gamma \vdash \bot} {\Gamma \vdash C} {\mbox{$\bot$-e}} \] \[ \irule{\Gamma, A \vdash B} {\Gamma \vdash A \Rightarrow B} {\mbox{$\Rightarrow$-i}} \qquad\qquad \irule{\Gamma \vdash A\Rightarrow B & \Gamma \vdash A} {\Gamma \vdash B} {\mbox{$\Rightarrow$-e}} \] \[ \irule{\Gamma \vdash A & \Gamma \vdash B} {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} \qquad\qquad \irule{\Gamma \vdash A \wedge B & \Gamma, A, B \vdash C} {\Gamma \vdash C} {\mbox{$\wedge$-e}} \] \[ \irule{\Gamma \vdash A} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i1}} \qquad\quad \irule{\Gamma \vdash B} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i2}} \qquad\quad \irule{\Gamma \vdash A \vee B & \Gamma, A \vdash C & \Gamma, B \vdash C} {\Gamma \vdash C} {\mbox{$\vee$-e}} \] \[ \irule{\Gamma \vdash A & \Gamma \vdash B} {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}} \qquad\qquad \irule{\Gamma \vdash A \odot B & \Gamma, A \vdash C & \Gamma, B \vdash C} {\Gamma \vdash C} {\mbox{$\odot$-e}} \] \caption{The deduction rules of propositional logic with $\odot$\label{figdeductionrules}} \end{figure} We consider a constructive propositional logic with the usual connectives $\top$, $\bot$, $\Rightarrow$, $\wedge$, and $\vee$, (as usual, negation is defined as $\neg A = (A \Rightarrow \bot)$), and the extra connective $\odot$. The syntax of this logic is $$A = \top \mid \bot \mid A \Rightarrow A \mid A \wedge A \mid A \vee A \mid A \odot A$$ and its deduction rules are given in Figure \ref{figdeductionrules}. 
\subsection{Proof normalization} Reducible expressions (redexes) in this logic are the usual ones for the connectives $\Rightarrow$, $\wedge$, and $\vee$ $$\vcenter{\irule{\irule{\irule{\pi_1}{\Gamma, A\vdash B}{}} {\Gamma \vdash A \Rightarrow B} {\mbox{$\Rightarrow$-i}} & \irule{\pi_2}{\Gamma \vdash A}{} } {\Gamma \vdash B} {\mbox{$\Rightarrow$-e}}} \qquad \textrm{that reduces to}\qquad(\pi_2/A)\pi_1$$ $$\vcenter{\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} & \irule{\pi_3}{\Gamma, A, B \vdash C}{} } {\Gamma \vdash C} {\mbox{$\wedge$-e}}} \quad \textrm{that reduces to}\quad(\pi_1/A,\pi_2/B)\pi_3$$ $$\vcenter{\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{}} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i1}} & \irule{\pi_2}{\Gamma, A \vdash C}{} & \irule{\pi_3}{\Gamma, B \vdash C}{} } {\Gamma \vdash C} {\mbox{$\vee$-e}}} \qquad \textrm{that reduces to}\qquad(\pi_1/A)\pi_2$$ and $$\vcenter{\irule{\irule{\irule{\pi_1}{\Gamma \vdash B}{}} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i2}} & \irule{\pi_2}{\Gamma, A \vdash C}{} & \irule{\pi_3}{\Gamma, B \vdash C}{} } {\Gamma \vdash C} {\mbox{$\vee$-e}}} \qquad \textrm{that reduces to}\qquad(\pi_1/B)\pi_3$$ and the redex for the connective $\odot$ $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{} } {\Gamma \vdash A \odot B} {\mbox{$\odot$-i}} & \irule{\pi_3} {\Gamma, A \vdash C}{} & \irule{\pi_4}{\Gamma, B \vdash C} {} } {\Gamma \vdash C} {\mbox{$\odot$-e}}$$ that reduces, in some cases, non-deterministically, to $(\pi_1/A)\pi_3$ or to $(\pi_2/B)\pi_4$, erasing some information, and, in others, preserving information, to $$\irule{\irule{(\pi_1/A)\pi_3} {\Gamma \vdash C} {} & \irule{(\pi_2/B)\pi_4} {\Gamma \vdash C} {} } {\Gamma \vdash C} {\mbox{par}}$$ Adding rules, such as the parallel rule, makes it possible to build proofs that cannot be reduced, because the introduction rule of some connective and its elimination rule are
separated by the parallel rule, for example $$\irule{\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{}} {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} & \irule{\irule{\pi_3}{\Gamma \vdash A}{} & \irule{\pi_4}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} } {\Gamma \vdash A \wedge B} {\mbox{par}} & \irule{\pi_5}{\Gamma, A, B \vdash C}{} } {\Gamma \vdash C} {\mbox{$\wedge$-e}}$$ Reducing such a proof requires rules to commute the parallel rule either with the elimination rule below or with the introduction rules above. As the commutation with the introduction rules above is not always possible, for example in the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{}} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i1}} & \irule{\irule{\pi_2}{\Gamma \vdash B}{}} {\Gamma \vdash A \vee B} {\mbox{$\vee$-i2}} } {\Gamma \vdash A \vee B} {\mbox{par}}$$ the commutation with the elimination rule below is often preferred. In this paper, we favor the commutation of the parallel rule with the introduction rules, rather than with the elimination rules, whenever it is possible, that is for all connectives except disjunction. For example the proof $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_2}{\Gamma \vdash B}{}} {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} & \irule{\irule{\pi_3}{\Gamma \vdash A}{} & \irule{\pi_4}{\Gamma \vdash B}{} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}} } {\Gamma \vdash A \wedge B} {\mbox{par}}$$ reduces to $$\irule{\irule{\irule{\pi_1}{\Gamma \vdash A}{} & \irule{\pi_3}{\Gamma \vdash A}{}} {\Gamma \vdash A} {\mbox{par}} & \irule{\irule{\pi_2}{\Gamma \vdash B}{} & \irule{\pi_4}{\Gamma \vdash B}{} } {\Gamma \vdash B} {\mbox{par}} } {\Gamma \vdash A \wedge B} {\mbox{$\wedge$-i}}$$ Such a commutation yields a stronger introduction property for the considered connective (Theorem \ref{introductions}). 
\subsection{Proof-terms} We introduce a term language, the $\odot$-calculus, for the proofs of this logic. Its syntax is \begin{align*} t =~& x\mid t\parallel u \mid *\mid \elimbot(t)\\ &\mid \lambda x~t\mid t~u \mid \pair{t}{u} \mid \elimand(t,\abstr{x,y}u)\\ &\mid \inl(t)\mid \inr(t) \mid \elimor(t,\abstr{x}u,\abstr{y}v)\\ &\mid t + u\mid \elimsup(t,\abstr{x}u,\abstr{y}v) \mid \elimsuppar(t,\abstr{x}u,\abstr{y}v) \end{align*} The variables $x$ express the proofs built with the axiom rule, the terms $t \parallel u$ those built with the parallel rule, the term $*$ that built with the $\top$-i rule, the terms $\elimbot(t)$ those built with the $\bot$-e rule, the terms $\lambda x~t$ those built with the $\Rightarrow$-i rule, the terms $t~u$ those built with the $\Rightarrow$-e rule, the terms $\pair{t}{u}$ those built with the $\wedge$-i rule, the terms $\elimand(t, \abstr{x, y}u)$ those built with the $\wedge$-e rule, the terms $\inl(t)$ those built with the $\vee$-i1 rule, the terms $\inr(t)$ those built with the $\vee$-i2 rule, the terms $\elimor(t,\abstr{x}u,\abstr{y}v)$ those built with the $\vee$-e rule, the terms $t + u$ those built with the $\odot$-i rule, and the terms $\elimsup(t,\abstr{x}u,\abstr{y}v)$ and $\elimsuppar(t,\abstr{x}u,\abstr{y}v)$ those built with the $\odot$-e rule. The proofs of the form $*$, $\lambda x~t$, $\pair{t}{u}$, $\inl(t)$, $\inr(t)$, and $t + u$ are called {\em introductions}, and those of the form $\elimbot(t)$, $t~u$, $\elimand(t,\abstr{x,y}u)$, $\elimor(t,\abstr{x}u,\abstr{y}v)$, $\elimsup(t,\abstr{x}u,\abstr{y}v)$, or $\elimsuppar(t,\abstr{x}u,\abstr{y}v)$ {\em eliminations}. Variables and terms of the form $t \parallel u$ are neither introductions nor eliminations. Free and bound variables are defined as usual. A proof-term is closed if it contains no free variables. 
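This grammar transcribes directly into an abstract syntax tree. The sketch below (a fragment, in a hypothetical Python encoding of ours) also implements the free-variable computation underlying the definition of closed proof-terms.

```python
from dataclasses import dataclass

# A fragment of the proof-term syntax: variables, parallel, the proof
# of T, abstraction, application, and sums (the sup-introduction).
@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Par: left: object; right: object
@dataclass(frozen=True)
class Star: pass
@dataclass(frozen=True)
class Lam: var: str; body: object
@dataclass(frozen=True)
class App: fun: object; arg: object
@dataclass(frozen=True)
class Sum: left: object; right: object

def free_vars(t):
    """Free variables of a proof-term; lambda binds its variable."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Star):
        return set()
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.var}
    if isinstance(t, (Par, Sum)):
        return free_vars(t.left) | free_vars(t.right)
    if isinstance(t, App):
        return free_vars(t.fun) | free_vars(t.arg)
    raise TypeError(t)

def is_closed(t):
    """A proof-term is closed if it contains no free variables."""
    return not free_vars(t)

assert free_vars(Lam("x", App(Var("x"), Var("y")))) == {"y"}
assert is_closed(Sum(Star(), Star()))
```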
\begin{figure}[t] $$\irule{x:A\in\Gamma} {\Gamma \vdash x:A} {\mbox{ax}} \qquad \irule{\Gamma \vdash t:A & \Gamma \vdash u:A} {\Gamma \vdash t \parallel u:A} {\mbox{par}} \qquad \irule{} {\Gamma \vdash *:\top} {\mbox{$\top$-i}} \qquad \irule{\Gamma \vdash t:\bot} {\Gamma \vdash \elimbot(t):C} {\mbox{$\bot$-e}} $$ $$\irule{\Gamma, x:A \vdash t:B} {\Gamma \vdash \lambda x~t:A \Rightarrow B} {\mbox{$\Rightarrow$-i}} \qquad \irule{\Gamma \vdash t:A\Rightarrow B & \Gamma \vdash u:A} {\Gamma \vdash t~u:B} {\mbox{$\Rightarrow$-e}}$$ $$\irule{\Gamma \vdash t:A & \Gamma \vdash u:B} {\Gamma \vdash \pair{t}{u}:A \wedge B} {\mbox{$\wedge$-i}} \qquad \irule{\Gamma \vdash t:A \wedge B & \Gamma, x:A, y:B \vdash u:C} {\Gamma \vdash \elimand(t,\abstr{x,y}u):C} {\mbox{$\wedge$-e}}$$ $$\irule{\Gamma \vdash t:A} {\Gamma \vdash \inl(t):A \vee B} {\mbox{$\vee$-i1}} \qquad \irule{\Gamma \vdash t:B} {\Gamma \vdash \inr(t):A \vee B} {\mbox{$\vee$-i2}}$$ $$ \irule{\Gamma \vdash t:A \vee B & \Gamma, x:A \vdash u:C & \Gamma, y:B \vdash v:C} {\Gamma \vdash \elimor(t,\abstr{x}u,\abstr{y}v):C} {\mbox{$\vee$-e}}$$ $$\irule{\Gamma \vdash t:A & \Gamma \vdash u:B} {\Gamma \vdash t + u:A \odot B} {\mbox{$\odot$-i}}$$ $$\irule{\Gamma \vdash t:A\odot B & \Gamma, x:A \vdash u:C & \Gamma, y:B \vdash v:C} {\Gamma \vdash \elimsup(t,\abstr{x}u,\abstr{y}v):C} {\mbox{$\odot$-e}}$$ $$\irule{\Gamma \vdash t:A\odot B & \Gamma, x:A \vdash u:C & \Gamma, y:B \vdash v:C} {\Gamma \vdash \elimsuppar(t,\abstr{x}u,\abstr{y}v):C} {\mbox{$\odot$-e}}$$ \caption{The typing rules of the $\odot$-calculus\label{figtypingrules}} \end{figure} \begin{figure}[t] \[ \begin{array}{r@{\,}l} (\lambda x~t)~u & \longrightarrow (u/x)t\\ \elimand(\pair{t}{u}, \abstr{x,y}v) & \longrightarrow (t/x,u/y)v\\ \elimor(\inl(t),\abstr{x}v,\abstr{y}w) & \longrightarrow (t/x)v\\ \elimor(\inr(u),\abstr{x}v,\abstr{y}w) & \longrightarrow (u/y)w\\ \elimsup(t+u,\abstr{x}v,\abstr{y}w) & \longrightarrow (t/x)v\\ \elimsup(t+u,\abstr{x}v,\abstr{y}w) & 
\longrightarrow (u/y)w\\ \elimsuppar(t+u,\abstr{x}v,\abstr{y}w) & \longrightarrow (t/x)v \parallel (u/y)w\\ \\ (\lambda x~t) \parallel (\lambda x~u) & \longrightarrow \lambda x~(t \parallel u)\\ \pair{t}{u} \parallel \pair{v}{w} & \longrightarrow \pair{t \parallel v}{u \parallel w} \\ \elimor(t \parallel u,\abstr{x}v,\abstr{y}w) & \longrightarrow \elimor(t,\abstr{x}v,\abstr{y}w) \parallel \elimor(u,\abstr{x}v,\abstr{y}w) \\ (t + u) \parallel (v + w) & \longrightarrow (t \parallel v) + (u \parallel w)\\ \\ t \parallel t& \longrightarrow t \end{array} \] \caption{The reduction rules of the $\odot$-calculus\label{figreductionrules}} \end{figure} The typing rules of the $\odot$-calculus are given in Figure \ref{figtypingrules} and its reduction rules in Figure \ref{figreductionrules}. The reduction relation is defined as usual as the smallest contextual relation that contains $\sigma l \longrightarrow \sigma r$, for all rules $l \longrightarrow r$ and substitutions $\sigma$. \longversion{ \subsection{Termination} We now prove the strong termination of proof reduction, that is, that all reduction sequences are finite. The proof follows the same pattern as that for propositional natural deduction, which we recall in the Appendix. The $\odot$-calculus introduces two new features: the connective $\odot$, with its associated proof constructors $+$, $\elimsup$, and $\elimsuppar$, and the constructor $\parallel$, with its associated redexes. To handle these redexes, we prove the strong termination of an extended reduction system, in the spirit of Girard's ultra-reduction \cite{Girard}, whose strong termination obviously implies that of the rules of Figure \ref{figreductionrules}.
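The commutation rules for $\parallel$ lend themselves to a direct prototype. The following is a minimal sketch (in a hypothetical tuple encoding of ours) of a single commuting step at the root of a term, covering the $\lambda$, pair, and sum commutations and the idempotence rule $t \parallel t \longrightarrow t$ of Figure~\ref{figreductionrules}.

```python
def step_par(t):
    """One reduction step at the root of t = ("par", l, r); returns
    None when no parallel rule applies at the root."""
    if not (isinstance(t, tuple) and t[0] == "par"):
        return None
    _, l, r = t
    if l == r:                              # t || t --> t
        return l
    if isinstance(l, tuple) and isinstance(r, tuple) and l[0] == r[0]:
        if l[0] == "lam" and l[1] == r[1]:  # same bound variable
            # (lam x t) || (lam x u) --> lam x (t || u)
            return ("lam", l[1], ("par", l[2], r[2]))
        if l[0] == "pair":                  # <t,u> || <v,w> --> <t||v, u||w>
            return ("pair", ("par", l[1], r[1]), ("par", l[2], r[2]))
        if l[0] == "sum":                   # (t+u) || (v+w) --> (t||v)+(u||w)
            return ("sum", ("par", l[1], r[1]), ("par", l[2], r[2]))
    return None

assert step_par(("par", "x", "x")) == "x"
assert step_par(("par", ("sum", "a", "b"), ("sum", "c", "d"))) == \
    ("sum", ("par", "a", "c"), ("par", "b", "d"))
```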
\begin{definition}[Ultra-reduction] Ultra-reduction is defined with the rules of Figure \ref{figreductionrules}, plus the rules \begin{align*} t \parallel u & \longrightarrow t\\ t \parallel u & \longrightarrow u \end{align*} \end{definition} In the proof below, Propositions \ref{Var}, \ref{star}, \ref{abstraction}, \ref{pair}, \ref{inl}, \ref{inr}, \ref{elimbot}, \ref{application}, and \ref{elimand} have the same proofs as Propositions \ref{app-Var}, \ref{app-star}, \ref{app-abstraction}, \ref{app-pair}, \ref{app-inl}, \ref{app-inr}, \ref{app-ebot}, \ref{app-application}, and \ref{app-eand} in the strong termination of proof reduction for propositional natural deduction (except that the references to Propositions \ref{app-Var}, \ref{app-closure}, and \ref{app-CR3} must be replaced with references to Propositions \ref{Var}, \ref{closure}, and \ref{CR3}). So we shall omit these proofs. Propositions \ref{closure}, \ref{CR3}, and \ref{elimor} have proofs similar to those of Propositions \ref{app-closure}, \ref{app-CR3}, and \ref{app-eor}, but these proofs require minor tweaks. In contrast, Propositions \ref{sum}, \ref{parallel}, \ref{elimsup}, and \ref{elimsuppar} are specific. 
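Ultra-reduction also makes the measure $|t|$, defined below, computable by brute force on small terms. The sketch that follows (a hypothetical tuple encoding of ours, restricted to sums and $\parallel$) enumerates all one-step ultra-reducts and returns the maximum length of a reduction sequence, illustrating strong termination on this fragment.

```python
from functools import lru_cache

def reducts(t):
    """All one-step ultra-reducts of t, for a fragment with sums
    ("sum", u, v) and the parallel constructor ("par", u, v)."""
    out = set()
    if isinstance(t, tuple) and t[0] == "par":
        _, l, r = t
        out |= {l, r}  # ultra-reduction projects on either side
        if all(isinstance(s, tuple) and s[0] == "sum" for s in (l, r)):
            # (t + u) || (v + w) --> (t || v) + (u || w)
            out.add(("sum", ("par", l[1], r[1]), ("par", l[2], r[2])))
        out |= {("par", l2, r) for l2 in reducts(l)}  # contextual closure
        out |= {("par", l, r2) for r2 in reducts(r)}
    elif isinstance(t, tuple) and t[0] == "sum":
        _, l, r = t
        out |= {("sum", l2, r) for l2 in reducts(l)}
        out |= {("sum", l, r2) for r2 in reducts(r)}
    return out

@lru_cache(maxsize=None)
def depth(t):
    """|t|: the maximum length of a reduction sequence issued from t."""
    rs = reducts(t)
    return 0 if not rs else 1 + max(map(depth, rs))

assert depth("x") == 0
assert depth(("par", "x", "y")) == 1
```

On this fragment the recursion always bottoms out, which is what the termination proof establishes in general.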
\begin{definition} \label{reducibility} We define, by induction on the proposition $A$, a set of proofs $\llbracket A \rrbracket$: \begin{itemize} \item $t \in \llbracket \top \rrbracket$ if $t$ strongly terminates, \item $t \in \llbracket \bot \rrbracket$ if $t$ strongly terminates, \item $t \in \llbracket A \Rightarrow B \rrbracket$ if $t$ strongly terminates and whenever it reduces to a proof of the form $\lambda x~u$, then for every $v \in \llbracket A \rrbracket$, $(v/x)u \in \llbracket B \rrbracket$, \item $t \in \llbracket A \wedge B \rrbracket$ if $t$ strongly terminates and whenever it reduces to a proof of the form $\pair{u}{v}$, then $u \in \llbracket A \rrbracket$ and $v \in \llbracket B \rrbracket$, \item $t \in \llbracket A \vee B \rrbracket$ if $t$ strongly terminates and whenever it reduces to a proof of the form $\inl(u)$, then $u \in \llbracket A \rrbracket$, and whenever it reduces to a proof of the form $\inr(v)$, then $v \in \llbracket B \rrbracket$, \item $t \in \llbracket A \odot B \rrbracket$ if $t$ strongly terminates and whenever it reduces to a proof of the form $u + v$, then $u \in \llbracket A \rrbracket$ and $v \in \llbracket B \rrbracket$. \end{itemize} \end{definition} \begin{definition} If $t$ is a strongly terminating proof, we write $|t|$ for the maximum length of a reduction sequence issued from $t$. \end{definition} \begin{proposition}[Variables] \label{Var} For any $A$, the set $\llbracket A \rrbracket$ contains all the variables. \end{proposition} \begin{proposition}[Closure by reduction] \label{closure} If $t \in \llbracket A \rrbracket$ and $t \longrightarrow^* t'$, then $t' \in \llbracket A \rrbracket$. \end{proposition} \begin{proof} If $t \longrightarrow^* t'$ and $t$ strongly terminates, then $t'$ strongly terminates. Furthermore, if $A$ has the form $B \Rightarrow C$ and $t'$ reduces to $\lambda x~u$, then so does $t$, hence for every $v \in \llbracket B \rrbracket$, $(v/x)u \in \llbracket C \rrbracket$. 
If $A$ has the form $B \wedge C$ and $t'$ reduces to $\pair{u}{v}$, then so does $t$, hence $u \in \llbracket B \rrbracket$ and $v \in \llbracket C \rrbracket$. If $A$ has the form $B \vee C$ and $t'$ reduces to $\inl(u)$, then so does $t$, hence $u \in \llbracket B \rrbracket$ and if $A$ has the form $B \vee C$ and $t'$ reduces to $\inr(v)$, then so does $t$, hence $v \in \llbracket C \rrbracket$. And if $A$ has the form $B \odot C$ and $t'$ reduces to $u + v$, then so does $t$, hence $u \in \llbracket B \rrbracket$ and $v \in \llbracket C \rrbracket$. \end{proof} \begin{proposition}[Girard's lemma] \label{CR3} Let $t$ be a proof that is not an introduction, such that all the one-step reducts of $t$ are in $\llbracket A \rrbracket$. Then $t \in \llbracket A \rrbracket$. \end{proposition} \begin{proof} Let $t, t_2, ...$ be a reduction sequence issued from $t$. If it has a single element, it is finite. Otherwise, we have $t \longrightarrow t_2$. As $t_2 \in \llbracket A \rrbracket$, it strongly terminates and the reduction sequence is finite. Thus $t$ strongly terminates. Furthermore, if $A$ has the form $B \Rightarrow C$ and $t \longrightarrow^* \lambda x~u$, then let $t , t_2, ..., t_n$ be a reduction sequence from $t$ to $\lambda x~u$. As $t_n$ is an introduction and $t$ is not, $n \geq 2$. Thus $t \longrightarrow t_2 \longrightarrow^* t_n$. We have $t_2 \in \llbracket A \rrbracket$, thus for all $v \in \llbracket B \rrbracket$, $(v/x)u \in \llbracket C \rrbracket$. And if $A$ has the form $B \wedge C$ and $t \longrightarrow^* \pair{u}{v}$, then let $t , t_2, ..., t_n$ be a reduction sequence from $t$ to $\pair{u}{v}$. As $t_n$ is an introduction and $t$ is not, $n \geq 2$. Thus $t \longrightarrow t_2 \longrightarrow^* t_n$. We have $t_2 \in \llbracket A \rrbracket$, thus $u \in \llbracket B \rrbracket$ and $v \in \llbracket C \rrbracket$. 
If $A$ has the form $B \vee C$ and $t \longrightarrow^* \inl(u)$, then let $t , t_2, ..., t_n$ be a reduction sequence from $t$ to $\inl(u)$. As $t_n$ is an introduction and $t$ is not, $n \geq 2$. Thus $t \longrightarrow t_2 \longrightarrow^* t_n$. We have $t_2 \in \llbracket A \rrbracket$, thus $u \in \llbracket B \rrbracket$. If $A$ has the form $B \vee C$ and $t \longrightarrow^* \inr(v)$, then let $t , t_2, ..., t_n$ be a reduction sequence from $t$ to $\inr(v)$. As $t_n$ is an introduction and $t$ is not, $n \geq 2$. Thus $t \longrightarrow t_2 \longrightarrow^* t_n$. We have $t_2 \in \llbracket A \rrbracket$, thus $v \in \llbracket C \rrbracket$. And if $A$ has the form $B \odot C$ and $t \longrightarrow^* u + v$, then let $t , t_2, ..., t_n$ be a reduction sequence from $t$ to $u + v$. As $t_n$ is an introduction and $t$ is not, $n \geq 2$. Thus $t \longrightarrow t_2 \longrightarrow^* t_n$. We have $t_2 \in \llbracket A \rrbracket$, thus $u \in \llbracket B \rrbracket$ and $v \in \llbracket C \rrbracket$. \end{proof} In Propositions \ref{star} to \ref{elimsuppar}, we prove the adequacy of each proof constructor. \begin{proposition}[Adequacy of $*$] \label{star} We have $* \in \llbracket \top \rrbracket$. \end{proposition} \begin{proposition}[Adequacy of $\lambda$] \label{abstraction} If for all $u \in \llbracket A \rrbracket$, $(u/x)t \in \llbracket B \rrbracket$, then $\lambda x~t \in \llbracket A \Rightarrow B \rrbracket$. \end{proposition} \begin{proposition}[Adequacy of $\pair{}{}$] \label{pair} If $t_1 \in \llbracket A \rrbracket$ and $t_2 \in \llbracket B \rrbracket$, then $\pair{t_1}{t_2} \in \llbracket A \wedge B \rrbracket$. \end{proposition} \begin{proposition}[Adequacy of $\inl$] \label{inl} If $t \in \llbracket A \rrbracket$, then $\inl(t) \in \llbracket A \vee B \rrbracket$. \end{proposition} \begin{proposition}[Adequacy of $\inr$] \label{inr} If $t \in \llbracket B \rrbracket$, then $\inr(t) \in \llbracket A \vee B \rrbracket$. 
\end{proposition} \begin{proposition}[Adequacy of $+$] \label{sum} If $t_1 \in \llbracket A \rrbracket$ and $t_2 \in \llbracket B \rrbracket$, then $t_1 + t_2 \in \llbracket A \odot B \rrbracket$. \end{proposition} \begin{proof} The proofs $t_1$ and $t_2$ strongly terminate. Consider a reduction sequence issued from $t_1 + t_2$. This sequence can only reduce $t_1$ and $t_2$, hence it is finite. Thus $t_1 + t_2$ strongly terminates. Furthermore, if $t_1 + t_2 \longrightarrow^* t'_1 + t'_2$, then $t'_1$ is a reduct of $t_1$ and $t'_2$ is a reduct of $t_2$. By Proposition \ref{closure}, $t'_1 \in \llbracket A \rrbracket$ and $t'_2 \in \llbracket B \rrbracket$. \end{proof} \begin{proposition}[Adequacy of $\parallel$] \label{parallel} If $t_1 \in \llbracket A \rrbracket$ and $t_2 \in \llbracket A \rrbracket$, then $t_1 \parallel t_2 \in \llbracket A \rrbracket$. \end{proposition} \begin{proof} The proofs $t_1$ and $t_2$ strongly terminate. We prove, by induction on $|t_1| + |t_2|$ and then on the structure of $A$, that $t_1 \parallel t_2 \in \llbracket A \rrbracket$. Using Proposition \ref{CR3}, we only need to prove that each of its one-step reducts is in $\llbracket A \rrbracket$. If the reduction takes place in $t_1$ or in $t_2$, then we apply Proposition \ref{closure} and the induction hypothesis. Otherwise, either: \begin{itemize} \item The proposition $A$ has the form $B \Rightarrow C$, $t_1 = \lambda x~u_1$, $t_2 = \lambda x~u_2$, and the reduct is $\lambda x~(u_1 \parallel u_2)$. As $t_1 = \lambda x~u_1 \in \llbracket A \rrbracket = \llbracket B \Rightarrow C \rrbracket$, for every $w$ in $\llbracket B \rrbracket$, $(w/x)u_1 \in \llbracket C \rrbracket$. In a similar way, $(w/x)u_2 \in \llbracket C \rrbracket$. By induction hypothesis, $(w/x)(u_1 \parallel u_2) = (w/x)u_1 \parallel (w/x)u_2 \in \llbracket C \rrbracket$ and by Proposition \ref{abstraction}, $\lambda x~(u_1 \parallel u_2) \in \llbracket B \Rightarrow C \rrbracket = \llbracket A \rrbracket$.
\item The proposition $A$ has the form $B \wedge C$, $t_1 = \pair {u_1}{v_1}$, $t_2 = \pair{u_2}{v_2}$, and the reduct is $\pair{u_1 \parallel u_2}{v_1 \parallel v_2}$. As $t_1 = \pair{u_1}{v_1} \in \llbracket A \rrbracket = \llbracket B \wedge C \rrbracket$, $u_1 \in \llbracket B \rrbracket$ and $v_1 \in \llbracket C \rrbracket$. In a similar way, $u_2 \in \llbracket B \rrbracket$ and $v_2 \in \llbracket C \rrbracket$. By induction hypothesis, $u_1 \parallel u_2 \in \llbracket B \rrbracket$ and $v_1 \parallel v_2 \in \llbracket C \rrbracket$ and by Proposition \ref{pair}, $\pair{u_1 \parallel u_2}{v_1 \parallel v_2} \in \llbracket B \wedge C \rrbracket = \llbracket A \rrbracket$. \item The proposition $A$ has the form $B \odot C$, $t_1 = u_1 + v_1$, $t_2 = u_2 + v_2$, and the reduct is $(u_1 \parallel u_2) + (v_1 \parallel v_2)$. As $t_1 = u_1 + v_1 \in \llbracket A \rrbracket = \llbracket B \odot C \rrbracket$, $u_1 \in \llbracket B \rrbracket$ and $v_1 \in \llbracket C \rrbracket$. In a similar way, $u_2 \in \llbracket B \rrbracket$ and $v_2 \in \llbracket C \rrbracket$. By induction hypothesis, $u_1 \parallel u_2 \in \llbracket B \rrbracket$ and $v_1 \parallel v_2 \in \llbracket C \rrbracket$ and by Proposition \ref{sum}, $(u_1 \parallel u_2) + (v_1 \parallel v_2) \in \llbracket B \odot C \rrbracket = \llbracket A \rrbracket$. \item The proofs $t_1$ and $t_2$ are equal and the reduct is $t_1$, which is in $\llbracket A \rrbracket$. \item The reduction rule is an ultra-reduction rule and the reduct is $t_1$ or $t_2$, which are in $\llbracket A \rrbracket$. \end{itemize} \end{proof} \begin{proposition}[Adequacy of $\elimbot$] \label{elimbot} If $t \in \llbracket \bot \rrbracket$, then $\elimbot(t) \in \llbracket C \rrbracket$. \end{proposition} \begin{proposition}[Adequacy of application] \label{application} If $t_1 \in \llbracket A \Rightarrow B \rrbracket$ and $t_2 \in \llbracket A \rrbracket$, then $t_1~t_2 \in \llbracket B \rrbracket$.
\end{proposition} \begin{proposition}[Adequacy of $\elimand$] \label{elimand} If $t_1 \in \llbracket A \wedge B \rrbracket$, for all $u$ in $\llbracket A \rrbracket$ and for all $v$ in $\llbracket B \rrbracket$, $(u/x,v/y)t_2 \in \llbracket C \rrbracket $, then $\elimand(t_1, \abstr{x,y}t_2) \in \llbracket C \rrbracket$. \end{proposition} \begin{proposition}[Adequacy of $\elimor$] \label{elimor} If $t_1 \in \llbracket A \vee B \rrbracket$, for all $u$ in $\llbracket A \rrbracket$, $(u/x)t_2 \in \llbracket C \rrbracket $, and for all $v$ in $\llbracket B \rrbracket$, $(v/y)t_3 \in \llbracket C \rrbracket $, then $\elimor(t_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. \end{proposition} \begin{proof} By Proposition \ref{Var}, $x \in \llbracket A \rrbracket$, thus $t_2 = (x/x)t_2 \in \llbracket C \rrbracket$. In the same way, $t_3 \in \llbracket C \rrbracket$. Hence, $t_1$, $t_2$, and $t_3$ strongly terminate. We prove, by induction on $|t_1| + |t_2| + |t_3|$, and then on the size of $t_1$, that $\elimor(t_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. Using Proposition \ref{CR3}, we only need to prove that each of its one-step reducts is in $\llbracket C \rrbracket$. If the reduction takes place in $t_1$, $t_2$, or $t_3$, then we apply Proposition \ref{closure} and the induction hypothesis. Otherwise, either: \begin{itemize} \item The proof $t_1$ has the form $\inl(w_2)$ and the reduct is $(w_2/x)t_2$. As $\inl(w_2) \in \llbracket A \vee B \rrbracket$, we have $w_2 \in \llbracket A \rrbracket$. Hence, $(w_2/x)t_2 \in \llbracket C \rrbracket$. \item The proof $t_1$ has the form $\inr(w_3)$ and the reduct is $(w_3/y)t_3$. As $\inr(w_3) \in \llbracket A \vee B \rrbracket$, we have $w_3 \in \llbracket B \rrbracket$. Hence, $(w_3/y)t_3 \in \llbracket C \rrbracket$. \item The proof $t_1$ has the form $t_1' \parallel t''_1$ and the reduct is $\elimor(t'_1, \abstr{x}t_2, \abstr{y}t_3) \parallel \elimor(t''_1, \abstr{x}t_2, \abstr{y}t_3)$.
As $t_1 \longrightarrow t'_1$ with an ultra-reduction rule, we have, by Proposition \ref{closure}, $t'_1 \in \llbracket A \vee B \rrbracket$. In a similar way, $t''_1 \in \llbracket A \vee B \rrbracket$. Thus, by induction hypothesis, $\elimor(t'_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$ and $\elimor(t''_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. We conclude with Proposition \ref{parallel}. \end{itemize} \end{proof} \begin{proposition}[Adequacy of $\elimsup$] \label{elimsup} If $t_1 \in \llbracket A \odot B \rrbracket$, for all $u$ in $\llbracket A \rrbracket$, $(u/x)t_2 \in \llbracket C \rrbracket $, and for all $v$ in $\llbracket B \rrbracket$, $(v/y)t_3 \in \llbracket C \rrbracket $, then $\elimsup(t_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. \end{proposition} \begin{proof} By Proposition \ref{Var}, $x \in \llbracket A \rrbracket$, thus $t_2 = (x/x)t_2 \in \llbracket C \rrbracket$. In the same way, $t_3 \in \llbracket C \rrbracket$. Hence, $t_1$, $t_2$, and $t_3$ strongly terminate. We prove, by induction on $|t_1| + |t_2| + |t_3|$, that $\elimsup(t_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. Using Proposition \ref{CR3}, we only need to prove that each of its one-step reducts is in $\llbracket C \rrbracket$. If the reduction takes place in $t_1$, $t_2$, or $t_3$, then we apply Proposition \ref{closure} and the induction hypothesis. Otherwise, the proof $t_1$ has the form $w_2 + w_3$ and the reduct is either $(w_2/x)t_2$ or $(w_3/y)t_3$. As $w_2 + w_3 \in \llbracket A \odot B \rrbracket$, we have $w_2 \in \llbracket A \rrbracket$ and $w_3 \in \llbracket B \rrbracket$. Hence, $(w_2/x)t_2 \in \llbracket C \rrbracket$ and $(w_3/y)t_3 \in \llbracket C \rrbracket$.
\end{proof} \begin{proposition}[Adequacy of $\elimsuppar$] \label{elimsuppar} If $t_1 \in \llbracket A \odot B \rrbracket$, for all $u$ in $\llbracket A \rrbracket$, $(u/x)t_2 \in \llbracket C \rrbracket $, and for all $v$ in $\llbracket B \rrbracket$, $(v/y)t_3 \in \llbracket C \rrbracket $, then $\elimsuppar(t_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. \end{proposition} \begin{proof} By Proposition \ref{Var}, $x \in \llbracket A \rrbracket$, thus $t_2 = (x/x)t_2 \in \llbracket C \rrbracket$. In the same way, $t_3 \in \llbracket C \rrbracket$. Hence, $t_1$, $t_2$, and $t_3$ strongly terminate. We prove, by induction on $|t_1| + |t_2| + |t_3|$, that $\elimsuppar(t_1, \abstr{x}t_2, \abstr{y}t_3) \in \llbracket C \rrbracket$. Using Proposition \ref{CR3}, we only need to prove that each of its one-step reducts is in $\llbracket C \rrbracket$. If the reduction takes place in $t_1$, $t_2$, or $t_3$, then we apply Proposition \ref{closure} and the induction hypothesis. Otherwise, the proof $t_1$ has the form $w_2 + w_3$ and the reduct is $(w_2/x)t_2 \parallel (w_3/y)t_3$. As $w_2 + w_3 \in \llbracket A \odot B \rrbracket$, we have $w_2 \in \llbracket A \rrbracket$ and $w_3 \in \llbracket B \rrbracket$. Hence, $(w_2/x)t_2 \in \llbracket C \rrbracket$ and $(w_3/y)t_3 \in \llbracket C \rrbracket$. We conclude with Proposition \ref{parallel}. \end{proof} \begin{theorem}[Adequacy] Let $t$ be a proof of $A$ in a context $\Gamma = x_1:A_1, ..., x_n:A_n$ and let $\sigma$ be a substitution mapping each variable $x_i$ to an element of $\llbracket A_i \rrbracket$. Then $\sigma t \in \llbracket A \rrbracket$. \end{theorem} \begin{proof} By induction on the structure of $t$. If $t$ is a variable, then, by definition of $\sigma$, $\sigma t \in \llbracket A \rrbracket$. For the thirteen other proof constructors, we use Propositions \ref{star} to \ref{elimsuppar}. As all cases are similar, we just give a few examples.
\begin{itemize}
\item If $t = u + v$, where $u$ is a proof of $B$ and $v$ a proof of $C$, then, by induction hypothesis, $\sigma u \in \llbracket B \rrbracket$ and $\sigma v \in \llbracket C \rrbracket$. Hence, by Proposition \ref{sum}, $\sigma u + \sigma v \in \llbracket B \odot C \rrbracket$, that is $\sigma t \in \llbracket A \rrbracket$.
\item If $t = \elimsup(u_1,\abstr{x}u_2,\abstr{y}u_3)$, where $u_1$ is a proof of $B \odot C$, $u_2$ a proof of $A$, and $u_3$ a proof of $A$, then, by induction hypothesis, $\sigma u_1 \in \llbracket B \odot C \rrbracket$, for all $v$ in $\llbracket B \rrbracket$, $(v/x)\sigma u_2 \in \llbracket A \rrbracket$, and for all $w$ in $\llbracket C \rrbracket$, $(w/y)\sigma u_3 \in \llbracket A \rrbracket$. Hence, by Proposition \ref{elimsup}, $\elimsup(\sigma u_1,\abstr{x} \sigma u_2,\abstr{y} \sigma u_3) \in \llbracket A \rrbracket$, that is $\sigma t \in \llbracket A \rrbracket$.
\end{itemize}
\end{proof}
}
\longversion{
\begin{corollary}[Termination]
Let $t$ be a proof of $A$ in a context $\Gamma = x_1:A_1, ..., x_n:A_n$. Then $t$ strongly terminates.
\end{corollary}
}
\shortversion{
The following two theorems are proved in the long version arXiv'ed at~\cite{papier:arxiv}.
\begin{theorem}[Termination]
If $\Gamma \vdash t:A$, then $t$ strongly terminates.
\end{theorem}
}
\longversion{
\begin{proof}
Let $\sigma$ be the substitution mapping each variable $x_i$ to itself. Note that, by Proposition \ref{Var}, this variable is an element of $\llbracket A_i \rrbracket$. Then $t = \sigma t$ is an element of $\llbracket A \rrbracket$. Hence, it strongly terminates.
\end{proof}
\subsection{Introduction property}
}
\begin{theorem}[Introduction]
\label{introductions}
Let $t$ be a closed irreducible proof of $A$.
\begin{itemize}
\item If $A$ has the form $\top$, then $t$ has the form $*$.
\item The proposition $A$ is not $\bot$.
\item If $A$ has the form $B \Rightarrow C$, then $t$ has the form $\lambda x:B~u$.
\item If $A$ has the form $B \wedge C$, then $t$ has the form $\pair{u}{v}$.
\item If $A$ has the form $B \vee C$, then $t$ has the form $\inl(u)$, $\inr(u)$, or $u \parallel v$.
\item If $A$ has the form $B \odot C$, then $t$ has the form $u + v$.
\end{itemize}
\end{theorem}
\longversion{
\begin{proof}
By induction on the structure of $t$. We first remark that, as the proof $t$ is closed, it is not a variable. Then, we prove that it cannot be an elimination.
\begin{itemize}
\item If $t = \elimbot(u)$, then $u$ is a closed irreducible proof of $\bot$ and, by induction hypothesis, no such proofs exist.
\item If $t = u~v$, then $u$ is a closed irreducible proof of $B \Rightarrow A$, hence, by induction hypothesis, it has the form $\lambda x~u_1$ and the proof $t$ is not irreducible.
\item If $t = \elimand(u,\abstr{x,y}v)$, then $u$ is a closed irreducible proof of $B \wedge C$, hence, by induction hypothesis, it has the form $\pair{u_1}{u_2}$ and the proof $t$ is not irreducible.
\item If $t = \elimor(u,\abstr{x}v,\abstr{y}w)$, then $u$ is a closed irreducible proof of $B \vee C$, hence, by induction hypothesis, it has the form $\inl(u_1)$, $\inr(u_1)$, or $u_1 \parallel u_2$ and the proof $t$ is not irreducible.
\item If $t = \elimsup(u,\abstr{x}v,\abstr{y}w)$ or $t = \elimsuppar(u,\abstr{x}v,\abstr{y}w)$, then $u$ is a closed irreducible proof of $B \odot C$, hence, by induction hypothesis, it has the form $u_1 + u_2$ and the proof $t$ is not irreducible.
\end{itemize}
Hence, $t$ is an introduction or a proof of the form $u \parallel v$. If $t$ is $*$, then $A$ is $\top$. If it has the form $\lambda x~u$, then $A$ has the form $B \Rightarrow C$. If it has the form $\pair{u}{v}$, then $A$ has the form $B \wedge C$. If it has the form $\inl(u)$ or $\inr(u)$, then $A$ has the form $B \vee C$. If it has the form $u + v$, then $A$ has the form $B \odot C$. We prove that, if it has the form $u \parallel v$, then $A$ has the form $B \vee C$.
The proofs $u$ and $v$ are two closed and irreducible proofs of $A$. If $A = \top$ then, by induction hypothesis, they are both $*$ and the proof $t$ is not irreducible. If $A = \bot$ then, they are irreducible proofs of $\bot$ and, by induction hypothesis, no such proofs exist. If $A$ has the form $B \Rightarrow C$ then, by induction hypothesis, they are both abstractions and the proof $t$ is not irreducible. If $A$ has the form $B \wedge C$, then, by induction hypothesis, they are both pairs and the proof $t$ is not irreducible. If $A$ has the form $B \odot C$, then, by induction hypothesis, they are both sums and the proof $t$ is not irreducible. Hence, $A$ has the form $B \vee C$. \end{proof} \begin{remark} We reap here the benefit of commuting, when possible, the parallel rule with the introduction rules, as, except for the disjunction, closed irreducible proofs are genuine introductions. Moreover, if we commuted the symbol $\parallel$ with the symbol $\elimsup$ instead of commuting it with the symbol $+$, the proof $\elimsup((t + u) \parallel (t + u), [x]v, [y]w)$ would reduce both to $\elimsup(t + u, [x]v, [y]w)$ and to $\elimsup(t + u, [x]v, [y]w) \parallel \elimsup(t + u, [x]v, [y]w)$, the first proof reducing, non-deterministically, to $(t/x)v$ and $(u/y)w$, and the second to $(t/x)v$, $(u/y)w$, $(t/x)v \parallel (u/y)w$, and $(u/y)w \parallel (t/x)v$ \cite{DiazcaroMartinezLSFA17,faggian:LIPIcs:2019:10526}. \end{remark} \smallskip \begin{proposition}[Disjunction] If the proposition $A \vee B$ has a closed proof, then $A$ has a closed proof or $B$ has a closed proof. \end{proposition} \begin{proof} Consider a closed proof of $A \vee B$ and its irreducible form $t$. We prove, by induction on the structure of $t$, that $A$ has a closed proof or $B$ has a closed proof. By Proposition \ref{introductions}, $t$ has the form $\inl(u)$, $\inr(u)$, or $u \parallel v$. If it has the form $\inl(u)$, $u$ is a closed proof of $A$. 
If it has the form $\inr(u)$, $u$ is a closed proof of $B$. If it has the form $u \parallel v$, $u$ is a closed irreducible proof of $A \vee B$. Thus, by induction hypothesis, $A$ has a closed proof or $B$ has a closed proof.
\end{proof}
}

\section{Quantifying non-determinism}
\label{secquantifying}

When we have a non-deterministic reduction system, we often want to quantify the propensity of a proof to reduce to another. To do so, we enrich the term language with scalars, so that sums become linear combinations. Our set $S$ of scalars can be any set containing an element $1$ and equipped with addition and multiplication, such as $\mathbb{R}$ or $\mathbb{C}$.

We define the $\odot^S$-calculus (read: ``the sup-S-calculus'') by extending the grammar of proofs, adding a category of weighted proofs
$$\phi = a . t$$
where $a$ is a scalar, and modifying the category of proofs as follows
\begin{align*} t =~& x\mid \phi \parallel \chi\mid *\mid \elimbot(t)\\ &\mid \lambda x~t\mid t~u \mid \pair{t}{u} \mid \elimand(t,\abstr{x,y}u)\\ &\mid \inl(t)\mid \inr(t)\mid \elimor(t,\abstr{x}u,\abstr{y}v)\\ &\mid \phi + \chi \mid \elimsup(t,\abstr{x}u,\abstr{y}v) \mid \elimsuppar(t,\abstr{x}u,\abstr{y}v) \end{align*}
where the arguments of $\parallel$ and $+$ are weighted proofs. Note that even in the case where there is a scalar $0$, we need a proof $t$ of $A$ to build the weighted proof $0 . t$. For example, the only irreducible proof of the proposition $\top \odot \top$ was $* + *$. Now, the irreducible proofs of this proposition are all the proofs of the form $a.* + b.*$, for instance $\frac{1}{\sqrt{2}}.* + \frac{1}{\sqrt{2}}.*$, $1.* + 1.*$, $1.* + 0.*$, etc.

The typing rules are those of Figure \ref{figtypingrules}, extended with an extra rule for weighted proofs
$$\irule{\Gamma \vdash t:A} {\Gamma \vdash a . t : A} {}$$
The reduction rules are those of Figure \ref{figreductionrules} enriched with the scalars. They are given in Figure \ref{figredscalars}.
All these rules reduce proofs, except the last one that reduces weighted proofs. Note that the proof $a.t \parallel b.t$ is irreducible: only the weighted proof $1.(a.t \parallel b.t)$ reduces to $(a+b).t$. \begin{figure}[t] \[ \begin{array}{r@{\,}l} (\lambda x~t)~u & \longrightarrow (u/x)t\\ \elimand(\pair{t}{u}, \abstr{x,y}v) & \longrightarrow (t/x,u/y)v\\ \elimor(\inl(t),\abstr{x}v,\abstr{y}w) & \longrightarrow (t/x)v\\ \elimor(\inr(u),\abstr{x}v,\abstr{y}w) & \longrightarrow (u/y)w\\ \elimsup(a.t + b.u,\abstr{x}v,\abstr{y}w) & \longrightarrow (t/x)v\\ \elimsup(a.t + b.u,\abstr{x}v,\abstr{y}w) & \longrightarrow (u/y)w\\ \elimsuppar(a.t+ b.u,\abstr{x}v,\abstr{y}w) & \longrightarrow a. (t/x)v \parallel b . (u/y)w\\ \\ a . (\lambda x~t) \parallel b . (\lambda x~u) & \longrightarrow \lambda x~(a . t \parallel b . u)\\ a . \pair{t}{u} \parallel b . \pair{v}{w} & \longrightarrow \pair{a . t \parallel b . v}{a . u \parallel b . w}\\ \elimor(a. t \parallel b. u,\abstr{x}v,\abstr{y}w) & \longrightarrow a. \elimor(t,\abstr{x}v,\abstr{y}w) \parallel b. \elimor(u,\abstr{x}v,\abstr{y}w) \\ a . (c.t + d.u) \parallel b. (e . v + f . w) & \longrightarrow 1.(ac.t \parallel be.v) +1.(ad.u \parallel bf.w) \\ \\ a. (b . t \parallel c . t) & \longrightarrow (a (b + c)) . t \end{array} \] \caption{The reduction rules of the $\odot^S$-calculus\label{figredscalars}} \end{figure} The termination proof of the $\odot$-calculus extends directly to the $\odot^S$-calculus: it suffices to define a translation $^\circ$ from the $\odot^S$-calculus to the $\odot$-calculus, erasing the scalars, and check that if $t \longrightarrow u$ in the $\odot^S$-calculus, then $t^\circ \longrightarrow u^\circ$ in the $\odot$-calculus. 
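To make the scalar-erasing translation $(\cdot)^\circ$ concrete, here is a small sketch in Python. The tuple encoding of proof terms used below (for example \texttt{("scal", a, t)} for the weighted proof $a.t$) is a hypothetical convention of this sketch, not part of the calculus, and only the constructors relevant to the example are covered.

```python
# Sketch of the scalar-erasing translation from the sup-S-calculus to the
# scalar-free calculus.  Proof terms are encoded as nested tuples (a toy
# convention of this sketch):
#   ("scal", a, t)  stands for the weighted proof a.t
#   ("sum", p, q)   stands for p + q
#   ("par", p, q)   stands for the parallel composition of p and q
#   "star"          stands for *

def erase(t):
    """Erase all scalars, mapping a weighted term to a scalar-free one."""
    if t == "star":
        return "star"
    tag = t[0]
    if tag == "scal":            # a.t erases to the erasure of t
        return erase(t[2])
    if tag in ("sum", "par"):    # homomorphic on sums and parallels
        return (tag, erase(t[1]), erase(t[2]))
    raise ValueError("unknown constructor: %r" % (tag,))

# The weighted sum a.* + b.* erases to the plain sum * + *.
qubit = ("sum", ("scal", 0.5, "star"), ("scal", 0.5, "star"))
print(erase(qubit))  # ('sum', 'star', 'star')
```

This mirrors the property stated above: if $t \longrightarrow u$ in the $\odot^S$-calculus, then $t^\circ \longrightarrow u^\circ$ in the $\odot$-calculus, which is what the termination argument needs.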
We can now use the scalars $a$ and $b$ to assign probabilities to the reductions
\[ \elimsup(a.t + b.u,\abstr{x}v,\abstr{y}w) \longrightarrow (t/x)v \qquad\qquad \elimsup(a.t + b.u,\abstr{x}v,\abstr{y}w) \longrightarrow (u/y)w \]
For instance, if the scalars are complex numbers, we can assign the probabilities $|a|^2/(|a|^2 + |b|^2)$ and $|b|^2/(|a|^2 + |b|^2)$ to these two reductions. But other choices are possible, as we shall see in Section \ref{secquantum}.

\section{Application to quantum computing}
\label{secquantum}

We now show that the $\odot^{\mathbb C}$-calculus, with a reduction strategy that allows reducing a proof of the form $\elimsup(t,\abstr{x}u,\abstr{y}v)$ only when $t$ is closed and irreducible, contains the core of a small quantum programming language. Requiring $t$ to be closed and irreducible to reduce the proof $\elimsup(t,\abstr{x}u,\abstr{y}v)$ makes it possible to assign probabilities to the reductions of this proof. In the examples below, we focus on algorithms on one and two qubits. The generalization to algorithms on $n$ qubits is straightforward. Note that the binary connective $\odot$ is always used with two identical propositions: $A \odot A$.

\subsection{Bits}
\label{exclusiveor}

\begin{definition}[Bit]
Let $\B = \top \vee \top$. The proofs $\boolfalse = \inl(*)$ and $\booltrue = \inr(*)$ are closed irreducible proofs of $\B$.
\end{definition}

\begin{remark}
The proofs $\inl(*)$ and $\inr(*)$ are not the only closed irreducible proofs of $\B$; for example, $1. \inl(*) \parallel 1. \inr(*)$ also is.
\end{remark}

\begin{definition}[Test]
We let $\test(t,u,v) = \elimor(t,[x]u,[y]v)$, where $x$ and $y$ are variables not occurring in $u$ and $v$. We have $\test(\boolfalse,u,v) \longrightarrow u$ and $\test(\booltrue,u,v) \longrightarrow v$.
\end{definition}

Boolean operators on $\B$ can easily be defined; for example, the exclusive or is the proof $\oplus = \lambda x \lambda y~\test(x,y,\test(y,\booltrue,\boolfalse))$ of $\B \Rightarrow \B \Rightarrow \B$.

\begin{definition}[2-bit]
Let $\B^2 = \B \wedge \B$. The closed irreducible proofs of $\B^2$, $\pair{\boolfalse}{\boolfalse}$, $\pair{\boolfalse}{\booltrue}$, $\pair{\booltrue}{\boolfalse}$, and $\pair{\booltrue}{\booltrue}$, are written $\boolfalsefalse$, $\boolfalsetrue$, $\booltruefalse$, and $\booltruetrue$.
\end{definition}

\subsection{Qubits}

\begin{definition}[Qubit]
Let $\Q = \top \odot \top$. A qubit $a \ket 0 + b \ket 1$ is expressed as the proof $a . * + b . *$ of $\Q$.
\end{definition}

\begin{remark}
If the qubits $\ket \psi = a \ket 0 + b \ket 1$ and $\ket{\psi'} = a' \ket 0 + b' \ket 1$ are expressed as proofs of $\Q$, then the qubit $c \ket \psi + d \ket{\psi'}$, that is $(c a + d a') \ket 0 + (c b + d b') \ket 1$, cannot be expressed in the $\odot^{\mathbb C}$-calculus as the sum $c . \ket \psi + d . \ket{\psi'}$, as the result would be a proof of $\Q \odot \Q$, and not of $\Q$. In contrast, the linear combination $c . \ket \psi \parallel d . \ket{\psi'}$ is a proof of $\Q$ and it reduces, in several steps, to $(c a + d a') . * + (c b + d b') . *$.
\end{remark}

\begin{definition}[2-qubit]
Let $\Q^{\otimes 2} = (\top \odot \top) \odot (\top \odot \top)$. A 2-qubit $a \ket{00} + b \ket{01} + c \ket{10} + d \ket{11}$ is expressed as the proof $1.(a.*+b.*)+1.(c.*+d.*)$ of $\Q^{\otimes 2}$.
\end{definition}

For instance, the 2-qubit $\ket{01}$, that is $\ket{0} \otimes \ket{1}$, is expressed as the proof $1.(0.*+1.*)+1.(0.*+0.*)$ and the entangled 2-qubit $\frac{1}{\sqrt{2}} \ket{00} + \frac{1}{\sqrt{2}} \ket{11}$ is expressed as the proof $1.(\frac{1}{\sqrt{2}}.*+0.*)+1.(0.*+\frac{1}{\sqrt{2}}.*)$.
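To experiment numerically with this encoding, the correspondence between amplitude vectors and nested sums can be sketched in Python. The tuple representation of proofs below (\texttt{("scal", a, t)} for $a.t$, \texttt{("sum", p, q)} for $p + q$) is our own hypothetical convention, not part of the calculus.

```python
# Encode amplitude vectors as the nested-sum proofs of Q and Q⊗2.
# ("scal", a, t) stands for a.t, ("sum", p, q) for p + q, "star" for *.

def qubit(a, b):
    """a|0> + b|1>  as the proof  a.* + b.*  of Q."""
    return ("sum", ("scal", a, "star"), ("scal", b, "star"))

def two_qubit(a, b, c, d):
    """a|00> + b|01> + c|10> + d|11>  as  1.(a.* + b.*) + 1.(c.* + d.*)."""
    return ("sum", ("scal", 1, qubit(a, b)), ("scal", 1, qubit(c, d)))

def amplitudes(t):
    """Read the amplitude vector back off a proof of Q or Q⊗2."""
    if t == "star":
        return [1]
    if t[0] == "scal":                     # scalars multiply through
        return [t[1] * x for x in amplitudes(t[2])]
    if t[0] == "sum":                      # sums concatenate components
        return amplitudes(t[1]) + amplitudes(t[2])

ket01 = two_qubit(0, 1, 0, 0)   # the 2-qubit |01>
print(amplitudes(ket01))        # [0, 1, 0, 0]
```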
\subsection{Probabilities}
If $t$ is a closed irreducible proof of $\Q$ of the form $a.* + b.*$, where $a$ and $b$ are not both $0$, then we assign the probability
\noindent\begin{tabular}{rl} $\tfrac{|a|^2}{|a|^2 + |b|^2}$ & to the reduction $\elimsup(a.* + b.*,\abstr{x}v,\abstr{y}w) \longrightarrow (*/x)v$\\ and $\tfrac{|b|^2}{|a|^2 + |b|^2}$ & to the reduction $\elimsup(a.* + b.*,\abstr{x}v,\abstr{y}w) \longrightarrow (*/y)w$. \end{tabular}
If $a = b = 0$, we assign an arbitrary probability, for example $\tfrac 12$, to both reductions.
If $t$ is a closed irreducible proof of $\Q^{\otimes 2}$ of the form $1.(a.*+b.*)+1.(c.*+d.*)$, where $a$, $b$, $c$, and $d$ are not all $0$, then we assign the probability $\tfrac{|a|^2 + |b|^2}{|a|^2 + |b|^2 + |c|^2 + |d|^2}$ to the reduction
$$\elimsup(1.(a.*+b.*)+1.(c.*+d.*) ,\abstr{x}v,\abstr{y}w) \longrightarrow ((a.*+b.*)/x)v$$
and $\tfrac{|c|^2 + |d|^2}{|a|^2 + |b|^2 + |c|^2 + |d|^2}$ to the reduction
$$\elimsup(1.(a.*+b.*)+1.(c.*+d.*) ,\abstr{x}v,\abstr{y}w) \longrightarrow ((c.*+d.*)/y)w$$
If $a$, $b$, $c$, and $d$ are all $0$, we assign an arbitrary probability to these reductions.
\subsection{Measure}
\label{measure}
The information erasing, non-reversible, and non-deterministic proof constructor $\elimsup$ makes it possible to define several measurement operators; they are given in Figure \ref{figmesure}.
\begin{figure}[t] \[ \begin{array}{r@{\,}l} \pi(t) &= \elimsup(t,\abstr{\_} \boolfalse,\abstr{\_} \booltrue)\\ \pi'(t) &= \elimsup(t,\abstr{x}1.x + 0.*,\abstr{y}0.* + 1.y)\\ \pi''(t) &= \elimsup(t, \abstr{x}\pair{\boolfalse}{1.x + 0.*}, \abstr{y}\pair{\booltrue}{0.* + 1.y})\\ \\ \pi_2(t) &=\elimsup(t, \abstr{\_}\boolfalse, \abstr{\_}\booltrue)\\ \pi'_2(t) &=\elimsup(t, \abstr{x}1. x + 1. (0.* + 0.*), \abstr{y}1 . (0.* + 0.*) + 1 .y)\\ \pi''_2(t) &=\elimsup(t,\abstr{x}\pair{\boolfalse}{1. x + 1 .
(0.* + 0.*)}, \abstr{y}\pair{\booltrue}{1 .(0.* + 0.*) + 1.y}) \end{array} \] \caption{Measurement operators\label{figmesure}} \end{figure}
If $t$ is an irreducible proof of $\Q$ of the form $a.* + b.*$, where $a$ and $b$ are not both $0$, then the proof $\pi(a.* + b.*)$ of the proposition $\B$ reduces, with probabilities $\tfrac{|a|^2}{|a|^2 + |b|^2}$ and $\tfrac{|b|^2}{|a|^2 + |b|^2}$, to $\boolfalse$ and to $\booltrue$. It is the result of the measurement. The proof $\pi'(a.* + b.*)$ of the proposition $\Q$ reduces, with the same probabilities as above, to $1 . * + 0 . *$ and to $0 . * + 1 . *$. It is the state after the measurement. The proof $\pi''(a.* + b.*)$ of the proposition $\B \wedge \Q$ reduces, with the same probabilities as above, to $\pair{\boolfalse}{1.* + 0.*}$ and to $\pair{\booltrue}{0.* + 1.*}$. It is the pair formed by the result of the measurement and the state after the measurement.
If $t$ is an irreducible proof of $\Q^{\otimes 2}$ of the form $1.(a.*+b.*)+1.(c.*+d.*)$, where $a$, $b$, $c$, and $d$ are not all $0$, then the proof $\pi_2(t)$ of the proposition $\B$ reduces, with probabilities $\tfrac{|a|^2 + |b|^2}{|a|^2 + |b|^2 + |c|^2 + |d|^2}$ and $\tfrac{|c|^2 + |d|^2}{|a|^2 + |b|^2 + |c|^2 + |d|^2}$, to $\boolfalse$ and to $\booltrue$. It is the result of the partial measurement of the first qubit. The proof $\pi'_2(t)$ of the proposition $\Q^{\otimes 2}$ reduces, with the same probabilities as above, to $1.(a.* + b.*) + 1.(0.* + 0.*)$ and to $1.(0.* + 0.*) + 1.(c.* + d.*)$. It is the state after the partial measurement of the first qubit. The proof $\pi''_2(t)$ of the proposition $\B \wedge \Q^{\otimes 2}$ reduces, with the same probabilities as above, to $\pair{\boolfalse}{1.(a.* + b.*) + 1.(0.* + 0.*)}$ and to $\pair{\booltrue}{1.(0.* + 0.*) + 1.(c.* + d.*)}$. It is the pair formed by the result of the measurement and the state after the partial measurement of the first qubit.
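The probability assignments of this section are easy to tabulate. The short Python sketch below (function names are ours, chosen for the sketch) computes the Born-rule weights used for $\pi$ and $\pi_2$, including the degenerate all-zero case, to which an arbitrary probability such as $\tfrac 12$ is assigned.

```python
# Probabilities of the two reducts of a measurement of a.* + b.* and of
# the partial measurement of the first qubit of 1.(a.*+b.*) + 1.(c.*+d.*).

def measure_qubit(a, b):
    """(probability of the x-branch, probability of the y-branch)."""
    n = abs(a) ** 2 + abs(b) ** 2
    if n == 0:                 # a = b = 0: arbitrary choice, e.g. 1/2
        return (0.5, 0.5)
    return (abs(a) ** 2 / n, abs(b) ** 2 / n)

def measure_first(a, b, c, d):
    """Partial measurement of the first qubit of a 2-qubit proof."""
    n = abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2 + abs(d) ** 2
    if n == 0:
        return (0.5, 0.5)
    return ((abs(a) ** 2 + abs(b) ** 2) / n,
            (abs(c) ** 2 + abs(d) ** 2) / n)

print(measure_qubit(1, 1))        # (0.5, 0.5)
print(measure_first(1, 0, 0, 1))  # unnormalized |00>+|11>: (0.5, 0.5)
```

Note that, as in the definitions above, the amplitudes need not be normalized: the weights are divided by the squared norm.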
Once we introduce the matrices, it will be possible to measure in a non-canonical basis by changing basis, measuring, and changing basis again.
\subsection{Matrices}
The information erasing, non-reversible, and non-deterministic measurement operators are expressed with $\elimsup$. The information preserving, reversible, and deterministic unitary operators are expressed with $\elimsuppar$.
\begin{definition}[Matrix in $\Q$] \label{def:matrix} A matrix is a proof of $\B \Rightarrow \Q$, that is a function mapping bits to qubits. The matrix $M = \left(\begin{smallmatrix} m_{00} & m_{01}\\ m_{10} & m_{11} \end{smallmatrix}\right)$ mapping $\boolfalse$ to $M_0 = m_{00}.* + m_{10}.*$ and $\booltrue$ to $M_1 = m_{01}.*+m_{11}.*$ is expressed as $$M=\lambda x~\test(x,M_0,M_1)$$ Note that $M\boolfalse \longrightarrow \test(\boolfalse,M_0,M_1) \longrightarrow M_0$. Similarly, $M\booltrue \longrightarrow^* M_1$. \end{definition}
In Lineal~\cite{Lineal}, a matrix $\lambda x~t$, mapping canonical base vectors to arbitrary vectors, extends to an arbitrary vector $a.\ket{0} + b.\ket{1}$ as follows. When reducing the term $(\lambda x~t)~(a.\ket{0} + b.\ket{1})$, the term $\lambda x~t$ distributes over the linear combination $a.\ket{0} + b.\ket{1}$, yielding the term $a.(\lambda x~t)~\ket{0} + b.(\lambda x~t)~\ket{1}$ where, as the terms $\ket{0}$ and $\ket{1}$ are base vectors, the $\beta$-redexes $(\lambda x~t)~\ket{0}$ and $(\lambda x~t)~\ket{1}$ can be reduced. So the whole term reduces to $a.(\ket{0}/x)t +b.(\ket{1}/x)t$. In the $\odot^{\mathbb C}$-calculus, $\beta$-reduction is not restricted to base vectors, but the application of a matrix to a vector can be defined.
\begin{definition}[Application of a matrix to a vector in $\Q$] We let $$\app = \lambda M \lambda t~\elimsuppar(t,\abstr{x}M~\boolfalse,\abstr{y}M~\booltrue)$$ \end{definition}
If $M:\B \Rightarrow \Q$, then the proof $\app~M~(a.* + b.*)$ reduces to $a.(M~\boolfalse) \parallel b.(M~\booltrue)$.
Therefore if $M$ is the expression of the matrix $\left(\begin{smallmatrix} m_{00} & m_{01}\\ m_{10} & m_{11} \end{smallmatrix}\right)$, as in Definition~\ref{def:matrix}, we have \( \app~M~(a.* + b.*) \longrightarrow^* (a m_{00} + b m_{01}).* +(a m_{10} + b m_{11}).* \).
\begin{definition}[Matrix in $\Q^{\otimes 2}$] \label{def:matrixTwo} A matrix is a proof of $\B^2 \Rightarrow \Q^{\otimes 2}$, that is a function mapping 2-bits to 2-qubits. The matrix $M=(m_{ij})_{ij}$ is expressed as \[ M=\lambda x~\elimand\Bigl(x,\abstr{y,z}\test(y,\test(z,M_0,M_1),\test(z,M_2,M_3)) \Bigr) \] where $M_i = 1.(m_{0i}.*+m_{1i}.*)+1.(m_{2i}.*+m_{3i}.*)$ is the $i$-th column of $M$. \longversion{Note that, for example \begin{align*} M\booltruefalse \longrightarrow~& \elimand\Bigl(\booltruefalse, \abstr{y,z}\test(y,\test(z,M_0,M_1),\test(z,M_2,M_3)) \Bigr)\\ =~& \elimand\Bigl(\pair{\booltrue}{\boolfalse}, \abstr{y,z}\test(y,\test(z,M_0,M_1),\test(z,M_2,M_3)) \Bigr)\\ \longrightarrow~& \test(\booltrue,\test(\boolfalse,M_0,M_1),\test(\boolfalse,M_2,M_3))\\ \longrightarrow~&\test(\boolfalse,M_2,M_3)\\ \longrightarrow~& M_2 \end{align*} Similarly, $M\boolfalsefalse \longrightarrow^*M_0$, $M\boolfalsetrue \longrightarrow^*M_1$, and $M\booltruetrue \longrightarrow^*M_3$. } \shortversion{ Note that $M\boolfalsefalse \longrightarrow^*M_0$, $M\boolfalsetrue \longrightarrow^*M_1$, $M\booltruefalse \longrightarrow^*M_2$, and $M\booltruetrue \longrightarrow^*M_3$.} \end{definition}
\begin{definition} \label{qubits} Taking $m_{ii} = 1$ and $m_{ij} = 0$ for $i\neq j$ yields the proof $\qubits$ of $\B^2 \Rightarrow \Q^{\otimes 2}$ mapping each 2-bit to the corresponding 2-qubit.
For example $$\qubits~\booltruefalse\to^*1.(0.*+0.*)+1.(1.*+0.*)$$ \end{definition} \begin{definition}[Application of a matrix to a vector in $\Q^{\otimes 2}$] We let $$\app_2= \lambda M \lambda t~\elimsuppar(t, \abstr{y}\elimsuppar(y,\abstr{\_}M~\boolfalsefalse,\abstr{\_}M~\boolfalsetrue),\abstr{z} \elimsuppar(z,\abstr{\_}M~\booltruefalse,\abstr{\_}M~\booltruetrue))$$ \end{definition} Hence, if $\ket{\psi} = 1.(a.* + b.*)+1.(c.* + d.*)$ and $M : \B^2 \Rightarrow \Q^{\otimes 2}$, we have \shortversion{ \begin{align*} \app_2 &~M~\ket{\psi}\\ \longrightarrow^* &~ 1.\bigl( (a m_{00} + b m_{01} + c m_{02} + d m_{03}).* + (a m_{10} + b m_{11} + c m_{12} + d m_{13}).*\bigr) \\ + &~ 1.\bigl( (a m_{20} + b m_{21} + c m_{22} + d m_{23}).* + (a m_{30} + b m_{31} + c m_{32} + d m_{33}).*\bigr) \end{align*} } \longversion{ \begin{align*} &\app_2~M~\ket{\psi} \\ &\longrightarrow^* \elimsuppar(\ket{\psi}, \abstr{y} \elimsuppar(y,\abstr{\_}M~\boolfalsefalse,\abstr{\_}M~\boolfalsetrue), \abstr{z} \elimsuppar(z,\abstr{\_}M~\booltruefalse,\abstr{\_}M~\booltruetrue)) \\ &\longrightarrow 1.\left(a.M \boolfalsefalse\parallel b.M\boolfalsetrue\right) \parallel 1.\left( c.M\booltruefalse\parallel d.M\booltruetrue\right) \\ &\longrightarrow^* 1.\left( a.M_0\parallel b.M_1 \right) \parallel 1.\left( c.M_2\parallel d.M_3 \right) \\ &= 1.\begin{aligned}[t] \bigl( &\left( a.(1.(m_{00}.*+m_{10}.*) + 1.(m_{20}.*+m_{30}.*)) \right) \\ \parallel &\left( b.(1.(m_{01}.*+m_{11}.*) + 1.(m_{21}.*+m_{31}.*)) \right) \bigr) \end{aligned} \\ &\parallel 1.\begin{aligned}[t] \bigl( &\left( c.(1.(m_{02}.*+m_{12}.*) + 1.(m_{22}.*+m_{32}.*)) \right) \\ \parallel & \left( d.(1.(m_{03}.*+m_{13}.*) + 1.(m_{23}.*+m_{33}.*)) \right) \bigr) \end{aligned} \end{align*} In this proof, we have two layers of $\parallel$ on the top and two layers of $+$ at the bottom, these $\parallel$ and $+$ commute yielding the proof \begin{align*} 1.\bigl( 1.( 1.(& a m_{00}.* \parallel b m_{01}.* ) \parallel 1.( c m_{02}.* \parallel 
d m_{03}.* ) ) \\ + 1.( 1.(& a m_{10}.* \parallel b m_{11}.* ) \parallel 1.( c m_{12}.* \parallel d m_{13}.* ) ) \bigr) \\ +1.\bigl( 1.( 1.(& a m_{20}.* \parallel b m_{21}.* ) \parallel 1. ( c m_{22}.* \parallel d m_{23}.* ) ) \\ + 1.( 1.(& a m_{30}.* \parallel b m_{31}.* ) \parallel 1.( c m_{32}.* \parallel d m_{33}.* ) ) \bigr) \end{align*} that finally reduces to \begin{align*} 1.\bigl( (a m_{00} + b m_{01} + c m_{02} + d m_{03}).*& + (a m_{10} + b m_{11} + c m_{12} + d m_{13}).*\bigr) \\ +~ 1.\bigl( (a m_{20} + b m_{21} + c m_{22} + d m_{23}).*& + (a m_{30} + b m_{31} + c m_{32} + d m_{33}).*\bigr) \end{align*}}
\subsection{An example: Deutsch's algorithm}
Deutsch's algorithm decides whether a one-bit to one-bit function $f$ is constant or not, applying an oracle $U_f$, implementing $f$, only once. It is an algorithm operating on 2-qubits. It proceeds in four steps. (1) Prepare the initial state \( \ket{+-} = \frac{1}{2} \ket{00} -\frac{1}{2} \ket{01} +\frac{1}{2} \ket{10} -\frac{1}{2} \ket{11} \). (2) Apply to it the unitary operator $U_f$, defined by $U_f\ket{x,y} = \ket{x,y \oplus f(x)}$ for $x,y\in\{0,1\}$, where $\oplus$ is the exclusive or. (3) Apply to it the unitary operator $H\otimes I= \frac 1{\sqrt 2}\left( \begin{smallmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ \end{smallmatrix}\right) $. (4) Measure the first qubit. The output is $\ket{0}$ if $f$ is constant, and $\ket{1}$ if it is not.
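As a sanity check, these four steps can be replayed on plain amplitude vectors. The Python sketch below is ordinary linear algebra, not an implementation of the $\odot^{\mathbb C}$-calculus; the function names are ours, and the final step uses the Born-rule probabilities of the previous subsections (which are $1$ and $0$ here, so the outcome is deterministic).

```python
# Deutsch's algorithm on amplitude vectors (a, b, c, d) standing for
# a|00> + b|01> + c|10> + d|11>, mirroring steps (1)-(4) above.

def apply(m, v):
    """Apply a 4x4 matrix (list of rows) to an amplitude vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def oracle(f):
    """U_f : |x,y> -> |x, y xor f(x)>, built as a permutation matrix."""
    m = [[0] * 4 for _ in range(4)]
    for x in (0, 1):
        for y in (0, 1):
            m[2 * x + (y ^ f(x))][2 * x + y] = 1
    return m

s = 2 ** -0.5
H_I = [[s, 0, s, 0], [0, s, 0, s], [s, 0, -s, 0], [0, s, 0, -s]]
plus_minus = [0.5, -0.5, 0.5, -0.5]              # step (1): the state |+->

def deutsch(f):
    v = apply(H_I, apply(oracle(f), plus_minus))  # steps (2) and (3)
    p_first_is_0 = v[0] ** 2 + v[1] ** 2          # step (4)
    return 0 if p_first_is_0 > 0.5 else 1

print(deutsch(lambda x: 0))  # constant f: 0
print(deutsch(lambda x: x))  # balanced f: 1
```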
In the $\odot^{\mathbb C}$-calculus, the initial state is $\ket{+-} = 1.(\frac{1}{2}.* + \frac{-1}{2}.*) + 1.(\frac{1}{2}.* + \frac{-1}{2}.*)$ and the operator mapping $f$ to $U_f$ is expressed as in Definition \ref{def:matrixTwo} $$U = \lambda f\ \lambda x~ \elimand\Bigl(x,\abstr{y,z}\test(y,\test(z,M_0,M_1), \test(z,M_2,M_3)) \Bigr)$$ \( \textrm{with}\qquad \begin{array}[t]{rl@{\qquad\qquad}rl} M_0 &= \qubits~\pair{\boolfalse}{\oplus~\boolfalse~(f~\boolfalse)} & M_2 &= \qubits~\pair{\booltrue}{\oplus~\boolfalse~(f~\booltrue)}\\ M_1 &= \qubits~\pair{\boolfalse}{\oplus~\booltrue~(f~\boolfalse)} & M_3 &= \qubits~\pair{\booltrue}{\oplus~\booltrue~(f~\booltrue)} \end{array} \) \\ where $\qubits$ is defined in Definition~\ref{qubits} and the exclusive or $\oplus$ in Section~\ref{exclusiveor}. The operator $H\otimes I$ is expressed as in Definition~\ref{def:matrixTwo} with $m_{00} = m_{20} = m_{11} = m_{31} = m_{02} = m_{13} = \frac 1{\sqrt 2}$, $m_{22} = m_{33} = - \frac 1{\sqrt 2}$, and all the other $m_{ij}$ equal to $0$. Finally, Deutsch's algorithm is the proof of $(\B\Rightarrow\B)\Rightarrow\B$ $$\mbox{\it Deutsch} = \lambda f~\pi_2 (\app_2~(H\otimes I)~(\app_2~(U~f)~\ket{+-}))$$ Given a proof $f$ of $\B\Rightarrow\B$ expressing a constant function, we have $\mbox{\it Deutsch}\ f\to^*\boolfalse$, while if $f$ is not constant, $\mbox{\it Deutsch}\ f\to^*\booltrue$.
\medskip
\section{Conclusion}
We have defined the notions of insufficient and excessive connectives in natural deduction, extended propositional logic with an excessive connective $\odot$, and investigated the properties of the proof language of the obtained logic. We leave open the question of the interpretation of this logic in a model, in particular a categorical one, besides the obvious Lindenbaum algebra. These notions of insufficient and excessive connectives are not specific to natural deduction, and similar notions could be defined, for instance, in sequent calculus.
In sequent calculus, however, harmony can be defined in a stronger sense that includes not only the possibility of normalizing proofs, but also that of reducing the use of the axiom rule on non-atomic propositions to smaller ones \cite{MillerPimentel}: an analogue of $\eta$-expansion, generalized to arbitrary connectives.
The $\odot^{\mathbb C}$-calculus, the proof language of this logic, can express all quantum circuits, as it can express matrices and measurement operators. However, it is not restricted to quantum algorithms, since the $\odot$ connective addresses the question of the information erasure, non-reversibility, and non-determinism of measurement, but not that of linearity. We leave for future work the restriction of the calculus to linear operators, forbidding, for example, the non-linear proof of the proposition $\Q \Rightarrow \Q^{\otimes 2}$ that expresses cloning: $\lambda x~\elimsuppar(x, \abstr{\_}\elimsuppar(x, \abstr{\_} \ket{00},\abstr{\_} \ket{01}), \abstr{\_}\elimsuppar(x, \abstr{\_} \ket{10}, \abstr{\_} \ket{11}))$, where $\ket{00}$ is a notation for $\qubits~\boolfalsefalse$, etc. It is also possible to restrict to the fragment of the language where proofs of $\Q \Rightarrow \Q$ have the form $\lambda x~(\app~M~x)$, for some proof $M$ of $\B \Rightarrow \Q$. Then, we can also enforce unitarity, following the methods of~\cite{AltenkirchGrattageLICS05,DiazcaroGuillermoMiquelValironLICS19,LambdaS1}.
\section*{Acknowledgements}
The authors want to thank Jean-Baptiste Joinet, Dale Miller, Alberto Naibo, and Alex Tsokurov for useful discussions.
\bibliographystyle{plain}
\section{Introduction}
The observation of the phenomenon of dark matter on various length scales in our Universe remains one of the major puzzles of modern physics (see \emph{e.g.}~Ref.~\cite{Bertone:2004pz} for a review). The hypothesis of a new particle -- and possibly an entire new sector of particles beyond the standard model (BSM) -- is a widely considered explanation. In particular, the existence of a stable weakly interacting massive particle naturally explains the observed value for the relic density via the thermal freeze-out mechanism. While initially being promoted through the popularity of supersymmetry, by now the idea of frozen-out dark matter has entered the standard recipe for successful dark-matter model building well beyond the supersymmetric paradigm. An additional appeal of such a candidate emerges from the promising prospects to detect it. Its production from, scattering off, and annihilation into standard-model particles -- probed at colliders, direct and indirect detection experiments, respectively -- provides three complementary search strategies accessible with current experimental sensitivities. Exploring the interplay of such observables has become a major direction of phenomenological research and brought forth the need for their efficient numerical computation. This has stimulated the development of automated numerical tools such as \textsc{micrOMEGAs}~\cite{Belanger:2018ccd}, \textsc{DarkSUSY}~\cite{Bringmann:2018lay}, \textsc{SuperIso Relic}~\cite{Arbey:2018msw} and -- in particular, as considered here -- \textsc{MadDM}~\cite{Ambrogi:2018jqj}. For the computation of cross sections and widths, \textsc{MadDM}\ utilizes the automated matrix element generator \textsc{MadGraph5\_aMC@NLO}~\cite{Alwall:2011uj,Alwall:2014hca} (\textsc{MG5\_aMC}\ in the following). It is embedded as a plug-in of the \textsc{MG5\_aMC}\ platform.
As such, \textsc{MadDM}\ supports all particle physics models that can be cast into the Universal FeynRules Output (UFO) format~\cite{Degrande:2011ua}, generated by \emph{e.g.}~\textsc{FeynRules}~\cite{Alloul:2013bka}, \textsc{SARAH}~\cite{Staub:2012pb} or \textsc{LanHEP}~\cite{Semenov:2014rea}. \textsc{MadDM}~1.0~\cite{Backovic:2013dpa}, released in 2013, introduced the relic density calculator, while versions 2.0~\cite{Backovic:2015cra} and 3.0~\cite{Ambrogi:2018jqj} extended the functionality by a comprehensive set of direct and indirect detection observables, respectively. For direct detection, the code not only computes the elastic spin-independent and spin-dependent dark-matter-nucleon cross sections. It also allows for the computation of the double differential event rates as a function of time, scattering angle and energy supporting a variety of target materials. For indirect detection, \textsc{MadDM}\ allows the user to compute the velocity averaged annihilation cross sections today and the corresponding energy spectra of prompt photons, cosmic rays and neutrinos. The generation of annihilation spectra can either be done by combining pre-computed spectra for individual annihilation channels from \textsc{PPPC4DMID}~\cite{Cirelli:2010xx} (fast mode) or by simulating events employing \textsc{Pythia}~8~\cite{Sjostrand:2014zea} for showering and hadronization (precise mode). The latter enables full flexibility, in particular allowing the user to consider arbitrary $2\to n$ processes. The program also includes experimental constraints from the direct detection experiments LUX~\cite{Akerib:2017kat}, Xenon1T~\cite{Aprile:2018dbl} and Pico-60~\cite{Amole:2017dex} as well as indirect detection constraints from gamma-ray observations of dwarf spheroidal galaxies by Fermi-LAT~\cite{Fermi-LAT:2016uux} that allow for the computation of a likelihood and exclusion limit. 
With this article, we release version~3.1, which introduces various minor improvements, such as a revised display command, an extended output of the relic density computation, as well as updated constraints from Xenon1T~\cite{Aprile:2018dbl}. We provide a short user guide of \textsc{MadDM}\ that equips researchers with all relevant information required to readily perform comprehensive phenomenological studies of particle dark-matter models. In particular, in section~\ref{sec:getstart} we supply information on the installation and the general functionalities of the code. In section~\ref{sec:DMobs} we detail the observable-specific commands and settings. We summarize and give a brief outlook on upcoming developments in section~\ref{sec:conl}.
\section{Getting started}\label{sec:getstart}
In this section, we provide the basic information on how to install \textsc{MadDM}\, and describe the main commands via a quick tutorial. We depict its folder structure, the relevant output files and give a few tips on how to run it efficiently.
\subsection{Installation}\label{sec:install}
To install the \textsc{MadDM}\, plug-in, the user has to first download and untar the latest stable version of \textsc{MG5\_aMC}\ from \href{https://launchpad.net/mg5amcnlo}{https://launchpad.net/mg5amcnlo}. At the time of writing, this corresponds to version 2.8.2, which we assume for definiteness in the examples in the following. While \textsc{MG5\_aMC}\ is now compatible with Python 3, this is not yet the case for \textsc{MadDM}, which works only with Python~2.7.
Additionally, the user should make sure that there is a complete installation of the SciPy and NumPy modules.\footnote{Note that \textsc{MG5\_aMC}~2.8.X furthermore requires the Python module \texttt{six}.} Once the \textsc{MG5\_aMC}\ package has been untarred, the user has to enter the corresponding directory, start \textsc{MG5\_aMC}, and install \textsc{MadDM}\ via the \textsc{MG5\_aMC}\ command line:
\begin{verbatim}
mydir$ tar -xzf MG5_aMC_v2.8.2.tar.gz
mydir$ cd MG5_aMC_v2_8_2/
MG5_aMC_v2_8_2$ python2.7 bin/mg5_aMC
MG5_aMC> install maddm
MG5_aMC> quit
MG5_aMC_v2_8_2$
\end{verbatim}
The latest version of \textsc{MadDM}\, will automatically be downloaded and installed as a \textsc{MG5\_aMC}\ plug-in. The corresponding source code is located in \texttt{MG5\_aMC\_v2\_8\_2/PLUGIN/maddm} while the executable python file \texttt{maddm.py} is stored in \texttt{MG5\_aMC\_v2\_8\_2/bin/}. See section~\ref{sec:folder} and figure~\ref{fig:folder} for details on the folder structure. Note that \textsc{MadDM}\, is automatically interfaced with a few tools that support the computation of indirect-detection observables. These are \textsc{Pythia~8}~\cite{Sjostrand:2014zea}, the \textsc{PPPC4DMID} libraries for annihilation spectra~\cite{Cirelli:2010xx}, \textsc{DRAGON}~\cite{Evoli:2008dv} and the \textsc{GALPROP} libraries~\cite{Vladimirov:2010aq} (for \textsc{DRAGON}). When performing indirect-detection computations with \textsc{MadDM}\ for the first time, the user is asked whether these packages should be installed automatically. Note that \emph{(i)} the installation can take some time and that \emph{(ii)}~\textsc{Pythia}~8 and \textsc{PPPC4DMID} are needed for the computation of annihilation spectra, while \textsc{DRAGON} is needed for cosmic-ray propagation only.
The user can also perform the installation at any time via the \textsc{MadDM}\ command-line interface:
\begin{verbatim}
MG5_aMC_v2_8_2$ python2.7 bin/maddm.py
MadDM> install pythia8
MadDM> install PPPC4DMID
MadDM> install dragon
MadDM> install dragon_data_from_galprop
MadDM> quit
MG5_aMC_v2_8_2$
\end{verbatim}
For further information about indirect detection and the usage of these packages see section~\ref{sec:ID}.

\subsection{Command-line interface and tutorial mode}\label{sec:basic}

Once the user has entered \textsc{MadDM}\ by executing the \texttt{maddm.py} file, the first steps for any computation are to load a model and define the dark matter candidate. This is achieved by typing:
\begin{verbatim}
MG5_aMC_v2_8_2$ python2.7 bin/maddm.py
MadDM> import model DMsimp_s_spin0
MadDM> define darkmatter xd
\end{verbatim}
where we have considered a Dirac dark matter candidate denoted by the particle named \texttt{xd} within the simplified model called \texttt{DMsimp\_s\_spin0}. More details about models are provided in section~\ref{sec:models}. The relic abundance, direct and indirect detection observables for \texttt{xd} are computed via
\begin{verbatim}
MadDM> generate relic_density
MadDM> add direct_detection
MadDM> add indirect_detection
MadDM> output my_process_dir
\end{verbatim}
\begin{figure}[b]
\centering
\begin{Verbatim}[fontsize=\footnotesize]
The following switches determine which programs are run:
/============ Description ============|====== values ======|======= other options =======\
| 1. Compute the Relic Density        | relic = ON         | OFF                         |
| 2. Compute direct(ional) detection  | direct = direct    | OFF|directional             |
| 3. Compute indirect detection/flux  | indirect = sigmav  | flux_source|flux_earth|OFF  |
| 4. Run Multinest scan               | nestscan = OFF     | ON                          |
\========================================================================================/
You can also edit the various input card:
 * Enter the name/number to open the editor
 * Enter a path to a file to replace the card
 * Enter set NAME value to change any parameter to the requested value
/=============================================================================\
|  5. Edit the model parameters  [param]                                      |
|  6. Edit the MadDM options     [maddm]                                      |
\=============================================================================/
 [60s to answer]
>
\end{Verbatim}
\vspace{-2.3ex}
\caption{Example of the launch interface after performing the \texttt{launch} command in \textsc{MadDM}\@. In the specific example, the relic density, direct and indirect detection calculations are turned on, while the scan with \textsc{MultiNest} is switched off.}
\label{fig:lauchpromtp}
\end{figure}
The commands \texttt{generate} and \texttt{add} have the same functionality as in \textsc{MG5\_aMC}. In particular, \texttt{generate} is used as the first command, while \texttt{add} is used to retain the previously generated processes and add new ones. Note that a subsequent call of the \texttt{generate} command will erase all previous processes. The last command above creates a folder \texttt{my\_process\_dir} which contains all the code necessary to launch the required computations. Performing these computations for a given parameter point is done via the \texttt{launch} command.
\begin{verbatim}
MadDM> launch my_process_dir
\end{verbatim}
This opens the \emph{launch interface} that allows the user to change settings and model parameters, as shown in figure~\ref{fig:lauchpromtp}. There are two ways of making changes.
First, (repeatedly) entering a number 1--4 allows the user to alternate between the options displayed, while entering 5 or 6 opens the files \texttt{param\_card.dat} or \texttt{maddm\_card.dat}, respectively, with a command-line editor (\texttt{vim} by default).\footnote{Note that the files may, of course, be changed by any other means instead.} These files contain all model parameters and most of the \textsc{MadDM}\ settings, respectively. A second option is to directly type \texttt{\,set\;<parameter>\;<value>\,} in the launch interface,~for instance
\begin{verbatim}
> set mxd 500
\end{verbatim}
for setting the dark-matter mass to 500\,GeV. Auto-completion is available (via pressing tab) to easily find the names of parameters. The observable-specific settings (the first four entries of figure~\ref{fig:lauchpromtp}) will be described in detail in section~\ref{sec:DMobs}. Once the user is done with all settings, the launch interface is finally exited by pressing enter. Note that the \texttt{launch} command can be executed either in the same session or after quitting and restarting \textsc{MadDM}\@. In the former case, the specification of the directory where the process has been created is not necessary, as \textsc{MadDM}\ will launch the process of the last output in the session.
\medskip

A convenient way of being guided through the basic commands is the tutorial mode. It is entered by typing \texttt{tutorial} in the \textsc{MadDM}\ command-line interface:
\begin{verbatim}
MG5_aMC_v2_8_2$ python2.7 bin/maddm.py
MadDM> tutorial
\end{verbatim}
The screen output explains the basic commands and options that the user may follow.
It can be exited by:
\begin{verbatim}
MadDM> tutorial stop
\end{verbatim}

\subsection{The \texttt{display} command}

When computing the observables for dark matter models, it is possible to use the following commands
\begin{verbatim}
MadDM> display processes
MadDM> display diagrams
\end{verbatim}
to either display a list of the generated processes or the respective Feynman diagrams. From \textsc{MadDM}~3.1 on, the display command allows for the following options:
\begin{itemize}
\item \texttt{relic}, \texttt{direct} or \texttt{indirect}: display only processes/diagrams related to relic density, direct detection or indirect detection; a combination of them is supported, see the example below;
\item \texttt{last}: displays only processes/diagrams generated by the last command called; it overrides any other option specified;
\item \texttt{all}: works as the simultaneous presence of \texttt{relic}, \texttt{direct} and \texttt{indirect}: it displays only diagrams relevant for dark matter annihilation and is the default setting if no options are provided.
\end{itemize}
For instance, the command
\begin{verbatim}
MadDM> display processes relic indirect
\end{verbatim}
displays all the processes related to relic density and indirect detection.

\subsection{Running \texorpdfstring{\textsc{MadDM}}{MadDM} from a script}\label{sec:script}

In certain applications, it might not be convenient or even possible to use the command-line interface of \textsc{MadDM}\ described in section~\ref{sec:basic}. An alternative is to control \textsc{MadDM}\ via a script. To do so, the respective commands described in section~\ref{sec:basic} simply need to be written in a plain text file separated by line-breaks. The respective script can be passed as an argument when starting \textsc{MadDM}. The corresponding operations will then be executed. For instance, the user may create the two scripts:\footnote{Note that the content of the two scripts could as well be put into one file.
The separation of the launch command is, however, often convenient as only this part needs to be rerun when choosing different parameters.}\medskip\\
\noindent
\begin{mgscript}[adjusted title=generate.txt,width=0.45\linewidth,nobeforeafter,box align=top]
import model DMsimp_s_spin0
define darkmatter xd
generate relic_density
add direct_detection
add indirect_detection
output my_process_dir
\end{mgscript}
\hfill
\begin{mgscript}[adjusted title=launch.txt,width=0.45\linewidth,nobeforeafter,box align=top]
launch my_process_dir
indirect = flux_source
direct = direct
set sigmav_method madevent
set nevents 20000
set mxd 500
\end{mgscript}
\vspace{3ex}
\noindent
and execute \textsc{MadDM}:
\begin{verbatim}
MG5_aMC_v2_8_2$ python2.7 bin/maddm.py generate.txt
MG5_aMC_v2_8_2$ python2.7 bin/maddm.py launch.txt
\end{verbatim}
Note that the settings \texttt{relic}, \texttt{direct}, \texttt{indirect} and \texttt{nestscan} should not be set by entering the numbers 1--4 (\emph{cf.}~section~\ref{sec:basic}) in scripts, as the selected mode can depend on the machine on which the code runs and on the specific \textsc{MadDM}\, version installed. The observable-specific commands contained in \texttt{launch.txt} are detailed in section~\ref{sec:DMobs}.

\subsection{Folder structure}\label{sec:folder}

\begin{figure}[t]
\centering
{
\begin{minipage}{6cm}
\dirtree{%
.1 MG5\_aMC\_v2\_8\_2.
.2 bin.
.3 maddm.py.
.3 mg5\_aMC.
.3 \dots.
.2 models.
.2 my\_process\_dir.
.3 bin.
.3 Cards.
.4 maddm\_card.dat.
.4 multinest\_card.dat.
.4 param\_card.dat.
.4 \dots.
.3 Indirect.
.3 output.
.4 run\_01.
.4 run\_02.
.4 \dots.
.3 \dots.
.2 PLUGIN.
.3 maddm.
.3 \dots.
.2 \dots.
}
\end{minipage}
\begin{minipage}{8.5cm}
\dirtree{%
.1 run\_01.
.2 maddm\_card.dat.
.2 MadDM\_results.txt.
.2 maddm.out.
.2 Output\_Indirect.
.3 antiprotons\_spectrum\_pythia8.dat.
.3 gammas\_spectrum\_pythia8.dat.
.3 neutrinos\_e\_spectrum\_pythia8.dat.
.3 neutrinos\_mu\_spectrum\_pythia8.dat.
.3 neutrinos\_tau\_spectrum\_pythia8.dat.
.3 positrons\_spectrum\_pythia8.dat.
.3 restx\_spectrum\_pythia8.dat.
.3 pythia8.log.
.3 run\_01\_DM\_banner.txt.
.3 run\_shower.sh.
.3 unweighted\_events.lhe.gz.
}
\end{minipage}
}
\caption{Schematic structure of the folders and files of \textsc{MG5\_aMC}\, and \textsc{MadDM}. {\bfseries{On the left:}} Main directory of \textsc{MG5\_aMC}\, where the python executable file \texttt{maddm.py} is located in the \texttt{bin} folder, while the source code is in \texttt{PLUGIN/maddm/}. The output directory \texttt{my\_process\_dir} contains all relevant setting cards (within \texttt{Cards}), and the output files, for instance in \texttt{output/run\_01}. {\bfseries{On the right:}} Zoomed view of the \texttt{run\_01} directory, where the main results are stored, as labeled. The file \texttt{MadDM\_results.txt} recaps the values of all observables computed by the user. Notice that \texttt{Output\_Indirect} contains indirect-detection files, such as the energy spectra and the LHE event file.}
\label{fig:folder}
\end{figure}
Figure~\ref{fig:folder} displays the general folder structure of \textsc{MG5\_aMC}\ after installation of the \textsc{MadDM}\, plug-in. The directory \texttt{bin} contains the python code to be executed, while \texttt{models} contains all models used; see section~\ref{sec:models} for more information. Once generated, \texttt{my\_process\_dir} contains all code, input and output for a certain process. Input parameters are stored in various files in the directory \texttt{Cards}. For instance, model parameters can be either set via the \texttt{set} command in the launch interface (see section~\ref{sec:basic}) or by changing the respective parameters in \texttt{Cards/param\_card.dat}.
The output folder contains a sub-directory for each run of the process (\texttt{run\_01}, \texttt{run\_02}, \dots) which, in turn, contains all outputs of the computation (stored in \texttt{MadDM\_results.txt} and \texttt{maddm.out}) as well as a copy of the \texttt{maddm\_card.dat} used. Further output is linked to the directory \texttt{Output\_Indirect}, see section~\ref{sec:ID} for more details.

\subsection{Running scans}\label{sec:scans}

There are several ways to run scans over parameter space points within \textsc{MadDM}. First, \textsc{MadDM}\, may simply be called by an external code that performs a scan. In this case, the parameters and settings may just be passed by a script as detailed in section~\ref{sec:script}. The second option is to employ the sequential grid scan functionality of \textsc{MadDM}, which allows one to scan over an arbitrary number of model parameters with one launch command. To achieve this, instead of setting a given parameter to a fixed value, the respective scan range has to be defined:
\begin{verbatim}
MadDM> launch my_process_dir
> set mxd scan:range(50,700,25)
\end{verbatim}
Note that after the syntax \texttt{scan:} any python iterable is accepted, including list comprehension syntax. In the case of multi-dimensional scans, two possibilities are available. Using the syntax \texttt{scan:} for two or more parameters will generate a nested loop over the scan ranges, \emph{i.e.}~a complete grid. Another possibility is to use the syntax \texttt{scan1:}, which instead creates a parallel scan, namely the values of the iterables are scanned simultaneously. Instead of being specified in the launch interface, this setting can also be done in the \texttt{param\_card.dat}:
\begin{center}
\begin{mgscript}[title=my\_process\_dir/Cards/param\_card.dat, width=9cm]
...
Block mass
    1 5.040000e-03 # MD
    2 2.550000e-03 # MU
    3 1.010000e-01 # MS
    4 1.270000e+00 # MC
    5 4.700000e+00 # MB
    6 1.720000e+02 # MT
   15 1.777000e+00 # MTA
   23 9.118760e+01 # MZ
   25 1.250000e+02 # MH
   51 1.000000e+01 # MXc
   52 scan:range(50,700,25) # MXd
   54 1.000000e+03 # MY0
...
\end{mgscript}
\end{center}
The third option is to perform guided scans with the Bayesian inference tool \textsc{MultiNest} (\textsc{MultiNest}~\cite{Feroz:2007kg,Feroz:2008xx} is provided together with \textsc{PyMultiNest}~\cite{Buchner:2014nha}) by specifying
\begin{verbatim}
MadDM> launch my_process_dir
> nestscan = ON
\end{verbatim}
and setting the \textsc{MultiNest} parameters in \texttt{multinest\_card.dat}, which then appears as number 7 in the launch interface:
\begin{Verbatim}[fontsize=\footnotesize]
/=================================================================\
|  5. Edit the model parameters  [param]                          |
|  6. Edit the MadDM options     [maddm]                          |
|  7. Edit the Multinest options [multinest]                      |
\=================================================================/
 [60s to answer]
>
\end{Verbatim}
For further details see Appendix~E.2 of~Ref.~\cite{Ambrogi:2018jqj}.

\subsection{Importing models}\label{sec:models}

Being a plug-in of \textsc{MG5\_aMC}\,, \textsc{MadDM}\, can perform computations within any particle physics model that allows for an implementation in the Universal FeynRules Output (UFO) format~\cite{Degrande:2011ua}. The implementation can be achieved with automated tools like \textsc{FeynRules}~\cite{Alloul:2013bka}, \textsc{SARAH}~\cite{Staub:2012pb} or \textsc{LanHEP}~\cite{Semenov:2014rea}. The respective model directory (named as the model to be imported) has to be stored in the directory \texttt{models} (\emph{cf.}~section~\ref{sec:folder}). Note that a database of models can be found at the FeynRules webpage: \href{http://feynrules.irmp.ucl.ac.be/wiki/ModelDatabaseMainPage}{http://feynrules.irmp.ucl.ac.be/wiki/ModelDatabaseMainPage}.
For models stored in that database (but not in the local \texttt{models} folder), \textsc{MadDM}\, automatically downloads the model when importing it via the \textsc{MadDM}\, command-line interface (\emph{cf.}~section~\ref{sec:basic}). This model list can be viewed by typing
\begin{verbatim}
MadDM> display modellist
\end{verbatim}
which displays the entire list of available models. A user guide for the implementation of a particle physics model into \textsc{FeynRules} can be found in Ref.~\cite{Alloul:2013bka}.

\section{Dark matter observables}\label{sec:DMobs}

In the following, we detail the capabilities of \textsc{MadDM}\, to perform computations of the relic density, direct and indirect detection observables, respectively.

\subsection{Relic density}\label{sec:reldens}

\textsc{MadDM}\, allows for the computation of the relic density in the framework of thermal freeze-out of dark matter. It automatically computes the rates for all relevant $2\to2$ annihilation processes, including coannihilation processes. Coannihilation is taken into account if the user specifies the coannihilating partner(s), such as
\begin{verbatim}
MadDM> define coannihilator xco1 xco2 xco3
\end{verbatim}
prior to the command \texttt{generate relic\_density}, see section~\ref{sec:basic}. Here \texttt{xco1}, \texttt{xco2}, \texttt{xco3} are exemplary names of coannihilating partners in the model. The code solves the corresponding Boltzmann equation for the dark-matter abundance (or dark-sector\footnote{Here we consider the \emph{dark sector} to comprise the dark matter candidate and all potential coannihilators.} abundance for the case of coannihilation~\cite{Edsjo:1997bg}) numerically, see Ref.~\cite{Backovic:2013dpa} for further details. It assumes kinetic equilibrium between all involved particles (as well as chemical equilibrium within the dark sector) during dark-matter freeze-out.
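For orientation, the full numerical solution can be cross-checked against the textbook freeze-out rule of thumb $\Omega h^2 \approx 3\times 10^{-27}\,\text{cm}^3\,\text{s}^{-1}/\langle\sigma v\rangle$. The following standalone Python sketch (not part of \textsc{MadDM}, and accurate only to within factors of a few) illustrates the scaling:

```python
def omega_h2_estimate(sigmav_cm3_s):
    """Order-of-magnitude freeze-out estimate of the relic density,
    Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.  This is a rule of thumb
    only; MadDM instead solves the Boltzmann equation numerically."""
    return 3e-27 / sigmav_cm3_s

# The canonical thermal cross section <sigma v> ~ 3e-26 cm^3/s
# reproduces a relic density of the order of the observed value ~ 0.1
print(omega_h2_estimate(3e-26))
```

Larger annihilation cross sections deplete the abundance more efficiently and thus yield a smaller relic density, which is why the \texttt{UNDERABUNDANT} flag discussed below accompanies large values of $\langle\sigma v\rangle$.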
The output on screen is, for instance:
\begin{verbatim}
***** Relic Density
OMEGA IS 0.000325869586293
INFO: Relic Density = 3.26e-04 UNDERABUNDANT
INFO: x_f = 2.80e+01
INFO: sigmav(xf) = 3.27e-24 cm^3/s
INFO: xsi = 2.72e-03
\end{verbatim}
The relic density is given in terms of $\Omega h^2$, where $\Omega$ is the dark-matter energy density in units of the critical density and $h$ is the Hubble constant in units of $100\,\text{km}\,\text{s}^{-1}\, \text{Mpc}^{-1}$. The freeze-out point (approximately the point of chemical decoupling) is given by $x_\text{f}$, where $x =m_\text{DM}/T$. The thermally averaged annihilation cross section at the freeze-out point, $\langle\sigma v\rangle (x_\text{f})$, is given in units of $\text{cm}^{3}\,\text{s}^{-1}$. Since version 3.1, \textsc{MadDM}\ additionally displays the contributions of the different channels to the relic density. The resulting screen output reads:
\begin{verbatim}
INFO: Channels contributions:
INFO: xdxdx_hh : 0.05 %
INFO: xdxdx_zz : 0.13 %
INFO: xdxdx_ttx : 98.21 %
INFO: xdxdx_aa : 0.95 %
INFO: xdxdx_wpwm : 0.10 %
INFO: xdxdx_az : 0.57 %
INFO: No contribution from processes: y0y0
\end{verbatim}
Models whose relic density undershoots (overshoots) the value measured by Planck, $\Omega h^2 = 0.120\pm 0.001$~\cite{Aghanim:2018eyx}, by more than $2\sigma$ are flagged as \texttt{UNDERABUNDANT} (\texttt{OVERABUNDANT}), while values within that range are flagged as \texttt{WITHIN\_EXP\_ERRORS}. However, note that the theoretical uncertainty in the model prediction -- typically significantly larger -- is not estimated by \textsc{MadDM}. It is up to the user to estimate the error on $\Omega h^2$ to be taken into account \emph{e.g.}~in a global fit. In the case of thermally underabundant dark matter, \texttt{xsi} denotes the ratio of the model's thermal abundance to the total measured dark-matter abundance,
\begin{equation}
\xi= \frac{(\Omega h^2)_\text{model}}{(\Omega h^2)_\text{Planck}}.
\end{equation}
In this case, constraints from direct and indirect detection, the subject of the following sections, are interpreted in two ways:
\begin{itemize}
\item One assumes that the model's candidate indeed only makes up a fraction $\xi$ of the total amount of dark matter, implying the existence of a further unspecified contribution, \emph{e.g.}~axions or primordial black holes. This interpretation entails the rescaling of the yields for direct and indirect detection by a factor of $\xi$ and $\xi^2$, respectively. It is denoted by `Thermal'.
\item Regardless of the underabundant contribution from thermal freeze-out, the model's candidate is assumed to constitute 100\% of the measured dark matter abundance. This interpretation applies in the presence of an additional (non-thermal) contribution to dark-matter production, \emph{e.g.}~through a late decay of a heavier species. It is denoted by `All DM'.
\end{itemize}
Note that to enable the interpretation of direct and indirect detection observables in the `Thermal' scenario, the relic density computation has to be performed.

\subsection{Direct detection}\label{sec:DD}

Direct detection experiments search for dark-matter particles scattering off atomic nuclei in low-background environments, \emph{e.g.}~deep underground. If the recoil momentum of the nucleus is above the detection threshold, then electrons, photons and/or phonons induced by the nuclear recoil may be detected. The number of dark-matter scattering events in a given experiment is then set by a confluence of factors in the dark-matter theory parameter space. These include the mass, the scattering cross section and the astrophysical velocity distribution of the dark matter in our local Galactic neighborhood. Since the solar system is moving in a particular direction with respect to the Galactic dark-matter halo, the astrophysical distribution can provide both velocity and angular information.
These are used to calculate the nuclear recoil energy spectrum and the angular recoil spectrum, measured by direct detection and directional detection experiments, respectively. \textsc{MadDM}\ allows the user to choose between two modes called \texttt{direct} and \texttt{directional}. The setting can be changed in the launch interface either by (repeatedly) entering the number 2 until the requested option is displayed on screen, \emph{cf.}~figure~\ref{fig:lauchpromtp} and section~\ref{sec:basic}, or by directly entering one of the following commands:
\begin{verbatim}
> direct = direct
> direct = directional
\end{verbatim}
The mode \texttt{direct} provides the basic computations of the spin-independent and spin-dependent dark-matter-nucleon cross sections as well as their respective limits from LUX~\cite{Akerib:2017kat}, XENON1T~\cite{Aprile:2018dbl}, and PICO-60~\cite{Amole:2017dex}. An exemplary screen output from the direct detection module (for the same parameters used above) reads:
\begin{Verbatim}[fontsize=\scriptsize]
***** Direct detection [cm^2]:
INFO: SigmaN_SI_p: Thermal = 2.01e-50 ALLOWED, All DM = 7.40e-48 ALLOWED Xenon1ton ul = 4.16e-46
INFO: SigmaN_SI_n: Thermal = 1.98e-50 ALLOWED, All DM = 7.27e-48 ALLOWED Xenon1ton ul = 4.16e-46
INFO: SigmaN_SD_p: Thermal = 0.00e+00 ALLOWED, All DM = 0.00e+00 ALLOWED Pico60 ul = 2.03e-40
INFO: SigmaN_SD_n: Thermal = 0.00e+00 ALLOWED, All DM = 0.00e+00 ALLOWED Lux2017 ul = 1.22e-40
\end{Verbatim}
As indicated, all direct-detection cross sections are given in $\text{cm}^2$. In the `Thermal' scenario the cross-section prediction is rescaled by $\xi$, see section~\ref{sec:reldens} for details. This particular model does not have spin-dependent interactions due to the type of mediator interacting with the dark matter. The mode \texttt{directional} additionally provides the fully differential nuclear recoil rates as a function of energy, angle and time.
It, therefore, allows the user to explore the directional information of dark-matter scattering. For the computation of nuclear recoil rates, \textsc{MadDM}\ allows the user to choose among a large set of detector materials and to take into account detector smearing effects. Furthermore, the recoil energy, the detector size, the most probable dark-matter velocity and escape velocity as well as the local dark-matter density can be specified. Moreover, the nuclear form factor can be customized. All these settings can be adjusted by changing the corresponding entries in the \texttt{maddm\_card.dat} file. For more information, see~\cite{Backovic:2015cra}.

\subsection{Indirect detection}\label{sec:ID}

Indirect detection probes the (self-)annihilation of dark matter in locally over-dense regions, like the center of the Galaxy. Stable particles that are the final products of these annihilation processes can propagate to us and act as the messengers of the dark-matter signal. Photons (gamma rays), neutrinos and stable charged particles (cosmic rays), in particular antiprotons and positrons, are commonly considered messenger particles. The dark-matter annihilation cross section and the energy spectra of these messengers need to be computed to confront the signal prediction with data. \textsc{MadDM}\ provides two different modes and a variety of further settings to supply the user with these observables at an appropriate level of precision and speed.

\subsubsection{Running modes and settings}

In the \texttt{fast} mode, the cross-section computation is performed with a fast phase-space integrator using the Simpson method~\cite{Weinzierl:2000wd} (also used for the relic density computations). In this mode, no events are generated. Furthermore, it is restricted to $2\to2$ processes.
It is selected by entering, in the \texttt{launch} interface, the command
\begin{verbatim}
> set fast
\end{verbatim}
In \texttt{precise} mode, the phase-space integration is performed by \textsc{MadEvent}~\cite{Maltoni:2002qb}. Events are generated and arbitrary $2\to n$ processes can be taken into account. It is selected by
\begin{verbatim}
> set precise
\end{verbatim}
The \texttt{precise} mode allows the user to evaluate the cross section either at a fixed dark-matter velocity
\begin{verbatim}
> set sigmav_method = madevent
\end{verbatim}
or taking into account a Maxwell-Boltzmann distribution in velocity through a reshuffling and reweighting of events~\cite{Kleiss:1985gy,Mattelaer:2016gcx}:
\begin{verbatim}
> set sigmav_method = reshuffling
\end{verbatim}
Notice that the latter is the default setting if nothing is specified. The average velocity (in units $c=1$) can be set as follows, \emph{e.g.}~to $10^{-5}$:
\begin{verbatim}
> set vave_indirect 1e-5
\end{verbatim}
which is the default value if nothing is specified. Note that the automatic derivation of constraints from dwarf spheroidal galaxies (see below) requires the velocity to lie between $3\times10^{-6}$ and $1.4\times10^{-4}$, while the typical velocity in the Galactic center is around $10^{-3}$. The number of generated events can be set as follows, for instance:
\begin{verbatim}
> set nevents = 50000
\end{verbatim}
Note that the generation of smooth spectra (in particular towards the tails) might require a large number of events (up to several million). However, for analyses of binned spectra, far fewer events can often be sufficient. For instance, for the computation of Fermi-LAT limits (see below), based on a binned likelihood function (with 24 bins), an event number between 10000 and 50000 is often sufficient to obtain a good estimate of the constraints. However, as the number and energy of messenger particles per annihilation depend strongly on the dark-matter model, these numbers are not universally valid.
For instance, considering heavy dark matter with masses larger than a few TeV, Fermi-LAT only probes the low-energy tail of the photon spectrum, which is sampled by a small fraction of events. In such cases, care has to be taken when estimating the number of required events. In both modes, \texttt{fast} and \texttt{precise}, the user can specify whether to compute just the cross section (\texttt{sigmav}) or in addition the energy spectra at the source (\texttt{flux\_source}) or near the Earth (\texttt{flux\_earth}). The latter is relevant for neutrinos, which oscillate, and for cosmic rays, which are subject to a non-trivial propagation process between the source and the Earth that, in particular, affects the spectra. This setting can be chosen by repeatedly typing the number 3 in the launch interface to alternate between the three options (and \texttt{OFF}), \emph{cf.}~figure~\ref{fig:lauchpromtp} and section~\ref{sec:basic}. Alternatively, it can be set by one of the following commands, respectively:
\begin{verbatim}
> indirect = sigmav
> indirect = flux_source
> indirect = flux_earth
\end{verbatim}
This provides a total of six different running modes. The respective default settings and further options are summarized in table~\ref{tab:options}. We will briefly discuss them in the following for completeness, see~\cite{Ambrogi:2018jqj} for further details.
\begin{table}[h!]
\begin{center} \begin{tabular}{|c|l|l|} \hline & \texttt{fast} mode & \texttt{precise} mode \\ \hline \multirow{8}{*}{\rotatebox[origin=c]{90}{\texttt{indirect\;=\;sigmav}}}& & \\ & Default: & Default:\\ & \sttt{sigmav\_method\;=\;inclusive} & \sttt{sigmav\_method\;=\;reshuffling}\\ & & \\ & & Other options:\\ & & {\sttt{sigmav\_method\;=\;madevent}}\\ & & \\ & & \\ \hline \multirow{10}{*}{\rotatebox[origin=c]{90}{\texttt{indirect\;=\;flux\_source}}}& & \\ & Default: & Default:\\ & {\sttt{sigmav\_method\;=\;inclusive}} & {\sttt{sigmav\_method\;=\;reshuffling}}\\ & {\sttt{indirect\_flux\_source\_method\,=\,PPPC4DMID\_ew}} &{\sttt{indirect\_flux\_source\_method\;=\;pythia8}} \\ & & \\ & Other options: & Other options: \\ & {\sttt{indirect\_flux\_source\_method\,=\,PPPC4DMID}} & \sttt{sigmav\_method\;=\;madevent}\\ & &{\sttt{indirect\_flux\_source\_method\,=\,PPPC4DMID\_ew}} \\ & & {\sttt{indirect\_flux\_source\_method\,=\,PPPC4DMID}} \\ & & \\ \hline \multirow{10}{*}{\rotatebox[origin=c]{90}{\texttt{indirect\;=\;flux\_earth}}}& & \\ & Default: & Default:\\ & {\sttt{sigmav\_method\;=\;inclusive}} & {\sttt{sigmav\_method\;=\;reshuffling}}\\ & {\sttt{indirect\_flux\_earth\_method\,=\,PPPC4DMID\_ep}} &{\sttt{indirect\_flux\_source\_method\;=\;pythia8}} \\ & & {\sttt{indirect\_flux\_earth\_method\;=\;dragon}} \\ & & \\ & & Other options: \\ & & \sttt{sigmav\_method\;=\;madevent}\\ & &{\sttt{indirect\_flux\_earth\_method\,=\,PPPC4DMID\_ep}} \\ & & \\ \hline \end{tabular} \end{center} \caption{Summary of the \textsc{MadDM}\ indirect-detection functionalities upon the execution of the launch command. We display the default settings and further options of all six occurring combinations.} \label{tab:options} \end{table} In \texttt{fast} mode, enabling \texttt{indirect\;=\;flux\_source}, the energy spectra are taken from the \textsc{PPPC4DMID} database~\cite{Cirelli:2010xx} containing pre-computed results for annihilation into pairs of standard model particles only. 
\textsc{MadDM}\ combines the spectra of different channels according to their cross sections. The user may choose whether to include electroweak corrections~\cite{Ciafaloni:2010ti} (default) or not, specified by setting \texttt{indirect\_flux\_source\_method} to \texttt{PPPC4DMID\_ew} or \texttt{PPPC4DMID}, respectively. Note that in \texttt{fast} mode, propagated cosmic-ray spectra are only available for positrons. In \texttt{precise} mode, enabling \texttt{indirect\;=\;flux\_source}, energy spectra are computed using the generated events and \textsc{Pythia}~8~\cite{Sjostrand:2014zea} for showering and hadronization, where electroweak corrections are enabled by default.\footnote{This setting can be changed by modifying \texttt{pythia\_card.dat}.} However, the user can also choose to use the pre-computed spectra from \textsc{PPPC4DMID} instead (\emph{cf.}~table~\ref{tab:options}) by using the commands mentioned above. For \texttt{indirect\;=\;flux\_earth}, the energy spectra of charged particles are propagated by the numerical code \textsc{DRAGON}~\cite{Evoli:2008dv}. The propagation parameters can be set in \texttt{dragon\_card.xml}. Similarly to the case of source spectra, for positrons the propagated spectra can alternatively be taken from the \textsc{PPPC4DMID} database, even in \texttt{precise} mode. This is achieved by typing:
\begin{verbatim}
> indirect_flux_earth_method = PPPC4DMID_ep
\end{verbatim}
Note that the propagated neutrino spectra are always computed after oscillations in vacuum employing the very long baseline approximation (see~\cite{Ambrogi:2018jqj} for details). As explained in section~\ref{sec:install}, both \textsc{Pythia}~8 and \textsc{DRAGON} are automatically installed within the \textsc{MadDM}\ framework either when first running \texttt{indirect\_detection} (asked for in the launch interface) or via the command-line interface whenever the user needs them.
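Regarding the vacuum-oscillation treatment just mentioned: in the very long baseline limit the oscillation phases average out and the flavour-conversion probabilities reduce to $P(\nu_a\to\nu_b)=\sum_i |U_{ai}|^2\,|U_{bi}|^2$. The following self-contained Python sketch (purely illustrative, with made-up real mixing angles rather than fitted PMNS values, and not the implementation used by \textsc{MadDM}) demonstrates this averaging and the conservation of total probability:

```python
import math

def averaged_oscillation(U):
    """Long-baseline-averaged vacuum oscillation probabilities:
    P(nu_a -> nu_b) = sum_i |U_ai|^2 |U_bi|^2  (phases average out)."""
    n = len(U)
    return [[sum(U[a][i] ** 2 * U[b][i] ** 2 for i in range(n))
             for b in range(n)] for a in range(n)]

# Illustrative real two-angle mixing matrix U = R23 * R12 with
# theta13 = 0 (made-up angles, NOT fitted PMNS values)
t12, t23 = 0.58, 0.79
c12, s12 = math.cos(t12), math.sin(t12)
c23, s23 = math.cos(t23), math.sin(t23)
U = [[c12,       s12,      0.0],
     [-s12*c23,  c12*c23,  s23],
     [s12*s23,  -c12*s23,  c23]]

P = averaged_oscillation(U)
# unitarity of U guarantees that each flavour's probabilities sum to one
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12
```

For a real mixing matrix the averaged probability matrix is symmetric, $P(\nu_a\to\nu_b)=P(\nu_b\to\nu_a)$, which is why the three propagated neutrino-flavour spectra differ from the source spectra only through these constant weights.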
We ask the user to cite these additional public codes when these are used within \textsc{MadDM}. \subsubsection{Selecting processes} So far we have assumed that the user employs the general command (\emph{cf.}~section~\ref{sec:basic})
\begin{verbatim}
MadDM > generate indirect_detection
\end{verbatim}
to specify the considered processes. Note that this command generates all processes where two dark matter particles annihilate into two particles, including all possible standard model particles and BSM particles that are even under the `dark' group. Alternatively, the user can specify the final state by typing:
\begin{verbatim}
MadDM > generate indirect_detection u u~
\end{verbatim}
forcing \textsc{MadDM}\ to generate only the diagrams with a pair of $u$-quarks in the final state. In fact, any $2\to n$ dark-matter annihilation process can be computed (this requires running in \texttt{precise} mode). For instance, one may consider internal bremsstrahlung, where an additional photon \texttt{a} is emitted:
\begin{verbatim}
MadDM > generate indirect_detection u u~ a
\end{verbatim}
Note that further \textsc{MG5\_aMC}\ syntax can be employed to specify the process. For instance, to collectively specify a set of particles, multiparticle variables may be defined:
\begin{verbatim}
MadDM > define q = u d s c b t
MadDM > define qbar = u~ d~ s~ c~ b~ t~
\end{verbatim}
Furthermore, the decay of final state particles may be specified (and hence performed by \textsc{MadDM}). For example, the model used above as a reference contains a spin-0 mediator \texttt{y0} that can appear in the final state (if it is lighter than dark matter). The mediator can further decay into quarks. Assuming the above multiparticle definitions, we can specify its appearance in the final state and subsequent decay by
\begin{verbatim}
MadDM > generate indirect_detection y0 y0, y0 > q qbar
\end{verbatim}
Alternatively, decays can be performed by \textsc{Pythia}~8 if the particles' branching ratios are provided.
To this end, an automatic computation of branching ratios within \textsc{MadDM}\ can be performed by setting the corresponding decay width to \texttt{AUTO} in the \texttt{param\_card.dat}~\cite{Alwall:2014bza}. The above process can hence also be computed by typing:
\begin{verbatim}
MadDM > generate indirect_detection y0 y0
\end{verbatim}
while the decay width is set in the launch interface:
\begin{verbatim}
> set wy0 AUTO
\end{verbatim}
(or by the corresponding change in the \texttt{param\_card.dat} by any other means). With these settings, \textsc{MadDM}\ automatically computes all branching ratios of the spin-0 mediator while \textsc{Pythia}~8 performs the respective decays in the narrow width approximation. \subsubsection{Fermi-LAT constraints} Once the photon energy spectra have been computed with one of the methods described above, \textsc{MadDM}\, automatically computes the exclusion limit from the Fermi-LAT gamma-ray data from dwarf spheroidal galaxies~\cite{Fermi-LAT:2016uux}, provided the dark matter velocity is set to an allowed value, \emph{i.e.}~between $3\times10^{-6}$ and $1.4\times10^{-4}$ (in units of $c$). For details on the Fermi-LAT likelihood function implementation, we refer to~\cite{Ambrogi:2018jqj}. The output comprises the annihilation cross section excluded at 95\% CL (confidence level), compared to the predicted annihilation cross section, the Fermi-LAT likelihood, and the $p$-value for the tested model point.
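The ALLOWED/EXCLUDED classification reported in the screen output below amounts to comparing the predicted $\langle \sigma v \rangle$ with the Fermi-LAT 95\% CL upper limit; a schematic sketch of that comparison (function name and the $\xi$ handling are illustrative, not the \textsc{MadDM}\ implementation):

```python
def fermi_status(sigmav_pred, fermi_ul, xi=1.0):
    """Compare a predicted <sigma v>_0 [cm^3/s] with the Fermi-LAT
    95% CL upper limit; a negative limit means no limit is available."""
    if fermi_ul < 0:
        return "NO LIMIT"
    return "EXCLUDED" if xi**2 * sigmav_pred > fermi_ul else "ALLOWED"

# numbers taken from the example screen output shown below
status_zz = fermi_status(2.26e-33, 9.04e-23)   # "ALLOWED"
status_aa = fermi_status(1.69e-32, -1.00e+00)  # "NO LIMIT"
```

In the `Thermal' scenario the rescaling factor $\xi$ is taken from the relic-density calculation, whereas the `All DM' numbers are unrescaled ($\xi = 1$).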
\subsubsection{Output} An exemplary screen output from the indirect detection module (for the same parameters used above) reads: \begin{Verbatim}[fontsize=\scriptsize] ****** Indirect detection [cm^3/s]: INFO: <sigma v> method: madevent INFO: DM particle halo velocity: 2e-05/c INFO: xdxdx_zz Thermal = 1.67e-38 ALLOWED All DM = 2.26e-33 ALLOWED Fermi ul = 9.04e-23 INFO: xdxdx_aa Thermal = 1.24e-37 NO LIMIT All DM = 1.69e-32 NO LIMIT Fermi ul = -1.00e+00 INFO: xdxdx_ttx Thermal = 1.28e-35 ALLOWED All DM = 1.74e-30 ALLOWED Fermi ul = 1.11e-25 INFO: xdxdx_hh Thermal = 6.01e-39 ALLOWED All DM = 8.15e-34 ALLOWED Fermi ul = 2.21e-22 INFO: xdxdx_wpwm Thermal = 1.27e-38 ALLOWED All DM = 1.73e-33 ALLOWED Fermi ul = 1.16e-22 INFO: Skipping zero cross section processes for: xrxr, xcxcx, y0y0 INFO: Total limits calculated with Fermi likelihood: INFO: DM DM > all Thermal = 1.30e-35 ALLOWED All DM = 1.76e-30 ALLOWED Fermi ul = 2.85e-25 INFO: INFO: *** Fluxes at earth [particle/(cm^2 sr)]: INFO: gammas Flux = 1.57e-14 INFO: neutrinos_e Flux = 8.89e-18 INFO: neutrinos_mu Flux = 9.71e-18 INFO: neutrinos_tau Flux = 8.75e-18 \end{Verbatim} For each annihilation channel, the velocity averaged annihilation cross section today $\langle \sigma v \rangle_0$ is displayed in units of $\text{cm}^3 \text{s}^{-1}$. For the `Thermal' scenario, the cross section predictions are rescaled by $\xi^2$, see section~\ref{sec:reldens} for details. Processes with zero cross section are listed below. Each channel is compared to the limit coming from the Fermi-LAT constraints on prompt photons from dwarf spheroidal galaxies. Subsequently, the total cross section is shown, with the limit computed performing the full Fermi-LAT likelihood analysis. The last lines show the values of the total integrated flux for prompt photons and neutrinos. In addition, several output files are produced. 
In the case of single point runs, the relevant output is written into \texttt{my\_process\_dir/output/run\_01/MadDM\_results.txt} (\emph{cf.}~figure~\ref{fig:folder}). The file contains a summary of the computed observables, including the Fermi-LAT likelihood and $p$-value for the considered parameter point. It is conveniently formatted for parsing. The output spectra generated from the PPPC4DMID database can be found in the same directory. Those generated from simulated events (in the \texttt{precise} mode) are stored in the \texttt{my\_process\_dir/Indirect/Events/run\_01/} directory, \emph{cf.}~the folder structure shown on the right in figure~\ref{fig:folder}. In the case of a scan, \texttt{MadDM\_results.txt} is not created. Instead, the file \texttt{my\_process\_dir/output/scan\_run\_01.txt} is written. It contains a list of the computed observables for all parameter points scanned over. \section{Summary and outlook}\label{sec:conl} \textsc{MadDM}\, is a comprehensive numerical tool for performing computations of dark-matter observables. In particular, it supports a detailed interpretation of direct and indirect dark-matter searches by providing \emph{e.g.}~the fully differential nuclear recoil rates (as a function of energy, angle and time) as well as the photon, neutrino and cosmic-ray spectra for arbitrary $2\to n$ annihilation processes at source or near Earth. For the latter, \textsc{MadDM}\ is interfaced to \textsc{Pythia}~8 and \textsc{Dragon}. Being a plug-in of \textsc{MG5\_aMC}, \textsc{MadDM}\ is conveniently installed and run through a user-friendly command-line interface. It provides an interactive and self-explanatory tutorial mode, while the experienced user may prefer running \textsc{MadDM}\ via scripting. The \textsc{MG5\_aMC}\ framework provides further features inherited by \textsc{MadDM}\ such as automated width computation and native support of any particle physics model that can be cast in a UFO format.
\textsc{MadDM}\ is subject to ongoing developments that further enlarge its capabilities. In the next release, the computations will be extended to general loop-induced processes. In particular, we will provide a framework to analyze the gamma-ray line spectrum arising from the annihilation of dark matter into photons, such as $\gamma\gamma$, $\gamma Z$, and $\gamma h$. An automated computation of constraints from the gamma-ray line searches from observations of the Galactic center~\cite{Ackermann:2015lka,Abdallah:2018qtu} will also be supplied. \section*{Acknowledgements} C.A.~is supported by the Innoviris ATTRACT 2018 104 BECAP 2 agreement. J.H.~acknowledges support from the F.R.S.-FNRS, of which he is a postdoctoral researcher. This work has received funding from the European Union's Horizon 2020 research and innovation program as part of the Marie Sk\l{}odowska-Curie Innovative Training Network MCnetITN3 (grant agreement no.~722104). Computational resources have been provided by the supercomputing facilities of the Universit{\'e} catholique de Louvain (CISM/UCL) and the Consortium des {\'E}quipements de Calcul Intensif en F{\'e}d{\'e}ration Wallonie Bruxelles (C{\'E}CI) funded by the Fond de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region. \bibliographystyle{JHEP}
\section{Introduction} Magnetic fields play a significant role in various solar activities. To obtain more accurate magnetic field values, infrared Stokes polarimetry has been attempted in recent decades, given the high magnetic sensitivity of lines in the infrared solar spectrum \citep{2014Matthew}. The Cryogenic Infrared Spectrograph instrument \citep{2010Cao} of the 1.6 m Goode Solar Telescope at Big Bear Solar Observatory and the Cryogenic Near-Infrared Spectro-Polarimeter instrument \citep{2016Woeger} of the 4 m Daniel K. Inouye Solar Telescope at the National Solar Observatory have been designed to measure the solar magnetic field in the 1--5 $\mu$m infrared band. Of the many magnetically sensitive lines in the infrared, the Mg I emission lines at 12.32 and 12.22 $\mu$m (or 811.575 and 818.058 cm$^{-1}$, hereafter Mg I 12 $\mu$m lines) were found to have the highest magnetic sensitivity so far. They were first reported by \cite{1981Murcray} and then identified as transitions between high Rydberg levels of the Mg I atom by \cite{1983Chang&Noyes}. These lines show clear Zeeman splitting when the magnetic field strength is only a few hundred gauss \citep{1983Brault&Noyes}. They have great potential for magnetic field measurements, especially for the detection of weak magnetic fields. Many important studies addressed the early observation and modeling of the Mg I 12 $\mu$m lines. In terms of observation, \cite{1983Brault&Noyes} found that these lines, with extremely narrow emission peaks and wide absorption troughs, show limb brightening, and that they usually appear in quiet regions, sunspot penumbrae, and plages, but are absent in sunspot umbrae. High-resolution spectral observations of the Mg I 12.32 $\mu$m line show velocity oscillations, but no intensity oscillations \citep{1988Deming,1991Deming}.
On the modeling side, \cite{1992Carlsson} were the first to successfully synthesize the line profiles in excellent agreement with the observations by considering nonlocal thermodynamic equilibrium (NLTE). They confirmed that these lines are formed in the photosphere instead of the chromosphere and that the population departure divergence of the high Rydberg levels is the reason for the formation of the emergent line profiles. This was an important contribution to understanding the formation mechanism of the Mg I 12 $\mu$m lines. The observed peak-and-trough limb brightening has also been simulated and reproduced. Since then, quantitative modeling and diagnostic applications of these lines have been possible. \cite{1995Bruls} synthesized the polarization profiles of the Mg I 12 $\mu$m lines and found, through the contribution function, that the formation heights of the emission features in quiet regions and sunspot penumbrae are approximately the same. Recently, \cite{2020Hong} performed NLTE calculations on the Mg I 12.32 $\mu$m line under flare atmosphere models, and the results showed that the change in the line formation height from the upper photosphere to the chromosphere during flare heating can cause the Zeeman splitting width and the Stokes V lobe intensity to decrease. The two Mg I 12 $\mu$m lines have a very high magnetic sensitivity, but it is still unclear what their potential contribution to the diagnosis of solar atmospheric parameters, such as the magnetic field and temperature, might be. There are two main approaches to deriving magnetic fields. One is calculating the magnetic fields directly from the Zeeman splitting, but this cannot be used when the Zeeman components are incompletely separated \citep{2001Jennings}. The other method is inverting the magnetic fields from the Stokes profiles based on radiative transfer theory \citep{1993Hewagama}.
The inversion based on the local thermodynamic equilibrium (LTE) is represented by the Milne-Eddington inversion method, which proved to be quite robust and relatively stable \citep{1987Skumanich}. However, the NLTE-based inversion is a time-consuming process. Furthermore, these inversion procedures with multiple free parameters usually face problems of convergence and uniqueness of the solutions. In this paper, we perform NLTE calculations of the two Mg I 12 $\mu$m lines using quiet-Sun and sunspot penumbra atmospheric models and compare their differences in the magnetic field diagnosis. Based on the result, we also try to find a new calibration method that can infer the magnetic field in a short time without using inversion when the magnetic field is so weak that the Zeeman components are unresolved. The feasibility of measuring the magnetic field by employing a filter-based magnetograph observed at the single-wavelength point of the Mg I 12 $\mu$m lines is also discussed. This work is therefore helpful not only for the future of solar magnetic field telescopes working at mid-infrared wavelengths, but also for the Infrared System for the Accurate Measurement of Solar Magnetic Field (AIMS) under construction in China, which selected the Mg I 12 $\mu$m lines as its main working lines \citep{2016Deng}. The paper is organized as follows. In Section 2 we introduce the methods and models, including the numerical approach and the atomic and atmospheric models, followed by the results in Section 3. Finally, in Section 4 we summarize and discuss our main results, including an outlook on future research. \section{Methods and models} For the Mg I 12 $\mu$m emission lines, we must take optically thick radiative transfer into account. To understand the radiation transfer process of these lines, a comprehensive atomic model must be selected to perform detailed numerical radiative transfer calculations under a given atmospheric model. 
The Rybicki-Hummer (RH) code is a numerical radiative transfer code based on the Multi-level Approximate Lambda Iteration formalism of \cite{1991Rybicki,1992Rybicki}, which was later improved by Uitenbroek \citep{2001Uitenbroek,2015Pereira&Uitenbroek}. Considering the advantages of the RH code in calculating NLTE radiation transfer, we used it to calculate the detailed radiative transfer of the Mg I 12 $\mu$m lines. In radiative transfer calculations, radiative transfer equations determine the absorption and emission state of the radiation in the solar atmosphere, and statistical equilibrium equations give the populations at each energy level of the atom; the two sets of equations are coupled with each other and can be solved self-consistently. We assumed complete frequency redistribution, and the Zeeman effect and the magneto-optic (M-O) effect were considered as well. As a result, we obtained polarization spectra of the Mg I 12 $\mu$m lines in different solar atmosphere models with different magnetic field strengths at disk center by setting $\mu=1.0$ (viewing angles are indicated by $\mu=\cos\theta$, and $\mu=1.0$ corresponds to disk center). The input magnetic field strength is a fixed value and does not change with height. We used the model atom \textsf{Mg I\_66.atom}, which is based on the atomic model of \cite{1992Carlsson} and properly considers all population processes. It is well known that the Mg atom has many energy levels, and the transitions between the energy levels are very complicated. The 12.32 and 12.22 $\mu$m lines are identified as $3s7i^{1,3}I^{e}-3s6h^{1,3}H^{0}$ and $3s7h^{1,3}H^{0}-3s6g^{1,3}G^{e}$ transitions of Mg I, respectively \citep{1983Chang&Noyes}. \cite{1987Chang} deduced that their Land\'e g-factor was unity, which was subsequently verified by \cite{1988Lemoine} in a laboratory measurement.
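The quoted magnetic sensitivity can be checked with a back-of-the-envelope estimate: for a normal Zeeman triplet with $g=1$, the $\sigma$-component shift is $\Delta\lambda_B = e g B \lambda^2/(4\pi m_e c)$, which at 12 $\mu$m exceeds a thermal Doppler width already at a few hundred gauss. A short numerical check (SI units; the temperature value is illustrative):

```python
import math

E_CHARGE, M_E, C = 1.602e-19, 9.109e-31, 2.998e8   # SI constants
K_B, AMU = 1.381e-23, 1.661e-27

def zeeman_split_m(lambda_m, b_gauss, g=1.0):
    """Sigma-component shift: e g B lambda^2 / (4 pi m_e c), in metres."""
    return E_CHARGE * g * (b_gauss * 1e-4) * lambda_m**2 / (4 * math.pi * M_E * C)

def doppler_width_m(lambda_m, temp_k, mass_amu):
    """Thermal Doppler width: (lambda / c) * sqrt(2 k T / m), in metres."""
    return lambda_m / C * math.sqrt(2 * K_B * temp_k / (mass_amu * AMU))

LAM = 12.32e-6                               # Mg I 12.32 um line
split = zeeman_split_m(LAM, 200.0)           # ~0.14 nm at B = 200 G
dopp = doppler_width_m(LAM, 5000.0, 24.3)    # ~0.08 nm for Mg at ~5000 K
```

At 200 G the Zeeman splitting already exceeds the thermal width, consistent with the clear splitting at a few hundred gauss noted in the introduction.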
Because the Mg I 12 $\mu$m lines are formed in NLTE \citep{1992Carlsson}, we need to consider enough energy levels to better model the emission features consistent with the observations. We chose sunspot penumbra, sunspot umbra, and quiet-Sun atmosphere models to study these lines. Figure~\ref{fig1}.a shows the temperature stratifications of the three different solar atmosphere models. They are all standard one-dimensional plane-parallel atmosphere models, which means that all the physical parameters only change with depth in the atmosphere. For different atmospheric models, the formation of the Mg I 12 $\mu$m lines is different. The FALC in Figure~\ref{fig1}.a is a static model of the quiet Sun proposed by \cite{1993Fontenla}. The second model, labeled MACKKL \citep{1986Maltby}, is a sunspot umbra atmosphere model, and the third model, named MALTBY\_PENUMBRA \citep[][hereafter MALTBY]{1969Kjeldseth}, is a sunspot penumbra model. The largest difference between the MALTBY model and the other two models is that it does not possess a chromospheric temperature rise. \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig1.eps} \caption{Different solar atmosphere models and corresponding synthetic Stokes I profiles. Panel (a): temperature against the continuum optical depth at $\lambda$ = 500 nm for the three solar atmosphere models. Panels (b) and (c): synthetic Stokes I profiles of the Mg I 12.32 $\mu$m and 12.22 $\mu$m lines without magnetic fields. The relative intensity profiles are relative to the continuum intensity in the corresponding model.} \label{fig1} \end{figure*} \section{Results} \subsection{Synthetic Stokes profiles} Figure~\ref{fig1}.b and Figure~\ref{fig1}.c show the synthetic Stokes I profiles of the Mg I 12.32 $\mu$m and Mg I 12.22 $\mu$m lines without a magnetic field at disk center for the three solar atmosphere models, respectively.
The results show that the synthetic profiles of the two Mg I 12 $\mu$m lines agree with the observed profiles \citep{1992Carlsson}. The observed emission peaks, the wide absorption troughs, and the observed strength ratios are reproduced quite well. The synthetic profiles in the MACKKL model also show weak emissions, but early observations by \cite{1983Brault&Noyes} showed that the 12 $\mu$m emission lines are normally absent in the umbrae. The reason might be the Saha-Boltzmann temperature sensitivity \citep{1995Bruls}. Normally, the 12 $\mu$m emission lines are hard to observe because the low temperature of the sunspot umbra leads to a low population density in the high Rydberg states, resulting in weak radiation. However, there may be emission if the temperature is high enough to cause a slight difference between the population departures of the upper and lower levels of the Mg I 12 $\mu$m lines. For example, \cite{1990Deming} found that flare heating can excite emission lines in the umbrae. However, no corresponding observations are available so far to verify the simulation results obtained for the umbral atmosphere. The FALC and MALTBY models are therefore included in the following calculations, and the MACKKL model is not considered. The relative intensity of the 12.32 $\mu$m line in the three atmospheric models is also stronger than that of the 12.22 $\mu$m line. Next, we calculated the splitting behavior of the two Mg I 12 $\mu$m lines by adding magnetic fields. Figure~\ref{fig2} (Figure~\ref{fig3}) displays the Stokes profiles for the FALC and MALTBY models with $B_l$ ($B_t$) of 100, 200, and 300 G. The Stokes I, Q, U, and V profiles of the two 12 $\mu$m lines are similar. However, the average polarization signal of the 12.32 $\mu$m line with different magnetic field strengths is three times stronger than that of the 12.22 $\mu$m line in the FALC model, and twice as strong in the MALTBY model.
The feature in the center of the Stokes V profile in the quiet Sun with 300 G resembles an M-O reversal (Figure~\ref{fig2} d). For comparison, we calculated the Stokes V profile without considering the M-O effect for the same conditions, and the results show that this feature still exists. This means that the feature in the center of the Stokes V profile is not caused by the M-O effect. The Stokes Q and U profiles primarily show the emission parts and are more complicated (with many wiggles, or inflections) in the quiet regions than in the penumbrae (Figure~\ref{fig3}). This result can be explained by the more obvious absorption troughs and the wider line width of the Stokes I profiles in the penumbra model. \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig2.eps} \caption{Calculated Stokes I (left panels) and V (right panels) profiles of the two Mg I 12 $\mu$m lines for the quiet Sun (solid, top four panels) and sunspot penumbrae (dashed, bottom four panels) models with longitudinal magnetic fields (i.e., inclination angle equal to $0^{\circ}$, expressed by $B_l$). Panels (a), (b), (e), and (f) and panels (c), (d), (g), and (h) correspond to the Mg I 12.32 $\mu$m and Mg I 12.22 $\mu$m lines, respectively. Different colours correspond to different magnetic field strengths. Red, blue, and green lines correspond to magnetic field strengths of $B_l$ = 100 G, $B_l$ = 200 G, and $B_l$ = 300 G, respectively. The relative intensity profiles are relative to the continuum intensity.} \label{fig2} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig3.eps} \caption{Same as Fig. 2, but for the transverse magnetic fields (expressed by $B_t$), and the azimuth angle equals $\chi = 30^{\circ}$. The left, middle, and right panels show Stokes I, Q, and U profiles, respectively.
Panels (a), (b), (c), (g), (h), and (i) and panels (d), (e), (f), (j), (k), and (l) correspond to the Mg I 12.32 $\mu$m and Mg I 12.22 $\mu$m lines, respectively. The relative intensity profiles are relative to the continuum intensity.} \label{fig3} \end{figure*} Furthermore, the Mg I 12 $\mu$m lines show a different splitting behavior for different magnetic field strengths. The first columns of Figures~\ref{fig2} and~\ref{fig3} show that as the magnetic field strength increases, the Stokes I profiles exhibit a very clear Zeeman splitting, and the line width also gradually increases. In the FALC model, the 12.32 $\mu$m line splits into two $\sigma$ components at $B_l$ = 100 G, while the 12.22 $\mu$m line only splits at $B_l$ = 200 G (Figure~\ref{fig2} a,c). Correspondingly, in the MALTBY model, the 12.32 $\mu$m line shows a split at $B_l$ = 200 G, while the 12.22 $\mu$m line only shows a split at $B_l$ = 300 G (Figure~\ref{fig2} e,g). The two Mg I 12 $\mu$m lines split into two $\sigma$ components and a $\pi$ component at $B_t$ = 300 G in the FALC model (Figure~\ref{fig3} a,d), but the lines do not show a split even at $B_t$ = 300 G in the MALTBY model (Figure~\ref{fig3} g,j). We conclude that a higher magnetic field strength is required in the penumbrae than in quiet regions to see the split due to the greater line width in penumbrae. The larger line width in the penumbrae may be caused by increased pressure broadening, because for the cool model, the Mg I 12 $\mu$m lines are formed deeper and the density there is higher \citep{1995Bruls}. The 12.32 $\mu$m line splits at a lower magnetic field than the 12.22 $\mu$m line. The intensity of the line center and continuum of the 12.32 $\mu$m line is also stronger than that of the 12.22 $\mu$m line. Therefore, although their Land\'e g-factors are the same, the polarization signals of Q/I, U/I, and V/I of 12.32 $\mu$m are greater than those of the 12.22 $\mu$m line. 
\subsection{Response function from Stokes I to the magnetic field, temperature, and velocity} When physical parameters (e.g., magnetic field, temperature, and Doppler velocity) are diagnosed with a given spectral line, we need to know at which height they are mainly probed. The height information can be obtained based on the response function, which indicates how the Stokes profiles respond to small changes in the various physical parameters at different atmospheric heights \citep{1977Landi}. A high response function value means that the emergent profile is more sensitive to changes of the investigated physical parameter at that height. When different spectral lines are used to diagnose the same physical parameter (e.g., the magnetic field), the higher the response function value for the same disturbance, the more suitable this spectral line is for diagnosing the physical parameter. Figures~\ref{fig4} and~\ref{fig5} show the response functions of Stokes I of the two Mg I 12 $\mu$m lines to the magnetic field, temperature, and velocity with $B_l$ and $B_t$. Here the magnetic field strengths of 200 G (Figure~\ref{fig4}) and 1000 G (Figure~\ref{fig5}) correspond to an incomplete and a complete split of the Stokes I profile, respectively. \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig4.eps} \caption{Response functions of Stokes I of the two Mg I 12 $\mu$m lines for the FALC model to magnetic field (top), temperature (middle), and velocity (bottom). Columns 1 and 3 correspond to the Mg I 12.32 $\mu$m line, and columns 2 and 4 to the Mg I 12.22 $\mu$m line. The left two columns and the right two columns correspond to $B_l$ = 200 G and $B_t$ = 200 G, respectively. The white lines are the corresponding Stokes I profiles relative to the continuum intensity.} \label{fig4} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig5.eps} \caption{Same as Fig.
4, but the left two columns and the right two columns correspond to $B_l$ = 1000 G and $B_t$ = 1000 G, respectively.} \label{fig5} \end{figure*} The response function is a two-dimensional function of wavelength and height. The corresponding rows in Figures~\ref{fig4} and~\ref{fig5} show that the heights corresponding to the maximum value of the response function of Stokes I of the two Mg I 12 $\mu$m lines to perturbations in the magnetic field, temperature, and velocity at line center are 450, 490, and 450 km, respectively. However, the height corresponding to the maximum value of the response function of the Mg I 12.22 $\mu$m line to perturbations in the velocity with $B_t$ = 200 G is only 400 km. This means that for the Mg I 12 $\mu$m lines, both the magnetic field and the velocity are mainly sensitive to the atmosphere around $\sim$450 km. Only for $B_t$ = 200 G does the velocity signal of the 12.22 $\mu$m line come from a lower layer than that of the 12.32 $\mu$m line. Of the three considered parameters, only the temperature affects the continuum. In the response function to the temperature, the continuum is sensitive to the atmosphere at a height of 203 km. The height corresponding to the temperature at line center is about 40 km higher than that corresponding to the magnetic field and the velocity. Thus, when Stokes I, Q, U, and V profiles are employed to derive different physical parameters, the temperature signature comes from a greater height than the magnetic field and the velocity. With regard to the line splitting in the response function of Stokes I to perturbations in the magnetic field, the first rows of Figures~\ref{fig4} and~\ref{fig5} show that as the magnetic field strength increases, the Mg I 12 $\mu$m lines go from incompletely to fully split due to the Zeeman effect. As a result, the distance between the wavelengths corresponding to the maximum value of the response function also gradually increases as the Zeeman splitting increases.
Therefore, the wavelength position corresponding to the greatest magnetic sensitivity separates with the line splitting. Comparing the 12.32 $\mu$m columns (1 and 3) with the 12.22 $\mu$m columns (2 and 4) in Figures~\ref{fig4} and~\ref{fig5} shows that the response function of the 12.32 $\mu$m line to the magnetic field, temperature, and velocity is greater than that of the 12.22 $\mu$m line for the same disturbance. This means that the 12.32 $\mu$m line is more sensitive to these physical parameters than the 12.22 $\mu$m line. In addition, the first row of Figure~\ref{fig5} shows that the response to the magnetic field is obviously different for the two $\sigma$ components in that they are oppositely antisymmetrical about the wavelength centers of the two components. In the third row of Figure~\ref{fig5}, the response to the velocity is the same in character for all three components as they all shift in the same direction by the same amount. Moreover, in the right two columns in Figure~\ref{fig5}, the response function of the Stokes I central $\pi$ component to the magnetic field is near zero because the $\pi$ component does not shift or broaden in the Zeeman effect. However, the response function of the Stokes I central $\pi$ component to the temperature and the velocity appears stronger because the central $\pi$ component is taller, and thus has steeper wings, so that $\Delta I$ is larger. The results of the response function show that the 12.32 $\mu$m line is more sensitive to the magnetic field, temperature, and velocity than the 12.22 $\mu$m line. Therefore, the Mg I 12.32 $\mu$m line is a better choice for measuring magnetic fields. \subsection{Multiwavelength calibration curve between Stokes profiles and magnetic field} When the Zeeman triplet components are incompletely separated, the magnetic field strength cannot be directly inferred from Stokes I using the Zeeman-splitting formula. Polarization spectra are required instead.
Based on the above analysis, in this section we adopt the wavelength-integrated method to infer $B_l$ and $B_t$ \citep{2008Lites}. Specifically, we integrate the Stokes profiles over wavelength as follows to obtain curves of the integrated signal as a function of the magnetic field (i.e., the S-B curves). The different calculation methods (S1, S2, S3, and S4) are defined as \begin{equation}\label{eq1} S_{1}=\frac{\vert \int_{\lambda_b}^{\lambda_0}V(\lambda)\mathrm{d}\lambda \vert + \vert \int_{\lambda_0}^{\lambda_r}V(\lambda)\mathrm{d}\lambda \vert}{I_c\int_{\lambda_b}^{\lambda_r}\mathrm{d}\lambda}, \end{equation} \begin{equation}\label{eq2} S_{2}=\frac{\int_{\lambda_b}^{\lambda_r} \vert V(\lambda) \vert \mathrm{d}\lambda}{I_c\int_{\lambda_b}^{\lambda_r}\mathrm{d}\lambda}, \end{equation} \begin{equation}\label{eq3} S_{3}=\int_{\lambda_b}^{\lambda_r} \left[\left(Q/I \right)^2 + \left(U/I \right)^2 \right]^{1/4} \mathrm{d}\lambda, \end{equation} \begin{equation}\label{eq4} S_{4}=\frac{\int_{\lambda_b}^{\lambda_r} \left[ Q^2(\lambda) + U^2(\lambda) \right]^{1/2} \mathrm{d}\lambda}{I_c\int_{\lambda_b}^{\lambda_r}\mathrm{d}\lambda}, \end{equation} where $\lambda_b$ and $\lambda_r$ represent the blue and red limits of the integration on the line ($\lambda_b=12315.233$ nm and $\lambda_r=12324.862$ nm for 12.32 $\mu$m, $\lambda_b=12215.557$ nm and $\lambda_r=12224.946$ nm for 12.22 $\mu$m), $\lambda_0$ represents the zero-crossing wavelength of the Stokes V profiles ($\lambda_0=12319.246$ nm for 12.32 $\mu$m, $\lambda_0=12220.251$ nm for 12.22 $\mu$m), and $I_c$ corresponds to the continuum intensity in each model. S1 and S2 describe the wavelength-integrated Stokes V (circular polarization) and are related to $B_l$. Their difference is that S1 divides the Stokes V profile into two parts and integrates separately, while S2 directly integrates the absolute value of the entire Stokes V profile.
S3 is related to the wavelength-integrated Stokes Q and U and is derived from the weak-field approximation \citep{1967Rayrole}. The integrand function of S3, multiplied by a simple linear coefficient, is generally used to obtain the transverse magnetic field in the calibration of filter-based magnetographs \citep{2004Su}. S4 represents the degree of linear polarization and is also related to $B_t$. It is worth mentioning that S1 and S4 are adopted for the magnetic field calibration of the quiet Sun by the Solar Optical Telescope/Spectro-Polarimeter on board Hinode \citep{2008Lites}. Figures~\ref{fig6} and~\ref{fig7} display the calibration curves of S-$B_l$ and S-$B_t$, respectively, for the two Mg I 12 $\mu$m lines. When the magnetic field strength was weaker than 400 G, we chose 20 G as the interval in the calculation. The interval was set to 200 G for a magnetic field strength greater than 400 G. The calibration curves in the FALC model and the MALTBY model trend differently because the line width and model temperature are different. Furthermore, all the S-B curves are nonlinear and have only a limited linear range for efficient calibration of $B_l$ and $B_t$. We evaluated the four different methods from the overall trend of the curves, and the correlation coefficient (CC) was used as a guide to determine the approximate range for the linear calibration of $B_l$ and $B_t$. The vertical red lines in the figures roughly mark the upper limits of the linear range, the points within which have a CC value greater than 0.972. \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig6.eps} \caption{S-B curves of the Mg I 12.32 $\mu$m (top) and Mg I 12.22 $\mu$m (bottom) lines using different calculation methods in the FALC (solid green line, data points are denoted by triangles) and MALTBY (dashed blue line, data points are denoted by diamonds) models for $B_l$.
S1 (left) and S2 (right) represent different wavelength-integrated methods (see Equations (1) and (2)). The vertical lines mark the positions beyond which the curves deviate from linearity in the FALC model (solid red) and the MALTBY model (dashed red).} \label{fig6} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig7.eps} \caption{Same as Fig.~\ref{fig6}, but for $B_t$. S3 (left) and S4 (right) represent different wavelength-integrated methods (see Equations (3) and (4)).} \label{fig7} \end{figure*} Figure~\ref{fig6} shows that the two S-$B_l$ calibration curves (the S1 curve in the left panel and the S2 curve in the right panel) in the FALC model are almost the same for the two 12 $\mu$m lines. The S-$B_l$ curves in the MALTBY model behave differently after reaching saturation: S1 gradually decreases, while S2 remains stable. One main reason is that S1 takes into account the effect of the wide absorption trough in the Stokes V profile. As the magnetic field strength increases, the wide absorption trough becomes deeper, so that direct integration causes the curve to decrease (cf. Appendix~\ref{fig9}). For the 12.32 $\mu$m (12.22 $\mu$m) line, both S1 and S2 have linear ranges of about 0--400 G (0--400 G) in the MALTBY model and about 0--600 G (0--1200 G) in the FALC model. S-$B_t$ has a broader linear range than S-$B_l$, as shown in Figure~\ref{fig7}. The calibration curves of both S3-$B_t$ and S4-$B_t$ in the FALC model are not saturated even at $B_t$ = 3000 G, showing good linearity within 0--3000 G. In the MALTBY model, S3 and S4 have linear ranges of about 0--1200 G (0--1400 G) and about 0--1400 G (0--1200 G), respectively, for the 12.32 $\mu$m (12.22 $\mu$m) line. Beyond the critical $B_t$ strengths indicated by the vertical lines, the dashed blue curves gradually become saturated. Neither model shows a significant discrepancy between S3 and S4 for the two 12 $\mu$m lines.
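The CC-based determination of the linear range described above can be sketched as follows. This is an illustrative implementation of ours, using the 0.972 threshold quoted in the text as the default; the exact procedure used to place the vertical lines in the figures may differ:

```python
import numpy as np

def linear_range_limit(B, S, cc_min=0.972):
    """Return the largest field strength B[k] such that the points
    (B[0..k], S[0..k]) of an S-B curve still have a Pearson correlation
    coefficient >= cc_min, or None if no prefix qualifies."""
    B = np.asarray(B, float)
    S = np.asarray(S, float)
    best = None
    for k in range(2, len(B)):          # need at least 3 points for a CC
        cc = np.corrcoef(B[:k + 1], S[:k + 1])[0, 1]
        if cc >= cc_min:
            best = B[k]
    return best
```

Applied to a perfectly linear S-B curve this returns the full field range, while a curve that saturates (like the dashed MALTBY curves) yields an upper limit below the maximum field strength.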
These results show that the wavelength-integrated method is a very useful tool for calibrating $B_l$ and $B_t$ when the magnetic field is not strong enough for the Zeeman splitting to be clearly visible (weaker than 300 G for $B_l$ and 500 G for $B_t$, as shown in Figures~\ref{fig2} and~\ref{fig3}). The advantage of this method is that the signal-to-noise ratio can be improved by integrating over the whole spectral line. Moreover, linear calibration is very fast compared with the Stokes inversion process. The disadvantage is that it can only diagnose a limited magnetic field range and suffers from saturation in strong magnetic fields. In the strong-field case, we can use the Zeeman-splitting formula and the ratio of the central $\pi$ to the $\sigma$ components to derive $B_l$ and $B_t$ (or the magnetic field strength and inclination angle) \citep{1990Deming}. The four calculation methods considered are all effective in calibrating $B_l$ and $B_t$. Therefore we can combine the S-B curves with the Zeeman triplet splitting to carry out a fast magnetic field calibration for both weak and strong magnetic field regions based on AIMS observations in the future. The Stokes inversion method considering the NLTE formation process of the Mg I 12 $\mu$m lines is still needed to derive the stratification of atmospheric parameters, that is, the height distribution of the temperature, magnetic field, velocity, and so on. \subsection{Single-wavelength calibration curve between Stokes profiles and magnetic fields} A filter-based magnetograph is generally employed at visible and near-infrared wavelengths because it can obtain a large field-of-view magnetogram with high temporal resolution \citep{Schou2012,Deng2019,Tsuneta2008,Solanki2019,2010Cao}. Traditional filter-based magnetographs take routine Stokes observations at just one wavelength point of a selected magnetically sensitive line \citep{Ai1987,Hagyard1982}.
In order to obtain the magnetic field, linear calibration under the weak-field approximation is generally adopted. In this section, we examine whether the single-wavelength magnetic field calibration method is suitable for the Mg I 12.32 $\mu$m line. $B_{l}$ and $B_{t}$ were reconstructed with the equations $B_{l}=C_{l}(V/I)$ and $B_{t}=C_{t}[(Q/I)^2+(U/I)^2]^{1/4}$, respectively, where $C_l$ and $C_t$ are the corresponding linear calibration coefficients. Figures~\ref{fig8}a and~\ref{fig8}b give the calibration curves of $B_{l}$ versus Stokes V/I and of $B_{t}$ versus $[(Q/I)^2+(U/I)^2]^{1/4}$, respectively, for the FALC model with magnetic field strengths from 0 to 400 G. The calibration curves of V/I and $B_{l}$ strongly depend on the selected wavelength position: the farther the wavelength position is from the line center, the greater the saturation value of the magnetic field (Figure~\ref{fig8}a). These curves are approximately Gaussian in shape; they reach saturation and then decrease. When the offset from the line center is 0.058 nm, 0.092 nm, 0.112 nm, 0.134 nm, 0.252 nm, and 0.294 nm, the corresponding magnetic saturation value for $B_{l}$ is about 100 G, 150 G, 180 G, 200 G, 360 G, and 400 G, respectively. Because of the poor linearity of the calibration curves, the range of the magnetic field that can be diagnosed by linear calibration is very limited. Similar conclusions are found for $B_{t}$ (Figure~\ref{fig8}b). The calibration curves of $[(Q/I)^2+(U/I)^2]^{1/4}$ and $B_{t}$ follow a quadratic trend and show better linearity than those for $B_{l}$ if the offset $\Delta \lambda$ from the line center is less than 0.112 nm. However, the calibration curves become very complicated when the selected wavelength position $\Delta \lambda$ lies farther than 0.112 nm from the line center. It is therefore difficult to find a fixed wavelength position at which $B_{l}$ and $B_{t}$ can be derived with the linear calibration method, even in the range of 0 to 400 G.
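For concreteness, the linear calibration coefficient $C_l$ (or $C_t$) can be obtained by a least-squares fit through the origin over the weak-field range. The sketch below is ours (the function name and the synthetic data in the usage note are assumptions, not values from the paper):

```python
import numpy as np

def fit_linear_coefficient(signal, B):
    """Least-squares fit through the origin for B = C * signal,
    giving the linear calibration coefficient C_l (for signal = V/I)
    or C_t (for signal = [(Q/I)^2 + (U/I)^2]^(1/4)).
    Closed form: C = sum(signal * B) / sum(signal**2)."""
    signal = np.asarray(signal, float)
    B = np.asarray(B, float)
    return float(np.dot(signal, B) / np.dot(signal, signal))
```

Fitting only within the linear (weak-field) range matters here: including saturated points would bias $C_l$ and $C_t$ upward, which is exactly why the single-wavelength method fails beyond the saturation values quoted above.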
\begin{figure*}[!htbp] \centering \includegraphics[width=18cm, angle=0]{fig8.eps} \caption{Single-wavelength calibration curves for different wavelengths. Panel (a): Calibration relation between $B_l$ and Stokes V/I for the Mg I 12.32 $\mu$m line in the FALC model with a magnetic field strength of 0--400 G. Panel (b): Same as panel (a), but for the calibration relation between $B_t$ and Stokes (Q/I, U/I). Different colours denote different wavelength positions relative to the line center ($\lambda_0$ = 12319.246 nm). The selected wavelength positions are the same for $B_l$ and $B_t$.} \label{fig8} \end{figure*} This result shows that the single-wavelength calibration method is not suitable for diagnosing magnetic fields with the Mg I 12.32 $\mu$m line because the effective linear range of the magnetic field is particularly small. As a comparison, the calibration curve of the Fe I 532.4 nm line in the visible does not saturate until 1000 G when linear calibration under the weak-field assumption is used \citep{1982Ai,2004Su}. This can be explained by the fact that the Mg I 12.32 $\mu$m line usually shows triple Zeeman splitting already at two to three hundred Gauss, while some visible spectral lines require fields of about a kilogauss. This means that the weak-field range of the Mg I 12.32 $\mu$m line is much smaller than that of visible lines with lower magnetic sensitivity. \section{Conclusions and discussion} We compared the differences in magnetic field diagnostics between the Mg I 12.32 $\mu$m and 12.22 $\mu$m lines based on the calculated Stokes profiles and response functions, and derived their calibration curves using wavelength-integrated methods. Moreover, we analyzed the feasibility of the single-wavelength calibration method for the Mg I 12.32 $\mu$m line. The main conclusions are listed below.
\begin{enumerate} \item[(1)] The synthetic Stokes I profiles of the two Mg I 12 $\mu$m lines from the RH code without magnetic fields are consistent with previous observations and simulations. The radiation intensity of the 12.32 $\mu$m line is stronger than that of the 12.22 $\mu$m line, and the Saha-Boltzmann temperature sensitivity explains why there may be emission in sunspot umbrae. \item[(2)] Although their Land\'e g-factors are the same, it is easier to obtain data with a high signal-to-noise ratio using the 12.32 $\mu$m line because its polarization signal is stronger than that of the 12.22 $\mu$m line for the same field. The Stokes Q and U profiles are more complex in quiet regions than in penumbrae because of the wider line width and the more pronounced absorption troughs in penumbrae. \item[(3)] According to the analysis of the response function, the 12.32 $\mu$m line is more suitable for diagnosing magnetic fields than the 12.22 $\mu$m line because its response to the magnetic field is higher. For the Mg I 12 $\mu$m lines, the derived temperature signal mainly comes from a height of about 490 km above the photosphere ($\tau_{500}$ = 1), while the magnetic field and velocity sensitivities correspond to a height of about 450 km. \item[(4)] The calibration curves relating the magnetic field and the wavelength-integrated Stokes profiles can be used for fast magnetic field calibration. In estimating $B_l$, the linear range is about 0--400 G in the MALTBY model for the two Mg I 12 $\mu$m lines, while it is about 0--600 G (0--1200 G) for the 12.32 $\mu$m (12.22 $\mu$m) line in the FALC model. In estimating $B_t$, the linear range is about 0--1200 G in the MALTBY model, while no significant saturation appears up to $\sim 3$ kG in the FALC model for the two 12 $\mu$m lines.
\item[(5)] Because of the high magnetic sensitivity of the Mg I 12.32 $\mu$m line, the effective linear range of the magnetic field calibration using a single-wavelength method is very limited. A traditional filter-based magnetograph is therefore not suitable for measuring the magnetic field at a single wavelength point of this line. \end{enumerate} All the analyses we presented are based on the simulation results of the forward model, and the input is the intrinsic magnetic field strength, which does not change with height. We only considered the magnetized component, that is, we assumed a filling factor of 1 (f=1). This is an ideal situation that is difficult to achieve even for large solar telescopes. Actual magnetic field observation is the inverse process, from the Stokes parameters to the vector magnetic field. In this case, the filling factor must be considered. However, we cannot obtain information about the filling factor from the wavelength-integrated Stokes profiles. Therefore we give the calibration curves for filling factors of 0.2, 0.4, 0.6, and 0.8 for the different calculation methods (Equations (1)-(4)) in Appendix~\ref{fig10}. The wavelength-integrated method can only derive an approximate magnetic field strength. Stokes inversion is still needed when an accurate value of the magnetic field is required. This needs to be kept in mind when our method is used for magnetic field calibration, especially for weak fields.\\ In addition, we only used one-dimensional solar atmospheric models to investigate the radiative transfer process of the Mg I 12 $\mu$m lines. In recent years, two-dimensional and three-dimensional magnetohydrodynamic models such as those from the Bifrost and MURaM codes have been used, which can reproduce quiet-Sun and sunspot features quite well \citep{Gudiksen2011,2018Nbrega,2018Moreno,Rempel2014,Cheung2008,Chen2017,Mart2017}.
With these models and the RH code, we can obtain the polarization profiles of the Mg I 12 $\mu$m lines for different solar features to investigate their spatial distribution and temporal evolution, which will help us better understand future observations from AIMS. We will therefore try to carry out this work in future papers. \begin{acknowledgements} We are very grateful to the referee for the valuable comments that helped improve the manuscript. X.L. would like to thank Sihui Zhong and Xianyu Wang for helpful discussions. This work was supported by the National Natural Science Foundation of China under grants 11427901, 11873062, 11803002, 11973056, 12003051, 12073040, 11773038, U1731241, 11703042, the Chinese Academy of Sciences under grants XDA15320102, XDA15052200, XDA15320302, and grant 1916321TS00103201. \end{acknowledgements}
\section{Introduction} \label{sec:intro} Research in autonomous driving continues to advance in terms of improving the perception capabilities of self-driving cars for greater safety. This includes detecting both static and dynamic obstacles such as pedestrians~\cite{dollar2011pedestrian}, tracking and predicting the trajectories of other vehicles~\cite{chandra2019traphic,chandra2020forecasting, chandra2019robusttp, chandra2020roadtrack}, and scene segmentation~\cite{fu2019dual,zhao2017pyramid}. Immense progress along these lines has led to the deployment of level $2$ and almost level $3$ autonomous vehicles (AVs) in urban traffic environments~\cite{cusumano2020self}. However, these advances in perception technology have been primarily designed to work well in safe and clear weather conditions. Driving in adverse weather and lighting conditions such as snow, rain, or fog is challenging not just for autonomous vehicles but even for humans. These conditions degrade the accuracy of perception techniques, including road segmentation~\cite{sakaridis2018model,sakaridis2019guided}. Consequently, AVs are unable to distinguish drivable regions of the road from non-drivable regions (which may be affected by snow, rain, or fog), thereby increasing the likelihood of road accidents~\cite{mueller2012driving}. In this paper, we address the problem of road segmentation in adverse weather conditions. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/cover_rohan.png} \caption{We highlight the results generated by SS-SFDA on the night \cite{sakaridis2019guided} and fog \cite{sakaridis2018model} benchmarks, compared to the baseline source model pre-trained on clear-weather CityScapes. The purple regions (right) denote the segmented road pixels. The overall accuracy of our self-supervised algorithm in terms of mIoU is $88-96\%$ of that of supervised methods.
\label{fig: coverpic} \vspace{-12pt} \end{figure} The road segmentation problem~\cite{fan2020sne,sun2019reverse} corresponds to identifying the pixels in an RGB image or video that belong to the `road' class. While general models designed for semantic segmentation in computer vision can be directly used for road segmentation, they suffer from an inability to capture semantic relationships between different objects due to the lack of unique labels for each class. The use of self-attention techniques~\cite{fu2019dual,zhang2019self} can mitigate this issue by capturing long-range dependencies. However, one major challenge for road segmentation in adverse weather is the lack of ground-truth annotations for road pixels. A common approach in deep learning for handling the lack of training data is domain adaptation (DA)~\cite{hoffman2018cycada,vu2019advent}. However, DA-based methods assume access to the source dataset (the clear weather dataset in our context) at all times, which can be prohibitive in terms of storage, memory, data corruption, and privacy concerns. Recently, \cite{kim2020domain, nelakurthi2018source} have proposed source-free domain adaptation (SFDA), in which deep neural networks (DNNs) do not require access to the source dataset during the adaptation stage; instead, DNNs are pre-trained on a source dataset (clear weather dataset) and the pre-trained model is directly used to adapt to the unlabeled target domain (adverse weather dataset). Current methods for SFDA are used for image classification~\cite{kundu2020universal,hou2020source,kurmi2021domain,yeh2021sofa} and may not work well for semantic segmentation due to the inherent differences between the classification and segmentation tasks. Moreover, many current SFDA methods use GANs to produce a ``copy'' of the original source domain distribution.
In addition to being computationally intensive due to the difficulty of training GANs, image generation for segmentation requires GANs to capture contextual information and semantic relationships between multiple objects and the background, which can be complicated in road scenes. As a result, prior SFDA techniques have not been used for road segmentation. \subsection{Main Contributions} We present a new approach for road segmentation in adverse weather conditions. Our approach is based on a novel algorithm for SFDA using self-supervised learning. We initialize our model with an auto-encoder baseline network using self-attention to generate a pre-trained model on the clear weather source dataset. Using self-attention improves the overall model by capturing long-range dependencies within the image (Section~\ref{sec:sup}). Our novel contributions include: \begin{enumerate}[nosep] \item We present a novel two-step self-supervised SFDA approach called SS-SFDA. In the first step, our method uses entropy minimization to enrich the noisy pseudo-labels generated by the pre-trained auto-encoder. In the second step, we use a novel self-training method that generates pseudo labels in an \textit{online} manner, as opposed to the iterative self-training used by prior methods~(Section \ref{sec:unsup}). We use curriculum learning to implement these two steps. This results in the following benefits compared to prior GAN-based approaches: \begin{itemize}[noitemsep] \item SS-SFDA directly exploits the pre-trained model and trains via curriculum learning to progressively bridge the domain gap between the pre-trained source domain and the target domain, achieving faster training times. \item Our online self-training scheme overcomes the saturation issues of iterative self-training. \end{itemize} \item For heterogeneous adverse weather datasets, we propose a method that extends SS-SFDA~by leveraging a few labeled images from the target domain to improve accuracy using model distillation (Section \ref{sec:fewimage}).
\end{enumerate} We have evaluated our approach on $6$ datasets corresponding to real and synthetic adverse weather conditions. Overall, our mIoU score is $88-96\%$ of that of prior supervised methods. We also improve the training time over prior SFDA approaches by $18-180 \times$. Finally, our improvement in terms of mIoU over the best SFDA approach is $10.26$\% on real adverse weather data. \section{Related Work} We discuss recent work related to road segmentation, domain adaptation and source-free domain adaptation, and self-supervised learning. \subsection{Road Segmentation} Research in deep learning for semantic segmentation~\cite{long2015fully, yu2017dilated, chen2017deeplab,chen2017rethinking, zhao2017pyramid, fu2019dual, takikawa2019gated} has paved the way for segmentation in urban traffic scenes like CityScapes~\cite{cordts2016cityscapes}. These methods have been extended for supervised road segmentation~\cite{wang2017embedding,zohourian2018superpixel,fan2020sne,sun2019reverse}. Our approach based on self-supervised learning is complementary to these methods. \subsection{Domain Adaptation and Source Free Domain Adaptation} \label{sec:relatedwork_da} Traditional domain adaptation \cite{hoffman2016fcns, hoffman2018cycada, sankaranarayanan2017unsupervised, vu2019dada, tsai2018learning, chen2017no} methods have achieved remarkable success in adapting models from one domain to another for clear weather conditions. However, these methods need access to the source data. Many domain-specific solutions have been proposed for adverse weather conditions, including specific solutions for driving in rain, fog, etc.~\cite{porav2019can,mueller2012driving, sakaridis2018model,dai2020curriculum,sakaridis2018semantic, pizzati2020domain}. In contrast, we propose a generic method that neither relies on specific details from each domain, nor requires access to source data during the adaptation stage.
\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Figures/overview_rohan.png} \caption{\textbf{Our Approach:} In stage $1$, our model is pre-trained on a clear weather source dataset. In stage $2$, our model is initialized with the pre-trained model from stage $1$ and trained using our self-supervised algorithm, SS-SFDA, on the unlabeled adverse weather dataset. For heterogeneous weather datasets, we perform additional refinement steps based on model distillation (stage $3$).} \label{fig: overview} \vspace{-15pt} \end{figure*} In source-image free domain adaptation (SFDA), a deep neural network (DNN) pre-trained on a source dataset is required to directly make predictions on the target domain dataset in an unsupervised manner. This approach has been primarily used for image classification tasks. In SFDA, generative approaches~\cite{kundu2020universal,hou2020source,kurmi2021domain,yeh2021sofa,li2020model} are used to either emulate the source data by using the feature representations of the pre-trained model or create a negative source dataset during the pre-training stage. Non-generative approaches~\cite{kim2020domain,yang2020unsupervised,liang2020we,ishii2021source} rely on computing adaptive class specific prototypes, and progressively learn on the target images. While generative approaches work well for classification tasks, they have not been effective for urban scene segmentation. This is because segmentation, being a pixel level task, requires the network to encode context, inter-class semantic relations, structure, and intricate boundaries~\cite{ding2018context,zhang2018context,zhou2019context}. In addition, training a generator for a complex segmentation task can lead to memory and computational overheads, and thereby adds to the difficulty of training GANs. 
While non-generative classification methods are easier to adapt for segmentation, the computation of prototypes results in similar issues due to the inherent differences between classification and segmentation tasks. Bateson et al.~\cite{bateson2020source} explored SFDA in the context of medical segmentation and minimized entropy through the incorporation of class priors. However, preservation of class priors does not extend well to urban road scenes because the number of road pixels in each image in the dataset can vary quite drastically. In contrast, we present a new method for SFDA using self-supervised learning that overcomes the training and convergence issues that frequently affect generative approaches. Our method is designed to be complementary to GAN-based SFDA. \subsubsection{Self-Supervised Learning} Self-supervised learning has been used in semi-supervised learning~\cite{li2019learning,cascante2020curriculum} and domain adaptation~\cite{shin2020two}. Most self-supervised learning methods are centered around the ideas of pseudo labeling~\cite{lee2013pseudo,choi2019pseudo,morerio2020generative,zou2018unsupervised,kothandaraman2020bomuda}, entropy minimization~\cite{vu2019advent,grandvalet2005semi}, and curriculum learning~\cite{zhang2017curriculum}. Domain adaptation methods have access to source domain images. In contrast, we propose a novel self-training routine for SFDA in a completely unsupervised problem setting, where we have access only to a pre-trained model and target domain images. \section{Our Approach} In this section, we present our approach for source-image free domain adaptive (SFDA) road segmentation based on self-supervised learning.
Our approach consists of three main components: \begin{enumerate}[nosep] \item \textbf{Pre-training using the self-attention auto-encoder:} During this stage, we train the self-attention auto-encoder architecture on a clear weather dataset. This generates a model that encapsulates knowledge about road pixels. \item \textbf{SS-SFDA: A Self-Supervised learning algorithm for SFDA:} During this stage, we initialize the model using the pre-trained model from the previous step and the target domain images. We use a combination of curriculum learning and entropy minimization to bridge the domain gap between the pseudo-labels and the target domain images. We first sort the target domain images in increasing order of entropy and create mini-batches of the dataset. We then execute the following steps on each mini-batch to progressively self-train the model: \begin{itemize}[nosep] \item Optimize the model with an entropy minimization constraint to bridge the domain gap. \item Self-train the model by generating enriched pseudo-labels in an online manner. \end{itemize} \item \textbf{Few-Image Regularization:} For heterogeneous weather datasets, we use a very small number of labeled images ($5-10$) from the target domain to boost the performance of SS-SFDA~via model distillation (Section \ref{sec:fewimage}). \end{enumerate} \subsection{Pre-Training Baselines Using Self-Attention} \label{sec:sup} The first step in SFDA is to pre-train a DNN on the source dataset for the task of road segmentation. In our case, the source dataset corresponds to traffic videos with clear weather conditions. While networks developed for semantic segmentation \cite{takikawa2019gated,zhao2017pyramid,chen2017deeplab,chen2017rethinking} can be directly used for road segmentation, there is a loss of context, i.e., the model is unable to capture relationships between various semantic classes in the images, like cars and roads, pedestrians and roads, sky and roads, etc.
The loss of such context can lead to local ambiguities in classifying pixels~\cite{ding2018context,zhang2018context,zhou2019context}. Self-attention benefits from its capability to capture long-range dependencies between various regions of the image. Thus, using self-attention for road segmentation allows neural networks to alleviate the degradation in performance due to loss of context. We use a simple autoencoder self-attention architecture that can be combined with any ``off-the-shelf'' segmentation network. We begin by taking an input RGB image $\mathcal{I}$ $\in \mathbb{R}^{w\times h \times 3}$, which is passed through an encoder $E$ to generate feature maps $F_{en}$ ($F_{en}=E(I)$). Next, we apply self-attention $SA$ \cite{zhang2019self} to these feature maps to obtain \textit{attention} maps $F_{sa}$ (with the same dimensions as $F_{en}$), which encapsulate the semantic relationships between various parts of the image. The attention maps $F_{sa}$ are used to learn the final predictions $\mathcal{P}_{out}$ $\in \mathbb{R}^{w'\times h' \times 1}$, corresponding to the probability that each pixel is classified as `road'. In the supervised setting, where ground-truth labels $\mathcal{Y}$ $\in \mathbb{Z}^{w'\times h' \times 1}$ are available, the network is optimized with a binary cross-entropy loss function, \begin{equation} \mc{L}_{\textrm{CE}} = - \sum_{h,w} \left[ \mc{Y}\log(P_{out}) + (1-\mc{Y})\log(1-P_{out}) \right]. \label{eq: cross_entropy} \end{equation} \noindent We use this pre-trained model for SFDA on the adverse weather datasets~\cite{sakaridis2018model,sakaridis2019guided,yu2018bdd100k,halder2019physics,tung2017raincouver}. \subsection{SS-SFDA} \label{sec:unsup} In this section, we describe our two-step self-supervised learning algorithm for unsupervised source-image free domain adaptive road segmentation in adverse weather conditions. We use the pre-trained model from the previous step.
The pseudo-labels generated by the pre-trained source model are noisy, reflecting the domain gap between the source domain and the target domain images. Thus, directly self-training on these pseudo labels can hamper the performance of the model. To counter this, we propose an entropy minimization step (Section \ref{sec:ent_min}) that encourages the network to generate more accurate pseudo labels. In addition, to bridge the domain gap between the pre-trained model and the target domain, we use curriculum learning~\cite{bengio2009curriculum, hacohen2019power}, in which the DNN trains on samples progressively in increasing order of the entropy of its predictions. Given a probability map $P$ denoting the probability that pixels are classified as road pixels, the entropy is computed as $-\Sigma\, P \log(P)$. We adopt this ordering because learning from samples with low entropy (low rain, for example) yields better pseudo labels on samples with higher entropy (high rain, for example)~\cite{dai2020curriculum,zhang2017curriculum}. We create mini-batches of the dataset according to the difficulty of the images. For datasets that provide labels on the intensity of the weather condition (light rain vs. heavy rain), mini-batches can be created directly. For other datasets, we sort the images in increasing order of entropy and then split them into $m$ ($m \sim 4-5$, determined by hyperparameter tuning) equal mini-batches. The model is self-trained on the mini-batches sequentially. For the first mini-batch, the model is initialized with the pre-trained model from Section~\ref{sec:sup}. For subsequent mini-batches, our model is initialized with the weights obtained by training the network on the previous mini-batch.
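The entropy-based curriculum ordering just described can be sketched as follows. This is an illustrative NumPy implementation of ours; in the actual pipeline the probability maps come from the network, which we stand in for here with plain arrays:

```python
import numpy as np

def prediction_entropy(P, eps=1e-12):
    """Entropy of a road-probability map P (H x W), summed over pixels:
    -sum P log P, as defined in the text. `eps` guards log(0)."""
    P = np.clip(P, eps, 1.0)
    return float(-(P * np.log(P)).sum())

def curriculum_minibatches(prob_maps, m=4):
    """Sort target-domain images by increasing prediction entropy
    (easiest, i.e. most confident, first) and split the sorted indices
    into m roughly equal mini-batches for sequential self-training."""
    order = np.argsort([prediction_entropy(P) for P in prob_maps])
    return np.array_split(order, m)
```

A confident prediction map (probabilities near 0 or 1) has low entropy and lands in an early mini-batch, whereas an uncertain map (probabilities near 0.5) is deferred to a later one.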
For each mini-batch, the network is trained in two stages, as described below: \subsubsection{Step 1: Bridging the Domain Gap via Entropy Minimization} \label{sec:ent_min} The pre-trained model from Section~\ref{sec:sup} has low entropy (\ie high prediction probability or better generalization) on images that are similar (for example, similar geography, light rain, light fog) to the source domain images \cite{sakaridis2019guided}, and vice versa. Thus, initializing the network with these pre-trained weights, followed by training with entropy minimization~\cite{grandvalet2005semi,saito2019semi}, allows the network to generate enriched pseudo-labels. The inputs to the network are images from the target domain. Let the predictions of the network be denoted by $\mathcal{P}_{out}$ $\in \mathbb{R}^{w'\times h' \times 1}$, corresponding to the probability that each pixel is classified as `road'. The cost function for entropy minimization is given by, \begin{equation} L_{\textrm{EM}} = - \sum_{\forall \textrm{pixels}}P^{h,w} \log P^{h,w}, \end{equation} \noindent where $P^{h,w}$ is the probability that a pixel belongs to the class `road' at a given location, and $-P^{h,w} \log P^{h,w}$ is the entropy. \subsubsection{Step 2: Online Self-Training Using Enriched Pseudo-Labels} \label{sec:st_pl} The network trained in Step 1 generates enhanced pseudo labels with high probability and is thus a better representative of the target domain than the pre-trained source model from Section~\ref{sec:sup}. These enriched pseudo labels can be used to self-train the model further to improve performance. A traditional method of self-training using pseudo-labels is iterative~\cite{pan2020unsupervised}: the network is trained to convergence (validation loss less than a given threshold) over multiple iterations, and in each iteration, pseudo labels from the model trained in the previous iteration are used to set up the binary segmentation cost function.
We show (Table \ref{tab:sota}) that iterative self-training does not lead to any improvement in performance. This is because the pre-training step, which is imperative for acquiring initial knowledge about road pixels since the problem is unsupervised in the target domain, causes the network to saturate quickly. Hence, we generate the pseudo labels in an online manner, \textit{i.e.}, the pseudo labels are generated from the network that is being trained. This allows the network to self-train from the improved pseudo labels as they are learnt. The network in this stage is initialized with the weights obtained in Step 1. The inputs to the network are images from the target domain. Pseudo labels are generated in an online fashion from the network being trained as follows, \[ Y_{\textrm{pseudo}} = \begin{cases} 1 \ \textrm{if} \ P^{h,w} \geq \tau, \\ 0 \ \textrm{otherwise}, \end{cases} \] \noindent where $P^{h,w}$ is the probability that a pixel belongs to the class `road' at a given location and $\tau$ is a threshold. The network is optimized with these pseudo-labels using a binary cross entropy loss term (similar to Equation \ref{eq: cross_entropy}). \input{Tables/AblationsSOTA/WeatherDatasets} \subsection{Few-Image Fine-Tuning via Model Distillation} \label{sec:fewimage} Some heterogeneous weather datasets like Raincouver \cite{tung2017raincouver} and Berkeley Deep Drive (BDD) \cite{yu2018bdd100k} contain a mixture of adversities within the same image (for instance, night+rain in Raincouver; see Table \ref{tab:datasets}). Furthermore, these datasets are captured under different geographic conditions (i.e., the source and target datasets may be from different regions). To make our model robust against such factors, we use ground truth labels for a few images (on the order of $5-10$ images) from the target dataset in a final refinement step described below.
In a nutshell, given a model trained on the unlabeled target dataset using SS-SFDA, and $k \leq 10$ labeled images from the target domain, our goal is to learn enhanced feature maps for the target domain in the presence of adversarial factors such as mixtures of adversities and different geographical regions. We empirically observe that directly fine-tuning the SS-SFDA~model on the $k$ images is sub-optimal due to overfitting. To prevent overfitting, we propose a model distillation~\cite{li2020model} regularizer. Let the weights of the SS-SFDA~model be denoted by $\omega_{\textrm{SS-SFDA}}$, and the weights of the model currently being trained be denoted by $\omega_{\textrm{fewIm}}$. The cost function for model distillation is given by, \[ L_{\textrm{model-distil}} = C(\omega_{\textrm{SS-SFDA}},\omega_\textrm{fewIm}), \] \noindent where $C$ represents a distance function such as the MSE distance or the L1 distance. In our benchmarks, the MSE distance works best. The network is first initialized with the weights of the SS-SFDA~model. The model distillation term $L_{\textrm{model-distil}}$ with weight parameter $\lambda_{\textrm{model-distil}}$ is applied in conjunction with the binary cross entropy loss (Equation \ref{eq: cross_entropy}) computed between the probability predictions and the ground-truth labels for the $k$ images. The $\lambda_{\textrm{model-distil}}$ term balances extracting domain-specific characteristics from the $k$ images (such as a mix of adverse weather, geographical features, etc.) against keeping the weights of the model from diverging from the SS-SFDA~weights (for better generalization). The overall objective is, \begin{equation} \mathcal{L}_{\textrm{overall}} = L_{\textrm{CE}} + \lambda_{\textrm{model-distil}} L_{\textrm{model-distil}} \end{equation} \section{Experiments and Results} We use the CityScapes dataset as the clear weather source domain.
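The distillation regularizer and the combined objective can be sketched as below (a minimal sketch treating the weight sets as flat lists; MSE is used for $C$ since the paper reports it works best, and the default $\lambda_{\textrm{model-distil}} = 1.0$ follows the value its hyperparameter tuning finds):

```python
def model_distillation_loss(w_sfda, w_few):
    """L_model-distil = C(w_SS-SFDA, w_fewIm) with C = mean squared error
    between the frozen SS-SFDA weights and the weights being fine-tuned."""
    return sum((a - b) ** 2 for a, b in zip(w_sfda, w_few)) / len(w_sfda)

def overall_loss(l_ce, w_sfda, w_few, lam=1.0):
    """L_overall = L_CE + lambda * L_model-distil."""
    return l_ce + lam * model_distillation_loss(w_sfda, w_few)
```

The regularizer penalizes the fine-tuned model for drifting away from the SS-SFDA weights, which is what prevents overfitting to the $k$ labeled images.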
We conduct evaluation experiments on $6$ datasets captured in adverse environmental conditions, described in Table \ref{tab:datasets}. We evaluate our model using four metrics: mean Intersection over Union (mIoU), Recall (or accuracy), Precision, and F1 score. All our models are trained using one NVIDIA GeForce GPU, and we implement the model using the PyTorch framework. We will make all code publicly available. The hyperparameters generalize across our experiments on all datasets. For the segmentation model, we use the SGD optimizer with a learning rate of $2.5 \times 10^{-4}$, a momentum of $0.9$, and a weight decay of $0.0005$. Dataset specific details are provided in the table below. Images are downsampled (by a factor of $2$, where necessary) by bilinear sampling, and the corresponding ground-truth labels are downsampled by nearest neighbour downsampling. In this section, we highlight our main results, which we summarize as follows: \begin{itemize}[nosep] \item The self-attention auto-encoder is comparable ($94.7\%-101.25\%$ of second best SOTA mIoU) to more complex and sophisticated architectures for road segmentation (Tables~\ref{tab:cityscapesclearweather_modelcomplexity} and~\ref{tab:sup_weatherdatasets}). \item We empirically show that our method approximates supervised learning-based models ($88-96\%$ of supervised mIoU) across all $6$ datasets (Tables \ref{tab:syntheticrain_mainresults}, \ref{tab:Foggy_Zurich}, \ref{tab:Dark_Zurich}, \ref{tab:raincouver}, \ref{tab:bdd}). \item We demonstrate an improvement of at least $10.26\%$ over prior work in SFDA (Table \ref{tab:sota}). \item We improve training time over prior SFDA approaches by $18-180\times$.
\end{itemize} \subsection{Analysing the Pre-Trained Self-Attention-based AutoEncoder} \label{sec:analysis_safe} \input{Tables/Supervised/modelComplexity} \input{Tables/Supervised/weatherdatasets} In this section, we analyse the performance of our self-attention auto-encoder model (described in Section \ref{sec:sup}) and benchmark its performance on various datasets in the supervised setting. In Table \ref{tab:cityscapesclearweather_modelcomplexity} (I), we show that adding self-attention to various conventional semantic segmentation models, which encodes semantic relationships by capturing long-range dependencies, improves performance over the corresponding baselines. We use the DRN-D-38 model \cite{yu2017dilated} with self-attention in all further experiments. In Table \ref{tab:sup_weatherdatasets} (I, II, III), we benchmark the model on various weather datasets in the supervised setting. We use these supervised numbers in the following subsections to perform a comparative study of our self-supervised model (see Figure \ref{fig: coverpic} for a summary). In Table \ref{tab:sup_weatherdatasets} (IV), we show that the self-attention autoencoder is comparable to the state-of-the-art on various datasets. \input{Tables/WeatherDatasets/syntheticrain} \subsection{Results on Synthetic Datasets: Rain and Fog} \label{sec:exp_syn} We perform three evaluation experiments. Experiment A corresponds to testing the pre-trained CityScapes model on varying intensities of rain and fog. Experiment B corresponds to results obtained by training SS-SFDA~without curriculum learning (\textit{i.e.} by initializing with the CityScapes pre-trained model and training on higher intensities of rain and fog directly), and Experiment C corresponds to results obtained by training SS-SFDA~as proposed in Section \ref{sec:unsup} (\ie with curriculum learning).
\noindent \textbf{Results on Synthetic Rain:} The results of SS-SFDA~on Synthetic Rainy CityScapes \cite{halder2019physics} are shown in Table \ref{tab:syntheticrain_mainresults} (I). For low intensities of rain, the performance of the CityScapes model is more or less preserved (Table~\ref{tab:sup_weatherdatasets} (I)). For higher intensities of rain, there is a degradation in performance. We observe that the decrease in performance is highest for $200$mm rain, at $27.15\%$. For $75$mm, $100$mm, and $200$mm, Experiment B imparts an improvement of $1.03\%$, $4.45\%$, and $21.63\%$, respectively, over the corresponding baselines. Experiment C leads to a cumulative improvement of $23.64\%$ over the corresponding baseline. Furthermore, we demonstrate that the mIoU and recall of our SS-SFDA, which is completely unsupervised, are $90.07\%-90.80\%$ and $92.53\%-97.60\%$ of their supervised counterparts. \noindent \textbf{Results on Synthetic Fog:} The results of our SS-SFDA~on Synthetic Foggy CityScapes \cite{halder2019physics} are shown in Table \ref{tab:syntheticrain_mainresults} (II). Similar to synthetic rain, the performance of the CityScapes model on light fog is more or less preserved (Table \ref{tab:sup_weatherdatasets} (II)). For higher intensities of fog, there is a degradation in performance. We notice that the degradation is very high for visibility distances less than 150m, and is the highest at $60.6\%$ for 30m fog. Experiment B leads to an improvement of 4.89\%, 13.9\%, 25.04\%, and 13.7\% on 150m, 75m, 50m, and 40m fog, respectively. Direct application of the self-training algorithm on 30m fog (without curriculum learning) degrades performance since the generalization of CityScapes on 30m is very poor. Experiment C (training SS-SFDA~by curriculum learning, \ie progressively from 150m fog to 30m fog) cumulatively improves performance over the baselines (Experiment A) by 23.54\%, 28.18\%, 80.78\%, and 130.65\% on 75m, 50m, 40m, and 30m fog, respectively.
Furthermore, we demonstrate that the mIoU and recall of our SS-SFDA, which is completely unsupervised, are 90.8\%-96.78\% and 91.12\%-98.82\% of the supervised values. \noindent \textbf{Benefits of Curriculum Learning:} On a given dataset, the performance of our self-training algorithm depends heavily on the generalization capabilities of the pre-trained model used for initialization. Therefore, progressively initializing and training the model on increasing intensities of rain and fog leads to the best accuracies (Table \ref{tab:syntheticrain_mainresults}). This implies that progressively training from low intensities to high intensities of rain and fog improves the quality of pseudo labels and prediction probabilities on high intensities of rain and fog. In simpler terms, for the $100$mm rain dataset, a model trained on $75$mm rain will work better than the CityScapes clear weather model. Similarly, for the $200$mm rain dataset, a model trained on $100$mm rain will work better than the $75$mm rain model, which in turn will generalize better than the CityScapes clear weather model. A similar intuition can be drawn for synthetic fog. We validate this hypothesis in Table \ref{tab:clpseudolabels_proof}. The first and second columns correspond to the datasets that the model is trained and tested on, respectively. We observe that curriculum learning progressively improves performance, which results in high quality pseudo labels with high confidence, a boon for self-training. \noindent \textbf{Generalization trends:} We observe that our self-supervised SS-SFDA~preserves the accuracy on the clear weather source dataset, CityScapes. The mIoU and accuracy of the fog model on clear weather CityScapes are at $96.9\%$ and $99.00\%$ of the supervised CityScapes model, respectively. The corresponding numbers for synthetic rain are $98.11\%$ and $98.72\%$, respectively.
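The curriculum schedule described above amounts to a simple loop over intensities ordered from light to severe, each stage initialized from the previous one (an illustrative sketch: `train_step` stands in for Steps 1 and 2 of SS-SFDA and is not the paper's actual API):

```python
def curriculum_self_training(pretrained_model, staged_datasets, train_step):
    """Train progressively, e.g. 75mm -> 100mm -> 200mm rain, or
    150m -> 75m -> 50m -> 40m -> 30m fog. Each stage starts from the
    weights produced by the previous, easier stage."""
    model = pretrained_model
    for dataset in staged_datasets:
        model = train_step(model, dataset)
    return model
```

The key design choice is that the model handed to each stage already generalizes reasonably to that intensity, so the online pseudo-labels it generates are of high quality.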
\input{Tables/CurriculumLearningPseudoLabels/syntheticrainfog} \input{Visualizations/MainPaper/visualizations} \input{Tables/WeatherDatasets/foggyzurich} \subsection{Results on Real Datasets: Foggy Zurich and Dark Zurich} \label{sec:exp_real} \noindent \textbf{Analysis: Foggy Zurich:} The results are presented in Table \ref{tab:Foggy_Zurich}. The pre-trained CityScapes model fails to generalize (Table \ref{tab:Foggy_Zurich} (I)) to the real fog dataset due to the domain gap. In accordance with the curriculum learning strategy, the self-training algorithm is first applied on images with light fog, and then on images with medium fog. We show results for each of the two stages of SS-SFDA~to demonstrate how the two-step training procedure gradually improves performance. Training with light fog improves the mIoU by $64.99\%$ over the corresponding clear weather baseline. Initializing with the model trained on light fog and fine-tuning on medium fog using SS-SFDA~further improves the performance by $22.3\%$, thus resulting in a cumulative improvement of $101.84\%$. We do not show comparisons with supervised methods due to a lack of labeled datasets. \input{Tables/WeatherDatasets/darkzurich} \input{Tables/WeatherDatasets/raincouver} \input{Tables/WeatherDatasets/bdd} \input{Tables/AblationsSOTA/sota} \noindent \textbf{Analysis: Dark Zurich:} The results are presented in Table \ref{tab:Dark_Zurich}. Table \ref{tab:Dark_Zurich} (I) shows that the CityScapes pre-trained model does not work well on Dark Zurich. SS-SFDA~first trains on the twilight images, and then on the night images. Training with twilight images improves the mIoU by $24.91\%$ over the corresponding clear weather baseline. Initializing with the model trained on twilight images and fine-tuning on night images further improves performance by $4.41\%$, thus resulting in a cumulative improvement of $31.47\%$. We do not show comparisons to supervised accuracies due to the unavailability of training labels.
\subsection{Heterogeneous Real Datasets: Raincouver and Berkeley Deep Drive} \label{sec:exp_complexreal} Raincouver~\cite{tung2017raincouver} and Berkeley Deep Drive (BDD)~\cite{yu2018bdd100k} are complex datasets with images containing a mix of weather conditions in addition to scenes from different geographical regions. Raincouver consists of images captured under rain during the night. BDD consists of images in snow, fog, low light, glare, rain, etc. We show that SS-SFDA~benefits from supervised fine-tuning with just $5-10$ images using the procedure discussed in Section~\ref{sec:fewimage}. The models converge in 40 iterations, which takes 2 minutes to train on one NVIDIA GeForce GPU with 11GB memory. \noindent \textbf{Analysis: Raincouver:} The results are shown in Table~\ref{tab:raincouver}. SS-SFDA~results in an mIoU of $52.42$, which is $72.9\%$ of the supervised mIoU. In Table~\ref{tab:raincouver} (II), we show the effectiveness of the fine-tuning step. As $k$ (the number of supervised images used by the algorithm) increases, the performance of the model improves. Using just $1$ image ($k=1$) results in an improvement of $4.3\%$ (over SS-SFDA). $k=10$ achieves $88.82\%$ of the supervised mIoU. In Table~\ref{tab:raincouver} (III), we demonstrate that model distillation improves mIoU by $11.86\%$. Hyperparameter tuning on the model distillation hyperparameter $\lambda_{\textrm{model-distil}}$ reveals that a value of $1.0$ works best. \noindent \textbf{Analysis: Berkeley Deep Drive:} The CityScapes model results in an mIoU of $70.28$ (baseline). Training with SS-SFDA~improves performance by $7.12\%$ over the baseline, and is at $84.41\%$ of the supervised mIoU. In Experiment III, we pick $k$ random images from the dataset and fine-tune the network using the fine-tuning step. Consistent with our intuition, we observe that the performance improves as the number of images increases.
For $k=10$, we demonstrate an mIoU improvement of $10.30\%$ over SS-SFDA, which is $93.11\%$ of the supervised accuracy. \subsection{Training Time and Convergence} The pre-training step helps our model converge in $1/6$ of an epoch (the accuracies are similar over multiple random runs), thus bringing the training time down to 15 minutes on an NVIDIA GeForce GPU with 11GB memory. Generative SFDA models~\cite{kundu2020universal} take 30 epochs to converge, while prior work on self-training \cite{zou2018unsupervised} and SFDA~\cite{kim2020domain} applied (where feasible) to road segmentation converges in 3 epochs. Thus, we improve training time over prior SFDA approaches by $18-180 \times$. \subsection{Comparisons with Prior Work} \label{sec:sota} We adapt (where feasible) the state-of-the-art method in each of the following categories: self-supervised learning \cite{zou2018unsupervised}, source-free DA~\cite{kim2020domain}, and DA \cite{vu2019advent} to road segmentation for comparisons. Other methods like feature alignment by class-wise prototype learning \cite{liang2020we}, generative methods \cite{kundu2020universal,hou2020source,kurmi2021domain,yeh2021sofa,li2020model}, and models that optimize source domain class priors \cite{hacohen2019power} do not scale well to SFDA road segmentation due to the problems highlighted in Section~\ref{sec:relatedwork_da}. We perform evaluations using each of the methods on synthetic rain (Table~\ref{tab:sota} (I)) and observe that our model outperforms all prior methods. Finally, we train the best performing SOTA model on BDD (Table \ref{tab:sota} (II)) for comparisons on heterogeneous real datasets, which carry the highest level of difficulty. On BDD, our method outperforms prior methods by $10.26\%$. \section{Conclusion, Limitations, and Future Work} We propose a new method for road segmentation in adverse weather conditions using a novel self-supervised source-free domain adaptation approach.
Through our evaluations on $6$ real and synthetic datasets, we show that our self-supervised model, which has access to only a pre-trained clear weather model and unlabeled target images, exhibits accuracy that is comparable to completely supervised models which have access to labels for all target domain images. In addition, we exhibit benefits in terms of faster training time and state-of-the-art performance. There are a few limitations of our work. Currently, our approach is designed for binary segmentation and cannot perform multi-class segmentation. Extending the current method to multi-class segmentation would generalize the approach beyond road segmentation. Moreover, it would be interesting to investigate if SFDA via self-supervised learning could be extended to other computer vision problems such as object recognition, image classification, scene understanding, etc. \section*{A.1 Training details} We will make all code publicly available. Our code is written in PyTorch. The hyperparameters generalize across our experiments on all datasets. For the segmentation model, we use the SGD optimizer with a learning rate of $2.5 \times 10^{-4}$, a momentum of $0.9$, and a weight decay of $0.0005$. Dataset specific details are provided in the table below. Images are downsampled (by a factor of $2$, where necessary) by bilinear sampling, and the corresponding ground-truth labels are downsampled by nearest neighbour downsampling. \input{LaTeX/Tables/AblationsSOTA/suppWeather} \section*{A.2 Qualitative results} Please refer to the supplementary video for more qualitative results.
\section*{Program summary} \noindent \textit{Program title}: \textsf{SpaceGroupIrep} \noindent \textit{Developer's repository link}: \url{https://github.com/goodluck1982/SpaceGroupIrep} \noindent\textit{Licensing provisions}: GNU General Public Licence 3.0 \noindent\textit{Distribution format}: tar.gz \noindent\textit{Programming language}: Mathematica \noindent\textit{Classification}: 11.2 \noindent\textit{External routines/libraries used}: \textsf{spglib} (\url{http://spglib.github.io/spglib}) \noindent\textit{Nature of problem}: Space groups and their representations are an important mathematical language for describing symmetry in crystals. The book\textemdash ``The mathematical theory of symmetry in solids'' by C. J. Bradley \& A. P. Cracknell (called the BC book)\textemdash is highly influential because it contains not only systematic theory but also detailed complete data of space groups and their representations. The package \textsf{SpaceGroupIrep} digitizes these data in the BC book and provides tens of functions to manipulate them, such as obtaining group elements and calculating their multiplications, identifying k-points, showing the character table of any little group, determining the little-group (LG) irreducible representations (IRs) of energy bands, and calculating the direct product of space-group (SG) IRs. This package is a useful database and tool set for space groups and their representations in BC convention. \noindent\textit{Solution method}: The direct data in the BC book is used to calculate the LG IRs for standard k-points defined in the book. For a non-standard k-point, we first relate it to a standard k-point by an element which makes the space group self-conjugate and then calculate the LG IRs through the element. SG IRs are obtained by calculating the induced representations of the corresponding LG IRs. The full-group method based on double coset is used to calculate the direct products of SG IRs.
In addition, an external package \textsf{spglib} is utilized to help convert any input cell to a cell in BC convention. \section{Introduction} Symmetry embodies the beauty of natural law and hence plays an important role in physics. A famous example is Noether's theorem, which relates symmetry invariance to conserved quantities. Another example is the various selection rules determined by symmetry. Furthermore, in recent years point-group (PG) and space-group (SG) symmetries have demonstrated deep connections to topological physics, such as topological insulators\citep{Weng_Fang_2014_4_11002__Transition}, topological crystalline insulators\citep{Fu_Fu_2011_106_106802__Topological,Hsieh_Fu_2012_3_982__Topological}, Dirac and Weyl semimetals\citep{Wang_Fang_2012_85_195320__Dirac,Weng_Dai_2015_5_11029__Weyl}, nodal line and nodal loop semimetals\citep{Fang_Fu_2015_92_81201__Topological,Li_Yang_2017_96_81106__Type,Ma_Yao_2018_98_201104__Mirror}, nodal chain metals\citep{Bzdusek_Soluyanov_2016_538_75__Nodal}, hourglass-band materials \citep{Wang_Bernevig_2016_532_189__Hourglass,Li_Yang_2018_97_45131__Nonsymmorphic,Fu_Yao_2018_98_75146__Hourglasslike}, and topological photonic crystals with all-dielectric materials\citep{Wu_Hu_2015_114_223901__Scheme,Lu_Soljacic_2016_12_337__Symmetry,Slobozhanyuk_Khanikaev_2016_11_130__Three,Ji_Yao_2019_99_43801__Transport}.
In addition, the representation theory of space groups is also used in the symmetry indicator method and the elementary band representation method to classify symmetry-protected band topology of nonmagnetic materials\citep{Po_Watanabe_2017_8_50__Symmetry,Kruthoff_Slager_2017_7_41069__Topological,Song_Fang_2018_9_3530_1711.11049v3_Quantitative,Zhang_Fang_2019_566_475__Catalogue,Tang_Wan_2019_15_470__Efficient,Tang_Wan_2019_566_486__Comprehensive,Tang_Wan_2019_5_8725__Topological,Bradlyn_Bernevig_2017_547_298__Topological,Cano_Bernevig_2018_120_266401__Topology,Cano_Bernevig_2018_97_35139__Building,Vergniory_Wang_2019_566_480__complete}, which greatly helps to search systematically for materials with specific band topology. Although nearly all modern codes of density functional theory (DFT) support symmetry analysis, such as performing the Brillouin zone (BZ) integration in a symmetrically irreducible wedge of the BZ to accelerate calculations, none of them gives full information on space groups and their representations as far as we know. The DFT codes \textsf{VASP}\citep{vasp1} and \textsf{ABINIT}\citep{abinit1} give SG operations in their output files, and \textsf{ABINIT} also provides the SG name, but neither of them gives information about the little-group (LG) irreducible representations (IRs). It is known that an SG IR is obtained by induction from an LG IR (also called a small representation), and in most cases LG IRs are enough for symmetry analysis. The DFT codes \textsf{WIEN2k}\citep{wien2k1} and \textsf{Quantum ESPRESSO}\citep{QuantumESPRESSO1} can give LG IRs in the simple case of symmorphic space groups, but neither of them can process LG IRs with wave vectors (i.e. k-points) on the boundary of the BZ for nonsymmorphic space groups.
The manual of \textsf{WIEN2k} says \textquotedblleft \textit{It will not work in cases of non-symmorphic spacegroups AND k-points at the surface of the BZ}\textquotedblright{} for its program \textsf{IRREP}, while in the output of \textsf{band.x} of \textsf{Quantum ESPRESSO} there are remarks ``\textit{zone border point and non-symmorphic group, symmetry decomposition not available}''. Determining the LG IRs of Bloch states involves mainly two steps: the first step is to calculate the characters of each operation in the little group, and the second step is to identify the LG IRs by looking up the character tables of LG IRs. In fact, the character tables of LG IRs were given by several books decades ago\citep{Kovalev1965,MLbook,ZakCGG,BCbook,CDML}. Then why can few DFT codes give LG IRs in all cases? We think the main reason is that the sheer amount of data relating to all LG IRs, together with relations among the data that are not easy to grasp for those unfamiliar with the representation theory of space groups, hinders the realization of full support of LG IRs in DFT codes. On the contrary, in the simple case of symmorphic space groups, which can be treated by \textsf{WIEN2k} and \textsf{Quantum ESPRESSO}, only PG character tables are needed to determine LG IRs, and PG character tables occupy only several pages, compared to the character tables of LG IRs, which can fill a book of hundreds of pages. In fact, there have been third-party databases of SG/LG IRs available for a long time, namely the \textsf{ISOTROPY} software suite\citep{iso1,iso-ir} and the Bilbao Crystallographic Server (BCS)\citep{BCS-II,BCS-DSG}. \textsf{ISOTROPY} is a collection of programs using space group theory to analyze phase transitions in crystals. It contains and provides all the data of SG IRs, but it does not give the data of LG IRs directly. Nor does it have IR data of double space groups, because these are not needed to analyze phase transitions in crystals.
On the other hand, the BCS is a user-friendly website with various crystallographic databases and programs available online. It has full support of LG IRs and SG IRs for both space groups and double space groups. However, all the IR data are online, and BCS does not provide offline programs. For common users, this makes BCS inconvenient for batch processing large numbers of offline jobs. Only recently has there been a program called \textsf{irvsp} capable of calculating the LG IRs of Bloch states\citep{Gao_Wang_2020___2002.04032v1_Irvsp}. \textsf{irvsp} is a post-processing program written in Fortran and designed to calculate the LG IRs of Bloch states generated by \textsf{VASP} according to the character tables of LG IRs of BCS (obtained from a developer of BCS). Historically, different conventions have been used to describe space groups and their IRs by different authors, such as the Kovalev convention\citep{Kovalev1965}, the Zak-Casher-Gl\"{u}ck-Gur (ZCGG) convention\citep{ZakCGG}, the Bradley-Cracknell (BC) convention\citep{BCbook}, and the Cracknell-Davies-Miller-Love (CDML) convention\citep{CDML}. Both \textsf{ISOTROPY} and BCS use the CDML convention. However, after we analyzed the times cited of these conventions on Web of Science\citep{note1}, we found that the BC convention is the most used one, especially in recent years, as shown in Fig. \ref{fig:timesCited}. This is not surprising, because the BC book not only contains the tables of LG IRs and related complete data but also contains a comprehensive and systematic theory of space groups and their representations, which makes it a classic reference book and teaching material. In addition, the BC book \citep{BCbook} is still in print and hence the most easily obtained, while the other books \citep{Kovalev1965,ZakCGG,CDML} are all out of print and hard to obtain.
For example, we tried our best to obtain the CDML book \citep{CDML} but ultimately failed, and hence we can only understand the CDML convention indirectly, from \textsf{ISOTROPY} or BCS. The first version of the BC book was published in 1972 and a reprint version was published in 2009. We think it was the popularity of the BC book that led to its reprint in 2009, and the reprint made the BC book easier to obtain and more popular. \begin{figure} \begin{centering} \includegraphics[width=10cm]{fig-citation.pdf} \par\end{centering} \caption{Times cited of the books \citep{Kovalev1965}, \citep{ZakCGG}, \citep{BCbook}, and \citep{CDML} in each year on Web of Science, corresponding to the Kovalev, ZCGG, BC, and CDML conventions of space groups and their IRs, respectively. The data are up to Aug. 2020.\label{fig:timesCited}} \end{figure} Although the BC book is popular, the space group settings used in it are different from the commonly used settings in ``International Tables for Crystallography, Volume A'' (hereafter referred to as ITA)\citep{ITA}. This may make the BC settings of space groups less intuitive. Additionally, there are various tables correlated with each other in the BC book, which makes it somewhat tedious and complicated to extract information from them. For example, if we want to know the character of the operation $\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\frac{1}{4}\}$ in the IR $\Gamma_{2}^{+}$ of space group $Fddd$ (No. 70), we have to first look up Tab. 5.7 in the BC book (hereafter referred to as ``BC-Tab. 5.7'') to find the abstract group $G_{8}^{3}$, the generators of the Herring little group (HLG) $\{C_{2z}|000\}$, $\{C_{2y}|000\}$, $\{I|\frac{1}{4}\frac{1}{4}\frac{1}{4}\}$, and the letter ``b'', according to which we know from BC-Tab. 5.8 that the IR $\Gamma_{2}^{+}$ is the IR $R_{2}$ of $G_{8}^{3}$.
Then we calculate the elements of the HLG according to the generators $P=\{C_{2z}|000\}$, $Q=\{C_{2y}|000\}$, $R=\{I|\frac{1}{4}\frac{1}{4}\frac{1}{4}\}$ of $G_{8}^{3}$ in BC-Tab. 5.1 and find that $C_{6}=PR=\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\bar{\frac{3}{4}}\}$. Hence we find in the character table of BC-Tab. 5.1 that the character of $\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\bar{\frac{3}{4}}\}$ in the IR $R_{2}$ of $G_{8}^{3}$ is $-1$. According to the properties of LG IRs {[}refer to Eq. (\ref{eq:Gammakpv+t}){]}, we know that the character of $\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\frac{1}{4}\}$ in the IR $\Gamma_{2}^{+}$ is $\chi(\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\frac{1}{4}\})=e^{-i\vk\cdot\Delta\vR}\chi(\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\bar{\frac{3}{4}}\})=\chi(\{\sigma_{z}|\frac{1}{4}\frac{1}{4}\bar{\frac{3}{4}}\})=-1$, in which $\vk$ is $\Gamma$ and $\Delta\vR=(\frac{1}{4}\frac{1}{4}\frac{1}{4})-(\frac{1}{4}\frac{1}{4}\bar{\frac{3}{4}})=(001)$. Furthermore, the rotation matrices used to calculate the HLG are defined in BC-Tab. 3.2. This example shows the cumbersome process of extracting information from the tables of the BC book. Obtaining large amounts of data this way manually is not only tedious but also error-prone. Consequently, a program based on the SG and IR data in the BC book and capable of automating this process is highly desirable. However, as far as we know no such programs are available; therefore, we developed such a package, named \textsf{SpaceGroupIrep}, in the Mathematica language. Different conventions use different notations to label the LG IRs; therefore, the meanings of IR labels are clear only if the convention used is specified. This is particularly true for the ZCGG, BC, and CDML conventions, because they use similar labels such as $\Gamma_{1},\Gamma_{2},\cdots,X_{1},X_{2},\cdots$ (called ``$\Gamma$ labels'' here) but with possibly different meanings.
A concomitant problem is how to find the correspondence between the IR labels of two different conventions. Before our \textsf{SpaceGroupIrep}, the only route we knew of was to use \textsf{ISOTROPY}. \textsf{ISOTROPY} can give the correspondence of IR labels for all of the Kovalev, ZCGG, BC, and CDML conventions. However, \textsf{ISOTROPY} works only for high-symmetry (HS) k-points but not for HS lines, and \textsf{ISOTROPY} does not distinguish a couple of complex conjugate IRs related by time reversal symmetry. Furthermore, \textsf{ISOTROPY} does not support IRs of double space groups. For these reasons, at present we have implemented the correspondence of LG IR labels between the BC convention and the CDML convention in \textsf{SpaceGroupIrep}. Notice that hereafter the CDML convention will be called the BCS convention, because the CDML IR data we used are actually the BCS IR data collected from the output of \textsf{irvsp}. The \textsf{SpaceGroupIrep} package contains all necessary data related to space groups and their IRs defined in the BC book, and tens of functions for manipulating these data. It can give both the LG IRs and SG IRs at any k-point in an intuitive table form. It can calculate the reduction of the direct product of two SG IRs. It can read the \textsf{trace.txt} file generated by \textsf{vasp2trace}\citep{Vergniory_Wang_2019_566_480__complete} and determine the LG IRs in BC convention for all Bloch states. It can give the correspondence of LG IR labels between the BC convention and the BCS convention. It can also convert any given crystalline structure to the one in BC convention with the help of the external package \textsf{spglib}\citep{spglib}. All of the above also holds for double-valued IRs. In addition, \textsf{SpaceGroupIrep} can help one study and understand the BC book. It can easily give the elements of a designated space group, little group, Herring little group, or central extension of a little co-group, and calculate the multiplication of the elements.
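The group-element manipulations mentioned above rest on the Seitz-symbol product $\{R_1|\bm{v}_1\}\{R_2|\bm{v}_2\}=\{R_1R_2|R_1\bm{v}_2+\bm{v}_1\}$, which can be sketched in a few lines (an illustrative sketch in Python rather than Mathematica; the operators below are generic examples, not data from the BC tables):

```python
from fractions import Fraction as F

def seitz_mul(a, b):
    """{R1|v1}{R2|v2} = {R1*R2 | R1*v2 + v1}; rotations are 3x3 integer
    matrices (tuples of tuples), translations are length-3 tuples."""
    (R1, v1), (R2, v2) = a, b
    R = tuple(tuple(sum(R1[i][k] * R2[k][j] for k in range(3))
                    for j in range(3)) for i in range(3))
    v = tuple(sum(R1[i][k] * v2[k] for k in range(3)) + v1[i] for i in range(3))
    return (R, v)

E   = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
C2z = ((-1, 0, 0), (0, -1, 0), (0, 0, 1))

# A two-fold screw operation {C2z | (0, 0, 1/2)}:
screw = (C2z, (F(0), F(0), F(1, 2)))
```

Here `seitz_mul(screw, screw)` yields $\{E|(0,0,1)\}$, a pure lattice translation: exactly the nonsymmorphic behaviour that makes k-points on the BZ boundary subtle.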
In a word, the \textsf{SpaceGroupIrep} package is a database and tool set for SG/LG IRs in BC convention, which is very useful in both study and research. \section{Theory} \subsection{Representation theory overview} Let $G$ be a space group whose elements are in the form of Seitz symbol $\{R|\bm{v}\}$. $\{R|\bm{v}\}$ means a rotation $R$ followed by a translation by vector $\bm{v}$. Select one k-point from each wave vector star (i.e. k-star) arbitrarily. Then the induced representations of all the allowed LG IRs of these selected k-points are just all the SG IRs of $G$. Let $\vk$ be a wave vector, its little group be $G^{\vk}$, and its wave vector star be $^{*}\vk$. Suppose that $\Gamma_{p}^{\vk}$ is the $p$-th allowed LG IR of $G^{\vk}$ with dimension $d_{p}$. The modifier ``allowed'' means that $\Gamma_{p}^{\vk}$ satisfies \begin{equation} \Gamma_{p}^{\vk}(\{E|\bm{t}\})=e^{-i\vk\cdot\bm{t}}\Gamma_{p}^{\vk}(\{E|\bm{0}\})=e^{-i\vk\cdot\bm{t}}I_{p}, \end{equation} where $E$ is the identity element of point group, $\bm{t}$ is a lattice vector, hence $\{E|\bm{t}\}$ is a pure translation operation, and $I_{p}$ is a $d_{p}\times d_{p}$ identity matrix. This allowing condition makes LG IRs compatible with the IRs of translation group $T$, and it also makes the representation matrices of all LG elements with the same rotation easily obtained through the relation \begin{equation} \Gamma_{p}^{\vk}(\{R|\bm{v}+\bm{t}\})=e^{-i\vk\cdot\bm{t}}\Gamma_{p}^{\vk}(\{R|\bm{v}\})\label{eq:Gammakpv+t} \end{equation} if $\Gamma_{p}^{\vk}(\{R|\bm{v}\})$ is known. If not explicitly stated otherwise, all the LG IRs we mentioned are allowed. Use $\{R_{\alpha}|\bm{\tau}_{\alpha}\}$ $(\alpha=1,2,\cdots,m_{\vk})$ to denote the coset representatives of the left cosets of $G^{\vk}$ in $G$. 
Then the SG IR induced from $\Gamma_{p}^{\vk}$, denoted by $\Gamma_{p}^{\vk}\uparrow G$ or $^{*}\Gamma_{p}^{\vk}$, is an $m_{\vk}d_{p}$-dimensional IR determined by \begin{equation} \big[{}^{*}\Gamma_{p}^{\vk}(\{R|\bm{v}\})\big]_{\alpha\beta}=\begin{cases} \Gamma_{p}^{\vk}(\{R_{\alpha}|\bm{\tau}_{\alpha}\}^{-1}\{R|\bm{v}\}\{R_{\beta}|\bm{\tau}_{\beta}\}) & \ \ \text{if }\{R_{\alpha}|\bm{\tau}_{\alpha}\}^{-1}\{R|\bm{v}\}\{R_{\beta}|\bm{\tau}_{\beta}\}\in G^{\vk}\\ 0 & \ \ \text{otherwise} \end{cases}, \end{equation} where $m_{\vk}=|G|/|G^{\vk}|$ is the number of k-points in the star $^{*}\vk$, and $|G|$ means the order of the group $G$. It can be seen that the LG IRs have to be known first in order to determine the SG IRs. Consequently, the core problem in SG representation theory is to obtain all LG IRs of a space group. The projective representation method is a general method to obtain LG IRs. Suppose that $\{R_{i}|\bm{v}_{i}\}$ runs over the coset representatives of the cosets of $T$ in $G^{\vk}$; then the set of the $R_{i}$ forms the little co-group of $G^{\vk}$, denoted by $\bar{G}^{\vk}$. Further suppose that $\tilde{D}_{p}^{\vk}$ is the $p$-th projective representation of $\bar{G}^{\vk}$ with the factor system $\mu(R_{i},R_{j})=\exp[-i(R_{i}^{-1}\vk-\vk)\cdot\bm{v}_{j}]$; then the LG IR $\Gamma_{p}^{\vk}$ is determined by the following simple relation \begin{equation} \Gamma_{p}^{\vk}(\{R|\bm{v}\})=e^{-i\vk\cdot\bm{v}}\tilde{D}_{p}^{\vk}(R). \end{equation} To obtain the projective representation $\tilde{D}_{p}^{\vk}$ of the little co-group $\bar{G}^{\vk}$, we can resort to the central extension of $\bar{G}^{\vk}$, denoted by $\bar{G}^{\vk*}$.
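The induction formula above can be illustrated with a small toy example. The following Python sketch uses entirely hypothetical toy data (not output of \textsf{SpaceGroupIrep}): the point group $D_2$ realized as diagonal sign matrices, a little co-group $\{E,C_{2z}\}$ with a one-dimensional LG IR, and coset representatives $\{E,C_{2x}\}$; translations are suppressed, i.e. a symmorphic case is assumed.

```python
import numpy as np

# Toy illustration of the induction formula (symmorphic case, translations
# suppressed; all choices here are hypothetical toy data): point group
# D2 = {E, C2x, C2y, C2z} as diagonal sign matrices, little co-group
# {E, C2z}, coset representatives {E, C2x}, and a one-dimensional LG IR
# Gamma with Gamma(E) = 1, Gamma(C2z) = -1.
E   = np.diag([ 1,  1,  1])
C2x = np.diag([ 1, -1, -1])
C2y = np.diag([-1,  1, -1])
C2z = np.diag([-1, -1,  1])
G     = [E, C2x, C2y, C2z]
Gk    = [E, C2z]                  # little co-group of k
reps  = [E, C2x]                  # coset representatives R_alpha
Gamma = {E.tobytes(): 1, C2z.tobytes(): -1}

def induce(R):
    """2x2 induced matrix: entry (alpha, beta) is Gamma(R_a^-1 R R_b) if
    that product lies in the little co-group, and 0 otherwise."""
    M = np.zeros((2, 2))
    for a, Ra in enumerate(reps):
        for b, Rb in enumerate(reps):
            prod = Ra @ R @ Rb    # R_alpha^-1 = R_alpha for these involutions
            if any((prod == g).all() for g in Gk):
                M[a, b] = Gamma[prod.tobytes()]
    return M

# The induced matrices multiply homomorphically, i.e. they form a
# representation of D2 of dimension m_k * d_p = 2 * 1:
for R1 in G:
    for R2 in G:
        assert np.allclose(induce(R1 @ R2), induce(R1) @ induce(R2))
```

The off-diagonal block structure of `induce(C2x)` shows how the induced matrices permute the two arms of the star, exactly as the zero pattern in the formula dictates.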
The group elements of $\bar{G}^{\vk*}$ are of the form $(R_{i},\alpha)$ with $\alpha=0,1,\cdots,g-1$, where $g$ is the smallest positive integer determined by the factor system \begin{equation} \mu(R_{i},R_{j})=\exp[-i(R_{i}^{-1}\vk-\vk)\cdot\bm{v}_{j}]=\exp[2\pi ia(R_{i},R_{j})/g]\label{eq:facSys} \end{equation} for all $i,j$, in which the function $a(R_{i},R_{j})$, determined by $\mu(R_{i},R_{j})$, takes integer values in the range $[0,g-1]$. The group multiplication of the central extension $\bar{G}^{\vk*}$ is defined as \begin{equation} (R_{i},\alpha)(R_{j},\beta)=(R_{i}R_{j},\;\alpha+\beta+a(R_{i},R_{j})\!\!\mod\,g)\label{eq:CEtimes} \end{equation} with the property \begin{equation} (R_{i},\alpha)=(R_{i},0)(E,\alpha)=(E,\alpha)(R_{i},0). \end{equation} Then all the irreducible projective representations we need can be obtained from the allowed ordinary IRs of the corresponding central extension. Suppose that $\Delta_{p}^{\vk}$ is the $p$-th allowed IR of $\bar{G}^{\vk*}$ with the property \begin{equation} \Delta_{p}^{\vk}((E,\alpha))=e^{i2\pi\alpha/g}I_{p}, \end{equation} and then the irreducible projective representation $\tilde{D}_{p}^{\vk}$ is determined by \begin{equation} \tilde{D}_{p}^{\vk}(R_{i})=\Delta_{p}^{\vk}((R_{i},0)). \end{equation} Apart from the projective representation method, which is applicable to any k-point, there is the Herring little group (HLG) method, which is easier but applicable only to HS k-points. The HLG of $\vk$, denoted by $\herring{G^{\vk}}$, is a quotient group defined by $\herring{G^{\vk}}=G^{\vk}/T^{\vk}$, in which $T^{\vk}$ is the subgroup of $T$ consisting of all translations $\{E|\bm{t}\}$ satisfying $e^{-i\vk\cdot\bm{t}}=1$. When $\vk$ is a HS k-point, $T^{\vk}$ has finite index in $G^{\vk}$ and hence $\herring{G^{\vk}}$ is a finite group whose order is not very large. Then the IRs of $G^{\vk}$ can be obtained directly from the IRs of $\herring{G^{\vk}}$.
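To make the multiplication rule of Eq. (\ref{eq:CEtimes}) concrete, here is a minimal Python sketch with a hypothetical two-element co-group $\{E,C\}$, $g=2$, and a hand-chosen factor exponent $a(C,C)=1$ (all toy data, not taken from any actual space group). It shows how a nontrivial factor system enlarges the group: the resulting central extension is cyclic of order four even though the co-group itself has order two.

```python
# Minimal sketch of the central-extension multiplication rule
# (R_i, a)(R_j, b) = (R_i R_j, a + b + a(R_i, R_j) mod g) for a toy
# two-element co-group {E, C} with C*C = E, g = 2, and the hand-chosen
# illustrative factor exponent a(C, C) = 1 (all other pairs 0).
g = 2
rot_times = {("E", "E"): "E", ("E", "C"): "C", ("C", "E"): "C", ("C", "C"): "E"}
a_exp = {("C", "C"): 1}          # a(R_i, R_j); unspecified pairs are 0

def ce_times(x, y):
    (Ri, al), (Rj, be) = x, y
    return (rot_times[(Ri, Rj)], (al + be + a_exp.get((Ri, Rj), 0)) % g)

# (C, 0) generates a cyclic group of order 4: the central extension is Z4
# even though the co-group itself is only Z2.
x = ("C", 0)
seen, cur = [], x
while cur not in seen:
    seen.append(cur)
    cur = ce_times(cur, x)
print(seen)   # [('C', 0), ('E', 1), ('C', 1), ('E', 0)]
```

Note how $(C,0)^{2}=(E,1)$ rather than the identity $(E,0)$; this is precisely the mechanism by which allowed IRs of the central extension reproduce projective (rather than ordinary) representations of the co-group.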
Suppose that $\herring{D_{p}^{\vk}}$ is the $p$-th IR of $\herring{G^{\vk}}$; then the LG IR $\Gamma_{p}^{\vk}$ of $G^{\vk}$ is determined by \begin{equation} \Gamma_{p}^{\vk}(\{R_{i}|\bm{v}_{i}\})=\herring{D_{p}^{\vk}(}\{R_{i}|\bm{v}_{i}\}T^{\vk}). \end{equation} When $\vk$ is not a HS k-point, $\herring{G^{\vk}}$ is generally an infinite group whose IRs are not easily obtained. In this case the HLG method loses its advantage over the projective representation method. Accordingly, the BC book uses both methods to describe the LG IRs in BC-Tabs. 5.7 and 6.13, i.e., it uses HLGs for HS k-points and central extensions of little co-groups for k-points on HS lines. Each HLG or central extension is isomorphic to a certain abstract group $G_{m}^{n}$ whose IRs are known and given in BC-Tab. 5.1. \subsection{Brillouin zone and k-points} Generally, the Wigner-Seitz unit cell in reciprocal space is used as the (first) BZ. But for triclinic and monoclinic Bravais lattices, the Wigner-Seitz BZs depend strongly on the actual values of the lattice parameters and are difficult to draw or visualize. Therefore, in practice, the BC book uses the reciprocal primitive cell, i.e. a parallelepiped centered at $\vk=0$, as the BZ for triclinic and monoclinic Bravais lattices (and so does BCS). For the other Bravais lattices, Wigner-Seitz BZs are used in the BC book. Even with the BZ defined this way, the shape of the BZ is not always unique for each Bravais lattice. Depending on the ratios of lattice constants, there is more than one type (or shape) of BZ for the base-centered orthorhombic {[}(a), (b) two types{]}, body-centered orthorhombic {[}(a), (b), (c) three types{]}, face-centered orthorhombic {[}(a), (b), (c), (d) four types{]}, body-centered tetragonal {[}(a), (b) two types, see Fig. \ref{fig:tetrBZ}{]}, and trigonal {[}(a), (b) two types{]} Bravais lattices. These five kinds of Bravais lattices are called ``multiple-BZ Bravais lattices''.
There are in total 22 different types of BZs for the 14 Bravais lattices, which are shown in BC-Figs. 3.2 to 3.15. \begin{figure} \begin{centering} \includegraphics[width=15cm]{fig-tetrBZ} \par\end{centering} \caption{Two different types of BZs for the body-centered tetragonal Bravais lattice with lattice constants $a$ and $c$: (a) for $a>c$ and (b) for $a<c$. Black capital letters are HS k-points, and purple ones are HS lines. Except for the two auxiliary points $\Lambda_{0}$ and $V_{0}$, the two BZs are generated by \lstinline!showBZDemo["TetrBody(a)"]! and \lstinline!showBZDemo["TetrBody(b)"]! respectively.\label{fig:tetrBZ}} \end{figure} HS k-points and k-points on HS lines in the basic domain are defined in BC-Tab. 3.6, and all the k-points in BC-Tab. 3.6 are termed ``BC standard k-points'' here. For multiple-BZ Bravais lattices, one k-point name may have different coordinates for different BZ types, but these coordinates are equivalent to each other (differing by a reciprocal lattice vector), with only one exception: the $F$ k-point of the trigonal lattice, with coordinates $(0\frac{1}{2}\bar{\frac{1}{2}})$ and $(\frac{1}{2}\frac{1}{2}0)$ for BZ types (a) and (b) respectively. This means that for a space group the LG IRs for a given k-point name do not depend on the BZ type, except for the $F$ k-point in space groups of the trigonal lattice. This is reflected in the two entries ``(a) $F$'' and ``(b) $F$'' in BC-Tabs. 5.7 and 6.13 for space groups of the trigonal lattice. Also, for a multiple-BZ Bravais lattice, k-points on some HS lines are named only for certain BZ types but not for others. Taking the body-centered tetragonal lattice as an example, the $V$ HS line exists only in the (a)-type BZ, while the $F$ and $U$ HS lines exist only in the (b)-type BZ, as shown in Fig. \ref{fig:tetrBZ}. In fact, both $\Lambda$ and $V$ in Fig. \ref{fig:tetrBZ}(a) correspond to the $\Lambda$ in Fig. \ref{fig:tetrBZ}(b). This can be analyzed as follows.
The $\Lambda$ HS lines in both (a) and (b) type BZs have coordinates $(uu\bar{u})$ (here we use $u$ for the $\alpha$ in BC-Tab. 3.6), but the range of $u$ is different. The $\Lambda$ in (b) type BZ is from $\Gamma$ $(000)$ to $Z$ $(\frac{1}{2}\frac{1}{2}\bar{\frac{1}{2}})$ with $u\in(0,\frac{1}{2})$, while the $\Lambda$ in (a) type BZ is from $\Gamma$ $(000)$ to $\Lambda_{0}$ $(\lambda_{0}\lambda_{0}\bar{\lambda}_{0})$ with $u\in(0,\lambda_{0}]$ and $\lambda_{0}=\frac{1}{4}+\frac{c^{2}}{4a^{2}}<\frac{1}{2}$ ($a$ and $c$ are lattice constants and $a>c$ for (a) type BZ). The $V$ in (a) type BZ with coordinates $(-\frac{1}{2}+u,\frac{1}{2}+u,\frac{1}{2}-u)$ is from $Z$ $(u=0)$ to $V_{0}$ $(u=v_{0})$ and $v_{0}=\frac{1}{4}-\frac{c^{2}}{4a^{2}}$. We can see that $\lambda_{0}+v_{0}=\frac{1}{2}$. So, if $\lambda_{0}<u<\frac{1}{2}$, the k-point $(uu\bar{u})$ lies on the extension line of $\Lambda$ outside the BZ of type (a). However, if this point $(uu\bar{u})$ is translated by $-\bm{g}_{2}$ to $(u,u-1,-u)$ and then transformed by inversion $I$ to $(-u,1-u,u)$, it just lies on the line segment $V$. This becomes clear if we do a substitution $u=\frac{1}{2}-u'$ and $(-u,1-u,u)$ becomes $(-\frac{1}{2}+u',\frac{1}{2}+u',\frac{1}{2}-u')$ with $0<u'<\frac{1}{2}-\lambda_{0}=v_{0}$. \subsection{LG IRs at any k-point\label{subsec:LGIR-at-any-k}} Note that the LG IRs given in BC-Tabs. 5.7 and 6.13 are only directly for BC standard k-points, i.e. those defined in BC-Tab. 3.6, not for every k-point. Fortunately, LG IRs at any k-point can be obtained from the LG IRs in BC-Tabs. 5.7 and 6.13 according to certain transformation relations, except the k-points on $Z'$ HS line for space group $Pa\bar{3}$ (No. 205) whose LG IRs have to be given additionally in BC-Tabs. 5.11 and 6.15. Therefore, the $Z'$ with coordinates $(\frac{1}{2}u0)$ has to be added to the BC standard k-points for space group No. 205, and complete BC LG IR tables comprise BC-Tabs. 5.7, 5.11, 6.13, and 6.15. 
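The k-point arithmetic in the $\Lambda$/$V$ analysis of the preceding subsection, i.e. the relation $\lambda_{0}+v_{0}=\frac{1}{2}$ and the mapping of $(uu\bar{u})$ with $\lambda_{0}<u<\frac{1}{2}$ onto the $V$ segment, can be checked numerically. The following Python sketch uses the illustrative values $a=3$, $c=2$ (any $a>c$ would do) and exact rational arithmetic.

```python
from fractions import Fraction as F

# Check of the Lambda/V analysis for the (a)-type BZ of the body-centered
# tetragonal lattice (a > c); a = 3, c = 2 are merely illustrative values.
a, c = F(3), F(2)
lam0 = F(1, 4) + c**2 / (4 * a**2)    # endpoint of Lambda
v0   = F(1, 4) - c**2 / (4 * a**2)    # endpoint of V (measured from Z)
assert lam0 + v0 == F(1, 2)

# A point (u, u, -u) with lam0 < u < 1/2 maps onto the V segment:
# translate by -g2, i.e. subtract 1 from the second coordinate, then
# apply the inversion I.
u = F(2, 5)                            # any value in (lam0, 1/2)
k = (u, u - 1, -u)                     # after translation by -g2
k = tuple(-x for x in k)               # after inversion I
up = F(1, 2) - u                       # substitution u' = 1/2 - u
assert k == (-F(1, 2) + up, F(1, 2) + up, F(1, 2) - up)
assert 0 < up < v0                     # the image lies inside the V segment
```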
To describe LG IRs at any k-point, the problem of naming k-points has to be solved first. Any k-point $\vk$ can be classified into one of the following five types.
\begin{itemize}
\item Type I, a k-point which is identical or equivalent to the BC standard k-point $\vk_{{\rm BC}}$, i.e. $\vk\equiv\vk_{{\rm BC}}$ ($\equiv$ means the equivalence of k-points).
\item Type II, a k-point not equivalent to $\vk_{{\rm BC}}$ but equivalent to one arm of $^{*}\vk_{{\rm BC}}$ (the star of $\vk_{{\rm BC}}$), which means that there is an element $\{S|\bm{w}\}\in G$ such that $\vk\equiv S\vk_{{\rm BC}}$.
\item Type III, a k-point not equivalent to any arm of $^{*}\vk_{{\rm BC}}$, but for which there is an element $\{S|\bm{w}\}\notin G$ satisfying $\{S|\bm{w}\}^{-1}G\{S|\bm{w}\}=G$ such that $\vk\equiv S\vk_{{\rm BC}}$.
\item Type IV, a general k-point whose little co-group contains only the identity element and which does not belong to types I\textendash III.
\item Type V, a k-point not belonging to types I\textendash IV. In fact, such a $\vk$ is either on a HS plane with little co-group $\{E,\sigma\}$ ($\sigma$ is a mirror reflection) for space groups other than $P\bar{1}$ (No. 2), or a HS k-point with little co-group $\{E,I\}$ for the space group $P\bar{1}$.
\end{itemize}
The names of type I k-points are directly defined in the BC book. For k-points of types II and III, we usually borrow the name of $\vk_{{\rm BC}}$ to name $\vk$ when $\vk\equiv S\vk_{{\rm BC}}$. But it should be kept in mind that this is only an expedient, and when necessary a name different from that of $\vk_{{\rm BC}}$ has to be used for $\vk$ to avoid confusion. A type IV k-point is simply named ``GP''. Note that not all general k-points are named ``GP'', because some are BC standard k-points which have already been named, e.g. all the k-points with the abstract group $G_{1}^{1}$ in BC-Tab. 5.7. Type V k-points comprise all k-points on HS planes and some HS k-points of the space group $P\bar{1}$, whose names are not defined in the BC book.
Accordingly, for simplicity we just use ``UN'' as the name of these unnamed k-points of type V. When necessary, customized names can be used to replace ``UN''. The LG IRs of a type I k-point $\vk_{{\rm BC}}$ are given in the BC LG IR tables. For a k-point $\vk$ of type II or III, its little group $G^{\vk}$ is isomorphic to $G^{\vk_{{\rm BC}}}$ because they are conjugate to each other, i.e. $\{S|\bm{w}\}G^{\vk_{{\rm BC}}}\{S|\bm{w}\}^{-1}=G^{\vk}$. Therefore, the LG IRs of $G^{\vk}$ can be obtained from those of $G^{\vk_{{\rm BC}}}$ by \begin{equation} \Gamma_{p}^{\vk}(\{R|\bm{v}\})=\Gamma_{p}^{\vk_{{\rm BC}}}(\{S|\bm{w}\}^{-1}\{R|\bm{v}\}\{S|\bm{w}\})\ \ \ \text{for\ \ \ensuremath{\forall\{R|\bm{v}\}\in G^{\vk}}.}\label{eq:GMkGMkBC} \end{equation} The LG IRs for k-points of types IV and V are not given in the BC book and have to be calculated by ourselves. However, the calculations are easy because of the low symmetry of these k-points. The central extension of a GP k-point is trivially $G_{1}^{1}$ ($G_{2}^{1}$ for double-valued IRs), and the central extension of a UN k-point is either $G_{2}^{1}$ or $G_{4}^{1}$ ($G_{4}^{1}$, $G_{4}^{2}$, or $G_{8}^{2}$ for double-valued IRs). \section{Files and installation} The Mathematica package \textsf{SpaceGroupIrep} mainly includes four files: \textsf{SpaceGroupIrep.wl}, \textsf{AbstractGroupData.wl}, \textsf{LittleGroupIrepData.wl}, and \textsf{allBCSkLGdat.mx}. \textsf{SpaceGroupIrep.wl} is the main file containing most functions and data, and the other three are all data files. \textsf{AbstractGroupData.wl} contains the abstract group data in BC-Tab. 5.1, which are stored in \lstinline!AGClasses!, \lstinline!AGCharTab!, and \lstinline!AGIrepGen!. \textsf{LittleGroupIrepData.wl} contains the data of LG IRs in BC-Tabs. 5.7, 5.11, 6.13, and 6.15, which are stored in \lstinline!LGIrep! and \lstinline!DLGIrep! for single-valued and double-valued representations respectively.
\textsf{allBCSkLGdat.mx} contains the BCS data of LG IRs collected from the output of \textsf{irvsp}. To install the package \textsf{SpaceGroupIrep}, just create a directory \textsf{SpaceGroupIrep} containing the four files and place it under any of the following paths:
\begin{itemize}
\item \lstinline!$InstallationDirectory/AddOns/Packages/!
\item \lstinline!$InstallationDirectory/AddOns/Applications/!
\item \lstinline!$BaseDirectory/Applications/!
\item \lstinline!$UserBaseDirectory/Applications/!
\end{itemize}
where \lstinline!$InstallationDirectory! is the installation directory of Mathematica (version $\ge$ 11.2), and \lstinline!$BaseDirectory! and \lstinline!$UserBaseDirectory! are the directories containing respectively systemwide and user-specific files loaded by Mathematica. The concrete values of \lstinline!$InstallationDirectory!, \lstinline!$BaseDirectory!, and \lstinline!$UserBaseDirectory! can be obtained by evaluating them in Mathematica because they are all built-in symbols. Then one can use the package after running \lstinline[literate={`}{\textasciigrave}{1}]!<<"SpaceGroupIrep`"!. \section{Group elements and multiplication} Following the notations in the BC book, we use $\bm{t}_{1},\bm{t}_{2},\bm{t}_{3}$ and $\bm{g}_{1},\bm{g}_{2},\bm{g}_{3}$ to represent the basic vectors of the primitive cell and the reciprocal primitive cell respectively, which satisfy the relations $\bm{t}_{i}\cdot\bm{g}_{j}=2\pi\delta_{ij}$. $\bm{t}_{1},\bm{t}_{2},\bm{t}_{3}$ and $\bm{g}_{1},\bm{g}_{2},\bm{g}_{3}$ are defined in BC-Tab. 3.1 and BC-Tab. 3.3 respectively for each of the 14 Bravais lattices. Then a real space vector $\bm{v}=v_{1}\bm{t}_{1}+v_{2}\bm{t}_{2}+v_{3}\bm{t}_{3}$ can be described by the column matrix of its coefficients (or coordinates) $\mathsf{v}=(v_{1},v_{2},v_{3})^{T}$ with respect to $\bm{t}_{1},\bm{t}_{2},\bm{t}_{3}$; similarly a wave vector $\vk=k_{1}\bm{g}_{1}+k_{2}\bm{g}_{2}+k_{3}\bm{g}_{3}$ can be described by $\mathsf{k}=(k_{1},k_{2},k_{3})^{T}$.
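The duality $\bm{t}_{i}\cdot\bm{g}_{j}=2\pi\delta_{ij}$ and the coefficient-column description can be checked with a few lines of Python. This is a sketch for verifying the relations, not part of the package; the basic-vector matrix below is that of the body-centered tetragonal lattice with illustrative lattice constants $a=3$, $c=2$.

```python
import numpy as np

# Verify t_i . g_j = 2*pi*delta_ij: with the basic vectors t_i as the rows
# of T, the reciprocal basic vectors g_j are the rows of G = 2*pi*(T^-1)^T.
# Body-centered tetragonal lattice; a = 3, c = 2 are illustrative values.
a, c = 3.0, 2.0
T = np.array([[-a/2,  a/2,  c/2],
              [ a/2, -a/2,  c/2],
              [ a/2,  a/2, -c/2]])
G = 2 * np.pi * np.linalg.inv(T).T
assert np.allclose(T @ G.T, 2 * np.pi * np.eye(3))

# A vector v = v1*t1 + v2*t2 + v3*t3 is represented by the coefficient
# column (v1, v2, v3)^T; recovering the coefficients from the Cartesian
# components amounts to solving T^T v_coef = v_cart.
v_cart = 0.25 * T[0] + 0.5 * T[1]      # v = t1/4 + t2/2
v_coef = np.linalg.solve(T.T, v_cart)
assert np.allclose(v_coef, [0.25, 0.5, 0.0])
```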
Let $R$ be a rotation operation which rotates $\bm{v}$ to $\bm{v}'$ and $\vk$ to $\vk'$, i.e. $\bm{v}'=R\bm{v}$ and $\vk'=R\vk$. In matrix form these relations read $\mathsf{v'=R_{r}v}$ and $\mathsf{k'=R_{k}k}$, in which $\mathsf{R_{r}}$ and $\mathsf{R_{k}}$ are the rotation matrices of $R$ in real space and in reciprocal space respectively. Note that both the coefficients of vectors and the rotation matrices depend on the basic vectors and hence on the Bravais lattice; therefore for the same $R$ its $\mathsf{R_{r}}$ ($\mathsf{R_{k}}$) is different for different Bravais lattices. $\mathsf{R_{r}}$ ($\mathsf{R_{k}}$) is defined according to BC-Tab. 3.2 (BC-Tab. 3.4), and for the same rotation $R$ there is the relation \begin{equation} \mathsf{R_{k}}=(\mathsf{R}_{r}^{-1})^{T}. \end{equation} In \textsf{SpaceGroupIrep}, we use the functions \lstinline!getRotMat! and \lstinline!getRotMatOfK! to get the rotation matrices $\mathsf{R_{r}}$ and $\mathsf{R_{k}}$ respectively according to their rotation names (for available rotation names refer to BC-Tab. 3.2; each prime is replaced by a ``p'', e.g. \lstinline!"C21pp"! is used in the code for $C_{21}''$), and conversely use \lstinline!getRotName! to obtain the rotation name. All these functions take as their first argument a string representing the Bravais lattice, which is listed in Tab. \ref{tab:brav}. For example,
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}]
|In[1]:=| getRotMat["OrthPrim","C2x"]
getRotMat["OrthFace","C2x"]
getRotMatOfK["OrthFace","C2x"]
getRotName["OrthFace",{{0,0,1},{-1,-1,-1},{1,0,0}}]
|Out[1]=| {{-1,0,0},{0,1,0},{0,0,-1}}
|Out[2]=| {{0,0,1},{-1,-1,-1},{1,0,0}}
|Out[3]=| {{0,-1,1},{0,-1,0},{1,-1,0}}
|Out[4]=| C2x
\end{lstlisting}
\begin{table} \caption{String codes representing Bravais lattices and available values for \lstinline!BZtype! in package \textsf{SpaceGroupIrep}.
The full string code of a type of BZ is just its Bravais lattice string code for single-BZ Bravais lattices, and is its Bravais lattice string code followed by \lstinline!BZtype! in a pair of parentheses such as \lstinline!"OrthBase(a)"! for multiple-BZ Bravais lattices. \label{tab:brav}} \begin{centering} \par\end{centering} \begin{centering} \begin{tabular}{llll}
\hline
\multicolumn{2}{l}{~~~~~~~~Bravais lattice} & string code & \lstinline!BZtype!\tabularnewline
\hline
Triclinic & primitive & \lstinline!"TricPrim"! & \lstinline!""!\tabularnewline
Monoclinic & primitive & \lstinline!"MonoPrim"! & \lstinline!""!\tabularnewline
 & base-centered & \lstinline!"MonoBase"! & \lstinline!""!\tabularnewline
Orthorhombic & primitive & \lstinline!"OrthPrim"! & \lstinline!""!\tabularnewline
 & base-centered & \lstinline!"OrthBase"! & \lstinline!"a"!, \lstinline!"b"!\tabularnewline
 & body-centered & \lstinline!"OrthBody"! & \lstinline!"a"!, \lstinline!"b"!, \lstinline!"c"!\tabularnewline
 & face-centered & \lstinline!"OrthFace"! & \lstinline!"a"!, \lstinline!"b"!, \lstinline!"c"!, \lstinline!"d"!\tabularnewline
Tetragonal & primitive & \lstinline!"TetrPrim"! & \lstinline!""!\tabularnewline
 & body-centered & \lstinline!"TetrBody"! & \lstinline!"a"!, \lstinline!"b"!\tabularnewline
Trigonal & primitive & \lstinline!"TrigPrim"! & \lstinline!"a"!, \lstinline!"b"!\tabularnewline
Hexagonal & primitive & \lstinline!"HexaPrim"! & \lstinline!""!\tabularnewline
Cubic & primitive & \lstinline!"CubiPrim"! & \lstinline!""!\tabularnewline
 & face-centered & \lstinline!"CubiFace"! & \lstinline!""!\tabularnewline
 & body-centered & \lstinline!"CubiBody"! & \lstinline!""!\tabularnewline
\hline
\end{tabular} \par\end{centering} \end{table} For double space groups, we use \lstinline!{srot,o3det}! to describe a spin rotation operation, where \lstinline!srot! is a SU(2) spin rotation matrix defined in BC-Tab. 6.1 and \lstinline!o3det!
is the determinant (either 1 or $-1$) of the corresponding O(3) rotation matrix. Note that the SU(2) matrices in the BC book use \{spin down, spin up\} as the basis, which is the reverse of the usually used \{spin up, spin down\} order. We use \lstinline!getSpinRotOp[rotname]! to get the spin rotation operation according to the rotation name \lstinline!rotname!. For rotations with an overbar such as $\bar{C}_{2z}$, their name strings are all prefixed with \lstinline!bar! in the code, e.g. \lstinline!"barC2z"! for $\bar{C}_{2z}$. All available \lstinline!rotname!'s can be obtained by \lstinline!Keys[getSpinRotOp]! because \lstinline!getSpinRotOp! is in fact an association. Conversely, \lstinline!getSpinRotName[brav,{srot,o3det}]! is used to obtain the rotation name, in which \lstinline!brav! is the string code for the Bravais lattice. In fact, here \lstinline!brav! is only used to distinguish cubic-compatible Bravais lattices (triclinic, monoclinic, orthorhombic, tetragonal, and cubic) from hexagonal-compatible Bravais lattices (trigonal and hexagonal), because in each of these two cases one rotation name is associated with only one SU(2) matrix. For example,
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}]
|In[1]:=| srop=getSpinRotOp["barC32+"]
getSpinRotName["CubiBody",srop]
getSpinRotName["CubiPrim",srop]
|Out[1]=| {{{-(1/2)-I/2,1/2-I/2},{-(1/2)-I/2,-(1/2)+I/2}},1}
|Out[2]=| barC32+
|Out[3]=| barC32+
\end{lstlisting}
The space group element $\{R|\bm{v}\}$ (or $\{R|v_{1}v_{2}v_{3}\}$) is expressed as \lstinline!{R,{v1,v2,v3}}! in the \textsf{SpaceGroupIrep} code, where \lstinline!R! is the name string of the rotation $R$; e.g. $\{C_{2z}|\frac{1}{2}\frac{1}{2}0\}$ is expressed as \lstinline!{"C2z",{1/2,1/2,0}}!. And the element $(R,\alpha)$ of the central extension is described by \lstinline!{R,alpha}! in the code. We use \lstinline!getLGElem[sgno,k]!, \lstinline!getHLGElem[sgno,kname]!, and \lstinline!getCentExt[sgno,kname]!
to get the elements of the little group, HLG, and central extension respectively for the space group of number \lstinline!sgno!, in which \lstinline!k! can be either the k-point name or the k-point coordinates but \lstinline!kname! can only be a k-point name. These three functions also support double space groups if the option \lstinline!"DSG"->True! is used. It should be pointed out that the list of elements returned by \lstinline!getLGElem! is actually the set of coset representatives of the cosets of $T$ in the little group of \lstinline!k!. In the same sense, the elements of the space group can be obtained by \lstinline!getLGElem! if \lstinline[mathescape=true]!k="$\Gamma$"!. We also emphasize that although the HLG is a quotient group whose elements are cosets of $T^{\vk}$ in $G^{\vk}$, we usually omit $T^{\vk}$ and only care about the coset representatives, which for simplicity are also called the elements of the HLG here. Accordingly, \lstinline!getHLGElem! returns the list of coset representatives rather than the real elements (cosets) of the HLG. Furthermore, for HLGs having the form $G_{m}^{n}\otimes T_{q}$ in BC-Tabs. 5.7 and 6.13, only the abstract group $G_{m}^{n}$ is needed to determine the LG IRs, and hence the list of elements returned by \lstinline!getHLGElem! is actually $G_{m}^{n}$ determined by the generators given in BC-Tabs. 5.7 and 6.13. The list returned by \lstinline!getHLGElem! or \lstinline!getCentExt! has the same element sequence as the corresponding abstract group in BC-Tab. 5.1.
Examples are as follows:
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true]
|In[1]:=| getLGElem[20,"$\Gamma$"]
getLGElem[20,"R"]
getHLGElem[20,"R"]
getCentExt[20,"B"]
getCentExt[20,"B","DSG"->True]
|Out[1]=| {{E,{0,0,0}}, {C2x,{0,0,0}}, {C2y,{0,0,1/2}}, {C2z,{0,0,1/2}}}
|Out[2]=| {{E,{0,0,0}}, {C2z,{0,0,1/2}}}
|Out[3]=| {{E,{0,0,0}}, {C2z,{0,0,1/2}}, {E,{0,0,1}}, {C2z,{0,0,3/2}}}
|Out[4]=| {{E,0}, {C2y,0}, {E,1}, {C2y,1}}
|Out[5]=| {{E,0}, {C2y,0}, {barE,1}, {barC2y,1}, {barE,0}, {barC2y,0}, {E,1}, {C2y,1}}
\end{lstlisting}
The multiplication of two SG elements $\{R_{1}|\bm{v}\}\{R_{2}|\bm{w}\}=\{R_{1}R_{2}|R_{1}\bm{w}+\bm{v}\}$, the inverse of a SG element $\{R|\bm{v}\}^{-1}=\{R^{-1}|-R^{-1}\bm{v}\}$, and the $n$-th power of a SG element $\{R|\bm{v}\}^{n}$ can be calculated by \lstinline!SeitzTimes!, \lstinline!invSeitz!, and \lstinline!powerSeitz! respectively in the code. All three functions have double space group versions with a \lstinline!DSG! prefix. The functions operating on SG elements are listed below.
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}]
SeitzTimes[brav][{R1,{v1,v2,v3}},{R2,{w1,w2,w3}}]
invSeitz[brav][{R,{v1,v2,v3}}]
powerSeitz[brav][{R,{v1,v2,v3}},n]
DSGSeitzTimes[brav][{R1,{v1,v2,v3}},{R2,{w1,w2,w3}}]
DSGinvSeitz[brav][{R,{v1,v2,v3}}]
DSGpowerSeitz[brav][{R,{v1,v2,v3}},n]
\end{lstlisting}
In addition, the multiplication of the central extension, i.e. Eq. (\ref{eq:CEtimes}), is realized by the function \lstinline!CentExtTimes!, and its power is realized by \lstinline!CentExtPower!. Double space group versions are also available. They are listed below.
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}]
CentExtTimes[brav,adict][{R1,alpha},{R2,beta}]
CentExtPower[brav,adict][{R,alpha},n]
DSGCentExtTimes[brav,adict][{R1,alpha},{R2,beta}]
DSGCentExtPower[brav,adict][{R,alpha},n]
\end{lstlisting}
In the above functions, \lstinline!adict!
contains the information of $a(R_{i},R_{j})$ defined in Eq. (\ref{eq:facSys}), and it can be obtained by calling \lstinline!aCentExt[sgno,kname]! or \lstinline!aCentExt[brav,Gk,k]!, in which \lstinline!Gk! is the list of LG elements and \lstinline!k! is the k-point coordinates. For double space groups, the option \lstinline!"DSG"->True! should be used in \lstinline!aCentExt!. \section{Abstract group} There are in total 93 abstract groups in BC-Tab. 5.1 which can be used to describe LG IRs, and their indexes $\{m,n\}$ are listed in \lstinline!allAGindex!. The information of $G_{m}^{n}$ in BC-Tab. 5.1 is stored in \lstinline!AGClasses[m,n]!, \lstinline!AGCharTab[m,n]!, and \lstinline!AGIrepGen[m,n]!, which give the classes, the character table, and the generators of each IR respectively. The elements in classes are described by their power exponents of the generators $P,Q,R,\cdots$, e.g. the element $P^{2}QR^{3}$ is described by \lstinline!{2,1,3}!. For ease of viewing, the classes and character table are shown in table form by \lstinline!showAGCharTab[m,n]!, and the generators of each IR are shown in table form by \lstinline!showAGIrepGen[m,n]!. Examples are shown in Fig. \ref{fig:showAGCharTab}. \begin{figure} \begin{centering} \includegraphics[width=9cm]{fig-showAGCharTab} \par\end{centering} \caption{Examples of \lstinline!showAGCharTab[m,n]! and \lstinline!showAGIrepGen[m,n]! for the abstract group $G_{8}^{5}$.\label{fig:showAGCharTab}} \end{figure} \section{Tables for LG IRs and SG IRs} \subsection{Identify k-point} When the coordinates of a k-point are given, we have to know its name and its relation to the BC standard k-point before we can determine its LG IRs. In other words, we have to classify the k-point into one of the five types defined in subsection \ref{subsec:LGIR-at-any-k} and find the operation $\{S|\bm{w}\}$ if the k-point is of type II or III. There are two functions for doing this in \textsf{SpaceGroupIrep}, i.e.
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}]
identifyBCHSKpt[fullBZtype, kOrKlist]
identifyBCHSKptBySG[sgno, BZtypeOrBasVec, kOrKlist]
\end{lstlisting}
in which \lstinline!fullBZtype! is the string code for one of the 22 types of BZs (see Tab. \ref{tab:brav}), \lstinline!kOrKlist! is the numerical coordinates of a k-point or of a list of k-points, and \lstinline!BZtypeOrBasVec! is either one of the \lstinline!BZtype!'s in Tab. \ref{tab:brav} or the numerical basic vectors of the space group. In fact, \lstinline!identifyBCHSKpt! is a preprocessor of \lstinline!identifyBCHSKptBySG!. The former identifies a k-point only according to BC-Tab. 3.6 without the SG information, while the latter uses the SG information to further determine $\{S|\bm{w}\}$ based on the results of the former. The result returned by \lstinline!identifyBCHSKpt! is a list \begin{equation} \{\vk,\text{\,kname},\,\text{\text{line\_info},\,}P(\vk),\,\vk_{{\rm BC}},\,S,\,S\vk_{{\rm BC}},\,\bm{g},\ {\rm u}\rightarrow v_{{\rm u}}\}\label{eq:kinfo0} \end{equation} where $\vk$ is the k-point to be identified, kname is the identified name, line\_info indicates the connection of the HS line (a null string if $\vk$ is a HS point), $P(\vk)$ is the symmetry point group of $\vk$ in the basic domain of the Bravais lattice, $\vk_{{\rm BC}}$ is the BC standard k-point in the basic domain, $S$ is an element of $P(\vk)$ such that $\vk\equiv S\vk_{{\rm BC}}$, $\bm{g}=\vk-S\vk_{{\rm BC}}$ is the reciprocal lattice vector connecting $S\vk_{{\rm BC}}$ and $\vk$, and ${\rm u}\rightarrow v_{{\rm u}}$ is the rule giving the actual value $v_{{\rm u}}$ of u. If kname is GP or UN, only the first four items exist in Eq. (\ref{eq:kinfo0}), and the last item ${\rm u}\rightarrow v_{{\rm u}}$ exists only when $\vk$ is on a HS line. It is noteworthy that for a multiple-BZ Bravais lattice, a k-point can be identified as two points with different names by \lstinline!identifyBCHSKpt!, i.e. two entries in the form of Eq.
(\ref{eq:kinfo0}) with different knames. This case occurs when the k-point is on certain HS lines. For example, the k-point with coordinates $(-0.3,-0.3,0.3)$ is identified as $\Lambda$ in the BZ of type \lstinline!"TetrBody(b)"!, while it is identified as either $\Lambda$ or $V$ in the BZ of type \lstinline!"TetrBody(a)"!. The code is as follows.
\begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true]
|In[1]:=| identifyBCHSKpt["TetrBody(b)",{-0.3,-0.3,0.3}]
identifyBCHSKpt["TetrBody(a)",{-0.3,-0.3,0.3}]
|Out[1]=| {{{-0.3,-0.3,0.3},$\Lambda$,$\Gamma$Z,C4v,{u,u,-u},C2x,{-u,-u,u},{0,0,0},u->0.3}}
|Out[2]=| {{{-0.3,-0.3,0.3},$\Lambda$,$\Gamma\Lambda$,C4v,{u,u,-u},C2x,{-u,-u,u},{0,0,0},u->0.3},
{{-0.3,-0.3,0.3},V,ZV,C4v,{-1/2+u,1/2+u,1/2-u},E,
{-1/2+u,1/2+u,1/2-u},{0,-1,0},u->0.2}}
\end{lstlisting}
\begin{table}[t] \caption{The table of $u_{{\rm max}}$ for HS lines, i.e. the maximum value of $u$ that keeps the BC standard k-point inside or on the boundary of the BZ. Except for the HS lines in this table, the $u_{{\rm max}}$ of all other HS lines in BC-Tab. 3.6 is $\frac{1}{2}$. $a,b,c$ are lattice constants, the same as those in BC-Tab. 3.1. \label{tab:umax}} {\renewcommand{\arraystretch}{1.4} \begin{centering} \begin{tabular}{ccccccc} \hline Type of BZ & \multicolumn{6}{c}{$u_{{\rm max}}$ for HS lines}\tabularnewline \hline \lstinline!"OrthBase(a)"! & $\Delta\text{: }\frac{1}{4}+\frac{b^{2}}{4a^{2}}$ & $F\text{: }\frac{1}{4}-\frac{b^{2}}{4a^{2}}$ & $B\text{: }\frac{1}{4}+\frac{b^{2}}{4a^{2}}$ & $G\text{: }\frac{1}{4}-\frac{b^{2}}{4a^{2}}$ & & \tabularnewline \lstinline!"OrthBase(b)"! & $A\text{: }\frac{1}{4}+\frac{a^{2}}{4b^{2}}$ & $E\text{: }\frac{1}{4}-\frac{a^{2}}{4b^{2}}$ & $\Sigma\text{: }\frac{1}{4}+\frac{a^{2}}{4b^{2}}$ & $C\text{: }\frac{1}{4}-\frac{a^{2}}{4b^{2}}$ & & \tabularnewline \lstinline!"OrthBody(a)"!
& $\Lambda\text{: }\frac{1}{4}+\frac{c^{2}}{4a^{2}}$ & $G\text{: }\frac{1}{4}-\frac{c^{2}}{4a^{2}}$ & $\Delta\text{: }\frac{1}{4}+\frac{b^{2}}{4a^{2}}$ & $U\text{: }\frac{1}{4}-\frac{b^{2}}{4a^{2}}$ & & \tabularnewline \lstinline!"OrthBody(b)"! & $\Lambda\text{: }\frac{1}{4}+\frac{c^{2}}{4b^{2}}$ & $G\text{: }\frac{1}{4}-\frac{c^{2}}{4b^{2}}$ & $\Sigma\text{: }\frac{1}{4}+\frac{a^{2}}{4b^{2}}$ & $F\text{: }\frac{1}{4}-\frac{a^{2}}{4b^{2}}$ & & \tabularnewline \lstinline!"OrthBody(c)"! & $\Delta\text{: }\frac{1}{4}+\frac{b^{2}}{4c^{2}}$ & $U\text{: }\frac{1}{4}-\frac{b^{2}}{4c^{2}}$ & $\Sigma\text{: }\frac{1}{4}+\frac{a^{2}}{4c^{2}}$ & $F\text{: }\frac{1}{4}-\frac{a^{2}}{4c^{2}}$ & & \tabularnewline \lstinline!"OrthFace(a)"! & $G\text{: }\frac{1}{4}+C_{ab}^{-}$ & $H\text{: }\frac{1}{4}-C_{ab}^{-}$ & $C\text{: }\frac{1}{4}+A_{bc}^{-}$ & $A\text{: }\frac{1}{4}-A_{bc}^{-}$ & $D\text{: }\frac{1}{4}+B_{ac}^{-}$ & $B\text{: }\frac{1}{4}-B_{ac}^{-}$\tabularnewline \lstinline!"OrthFace(b)"! & $\text{\ensuremath{\Lambda}}\text{: }\frac{1}{4}+C_{ab}^{+}$ & $Q\text{: }\frac{1}{4}-C_{ab}^{+}$ & $G\text{: }\frac{1}{4}+C_{ab}^{-}$ & $H\text{: }\frac{1}{4}-C_{ab}^{-}$ & \multicolumn{2}{c}{$(C_{ab}^{\pm}=\frac{c^{2}(a^{2}\pm b^{2})}{4a^{2}b^{2}})$}\tabularnewline \lstinline!"OrthFace(c)"! & $\Delta\text{: }\frac{1}{4}+B_{ac}^{+}$ & $R\text{: }\frac{1}{4}-B_{ac}^{+}$ & $D\text{: }\frac{1}{4}+B_{ac}^{-}$ & $B\text{: }\frac{1}{4}-B_{ac}^{-}$ & \multicolumn{2}{c}{$(B_{ac}^{\pm}=\frac{b^{2}(a^{2}\pm c^{2})}{4a^{2}c^{2}})$}\tabularnewline \lstinline!"OrthFace(d)"! & $\Sigma\text{: }\frac{1}{4}+A_{bc}^{+}$ & $U\text{: }\frac{1}{4}-A_{bc}^{+}$ & $C\text{: }\frac{1}{4}+A_{bc}^{-}$ & $A\text{: }\frac{1}{4}-A_{bc}^{-}$ & \multicolumn{2}{c}{$(A_{bc}^{\pm}=\frac{a^{2}(b^{2}\pm c^{2})}{4b^{2}c^{2}})$}\tabularnewline \lstinline!"TetrBody(a)"! & $\Lambda\text{: }\frac{1}{4}+\frac{c^{2}}{4a^{2}}$ & $V\text{: }\frac{1}{4}-\frac{c^{2}}{4a^{2}}$ & & & & \tabularnewline \lstinline!"TetrBody(b)"! 
& $\Sigma\text{: }\frac{1}{4}+\frac{a^{2}}{4c^{2}}$ & $F\text{: }\frac{1}{4}-\frac{a^{2}}{4c^{2}}$ & $U\text{: }\frac{1}{2}-\frac{a^{2}}{2c^{2}}$ & $Y\text{: }\frac{a^{2}}{2c^{2}}$ & & \tabularnewline \lstinline!"TrigPrim(a)"! & $\Lambda\text{: }\frac{1}{6}+\frac{2c^{2}}{3a^{2}}$ & $P\text{: }\frac{1}{3}-\frac{2c^{2}}{3a^{2}}$ & & & & \tabularnewline \lstinline!"TrigPrim(b)"! & $B\text{: }\frac{1}{3}-\frac{a^{2}}{6c^{2}}$ & $Y\text{: }\frac{1}{6}+\frac{a^{2}}{6c^{2}}$ & $\Sigma\text{: }\frac{1}{3}+\frac{a^{2}}{12c^{2}}$ & $Q\text{: }\frac{1}{6}-\frac{a^{2}}{12c^{2}}$ & & \tabularnewline \lstinline!"HexaPrim"! & $T\text{: }\frac{1}{3}$ & $T'\text{: }\frac{1}{6}$ & $S\text{: }\frac{1}{3}$ & $S'\text{: }\frac{1}{6}$ & & \tabularnewline \lstinline!"CubiBody"! & $\Lambda\text{: }\frac{1}{4}$ & $F:\frac{1}{4}$ & & & & \tabularnewline \lstinline!"CubiFace"! & $\Sigma\text{: }\frac{3}{8}$ & $S:\frac{1}{8}$ & & & & \tabularnewline \hline \end{tabular} \par\end{centering} } \end{table} The information about a k-point given by \lstinline!identifyBCHSKpt! is preliminary. In the final result, the $\{S|\bm{w}\}$ defined in subsection \ref{subsec:LGIR-at-any-k} is determined, and a unique kname is obtained using the actual values of the lattice constants or basic vectors. This is done by \lstinline!identifyBCHSKptBySG!, which returns the complete information of a k-point as a list of the form \begin{equation} \{\vk,\,\text{kname},\,\text{line\_info},\,\bar{G}^{\vk},\,\vk_{{\rm BC}},\,\{S|\bm{w}\},\,S\vk_{{\rm BC}},\,\bm{g},\,u\rightarrow v_{{\rm u}},\,u_{{\rm max}},\,\text{if\/inG}\}\label{eq:kinfo} \end{equation} in which $\bar{G}^{\vk}$ is the little co-group, $u_{{\rm max}}$ is the maximum value of $u$ that keeps $\vk_{{\rm BC}}$ inside or on the boundary of the BZ, and if\/inG is a string ``in G'' or ``not in G'' to indicate whether $\{S|\bm{w}\}\in G$ or not.
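The selection of a unique kname from Tab. \ref{tab:umax} can be illustrated with a short Python sketch (the helper names are ours; only the \lstinline!"TetrBody(a)"! entries of Tab. \ref{tab:umax} are encoded): the kname whose parameter satisfies $v_{{\rm u}}<u_{{\rm max}}$ is kept.

```python
# Sketch: select the kname for a k-point on a HS line of the "TetrBody(a)" BZ.
# The u_max formulas are the "TetrBody(a)" entries of the u_max table; the v_u
# values (0.3 for Lambda, 0.2 for V) follow the identifyBCHSKpt output above.

def umax_tetr_body_a(a, c):
    """u_max for the HS lines Lambda and V of the "TetrBody(a)" BZ."""
    return {"Lambda": 1/4 + c**2 / (4 * a**2),
            "V":      1/4 - c**2 / (4 * a**2)}

def select_kname(a, c, vu):
    """Keep only the knames whose parameter v_u satisfies v_u < u_max."""
    umax = umax_tetr_body_a(a, c)
    return [name for name, u in vu.items() if u < umax[name]]

vu = {"Lambda": 0.3, "V": 0.2}   # candidate (kname, v_u) pairs for (-0.3,-0.3,0.3)
print(select_kname(3, 2, vu))    # a=3, c=2: Lambda is selected
print(select_kname(3, 1, vu))    # a=3, c=1: V is selected
```

This is only a schematic of the selection rule; the actual package also determines $\{S|\bm{w}\}$ and the other entries of Eq. (\ref{eq:kinfo}).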
In the above example, in order to determine whether the k-point $(-0.3,-0.3,0.3)$ is $\Lambda$ or $V$, the $u_{{\rm max}}$ values of $\Lambda$ and $V$ have to be known, and then the kname satisfying $v_{{\rm u}}<u_{{\rm max}}$ is selected. We can see that $u_{{\rm max}}$ depends on the actual values of lattice constants in most cases listed in Tab. \ref{tab:umax}. Therefore, the basic vectors are needed to determine $\Lambda$ or $V$ precisely. Take the space group $I4_{1}md$ (No. 109) for example: it has a body-centered tetragonal lattice and a type-(a) BZ when $a>c$. If $a=3$, $c=2$, then $u_{\max}^{\Lambda}=0.361111$ and $u_{\max}^{V}=0.138889$, so that $0.3<u_{{\rm max}}^{\Lambda}$ and $0.2>u_{{\rm max}}^{V}$, and $\Lambda$ is selected. If $a=3$, $c=1$, then $u_{\max}^{\Lambda}=0.277778$ and $u_{\max}^{V}=0.222222$, so that $0.3>u_{{\rm max}}^{\Lambda}$ and $0.2<u_{{\rm max}}^{V}$, and $V$ is selected. The code is as follows. \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] |In[1]:=| bv=BasicVectors["TetrBody"] identifyBCHSKptBySG[109,bv/.{a->3,c->2},{-0.3,-0.3,0.3}] identifyBCHSKptBySG[109,bv/.{a->3,c->1},{-0.3,-0.3,0.3}] |Out[1]=| {{-a/2,a/2,c/2},{a/2,-a/2,c/2},{a/2,a/2,-c/2}} |Out[2]=| {{-0.3,-0.3,0.3},$\Lambda$,$\Gamma\Lambda$,C4v,{u,u,-u},{C2x,{3/4,1/4,1/2}}, {-u,-u,u},{0,0,0},u->0.3,0.361111,not in G} |Out[3]=| {{-0.3,-0.3,0.3},V,ZV,C4v,{-1/2+u,1/2+u,1/2-u},{E,{0,0,0}}, {-1/2+u,1/2+u,1/2-u},{0,-1,0},u->0.2,0.222222,in G} \end{lstlisting} When the basic vectors are unknown, we can use the \lstinline!BZtype!, i.e. \lstinline!"a"!, \lstinline!"b"!, \lstinline!"c"!, \lstinline!"d"!, \lstinline!""!, as the second parameter of \lstinline!identifyBCHSKptBySG!. In this case, k-points on the HS lines listed in Tab.
\ref{tab:umax} except the last three rows may be identified with incorrect names, because the $u_{{\rm max}}$ cannot be determined precisely without actual values of $a,b,c$ and in this case $u_{{\rm max}}$ is set to a makeshift value $1/4$ to make sure only one kname is output. However, if $u_{{\rm max}}$ is not precisely determined and the option \lstinline!"allowtwok"->True! is used, \lstinline!identifyBCHSKptBySG! can still return two entries of k-point information with different knames for k-points on the HS lines in Tab. \ref{tab:umax}. \subsection{Tables for LG IRs} \begin{figure} \begin{centering} \includegraphics[width=12cm]{fig-LGIrep-68A} \par\end{centering} \caption{The output of the function \lstinline!showLGIrepTab[68,"A"]! which shows the table for LG IRs of the k-point $A$ of the space group of number 68. Light green background shows single-valued IRs and light blue background shows double-valued IRs. Both the $\Gamma$ label (the second column) and the extended Mulliken label (the third column) are given. The first column is the index of the LG IRs and the fourth column is the realities whose values may be 1, 2, 3, or x. \label{fig:IR68A}} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=12cm]{fig-LGIrep-68SM-C} \par\end{centering} \caption{The output of the function \lstinline!showLGIrepTab[68,\{0.3,0.3,0\},"rotmat"->False]! which shows two tables for LG IRs of the k-point $(0.3,0.3,0)$ of the space group of number 68. In this example $(0.3,0.3,0)$ is identified as two k-points $\Sigma$ and $C$, and which one is selected depends on the actual values of lattice constants. 
$k_{{\rm in}}$ means the input k-point, $G_{{\rm kin}}$ means the little group of $k_{{\rm in}}$, $G_{{\rm kBC}}$ means the little group of the standard BC k-point $k_{{\rm BC}}$, and $G_{{\rm kBC}}^{{\rm d}}$ means the double little group of $k_{{\rm BC}}.$ \label{fig:IR68-SM-C}} \end{figure} Although the data of the LG IRs are contained in the BC book, they are distributed over several tables. This makes it cumbersome to obtain the representation matrices or characters of a designated LG IR directly, as shown by the aforementioned example in the introduction. Accordingly, we create the functions \lstinline!getLGIrepTab[sgno,k]! and \lstinline!showLGIrepTab[sgno,k]! in which \lstinline!k! is either the name or the numeric coordinates of a k-point. The former calculates and gives the representation matrices and related information of all the single-valued and double-valued LG IRs of \lstinline!k! for the space group of number \lstinline!sgno!, and the latter shows the data in a user-friendly table form. Fig. \ref{fig:IR68A} is an example showing the output of \lstinline!showLGIrepTab[68,"A"]!, which directly gives the information about the k-point, the available types of BZ, the LG elements (only the coset representatives with respect to the translation group are given), the rotation matrices and spin rotation matrices, the representation matrices of both single-valued LG IRs (light green background) and double-valued LG IRs (light blue background), the $\Gamma$ labels ($A_{1},A_{2},\cdots$) and extended Mulliken labels ($E,$ $^{2}\!\bar{E}'',\cdots$) of LG IRs, and the realities (the fourth column) of the corresponding SG IRs. Following the notations in the BC book, the realities 1, 2, and 3 stand for real, pseudo-real, and complex representations respectively for the SG IRs in which $\vk$ and $-\vk$ are in the same star. If $\vk$ and $-\vk$ are not in the same star, the SG IR is complex and its reality is represented by the letter ``x''.
For double-valued LG IRs, the representation matrix of the element $\{\bar{R}|\bm{v}\}$ is just the negative of the representation matrix of $\{R|\bm{v}\}$, so only the $\{R|\bm{v}\}$'s are shown in the LG IR table in Fig. \ref{fig:IR68A}. If a k-point is given by its coordinates and lies on one of the HS lines listed in Tab. \ref{tab:umax}, it may be identified as two BC standard k-points, and two LG IR tables are then given by \lstinline!showLGIrepTab!. An example of this case is \lstinline!showLGIrepTab[68,{0.3,0.3,0},"rotmat"->False]! whose output is shown in Fig. \ref{fig:IR68-SM-C}. The input k-point $\vk_{{\rm in}}=(0.3,0.3,0)$ is identified only as $\Sigma$ for the \lstinline!"OrthBase(a)"! type of BZ, but as either $\Sigma$ or $C$ for the \lstinline!"OrthBase(b)"! type. LG IR tables for both $\Sigma$ and $C$ are given in Fig. \ref{fig:IR68-SM-C}. In this example, the coordinates $(0.3,0.3,0)$ are directly in the form of $(u,u,0)$, i.e. the coordinates of the BC standard $\Sigma$ k-point, and hence $\vk_{{\rm in}}$ is of type I if it is a $\Sigma$ k-point. But if this k-point is identified as $C$, it is of type II and has a non-identity $\{S|\bm{w}\}$ which relates it to the BC standard $C$ k-point. Here $\{S|\bm{w}\}=\{C_{2y}|000\}$ and it makes the little group of $\vk_{{\rm in}}$, $G_{{\rm kin}}$, isomorphic to the (double) little group of $\vk_{{\rm BC}}$, $G_{{\rm kBC}}$ ($G_{{\rm kBC}}^{{\rm d}}$), i.e. \begin{equation} \{S|\bm{w}\}^{-1}G_{{\rm kin}}\{S|\bm{w}\}=G_{{\rm kBC}}\ \ \text{or }G_{{\rm kBC}}^{{\rm d}}.\label{eq:GkinGkBC} \end{equation} Then the LG IRs of $\vk_{{\rm in}}$ are obtained from those of $\vk_{{\rm BC}}$ according to Eqs. (\ref{eq:GMkGMkBC}) and (\ref{eq:GkinGkBC}). The mappings between the elements in $G_{{\rm kin}}$ and those in $G_{{\rm kBC}}$ ($G_{{\rm kBC}}^{{\rm d}}$) are also shown in the lower table of Fig.
\ref{fig:IR68-SM-C}, which can help to understand the relations between the LG IRs of $\vk_{{\rm in}}$ and $\vk_{{\rm BC}}.$ Note that here $\vk_{{\rm in}}$ just borrows the name $C$ from $\vk_{{\rm BC}}$; if we give a different name to $\vk_{{\rm in}}$, say $C'$, then the LG IR labels of $\vk_{{\rm in}}$ should be $C_{1}',\cdots,C_{5}'$, which are clearly distinguished from the $C_{1},\cdots,C_{5}$ of $\vk_{{\rm BC}}$. Fig. \ref{fig:IR68-SM-C} is generated by \lstinline!showLGIrepTab! with the option \lstinline!"rotmat"->False!, which suppresses the display of the rotation matrices. In fact, \lstinline!showLGIrepTab! has several options. Its default options can be obtained by \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] |In[1]:=| Options[showLGIrepTab]//InputForm |Out[1]=| {"uNumeric"->False, "irep"->All, "elem"->All, "rotmat"->True, "trace"->False, "spin"->"downup", "abcOrBasVec"->None, "linewidth"-> 2} \end{lstlisting} If \lstinline!"uNumeric"->True! is used, the value $v_{{\rm u}}$ is substituted for $u$ to make the LG IRs numeric. Although the LG IRs of $\Sigma$ and $C$ in Fig. \ref{fig:IR68-SM-C} seem different, they are in fact equivalent, as can be seen clearly when \lstinline!"uNumeric"->True! is used, with the correspondence $\Sigma_{1,3}\leftrightarrow C_{3,1}$, $\Sigma_{2,4}\leftrightarrow C_{4,2}$, $\Sigma_{5}\leftrightarrow C_{5}.$ Options \lstinline!"irep"! and \lstinline!"elem"! can select certain IRs and elements to be shown, e.g. \lstinline!"irep"->{1,3},"elem"->{3,4}! will only show the first and third LG IRs ($\Sigma_{1,3}$ and $C_{1,3}$ in Fig. \ref{fig:IR68-SM-C}) and the third and fourth elements ($\{\sigma_{y}|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$ and $\{\sigma_{z}|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$ in Fig. \ref{fig:IR68-SM-C}). \lstinline!"trace"->True! makes \lstinline!showLGIrepTab! show the characters instead of the representation matrices.
In fact, we have also defined functions \lstinline!getLGCharTab[sgno,k]! and \lstinline!showLGCharTab[sgno,k]! to calculate and show the character tables for LG IRs and these two functions are just respectively the functions \lstinline!getLGIrepTab[sgno,k]! and \lstinline!showLGIrepTab[sgno,k]! with the option \lstinline!"trace"->True!. The option \lstinline!"spin"->"updown"! will change the bases of spin rotation matrices from the default $\{\downarrow,\uparrow\}$ to $\{\uparrow,\downarrow\}$. If lattice constants or basic vectors are given through the option \lstinline!"abcOrBasVec"!, one definite k-point is determined, e.g. \lstinline!"abcOrBasVec"->{a->3,b->5,c->4}! only shows $\Sigma$ and \lstinline!"abcOrBasVec"->{a->2,b->5,c->4}! only shows $C$ for the example in Fig. \ref{fig:IR68-SM-C}. At last, \lstinline!"linewidth"! can control the line width of the table. \subsection{Tables for SG IRs} \begin{figure} \begin{centering} \includegraphics[width=15cm]{fig-SGIrep-68SM} \par\end{centering} \caption{The output of \lstinline[mathescape=true]!showSGIrepTab[68,"$\Sigma$","maxDim"->3,"elem"->\{1,2,3\}]! which shows part of the table for SG IRs of the star $^{*}\Sigma$ of the space group of number 68. Light green background shows single-valued IRs and light blue background shows double-valued IRs. The first column is the index of the SG IRs, the second and third column are two kinds of labels for SG IRs, and the fourth column is the realities whose values may be 1, 2, 3, or x. \label{fig:SGIR68SM}} \end{figure} In addition to tables for LG IRs, we have also created the functions \lstinline!getSGIrepTab[sgno,k]! and \lstinline!showSGIrepTab[sgno,k]! to calculate and show the tables for SG IRs. The usage of \lstinline!showSGIrepTab[sgno,k]! is almost the same as \lstinline!showLGIrepTab[sgno,k]! except that \lstinline!showSGIrepTab! has one more option \lstinline!"maxDim"!. \lstinline!"maxDim"! 
is a threshold dimension, with default value 4, controlling how representation matrices are displayed. When the dimension of a representation matrix is lower than or equal to \lstinline!"maxDim"!, the representation matrix is shown in matrix form; otherwise only its nonzero matrix elements are shown to save table space. An example is shown in Fig. \ref{fig:SGIR68SM}, which is generated by the following code. \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] showSGIrepTab[68, "$\Sigma$", "maxDim"->3, "elem"->{1,2,3}] \end{lstlisting} This example gives the table for the SG IRs of the wave vector star $^{*}\Sigma$ of space group 68. The double-valued SG IR $^{*}\Sigma_{5}$ is shown in the form of nonzero matrix elements, because its dimension 4 is larger than the value 3 of \lstinline!"maxDim"!. To save space, representation matrices of only the first three SG elements are shown due to the option \lstinline!"elem"->{1,2,3}!. If the option \lstinline!"elem"! is not used, the table will show the representation matrices of 8 SG elements in total, i.e. all the coset representatives with respect to the translation group. We use two kinds of labels for SG IRs. The first one puts a $*$ at the top left corner of the corresponding $\Gamma$ label of the LG IR, and the second one puts the k-point name in front of the extended Mulliken label of the LG IR, e.g. both $^{*}\Sigma_{2}$ and $\Sigma B_{1}$ denote $\Sigma_{2}\uparrow G$. \section{Direct product of SG IRs} The (inner) direct product of SG IRs is of great importance for determining the selection rules in various quantum processes in crystals. Therefore we have implemented the decomposition (or reduction) of the direct product of any two SG IRs according to BC-Eqs. (4.7.1) and (4.7.29), i.e.
\begin{equation} (\Gamma_{p}^{i}\uparrow G)\otimes(\Gamma_{q}^{j}\uparrow G)\equiv\sum_{l}\sum_{r}C_{pq,r}^{ij,l}(\Gamma_{r}^{l}\uparrow G) \end{equation} \begin{equation} C_{pq,r}^{ij,l}=\sum_{\{\alpha|\bm{u}\}}\!\!\vphantom{\sum}'\sum_{\{\beta|\bm{v}\}}\!\!\vphantom{\sum}'\frac{|T|}{|N_{\alpha\beta}|}\sum_{\{\gamma|\bm{w}\}\in N_{\alpha\beta}/T}\chi_{p}^{i}(\{\beta|\bm{v}\}^{-1}\{\gamma|\bm{w}\}\{\beta|\bm{v}\})\cdot\chi_{q}^{j}(\{\alpha|\bm{u}\}^{-1}\{\gamma|\bm{w}\}\{\alpha|\bm{u}\})\cdot\chi_{r}^{l*}(\{\gamma|\bm{w}\}) \end{equation} in which $\sum\vphantom{\sum}'$ means the summation is restricted by the condition \begin{equation} \beta\vk_{i}+\alpha\vk_{j}\equiv\vk_{l}. \end{equation} In the above equations, $\Gamma_{p}^{i}$, $\Gamma_{q}^{j},$ and $\Gamma_{r}^{l}$ are the LG IRs of the little groups $G^{\vk_{i}}$, $G^{\vk_{j}},$ and $G^{\vk_{l}}$ respectively; $\chi_{p}^{i},$ $\chi_{q}^{j},$ and $\chi_{r}^{l}$ are their characters respectively; and $\Gamma_{p}^{i}\uparrow G$, $\Gamma_{q}^{j}\uparrow G$, and $\Gamma_{r}^{l}\uparrow G$ are the corresponding induced SG IRs of the space group $G$. $\{\alpha|\bm{u}\}$'s are the double coset representatives of $G$ with respect to $G^{\vk_{l}}$ and $G^{\vk_{j}}$; $\{\beta|\bm{v}\}$'s are the double coset representatives of $G$ with respect to $L_{\alpha}=G^{\vk_{l}}\cap(\{\alpha|\bm{u}\}G^{\vk_{j}}\{\alpha|\bm{u}\}^{-1})$ and $G^{\vk_{i}}$; and $N_{\alpha\beta}$ is a group defined by $N_{\alpha\beta}=L_{\alpha}\cap(\{\beta|\bm{v}\}G^{\vk_{i}}\{\beta|\bm{v}\}^{-1})$. \begin{figure}[t] \begin{centering} \includegraphics[width=12cm]{fig-DP195-MM} \par\end{centering} \caption{The output of \lstinline!showSGIrepDirectProduct[195, \{1/2,1/2,0\}, \{1/2,1/2,0\}, "label"->2]! which shows the direct product of SG IRs of $^{*}M$ and $^{*}M$ for space group $P23$. The notation $[M,3]$ on top of the table means that the k-point in front of it is identified as $M$ and that the number of arms of its star is 3.
The notation $(k_{2},3)$ in the table indicates that the SG IR in front of it belongs to the star of $k_{2}$ and has dimension 3. Light green background stands for direct products between single-valued SG IRs and single-valued SG IRs, light yellow background for direct products between single-valued SG IRs and double-valued SG IRs, and light blue background for direct products between double-valued SG IRs and double-valued SG IRs.\label{fig:SGIRDP-MM}} \end{figure} The functions that calculate and show the direct product of SG IRs are \lstinline!SGIrepDirectProduct[sgno,k1,k2]! and \lstinline!showSGIrepDirectProduct[sgno,k1,k2]! respectively, in which \lstinline!k1! and \lstinline!k2! are both numeric k-point coordinates. We take the same example as in the BC book, namely the direct product of the SG IRs of $^{*}M$ and $^{*}M$ for space group $P23$ (No. 195). The following function \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] showSGIrepDirectProduct[195, {1/2,1/2,0}, {1/2,1/2,0}, "label"->2] \end{lstlisting} gives the results shown in Fig. \ref{fig:SGIRDP-MM}, which are consistent with the direct products listed in BC-Tab. 4.5. \lstinline!showSGIrepDirectProduct! has three options, \lstinline!"label"!, \lstinline!"abcOrBasVec"!, and \lstinline!"linewidth"!, in which \lstinline!"label"!, whose value can be 1 (default) or 2, controls which kind of labels for SG IRs is used, while the other two options have the same usage as in \lstinline!showLGIrepTab!. It is worth noting that \lstinline!showSGIrepDirectProduct! gives not only the direct products between single-valued SG IRs, but also those between double-valued SG IRs and even those between single-valued and double-valued SG IRs.
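The reduction coefficients above are a space-group generalization of ordinary character orthogonality. As a much simplified illustration of the same principle, the following Python sketch (helper names are ours) reduces the direct product $E\otimes E$ of the point group $C_{3v}$ using class-wise character sums; the full space-group formula additionally involves the double-coset sums and wave-vector condition stated above.

```python
# Simplified analogue of the SG-IR reduction: multiplicities from character
# orthogonality, m_l = (1/|G|) * sum_g chi_i(g) chi_j(g) chi_l(g)^*,
# here for the point group C3v (classes E, 2C3, 3sigma_v; all characters real).
class_sizes = [1, 2, 3]
chars = {                       # standard C3v character table
    "A1": [1,  1,  1],
    "A2": [1,  1, -1],
    "E":  [2, -1,  0],
}

def reduce_product(irrep1, irrep2):
    """Multiplicity of each irrep in the direct product irrep1 x irrep2."""
    order = sum(class_sizes)    # |G| = 6
    prod = [a * b for a, b in zip(chars[irrep1], chars[irrep2])]
    return {name: round(sum(n * p * x for n, p, x in zip(class_sizes, prod, chi)) / order)
            for name, chi in chars.items()}

print(reduce_product("E", "E"))  # E x E = A1 + A2 + E
```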
\section{Obtain the LG IRs of energy bands} As mentioned in the introduction, a tool with full support for determining the LG IRs of all Bloch states in energy bands had been missing for a long time until the appearance of the recent program \textsf{irvsp}\citep{Gao_Wang_2020___2002.04032v1_Irvsp}. However, \textsf{irvsp} only supports LG IRs in the BCS convention. Therefore, support for LG IRs in the BC convention is provided in the \textsf{SpaceGroupIrep} package, as a complement to \textsf{irvsp}. Here, we use ``BC cells'' for primitive cells in the BC settings, i.e. cells whose basic vectors are defined in BC-Tab. 3.1 and whose SG symmetry operations are consistent with the first-row generators of each space group in BC-Tab. 3.7. Note that a cell whose basic vectors are defined in BC-Tab. 3.1 is not necessarily a BC cell, because different choices of origin may result in different SG elements. In the following two subsections, we first discuss the cases with BC cells, and then discuss the cases with non-BC cells. \subsection{For BC cells} To determine the LG IRs of energy bands, the character of each LG element operating on each set of degenerate Bloch states should be obtained first. Fortunately, there is such a program, \textsf{vasp2trace}\citep{Vergniory_Wang_2019_566_480__complete}, which can do this. \textsf{vasp2trace} is a third-party post-processing program for \textsf{VASP}. It reads the output wave functions of \textsf{VASP}, calculates the characters of LG elements, and writes the results in a file named \textsf{trace.txt}. In fact, \textsf{vasp2trace} is the precursor of \textsf{irvsp} without the function of determining LG IRs, and \textsf{irvsp} can also output a \textsf{trace.txt} file in certain cases.
However, the \textsf{trace.txt} files generated by the two programs may be different, and what we need is the \textsf{trace.txt} generated by \textsf{vasp2trace}, because the character data in the \textsf{trace.txt} file from \textsf{irvsp} may be processed data, not the original data we need. It is worth noting that the number of bands output by \textsf{vasp2trace} is limited by the number of electrons, i.e. NELECT in \textsf{VASP}, because \textsf{vasp2trace} is designed for determining the topological properties of materials and for this purpose only the trace data of occupied states are needed. To make \textsf{vasp2trace} output trace data for all bands, two tiny changes should be made to the source code of \textsf{vasp2trace}: changing the \lstinline!nele! in the 30th line of \textsf{wrtir.f90} to \lstinline!ne! and deleting the 55th line of \textsf{chrct.f90}, i.e. \lstinline!IF(IE>nele) EXIT!. \begin{figure}[ph] \begin{centering} \includegraphics[width=12cm]{fig-MoS2} \par\end{centering} \caption{The LG IRs of monolayer MoS$_{2}$. (a) Top view and three different BC cells for monolayer MoS$_{2}$, where blue and orange dots represent Mo and S atoms respectively. (b) Code to show the LG IRs at $K$ (the first k-point) and $K'$ (the 10th k-point) for seven valence bands (3\textendash 9) and four conduction bands (10\textendash 13). The four columns of the output are respectively the range of bands for a degenerate energy (e.g. \{3,3\} means from band 3 to band 3), the band energy, the degree of degeneracy, and the labels for LG IRs. Both the extended Mulliken label and the $\Gamma$ label for a LG IR are given and the integer in the parentheses is the dimension of the LG IR. (c) The character table for LG IRs of $K$ $(\frac{2}{3},-\frac{1}{3},0)$ obtained by \lstinline!showLGCharTab[187,\{2/3,-1/3,0\},"rotmat"->False,"irep"->1;;6]!.
(d) The character table for LG IRs of $K'$ $(-\frac{2}{3},\frac{1}{3},0)$ obtained by \lstinline!showLGCharTab[187,\{-2/3,1/3,0\},"rotmat"->False,"irep"->1;;6]!. \label{fig:MoS2}} \end{figure} In the \textsf{SpaceGroupIrep} package we use the function \lstinline!readVasp2trace! to read the \textsf{trace.txt }file generated by \textsf{vasp2trace} and its returned value is an association containing all the data in the \textsf{trace.txt }file. Then the function \lstinline!getBandRep! is used to determine the LG IRs according to the trace data returned by \lstinline!readVasp2trace!. There are three ways to call \lstinline!getBandRep!, i.e. \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] getBandRep[sgno, BZtypeOrBasVec, traceData] getBandRep[sgno, BZtypeOrBasVec, traceData, ikOrListOrSpan] getBandRep[sgno, BZtypeOrBasVec, traceData, ikOrListOrSpan, ibOrListOrSpan] \end{lstlisting} in which \lstinline!BZtypeOrBasVec! has the same meaning as in \lstinline!identifyBCHSKptBySG!, \lstinline!traceData! is the value returned by \lstinline!readVasp2trace!, \lstinline!ikOrListOrSpan! (\lstinline!ibOrListOrSpan!) specifies the indexes of k-points (bands) to be processed. \lstinline!ikOrListOrSpan! (\lstinline!ibOrListOrSpan!) may be an integer such as \lstinline!5!, a list of integers such as \lstinline!{2,3,5}!, or a span such as \lstinline!3;;5!. If \lstinline!ikOrListOrSpan! (\lstinline!ibOrListOrSpan!) is not specified then all k-points (bands) are processed. Taking monolayer MoS$_{2}$ (space group 187) for example, we calculate its energy bands by \textsf{VASP} using the unit cell 1 shown in Fig. 
\ref{fig:MoS2}(a), obtain the \textsf{trace.txt} file by \textsf{vasp2trace}, and then determine the LG IRs of all Bloch states in the bands by \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] |(* put trace.txt to the current working directory *)| tr1=readVasp2trace["trace.txt"]; rep1=getBandRep[187, "", tr1]; \end{lstlisting} in which \lstinline!rep1! is an association having keys \lstinline!"kpath"!, \lstinline!"rep"!, and \lstinline!"kinfo"!, and the determined LG IRs are contained in \lstinline!rep1["rep"]!. In this example, the first k-point is $K$ $(\frac{2}{3},-\frac{1}{3},0)$ and the 10th k-point is $K'$ $(-\frac{2}{3},\frac{1}{3},0)$. Then the LG IRs of $K$ for seven valence bands 3\textendash 9 and four conduction bands 10\textendash 13 can be extracted by \lstinline!rep1["rep"][[1, 3;;13]]!, whose result is shown in Fig. \ref{fig:MoS2}(b). The $\Gamma$ labels $K_{1}\sim K_{6}$ of LG IRs in Fig. \ref{fig:MoS2}(b) should refer to the character table in Fig. \ref{fig:MoS2}(c), from which it can be seen that the characters of $C_{3}^{+}$ and $\sigma_{h}$ are consistent with the data listed in Tab. 2 of ref. \citep{Liu_Yao_2015_44_2643__Electronic}. It should be pointed out that the LG IRs of $K'$, i.e. the results of \lstinline!rep1["rep"][[10, 3;;13]]!, are \textit{seemingly} the same as those of $K$, as shown by the ``True'' in Fig. \ref{fig:MoS2}(b). However, the meanings of $K_{1}\sim K_{6}$ are different because the LG IRs for $K'$ should refer to the character table in Fig. \ref{fig:MoS2}(d). Taking the topmost valence band (the 9th band) as an example, the character of $\{C_{3}^{+}|000\}$ for the $K_{3}$ of $K$ is $e^{i\frac{2\pi}{3}}$ {[}cf. Fig. \ref{fig:MoS2}(c){]}, but the character of $\{C_{3}^{+}|000\}$ for the $K_{3}$ of $K'$ is $e^{-i\frac{2\pi}{3}}$ {[}cf. Fig. \ref{fig:MoS2}(d){]}. The two characters are complex conjugates of each other, which is consistent with the time-reversal symmetry between the states of $K$ and $K'$.
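The conjugation relation between the two characters can be checked numerically; a minimal Python check, using the character values quoted above, is:

```python
import cmath

# Characters of {C3+|000} for the K3 states at K and K' (values from the text).
# Time reversal maps a Bloch state at K to one at K' = -K and conjugates its
# characters, so the two values must be complex conjugates of each other.
chi_K3_at_K      = cmath.exp( 2j * cmath.pi / 3)   # e^{ i 2*pi/3}
chi_K3_at_Kprime = cmath.exp(-2j * cmath.pi / 3)   # e^{-i 2*pi/3}

print(abs(chi_K3_at_Kprime - chi_K3_at_K.conjugate()))  # ~0: conjugate pair
```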
Because $K'$ is not a BC standard k-point, we should remember that it borrows the name $K$ from its related BC standard k-point. In fact, if a non-BC k-point has its own name, we can directly use this name in its labels of LG IRs, e.g. the $K_{1}\sim K_{6}$ in Fig. \ref{fig:MoS2}(d) are in fact $K_{1}'\sim K_{6}'$. It is noteworthy that the BC cell of a crystal may not be unique. Taking monolayer MoS$_{2}$ as an example again, the three cells in Fig. \ref{fig:MoS2}(a) are all BC cells, but they have different origins. Consequently, the LG IR of a certain state is different for the three different cells, because the rotation center is different, e.g. the LG IRs of the topmost valence band and the lowest conduction band (the 10th band) at $K$ for the three BC cells can be obtained as follows \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}},mathescape=true] |In[1]:=| rep1["rep"][[1, 9;;10]]//TableForm[#,TableDepth->2]& rep2["rep"][[1, 9;;10]]//TableForm[#,TableDepth->2]& rep3["rep"][[1, 9;;10]]//TableForm[#,TableDepth->2]& |Out[1]=| {9,9} -2.56193 1 {$^2$E$'$, K$_3$(1)} {10,10} -0.886607 1 {A$'$, K$_1$(1)} |Out[2]=| {9,9} -2.56193 1 {A$'$, K$_1$(1)} {10,10} -0.886609 1 {$^1$E$'$, K$_2$(1)} |Out[3]=| {9,9} -2.56188 1 {$^1$E$'$, K$_2$(1)} {10,10} -0.886553 1 {$^2$E$'$, K$_3$(1)} \end{lstlisting} in which \lstinline!rep1!, \lstinline!rep2!, and \lstinline!rep3! are the returned values of \lstinline!getBandRep! for BC cells 1, 2, and 3 in Fig. \ref{fig:MoS2}(a) respectively. The LG IRs for the three BC cells are consistent with the eigenvalues of $C_{3}^{+}$ in Tab. 2 of ref. \citep{Liu_Yao_2015_44_2643__Electronic} for three different rotation centers. Therefore, if the primitive cell is not given explicitly, we cannot say what the LG IR of the topmost valence band at $K$ (or of any other state) is for monolayer MoS$_{2}$.
\subsection{For non-BC cells} A non-BC primitive cell has to be converted to a BC cell, and its trace data have to be converted accordingly, before the LG IRs can be determined. To convert cells, we adopt the conventions used in the package \textsf{spglib}. In \textsf{spglib}, converting one cell to another needs three ingredients, i.e. a transformation matrix, a rotation matrix, and an origin shift\citep{spglibdoc}. Concretely speaking, \textsf{spglib} can convert any input cell with basic vectors $\bm{a}_{0}$, $\bm{b}_{0}$, and $\bm{c}_{0}$ to an idealized standard cell with basic vectors $\bm{a}_{s}'$, $\bm{b}_{s}'$, and $\bm{c}_{s}'$ through the relation \begin{equation} (\bm{a}_{s}',\bm{b}_{s}',\bm{c}_{s}')=R_{{\rm std}}(\bm{a}_{0},\bm{b}_{0},\bm{c}_{0})P^{-1},\label{eq:Rabc0iP} \end{equation} in which $P$ and $R_{{\rm std}}$ are respectively the transformation matrix and rotation matrix determined by \textsf{spglib}. The above basic vectors are all column vectors, so both $(\bm{a}_{0},\bm{b}_{0},\bm{c}_{0})$ and $(\bm{a}_{s}',\bm{b}_{s}',\bm{c}_{s}')$ are $3\times3$ square matrices, called ``basic-vector matrices''. In the conventions of \textsf{spglib}, a transformation matrix forms new basic vectors as linear combinations of the old ones but does not rotate the crystal, while a rotation matrix rotates the crystal and hence all basic vectors. Therefore, a transformation matrix is always multiplied on the right side of a basic-vector matrix, while a rotation matrix is always multiplied on the left side.
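The multiplication-side convention can be made concrete with a small numeric sketch (pure Python; the matrices are illustrative choices of our own, not \textsf{spglib} output): right-multiplying the basic-vector matrix by $P^{-1}$ recombines the columns (new basic vectors, same crystal), while left-multiplying by a rotation $R$ rotates every basic vector rigidly, preserving lengths.

```python
# Columns of A are the basic vectors a, b, c (an illustrative orthorhombic set).
A = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def col_lengths(M):
    """Euclidean length of each column, i.e. of each basic vector."""
    return [sum(M[i][j] ** 2 for i in range(3)) ** 0.5 for j in range(3)]

# Transformation matrix (right side): this P^-1 recombines columns, a' = a + b.
P_inv = [[1, 0, 0],
         [1, 1, 0],
         [0, 0, 1]]
# Rotation matrix (left side): 90-degree rotation about z.
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]

print(col_lengths(matmul(A, P_inv)))  # lengths change: basis recombined
print(col_lengths(matmul(R, A)))      # lengths preserved: crystal rotated
```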
Further using the origin shift $\bm{p}$ determined by \textsf{spglib}, the atomic coordinates and SG elements can be converted as follows \begin{equation} \bm{x}_{0}\ \ \ \ \rightarrow\ \ \ \ \bm{x}_{s}=P\bm{x}_{0}+\bm{p}, \end{equation} \begin{equation} \{R_{0}|\bm{v}_{0}\}\ \ \ \ \rightarrow\ \ \ \ \{R_{s}|\bm{v}_{s}\}=\{PR_{0}P^{-1}|P\bm{v}_{0}-R_{s}\bm{p}+\bm{p}\}, \end{equation} in which $\bm{x}_{0}$ and $\{R_{0}|\bm{v}_{0}\}$ are respectively the atomic coordinates and SG elements of the input cell, and $\bm{x}_{s}$ and $\{R_{s}|\bm{v}_{s}\}$ are respectively the atomic coordinates and SG elements of the idealized standard cell of \textsf{spglib}. In fact, the idealized standard cell of \textsf{spglib} is the conventional cell consistent with the first setting of ITA\citep{ITA} and hence can also be called ``ITA cell''. \begin{table} \caption{Changes to BC-Tab. 3.7. All space groups not listed in this table have identical data in the original and the adapted BC-Tab.
3.7.\label{tab:BC3.7change}} {\renewcommand{\arraystretch}{1.2} \begin{centering} \begin{tabular}{llll} \hline No.$\ \ \ \ $ & Symbol$\ \ \ \ \ $ & Generators & $\bm{t}_{0}$\tabularnewline \hline 68 & $Ccca$ & \{$C_{2x}|000\},\ \{C_{2y}|000\},\ \{I|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$ & 0\tabularnewline 125 & $P4/nbm$ & \{$C_{4z}^{+}|\frac{1}{2}\frac{1}{2}0\},\ \{C_{2x}|000\},\ \{I|\frac{1}{2}\frac{1}{2}0\}$ & $\frac{1}{2}\bm{t}_{1}$\tabularnewline & & \{$C_{4z}^{+}|000\},\ \{C_{2x}|000\},\ \{I|\frac{1}{2}\frac{1}{2}0\}$ & \tabularnewline 141 & $I4_{1}/amd$ & \{$C_{4z}^{+}|0\frac{1}{2}0\},\ \{C_{2x}|\frac{1}{2}\frac{1}{2}0\},\ \{I|\frac{1}{2}\frac{1}{2}0\}$ & $\frac{3}{8}\bm{t}_{1}+\frac{1}{8}\bm{t}_{2}+\frac{1}{4}\bm{t}_{3}$\tabularnewline & & \{$C_{4z}^{+}|\frac{3}{4}\frac{1}{4}\frac{1}{2}\},\ \{C_{2x}|\frac{3}{4}\frac{1}{4}\frac{1}{2}\},\ \{I|\frac{3}{4}\frac{1}{4}\frac{1}{2}\}$ & \tabularnewline 142 & $I4_{1}/acd$ & \{$C_{4z}^{+}|\frac{1}{2}00\},\ \{C_{2x}|\frac{1}{2}\frac{1}{2}0\},\ \{I|000\}$ & $\frac{1}{8}\bm{t}_{1}+\frac{3}{8}\bm{t}_{2}-\frac{1}{4}\bm{t}_{3}$\tabularnewline & & \{$C_{4z}^{+}|\frac{3}{4}\frac{1}{4}\frac{1}{2}\},\ \{C_{2x}|\frac{1}{4}\frac{3}{4}\frac{1}{2}\},\ \{I|\frac{3}{4}\frac{1}{4}\frac{1}{2}\}$ & \tabularnewline 155 & $R32$ & \{$C_{3}^{+}|000\},\ \{C_{21}'|000\}$ & 0\tabularnewline 160 & $R3m$ & \{$C_{3}^{+}|000\},\ \{\sigma_{d1}|000\}$ & 0\tabularnewline 161 & $R3c$ & \{$C_{3}^{+}|000\},\ \{\sigma_{d1}|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$ & 0\tabularnewline 166 & $R\bar{3}m$ & \{$S_{6}^{+}|000\},\ \{\sigma_{d1}|000\}$ & 0\tabularnewline 167 & $R\bar{3}c$ & \{$S_{6}^{+}|000\},\ \{\sigma_{d1}|\frac{1}{2}\frac{1}{2}\frac{1}{2}\}$ & 0\tabularnewline 178 & $P6_{1}22$ & \{$C_{6}^{+}|00\frac{1}{6}\},\ \{C_{21}'|000\}$ & $\frac{1}{4}\bm{t}_{3}$\tabularnewline & & \{$C_{6}^{+}|00\frac{1}{6}\},\ \{C_{21}''|000\}$ & \tabularnewline 179 & $P6_{5}22$ & \{$C_{6}^{+}|00\frac{5}{6}\},\ \{C_{21}'|000\}$ & $\frac{1}{4}\bm{t}_{3}$\tabularnewline & & 
\{$C_{6}^{+}|00\frac{5}{6}\},\ \{C_{21}''|000\}$ & \tabularnewline 182 & $P6_{3}22$ & \{$C_{6}^{+}|00\frac{1}{2}\},\ \{C_{21}'|000\}$ & $\frac{1}{4}\bm{t}_{3}$\tabularnewline & & \{$C_{6}^{+}|00\frac{1}{2}\},\ \{C_{21}''|000\}$ & \tabularnewline \hline \end{tabular} \par\end{centering} } \end{table} Next, we convert the ITA cell to a BC cell with the aid of BC-Tab. 3.7, which gives the SG generators of each space group. For some space groups there are two rows of data: the first-row generators are those used in the BC book, while the second-row generators are those used in the book \citep{IT1965} (referred to as IT1965 hereafter) but expressed in the BC basic vectors. We therefore call the cell having the second-row SG generators the ``second-row cell'', and in this sense a BC cell is just a ``first-row cell''. If a space group has only one row of generators in BC-Tab. 3.7, the term second-row cell can still be used but then has the same meaning as first-row cell. In fact, IT1965 can be regarded as an earlier edition of ITA, although their SG settings differ in some respects. In order to convert the ITA cell to a BC cell, BC-Tab. 3.7 has to be adapted; the changes are listed in Tab. \ref{tab:BC3.7change}, and BC-Tab. 3.7 with these changes applied is called ``the adapted BC-Tab. 3.7''. \begin{figure}[t] \begin{centering} \includegraphics[width=14cm]{fig-cell-convert} \par\end{centering} \caption{The procedure of converting the input cell to the BC cell. The transformation matrices (red color) and rotation matrices (blue color) are shown above the arrows, and the origin shifts (if they exist) are shown below the arrows.\label{fig:cell-convert}} \end{figure} According to the adapted BC-Tab. 3.7, we first convert the ITA cell to a second-row cell, and then convert the second-row cell to a BC cell.
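The \textsf{spglib} standardization step above, $\bm{x}_{s}=P\bm{x}_{0}+\bm{p}$ and $\{R_{s}|\bm{v}_{s}\}=\{PR_{0}P^{-1}|P\bm{v}_{0}-R_{s}\bm{p}+\bm{p}\}$, can be sketched in a few lines of code. The matrices and shifts below are hypothetical stand-ins (not actual \textsf{spglib} output), and exact rational arithmetic keeps the conversion free of rounding:

```python
from fractions import Fraction as F

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def inv3(A):
    # inverse of a 3x3 matrix via the adjugate (exact with Fractions);
    # cyclic-index cofactor formula, valid for 3x3 matrices
    det = sum(A[0][j] * (A[1][(j + 1) % 3] * A[2][(j + 2) % 3]
                         - A[1][(j + 2) % 3] * A[2][(j + 1) % 3])
              for j in range(3))
    cof = [[A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
            - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

def standardize(P, p, x0, R0, v0):
    """x_s = P x0 + p and {R_s|v_s} = {P R0 P^-1 | P v0 - R_s p + p}."""
    Pinv = inv3(P)
    xs = [a + b for a, b in zip(matvec(P, x0), p)]
    Rs = matmul(matmul(P, R0), Pinv)
    vs = [a - b + c for a, b, c in zip(matvec(P, v0), matvec(Rs, p), p)]
    return xs, Rs, vs

# Hypothetical example: P swaps the a and b axes, origin shift p = (1/4,1/4,0),
# and {R0|v0} is a two-fold rotation C2z with translation (1/2,1/2,0)
P = [[F(0), F(1), F(0)], [F(1), F(0), F(0)], [F(0), F(0), F(1)]]
p = [F(1, 4), F(1, 4), F(0)]
R0 = [[F(-1), F(0), F(0)], [F(0), F(-1), F(0)], [F(0), F(0), F(1)]]
v0 = [F(1, 2), F(1, 2), F(0)]
x0 = [F(1, 8), F(1, 4), F(1, 2)]

xs, Rs, vs = standardize(P, p, x0, R0, v0)
print(xs, Rs, vs)
```

In practice the resulting coordinates and translations would additionally be reduced modulo lattice translations; that step is omitted here for brevity.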
The first conversion has the transformation matrix $Q$, rotation matrix $S_{1}$ and no origin shift, and the second conversion has the transformation matrix $U$, rotation matrix $S_{2}$, and origin shift $\bm{t}_{0}$. Suppose that the basic-vector matrices of the second-row cell and the BC cell are $(\bm{t}_{1}'^{(2)},\bm{t}_{2}'^{(2)},\bm{t}_{3}'^{(2)})$ and $(\bm{t}_{1},\bm{t}_{2},\bm{t}_{3})$ respectively, that the atomic coordinates of the two cells are $\bm{x}^{(2)}$ and $\bm{x}$ respectively, and that the SG elements of the two cells are $\{R^{(2)}|\bm{v}^{(2)}\}$ and $\{R|\bm{v}\}$ respectively. Then the conversion from the ITA cell to the second-row cell consists of the following relations \begin{equation} (\bm{t}_{1}'^{(2)},\bm{t}_{2}'^{(2)},\bm{t}_{3}'^{(2)})=S_{1}(\bm{a}_{s}',\bm{b}_{s}',\bm{c}_{s}')Q,\label{eq:S1abcspQ} \end{equation} \begin{equation} \bm{x}_{s}\ \ \ \ \rightarrow\ \ \ \ \bm{x}^{(2)}=Q^{-1}\bm{x}_{s}, \end{equation} \begin{equation} \{R_{s}|\bm{v}_{s}\}\ \ \ \ \rightarrow\ \ \ \ \{R^{(2)}|\bm{v}^{(2)}\}=\{Q^{-1}R_{s}Q\,|\,Q^{-1}\bm{v}_{s}\}, \end{equation} and the conversion from the second-row cell to the BC cell consists of the following relations \begin{equation} (\bm{t}_{1},\bm{t}_{2},\bm{t}_{3})=S_{2}(\bm{t}_{1}'^{(2)},\bm{t}_{2}'^{(2)},\bm{t}_{3}'^{(2)})U,\label{eq:S2t123p2U} \end{equation} \begin{equation} \bm{x}^{(2)}\ \ \ \ \rightarrow\ \ \ \ \bm{x}=U^{-1}\bm{x}^{(2)}+\bm{t}_{0}, \end{equation} \begin{equation} \{R^{(2)}|\bm{v}^{(2)}\}\ \ \ \ \rightarrow\ \ \ \ \{R|\bm{v}\}=\{U^{-1}R^{(2)}U\,|\,U^{-1}\bm{v}^{(2)}-R\bm{t}_{0}+\bm{t}_{0}\},\label{eq:Rv2toRv} \end{equation} where $\bm{t}_{0}$ is given in the adapted BC-Tab. 3.7. \begin{table}[t] \caption{The transformation matrix $Q$ and rotation matrix $S_{1}$ in Eq. (\ref{eq:S1abcspQ}) for each Bravais lattice. ${\rm c}\gamma$ (${\rm s}\gamma$) means $\cos\gamma$ ($\sin\gamma$) and $\gamma$ is the non-right angle between basic vectors of monoclinic lattices given in BC-Tab. 
3.1.\label{tab:QS1}} {\renewcommand{\arraystretch}{1.2} \begin{centering} \begin{tabular}{llllll} \hline \Gape{% \parbox[c]{4em}{% Bravais lattice% }} & $Q$ & $S_{1}$ & % \parbox[c]{4em}{% Bravais lattice% } & $Q$ & $S_{1}$\tabularnewline \hline \begin{lstlisting} TricPrim \end{lstlisting} & \Gape{$\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & \begin{lstlisting} TetrPrim \end{lstlisting} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \begin{lstlisting} MonoPrim \end{lstlisting} & \Gape{$\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{bmatrix}$} & $\begin{bmatrix}{\rm s}\gamma & 0 & -{\rm c}\gamma\\ -{\rm c}\gamma & 0 & -{\rm s}\gamma\\ 0 & 1 & 0 \end{bmatrix}$ & \begin{lstlisting} TetrBody \end{lstlisting} & $\begin{bmatrix}-\frac{1}{2} & \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \end{bmatrix}$ & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \begin{lstlisting} MonoBase \end{lstlisting} & \Gape{$\begin{bmatrix}0 & \frac{1}{2} & \frac{1}{2}\\ 0 & -\frac{1}{2} & \frac{1}{2}\\ 1 & 0 & 0 \end{bmatrix}$} & $\begin{bmatrix}{\rm s}\gamma & 0 & -{\rm c}\gamma\\ -{\rm c}\gamma & 0 & -{\rm s}\gamma\\ 0 & 1 & 0 \end{bmatrix}$ & \begin{lstlisting} TrigPrim \end{lstlisting} & $\begin{bmatrix}\frac{2}{3} & -\frac{1}{3} & -\frac{1}{3}\\ \frac{1}{3} & \frac{1}{3} & -\frac{2}{3}\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{bmatrix}$ & $\begin{bmatrix}-\frac{1}{2} & \frac{\sqrt{3}}{2} & 0\\ -\frac{\sqrt{3}}{2} & -\frac{1}{2} & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \begin{lstlisting} OrthPrim \end{lstlisting} & \Gape{$\begin{bmatrix}0 & 1 & 0\\ -1 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}$} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & \begin{lstlisting} HexaPrim 
\end{lstlisting} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & $\begin{bmatrix}0 & 1 & 0\\ -1 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \begin{lstlisting} OrthBase \end{lstlisting} & \Gape{$\begin{bmatrix}0 & 1 & 0\\ -\frac{1}{2} & 0 & \frac{1}{2}\\ \frac{1}{2} & 0 & \frac{1}{2} \end{bmatrix}$} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & \begin{lstlisting} CubiPrim \end{lstlisting} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \begin{lstlisting} OrthBody \end{lstlisting} & \Gape{$\begin{bmatrix}\frac{1}{2} & -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \end{bmatrix}$} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & \begin{lstlisting} CubiFace \end{lstlisting} & $\begin{bmatrix}0 & \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & 0 & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} & 0 \end{bmatrix}$ & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \begin{lstlisting} OrthFace \end{lstlisting} & \Gape{$\begin{bmatrix}\frac{1}{2} & 0 & \frac{1}{2}\\ 0 & -\frac{1}{2} & -\frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} & 0 \end{bmatrix}$} & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$ & \begin{lstlisting} CubiBody \end{lstlisting} & $\begin{bmatrix}-\frac{1}{2} & \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \end{bmatrix}$ & $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$\tabularnewline \hline \end{tabular} \par\end{centering} } \end{table} To sum up, any input cell can be converted to a BC cell via two intermediate cells, and the procedure includes three steps: first input cell to ITA cell, then ITA cell to second-row cell, and finally second-row cell to BC cell, as shown in Fig. \ref{fig:cell-convert}.
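The three-step chain can be verified numerically: conjugating a rotation step by step through $P$, $Q$, and $U$ must give the same result as the single composed conjugation. The sketch below uses hypothetical permutation-type (orthogonal, integer) matrices, so each inverse is just a transpose; the real $P$, $Q$, $U$ come from \textsf{spglib} and the tables above:

```python
def matmul(A, B):
    # 3x3 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Hypothetical orthogonal matrices (inverse = transpose)
P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]     # input cell -> ITA cell
Q = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]     # ITA cell -> second-row cell
U = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]    # second-row cell -> BC cell
R0 = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]  # a C2x rotation in the input cell

# Step by step: R_s = P R0 P^-1, R2 = Q^-1 R_s Q, R = U^-1 R2 U
Rs = matmul(matmul(P, R0), transpose(P))
R2 = matmul(matmul(transpose(Q), Rs), Q)
R = matmul(matmul(transpose(U), R2), U)

# Composed form: R = (U^-1 Q^-1 P) R0 (U^-1 Q^-1 P)^-1
M = matmul(matmul(transpose(U), transpose(Q)), P)
R_direct = matmul(matmul(M, R0), transpose(M))
print(R)
```

Here the chained and the composed conjugations agree, which is the content of the integrated conversion relations given in the next paragraph.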
In the first step, $R_{{\rm std}}$, $P$, and $\bm{p}$ are all determined by \textsf{spglib}; in the second step, $Q$ and $S_{1}$ are listed in Tab. \ref{tab:QS1} according to the Bravais lattice of the space group; and in the last step, $U$ and $S_{2}$ are listed in Tab. \ref{tab:US2} and $\bm{t}_{0}$ is given in the last column of the adapted BC-Tab. 3.7. Integrating the three steps, i.e. Eqs. (\ref{eq:Rabc0iP})--(\ref{eq:Rv2toRv}), the overall conversion from the input cell to the BC cell consists of the following relations \begin{equation} (\bm{t}_{1},\bm{t}_{2},\bm{t}_{3})=S_{2}S_{1}R_{{\rm std}}(\bm{a}_{0},\bm{b}_{0},\bm{c}_{0})P^{-1}QU, \end{equation} \begin{equation} \bm{x}_{0}\ \ \ \ \rightarrow\ \ \ \ \bm{x}=U^{-1}Q^{-1}(P\bm{x}_{0}+\bm{p})+\bm{t}_{0},\label{eq:x0tox} \end{equation} \begin{equation} \{R_{0}|\bm{v}_{0}\}\ \ \ \ \rightarrow\ \ \ \ \{R|\bm{v}\}\begin{cases} R=U^{-1}Q^{-1}PR_{0}P^{-1}QU\\ \bm{v}=U^{-1}Q^{-1}(P\bm{v}_{0}-PR_{0}P^{-1}\bm{p}+\bm{p})-R\bm{t}_{0}+\bm{t}_{0} \end{cases}.\label{eq:R0v0toRv} \end{equation} \begin{table}[ph] \caption{The transformation matrix $U$ and rotation matrix $S_{2}$ in Eq. (\ref{eq:S2t123p2U}) for the space groups with an $*$ in the last column of the adapted BC-Tab. 3.7. The BC book uses orientations different from the default ones in ITA for these space groups, and the orientations used are given here in the second column (refer to ITA for the meanings of the orientations).
For the space groups not listed in this table, both $U$ and $S_{2}$ are identity matrices.\label{tab:US2}} {\renewcommand{\arraystretch}{1.2} \begin{centering} \begin{tabular}{llccc} \hline \multicolumn{2}{c}{Space Group} & orientation & $U$ & $S_{2}$\tabularnewline \hline $17\ (P222_{1})$ & $19\ (P2_{1}2_{1}2_{1})$ & \multirow{8}{*}{\Gape[12.5em]{$\bm{b\bar{a}c}$}} & \multirow{4}{*}{\Gape{$\begin{bmatrix}\begin{array}{ccc} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{array}\end{bmatrix}$}} & \multirow{8}{*}{\Gape[11em]{$\begin{bmatrix}0 & 1 & 0\\ -1 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}$}}\tabularnewline $28\ (Pma2)$ & $29\ (Pca2_{1})$ & & & \tabularnewline $31\ (Pmn2_{1})$ & $33\ (Pna2_{1})$ & & & \tabularnewline $53\ (Pmna)$ & $61\ (Pbca)$ & & & \tabularnewline $36\ (Cmc2_{1})$ & & & \Gape{$\begin{bmatrix}\begin{array}{ccc} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{array}\end{bmatrix}$} & \tabularnewline $46\ (Ima2)$ & & & \Gape{$\begin{bmatrix}\begin{array}{ccc} 0 & 1 & 0\\ 0 & 1 & -1\\ -1 & 1 & 0 \end{array}\end{bmatrix}$} & \tabularnewline $70\ (Fddd)$ & & & \Gape{$\begin{bmatrix}\begin{array}{ccc} 1 & 1 & 1\\ 0 & 0 & -1\\ -1 & 0 & 0 \end{array}\end{bmatrix}$} & \tabularnewline $122\ (I\bar{4}2d$) & & & \Gape{$\begin{bmatrix}\begin{array}{ccc} 0 & 1 & 0\\ 0 & 1 & -1\\ -1 & 1 & 0 \end{array}\end{bmatrix}$} & \tabularnewline \hline \begin{minipage}[c]{2cm}% \mbox{}\\ 38 $(Amm2)$\\[-8pt] 40 $(Ama2)$ \mbox{}\\[-1em]% \end{minipage} & % \begin{minipage}[c]{2cm}% \mbox{}\\ 39 $(Abm2)$\\[-8pt] 41 $(Aba2)$ \mbox{}\\[-1em]% \end{minipage} & \multirow{2}{*}{\Gape[2.5em]{$\bm{bca}$}} & \multirow{1}{*}{\Gape[-2em]{$\begin{bmatrix}\begin{array}{ccc} -1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{array}\end{bmatrix}$}} & \multirow{2}{*}{\Gape[1em]{$\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{bmatrix}$}}\tabularnewline $57\ (Pbcm)$ & & & \Gape{$\begin{bmatrix}\begin{array}{ccc} 0 & -1 & 0\\ 0 & 0 & 1\\ -1 & 0 & 0 \end{array}\end{bmatrix}$} & \tabularnewline \hline $51\ (Pmma)$ & $54\ (Pcca)$ & $\bm{\bar{c}ba}$ & $\begin{bmatrix}\begin{array}{ccc} 1 & 0 & 0\\ 0 & 0 & 1\\ 0 & -1 & 0 \end{array}\end{bmatrix}$ & \Gape{$\begin{bmatrix}0 & 0 & -1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{bmatrix}$}\tabularnewline \hline $52\ (Pnna)$ & $60\ (Pbcn)$ & $\bm{a}\bar{\bm{c}}\bm{b}$ & $\begin{bmatrix}\begin{array}{ccc} 0 & 0 & -1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{array}\end{bmatrix}$ & \Gape{$\begin{bmatrix}1 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{bmatrix}$}\tabularnewline \hline \end{tabular} \par\end{centering} } \end{table} From Eqs. (\ref{eq:x0tox}) and (\ref{eq:R0v0toRv}) we can see that the conversion of atomic coordinates and SG elements uses no rotation matrices ($R_{{\rm std}}$, $S_{1}$, or $S_{2}$). This is because a rotation of the crystal rotates the basic vectors and atomic positions simultaneously and hence does not change the fractional coordinates of atoms. By contrast, the conversion of the spin rotation matrices for double space groups uses only the rotation matrices, i.e. \begin{equation} \tilde{R}=\tilde{S}\tilde{R}_{{\rm std}}\tilde{R}_{0}\tilde{R}_{{\rm std}}^{-1}\tilde{S}^{-1}, \end{equation} in which $\tilde{R}_{0}$ and $\tilde{R}$ are the spin rotation matrices of the input cell and the BC cell respectively, and $\tilde{R}_{{\rm std}}$ and $\tilde{S}$ are the SU(2) spin rotation matrices of the corresponding O(3) rotation matrices $R_{{\rm std}}$ and $S$ $(=S_{2}S_{1})$ respectively.
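The SU(2) counterpart of an O(3) rotation can be sketched as follows, using the closed form $\cos(\omega/2)I-i\sin(\omega/2)\,\bm{n}\cdot\bm{\sigma}$ of $\exp(-i\omega\bm{n}\cdot\bm{\sigma}/2)$ and checking the standard correspondence $\tilde{S}(\bm{v}\cdot\bm{\sigma})\tilde{S}^{\dagger}=(S\bm{v})\cdot\bm{\sigma}$ on a simple illustrative rotation (not code from the package):

```python
import math

def mm(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def dotsigma(v):
    # v . sigma for a real 3-vector v, with Pauli matrices sigma_x,y,z
    return [[v[2] + 0j, v[0] - 1j * v[1]], [v[0] + 1j * v[1], -v[2] + 0j]]

def su2(omega, n):
    """S~ = exp(-i omega n.sigma / 2) = cos(omega/2) I - i sin(omega/2) n.sigma"""
    c, s = math.cos(omega / 2), math.sin(omega / 2)
    ns = dotsigma(n)
    return [[c * (i == j) - 1j * s * ns[i][j] for j in range(2)] for i in range(2)]

# Rotation by 90 degrees about z: S maps (1,0,0) to (0,1,0)
omega, n = math.pi / 2, (0.0, 0.0, 1.0)
St = su2(omega, n)
lhs = mm(mm(St, dotsigma((1.0, 0.0, 0.0))), dagger(St))
rhs = dotsigma((0.0, 1.0, 0.0))  # (S v).sigma
print(lhs)
```

The same construction, applied to the axis and angle of $S$ and of $R_{{\rm std}}$, yields $\tilde{S}$ and $\tilde{R}_{{\rm std}}$ up to the overall sign ambiguity inherent to SU(2).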
$\tilde{S}$ is determined by $S$ through $\tilde{S}=\exp(-i\omega\bm{n}\cdot\bm{\sigma}/2)$, in which $\omega$ and $\bm{n}$ are respectively the rotation angle and the unit direction of the rotation axis of $S$, and $\bm{\sigma}$ is the vector of Pauli matrices; $\tilde{R}_{{\rm std}}$ is determined by $R_{{\rm std}}$ in the same way. Using the cell conversion method described above, the data in a \textsf{trace.txt} file from a non-BC cell can be converted to the data for a BC cell by \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}] convTraceToBC[sgno, traceData, P, p, stdR] \end{lstlisting} in which \lstinline!traceData! is the returned value of \lstinline!readVasp2trace! for a non-BC cell, and \lstinline!P!, \lstinline!p!, and \lstinline!stdR! are respectively the transformation matrix $P$, the origin shift $\bm{p}$, and the rotation matrix $R_{{\rm std}}$ determined by \textsf{spglib}. Note that \lstinline!stdR! is needed only when spin-orbit coupling is considered for the trace data. This conversion can also be done automatically by \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}] autoConvTraceToBC[poscarFile, traceData] \end{lstlisting} in which \lstinline!poscarFile! is the file name of the \textsf{POSCAR} file for the non-BC cell. In fact, \lstinline!autoConvTraceToBC! first calls the function \lstinline!readPOSCAR! to read the non-BC \textsf{POSCAR} file and then calls the function \lstinline!spglibGetSym! which calls the python interface of the package \textsf{spglib} externally to determine $P$, $\bm{p}$, and $R_{{\rm std}}$. After conversion the trace data belong to a BC cell and can be used directly by \lstinline!getBandRep! to determine LG IRs. \section{Correspondence of LG IR labels between BCS and BC conventions} \begin{figure}[th] \begin{centering} \includegraphics[width=14cm]{fig-kptBCStoBC-187} \par\end{centering} \caption{The output of \lstinline[backgroundcolor={\color{yellow!5!white}}]!showKptBCStoBC[187]!
which gives the correspondence of k-point coordinates between BCS and BC conventions. The first column is the BCS k-points. The second and third columns are the BCS k-point coordinates for conventional cell and primitive cell respectively. The 4th column is the k-point coordinates for a BC cell which are converted from the BCS k-point coordinates, i.e. the $\protect\vk$ in Eq. (\ref{eq:k0tok}). The 5th column is the BC k-points whose BC standard coordinates, namely $\protect\vk_{{\rm BC}},$ are given in the 6th column. The relation between $\protect\vk$ and $\protect\vk_{{\rm BC}}$ is $\protect\vk=S\protect\vk_{{\rm BC}}+\bm{g}_{n}$ in which $S$ is the rotation given in the 7th column and $\bm{g}_{n}$ is the reciprocal lattice vector given in the 8th column. The k-points with $*$ and $**$ in the 5th column are respectively of type II and type III as defined in subsection \ref{subsec:LGIR-at-any-k}. Blue color highlights the k-points of type IV (GP) and type V (UN). Red color highlights the k-points which have different names in BCS and BC conventions. Yellow background highlights the cases in which one BCS k-point may be identified as two BC k-points. \label{fig:kptBCStoBC187}} \end{figure} With the aid of \textsf{irvsp}, we have obtained all the data of LG IRs in BCS convention. Based on these BCS data we can establish the correspondence of LG IR labels between the BCS and BC conventions. To achieve this, the coordinates of all the k-points defined in BCS convention have to be converted to the coordinates in BC convention. The conversion of k-point coordinates can be done through the equation \begin{equation} \vk=(P^{-1}QU)^{T}\vk_{0},\label{eq:k0tok} \end{equation} in which $\vk_{0}$ is the k-point defined by a BCS cell, namely the input cell, $\vk$ is the k-point defined by the converted BC cell, and $P$, $Q$, and $U$ are the transformation matrices of the aforementioned cell conversion. Then $\vk$ is processed by \lstinline!identifyBCHSKptBySG!
to find its relation to the BC standard k-point $\vk_{{\rm BC}}$. The correspondence of k-point coordinates between BCS and BC conventions can be tabulated by the function \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}] showKptBCStoBC[sgno, BZtype] \end{lstlisting} for the space group of number \lstinline!sgno!, in which \lstinline!BZtype! is optional if it is \lstinline!""! or \lstinline!"a"!. An example is shown in Fig. \ref{fig:kptBCStoBC187} for space group $P\bar{6}m2$ (No. 187) where the second or third column corresponds to $\vk_{0}$ and the 4th and 6th columns correspond to $\vk$ and $\vk_{{\rm BC}}$ respectively. Once the correspondence of k-points is clear, the correspondence of LG IR labels can be made by first building trace data containing all the BCS LG IRs and then determining their BC LG IRs via \lstinline!getBandRep!. In this procedure, it has to be pointed out that it is the complex conjugates of the BCS characters that correspond to the BC ones. A typical example is that the character of a pure translation $\{E|t_{1}t_{2}t_{3}\}$ is $e^{i2\pi(ut_{1}+vt_{2}+wt_{3})}$ for the k-point $(u,v,w)$ in BCS convention while it should be $e^{-i2\pi(ut_{1}+vt_{2}+wt_{3})}$ in BC convention. The final correspondence of LG IR labels can be tabulated by the function \lstinline!showKrepBCStoBC! for either single-valued or double-valued LG IRs, \begin{lstlisting}[backgroundcolor={\color{yellow!5!white}}] showKrepBCStoBC[sgno, BZtype] (* for single-valued LG IRs *) showKrepBCStoBC[sgno, BZtype, "DSG"->True] (* for double-valued LG IRs *) \end{lstlisting} in which \lstinline!BZtype! is also optional if it is \lstinline!""! or \lstinline!"a"!. The example for single-valued LG IRs of the space group $P\bar{6}m2$ is shown in Fig. \ref{fig:KrepBCStoBC187}.
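The k-point conversion of Eq. (\ref{eq:k0tok}) and the complex-conjugation of translation characters can be sketched together; the matrices below are hypothetical permutation-type examples (inverse = transpose), not values for a particular space group:

```python
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Hypothetical transformation matrices of the cell conversion
P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
Q = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
U = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# k = (P^-1 Q U)^T k0, with P^-1 = P^T for a permutation matrix
M = transpose(matmul(matmul(transpose(P), Q), U))
k0 = [0.5, 0.0, 0.25]
k = matvec(M, k0)

# Character of the pure translation {E|t1 t2 t3} at k0 = (u, v, w):
u, v, w = k0
t1, t2, t3 = 1, 0, 0
phase = 2 * cmath.pi * (u * t1 + v * t2 + w * t3)
chi_bcs = cmath.exp(1j * phase)    # BCS convention
chi_bc = cmath.exp(-1j * phase)    # BC convention: the complex conjugate
print(k, chi_bc)
```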
\begin{figure}[th] \begin{centering} \includegraphics[width=12cm]{fig-krepBCStoBC-187} \par\end{centering} \caption{The output of \lstinline[backgroundcolor={\color{yellow!5!white}}]!showKrepBCStoBC[187]! which gives the correspondence of LG IR labels between BCS and BC conventions. The first column is the $\Gamma$ labels for BCS LG IRs. The second column is the dimensions of the LG IRs. The third column is the $\Gamma$ labels for BC LG IRs. The 4th column is the extended Mulliken labels. The 5th column is the realities for corresponding SG IRs. Blue background highlights the k-points of type IV (GP) and type V (UN). Yellow background highlights the cases in which one BCS k-point may be identified as two BC k-points.\label{fig:KrepBCStoBC187}} \end{figure} \section{Conclusions} During the development of the package \textsf{SpaceGroupIrep}, we found some typos in the BC book. The fixed typos are given in the supplementary material. For quick reference, the elements of each space group in BC convention are listed in the supplementary material. In addition, the correspondences of k-points and LG IR labels between BCS and BC conventions for all the 230 space groups and all possible types of BZs are also given in the supplementary material. In conclusion, we have developed a program package called \textsf{SpaceGroupIrep} in the Mathematica language for space groups and their IRs in BC convention. This package digitizes many tables in the BC book, especially the huge tables BC-Tabs. 5.1, 5.7, and 6.13, and it provides tens of functions to manipulate these data. In this package, there are functions which can get the elements of a space group, a little group, a Herring little group, or a central extension of little co-group and functions which can calculate the multiplication of the elements. There are functions which can get and show the LG IRs (SG IRs) of any k-point (k-star) for both single-valued and double-valued IRs. 
There are functions which can calculate and show the decomposition of the direct product of SG IRs for any two k-stars. There are functions which can determine the LG IRs of Bloch band states in BC convention from the \textsf{trace.txt} file produced by \textsf{vasp2trace}, and they work for any primitive cell because there are functions which can convert any input cell to a BC cell. And there are also functions which give the correspondence of k-points and LG IR labels between BCS and BC conventions. In addition to the main functions mentioned above, there are other useful functions such as \lstinline!showBZDemo! (showing the rotatable BZ and HS k-points and k-lines), \lstinline!rotAxisAngle! (finding the rotation axis and rotation angle of an O(3) matrix), and \lstinline!generateGroup! (obtaining all group elements from given generators and the group multiplication). Detailed information for each function can be obtained by the Mathematica built-in function \lstinline!Information!, e.g. \lstinline!generateGroup//Information! or just \lstinline!?generateGroup!. In short, the Mathematica package \textsf{SpaceGroupIrep} is a very useful database and tool set for both studying the representation theory of space groups and applying it in research such as analyzing band topology or determining selection rules. \section*{Acknowledgments} GBL acknowledges the support by the National Key R\&D Program of China (Grant No. 2017YFB0701600). ZZ acknowledges the support by China Postdoctoral Science Foundation (Grant No. 2020M670106). YY acknowledges the support by the National Key R\&D Program of China (Grant Nos. 2020YFA0308800 and 2016YFA0300600), the NSF of China (Grants No. 11734003), and the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB30000000). \bibliographystyle{elsarticle-num-names}
\section*{Introduction} The construction of lasers with ultra-high frequency stability is a key enabling technology for optical atomic clocks \cite{ludlow_optical_2015} and a large variety of precision measurements \cite{krohn_fiber_2014, ghelfi_fully_2014, derevianko_hunting_2014, hogan_atom-interferometric_2016, canuel_exploring_2018, karr_progress_2019}. In spite of recent alternative approaches \cite{norcia_frequency_2018}, the best results have been obtained with lasers locked to ultra-stable external frequency references. Well-studied examples of such systems include spectral holes in rare-earth doped crystals \cite{thorpe_frequency_2011,cook_laser-frequency_2015}, fiber-optical delay lines \cite{kefelian_ultralow-frequency-noise_2009,dong_subhertz_2015, dong_observation_2016}, as well as whispering-gallery-mode \cite{lim_chasing_2017} and Fabry-Perot resonators \cite{salomon_laser_1988, webster_thermal-noise-limited_2008, kessler_sub-40-mhz-linewidth_2012}. The latter have demonstrated an impressive performance down to a relative frequency stability of $4\cdot10^{-17}$ \cite{matei_1.5_2017}. In that experiment, the detrimental effect of temperature fluctuations has been minimized by operating a crystalline silicon resonator at a zero-crossing of its thermal expansion coefficient around $124\,\si{\kelvin}$. The achieved frequency stability has then been limited by thermal noise of the mirror coatings \cite{numata_thermal-noise_2004}. While this noise can be reduced by cooling to even lower temperature \cite{zhang_ultrastable_2017, robinson_crystalline_2019}, this comes at the price of a larger sensitivity to temperature drifts, as the system is then operated below the zero-crossing of the thermal expansion coefficient of silicon. Therefore, in this work we explore a cryogenic fiber-based resonator as an alternative design for a frequency-stable reference.
This has two advantages: First, such a resonator is easier to implement as it only requires off-the-shelf components. Second, in this work we show that our fiber resonator exhibits a temperature-insensitive point around 3.55\,\si{\kelvin}, 35-fold lower than that of crystalline silicon studied previously \cite{zhang_ultrastable_2017}. Fiber delay lines have been investigated at room temperature, mainly because of their lower cost and complexity \cite{kefelian_ultralow-frequency-noise_2009} as compared to Fabry-Perot resonators. Recent experiments have demonstrated sub-Hz short-term stability \cite{dong_subhertz_2015}. However, both temperature fluctuations and thermal noise have limited the performance \cite{dong_observation_2016}. Here we show that both of these limitations can be alleviated when operating at cryogenic temperature. Our experiment is intended as a proof-of-concept. We thus do not target or achieve the ultra-high precision of other cryogenic experiments with Fabry-Perot resonators \cite{zhang_ultrastable_2017} or rare-earth doped crystals \cite{thorpe_frequency_2011, cook_laser-frequency_2015}. Still, we perform a detailed characterization of our device with respect to vibrations as well as temperature and pressure instability. The observed low sensitivity and in particular the existence of a temperature-insensitive point make our approach promising for future precision experiments. Our experiment uses a fiber ring resonator which exhibits a ladder of equidistant resonances with free spectral range $f_\text{FSR}=c/(nL)$. Here, $n$ is the refractive index, $L$ the length of the fiber, and $c$ the speed of light. In most Fabry-Perot frequency references investigated to date \cite{matei_second_2016, webster_thermal-noise-limited_2008} the cavity field is in vacuum with constant $n=1$. Thus, a temperature-insensitive point is observed when the thermal expansion coefficient $\alpha=L^{-1}(\partial L /\partial T)$ exhibits a zero-crossing.
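The relation $f_\text{FSR}=c/(nL)$ can be checked against the measured values quoted below; the effective index $n\approx 1.468$ is an assumed typical value for silica fiber near 1535\,nm, not a number taken from the experiment:

```python
c = 299792458.0  # speed of light in vacuum, m/s
n = 1.468        # assumed effective index of silica fiber near 1535 nm
L = 120.0        # approximate fiber length, m

# Free spectral range f_FSR = c / (n L)
f_fsr = c / (n * L)
print(f"free spectral range: {f_fsr/1e6:.2f} MHz")  # ~1.70 MHz

# Finesse = FSR / linewidth, with the 87 kHz FWHM quoted in the text
fwhm = 87e3
finesse = f_fsr / fwhm
print(f"finesse: {finesse:.0f}")  # ~20
```

Both numbers are consistent with the measured $1.71(4)\,\si{MHz}$ free spectral range and finesse of 20(1) reported below.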
In contrast, in our experiment the light is guided in a silica fiber, whose refractive index changes with temperature $T$, as described by the normalized thermo-optic coefficient $\beta_\text{TO}=n^{-1}(\partial n/\partial T)$. In our setting, both the thermal expansion \cite{white_thermal_1975} and the thermo-optic coefficient \cite{arcizet_cryogenic_2009} change with temperature. The combined sensitivity $\alpha+\beta_\text{TO}$ of fused silica exhibits a zero-crossing around $13\,\si{K}$ \cite{arcizet_cryogenic_2009}, making it a promising material for cryogenic frequency references. Instead of pure glass, we use a commercial fiber, which gives an additional contribution to the total temperature sensitivity. The fiber cladding will exert a radial pressure on the core if their thermal expansion coefficients are not the same, which changes both the refractive index and fiber length. The effect on phase stability has been studied recently down to temperatures of $100\,\si{\kelvin}$ \cite{zhu_thermal_2020}, finding that acrylate coatings transition to a stiff, glass-like state with thermal expansion converging to zero at $0\,\si{\kelvin}$. In our modeling, we include the effect of thermal strain as coefficient $\beta_\text{TS}$, obtaining the total temperature sensitivity: \begin{equation} \label{eq:df_dT_over_f} \frac{1}{f} \frac{\mathrm{d} f}{\mathrm{d} T} = -(\alpha + \beta_\text{TO} + \beta_\text{TS}). \end{equation} The existence, temperature and curvature of a temperature-insensitive point, which corresponds to a zero-crossing of eq.~\ref{eq:df_dT_over_f}, will thus depend on the fiber used. In particular the material and diameter of the core, cladding, and acrylate coating will determine the thermal strain coefficient. While we use a standard commercial product for our initial experiments, this gives access to a large parameter space for future optimization.
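Locating the temperature-insensitive point amounts to finding a root of the combined coefficient $\alpha+\beta_\text{TO}+\beta_\text{TS}$ as a function of $T$. The sketch below does this by bisection for a purely illustrative model; the coefficients are invented, and only the sign change near 3.55\,K mimics the turning point reported later in the text:

```python
def total_sensitivity(T):
    """Illustrative model of alpha + beta_TO + beta_TS (units 1/K).
    Invented coefficients; only the zero-crossing near 3.55 K is meaningful."""
    return 2.2e-8 * (T - 3.55) + 1.0e-9 * (T - 3.55) ** 2

def bisect(f, lo, hi, tol=1e-6):
    # standard bisection on a bracketed sign change
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

T_star = bisect(total_sensitivity, 2.0, 5.0)
print(f"temperature-insensitive point: {T_star:.3f} K")
```

With tabulated measured coefficients in place of the model, the same root search would predict the insensitive point of a given fiber.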
In our experiment, we fabricate a fiber ring resonator by splicing the ends of a 95:5 fused fiber beam-splitter to a $\sim 120\,\si{m}$ long fiber. The latter is coiled onto an aluminum cylinder of $4\,\si{\centi\metre}$ outer diameter that fits into the sample space of our closed-cycle cryostat (AttoDry 2100). To avoid bend-induced loss that would limit the finesse, both the beam-splitter and fiber are made from Corning$^\text{\textregistered}$ ClearCurve$^\text{\textregistered}$ bend-insensitive fiber. We do not expect that the properties of the fused coupler (Evanescent Optics Inc., type 954) significantly influence our measurements. We first determine the resonator properties at cryogenic temperature by measuring its transmission at $1535\,\si{nm}$ through the two open ports of the beam-splitter, cf. Fig.~\ref{fig:setup}. After adjusting the input polarization to match one of the resonator eigenmodes, we observe a free spectral range of $1.71(4)\,\si{MHz}$ and a FWHM linewidth of $87(7)\,\si{kHz}$. This corresponds to a finesse of 20(1) and a round-trip transmission of about $70\,\%$. This value is limited by the splice-, bend- and absorption loss of the fiber and the excess loss of the beam splitter, and might be further increased in future devices. The sample is thermalized to its cryogenic environment via helium exchange gas, whose pressure is monitored with a Pirani pressure gauge at ambient temperature. To characterize the sensitivity of the device to perturbations, we attach a cryogenic vibration sensor and a resistive thermometer in close proximity to the ring resonator. \begin{figure} \includegraphics[width=1\columnwidth]{fig1_setup.pdf} \caption{\textbf{Experimental setup.}\label{fig:setup} A cryogenic fiber ring resonator serves as a frequency reference for stabilizing a laser at $1535\,\si{nm}$. To this end, the transmission is measured by a fast photodiode.
An error signal is generated using the Pound-Drever-Hall technique with the help of a radio-frequency source at 100 MHz, an electro-optical modulator (EOM) and a mixer. The feedback signal is applied to a voltage-controlled oscillator (VCO) that drives an acousto-optical modulator (AOM). In addition, slow drifts of the laser frequency are compensated via its tuning port. To characterize the frequency stability, the beating signal of the laser light with an ultra-stable frequency comb is recorded. } \end{figure} We now study the stability of the resonator against external perturbations. To investigate its short-term stability, we lock the laser (Koheras BASIK X15) to a frequency comb (Menlo Systems FC1500-250-ULN) that in turn is referenced to a resonator (Menlo Systems ORS1500) with sub-Hz stability. We then tune the laser to the side of the fiber resonator transmission dip. Fluctuations of the resonator frequency will lead to a fluctuation of the transmitted power, which we transfer to frequency deviations using the independently measured spectral response. The resulting time trace is shown in Fig.~\ref{fig:shortterm}(a). A fast Fourier transform gives the spectral properties of the frequency shift (b), which exhibits a number of peaks at different frequencies (black). The peaked structure is caused by acoustic resonances excited by the broadband noise of our pulse-tube cryocooler. The position and width of the resonances show large similarities with the vibration spectrum measured by the attached piezo-electric sensor (blue). Deviations in the amplitude of the peaks can be explained by the sensor being only sensitive to vibrations along the axis parallel to the coil center, while the resonator will be sensitive to vibrations along all axes.
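The side-of-fringe conversion from transmitted power to frequency deviations can be illustrated with a Lorentzian dip of the quoted 87\,kHz FWHM; the dip depth and line shape are assumptions of this sketch, not measured values:

```python
GAMMA = 87e3  # FWHM linewidth from the text, Hz
DEPTH = 0.3   # assumed dip depth (not given in the text)

def transmission(detuning):
    """Assumed Lorentzian transmission dip around resonance."""
    return 1.0 - DEPTH / (1.0 + (2.0 * detuning / GAMMA) ** 2)

# Operate on the side of the fringe, here at half the half-width
f_op = GAMMA / 4

# Local slope of the response (the "spectral response" used in the text)
eps = 1.0
slope = (transmission(f_op + eps) - transmission(f_op - eps)) / (2 * eps)

# A small true frequency excursion produces a power change, which is
# converted back to frequency via the locally linear response
delta_true = 1e3
dT = transmission(f_op + delta_true) - transmission(f_op)
delta_est = dT / slope
print(delta_est)
```

For excursions small compared to the linewidth the linearization error stays at the percent level, which is why the method works for the sub-linewidth fluctuations analyzed here.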
By comparing the peaks in the spectra of sensor and resonator between $0.3$ and $4\,\si{\kilo\hertz}$, we can estimate that the vibration sensitivity of our resonator is below $5\cdot{10^{-11}}\,\si{m^{-1}\,s^{2}}$ in that frequency range. \begin{figure} \includegraphics{fig2_shortterm_log.pdf} \caption{\textbf{Short-term stability.} (a) Frequency shift of the resonance as a function of time (grey), with running averages over $10\,\si{ms}$ (black) and $100\,\si{ms}$ (red). (b) Transmission spectrum (black) of the fiber ring resonator obtained from the data in (a), compared to the sample stage vibrations (blue) measured by a cryogenic acceleration sensor. The red dashed line indicates the calculated noise floor caused by measured temperature drifts of about $1\,\si{\milli\kelvin\per\second}$. } \label{fig:shortterm} \end{figure} \begin{figure}[ht] \centering\includegraphics[width=1\columnwidth]{fig3_TPsensitivity.pdf} \caption{\textbf{Temperature and pressure sensitivity.} \textbf{Main graph:} The frequency shift of the fiber ring resonances with temperature shows a turning point around $3.55\,\si{K}$, shown for an exchange gas pressure of $18\,\si{mbar}$. The measured data (black dots) are fit by a third-order polynomial (red line) to extract a curvature of $-22(1)\cdot{10^{-9}}\,\si{K^{-2}}$ at the turning point. \textbf{Inset:} Shift of the resonance frequency with pressure at a temperature of $3.55\,\si{K}$. At low pressure (grey dots) the resonator thermalization may be impaired. A fit to a third-order polynomial (blue line) gives a pressure-insensitive point around 15 mbar, with a remaining pressure dependence of $4.2(2)\cdot{10^{-11}}\,\si{\milli\bar^{-2}}$.} \label{fig:TP_dependence} \end{figure} As a next step, we investigate the sensitivity with respect to temperature and pressure changes by locking the laser to the resonator using the Pound-Drever-Hall technique \cite{black_introduction_2001}, with an input power of $\sim 0.1\,\si{\milli\watt}$.
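Near a first-order insensitive point the relative shift behaves as $y(T)\approx c_{2}(T-T_{0})^{2}+c_{3}(T-T_{0})^{3}$, and the curvature $2c_{2}$ can be recovered by a central second difference at $T_{0}$, for which the cubic term cancels exactly. The coefficients below are hypothetical, chosen only so that the second derivative matches the order of magnitude quoted in the text:

```python
T0 = 3.55    # turning point, K (from the text)
c2 = -11e-9  # hypothetical quadratic coefficient, 1/K^2
c3 = 2e-9    # hypothetical cubic coefficient, 1/K^3

def rel_shift(T):
    """Cubic model of the relative frequency shift around the turning point."""
    dT = T - T0
    return c2 * dT**2 + c3 * dT**3

# Central second difference at T0; the odd (cubic) term cancels exactly
h = 0.1
curvature = (rel_shift(T0 + h) - 2 * rel_shift(T0) + rel_shift(T0 - h)) / h**2
print(curvature)
```

In the experiment the same quantity is obtained more robustly from a third-order polynomial fit to the full data set, as described in the caption of Fig.~\ref{fig:TP_dependence}.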
We use a modulation frequency that is much larger than the free spectral range, tuned such that the sidebands lie halfway between higher-order resonances and are thus not phase-shifted upon transmission. We then record the frequency difference between the laser and the frequency comb after changing the sample space temperature and waiting a few seconds until full thermalization. The obtained temperature dependence (with a constant amount of helium in the sample space) is shown in Fig.~\ref{fig:TP_dependence}(a). We observe a first-order temperature-insensitive point around $3.55\,\si{\kelvin}$. The data are well fit by a third-order polynomial (red curve), from which we extract the curvature at the turning point, $-22(1)\cdot{10^{-9}}\,\si{K^{-2}}$. We repeat this procedure for different amounts of helium gas in the sample space and thus different pressures. We find that neither the temperature of the turning point nor its curvature changes significantly (not shown). To directly investigate the pressure sensitivity, we evacuate the sample space and then repeatedly add small amounts of helium while the temperature of the sample space is kept at the insensitive point, $\sim 3.55\,\si{\kelvin}$. The observed frequency shift (Fig.~\ref{fig:TP_dependence}(b)) corresponds to a sensitivity of $\lesssim 5\cdot{10^{-10}}\,\si{mbar^{-1}}$ over a large pressure range, with a first-order insensitive point around $15\,\si{\milli\bar}$. Measurements at lower pressure (grey) are less reliable, as the thermalization of the fiber resonator with its surroundings is impaired when the pressure is too low. Similarly, we cannot exclude that changing temperature gradients contribute to the measured pressure dependence. Thus, the above value should be considered an upper bound. Still, we note that the observed low pressure sensitivity justifies performing our temperature scans at constant helium filling level instead of constant gas pressure.
The reason is that in the investigated regime, the pressure changes by $\lesssim 2.5\,\si{mbar/K}$. Thus, its impact on the temperature sensitivity is only $\lesssim 1.25\cdot{10^{-9}}\,\si{K^{-1}}$, i.e., small compared to the observed temperature dependence. For the same reason, compensating temperature changes by connecting the sample space to a room-temperature helium reservoir, as pioneered in \cite{cook_laser-frequency_2015}, does not seem promising for our fiber-based ring resonator unless the temperature sensitivity can be further reduced by materials engineering. After characterizing the sensitivity to external perturbations, we measured the stability over a period of sixteen hours, see Fig.~\ref{fig:longterm}. The inset shows the raw data with a slow linear drift of about $5\cdot{10^{-11}}\,\si{h^{-1}}$, similar to the isothermal creep reported for other amorphous materials such as the ultra-low expansion glass commonly used in reference cavities \cite{webster_thermal-noise-limited_2008}. As it may originate from the thermal stress exerted by the fiber cladding, different fiber types may show a different linear drift. Still, after subtracting a linear fit, our resonator exhibits a long-term stability around $20\,\si{kHz}$, or $\sim 10^{-10}$, limited by the moderate temperature stability of $\pm 100\,\si{mK}$ obtained in our cryostat. Thus, our implementation does not achieve a stability that exceeds previous experiments, neither that of fiber interferometers at room temperature ($5\cdot 10^{-15}$) \cite{dong_subhertz_2015}, nor that of cryogenic silicon resonators ($4\cdot10^{-17}$) \cite{matei_1.5_2017}. The reason is that our setup does not include any thermal shields or vibration-damping enclosures. Instead, it constitutes a proof-of-concept experiment to determine the sensitivity to pressure, temperature and vibrations, which we discuss in the following.
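The cross-coupling bound quoted above follows from a one-line estimate, which can be sanity-checked directly (numbers taken from the text):

```python
# Cross-check of the pressure-temperature cross-coupling bound: the upper
# bound on the linear pressure sensitivity, multiplied by the
# pressure-temperature coupling at constant helium filling, bounds the
# induced temperature sensitivity.
pressure_sensitivity = 5e-10  # fractional shift per mbar (upper bound from the text)
dp_dT = 2.5                   # mbar per kelvin in the investigated regime
induced = pressure_sensitivity * dp_dT
print(induced)  # 1.25e-09 per kelvin, matching the bound quoted above
```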
The extracted vibration sensitivity of our fiber coil, $<5\cdot{10^{-11}}\,\si{(m^{-1}\,s^{2})}$, is only about tenfold larger than that of specially optimized, unitary-aspect-ratio ultra-low expansion glass cavities at room temperature \cite{webster_thermal-noise-limited_2008, leibrandt_spherical_2011} and that of crystalline silicon resonators \cite{matei_1.5_2017}. Operating our resonator in a closed-cycle cryocooler with vibrations decoupled down to a level of $\lesssim 10^{-3}\,\si{m/s^2}$ \cite{zhang_ultrastable_2017}, we expect that our fiber ring resonator would reach a short-term stability around $10^{-14}$. Even better short-term stability can be achieved with improvements of our resonator design. First, additional vibration-damping material may be inserted between the aluminum cylinder and the fiber coil. Second, increasing the fiber length while reducing the finesse may be advantageous. Third, an optimized geometric arrangement of the fiber may reduce the vibration sensitivity, with a fifty-fold lower value demonstrated at room temperature \cite{huang_optical_2019}. Finally, our setup does not require coupling of a free-space optical beam into a micro-sized resonator mode. Thus, it should be possible to build an effective cryogenic vibration isolation stage. As the thermalization is done by exchange gas, one could, for example, simply use a large mass on a soft spring to hold the fiber resonator while damping out all high-frequency vibrations. Alternatively, active damping and magnetic levitation could be implemented in cryostats with larger sample space, offering the potential for unprecedented short-term stability. In this context, the upper bound on the pressure sensitivity, $4.2(2)\cdot{10^{-11}}\,\si{\milli\bar^{-2}}$, is also important. It is much smaller than the linear change of $\sim 3\cdot{10^{-7}}\,\si{\milli\bar^{-1}}$ observed with Fabry-Perot cavities at room temperature and atmospheric pressure \cite{egan_performance_2015}.
Reaching the $10^{-16}$ level of stability would require pressure stabilization at the $10^{-4}$ level, which should be straightforward in a closed cryogenic volume. Alternatively, placing the system in cryogenic vacuum with typical pressures $\ll 10^{-9}\,\si{\milli\bar}$ will eliminate the influence of pressure and its fluctuations. Next, we compare the observed temperature sensitivity of $22(1)\cdot{10^{-9}}\,\si{K^{-2}}$ to other experiments. It is about an order of magnitude worse than that of ultra-low expansion glass cavities at room temperature, $1.5\cdot{10^{-9}}\,\si{K^{-2}}$ \cite{webster_thermal-noise-limited_2008}, but close to that of Fabry-Perot cavities made from crystalline silicon and operated at their temperature-insensitive point, $17\cdot{10^{-9}}\,\si{K^{-2}}$ at $124\,\si{K}$ \cite{kessler_sub-40-mhz-linewidth_2012}. Therefore, to estimate the potential of our cryogenic fiber ring resonator, we make an explicit comparison with the most stable sub-10\,K resonator to date: a silicon cavity operated around $4\,\si{K}$ \cite{zhang_ultrastable_2017, robinson_crystalline_2019}. To achieve the same linear sensitivity ($0.02\,\si{ppb/K}$), we would need to stabilize the fiber to within $1\,\si{mK}$ of the turning point, which is directly feasible in most commercial cryocoolers. With additional passive and active heat shields, temperature fluctuations below $10\,\si{\micro K}$ have been demonstrated \cite{zhang_ultrastable_2017}. For our resonator, this would lead to an expected stability below $2\cdot 10^{-18}$. Optimization of the fiber coating thickness and material may allow for even lower temperature sensitivity at the turning point. \begin{figure} \centering\includegraphics{fig4_longterm.pdf} \caption{\textbf{Long-term stability.} Shift of the laser frequency when locked to the fiber ring resonator (with $60\,\si{s}$ averaging intervals to eliminate the influence of vibrations).
The temperature is kept within $100\,\si{mK}$ of the temperature-insensitive point of $3.55\,\si{K}$. The inset shows the raw data, while the main graph has been obtained by subtracting a linear fit.} \label{fig:longterm} \end{figure} Still, the question of whether such a setup would allow for unprecedented frequency stability requires a careful analysis of thermal noise in the fiber. There are two main mechanisms predicted by theory \cite{wanser_fundamental_1992, duan_intrinsic_2010} and confirmed experimentally \cite{dong_subhertz_2015, dong_observation_2016}: At high frequencies, thermodynamic noise (thermoelastic and thermorefractive) \cite{wanser_fundamental_1992} dominates the spectrum in room-temperature experiments. This contribution scales with $T^2\frac{\mathrm{d} f}{\mathrm{d} T}$. When operating at cryogenic temperature, and in particular at the temperature-insensitive point, it should therefore be negligible. The second cause of thermal noise can be derived from the fluctuation-dissipation theorem, where the spectral density of spontaneous fiber length fluctuations reads \cite{duan_intrinsic_2010}: \begin{equation} S_l(f)=\frac{2 k T L \Phi_0}{3 \pi E_0 A f}. \end{equation} Here, $k$ is Boltzmann's constant, $T$ the temperature, $L$ the fiber length, $A$ its cross section, $f$ the frequency, $E_0$ the lossless value of Young's modulus, and $\Phi_0$ its loss angle. The $T$-scaling in the above formula suggests that cooling a fiber to cryogenic temperature may lead to an improved stability -- compared to measurements at room temperature \cite{dong_subhertz_2015}, even linewidths of a few mHz seem feasible. However, care has to be taken in this extrapolation, as the loss angle of the fiber assembly will also change when lowering the temperature. While that of the fiber will likely increase \cite{arcizet_cryogenic_2009}, that of the acrylate coating may be reduced when it transitions to a stiff, glass-like state \cite{zhu_thermal_2020}.
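The fluctuation-dissipation estimate above is straightforward to evaluate numerically. The fiber parameters below are generic assumptions for a silica fiber, not the values of this experiment.

```python
import math

# Numerical sketch of S_l(f) = 2 k T L Phi_0 / (3 pi E_0 A f).
# All fiber parameters are illustrative assumptions.
k_B = 1.380649e-23      # Boltzmann constant, J/K
L = 10.0                # assumed fiber length, m
r = 62.5e-6             # assumed cladding radius, m
A = math.pi * r ** 2    # fiber cross section, m^2
E0 = 72e9               # Young's modulus of fused silica, Pa (approximate)
phi0 = 1e-2             # assumed loss angle of the fiber assembly

def S_l(f_hz, temp_k):
    """Spectral density of spontaneous length fluctuations, m^2/Hz."""
    return 2 * k_B * temp_k * L * phi0 / (3 * math.pi * E0 * A * f_hz)

# The linear T-scaling suggests a 75-fold reduction from 300 K to 4 K,
# assuming (optimistically) a temperature-independent loss angle.
ratio = S_l(1.0, 300.0) / S_l(1.0, 4.0)
print(ratio)
```

As the text cautions, the loss angle itself is temperature dependent, so this linear scaling is only a first estimate.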
Therefore, additional measurements are required to give a reliable estimate of the ultimately achievable stability of a cryogenic fiber-based setup. In summary, we have characterized the sensitivity of a fiber ring resonator to environmental fluctuations at cryogenic temperature. Our approach may find direct application in laboratories that operate cryogenic setups and have moderately high requirements on laser stability. As an example, our setup is intended for spectroscopy of rare-earth doped crystals with lifetime-limited spectral resolution \cite{merkel_coherent_2020}. In addition, the robustness and light-weight design of our resonator make it promising for laser stabilization in space \cite{mcrae_frequency_2013}. Finally, when operated in an optimized closed-cycle cryostat \cite{zhang_ultrastable_2017}, our system may also be considered for laser stabilization to an unprecedented accuracy, depending on the yet unknown contribution of thermal noise at cryogenic temperature. Should this contribution be too large, a further reduction by two orders of magnitude seems feasible by operating our setup in a dilution refrigerator. \section*{Acknowledgments} This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 757772), and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868. We acknowledge the technical contribution of Samarth Chawla during an early stage of the project, and discussions with Thomas Legero.
\section{Introduction} \subsection{Cubulating groups acting on polygonal complexes} Recently, a very fruitful route to understanding groups has been to find an action on a $CAT(0)$ cube complex. Indeed, an action without a global fixed point provides an obstruction to Property $(T)$ \cite{Niblo-Reeves97}, while a proper action is enough to guarantee the Haagerup property \cite{Cheriz-Martin-Valette_haagerupproperty}. Further properties, such as residual finiteness or linearity, can be deduced if the cube complex is \emph{special} \cite{Haglund-Wise}. Perhaps the most notable recent use of cube complexes was in Agol's proof of the Virtual Haken Conjecture \cite{Agol13}. Therefore, it is of interest to find actions of groups on $CAT(0)$ cube complexes. In this paper we provide a condition on the links of polygonal complexes (including those with triangular faces) that is sufficient to ensure that a group acting properly discontinuously and cocompactly on such a complex contains a virtually free codimension-$1$ subgroup. We provide stronger conditions that are sufficient to ensure that a group acting properly discontinuously and cocompactly on such a complex acts properly discontinuously on a $CAT(0)$ cube complex: in many applications (in particular for hyperbolic groups) this action is also cocompact. We shall see that these conditions can be practically checked in many examples, and can in fact be checked by computer search if desired. For a polygonal complex $X$ and a vertex $v$ we define the \emph{link} of $v$, $Lk_{X}(v)$ (or simply $Lk(v)$ when $X$ is clear from context), as the graph whose vertices are the edges of $X$ incident at $v$, and two vertices $e_{1}$ and $e_{2}$ are connected by an edge $f$ in $Lk(v)$ if the edges $e_{1}$ and $e_{2}$ in $X$ are adjacent to a common face $f$.
We can endow the link graph with the \emph{angular metric}: an edge $f=(e_{1},e_{2})$ in $Lk(v)$ has length $\alpha$, where $\alpha$ is the angle between $e_{1}$ and $e_{2}$ in the shared face $f$. We refer the reader to Section \ref{subsec: Link conditions} for further definitions, such as that of a gluably $\pi$-separated complex (this requires a solution to a system of linear equations called the \emph{gluing equations}). We note that in all of our applications, the gluing equations can be solved by considering only the links of vertices of $G\backslash X$. It is well known that a group containing a codimension-$1$ subgroup cannot have Property $(T)$ \cite{Niblo-Roller1998}. Furthermore, a hyperbolic group acting properly discontinuously and cocompactly on a $CAT(0)$ cube complex is virtually special \cite[Theorem $1.1$]{Agol13} (see Haglund-Wise \cite{Haglund-Wise} for a discussion of the notion of specialness); in particular it is linear over $\mathbb{Z}$ and is residually finite. \begin{theoremalph}\label{mainthm: cubulating groups} Let $G$ be a group acting properly discontinuously and cocompactly on a simply connected $CAT(0)$ polygonal complex $X$. \begin{enumerate}[label=(\roman*)] \item If $G\backslash X$ is gluably weakly $\pi$-separated, then $G$ contains a virtually-free codimension-$1$ subgroup (and therefore does not have Property $(T)$). \item If $G\backslash X$ is gluably $\pi$-separated, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex. If, in addition, $G$ is hyperbolic, then this action is cocompact. In particular, if $G$ is hyperbolic, then it is virtually special, and so linear over $\mathbb{Z}$. \end{enumerate} \end{theoremalph} It is commonly far easier to check a local property than a global one, and so local to global principles are frequently of great use. When working with complexes, it is often most natural to consider local properties related to the links of vertices. 
In terms of metric curvature, one of the best-known local to global principles is Gromov's Link Condition \cite[$4.2A$]{Gromov_hyperbolic}. Switching to group theoretic properties, \.{Z}uk \cite{zuk1996} and Ballmann--\'{S}wiatkowski \cite{Ballmann-Swiatkowski} independently provided a condition on the first eigenvalue of the Laplacian of links of simplicial complexes that is sufficient to prove that a group acting properly discontinuously and cocompactly on such a complex has Property $(T)$. In particular, as opposed to \cite[Example $4.3$]{Hruska-Wise}, we do not require a partition of the edges of links into cutsets: we can remove this assumption at the expense of requiring that every cutset contains at least two elements, and that the \emph{gluing equations} are satisfied for the cutsets (these equations are trivially satisfied for a collection of proper disjoint edge cutsets). Furthermore, we do not require that the cutsets are two-sided: $\Gamma - C$ is allowed to contain arbitrarily many components. Finally, we allow cutsets to consist of vertices or edges. Though we are not always able to cocompactly cubulate non-hyperbolic groups with this method, we can still produce codimension-$1$ subgroups, and often a proper action on a cube complex. \subsection{Applications of the main theorem} We provide some applications of Theorem \ref{mainthm: cubulating groups}. We consider the groups classified by Kangaslampi--Vdovina \cite{Kangaslampi-Vdovina} and Carbone--Kangaslampi--Vdovina \cite{Carbone-Kangaslampli-Vdovina_2012}. These are groups that act simply transitively on triangular hyperbolic buildings: in particular, they act properly discontinuously and cocompactly on a simply connected triangular complex with links isomorphic to the minimal generalized quadrangle. Little is known about these groups: until now they were not even known to be residually finite.
We apply Theorem \ref{mainthm: cubulating groups} to these groups to deduce that they are virtually special. The full automorphism groups of Kac--Moody buildings of $2$-spherical type and large thickness have Property $(T)$ \cite{Dymara-Januszkiewicz2002,Ershov-Rall18}: neither \cite{Dymara-Januszkiewicz2002} nor \cite{Ershov-Rall18} records whether Property (T) fails at small thickness. Some of the groups considered in Corollary \ref{mainthm: cubulating the generalized quadrangle} are cocompact lattices in a $2$-spherical Kac--Moody building with small thickness \cite{Carbone-Kangaslampli-Vdovina_2012}. Therefore, Corollary \ref{mainthm: cubulating the generalized quadrangle} complements \cite{Dymara-Januszkiewicz2002,Ershov-Rall18}, providing an example of the failure of Property $(T)$ when the thickness is small. \begin{coralph}\label{mainthm: cubulating the generalized quadrangle} Let $X$ be a simply connected polygonal complex such that every face has at least $3$ sides, and the link of every vertex is isomorphic to the minimal generalized quadrangle. If a group $G$ acts properly discontinuously and cocompactly on $X$, then it is virtually special; in particular it is linear over $\mathbb{Z}$. \end{coralph} We prove that if $X$ and $G$ are as above, then $X$ can be endowed with a $CAT(0)$ metric such that $G\backslash X$ is gluably $\pi$-separated. However, we show that it is not disjointly $\pi$-separated, so that \cite[Example 4.3]{Hruska-Wise} cannot be applied to such a complex. As a further application of Theorem \ref{mainthm: cubulating groups}, we consider generalized triangle groups, as defined in \cite{Lubotzky-Manning-Wilton} (see Definitions \ref{def: generalized triangle} and \ref{def: generalized triangle 2}). Let $C_{k,2}$ be the cage graph on $k$ edges, i.e. the smallest $k$-regular graph of girth $2$.
For finite-sheeted covering graphs $\Gamma_{i} \looparrowright C_{k,2}$, we consider an associated pair of families of triangular complexes of groups $D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$, and $D^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$. We remark that these complexes of groups are not necessarily unique for given $\Gamma_{1},\Gamma_{2},\Gamma_{3}$. We consider explicitly the graphs used in \cite{Caprace-Conder-Kaluba-Witzel_triangle}: we refer to them by their Foster Census names (see \cite{fostercensus}). The only graph not in the Foster Census is $G54$, the Gray graph, which is edge but not vertex transitive. Using Theorem \ref{mainthm: cubulating groups} and Theorem \ref{mainthm: cubulating generalized triangle groups} we can deduce the following. \begin{coralph}\label{coralph: small girth generalized triangle groups} Let $\Gamma_{i}\looparrowright C_{k,2}$ be finite-sheeted covers, such that $girth(\Gamma_{i})\geq 6$ for each $i$. Let $G=\pi_{1}(D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))$ or $G=\pi_{1}(D^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))$ for some $j$. \begin{enumerate}[label=(\roman*)] \item If $\Gamma_{i}\in\{F24A,\;F26A,\;F48A\}$ for each $i$, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex: if $G$ is hyperbolic, then this action is also cocompact and so $G$ is virtually special. \item If $\Gamma_{1}\in\{F40A,G54\}$, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex: if $G$ is hyperbolic, then this action is also cocompact and so $G$ is virtually special. \end{enumerate} \end{coralph} There are $252$ groups considered in \cite{Caprace-Conder-Kaluba-Witzel_triangle}, of which they show that $168$ do not satisfy Property $(T)$. Our method recovers this result for $101$ groups, and proves that $30$ new groups do not have Property $(T)$. We prove that each of the $131$ groups we consider has a proper action on a $CAT(0)$ cube complex, and so, by e.g. 
\cite{Cheriz-Martin-Valette_haagerupproperty}, has the Haagerup property. Furthermore, $125$ of these groups are hyperbolic and have a proper and cocompact action on a $CAT(0)$ cube complex, and hence by \cite{Agol13} are virtually special. Wise's malnormal special quotient theorem \cite{Wise-MSQT} (cf. \cite{AgolGrovesManningMSQT}) is one of the most important theorems in modern geometric group theory. However, the proof of this theorem is famously complex, and so in Section \ref{subsection: cubulating dehn fillings of generalized triangle groups} we apply Theorem \ref{mainthm: cubulating groups} to generalized triangle groups to recover partial consequences of the malnormal special quotient theorem in Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups}. Although this theorem follows from Wise's proof of the MSQT, a far more general theorem, the proof of Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups} is considerably shorter and simpler, and provides an effective bound on the index of the fillings required. \subsection{Structure of the paper} The main idea of the proof is the following. Since $G\backslash X$ is $\pi$-separated, we can find a collection of local geodesics in $G\backslash X$ that are locally separating at vertices of $G\backslash X$. The \emph{gluing equations} provide us with a way to glue these local geodesics together to find a locally geodesic, locally separating subcomplex of $G\backslash X$: by lifting, we find a geodesic separating subcomplex of $X$ with cocompact stabilizer. We then use the construction of Sageev \cite{Sageev-95}, generalized by Hruska--Wise in \cite{Hruska-Wise}, to construct the desired $CAT(0)$ cube complex. The paper is structured as follows.
In Section \ref{section: Cubulating hyperbolic groups acting on polygonal complexes} we define hypergraphs, which will be separating subspaces constructed in the polygonal complex, and show that certain subgroups of their stabilizers are codimension-$1$. We then prove Theorem \ref{mainthm: cubulating groups} by using Hruska--Wise's \cite{Hruska-Wise} extension of Sageev's \cite{Sageev-95} construction of a $CAT(0)$ cube complex, and proving that there are `enough' hypergraphs to `separate' the polygonal complex. In Section \ref{section: finding cutsets}, we discuss how to find `separated' cutsets of a graph by computer search. In Section \ref{section: generalized quadrangles} we prove Corollary \ref{mainthm: cubulating the generalized quadrangle} by proving that the minimal generalized quadrangle is \emph{weighted edge $3$-separated} and endowing the polygonal complexes with a suitable $CAT(0)$ metric. In Section \ref{sec: gen triangles} we prove Theorem \ref{mainthm: cubulating generalized triangle groups} and Corollary \ref{coralph: small girth generalized triangle groups}. We again apply Theorem \ref{mainthm: cubulating groups} to prove Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups} by considering cutsets in covers of graphs. \section*{Acknowledgements} I would like to thank my PhD advisor Henry Wilton for suggesting this topic, for the many useful discussions, and for invaluable comments on an earlier draft of this manuscript. I would also like to thank Pierre-Emmanuel Caprace for the extremely helpful comments and corrections on an earlier draft, as well as for pointing out the relevance of Corollary \ref{mainthm: cubulating the generalized quadrangle} to Property (T) in Kac--Moody buildings. \section{Cubulating groups acting on polygonal complexes}\label{section: Cubulating hyperbolic groups acting on polygonal complexes} This section is structured as follows.
We first define the required conditions on graphs and complexes in Section \ref{subsec: Link conditions}, and in Section \ref{subsection: Removing cut edges and producing two sided cut sets} we discuss how to remove cut edges from links. We provide some examples where our conditions can be readily verified for graphs in Section \ref{subsection: Examples of separated graphs} and for complexes in Section \ref{subsection: Examples of solutions of the gluing equations}. We use these definitions in Sections \ref{subsection: Constructing hypergraphs in polygonal complexes}, \ref{subsection: Hypergraphs are separating}, and \ref{subsection: Hypergraph stabolizers and wallspaces} to build separating convex trees in polygonal complexes, and in Section \ref{subsection: Cubulating groups acting on polygonal complexes} we use these convex trees, and a construction due to \cite{Sageev-95} and \cite{Hruska-Wise}, to prove Theorem \ref{mainthm: cubulating groups}. First, we introduce the relevant definitions for links. \subsection{Some separation conditions}\label{subsec: Link conditions} We now define the notion of `separatedness' of a graph. The \emph{combinatorial metric} on a graph $\Gamma$ is the path metric induced by assigning each edge of $\Gamma$ length $1$. \begin{definition} Let $\Gamma$ be a finite metric graph. \begin{enumerate}[label=\roman*)] \item An edge $e$ is a \emph{cut edge} if $\Gamma-\{e\}$ is disconnected. \item A set $C\subseteq \Gamma$ is a \emph{cutset} if $\Gamma - C$ is disconnected as a topological space. \item A cutset $C$ is an \emph{edge cutset} if $C\subseteq E(\Gamma)$ and is a \emph{vertex cutset} if $C\subseteq V(\Gamma).$ \item An edge cutset $C$ is \emph{proper} if for any edge $e\in C$, the endpoints of $e$ lie in distinct components of $\Gamma-C$. \item A vertex cutset $C$ is \emph{proper} if for any vertex $u\in C$, and any distinct vertices $v,w$ adjacent to $u$, the vertices $v$ and $w$ lie in distinct components of $\Gamma-C$.
\end{enumerate} For an edge $e$ in $\Gamma$ let $m(e)$ be the midpoint of $e$. For $\sigma >0$ a set $\mathcal{C}\subseteq E(\Gamma)$ is \emph{$\sigma$-separated} if for all distinct $e_{1},e_{2}\in\mathcal{C}$, $d_{\Gamma}(m(e_{1}),m(e_{2}))\geq \sigma .$ A set $\mathcal{C}\subseteq V(\Gamma)$ is \emph{$\sigma$-separated} if for all distinct $v_{1},v_{2}\in\mathcal{C}$, $d_{\Gamma}(v_{1},v_{2})\geq \sigma .$ \end{definition} \begin{remark} We note that proper cut sets are very natural to consider. Any minimal edge cut set is proper, and more importantly, proper cutsets are preserved under passing to finite covers. Finding proper edge cutsets is easy, but for a given graph $\Gamma$ there may not be any proper $\sigma$-separated vertex cutsets: see for example the graph $F26A$, considered in Lemma \ref{lem: F26A is * separated}. \end{remark} \begin{definition}[\emph{Edge separated}] Let $\Gamma$ be a finite metric graph, and let $\sigma>0$. We will say that $\Gamma$ is \emph{edge $\sigma$-separated} if $\Gamma$ is connected, contains no vertices of degree $1$, and there exists a collection of proper $\sigma$-separated edge cutsets $C_{i}\subseteq E(\Gamma)$ with $\cup_{i}C_{i}=E(\Gamma)$ and $\vert C_{i}\vert\geq 2$ for each $i$. We say the graph is \emph{disjointly edge $\sigma$-separated} if the above cutsets form a partition of the edges. \end{definition} Note that to each edge cutset $C$ we can assign a partition $\mathcal{P}(C)$ to $\pi_{0}(\Gamma-C)$: we \emph{always require} that such a partition is at least as coarse as connectivity in $\Gamma - C$, and each partition contains at least two elements. The \emph{canonical partition} of $C$ is that induced by connectivity in $\Gamma -C$. 
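The separation conditions above lend themselves to the computer search mentioned in the introduction. The following sketch checks that a candidate set of edges is a proper $\sigma$-separated edge cutset in the combinatorial metric; the example graph (an $8$-cycle) and cutset are illustrative and not taken from the paper.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Breadth-first distances from src in a graph given by adjacency lists."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_proper_separated_edge_cutset(vertices, edges, cutset, sigma):
    """Check that `cutset` disconnects the graph, that every cut edge has its
    endpoints in distinct components of the complement (properness), and that
    midpoints of distinct cut edges are >= sigma apart (the midpoint distance
    of two distinct unit-length edges is 1 + the minimal endpoint distance)."""
    remaining = [e for e in edges if e not in cutset]
    adj_rem = {v: [] for v in vertices}
    for a, b in remaining:
        adj_rem[a].append(b)
        adj_rem[b].append(a)
    label, n_components = {}, 0
    for v in vertices:
        if v not in label:
            for w in bfs_dist(adj_rem, v):
                label[w] = n_components
            n_components += 1
    if n_components < 2:
        return False  # not a cutset
    if any(label[a] == label[b] for a, b in cutset):
        return False  # not proper
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    dist = {v: bfs_dist(adj, v) for v in vertices}
    for (a, b), (c, d) in combinations(cutset, 2):
        if 1 + min(dist[x][y] for x in (a, b) for y in (c, d)) < sigma:
            return False  # midpoints too close
    return True

# In the 8-cycle, two antipodal edges form a proper 4-separated edge cutset.
V = list(range(8))
E = [(i, (i + 1) % 8) for i in range(8)]
print(is_proper_separated_edge_cutset(V, E, [(0, 1), (4, 5)], 4))  # True
```

For the small links arising in the applications (such as the minimal generalized quadrangle), enumerating candidate cutsets and filtering with a check of this kind is entirely feasible.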
\begin{definition}[\emph{Strongly edge separated}] A graph $\Gamma$ is \emph{strongly edge $\sigma$-separated} if $\Gamma$ is edge $\sigma$-separated and for every pair of points $u,v$ in $\Gamma$ with $d_{\Gamma}(u,v)\geq\sigma$ there exists a proper $\sigma$-separated edge cutset $C_{i}$ with $u$ and $v$ lying in separate components of $\Gamma - C_{i}$. We say the graph is \emph{disjointly strongly edge $\sigma$-separated} if the above cutsets form a partition of the edges. \end{definition} There is a more combinatorial condition that implies strong edge separation. \begin{definition} We say a cutset $C$ \emph{separates} $\{v_{1},v_{2}\}$ and $\{w_{1},w_{2}\}$ if each $v_{i}$ lies in a different component of $\Gamma - C$ to each $w_{j}$. \end{definition} \begin{lemma}\label{lem: strong edge sep condition} Let $n\geq 2$, and let $\Gamma$ be a graph endowed with the combinatorial metric, such that $girth(\Gamma)\geq 2n$. Suppose that $\Gamma$ is edge $n$-separated with cutsets $\mathcal{C}=\{C_{1},\hdots , C_{m}\}$, and that for any vertices $u,v$ in $\Gamma$ with $d_{\Gamma}(u,v)\geq n$ and any vertices $u',v'$ with $d(u,u')=d(v,v')=1$ there exists an $n$-separated cutset $C_{i}$ separating $\{u,u'\}$ and $\{v,v'\}$. Then $\Gamma$ is strongly edge $n$-separated with the same cutsets. \end{lemma} \begin{proof} First note that as $\Gamma$ is edge $n$-separated, it is connected and contains no vertices of degree $1$. Let $u,v$ be two points in $\Gamma$ with $d(u,v)\geq n$. If $u$ and $v$ are both vertices, then we are done by hypothesis. Otherwise, let $e(u),e(v)$ be edges containing $u$ and $v$ respectively, with endpoints $u_{1},u_{2}$ and $v_{1},v_{2}$; if $u$ (respectively $v$) is a vertex, take $u=u_{1}=u_{2}$ (respectively $v=v_{1}=v_{2}$). As $girth (\Gamma)\geq 2n$, without loss of generality $d(u_{1},v_{1})\geq n$: taking $C_{i}$ to be the cutset separating $\{u_{1},u_{2}\}$ and $\{v_{1},v_{2}\}$, we see that $C_{i}$ separates $u$ and $v$.
\end{proof} \begin{definition}[\emph{Weakly vertex separated}] Let $\Gamma$ be a finite metric graph, and let $\sigma>0$. We will say that $\Gamma$ is \emph{weakly vertex $\sigma$-separated} if: \begin{enumerate}[label=\roman*)] \item $\Gamma$ is connected and contains no vertices of degree $1$, \item and there exists a collection of $\sigma$-separated vertex cutsets $C_{i}\subseteq V(\Gamma)$ such that $\cup_{i}C_{i}=V(\Gamma)$ and $\vert C_{i}\vert \geq 2$ for each $i$. \end{enumerate} To each vertex cutset $C$ we can assign a partition $\mathcal{P}(C)$ to $\pi_{0}(\Gamma-C)$: we \emph{always require} that such a partition is at least as coarse as connectivity in $\Gamma - C$ and each partition contains at least two elements. The \emph{canonical partition} of $C$ is that induced by connectivity in $\Gamma -C$. \end{definition} \begin{definition}[\emph{Vertex separated}] Let $\Gamma$ be a finite metric graph, and let $\sigma>0$. We will say that $\Gamma$ is \emph{vertex $\sigma$-separated} if: \begin{enumerate}[label=\roman*)] \item $\Gamma$ is connected and contains no vertices of degree $1$, \item there exists a collection of $\sigma$-separated vertex cutsets $C_{i}\subseteq V(\Gamma)$ such that $\cup_{i}C_{i}=V(\Gamma)$ and $\vert C_{i}\vert \geq 2$ for each $i$, \item for any vertex $v$ and any distinct vertices $w,w'$ adjacent to $v$ there exists a $\sigma$-separated vertex cutset $C_{i}$ such that $w$ and $w'$ lie in separate components of $\Gamma - C_{i}$, \item and for any points $u$ and $v$ in $\Gamma$ with $d(u,v)\geq \sigma$, there exists a cutset $C_{i}$ with $u$ and $v$ lying in distinct components of $\Gamma - C_{i}$. \end{enumerate} Note that importantly, in general we don't require vertex cutsets to be proper. We say the graph is \emph{disjointly vertex separated} if the above cutsets form a partition of the vertices, and each cutset is proper. 
To each vertex cutset $C$ we can assign a partition $\mathcal{P}(C)$ of $\pi_{0}(\Gamma-C)$: we \emph{always require} that such a partition is at least as coarse as connectivity in $\Gamma - C$ and that each partition contains at least two elements. The \emph{canonical partition} of $C$ is that induced by connectivity in $\Gamma -C$. \end{definition} \begin{remark} The reason we don't require vertex cutsets to be proper is the following. For edge cutsets we could weaken the definition of edge separated to require a condition similar to $iii)$ above: i.e. that the endpoints of each edge are separated by some cutset. However such a cutset can always be made minimal, and therefore proper, by removing unnecessary edges: the same is not true for vertex cutsets. \end{remark} Once again, this definition is not as difficult to verify as it may seem. \begin{lemma}\label{lem: vertex separated condition} Let $n\geq 2$, and let $\Gamma$ be a graph endowed with the combinatorial metric, such that $\Gamma$ is connected, contains no vertices of degree $1$, and $girth(\Gamma)\geq 2 n$. Suppose there exists a collection of $n$-separated vertex cutsets $\mathcal{C}=\{C_{1},\hdots , C_{m}\}$ so that \begin{enumerate}[label=$\roman*)$] \item $\cup_{i}C_{i}=V(\Gamma)$, \item $\vert C_{i}\vert \geq 2$ for each $i$, \item for each vertex $v$ and distinct $w$, $w'$ adjacent to $v$ there exists an $n$-separated cutset $C_{i}$ with $w$ and $w'$ lying in separate components of $\Gamma - C_{i}$, \item and furthermore that for any pair of vertices $u,v$ with $d_{\Gamma}(u,v)\geq n$ there exists a cutset $C_{i}$ with $u$ and $v$ lying in separate components of $\Gamma -C_{i}$. \end{enumerate} Then $\Gamma$ is vertex $n$-separated with the collection of cutsets $\mathcal{C}$. \end{lemma} \begin{proof} It suffices to show that for any pair of points $u,v$ with $d(u,v)\geq n$ there exists a cutset $C_{i}$ separating them. If $u$ and $v$ are vertices, then we are finished.
Otherwise, let $e(u),e(v)$ be the edges that $u$ and $v$ lie on. Let $u_{1},u_{2}$ and $v_{1},v_{2}$ be the endpoints of $e(u),e(v)$ respectively. If $v$ is a vertex, simply take $v_{1}=v_{2}=v$. Then without loss of generality, as $girth(\Gamma)\geq 2n$ and $d(u,v)\geq n$, we have that $d(u_{1},v_{1})\geq n$. Let $C_{i}$ be the cutset separating $u_{1}$ and $v_{1}$: this cutset must also separate $u$ and $v$. \end{proof} Finally, we define weighted $\sigma$-separated. \begin{definition}[Weighted $\sigma$-separated.] Let $\sigma>0$ and let $\Gamma$ be an edge $\sigma$-separated graph (respectively strongly edge $\sigma$-separated, weakly vertex $\sigma$-separated, vertex $\sigma$-separated) with $\sigma$-separated cutsets $\mathcal{C}=\{C_{1},\hdots , C_{m}\}$. We call $\Gamma$ \emph{weighted edge $\sigma$-separated} (respectively \emph{weighted strongly edge $\sigma$-separated}, \emph{weighted weakly vertex $\sigma$-separated}, \emph{weighted vertex $\sigma$-separated}) if there exists an assignment of positive integers $n(C_{i})$ to the cutsets in $\mathcal{C}$ that solves the \emph{weight equations}: for any edges (respectively edges, vertices, vertices) $\alpha,\beta$ of $\Gamma$, $$\sum\limits_{C_{i}\in\mathcal{C}:\alpha\in C_{i}}n(C_{i})=\sum\limits_{C_{i}\in\mathcal{C}:\beta\in C_{i}}n(C_{i}).$$ \end{definition} Note that though the above equations at first appear to be difficult to solve, we can always find solutions for a graph with an edge (respectively vertex) transitive automorphism group (see Section \ref{subsection: Examples of separated graphs}). Next we extend these definitions to $CAT(0)$ polygonal complexes. This requires some care to ensure that the subcomplexes we build will actually be separating. A \emph{polygonal complex} is a $2$-dimensional polyhedral complex, and is \emph{regular} if all of its polygonal faces are regular polygons.
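The weight equations can be verified mechanically in small examples. The following sketch is a toy illustration of our own (not from the text): for the $6$-cycle with the combinatorial metric, take the three antipodal pairs of edges as cutsets; each edge lies in exactly one cutset, so assigning every cutset the weight $1$ solves the weight equations.

```python
from itertools import combinations

# Toy example (hypothetical labelling): edges of the 6-cycle, numbered 0..5,
# with cutsets pairing each edge with its antipodal edge.
edges = list(range(6))
cutsets = [frozenset({i, i + 3}) for i in range(3)]
weights = {C: 1 for C in cutsets}  # candidate weights n(C)

def weight_sum(edge):
    """Sum of n(C) over all cutsets C containing the given edge."""
    return sum(n for C, n in weights.items() if edge in C)

# The weight equations require this sum to agree for every pair of edges.
assert all(weight_sum(a) == weight_sum(b) for a, b in combinations(edges, 2))
print([weight_sum(e) for e in edges])  # → [1, 1, 1, 1, 1, 1]
```

Since the automorphism group of the $6$-cycle is edge transitive, this is a special case of the transitivity criterion of Section \ref{subsection: Examples of separated graphs}.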
For a polygonal complex $X$ and a vertex $v$ we define the \emph{link} of $v$, $Lk_{X}(v)$ (or simply $Lk(v)$ when $X$ is clear from context), as the graph whose vertices are the edges of $X$ incident at $v$, and two vertices $e_{1}$ and $e_{2}$ are connected by an edge $f$ in $Lk(v)$ if the edges $e_{1}$ and $e_{2}$ in $X$ are adjacent to a common face $f$. We can endow the link graph with the \emph{angular metric}: an edge $f=(e_{1},e_{2})$ in $Lk(v)$ has length $\alpha$, where $\alpha$ is the angle between $e_{1}$ and $e_{2}$ in the shared face $f$. We first define the following graph, which appeared in \cite{Ollivier-Wise}. \begin{definition}[\emph{Antipodal graph}] Let $Y$ be a regular non-positively curved polygonal complex. Subdivide edges in $Y$ and add vertices at the midpoints of edges: call these additional vertices \emph{secondary vertices}, and call the other vertices \emph{primary}. Every polygon in $Y$ now contains an even number of edges in its boundary. Construct a graph $\Delta_{Y}$ as follows. Let $V(\Delta_{Y})=V(Y)$ and join two vertices $v$ and $w$ by an edge, labelled $f$, whenever $v$ and $w$ are antipodal in the boundary of a face $f$ in $Y$: add one such edge for each such face. This is the \emph{antipodal graph} for $Y$. \end{definition} \begin{remark} We note that for a secondary vertex $s$ of $Y$, $Lk_{Y}(s)$ is a cage graph with edges of length $\pi$. Hence, if $Y$ does not contain any free faces, $Lk_{Y}(s)$ is weighted edge $\pi$-separated, with a single $\pi$-separated cutset $E(Lk_{Y}(s))$. \end{remark} Note that as the complex is regular, the edges of $\Delta_{Y}$ pass through the midpoints of edges in $Lk_{Y}(v)$ for vertices $v$. There is a canonical immersion $\Delta_{Y} \looparrowright Y$; we map a vertex $v$ of $\Delta_{Y}$ to the corresponding vertex of $Y$, and we map an edge $e$ labelled by $f$ to the local geodesic between the endpoints of $e$ lying in the face $f$.
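As a concrete illustration of the antipodal graph, the following toy sketch (our own labelling conventions, not from the text) lists the edges of $\Delta_{Y}$ arising from a single square face after subdivision: the boundary then has eight vertices, alternating primary and secondary, and each is joined to its antipode by an edge labelled by the face.

```python
# Toy sketch (hypothetical labelling): antipodal edges from one subdivided
# square face. Boundary vertices are 0..7 in cyclic order (even = primary
# corners, odd = secondary midpoints); vertex k is antipodal to vertex k + 4.
n = 8          # boundary length of a square after edge subdivision
face = "f"     # the single face labels each antipodal edge
antipodal_edges = [((k, k + n // 2), face) for k in range(n // 2)]
print(antipodal_edges)
# → [((0, 4), 'f'), ((1, 5), 'f'), ((2, 6), 'f'), ((3, 7), 'f')]
```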
\begin{definition} Let $Y$ be a non-positively curved polygonal complex, and let $\Delta$ be one of $Y^{(1)}$ or $\Delta_{Y}$. Assign $\Delta$ an arbitrary orientation, and let $e$ be an oriented edge of $\Delta$. For each $\pi$-separated cutset $C$ in $Lk(i(e))$, choose a set of partitions of $\pi_{0}(Lk(i(e))-C)$, $\{P_{i}(C)\}_{i}$. For $v\in V(\Delta)$, we define $$\mathcal{C}_{v}=\{C\;:\;C\mbox{ is a }\pi\mbox{-separated cutset},\;C\subseteq Lk(v)\}.$$ We define $$\mathcal{C}(e):=\{C\;:\;C\mbox{ is a }\pi\mbox{-separated cutset},\;e\in C\},$$ and $$\mathcal{C}=\bigcup_{e\in E^{\pm 1}(\Delta)}\mathcal{C}(e).$$ Similarly we can define $$\mathcal{CP}(e):=\bigcup\limits_{C\in\mathcal{C}(e)}\{(C,P_{i}(C))\}_{i},$$ and $$\mathcal{CP}=\bigcup_{e\in E^{\pm 1}(\Delta)}\mathcal{CP}(e).$$ \end{definition} The following is extremely similar to the `splicing' of Manning \cite{Manning10}: we will use this for a similar purpose to that of \cite{Cashen-Macura2011}. \begin{definition}[Equatable partitions] Let $Y$ be a non-positively curved polygonal complex, and let $\Delta$ be one of $Y^{(1)}$ or $\Delta_{Y}$. Let $v,w$ be two vertices of $\Delta$ connected by an oriented edge $e$, so that $v=i(e)$ and $w=t(e)$. Let $C_{v}$ be a $\pi$-separated cutset in $Lk(v)$ with choice of partition $P_{v}$ and $C_{w}$ be a $\pi$-separated cutset in $Lk(w)$ with choice of partition $P_{w}$. Let $v'$, $w'$ be points on $e$ in an $\epsilon$-neighbourhood of $v$, $w$ respectively, so that there are canonical mappings \begin{equation*} \begin{split} i_{v}&:St(v')\hookrightarrow Lk(v),\\ i_{w}&:St(w')\hookrightarrow Lk(w),\\ \phi&:St(v')\xrightarrow{\cong}St(w'). \end{split} \end{equation*} Therefore we have induced mappings \begin{equation*} \begin{split} \overline{i}_{v}&:St(v')-v'\hookrightarrow Lk(v)-C_{v},\\ \overline{i}_{w}&:St(w')-w'\hookrightarrow Lk(w)-C_{w},\\ \overline{\phi}&:St(v')-v'\xrightarrow{\cong}St(w')-w'. 
\end{split} \end{equation*} For $u=v,w$ let $\mathcal{P}_{u}$ be the set of partitions of $\pi_{0}(Lk(u)-C_{u})$, and let $\mathcal{P}_{u'}$ be the set of partitions of $\pi_{0}(St(u')-u')$. There are induced maps \begin{equation*} \begin{split} \iota_{v}&:\mathcal{P}_{v}\rightarrow \mathcal{P}_{v'},\\ \iota_{w}&:\mathcal{P}_{w}\rightarrow \mathcal{P}_{w'},\\ \psi&:\mathcal{P}_{v'}\hookdoubleheadrightarrow \mathcal{P}_{w'}. \end{split} \end{equation*} We say that $(C_{v},P_{v})$ and $(C_{w},P_{w})$ are \emph{equatable along $e$}, written $$(C_{v},P_{v})\sim_{e}(C_{w},P_{w})$$ if $$\psi(\iota_{v}(P_{v}))=\iota_{w}(P_{w}).$$ This also induces an equivalence relation on $\mathcal{CP}(e)$: for $(C,P),(C',P')\in \mathcal{CP}(e)$, we write $$(C,P)\approx_{e} (C',P')$$ if $$\iota_{v}(P)=\iota_{v}(P').$$ We write $[C,P]_{e}$ for the equivalence class of $(C,P)$ under $\approx_{e}$. We define $[C,P]_{e^{-1}}$ to be the equivalence class of cutset partitions in $\mathcal{CP}(e^{-1})$ equatable to $(C,P)$ along $e$: by definition this is independent of the choice of $(C',P')\in [C,P]_{e}$. \end{definition} These constructions are designed so that we can `splice' the local cutsets along each edge. Though this definition is somewhat complicated, note the following remark. \begin{remark} Let $e$, $v,w$, $C_{v},C_{w}$ be as above. If both $C_{v},C_{w}$ are proper with canonical partitions $P_{v},P_{w}$, then $(C_{v},P_{v})\sim_{e}(C_{w},P_{w})$. This follows as the induced partitions of $St(v')-v'$ and $St(w')-w'$ are just the partitions induced by connectivity, and by properness every element of the induced partition of $St(v')-v'$ (respectively $St(w')-w'$) contains a unique vertex. Similarly, if $C_{1},C_{2}\in\mathcal{C}(e)$ are proper, with canonical partitions $P_{1},P_{2}$, then $(C_{1},P_{1})\approx_{e}(C_{2},P_{2})$. \end{remark} \begin{definition}[\emph{Gluably $\sigma$-separated}] Let $Y$ be a non-positively curved polygonal complex.
We call $Y$ \emph{gluably edge $\sigma$-separated} (respectively \emph{gluably (weakly) vertex $\sigma$-separated}) if: \begin{enumerate}[label=$\roman*)$] \item $Y$ is regular (respectively, $Y$ is allowed \textbf{not to be regular}), \item the link of every vertex in $Y$ is edge (respectively (weakly) vertex) $\sigma$-separated, \item for every $\pi$-separated cutset $C$ in $Lk(v)$ there exists a family of partitions $\{P_{i}(C)\}$ of $\pi_{0}(Lk(v)-C)$ such that for any distinct pair of points $x,y\in Lk(v)$ separated by $C$, $x$ and $y$ are separated by some $P_{i}(C),$ \item and there exists a strictly positive integer solution to the \emph{gluing equations}: letting $\Delta=\Delta_{Y}$ (respectively $\Delta=Y^{(1)}$) we can assign a positive integer $\mu (C,P)$ to every pair $$(C,P)\in\mathcal{CP}:=\bigcup\limits _{e\in E^{\pm}(\Delta )}\mathcal{CP}(e)$$ such that for every edge $e$ of $\Delta$ and every $(C,P)\in \mathcal{CP}(e)$: \begin{equation*} \sum\limits_{(C',P')\in [C,P]_{e}} \mu(C',P')=\sum\limits_{(C',P')\in [C,P]_{e^{-1}}} \mu(C',P'). \end{equation*} \end{enumerate} \end{definition} \begin{definition}[\emph{Gluably $\sigma$-separated}] Let $Y$ be a non-positively curved polygonal complex. We call $Y$: \begin{enumerate}[label=$\roman*)$] \item \emph{gluably weakly $\sigma$-separated} if it is gluably weakly vertex $\sigma$-separated, \item and \emph{gluably $\sigma$-separated} if it is gluably edge or gluably vertex $\sigma$-separated. \end{enumerate} \end{definition} \begin{remark} Again, note that in the definition of a gluably (weakly) vertex $\sigma$-separated complex, \textbf{we do not require that the complex $Y$ is regular}. If the link of each vertex in the complex $Y$ is disjointly $\sigma$-separated, then we can solve the gluing equations by taking only the canonical partition $P(C)$ for each cutset $C$, and setting $\mu(C,P(C))=1$ for all cutsets $C$, so that $Y$ is gluably $\sigma$-separated.
\end{remark} \subsection{Removing cut edges}\label{subsection: Removing cut edges and producing two sided cut sets} We now show that the existence of cut edges is not a serious obstruction. \begin{lemma}\label{lem: removing cut edges} Let $G$ be a group acting properly discontinuously and cocompactly on a simply connected $CAT(0)$ polygonal complex $X$, such that the link of every vertex in $X$ is connected. There exists a simply connected $CAT(0)$ polygonal complex $X'$ such that $G$ acts properly discontinuously and cocompactly on $X'$ and the link of any vertex $v'$ in $X'$ is a subgraph of $Lk(v)$ for some vertex $v$ in $X$. Furthermore for any vertex $v$ of $X'$, either $Lk_{X'}(v)$ is connected and contains no cut edges, or $Lk_{X'}(v)$ is disconnected. \end{lemma} \begin{proof} First note that we can assume that $X$ contains no vertices of degree $1$ in its links. Let $Y=G\backslash X$, and let $v_{0},\hdots ,v_{m}$ be the vertices of $Y$. Let $v$ be a vertex in $X$, and suppose there exists a cut edge $f$ in $Lk(v)$. Let $e_{1}$ and $e_{2}$ be the endpoints of $f$. Suppose that, in $X$, the endpoints of $e_{1}$ are $v$ and $w$. Construct a new complex $X'$ as follows: let $v_{1}$ and $v_{2}$ be two copies of $v$, and connect these vertices to $w$ with the edges $e_{1}^{1}$ and $e_{1}^{2}$ respectively. Since $f$ is a cut edge in $Lk(v)$ there is a canonical way to attach edges and faces to $v_{1}$ and $v_{2}$ that agrees with the connected components of $Lk(v) - f$. Now, we assume that $f$ is attached to $v_{1}$. Then the face $f$ is a free face, which we can push in to remove the vertex of degree $1$, $e_{1}^{1}$, in $Lk(v_{1})$, so that $Lk(v_{1})$ and $Lk(v_{2})$ are connected subgraphs of $Lk(v)-f$, and the link of any other vertex $u$ incident to the face $f$ is transformed into a proper subgraph of $Lk(u)$ with the edge $f$ removed.
We can repeat this process finitely many times, applied to the set of vertices $Gv_{i}$ each time, to find the $CAT(0)$ polygonal complex $X'$ desired. \end{proof} \subsection{Examples of separated graphs}\label{subsection: Examples of separated graphs} Our definitions of weighted $\sigma$-separated graphs required assigning weights to cutsets such that certain equations hold. In this subsection we prove that as long as the automorphism group of a graph is transitive on vertices (or edges, depending on whether cutsets are formed of vertices or edges), these equations can always be solved. Note that $Aut(\Gamma)$ is the group of automorphisms of $\Gamma$ as a metric graph. \begin{lemma}\label{lem: vertex transitive automorphism group can solve equations} Let $\sigma>0$ and let $\Gamma$ be (weakly) vertex $\sigma$-separated. If $Aut(\Gamma)$ is vertex transitive then $\Gamma$ is weighted (weakly) vertex $\sigma$-separated. \end{lemma} \begin{proof} Assume that $\Gamma$ is vertex $\sigma$-separated, with $\sigma$-separated vertex cutsets $\mathcal{C}=\{C_{1},\hdots, C_{n}\}$. The proof is similar for weakly separated graphs. Let $H=Aut(\Gamma)$. For each $C\in\mathcal{C}$, let $$H(C):=\{\gamma C\;:\;\gamma\in H\},$$ counted {\bf{with}} multiplicity, i.e. if $\gamma_{1}C=\gamma_{2}C$ and $\gamma_{1}\neq\gamma_{2}$ in $H$, then both $\gamma_{1}C,\gamma_{2}C$ appear in $H(C)$. Note that for every $C'\in H(C)$, $C'$ is a $\sigma$-separated vertex cutset. Fix some vertex $v\in C$, and let $w\in V(\Gamma)$ be any vertex. Since $H$ acts vertex transitively, there exists $h\in H$ such that $h v=w$. Therefore $$\{\gamma\in H \;:\;v\in \gamma C\}=\{\gamma\in H \;:\;w\in h\gamma C\}=\{h^{-1}\gamma'\in H \;:\;w\in \gamma' C\},$$ and therefore $$\vert\{\gamma\in H \;:\;v\in \gamma C\}\vert=\vert\{\gamma\in H \;:\;w\in \gamma C\}\vert.$$ Let $$\tilde{\mathcal{C}'}:=\bigsqcup\limits_{C\in\mathcal{C}}H(C),$$ again with multiplicity.
By the above, it follows that for any two vertices $v,w\in V(\Gamma)$, $$\vert\{C\in \tilde{\mathcal{C}'} \;:\;v\in C\}\vert=\vert\{C\in \tilde{\mathcal{C}'} \;:\;w\in C\}\vert.$$ Let $\mathcal{C}'$ be the underlying set of $\tilde{\mathcal{C}'}$, and for $C\in \mathcal{C}',$ let $$n(C)=\vert\{C'\in\tilde{\mathcal{C}'}\;:\;C=C'\} \vert,$$ i.e. $n(C)$ is the multiplicity of $C$ in $\tilde{\mathcal{C}'}.$ It is easily seen that the above weights solve the weight equations. As $\mathcal{C}\subseteq\mathcal{C}'$, it follows that $\Gamma$ is vertex $\sigma$-separated with respect to these cutsets: by the above argument it follows that $\Gamma$ is weighted vertex $\sigma$-separated with cutsets $\mathcal{C}'$. \end{proof} Similarly, we can prove the following. \begin{lemma}\label{lem: edge transitive automorphism group can solve equations} Let $\sigma>0$ and let $\Gamma$ be (strongly) edge $\sigma$-separated. If $Aut(\Gamma)$ is edge transitive then $\Gamma$ is weighted (strongly) edge $\sigma$-separated. \end{lemma} \subsection{Examples of solutions of the gluing equations}\label{subsection: Examples of solutions of the gluing equations} Recall that we call an edge cutset $C$ \emph{proper} if the endpoints of every edge $e$ in $C$ lie in separate components of $\Gamma-C$, and a vertex cutset $C$ \emph{proper} if for every $v\in C$, the vertices adjacent to $v$ each lie in separate components of $\Gamma-C$. \begin{lemma}\label{lem: solving gluing equations for minimal edge cutsets} Let $Y$ be a regular non-positively curved complex and suppose the link of each vertex is weighted edge $\pi$-separated. There exists a system of strictly positive weights that solve the gluing equations for $Y$. \end{lemma} \begin{proof} Since edge cutsets are proper, any two cutsets are equatable along a shared edge. Therefore we may associate to each cutset $C$ exactly one partition $P(C)$, namely that of connectivity in $\Gamma-C$.
In particular for any oriented edge $e\in E^{\pm}(\Delta_{Y})$ and any $(C,P(C))\in \mathcal{CP}(e),$ $[C,P(C)]_{e}=\mathcal{CP}(e).$ First, note that for an oriented edge $e$ of $\Delta_{Y}$, and $v=i(e)$, $\mathcal{C}(e)=\mathcal{C}(e)\cap \mathcal{C}_{v}$. Since the link of each vertex in $Y$ is weighted edge $\pi$-separated, for each vertex $v\in Y$ there exists a positive integer $N_{v}>0$ and a system of strictly positive weights $n_{v}(C)$ for $C\in \mathcal{C}_{v}$ such that for any edge $e$ in $Lk_{Y}(v)$, $$\sum\limits_{C\in \mathcal{C}(e)}n_{v}(C)=\sum\limits_{C\in \mathcal{C}(e)\cap \mathcal{C}_{v}}n_{v}(C)=N_{v}.$$ Let $M=\prod_{v\in V(Y)}N_{v},$ and for a cutset $C\in\mathcal{C}_{v},$ define $m(C)=Mn_{v}(C)\slash N_{v}.$ It follows that for an edge $e$ in $Lk_{Y}(v)$, $$\sum\limits_{C\in \mathcal{C}(e)}m(C)=\frac{M}{N_{v}}\sum\limits_{C\in \mathcal{C}(e)}n_{v}(C)=\frac{M}{N_{v}}N_{v}=M.$$ Finally, taking $\mu(C,P(C))=m(C)$, these weights immediately solve the gluing equations. \end{proof} Similarly, we can prove the following. \begin{lemma}\label{lem: solving gluing equations for proper vertex cutsets} Let $Y$ be a non-positively curved complex, such that the link of each vertex is weighted vertex $\pi$-separated, and every cutset is proper. There exists a system of strictly positive weights that solve the gluing equations for $Y$. \end{lemma} \subsection{Hypergraphs in \texorpdfstring{$\mathbf{\pi}$}{pi}-separated polygonal complexes}\label{subsection: Constructing hypergraphs in polygonal complexes} We now begin to construct our separating subcomplexes. Suppose $X$ is a simply connected $CAT(0)$ polygonal complex, and $G$ acts properly discontinuously and cocompactly on $X$, so that $G\backslash X$ is (weakly) gluably $\pi$-separated. If $G\backslash X$ is gluably edge $\pi$-separated, let $\Delta=\Delta_{G\backslash X}$, and if it is (weakly) gluably vertex $\pi$-separated, let $\Delta=(G\backslash X)^{(1)}$. 
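The common-denominator rescaling in the proof of Lemma \ref{lem: solving gluing equations for minimal edge cutsets} can also be checked numerically. The data below are hypothetical (three vertices with local weights $n_{v}(C)$ and local sums $N_{v}$, not taken from the text): rescaling by $m(C)=Mn_{v}(C)\slash N_{v}$ with $M=\prod_{v}N_{v}$ produces integer weights with a common sum $M$.

```python
from math import prod

# Hypothetical local weight systems (not from the text): for each vertex,
# weights n_v(C) on its cutsets, with local sum N_v (for simplicity taken
# over all cutsets of the vertex).
N = {"u": 2, "v": 3, "w": 6}
n = {"u": {"A": 1, "B": 1}, "v": {"C": 1, "D": 2}, "w": {"E": 6}}

M = prod(N.values())  # common multiple of the local sums, here M = 36
m = {C: M * wt // N[vtx]  # m(C) = M * n_v(C) / N_v, always an integer
     for vtx, cuts in n.items() for C, wt in cuts.items()}

# After rescaling, every vertex's weights sum to the same constant M.
assert all(sum(m[C] for C in cuts) == M for cuts in n.values())
print(M)  # → 36
```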
Assign an arbitrary orientation to $\Delta$. Recall that for an oriented edge $e$ of $\Delta$, we let $\mathcal{C}(e)=\{C\in\mathcal{C}\;:\;e\in C\}$ (note that for any oriented edge $e$, $\mathcal{C}(e)$ is non-empty, as $G\backslash X$ is gluably $\pi$-separated). For every vertex $v$ and $\pi$-separated cutset $C$ in $Lk(v)$ let $\{P_{i}(C)\}$ be the required set of partitions of $\pi_{0}(Lk(v)-C)$, and let $$\mathcal{CP}(e)=\bigcup\limits_{C\in\mathcal{C}(e)}\{(C,P_{i}(C))\}_{i}.$$ Let $$\mathcal{CP}=\bigcup\limits_{e\in E^{\pm}(\Delta)}\mathcal{CP}(e).$$ By assumption, we can assign positive integer weights $\mu(C,P)$ to each pair $(C,P)\in\mathcal{CP}$ so that for every edge $e$ of $\Delta$ and every $(C,P)\in\mathcal{CP}(e)$: \begin{equation*} \sum\limits_{(C',P')\in [C,P]_{e}} \mu(C',P')=\sum\limits_{(C',P')\in [C,P]_{e^{-1}}} \mu(C',P'). \end{equation*} We now construct a second graph $\Sigma$ as follows. Let $$V(\Sigma)=\bigsqcup\limits_{(C,P)\in\mathcal{CP}}\{u_{(C,P)}^{1},\hdots ,u_{(C,P)}^{\mu(C,P)}\}.$$ The gluing equations imply that for each positively oriented edge $e$ of $\Delta$ and each equivalence class $[C,P]_{e}\subseteq \mathcal{CP}(e)$ there exists a bijection $$\phi_{e}:\hspace*{- 10pt}\bigsqcup\limits_{(C',P')\in [C,P]_{e}}\hspace*{- 10pt}\{u_{(C',P')}^{1},\hdots ,u_{(C',P')}^{\mu(C',P')}\}\rightarrow\hspace*{- 10pt} \bigsqcup\limits_{(C',P')\in [C,P]_{e^{-1}}}\hspace*{- 10pt}\{u_{(C',P')}^{1},\hdots ,u_{(C',P')}^{\mu(C',P')}\}.$$ For each positively oriented edge $e$ and each equivalence class $[C,P]_{e}$ of $\mathcal{CP}(e)$ choose such a bijection, $\phi_{e}$, and add the oriented edges $$\{(u_{(C',P')}^{i},\phi_{e}(u_{(C',P')}^{i}))\;:\;(C',P')\in[C,P]_{e}, 1\leq i\leq \mu(C',P')\}.$$ Note that for each $(C,P)\in\mathcal{CP}$, $Lk_{\Sigma}(u_{(C,P)}^{i})$ is isomorphic to $C$ as labelled oriented graphs. Furthermore, each edge in $\Sigma$ labelled by $e$ connects two vertices of the form $u_{(C,P)}^{i}$ and $u_{(C',P')}^{j}$ with $(C,P)\sim_{e}(C',P')$, i.e.
every edge connects vertices with equatable partitions along that edge. There is an immersion $\Sigma\looparrowright\Delta$ that sends $u_{(C,P)}^{i}$ to the vertex $v_{C}$ such that $C\subseteq Lk_{\Delta}(v_{C})$ and maps an edge labelled by $e$ to the edge $e$ in $\Delta$. Let $\Sigma_{1},\hdots ,\Sigma_{m}$ be the connected components of $\Sigma$, and let $\underline{\Lambda}^{1},\hdots , \underline{\Lambda}^{m}$ be the images of these graphs in $G\backslash X$ under the immersion $$\Sigma_{i}\looparrowright\Delta\looparrowright G\backslash X.$$ Note that each $\underline{\Lambda}^{i}$ is locally geodesic as the cutsets are $\pi$-separated in $G\backslash X$. \begin{definition} If $G\backslash X$ is gluably edge $\pi$-separated, a lift of $\underline{\Lambda}^{i}$ to the $CAT(0)$ complex $X$ is called an \emph{edge hypergraph in $X$}, and otherwise it is a \emph{vertex hypergraph in $X$}. \end{definition} Note that hypergraphs come with two pieces of information at each vertex $v$ of $X$ they meet: the cutset $C$ and the partition $P$. We say $\Lambda$ \emph{passes through} the above objects. \begin{remark} Note that in the above construction for every vertex $v$, every $\pi$-separated edge cutset $C$ in $Lk(v)$ and chosen partition $P$ of $\pi_{0}(Lk(v)-C)$, and every lift $\tilde{v}$ of $v$, there exists a hypergraph passing through $(C,P)$ in $Lk_{X}(\tilde{v})$. \end{remark} \begin{figure}[H] \centering \includegraphics{hypergraph.png} \caption{Example of a subsection of an edge hypergraph.} \end{figure} Importantly, our construction ensures that for any hypergraph $\Lambda$, $Stab(\Lambda)$ acts properly discontinuously and cocompactly on $\Lambda$. \subsection{Hypergraphs are separating}\label{subsection: Hypergraphs are separating} We now analyse the structure of the hypergraphs, and show they are in fact separating.
\begin{lemma}\label{lem: edge hypergraphs are leafless convex} Let $X$ be a simply connected $CAT(0)$ polygonal complex such that $G\backslash X$ is (weakly) gluably $\pi$-separated, and let $\Lambda$ be a hypergraph in $X$. Then $\Lambda$ is a leafless convex tree. \end{lemma} \begin{proof} We prove this for edge hypergraphs: the argument is similar for vertex hypergraphs. For each $i$, as the cutsets are $\pi$-separated, the image of $\Sigma_{i}\looparrowright G\backslash X$ is locally geodesic: therefore $\Lambda$ is locally geodesic in $X$. As $X$ is $CAT(0)$, local geodesics are geodesic, and geodesics are unique, so that $\Lambda$ is a convex tree. Since $\vert C\vert\geq 2$ for any $v\in V(\Delta_{G\backslash X})$ and $C\in\mathcal{C}_{v}$, $\Lambda$ contains no primary vertices of degree $1$. Similarly, as there are no vertices of degree $1$ in the link of a primary vertex in $X$, there are no cut edges in the link of a secondary vertex, and so every cutset contains at least two edges. It follows that edge hypergraphs are leafless. \end{proof} \begin{definition} Let $\Lambda_{i}$ be a hypergraph in $X$ and $x,y\in X$ be distinct points in $X$. We say $\Lambda_{i}$ \emph{separates} $x$ and $y$ if $x$ and $y$ lie in distinct components of $X-\Lambda_{i}$. We write $\#_{\Lambda}(x,y)$ for the number of edge (or vertex) hypergraphs separating $x$ and $y$. \end{definition} We now consider separating points: we prove the following lemma. We call a path $\gamma$ \emph{transverse} to $\Lambda$ if $\vert \gamma\cap \Lambda\vert =1$. \begin{lemma}\label{lem: separation condition} Let $\Lambda$ be a hypergraph in $X$, and $\gamma=[p,q]$ be a geodesic transverse to $\Lambda$. If $\gamma\cap\Lambda=\{x\}$ and $p$ and $q$ lie in different elements of the partition of $Lk(x)-\Lambda$, then $p$ and $q$ lie in different components of $X-\Lambda$. \end{lemma} Before we prove this, we need to define some technology.
\begin{definition}[Hypergraph retraction] Let $X$ be a $CAT(0)$ space and $\Lambda$ a hypergraph in $X$. The projection map $$\pi_{\Lambda}:X\rightarrow\Lambda$$ maps every point in $X$ to its nearest point in $\Lambda$. Since $\Lambda$ is convex, this map is a deformation retraction. That is, we have a homotopy $\pi_{\Lambda}^{\delta}$ from the identity to $\pi_{\Lambda}.$ \end{definition} \begin{definition}[$\Lambda$-balanced paths] Let $\Lambda$ be a hypergraph in $X$ and $\epsilon>0$. For points $p,q$ lying in the same component of $X-\Lambda$, a \emph{$(\Lambda,\epsilon)$-balanced path} from $p$ to $q$ is a path $\sigma$ starting at $p$ and ending at $q$ such that: \begin{enumerate}[label=\roman*)] \item $\sigma \subseteq \mathcal{N}_{\epsilon}(\Lambda)-\Lambda,$ \item and for any $x\in \Lambda$, $\vert(\pi_{\Lambda})^{-1}(x)\cap \sigma\vert$ is even, and in particular finite. \end{enumerate} Given such a path, we define $$m_{\Lambda}(\sigma)=\frac{1}{2}\sum \limits_{e\in E(\Lambda\cap \pi_{\Lambda}(\sigma))}\max\limits_{y\in e} \vert(\pi_{\Lambda})^{-1}(y)\cap \sigma\vert. $$ \end{definition} By considering the retraction map, we see that such paths exist. \begin{lemma}\label{lem: existence of balanced paths} Let $\Lambda$ be a hypergraph in $X$, $\epsilon>0$, and let $p,q\in \mathcal{N}_{\epsilon}(\Lambda)$ be points lying in the same component of $X-\Lambda$. There exists a $(\Lambda,\epsilon)$-balanced path between them. \end{lemma} \begin{proof} Let $\gamma$ be a path from $p$ to $q$ not intersecting $\Lambda$. By taking $\delta$ close to $1$, we have that $\sigma_{\delta}:=\pi_{\Lambda}^{\delta}(\gamma)\subseteq \mathcal{N}_{\epsilon}(\Lambda)-\Lambda$. Since $\sigma_{\delta}$ maps to a loop in $\Lambda$, which is a tree, it immediately follows that by taking $\delta$ close to $1$, and after a small homotopy, for any $y\in \Lambda$, $\vert(\pi_{\Lambda})^{-1}(y)\cap \sigma_{\delta}\vert$ is even, and in particular finite. 
\end{proof} We can now prove Lemma \ref{lem: separation condition}. \begin{proof}[Proof of Lemma \ref{lem: separation condition}] Throughout we choose $\epsilon>0$ sufficiently small so that for any pair of vertices $v,w$ of $X$, $\mathcal{N}_{\epsilon}(v)\cap\mathcal{N}_{\epsilon}(w)=\emptyset$. Let $P$ be the partition of $Lk(x)$ through which $\Lambda$ passes. Let $P_{1}$ be the element of $P$ containing $p$ and $P_{2}$ the element of $P$ containing $q$. We may assume that $p,q\in \mathcal{N}_{\epsilon}(\Lambda)$: otherwise, choose $w_{i}$ lying in $P_{i}$ such that $w_{i}\in \mathcal{N}_{\epsilon}(\Lambda)$. Then $w_{1}$ and $p$ lie in the same component of $X-\Lambda$, and $w_{2}$ and $q$ lie in the same component of $X-\Lambda$. Suppose $p$ and $q$ lie in the same component of $X-\Lambda$. We first choose $p'\in P_{1}$, $q'\in P_{2}$ and $\sigma$ a $(\Lambda,\epsilon)$-balanced path between $p'$ and $q'$ so that the pair $(m_{\Lambda}(\sigma),l(\sigma))$ is minimal by lexicographic ordering amongst all such $p',q',\sigma$. We induct on $(m_{\Lambda}(\sigma),l(\sigma))$ by lexicographic ordering. If $m_{\Lambda}(\sigma)=1$, then $\sigma$ passes along exactly one edge $e$ of $\Lambda$: it follows that the partitions of $X-\Lambda$ at the endpoints of $e$ are not equatable along $e$, or that $P_{1}=P_{2}$, a contradiction. Otherwise $ m_{\Lambda}(\sigma)=m\geq 2$. We claim that we may assume the first and last edges of $\Lambda$ traversed by $\sigma$ are the same edge $e$: this is analogous to the classical trick of pushing to a leaf in a tree in graph theory arguments. Indeed, suppose this is not the case, and move along $\sigma$, starting at $q'$, until we first return to $\mathcal{N}_{\epsilon}(x)$. Let $s$ be the point we reach in $\mathcal{N}_{\epsilon}(x)$.
If $s$ is in the same component as $q'$ in $P$ then $m_{\Lambda}(\sigma\vert_{[p',s]})\leq m_{\Lambda}(\sigma)$, and $l(\sigma\vert_{[p',s]})< l(\sigma)$, so that $$(m_{\Lambda}(\sigma\vert_{[p',s]}),l(\sigma\vert_{[p',s]}))<(m_{\Lambda}(\sigma),l(\sigma)),$$ a contradiction as $p',q',\sigma$ were chosen so this pair was minimal. If $s$ is in the same component as $p'$ in $P$, and is not equal to $p'$, then $m_{\Lambda}(\sigma\vert_{[s,q']})\leq m_{\Lambda}(\sigma)$, and $l(\sigma\vert_{[s,q']})< l(\sigma)$, so that again $$(m_{\Lambda}(\sigma\vert_{[s,q']}),l(\sigma\vert_{[s,q']}))<(m_{\Lambda}(\sigma),l(\sigma)),$$ a contradiction. Therefore if $s \neq p'$, then $s$ lies in a different component to $q'$ in $P$: we have $(m_{\Lambda}(\sigma\vert_{[s,q']}),l(\sigma\vert_{[s,q']}))<(m_{\Lambda}(\sigma),l(\sigma))$, and hence by induction $s$ must lie in a separate component of $X-\Lambda$ to $q'$, a contradiction as $q'$ is connected to $s$ by a path not intersecting $\Lambda$. Therefore by induction we have that $s=p'$. Let $y$ be the endpoint of $e$ distinct from $x$, let $\alpha$ be the point obtained by pushing $p'$ along $\sigma$ to $\mathcal{N}_{\epsilon}(y)$, and similarly let $\beta$ be the point obtained by pushing $q'$ along $\sigma$ to $\mathcal{N}_{\epsilon}(y)$. Let $\sigma'$ be the subpath of $\sigma$ connecting $\alpha$ and $\beta$. If $\alpha$ and $\beta$ lie in the same element of the partition of $Lk(y)-\Lambda$, then the partitions are not equatable along $e$, a contradiction. Otherwise $(m_{\Lambda}(\sigma'),l(\sigma '))< (m_{\Lambda}(\sigma),l(\sigma))$, and so by induction $\alpha$ and $\beta$ lie in distinct components of $X-\Lambda$. As $p'$ is connected to $\alpha$ by a path not intersecting $\Lambda$, and $q'$ to $\beta$, we see that $p'$ and $q'$ lie in distinct components. Since $p$ is connected to $p'$ and $q$ is connected to $q'$ by a path not intersecting $\Lambda$, the result follows.
\end{proof} \subsection{Hypergraph stabilizers and wallspaces}\label{subsection: Hypergraph stabolizers and wallspaces} We now want to use the construction of a cube complex dual to a system of walls, as found in \cite{Hruska-Wise}. Their definition of a wallspace is more general than the one we require, and so we restrict to the case that $X$ is endowed with a metric. \begin{definition}[\emph{Walls}] Let $X$ be a metric space. A \emph{wall} is a pair $\{U,V\}$ such that $X=U\cup V$. The \emph{open halfspaces} associated to the wall are $U-(U\cap V)$ and $V-(U\cap V)$. We say a wall \emph{betwixts} a point $x$ if $x\in U\cap V$, and \emph{separates} the points $x,y$ if $x$ and $y$ lie in distinct open halfspaces. We write $\#(x,y)$ for the number of walls separating $x$ and $y$. \end{definition} \begin{definition}[\emph{Wallspace}] A \emph{wallspace} is a pair $(X,\mathcal{W})$, where $X$ is a connected metric space and $\mathcal{W}$ is a collection of walls in $X$ such that: \begin{enumerate}[label=\roman*)] \item for any $x\in X$, finitely many walls in $\mathcal{W}$ betwixt $x$, \item for any $x,y\in X$, $\#(x,y)<\infty$, \item and there are no duplicate walls that are genuine partitions. \end{enumerate} We say a group $G$ \emph{acts} on a wallspace $(X,\mathcal{W})$ if $G$ acts on $X$, and $G\cdot\mathcal{W}=\mathcal{W}$. \end{definition} \begin{definition}[\emph{$\Lambda$ walls}] Let $\Lambda$ be a vertex or edge hypergraph in $X$, and let $\{U^{i}_{\Lambda}\}_{i}$ be the components of $X-\Lambda$. For each $U_{\Lambda}^{i}$, let $V_{\Lambda}^{i}=X-\overline{U_{\Lambda}^{i}}$. The \emph{set of $\Lambda$ walls} is the set $$\mathcal{W}_{\Lambda}=\bigg\{\{\overline{U_{\Lambda}^{i}},\overline{V_{\Lambda}^{i}}\}\;:\;U_{\Lambda}^{i}\mbox{ a component of }X-\Lambda\bigg\}.$$ The \emph{hypergraph wallspace} is the set of walls $$\mathcal{W}=\cup_{\Lambda}\mathcal{W}_{\Lambda},$$ where we remove any duplicate walls.
\end{definition} We now show that the pair $(X,\mathcal{W})$ is a wallspace. There are several easy but technical steps to this. \begin{lemma}\label{lem: wall stabilizers are cocompact} Let $H_{\Lambda}=Stab_{G}(\Lambda)$, and for any $i$, let $H_{\Lambda,i}=Stab_{H_{\Lambda}}(U_{\Lambda}^{i}).$ Then $H_{\Lambda,i}$ acts cocompactly on $\partial U_{\Lambda}^{i}.$ \end{lemma} This lemma follows immediately from the proof of \cite[Theorem 2.9]{Hruska-Wise}. We include the argument here for completeness. For a set $A$ in a metric space $(X,d)$, we define the \emph{frontier of $A$} as the set $\partial_{f}A=\{x\in X\;\vert\; 0<d(x,A)\leq 1\}.$ \begin{proof} Note that $H_{\Lambda}$ acts cocompactly on $\Lambda$ and so on $\partial_{f}\Lambda$. Furthermore $H_{\Lambda}$ preserves the partition of $\partial_{f}\Lambda$ into the sets $U_{\Lambda}^{i}\cap \partial_{f}\Lambda$. Hence $H_{\Lambda,i}$ acts properly discontinuously and cocompactly on $\partial_{f}U_{\Lambda}^{i}$, and therefore on $\partial U_{\Lambda}^{i}$. \end{proof} \begin{lemma}\label{lem: finitely many wall orbits} There are finitely many $G$-orbits of walls in $\mathcal{W}$. \end{lemma} \begin{proof} There are finitely many $G$-orbits of hypergraphs $\Lambda$, and there are finitely many $H_{\Lambda}$-orbits of $U_{\Lambda}^{i}$. The result follows. \end{proof} \begin{lemma}\label{lem: W is a wallspace} The pair $(X,\mathcal{W})$ is a wallspace. \end{lemma} \begin{proof} First we prove that only finitely many walls betwixt each point. Since the set of walls is acted upon cofinitely by $G$, and each wall has a cocompact stabilizer, this follows immediately. In a similar manner we observe that $\#(x,y)<\infty$ for any $x$ and $y$. Finally, duplicate walls were removed in the construction of $\mathcal{W}$. \end{proof} Therefore we have constructed a wallspace for $X$. \begin{lemma}\label{lem: wallspace separation from hypergraph separation} Let $\mathcal{W}$ be constructed as above. Then $\#(x,y)\geq\#_{\Lambda}(x,y)$. 
\end{lemma} \begin{proof} Note that if a hypergraph $\Lambda$ separates $x$ and $y$, by taking $i$ such that $x\in U_{\Lambda}^{i}$, it follows that the wall $W_{\Lambda}^{i}=\{\overline{U_{\Lambda}^{i}},\overline{V_{\Lambda}^{i}}\}$ separates $x$ and $y$. The result follows.\end{proof} Next we discuss transverse walls. \begin{definition}[\emph{Transverse}] Two walls $W=\{U,V\}$ and $W'=\{U',V'\}$ are \emph{transverse} if each of the intersections $U\cap U', U\cap V', V\cap U', V\cap V'$ is nonempty. \end{definition} There is an easier formulation of this definition. \begin{lemma} Two walls $W_{\Lambda}^{i},W_{\Lambda'}^{j}$ are transverse if and only if $\partial U_{\Lambda}^{i}\cap \partial U_{\Lambda'}^{j}$ is non-empty. In particular the walls are transverse only if $\Lambda\cap \Lambda'$ is non-empty. \end{lemma} Using this we can now move on to cubulating groups acting on such complexes. \subsection{Cubulating groups acting on polygonal complexes}\label{subsection: Cubulating groups acting on polygonal complexes} We now understand the structure of the hypergraph stabilizers and the separation in the wallspaces $(X,\mathcal{W})$. For a metric polygonal complex $X$, let $D(X)$ be the maximal circumference of a polygonal face in $X$. We will be considering $G$ acting properly discontinuously and cocompactly on a polygonal complex $X$ so that $D(X)= D(G\backslash X)$ is finite. \begin{lemma}\label{lem: edge hypergraph separation} Let $X$ be a simply connected $CAT(0)$ polygonal complex with $G\backslash X$ gluably edge $\pi$-separated. Let $\gamma$ be a finite geodesic in $X$ of length at least $4D(X)$. There exists an edge hypergraph $\Lambda$ that separates the endpoints of any finite geodesic extension of $\gamma$. \end{lemma} \begin{proof} Since $\gamma$ is of length at least $4D(X)$, we can find a subgeodesic $\delta$ of $\gamma$ of length at least $2D(X)$ that starts at a point $v\in X^{(1)}$ and ends at $w\in X^{(1)}$. 
If $\delta$ passes through the interior of a $2$-cell $f$ then, as $\delta$ is of length at least $2D(X)$, it meets the boundary $\partial f$ at two points $u_{1},u_{2}$. The sides of the polygonal faces are geodesic, and geodesics are unique in $CAT(0)$ spaces, so that there must exist a vertex $z$ in $\partial f$ lying between $u_{1}$ and $u_{2}$. Choose a cutset $C$ in $Lk(z)$ containing $f$, and let $P$ be a chosen partition of $\pi_{0}(Lk(z)-C)$ so that the endpoints of $f$ lie in different elements of $P$ (this must exist by assumption). Let $\Lambda$ be any hypergraph passing through $(C,P)$ in $Lk(z)$: by Lemma \ref{lem: separation condition}, $\Lambda$ separates the endpoints of the subpath of $\delta$ between $u_{1}$ and $u_{2}$: as geodesics in $X$ are unique, it follows that $\Lambda$ intersects any geodesic extension of $\delta$ exactly once, and so separates the endpoints of any geodesic extension of $\delta$. Otherwise $\delta$ lies strictly in $X^{(1)}$: $\delta$ is of length at least $2D(X)$ and so it must pass through at least two primary vertices. Therefore $\delta$ contains an edge of the form $[u_{1},u_{2}]$ for some primary vertices $u_{1},u_{2}$: this edge must be geodesic. Furthermore, the geodesic $[u_{1},u_{2}]$ contains a secondary vertex $s$. Let $P$ be a partition of $\pi_{0}(Lk(s)-E(s))$ so that the endpoints of $[u_{1},u_{2}]$ lie in different elements of $P$ (this must exist by assumption). Let $\Lambda$ be the hypergraph passing through $(E(s),P)$ in $Lk(s)$: it follows by Lemma \ref{lem: separation condition} that $\Lambda$ separates the endpoints of any finite geodesic extension of $\delta$. \end{proof} Similarly, we have the following. \begin{lemma}\label{lem: vertex hypergraph separation} Let $X$ be a simply connected $CAT(0)$ polygonal complex with $G\backslash X$ gluably vertex $\pi$-separated. Let $\gamma$ be a finite geodesic in $X$ of length at least $4D(X)$. 
There exists a vertex hypergraph $\Lambda$ that separates the endpoints of any finite geodesic extension of $\gamma$. \end{lemma} \begin{proof} Again, since $\gamma$ is of length at least $4D(X)$, we can write $\gamma=\gamma_{1}\cdot\delta\cdot\gamma_{2}$, where each $\gamma_{i}$ is of length at least $D(X)\slash 2$, and $\delta$ is a path of length between $D(X)$ and $2D(X)$ that starts at a point $v\in X^{(1)}$ and ends at $w\in X^{(1)}$. First suppose that $\delta$ contains a nontrivial subpath, $\delta'$, which contains exactly one point of $X^{(1)}$, $u$, in its interior. Let $e$ be the edge of $X$ containing $u$. Since $\delta'$ is geodesic, we see that $i(\delta')$ and $t(\delta')$ lie in two distinct faces $F,F'$, both adjacent to $e$. In $Lk(i(e))$, $F,F'$ are two edges adjacent to $e$, and so, as $G\backslash X$ is gluably $\pi$-separated, there exists a $\pi$-separated cutset $C\ni e$ and partition $P$ of $\pi_{0}(Lk(i(e))-C)$ with $F,F'$ lying in distinct elements of $P$. Let $\Lambda$ be any hypergraph passing through $(C,P)$ in $Lk(i(e))$. By Lemma \ref{lem: separation condition} $\Lambda$ separates the endpoints of $\delta'$, and hence the endpoints of $\gamma$. Otherwise, $\delta$ is contained in $X^{(1)}$: as we have not subdivided $X$, $v$ is a primary vertex of $X$. Let $\delta_{1}$, $\delta_{2}$ be the two subpaths of $\gamma$ incident to $v$: as $\gamma$ is geodesic, $d_{Lk(v)}(\delta_{1},\delta_{2})\geq \pi$. Let $C$ be a vertex cutset such that $\delta_{1}$ and $\delta_{2}$ lie in different components of $Lk(v)-C$ and let $P$ be a chosen partition of $\pi_{0}(Lk(v)-C)$ separating $\delta_{1}$ and $\delta_{2}$ (this exists as $G\backslash X$ is gluably vertex $\pi$-separated). Let $\Lambda$ be any vertex hypergraph passing through $(C,P)$ in $Lk(v)$: by Lemma \ref{lem: separation condition} this separates $\delta_{1}$ and $\delta_{2}$, and so separates the endpoints of $\gamma$. 
\end{proof} We now turn our attention to finding codimension-$1$ subgroups. We first note the following lemma concerning $CAT(0)$ geometry. \begin{lemma}\label{lem: geodesic rays diverge} Let $Y$ be a $CAT(0)$ space, and let $\gamma_{1},\gamma_{2}$ be geodesic rays based at the same point. If there exists $r>0$ such that $\gamma_{1}\subseteq \mathcal{N}_{r}(\gamma_{2})$, then $\gamma_{1}=\gamma_{2}.$ \end{lemma} \begin{proof} Let $p$ be the common start point of $\gamma_{1},\gamma_{2}$ and let $\theta=\angle_{p}(\gamma_{1},\gamma_{2}).$ Since $\gamma_{1}\subseteq \mathcal{N}_{r}(\gamma_{2})$, for all $t>0$ there exists $t'(t)>0$ such that $d(\gamma_{1}(t),\gamma_{2}(t'))\leq r$. However, $d(\gamma_{1}(t),p)\rightarrow \infty $ as $t\rightarrow\infty$, so that $d(\gamma_{2}(t'(t)),p)\rightarrow \infty $ as $t\rightarrow\infty$. Consider the Euclidean comparison triangle with vertices $p$, $\gamma_{1}(t)$, and $\gamma_{2}(t'(t))$: the side opposite $p$ has length at most $r$, and so the comparison angle $\theta(t)$ at $p$ tends to $0$ as $t\rightarrow \infty.$ However, $\theta\leq \theta(t)$ for all $t$, and so $\theta=0$. It follows that $\gamma_{1}=\gamma_{2}$ in a closed neighbourhood of $p$, and so the set $\{t\;:\;\gamma_{1}(t)=\gamma_{2}(t)\}$ is clopen. The result follows. \end{proof} Using this we can prove that hypergraph stabilizers have subgroups that are codimension-$1$ in $G$. Let $G$ be a group with finite generating set $S$ and let $\Gamma$ be the Cayley graph of $G$ with respect to $S$. A subgroup $H$ of $G$ is \emph{codimension-$1$} if the graph $H\backslash \Gamma$ has at least two ends, i.e. for some compact set $K$, $H\backslash\Gamma - K$ contains at least two infinite components. \begin{lemma}\label{lem: hypergraph stabilizers are codimension-1} Let $G$ be a group acting properly discontinuously and cocompactly on a simply connected $CAT(0)$ polygonal complex $X$ such that $G\backslash X$ is (weakly) gluably $\pi$-separated. 
Let $\Lambda$ be a hypergraph in $X$. For any component $U_{\Lambda}$ of $X-\Lambda$, the group $$H_{U}=Stab_{Stab(\Lambda)}(U_{\Lambda})\cap Stab_{Stab(\Lambda)}(X-\overline{U_{\Lambda}})$$ is virtually free, and is quasi-isometrically embedded and codimension-$1$ in $G$. \end{lemma} This again follows by \cite[Theorem 2.9]{Hruska-Wise}: we provide a direct proof for completeness. \begin{proof} We prove this in the case that $\Lambda$ is an edge hypergraph: the case for vertex hypergraphs is identical. By Lemma \ref{lem: edge hypergraphs are leafless convex}, $\Lambda$ is a convex tree. Since $\partial U_{\Lambda}\subseteq \Lambda$, $\partial U_{\Lambda}$ is a convex tree. We see that $H_{U}$ is of index at most $2$ in $Stab_{Stab(\Lambda)}(U_{\Lambda})$. By Lemma \ref{lem: wall stabilizers are cocompact}, $H_{U}$ acts properly discontinuously and cocompactly on $\partial U_{\Lambda}$: it follows that $H_{U}$ is virtually free and quasi-isometrically embedded in $G$. Furthermore, by Lemma \ref{lem: separation condition} $X-\Lambda$ consists of at least two path-connected components, $\{U^{i}_{\Lambda}\}$. Let $V_{\Lambda}=X-\overline{U_{\Lambda}}$. Let $v$ be a vertex through which $\Lambda$ passes, with associated cutset $C$ in $Lk(v)$, and let $e_{1}$ and $e_{2}$ be vertices that lie in distinct components of $Lk(v)-C$ such that, in $X$, $e_{1}$ is an edge lying in $U_{\Lambda}\cup v$ and $e_{2}$ an edge lying in $V_{\Lambda}\cup v$. We construct two geodesics $\gamma_{1}$ and $\gamma_{2}$: let the first edge of $\gamma_{1}$ be $e_{1}$, and let $w$ be the endpoint of $e_{1}$ distinct from $v$. Since the links of vertices have no vertices of degree $1$ and have girth at least $2\pi$, it follows that there exists a vertex or edge, $a_{1}$, in $\Gamma=Lk(w)$ so that $d_{\Gamma}(e_{1}, m(a_{1}))\geq \pi$, and so we can extend $e_{1}$ to a geodesic $[v,m(a_{1})]$. 
We can continue in this fashion to construct a one-ended geodesic $\gamma_{1}$ that, by Lemma \ref{lem: separation condition}, lies in $U_{\Lambda}\cup v$ and (as geodesics are unique in $X$) intersects $\Lambda$ exactly once. Construct the geodesic $\gamma_{2}$ similarly, with first edge $e_{2}$ so that $\gamma_{2}$ intersects $\Lambda$ exactly once and lies in $V_{\Lambda}\cup v$. By Lemma \ref{lem: geodesic rays diverge}, it follows that for any $r>0$, $\gamma_{1},\gamma_{2}\not\subseteq \mathcal{N}_{r}(\partial U_{\Lambda})$. Therefore $H_{U}\backslash X-H_{U}\backslash \partial U_{\Lambda}$ consists of at least two infinite components: $H_{U}\backslash U_{\Lambda}$ and $H_{U}\backslash V_{\Lambda}$. As $G$ is quasi-isometric to $X$, and $H_{U}$ is quasi-isometric to $\partial U_{\Lambda}$, the result follows. \end{proof} We will use Hruska--Wise's \cite{Hruska-Wise} generalisation of Sageev's construction of a $CAT(0)$ cube complex dual to a collection of codimension-$1$ subgroups, as introduced in \cite{Sageev-95}. We will only describe the $1$-skeleton of this cube complex. \begin{definition}[\emph{Orientation}] Let $(X,\mathcal{W})$ be a wallspace and $W=\{U,V\}$ a wall. An \emph{orientation} of $W$ is a choice of ordering $c(W)=(\overleftarrow{c(W)},\overrightarrow{c(W)})$ of the pair $W$. An \emph{orientation} of $\mathcal{W}$ is an orientation of each wall $W$ in $\mathcal{W}$. \end{definition} A $0$-cube in the dual cube complex $\mathcal{C}(X,\mathcal{W})$ corresponds to a choice of orientation $c$ of $\mathcal{W}$ such that for any element $x\in X$, $x$ lies in $\overleftarrow{c(W)}$ for all but finitely many $W\in\mathcal{W}$, and $\overleftarrow{c(W)}\cap\overleftarrow{c(W')}\neq \emptyset$ for all $W,W'\in\mathcal{W}$. Two $0$-cubes are joined by a $1$-cube if there exists a unique wall to which they assign opposite orientations. 
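For a finite wallspace this $1$-skeleton can be computed directly. The following Python sketch is our own illustration, independent of the paper's constructions: it enumerates the $0$-cubes and $1$-cubes of the dual complex for a toy wallspace of four points in the plane separated by two transverse walls (for a finite wallspace the cofiniteness condition on orientations is automatic, so only the pairwise-intersection condition is checked).

```python
from itertools import product

# Toy wallspace: four points in the plane, cut by a vertical and a horizontal wall.
X = {(0, 0), (0, 1), (1, 0), (1, 1)}
W1 = (frozenset({(0, 0), (0, 1)}), frozenset({(1, 0), (1, 1)}))  # "x = 1/2" wall
W2 = (frozenset({(0, 0), (1, 0)}), frozenset({(0, 1), (1, 1)}))  # "y = 1/2" wall
walls = [W1, W2]

def zero_cubes(walls):
    """A 0-cube is a choice of one halfspace per wall whose selections
    pairwise intersect."""
    cubes = []
    for choice in product(*walls):
        if all(choice[i] & choice[j]
               for i in range(len(choice))
               for j in range(i + 1, len(choice))):
            cubes.append(choice)
    return cubes

def one_cubes(cubes):
    """Two 0-cubes span a 1-cube when they differ on exactly one wall."""
    edges = []
    for i, c in enumerate(cubes):
        for d in cubes[i + 1:]:
            if sum(a != b for a, b in zip(c, d)) == 1:
                edges.append((c, d))
    return edges

cubes = zero_cubes(walls)
edges = one_cubes(cubes)
print(len(cubes), len(edges))  # two transverse walls give the boundary of a square
```

Since the walls are transverse, all four orientations are consistent, and the resulting four $0$-cubes are joined in a $4$-cycle: the boundary of the single square spanned by the two crossing hyperplanes.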
Sageev analysed the properness and cocompactness of the group action on $\mathcal{C}(X,\mathcal{W})$ in \cite{Sageev-97}, and this was generalized by Hruska--Wise in \cite{Hruska-Wise}. We will use the following, as they are the easiest criteria to verify in our setting. \begin{theorem*}\cite[Theorem 1.4]{Hruska-Wise}\label{prop: proper actions} Suppose $G$ acts on a wallspace $(X,\mathcal{W})$, and the action on the underlying metric space $(X,d)$ is metrically proper. If there exist constants $\kappa,\epsilon>0$ such that for any $x,y\in X$, $$\#(x,y)\geq \kappa d(x,y)-\epsilon,$$ then $G$ acts metrically properly on $\mathcal{C}(X,\mathcal{W})$. \end{theorem*} \begin{theorem*}\cite[Lemma 7.2]{Hruska-Wise}\label{thm: Sageev cubulation cocompactness} Let $G$ act on a wallspace $(X,\mathcal{W})$. Suppose there are finitely many orbits of collections of pairwise transverse walls in $X$. Then $G$ acts cocompactly on $\mathcal{C}(X,\mathcal{W})$. \end{theorem*} This is sufficient to prove Theorem \ref{mainthm: cubulating groups}. \begin{proof}[Proof of Theorem \ref{mainthm: cubulating groups}] If $G\backslash X$ is gluably weakly $\pi$-separated, then by Lemma \ref{lem: hypergraph stabilizers are codimension-1}, $G$ contains a virtually free codimension-$1$ subgroup. Now suppose $G\backslash X$ is a gluably $\pi$-separated complex. $X$ is locally finite, and $G$ acts properly discontinuously on $X$, so acts metrically properly on $X$. Construct the hypergraph wallspace for $X$. Then by Lemmas \ref{lem: edge hypergraph separation} and \ref{lem: vertex hypergraph separation}, $$\#_{\Lambda}(p,q)\geq d_{X}(p,q)\slash 4D(X) -1.$$ By Lemma \ref{lem: wallspace separation from hypergraph separation}, this implies that $$\#(p,q)\geq d_{X}(p,q)\slash 4D(X)-1:$$ by \cite[Theorem 1.4]{Hruska-Wise} it follows that $G$ acts properly discontinuously on the cube complex $\mathcal{C}(X,\mathcal{W})$. Now suppose that $G$ is hyperbolic, so that $X$ is also hyperbolic. 
As hypergraphs are convex and hypergraph stabilizers are cocompact, by \cite{Gitik-Mitra-Rips-Sageev_1998widths} (cf.\ \cite{Sageev-97}) there is an upper bound on the number of pairwise intersecting hypergraphs. For any point $x\in \Lambda$ there is a finite upper bound on the number of components of $X-\Lambda$ whose closure contains $x$, and so by Lemma \ref{lem: finitely many wall orbits}, we see there is an upper bound on the size of a collection of pairwise transverse walls. As $G$ acts cofinitely on the set of walls, it follows that the hypotheses of \cite[Lemma 7.2]{Hruska-Wise} are met, and so $G$ acts cocompactly on the $CAT(0)$ cube complex $\mathcal{C}(X,\mathcal{W})$: by \cite[Theorem 1.1]{Agol13}, we conclude that $G$ is virtually special. \end{proof} \section{Finding separated cutsets by computer search}\label{section: finding cutsets} In this short section, we discuss how to find separated cutsets by computer search. Let $\Gamma$ be a finite metric graph, and let $I(\Gamma)=V(\Gamma)$ or $E(\Gamma)$. Define $d_{I}(x,y)=d_{\Gamma}(x,y)$ if $x,y\in V(\Gamma)$ and $d_{I}(x,y)=d_{\Gamma}(m(x),m(y))$ if $x,y\in E(\Gamma)$. Let $\sigma>0$. The $\sigma$-separated cutsets of $\Gamma$ that lie in $I(\Gamma)$ can be found in the following way: we can define a dual graph $\bar{\Gamma}$ by $V(\bar{\Gamma})=I(\Gamma),$ and $$E(\bar{\Gamma})=\{(x,y)\in I(\Gamma)^{2}\;:\;x\neq y\;\mbox{and}\;d_{I}(x,y)< \sigma \}.$$ Finding $\sigma$-separated cutsets in $\Gamma$ then corresponds to finding independent vertex sets in $\bar{\Gamma}$ and checking if they are cutsets in $\Gamma$. Importantly, finding independent vertex sets can be done relatively efficiently. See \href{https://github.com/CJAshcroft/Graph-Cut-Set-Finder}{\color{blue} https://github.com/CJAshcroft/Graph-Cut-Set-Finder} for the implementation of the above algorithm, and for the code used to find cutsets in the following sections. 
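As a minimal illustration of this search (a sketch of ours, not the code from the linked repository), the following Python fragment finds the $\sigma$-separated vertex cutsets of a small graph, taking $I(\Gamma)=V(\Gamma)$ and the combinatorial metric: a candidate set is independent in the dual graph exactly when its elements have pairwise distance at least $\sigma$, and it is then kept only if its removal disconnects $\Gamma$.

```python
from itertools import combinations
from collections import deque

def distances(adj, src):
    """BFS distances from src in an unweighted graph given as an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_connected(adj, vertices):
    """Is the subgraph induced on `vertices` connected?"""
    vertices = set(vertices)
    if not vertices:
        return True
    sub = {u: [v for v in adj[u] if v in vertices] for u in vertices}
    start = next(iter(vertices))
    return len(distances(sub, start)) == len(vertices)

def separated_vertex_cutsets(adj, sigma):
    """All vertex sets of pairwise distance >= sigma whose removal disconnects
    the graph -- i.e. the independent sets of the dual graph that are cutsets."""
    dist = {u: distances(adj, u) for u in adj}
    cutsets = []
    for size in range(2, len(adj)):
        for cand in combinations(adj, size):
            if all(dist[u][v] >= sigma for u, v in combinations(cand, 2)):
                rest = set(adj) - set(cand)
                if rest and not is_connected(adj, rest):
                    cutsets.append(frozenset(cand))
    return cutsets

# Example: the 6-cycle with unit edges and sigma = 2.
n = 6
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(len(separated_vertex_cutsets(cycle, 2)))  # 9 non-adjacent pairs and 2 triples
```

This brute-force version enumerates all subsets and so only suits very small graphs; the repository linked above uses the more efficient independent-set enumeration described in the text.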
\section{Triangular buildings}\label{section: generalized quadrangles} In the following section, we prove Corollary \ref{mainthm: cubulating the generalized quadrangle}. In \cite{Kangaslampi-Vdovina} and \cite{Carbone-Kangaslampli-Vdovina_2012} all groups acting simply transitively on triangular buildings whose links are the minimal generalized quadrangle (see Figure \ref{fig: generalized quadrangle}) were classified. We apply Theorem \ref{mainthm: cubulating groups} to these groups, proving they are virtually special by considering the separation of the minimal generalized quadrangle. \begin{figure}[ht] \centering \includegraphics[scale=0.75]{genquad.png} \caption{The minimal generalized quadrangle.}\label{fig: generalized quadrangle} \end{figure} \begin{table}[H] \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 1&2& 14& 30\\ 2&1& 3& 19\\ 3&2& 4& 24\\ 4&3& 5& 11\\ 5&4& 6& 28\\ 6&5& 7& 15\\ 7&6& 8& 20\\ 8&7& 9& 25\\ 9&8& 10& 30\\ 10&9& 11& 17 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 11&4&10& 12\\ 12&11& 13& 21\\ 13&12& 14& 26\\ 14&1& 13& 15\\ 15&6& 14& 16\\ 16&15& 17& 23\\ 17&10& 16& 18\\ 18&17& 19& 27\\ 19&2& 18& 20\\ 20&7& 19& 21 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 21&12& 20& 22\\ 22&21& 23& 29\\ 23&16& 22& 24\\ 24&3& 23& 25\\ 25&8& 24& 26\\ 26&13&25&27\\ 27&18&26&28\\ 28&5&27&29\\ 29&22&28&30\\ 30&1&9&29 \end{tabular} \caption{Edge incidences for the minimal generalized quadrangle} \end{table} \begin{lemma}\label{lem: separation of the generalized quadrangle} Let $\Gamma$ be the minimal generalized quadrangle equipped with the combinatorial metric. Then $\Gamma$ is weighted (strongly) edge $3$-separated. 
\end{lemma} \begin{proof} By a computer search, we find the following exhaustive list of $3$-separated edge cutsets in $\Gamma$: \begin{enumerate}[label=\hspace*{-10pt}] { \item\hspace*{-10pt}$C_{1}=\{\fe{1}{2},\fe{4}{5},\fe{7}{20},\fe{9}{10},\fe{12}{13},\fe{15}{16},\fe{18}{27},\\\hspace*{25pt}\fe{22}{29},\fe{24}{25}\},$ \item\hspace*{-10pt}$C_{2}=\{\fe{1}{2},\fe{4}{11},\fe{6}{15},\fe{8}{9},\fe{13}{26},\fe{17}{18},\fe{20}{21},\\\hspace*{25pt}\fe{23}{24},\fe{28}{29}\},$ \item\hspace*{-10pt}$C_{3}=\{\fe{1}{14},\fe{3}{4},\fe{6}{7},\fe{9}{10},\fe{12}{21},\fe{16}{23},\fe{18}{19},\\\hspace*{25pt}\fe{25}{26},\fe{28}{29}\},$ \item\hspace*{-10pt}$C_{4}=\{\fe{1}{14},\fe{3}{24},\fe{6}{7},\fe{9}{10},\fe{12}{21},\fe{16}{23},\fe{18}{19},\\\hspace*{25pt}\fe{25}{26},\fe{28}{29}\},$ \item\hspace*{-10pt}$C_{5}=\{\fe{1}{30},\fe{3}{4},\fe{6}{15},\fe{8}{25},\fe{10}{17},\fe{12}{13},\fe{19}{20},\\\hspace*{25pt}\fe{22}{23},\fe{27}{28}\},$ \item\hspace*{-10pt}$C_{6}=\{\fe{1}{30},\fe{3}{24},\fe{5}{28},\fe{7}{8},\fe{10}{11},\fe{13}{26},\fe{15}{16},\\\hspace*{25pt}\fe{18}{19},\fe{21}{22}\},$ \item\hspace*{-10pt}$C_{7}=\{\fe{2}{3},\fe{5}{6},\fe{8}{25},\fe{10}{11},\fe{13}{14},\fe{16}{23},\fe{18}{27},\\\hspace*{25pt}\fe{20}{21},\fe{29}{30}\},$ \item\hspace*{-10pt}$C_{8}=\{\fe{2}{3},\fe{5}{28},\fe{7}{20},\fe{9}{30},\fe{11}{12},\fe{14}{15},\fe{17}{18},\\\hspace*{25pt}\fe{22}{23},\fe{25}{26}\},$ \item\hspace*{-10pt}$C_{9}=\{\fe{2}{19},\fe{4}{5},\fe{7}{8},\fe{10}{17},\fe{12}{21},\fe{14}{15},\fe{23}{24},\\\hspace*{25pt}\fe{26}{27},\fe{29}{30}\},$ \item\hspace*{-10pt}$C_{10}=\{\fe{2}{19},\fe{4}{11},\fe{6}{7},\fe{9}{30},\fe{13}{14},\fe{16}{17},\fe{21}{22},\\\hspace*{25pt}\fe{24}{25},\fe{27}{28}\}.$ } \end{enumerate} $\Gamma$ is connected and contains no vertices of degree $1$. The cutsets are $3$-separated, and $\cup_{i}C_{i}=E(\Gamma)$. In fact, each cutset is minimal, and so is certainly proper. 
Furthermore, every edge appears in exactly two cutsets: assigning each cutset weight $1$ we see that the weight equations are satisfied, and so $\Gamma$ is weighted edge $3$-separated. In fact, by a computer search we can see that $\Gamma$ satisfies the conditions of Lemma \ref{lem: strong edge sep condition}, and so is weighted strongly edge $3$-separated. \end{proof} Note that $C_{i}\cap C_{j}$ is nonempty for all $i\neq j$, so that we are not able to use \cite[Example $4.3$]{Hruska-Wise}. However, we can apply Theorem \ref{mainthm: cubulating groups} to prove that groups acting properly discontinuously and cocompactly on triangular buildings with the minimal generalized quadrangle as links are virtually special. \begin{proof}[Proof of Corollary \ref{mainthm: cubulating the generalized quadrangle}] Let $X$ be a simply connected polygonal complex such that every face has at least $3$ sides, and the link of every vertex is isomorphic to the minimal generalized quadrangle, $\Gamma$, and let $G$ be a group acting properly discontinuously and cocompactly on $X$. Since $\Gamma$ has girth $8$, $X$ can be endowed with a $CAT(-1)$ metric, so that $G$ is hyperbolic. Endow $X$ with the metric that makes each $k$-gonal face a regular unit Euclidean $k$-gon, so that $X$ is regular and the length of each edge in the link of a vertex is at least $\pi\slash 3$. As $\Gamma$ is weighted edge $3$-separated with the combinatorial metric, it follows that the links of $X$ are weighted edge $\pi$-separated. Hence by Lemma \ref{lem: solving gluing equations for minimal edge cutsets}, $G\backslash X$ is gluably $\pi$-separated. Furthermore, by Gromov's link condition, $X$ is $CAT(0)$. 
Therefore, $G$ is hyperbolic and acts properly discontinuously and cocompactly on a simply connected $CAT(0)$ triangular complex $X$ with $G\backslash X$ gluably $\pi$-separated, so acts properly discontinuously and cocompactly on a $CAT(0)$ cube complex by Theorem \ref{mainthm: cubulating groups}, and hence is virtually special by \cite[Theorem $1.1$]{Agol13}. \end{proof} \section{Application to generalized triangular groups}\label{sec: gen triangles} In this section we prove Theorem \ref{mainthm: cubulating generalized triangle groups} in Section \ref{subsection: Codimension-$1$ subgroups of generalized ordinary triangle groups}, Corollary \ref{coralph: small girth generalized triangle groups} in Section \ref{subsection: small girth generalized triangle groups}, and Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups} in Section \ref{subsection: cubulating dehn fillings of generalized triangle groups}. \subsection{Cubulating generalized ordinary triangle groups}\label{subsection: Codimension-$1$ subgroups of generalized ordinary triangle groups} We now consider generalized ordinary triangle groups, constructed in \cite{Lubotzky-Manning-Wilton} to answer a question of Agol and Wise: note that the case of $k=2$ corresponds to classical ordinary triangle groups. The first complex of groups we define uses the notation from \cite{Caprace-Conder-Kaluba-Witzel_triangle} to more easily align with their work. See e.g. \cite{Bridson-Haefliger} for further discussion of complexes of groups. \begin{definition}[Generalized triangle groups]\label{def: generalized triangle} Consider the following complex of groups over $\mathcal{T}$, the poset of all subsets of $\{1,2,3\}$. Let $X_{1},X_{2},X_{3}$ be the vertex groups, and $A_{1},A_{2},A_{3}$ the edge groups, with the face group trivial, and homomorphisms $\phi_{i,i+1}:A_{i}\rightarrow X_{i+1},\;\phi_{i,i-1}:A_{i}\rightarrow X_{i-1}$ for $i=1,2,3$ taken $\mod 3$. 
Now, consider the coset graph $$\Gamma_{X_{i}}(\phi_{i-1,i}(A_{i-1}),\phi_{i+1,i}(A_{i+1})).$$ Fix $k\geq 2$ and let each $A_{i}=\mathbb{Z}\slash k$. For graphs $\Gamma_{i}$, let $$\{D_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})\}_{j}$$ be the family of complexes of groups obtained by choosing $X_{i}$ and $\phi_{i,i\pm 1}$ such that for each $i$ $$\Gamma_{X_{i}}(\phi_{i-1,i}(A_{i-1}),\phi_{i+1,i}(A_{i+1}))\cong \Gamma_{i}.$$ A group $$G^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})=\pi_{1}(D^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))$$ is called a \emph{($k$-fold) generalized triangle group}. \end{definition} Bridson and Haefliger considered the developability of a complex of groups in \cite[III.$\mathcal{C}$]{Bridson-Haefliger}. The following is well known: see e.g. \cite[Theorem 3.1]{Caprace-Conder-Kaluba-Witzel_triangle}. \begin{prop}\label{lem: developability of gen triangle} Suppose that $girth(\Gamma_{i})\geq 6$ for each $i$. Then $G_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ acts properly and cocompactly on a triangular complex $X^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ such that the link of each vertex is isomorphic to some $\Gamma\in\{\Gamma_{i}\}_{i}$. If $girth(\Gamma_{1})>6$, then $G_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ is hyperbolic. \end{prop} \begin{definition}[Generalized ordinary triangle groups]\label{def: generalized triangle 2} Consider the following complex of groups. Fix $k\geq 2$, and identify the boundaries of $k$ $2$-simplices to construct a simplicial complex $\mathcal{K}$ with three vertices $v_{1},v_{2},v_{3}$, three edges $e_{1}, e_{2}, e_{3}$, and $k$ $2$-simplices. Then $Lk(v_{i})\simeq C_{k,2}$, the cage graph on $k$ edges, i.e. the smallest $k$-regular graph of girth $2$. Let $P_{i}=\pi_{1}(Lk(v_{i}))$, and let $G_{0,k}$ be the free group on $2k-2$ letters. Note that we can view $G_{0,k}$ as the fundamental group of a complex of groups with underlying simplicial complex $\mathcal{K}$ and vertex groups $P_{i}$. 
Now, let $\Gamma_{i}\looparrowright Lk(v_{i})$ be finite-sheeted normal covering graphs, with associated normal subgroups $Q_{i}\unlhd P_{i}$. Let $D$ be a complex of groups with underlying complex $\mathcal{K}$ and (finite) vertex groups $V_{i}=P_{i}\slash Q_{i}$. Since there are choices for the above complex, we will let $D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$, $j=1,\hdots ,$ be the finite exhaustive list of possible complexes of groups achieved by the above construction. Form the \emph{($k$-fold) generalized ordinary triangular group} $$ G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})=\pi_{1}(D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))=G_{0,k}\slash \langle\langle Q_{1}\cup Q_{2}\cup Q_{3}\rangle\rangle .$$ \end{definition} Note that in this definition the graphs $\Gamma_{i}$ are covers of $C_{k,2}$ so that they are connected, contain no cut edges, and have girth at least $2$. Theorem \ref{mainthm: cubulating groups}, along with Proposition \ref{lem: developability of gen triangle}, and \cite[Proposition 3.2]{Lubotzky-Manning-Wilton} below, allow us to cubulate $G^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ and $G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ when given enough information about each of $\Gamma_{1},\Gamma_{2},\Gamma_{3}$. The purpose of this subsection is to provide a way to prove such a group acts properly discontinuously on a $CAT(0)$ cube complex by considering $\Gamma_{1}$ alone. Again, see Section \ref{subsec: Link conditions} for the relevant definitions. \begin{theorem}\label{mainthm: cubulating generalized triangle groups} Let $\Gamma_{i}\looparrowright C_{k,2}$ be finite-sheeted covers such that $girth(\Gamma_{i})\geq 6$ for each $i$, and let $G=G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ or $G=G^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$. If $\Gamma_{1}$ is weighted strongly edge $3$-separated, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex. If, in addition, $G$ is hyperbolic, then this action is cocompact. 
\end{theorem} We use the following proposition, as stated in \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, which follows by an application of \cite[Theorem III.$\mathcal{C}$.4.17]{Bridson-Haefliger}. \begin{proposition*} \cite[Proposition 3.2]{Lubotzky-Manning-Wilton} If $girth(\Gamma_{i})\geq 6$ for each $i$, then $G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ acts properly discontinuously and cocompactly on a simply connected simplicial complex $X^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ with links isomorphic to $\Gamma$, where $\Gamma\in\{\Gamma_{1},\Gamma_{2},\Gamma_{3}\}.$ Furthermore, if $girth(\Gamma_{i})\geq 6$ for each $i$ and $girth(\Gamma_{1})>6$, then $G_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ is hyperbolic. \end{proposition*} Now, fix $j$, let $G=G_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ or $G=G_{0,k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$, and let $X=X^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ be as above. Note that the antipodal graph $\Delta_{G\backslash X}$ is the disjoint union of three components $\Delta_{1},\Delta_{2},\Delta_{3}$, such that for any vertex $v\in \Delta_{i}$ either $v$ is secondary, or $Lk_{G\backslash X}(v)\cong \Gamma_{i}$. Suppose $\Gamma_{1}$ is a weighted strongly edge $3$-separated graph, and endow $X$ with the metric that turns each triangle into a unit equilateral Euclidean triangle: $X$ is $CAT(0)$ with this metric. Then for each $v\in V(\Delta_{1})$, $Lk_{G\backslash X}(v)$ is a strongly edge $\pi$-separated graph. Since cutsets are proper, we can assign to every cutset the canonical partition: as discussed in Section \ref{subsection: Examples of solutions of the gluing equations} this is sufficient for cubulation, and therefore we omit the reference to partitions for the remainder of this section. 
As in Section \ref{subsection: Constructing hypergraphs in polygonal complexes} construct the graphs $\underline{\Lambda}_{1},\hdots ,\underline{\Lambda}_{m}$ as images of $$\Sigma_{i}\looparrowright\Delta_{1}\looparrowright G\backslash X.$$ In particular, if a vertex $v$ has $Lk_{X}(v)=\Gamma_{1}$, we have a hypergraph passing through every $\pi$-separated edge cutset in $Lk_{X}(v)=\Gamma_{1}$. As in Section \ref{subsection: Hypergraph stabolizers and wallspaces}, we can again build the system of hypergraph walls. We now analyse the separation of this complex by hypergraphs. \begin{lemma}\label{lem: strongly separated triangle} Suppose that $\Gamma_{1}$ is weighted strongly edge $3$-separated, and let $\gamma$ be a geodesic in $X$ of length at least $100$. There exists a hypergraph $\Lambda$ such that $\Lambda$ separates the endpoints of any finite geodesic extension of $\gamma$. \end{lemma} \begin{proof} Since $\gamma$ has length at least $100$, we may write $\gamma=\beta \cdot\gamma_{1}\cdot\gamma_{2}\cdot\gamma_{3}\cdot \delta$ such that $1\leq l(\gamma_{i})\leq \sqrt{3\slash 2}$, $l(\beta),l(\delta)\geq 40$ and the endpoints of each $\gamma_{i}$ lie in $X^{(1)}$. We can see that either: \begin{enumerate}[label = \bf{case} $\boldsymbol{\alph*})$] \item $\gamma_{2}$ contains an edge of $X^{(1)}$ of the form $[u,v]$, \item or $\gamma_{2}$ contains a subpath that intersects $X^{(1)}$ at exactly two points $x,y$ in $\partial T$ for some $2$-cell $T$. \end{enumerate} Now consider case $a)$. There are two subcases to consider. \begin{enumerate}[label = \bf{Case} $\boldsymbol{a.\roman*}$)] \item $u$ and $v$ have links isomorphic to $\Gamma_{2}$ and $\Gamma_{3}$ respectively (or vice versa), \item or $v$ has link isomorphic to $\Gamma_{1}.$ \end{enumerate} In case $a.i)$, $\gamma_{2}$ contains a secondary vertex $x$ that is opposite to some $w$ with $Lk_{X}(w)\cong\Gamma_{1}$. 
By Lemma \ref{lem: separation condition}, the hypergraph passing through $x$ and $w$ therefore separates the endpoints of $\gamma_{2}$, and so the endpoints of any geodesic extension of $\gamma.$ In case $a.ii)$, consider the path $\gamma_{3}.$ If $\gamma_{3}$ is not an edge then $\gamma_{3}$ satisfies the hypothesis of case $b)$ using $\gamma_{3}$ in place of $\gamma_{2}$. Otherwise we may assume that $\gamma_{2}\cdot \gamma_{3}=[u,v]\cdot[v,w]$. Now, $d_{Lk(v)}([u,v],[v,w])\geq \pi$ as $\gamma_{2}\cdot \gamma_{3}$ is geodesic. Let $C$ be a cutset separating $[u,v]$ and $[v,w]$ in $\Gamma_{1}$ (this exists as $\Gamma_{1}$ is strongly $3$-separated); by Lemma \ref{lem: separation condition} the hypergraph passing through $C$ in $Lk(v)$ separates the endpoints of $\gamma.$ \begin{figure}[H] \begin{minipage}[b]{0.4\textwidth} \includegraphics[scale=0.9]{gentriangle1.png}\centering \caption*{Case a.i)} \end{minipage} \begin{minipage}[b]{0.4\textwidth} \includegraphics[scale=0.9]{gentriangle2.png} \centering \caption*{\hspace*{70pt}Case a.ii)} \end{minipage} \label{fig:gentriangle1} \end{figure} For case $b)$ there are three subcases: \begin{enumerate}[label= \bf{case} $\boldsymbol{b.\roman*)}$] \item the two paths in $\partial T$ from $x$ to $y$ each contain one of the vertices $u,v$ such that $u$ is primary with $Lk(u)\cong \Gamma_{1}$, and $v$ is secondary and antipodal to $u$ in $\partial T$, \item one of the two paths in $\partial T$ from $x$ to $y$ contains both of the vertices $u,v$ where $u$ is primary with $Lk(u)\cong\Gamma_{1}$, and $v$ is secondary and opposite to $u$, \item or $\gamma_{2}=[u,v]$ where $u$ is secondary and opposite to $v$, with $Lk(v)\cong\Gamma_{1}$. \end{enumerate} In case b.i), by Lemma \ref{lem: separation condition} the hypergraph passing through $u$ and $v$ separates the endpoints of $\gamma_{2}$ and so the endpoints of $\gamma$. Consider case $b.ii)$. 
Let $T_{x}$, $T_{y}$ be the two $2$-cells adjacent to $T$ containing the vertices $x$ and $y$ respectively, with $\gamma$ passing through both $T_{x}$, $T_{y}$. Note that $x$ and $y$ lie on different edges of $\partial T$. Suppose that $\gamma_{1}$ passes through $x$ and $\gamma_{3}$ passes through $y$: a simple Euclidean angle argument shows that either $\gamma_{1}$ or $\gamma_{3}$ satisfies case $b.i)$. In case $b.iii)$ extend $\gamma_{2}$ through $v$ until we meet $X^{(1)}$ at a third point $w$: without loss of generality the extended path can be written $[u,v]\cdot [v,w]$. Now, as $\gamma$ is geodesic, we have $d_{Lk(v)}([u,v],[v,w])\geq 3$. Let $C$ be the cutset in $\Gamma_{1}$ such that $[u,v]$ and $[v,w]$ are separated by $C$ (this exists as $\Gamma_{1}$ is strongly $3$-separated), and let $\Lambda$ be the hypergraph passing through $C$ in $Lk(v)$. By Lemma \ref{lem: separation condition}, $\Lambda$ separates the endpoints of $\gamma$. \begin{figure}[H] \centering \begin{minipage}[b]{0.25\textwidth} \includegraphics[scale=0.9]{gentriangle3.png} \centering \caption*{Case b.i)} \end{minipage}\hspace*{30pt} \begin{minipage}[b]{0.49\textwidth} \includegraphics[scale=0.8]{gentriangle4.png} \centering \caption*{\hspace*{70pt}Case b.ii)} \end{minipage} \label{fig:gentriangle2} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.9]{gentriangle5.png} \caption*{Case b.iii)} \end{figure} \end{proof} We can now prove Theorem \ref{mainthm: cubulating generalized triangle groups}. \begin{proof}[Proof of Theorem \ref{mainthm: cubulating generalized triangle groups}] By \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, the group $G$ acts properly discontinuously and cocompactly on a simply connected simplicial complex $X$. Endow this complex with the Euclidean metric: by Gromov's link condition $X$ is $CAT(0)$ and has three types of vertices $\{v_{i}\}$ where $Lk(v_{i})\cong\Gamma_{i}$. 
If $\Gamma_{1}$ is strongly $3$-separated, then by Lemma \ref{lem: strongly separated triangle}, we have $$\#(p,q)\geq d_{X}(p,q)\slash 100 -1.$$ The results then follow by \cite[Theorem 5.2]{Hruska-Wise} and \cite[Lemma 7.2]{Hruska-Wise} similarly to the proof of Theorem \ref{mainthm: cubulating groups}, using Lemma \ref{lem: strongly separated triangle} in place of Lemma \ref{lem: edge hypergraph separation}. \end{proof} \subsection{Small girth generalized triangle groups}\label{subsection: small girth generalized triangle groups} To prove Corollary \ref{coralph: small girth generalized triangle groups}, we now analyse the separation of various small girth graphs considered in \cite{Caprace-Conder-Kaluba-Witzel_triangle}. These graphs arise in the work of \cite{Conder-Morton_1995classification,Conder-Dobcsanyi_2002trivalent,Conder-Malnic-Marusic-Potocnik_2006census} and are regular bipartite graphs with girth $6$ or $8$, diameter $3$ or $4$, and an edge regular subgroup of the automorphism group. Furthermore, all but the Gray graph $G54$ have a vertex transitive automorphism group. In particular, we have the following. \begin{lemma*}\cite{Caprace-Conder-Kaluba-Witzel_triangle} Let $\Gamma$ be one of $\{F24A,F26A,F40A,F48A\}$. Then $Aut(\Gamma)$ acts vertex transitively. Let $\Gamma$ be one of $\{F24A,F26A,F40A,F48A, G54\}$: there exists a subgroup $H(\Gamma)\leq Aut(\Gamma)$ that acts freely and transitively on $E(\Gamma)$ and preserves the bipartition of $\Gamma.$ \end{lemma*} We make the following definitions. \begin{definition}[Cubic graphs] Let $\Gamma$ be a finite graph. It is \emph{cubic} if it is connected, bipartite, and trivalent. \end{definition} \begin{definition}[$\dagger$-separated graphs] Let $\Gamma$ be a graph. 
We say that $\Gamma$ is \emph{$\dagger$-separated} if: \begin{enumerate}[label=$\roman*)$] \item $\Gamma$ is cubic, \item $girth(\Gamma)= 6$ or $8$, \item and $\Gamma$ is disjointly weighted vertex $3$-separated by proper cutsets (so that $\Gamma -C$ consists of exactly three components for each $C$). \end{enumerate} \end{definition} \begin{definition}[Good cubic graphs] A cubic graph is \emph{good} if $girth(\Gamma)=6$ or $8$, $diam(\Gamma)\leq 4$, $Aut(\Gamma)$ acts vertex transitively, and there exists a group $H(\Gamma)\leq Aut(\Gamma)$ that acts freely and transitively on $E(\Gamma)$ and preserves the bipartition of $\Gamma.$ \end{definition} In the above definition, for any vertex $v$ of $\Gamma$, $H(\Gamma)_{v}$ is of order three and so cyclically permutes the neighbours of $v$. Fix a vertex $v_{0}\in V(\Gamma).$ For each pair of vertices $v\neq w$, choose an element $\gamma_{v,w}\in Aut(\Gamma)$ with $\gamma_{v,w} v=w$, such that \begin{enumerate}[label=$\roman*)$] \item $\gamma_{v,w}=\gamma_{v,v_{0}}\gamma_{v_{0},w},$ \item $\gamma_{v,w}=\gamma_{w,v}^{-1},$ \item and if $v,w\in V_{1}$ or $v,w\in V_{2}$, then $\gamma_{v,w}\in H(\Gamma)$. \end{enumerate} For each $v\in V(\Gamma)$ we label the neighbours of $v$ as $w_{1}(v),w_{2}(v),w_{3}(v)$, so that $\gamma_{v_{0},v}w_{i}(v_{0})=w_{i}(v).$ We also assign to $H(\Gamma)_{v}$ a generator $h_{v}$ such that $h_{v}w_{1}(v)=w_{2}(v)$, $h_{v}w_{2}(v)=w_{3}(v)$, and so on, i.e. $h_{v}=\gamma_{v_{0},v}h_{v_{0}}\gamma_{v,v_{0}}$. \begin{definition}[$\Large{*}$-separated cutsets] Let $C$ be a vertex cutset in a graph $\Gamma$. We say $C$ is a \emph{$\Large{*}$-separated cutset} if $C$ is $3$-separated, if for any vertex $w\in C$ there are two vertices $v,v'$ adjacent to $w$ such that $v$ and $v'$ lie in separate components of $\Gamma-C$, and if $\Gamma - C$ contains exactly two components. 
\end{definition} \begin{definition} Let $\Gamma$ be a good cubic graph, and let $\mathcal{C}$ be a collection of $\Large{*}$-separated cutsets. For $v\in V(\Gamma)$, we define $$\large{*}(v,i,j)$$ to be the set of all $\Large{*}$-separated cutsets $C\ni v$ such that $w_{i}(v)$ and $w_{j}(v)$ lie in the same connected component of $\Gamma - C$. We further define $$\mathcal{C}(v,i,j):=\mathcal{C}\cap \Large{*}(v,i,j).$$ \end{definition} \begin{definition}[$\Large{*}$-separated graph] Let $\Gamma$ be a graph. We say that $\Gamma$ is \emph{$\Large{*}$-separated} if: \begin{enumerate}[label=$\roman*)$] \item $\Gamma$ is a cubic graph, \item $\Gamma$ is weighted vertex $3$-separated by a set $\mathcal{C}$ of $\Large{*}$-separated cutsets, \item for any vertex $v$ and any $i\neq j$, $\mathcal{C}(v,i,j)$ is non-empty, \item and there exists an integer $M$ and positive integers $n(C)$ for each $C\in \mathcal{C}$ such that for any vertex $v$ and any $i\neq j $, $$\sum\limits_{C\in\mathcal{C}(v,i,j)}n(C)=\frac{M}{3}.$$ \end{enumerate} \end{definition} \begin{definition} Let $v$ be any vertex in $\Gamma$. We define $D(v)=\{w\in \Gamma\;:\;d(v,w)\geq 5\}$. \end{definition} For ease of reference, we first prove the following lemma. \begin{lemma}\label{lem: simple condition for 3 sep} Let $\Gamma$ be a good cubic graph. Let $V_{1}\sqcup V_{2}$ be the bipartite partition of vertices, and choose $v_{1}\in V_{1}$. Suppose that there exist $\Large{*}$-separated cutsets $A_{i}\ni v_{1}$ such that for each $u\in D(w_{1}(v_{1}))$ there exists some $i$ for which $w_{3}(v_{1})$ and $u$ lie in separate components of $\Gamma-A_{i}$. Then $\Gamma$ is $\Large{*}$-separated. \end{lemma} \begin{proof} We need to show three separate things. Firstly we show that there exists a collection $\mathcal{C}$ of $\Large{*}$-separated cutsets so that $\Gamma$ is vertex $3$-separated by $\mathcal{C}$. Recall the element $\gamma=\gamma_{v_{1},v_{2}}$ of $Aut(\Gamma)$ taking $v_{1}$ to $v_{2}:=w_{1}(v_{1})\in V_{2}$. 
Let $H:=H(\Gamma)$ be the group acting edge-regularly on $\Gamma$ and preserving the bipartite partition. Let $A=\{A_{i}\}_{i}$, $B=\gamma \cdot A$, $\mathcal{A}=H\cdot A$, $\mathcal{B}=H\cdot B$, and $\mathcal{C}=\mathcal{A}\cup \mathcal{B}.$ By assumption, for some $i\neq j$ the set $\mathcal{C}(v_{1},i,j)$ is non-empty. For any vertex $v$, $\gamma_{v_{1},v}\mathcal{C}(v_{1},1,2)=\mathcal{C}(v,1,2)$, and furthermore, $h_{v}\mathcal{C}(v,1,2)=\mathcal{C}(v,2,3)=h_{v}^{-1}\mathcal{C}(v,1,3)$. Therefore $\mathcal{C}(v,i,j)$ is non-empty for all $v$ and all $i\neq j$. In particular for any vertex $v$ and $w,w'$ adjacent to $v$ there exists a cutset separating $w$ and $w'$. Now let $u,v$ be vertices distance at least $3$ apart. Note that $d(u,v)\leq 4$ as $diam(\Gamma)\leq 4$. Assume $d(u,v)=3$, and let $$p=(u,u_{1})(u_{1},u_{2})(u_{2},v)$$ be any edge path between $u$ and $v$. Now, suppose without loss of generality that $u=w_{1}(u_{1})$ and $u_{2}=w_{2}(u_{1})$. Then choosing a cutset $C\in\mathcal{C}(u_{1},1,3)$, $u$ and $u_{2}$ lie in separate components of $\Gamma-C$. Since $C$ is $3$-separated, and $u_{1}\in C$, it follows that $u$, $v$ are not elements of $C$. As $u$ is adjacent to $u_{1}$ and $v$ is adjacent to $u_{2}$, it follows that $u$ and $v$ lie in different components of $\Gamma-C$. If $d(u,v)=4$, then we repeat the argument for $$p=(u,u_{1})(u_{1},u_{2})(u_{2},u_{3})(u_{3},v)$$ and for $C$ the cutset containing $u_{2}$ and separating $u_{1}$ and $u_{3}$. It now follows by Lemma \ref{lem: vertex separated condition} that $\Gamma$ is vertex $3$-separated. If $d(u,v)=5,6$, consider the edge path $$p=(u,u_{1})(u_{1},u_{2})(u_{2},u_{3})(u_{3},u_{4})(u_{4},v),$$ or $$p=(u,u_{1})(u_{1},u_{2})(u_{2},u_{3})(u_{3},u_{4})(u_{4},u_{5})(u_{5},v).$$ We may map $u_{1}$ to $v_{1}$ and $u$ to $w_{3}(v_{1})$: by taking the appropriate $A_{i}$ and mapping back, by assumption this cutset separates $u$ and $v$. Finally we wish to find the positive integers $M$ and $n(C)$. 
Finding these will immediately imply that the weight equations can be solved, and so that $\Gamma$ is weighted vertex $3$-separated with respect to $\mathcal{C}$. The argument is similar to the proof of Lemma \ref{lem: vertex transitive automorphism group can solve equations} concerning vertex transitive automorphism groups. Let $\tilde{\mathcal{C}}=H\cdot A\cup H\cdot B$ counted \emph{with multiplicity}. Let $u,v\in V_{1}$. For $i=1,2,3,$ we have $\gamma_{u,v}(w_{i}(u))=w_{i}(v)$. It follows that for $C\in \tilde{\mathcal{C}},$ $$C\in \mathcal{C}(u,i,j)\iff \gamma_{u,v}C \in \mathcal{C}(v,i,j).$$ Similarly $$C\in \mathcal{C}(u,1,2)\iff h_{u}C \in \mathcal{C}(u,2,3)\iff h_{u}^{2}C\in\mathcal{C}(u,1,3).$$ Let $n(C)=\vert\{C'\in\tilde{\mathcal{C}}\;:\;C'=C\}\vert,$ i.e. $n(C)$ is the multiplicity of $C$ in $\tilde{\mathcal{C}}$. By applying $\gamma_{v_{1},v}$ and $h_{v_{1}}$, we see that for any $v\in V_{1}$ and $i\neq j,i'\neq j'$: $$\sum\limits_{C\in\mathcal{C}(v_{1},i,j)}n(C)=\sum\limits_{C\in\mathcal{C}(v,i',j')}n(C).$$ Therefore there exists an integer $M_{1}$ such that for any $v\in V_{1}$ and $i\neq j$: $$\sum\limits_{C\in\mathcal{C}(v,i,j)}n(C)=\frac{M_{1}}{3}.$$ Similarly there exists an integer $M_{2}$ such that for any $v\in V_{2}$ and $i\neq j$: $$\sum\limits_{C\in\mathcal{C}(v,i,j)}n(C)=\frac{M_{2}}{3}.$$ Now finally we wish to show that $M_{1}=M_{2}$. However, this follows immediately by construction, as $\mathcal{B}=\gamma \cdot\mathcal{A}$, and $\mathcal{C}=\mathcal{A}\cup \mathcal{B}$. \end{proof} Using this, we investigate the separation of several graphs. 
\begin{table}[H] \centering \small \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 0&1& 2& 3\\ 1&0& 4& 5\\ 2&0& 6& 8\\ 3&0& 7& 9\\ 4&1& 11& 14\\ 5&1& 10& 13\\ 6&2& 12& 16\\ 7&3& 12& 15 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 8&2& 11& 18\\ 9&3& 10& 17\\ 10&5& 9& 21\\ 11&4& 8& 20\\ 12&6& 7& 19\\ 13&5& 19& 23\\ 14&4& 19& 22\\ 15&7& 20& 23 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 16&6& 21& 22\\ 17&9& 20& 22\\ 18&8& 21& 23\\ 19&12& 13& 14\\ 20&11& 15& 17\\ 21&10& 16& 18\\ 22&14& 16& 17\\ 23&13& 15& 18 \end{tabular} \caption{Edge incidences for $F24A$} \label{tab:F24 edges} \end{table} \begin{lemma} The graph $F24A$ is $\dagger$-separated. \end{lemma} \begin{proof} By a computer search we find all $3$-separated vertex cutsets in $F24A$: \begin{table}[H] \centering \begin{tabular}{l} $C_{1}=\{x_{0}, x_{10}, x_{11}, x_{12},x_{22},x_{23}\}$,\\ $C_{2}=\{x_{1}, x_{8}, x_{9}, x_{15}, x_{16}, x_{19}\}$,\\ $C_{3}=\{x_{2}, x_{4}, x_{7}, x_{13},x_{17},x_{21}\}$,\\ $C_{4}=\{x_{3}, x_{5}, x_{6}, x_{14}, x_{18}, x_{20}\}$. \end{tabular} \end{table} We note $diam(F24A)=4$. As the above are disjoint and proper, it follows easily that $F24A$ is $\dagger$-separated. 
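The component count is easy to verify directly from Table \ref{tab:F24 edges}. The following sketch (illustrative only, and not the search code used above) rebuilds $F24A$ and checks that deleting each $C_{i}$ leaves exactly three components, as the definition of a $\dagger$-separated graph requires.

```python
# Rebuild F24A from its edge-incidence table and check that removing each
# candidate vertex cutset leaves exactly three connected components.
# Note: C_4 ends in x_20; with x_19 the four sets would neither be disjoint
# nor would C_4 disconnect the graph.
adj = {
    0: [1, 2, 3],     1: [0, 4, 5],     2: [0, 6, 8],     3: [0, 7, 9],
    4: [1, 11, 14],   5: [1, 10, 13],   6: [2, 12, 16],   7: [3, 12, 15],
    8: [2, 11, 18],   9: [3, 10, 17],   10: [5, 9, 21],   11: [4, 8, 20],
    12: [6, 7, 19],   13: [5, 19, 23],  14: [4, 19, 22],  15: [7, 20, 23],
    16: [6, 21, 22],  17: [9, 20, 22],  18: [8, 21, 23],  19: [12, 13, 14],
    20: [11, 15, 17], 21: [10, 16, 18], 22: [14, 16, 17], 23: [13, 15, 18],
}

def components_after_removal(adj, cut):
    """Connected components of the graph with the vertices in `cut` deleted."""
    remaining = set(adj) - set(cut)
    comps = []
    while remaining:
        stack = [remaining.pop()]
        comp = set(stack)
        while stack:  # depth-first search within one component
            v = stack.pop()
            for w in adj[v]:
                if w in remaining:
                    remaining.remove(w)
                    comp.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

cutsets = [
    {0, 10, 11, 12, 22, 23},
    {1, 8, 9, 15, 16, 19},
    {2, 4, 7, 13, 17, 21},
    {3, 5, 6, 14, 18, 20},
]
counts = [len(components_after_removal(adj, C)) for C in cutsets]
```

Each cutset leaves three components of six vertices each, and the four cutsets partition the vertex set.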
\end{proof} \vspace*{-10pt} \begin{table}[H] \centering \small \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 0&1& 2& 3\\ 1&0& 4& 7\\ 2&0& 6& 9\\ 3&0& 5& 8\\ 4&1& 10& 13\\ 5&3& 11& 14\\ 6&2& 12& 15\\ 7&1& 11& 16\\ 8&3& 12& 17 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 9&2& 10& 18\\ 10&4& 9& 22\\ 11&5& 7& 20\\ 12&6& 8& 21\\ 13&4& 23& 24\\ 14&5& 24& 25\\ 15&6& 23& 25\\ 16&7& 21& 23\\ 17&8& 22& 24 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 18&9& 20& 25\\ 19&20& 21& 22\\ 20&11& 18& 19\\ 21&12& 16& 19\\ 22&10& 17& 19\\ 23&13& 15& 16\\ 24&13& 14& 17\\ 25&14& 15& 18 \end{tabular} \caption{Edge incidences for $F26A$} \label{tab:F26 edges} \end{table} \begin{lemma}\label{lem: F26A is * separated} The graph $F26A$ is $\Large{*}$-separated. \end{lemma} \begin{proof} We can take $v_{1}=x_{0}$, $w_{i}=x_{i}$. $D(x_{3})=\emptyset$, as $diam(F26A)=4$. Using the notation as in Lemma \ref{lem: simple condition for 3 sep} we find $A_{1}=\{x_{0}, x_{10}, x_{12}, x_{14}, x_{20}, x_{23}\}.$ The result follows by Lemma \ref{lem: simple condition for 3 sep}. \end{proof} We defer the incidence table of $F40A$, and the collection of cutsets found, to Appendix \ref{section: cutset appendix}. \begin{lemma}\label{lem: F40 is separated} The graph $F40A$ is weighted strongly edge $3$-separated. \end{lemma} \begin{proof} We require a large number of cutsets for this proof: they can be found in Appendix \ref{section: cutset appendix}. In particular, we find a collection of cutsets $\{C_{i}\}_{i}$ such that for any vertices $w_{1},w_{2}$ with $d(x_{0},w_{1})\geq 3$ and $d(w_{1},w_{2})=1$ there exists some $C_{i}$ separating $\{x_{0},x_{1}\}$ and $\{w_{1},w_{2}\}$ (this can be easily checked by computer). 
Similarly for any vertices $w_{1},w_{2}$ with $d(x_{1},w_{1})\geq 3$ and $d(w_{1},w_{2})=1$ there exists some $C_{i}$ separating $\{x_{0},x_{1}\}$ and $\{w_{1},w_{2}\}$. By passing to subsets of $C_{i}$ we may assume each of these cutsets is minimal and therefore proper. As $Aut(F40A)$ acts edge and vertex transitively, it follows by Lemma \ref{lem: strong edge sep condition} that $F40A$ is strongly edge $3$-separated. By Lemma \ref{lem: edge transitive automorphism group can solve equations}, $F40A$ is weighted disjointly strongly edge $3$-separated. \end{proof} \begin{table}[H] \centering \small \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 0&1& 2& 3\\ 1&0& 4& 5\\ 2&0& 6& 8\\ 3&0& 7& 9\\ 4&1& 11& 17\\ 5&1& 10& 16\\ 6&2& 13& 21\\ 7&3& 12& 20\\ 8&2& 15& 19\\ 9&3& 14& 18\\ 10&5& 23& 25\\ 11&4& 22& 24\\ 12&7& 23& 29\\ 13&6& 22& 28\\ 14&9& 22& 27\\ 15&8& 23& 26 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 16&5& 27& 31\\ 17&4& 26& 30\\ 18&9& 25& 35\\ 19&8& 24& 34\\ 20&7& 28& 33\\ 21&6& 29& 32\\ 22&11& 13& 14\\ 23&10& 12& 15\\ 24&11& 19& 43\\ 25&10& 18& 42\\ 26&15& 17& 47\\ 27&14& 16& 46\\ 28&13& 20& 45\\ 29&12& 21& 44\\ 30&17& 40& 46\\ 31&16& 39& 47 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 32&21& 41& 43\\ 33&20& 41& 42\\ 34&19& 40& 44\\ 35&18& 39& 45\\ 36&43& 45& 46\\ 37&42& 44& 47\\ 38&39& 40& 41\\ 39&31& 35& 38\\ 40&30& 34& 38\\ 41&32& 33& 38\\ 42&25& 33& 37\\ 43&24& 32& 36\\ 44&29& 34& 37\\ 45&28& 35& 36\\ 46&27& 30& 36\\ 47&26& 31& 37 \end{tabular} \caption{Edge incidences for $F48A$} \label{tab:F48 edges} \end{table} \begin{lemma} The graph $F48A$ is $\dagger$-separated. 
\end{lemma} \begin{proof} By a computer search we find all $3$-separated vertex cutsets in $F48A$: \begin{table}[H] \centering \begin{tabular}{l} $C_{1}=\{x_{0}, x_{16}, x_{17}, x_{18}, x_{19}, x_{20}, x_{21}, x_{22}, x_{23}, x_{36}, x_{37}, x_{38}\}$,\\ $C_{2}=\{x_{1}, x_{6}, x_{7}, x_{14}, x_{15}, x_{24}, x_{25}, x_{30}, x_{31}, x_{41}, x_{44}, x_{45}\}$,\\ $C_{3}=\{x_{2}, x_{5}, x_{9}, x_{11}, x_{12}, x_{26}, x_{28}, x_{32}, x_{34}, x_{39}, x_{42}, x_{46}\}$,\\ $C_{4}=\{x_{3}, x_{4}, x_{8}, x_{10}, x_{13}, x_{27}, x_{29}, x_{33}, x_{35}, x_{40}, x_{43}, x_{47}\}$. \end{tabular} \end{table} The above are disjoint and proper, and it can be seen that $F48A$ is $\dagger$-separated. \end{proof} We defer the incidence table of $G54$, and the collection of cutsets found, to Appendix \ref{section: cutset appendix}. \begin{lemma}\label{lem: G54 is separated} The Gray graph $G54$ is weighted strongly edge $3$-separated. \end{lemma} \begin{proof} We require a large number of cutsets for this proof: they can be found in Appendix \ref{section: cutset appendix}. In particular, we find a collection of $3$-separated cutsets $\{C_{i}\}_{i}$ such that each $C_{i}$ contains one of the edges $$(x_{0},x_{1}),(x_{0},x_{53}),(x_{24},x_{25}),(x_{25},x_{26}).$$ Therefore, as each cutset is $3$-separated, they cannot contain the edge $(x_{0},x_{25})$, and so for each cutset, $x_{0}$ and $x_{25}$ lie in the same component of $G54- C_{i}$. We also show that for any point $v$ with $d(x_{0},v)\geq 3$ and any neighbour $w$ of $v$, there exists some $C_{i}$ separating $\{x_{0},x_{25}\}$ and $\{v,w\}$. Furthermore, for any point $v$ with $d(x_{25},v)\geq 3$ and any neighbour $w$ of $v$, there exists some $C_{i}$ separating $\{x_{0},x_{25}\}$ and $\{v,w\}$. By passing to subsets of $C_{i}$ we may assume each of these cutsets is minimal and therefore proper. Now, let $p=(u,w_{1})(w_{1},w_{2})\hdots(w_{n},v)$ be some path with $2\leq n\leq 5$ of length between $3$ and $6$. 
Let $u'$ be adjacent to $u$ and $v'$ be adjacent to $v$. Note again that $Aut(G54)$ acts transitively on the set of edges. If we can map $u$ to $x_{0}$ by some element $\gamma\in Aut(G54)$, then we may also map $u'$ to $x_{25}$ by $\gamma$, and then for some $i$, $C_{i}$ separates $x_{0},x_{25}$ and $\gamma v,\gamma v'$: $\gamma^{-1}C_{i}$ then separates $u,u'$ and $v,v'$. Otherwise, we map $u$ to $x_{25}$ by $\gamma$, so that $\gamma u'=x_{0}$. The result follows similarly. Therefore, $G54$ is strongly edge $3$-separated, and as it has an edge transitive automorphism group, it is weighted strongly edge $3$-separated by Lemma \ref{lem: edge transitive automorphism group can solve equations}. \end{proof} We finally need to prove the following. \begin{lemma}\label{lem: splicing together generalized triangle groups} Let $Y$ be a finite triangle complex such that each triangle is a unit equilateral Euclidean triangle. Suppose that the link of each vertex is either $\Large{*}$-separated or $\dagger$-separated with the combinatorial metric (we allow a mixture of these). Then $Y$ is gluably $\pi$-separated. \end{lemma} \begin{proof} It is clear that $Y$ is nonpositively curved and regular. By Lemma \ref{lem: solving gluing equations for minimal edge cutsets}, if the link of each vertex is $\dagger$-separated with the combinatorial metric then we are finished. Otherwise, let $\{v_{k}\}$ be the vertices such that $Lk(v_{k})$ is $\Large{*}$-separated with the combinatorial metric, and $\{w_{l}\}$ be the vertices such that $Lk(w_{l})$ is $\dagger$-separated with the combinatorial metric. Note that a $3$-separated cutset in $Lk(x)$ under the combinatorial metric is a $\pi$-separated cutset in $Lk(x)$ under the angular metric. For each proper $\pi$-separated cutset $C$ in $Lk(w_{l})$ we may assign the three partitions $P_{1}(C),P_{2}(C),P_{3}(C)$ corresponding to placing two components of $Lk(w_{l})-C$ in the same element of the partition. 
For each cutset $C$ in $Lk(v_{k})$ assign the unique partition given by the connected components of $Lk(v_{k})-C$. Since the links are $\dagger$-separated or $\Large{*}$-separated, by assumption for each vertex $x\in Y$ there exists a positive integer $N_{x}>0$ and a system of strictly positive weights $n_{x}(C)$ for $C\in \mathcal{C}_{x}$ such that for any vertex $e$ in $Lk_{Y}(x)$, $$\sum\limits_{C\in \mathcal{C}(e)}n_{x}(C)=\sum\limits_{C\in \mathcal{C}(e)\cap \mathcal{C}_{x}}n_{x}(C)=N_{x}.$$ Furthermore, if $Lk(v_{k})$ is $\Large{*}$-separated, then for any vertex $y\in V(Lk(v_{k}))$ and $i\neq j$ $$\sum\limits_{C\in\mathcal{C}_{v_{k}}(y,i,j)}n_{v_{k}}(C)=\frac{N_{v_{k}}}{3}.$$ Let $M=\prod_{x\in V(Y)}N_{x},$ and for a cutset $C\in\mathcal{C}_{x},$ define $$m(C)=Mn_{x}(C)\slash N_{x}.$$ It follows that for an edge $e$ in $Lk_{Y}(x)$, $$\sum\limits_{C\in \mathcal{C}(e)}m(C)=\frac{M}{N_{x}}\sum\limits_{C\in \mathcal{C}(e)}n_{x}(C)=\frac{M}{N_{x}}N_{x}=M.$$ Now, take $\mu(C,P(C))=m(C)$. It follows that for any oriented edge $e$ of $Y^{(1)}$ starting at some $w_{l}$ and any partition $(C,P)\in \mathcal{CP}(e)$: $$\sum \limits_{(C',P')\in [C,P]_{e}}\mu (C',P')=\sum \limits_{(C',P')\in [C,P]_{e}}m(C')=\frac{1}{3}\sum \limits_{C'\in \mathcal{C}(e)}m(C')=\frac{1}{3}M.$$ Similarly, by the definition of $\Large{*}$-separated, for each $v_{k}$, each edge $e$ starting at $v_{k}$, and $(C,P(C))\in \mathcal{CP}(e),$ $$\sum \limits_{(C',P')\in [C,P]_{e}}\mu (C',P')=\sum \limits_{(C',P')\in [C,P]_{e}}m(C')=\frac{1}{3}\sum \limits_{C'\in \mathcal{C}(e)}m(C')=\frac{1}{3}M,$$ and so the gluing equations are solved. 
\end{proof} The results of Corollary \ref{coralph: small girth generalized triangle groups} now follow from \cite[Theorem 3.1]{Caprace-Conder-Kaluba-Witzel_triangle}, \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, the above lemmas concerning the separation of the graphs considered, Theorem \ref{mainthm: cubulating groups}, and Theorem \ref{mainthm: cubulating generalized triangle groups}. \subsection{Cubulating Dehn fillings of generalized ordinary triangle groups}\label{subsection: cubulating dehn fillings of generalized triangle groups} We now apply Theorem \ref{mainthm: cubulating groups} to the generalized triangle groups of \cite{Lubotzky-Manning-Wilton}, in particular retrieving consequences of the malnormal special quotient theorem of Wise \cite{Wise-MSQT}. \begin{cor}\label{mainthm: cubulating dehn fillings of generalized triangle groups} Let $\Gamma_{i}\looparrowright C_{k,2}$ be finite $n(i)$-sheeted normal covering graphs. There exist finite-sheeted normal covering graphs $\dot{\Gamma}_{i}\looparrowright \Gamma_{i}$ of index at most $$4\bigg(4^{4^{kn(i)}}\bigg)$$ such that for any collection of finite-sheeted covering graphs $\Delta_{i}\looparrowright\Gamma_{i}$ that factor as $\Delta_{i}\looparrowright\dot{\Gamma}_{i}\looparrowright\Gamma_{i},$ and any $j$, the group $G^{j}_{0,k}(\Delta_{1},\Delta_{2},\Delta_{3})$ is hyperbolic and acts properly discontinuously and cocompactly on a $CAT(0)$ cube complex. \end{cor} We consider covers of $\sigma$-separated graphs: we restrict our consideration to graphs with the combinatorial metric. We note the following lemma. \begin{lemma}\label{lem:cover of separation} Let $p:\tilde{\Gamma}\looparrowright\Gamma$ be a covering graph. Let $e\in E(\Gamma)$ and let $\tilde{e}_{1},\tilde{e}_{2}\in p^{-1}(e)$ be distinct. Then $$d_{\tilde{\Gamma}}(m(\tilde{e}_{1}),m(\tilde{e}_{2}))\geq girth(\Gamma).$$ \end{lemma} We now show that covers of $\sigma$-separated graphs are also $\sigma$-separated. 
\begin{lemma}\label{lem: covers of suitable graphs} Let $\Gamma$ be a weighted (disjointly) edge $\sigma$-separated graph with $girth(\Gamma)\geq \sigma$ and $p:\tilde{\Gamma}\looparrowright\Gamma$ a finite-sheeted covering graph. Then $\tilde{\Gamma}$ is also weighted (disjointly) edge $\sigma$-separated, and $girth(\tilde{\Gamma})\geq girth(\Gamma)$. \end{lemma} \begin{proof} It is clear that $girth(\tilde{\Gamma})\geq girth(\Gamma)$, and that $\tilde{\Gamma}$ is connected and contains no vertices of degree $1$. Let $\mathcal{C}_{1}, \hdots , \mathcal{C}_{m}\subseteq E(\Gamma)$ be the $\sigma$-separated cut sets of $\Gamma$. Let $\tilde{\mathcal{C}}_{i}=p^{-1}(\mathcal{C}_{i})$: by Lemma \ref{lem:cover of separation}, and by noting that for all $x,y\in\tilde{\Gamma}$ we have $d_{\tilde{\Gamma}}(x,y)\geq d_{\Gamma}(p(x),p(y))$, we see that $\tilde{\mathcal{C}}_{i}$ is a collection of proper $\min \{girth (\Gamma ),\sigma \}$-separated cut sets. Furthermore $\vert \tilde{\mathcal{C}}_{i}\vert\geq \vert \mathcal{C}_{i}\vert\geq 2$. As $girth(\Gamma)\geq \sigma$, these are $\sigma$-separated and $\cup_{i}\tilde{\mathcal{C}}_{i}=E(\tilde{\Gamma})$. Therefore, $\tilde{\Gamma}$ is edge $\sigma$-separated. If $\Gamma$ is disjointly separated, it is clear that $\tilde{\Gamma}$ is disjointly separated. Finally, defining $n(\tilde{\mathcal{C}}_{i})=n(\mathcal{C}_{i})$, it can be seen that the weight equations are satisfied, so that $\tilde{\Gamma}$ is weighted (disjointly) edge $\sigma$-separated. \end{proof} Using the above, we wish to show that given any graph $\Gamma$, there exists a finite-sheeted $3$-separated covering graph $\tilde{\Gamma}\looparrowright \Gamma$. \begin{definition} Let $\Gamma$ be a graph and $m\geq 0$. 
The \emph{$\mathbb{Z}_{m}$ cover of $\Gamma$}, $$p_{m}:\mathbb{Z}_{m} (\Gamma )\looparrowright\Gamma,$$ is the $m^{b_{1}(\Gamma)}$-sheeted cover corresponding to the kernel of the canonical map $\pi_{1}(\Gamma)\rightarrow H_{1}(\Gamma,\mathbb{Z}_{m}).$ \end{definition} The use of this is the following. \begin{lemma}\label{lem: Gamma2 is suitable} Let $\Gamma$ be a finite connected graph with no cut edges and let $m\geq 1$. The covering graph $\mathbb{Z}_{2m} ( \Gamma )$ is weighted disjointly edge $girth(\Gamma)$-separated and $girth ( \mathbb{Z}_{2m} ( \Gamma ))= 2m (girth(\Gamma)).$ \end{lemma} \begin{proof} Let $e\in E(\Gamma)$. We claim $p_{2m}^{-1}(e)$ is a proper $girth(\Gamma)$-separated cut set in $\mathbb{Z}_{2m} ( \Gamma )$. By Lemma \ref{lem:cover of separation}, $p_{2m}^{-1}(e)$ is $girth(\Gamma)$-separated. It suffices to show that if two points $x$ and $y$ are joined by a path $q$ containing one edge of $p_{2m}^{-1}(e)$, then any path $q'$ between them contains an edge of $p_{2m}^{-1}(e)$. Now suppose not: consider such a path $q'$ not containing any edge of $p_{2m}^{-1}(e)$, and consider the loop $qq'$. Then $p_{2m}(qq')$ maps to zero in $H_{1}(\Gamma, \mathbb{Z}_{2m})$, so must traverse $e$ a number of times divisible by $2m$; but it traverses $e$ exactly once, a contradiction. Therefore the set $\mathcal{C}=\{p_{2m}^{-1}(e)\;:\; e\in E(\Gamma)\}$ is a disjoint collection of proper $girth(\Gamma)$-separated edge-cut sets such that any edge in $\mathbb{Z}_{2m}(\Gamma)$ appears in exactly one cut set: the weight equations are trivially satisfied and so $\mathbb{Z}_{2m}(\Gamma)$ is weighted disjointly $girth(\Gamma)$-separated. Any loop in $\mathbb{Z}_{2m} ( \Gamma )$ projects to a loop homotopic to a product of loops where each loop is traversed $2m$ times, and so $girth(\mathbb{Z}_{2m} ( \Gamma )) = 2m( girth(\Gamma))$. \end{proof} Using this, we prove the following. 
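Before giving the proof, we remark that the girth-multiplying behaviour of Lemma \ref{lem: Gamma2 is suitable} is easy to check computationally on small examples. The following sketch (illustrative only, and no part of the arguments above) builds the $\mathbb{Z}_{2}$ homology cover of $K_{4}$ via voltages, where spanning-tree edges carry the trivial voltage and the $b_{1}=3$ remaining edges carry the standard basis of $\mathbb{Z}_{2}^{3}$, and confirms that the girth doubles from $3$ to $6$ across the $2^{3}=8$ sheets.

```python
from collections import deque

def z2_homology_cover(edges):
    """Z_2 homology cover via voltages: spanning-tree edges get voltage 0,
    each of the b_1 non-tree edges a distinct Z_2^{b_1} basis vector (stored
    as a bitmask).  An edge with voltage g joins (u, s) to (v, s ^ g)."""
    verts = sorted({v for e in edges for v in e})
    nbrs = {v: [] for v in verts}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    # Grow a BFS spanning tree.
    parent = {verts[0]: None}
    queue = deque([verts[0]])
    tree = set()
    while queue:
        u = queue.popleft()
        for w in nbrs[u]:
            if w not in parent:
                parent[w] = u
                tree.add(frozenset((u, w)))
                queue.append(w)
    nontree = [e for e in edges if frozenset(e) not in tree]
    volt = {frozenset(e): 1 << i for i, e in enumerate(nontree)}
    sheets = 1 << len(nontree)  # 2^{b_1(Gamma)} sheets
    cover = {(v, s): set() for v in verts for s in range(sheets)}
    for u, v in edges:
        g = volt.get(frozenset((u, v)), 0)
        for s in range(sheets):
            cover[(u, s)].add((v, s ^ g))
            cover[(v, s ^ g)].add((u, s))
    return cover

def girth(adjacency):
    """Length of a shortest cycle, via BFS from every vertex."""
    best = float('inf')
    for src in adjacency:
        dist, par = {src: 0}, {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adjacency[u]:
                if w not in dist:
                    dist[w], par[w] = dist[u] + 1, u
                    queue.append(w)
                elif par[u] != w:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
cover = z2_homology_cover(k4)
```

The cover has $32 = 4\cdot 2^{3}$ vertices, remains cubic, and a shortest cycle projects to a triangle of $K_{4}$ traversed twice.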
\begin{proof}[Proof of Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups}] Let $\Gamma_{i}\looparrowright C_{k,2}$ be $n(i)$-sheeted normal covering graphs. Let $\dot{\Gamma}_{i}:=\mathbb{Z}_{2} (\mathbb{Z}_{2} ( \Gamma_{i} ))$: these are $$2^{2-(2^{2-2n(i)+kn(i)}+2)n(i)+(2^{1-2n(i)+kn(i)}+1)kn(i)}\leq 4^{1+kn(i)2^{kn(i)}}\leq 4\bigg(4^{4^{kn(i)}}\bigg)-$$ sheeted covering graphs, which, by Lemma \ref{lem: Gamma2 is suitable}, are weighted disjointly edge $3$-separated under the combinatorial metric and have girth at least $8$. Furthermore, it is clear that $\dot{\Gamma_{i}}\looparrowright C_{k,2}$ are normal covers. Suppose $\Delta_{i}\looparrowright\Gamma_{i}$ factors as $\Delta_{i}\looparrowright\dot{\Gamma}_{i}\looparrowright\Gamma_{i}$. By \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, noting that $girth(\Delta_{i})\geq girth(\dot{\Gamma_{i}})> 6$, the group $G^{j}_{0,k}(\Delta_{1},\Delta_{2},\Delta_{3})$ is hyperbolic. The $\Delta_{i}$ are covers of $\dot{\Gamma}_{i}$, so by Lemma \ref{lem: covers of suitable graphs} are also weighted edge $3$-separated under the combinatorial metric. The result now follows from Lemma \ref{lem: solving gluing equations for minimal edge cutsets}, \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, and Theorem \ref{mainthm: cubulating groups}. \end{proof} \section{Introduction} \subsection{Cubulating groups acting on polygonal complexes} Recently, a very fruitful route to understanding groups has been to find an action on a $CAT(0)$ cube complex. Indeed, an action without a global fixed point provides an obstruction to Property $(T)$ \cite{Niblo-Reeves97}, while a proper action is enough to guarantee the Haagerup property \cite{Cheriz-Martin-Valette_haagerupproperty}. Further properties, such as residual finiteness or linearity, can be deduced if the cube complex is \emph{special} \cite{Haglund-Wise}. 
Perhaps the most notable recent use of cube complexes was in Agol's proof of the Virtual Haken Conjecture \cite{Agol13}. Therefore, it is of interest to find actions of groups on $CAT(0)$ cube complexes. In this paper we provide a condition on the links of polygonal complexes (including those with triangular faces) that is sufficient to ensure a group acting properly discontinuously and cocompactly on such a complex contains a virtually free codimension-$1$ subgroup. We provide stronger conditions that are sufficient to ensure a group acting properly discontinuously and cocompactly on such a complex acts properly discontinuously on a $CAT(0)$ cube complex: in many applications (in particular for hyperbolic groups) this action is also cocompact. We shall see that these conditions can be practically checked in many examples, and can in fact be checked by computer search if desired. For a polygonal complex $X$ and a vertex $v$ we define the \emph{link} of $v$, $Lk_{X}(v)$ (or simply $Lk(v)$ when $X$ is clear from context), as the graph whose vertices are the edges of $X$ incident at $v$, and two vertices $e_{1}$ and $e_{2}$ are connected by an edge $f$ in $Lk(v)$ if the edges $e_{1}$ and $e_{2}$ in $X$ are adjacent to a common face $f$. We can endow the link graph with the \emph{angular metric}: an edge $f=(e_{1},e_{2})$ in $Lk(v)$ has length $\alpha$, where $\alpha$ is the angle between $e_{1}$ and $e_{2}$ in the shared face $f$. We refer the reader to Section \ref{subsec: Link conditions} for further definitions, such as that of a gluably $\pi$-separated complex (this requires a solution to a system of linear equations called the \emph{gluing equations}). We note that in all of our applications, the gluing equations can be solved by considering only the links of vertices of $G\backslash X$. It is well known that a group containing a codimension-$1$ subgroup cannot have Property $(T)$ \cite{Niblo-Roller1998}. 
Furthermore, a hyperbolic group acting properly discontinuously and cocompactly on a $CAT(0)$ cube complex is virtually special \cite[Theorem $1.1$]{Agol13} (see Haglund-Wise \cite{Haglund-Wise} for a discussion of the notion of specialness); in particular it is linear over $\mathbb{Z}$ and is residually finite. \begin{theoremalph}\label{mainthm: cubulating groups} Let $G$ be a group acting properly discontinuously and cocompactly on a simply connected $CAT(0)$ polygonal complex $X$. \begin{enumerate}[label=(\roman*)] \item If $G\backslash X$ is gluably weakly $\pi$-separated, then $G$ contains a virtually-free codimension-$1$ subgroup (and therefore does not have Property $(T)$). \item If $G\backslash X$ is gluably $\pi$-separated, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex. If, in addition, $G$ is hyperbolic, then this action is cocompact. In particular, if $G$ is hyperbolic, then it is virtually special, and so linear over $\mathbb{Z}$. \end{enumerate} \end{theoremalph} It is commonly far easier to check a local property than a global one, and so local to global principles are frequently of great use. When working with complexes, it is often most natural to consider local properties related to the links of vertices. In terms of metric curvature, one of the best-known local to global principles is Gromov's Link Condition \cite[$4.2A$]{Gromov_hyperbolic}. Switching to group theoretic properties, \.{Z}uk \cite{zuk1996} and Ballmann--\'{S}wiatkowski \cite{Ballmann-Swiatkowski} independently provided a condition on the first eigenvalue of the Laplacian of links of simplicial complexes that is sufficient to prove a group acting properly discontinuously and cocompactly on such a complex has Property $(T)$. 
In particular, as opposed to \cite[Example $4.3$]{Hruska-Wise}, we do not require a partition of the edges of links into cutsets: we can remove this assumption at the expense of requiring that every cutset contains at least two elements, and that the \emph{gluing equations} are satisfied for the cutsets (these equations are trivially satisfied for a collection of proper disjoint edge cutsets). Furthermore, we do not require that the cutsets are two-sided: $\Gamma - C$ is allowed to contain arbitrarily many components. Finally, we allow cutsets to consist of vertices or edges. Though we are not always able to cocompactly cubulate non-hyperbolic groups with this method, we can still produce codimension-$1$ subgroups, and often a proper action on a cube complex. \subsection{Applications of the main theorem} We provide some applications of Theorem \ref{mainthm: cubulating groups}. We consider the groups classified by Kangaslampi--Vdovina \cite{Kangaslampi-Vdovina} and Carbone--Kangaslampi--Vdovina \cite{Carbone-Kangaslampli-Vdovina_2012}. These are groups that act simply transitively on triangular hyperbolic buildings: in particular, they act properly discontinuously and cocompactly on a simply connected triangular complex with links isomorphic to the minimal generalized quadrangle. Little is known about these groups: until now they were not even known to be residually finite. We apply Theorem \ref{mainthm: cubulating groups} to these groups to deduce that they are virtually special. The full automorphism groups of Kac--Moody buildings of $2$-spherical type of large thickness have Property $(T)$ \cite{Dymara-Januszkiewicz2002,Ershov-Rall18}: neither \cite{Dymara-Januszkiewicz2002} nor \cite{Ershov-Rall18} records whether Property (T) fails at small thicknesses.
Some of the groups considered in Corollary \ref{mainthm: cubulating the generalized quadrangle} are cocompact lattices in a $2$-spherical Kac--Moody building with small thickness \cite{Carbone-Kangaslampli-Vdovina_2012}. Therefore, Corollary \ref{mainthm: cubulating the generalized quadrangle} complements \cite{Dymara-Januszkiewicz2002,Ershov-Rall18}, providing an example of the failure of Property $(T)$ when the thickness is small. \begin{coralph}\label{mainthm: cubulating the generalized quadrangle} Let $X$ be a simply connected polygonal complex such that every face has at least $3$ sides, and the link of every vertex is isomorphic to the minimal generalized quadrangle. If a group $G$ acts properly discontinuously and cocompactly on $X$, then it is virtually special; in particular it is linear over $\mathbb{Z}$. \end{coralph} We prove that if $X$ and $G$ are as above, then $X$ can be endowed with a $CAT(0)$ metric such that $G\backslash X$ is gluably $\pi$-separated. However, we show that it is not disjointly $\pi$-separated, so that \cite[Example 4.3]{Hruska-Wise} cannot be applied to such a complex. As a further application of Theorem \ref{mainthm: cubulating groups}, we consider generalized triangle groups, as defined in \cite{Lubotzky-Manning-Wilton} (see Definitions \ref{def: generalized triangle} and \ref{def: generalized triangle 2}). Let $C_{k,2}$ be the cage graph on $k$ edges, i.e. the smallest $k$-regular graph of girth $2$. For finite-sheeted covering graphs $\Gamma_{i} \looparrowright C_{k,2}$, we consider an associated pair of families of triangular complexes of groups $D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$, and $D^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$. We remark that these complexes of groups are not necessarily unique for given $\Gamma_{1},\Gamma_{2},\Gamma_{3}$.
We consider explicitly the graphs used in \cite{Caprace-Conder-Kaluba-Witzel_triangle}: we refer to them by their Foster Census names (see \cite{fostercensus}). The only graph not in the Foster Census is $G54$, the Gray graph, which is edge but not vertex transitive. Using Theorem \ref{mainthm: cubulating groups} and Theorem \ref{mainthm: cubulating generalized triangle groups} we can deduce the following. \begin{coralph}\label{coralph: small girth generalized triangle groups} Let $\Gamma_{i}\looparrowright C_{k,2}$ be finite-sheeted covers, such that $girth(\Gamma_{i})\geq 6$ for each $i$. Let $G=\pi_{1}(D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))$ or $G=\pi_{1}(D^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))$ for some $j$. \begin{enumerate}[label=(\roman*)] \item If $\Gamma_{i}\in\{F24A,\;F26A,\;F48A\}$ for each $i$, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex: if $G$ is hyperbolic, then this action is also cocompact and so $G$ is virtually special. \item If $\Gamma_{1}\in\{F40A,G54\}$, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex: if $G$ is hyperbolic, then this action is also cocompact and so $G$ is virtually special. \end{enumerate} \end{coralph} There are $252$ groups considered in \cite{Caprace-Conder-Kaluba-Witzel_triangle}, of which they show that $168$ do not satisfy Property $(T)$. Our method recovers this result for $101$ groups, and proves that $30$ new groups do not have Property $(T)$. We prove that each of the $131$ groups we consider has a proper action on a $CAT(0)$ cube complex, and so, by e.g. \cite{Cheriz-Martin-Valette_haagerupproperty}, has the Haagerup property. Furthermore, $125$ of these groups are hyperbolic and have a proper and cocompact action on a $CAT(0)$ cube complex, and hence by \cite{Agol13} are virtually special. Wise's malnormal special quotient theorem \cite{Wise-MSQT} (cf. \cite{AgolGrovesManningMSQT}) is one of the most important theorems in modern geometric group theory.
However, the proof of this theorem is famously complex and so in Section \ref{subsection: cubulating dehn fillings of generalized triangle groups} we apply Theorem \ref{mainthm: cubulating groups} to generalized triangle groups to recover partial consequences of the malnormal special quotient theorem in Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups}. Although this theorem follows from Wise's proof of the MSQT, a far more general theorem, the proof of Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups} is considerably shorter and simpler, and provides an effective bound on the index of the fillings required. \subsection{Structure of the paper} The main idea of the proof is the following. Since $G\backslash X$ is $\pi$-separated, we can find a collection of local geodesics in $G\backslash X$ that are locally separating at vertices of $G\backslash X$. The \emph{gluing equations} provide us with a way to glue these local geodesics together to find a locally geodesic locally separating subcomplex of $G\backslash X$: by lifting we find a geodesic separating subcomplex of $X$ with cocompact stabilizer. We then use the construction of Sageev \cite{Sageev-95}, generalized by Hruska--Wise in \cite{Hruska-Wise}, to construct the desired $CAT(0)$ cube complex. The paper is structured as follows. In Section \ref{section: Cubulating hyperbolic groups acting on polygonal complexes} we define hypergraphs, which will be separating subspaces constructed in the polygonal complex, and show certain subgroups of their stabilizers are codimension-$1$. We then prove Theorem \ref{mainthm: cubulating groups} by using Hruska--Wise's \cite{Hruska-Wise} extension of Sageev's \cite{Sageev-95} construction of a $CAT(0)$ cube complex, and proving that there are `enough' hypergraphs to `separate' the polygonal complex.
In Section \ref{section: finding cutsets}, we discuss how to find `separated' cutsets of a graph by computer search. In Section \ref{section: generalized quadrangles} we prove Corollary \ref{mainthm: cubulating the generalized quadrangle} by proving that the minimal generalized quadrangle is \emph{weighted edge $3$-separated} and endowing the polygonal complexes with a suitable $CAT(0)$ metric. In Section \ref{sec: gen triangles} we prove Theorem \ref{mainthm: cubulating generalized triangle groups} and Corollary \ref{coralph: small girth generalized triangle groups}. We again apply Theorem \ref{mainthm: cubulating groups} to prove Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups} by considering cutsets in covers of graphs. \section*{Acknowledgements} I would like to thank my PhD advisor Henry Wilton for suggesting this topic, for the many useful discussions, and invaluable comments on an earlier draft of this manuscript. I would also like to thank Pierre-Emmanuel Caprace for the extremely helpful comments and corrections on an earlier draft, as well as pointing out the relevance of Corollary \ref{mainthm: cubulating the generalized quadrangle} to Property (T) in Kac--Moody buildings. \section{Cubulating groups acting on polygonal complexes}\label{section: Cubulating hyperbolic groups acting on polygonal complexes} This section is structured as follows. We first define the required conditions on graphs and complexes in Section \ref{subsec: Link conditions}, and in Section \ref{subsection: Removing cut edges and producing two sided cut sets} we discuss how to remove cut edges from links. We provide some examples where our conditions can be readily verified for graphs in Section \ref{subsection: Examples of separated graphs} and for complexes in Section \ref{subsection: Examples of solutions of the gluing equations}.
We use these definitions in Sections \ref{subsection: Constructing hypergraphs in polygonal complexes}, \ref{subsection: Hypergraphs are separating}, and \ref{subsection: Hypergraph stabolizers and wallspaces} to build separating convex trees in polygonal complexes, and in Section \ref{subsection: Cubulating groups acting on polygonal complexes} we use these convex trees, and a construction due to \cite{Sageev-95} and \cite{Hruska-Wise}, to prove Theorem \ref{mainthm: cubulating groups}. Firstly, we introduce the relevant definitions for links. \subsection{Some separation conditions}\label{subsec: Link conditions} We now define the notion of `separatedness' of a graph. The \emph{combinatorial metric} on a graph $\Gamma$ is the path metric induced by assigning each edge of $\Gamma$ length $1$. \begin{definition} Let $\Gamma$ be a finite metric graph. \begin{enumerate}[label=\roman*)] \item An edge $e$ is a \emph{cut edge} if $\Gamma-\{e\}$ is disconnected. \item A set $C\subseteq \Gamma$ is a \emph{cutset} if $\Gamma - C$ is disconnected as a topological space. \item A cutset $C$ is an \emph{edge cutset} if $C\subseteq E(\Gamma)$ and is a \emph{vertex cutset} if $C\subseteq V(\Gamma).$ \item An edge cutset $C$ is \emph{proper} if for any edge $e\in C$, the endpoints of $e$ lie in distinct components of $\Gamma-C$. \item A vertex cutset $C$ is \emph{proper} if for any vertex $u\in C$, and any distinct vertices $v,w$ adjacent to $u$, the vertices $v$ and $w$ lie in distinct components of $\Gamma-C$. \end{enumerate} For an edge $e$ in $\Gamma$ let $m(e)$ be the midpoint of $e$.
For $\sigma >0$ a set $\mathcal{C}\subseteq E(\Gamma)$ is \emph{$\sigma$-separated} if for all distinct $e_{1},e_{2}\in\mathcal{C}$, $d_{\Gamma}(m(e_{1}),m(e_{2}))\geq \sigma .$ A set $\mathcal{C}\subseteq V(\Gamma)$ is \emph{$\sigma$-separated} if for all distinct $v_{1},v_{2}\in\mathcal{C}$, $d_{\Gamma}(v_{1},v_{2})\geq \sigma .$ \end{definition} \begin{remark} We note that proper cutsets are very natural to consider. Any minimal edge cutset is proper, and more importantly, proper cutsets are preserved under passing to finite covers. Finding proper edge cutsets is easy, but for a given graph $\Gamma$ there may not be any proper $\sigma$-separated vertex cutsets: see for example the graph $F26A$, considered in Lemma \ref{lem: F26A is * separated}. \end{remark} \begin{definition}[\emph{Edge separated}] Let $\Gamma$ be a finite metric graph, and let $\sigma>0$. We will say that $\Gamma$ is \emph{edge $\sigma$-separated} if $\Gamma$ is connected, contains no vertices of degree $1$, and there exists a collection of proper $\sigma$-separated edge cutsets $C_{i}\subseteq E(\Gamma)$ with $\cup_{i}C_{i}=E(\Gamma)$ and $\vert C_{i}\vert\geq 2$ for each $i$. We say the graph is \emph{disjointly edge $\sigma$-separated} if the above cutsets form a partition of the edges. \end{definition} Note that to each edge cutset $C$ we can assign a partition $\mathcal{P}(C)$ of $\pi_{0}(\Gamma-C)$: we \emph{always require} that such a partition is at least as coarse as connectivity in $\Gamma - C$, and each partition contains at least two elements. The \emph{canonical partition} of $C$ is that induced by connectivity in $\Gamma -C$. \begin{definition}[\emph{Strongly edge separated}] A graph $\Gamma$ is \emph{strongly edge $\sigma$-separated} if $\Gamma$ is edge $\sigma$-separated and for every pair of points $u,v$ in $\Gamma$ with $d_{\Gamma}(u,v)\geq\sigma$ there exists a proper $\sigma$-separated edge cutset $C_{i}$ with $u$ and $v$ lying in separate components of $\Gamma - C_{i}$.
We say the graph is \emph{disjointly strongly edge $\sigma$-separated} if the above cutsets form a partition of the edges. \end{definition} There is a more combinatorial condition that implies strong edge separation. \begin{definition} We say a cutset $C$ separates $\{v_{1},v_{2}\}$ and $\{w_{1},w_{2}\}$ if each $v_{i}$ lies in a different component of $\Gamma - C$ to each $w_{j}$. \end{definition} \begin{lemma}\label{lem: strong edge sep condition} Let $n\geq 2$, and let $\Gamma$ be a graph endowed with the combinatorial metric, such that $girth(\Gamma)\geq 2n$. Suppose that $\Gamma$ is edge $n$-separated with cutsets $\mathcal{C}=\{C_{1},\hdots , C_{m}\}$, and that for every pair of vertices $u,v$ in $\Gamma$ with $d_{\Gamma}(u,v)\geq n$, and any vertices $u',v'$ with $d(u,u')=d(v,v')=1$, there exists an $n$-separated cutset $C_{i}$ separating $\{u,u'\}$ and $\{v,v'\}$. Then $\Gamma$ is strongly edge $n$-separated with the same cutsets. \end{lemma} \begin{proof} First note that as $\Gamma$ is edge $n$-separated, it is connected and contains no vertices of degree $1$. Let $u,v$ be two points in $\Gamma$ with $d(u,v)\geq n$. If $u$ and $v$ are both vertices, then we are done. Otherwise, let $e(u),e(v)$ be edges containing $u$ and $v$ respectively, and let $u_{1},u_{2}$ and $v_{1},v_{2}$ be the endpoints of $e(u)$ and $e(v)$ respectively. If $u$ (respectively $v$) is a vertex, take $u=u_{1}=u_{2}$ (respectively $v=v_{1}=v_{2}$). As $girth (\Gamma)\geq 2n$, without loss of generality $d(u_{1},v_{1})\geq n$: taking $C_{i}$ to be the cutset separating $\{u_{1},u_{2}\}$ and $\{v_{1},v_{2}\}$, we see that $C_{i}$ separates $u$ and $v$. \end{proof} \begin{definition}[\emph{Weakly vertex separated}] Let $\Gamma$ be a finite metric graph, and let $\sigma>0$.
We will say that $\Gamma$ is \emph{weakly vertex $\sigma$-separated} if: \begin{enumerate}[label=\roman*)] \item $\Gamma$ is connected and contains no vertices of degree $1$, \item and there exists a collection of $\sigma$-separated vertex cutsets $C_{i}\subseteq V(\Gamma)$ such that $\cup_{i}C_{i}=V(\Gamma)$ and $\vert C_{i}\vert \geq 2$ for each $i$. \end{enumerate} To each vertex cutset $C$ we can assign a partition $\mathcal{P}(C)$ of $\pi_{0}(\Gamma-C)$: we \emph{always require} that such a partition is at least as coarse as connectivity in $\Gamma - C$ and each partition contains at least two elements. The \emph{canonical partition} of $C$ is that induced by connectivity in $\Gamma -C$. \end{definition} \begin{definition}[\emph{Vertex separated}] Let $\Gamma$ be a finite metric graph, and let $\sigma>0$. We will say that $\Gamma$ is \emph{vertex $\sigma$-separated} if: \begin{enumerate}[label=\roman*)] \item $\Gamma$ is connected and contains no vertices of degree $1$, \item there exists a collection of $\sigma$-separated vertex cutsets $C_{i}\subseteq V(\Gamma)$ such that $\cup_{i}C_{i}=V(\Gamma)$ and $\vert C_{i}\vert \geq 2$ for each $i$, \item for any vertex $v$ and any distinct vertices $w,w'$ adjacent to $v$ there exists a $\sigma$-separated vertex cutset $C_{i}$ such that $w$ and $w'$ lie in separate components of $\Gamma - C_{i}$, \item and for any points $u$ and $v$ in $\Gamma$ with $d(u,v)\geq \sigma$, there exists a cutset $C_{i}$ with $u$ and $v$ lying in distinct components of $\Gamma - C_{i}$. \end{enumerate} Note that importantly, in general we don't require vertex cutsets to be proper. We say the graph is \emph{disjointly vertex separated} if the above cutsets form a partition of the vertices, and each cutset is proper.
To each vertex cutset $C$ we can assign a partition $\mathcal{P}(C)$ of $\pi_{0}(\Gamma-C)$: we \emph{always require} that such a partition is at least as coarse as connectivity in $\Gamma - C$ and each partition contains at least two elements. The \emph{canonical partition} of $C$ is that induced by connectivity in $\Gamma -C$. \end{definition} \begin{remark} The reason we don't require vertex cutsets to be proper is the following. For edge cutsets we could weaken the definition of edge separated to require a condition similar to $iii)$ above: i.e. that the endpoints of each edge are separated by some cutset. However such a cutset can always be made minimal, and therefore proper, by removing unnecessary edges: the same is not true for vertex cutsets. \end{remark} Once again, this definition is not as difficult to verify as it may seem. \begin{lemma}\label{lem: vertex separated condition} Let $n\geq 2$, and let $\Gamma$ be a graph endowed with the combinatorial metric, such that $\Gamma$ is connected, contains no vertices of degree $1$, and $girth(\Gamma)\geq 2 n$. Suppose there exists a collection of $n$-separated vertex cutsets $\mathcal{C}=\{C_{1},\hdots , C_{m}\}$ so that \begin{enumerate}[label=$\roman*)$] \item $\cup_{i}C_{i}=V(\Gamma)$, \item $\vert C_{i}\vert \geq 2$ for each $i$, \item for each vertex $v$ and distinct $w$, $w'$ adjacent to $v$ there exists an $n$-separated cutset $C_{i}$ with $w$ and $w'$ lying in separate components of $\Gamma - C_{i}$, \item and furthermore that for any pair of vertices $u,v$ with $d_{\Gamma}(u,v)\geq n$ there exists a cutset $C_{i}$ with $u$ and $v$ lying in separate components of $\Gamma -C_{i}$. \end{enumerate} Then $\Gamma$ is vertex $n$-separated with the collection of cutsets $\mathcal{C}$. \end{lemma} \begin{proof} It suffices to show that for any pair of points $u,v$ with $d(u,v)\geq n$ there exists a cutset $C_{i}$ separating them. If $u$ and $v$ are vertices, then we are finished.
Otherwise, let $e(u),e(v)$ be the edges that $u$ and $v$ lie on. Let $u_{1},u_{2}$ and $v_{1},v_{2}$ be the endpoints of $e(u),e(v)$ respectively. If $v$ is a vertex simply take $v_{1}=v_{2}=v$. Then without loss of generality, as $girth(\Gamma)\geq 2n$ and $d(u,v)\geq n$, we have that $d(u_{1},v_{1})\geq n$. Let $C_{i}$ be the cutset separating $u_{1}$ and $v_{1}$: this cutset must also separate $u$ and $v$. \end{proof} Finally, we define what it means to be weighted $\sigma$-separated. \begin{definition}[Weighted $\sigma$-separated.] Let $\sigma>0$ and let $\Gamma$ be an edge $\sigma$-separated graph (respectively strongly edge $\sigma$-separated, weakly vertex $\sigma$-separated, vertex $\sigma$-separated) with $\sigma$-separated cutsets $\mathcal{C}=\{C_{1},\hdots , C_{m}\}$. We call $\Gamma$ \emph{weighted edge $\sigma$-separated} (respectively \emph{weighted strongly edge $\sigma$-separated}, \emph{weighted weakly vertex $\sigma$-separated}, \emph{weighted vertex $\sigma$-separated}) if there exists an assignment of positive integers $n(C_{i})$ to the cutsets in $\mathcal{C}$ that solves the \emph{weight equations}: for any edges (respectively edges, vertices, vertices) $\alpha,\beta$ of $\Gamma$, $$\sum\limits_{C_{i}\in\mathcal{C}:\alpha\in C_{i}}n(C_{i})=\sum\limits_{C_{i}\in\mathcal{C}:\beta\in C_{i}}n(C_{i}).$$ \end{definition} Note that although the above equations may at first appear difficult to solve, we can always find solutions for a graph with an edge (respectively vertex) transitive automorphism group (see Section \ref{subsection: Examples of separated graphs}). Next we extend these definitions to $CAT(0)$ polygonal complexes. This requires some care to ensure that the subcomplexes we build will actually be separating. A \emph{polygonal complex} is a $2$-dimensional polyhedral complex and is \emph{regular} if all polygonal faces are regular polygons.
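To keep a concrete instance of the weight equations in mind, here is a minimal illustrative example (not one of the graphs arising in our applications). Let $\Gamma$ be a $6$-cycle with the combinatorial metric, and let $\mathcal{C}=\{C_{1},C_{2},C_{3}\}$, where each $C_{i}$ is one of the three antipodal pairs of edges. Each $C_{i}$ is a proper $3$-separated edge cutset: removing an antipodal pair of edges leaves two disjoint arcs, the endpoints of each removed edge lie in different arcs, and the midpoints of antipodal edges are at distance $3$. As the $C_{i}$ partition $E(\Gamma)$, every edge lies in exactly one cutset, so the weight equations reduce to $$n(C_{1})=n(C_{2})=n(C_{3}),$$ which is solved by $n(C_{i})=1$: thus $\Gamma$ is weighted (indeed disjointly) edge $3$-separated.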
For a polygonal complex $X$ and a vertex $v$ we define the \emph{link} of $v$, $Lk_{X}(v)$ (or simply $Lk(v)$ when $X$ is clear from context), as the graph whose vertices are the edges of $X$ incident at $v$, and two vertices $e_{1}$ and $e_{2}$ are connected by an edge $f$ in $Lk(v)$ if the edges $e_{1}$ and $e_{2}$ in $X$ are adjacent to a common face $f$. We can endow the link graph with the \emph{angular metric}: an edge $f=(e_{1},e_{2})$ in $Lk(v)$ has length $\alpha$, where $\alpha$ is the angle between $e_{1}$ and $e_{2}$ in the shared face $f$. We first define the following graph, which appeared in \cite{Ollivier-Wise}. \begin{definition}[\emph{Antipodal graph}] Let $Y$ be a regular non-positively curved polygonal complex. Subdivide the edges of $Y$ by adding vertices at the midpoints of edges: call these additional vertices \emph{secondary vertices}, and call the other vertices \emph{primary}. Every polygon in $Y$ now contains an even number of edges in its boundary. Construct a graph $\Delta_{Y}$ as follows. Let $V(\Delta_{Y})=V(Y)$ and join two vertices $v$ and $w$ by an edge, labelled $f$, if $v$ and $w$ are antipodal in the boundary of a face $f$ in $Y$: add one such edge for each such face. This is the \emph{antipodal graph} for $Y$. \end{definition} \begin{remark} We note that for a secondary vertex $s$ of $Y$, $Lk_{Y}(s)$ is a cage graph with edges of length $\pi$. Hence, if $Y$ does not contain any free faces, $Lk_{Y}(s)$ is weighted edge $\pi$-separated, with a single $\pi$-separated cutset $E(Lk_{Y}(s))$. \end{remark} Note that as the complex is regular, the edges of $\Delta_{Y}$ pass through the midpoints of edges in $Lk_{Y}(v)$ for vertices $v$. There is a canonical immersion $\Delta_{Y} \looparrowright Y$; we map a vertex $v$ of $\Delta_{Y}$ to the corresponding vertex of $Y$, and we map an edge $e$ labelled by $f$ to the local geodesic between the endpoints of $e$ lying in the face $f$.
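The separation conditions of this subsection are finite combinatorial checks on finite graphs, so they lend themselves to the computer search mentioned in the introduction. The following sketch (in Python; all function names are ours and purely illustrative, not code accompanying this paper) tests whether a given set of edges is a proper $\sigma$-separated edge cutset under the combinatorial metric, using the observation that the distance between midpoints of distinct edges is one more than the minimal distance between their endpoints.

```python
from collections import deque
from itertools import combinations

def components(vertices, edges):
    """Connected components of a graph given by vertex and edge lists."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, queue = {v}, deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            for x in adj[u]:
                if x not in seen:
                    seen.add(x)
                    comp.add(x)
                    queue.append(x)
        comps.append(comp)
    return comps

def dist(vertices, edges, source):
    """BFS distances from `source` in the combinatorial metric."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    d = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for x in adj[u]:
            if x not in d:
                d[x] = d[u] + 1
                queue.append(x)
    return d

def is_proper_separated_edge_cutset(vertices, edges, cutset, sigma):
    """Check that `cutset` disconnects the graph, that the endpoints of
    each cut edge land in distinct components (properness), and that
    midpoints of distinct cut edges are at distance >= sigma, where
    d(midpoints) = 1 + min distance between endpoints."""
    rest = [e for e in edges if e not in cutset]
    comps = components(vertices, rest)
    if len(comps) < 2:
        return False  # not a cutset
    comp_of = {v: i for i, comp in enumerate(comps) for v in comp}
    if any(comp_of[a] == comp_of[b] for a, b in cutset):
        return False  # not proper
    for (a, b), (c, d_) in combinations(cutset, 2):
        da, db = dist(vertices, edges, a), dist(vertices, edges, b)
        if 1 + min(da[c], da[d_], db[c], db[d_]) < sigma:
            return False  # midpoints too close
    return True
```

For example, on the $6$-cycle an antipodal pair of edges passes the check with $\sigma=3$, while a pair of adjacent edges fails the separation condition.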
\begin{definition} Let $Y$ be a non-positively curved polygonal complex, and let $\Delta$ be one of $Y^{(1)}$ or $\Delta_{Y}$. Assign $\Delta$ an arbitrary orientation, and let $e$ be an oriented edge of $\Delta$. For each $\pi$-separated cutset $C$ in $Lk(i(e))$, choose a set of partitions of $\pi_{0}(Lk(i(e))-C)$, $\{P_{i}(C)\}_{i}$. For $v\in V(\Delta)$, we define $$\mathcal{C}_{v}=\{C\;:\;C\mbox{ is a }\pi\mbox{-separated cutset},\;C\subseteq Lk(v)\}.$$ We define $$\mathcal{C}(e):=\{C\;:\;C\mbox{ is a }\pi\mbox{-separated cutset},\;e\in C\},$$ and $$\mathcal{C}=\bigcup_{e\in E^{\pm 1}(\Delta)}\mathcal{C}(e).$$ Similarly we can define $$\mathcal{CP}(e):=\bigcup\limits_{C\in\mathcal{C}(e)}\{(C,P_{i}(C))\}_{i},$$ and $$\mathcal{CP}=\bigcup_{e\in E^{\pm 1}(\Delta)}\mathcal{CP}(e).$$ \end{definition} The following is extremely similar to the `splicing' of Manning \cite{Manning10}: we will use this for a similar purpose to that of \cite{Cashen-Macura2011}. \begin{definition}[Equatable partitions] Let $Y$ be a non-positively curved polygonal complex, and let $\Delta$ be one of $Y^{(1)}$ or $\Delta_{Y}$. Let $v,w$ be two vertices of $\Delta$ connected by an oriented edge $e$, so that $v=i(e)$ and $w=t(e)$. Let $C_{v}$ be a $\pi$-separated cutset in $Lk(v)$ with choice of partition $P_{v}$ and $C_{w}$ be a $\pi$-separated cutset in $Lk(w)$ with choice of partition $P_{w}$. Let $v'$, $w'$ be points on $e$ in an $\epsilon$-neighbourhood of $v$, $w$ respectively, so that there are canonical mappings \begin{equation*} \begin{split} i_{v}&:St(v')\hookrightarrow Lk(v),\\ i_{w}&:St(w')\hookrightarrow Lk(w),\\ \phi&:St(v')\xrightarrow{\cong}St(w'). \end{split} \end{equation*} Therefore we have induced mappings \begin{equation*} \begin{split} \overline{i}_{v}&:St(v')-v'\hookrightarrow Lk(v)-C_{v},\\ \overline{i}_{w}&:St(w')-w'\hookrightarrow Lk(w)-C_{w},\\ \overline{\phi}&:St(v')-v'\xrightarrow{\cong}St(w')-w'. 
\end{split} \end{equation*} For $u=v,w$ let $\mathcal{P}_{u}$ be the set of partitions of $\pi_{0}(Lk(u)-C_{u})$, and let $\mathcal{P}_{u'}$ be the set of partitions of $\pi_{0}(St(u')-u')$. There are induced maps \begin{equation*} \begin{split} \iota_{v}&:\mathcal{P}_{v}\rightarrow \mathcal{P}_{v'},\\ \iota_{w}&:\mathcal{P}_{w}\rightarrow \mathcal{P}_{w'},\\ \psi&:\mathcal{P}_{v'}\hookdoubleheadrightarrow \mathcal{P}_{w'}. \end{split} \end{equation*} We say that $(C_{v},P_{v})$ and $(C_{w},P_{w})$ are \emph{equatable along $e$}, written $$(C_{v},P_{v})\sim_{e}(C_{w},P_{w})$$ if $$\psi(\iota_{v}(P_{v}))=\iota_{w}(P_{w}).$$ There is also an equivalence relation on $\mathcal{CP}(e)$: for $(C,P),(C',P')\in \mathcal{CP}(e)$, we write $$(C,P)\approx_{e} (C',P')$$ if $$\iota_{v}(P)=\iota_{v}(P').$$ This is an equivalence relation on $\mathcal{CP}(e)$, and we write $[C,P]_{e}$ for the equivalence class of $(C,P)$. We define $[C,P]_{e^{-1}}$ to be the equivalence class of cutset partitions in $\mathcal{CP}(e^{-1})$ equatable to $(C,P)$ along $e$: by definition this is independent of the choice of $(C',P')\in [C,P]_{e}$. \end{definition} These constructions are designed so that we can `splice' the local cutsets along each edge. Though this definition is somewhat complicated, note the following remark. \begin{remark} Let $e$, $v,w$, $C_{v},C_{w}$ be as above. If both $C_{v},C_{w}$ are proper with canonical partitions $P_{v},P_{w}$, then $(C_{v},P_{v})\sim_{e}(C_{w},P_{w})$. This follows as the induced partitions of $St(v')-v'$ and $St(w')-w'$ are just the partitions induced by connectivity, and by properness every element of the induced partition of $St(v')-v'$ (respectively $St(w')-w'$) contains a unique vertex. Similarly, if $C_{1},C_{2}\in\mathcal{C}(e)$ are proper, with canonical partitions $P_{1},P_{2}$, then $(C_{1},P_{1})\approx_{e}(C_{2},P_{2})$. \end{remark} \begin{definition}[\emph{Gluably $\sigma$-separated}] Let $Y$ be a non-positively curved polygonal complex.
We call $Y$ \emph{gluably edge $\sigma$-separated} (respectively \emph{gluably (weakly) vertex $\sigma$-separated}) if: \begin{enumerate}[label=$\roman*)$] \item $Y$ is regular (respectively $Y$ is allowed \textbf{not to be regular}), \item the link of every vertex in $Y$ is edge (respectively (weakly) vertex) $\sigma$-separated, \item for every $\pi$-separated cutset $C$ in $Lk(v)$ there exists a family of partitions $\{P_{i}(C)\}$ of $\pi_{0}(Lk(v)-C)$ such that for any distinct pair of points $x,y\in Lk(v)$ separated by $C$, $x$ and $y$ are separated by some $P_{i}(C),$ \item and there exists a strictly positive integer solution to the \emph{gluing equations}: letting $\Delta=\Delta_{Y}$ (respectively $\Delta=Y^{(1)}$) we can assign a positive integer $\mu (C,P)$ to every pair $$(C,P)\in\mathcal{CP}:=\bigcup\limits _{e\in E^{\pm}(\Delta )}\mathcal{CP}(e)$$ such that for every edge $e$ of $\Delta$ and every $(C,P)\in \mathcal{CP}(e)$: \begin{equation*} \sum\limits_{(C',P')\in [C,P]_{e}} \mu(C',P')=\sum\limits_{(C',P')\in [C,P]_{e^{-1}}} \mu(C',P'). \end{equation*} \end{enumerate} \end{definition} \begin{definition}[\emph{Gluably $\sigma$-separated}] Let $Y$ be a non-positively curved polygonal complex. We call $Y$: \begin{enumerate}[label=$\roman*)$] \item \emph{gluably weakly $\sigma$-separated} if it is gluably weakly vertex $\sigma$-separated, \item and \emph{gluably $\sigma$-separated} if it is gluably edge or gluably vertex $\sigma$-separated. \end{enumerate} \end{definition} \begin{remark} Again, note that in the definition of a gluably (weakly) vertex $\sigma$-separated complex, \textbf{we do not require that the complex $Y$ is regular}. If the link of each vertex in the complex $Y$ is disjointly $\sigma$-separated, then we can solve the gluing equations by taking only the canonical partition $P(C)$ for each cutset $C$, and setting $\mu(C,P(C))=1$ for all cutsets $C$, so that $Y$ is gluably $\sigma$-separated.
\end{remark} \subsection{Removing cut edges}\label{subsection: Removing cut edges and producing two sided cut sets} We now show that the existence of cut edges is not too much of an issue. \begin{lemma}\label{lem: removing cut edges} Let $G$ be a group acting properly discontinuously and cocompactly on a simply connected $CAT(0)$ polygonal complex $X$, such that the link of every vertex in $X$ is connected. There exists a simply connected $CAT(0)$ polygonal complex $X'$ such that $G$ acts properly discontinuously and cocompactly on $X'$ and the link of any vertex $v'$ in $X'$ is a subgraph of $Lk(v)$ for some vertex $v$ in $X$. Furthermore, for any vertex $v$ of $X'$, either $Lk_{X'}(v)$ is connected and contains no cut edges, or $Lk_{X'}(v)$ is disconnected. \end{lemma} \begin{proof} First note that we can assume that $X$ contains no vertices of degree $1$ in its links. Let $Y=G\backslash X$, and let $v_{0},\hdots ,v_{m}$ be the vertices of $Y$. Let $v$ be a vertex in $X$, and suppose there exists a cut edge $f$ in $Lk(v)$. Let $e_{1}$ and $e_{2}$ be the endpoints of $f$. Suppose that, in $X$, the endpoints of $e_{1}$ are $v$ and $w$. Construct a new complex $X'$ as follows: let $v_{1}$ and $v_{2}$ be two copies of $v$ and connect these vertices to $w$ with the edges $e_{1}^{1}$ and $e_{1}^{2}$ respectively. Since $f$ is a cut edge in $Lk(v)$ there is a canonical way to attach edges and faces to $v_{1}$ and $v_{2}$ that agrees with the connected components of $Lk(v) - e_{1}$. We may assume that the face $f$ is attached to $v_{1}$. Then the face $f$ is a free face, which we can push in to remove the degree-$1$ vertex $e_{1}^{1}$ in $Lk(v_{1})$, so that $Lk(v_{1})$ and $Lk(v_{2})$ are connected subgraphs of $Lk(v)-f$, and the link of any other vertex $u$ incident to the face $f$ is transformed into a proper subgraph of $Lk(u)$ with the edge $f$ removed.
We can repeat this process finitely many times, applied to the set of vertices $Gv_{i}$ each time, to find the desired $CAT(0)$ polygonal complex $X'$. \end{proof} \subsection{Examples of separated graphs}\label{subsection: Examples of separated graphs} Our definitions of weighted $\sigma$-separated graphs required assigning weights to cutsets such that certain equations hold. In this subsection we prove that as long as the automorphism group of a graph is transitive on vertices (or edges, depending on whether cutsets are formed of vertices or edges), then these equations can always be solved. Note that $Aut(\Gamma)$ is the group of automorphisms of $\Gamma$ as a metric graph. \begin{lemma}\label{lem: vertex transitive automorphism group can solve equations} Let $\sigma>0$ and let $\Gamma$ be (weakly) vertex $\sigma$-separated. If $Aut(\Gamma)$ is vertex transitive then $\Gamma$ is weighted (weakly) vertex $\sigma$-separated. \end{lemma} \begin{proof} Assume that $\Gamma$ is vertex $\sigma$-separated, with $\sigma$-separated vertex cutsets $\mathcal{C}=\{C_{1},\hdots, C_{n}\}$. The proof is similar for weakly separated graphs. Let $H=Aut(\Gamma)$. For each $C\in\mathcal{C}$, let $$H(C):=\{\gamma C\;:\;\gamma\in H\},$$ counted {\bf{with}} multiplicity, i.e. if $\gamma_{1}C=\gamma_{2}C$ and $\gamma_{1}\neq\gamma_{2}$ in $H$, then both $\gamma_{1}C,\gamma_{2}C$ appear in $H(C)$. Note that for every $C'\in H(C)$, $C'$ is a $\sigma$-separated vertex cutset. Fix some vertex $v\in C$, and let $w\in V(\Gamma)$ be any vertex. Since $H$ acts vertex transitively, there exists $h\in H$ such that $h v=w$. Therefore $$\{\gamma\in H \;:\;v\in \gamma C\}=\{\gamma\in H \;:\;w\in h\gamma C\}=\{h^{-1}\gamma'\in H \;:\;w\in \gamma' C\},$$ and therefore $$\vert\{\gamma\in H \;:\;v\in \gamma C\}\vert=\vert\{\gamma\in H \;:\;w\in \gamma C\}\vert.$$ Let $$\tilde{\mathcal{C}'}:=\bigsqcup\limits_{C\in\mathcal{C}}H(C),$$ again with multiplicity.
By the above, it follows that for any two vertices $v,w\in V(\Gamma)$, $$\vert\{C\in \tilde{\mathcal{C}'} \;:\;v\in C\}\vert=\vert\{C\in \tilde{\mathcal{C}'} \;:\;w\in C\}\vert.$$ Let $\mathcal{C}'$ be the underlying set of $\tilde{\mathcal{C}'}$, and for $C\in \mathcal{C}',$ let $$n(C)=\vert\{C'\in\tilde{\mathcal{C}'}\;:\;C=C'\} \vert,$$ i.e. $n(C)$ is the multiplicity of $C$ in $\tilde{\mathcal{C}'}.$ It is easily seen that the above weights solve the gluing equations. As $\mathcal{C}\subseteq\mathcal{C}'$, $\Gamma$ is vertex $\sigma$-separated with respect to these cutsets, and by the above argument $\Gamma$ is weighted vertex $\sigma$-separated with cutsets $\mathcal{C}'$. \end{proof} Similarly, we can prove the following. \begin{lemma}\label{lem: edge transitive automorphism group can solve equations} Let $\sigma>0$ and let $\Gamma$ be (strongly) edge $\sigma$-separated. If $Aut(\Gamma)$ is edge transitive then $\Gamma$ is weighted (strongly) edge $\sigma$-separated. \end{lemma} \subsection{Examples of solutions of the gluing equations}\label{subsection: Examples of solutions of the gluing equations} Recall that we call an edge cutset $C$ \emph{proper} if the endpoints of every edge $e$ in $C$ lie in separate components of $\Gamma-C$, and a vertex cutset $C$ \emph{proper} if for every $v\in C$, the vertices adjacent to $v$ each lie in separate components of $\Gamma-C$. \begin{lemma}\label{lem: solving gluing equations for minimal edge cutsets} Let $Y$ be a regular non-positively curved complex, and suppose that the link of each vertex is weighted edge $\pi$-separated and that every cutset is proper. There exists a system of strictly positive weights that solve the gluing equations for $Y$. \end{lemma} \begin{proof} Since edge cutsets are proper, any two cutsets are equatable along a shared edge. Therefore we may associate to each cutset $C$ exactly one partition $P(C)$, namely that of connectivity in $\Gamma-C$.
In particular for any oriented edge $e\in E^{\pm}(\Delta_{Y})$ and any $(C,P(C))\in \mathcal{CP}(e),$ $[C,P(C)]_{e}=\mathcal{CP}(e).$ First, note that for an oriented edge $e$ of $\Delta_{Y}$, and $v=i(e)$, $\mathcal{C}(e)=\mathcal{C}(e)\cap \mathcal{C}_{v}$. Since the link of each vertex in $Y$ is weighted edge $\pi$-separated, for each vertex $v\in Y$ there exists a positive integer $N_{v}>0$ and a system of strictly positive weights $n_{v}(C)$ for $C\in \mathcal{C}_{v}$ such that for any edge $e$ in $Lk_{Y}(v)$, $$\sum\limits_{C\in \mathcal{C}(e)}n_{v}(C)=\sum\limits_{C\in \mathcal{C}(e)\cap \mathcal{C}_{v}}n_{v}(C)=N_{v}.$$ Let $M=\prod_{v\in V(Y)}N_{v},$ and for a cutset $C\in\mathcal{C}_{v},$ define $m(C)=Mn_{v}(C)\slash N_{v}.$ It follows that for an edge $e$ in $Lk_{Y}(v)$, $$\sum\limits_{C\in \mathcal{C}(e)}m(C)=\frac{M}{N_{v}}\sum\limits_{C\in \mathcal{C}(e)}n_{v}(C)=\frac{M}{N_{v}}N_{v}=M.$$ Finally, taking $\mu(C,P(C))=m(C)$, these weights immediately solve the gluing equations. \end{proof} Similarly, we can prove the following. \begin{lemma}\label{lem: solving gluing equations for proper vertex cutsets} Let $Y$ be a non-positively curved complex, such that the link of each vertex is weighted vertex $\pi$-separated, and every cutset is proper. There exists a system of strictly positive weights that solve the gluing equations for $Y$. \end{lemma} \subsection{Hypergraphs in \texorpdfstring{$\mathbf{\pi}$}{pi}-separated polygonal complexes}\label{subsection: Constructing hypergraphs in polygonal complexes} We now begin to construct our separating subcomplexes. Suppose $X$ is a simply connected $CAT(0)$ polygonal complex, and $G$ acts properly discontinuously and cocompactly on $X$, so that $G\backslash X$ is (weakly) gluably $\pi$-separated. If $G\backslash X$ is gluably edge $\pi$-separated, let $\Delta=\Delta_{G\backslash X}$, and if it is (weakly) gluably vertex $\pi$-separated, let $\Delta=(G\backslash X)^{(1)}$. 
Assign an arbitrary orientation to $\Delta$. Recall that for an oriented edge $e$ of $\Delta$, we let $\mathcal{C}(e)=\{C\in\mathcal{C}\;:\;e\in C\}$ (note that for any oriented edge $e$, $\mathcal{C}(e)$ is non-empty, as $G\backslash X$ is gluably edge $\pi$-separated). For every vertex $v$ and $\pi$-separated cutset $C$ in $Lk(v)$ let $\{P_{i}(C)\}$ be the required set of partitions of $\pi_{0}(Lk(v)-C)$, and let $$\mathcal{CP}(e)=\bigcup\limits_{C\in\mathcal{C}(e)}\{(C,P_{i}(C))\}_{i}.$$ Let $$\mathcal{CP}=\bigcup\limits_{e\in E^{\pm}(\Delta)}\mathcal{CP}(e).$$ By assumption, we can assign positive integer weights $\mu(C,P)$ to each cutset $(C,P)\in\mathcal{CP}$ so that for every edge $e$ of $\Delta$: \begin{equation*} \sum\limits_{(C',P')\in [C,P]_{e}} \mu(C',P')=\sum\limits_{(C',P')\in [C,P]_{e^{-1}}} \mu(C',P'). \end{equation*} We now construct a second graph $\Sigma$ as follows. Let $$V(\Sigma)=\bigsqcup\limits_{(C,P)\in\mathcal{CP}}\{u_{(C,P)}^{1},\hdots ,u_{(C,P)}^{\mu(C,P)}\}.$$ The gluing equations imply that for each positively oriented edge $e$ of $\Delta$ and each equivalence class $[C,P]_{e}\subseteq \mathcal{CP}(e)$ there exists a bijection $$\phi_{e}:\hspace*{- 10pt}\bigsqcup\limits_{(C',P')\in [C,P]_{e}}\hspace*{- 10pt}\{u_{(C',P')}^{1},\hdots ,u_{(C',P')}^{\mu(C',P')}\}\rightarrow\hspace*{- 10pt} \bigsqcup\limits_{(C',P')\in [C,P]_{e^{-1}}}\hspace*{- 10pt}\{u_{(C',P')}^{1},\hdots ,u_{(C',P')}^{\mu(C',P')}\}.$$ For each positively oriented edge $e$ and each equivalence class $[C,P]_{e}$ of $\mathcal{CP}(e)$ choose such a bijection, $\phi_{e}$, and add the oriented edges $$\{(u_{(C',P')}^{i},\phi_{e}(u_{(C',P')}^{i}))\;:\;(C',P')\in[C,P]_{e}, 1\leq i\leq \mu(C',P')\}.$$ Note that for each $(C,P)\in\mathcal{CP}$, $Lk_{\Sigma}(u_{(C,P)}^{i})$ is isomorphic to $C$ as labelled oriented graphs. Furthermore, each edge in $\Sigma$ labelled by $e$ connects two vertices of the form $u_{(C,P)}^{i}$ and $u_{(C',P')}^{j}$ with $(C,P)\sim_{e}(C',P')$, i.e.
every edge connects vertices with equatable partitions along that edge. There is an immersion $\Sigma\looparrowright\Delta$ that sends $u_{(C,P)}^{i}$ to the vertex $v_{C}$ such that $C\subseteq Lk_{\Delta}(v_{C})$ and maps an edge labelled by $e$ to the edge $e$ in $\Delta$. Let $\Sigma_{1},\hdots ,\Sigma_{m}$ be the connected components of $\Sigma$, and let $\underline{\Lambda}^{1},\hdots , \underline{\Lambda}^{m}$ be the images of these graphs in $G\backslash X$ under the immersion $$\Sigma_{i}\looparrowright\Delta\looparrowright G\backslash X.$$ Note that each $\underline{\Lambda}^{i}$ is locally geodesic as the cut sets are $\pi$-separated in $G\backslash X$. \begin{definition} If $G\backslash X$ is gluably edge $\pi$-separated, a lift of $\underline{\Lambda}^{i}$ to the $CAT(0)$ complex $X$ is called an \emph{edge hypergraph in $X$}, and otherwise it is a \emph{vertex hypergraph in $X$}. \end{definition} Note that hypergraphs come with two pieces of information at each vertex $v$ in $X$: the cutset $C$ and the partition $P$. We say $\Lambda$ \emph{passes through} these objects. \begin{remark} Note that in the above construction, for every vertex $v$, every $\pi$-separated edge cutset $C$ in $Lk(v)$ and chosen partition $P$ of $\pi_{0}(Lk(v)-C)$, and every lift $\tilde{v}$ of $v$, there exists a hypergraph passing through $(C,P)$ in $Lk_{X}(\tilde{v})$. \end{remark} \begin{figure}[H] \centering \includegraphics{hypergraph.png} \caption{Example of a subsection of an edge hypergraph.} \end{figure} Importantly, our construction ensures that for any hypergraph $\Lambda$, $Stab(\Lambda)$ acts properly discontinuously and cocompactly on $\Lambda$. \subsection{Hypergraphs are separating}\label{subsection: Hypergraphs are separating} We now analyse the structure of the hypergraphs, and show that they are in fact separating.
\begin{lemma}\label{lem: edge hypergraphs are leafless convex} Let $X$ be a simply connected $CAT(0)$ polygonal complex such that $G\backslash X$ is (weakly) gluably $\pi$-separated, and let $\Lambda$ be a hypergraph in $X$. Then $\Lambda$ is a leafless convex tree. \end{lemma} \begin{proof} We prove this for edge hypergraphs: the argument is similar for vertex hypergraphs. For each $i$, as the cutsets are $\pi$-separated, the image of $\Sigma_{i}\looparrowright G\backslash X$ is locally geodesic: therefore $\Lambda$ is locally geodesic in $X$. As $X$ is $CAT(0)$, local geodesics are geodesic, and geodesics are unique, so that $\Lambda$ is a convex tree. Since $\vert C\vert\geq 2$ for any $v\in V(\Delta_{G\backslash X})$ and $C\in\mathcal{C}_{v}$, $\Lambda$ contains no primary vertices of degree $1$. Similarly, as there are no vertices of degree $1$ in the link of a primary vertex in $X$, there are no cut edges in the link of a secondary vertex, and so every cut set contains at least two edges. It follows that edge hypergraphs are leafless. \end{proof} \begin{definition} Let $\Lambda_{i}$ be a hypergraph in $X$ and let $x,y\in X$ be distinct points. We say $\Lambda_{i}$ \emph{separates} $x$ and $y$ if $x$ and $y$ lie in distinct components of $X-\Lambda_{i}$. We write $\#_{\Lambda}(x,y)$ for the number of edge (or vertex) hypergraphs separating $x$ and $y$. \end{definition} We now consider separating points: we prove the following lemma. We call a path $\gamma$ \emph{transverse} to $\Lambda$ if $\vert \gamma\cap \Lambda\vert =1$. \begin{lemma}\label{lem: separation condition} Let $\Lambda$ be a hypergraph in $X$, and let $\gamma=[p,q]$ be a geodesic transverse to $\Lambda$. If $\gamma\cap\Lambda=\{x\}$ and $p$ and $q$ lie in different elements of the partition of $Lk(x)-\Lambda$, then $p$ and $q$ lie in different components of $X-\Lambda$. \end{lemma} Before we prove this, we need to define some technology.
\begin{definition}[Hypergraph retraction] Let $X$ be a $CAT(0)$ space and $\Lambda$ a hypergraph in $X$. The projection map $$\pi_{\Lambda}:X\rightarrow\Lambda$$ maps every point in $X$ to its nearest point in $\Lambda$. Since $\Lambda$ is convex, this map is a deformation retraction. That is, we have a homotopy $\pi_{\Lambda}^{\delta}$ from the identity to $\pi_{\Lambda}.$ \end{definition} \begin{definition}[$\Lambda$-balanced paths] Let $\Lambda$ be a hypergraph in $X$ and $\epsilon>0$. For points $p,q$ lying in the same component of $X-\Lambda$, a \emph{$(\Lambda,\epsilon)$-balanced path} from $p$ to $q$ is a path $\sigma$ starting at $p$ and ending at $q$ such that: \begin{enumerate}[label=\roman*)] \item $\sigma \subseteq \mathcal{N}_{\epsilon}(\Lambda)-\Lambda,$ \item and for any $x\in \Lambda$, $\vert(\pi_{\Lambda})^{-1}(x)\cap \sigma\vert$ is even, and in particular finite. \end{enumerate} Given such a path, we define $$m_{\Lambda}(\sigma)=\frac{1}{2}\sum \limits_{e\in E(\Lambda\cap \pi_{\Lambda}(\sigma))}\max\limits_{y\in e} \vert(\pi_{\Lambda})^{-1}(y)\cap \sigma\vert. $$ \end{definition} By considering the retraction map, we see that such paths exist. \begin{lemma}\label{lem: existence of balanced paths} Let $\Lambda$ be a hypergraph in $X$, $\epsilon>0$, and let $p,q\in \mathcal{N}_{\epsilon}(\Lambda)$ be points lying in the same component of $X-\Lambda$. There exists a $(\Lambda,\epsilon)$-balanced path between them. \end{lemma} \begin{proof} Let $\gamma$ be a path from $p$ to $q$ not intersecting $\Lambda$. By taking $\delta$ close to $1$, we have that $\sigma_{\delta}:=\pi_{\Lambda}^{\delta}(\gamma)\subseteq \mathcal{N}_{\epsilon}(\Lambda)-\Lambda$. Since $\sigma_{\delta}$ maps to a loop in $\Lambda$, which is a tree, it immediately follows that by taking $\delta$ close to $1$, and after a small homotopy, for any $y\in \Lambda$, $\vert(\pi_{\Lambda})^{-1}(y)\cap \sigma_{\delta}\vert$ is even, and in particular finite. 
\end{proof} We can now prove Lemma \ref{lem: separation condition}. \begin{proof}[Proof of Lemma \ref{lem: separation condition}] Throughout we choose $\epsilon>0$ sufficiently small so that for any pair of distinct vertices $v,w$ of $X$, $\mathcal{N}_{\epsilon}(v)\cap\mathcal{N}_{\epsilon}(w)=\emptyset$. Let $P$ be the partition of $Lk(x)$ through which $\Lambda$ passes. Let $P_{1}$ be the element of $P$ containing $p$ and $P_{2}$ the element of $P$ containing $q$. We may assume that $p,q\in \mathcal{N}_{\epsilon}(\Lambda)$: otherwise, choose $w_{i}$ lying in $P_{i}$ such that $w_{i}\in \mathcal{N}_{\epsilon}(\Lambda)$. Then $w_{1}$ and $p$ lie in the same component of $X-\Lambda$ and $w_{2}$ and $q$ lie in the same component of $X-\Lambda$. Suppose $p$ and $q$ lie in the same component of $X-\Lambda$. We first choose $p'\in P_{1}$, $q'\in P_{2}$ and $\sigma$ a $(\Lambda,\epsilon)$-balanced path between $p'$ and $q'$ so that the pair $(m_{\Lambda}(\sigma),l(\sigma))$ is minimal by lexicographic ordering amongst all such $p',q',\sigma$. We induct on $(m_{\Lambda}(\sigma),l(\sigma))$ by lexicographic ordering. If $m_{\Lambda}(\sigma)=1$, then $\sigma$ passes along exactly one edge $e$ of $\Lambda$: it follows that the partitions of $X-\Lambda$ at the endpoints of $e$ are not equatable along $e$, or that $P_{1}=P_{2}$, a contradiction. Otherwise $m_{\Lambda}(\sigma)=m\geq 2$. We claim that we may assume the first and last edges of $\Lambda$ traversed by $\sigma$ are the same edge $e$: this is analogous to the classical trick of pushing to a leaf in a tree in graph theory arguments. Indeed, if this is not the case, move along $\sigma$, starting at $q'$, until we return to $\mathcal{N}_{\epsilon}(x)$, and let $s$ be the point we reach in $\mathcal{N}_{\epsilon}(x)$.
If $s$ is in the same component as $q'$ in $P$ then $m_{\Lambda}(\sigma\vert_{[p',s]})\leq m_{\Lambda}(\sigma)$, and $l(\sigma\vert_{[p',s]})< l(\sigma)$, so that $$(m_{\Lambda}(\sigma\vert_{[p',s]}),l(\sigma\vert_{[p',s]}))<(m_{\Lambda}(\sigma),l(\sigma)),$$ a contradiction as $p',q',\sigma$ were chosen so that this pair was minimal. If $s$ is in the same component as $p'$ in $P$, and is not equal to $p'$, then $m_{\Lambda}(\sigma\vert_{[s,q']})\leq m_{\Lambda}(\sigma)$, and $l(\sigma\vert_{[s,q']})< l(\sigma)$, so that again $$(m_{\Lambda}(\sigma\vert_{[s,q']}),l(\sigma\vert_{[s,q']}))<(m_{\Lambda}(\sigma),l(\sigma)),$$ a contradiction. Therefore if $s \neq p'$, then $s$ lies in a different component to $q'$ in $P$: we have $(m_{\Lambda}(\sigma\vert_{[s,q']}),l(\sigma\vert_{[s,q']}))<(m_{\Lambda}(\sigma),l(\sigma))$, and hence by induction $s$ must lie in a separate component of $X-\Lambda$ to $q'$, a contradiction as $q'$ is connected to $s$ by a path not intersecting $\Lambda$. Therefore by induction we have that $s=p'$. Let $y$ be the endpoint of $e$ distinct from $x$, let $\alpha$ be the point obtained by pushing $p'$ along $\sigma$ to $\mathcal{N}_{\epsilon}(y)$, and similarly let $\beta$ be the point obtained by pushing $q'$ along $\sigma$ to $\mathcal{N}_{\epsilon}(y)$. Let $\sigma'$ be the subpath of $\sigma$ connecting $\alpha$ and $\beta$. If $\alpha$ and $\beta$ lie in the same component of the partition of $Lk(y)-\Lambda$, then the partitions are not equatable along $e$, a contradiction. Otherwise $(m_{\Lambda}(\sigma'),l(\sigma'))< (m_{\Lambda}(\sigma),l(\sigma))$, and so by induction $\alpha$ and $\beta$ lie in distinct components of $X-\Lambda$. As $p'$ is connected to $\alpha$ by a path not intersecting $\Lambda$, and $q'$ to $\beta$, we see that $p'$ and $q'$ lie in distinct components. Since $p$ is connected to $p'$ and $q$ is connected to $q'$ by a path not intersecting $\Lambda$, the result follows.
\end{proof} \subsection{Hypergraph stabilizers and wallspaces}\label{subsection: Hypergraph stabolizers and wallspaces} We now want to use the construction of a cube complex dual to a system of walls, as found in \cite{Hruska-Wise}. Their definition of a wallspace is more general than the one we require, and so we restrict to the case that $X$ is endowed with a metric. \begin{definition}[\emph{Walls}] Let $X$ be a metric space. A \emph{wall} is a pair $\{U,V\}$ such that $X=U\cup V$. The \emph{open halfspaces} associated to the wall are $U-(U\cap V)$ and $V-(U\cap V)$. We say a wall \emph{betwixts} a point $x$ if $x\in U\cap V$, and \emph{separates} the points $x,y$ if $x$ and $y$ lie in distinct open halfspaces. We write $\#(x,y)$ for the number of walls separating $x$ and $y$. \end{definition} \begin{definition}[\emph{Wallspace}] A \emph{wallspace} is a pair $(X,\mathcal{W})$, where $X$ is a connected metric space and $\mathcal{W}$ is a collection of walls in $X$ such that: \begin{enumerate}[label=\roman*)] \item for any $x\in X$, finitely many walls in $\mathcal{W}$ betwixt $x$, \item for any $x,y\in X$, $\#(x,y)<\infty$, \item and there are no duplicate walls that are genuine partitions. \end{enumerate} We say a group $G$ \emph{acts} on a wallspace $(X,\mathcal{W})$ if $G$ acts on $X$, and $G\cdot\mathcal{W}=\mathcal{W}$. \end{definition} \begin{definition}[\emph{$\Lambda$ walls}] Let $\Lambda$ be a vertex or edge hypergraph in $X$, with components $X-\Lambda=\{U^{i}_{\Lambda}\}_{i}$. For each $U_{\Lambda}^{i}$, let $V_{\Lambda}^{i}=X-\overline{U_{\Lambda}^{i}}$. The \emph{set of $\Lambda$ walls} is the set $$\mathcal{W}_{\Lambda}=\bigg\{\{\overline{U_{\Lambda}^{i}},\overline{V_{\Lambda}^{i}}\}\;:\;U_{\Lambda}^{i}\mbox{ a component of }X-\Lambda\bigg\}.$$ The \emph{hypergraph wallspace} is the set of walls $$\mathcal{W}=\cup_{\Lambda}\mathcal{W}_{\Lambda},$$ where we remove any duplicate walls.
\end{definition} We now show that the pair $(X,\mathcal{W})$ is a wallspace. There are several easy but technical steps to this. \begin{lemma}\label{lem: wall stabilizers are cocompact} Let $H_{\Lambda}=Stab_{G}(\Lambda)$, and for any $i$, let $H_{\Lambda,i}=Stab_{H_{\Lambda}}(U_{\Lambda}^{i}).$ Then $H_{\Lambda, i}$ acts cocompactly on $\partial U_{\Lambda}^{i}.$ \end{lemma} This lemma follows immediately from the proof of \cite[Theorem 2.9]{Hruska-Wise}. We include the argument here for completeness. For a set $A$ in a metric space $(X,d)$, we define the \emph{frontier of $A$} as the set $\partial_{f}A=\{x\in X\;:\;0<d(x,A)\leq 1\}.$ \begin{proof} Note that $H_{\Lambda}$ acts cocompactly on $\Lambda$ and so on $\partial_{f}\Lambda$. Furthermore $H_{\Lambda}$ preserves the partition of $\partial_{f}\Lambda$ into the sets $U_{\Lambda}^{i}\cap \partial_{f}\Lambda$. Hence $H_{\Lambda,i}$ acts properly discontinuously and cocompactly on $\partial_{f}U_{\Lambda}^{i}$, and therefore on $\partial U_{\Lambda}^{i}$. \end{proof} \begin{lemma}\label{lem: finitely many wall orbits} There are finitely many $G$-orbits of walls in $\mathcal{W}$. \end{lemma} \begin{proof} There are finitely many $G$-orbits of hypergraphs $\Lambda$, and there are finitely many $H_{\Lambda}$-orbits of the components $U_{\Lambda}^{i}$. The result follows. \end{proof} \begin{lemma}\label{lem: W is a wallspace} The pair $(X,\mathcal{W})$ is a wallspace. \end{lemma} \begin{proof} First we prove that each point is betwixted by only finitely many walls. Since the set of walls is acted upon cofinitely by $G$, and each wall has a cocompact stabilizer, this follows immediately. In a similar manner we observe that $\#(x,y)<\infty$ for any $x$ and $y$. \end{proof} Therefore we have constructed a wallspace for $X$. \begin{lemma}\label{lem: wallspace separation from hypergraph separation} Let $\mathcal{W}$ be constructed as above. Then $\#(x,y)\geq\#_{\Lambda}(x,y)$.
\end{lemma} \begin{proof} Note that if a hypergraph $\Lambda$ separates $x$ and $y$, then by taking $i$ such that $x\in U_{\Lambda}^{i}$, it follows that the wall $\{\overline{U_{\Lambda}^{i}},\overline{V_{\Lambda}^{i}}\}$ separates $x$ and $y$. The result follows.\end{proof} Next we discuss transverse walls. \begin{definition}[\emph{Transverse}] Two walls $W=\{U,V\}$ and $W'=\{U',V'\}$ are \emph{transverse} if each of the intersections $U\cap U', U\cap V', V\cap U', V\cap V'$ is nonempty. \end{definition} There is an easier formulation of this definition. \begin{lemma} Two walls $W_{\Lambda}^{i},W_{\Lambda'}^{j}$ are transverse if and only if $\partial U_{\Lambda}^{i}\cap \partial U_{\Lambda'}^{j}$ is non-empty. In particular the walls are transverse only if $\Lambda\cap \Lambda'$ is non-empty. \end{lemma} Using this we can now move on to cubulating groups acting on such complexes. \subsection{Cubulating groups acting on polygonal complexes}\label{subsection: Cubulating groups acting on polygonal complexes} We now understand the structure of the hypergraph stabilisers and the separation in the wallspaces $(X,\mathcal{W})$. For a metric polygonal complex $X$, let $D(X)$ be the maximal circumference of a polygonal face in $X$. We will be considering $G$ acting properly discontinuously and cocompactly on a polygonal complex $X$ so that $D(X)= D(G\backslash X)$ is finite. \begin{lemma}\label{lem: edge hypergraph separation} Let $X$ be a simply connected $CAT(0)$ polygonal complex with $G\backslash X$ gluably edge $\pi$-separated. Let $\gamma$ be a finite geodesic in $X$ of length at least $4D(X)$. There exists an edge hypergraph $\Lambda$ that separates the endpoints of any finite geodesic extension of $\gamma$. \end{lemma} \begin{proof} Since $\gamma$ is of length at least $4D(X)$, we can find a subgeodesic $\delta$ of $\gamma$ of length at least $2D(X)$ that starts at a point $v\in X^{(1)}$ and ends at $w\in X^{(1)}$.
If $\delta$ passes through the interior of a $2$-cell $f$ then, as $\delta$ is of length at least $2D(X)$, it meets the boundary $\partial f$ at two points $u_{1},u_{2}$. The sides of the polygonal faces are geodesic, and geodesics are unique in $CAT(0)$ spaces, so that there must exist a vertex $u$ in $\partial f$ lying between $u_{1}$ and $u_{2}$. Choose a cutset $C$ in $Lk(u)$ containing $f$, and let $P$ be a chosen partition of $\pi_{0}(Lk(u)-C)$ so that the endpoints of $f$ lie in different elements of $P$ (this must exist by assumption). Let $\Lambda$ be any hypergraph passing through $(C,P)$ in $Lk(u)$: by Lemma \ref{lem: separation condition}, $\Lambda$ separates the endpoints of the subpath of $\delta$ between $u_{1}$ and $u_{2}$: as geodesics in $X$ are unique, it follows that $\Lambda$ intersects any geodesic extension of $\delta$ exactly once, and so separates the endpoints of any geodesic extension of $\delta$. Otherwise $\delta$ lies strictly in $X^{(1)}$: $\delta$ is of length at least $2D(X)$ and so it must intersect at least two primary vertices. Therefore $\delta$ contains an edge of the form $[u_{1},u_{2}]$ for some primary vertices $u_{1},u_{2}$: this edge must be geodesic. Furthermore, the geodesic $[u_{1},u_{2}]$ contains a secondary vertex $s$. Let $C=E(s)$ and let $P$ be a partition of $\pi_{0}(Lk(s)-C)$ so that the endpoints of $[u_{1},u_{2}]$ lie in different elements of $P$ (this must exist by assumption). Let $\Lambda$ be the hypergraph passing through $(C,P)$ in $Lk(s)$; it follows by Lemma \ref{lem: separation condition} that $\Lambda$ separates the endpoints of any finite geodesic extension of $\delta$. \end{proof} Similarly, we have the following. \begin{lemma}\label{lem: vertex hypergraph separation} Let $X$ be a simply connected $CAT(0)$ polygonal complex with $G\backslash X$ gluably vertex $\pi$-separated. Let $\gamma$ be a finite geodesic in $X$ of length at least $4D(X)$.
There exists a vertex hypergraph $\Lambda$ that separates the endpoints of any finite geodesic extension of $\gamma$. \end{lemma} \begin{proof} Again, since $\gamma$ is of length at least $4D(X)$, we can write $\gamma=\gamma_{1}\cdot\delta\cdot\gamma_{2}$, where each $\gamma_{i}$ is of length at least $D(X)\slash 2$, and $\delta$ is a path of length between $D(X)$ and $2D(X)$ that starts at a point $v\in X^{(1)}$ and ends at $w\in X^{(1)}$. First suppose that $\delta$ contains a nontrivial subpath, $\delta'$, which contains exactly one point of $X^{(1)}$, $u$, in its interior. Let $e$ be the edge of $X$ containing $u$. Since $\delta'$ is geodesic, we see that $i(\delta')$ and $t(\delta')$ lie in two distinct faces $F,F'$, both adjacent to $e$. In $Lk(i(e))$, $F,F'$ are two edges adjacent to $e$, and so, as $G\backslash X$ is gluably $\pi$-separated, there exists a $\pi$-separated cutset $C\ni e$ and a partition $P$ of $\pi_{0}(Lk(i(e))-C)$ with $F,F'$ lying in distinct elements of $P$. Let $\Lambda$ be any hypergraph passing through $(C,P)$ in $Lk(i(e))$. By Lemma \ref{lem: separation condition}, $\Lambda$ separates the endpoints of $\delta'$, and hence the endpoints of $\gamma$. Otherwise, $\delta$ is contained in $X^{(1)}$: as we have not subdivided $X$, $v$ is a primary vertex of $X$. Let $\delta_{1}$, $\delta_{2}$ be the two subpaths of $\gamma$ incident to $v$: as $\gamma$ is geodesic, $d_{Lk(v)}(\delta_{1},\delta_{2})\geq \pi$. Let $C$ be a vertex cutset such that $\delta_{1}$ and $\delta_{2}$ lie in different components of $Lk(v)-C$, and let $P$ be a chosen partition of $\pi_{0}(Lk(v)-C)$ separating $\delta_{1}$ and $\delta_{2}$ (this exists as $G\backslash X$ is gluably vertex $\pi$-separated). Let $\Lambda$ be any vertex hypergraph passing through $(C,P)$ in $Lk(v)$: by Lemma \ref{lem: separation condition} this separates $\delta_{1}$ and $\delta_{2}$, and so separates the endpoints of $\gamma$.
\end{proof} We now turn our attention to finding codimension-$1$ subgroups. We first note the following lemma concerning $CAT(0)$ geometry. \begin{lemma}\label{lem: geodesic rays diverge} Let $Y$ be a $CAT(0)$ space, and let $\gamma_{1},\gamma_{2}$ be geodesic rays starting from the same point. If there exists $r>0$ such that $\gamma_{1}\subseteq \mathcal{N}_{r}(\gamma_{2})$, then $\gamma_{1}=\gamma_{2}.$ \end{lemma} \begin{proof} Let $p$ be the common start point of $\gamma_{1},\gamma_{2}$ and let $\theta=\angle_{p}(\gamma_{1},\gamma_{2}).$ Since $\gamma_{1}\subseteq \mathcal{N}_{r}(\gamma_{2})$, for all $t>0$ there exists $t'(t)>0$ such that $d(\gamma_{1}(t),\gamma_{2}(t'))\leq r$. However, $d(\gamma_{1}(t),p)\rightarrow \infty $ as $t\rightarrow\infty$, so that $d(\gamma_{2}(t'(t)),p)\rightarrow \infty $ as $t\rightarrow\infty$. Consider the Euclidean comparison triangle with vertices $p$, $\gamma_{1}(t)$ and $\gamma_{2}(t'(t))$: its third side has length at most $r$, and so its angle $\theta(t)$ at $p$ tends to $0$ as $t\rightarrow \infty.$ However, $\theta\leq \theta(t)$ for all $t$, and so $\theta=0$. It follows that $\gamma_{1}=\gamma_{2}$ in a closed neighbourhood of $p$, and so the set $\{t\;:\;\gamma_{1}(t)=\gamma_{2}(t)\}$ is clopen. The result follows. \end{proof} Using this we can prove that hypergraph stabilizers have subgroups that are codimension-$1$ in $G$. Let $G$ be a group with finite generating set $S$ and let $\Gamma$ be the Cayley graph of $G$ with respect to $S$. A subgroup $H$ of $G$ is \emph{codimension-$1$} if the graph $H\backslash \Gamma$ has at least two ends, i.e. for some compact set $K$, $H\backslash\Gamma - K$ contains at least two infinite components. \begin{lemma}\label{lem: hypergraph stabilizers are codimension-1} Let $G$ be a group acting properly discontinuously and cocompactly on a simply connected $CAT(0)$ polygonal complex $X$ such that $G\backslash X$ is (weakly) gluably $\pi$-separated.
Let $\Lambda$ be a hypergraph in $X$. For any component $U_{\Lambda}$ of $X-\Lambda$, the group $$H_{U}=Stab_{Stab(\Lambda)}(U_{\Lambda})\cap Stab_{Stab(\Lambda)}(X-\overline{U_{\Lambda}})$$ is virtually free, and is quasi-isometrically embedded and codimension-$1$ in $G$. \end{lemma} This again follows by \cite[Theorem 2.9]{Hruska-Wise}: we provide a direct proof for completeness. \begin{proof} We prove this in the case that $\Lambda$ is an edge hypergraph: the case for vertex hypergraphs is identical. By Lemma \ref{lem: edge hypergraphs are leafless convex}, $\Lambda$ is a convex tree. Since $\partial U_{\Lambda}\subseteq \Lambda$, $\partial U_{\Lambda}$ is a convex tree. We see that $H_{U}$ is of index at most $2$ in $Stab_{Stab(\Lambda)}(U_{\Lambda})$. By Lemma \ref{lem: wall stabilizers are cocompact}, $H_{U}$ acts properly discontinuously and cocompactly on $\partial U_{\Lambda}$: it follows that $H_{U}$ is virtually free and quasi-isometrically embedded in $G$. Furthermore, by Lemma \ref{lem: separation condition}, $X- \Lambda$ consists of at least two path-connected components, $\{U^{i}_{\Lambda}\}$. Let $V_{\Lambda}=X-\overline{U_{\Lambda}}$. Let $v$ be a vertex of $\Lambda$, and let $(C,P)$ be the cutset and partition through which $\Lambda$ passes at $v$. Let $e_{1}$ and $e_{2}$ be vertices that lie in distinct components of $Lk(v)-C$ such that, in $X$, $e_{1}$ is an edge lying in $U_{\Lambda}\cup v$ and $e_{2}$ an edge lying in $V_{\Lambda}\cup v$. We construct two geodesics $\gamma_{1}$ and $\gamma_{2}$: let the first edge of $\gamma_{1}$ be $e_{1}$, and let $w$ be the endpoint of $e_{1}$ distinct from $v$. Since the links of vertices have no vertices of degree $1$ and have girth at least $2\pi$, it follows that there exists a vertex or edge, $a_{1}$, in $\Gamma=Lk(w)$ so that $d_{\Gamma}(e_{1}, m(a_{1}))\geq \pi$, and so we can extend $e_{1}$ to a geodesic $[v,m(a_{1})]$.
We can continue in this fashion to construct a one-ended geodesic $\gamma_{1}$ that, by Lemma \ref{lem: separation condition}, lies in $U_{\Lambda}\cup v$ and (as geodesics are unique in $X$) intersects $\Lambda$ exactly once. Construct the geodesic $\gamma_{2}$ similarly, with first edge $e_{2}$, so that $\gamma_{2}$ intersects $\Lambda$ exactly once and lies in $V_{\Lambda}\cup v$. By Lemma \ref{lem: geodesic rays diverge}, it follows that for any $r>0$, $\gamma_{1},\gamma_{2}\not\subseteq \mathcal{N}_{r}(\partial U_{\Lambda})$. Therefore $H_{U}\backslash X-H_{U}\backslash \partial U_{\Lambda}$ consists of at least two infinite components: $H_{U}\backslash U_{\Lambda}$ and $H_{U}\backslash V_{\Lambda}$. As $G$ is quasi-isometric to $X$, and $H_{U}$ is quasi-isometric to $\partial U_{\Lambda}$, the result follows. \end{proof} We will use Hruska--Wise's \cite{Hruska-Wise} generalisation of Sageev's construction of a $CAT(0)$ cube complex dual to a collection of codimension-$1$ subgroups, as introduced in \cite{Sageev-95}. We will only describe the $1$-skeleton of this cube complex. \begin{definition}[\emph{Orientation}] Let $(X,\mathcal{W})$ be a wallspace and $W=\{U,V\}$ a wall. An \emph{orientation} of $W$ is a choice $c(W)=(\overleftarrow{c(W)},\overrightarrow{c(W)})$ of ordering of the pair $W$. An \emph{orientation} of $\mathcal{W}$ is an orientation of each wall $W$ in $\mathcal{W}$. \end{definition} A $0$-cube in the dual cube complex $\mathcal{C}(X,\mathcal{W})$ corresponds to a choice of orientation $c$ of $\mathcal{W}$ such that for any element $x\in X$, $x$ lies in $\overleftarrow{c(W)}$ for all but finitely many $W\in\mathcal{W}$, and $\overleftarrow{c(W)}\cap\overleftarrow{c(W')}\neq \emptyset$ for all $W,W'\in\mathcal{W}$. Two $0$-cells are joined by a $1$-cell if there is exactly one wall to which they assign opposite orientations.
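To make the $1$-skeleton just described concrete, the following Python sketch (our own illustration, not part of the paper's argument; the function name `dual_cube_skeleton` and the toy wallspace of four collinear points are assumptions of the example) enumerates the admissible orientations of a finite wallspace of genuine partitions and joins those that differ on exactly one wall:

```python
from itertools import combinations, product

def dual_cube_skeleton(walls):
    """0-cells and 1-cells of the cube complex dual to a finite wallspace.

    `walls` is a list of pairs (U, V) of frozensets with U ∪ V = X.  A 0-cell
    is an orientation: one chosen halfspace per wall, any two of which must
    intersect (the finiteness condition is automatic for a finite wallspace).
    """
    zero_cells = [
        c for c in product(*walls)  # pick one side of each wall
        if all(u & v for u, v in combinations(c, 2))
    ]
    # two 0-cells span a 1-cell when they differ on exactly one wall
    one_cells = [
        (c, d)
        for c, d in combinations(zero_cells, 2)
        if sum(x != y for x, y in zip(c, d)) == 1
    ]
    return zero_cells, one_cells

# toy wallspace: four collinear points cut by three walls; the dual
# complex should be a path with four 0-cells and three 1-cells
walls = [(frozenset(range(k + 1)), frozenset(range(k + 1, 4))) for k in range(3)]
verts, edges = dual_cube_skeleton(walls)
print(len(verts), len(edges))  # 4 3
```

In this toy case the admissible orientations are exactly the "monotone" ones, recovering the familiar fact that the dual complex of a line cut by parallel walls is again a line.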
Sageev analysed the properness and cocompactness of the group action on $\mathcal{C}(X,\mathcal{W})$ in \cite{Sageev-97}, and this was generalized by Hruska--Wise in \cite{Hruska-Wise}. We will use the following criteria, as they are the easiest to verify in our setting. \begin{theorem*}\cite[Theorem 1.4]{Hruska-Wise}\label{prop: proper actions} Suppose $G$ acts on a wallspace $(X,\mathcal{W})$, and the action on the underlying metric space $(X,d)$ is metrically proper. If there exist constants $\kappa,\epsilon>0$ such that for any $x,y\in X$, $$\#(x,y)\geq \kappa d(x,y)-\epsilon,$$ then $G$ acts metrically properly on $C(X,\mathcal{W})$. \end{theorem*} \begin{theorem*}\cite[Lemma 7.2]{Hruska-Wise}\label{thm: Sageev cubulation cocompactness} Let $G$ act on a wallspace $(X,\mathcal{W})$. Suppose there are finitely many orbits of collections of pairwise transverse walls in $X$. Then $G$ acts cocompactly on $C(X,\mathcal{W})$. \end{theorem*} This is sufficient to prove Theorem \ref{mainthm: cubulating groups}. \begin{proof}[Proof of Theorem \ref{mainthm: cubulating groups}] If $G\backslash X$ is weakly gluably $\pi$-separated, then by Lemma \ref{lem: hypergraph stabilizers are codimension-1}, $G$ contains a virtually free codimension-$1$ subgroup. Now suppose $G\backslash X$ is a gluably $\pi$-separated complex. Since $X$ is locally finite and $G$ acts properly discontinuously on $X$, $G$ acts metrically properly on $X$. Construct the hypergraph wallspace for $X$. Then by Lemmas \ref{lem: edge hypergraph separation} and \ref{lem: vertex hypergraph separation}, $$\#_{\Lambda}(p,q)\geq d_{X}(p,q)\slash 4D(X) -1.$$ By Lemma \ref{lem: wallspace separation from hypergraph separation}, this implies that $$\#(p,q)\geq d_{X}(p,q)\slash 4D(X)-1:$$ by \cite[Theorem 1.4]{Hruska-Wise} it follows that $G$ acts metrically properly on the cube complex $C(X,\mathcal{W})$. Now suppose that $G$ is hyperbolic, so that $X$ is also hyperbolic.
As hypergraphs are convex and hypergraph stabilisers are cocompact, by \cite{Gitik-Mitra-Rips-Sageev_1998widths} (cf. \cite{Sageev-97}) there is an upper bound on the number of pairwise intersecting hypergraphs. For any point $x\in \Lambda$ there is a finite upper bound on the number of components of $X-\Lambda$ intersecting $x$, and so by Lemma \ref{lem: finitely many wall orbits}, we see there is an upper bound on the size of a collection of pairwise transverse walls. As $G$ acts cofinitely on the set of walls, it follows that the hypotheses of \cite[Lemma 7.2]{Hruska-Wise} are met, and so $G$ acts cocompactly on the $CAT(0)$ cube complex $C(X,\mathcal{W})$: by \cite[Theorem 1.1]{Agol13}, we conclude that $G$ is virtually special. \end{proof} \section{Finding separated cutsets by computer search}\label{section: finding cutsets} In this short section, we discuss how to find separated cutsets by computer search. Let $\Gamma$ be a finite metric graph, and let $I(\Gamma)=V(\Gamma)$ or $E(\Gamma)$. Define $d_{I}(x,y)=d_{\Gamma}(x,y)$ if $x,y\in V(\Gamma)$ and $d_{I}(x,y)=d_{\Gamma}(m(x),m(y))$ if $x,y\in E(\Gamma)$. Let $\sigma>0$. The $\sigma$-separated cutsets of $\Gamma$ that lie in $I(\Gamma)$ can be found in the following way: we can define a dual graph $\bar{\Gamma}$ by $V(\bar{\Gamma})=I(\Gamma),$ and $$E(\bar{\Gamma})=\{(x,y)\in I(\Gamma)^{2}\;:\;x\neq y\;\mbox{and}\;d_{I}(x,y)< \sigma \}.$$ Finding $\sigma$-separated cutsets in $\Gamma$ then corresponds to finding independent vertex sets in $\bar{\Gamma}$ and checking if they are cutsets in $\Gamma$. Importantly, finding independent vertex sets can be done relatively efficiently. See \href{ https://github.com/CJAshcroft/Graph-Cut-Set-Finder}{\color{blue} https://github.com/CJAshcroft/Graph-Cut-Set-Finder} for the implementation of the above algorithm, and for the code used to find cutsets in the following sections.
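As a concrete illustration of the procedure just described (a minimal brute-force sketch of our own, not the linked implementation), the following enumerates $\sigma$-separated vertex cutsets of a small graph in the combinatorial metric; independence in the dual graph $\bar{\Gamma}$ is checked directly as all pairwise distances being at least $\sigma$.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Combinatorial distances from s by breadth-first search."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def num_components(adj, removed):
    """Number of connected components after deleting the vertices in `removed`."""
    seen, count = set(), 0
    for s in adj:
        if s in removed or s in seen:
            continue
        count += 1
        stack = [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return count

def separated_vertex_cutsets(adj, sigma):
    """All sigma-separated vertex cutsets: subsets with pairwise distance
    >= sigma (independent in the dual graph) whose removal disconnects."""
    dist = {s: bfs_dist(adj, s) for s in adj}
    verts = sorted(adj)
    cutsets = []
    for r in range(1, len(verts)):
        for C in combinations(verts, r):
            if all(dist[u][v] >= sigma for u, v in combinations(C, 2)) \
               and num_components(adj, set(C)) >= 2:
                cutsets.append(set(C))
    return cutsets

# Toy example: the 8-cycle with sigma = 3.
cycle8 = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
cuts = separated_vertex_cutsets(cycle8, 3)
```

On the $8$-cycle every $3$-separated cutset is a pair of vertices at distance at least $3$. The brute force above is exponential in the number of vertices; the linked implementation instead enumerates independent sets in $\bar{\Gamma}$, which is what makes the searches in the following sections practical.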
\section{Triangular buildings}\label{section: generalized quadrangles} In the following section, we prove Corollary \ref{mainthm: cubulating the generalized quadrangle}. In \cite{Kangaslampi-Vdovina} and \cite{Carbone-Kangaslampli-Vdovina_2012} all groups acting simply transitively on triangular buildings whose links are the minimal generalized quadrangle (see Figure \ref{fig: generalized quadrangle}) were classified. We apply Theorem \ref{mainthm: cubulating groups} to these groups, proving they are virtually special by considering the separation of the minimal generalized quadrangle. \begin{figure}[ht] \centering \includegraphics[scale=0.75]{genquad.png} \caption{The minimal generalized quadrangle.}\label{fig: generalized quadrangle} \end{figure} \begin{table}[H] \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 1&2& 14& 30\\ 2&1& 3& 19\\ 3&2& 4& 24\\ 4&3& 5& 11\\ 5&4& 6& 28\\ 6&5& 7& 15\\ 7&6& 8& 20\\ 8&7& 9& 25\\ 9&8& 10& 30\\ 10&9& 11& 17 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 11&4&10& 12\\ 12&11& 13& 21\\ 13&12& 14& 26\\ 14&1& 13& 15\\ 15&6& 14& 16\\ 16&15& 17& 23\\ 17&10& 16& 18\\ 18&17& 19& 27\\ 19&2& 18& 20\\ 20&7& 19& 21 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 21&12& 20& 22\\ 22&21& 23& 29\\ 23&16& 22& 24\\ 24&3& 23& 25\\ 25&8& 24& 26\\ 26&13&25&27\\ 27&18&26&28\\ 28&5&27&29\\ 29&22&28&30\\ 30&1&9&29 \end{tabular} \caption{Edge incidences for the minimal generalized quadrangle} \end{table} \begin{lemma}\label{lem: separation of the generalized quadrangle} Let $\Gamma$ be the minimal generalized quadrangle equipped with the combinatorial metric. Then $\Gamma$ is weighted (strongly) edge $3$-separated. 
\end{lemma} \begin{proof} By a computer search, we find the following exhaustive list of $3$-separated edge cutsets in $\Gamma$: \begin{enumerate}[label=\hspace*{-10pt}] { \item\hspace*{-10pt}$C_{1}=\{\fe{1}{2},\fe{4}{5},\fe{7}{20},\fe{9}{10},\fe{12}{13},\fe{15}{16},\fe{18}{27},\\\hspace*{25pt}\fe{22}{29},\fe{24}{25}\},$ \item\hspace*{-10pt}$C_{2}=\{\fe{1}{2},\fe{4}{11},\fe{6}{15},\fe{8}{9},\fe{13}{26},\fe{17}{18},\fe{20}{21},\\\hspace*{25pt}\fe{23}{24},\fe{28}{29}\},$ \item\hspace*{-10pt}$C_{3}=\{\fe{1}{14},\fe{3}{4},\fe{6}{7},\fe{9}{10},\fe{12}{21},\fe{16}{23},\fe{18}{19},\\\hspace*{25pt}\fe{25}{26},\fe{28}{29}\},$ \item\hspace*{-10pt}$C_{4}=\{\fe{1}{14},\fe{3}{24},\fe{6}{7},\fe{9}{10},\fe{12}{21},\fe{16}{23},\fe{18}{19},\\\hspace*{25pt}\fe{25}{26},\fe{28}{29}\},$ \item\hspace*{-10pt}$C_{5}=\{\fe{1}{30},\fe{3}{4},\fe{6}{15},\fe{8}{25},\fe{10}{17},\fe{12}{13},\fe{19}{20},\\\hspace*{25pt}\fe{22}{23},\fe{27}{28}\},$ \item\hspace*{-10pt}$C_{6}=\{\fe{1}{30},\fe{3}{24},\fe{5}{28},\fe{7}{8},\fe{10}{11},\fe{13}{26},\fe{15}{16},\\\hspace*{25pt}\fe{18}{19},\fe{21}{22}\},$ \item\hspace*{-10pt}$C_{7}=\{\fe{2}{3},\fe{5}{6},\fe{8}{25},\fe{10}{11},\fe{13}{14},\fe{16}{23},\fe{18}{27},\\\hspace*{25pt}\fe{20}{21},\fe{29}{30}\},$ \item\hspace*{-10pt}$C_{8}=\{\fe{2}{3},\fe{5}{28},\fe{7}{20},\fe{9}{30},\fe{11}{12},\fe{14}{15},\fe{17}{18},\\\hspace*{25pt}\fe{22}{23},\fe{25}{26}\},$ \item\hspace*{-10pt}$C_{9}=\{\fe{2}{19},\fe{4}{5},\fe{7}{8},\fe{10}{17},\fe{12}{21},\fe{14}{15},\fe{23}{24},\\\hspace*{25pt}\fe{26}{27},\fe{29}{30}\},$ \item\hspace*{-10pt}$C_{10}=\{\fe{2}{19},\fe{4}{11},\fe{6}{7},\fe{9}{30},\fe{13}{14},\fe{16}{17},\fe{21}{22},\\\hspace*{25pt}\fe{24}{25},\fe{27}{28}\}.$ } \end{enumerate} $\Gamma$ is connected and contains no vertices of degree $1$. The cutsets are $3$-separated, and $\cup_{i}C_{i}=E(\Gamma)$. In fact, each cutset is minimal, and so is certainly proper.
Furthermore, every edge appears in exactly two cutsets: assigning each cutset weight $1$ we see that the weight equations are satisfied, and so $\Gamma$ is weighted edge $3$-separated. In fact, by a computer search we can see that $\Gamma$ satisfies the conditions of Lemma \ref{lem: strong edge sep condition}, and so is weighted strongly edge $3$-separated. \end{proof} Note that $C_{i}\cap C_{j}$ is nonempty for all $i$ and $j$, so that we are not able to use \cite[Example $4.3$]{Hruska-Wise}. However, we can apply Theorem \ref{mainthm: cubulating groups} to prove groups acting properly discontinuously and cocompactly on triangular buildings with the minimal generalized quadrangle as links are virtually special. \begin{proof}[Proof of Corollary \ref{mainthm: cubulating the generalized quadrangle}] Let $X$ be a simply connected polygonal complex such that every face has at least $3$ sides, and the link of every vertex is isomorphic to the minimal generalized quadrangle, $\Gamma$, and let $G$ be a group acting properly discontinuously and cocompactly on $X$. Since $\Gamma$ has girth $8$, $X$ can be endowed with a $CAT(-1)$ metric, so that $G$ is hyperbolic. Endow $X$ with the metric that makes each $k$-gonal face a regular unit Euclidean $k$-gon, so that $X$ is regular and the length of each edge in the link of a vertex is at least $\pi\slash 3$. As $\Gamma$ is weighted edge $3$-separated with the combinatorial metric, it follows that the links of $X$ are weighted edge $\pi$-separated. Hence by Lemma \ref{lem: solving gluing equations for minimal edge cutsets}, $G\backslash X$ is gluably $\pi$-separated. Furthermore, by Gromov's link condition, $X$ is $CAT(0)$. 
Therefore, $G$ is hyperbolic and acts properly discontinuously and cocompactly on a simply connected $CAT(0)$ triangular complex $X$ with $G\backslash X$ gluably $\pi$-separated, so acts properly discontinuously and cocompactly on a $CAT(0)$ cube complex by Theorem \ref{mainthm: cubulating groups}, and hence is virtually special by \cite[Theorem $1.1$]{Agol13}. \end{proof} \section{Application to generalized triangular groups}\label{sec: gen triangles} In this section we prove Theorem \ref{mainthm: cubulating generalized triangle groups} in Section \ref{subsection: Codimension-$1$ subgroups of generalized ordinary triangle groups}, Corollary \ref{coralph: small girth generalized triangle groups} in Section \ref{subsection: small girth generalized triangle groups}, and Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups} in Section \ref{subsection: cubulating dehn fillings of generalized triangle groups}. \subsection{Cubulating generalized ordinary triangle groups}\label{subsection: Codimension-$1$ subgroups of generalized ordinary triangle groups} We now consider generalized ordinary triangle groups, constructed in \cite{Lubotzky-Manning-Wilton} to answer a question of Agol and Wise: note that the case of $k=2$ corresponds to classical ordinary triangle groups. The first complex of groups we define uses the notation from \cite{Caprace-Conder-Kaluba-Witzel_triangle} to more easily align with their work. See e.g. \cite{Bridson-Haefliger} for further discussion of complexes of groups. \begin{definition}[Generalized triangle groups]\label{def: generalized triangle} Consider the following complex of groups over $\mathcal{T}$, the poset of all subsets of $\{1,2,3\}$. Let $X_{1},X_{2},X_{3}$ be the vertex groups, and $A_{1},A_{2},A_{3}$ the edge groups, with the face group trivial, and homomorphisms $\phi_{i,i+1}:A_{i}\rightarrow X_{i+1},\;\phi_{i,i-1}:A_{i}\rightarrow X_{i-1}$ for $i=1,2,3$ taken $\mod 3$.
Now, consider the coset graph $$\Gamma_{X_{i}}(\phi_{i-1,i}(A_{i-1}),\phi_{i+1,i}(A_{i+1})).$$ Fix $k\geq 2$ and let each $A_{i}=\mathbb{Z}\slash k$. For graphs $\Gamma_{i}$, let $$\{D_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})\}_{j}$$ be the family of complexes of groups obtained by choosing $X_{i}$ and $\phi_{i,i\pm 1}$ such that for each $i$ $$\Gamma_{X_{i}}(\phi_{i-1,i}(A_{i-1}),\phi_{i+1,i}(A_{i+1}))\cong \Gamma_{i}.$$ A group $$G^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})=\pi_{1}(D^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))$$ is called a \emph{($k$-fold) generalized triangle group}. \end{definition} Bridson and Haefliger considered the developability of a complex of groups in \cite[III.$\mathcal{C}$]{Bridson-Haefliger}. The following is well known: see e.g. \cite[Theorem 3.1]{Caprace-Conder-Kaluba-Witzel_triangle}. \begin{prop}\label{lem: developability of gen triangle} Suppose that $girth(\Gamma_{i})\geq 6$ for each $i$. Then $G_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ acts properly and cocompactly on a triangular complex $X^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ such that the link of each vertex is isomorphic to $\Gamma\in\{\Gamma_{i}\}_{i}$. If $girth(\Gamma_{1})>6$, then $G$ is hyperbolic. \end{prop} \begin{definition}[Generalized ordinary triangle groups]\label{def: generalized triangle 2} Consider the following complex of groups. Fix $k\geq 2$, and identify the boundaries of $k$ $2$-simplices to construct a simplicial complex $\mathcal{K}$ with three vertices $v_{1},v_{2},v_{3}$, three edges $e_{1}, e_{2}, e_{3}$, and $k$ $2$-simplices. Then $Lk(v_{i})\simeq C_{k,2}$, the cage graph on $k$ edges, i.e. the smallest $k$-regular graph of girth $2$. Let $P_{i}=\pi_{1}(Lk(v_{i}))$, and let $G_{0,k}$ be the free group on $2k-2$ letters. Note that we can view $G_{0,k}$ as the fundamental group of a complex of groups with underlying simplicial complex $\mathcal{K}$ and vertex groups $P_{i}$.
Now, let $\Gamma_{i}\looparrowright Lk(v_{i})$ be finite-sheeted normal covering graphs, with associated normal subgroups $Q_{i}\unlhd P_{i}$. Let $D$ be a complex of groups with underlying complex $\mathcal{K}$ and (finite) vertex groups $V_{i}=P_{i}\slash Q_{i}$. Since there are choices for the above complex, we will let $D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$, $j=1,\hdots ,$ be the finite exhaustive list of possible complexes of groups achieved by the above construction. Form the \emph{($k$-fold) generalized ordinary triangular group} $$ G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})=\pi_{1}(D^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3}))=G_{0,k}\slash \langle\langle Q_{1}\cup Q_{2}\cup Q_{3}\rangle\rangle .$$ \end{definition} Note that in this definition the graphs $\Gamma_{i}$ are covers of $C_{k,2}$ so that they are connected, contain no cut edges, and have girth at least $2$. Theorem \ref{mainthm: cubulating groups}, along with Proposition \ref{lem: developability of gen triangle}, and \cite[Proposition 3.2]{Lubotzky-Manning-Wilton} below, allow us to cubulate $G^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ and $G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ when given enough information about each of $\Gamma_{1},\Gamma_{2},\Gamma_{3}$. The purpose of this subsection is to provide a way to prove such a group acts properly discontinuously on a $CAT(0)$ cube complex by considering $\Gamma_{1}$ alone. Again, see Section \ref{subsec: Link conditions} for the relevant definitions. \begin{theorem}\label{mainthm: cubulating generalized triangle groups} Let $\Gamma_{i}\looparrowright C_{k,2}$ be finite-sheeted covers such that $girth(\Gamma_{i})\geq 6$ for each $i$, and let $G=G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ or $G=G^{j}_{k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$. If $\Gamma_{1}$ is weighted strongly edge $3$-separated, then $G$ acts properly discontinuously on a $CAT(0)$ cube complex. If, in addition, $G$ is hyperbolic, then this action is cocompact. 
\end{theorem} We use the following proposition, as stated in \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, which follows by an application of \cite[Theorem III.$\mathcal{C}$.4.17]{Bridson-Haefliger}. \begin{proposition*} \cite[Proposition 3.2]{Lubotzky-Manning-Wilton} If $girth(\Gamma_{i})\geq 6$ for each $i$, then $G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ acts properly discontinuously and cocompactly on a simply connected simplicial complex $X^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ with links isomorphic to $\Gamma$, where $\Gamma\in\{\Gamma_{1},\Gamma_{2},\Gamma_{3}\}.$ Furthermore, if $girth(\Gamma_{i})\geq 6$ for each $i$ and $girth(\Gamma_{1})>6$, then $G^{j}_{0,k}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ is hyperbolic. \end{proposition*} Now, fix $j$, let $G=G_{k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ or $G=G_{0,k}^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$, and let $X=X^{j}(\Gamma_{1},\Gamma_{2},\Gamma_{3})$ be as above. Note that the antipodal graph $\Delta_{G\backslash X}$ is the disjoint union of three components $\Delta_{1},\Delta_{2},\Delta_{3}$, such that for any vertex $v\in \Delta_{i}$ either $v$ is secondary, or $Lk_{G\backslash X}(v)\cong \Gamma_{i}$. Suppose $\Gamma_{1}$ is a weighted strongly edge $3$-separated graph, and endow $X$ with the metric that turns each triangle into a unit equilateral Euclidean triangle: $X$ is $CAT(0)$ with this metric. Then for each $v\in V(\Delta_{1})$, $Lk_{G\backslash X}(v)$ is a strongly edge $\pi$-separated graph. Since cutsets are proper, we can assign to every cutset the canonical partition: as discussed in Section \ref{subsection: Examples of solutions of the gluing equations} this is sufficient for cubulation, and therefore we omit the reference to partitions for the remainder of this section.
As in Section \ref{subsection: Constructing hypergraphs in polygonal complexes} construct the graphs $\underline{\Lambda}_{1},\hdots ,\underline{\Lambda}_{m}$ as images of $$\Sigma_{i}\looparrowright\Delta_{1}\looparrowright G\backslash X.$$ In particular, if a vertex $v$ has $Lk_{X}(v)=\Gamma_{1}$, we have a hypergraph passing through every $\pi$-separated edge cutset in $Lk_{X}(v)=\Gamma_{1}$. As in Section \ref{subsection: Hypergraph stabolizers and wallspaces}, we can again build the system of hypergraph walls. We now analyse the separation of this complex by hypergraphs. \begin{lemma}\label{lem: strongly separated triangle} Suppose that $\Gamma_{1}$ is weighted strongly edge $3$-separated, and let $\gamma$ be a geodesic in $X$ of length at least $100$. There exists a hypergraph $\Lambda$ such that $\Lambda$ separates the endpoints of any finite geodesic extension of $\gamma$. \end{lemma} \begin{proof} Since $\gamma$ has length at least $100$, we may write $\gamma=\beta \cdot\gamma_{1}\cdot\gamma_{2}\cdot\gamma_{3}\cdot \delta$ such that $1\leq l(\gamma_{i})\leq \sqrt{3\slash 2}$, $l(\beta),l(\delta)\geq 40$ and the endpoints of each $\gamma_{i}$ lie in $X^{(1)}$. We can see that either: \begin{enumerate}[label = \bf{case} $\boldsymbol{\alph*})$] \item $\gamma_{2}$ contains an edge of $X^{(1)}$ of the form $[u,v]$, \item or $\gamma_{2}$ contains a subpath that intersects $X^{(1)}$ at exactly two points $x,y$ in $\partial T$ for some $2$-cell $T$. \end{enumerate} Now consider case $a)$. There are two subcases to consider. \begin{enumerate}[label = \bf{Case} $\boldsymbol{a.\roman*}$)] \item $u$ and $v$ have links isomorphic to $\Gamma_{2}$ and $\Gamma_{3}$ respectively (or vice versa), \item or $v$ has link isomorphic to $\Gamma_{1}.$ \end{enumerate} In case $a.i)$, $\gamma_{2}$ contains a secondary vertex $x$ that is opposite to some $w$ with $Lk_{X}(w)\cong\Gamma_{1}$. 
By Lemma \ref{lem: separation condition}, the hypergraph passing through $x$ and $w$ therefore separates the endpoints of $\gamma_{2}$, and so the endpoints of any geodesic extension of $\gamma.$ In case $a.ii)$, consider the path $\gamma_{3}.$ If $\gamma_{3}$ is not an edge then $\gamma_{3}$ satisfies the hypothesis of case $b)$ using $\gamma_{3}$ in place of $\gamma_{2}$. Otherwise we may assume that $\gamma_{2}\cdot \gamma_{3}=[u,v]\cdot[v,w]$. Now, $d_{Lk(v)}([u,v],[v,w])\geq \pi$ as $\gamma_{2}\cdot \gamma_{3}$ is geodesic: let $C$ be the cutset separating $[u,v]$ and $[v,w]$ in $\Gamma_{1}$ (this exists as $\Gamma_{1}$ is strongly $3$-separated): by Lemma \ref{lem: separation condition} the hypergraph passing through $C$ in $Lk(v)$ separates the endpoints of $\gamma.$ \begin{figure}[H] \begin{minipage}[b]{0.4\textwidth} \includegraphics[scale=0.9]{gentriangle1.png}\centering \caption*{Case a.i)} \end{minipage} \begin{minipage}[b]{0.4\textwidth} \includegraphics[scale=0.9]{gentriangle2.png} \centering \caption*{\hspace*{70pt}Case a.ii)} \end{minipage} \label{fig:gentriangle1} \end{figure} For case $b)$ there are three subcases: \begin{enumerate}[label= \bf{case} $\boldsymbol{b.\roman*)}$] \item the two paths in $\partial T$ from $x$ to $y$ each contain one of the vertices $u,v$ such that $u$ is primary with $Lk(u)\cong \Gamma_{1}$, and $v$ is secondary and antipodal to $u$ in $\partial T$, \item one of the two paths in $\partial T$ from $x$ to $y$ contains both of the vertices $u,v$ where $u$ is primary with $Lk(u)\cong\Gamma_{1}$, and $v$ is secondary and opposite to $u$, \item or $\gamma_{2}=[u,v]$ where $u$ is secondary and opposite to $v$, with $Lk(v)\cong\Gamma_{1}$. \end{enumerate} In case b.i), by Lemma \ref{lem: separation condition} the hypergraph passing through $u$ and $v$ separates the endpoints of $\gamma_{2}$ and so the endpoints of $\gamma$. Consider case $b.ii)$.
Let $T_{x}$, $T_{y}$ be the two $2$-cells adjacent to $T$ containing the vertices $x$ and $y$ respectively, with $\gamma$ passing through both $T_{x}$, $T_{y}$. Note that $x$ and $y$ lie on different edges of $\partial T$. Suppose that $\gamma_{1}$ passes through $x$ and $\gamma_{3}$ passes through $y$: we may see by a simple Euclidean angle argument that either $\gamma_{1}$ or $\gamma_{3}$ satisfies case $b.i)$. In case $b.iii)$ extend $\gamma_{2}$ through $v$ until we meet $X^{(1)}$ at a third point $w$: without loss of generality this can be written $\gamma_{1}\cdot \gamma_{2}=[u,v]\cdot [v,w]$. Now, as $\gamma$ is geodesic, we have $d_{Lk(v)}([u,v],[v,w])\geq 3$. Let $C$ be the cutset in $\Gamma_{1}$ such that $[u,v]$ and $[v,w]$ are separated by $C$ (this exists as $\Gamma_{1}$ is strongly $3$-separated), and let $\Lambda$ be the hypergraph passing through $C$ in $Lk(v)$. By Lemma \ref{lem: separation condition} $\Lambda$ separates the endpoints of $\gamma$. \begin{figure}[H] \centering \begin{minipage}[b]{0.25\textwidth} \includegraphics[scale=0.9]{gentriangle3.png} \centering \caption*{Case b.i)} \end{minipage}\hspace*{30pt} \begin{minipage}[b]{0.49\textwidth} \includegraphics[scale=0.8]{gentriangle4.png} \centering \caption*{\hspace*{70pt}Case b.ii)} \end{minipage} \label{fig:gentriangle2} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.9]{gentriangle5.png} \caption*{Case b.iii)} \end{figure} \end{proof} We can now prove Theorem \ref{mainthm: cubulating generalized triangle groups}. \begin{proof}[Proof of Theorem \ref{mainthm: cubulating generalized triangle groups}] By \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, the group $G$ acts properly discontinuously and cocompactly on a simply connected simplicial complex $X$. Endow this complex with the Euclidean metric: by Gromov's link condition $X$ is $CAT(0)$ and has three types of vertices $\{v_{i}\}$ where $Lk(v_{i})=\Gamma_{i}$.
If $\Gamma_{1}$ is strongly $3$-separated, then by Lemma \ref{lem: strongly separated triangle}, we have $$\#(p,q)\geq d_{X}(p,q)\slash 100 -1.$$ The results then follow by \cite[Theorem 5.2]{Hruska-Wise} and \cite[Lemma 7.2]{Hruska-Wise} similarly to the proof of Theorem \ref{mainthm: cubulating groups}, using Lemma \ref{lem: strongly separated triangle} in place of Lemma \ref{lem: edge hypergraph separation}. \end{proof} \subsection{Small girth generalized triangle groups}\label{subsection: small girth generalized triangle groups} To prove Corollary \ref{coralph: small girth generalized triangle groups}, we now analyse the separation of various small girth graphs considered in \cite{Caprace-Conder-Kaluba-Witzel_triangle}. These graphs arise in the work of \cite{Conder-Morton_1995classification,Conder-Dobcsanyi_2002trivalent,Conder-Malnic-Marusic-Potocnik_2006census} and are regular bipartite graphs with girth $6$ or $8$, diameter $3$ or $4$, and an edge regular subgroup of the automorphism group. Furthermore, all but the Gray graph $G54$ have a vertex transitive automorphism group. In particular, we have the following. \begin{lemma*}\cite{Caprace-Conder-Kaluba-Witzel_triangle} Let $\Gamma$ be one of $\{F24A,F26A,F40A,F48A\}$. Then $Aut(\Gamma)$ acts vertex transitively. Let $\Gamma$ be one of $\{F24A,F26A,F40A,F48A, G54\}$: there exists a subgroup $H(\Gamma)\leq Aut(\Gamma)$ that acts freely and transitively on $E(\Gamma)$ and preserves the bipartition of $\Gamma.$ \end{lemma*} We make the following definitions. \begin{definition}[Cubic graphs] Let $\Gamma$ be a finite graph. It is \emph{cubic} if it is connected, bipartite, and trivalent. \end{definition} \begin{definition}[$\dagger$-separated graphs] Let $\Gamma$ be a graph.
We say that $\Gamma$ is \emph{$\dagger$-separated} if: \begin{enumerate}[label=$\roman*)$] \item $\Gamma$ is cubic, \item $girth(\Gamma)= 6$ or $8$, \item and $\Gamma$ is disjointly weighted vertex $3$-separated by proper cutsets (so that $\Gamma -C$ consists of exactly three components for each $C$). \end{enumerate} \end{definition} \begin{definition}[Good cubic graphs] A cubic graph is \emph{good} if $girth(\Gamma)=6$ or $8$, $diam(\Gamma)\leq 4$, $Aut(\Gamma)$ acts vertex transitively, and there exists a group $H(\Gamma)\leq Aut(\Gamma)$ that acts freely and transitively on $E(\Gamma)$ and preserves the bipartition of $\Gamma.$ \end{definition} In the above definition, for any vertex $v$ of $\Gamma$, $H(\Gamma)_{v}$ is of order three and so cyclically permutes the neighbours of $v$. Fix a vertex $v_{0}\in V(\Gamma).$ For each pair of vertices $v\neq w$, choose an element $\gamma_{v,w}\in Aut(\Gamma)$ with $\gamma_{v,w} v=w$, such that \begin{enumerate}[label=$\roman*)$] \item $\gamma_{v,w}=\gamma_{v,v_{0}}\gamma_{v_{0},w},$ \item $\gamma_{v,w}=\gamma_{w,v}^{-1},$ \item and if $v,w\in V_{1}$ or $v,w\in V_{2}$, then $\gamma_{v,w}\in H(\Gamma)$. \end{enumerate} For each $v\in V(\Gamma)$ we label the neighbours of $v$ as $w_{1}(v),w_{2}(v),w_{3}(v)$, so that $\gamma_{v_{0},v}w_{i}(v_{0})=w_{i}(v).$ We also assign to $H(\Gamma)_{v}$ a generator $h_{v}$ such that $h_{v}w_{1}(v)=w_{2}(v)$, $h_{v}w_{2}(v)=w_{3}(v)$, and so on, i.e. $h_{v}=\gamma_{v_{0},v}h_{v_{0}}\gamma_{v,v_{0}}$. \begin{definition}[$\Large{*}$-separated cutsets] Let $C$ be a vertex cutset in a graph $\Gamma$. We say $C$ is a \emph{$\Large{*}$-separated cutset} if $C$ is $3$-separated, if for any vertex $w\in C$ there are two vertices $v,v'$ adjacent to $w$ such that $v$ and $v'$ lie in separate components of $\Gamma-C$, and if $\Gamma - C$ contains exactly two components.
\end{definition} \begin{definition} Let $\Gamma$ be a good cubic graph, and let $\mathcal{C}$ be a collection of $\Large{*}$-separated cutsets. For $v\in V(\Gamma)$, we define $$\large{*}(v,i,j)$$ to be the set of all $\Large{*}$-separated cutsets $C\ni v$ such that $w_{i}(v)$ and $w_{j}(v)$ lie in the same connected component of $\Gamma - C$. We further define $$\mathcal{C}(v,i,j):=\mathcal{C}\cap \Large{*}(v,i,j).$$ \end{definition} \begin{definition}[$\Large{*}$-separated graph] Let $\Gamma$ be a graph. We say that $\Gamma$ is \emph{$\Large{*}$-separated} if: \begin{enumerate}[label=$\roman*)$] \item $\Gamma$ is a cubic graph, \item $\Gamma$ is weighted vertex $3$-separated by a set $\mathcal{C}$ of $\Large{*}$-separated cutsets, \item for any vertex $v$ and any $i\neq j$, $\mathcal{C}(v,i,j)$ is non-empty, \item and there exists an integer $M$ and positive integers $n(C)$ for each $C\in \mathcal{C}$ such that for any vertex $v$ and any $i\neq j $, $$\sum\limits_{C\in\mathcal{C}(v,i,j)}n(C)=\frac{M}{3}.$$ \end{enumerate} \end{definition} \begin{definition} Let $v$ be any vertex in $\Gamma$. We define $D(v)=\{w\in \Gamma\;:\;d(v,w)\geq 5\}$. \end{definition} For ease, we prove the following lemma. \begin{lemma}\label{lem: simple condition for 3 sep} Let $\Gamma$ be a good cubic graph. Let $V_{1}\sqcup V_{2}$ be the bipartite partition of vertices, and choose $v_{1}\in V_{1}$. Suppose that there exist $\Large{*}$-separated cutsets $A_{i}\ni v_{1}$ such that for each $u\in D(w_{1}(v_{1}))$ there exists some $i$ for which $w_{3}(v_{1})$ and $u$ lie in separate components of $\Gamma - A_{i}$. Then $\Gamma$ is $\Large{*}$-separated. \end{lemma} \begin{proof} We need to show three separate things. Firstly we show that there exists a collection $\mathcal{C}$ of $\Large{*}$-separated cutsets so that $\Gamma$ is vertex $3$-separated by $\mathcal{C}$. Recall the element $\gamma=\gamma_{v_{1},v_{2}}$, the element of $Aut(\Gamma)$ taking $v_{1}$ to $v_{2}:=w_{1}(v_{1})\in V_{2}$.
Let $H:=H(\Gamma)$ be the group acting edge-regularly on $\Gamma$ and preserving the bipartite partition. Let $A=\{A_{i}\}_{i}$, $B=\gamma \cdot A$, $\mathcal{A}=H\cdot A$, $\mathcal{B}=H\cdot B$, and $\mathcal{C}=\mathcal{A}\cup \mathcal{B}.$ By assumption, for some $i\neq j$, $\mathcal{C}(v_{1},i,j)$ is non-empty. For any vertex $v$, $\gamma_{v_{1},v}\mathcal{C}(v_{1},1,2)=\mathcal{C}(v,1,2)$, and furthermore, $h_{v}\mathcal{C}(v,1,2)=\mathcal{C}(v,2,3)=h_{v}^{-1}\mathcal{C}(v,1,3)$. Therefore $\mathcal{C}(v,i,j)$ is non-empty for all $v$ and all $i\neq j$. In particular, for any vertex $v$ and $w,w'$ adjacent to $v$ there exists a cutset separating $w$ and $w'$. Now let $u,v$ be vertices distance at least $3$ apart. Note that $d(u,v)\leq 4$ as $diam(\Gamma)\leq 4$. Assume $d(u,v)=3$, and let $$p=(u,u_{1})(u_{1},u_{2})(u_{2},v)$$ be any edge path between $u$ and $v$. Now, suppose without loss of generality that $u=w_{1}(u_{1})$ and $u_{2}=w_{2}(u_{1})$. Then choosing a cutset $C\in\mathcal{C}(u_{1},1,3)$, $u$ and $u_{2}$ lie in separate components of $\Gamma-C$. Since $C$ is $3$-separated, and $u_{1}\in C$, it follows that $u$, $v$ are not elements of $C$. As $u$ is adjacent to $u_{1}$ and $v$ is adjacent to $u_{2}$, it follows that $u$ and $v$ lie in different components of $\Gamma-C$. If $d(u,v)=4$, then we repeat the argument for $$p=(u,u_{1})(u_{1},u_{2})(u_{2},u_{3})(u_{3},v)$$ and for $C$ the cutset containing $u_{2}$ and separating $u_{1}$ and $u_{3}$. It now follows by Lemma \ref{lem: vertex separated condition} that $\Gamma$ is vertex $3$-separated. If $d(u,v)=5,6$, consider the edge path $$p=(u,u_{1})(u_{1},u_{2})(u_{2},u_{3})(u_{3},u_{4})(u_{4},v),$$ or $$p=(u,u_{1})(u_{1},u_{2})(u_{2},u_{3})(u_{3},u_{4})(u_{4},u_{5})(u_{5},v).$$ We may map $u_{1}$ to $v_{1}$ and $u$ to $w_{3}(v_{1})$: by taking $A$ and mapping back, by assumption this cutset separates $u$ and $v$. Finally we wish to find the positive integers $M$ and $n(C)$.
Once these are found, the weight equations can be solved immediately, and so $\Gamma$ is weighted vertex $3$-separated with respect to $\mathcal{C}.$ The proof is similar to the proof of Lemma \ref{lem: vertex transitive automorphism group can solve equations} concerning vertex transitive automorphism groups. Let $\tilde{\mathcal{C}}=H\cdot A\cup H\cdot B$ counted \emph{with multiplicity}. Let $u,v\in V_{1}$. For $i=1,2,3,$ we have $\gamma_{u,v}(w_{i}(u))=w_{i}(v)$. It follows that for $C\in \tilde{\mathcal{C}},$ $$C\in \mathcal{C}(u,i,j)\iff \gamma_{u,v}C \in \mathcal{C}(v,i,j).$$ Similarly $$C\in \mathcal{C}(u,1,2)\iff h_{u}C \in \mathcal{C}(u,2,3)\iff h_{u}^{2}C\in\mathcal{C}(u,1,3).$$ Let $n(C)=\vert\{C'\in\tilde{\mathcal{C}}\;:\;C'=C\}\vert,$ i.e. $n(C)$ is the multiplicity of $C$ in $\tilde{\mathcal{C}}$. By applying $\gamma_{v_{1},v}$ and $h_{v_{1}}$, we see that for any $v\in V_{1}$ and $i\neq j,i'\neq j'$: $$\sum\limits_{C\in\mathcal{C}(v_{1},i,j)}n(C)=\sum\limits_{C\in\mathcal{C}(v,i',j')}n(C).$$ Therefore there exists an integer $M_{1}$ such that for any $v\in V_{1}$ and $i\neq j$: $$\sum\limits_{C\in\mathcal{C}(v,i,j)}n(C)=\frac{M_{1}}{3}.$$ Similarly there exists an integer $M_{2}$ such that for any $v\in V_{2}$ and $i\neq j$: $$\sum\limits_{C\in\mathcal{C}(v,i,j)}n(C)=\frac{M_{2}}{3}.$$ Finally, we show that $M_{1}=M_{2}$: this follows immediately by construction, as $\mathcal{B}=\gamma \cdot\mathcal{A}$, and $\mathcal{C}=\mathcal{A}\cup \mathcal{B}$. \end{proof} Using this, we investigate the separation of several graphs.
\begin{table}[H] \centering \small \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 0&1& 2& 3\\ 1&0& 4& 5\\ 2&0& 6& 8\\ 3&0& 7& 9\\ 4&1& 11& 14\\ 5&1& 10& 13\\ 6&2& 12& 16\\ 7&3& 12& 15 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 8&2& 11& 18\\ 9&3& 10& 17\\ 10&5& 9& 21\\ 11&4& 8& 20\\ 12&6& 7& 19\\ 13&5& 19& 23\\ 14&4& 19& 22\\ 15&7& 20& 23 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 16&6& 21& 22\\ 17&9& 20& 22\\ 18&8& 21& 23\\ 19&12& 13& 14\\ 20&11& 15& 17\\ 21&10& 16& 18\\ 22&14& 16& 17\\ 23&13& 15& 18 \end{tabular} \caption{Edge incidences for $F24A$} \label{tab:F24 edges} \end{table} \begin{lemma} The graph $F24A$ is $\dagger$-separated. \end{lemma} \begin{proof} By a computer search we find all $3$-separated vertex cutsets in $F24A$: \begin{table}[H] \centering \begin{tabular}{l} $C_{1}=\{x_{0}, x_{10}, x_{11}, x_{12},x_{22},x_{23}\}$,\\ $C_{2}=\{x_{1}, x_{8}, x_{9}, x_{15}, x_{16}, x_{19}\}$,\\ $C_{3}=\{x_{2}, x_{4}, x_{7}, x_{13},x_{17},x_{21}\}$,\\ $C_{4}=\{x_{3}, x_{5}, x_{6}, x_{14}, x_{18}, x_{20}\}$. \end{tabular} \end{table} We note $diam(F24A)=4$. As the above are disjoint and proper, it follows easily that $F24A$ is $\dagger$-separated.
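These claims can be checked mechanically. The sketch below is our own verification script, with the adjacency list transcribed from the incidence table above; following the disjointness assertion in the proof, the final entry of $C_{4}$ is taken to be $x_{20}$ rather than $x_{19}$, so that the four cutsets partition the vertex set. Deleting each cutset then leaves exactly three components, as $\dagger$-separation requires.

```python
# Adjacency list of F24A, transcribed from the incidence table above
# (the transcription, not the table itself, is our assumption here).
F24A = {
    0: [1, 2, 3],     1: [0, 4, 5],     2: [0, 6, 8],     3: [0, 7, 9],
    4: [1, 11, 14],   5: [1, 10, 13],   6: [2, 12, 16],   7: [3, 12, 15],
    8: [2, 11, 18],   9: [3, 10, 17],   10: [5, 9, 21],   11: [4, 8, 20],
    12: [6, 7, 19],   13: [5, 19, 23],  14: [4, 19, 22],  15: [7, 20, 23],
    16: [6, 21, 22],  17: [9, 20, 22],  18: [8, 21, 23],  19: [12, 13, 14],
    20: [11, 15, 17], 21: [10, 16, 18], 22: [14, 16, 17], 23: [13, 15, 18],
}

CUTSETS = [
    {0, 10, 11, 12, 22, 23},
    {1, 8, 9, 15, 16, 19},
    {2, 4, 7, 13, 17, 21},
    {3, 5, 6, 14, 18, 20},  # final entry taken as x_20 for disjointness
]

def num_components(adj, removed):
    """Connected components of the graph with `removed` vertices deleted."""
    seen, count = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        count += 1
        stack = [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return count

component_counts = [num_components(F24A, C) for C in CUTSETS]
```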
\end{proof} \vspace*{-10pt} \begin{table}[H] \centering \small \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 0&1& 2& 3\\ 1&0& 4& 7\\ 2&0& 6& 9\\ 3&0& 5& 8\\ 4&1& 10& 13\\ 5&3& 11& 14\\ 6&2& 12& 15\\ 7&1& 11& 16\\ 8&3& 12& 17 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 9&2& 10& 18\\ 10&4& 9& 22\\ 11&5& 7& 20\\ 12&6& 8& 21\\ 13&4& 23& 24\\ 14&5& 24& 25\\ 15&6& 23& 25\\ 16&7& 21& 23\\ 17&8& 22& 24 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 18&9& 20& 25\\ 19&20& 21& 22\\ 20&11& 18& 19\\ 21&12& 16& 19\\ 22&10& 17& 19\\ 23&13& 15& 16\\ 24&13& 14& 17\\ 25&14& 15& 18 \end{tabular} \caption{Edge incidences for $F26A$} \label{tab:F26 edges} \end{table} \begin{lemma}\label{lem: F26A is * separated} The graph $F26A$ is $\Large{*}$-separated. \end{lemma} \begin{proof} We can take $v_{1}=x_{0}$, $w_{i}=x_{i}$. $D(x_{3})=\emptyset$, as $diam(F26A)=4$. Using the notation as in Lemma \ref{lem: simple condition for 3 sep} we find $A_{1}=\{x_{0}, x_{10}, x_{12}, x_{14}, x_{20}, x_{23}\}.$ The result follows by Lemma \ref{lem: simple condition for 3 sep}. \end{proof} We defer the incidence table of $F40A$, and the collection of cutsets found, to Appendix \ref{section: cutset appendix}. \begin{lemma}\label{lem: F40 is separated} The graph $F40A$ is weighted strongly edge $3$-separated. \end{lemma} \begin{proof} We require a large number of cutsets for this proof: they can be found in Appendix \ref{section: cutset appendix}. In particular, we find a collection of cutsets $\{C_{i}\}_{i}$ such that for any vertices $w_{1},w_{2}$ with $d(x_{0},w_{1})\geq 3$ and $d(w_{1},w_{2})=1$ there exists some $C_{i}$ separating $\{x_{0},x_{1}\}$ and $\{w_{1},w_{2}\}$ (this can be easily checked by computer). 
Similarly, for any vertices $w_{1},w_{2}$ with $d(x_{1},w_{1})\geq 3$ and $d(w_{1},w_{2})=1$ there exists some $C_{i}$ separating $\{x_{0},x_{1}\}$ and $\{w_{1},w_{2}\}$. By passing to subsets of the $C_{i}$ we may assume each of these cutsets is minimal and therefore proper. As $Aut(F40A)$ acts edge and vertex transitively, it follows by Lemma \ref{lem: strong edge sep condition} that $F40A$ is strongly edge $3$-separated. By Lemma \ref{lem: edge transitive automorphism group can solve equations}, $F40A$ is weighted disjointly strongly edge $3$-separated. \end{proof} \begin{table}[H] \centering \small \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 0&1& 2& 3\\ 1&0& 4& 5\\ 2&0& 6& 8\\ 3&0& 7& 9\\ 4&1& 11& 17\\ 5&1& 10& 16\\ 6&2& 13& 21\\ 7&3& 12& 20\\ 8&2& 15& 19\\ 9&3& 14& 18\\ 10&5& 23& 25\\ 11&4& 22& 24\\ 12&7& 23& 29\\ 13&6& 22& 28\\ 14&9& 22& 27\\ 15&8& 23& 26 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 16&5& 27& 31\\ 17&4& 26& 30\\ 18&9& 25& 35\\ 19&8& 24& 34\\ 20&7& 28& 33\\ 21&6& 29& 32\\ 22&11& 13& 14\\ 23&10& 12& 15\\ 24&11& 19& 43\\ 25&10& 18& 42\\ 26&15& 17& 47\\ 27&14& 16& 46\\ 28&13& 20& 45\\ 29&12& 21& 44\\ 30&17& 40& 46\\ 31&16& 39& 47 \end{tabular} \begin{tabular}{c|ccl} $x_{i}$ &\multicolumn{3}{c}{$x_{j}$ adjacent to $x_{i}$}\\ \hline 32&21& 41& 43\\ 33&20& 41& 42\\ 34&19& 40& 44\\ 35&18& 39& 45\\ 36&43& 45& 46\\ 37&42& 44& 47\\ 38&39& 40& 41\\ 39&31& 35& 38\\ 40&30& 34& 38\\ 41&32& 33& 38\\ 42&25& 33& 37\\ 43&24& 32& 36\\ 44&29& 34& 37\\ 45&28& 35& 36\\ 46&27& 30& 36\\ 47&26& 31& 37 \end{tabular} \caption{Edge incidences for $F48A$} \label{tab:F48 edges} \end{table} \begin{lemma} The graph $F48A$ is $\dagger$-separated.
\end{lemma} \begin{proof} By a computer search we find all $3$-separated vertex cutsets in $F48A$: \begin{table}[H] \centering \begin{tabular}{l} $C_{1}=\{x_{0}, x_{16}, x_{17}, x_{18}, x_{19}, x_{20}, x_{21}, x_{22}, x_{23}, x_{36}, x_{37}, x_{38}\}$,\\ $C_{2}=\{x_{1}, x_{6}, x_{7}, x_{14}, x_{15}, x_{24}, x_{25}, x_{30}, x_{31}, x_{41}, x_{44}, x_{45}\}$,\\ $C_{3}=\{x_{2}, x_{5}, x_{9}, x_{11}, x_{12}, x_{26}, x_{28}, x_{32}, x_{34}, x_{39}, x_{42}, x_{46}\}$,\\ $C_{4}=\{x_{3}, x_{4}, x_{8}, x_{10}, x_{13}, x_{27}, x_{29}, x_{33}, x_{35}, x_{40}, x_{43}, x_{47}\}$. \end{tabular} \end{table} The above are disjoint and proper, and it can be seen that $F48A$ is $\dagger$-separated. \end{proof} We defer the incidence table of $G54$, and the collection of cutsets found, to Appendix \ref{section: cutset appendix}. \begin{lemma}\label{lem: G54 is separated} The Gray Graph $G54$ is strongly edge $3$-separated. \end{lemma} \begin{proof} We require a large number of cutsets for this proof: they can be found in Appendix \ref{section: cutset appendix}. In particular, we find a collection of $3$-separated cutsets $\{C_{i}\}_{i}$ such that each $C_{i}$ contains one of the edges $$(x_{0},x_{1}),(x_{0},x_{53}),(x_{24},x_{25}),(x_{25},x_{26}).$$ Therefore, as each cutset is $3$-separated, they cannot contain the edge $(x_{0},x_{25})$, and so for each cutset, $x_{0}$ and $x_{25}$ lie in the same component of $G54- C_{i}$. We also show that for any point $v$ with $d(x_{0},v)\geq 3$ and any neighbour $w$ of $v$, there exists some $C_{i}$ separating $\{x_{0},x_{25}\}$ and $\{v,w\}$. Furthermore, for any point $v$ with $d(x_{25},v)\geq 3$ and any neighbour $w$ of $v$, there exists some $C_{i}$ separating $\{x_{0},x_{25}\}$ and $\{v,w\}$. By passing to subsets of $C_{i}$ we may assume each of these cutsets are minimal and therefore proper. Now, let $p=(u,w_{1})(w_{1},w_{2})\hdots(w_{n},v)$ be some path with $2\leq n\leq 4$ of length between $3$ and $6$. 
Let $u'$ be adjacent to $u$ and $v'$ be adjacent to $v$. Note again that $Aut(G54)$ acts transitively on the set of edges. If we can map $u$ to $x_{0}$ by some element $\gamma\in Aut(G54)$, then we may also map $u'$ to $x_{25}$ by $\gamma$, and then for some $i$, $C_{i}$ separates $x_{0},x_{25}$ and $\gamma v,\gamma v'$: $\gamma^{-1}C_{i}$ then separates $u,u'$ and $v,v'$. Otherwise, we map $u$ to $x_{25}$ by $\gamma$, so that $\gamma u'=x_{0}$. The result follows similarly. Therefore, $G54$ is strongly edge $3$-separated, and as it has an edge transitive automorphism group, it is weighted strongly edge $3$-separated by Lemma \ref{lem: edge transitive automorphism group can solve equations}. \end{proof} We finally need to prove the following. \begin{lemma}\label{lem: splicing together generalized triangle groups} Let $Y$ be a finite triangle complex such that each triangle is a unit equilateral Euclidean triangle. Suppose that the link of each vertex is either $\Large{*}$-separated or $\dagger$-separated with the combinatorial metric (we allow a mixture of these). Then $Y$ is gluably $\pi$-separated. \end{lemma} \begin{proof} It is clear that $Y$ is nonpositively curved and regular. By Lemma \ref{lem: solving gluing equations for minimal edge cutsets}, if the link of each vertex is $\dagger$-separated with the combinatorial metric then we are finished. Otherwise, let $\{v_{k}\}$ be the vertices such that $Lk(v_{k})$ is $\Large{*}$-separated with the combinatorial metric, and $\{w_{l}\}$ be the vertices such that $Lk(w_{l})$ is $\dagger$-separated with the combinatorial metric. Note that, as each triangle is unit equilateral, a $3$-separated cutset in $Lk(x)$ under the combinatorial metric is a $\pi$-separated cutset in $Lk(x)$ under the angular metric. For each proper $\pi$-separated cutset $C$ in $Lk(w_{l})$ we may assign the three partitions $P_{1}(C),P_{2}(C),P_{3}(C)$ corresponding to placing two components of $Lk(w_{l})-C$ in the same element of the partition.
For each cutset $C$ in $Lk(v_{k})$ assign the unique partition of connectedness of $Lk(v_{k})-C$. Since the links are $\dagger$-separated and $\Large{*}$-separated, by assumption for each vertex $x\in Y$ there exists a positive integer $N_{x}>0$ and a system of strictly positive weights $n_{x}(C)$ for $C\in \mathcal{C}_{x}$ such that for any vertex $e$ in $Lk_{Y}(x)$, $$\sum\limits_{C\in \mathcal{C}(e)}n_{x}(C)=\sum\limits_{C\in \mathcal{C}(e)\cap \mathcal{C}_{x}}n_{x}(C)=N_{x}.$$ Furthermore, if $Lk(v_{k})$ is $\Large{*}$-separated, then for any vertex $y\in V(Lk(v_{k}))$ and $i\neq j$ $$\sum\limits_{C\in\mathcal{C}_{v_{k}}(y,i,j)}n_{v_{k}}(C)=\frac{N_{v_{k}}}{3}.$$ Let $M=\prod_{x\in V(Y)}N_{x},$ and for a cutset $C\in\mathcal{C}_{x},$ define $$m(C)=Mn_{x}(C)\slash N_{x}.$$ It follows that for a vertex $e$ in $Lk_{Y}(x)$, $$\sum\limits_{C\in \mathcal{C}(e)}m(C)=\frac{M}{N_{x}}\sum\limits_{C\in \mathcal{C}(e)}n_{x}(C)=\frac{M}{N_{x}}N_{x}=M.$$ Now, take $\mu(C,P(C))=m(C)$. It follows that for any oriented edge $e$ of $Y^{(1)}$ starting at some $w_{l}$ and any partition $(C,P)\in \mathcal{CP}(e)$: $$\sum \limits_{(C',P')\in [C,P]_{e}}\mu (C',P')=\sum \limits_{(C',P')\in [C,P]_{e}}m(C')=\frac{1}{3}\sum \limits_{C'\in \mathcal{C}(e)}m(C')=\frac{1}{3}M.$$ Similarly, by the definition of $\Large{*}$-separated, for each $v_{k}$, each edge $e$ starting at $v_{k}$, and $(C,P(C))\in \mathcal{CP}(e),$ $$\sum \limits_{(C',P')\in [C,P]_{e}}\mu (C',P')=\sum \limits_{(C',P')\in [C,P]_{e}}m(C')=\frac{1}{3}\sum \limits_{C'\in \mathcal{C}(e)}m(C')=\frac{1}{3}M,$$ and so the gluing equations are solved.
\end{proof} The results of Corollary \ref{coralph: small girth generalized triangle groups} now follow from \cite[Theorem 3.1]{Caprace-Conder-Kaluba-Witzel_triangle}, \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, the above lemmas concerning the separation of the graphs considered, Theorem \ref{mainthm: cubulating groups}, and Theorem \ref{mainthm: cubulating generalized triangle groups}. \subsection{Cubulating Dehn fillings of generalized ordinary triangle groups}\label{subsection: cubulating dehn fillings of generalized triangle groups} We now apply Theorem \ref{mainthm: cubulating groups} to the generalized triangle groups of \cite{Lubotzky-Manning-Wilton}, in particular retrieving consequences of the malnormal special quotient theorem of Wise \cite{Wise-MSQT}. \begin{cor}\label{mainthm: cubulating dehn fillings of generalized triangle groups} Let $\Gamma_{i}\looparrowright C_{k,2}$ be finite $n(i)$-sheeted normal covering graphs. There exist finite-sheeted normal covering graphs $\dot{\Gamma}_{i}\looparrowright \Gamma_{i}$ of index at most $$4\bigg(4^{4^{kn(i)}}\bigg)$$ such that for any collection of finite-sheeted covering graphs $\Delta_{i}\looparrowright\Gamma_{i}$ that factor as $\Delta_{i}\looparrowright\dot{\Gamma}_{i}\looparrowright\Gamma_{i},$ and any $j$, the group $G^{j}_{0,k}(\Delta_{1},\Delta_{2},\Delta_{3})$ is hyperbolic and acts properly discontinuously and cocompactly on a $CAT(0)$ cube complex. \end{cor} We consider covers of $\sigma$-separated graphs: we restrict our consideration to graphs with the combinatorial metric. We note the following lemma. \begin{lemma}\label{lem:cover of separation} Let $p:\tilde{\Gamma}\looparrowright\Gamma$ be a covering graph. Let $e\in E(\Gamma)$ and let $\tilde{e}_{1},\tilde{e}_{2}\in p^{-1}(e)$ be distinct. Then $$d_{\tilde{\Gamma}}(m(\tilde{e}_{1}),m(\tilde{e}_{2}))\geq girth(\Gamma).$$ \end{lemma} We now show that covers of $\sigma$-separated graphs are also $\sigma$-separated. 
\begin{lemma}\label{lem: covers of suitable graphs} Let $\Gamma$ be a weighted (disjointly) edge $\sigma$-separated graph with $girth(\Gamma)\geq \sigma $ and $p:\tilde{\Gamma}\looparrowright\Gamma$ a finite-sheeted covering graph. Then $\tilde{\Gamma}$ is also weighted (disjointly) edge $\sigma$-separated, and $girth(\tilde{\Gamma})\geq girth(\Gamma)$. \end{lemma} \begin{proof} It is clear that $girth(\tilde{\Gamma})\geq girth(\Gamma)$, and that $\tilde{\Gamma}$ is connected and contains no vertices of degree $1$. Let $\mathcal{C}_{1}, \hdots , \mathcal{C}_{m}\subseteq E(\Gamma)$ be the $\sigma$-separated cut sets of $\Gamma$. Let $\tilde{\mathcal{C}}_{i}=p^{-1}(\mathcal{C}_{i})$: by Lemma \ref{lem:cover of separation}, and by noting that for all $x,y\in\tilde{\Gamma}$ we have $d_{\tilde{\Gamma}}(x,y)\geq d_{\Gamma}(p(x),p(y))$, we see that $\tilde{\mathcal{C}}_{i}$ is a proper $\min \{girth (\Gamma ),\sigma \}$-separated cut set. Furthermore, $\vert \tilde{\mathcal{C}}_{i}\vert\geq \vert \mathcal{C}_{i}\vert\geq 2$. As $girth(\Gamma)\geq \sigma$, these cut sets are $\sigma$-separated and $\cup_{i}\tilde{\mathcal{C}}_{i}=E(\tilde{\Gamma})$. Therefore, $\tilde{\Gamma}$ is edge $\sigma$-separated. If $\Gamma$ is disjointly separated, it is clear that $\tilde{\Gamma}$ is disjointly separated. Finally, defining $n(\tilde{\mathcal{C}}_{i})=n(\mathcal{C}_{i})$, it can be seen that the weight equations are satisfied, so that $\tilde{\Gamma}$ is weighted (disjointly) edge $\sigma$-separated. \end{proof} Using the above, we wish to show that given any graph $\Gamma$, there exists a finite-sheeted $3$-separated covering graph $\tilde{\Gamma}\looparrowright \Gamma$. \begin{definition} Let $\Gamma$ be a graph and $m\geq 1$.
The \emph{$\mathbb{Z}_{m}$ cover of $\Gamma$}, $$p_{m}:\mathbb{Z}_{m} (\Gamma )\looparrowright\Gamma,$$ is the $m^{b_{1}(\Gamma)}$-sheeted cover corresponding to the kernel of the canonical map $\pi_{1}(\Gamma)\rightarrow H_{1}(\Gamma,\mathbb{Z}_{m}).$ \end{definition} The use of this is the following. \begin{lemma}\label{lem: Gamma2 is suitable} Let $\Gamma$ be a finite connected graph with no cut edges and let $m\geq 1$. The covering graph $\mathbb{Z}_{2m} ( \Gamma )$ is weighted disjointly edge $girth(\Gamma)$-separated and $girth ( \mathbb{Z}_{2m} ( \Gamma ))= 2m (girth(\Gamma)).$ \end{lemma} \begin{proof} Let $e\in E(\Gamma)$. We claim $p_{2m}^{-1}(e)$ is a proper $girth(\Gamma)$-separated cut set in $\mathbb{Z}_{2m} ( \Gamma )$. By Lemma \ref{lem:cover of separation}, $p_{2m}^{-1}(e)$ is $girth(\Gamma)$-separated. It suffices to show that if two points $x$ and $y$ are joined by a path $q$ containing one edge of $p_{2m}^{-1}(e)$, then any path $q'$ between them contains an edge of $p_{2m}^{-1}(e)$. Now suppose not: consider such a path $q'$ not containing any edge of $p_{2m}^{-1}(e)$, and consider the loop $qq'$. Then $p_{2m}(qq')$ is trivial in the map to $H_{1}(\Gamma, \mathbb{Z}_{2m})$, so is homotopic to a curve containing $e$ an even number of times, a contradiction. Therefore the set $\mathcal{C}=\{p_{2m}^{-1}(e)\;:\; e\in E(\Gamma)\}$ is a disjoint collection of proper $girth(\Gamma)$-separated edge-cut sets such that any edge in $\mathbb{Z}_{2m}(\Gamma)$ appears in exactly one cut set: the weight equations are trivially satisfied and so $\mathbb{Z}_{2m}(\Gamma)$ is weighted disjointly $girth(\Gamma)$-separated. Any loop in $\mathbb{Z}_{2m} ( \Gamma )$ projects to a loop homotopic to a product of loops where each loop is traversed $2m$ times, and so $girth(\mathbb{Z}_{2m} ( \Gamma )) = 2m( girth(\Gamma))$. \end{proof} Using this, we prove the following.
\begin{proof}[Proof of Corollary \ref{mainthm: cubulating dehn fillings of generalized triangle groups}] Let $\Gamma_{i}\looparrowright C_{k,2}$ be $n(i)$-sheeted normal covering graphs. Let $\dot{\Gamma}_{i}:=\mathbb{Z}_{2} (\mathbb{Z}_{2} ( \Gamma_{i} ))$: these are $$2^{2-(2^{2-2n(i)+kn(i)}+2)n(i)+(2^{1-2n(i)+kn(i)}+1)kn(i)}\leq 4^{1+kn(i)2^{kn(i)}}\leq 4\bigg(4^{4^{kn(i)}}\bigg)-$$ sheeted covering graphs, which, by Lemma \ref{lem: Gamma2 is suitable}, are weighted disjointly edge $3$-separated under the combinatorial metric and have girth at least $8$. Furthermore, it is clear that $\dot{\Gamma_{i}}\looparrowright C_{k,2}$ are normal covers. Suppose $\Delta_{i}\looparrowright\Gamma_{i}$ factors as $\Delta_{i}\looparrowright\dot{\Gamma}_{i}\looparrowright\Gamma_{i}$. By \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, noting that $girth(\Delta_{i})\geq girth(\dot{\Gamma_{i}})> 6$, the group $G^{j}_{0,k}(\Delta_{1},\Delta_{2},\Delta_{3})$ is hyperbolic. The $\Delta_{i}$ are covers of $\dot{\Gamma}_{i}$, so by Lemma \ref{lem: covers of suitable graphs} are also weighted edge $3$-separated under the combinatorial metric. The result now follows from Lemma \ref{lem: solving gluing equations for minimal edge cutsets}, \cite[Proposition 3.2]{Lubotzky-Manning-Wilton}, and Theorem \ref{mainthm: cubulating groups}.\end{proof}
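The girth statement of Lemma \ref{lem: Gamma2 is suitable} can be illustrated on a toy example. The sketch below is a hedged illustration, not the construction used in the proof: it builds the $\mathbb{Z}_{2}$ homology cover of the triangle $C_{3}$ in the standard way (fixing a spanning tree and letting each non-tree edge flip one $\mathbb{Z}_{2}$ coordinate), and checks that the girth doubles, from $3$ to $6$; all function names are ours.

```python
from collections import deque
from itertools import product

def girth(adj):
    """Shortest cycle length in a simple graph: for each edge (u, v),
    find the shortest u-v path avoiding that edge, plus one."""
    best = float("inf")
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    for e in edges:
        u, v = tuple(e)
        dist = {u: 0}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if frozenset((x, y)) == e or y in dist:
                    continue
                dist[y] = dist[x] + 1
                queue.append(y)
        if v in dist:
            best = min(best, dist[v] + 1)
    return best

def z2_cover(adj, tree_edges):
    """Z_2 homology cover: each non-tree edge flips one Z_2 coordinate."""
    non_tree = sorted({frozenset((u, v)) for u in adj for v in adj[u]}
                      - {frozenset(e) for e in tree_edges}, key=sorted)
    b1 = len(non_tree)
    label = {e: i for i, e in enumerate(non_tree)}
    cover = {(u, s): [] for u in adj for s in product((0, 1), repeat=b1)}
    for u in adj:
        for v in adj[u]:
            e = frozenset((u, v))
            for s in product((0, 1), repeat=b1):
                t = list(s)
                if e in label:
                    t[label[e]] ^= 1   # cross into the other sheet
                cover[(u, s)].append((v, tuple(t)))
    return cover

# Triangle C_3; spanning tree {01, 12}, so b_1 = 1 and the cover is C_6.
C3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
cover = z2_cover(C3, [(0, 1), (1, 2)])
print(len(cover), girth(C3), girth(cover))  # 6 3 6
```

Here $m^{b_{1}(\Gamma)} = 2^{1} = 2$ sheets, and the girth is multiplied by $2m = 2$, matching the lemma.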
\section{Introduction} Graphs are an important tool for modeling systems with a set of objects and pairwise relationships between those objects. In many applications, the objects and the relations between them have temporal properties, resulting in dynamically changing graphs where the topology and/or other properties of the graph change with time. Static graphs are not suitable for modeling such applications as they cannot represent the temporal information. Some examples of domains where such dynamic graphs with temporal properties arise are communication networks with intermittent inter-node connectivity such as delay tolerant networks (DTN) \cite{xie16} and vehicular ad-hoc networks (VANET) \cite{Feng17}, social networks \cite{iribarren09, Amblard11, bravo19}, and biological networks \cite{lebre10,Han04,rao07}. When modelling these systems, the temporal information they contain must be represented in the graph model used. Temporal graphs \cite{Kostakos09} have been proposed as a tool for modeling such systems with temporal properties. Solving many well-known graph problems on temporal graphs introduces new challenges. For many graph problems, the usual definitions and algorithms for the problem on static graphs do not apply directly to temporal graphs. As an example, consider the notion of paths in a graph, which is used extensively in applications using graphs. Unlike static graphs, in a temporal graph, an edge can only be used for traversal if it exists at the time of the traversal, and the definition of a path in a static graph no longer holds in a temporal graph. In a temporal graph, a path from a node $u$ to node $v$ must contain a sequence of edges from $u$ to $v$ that exist in strictly increasing order of time, unlike a static graph where just a sequence of edges from $u$ to $v$ is sufficient. This makes many problems that use the notion of paths more complex to solve in temporal graphs. Similar examples exist for many other graph-related problems.
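The distinction drawn above can be made concrete in a few lines of code. The sketch below uses a simplified model in which each timed edge is a triple $(u, v, t)$, meaning the edge $\{u,v\}$ exists at timestep $t$; the names are illustrative, not taken from any formal definition in this paper. It checks whether a sequence of timed edges forms a temporal path, i.e., consecutive edges share endpoints and their timesteps strictly increase.

```python
def is_temporal_path(edges, start, end):
    """Check that `edges`, a list of (u, v, t) triples, forms a
    temporal path from `start` to `end`: the edges are consecutive,
    and they are used at strictly increasing timesteps."""
    current, last_t = start, -1
    for u, v, t in edges:
        if current == u:
            current = v
        elif current == v:
            current = u
        else:
            return False          # edges are not consecutive
        if t <= last_t:
            return False          # timesteps must strictly increase
        last_t = t
    return current == end

# A static path a-b-c exists, but with times 5 then 2 it is not
# usable as a temporal path; with times 2 then 5 it is.
print(is_temporal_path([("a", "b", 5), ("b", "c", 2)], "a", "c"))  # False
print(is_temporal_path([("a", "b", 2), ("b", "c", 5)], "a", "c"))  # True
```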
Several works have addressed graph problems such as finding paths and trees \cite{Xuan03, Huang15}, computing dominating sets \cite{Mandal18}, the traveling salesman problem \cite{Michail16}, finding vertex separators \cite{zschoche20}, and computing sparse spanners \cite{casteigts19} on temporal graphs. In this paper, we investigate the problem of finding matchings in a temporal graph. A matching in a static graph is defined as a subset of edges of the graph such that no two edges share a common node. A maximum matching is a matching with the maximum cardinality among all matchings. Finding a maximum matching for a static graph \cite{Hopcroft71, Micali80} is a well-studied problem due to its wide applicability. However, the traditional definition of matching for static graphs does not hold directly for temporal graphs due to the temporal nature of the edges. In this paper, we define a type of matching called \textit{0-1 timed matching} for a temporal graph, and investigate the complexity and algorithms for finding a \textit{maximum 0-1 timed matching} for different types of temporal graphs. We assume that only the edge set of the temporal graph changes with time. Thus, a temporal graph can be represented by labelling each edge with non-overlapping discrete time intervals for which that edge exists. The underlying graph of a temporal graph is a static graph whose node set includes all the nodes of the temporal graph and whose edge set includes each edge that is present in the temporal graph for at least one timestep. The specific contributions of this paper are as follows. \begin{enumerate} \item We prove that the problem of finding a maximum 0-1 timed matching for a rooted temporal tree is NP-Complete when each edge of the tree is associated with $2$ or more time intervals. \item We show that the problem is solvable in polynomial time if each edge of the rooted temporal tree is associated with a single time interval.
In particular, we propose a dynamic programming based $O(n^3)$ time algorithm to solve the problem on such a rooted temporal tree with $n$ nodes. \item Next, we study the computational complexity of the problem when each edge of the temporal graph is associated with a single time interval. We prove that the problem is NP-Complete in this case even for bounded degree bipartite temporal graphs when the degree of each node is bounded by $3$ or more. This automatically proves that the problem is NP-Complete for bipartite temporal graphs with a single time interval per edge, and hence, for general temporal graphs with a single time interval per edge. \item We investigate the hardness of approximation of the problem when each edge of the temporal graph is associated with multiple time intervals. We prove that there is no approximation algorithm with approximation ratio $\frac{1}{n^{1-\epsilon}}$, for any $\epsilon > 0$, for finding a maximum 0-1 timed matching even on a rooted temporal tree with $n$ nodes when each edge is associated with multiple time intervals unless NP = ZPP. \item We propose an approximation algorithm to address the problem for a temporal graph when each edge is associated with multiple time intervals. The time complexity of the proposed algorithm is $O(m^2 + m\Delta^2 + m \Delta \mathcal{T} \log \mathcal{T})$, where $m$ is the number of edges in the temporal graph, $\mathcal{T}$ is the lifetime of the temporal graph, and $\Delta$ is the maximum degree of any node in the underlying graph of the given temporal graph. The approximation ratio of the proposed algorithm is $\frac{1}{\mathcal{N}^* + 1}$, where $\mathcal{N^*}$ is the average number of edges overlapping with each edge in the temporal graph. Two edges overlap with each other if both are incident on the same node and there exists at least one timestep when both the edges exist\footnote{Formal definition of overlapping edges is given in Section \ref{defin}.}.
The same algorithm is a constant factor approximation algorithm for a temporal graph when degree of each node is bounded by a constant. \end{enumerate} The rest of this paper is organised as follows. Section \ref{relWork} describes some related work in the area. Section \ref{model} describes the system model used. Section \ref{defin} formally defines the problem. Section \ref{matchingTree} presents the results related to the problem of finding a maximum 0-1 timed matching for a rooted temporal tree. Section \ref{bipartite} presents the results related to the problem of finding a maximum 0-1 timed matching for a general temporal graph. Finally Section \ref{conclusion} concludes the paper. \section{Finding a Maximum 0-1 Timed Matching for Rooted Temporal Tree} \label{matchingTree} In this section, we first analyse the hardness of computing a maximum 0-1 timed matching for a given rooted temporal tree $\mathcal{G(V, E)}$. We show that this problem is NP-Complete for $\mathcal{G}$ when each edge in $\mathcal{E}$ is associated with $2$ or more time intervals. We then explore the problem when each edge of $\mathcal{E}$ is associated with a single time interval. We find that this problem is solvable in polynomial time and propose a dynamic programming based algorithm for it. \subsection{Complexity of Finding a Maximum 0-1 Timed Matching for Rooted Temporal Tree} We first show that the problem of finding a maximum {\em 0-1 timed matching} is NP-Complete for a rooted temporal tree even when the number of intervals associated with each edge is at most $2$. We refer to the problem of finding a maximum 0-1 timed matching for a rooted temporal tree when each edge is associated with at most $2$ time intervals as {\em MAX-0-1-TMT-2}. We first define the decision version of MAX-0-1-TMT-2, referred to as the {\em D-MAX-0-1-TMT-2} problem. 
\begin{definition} \textbf{D-MAX-0-1-TMT-2:} Given a rooted temporal tree $\mathcal{G(V, E)}$ with lifetime $\mathcal{T}$, where each edge in $\mathcal{E}$ is associated with at most $2$ time intervals, and a positive integer $g$, does there exist a 0-1 timed matching $M$ for $\mathcal{G}$ such that $|M| = g$? \end{definition} We show that there is a polynomial time reduction from the decision version of the problem of finding a {\em maximum rainbow matching for a properly edge coloured path}, referred to as D-MAX-RBM-P \cite{Le14}, to the D-MAX-0-1-TMT-2 problem. The D-MAX-RBM-P problem, defined as follows, is known to be NP-Complete \cite{Le14}. \begin{definition} \textbf{D-MAX-RBM-P:} Given a properly edge coloured path $P(V, E)$ and a positive integer $h$, does there exist a set $R \subset E$ of size $h$, such that $R$ is a matching for $P$ and no two edges in $R$ are coloured with the same colour? \end{definition} \begin{theorem} D-MAX-0-1-TMT-2 is NP-Complete. \label{thm:NPTree} \end{theorem} \begin{proof} We first show that the problem is in NP. Consider a certificate $\langle \langle \mathcal{G(V,E)},$ $g \rangle, M\rangle$, where $\mathcal{G}$ is a rooted temporal tree with lifetime $\mathcal{T}$, each edge in $\mathcal{E}$ is associated with at most 2 time intervals, $g$ is a given integer and $M$ is a subset of $\mathcal{E}$. For each edge $e_{uv} \in M$, we compare the associated time intervals of $e_{uv}$ with those of all the other edges in $M$ to find any edge in $M$ overlapping with $e_{uv}$. Performing this check for all the edges in $M$ detects any pair of overlapping edges, and can be done in polynomial time. Whether $|M| = g$ can also be easily checked in polynomial time. Hence, the D-MAX-0-1-TMT-2 problem is in NP. Next, we prove that there is a polynomial time reduction from the D-MAX-RBM-P problem to the D-MAX-0-1-TMT-2 problem.
Consider an instance $\langle P(V, E), h \rangle$ of the D-MAX-RBM-P problem where $P(V, E)$ is a properly edge coloured path, $V = \{v_0, v_1,\cdots,v_{n-1}\}$, $|V| = n$, $E = \{(v_0, v_1), (v_1, v_2), \cdots,$ $(v_{n-2}, v_{n-1})\}$, $|E| = n-1$, and $h$ is a positive integer. For our reduction, we assign a sequence of non-negative integers $A = \{a_i\,|\,a_i \in \mathbb{N}, a_i < n\}$, to the nodes in $V$ in the following way: we assign $a_i = i$ to the node $v_i$. Let $c$ be the number of different colours used to colour $P$. As $P$ is properly edge coloured and the number of edges in $P$ is $n-1$, we have $c \leq n-1$. We represent these colours with different integers from $n$ to $n+c-1$. From this given instance of the D-MAX-RBM-P problem, we construct an instance of the D-MAX-0-1-TMT-2 problem as follows. \begin{itemize} \item The temporal graph $\mathcal{G(V, E)}$ is constructed as \begin{itemize} \item We add a node $\nu(v_iv_{i+1})$ for each edge $(v_i, v_{i+1}) \in E$. Additionally we add a node $r$. Thus, $\mathcal{V}$ $:=$ $\{\nu(v_iv_{i+1})\,|\,\forall (v_i, v_{i+1}) \in E\} \cup \{r\}$. \item We add an edge between each added node $\nu(v_iv_{i+1}) \in \mathcal{V}$ and $r$. This edge exists for time intervals $(a_i, a_i+2)$ and $(c_j, c_j+1)$, where $a_i$ is the integer assigned to $v_i$ and $c_j$ is the integer representing the colour by which $(v_i, v_{i+1})$ is coloured.
Thus, $\mathcal{E}$ $:=$ $\{e(\nu(v_iv_{i+1}), r, (a_i, a_i+2), (c_j, c_j+1))\,|\, \forall (v_i, v_{i+1}) \in E\}$, \item $\mathcal{T}$ $:=$ $n+c$ \end{itemize} \item $g := h$ \end{itemize} \begin{figure} \begin{center} \includegraphics[scale=.50]{twoInterval.eps} \caption{(a) A properly edge coloured path where nodes are assigned some integers (dashed and continuous lines represent two different colours; the integer along an edge represents its colour), (b) Corresponding temporal tree rooted at node $r$} \label{RBMatching} \end{center} \end{figure} As each edge in $\mathcal{E}$ connects a node in $\mathcal{V} \setminus \{r\}$ to $r$, $\mathcal{G}$ is a temporal tree rooted at $r$ (we choose $r$ as the root node). According to the construction, the number of intervals associated with each edge in $\mathcal{E}$ is $2$. Any edge $e(u, v, (s_1, f_1), (s_2, f_2)) \in \mathcal{E}$ is also denoted as $e_{uv}$ when the time intervals for which this edge exists are not important. Figure \ref{RBMatching} shows the construction of a rooted temporal tree from a properly edge coloured path. We first show that if there is a solution for the instance of the D-MAX-0-1-TMT-2 problem, then there is a solution for the instance of the D-MAX-RBM-P problem. For a 0-1 timed matching $M$, $|M| = g$, for $\mathcal{G}$, we construct a rainbow matching $R$, $|R| = h$, for $P$ as follows. Consider the set of edges $R = \{(v_i, v_{i+1})\,|\,e_{r\nu(v_iv_{i+1})} \in M \}$. As $|M| = g$ and $g = h$, $|R| = h$. We prove that $R$ is a rainbow matching for $P$. We prove this by contradiction. Assume that $R$ is not a rainbow matching for $P$. This is possible in two cases. \begin{enumerate}[I.] \item There are at least two edges $(v_{i-1}, v_i), (v_i, v_{i+1})$ incident on the same node $v_i$ included in $R$. As $P$ is a path, according to the construction, $v_{i-1}, v_i, v_{i+1}$ are assigned three consecutive integers $a_{i-1}, a_i, a_{i+1}$ respectively.
This implies that time intervals $(a_{i-1}, a_{i-1}+2)$ and $(a_i, a_i+2)$ are associated with edges $e_{r\nu(v_{i-1}v_i)}$ and $e_{r\nu(v_iv_{i+1})}$ respectively, and both are included in $M$. As $a_i = a_{i-1}+1$, edges $e_{r\nu(v_{i-1}v_i)}$ and $e_{r\nu(v_iv_{i+1})}$ exist at the timestep $a_i$. This contradicts the fact that $M$ is a 0-1 timed matching for $\mathcal{G}$. \item There are two edges $(v_i, v_{i+1}), (v_j, v_{j+1})$ which are coloured with the same colour, $c_k$. This implies that edges $e_{r\nu(v_iv_{i+1})}$ and $e_{r\nu(v_jv_{j+1})}$ are included in $M$, and both exist at timestep $c_k$. This also contradicts the fact that $M$ is a 0-1 timed matching for $\mathcal{G}$. \end{enumerate} Next, we show that if there is no solution for the instance of the D-MAX-0-1-TMT-2 problem, then there is no solution for the instance of the D-MAX-RBM-P problem. To show this, we prove the contrapositive: if we have a solution for the instance of the D-MAX-RBM-P problem, then we have a solution for the instance of the D-MAX-0-1-TMT-2 problem. For a rainbow matching $R$, $|R| = h$, for $P$, we construct a 0-1 timed matching $M$ for $\mathcal{G}$ as follows. Consider the set $M = \{e_{r\nu(v_iv_{i+1})} \,|\, (v_i, v_{i+1}) \in R\}$. As $|R| = h$ and $h = g$, $|M| = g$. Next we show that $M$ is a 0-1 timed matching for $\mathcal{G}$. We prove this by contradiction. Assume that $M$ is not a 0-1 timed matching for $\mathcal{G}$. This implies that there are at least two edges $e_{r\nu(v_iv_{i+1})}$ and $e_{r\nu(v_jv_{j+1})}$ in $M$ such that there exists at least one timestep $t$ when both the edges exist. According to our construction of $\mathcal{G}$, $t < n+c$. There can be two possible cases. \begin{enumerate}[I.] \item {\em $0 \leq t \leq n-1$:} As $0 \leq t \leq n-1$, $t$ is assigned to some node in $V$. According to our assumption, edges $e_{r\nu(v_iv_{i+1})}$ and $e_{r\nu(v_jv_{j+1})}$ are in $M$ and both exist at timestep $t$.
This implies that timestep $t$ is present in both intervals $(a_i, a_i+2)$ and $(a_j, a_j+2)$, which are associated with $e_{r\nu(v_iv_{i+1})}$ and $e_{r\nu(v_jv_{j+1})}$ respectively (according to the construction of $\mathcal{G}$). Without loss of generality, we can assume that $a_i < a_j$. In this scenario, $a_i+1 = a_j = t$. Then, according to the assignment of integers to the nodes in $V$, $v_{i+1} = v_j$. This implies that the edges $(v_i, v_{i+1}), (v_j, v_{j+1}) \in R$ are incident on the same node $v_{i+1}$. This contradicts the fact that $R$ is a rainbow matching for $P$. \item {\em $n \leq t \leq n+c-1$:} As $n \leq t \leq n+c-1$, $t$ represents one colour which is used to colour the edges in $E$. According to the assignment of integers to the colours of each edge, $n \leq t \leq n+c-1$ implies that edges $(v_i, v_{i+1}), (v_j, v_{j+1}) \in R$ are coloured with the same colour, which is represented by $t$. This also contradicts the fact that $R$ is a rainbow matching for $P$. \end{enumerate} This completes the proof of this theorem. \end{proof} \subsection{Finding a Maximum 0-1 Timed Matching for a Rooted Temporal Tree with Single Time Interval per Edge} Next, we present a dynamic programming based algorithm for finding a maximum 0-1 timed matching for a rooted temporal tree $\mathcal{G(V,E)}$ with root $r \in \mathcal{V}$ where each edge of $\mathcal{G}$ is associated with a single time interval. In the rest of this paper, $T_v$ denotes the temporal subtree rooted at a node $v \in \mathcal{V}$ and $M_v$ denotes a maximum 0-1 timed matching for $T_v$. The algorithm orders the nodes in non-increasing order of depths, and then computes a maximum 0-1 timed matching for the subtrees rooted at each node in this order. For any leaf node $u_i$, $T_{u_i}$ has no edges, and hence $M_{u_i} = \emptyset$. To compute $M_v$ for $T_v$ where $v$ is a non-leaf node, two cases are possible: \begin{enumerate} \item No edge incident on $v$ in $T_v$ is included in $M_v$.
\item One or more edges incident on $v$ in $T_v$ are included in $M_v$. \end{enumerate} Let $TM1[v]$ and $TM2[v]$ denote the maximum 0-1 timed matching for $T_v$ which does not include any edge incident on $v$ (Case 1), and which includes at least one edge incident on $v$ (Case 2) respectively. Note that for a leaf node, $TM1[v] = \emptyset$ and $TM2[v] = \emptyset$. Then, it is easy to see that \begin{equation} \label{eqn3} M_v := cardMax(TM1[v], TM2[v]) \end{equation} where the function $cardMax(X, Y)$ returns the set of larger cardinality among the two sets $X$ and $Y$. We first describe the method for computing $TM1[v]$ for $T_v$ when, for all $u_i \in child(v)$, $M_{u_i}$ for $T_{u_i}$ are already computed. As $TM1[v]$ does not include any edge $e_{vu_i}$ for any $u_i \in child(v)$, \begin{equation} \label{eqn1} TM1[v] := \bigcup_{\forall u_i \in child(v)} M_{u_i} \end{equation} For example, in Figure \ref{exampleTree}, while computing $M_a$, $M_c = \{e_{ci}, e_{cj}\}$ and $M_d = \{e_{dl}, e_{dq}\}$ where $c$ and $d$ are two child nodes of $a$. Thus by Equation \ref{eqn1}, $TM1[a] = \{e_{ci}, e_{cj}, e_{dl}, e_{dq}\}$. \begin{figure}[t] \begin{center} \includegraphics[scale=.45]{exampleTree.eps} \caption{A temporal tree rooted at $r$} \label{exampleTree} \end{center} \end{figure} Next, we describe the method to compute $TM2[v]$ for $T_v$ when, for all $u_i \in child(v)$, $M_{u_i}$ for $T_{u_i}$ are already computed. We first define the following sets. \begin{definition} \textbf{Maximum Allowable Set for $T_v$:} A maximum allowable set for $T_v$, denoted by $A_v$, is a maximum cardinality set of edges incident on $v$, such that $A_v \cup (\bigcup_{\forall u_i \in child(v)} M_{u_i})$ is a 0-1 timed matching for $T_v$. \end{definition} Note that there can be more than one possible maximum allowable set for $T_v$. We give preference to a particular type of these sets, as defined below.
\begin{definition} \textbf{Maximum Feasible Set for $T_v$:} A maximum feasible set for $T_v$, denoted by $F_v$, is a maximum allowable set such that $F_v \cup \{e_{vP(v)}\}$ is a 0-1 timed matching for $T_{P(v)}$. If there is no such maximum allowable set (i.e., for any maximum allowable set $A_v$, $A_v \cup \{e_{vP(v)}\}$ is not a 0-1 timed matching for $T_{P(v)}$), then $F_v$ is set to an arbitrary maximum allowable set. \end{definition} Assuming that $F_v$ has been computed for each $T_v$, $TM2[v]$ is computed as follows. If $F_v = \emptyset$, then we set $TM2[v] := \emptyset$. If $F_v \neq \emptyset$, then \begin{equation} \label{eqn2} TM2[v] := F_v \cup (\bigcup_{\forall u_i \in child(v)} M_{u_i}) \end{equation} We illustrate the computation of $TM2[v]$ using the graph shown in Figure \ref{exampleTree}. While computing $M_a$, $M_c = \{e_{ci}, e_{cj}\}$ and $M_d = \{e_{dl}, e_{dq}\}$. Note that if edge $e_{ac}$ is included in a 0-1 timed matching for $T_a$, we need to remove both $e_{ci}$ and $e_{cj}$ to maintain the properties of a 0-1 timed matching. Similarly, $e_{ad}$ cannot be included along with $e_{dl}$ and $e_{dq}$ in any 0-1 timed matching for $T_a$. Thus for $T_a$, both $A_a$ and $F_a$ are $\emptyset$. Again, while computing $M_b$, both $\{e_{bg}\}$ and $\{e_{bh}\}$ are maximum allowable sets for $T_b$. But only $\{e_{bg}\} \cup \{e_{rb}\}$ is a 0-1 timed matching for $T_r$. Thus, $F_b = \{e_{bg}\}$. It can be observed that $M_a \cup \{e_{ra}\}$ and $M_b \cup \{e_{rb}\}$ are both 0-1 timed matchings for $T_r$, $e_{ra}$ and $e_{rb}$ are non-overlapping with each other, and $P(r) = \phi$. Thus, $F_r = \{e_{ra}, e_{rb}\}$. As $F_a = \emptyset$ for $T_a$, $TM2[a] = \emptyset$. Again, as $F_r = \{e_{ra}, e_{rb}\}$ for $T_r$, $TM2[r] = \{e_{ra}, e_{rb}, e_{ci}, e_{cj}, e_{dl}, e_{dq}, e_{bg}\}$. In Figure \ref{exampleTree}, as illustrated above, $TM1[a] = \{e_{ci}, e_{cj}, e_{dl}, e_{dq}\}$ and $TM2[a] = \emptyset$, hence $M_a = \{e_{ci}, e_{cj}, e_{dl}, e_{dq}\}$.
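As a compact illustration, the combination step in Equations \ref{eqn1}--\ref{eqn3} can be sketched in Python. This is our own rendering, not the authors' implementation: the function and argument names are hypothetical, and the maximum feasible set $F_v$ is assumed to be supplied.

```python
def card_max(x, y):
    """cardMax(X, Y): return the set of larger cardinality."""
    return x if len(x) >= len(y) else y

def matching_for_subtree(children, M, F_v):
    """Combine already-computed child matchings M[u] with the maximum
    feasible set F_v at a node v (Equations (1)-(3))."""
    tm1 = set()                        # Equation (1): no edge incident on v
    for u in children:
        tm1 |= M[u]
    tm2 = tm1 | F_v if F_v else set()  # Equation (2), only if F_v is non-empty
    return card_max(tm1, tm2)          # Equation (3)
```

With $F_a = \emptyset$ in the running example, the call reproduces $M_a = TM1[a] = \{e_{ci}, e_{cj}, e_{dl}, e_{dq}\}$.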
Similarly, $TM1[r] = \{ e_{ci}, e_{cj}, e_{dl}, e_{dq}, e_{bg}\}$ and $TM2[r] = \{e_{ra}, e_{rb}, e_{ci}, e_{cj}, e_{dl}, e_{dq}, e_{bg}\}$. Hence $M_r = \{e_{ra}, e_{rb}, e_{ci}, e_{cj}, e_{dl}, e_{dq}, e_{bg}\}$. Algorithm \ref{alg:dpMatching} describes the pseudocode of the proposed algorithm. The algorithm calls a function {\em createLevelList} to put all the nodes in $\mathcal{G}$ in different lists, with two nodes put in the same list if their depth in $\mathcal{G}$ is the same. After that, for all leaf nodes $u_i$, it assigns $TM1[u_i] := \emptyset$ and $TM2[u_i] := \emptyset$. Then it processes rooted temporal subtrees according to the non-increasing order of the depths of their root nodes in $\mathcal{G}$, starting from the temporal subtree rooted at the node with the maximum depth. For each non-leaf node $v$, the algorithm computes $TM1[v]$, $TM2[v]$ and $M_v$ following Equations \ref{eqn1}, \ref{eqn2} and \ref{eqn3} respectively. $F_v$ is computed using the function {\em computeFeasibleSet}.
\begin{algorithm}[t] \caption{dp0-1TimedMatching} \textbf{Input:} $\mathcal{G(V, E)}$, root node $r$\\ \textbf{Output:} $M \subseteq \mathcal{E}$, a maximum 0-1 timed matching \label{alg:dpMatching} \begin{algorithmic}[1] \If{$r = NULL$ \textbf{or} $child(r) = \emptyset$} \State $M := \emptyset$ \State \textit{return($M$)} \EndIf \ForAll{leaf nodes $u_i$} \State $TM1[u_i] := \emptyset$ \label{init} \State $TM2[u_i] := \emptyset$ \label{init1} \EndFor \State $nList :=$ createLevelList($r$) \For{$level = max\_depth \to 0$} \Comment{$max\_depth = $ maximum depth of a node} \While{$(v := nList[level].extractNode()) \neq \emptyset$} \State $TM1[v] := \bigcup_{u_i \in child(v)} M_{u_i}$ \label{TM1} \State $TM2[v] := \emptyset$ \State $F_v$ $:=$ computeFeasibleSet($v$, $child(v)$, $\forall u_i \in child(v) \, M_{u_i}$) \If{$F_v \neq \emptyset$} \State $TM2[v] := (\bigcup_{u_i \in child(v)} M_{u_i}) \cup F_v$ \label{TM2} \EndIf \State $M_v := cardMax(TM1[v], TM2[v])$ \EndWhile \EndFor \State \textit{return($M_r$)} \end{algorithmic} \end{algorithm} We next describe how to compute $F_v$ for all $T_v$ using the function {\em computeFeasibleSet} when, for all $u_i \in child(v)$, $M_{u_i}$ is already computed. For computing $F_v$, we first define the following. \begin{definition} \textbf{Minimum Removal Set for $e_{vu_i}$ on $M_{u_i}$:} The minimum removal set for $e_{vu_i}$ on $M_{u_i}$, denoted as $R_{M_{u_i}}(e_{vu_i})$, is a minimum cardinality set of edges that needs to be removed from $M_{u_i}$ such that $\{e_{vu_i}\} \cup (M_{u_i} \setminus R_{M_{u_i}}(e_{vu_i}))$ is a 0-1 timed matching for $T_v$, where $u_i \in child(v)$. \end{definition} In Figure \ref{exampleTree}, for $e_{ac}$ on $M_c$, $R_{M_c}(e_{ac}) = \{e_{ci}, e_{cj}\}$. Here $M_c = \{e_{ci}, e_{cj}\}$ and we need to remove $\{e_{ci}, e_{cj}\}$ from $M_c$ so that $\{e_{ac}\} \cup (M_c \setminus R_{M_c}(e_{ac}))$ is a 0-1 timed matching for $T_a$.
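Since every edge carries a single time interval here, the minimum removal set of $e_{vu_i}$ on $M_{u_i}$ consists exactly of the matched edges that overlap $e_{vu_i}$, and for edges sharing a node the overlap test reduces to interval intersection; a maximum cardinality non-overlapping subset of single-interval edges can then be selected by the classic earliest-finishing-time greedy in $O(n \log n)$ time. The following sketch uses our own hypothetical representation of an edge as a `(name, s, f)` triple for the half-open interval $[s, f)$; it is an illustration, not the authors' code:

```python
def overlaps(e1, e2):
    """Single-interval edges sharing a node overlap iff their
    half-open intervals [s, f) intersect."""
    (_, s1, f1), (_, s2, f2) = e1, e2
    return s1 < f2 and s2 < f1

def removal_set(e, matching):
    """Minimum removal set of edge e on a matching: with one interval
    per edge, it is exactly the set of matched edges overlapping e."""
    return [m for m in matching if overlaps(e, m)]

def max_non_overlapping(edges):
    """Maximum cardinality non-overlapping subset of single-interval
    edges: greedily keep the edge with the earliest finishing time."""
    chosen, last_finish = [], float('-inf')
    for e in sorted(edges, key=lambda x: x[2]):
        if e[1] >= last_finish:   # starts no earlier than the last finish
            chosen.append(e)
            last_finish = e[2]
    return chosen
```

With intervals chosen so that $e_{ac}$ spans both matched child edges, `removal_set` mirrors $R_{M_c}(e_{ac}) = \{e_{ci}, e_{cj}\}$ from the example above.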
To compute $F_v$, we first compute $R_{M_{u_i}}(e_{vu_i})$ for each edge $e_{vu_i}$ incident on $v$. Let $\tilde{\mathcal{R}}_0(v)$ be the set of edges incident on $v$ in $T_v$ such that for any edge $e_{vu_i} \in \tilde{\mathcal{R}}_0(v)$, $R_{M_{u_i}}(e_{vu_i}) = \emptyset$. Let $\tilde{\mathcal{R}}^P_0(v) = \tilde{\mathcal{R}}_0(v) \cup \{e_{vP(v)}\}$. Note that for the root $r$, $\tilde{\mathcal{R}}_0(r) = \tilde{\mathcal{R}}^P_0(r)$ as it has no parent. Then the algorithm computes $MaxNOS(\tilde{\mathcal{R}}_0(v))$ and $MaxNOS(\tilde{\mathcal{R}}^P_0(v))$, where $MaxNOS(S)$ denotes a maximum cardinality non-overlapping subset of a set of temporal edges $S$. If $|MaxNOS(\tilde{\mathcal{R}}^P_0(v))| > |MaxNOS(\tilde{\mathcal{R}}_0(v))|$, then $F_v$ is set to $MaxNOS(\tilde{\mathcal{R}}^P_0(v)) \setminus \{e_{vP(v)}\}$, else $F_v$ is set to $MaxNOS(\tilde{\mathcal{R}}_0(v))$. As an illustration, in Figure \ref{exampleTree}, while computing $F_b$, both $M_g$ and $M_h$ are $\emptyset$. As both $R_{M_g}(e_{bg})$ and $R_{M_h}(e_{bh})$ are $\emptyset$, $\tilde{\mathcal{R}}_0(b) = \{e_{bg}, e_{bh}\}$ and $\tilde{\mathcal{R}}^P_0(b) = \{e_{bg}, e_{bh}, e_{rb}\}$. Therefore $MaxNOS(\tilde{\mathcal{R}}_0(b))$ can be either $\{e_{bh}\}$ or $\{e_{bg}\}$, whereas $MaxNOS(\tilde{\mathcal{R}}^P_0(b)) = \{e_{bg}, e_{rb}\}$. Since $|MaxNOS(\tilde{\mathcal{R}}^P_0(b))| > |MaxNOS(\tilde{\mathcal{R}}_0(b))|$, $F_b = \{e_{bg}\}$. Similarly, while computing $F_c$, $\tilde{\mathcal{R}}_0(c) = \{e_{ci}, e_{cj}, e_{ck}\}$ and $\tilde{\mathcal{R}}^P_0(c) = \{e_{ac}, e_{ci}, e_{cj}, e_{ck}\}$. In this case $MaxNOS(\tilde{\mathcal{R}}_0(c))$ and $MaxNOS(\tilde{\mathcal{R}}^P_0(c))$ are both equal to $\{e_{ci}, e_{cj}\}$. Thus, $F_c = \{e_{ci}, e_{cj}\}$. Algorithm \ref{alg:feasibleSet} describes the details of computing $F_v$; it internally invokes two functions.
Function $maxNonOverlap(S)$ returns a maximum cardinality non-overlapping subset of edges from a set of temporal edges $S$. Function $interSect(e_{vu_i}, S)$ returns the set of edges in $S$ overlapping with $e_{vu_i}$. Both of these functions are easy to implement in polynomial time. \begin{algorithm} \caption{computeFeasibleSet($v$, $child(v)$, $\forall u_i \in child(v)\ M_{u_i}$)} \textbf{Input:} $v$, $child(v)$, $\forall u_i \in child(v)\ M_{u_i}$\\ \textbf{Output:} $F_v$ \label{alg:feasibleSet} \begin{algorithmic}[1] \ForAll{$u_i \in child(v)$} \State $R_{M_{u_i}}(e_{vu_i})$ $:=$ $interSect(e_{vu_i}, M_{u_i})$ \label{step1} \EndFor \State $\tilde{\mathcal{R}}_0(v) := \emptyset$ \ForAll{$u_i \in child(v)$} \label{step2} \If{$|R_{M_{u_i}}(e_{vu_i})| = 0$} \State $\tilde{\mathcal{R}}_0(v)$ $:=$ $\tilde{\mathcal{R}}_0(v) \cup \{e_{vu_i}\}$ \label{lrem} \EndIf \EndFor \If{$\tilde{\mathcal{R}}_0(v) \neq \emptyset$} \If{$P(v) \neq \phi$} \label{maxcond1} \State $\tilde{\mathcal{R}}^P_0(v) := \tilde{\mathcal{R}}_0(v) \cup \{e_{vP(v)}\}$ \label{lprem} \If{$|maxNonOverlap(\tilde{\mathcal{R}}_0(v))| < |maxNonOverlap(\tilde{\mathcal{R}}^P_0(v))|$} \State $F_v := maxNonOverlap(\tilde{\mathcal{R}}^P_0(v)) \setminus \{e_{vP(v)}\}$ \Else \State $F_v := maxNonOverlap(\tilde{\mathcal{R}}_0(v))$ \EndIf \Else \label{maxcond2} \State $F_v := maxNonOverlap(\tilde{\mathcal{R}}_0(v))$ \EndIf \Else \State $F_v := \emptyset$ \EndIf \State \textit{return($F_v$)} \end{algorithmic} \end{algorithm} \subsection{Proof of Correctness} \begin{lemma} \label{emptyTM2} Suppose that for each $u_i \in child(v)$, $M_{u_i}$ is already computed and there is no 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. Then for $T_v$, if $F_v = \emptyset$, then $|TM1[v]| \geq |\tilde{M}_v|$ where $\tilde{M}_v$ is a maximum 0-1 timed matching for $T_v$ that includes at least one edge incident on $v$.
\end{lemma} \begin{proof} We prove this lemma by contradiction. Assume that for each $u_i \in child(v)$, $M_{u_i}$ is already computed and there is no 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. Under this assumption, suppose that there exists a 0-1 timed matching $\tilde{M}_v$ for $T_v$ such that $\tilde{M}_v$ includes at least one edge incident on $v$ and $|\tilde{M}_v| > |TM1[v]|$ when $F_v = \emptyset$. As $F_v = \emptyset$, for all edges $e_{vu_i}$ incident on $v$ we have $|R_{M_{u_i}}(e_{vu_i})| > 0$. As $TM1[v] = \bigcup_{\forall u_i \in child(v)} M_{u_i}$, $|\tilde{M}_v| > |TM1[v]|$ is possible only if one of the following conditions holds. \begin{itemize} \item There is at least one $u_i \in child(v)$ such that $M_{u_i}$ is not a maximum 0-1 timed matching for $T_{u_i}$. This contradicts our assumption about each already computed $M_{u_i}$. \item There is at least one edge $e_{vu_i}$ for which $|\{e_{vu_i}\} \cup (M_{u_i} \setminus R_{M_{u_i}}(e_{vu_i}))| > |M_{u_i}|$ when $|R_{M_{u_i}}(e_{vu_i})| > 0$. This is impossible, since removing $|R_{M_{u_i}}(e_{vu_i})| \geq 1$ edges from $M_{u_i}$ and adding the single edge $e_{vu_i}$ cannot increase the cardinality. \item There is at least one $u_i \in child(v)$ such that another 0-1 timed matching $M'_{u_i}$ exists for $T_{u_i}$, where $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. According to our assumption about each already computed $M_{u_i}$, this is also impossible. \end{itemize} Hence no such $\tilde{M}_v$ exists, a contradiction. \end{proof} \begin{lemma} \label{maxTM2} Suppose that for each $u_i \in child(v)$, $M_{u_i}$ is already computed and there is no 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. Then if $F_v \neq \emptyset$, Equation \ref{eqn2} correctly computes $TM2[v]$ for $T_v$. \end{lemma} \begin{proof} We prove this lemma by contradiction.
Assume that for each $u_i \in child(v)$, $M_{u_i}$ is already computed and there is no 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. Under this assumption, suppose that there exists another 0-1 timed matching $M^*_v$ including edges incident on $v$, such that $|TM2[v]| < |M^*_v|$ when $F_v \neq \emptyset$. According to Equation \ref{eqn2}, $TM2[v]$ includes all edges in $M_{u_i}$ where $u_i \in child(v)$. As each $M_{u_i}$ is a maximum 0-1 timed matching for $T_{u_i}$, the number of edges in $M^*_v$ that are not incident on $v$ is at most the number of such edges in $TM2[v]$. Thus $|TM2[v]| < |M^*_v|$ is possible only in three cases: \begin{itemize} \item There exists another set of edges $F^*_v$ incident on $v$, such that $|F^*_v| > |F_v|$ and $(\bigcup_{\forall u_i \in child(v)} M_{u_i}) \cup F^*_v$ is also a 0-1 timed matching for $T_v$. But this contradicts the definition of $F_v$. \item There exists at least one edge $e_{vu_i} \in S_v$ ($S_v \subset M^*_v$ is the set of edges incident on $v$) for which $|R_{M_{u_i}}(e_{vu_i})| > 0$, but $|(M_{u_i} \setminus R_{M_{u_i}}(e_{vu_i})) \cup \{e_{vu_i}\}| > |M_{u_i}|$. This is impossible, since removing at least one edge from $M_{u_i}$ and adding the single edge $e_{vu_i}$ cannot increase the cardinality. \item There is at least one node $u_i \in child(v)$ such that there exists another 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$, where $|M_{u_i}| = |M'_{u_i}|$, and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. This contradicts our assumption about the computed $M_{u_i}$. \end{itemize} Hence no such $M^*_v$ exists, a contradiction. \end{proof} \begin{lemma} \label{lem:feasible} Algorithm \ref{alg:feasibleSet} correctly computes $F_v$ for $T_v$. \end{lemma} \begin{proof} Algorithm \ref{alg:dpMatching} invokes Algorithm \ref{alg:feasibleSet} for computing $F_v$.
Algorithm \ref{alg:dpMatching} processes each rooted temporal subtree according to the non-increasing order of the depth of its root node in $\mathcal{G}$, starting from a temporal subtree rooted at a node with the maximum depth ($max\_depth$). Here $max\_depth$ is the maximum depth of a node in $\mathcal{G}$. We use induction on the height of the rooted temporal subtree for which $F_v$ is computed. \textit{\textbf{Base Case:}} {\em Height of the rooted temporal subtree is $0$}: For the temporal subtrees with height $0$ rooted at any leaf node $x$, the computed value $F_x$ is $\emptyset$\footnote{This step is not shown explicitly in Algorithm \ref{alg:dpMatching}.}. As there are no edges in $T_x$, the computed $F_x$ is correct and it satisfies the condition that $F_x \cup \{e_{xP(x)}\}$ is a 0-1 timed matching for $T_{P(x)}$. \textit{\textbf{Inductive Step:}} Let this lemma hold for the rooted temporal subtrees with height up to $l$. We have to show that this lemma holds for rooted temporal subtrees with height $l+1$ also. Before processing a temporal subtree $T_v$ rooted at $v$ where the depth of $v$ in $\mathcal{G}$ is $max\_depth - (l+1)$, $M_{w_i}$ and $F_{w_i}$ for every $T_{w_i}$, where the depth of $w_i$ is greater than $max\_depth - (l+1)$ in $\mathcal{G}$, are computed by Algorithm \ref{alg:dpMatching} and Algorithm \ref{alg:feasibleSet}. Note that the height of each $T_{w_i}$ is at most $l$. As the height of each $T_{u_i}$, where $u_i \in child(v)$, is at most $l$, the $F_{u_i}$ already computed by Algorithm \ref{alg:feasibleSet} for each $T_{u_i}$ is correct. Suppose that the $F_v$ computed for $T_v$ by Algorithm \ref{alg:feasibleSet} is incorrect, i.e., there exists another set of edges $F'_v$ incident on $v$ such that $|F'_v| > |F_v|$ and $(\bigcup_{u_i \in child(v)} M_{u_i}) \cup F'_v$ is also a 0-1 timed matching for $T_v$. This is possible if at least one of the following cases is true.
\begin{enumerate} \item The function $maxNonOverlap$ returns a non-maximum set of non-overlapping edges. But this is not the case. \item Algorithm \ref{alg:feasibleSet} only considers the edges $e_{vu_i}$ for which $|R_{M_{u_i}}(e_{vu_i})| = 0$. Thus $F'_v$ can exist if it includes some edge $e_{vu_i}$ incident on $v$ for which there exists another 0-1 timed matching $M'_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M_{u_i}}(e_{vu_i})| > 0$ but $|R_{M'_{u_i}}(e_{vu_i})| = 0$. As only the edges in $F_{u_i}$ can be overlapping with edge $e_{vu_i}$, the existence of such an edge contradicts the fact that the already computed $F_{u_i}$ for $T_{u_i}$ is correct. \end{enumerate} Lines \ref{maxcond1} to \ref{maxcond2} of Algorithm \ref{alg:feasibleSet} ensure that when $P(v) \neq \phi$, if possible, the algorithm returns $F_v$ such that $F_v \cup \{e_{vP(v)}\}$ is a 0-1 timed matching for $T_{P(v)}$. Thus, $F_v$ is correctly computed for $T_v$ with height $l+1$. This proves the inductive step. \end{proof} \begin{theorem} Algorithm \ref{alg:dpMatching} correctly computes a maximum 0-1 timed matching for a given rooted temporal tree $\mathcal{G(V, E)}$. \label{thm:correctness} \end{theorem} \begin{proof} We prove this theorem by induction on the height of the rooted temporal subtree for which we are computing a maximum 0-1 timed matching. We prove that at each step the algorithm correctly computes $M_v$ and, if $P(v) \neq \phi$, then there is no other 0-1 timed matching $M'_v$ for $T_v$ such that $|M'_v| = |M_v|$ and $|R_{M'_v}(e_{vP(v)})| = 0$ but $|R_{M_v}(e_{vP(v)})| > 0$. \textit{\textbf{Base Case:}} {\em Height of the rooted temporal subtree is $0$}: In Lines \ref{init} and \ref{init1}, Algorithm \ref{alg:dpMatching} assigns $TM1[x] := \emptyset$ and $TM2[x] := \emptyset$ for every temporal subtree with height $0$ rooted at a leaf node $x$. As $T_x$ has no edges, the computed $M_x$ is correct and $|R_{M_x}(e_{xP(x)})| = 0$.
\textit{\textbf{Inductive Step:}} Let this theorem hold for the rooted temporal subtrees with height up to $l$. We need to prove that it also holds for the rooted temporal subtrees with height $l+1$. Algorithm \ref{alg:dpMatching} processes each rooted temporal subtree according to the non-increasing order of the depth of its root node in $\mathcal{G}$, starting from a temporal subtree rooted at a node with the maximum depth ($max\_depth$). Thus, while processing $T_v$ where the depth of $v$ in $\mathcal{G}$ is $max\_depth - (l+1)$, each $M_{w_i}$ is correctly computed for each $T_{w_i}$ with height at most $l$, where the depth of $w_i$ in $\mathcal{G}$ is at least $max\_depth - l$. Note that for each $T_{u_i}$, the depth of $u_i \in child(v)$ in $\mathcal{G}$ is $max\_depth - l$ and the height of each $T_{u_i}$ is at most $l$. Hence each $M_{u_i}$ is correct and there is no other 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. In Line \ref{TM1}, Algorithm \ref{alg:dpMatching} computes $TM1[v]$ using Equation \ref{eqn1}. As $TM1[v]$ does not include any edge incident on $v$, the computed $TM1[v]$ is a 0-1 timed matching for $T_v$. As each $M_{u_i}$ is correct, where $u_i \in child(v)$, the cardinality of $TM1[v]$ is the maximum among all 0-1 timed matchings for $T_v$ that do not include any edge incident on $v$. When $F_v = \emptyset$, Algorithm \ref{alg:dpMatching} returns $TM1[v]$ as $M_v$. Lemma \ref{emptyTM2} proves that when $F_v = \emptyset$, $|TM1[v]| \geq |\tilde{M}_v|$ where $\tilde{M}_v$ is a maximum 0-1 timed matching for $T_v$ which includes at least one edge incident on $v$. Thus, $TM1[v]$ is $M_v$ when $F_v = \emptyset$. As $TM1[v]$ does not include any edge incident on $v$, in this case $|R_{M_v}(e_{vP(v)})| = 0$. Thus, Algorithm \ref{alg:dpMatching} correctly computes $M_v$ for $T_v$ when $F_v = \emptyset$.
In Line \ref{TM2}, Algorithm \ref{alg:dpMatching} computes $TM2[v]$ using Equation \ref{eqn2}. Lemma \ref{lem:feasible} proves that the $F_v$ computed by Algorithm \ref{alg:feasibleSet} is correct. For each $u_i \in child(v)$, $M_{u_i}$ is already computed and there is no 0-1 timed matching $M'_{u_i}$ for $T_{u_i}$ such that $|M'_{u_i}| = |M_{u_i}|$ and $|R_{M'_{u_i}}(e_{vu_i})| = 0$ but $|R_{M_{u_i}}(e_{vu_i})| > 0$. Thus, from Lemma \ref{maxTM2}, the computed $TM2[v]$ is correct when $F_v \neq \emptyset$, and $M_v := cardMax(TM1[v], TM2[v])$ is then correctly computed. Observe that, while including $e_{vP(v)}$ in a 0-1 timed matching for $T_{P(v)}$, only the edges in $F_v$ can get removed. Thus, another 0-1 timed matching $M'_v$ for $T_v$ such that $|M_v| = |M'_v|$ and $|R_{M'_v}(e_{vP(v)})| = 0$ but $|R_{M_v}(e_{vP(v)})| > 0$ can exist only when the computed $F_v$ is incorrect, which contradicts Lemma \ref{lem:feasible}. This proves the inductive step. \end{proof} \begin{theorem} The time complexity of Algorithm \ref{alg:dpMatching} is $O(n^3)$. \label{thm:runtime} \end{theorem} \begin{proof} Algorithm \ref{alg:dpMatching} stores the nodes of the rooted temporal tree $\mathcal{G}$ in different lists according to their depth in $\mathcal{G}$. This can be done in $O(n)$ time. For any node $v$, to compute $M_v$ when information about each $M_{u_i}$ where $u_i \in child(v)$ is available, we need to compute $F_v$. Algorithm \ref{alg:dpMatching} invokes Algorithm \ref{alg:feasibleSet} to compute $F_v$. At Line \ref{step1}, Algorithm \ref{alg:feasibleSet} finds the edges in $M_{u_i}$ overlapping with edge $e_{vu_i}$, where $u_i \in child(v)$. As each edge is associated with one interval, the function \textit{interSect} does this in $O(n)$ time (the number of edges in $M_{u_i}$ is $O(n)$). Hence, the overall running time of this step is $O(n^2)$. The formation of $\tilde{\mathcal{R}}_0(v)$ at Lines \ref{step2} to \ref{lrem} takes $O(n)$ time.
Finding a maximum non-overlapping set from $\tilde{\mathcal{R}}_0(v)$ and $\tilde{\mathcal{R}}^P_0(v)$ takes $O(n \log n)$ time using function $maxNonOverlap$. Thus, the overall running time of Algorithm \ref{alg:feasibleSet} is $O(n^2)$. Hence, the computation of $M_{v}$ for any node $v$ takes $O(n^2)$ time, and the overall running time of the algorithm over all $n$ nodes is $O(n^3)$. \end{proof} \section{Problem Definition} \label{defin} In this section, we define a 0-1 timed matching for temporal graphs. We first define some terminology related to temporal graphs that we will need. For a given temporal graph $\mathcal{G(V, E)}$ with lifetime $\mathcal{T}$, the {\em underlying graph} of $\mathcal{G}$ is defined as the static graph $\mathcal{G}_U(\mathcal{V}, \mathcal{E}_U)$, where $\mathcal{E}_U = \{(u, v)\,|\, \exists t$ such that $e^t_{uv}$ is an instance of $e_{uv} \in \mathcal{E} \}$. Next, we define different types of temporal graphs. \begin{definition} \textbf{Temporal Tree:} A temporal graph $\mathcal{G(V, E)}$ is a temporal tree if the underlying graph of $\mathcal{G}$ is a tree. \end{definition} \begin{definition} \textbf{Rooted Temporal Tree:} A rooted temporal tree $\mathcal{G(V, E)}$ rooted at node $r$ is a temporal tree with one node $r \in \mathcal{V}$ chosen as the root of the tree. \end{definition} Note that the underlying graph of a rooted temporal tree is also a rooted tree. For any node $v$, let $P(v)$ denote the parent node of $v$ and $child(v)$ denote the set of child nodes of $v$ in the underlying graph of the rooted temporal tree. For the root node $r$, $P(r) = \phi$. The {\em depth} of a node $v$ in a rooted temporal tree $\mathcal{G}$ rooted at $r$ is the path length from $r$ to $v$ in $\mathcal{G}_U$. The {\em height} of a rooted temporal tree $\mathcal{G}$ is the maximum depth of any node in $\mathcal{G}$.
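The parent, child and depth notions above are determined entirely by the underlying tree $\mathcal{G}_U$. As a minimal sketch, $P(v)$, $child(v)$, the depth of each node, and the height can all be computed by a breadth-first search from the root; the adjacency-list input format here is our own assumption, not the paper's notation:

```python
from collections import deque

def root_tree(adj, r):
    """Given the underlying tree as an adjacency list and a root r,
    compute P(v), child(v), depth(v) and the height of the tree."""
    parent, depth = {r: None}, {r: 0}     # P(r) is empty
    children = {v: [] for v in adj}
    queue = deque([r])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in parent:           # w is a child of v
                parent[w], depth[w] = v, depth[v] + 1
                children[v].append(w)
                queue.append(w)
    return parent, children, depth, max(depth.values())
```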
\begin{figure} \begin{center} \includegraphics[scale=.50]{tree.eps} \caption{A temporal tree rooted at node $r$} \label{fig:tree} \end{center} \end{figure} Figure \ref{fig:tree} shows a temporal tree rooted at node $r$. In this temporal tree, $P(a) = P(b) = P(c) = r$ and $child(r) = \{a, b, c\}$. The depth of node $r$ is $0$. The depth of $a$, $b$ and $c$ is $1$, the depth of $d$ is $2$, and the depth of $f$ is $3$. The height of this rooted temporal tree is $3$. \begin{definition} \textbf{Bipartite Temporal Graph:} A temporal graph $\mathcal{G(V, E)}$ is bipartite if the underlying graph of $\mathcal{G}$ is a bipartite graph. \end{definition} \begin{definition} \textbf{Bounded Degree Temporal Graph:} A temporal graph $\mathcal{G(V, E)}$ is a bounded degree temporal graph, with the degree of each node bounded by some positive integer $k$, if the underlying graph of $\mathcal{G}$ is a bounded degree graph where the degree of each node is bounded by $k$. \end{definition} \begin{definition} \textbf{Bounded Degree Bipartite Temporal Graph:} A temporal graph $\mathcal{G(V, E)}$ is a bounded degree bipartite temporal graph if it is both a bipartite temporal graph and a bounded degree temporal graph. \end{definition} \begin{definition} \textbf{Overlapping Edge:} Any edge $e_{vw} \in \mathcal{E}$ is said to be overlapping with another edge $e_{uv}$ if there exists a timestep $t$ such that both $e^t_{vw}$ and $e^t_{uv}$ exist. \end{definition} Note that if $e_{uv}$ is overlapping with edge $e_{vw}$, then $e_{vw}$ is also overlapping with $e_{uv}$. We refer to such a pair of edges as {\em edges overlapping with each other}. In Figure \ref{fig:bipartite}, $e_{dg}$ is overlapping with $e_{fg}$ because both edges are incident on $g$ and both $e^1_{dg}$ and $e^1_{fg}$ exist. On the other hand, edges $e_{ab}$ and $e_{ad}$ are incident on the same node $a$, but there is no timestep $t$ when both $e^t_{ab}$ and $e^t_{ad}$ exist. Thus $e_{ab}$ and $e_{ad}$ are {\em non-overlapping with each other}.
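In general an edge may exist over several time intervals, so the overlap test for two edges incident on the same node reduces to asking whether any interval of one intersects any interval of the other. A small sketch under our own interval-list representation (half-open pairs $(s_i, f_i)$, as in the system model):

```python
def edges_overlap(intervals1, intervals2):
    """Two edges sharing a node are overlapping iff they exist at a
    common timestep, i.e. some pair of half-open intervals intersects."""
    return any(s1 < f2 and s2 < f1
               for (s1, f1) in intervals1
               for (s2, f2) in intervals2)
```

For instance, intervals $[1, 2)$ and $[1, 3)$ share timestep $1$, while $[0, 1)$ and $[1, 2)$ share no timestep, mirroring the two pairs of edges discussed above.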
For any two sets of edges $E_1$, $E_2$, if $E_1 \subseteq E_2$ and no two edges in $E_1$ are overlapping with each other, then $E_1$ is called a {\em non-overlapping subset} of $E_2$. \begin{definition} \textbf{0-1 Timed Matching:} A 0-1 timed matching $M$ for a given temporal graph $\mathcal{G(V, E)}$ is a non-overlapping subset of $\mathcal{E}$. \end{definition} \begin{definition} \textbf{Maximum 0-1 Timed Matching:} A maximum 0-1 timed matching for a given temporal graph is a 0-1 timed matching with the maximum cardinality. \end{definition} \begin{definition} \textbf{Maximal 0-1 Timed Matching:} A 0-1 timed matching $M$ for a given temporal graph $\mathcal{G(V, E)}$ is maximal if for any edge $e_{uv} \in \mathcal{E} \setminus M$, $M \cup \{e_{uv}\}$ is not a 0-1 timed matching for $\mathcal{G}$. \end{definition} \begin{figure} \begin{center} \includegraphics[scale=.70]{example2.eps} \caption{A bipartite temporal graph} \label{fig:bipartite} \end{center} \end{figure} For the bipartite temporal graph $\mathcal{G(V, E)}$ shown in Figure \ref{fig:bipartite}, a maximum 0-1 timed matching $M$ is $\{e_{ab}, e_{ad}, e_{cd}, e_{fg}\}$. $M$ is also a maximal 0-1 timed matching for $\mathcal{G}$. Consider another 0-1 timed matching $M' = \{e_{ab}, e_{ad}, e_{dg}\}$ for $\mathcal{G}$. $M'$ is a maximal 0-1 timed matching for $\mathcal{G}$ but not a maximum 0-1 timed matching. Note that the edges in a 0-1 timed matching for a given temporal graph may not form a matching in its underlying graph. In the next section, we explore the problem of finding a maximum 0-1 timed matching for a given rooted temporal tree. \section{Related Work} \label{relWork} The problem of finding a maximum matching is a well-studied problem for static graphs. In \cite{Edmonds65}, Edmonds proposed an $O(n^4)$ time algorithm, where $n$ is the number of nodes in the input graph.
Since then, many algorithms have been proposed to address the problem on both general graphs and other restricted classes of graphs \cite{Hopcroft71,Micali80,EvenT75,Even75,Kameda74,Cheriyan97,Mucha04,Mucha06,Mulmuley87}. Recently, a few works have addressed the problem of finding matchings in temporal graphs. In the existing literature, matchings in temporal graphs have been defined in different ways. For a given temporal graph, Michail et al. \cite{Michail16} consider the decision problem of whether there exists a maximum matching $M$ in the underlying graph such that a single label can be assigned to each edge of $M$, with the constraints that the label assigned to an edge is chosen from the timesteps when that edge exists in the temporal graph and that no two edges of $M$ are assigned the same label. They prove that this problem is NP-Hard. Baste et al. \cite{Baste19} define another type of temporal matching called $\gamma$-matching. $\gamma$-edges are defined as edges which exist for at least $\gamma$ consecutive timesteps. A maximum $\gamma$-matching is defined as a maximum cardinality subset of $\gamma$-edges such that no two $\gamma$-edges in the subset share any node at any timestep. They show that the problem of finding a maximum $\gamma$-matching is NP-Hard when $\gamma > 1$, and propose a 2-approximation algorithm for the problem. Mertzios et al. \cite{mertzios20} define another type of temporal matching called $\Delta$-matching, where two edge instances at timesteps $t, t'$ can be included in a matching if either those two edge instances do not share any node or $|t - t'|$ is greater than or equal to a positive integer $\Delta$. They prove that this problem is APX-Hard for any $\Delta \geq 2$ when the lifetime of the temporal graph is at least $3$, and that the problem remains NP-Hard even when the underlying graph of the temporal graph is a path.
An approximation algorithm is proposed to find a $\frac{\Delta}{2\Delta - 1}$-approximate maximum $\Delta$-matching for a given temporal graph with $n$ nodes, $m$ edges and lifetime $\mathcal{T}$ in $O(\mathcal{T}m(\sqrt{n}+ \Delta))$ time. In \cite{akrida20}, Akrida et al. address the maximum matching problem on stochastically evolving graphs represented using the stochastic arrival-departure model. In this model, each node in the temporal graph arrives and departs at certain times, and the arrival and departure times of each node are determined by independent probability distributions. A node exists in the time interval between its arrival time and its departure time. An edge between two nodes can exist if there is an intersection between the time intervals for which those two nodes exist. A matching on a stochastically evolving graph is defined as a subset of edges such that no two edges are incident on the same node. A fully polynomial randomized approximation scheme (FPRAS) is derived to approximate the expected size of a maximum cardinality matching on a stochastically evolving graph. An optimal probabilistic algorithm is proposed for the case when the model is defined over two timesteps. They also define the {\em price of stochasticity} and prove that it is upper-bounded by $\frac{2}{3}$. In this paper, we propose another type of matching for temporal graphs called {\em 0-1 timed matching}. We then investigate the complexity and algorithms for finding a maximum 0-1 timed matching in temporal graphs. \section{System Model} \label{model} We represent a temporal graph by the {\em evolving graphs} \cite{Ferreira02} model. In this model, a temporal graph is represented as a finite sequence of static graphs, each static graph being an undirected graph representing the graph at a discrete timestep. The total number of timesteps is called the {\em lifetime} of the temporal graph.
In this paper, we assume that the node set of the temporal graph remains unchanged throughout the lifetime of the temporal graph; only the edge set changes with time. All the changes in the edge set are known a priori. Also, there are no self-loops and at most one edge exists between any two nodes at any timestep. Thus, a temporal graph is denoted by $\mathcal{G(V, E)} = \{G_0(\mathcal{V}, \mathcal{E}_0), G_1(\mathcal{V}, \mathcal{E}_1), \cdots, G_{\mathcal{T} - 1}(\mathcal{V}, \mathcal{E}_{\mathcal{T} - 1})\}$, where $\mathcal{V}$ is the node set, $\mathcal{E} = \bigcup_{i=0}^{\mathcal{T}-1} \mathcal{E}_i$ is the edge set and $\mathcal{T}$ is the lifetime of the temporal graph. Each $G_i$ denotes the static graph at timestep $i$ with node set $\mathcal{V}$ and set of edges $\mathcal{E}_i$ that exist at timestep $i$. As only the edge set changes with time, each edge in $\mathcal{E}$ of a temporal graph $\mathcal{G}$ can be equivalently represented by specifying the time intervals for which the edge exists. Thus an edge $e \in \mathcal{E}$ between nodes $u$ and $v$ can be represented as $e(u, v, (s_1, f_1), (s_2, f_2), \cdots, (s_k, f_k))$, where $u, v \in \mathcal{V}$, $u \neq v$, $f_k \leq \mathcal{T}$ and a pair $(s_i, f_i)$ indicates that the edge exists for the time interval $[s_i, f_i)$, where $0 \leq s_i < f_i \leq \mathcal{T}$ (and hence the edge exists in all the static graphs $G_{s_i}, G_{s_i+1}, \cdots, G_{f_i - 1}$). Also, if an edge $e$ has two such pairs $(s_i, f_i)$ and $(s_j, f_j)$ with $s_i < s_j$, then $f_i < s_j$. Thus, the maximum number of time intervals for an edge can be $\lfloor \frac{\mathcal{T}}{2} \rfloor$. An edge at a single timestep is called an instance of that edge. For simplicity, we also denote the edge $e$ between nodes $u, v \in \mathcal{V}$ by $e_{uv}$ when the exact time intervals for which $e$ exists are not important. The corresponding instance of $e_{uv}$ at time $t$ is denoted by $e^t_{uv}$.
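The representation just described can be phrased as a small data type. The class `TemporalEdge` and its field names below are our own rendering of the notation $e(u, v, (s_1, f_1), \cdots, (s_k, f_k))$, not part of the paper:

```python
from dataclasses import dataclass

@dataclass
class TemporalEdge:
    u: str
    v: str
    intervals: list   # disjoint pairs (s_i, f_i), each meaning [s_i, f_i)

    def exists_at(self, t):
        """The instance e^t_uv exists iff some interval contains t."""
        return any(s <= t < f for (s, f) in self.intervals)
```

An edge with intervals $(0, 2)$ and $(4, 6)$ thus exists in the static graphs $G_0, G_1, G_4, G_5$ and nowhere else.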
\section{Finding a Maximum 0-1 Timed Matching for Temporal Graphs} \label{bipartite} In this section, we address the problem of finding a maximum 0-1 timed matching for a given temporal graph which is not a tree. We first analyse the computational complexity of the problem. After that, we analyse the approximation hardness of the problem of finding a maximum 0-1 timed matching for a given temporal graph when multiple time intervals are associated with each edge. Finally, we propose an approximation algorithm for the problem. \subsection{Complexity of Finding Maximum 0-1 Timed Matching in Temporal Graphs} We have already proved in Theorem \ref{thm:NPTree} that the problem of finding a maximum 0-1 timed matching for a given rooted temporal tree when each edge is associated with multiple time intervals is NP-Complete. From Theorem \ref{thm:NPTree}, we get the following result. \begin{corollary} The problem of finding a maximum 0-1 timed matching for temporal graphs when each edge is associated with multiple time intervals is NP-Complete. \end{corollary} Next, we investigate the computational complexity of the problem of finding a maximum 0-1 timed matching for temporal graphs when each edge is associated with a single time interval. We prove that this problem is also NP-Complete. In order to show that, we prove that this problem is NP-Complete even when the given temporal graph is a degree-bounded bipartite temporal graph such that the degree of each node is bounded by $3$ and each edge is associated with a single time interval. We refer to this problem as the D-MAX-0-1-TMBD3B-1 problem. We first define the D-MAX-0-1-TMBD3B-1 problem. In the rest of this paper, we refer to a bounded degree bipartite temporal graph where the degree of each node is bounded by $3$ as a BDG3B.
\begin{definition} \textbf{D-MAX-0-1-TMBD3B-1:} Given a BDG3B $\mathcal{G(V, E)}$ with lifetime $\mathcal{T}$, where each edge in $\mathcal{E}$ is associated with a single time interval, and a positive integer $r$, does there exist a 0-1 timed matching $M$ for $\mathcal{G}$ such that $|M| = r$? \end{definition} Next, we prove NP-Completeness of the D-MAX-0-1-TMBD3B-1 problem by showing that there is a polynomial time reduction from the 2P2N-3SAT problem \cite{Caragiannis19} to the D-MAX-0-1-TMBD3B-1 problem. The 2P2N-3SAT problem is known to be NP-Complete \cite{Caragiannis19} and is defined as follows. \begin{definition} \textbf{2P2N-3SAT:} Let $U=\{v_1, v_2, \cdots, v_m\}$ be a set of $m$ boolean variables and let $\Psi$ be a 3-CNF formula with $d$ clauses $C_1, C_2, \cdots, C_d$ such that each variable occurs exactly twice positively and twice negatively in $\Psi$. Does there exist a truth assignment which satisfies $\Psi$? \end{definition} \begin{theorem} The D-MAX-0-1-TMBD3B-1 problem is NP-Complete. \label{thm:bdg3} \end{theorem} \begin{proof} We first show that the D-MAX-0-1-TMBD3B-1 problem is in NP. Consider a certificate $\langle \langle \mathcal{G(V,E)},$ $r \rangle, M\rangle$, where $\mathcal{G}$ is a BDG3B with lifetime $\mathcal{T}$ such that each edge is associated with a single time interval, $r$ is a given integer, and $M$ is a given set of edges. We consider one edge $e_{uv} \in M$ at a time and compare the associated time interval of $e_{uv}$ with the associated time intervals of all the other edges in $M$. We perform this check for all the edges in $M$ to determine whether any two edges in $M$ overlap with each other. This checking can be done in polynomial time. Whether $|M| = r$ and $M \subseteq \mathcal{E}$ can also be easily checked in polynomial time. Hence, the D-MAX-0-1-TMBD3B-1 problem is in NP. Next, we prove that there is a polynomial time reduction from the 2P2N-3SAT problem to the D-MAX-0-1-TMBD3B-1 problem. 
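The certificate check in the NP membership argument above can be sketched as follows (an illustrative sketch only; the edge names and data layout are hypothetical, and each edge carries the single half-open interval assumed by the problem):

```python
# Sketch of the polynomial-time certificate check: verify that a
# candidate edge set M is a 0-1 timed matching of size r by O(|M|^2)
# pairwise comparisons. Edge names and intervals below are hypothetical;
# each edge carries a single half-open interval (s, f).

def is_valid_certificate(M, E, r):
    if len(M) != r or not set(M) <= set(E):
        return False
    edges = list(M)
    for i in range(len(edges)):
        u1, v1, (s1, f1) = edges[i]
        for j in range(i + 1, len(edges)):
            u2, v2, (s2, f2) = edges[j]
            # Overlap: shared node and intersecting intervals.
            if ({u1, v1} & {u2, v2}) and s1 < f2 and s2 < f1:
                return False
    return True

E = [("a", "b", (0, 3)), ("b", "c", (3, 5)), ("c", "d", (0, 2))]
M = [("a", "b", (0, 3)), ("b", "c", (3, 5))]
print(is_valid_certificate(M, E, 2))  # True: node b shared, intervals disjoint
```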
Let $\langle U, \Psi \rangle$ be an instance of the 2P2N-3SAT problem where $U=\{v_1, v_2, \cdots, v_m\}$ is a set of $m$ variables and $\Psi$ is a 3-CNF formula with $d = \frac{4m}{3}$ clauses $C_1, C_2, \cdots, C_d$ such that each clause $C_i$ consists of exactly $3$ literals and each variable occurs exactly twice positively and twice negatively in $\Psi$. A literal is a variable or the negation of a variable in $U$. Without loss of generality, we assume that each clause in $\Psi$ consists of distinct literals. From this instance of the 2P2N-3SAT problem, we construct an instance of the D-MAX-0-1-TMBD3B-1 problem as follows. \begin{itemize} \item We construct a temporal graph $\mathcal{G(V, E)}$ as follows: \begin{enumerate} \item We add a node $c_i$ to a set $A_1$ for each clause $C_i$ in $\Psi$. Thus, $A_1 := \{c_i\,|\, \forall C_i \in \Psi\}$. We add a node $a_i$ to a set $A_2$ for each variable $v_i$ in $U$. Thus, $A_2 := \{a_i\,|\, \forall v_i \in U\}$. We add two nodes $b_i, \Bar{b}_i$ to a set $B$ for each variable $v_i$ in $U$. Thus, $B := \{b_i, \Bar{b}_i\,|\, \forall v_i \in U\}$. Then the set of nodes $\mathcal{V}$ in the temporal graph $\mathcal{G}$ is $\mathcal{V} := A \cup B$, where $A := A_1 \cup A_2$. \item We add an edge between a node $c_i$ and $b_j$ which exists for the time interval $(i-1, i)$, if $v_j$ is a literal in clause $C_i$. Thus, $\mathcal{E}_1 := \{e(c_i, b_j, (i-1, i))\,|\, \forall v_j \in U, C_i \in \Psi,$ $v_j$ is a literal in $C_i\}$. We add an edge between node $c_i$ and $\bar{b}_j$ which exists for the time interval $(i-1, i)$, if $\bar{v}_j$ is a literal in clause $C_i$. Thus, $\mathcal{E}_2 := \{e(c_i, \bar{b}_j, (i-1, i))\,|\,\forall v_j \in U, C_i \in \Psi,$ $\bar{v}_j$ is a literal in $C_i\}$. We add an edge between each node $a_i$ and node $b_i$ which exists for the time interval $(0, d)$. 
Thus, $\mathcal{E}_3 := \{e(a_i, b_i, (0, d))\,|\, \forall a_i, b_i \in \mathcal{V}\}$. We add an edge between each node $a_i$ and node $\bar{b}_i$ which exists for the time interval $(0, d)$. Thus, $\mathcal{E}_4 := \{e(a_i, \bar{b}_i, (0, d))\,|\, \forall a_i, \bar{b}_i \in \mathcal{V}\}$. Then the set of edges $\mathcal{E}$ in the temporal graph $\mathcal{G}$ is $\mathcal{E} := \mathcal{E}_1 \cup \mathcal{E}_2 \cup \mathcal{E}_3 \cup \mathcal{E}_4$. \item $\mathcal{T} := d$ \end{enumerate} \item $r := d+m$ \end{itemize} As there are exactly 3 literals in each clause $C_j$ and, for each variable $v_i$, the literals $v_i$ and $\bar{v}_i$ are present in exactly two clauses each, $\mathcal{G}$ is a BDG3B. Any edge $e(u, v, (s, f)) \in \mathcal{E}$ is also denoted as $e_{uv}$ when the time interval for which this edge exists is not important. Figure \ref{fig:bounded} shows the construction of a BDG3B from an instance of the 2P2N-3SAT problem. \begin{figure} \begin{center} \includegraphics[scale=.52]{boundedBipartite.eps} \caption{Construction of a BDG3B from the input of an instance of the 2P2N-3SAT problem} \label{fig:bounded} \end{center} \end{figure} We first prove that if there is a solution for the D-MAX-0-1-TMBD3B-1 problem, then there is a truth assignment which satisfies $\Psi$. Given a 0-1 timed matching $M$ with $|M| = r$ for $\mathcal{G}$, we construct a truth assignment $\mathcal{S}$ for $\Psi$ as follows. \begin{enumerate} \item For each $e_{a_ib_i} \in M$, assign $v_i := \mathbf{false}$. \label{R1} \item For each $e_{a_i\bar{b}_i} \in M$, assign $v_i := \mathbf{true}$. \label{R2} \end{enumerate} We first show that, for each node $v \in A$, exactly one edge incident on $v$ is in $M$. According to our construction, $\mathcal{G}$ is bipartite and $|A| = m+d$. Moreover, any two edges incident on the same node in $A$ overlap with each other. Thus, no two edges incident on the same node in $A$ can be in $M$. 
As $|M| = m+d$ and $|A| = m+d$, for each node $v \in A$, there is exactly one edge incident on $v$ in $M$. This shows that, for each node $a_i \in A_2$, there is exactly one edge incident on $a_i$ in $M$. This implies that each variable $v_i \in U$ is assigned a truth value and no variable in $U$ is assigned both $\mathbf{true}$ and $\mathbf{false}$. Next we show that $\mathcal{S}$ satisfies all the clauses in $\Psi$. For any clause $C_k$, we show that at least one literal $v_i \in C_k$ or $\bar{v}_i \in C_k$ is satisfied. We have shown that, for each node $c_k \in A_1$, there is exactly one edge incident on $c_k$ in $M$. There are two possible cases: \begin{enumerate}[I.] \item {\em Some edge $e_{c_kb_i}$ incident on $c_k$ is in $M$:} This implies that if variable $v_i$ is set to \textbf{true}, the clause $C_k$ is satisfied. According to the truth assignment in $\mathcal{S}$, if $e_{a_i\bar{b}_i}$ is in $M$, then $v_i$ is set to \textbf{true}. Again, we have already proved that, for each $a_i \in A_2$, there is an edge incident on $a_i$ in $M$. If $e_{a_ib_i}$ is in $M$, then $e_{c_kb_i}$ cannot be in $M$ because these two edges overlap with each other. Hence, $e_{a_i\bar{b}_i} \in M$ and $C_k$ is satisfied. \item {\em Some edge $e_{c_k\bar{b}_i}$ incident on $c_k$ is in $M$:} This implies that if variable $v_i$ is set to \textbf{false}, the clause $C_k$ is satisfied. According to the truth assignment in $\mathcal{S}$, if $e_{a_ib_i}$ is in $M$, then $v_i$ is set to \textbf{false}. Again, we have already proved that, for each $a_i \in A_2$, there is an edge incident on $a_i$ in $M$. If $e_{a_i\bar{b}_i}$ is in $M$, then $e_{c_k\bar{b}_i}$ cannot be in $M$ because these two edges overlap with each other. Hence, $e_{a_ib_i} \in M$ and $C_k$ is satisfied. \end{enumerate} Thus, each clause $C_k$ in $\Psi$ is satisfied by $\mathcal{S}$ and each variable in $U$ is assigned either \textbf{true} or \textbf{false}. 
Hence, $\mathcal{S}$ is a truth assignment which satisfies $\Psi$. Next, we prove that if there is no solution for the instance of the D-MAX-0-1-TMBD3B-1 problem, then there is no truth assignment which satisfies $\Psi$. We prove the contrapositive: if there is a truth assignment which satisfies $\Psi$, then there is a solution for the instance of the D-MAX-0-1-TMBD3B-1 problem. For a satisfying truth assignment $\mathcal{S}$ of $\Psi$, we construct a 0-1 timed matching $M$ of size $m+d$ for $\mathcal{G}$ as follows. \begin{enumerate} \item For any variable $v_i \in U$, if $v_i = \mathbf{false}$ in $\mathcal{S}$, we include $e_{a_ib_i}$ in $M$. \item For any variable $v_i \in U$, if $v_i = \mathbf{true}$ in $\mathcal{S}$, we include $e_{a_i\bar{b}_i}$ in $M$. \item For any clause $C_l$ in $\Psi$, we select one literal of $C_l$ that evaluates to $\textbf{true}$ under the assignment $\mathcal{S}$. If the selected literal is $v_i$, we include $e_{c_lb_i}$ in $M$; if the selected literal is $\bar{v}_i$, we include $e_{c_l\bar{b}_i}$ in $M$. Note that, for each clause $C_l$, we include exactly one such edge incident on $c_l$ in $M$. \end{enumerate} We first prove that $M$ is a 0-1 timed matching for $\mathcal{G}$. We prove this by contradiction. Assume that $M$ is not a 0-1 timed matching for $\mathcal{G}$. Then there are at least two edges in $M$ which overlap with each other. According to the construction of $\mathcal{G}$, any two edges incident on two different nodes in $A_1$ are non-overlapping with each other. $M$ includes only one edge incident on each node in $A_1$. As each variable $v_i$ is assigned either $\mathbf{true}$ or $\mathbf{false}$, only one edge incident on a node in $A_2$ is included in $M$. Thus, two edges can overlap only in the following two cases: \begin{enumerate}[I.] 
\item There is some node $b_i \in B$ such that $e_{a_ib_i}$ and some edge $e_{c_jb_i}$ are both included in $M$. According to the construction of $M$, $e_{a_ib_i} \in M$ implies that $v_i$ is assigned $\mathbf{false}$. Again, $e_{c_jb_i} \in M$ implies that $v_i = \mathbf{true}$ and $v_i \in C_j$. Thus, $e_{a_ib_i}$ and $e_{c_jb_i}$ can both be included in $M$ only when $v_i$ is assigned both $\mathbf{true}$ and $\mathbf{false}$. This is a contradiction. \item There is some node $\bar{b}_i \in B$ such that $e_{a_i\bar{b}_i}$ and some edge $e_{c_j\bar{b}_i}$ are both included in $M$. According to the construction of $M$, $e_{a_i\bar{b}_i} \in M$ implies that $v_i$ is assigned $\mathbf{true}$. Again, $e_{c_j\bar{b}_i} \in M$ implies that $v_i = \mathbf{false}$ and $\bar{v}_i \in C_j$. Thus, $e_{a_i\bar{b}_i}$ and $e_{c_j\bar{b}_i}$ can both be included in $M$ only when $v_i$ is assigned both $\mathbf{true}$ and $\mathbf{false}$. This is also a contradiction. \end{enumerate} Next, we prove that $|M| = m+d = r$. We prove this by contradiction. There are the following two possible cases: \begin{enumerate}[I.] \item {\em $|M| > m+d$:} According to the construction of $M$, for each node $v \in A$, only one edge incident on $v$ is included in $M$. As $\mathcal{G}$ is bipartite and $|A| = m+d$, $|M| > m+d$ is impossible. \item {\em $|M| < m+d$:} $\mathcal{S}$ satisfies $\Psi$. There are $d$ clauses in $\Psi$. Hence, according to the construction of $M$, one edge incident on each node in $A_1$ is included in $M$. Thus, $|M| < m+d$ implies that there is at least one node $a_i \in A_2$ such that no edge incident on $a_i$ is included in $M$. This implies that there is a variable $v_i \in U$ which is not assigned any truth value in $\mathcal{S}$. This is a contradiction. \end{enumerate} Hence, $|M| = m+d = r$. This completes the proof. \end{proof} The following result directly follows from Theorem \ref{thm:bdg3}. 
\begin{corollary} The problem of finding a maximum 0-1 timed matching for temporal graphs when each edge is associated with a single time interval is NP-Complete. \end{corollary} \subsection{Approximation Hardness of Finding Maximum 0-1 Timed Matching for Temporal Graphs} \input{hardness} \subsection{An Approximation Algorithm for Finding Maximum 0-1 Timed Matching for Temporal Graphs} Next, we propose a greedy approximation algorithm to address the problem of finding a maximum 0-1 timed matching for a given temporal graph $\mathcal{G(V, E)}$ when each edge is associated with multiple time intervals. Recall that, as per the assumptions about $\mathcal{G(V, E)}$ mentioned in Section \ref{model}, there is no self-loop and there is only one edge between any two nodes in $\mathcal{V}$. It can be observed that, while constructing a 0-1 timed matching $M$ for $\mathcal{G(V, E)}$, if an edge $e_{uv} \in \mathcal{E}$ is selected in $M$, then no other edge overlapping with $e_{uv}$ can be selected in $M$. Following this observation, we design a greedy approximation algorithm which constructs $M$ for $\mathcal{G}$ by adding one edge at a time. Before describing the details of the algorithm, we define a parameter called the {\em overlapping number of edge $e_{uv}$} as follows. \begin{definition} \textbf{Overlapping Number of Edge $e_{uv}$:} The overlapping number of an edge $e_{uv} \in \mathcal{E}$, denoted by $\mathcal{N}(e_{uv})$, is the number of edges in $\mathcal{E}$ overlapping with $e_{uv}$. \end{definition} At each step, the proposed algorithm chooses the edge $e_{uv}$ with the minimum overlapping number (ties are broken arbitrarily) and adds it to $M$. After this addition, the algorithm removes $e_{uv}$ and all the edges overlapping with $e_{uv}$ from $\mathcal{E}$. Thus, due to the addition of an edge to $M$ at each round, the overlapping number of an edge which is not yet deleted from $\mathcal{E}$ either decreases or remains the same. 
The algorithm terminates when $\mathcal{E}$ is empty. It can be noted that, once an edge $e_{uv}$ is added to $M$, it is never removed from $M$. Algorithm \ref{alg:greedTM} shows the pseudocode of the algorithm. Function $computeOverlappingNo(e_{uv},S)$ computes the overlapping number of an edge $e_{uv}$ with respect to the edge set $S$. For each edge $e_{uv} \in \mathcal{E}$, it maintains a set $\mathcal{C}(e_{uv})$, which is the set of edges overlapping with $e_{uv}$. \begin{figure} \begin{center} \includegraphics[scale=.5]{temporal.eps} \caption{A temporal graph} \label{fig:tempG} \end{center} \end{figure} In Figure \ref{fig:tempG}, initially $\mathcal{N}(e_{ab}) = 1$, $\mathcal{C}(e_{ab}) = \{e_{af}\}$, $\mathcal{N}(e_{bc}) = 1$, $\mathcal{C}(e_{bc}) = \{e_{cf}\}$, $\mathcal{N}(e_{cd}) = 2$, $\mathcal{C}(e_{cd}) = \{e_{cf}, e_{bd}\}$, $\mathcal{N}(e_{df}) = 1$, $\mathcal{C}(e_{df}) = \{e_{cf}\}$, $\mathcal{N}(e_{af}) = 2$, $\mathcal{C}(e_{af}) = \{e_{cf}, e_{ab}\}$, $\mathcal{N}(e_{cf}) = 4$, $\mathcal{C}(e_{cf}) = \{e_{af}, e_{df}, e_{cd}, e_{bc}\}$ and $\mathcal{N}(e_{bd}) = 1$, $\mathcal{C}(e_{bd}) = \{e_{cd}\}$. 
At round 1, Algorithm \ref{alg:greedTM} adds edge \begin{algorithm} \caption{Greedy-0-1-TimedMatching} \textbf{Input:} $\mathcal{G(V, E)}$ with lifetime $\mathcal{T}$\\ \textbf{Output:} $M$, a maximal 0-1 timed matching \label{alg:greedTM} \begin{algorithmic}[1] \State $M := \emptyset$ \ForAll{$e_{xy} \in \mathcal{E}$} \label{s01} \State $computeOverlappingNo(e_{xy}, \mathcal{E})$ \label{s02} \EndFor \While{$\mathcal{E} \neq \emptyset$} \label{swhile} \State $e_{uv}$ $:=$ $\arg\min\,\{\mathcal{N}(e_{xy})\,|\, e_{xy} \in \mathcal{E}\}$ \label{s03} \State $M$ $:=$ $M \cup \{e_{uv}\}$ \label{ll1} \State $\mathcal{E} := \mathcal{E} \setminus \{e_{uv}\}$ \label{ll1a} \ForAll{$e_{uw_k} \in \mathcal{C}(e_{uv})$} \label{ll2} \State $\mathcal{E} := \mathcal{E} \setminus \{e_{uw_k}\}$ \label{ll3} \ForAll{$e_{w_kx_k} \in \mathcal{C}(e_{uw_k})$} \State $\mathcal{C}(e_{w_kx_k}) := \mathcal{C}(e_{w_kx_k}) \setminus \{e_{uw_k}\}$ \State $\mathcal{N}(e_{w_kx_k}) := \mathcal{N}(e_{w_kx_k}) - 1$ \EndFor \EndFor \EndWhile \label{ewhile} \State \textit{return($M$)} \end{algorithmic} \end{algorithm} $e_{bc}$ to $M$ and removes edge $e_{cf}$ along with $e_{bc}$ from $\mathcal{E}$. Thus, $\mathcal{N}(e_{df})$ and $\mathcal{C}(e_{df})$ get updated to $0$ and $\emptyset$, respectively. This removal of edges also updates $\mathcal{N}(e_{cd})$, $\mathcal{N}(e_{af})$, $\mathcal{C}(e_{cd})$ and $\mathcal{C}(e_{af})$ to $1$, $1$, $\{e_{bd}\}$ and $\{e_{ab}\}$, respectively. In round 2, $e_{df}$ is added to $M$. In round 3, the algorithm adds $e_{ab}$ to $M$, and $e_{af}$ along with $e_{ab}$ gets removed from $\mathcal{E}$. In the next round, $e_{cd}$ is added to $M$ and $e_{bd}$ along with $e_{cd}$ gets removed. 
Thus, the final 0-1 timed matching is $M = \{e_{bc}, e_{df}, e_{ab}, e_{cd}\}$. \begin{theorem} Algorithm \ref{alg:greedTM} correctly computes a maximal 0-1 timed matching $M$ for a given temporal graph $\mathcal{G(V, E)}$ in $O(m^2 + m\Delta^2 + m \Delta \mathcal{T} \log \mathcal{T})$ time, where $\Delta$ is the maximum degree of a node in $\mathcal{G}_U$ and $|\mathcal{E}| = m$. \end{theorem} \begin{proof} First, we prove that the set $M$ computed by Algorithm \ref{alg:greedTM} is a maximal 0-1 timed matching for $\mathcal{G}$. We prove this by contradiction. Assume that $M$ is not a maximal 0-1 timed matching for $\mathcal{G}$. There are two possible scenarios. \begin{enumerate} \item $M$ is not a 0-1 timed matching for $\mathcal{G}$. This implies that there are at least two edges $e_{uv}$ and $e_{vw}$ in $M$ which overlap with each other. But at any round of edge selection, at Line \ref{ll1}, if Algorithm \ref{alg:greedTM} adds $e_{uv}$ to $M$, then at Lines \ref{ll2} and \ref{ll3}, it removes all the edges overlapping with $e_{uv}$ from the edge set. Thus, Algorithm \ref{alg:greedTM} cannot add $e_{vw}$ to $M$. Using a similar argument, it can be proved that if Algorithm \ref{alg:greedTM} adds $e_{vw}$ to $M$, then $e_{uv}$ cannot be added to $M$. Hence, $M$ is a 0-1 timed matching for $\mathcal{G}$. \item There exists an edge $e_{uv}$ which is not included in $M$ but which does not overlap with any edge in $M$. As Algorithm \ref{alg:greedTM}, at Lines \ref{ll2} and \ref{ll3}, only removes overlapping edges, the existence of $e_{uv}$ contradicts the termination condition of Algorithm \ref{alg:greedTM}. Thus, $M$ is a maximal 0-1 timed matching for $\mathcal{G}$. \end{enumerate} At Lines \ref{s01} and \ref{s02}, Algorithm \ref{alg:greedTM} finds the overlapping number and the set of overlapping edges for each edge in $\mathcal{E}$. 
As the maximum number of edges incident on any two nodes in $\mathcal{V}$ is $O(\Delta)$, this step takes $O(m \Delta \mathcal{T} \log \mathcal{T})$ time. After that, in each iteration of the while loop from Lines \ref{swhile} to \ref{ewhile}, at Line \ref{s03}, the algorithm selects the edge $e_{uv}$ with the minimum overlapping number and adds it to $M$. This selection takes $O(m)$ time. Then, it removes the edges overlapping with $e_{uv}$ incident on $u$ and $v$. Also, the maximum number of edges overlapping with $e_{uv}$ is $O(\Delta)$. Hence, the deletion of these edges and the update of the overlapping number and the set of overlapping edges of the remaining edges in $\mathcal{E}$ take $O(\Delta^2)$ time. Thus, each edge addition to $M$ takes $O(m + \Delta^2)$ time. The maximum number of edges that can be added to $M$ is $O(m)$. Hence, the overall running time of Algorithm \ref{alg:greedTM} is $O(m^2 + m\Delta^2 + m \Delta \mathcal{T} \log \mathcal{T})$. \end{proof} \begin{theorem} Algorithm \ref{alg:greedTM} finds a $\frac{1}{\mathcal{N}^* + 1}$-approximate maximum 0-1 timed matching for a given temporal graph $\mathcal{G(V, E)}$, where $|\mathcal{V}| = n$, $|\mathcal{E}| = m$ and $\mathcal{N}^*$ is the average overlapping number over the edges in $\mathcal{E}$. \end{theorem} \begin{proof} Assume that, at the $i^{th}$ iteration of the while loop from Lines \ref{swhile} to \ref{ewhile}, the selected edge is $e_{x_iy_i}$ and its overlapping number at the start of this iteration is $\mathcal{N}^i(e_{x_iy_i})$. Thus, at the $i^{th}$ iteration of the while loop, the number of edges deleted from $\mathcal{E}$ is $\mathcal{N}^i(e_{x_iy_i})+1$. Assume that the greedy algorithm terminates after $t$ rounds. Thus, we get the following equation. 
\begin{center} \begin{eqnarray} \label{eqng1} \sum_{i=1}^t (\mathcal{N}^i(e_{x_iy_i})+1) = m \end{eqnarray} \end{center} As the algorithm selects the edge with the minimum overlapping number, the overlapping number of each deleted edge overlapping with $e_{x_iy_i}$ is at least $\mathcal{N}^i(e_{x_iy_i})$. Thus, the sum of the overlapping numbers of all the edges deleted in the $i^{th}$ iteration is at least $\mathcal{N}^i(e_{x_iy_i})(\mathcal{N}^i(e_{x_iy_i})+1)$. This leads to the following inequality. \begin{center} \begin{eqnarray} \label{eqng2} \sum_{i=1}^t \mathcal{N}^i(e_{x_iy_i})(\mathcal{N}^i(e_{x_iy_i})+1) \leq m \mathcal{N}^* \end{eqnarray} \end{center} Adding Equations \ref{eqng1} and \ref{eqng2}, we get \begin{center} \begin{eqnarray} \sum_{i=1}^t (\mathcal{N}^i(e_{x_iy_i})+1)^2 \leq m(\mathcal{N}^*+1) \label{eqng3} \\ \frac{m^2}{t} \leq m(\mathcal{N}^*+1) \label{eqng4} \\ \frac{m}{\mathcal{N}^*+1} \leq t \nonumber \end{eqnarray} \end{center} We get Equation \ref{eqng4} from Equation \ref{eqng3} by applying the Cauchy-Schwarz inequality together with Equation \ref{eqng1}, since $m^2 = \left(\sum_{i=1}^t (\mathcal{N}^i(e_{x_iy_i})+1)\right)^2 \leq t \sum_{i=1}^t (\mathcal{N}^i(e_{x_iy_i})+1)^2$. As Algorithm \ref{alg:greedTM} selects one edge at each iteration of the while loop from Lines \ref{swhile} to \ref{ewhile}, the size of the constructed 0-1 timed matching is $t$. The cardinality of the maximum 0-1 timed matching for $\mathcal{G}$ can be at most $m$. Hence, the approximation ratio of Algorithm \ref{alg:greedTM} is at least $\frac{1}{\mathcal{N}^*+1}$. \end{proof} It can be observed that the overlapping number of any edge $e_{uv} \in \mathcal{E}$ depends on the degrees of $u$ and $v$ in $\mathcal{G}_U$. Thus, the average overlapping number $\mathcal{N}^*$ of the edges in a temporal graph $\mathcal{G(V, E)}$ also depends on the degrees of the nodes in $\mathcal{G}_U$. This indicates that Algorithm \ref{alg:greedTM} produces a 0-1 timed matching with a good approximation ratio for sparse graphs. 
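For concreteness, Algorithm \ref{alg:greedTM} can be rendered compactly in Python as follows (a minimal sketch under the same overlap semantics, not the authors' implementation; the toy graph is hypothetical, not the one from the running example):

```python
# Minimal rendering of Greedy-0-1-TimedMatching: repeatedly pick an edge
# with the minimum overlapping number, add it to M, then delete it
# together with every edge overlapping it, updating the overlap sets C
# (and hence the counts N) of the remaining edges. An edge is a tuple
# (u, v, intervals) with half-open intervals.

def overlap(e1, e2):
    (u1, v1, ia), (u2, v2, ib) = e1, e2
    return bool({u1, v1} & {u2, v2}) and any(
        s1 < f2 and s2 < f1 for (s1, f1) in ia for (s2, f2) in ib)

def greedy_timed_matching(edges):
    E = list(edges)
    # C[e]: set of edges overlapping e; the overlapping number N(e) = len(C[e]).
    C = {e: {f for f in E if f != e and overlap(e, f)} for e in E}
    M = []
    while E:
        e = min(E, key=lambda x: len(C[x]))   # minimum overlapping number
        M.append(e)
        removed = {e} | C[e]
        E = [f for f in E if f not in removed]
        for f in removed:                      # update remaining overlap sets
            for g in C[f]:
                C[g].discard(f)
    return M

E = [("a", "b", ((0, 3),)), ("b", "c", ((0, 3),)), ("c", "d", ((4, 6),))]
M = greedy_timed_matching(E)
print([(u, v) for u, v, _ in M])  # [('c', 'd'), ('a', 'b')]
```

Here $e_{cd}$ is selected first because its overlapping number is $0$, after which $e_{ab}$ and $e_{bc}$ overlap each other and only one of them can join $M$.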
The algorithm produces a 0-1 timed matching with a good approximation ratio even when a small number of edges have high overlapping numbers, provided the average overlapping number per edge in $\mathcal{E}$ is small. It can also be observed that, in any temporal graph $\mathcal{G(V, E)}$, for any edge $e_{uv} \in \mathcal{E}$, the maximum value of the overlapping number $\mathcal{N}(e_{uv})$ of $e_{uv}$ is $2\Delta - 2$, where $\Delta$ is the maximum degree of any node in $\mathcal{G}_U$. Thus, the maximum value of $\mathcal{N}^*$ in any temporal graph is $2\Delta - 2$. Hence, the minimum value of the approximation ratio of Algorithm \ref{alg:greedTM} is $\frac{1}{2\Delta-1}$. Corollary \ref{cor:bounded} follows directly from this. \begin{corollary} \label{cor:bounded} Algorithm \ref{alg:greedTM} is a $\frac{1}{2B-1}$-approximation algorithm for a given bounded degree temporal graph where the degree of each node is bounded by $B$ and each edge is associated with multiple time intervals. Thus, for bounded degree temporal graphs where the degree of each node is bounded by a constant, Algorithm \ref{alg:greedTM} is a constant factor approximation algorithm. \end{corollary} \section{Conclusion} \label{conclusion} In this paper, we have defined {\em 0-1 timed matching} on temporal graphs, and investigated the problem of finding a maximum 0-1 timed matching for different types of temporal graphs. We have proved that this problem is NP-Complete for a rooted temporal tree when each edge is associated with $2$ or more time intervals, and proposed an $O(n^3)$ time dynamic programming based algorithm for a rooted temporal tree with $n$ nodes when each edge is associated with a single time interval. It is also proved that this problem is NP-Complete for a bounded degree bipartite temporal graph, where the degree of each node is bounded by $3$ or more, such that each edge is associated with a single time interval. 
We have also proved that there is no $\frac{1}{n^{1-\epsilon}}$-factor approximation algorithm, for any $\epsilon > 0$, for the problem of finding a maximum 0-1 timed matching, even for a rooted temporal tree when each edge is associated with multiple time intervals, unless NP = ZPP. Then, we have proposed a greedy approximation algorithm to address the problem for a temporal graph when each edge is associated with multiple time intervals. This work can be extended to consider the problem on other classes of temporal graphs.
\subsection{Tensor and Notations} \label{sec:prelim:tensor} Tensors are multi-dimensional arrays that generalize one-dimensional arrays (or vectors) and two-dimensional arrays (or matrices) to higher dimensions. The number of dimensions of a tensor is referred to as its order or way; the length of each mode is called its dimensionality and denoted by $I_1, \cdots, I_N$. We use boldface Euler script letters (e.g., $\T{X}$) to denote tensors, boldface capitals (e.g., $\mat{A}$) to denote matrices, and boldface lower cases (e.g., $\vect{a}$) to denote vectors. The $\alpha = (i_1, \cdots, i_N)$th entry of tensor $\T{X}$ is denoted by $x_{\alpha}$. A slice of a 3-order tensor is a two-dimensional subset of it. A 3-order tensor $\tensor{X}$ has horizontal, lateral, and frontal slices, denoted by $\mat{X}_{i_1::}$, $\mat{X}_{:i_2:}$, and $\mat{X}_{::i_3}$, respectively. A tensor containing a mode representing time is called a \textit{temporal tensor}. A time slice in a $3$-mode temporal tensor is a two-dimensional subset obtained by fixing a time index. For example, $\mat{X}_{i_t::}$ is the $i_t$th time slice when the first mode is the time mode. For brevity, we express $\mat{X}_{i_t::}$ as $\mat{X}_{i_t}$. Our proposed method is not limited to $3$-mode tensors: a time slice in an $N$-order temporal tensor corresponds to an $(N-1)$-dimensional subset of the tensor sliced at a time index. We formally define a time slice $\T{X}_{i_{t}}$ as follows: \begin{defn}[Time slice $\T{X}_{i_{t}}$] \label{def:timeslice} {Given an $N$-order tensor $\T{X} \in \mathbb{R}^{I_1\times \cdots \times I_{N}}$ and a time mode $t$, we extract time slices of size $I_1\times \cdots \times I_{t-1} \times I_{t+1}\cdots \times I_{N}$ by slicing the tensor $\T{X}$ so that the $i_{t}$th time slice $\T{X}_{i_{t}} \in \mathbb{R}^{I_1\times \cdots \times I_{t-1} \times I_{t+1}\cdots \times I_{N}}$ is an $(N-1)$-order tensor obtained at time $i_{t}$, where $1 \leq i_{t} \leq I_{t}$. 
\QEDB} \end{defn} The \textit{Frobenius norm} of a tensor $\T{X} \in \mathbb{R}^{I_{1} \times \cdots \times I_{N}}$ is given by $ ||\T{X}||_F = \sqrt{\sum_{\alpha \in \Omega}{x^{2}_{\alpha}}}$, where $\Omega$ is the set of indices of entries in $\T{X}$, $\alpha = (i_1,\cdots, i_N)$ is an index included in $\Omega$, and $x_{\alpha}$ is the $(i_1, \cdots, i_N)$th entry of the tensor $\T{X}$. \subsection{Tensor Decomposition} \label{sec:prelim:cp} \begin{figure*} \centering \vspace{5mm} \includegraphics[width=0.5\linewidth]{fig/overview/cp.pdf} \caption{CP decomposition of a 3-way sparse tensor into $K$ components} \label{fig:3way_cp} \end{figure*} We provide the definition of CP decomposition~\Citep{harshman1970foundations,kiers2000towards}, which is one of the most representative factorization models. Fig.~\ref{fig:3way_cp} illustrates CP decomposition of a $3$-way sparse tensor. Our model \method is based on CP decomposition. \begin{defn}[CP decomposition] \label{def:tensor_decomp} Given a rank $K$ and an $N$-mode tensor $\T{X} \in \mathbb{R}^{I_{1} \times \cdots \times I_N}$ with observed entries, CP decomposition approximates $\T{X}$ by finding latent factor matrices $\{\A{n} \in \mathbb{R}^{I_n \times K}\:|\:1 \leq n \leq N\}$. The factor matrices are obtained by minimizing the following loss function: \begin{align} \T{L} \left(\A{1}, \cdots, \A{N}\right) & = \sum_{\forall \alpha \in \Omega}{ \left({x}_{\alpha}-\sum_{k=1}^{K} \prod_{n=1}^{N}a^{(n)}_{i_{n}k}\right)^{2}} \label{eq:tensor_decomp:obs} \end{align} where $\Omega$ indicates the set of the indices of the observed entries, $x_\alpha$ indicates the $\alpha = (i_1, \cdots, i_N)$th entry of $\T{X}$, and $a^{(n)}_{i_{n}k}$ indicates the $(i_{n}, k)$th entry of $\A{n}$. \QEDB \end{defn} The standard CP decomposition method is not specifically designed to deal with temporal dependency; thus, CP decomposition does not give sufficient accuracy for predicting missing values in a temporal tensor. 
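To make Definition~\ref{def:tensor_decomp} concrete, the loss of Equation~\eqref{eq:tensor_decomp:obs} over a set of observed entries can be evaluated as in the following NumPy sketch (the function name and toy data are ours, not part of the paper):

```python
import numpy as np

# NumPy sketch of the CP loss in Eq. (eq:tensor_decomp:obs): the sum of
# squared errors over the observed entries Omega, where the reconstruction
# of entry alpha = (i_1, ..., i_N) is sum_k prod_n A[n][i_n, k].

def cp_loss(factors, omega, values):
    """factors: list of (I_n x K) matrices; omega: list of index tuples
    alpha = (i_1, ..., i_N); values: observed entries x_alpha."""
    loss = 0.0
    for alpha, x in zip(omega, values):
        rows = [A[i] for A, i in zip(factors, alpha)]   # N rows of length K
        x_hat = np.prod(rows, axis=0).sum()             # sum over rank k
        loss += (x - x_hat) ** 2
    return loss

# Random toy factors for a 5 x 4 x 6 tensor of rank 3, two observed entries.
rng = np.random.default_rng(0)
K = 3
factors = [rng.standard_normal((5, K)),
           rng.standard_normal((4, K)),
           rng.standard_normal((6, K))]
omega = [(0, 1, 2), (4, 3, 5)]
values = [1.0, -2.0]
print(cp_loss(factors, omega, values) >= 0.0)  # True: a sum of squares
```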
Although a few methods~\Citep{yu2016temporal,wu2019neural} have tried to capture temporal interaction, none of them 1) captures temporal dependency between adjacent time steps, and 2) exploits the sparsity of temporal slices. Our proposed \method carefully captures temporal information and considers the sparsity of temporal slices for better accuracy in decomposing temporal tensors. \subsection{Overview} \label{sec:overview} \method is a tensor factorization method designed for temporal tensors with missing entries. There are several challenges in designing an accurate tensor factorization method for temporal tensors. \begin{enumerate} \item {\bf Model temporal dependency.} Temporal dependency is an essential structure of a temporal tensor. How can we design a tensor factorization model to reflect the temporal dependency? \item {\bf Exploit sparsity of time slices.} A time-evolving tensor has varying sparsity across its temporal slices. How can we exploit the temporal sparsity for better accuracy? \item {\bf Optimization.} How can we efficiently train our model and minimize its loss function? \end{enumerate} To overcome the aforementioned challenges, we propose the following main ideas. \begin{enumerate} \item {\bf Smoothing regularization (Section~\ref{sec:sm}).} We propose a smoothing regularization on the time factor to capture temporal dependency. \item {\bf Time-dependent sparsity penalty (Section~\ref{sec:sp}).} We propose a time-dependent sparsity penalty to further improve the accuracy. \item {\bf Careful optimization (Section~\ref{sec:opt}).} We propose an optimization strategy utilizing an analytical solution and the Adam optimizer to efficiently and accurately train our model. \end{enumerate} Fig.~\ref{fig:overview} illustrates an overview of \method. We observe that adjacent time slices in a temporal tensor are closely related to each other due to the temporal trend of the tensor. 
Based on the observation, \method uses smoothing regularization such that time factor vectors for adjacent time slices become similar. We also observe that different time slices have different densities. Instead of applying the same amount of regularization to all the time slices, we control the amount of regularization based on the sparsity of time slices such that sparse slices are affected more by the regularization. It is also crucial to efficiently optimize our objective function. We propose an optimization strategy exploiting alternating minimization to expedite training and improve the accuracy. \subsection{Smoothing Regularization}\label{sec:sm} We describe how we formulate the smoothing regularization on tensor decomposition to capture temporal dependency. Our main observation is that temporal tensors have temporal trends, and adjacent time slices are closely related. For example, consider an air quality tensor containing measurements of pollutants at specific times and locations; it is modeled as a 3-mode tensor $\T{X}$ (time, location, type of pollutants; measurements). Since the amounts of pollutants at nearby time steps are closely related, the time slice $\T{X}_{t}$ at time $t$ is closely related to the time slices $\T{X}_{t-1}$ at time $t-1$ and $\T{X}_{t+1}$ at time $t+1$. This implies that the time factor matrix after tensor decomposition should have related rows for adjacent time steps. Based on this observation, our objective function is as follows. 
Given an $N$-order temporal tensor $\T{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ with observed entries $\Omega$, the time mode $t$, and a window size $S$, we find factor matrices {$\mat{A}^{(n)} \in \mathbb R^{I_{n}\times K}$} {$,1 \leq n \leq N $} that minimize \begin{align} \label{eq:method:loss} \T{L} = \sum_{\alpha \in \Omega}{\left({x}_{\alpha}-\sum_{k=1}^{K} \prod_{n=1}^{N}a^{(n)}_{i_{n}k}\right)^{2}} + \lambda_t \sum_{i_{t}=1}^{I_{t}}\lVert{\vect{a}_{i_{t}}^{(t)}}-{{\tilde{\vect{a}}_{i_{t}}^{(t)}}}\rVert_{\text{2}}^{2} + \lambda_r\sum_{n \neq t}^{N}\lVert{\mat{A}^{(n)}}\rVert_{\text{F}}^{2} \end{align} where we define \begin{equation} \label{eq:method:smooth} {\tilde{\vect{a}}_{i_t}^{(t)}} = \sum_{i_s \in \mathscr{N}({i_t}, S)}{w(i_t,i_s)}{\vect{a}_{i_s}^{(t)}}, \end{equation} and $\mathscr{N}(i_t, S)$ indicates the indices $i_s$ adjacent to $i_t$ in a window of size $S$. $\lambda_t$ and $\lambda_r$ are regularization constants that adjust the effect of time smoothing and weight decay, respectively. $\tilde{\vect{a}}_{i_t}^{(t)}$ in Equation~\eqref{eq:method:smooth} denotes the smoothed row of the temporal factor. The $\sum_{i_{t}=1}^{I_{t}}\lVert{\vect{a}_{i_{t}}^{(t)}}-{{\tilde{\vect{a}}_{i_{t}}^{(t)}}}\rVert_{\text{2}}^{2}$ term in Equation~\eqref{eq:method:loss} means that we regularize the $i_t$th row of the temporal factor toward the vector smoothed from the neighboring rows of the factor. The weight $w(i_t, i_s)$ is the weight given to the $i_s$th row of the temporal factor matrix for smoothing the $i_t$th row of the temporal factor. An important question is how to determine the weight $w(i_t, i_s)$. We use the Gaussian kernel for the weight function due to the following two reasons. First, it does not require any parameters to tune, and thus we can focus more on learning the factors in tensor decomposition. Second, it fits our intuition that a row closer to the $i_t$th row should be given a higher weight. 
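A small NumPy sketch of the smoothed rows of Equation~\eqref{eq:method:smooth} with normalized Gaussian-kernel weights follows; this is our illustrative code, and we assume, as one reading of $\mathscr{N}(i_t, S)$, that the window excludes $i_t$ itself and is truncated at the boundaries:

```python
import numpy as np

# Sketch of Eq. (eq:method:smooth): each smoothed temporal row
# a_tilde[i_t] is a normalized Gaussian-weighted average of the rows
# a[i_s] inside a window of size S around i_t. We assume the
# neighborhood N(i_t, S) excludes i_t and is truncated at the ends.

def smoothed_rows(A_t, S, sigma=1.0):
    """A_t: (I_t x K) temporal factor matrix; returns the (I_t x K)
    matrix of smoothed rows a_tilde."""
    I_t = A_t.shape[0]
    A_tilde = np.zeros_like(A_t)
    for i in range(I_t):
        nbrs = [j for j in range(max(0, i - S), min(I_t, i + S + 1)) if j != i]
        w = np.array([np.exp(-((i - j) ** 2) / (2 * sigma ** 2)) for j in nbrs])
        w /= w.sum()               # normalization as in Eq. (eq:method:kernel)
        A_tilde[i] = w @ A_t[nbrs]
    return A_tilde

A_t = np.arange(12, dtype=float).reshape(6, 2)   # toy temporal factor
print(smoothed_rows(A_t, S=1).shape)             # (6, 2)
```

With $S = 1$, an interior row is smoothed toward the average of its two neighbors, while a boundary row is smoothed toward its single neighbor.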
In Section~\ref{sec:experiment}, we show that \method with the Gaussian kernel outperforms all the competitors; however, we note that other weight functions could replace the Gaussian kernel to further improve the accuracy, and we leave this as future work. Given a target row index ${i_t}$, an adjacent row index ${i_{s}}$, and a window size $S$, the weight function based on the Gaussian kernel is as follows: \begin{equation} \label{eq:method:kernel} w(i_{t},i_{s}) = \frac{\mathscr{K}(i_{t},i_{s})}{{\sum_{i_{s'} \in \mathscr{N}({i_t}, S)}} \mathscr{K}(i_{t}, i_{s'})} \end{equation} where $\mathscr{K}$ is defined by $$ \mathscr{K}(i_{t}, i_{s}) =\exp\left(-\frac{(i_{t}-i_{s})^2}{2\sigma^2}\right).$$ Note that $\sigma$ affects the degree of smoothing; a higher value of $\sigma$ imposes more smoothing. For each time index $i_t$, the model constructs a smoothed time factor vector $\tilde{\vect{a}}_{i_t}^{(t)}$ based on the nearby factor vectors $\vect{a}_{i_s}^{(t)}$ and the weights $w(i_{t},i_{s})$. Our model then aims to reduce the smoothing loss between the time factor vector $\vect{a}_{i_{t}}^{(t)}$ and the smoothed one ${{\tilde{\vect{a}}_{i_{t}}^{(t)}}}$. \subsection{Sparsity Penalty} \label{sec:sp} \begin{figure*} \centering{ \subfigure[ \bair]{ \includegraphics[width=0.3\linewidth]{./fig/density/beijing.pdf} } \hspace{-3mm} \subfigure[ \mair]{ \includegraphics[width=0.3\linewidth]{./fig/density/mair_std.pdf} } \hspace{-3mm} \subfigure[ \radar]{ \includegraphics[width=0.3\linewidth]{./fig/density/radar.pdf} }\\ \hspace{-3mm} \subfigure[ \indoor]{ \includegraphics[width=0.3\linewidth]{./fig/density/indoor.pdf} } \hspace{-3mm} \subfigure[ \server]{ \includegraphics[width=0.3\linewidth]{./fig/density/ncfd.pdf} } } \caption{\label{fig:density} Time-varying density of five real-world datasets. The horizontal axis represents the number of nonzero entries in a time slice, and the vertical axis represents the number of time indices with that many nonzeros.
Note that time slices have varying densities } \end{figure*} We describe how to further improve the accuracy of our method by considering the sparsity of time slices. The loss function~\eqref{eq:method:loss} uses the same smoothing regularization penalty $\lambda_t$ for all the time factor vectors. However, different time slices have different sparsity due to the different numbers of nonzeros in time slices (see Fig.~\ref{fig:density}), and our method should therefore control the degree of regularization depending on the sparsity. For example, consider the 3-mode air quality tensor $\T{X}$ (time, location, type of pollutants; measurements), introduced in Section~\ref{sec:sm}, containing measurements of pollutants at specific times and locations. Assume that the time slice $\T{X}_{t_1}$ at time $t_1$ is very sparse, containing few nonzeros, while the time slice $\T{X}_{t_2}$ at time $t_2$ is dense with many nonzeros. The factor row $\vect{a}_{i_{t_2 }}^{(1)}$ at time $t_2$ can be updated easily using its many nonzeros. However, the factor row $\vect{a}_{i_{t_1}}^{(1)}$ at time $t_1$ does not have enough nonzeros in its corresponding time slice, and thus it is hard to train $\vect{a}_{i_{t_1}}^{(1)}$ using only its few nonzeros; we need to actively use nearby slices to make up for the lack of data. Thus, it is desirable to impose more smoothing regularization at time $t_1$ than at time $t_2$. Based on this motivation, \method controls the degree of smoothing regularization based on the sparsity of time slices.
Let the \textit{time sparsity} $\beta_{i_{t}}$ of the $i_t$th time slice be defined as \begin{align} \label{eq:method:time_sparse} \beta_{i_{t}} & = 1 - d_{i_t} \end{align} where the \textit{time density} $d_{i_t}$ is defined as follows: \begin{align} \label{eq:method:time_norm_dense} d_{i_t} & = (0.999-0.001) \frac{ \omega_{i_{t}} - \omega_\textit{min}}{\omega_\textit{max} - \omega_\textit{min}} + 0.001 \end{align} $\omega_{i_{t}}$ indicates the number of nonzeros in the $i_{t}$th time slice; $\omega_\textit{max}$ and $\omega_\textit{min}$ are the maximum and the minimum numbers of nonzeros over all time slices, respectively. The \textit{time density} $d_{i_t}$ can be thought of as a min-max normalized version of $\omega_{i_{t}}$, with its range rescaled to $[0.001, 0.999]$. Using the defined time sparsity, we modify our objective function as follows. \begin{align} \label{eq:method:naive_sp} \T{L} = \sum_{\alpha \in \Omega}{\left({x}_{\alpha}-\sum_{k=1}^{K} \prod_{n=1} ^{N}a^{(n)}_{i_{n}k}\right)^{2}} +\sum_{i_{t}=1}^{I_{t}}\lambda_t \beta_{i_{t}} \lVert{{\vect a}_{i_{t}}^{(t)}}-{{\tilde{\vect{a}}_{i_{t}}^{(t)}}}\rVert_{\text{2}}^{2} + \lambda_r\sum_{n \neq t}^{N}\lVert{\mat{A}^{(n)}}\rVert_{\text{F}}^{2} \end{align} Note that the second term now includes the time sparsity $\beta_{i_{t}}$; this makes the degree of regularization vary depending on the sparsity of time slices. Given the modified objective function in Equation~\eqref{eq:method:naive_sp}, we focus on minimizing the difference between $\vect{a}_{i_t}^{(t)}$ and $\tilde{\vect{a}}_{i_t}^{(t)}$ for time slices with a high sparsity rather than those with a low sparsity. \method actively exploits the neighboring time slices when a target time slice is sparse, while relying less on them when the target time slice is dense.
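The time sparsity defined above is a direct min-max rescaling of the per-slice nonzero counts; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def time_sparsity(nnz_per_slice):
    """Compute beta_{i_t} = 1 - d_{i_t}, where the time density d_{i_t}
    is the nonzero count of each time slice min-max normalised into
    the range [0.001, 0.999]."""
    omega = np.asarray(nnz_per_slice, dtype=float)
    d = (0.999 - 0.001) * (omega - omega.min()) / (omega.max() - omega.min()) + 0.001
    return 1.0 - d
```

The densest slice thus receives the minimal penalty weight $\beta = 0.001$ and the sparsest one receives $\beta = 0.999$, so no slice's smoothing term ever vanishes entirely.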
\subsection {Optimization} \label{sec:opt} \begin{algorithm}[t] \caption{Training \method} \label{alg:als_grad} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{ Tensor $\T{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with observed entries $\alpha = (i_1, \dots, i_N) \in \Omega$, rank $K$, window size $S$, smoothing penalty $\lambda_t$, regularization parameter $\lambda_r$, and learning rate $\eta$\\ } \Output{ Updated factor matrices $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times K}$ $(n = 1 \dots N)$ } \vspace{1.5mm} initialize all factor matrices $\mathbf{A}^{(n)}$ for $n = 1 \dots N$ \label{alg:proposed:init} \Repeat{convergence criterion is met} { \For{$n=1 \dots N$} { loss $\T{L}$ $\leftarrow$ Eq.~\eqref{eq:method:naive_sp} \label{alg:proposed:inner_start} \\ \If{$n$ is the time mode}{ \Repeat{convergence criterion is met} {update the factor matrix $\mathbf{A}^{(n)}$ using the Adam optimizer with a learning rate $\eta$ \label{alg:proposed:timefactor}} } \Else{ \For{$i_n = 1 \dots I_n$}{ update the factor row $\vect{a}_{i_n}^{(n)}$ using the row-wise update rule \label{alg:proposed:inner_end} } } } } \end{algorithm} To minimize the objective function in Equation~\eqref{eq:method:naive_sp}, \method uses an alternating optimization method: it updates one factor matrix at a time while fixing all other factor matrices. \method updates the non-time factor matrices using the row-wise update rule~\Citep{shin2016fully} while updating the time factor matrix using the Adam optimizer~\Citep{kingma2014adam}. This allows \method to quickly converge to a low error, compared to naive gradient-based methods. {\bf{Updating non-time factor matrices.}} Updating a non-time factor matrix while fixing all other factor matrices is a least squares problem, and we use the row-wise update rule~\Citep{shin2016fully,OhPSK18} from ALS to solve it.
The row-wise update rule is advantageous since it gives the optimal closed-form solution and allows parallel updates of factors. We describe the details of the row-wise update rule in Appendix~\ref{subsec:rowwiseupdate}. {\bf{Updating the time factor matrix.}} Updating the time factor matrix while fixing all other factor matrices is no longer a least squares problem, and thus we turn to gradient-based methods. We use the Adam optimizer, which has shown superior performance in recent machine learning tasks. We verify that using the Adam optimizer only for the time factor leads to faster convergence compared to other optimization methods in Section~\ref{sec:experiment}. {\bf{Overall training.}} Algorithm~\ref{alg:als_grad} describes how we train \method. We first initialize all factor matrices (line~\ref{alg:proposed:init}). For each iteration, we update a factor matrix while keeping all others fixed (lines~\ref{alg:proposed:inner_start} to~\ref{alg:proposed:inner_end}). The time factor matrix is updated with the Adam optimizer (line~\ref{alg:proposed:timefactor}) until the validation RMSE increases, which is our convergence criterion (line 8) for Adam. Each of the non-time factor matrices is updated with the row-wise update rule (line~\ref{alg:proposed:inner_end}) in ALS. We repeat this process until the validation RMSE increases for five consecutive iterations, which is our global convergence criterion (line 12). \subsection{Experimental Settings} \subsubsection{Machine} All experiments are performed on a machine equipped with an Intel Xeon E5-2630 CPU and a GeForce GTX 1080 Ti GPU. \subsubsection{Datasets} \begin{table*} \centering \vspace{5mm} \begin{threeparttable} \caption{ \label{tab:dataset} Summary of real-world tensors used for experiments.
Bold text denotes time mode } \setlength{\tabcolsep}{10pt} \begin{tabular}{ l c r r c} \toprule \textbf{Name} & \textbf{Dimensionality} & \textbf{Nonzeros} & \textbf{Granularity} &\textbf{Density}\\ \midrule \bair\tnote{1} & {\bf 35,064} $\times$ 12 $\times$ 6 & 2,454,305 & 1 hour &0.97 \\ \mair\tnote{2} & {\bf 2,678 } $\times$ 24 $\times$ 14 & 337,759 & 1 day & 0.37 \\ \radar\tnote{3} & {\bf 17,937} $\times$ 23 $\times$ 5 & 495,685 & 1 hour &0.24 \\ \indoor\tnote{4} & {\bf 19,735} $\times$ 9 $\times$ 2 & 241,201 & 10 minutes & 0.70 \\ \server\tnote{5} & 3 $\times$ 3 $\times$ 34 $\times$ {\bf 4,157} & 1,009,426 & 1 second & 0.79 \\ \bottomrule \end{tabular} \scriptsize \begin{tablenotes} \item[1] {\url{https://archive.ics.uci.edu/ml/datasets/Beijing+Multi-Site+Air-Quality+Data}} \item[2] {\url{https://www.kaggle.com/decide-soluciones/air-quality-madrid}} \item[3] {\url{https://data.austintexas.gov/Transportation-and-Mobility/Radar-Traffic-Counts/}} \item[4] {\url{https://archive.ics.uci.edu/ml/datasets/Appliances+energy+prediction}} \item[5] {\url{https://zenodo.org/record/3610078#.XlNpAigzaM8}} \end{tablenotes} \end{threeparttable} \end{table*} We evaluate \method on five real-world datasets summarized in Table \ref{tab:dataset}. \begin{itemize} \item \textbf{\bair}~\Citep{zhang2017cautionary} is a 3-mode tensor (hour, locations, atmospheric pollutants) containing measurements of pollutants. It was collected from $12$ air-quality monitoring sites in Beijing from 2013 to 2017. \item \textbf{\mair} is a 3-mode tensor (day, locations, atmospheric pollutants) containing measurements of pollutants in Madrid from 2011 to 2018. \item \textbf{\radar} is a 3-mode tensor (hour, locations, directions) containing traffic volumes measured by radar sensors from 2017 to 2019 in Austin, Texas. \item \textbf{\indoor} is a 3-mode tensor (10 minutes, locations, ambient conditions) containing measurements. There are two ambient conditions: humidity and temperature.
We construct a fully dense tensor from the original dataset and randomly sample $70$ percent of the elements to make a tensor with missing entries. In Section~\ref{sec:exp:sparsity}, we sample from the fully dense version of it. \item \textbf{\server} is a 4-mode tensor (air conditioning, server power, locations, second) containing temperatures recorded in a server room. The first mode ``air conditioning'' denotes air conditioning temperature setups ($24$, $27$, and $30$ degrees Celsius); the second mode ``server power'' indicates server power usage scenarios ($50\%$, $75\%$, and $100\%$). \end{itemize} Before applying tensor factorization, we z-normalize the datasets. Each dataset is randomly split into training, validation, and test sets with the ratio $8$:$1$:$1$; the validation set is used for early stopping. \subsubsection{Competitors} We compare \method with state-of-the-art methods for missing entry prediction. All the competitors use only the observed entries of a given tensor. \begin{itemize} \item \textbf{\cpals}~\Citep{harshman1970foundations}: a standard CP decomposition method using ALS. \item \textbf{CP-WOPT}~\Citep{acar2011scalable}: a CP decomposition method solving a weighted least squares problem. \item \textbf{CoSTCo}~\Citep{liu2019costco}: a CNN-based tensor decomposition method. \item \textbf{TRMF}~\Citep{yu2016temporal}: a temporally regularized matrix/tensor factorization method. \item \textbf{NTF}~\Citep{wu2019neural}: a tensor factorization method integrating LSTM to model time-evolving interactions for rating prediction. \end{itemize} \subsubsection{Metrics} We evaluate the performance using RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error), defined as follows.
\begin{align*} \text{RMSE} = {\sqrt{\frac{1}{|\Omega|}\sum_{\forall\alpha \in \Omega}{\left(x_{\alpha}-\hat{x}_{\alpha}\right)^{2}}}}, \quad \text{MAE} = \frac{1}{|\Omega|}{\sum_{\forall\alpha \in \Omega}{|x_{\alpha}-\hat{x}_{\alpha}|}} \end{align*} $\Omega$ indicates the set of indices of observed entries, $x_\alpha$ stands for the entry with index $\alpha$, and $\hat{x}_{\alpha}$ is the corresponding reconstructed value. \subsubsection{Hyper-parameters} We use the hyper-parameters in Table~\ref{tab:hyper-param} for \method, except in Section~\ref{sec:exp:hyper} where we vary them. We use $0.5$ for $\sigma$, which adjusts the smoothing level in the kernel function. We vary the window size $ S \in \{3, 5, 7, 9, 11 \}$ and find the optimal value for each dataset. \begin{table}[t!] \small \centering \caption{Default hyper-parameter setting} \setlength{\tabcolsep}{4pt} \begin{tabular}{ l c c c c } \toprule \textbf{Dataset} & \textbf{{\makecell{Learning \\ rate $\eta$ }}} & \textbf{Rank $K$} & \textbf{Window $S$} & \textbf{Penalty $\lambda_t$} \\ \midrule \bair & $10^{-2}$ & $10$ & $3$ & $10^{3}$ \\ \mair & $10^{-2}$ & $10$ & $5$ & $10^{2}$ \\ \radar & $10^{-2}$ & $10$ & $9$ & $10^{2}$ \\ \indoor & $10^{-2}$ & $10$ & $3$ & $10^{2}$ \\ \server & $10^{-3}$ & $10$ & $3$ & $10^{-1}$ \\ \bottomrule \end{tabular} \label{tab:hyper-param} \end{table} \subsection{Accuracy (Q1)} \label{sec:exp:accuracy} We compare \method with the competitors in terms of RMSE and MAE in Table~\ref{tab:standarderror}. \methodz indicates \method without the sparsity penalty. Note that \method consistently gives the best accuracy on all the datasets. \method achieves up to $7.01\times$ lower RMSE and $5.50\times$ lower MAE compared to the best competitors. \methodz provides the second-best performance; the smoothing regularization effectively predicts missing values by capturing temporal patterns and leaving out the noise.
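The RMSE and MAE reported throughout follow the definitions given above, computed over the vector of held-out observed entries; a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def rmse(x, x_hat):
    """Root mean squared error over observed entries."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.sqrt(np.mean((x - x_hat) ** 2)))

def mae(x, x_hat):
    """Mean absolute error over observed entries."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.mean(np.abs(x - x_hat)))
```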
\begin{table*} \centering \vspace{5mm} \caption{ Performance of missing entry prediction by \method and competitors. The best method is in bold, and the second-best method is underlined. Our proposed \method consistently shows the best performance in all datasets } \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{4.5pt} \begin{tabular}{l | c | c | c | c | c} \toprule Data & {\makecell{Beijing \\ Air quality}} & {\makecell{Madrid \\ Air quality}} & {\makecell{Radar \\ Traffic}} & {\makecell{Indoor \\ Condition}} & {\makecell{Server \\ Room}} \\ \midrule \diagbox[height=2em]{Method}{Metric} & RMSE / MAE & RMSE / MAE & RMSE / MAE & RMSE / MAE & RMSE / MAE \\ \midrule \cpals & 0.352 / 0.219 & 0.456 / 0.293 & 0.365 / 0.248 & 0.624 / 0.316 & 0.076 / 0.048 \\ [0.3em] CP-WOPT & 0.766 / 0.538 & 0.482 / 0.297 & 0.328 / 0.206 & 0.603 / 0.310 & 0.070 / 0.046 \\ [0.3em] CoSTCo & 0.360 / 0.223 & 0.461 / 0.303 & 0.298 / 0.197 & 0.609 / 0.303 & 0.306 / 0.090 \\ [0.3em] TRMF & 1.098 / 0.770 & 1.004 / 0.936 & 0.695 / 0.485 & 0.894 / 0.563 & 1.083 / 0.813 \\ [0.3em] NTF & 0.529 / 0.333 & 0.648 / 0.455 & 0.585 / 0.400 & 0.968 / 0.576 & 0.660 / 0.516 \\ [0.3em] \midrule \methodz & \underline{0.327} / \underline{0.204} & \underline{0.416} / \underline{0.279} & \underline{0.257} / \underline{0.160} & \underline{0.088} / \underline{0.057} & \underline{0.058} / \underline{0.039} \\ [0.3em] \textbf{\method (proposed)} & \textbf{0.323} / \textbf{0.201} & \textbf{0.409} / \textbf{0.274} & \textbf{0.249} / \textbf{0.152} & \textbf{0.086} / \textbf{0.055} & \textbf{0.054} / \textbf{0.035} \\ \bottomrule \end{tabular} \label{tab:standarderror} \end{table*} \subsection{Effect of Data Sparsity (Q2)} \label{sec:exp:sparsity} \begin{figure*} \addvspace{6mm} \centering \includegraphics[width=0.75\linewidth]{./fig/legend/four_legend}\vspace{-1mm}\\ \subfigure[\bair]{ \includegraphics[width=0.32\linewidth]{./fig/missing/beijing.pdf} } \hspace{-2mm} \subfigure[\mair]{ 
\includegraphics[width=0.32\linewidth]{./fig/missing/mair.pdf} } \hspace{-2mm} \subfigure[ \radar]{ \includegraphics[width=0.32\linewidth]{./fig/missing/radar.pdf} }\\ \subfigure[ \indoor]{ \includegraphics[width=0.32\linewidth]{./fig/missing/indoor.pdf} } \hspace{-2mm} \subfigure[ \server]{ \includegraphics[width=0.32\linewidth]{./fig/missing/ncfd.pdf} } \caption{\label{fig:sparsity_ratio} Test RMSE of \method and the best competitors CP-ALS, CP-WOPT, and CoSTCo for varying data sampling ratios. Note that the error gap between \method and the competitors widens as the sparsity increases (the sampling ratio decreases), due to the careful consideration of sparse time slices by \method. } \end{figure*} We evaluate the performance of \method with varying data sparsity. We sample the data with ratios of $\{10, 30, 50, 70, 90\}\%$ to identify how accurately each method predicts missing entries even when the data are highly sparse. Fig.~\ref{fig:sparsity_ratio} shows the errors of \method and the best competitors, CP-ALS, CP-WOPT, and CoSTCo, on the five datasets. Note that the error gap between \method and the competitors widens as the sparsity increases (the sampling ratio decreases). \method achieves up to $4.67\times$, $3.86\times$, and $5.24\times$ lower test RMSE than CP-ALS, CP-WOPT, and CoSTCo, respectively, when we use only $10\%$ of the data. There are two reasons for the superior performance of \method as the sparsity increases. First, \method is designed to infer missing entries of a target slice by using its neighboring slices; this is especially useful when the target slice is extremely sparse and carries little information to infer its entries from. Second, \method explicitly considers sparsity in its model through the sparsity penalty, and imposes more regularization on sparser slices. \subsection{Effect of Optimization (Q3)} \label{sec:exp:optimization} We evaluate our optimization strategy in terms of error and running time.
We call our strategy \alsa and compare it with the following optimization strategies. \begin{itemize} \item Adam: a recent gradient-based method using momentum and an adaptive learning rate. \item SGD: a standard stochastic gradient descent method widely used for optimization. \item ALS + SGD: an alternating minimization method which updates the time factor matrix with SGD and the non-time factor matrices with the least squares solution. \item Alternating Adam: an alternating minimization method which updates a single factor matrix with Adam while fixing the other factor matrices. \end{itemize} Fig.~\ref{fig:opt} shows the result. Note that our proposed \alsa achieves the best trade-off between running time and test RMSE, generally giving both a smaller running time and a lower test RMSE than the other methods. \alsa achieves better results than ALS + SGD since the Adam optimizer finds a better local minimum than SGD. \alsa also outperforms alternating Adam since updating each non-time factor matrix has an analytical ALS solution, making the gradient-based Adam update less effective there. \begin{figure*}[t] \centering \vspace{1mm} \includegraphics[width=0.95\textwidth]{./fig/opt/legend.pdf} \\ \vspace{-1mm} \hspace{-3mm} \subfigure[ \bair]{ \includegraphics[width=0.32\textwidth]{./fig/opt/beijing} } \hspace{-3mm} \subfigure[ \mair]{ \includegraphics[width=0.32\textwidth]{./fig/opt/mair_std} } \hspace{-3mm} \subfigure[ \radar]{ \includegraphics[width=0.32\textwidth]{./fig/opt/radar} }\\ \hspace{-3mm} \subfigure[ \indoor]{ \includegraphics[width=0.32\textwidth]{./fig/opt/indoor} } \hspace{-3mm} \subfigure[ \server]{ \includegraphics[width=0.32\textwidth]{./fig/opt/ncfd} }\\ \caption{\label{fig:opt} Comparison of optimization strategies in \method.
Our proposed ALS + Adam achieves the best trade-off between running time and test RMSE, generally giving both a smaller running time and a lower test RMSE than the other methods } \end{figure*} \subsection{Hyper-parameter Study (Q4)} \label{sec:exp:hyper} We evaluate the performance of \method with regard to two hyper-parameters: the smoothing regularization penalty and the rank. \subsubsection{Smoothing regularization penalty}\label{sec:exp:penalty} We vary the smoothing regularization penalty $\lambda_t$ and evaluate the test RMSE in Fig.~\ref{fig:penalty}. Note that neither too small nor too large values of $\lambda_t$ give the best results; a too-small value of $\lambda_t$ leads to overfitting, and a too-large value leads to underfitting. The results show that the right amount of smoothing regularization gives the smallest error, verifying the effectiveness of our proposed idea. \subsubsection{Rank}\label{sec:exp:rank} We increase the rank $K$ from $5$ to $50$ and evaluate the test RMSE in Fig.~\ref{fig:rank}. We have two main observations. First, \method shows a stable performance improvement with increasing rank, compared to CP-ALS and CP-WOPT, whose performance is unstable. Second, the error gap between \method and the competitors increases with increasing rank. Higher ranks may make models overfit the training data; however, \method works even better at higher ranks since it exploits rich information from neighboring rows when regularizing a row of the time factor matrix.
\begin{figure*}[t] \centering \vspace{10mm} \subfigure[\bair]{ \includegraphics[width=0.3\linewidth]{./fig/penalty/beijing_10} } \hspace{-2mm} \subfigure[\mair]{ \includegraphics[width=0.3\linewidth]{./fig/penalty/mair_std_10} } \hspace{-2mm} \subfigure[\radar]{ \includegraphics[width=0.3\linewidth]{./fig/penalty/radar_10} }\\ \subfigure[\indoor]{ \includegraphics[width=0.3\linewidth]{./fig/penalty/indoor_10} } \hspace{-2mm} \subfigure[\server]{ \includegraphics[width=0.3\linewidth]{./fig/penalty/ncfd_10} }\\ \caption{ \label{fig:penalty} Effect of the smoothing regularization penalty parameter $\lambda_t$ in \method. Note that too small or too large values of $\lambda_t$ lead to overfitting and underfitting, respectively. The right amount of smoothing regularization gives the smallest error, verifying the effectiveness of our proposed idea } \end{figure*} \begin{figure*} \centering \includegraphics[width=0.75\linewidth]{./fig/legend/four_legend}\\ \subfigure[\bair]{ \includegraphics[width=0.3\linewidth]{./fig/rank/beijing} } \hspace{-2mm} \subfigure[\mair]{ \includegraphics[width=0.3\linewidth]{./fig/rank/mair_std} } \hspace{-2mm} \subfigure[\radar]{ \includegraphics[width=0.3\linewidth]{./fig/rank/radar} \label{fig:rank_radar} }\\ \subfigure[\indoor]{ \includegraphics[width=0.3\linewidth]{./fig/rank/indoor} \label{fig:rank_indoor} } \hspace{-2mm} \subfigure[\server]{ \includegraphics[width=0.3\linewidth]{./fig/rank/ncfd} \label{fig:rank_server} }\\ \caption{\label{fig:rank} Effect of the rank on the performance of \method. When the rank increases, \method shows a stable performance improvement unlike the competitors, and the error gap between \method and the competitors increases. \method works even better at higher ranks since it exploits rich information from neighboring rows when regularizing a row of the time factor matrix. } \end{figure*} \subsection{Tensor Decomposition} We present one of the major tensor decomposition methods, CP decomposition. \textbf{CP decomposition.
} CP decomposition methods~\Citep{KangPHF12,jeon2015haten2,ChoiV14} have been widely used for analyzing large-scale real-world tensors. \citet{KangPHF12} and \citet{jeon2015haten2} propose distributed CP decomposition methods running on the MapReduce framework. \citet{ChoiV14} propose a scalable CP decomposition method by exploiting properties of a tensor operation used in CP decomposition. \citet{BattaglinoBK18} propose a randomized CP decomposition method which reduces the overhead of computation and memory. However, the above methods are not appropriate for missing value prediction in highly sparse tensors since they assume the values of missing entries are zero. Several CP decomposition methods have been developed to handle sparse tensors without setting the values of missing entries to zero. \citet{PapalexakisFS12} propose ParCube to obtain sparse factor matrices using a sampling technique in parallel systems. \citet{BeutelTKFPX14} propose FlexiFaCT, which performs a coupled matrix-tensor factorization using Stochastic Gradient Descent (SGD) update rules. \citet{shin2016fully} propose CDTF and SALS, which are scalable CP decomposition methods for sparse tensors. \citet{SmithK17} improve the efficiency of CP decomposition for sparse tensors by exploiting a compressed data structure. The above CP decomposition methods do not consider temporal dependency and time-varying sparsity, which are crucial for temporal tensors. In contrast, \method improves accuracy on temporal tensors by exploiting temporal dependency and time-varying sparsity. \textbf{Applications.} Tensor decomposition has been used in various applications. \citet{KoldaBK05} analyze a hyperlink graph modeled as a $3$-way tensor using CP decomposition. Tensor decomposition has also been applied to tag recommendation~\Citep{RendleMNS09, RendleS10}. \citet{SunPLCLQ09} develop a content-based network analysis framework for finding higher-order clusters.
\citet{LebedevGROL14} exploit CP decomposition to compress the convolution filters of convolutional neural networks (CNNs). Several works~\Citep{LeeOS18, PerrosPWVSTS17, PerrosPPVYdSS18} use tensor decomposition for analyzing Electronic Health Record (EHR) data. \subsection{Tensor Factorization on Temporal Tensors} We explain tensor factorization methods dealing with temporal data. \citet{dunlavy2011temporal} propose a tensor-based approach with an exponential smoothing technique for link prediction. \citet{matsubara2012fast} discover main topics in a complex temporal tensor and perform analysis over long prediction periods. \citet{de2017tensorcast} present a non-negative coupled tensor factorization for forecasting future links in evolving networks. \citet{bahadori2014fast} propose an efficient low-rank tensor method to capture shared structures across multivariate spatio-temporal relationships. \citet{liu2019costco} propose a tensor completion method exploiting the expressive power of convolutional neural networks to model non-linear interactions inside spatio-temporal tensors. These works focus on general multi-faceted relationships rather than temporal characteristics. Several works~\Citep{xiong2010temporal,yu2016temporal,wu2019neural} model temporal patterns and trends in temporal tensors. \citet{xiong2010temporal} propose a Bayesian probabilistic tensor factorization method that learns global temporal patterns by adding an extra time factor to model evolving relations in a matrix. \citet{yu2016temporal} propose a matrix factorization method with an autoregressive temporal regularization to learn temporal properties. \citet{wu2019neural} propose a CP factorization method based on a long short-term memory network to model temporal interactions between latent factors of tensors. \citet{jing2018high} propose a Tucker decomposition method to capture temporal correlations.
However, these approaches are not designed to model temporal dependency from both past and future information, whereas \method obtains a time factor considering neighboring factors for both past and future time steps, giving an accurate tensor factorization result. Moreover, they do not exploit the temporal sparsity, a common characteristic of temporal tensors, while \method actively exploits it. \subsection{Extra experiment on datasets from a data center} \label{subsec:datacenter} \begin{table} \centering \vspace{5mm} \begin{threeparttable} \caption{ \label{tab:datacenter_dataset} Summary of real-world tensors from a data center. Bold text denotes time mode. } \begin{tabular}{ l c r c} \toprule \textbf{Name} & \textbf{Dimensionality} & \textbf{Nonzeros} &\textbf{Density}\\ \midrule server room \tnote{1} & 3 $\times$ 3 $\times$ 34 $\times$ {\bf 4,157} & 1,009,426 & $0.79$ \\ \bottomrule \end{tabular} \scriptsize \begin{tablenotes} \item[1] {\url{https://zenodo.org/record/3610078#.XlNpAigzaM8}} \end{tablenotes} \end{threeparttable} \end{table} \begin{itemize} \item server room is a 4-mode tensor (cooling, power usage, location, second) containing temperatures. \item Mode 1: air conditioning temperature setups (24, 27, and 30 degrees Celsius) \item Mode 2: power usage ($50\%$, $75\%$, and $100\%$ scenarios) \item Mode 3: temperature probes \item Mode 4: duration (3000--4000 s) \end{itemize} \begin{table} \small \centering \caption{ Performance of missing entry prediction by TATD and competitors on a data center dataset. The best method is in bold, and the second-best method is underlined.
} \setlength{\tabcolsep}{12pt} \begin{tabular}{l | cc } \toprule Data & \multicolumn{2}{c}{\makecell{server room \\ (z-score normalization)}} \\ \midrule \diagbox[height=2em]{Method}{Metric} & RMSE & MAE \\ \midrule \cpals~\cite{harshman1970foundations} & 0.076 & 0.030 \\ [0.3em] CoSTCo~\cite{liu2019costco} & 0.675 & 0.387 \\ [0.3em] TRMF~\cite{yu2016temporal} & 1.083 & 0.813 \\ [0.3em] NTF~\cite{wu2019neural} & 0.660 & 0.516 \\ [0.3em] \midrule \methodz & \underline{0.058} & \underline{0.039} \\ [0.3em] \bf{\method} & \bf{0.054} & \bf{0.035} \\ \bottomrule \end{tabular} \label{tab:appendix_exp} \end{table} \begin{figure*} \addvspace{6mm} \centering \includegraphics[width=0.25\linewidth]{./fig/legend.pdf}\vspace{1mm}\\ \subfigure[Sparsity]{ \includegraphics[width=0.3\linewidth]{./fig/missing/ncfd.pdf} } \subfigure[Penalty]{ \includegraphics[width=0.3\linewidth]{./fig/penalty/ncfd_10.pdf} } \subfigure[Rank]{ \includegraphics[width=0.3\linewidth]{./fig/rank/ncfd.pdf} } \caption{\label{fig:sparsity_ratio_1} Test RMSE of \method and the best competitor CP-ALS for three experiments: (a) sampling ratio, (b) penalty, and (c) rank. } \end{figure*} \subsection{Row-wise Update Rule} \label{subsec:rowwiseupdate} We use a row-wise update rule to efficiently update the non-time factor matrices. This update rule has the advantage of considering only nonzero entries and allows easy parallelization.
Following the notations and equations introduced by~\cite{shin2016fully}, the update rule for the $i_n$th row of the $n$th factor matrix $\A{n}$ $(n \neq t)$ is given as follows: \begin{align} \label{eq:ls} \begin{split} [a_{i_n 1}^{(n)}, \cdots, a_{i_n K}^{(n)}] &\leftarrow \argmin{[a^{(n)}_{i_{n}1}, ..., a^{(n)}_{i_{n}K}]}{L(\mat{A}^{(1)},...,\mat{A}^{(N)})} \\ &= \vect{c}_{i_n}^{(n)} \times [\mat{B}_{i_n}^{(n)}+\lambda_r \mathbf{I}_{K}]^{-1} \end{split} \end{align} where $\mat{B}_{i_n}^{(n)}$ is a ${K \times K}$ matrix whose entries are \begin{equation} \label{eq:rowB} {(\mat{B}_{i_n}^{(n)})}_{k_1 k_2} = \sum_{\forall\alpha\in\Omega_{i_n}^{(n)}} \prod_{l \neq n} a^{(l)}_{i_l k_1}\prod_{l \neq n} a^{(l)}_{i_l k_2}, \quad \forall k_1, k_2, \end{equation} $\vect{c}_{i_n}^{(n)}$ is a length-$K$ vector whose entries are \begin{equation} \label{eq:rowC} {(\vect{c}_{i_n}^{(n)})}_{k} = \sum_{\forall\alpha\in\Omega_{i_n}^{(n)}}x_{\alpha}\prod_{l \neq n} a^{(l)}_{i_l k}, \quad \forall k, \end{equation} and $\Omega_{i_n}^{(n)}$ denotes the subset of $\Omega$ whose $n$th-mode index is $i_n$. \section{Introduction} \label{sec:introduction} \input{010intro} \section{Preliminaries} \label{sec:preliminaries} \input{020prelim} \section{Proposed Method} \label{sec:proposed} \input{030method} \section{Experiment} \label{sec:experiment} \input{040experiment} \section{Related Work} \label{sec:related} \input{050related} \section{Conclusion} \label{sec:conclusion} \input{060conclusions} \bibliographystyle{spbasic}
\section*{Supplementary material: Hopf bifurcation in addition-shattering kinetics} \subsection{Truncated Models and Dulac function} \label{sec:truncate} In aggregation-fragmentation processes \eqref{AF:gen}, never-ending oscillations have not been found analytically. This is not surprising, as it would require solving an infinite set of non-linear ODEs. To appreciate the existence of oscillations one can first seek such solutions in truncated models in which the matrix elements $K_{ij}$ vanish when $i+j$ is sufficiently large. We define $m$-truncated models by requiring \begin{equation} K_{ij}=0 \quad\text{when}\quad i+j>m \end{equation} For such models, the system \eqref{AF:gen} of infinitely many ODEs reduces to ODEs for the densities $c_1,\ldots,c_m$. Taking into account mass conservation \begin{equation} \label{massSM} \sum_{j=1}^m jc_j(t) = 1 \end{equation} reduces the number of ODEs to $m-1$. Limit cycles are possible in a system of two (or more) ODEs. Thus in the truncated models, limit cycles may arise when $m\geq 3$. Chaos becomes (in principle) feasible when $m\geq 4$. The $m=3$ truncated model consists of three ODEs \begin{subequations} \begin{align} &\frac{d c_1}{dt} = -2c_1^2 - 2Kc_1c_2 + 2F c_2 + G c_3 \label{1}\\ &\frac{d c_2}{dt} = c_1^2 - 2Kc_1c_2 - F c_2 + G c_3 \label{2}\\ &\frac{d c_3}{dt} = 2Kc_1c_2 - G c_3 \label{3} \end{align} \end{subequations} where we write the matrices $||K_{ij}||$ and $||F_{ij}||$ with $i, j\leq 2$ in compact form as \begin{equation} \label{KF:3} ||K_{ij}|| = \left( \begin{array}{cc} 2 & K \\ \\ K& 0 \end{array} \right), \quad ||F_{ij}|| = \left( \begin{array}{cc} 2F & G \\ \\ G& 0 \end{array} \right) \end{equation} Specializing \eqref{massSM} to $m=3$ we get $2c_2=1-c_1-3c_3$.
Substituting this relation into \eqref{1} and \eqref{3} we obtain \begin{subequations} \begin{align} \label{c1:eq} \frac{d c_1}{dt} &= (F-\tfrac{1}{2}Kc_1)(1-c_1-3c_3) + G c_3 - 2c_1^2 \\ \frac{d c_3}{dt} &= \tfrac{1}{2}Kc_1(1-c_1-3c_3) - G c_3 \label{c3:eq} \end{align} \end{subequations} Equations \eqref{c1:eq}--\eqref{c3:eq} are the most general equations for the truncated model with $m=3$. Indeed, \eqref{KF:3} are the most general rates satisfying the symmetry requirement; we merely disregarded the pathological case $K_{11}=0$ and set $K_{11}=2$ by rescaling the time variable. On physical grounds, the rates are positive. Hence the parameters should lie inside the octant \begin{equation} \label{space} \mathbb{R}^3_+=\{(F,G,K)|\,F > 0, ~ G > 0, ~K > 0\} \end{equation} We are also interested in the behavior inside the triangle \begin{equation} \label{triangle} \mathcal{T} = \{(c_1, c_3)|\, c_1\geq 0, ~ c_3\geq 0, ~ c_1+3c_3\leq 1\} \end{equation} Indeed, the densities are non-negative, and mass conservation written as $2c_2=1-c_1-3c_3\geq 0$ explains the last inequality. If the system starts inside the triangle, it forever remains there. We now rule out the existence of limit cycles for the system \eqref{c1:eq}--\eqref{c3:eq} using the Dulac criterion \cite{Perko,Chris,Strogatz}. For the general planar system \eqref{PQ}, the Dulac criterion asserts that if there exists a smooth function $D(x,y)$ in a simply-connected domain $\mathcal{D}\subset \mathbb{R}^2$ such that the Dulac function \begin{equation} \mathbb{D}\equiv \partial_x[DP]+\partial_y[DQ] \end{equation} has the same sign throughout $\mathcal{D}$, there are no closed orbits lying entirely in $\mathcal{D}$. Choosing $D(c_1,c_3)=1$ and $\mathcal{T}$ as the domain, one computes the Dulac function \begin{equation} \mathbb{D} =-F-G -K(c_1+c_2)-4c_1 \end{equation} Thus $\mathbb{D} < 0$, ensuring the absence of limit cycles for the general truncated system \eqref{c1:eq}--\eqref{c3:eq}.
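As a quick numerical sanity check of this computation (a sketch in plain Python; the parameter values are arbitrary positive rates of our choosing), one can compare a finite-difference divergence of the vector field \eqref{c1:eq}--\eqref{c3:eq} with the closed-form Dulac function above:

```python
# Sanity check: the finite-difference divergence of the vector field (P, Q)
# should match the closed form -F - G - K*(c1 + c2) - 4*c1, with
# 2*c2 = 1 - c1 - 3*c3, and be negative inside the triangle T.
def P(c1, c3, F, G, K):
    return (F - 0.5 * K * c1) * (1 - c1 - 3 * c3) + G * c3 - 2 * c1**2

def Q(c1, c3, F, G, K):
    return 0.5 * K * c1 * (1 - c1 - 3 * c3) - G * c3

def divergence(c1, c3, F, G, K, h=1e-6):
    # Central differences are exact (up to rounding) for quadratic fields.
    dP = (P(c1 + h, c3, F, G, K) - P(c1 - h, c3, F, G, K)) / (2 * h)
    dQ = (Q(c1, c3 + h, F, G, K) - Q(c1, c3 - h, F, G, K)) / (2 * h)
    return dP + dQ

F, G, K = 0.7, 1.3, 2.5                               # arbitrary positive rates
for c1, c3 in [(0.1, 0.2), (0.5, 0.1), (0.2, 0.25)]:  # points inside T
    c2 = 0.5 * (1 - c1 - 3 * c3)
    closed_form = -F - G - K * (c1 + c2) - 4 * c1
    assert abs(divergence(c1, c3, F, G, K) - closed_form) < 1e-6
    assert closed_form < 0
```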
Similarly, a limit cycle is impossible for an arbitrary addition-shattering process truncated to $m=3$. Indeed, Eqs.~\eqref{ns}--\eqref{n1} turn into a planar system \begin{equation} \label{c12-eq} \dot c_1 = P, \qquad \dot c_3 = Q \end{equation} with quadratic polynomials \begin{equation} \label{R12} \begin{split} P &= (B_2 - \tfrac{1}{2}A_2 c_1)(1-c_1-3c_3)+3B_3 c_3 -2A_1c_1^2\\ Q &=\tfrac{1}{2}A_2 c_1(1-c_1-3c_3)-B_3 c_3 \end{split} \end{equation} depending on four positive rates: $A_1,A_2,B_2,B_3$. Choosing again $D(c_1,c_3)=1$ and the triangle $\mathcal{T}$ as the domain, we compute the corresponding Dulac function \begin{equation} \frac{\partial P}{\partial c_1}+ \frac{\partial Q}{\partial c_3}=-B_2-B_3-A_2(c_1+c_2)-4A_1 c_1 \end{equation} and find that it is negative, ensuring the absence of limit cycles for the truncated system \eqref{c12-eq}--\eqref{R12}. \subsection{Linearizing Eqs. \eqref{ns-1}--\eqref{n1-1} about the steady state} Equation \eqref{c-SS} asserts that the stationary size distribution is uniquely determined by the density $c_1$ of monomers. The mass density \begin{equation} \label{ss-mass} M = \sum_{s \geq 1} s c_s = c_1 \sum_{s \geq 1} s \prod_{j = 2}^{s} \frac{j - 1}{j + B_j / c_1} \end{equation} increases monotonically with $c_1$, and thus for every value of $M$ the system \eqref{ns-1}--\eqref{n1-1} has at most one steady state. To numerically find the steady state with a given mass density it suffices to solve the nonlinear equation \eqref{ss-mass}. Owing to mass conservation, the sets of equal-mass size distributions are invariant under \eqref{ns-1}--\eqref{n1-1}.
When we discuss the birth of limit cycles, we always confine the system to distributions of fixed mass $M$, which we can choose to be unity since the system remains unchanged under the scaling \begin{equation} \label{scaling} c_s \mapsto M c_s, \quad B_s \mapsto M B_s, \quad t \mapsto \frac{1}{M} t \end{equation} To preserve the total mass, we consider the following perturbations $\{ x_s \}$ of the steady state: \begin{equation} \label{perturb-mass} \sum_{s \geq 1} s x_s(t) = 0. \end{equation} In the vicinity of the steady state, equations \eqref{ns-1}--\eqref{n1-1} read \begin{subequations} \begin{multline} \label{xs-1} \frac{dx_s}{dt} = (c_1 + x_1)[(s-1) x_{s-1}- s x_s] -\\ B_s\left[\frac{c_s}{c_1}x_1 - x_s\right], \quad s\geq 2 \end{multline} \begin{equation} \label{x1-1} \frac{dx_1}{dt} = -(c_1 + x_1)x_1 - \sum_{s \geq 2} s B_s \left[\frac{c_s}{c_1}x_1 - x_s \right] \end{equation} \end{subequations} Dropping nonlinear terms in Eqs.~\eqref{xs-1}--\eqref{x1-1} we arrive at \begin{subequations} \begin{multline} \label{xs-1-lin} \frac{dx_s}{dt} = c_1[(s-1) x_{s-1}- s x_s] -\\ B_s\left[\frac{c_s}{c_1}x_1 - x_s\right], \quad s\geq 2 \end{multline} \begin{equation} \label{x1-1-lin} \frac{dx_1}{dt} = -c_1x_1 - \sum_{s \geq 2} s B_s \left[\frac{c_s}{c_1}x_1 - x_s \right] \end{equation} \end{subequations} \subsection{Numerical approach for evaluation of the spectrum} Fix $N$ and consider the first $N$ equations of \eqref{xs-1}--\eqref{x1-1}. Such a truncation breaks the mass conservation law, but the law holds to machine precision provided $N$ is sufficiently large, making the finite system numerically indistinguishable from the infinite one.
The elements of the Jacobian matrix $\mathbf{J} \in \mathbb{R}^{N \times N}$ are given by \begin{equation} \label{jac} \mathbf{J}(i, j) = \begin{cases} -c_1 - \sum_{s \geq 2} s B_s \frac{c_s}{c_1}, & i = 1,\,j = 1 \\ j B_j, & i = 1,\,j > 1 \\ c_1 - B_2 \frac{c_2}{c_1}, & i = 2,\,j = 1 \\ -B_i \frac{c_i}{c_1}, & i > 2,\,j = 1 \\ (i - 1) c_1, & i > 2,\,j = i - 1 \\ B_i - i c_1, & i \geq 2,\,j = i \\ 0, & \text{otherwise} \end{cases} \end{equation} For moderate values of $N$ we can compute the complete spectrum $\sigma(\textbf{J})$ of $\mathbf{J}$ with the help of standard LAPACK procedures (or their wrappers as in \texttt{numpy}). For example, Fig. \ref{fig:spectrum} of the main text was obtained this way for $N = 5000$. However, with Hopf bifurcation in mind, we are not interested in the whole spectrum of $\mathbf{J}$ but only in its eigenvalues that invade the complex half-plane with a positive real part, $\text{Re} \lambda > 0$. When $\beta$ is close to unity, $N$ gets as big as $10^7$ making the computation of all the eigenvalues not only superfluous but highly inefficient. Instead, we can use the so-called \textit{inverse iterations} (or \textit{inverse power method}) that allow one to find the eigenvalue closest to a given complex number $\mu \in \mathbb{C}$ and its corresponding eigenvector. The iterations start from an initial vector $\textbf{v}_0 \in \mathbb{C}^N$ that is typically chosen to be random unless some a priori information is available. 
At each iteration, the algorithm solves a linear system of equations and normalizes a vector: \begin{equation}\label{inv_iter} \textbf{u}_k = (\textbf{J} - \mu \textbf{I}_N)^{-1} \textbf{v}_{k-1}, \quad \textbf{v}_k = \frac{\textbf{u}_k}{\| \textbf{u}_k \|} \end{equation} The resulting vector is an approximate eigenvector of $\textbf{J}$: \[ \textbf{J} \textbf{v}_k \approx \lambda_k \textbf{v}_k, \quad \lambda_k \approx \text{argmin}_{\lambda \in \sigma(\textbf{J})} |\lambda - \mu| \] This method converges fast and very few iterations are needed when $\mu$ is close to the desired eigenvalue. \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth]{inv_pwr_conv.pdf} \caption{The inverse power method converges geometrically and its rate of convergence depends on how close the spectral shift $\mu$ is to the eigenvalue $\lambda$ we are computing. The error is measured as $|\lambda - \lambda_k|$, where $\lambda_k$ is the result of the $k$-th iteration. We used $B = 10^{-10}$, $\beta = 1.5$, and $N = 10^7$.} \label{fig:inv_pwr} \end{center} \end{figure} The computational complexity of the algorithm stems from the need to solve a linear system of equations at each iteration \eqref{inv_iter}. To make the iterations efficient, we exploit the structure of the Jacobian \eqref{jac}. Matrix $\mathbf{J}$ is extremely sparse and has the following template: \begin{equation}\label{jac_template} \mathbf{J} = \begin{bmatrix} \times & \times & \times & \times & \ldots & \times & \times & \times \\ \times & \times & 0 & 0 & \ldots & 0 & 0 & 0 \\ \times & \times & \times & 0 & \ldots & 0 & 0 & 0 \\ \times & 0 & \times & \times & \ldots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ \times & 0 & 0 & 0 & \ldots & \times & \times & 0 \\ \times & 0 & 0 & 0 & \ldots & 0 & \times & \times \\ \end{bmatrix}, \end{equation} where $\times$ denotes nonzero elements. 
Matrices of this form \eqref{jac_template} admit an exceptionally pleasant upper-lower triangular factorization $\mathbf{J} = \mathbf{U} \mathbf{L}$ with \begin{subequations} \begin{equation}\label{jac_u} \mathbf{U} = \begin{bmatrix} \times & \times & \times & \times & \ldots & \times & \times & \times \\ 0 & \times & 0 & 0 & \ldots & 0 & 0 & 0 \\ 0 & 0 & \times & 0 & \ldots & 0 & 0 & 0 \\ 0 & 0 & 0 & \times & \ldots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \ldots & 0 & \times & 0 \\ 0 & 0 & 0 & 0 & \ldots & 0 & 0 & \times \\ \end{bmatrix} \end{equation} and \begin{equation}\label{jac_l} \mathbf{L} = \begin{bmatrix} \times & 0 & 0 & 0 & \ldots & 0 & 0 & 0 \\ \times & \times & 0 & 0 & \ldots & 0 & 0 & 0 \\ \times & \times & \times & 0 & \ldots & 0 & 0 & 0 \\ \times & 0 & \times & \times & \ldots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ \times & 0 & 0 & 0 & \ldots & \times & \times & 0 \\ \times & 0 & 0 & 0 & \ldots & 0 & \times & \times \\ \end{bmatrix} \end{equation} \end{subequations} This means that we can precompute the $\mathbf{U} \mathbf{L}$ factorization \eqref{jac_u}--\eqref{jac_l} of $\textbf{J} - \mu \textbf{I}_N$ and then solve two very sparse triangular systems per iteration \eqref{inv_iter}. We used this approach to compute the unstable region \eqref{tongue} in the parameter space as depicted in Fig. \ref{fig:region}. To further accelerate the computations, we employed parameter continuation: we took the approximate eigenvalue $\lambda$ and eigenvector $\textbf{v}$ corresponding to parameters $(\beta, B)$ as the spectral shift $\tilde{\mu}$ and starting vector $\tilde{\textbf{v}}_0$ for the adjacent parameters $(\tilde{\beta}, \tilde{B})$. This allowed us to process Jacobians of size $N = 10^7$ in reasonable time on a standard laptop. 
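For illustration, a minimal dense-matrix sketch of the inverse iterations \eqref{inv_iter} is given below. The names are ours, and this is illustrative only: for the large sparse Jacobians considered above one would precompute the $\mathbf{U}\mathbf{L}$ factorization of $\textbf{J} - \mu \textbf{I}_N$ and solve two triangular systems per iteration instead of calling a dense solver.

```python
import numpy as np

# Dense sketch of the inverse power method: each iteration solves one linear
# system with the shifted matrix and normalizes the result.
def inverse_iteration(J, mu, n_iter=50, v0=None):
    N = J.shape[0]
    v = v0 if v0 is not None else np.random.default_rng(0).standard_normal(N)
    v = v / np.linalg.norm(v)
    A = J - mu * np.eye(N)
    for _ in range(n_iter):
        u = np.linalg.solve(A, v)      # one linear solve per iteration
        v = u / np.linalg.norm(u)
    lam = v @ J @ v                    # Rayleigh quotient (v is normalized)
    return lam, v

# Toy check on a matrix with known spectrum {1, 2, 3, 4}:
lam, _ = inverse_iteration(np.diag([1.0, 2.0, 3.0, 4.0]), mu=2.8)
assert abs(lam - 3.0) < 1e-10          # eigenvalue closest to mu = 2.8
```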
Figure \ref{fig:inv_pwr} shows how the convergence of the inverse power method depends on the spectral shift $\mu$: the convergence is always geometric, but its rate decreases when $\mu$ is far from the eigenvalue that we seek to compute; this motivates the use of parameter continuation. On a standard laptop, the computation takes 30--40 seconds in the worst case and less than 1 second in the best one. \subsection{The product kernel} Aggregation-fragmentation processes in which both aggregation and fragmentation are collision-controlled, and each fragmentation event leads to complete shattering, have been studied in \cite{pnas2015,AS17,AS18} in the situation when the rates of aggregation and shattering events differ only by an amplitude: \begin{equation} \label{SK} S_{ij} = \lambda K_{ij} \end{equation} This relation between the rates is natural since both aggregation and shattering are possible outcomes of the binary collision \cite{pnas2015}. The governing equations then read \begin{subequations} \begin{equation} \label{AS:k} \frac{dc_k}{dt} = \frac{1}{2}\sum_{i+j=k} K_{ij}\,c_i\,c_j-(1+\lambda) c_k\sum_{j\geq 1} K_{kj}\,c_j \end{equation} for $k\geq 2$, while the density of monomers satisfies \begin{eqnarray} \label{AS:1} \frac{dc_1}{dt} &=& - c_1\sum_{i\geq 1} K_{1,i}\,c_i + \lambda c_1\sum_{i\geq 2} i K_{1,i}\,c_i \nonumber\\ &+& \frac{ \lambda}{2}\sum_{i\geq 2} \sum_{j\geq 2} (i+j) K_{ij}\,c_i\,c_j \end{eqnarray} \end{subequations} For the special class of rates \begin{equation} \label{K-a} K_{ij} = (i/j)^a + (j/i)^a \end{equation} never-ending oscillations have been detected \cite{AS17,AS18} in the region \begin{equation} \{(a, \lambda)|\, \tfrac{1}{2} < a \leq 1, ~ 0<\lambda\leq \lambda_c(a)\} \end{equation} To appreciate the bounds $\frac{1}{2} < a \leq 1$ we note that aggregation equations with kernel \eqref{K-a} and $a >1$ are ill-defined due to instantaneous gelation.
Further, for aggregation equations with kernel \eqref{K-a} driven by the constant input of monomers, the densities approach steady state values when $0\leq a<\frac{1}{2}$, while in the range $\frac{1}{2} < a \leq 1$ the densities evolve ad infinitum \cite{Colm-PK}. The shattering effectively acts as a source of monomers, and this qualitatively explains the appearance of $a_c = \frac{1}{2}$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{cMM} \caption{Bottom to top: The stationary densities $c_1,~M_0,~M_2$. The density of monomers is $c_1=(1+2\lambda)/(2+2\lambda)$; the moments $M_0,~M_2$ are given by \eqref{MM}.} \label{fig:cMM} \end{center} \end{figure} To gain insight into the behavior of the aggregation-shattering models satisfying \eqref{SK}, one may study kernels different from \eqref{K-a} and hopefully more amenable to analytical treatment. The product kernel \begin{equation} K_{ij} = ij \end{equation} is particularly well-known --- in the context of pure aggregation it provides the simplest description of gelation \cite{Flory,book}. For this kernel, Eqs.~\eqref{AS:k}--\eqref{AS:1} become \begin{subequations} \begin{align} \label{AS:k-product} \frac{dc_k}{dt} &= \frac{1}{2}\sum_{i+j=k} ij c_i\,c_j-(1+\lambda)kc_k, \quad k\geq 2\\ \label{AS:1-product} \frac{dc_1}{dt} &= -(1+\lambda)c_1+\lambda M_2 \end{align} \end{subequations} where $M_2(t) =\sum_{j\geq 1} j^2 c_j(t)$ is the second moment, and the mass density is again set to unity: $\sum_{j\geq 1} j c_j(t) = 1$. The system \eqref{AS:k-product}--\eqref{AS:1-product} does not admit solutions with never-ending oscillations. 
Instead, for every $\lambda>0$ solutions quickly approach the steady state \begin{equation} \label{ck:ER} c_k = \frac{1}{\sqrt{4\pi}}\,\frac{\Gamma\left(k-\frac{1}{2}\right)}{k\,\Gamma(k+1)}\,\frac{(1+2\lambda)^k}{(1+\lambda)^{2k-1}} \end{equation} The moments $M_p=\sum_{k\geq 1} k^pc_k$ approach [see also Fig.~\ref{fig:cMM}] \begin{equation} \label{MM} \begin{split} M_0 & = 2+2(1+\lambda)\,\ln\frac{1+2\lambda}{2+2\lambda}\\ M_2 & = \frac{1+2\lambda}{2\lambda}\\ M_3 & = \frac{(1+2\lambda)(1+2\lambda+2\lambda^2)}{4\lambda^3}\\ M_4 & = \frac{(1+2\lambda)(3+12\lambda+18\lambda^2+12\lambda^3+4\lambda^4)}{8\lambda^5} \end{split} \end{equation} etc. The tail of the distribution \eqref{ck:ER} is \begin{equation} c_k \sim k^{-5/2} e^{-\mu k}, \quad \mu =2\ln(1+\lambda)- \ln(1+2\lambda) \end{equation} Since $\mu\simeq \lambda^2$ as $\lambda\to +0$, the mass distribution decays algebraically, $c_k \sim k^{-5/2}$, when $1\ll k \ll \lambda^{-2}$. These analytical results are very useful for validating the accuracy of various numerical methods. \end{document}
\section{Introduction} Forecasting is of utmost importance to the integration of renewable energy into power systems and electricity markets. Attention to energy forecasting has increased tremendously over the years \cite{Hong2020}. For instance, thinking of short-term operational problems, transmission system operators (TSOs) have to operate reserves optimally to keep the system in balance at reasonable costs. Indeed, in Denmark, the TSO has long argued that the 10-min lead time is the most important, since wind power fluctuations at this horizon particularly affect the system balance, see \cite{Akhmatov2007} for instance. Emphasis here is on offshore wind power forecasting, since those short-term fluctuations in power generation are most significant offshore. Even though most efforts in wind power forecasting are placed on lead times from hours to days, many are investing in alternative approaches to improve the accuracy of very short-term forecasts, for instance leveraging detailed turbine-level data \cite{Gilbert2019}. Those very short-term lead times are not only crucial but also those for which forecasts are the most difficult to improve, especially compared to the simple but very effective persistence benchmark. Forecasts characterize and reduce but do not eliminate uncertainty. Thus forecasts should be probabilistic in nature, taking the form of probability distributions, following the argument of \cite{Dawid1984} among others. Wind power generation is a stochastic process which is double-bounded by nature, both by zero when there is no production at all, and by the nominal power for high-enough wind speeds. For short-term forecasting, statistical methods have proved to be more skilled and accurate. However, those methods often rely on a Gaussian assumption -- which cannot be appropriate for a double-bounded variable.
In \cite{Pinson2012}, it is proposed to move from the classical Gaussian assumption to a framework where the wind power variable follows a generalized logit-normal distribution. In that framework, though, not all the parameters of the distribution are estimated and tracked, the shape parameter being selected by cross-validation. Consequently, here we propose to revisit this work and to estimate all the parameters of the generalized logit-normal distribution within a maximum likelihood framework. Such a framework is particularly suitable for obtaining skilled probabilistic forecasts. In addition, emphasis is placed on describing both batch and recursive estimation approaches, in order to go towards an online learning approach as a basis for probabilistic forecasting. For a nice introduction to online learning, the reader is referred to \cite{Orabona2020}. Online learning (with exponential forgetting) makes it possible to accommodate the non-stationarity of wind power generation time series. The models and estimation framework are first presented in Section \ref{sec:method}, and the resulting algorithms in Section \ref{sec:algos}. They are then applied to 10-min-ahead point and probabilistic forecasting at the Anholt offshore wind farm in Section \ref{sec:application}. Finally, some concluding remarks and prospects are given in Section \ref{sec:conclusion}. \section{Model and Estimation Framework} \label{sec:method} \subsection{Generalized Logit-Normal Distribution and its Parameters} For an original random variable $X \in (0, 1)$, the generalized logit transform $Y$ is given by \begin{equation} Y=\gamma(X;\nu)=\ln\left(\frac{X^\nu}{1-X^\nu} \right), \quad \nu>0, \end{equation} where $\nu$ is the shape parameter. When $Y$ follows a Gaussian distribution $\mathcal{N}(\mu,\sigma^2)$, the original variable $X$ follows a generalized logit-normal distribution $L_{\nu}(\mu,\sigma^2)$, see \cite{Pinson2012}.
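For concreteness, the transform $\gamma(\cdot\,;\nu)$ and its inverse can be sketched as follows (a sketch; the function names are ours):

```python
import numpy as np

# Sketch of the generalized logit transform and its inverse.
def glogit(x, nu):
    """y = gamma(x; nu) = ln(x^nu / (1 - x^nu)), for x in (0, 1), nu > 0."""
    return np.log(x**nu / (1.0 - x**nu))

def glogit_inv(y, nu):
    """Inverse transform: x = (e^y / (1 + e^y))^(1/nu)."""
    return (np.exp(y) / (1.0 + np.exp(y)))**(1.0 / nu)

# Round trip on a few values in (0, 1).
x = np.array([0.05, 0.4, 0.9])
assert np.allclose(glogit_inv(glogit(x, nu=1.7), nu=1.7), x)
```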
The probability density function is given by \begin{equation} \label{eq:f} f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\frac{\nu}{x(1-x^\nu)}\exp{\left[-\frac{1}{2}\left(\frac{\gamma(x;\nu)-\mu}{\sigma}\right)^2\right]}. \end{equation} Let $X$ be the wind power random variable. We seek $\nu$ such that the transformed variable $Y$ is as close as possible to a Gaussian variable, which can then be forecast in a Gaussian framework. As we have access to some realizations $(x_t)$ of $X$ and to the analytical expression of its density, we can then maximize the probability of observing the data $(x_t)$ depending on $\nu$, $\mu$ and $\sigma^2$, that is, estimate all the parameters of the distribution \eqref{eq:f} using maximum likelihood inference. In the case of wind power generation, the observations $(x_t)$ are strongly correlated. We thus assume that $Y_t|Y_{t-1},...,Y_{t-p} \sim \mathcal{N}(\mu_t,\sigma^2)$ where $\mu_t = \sum_{k=1}^p \phi_k Y_{t-k}$, that is, the distribution of $X_t|X_{t-1},...,X_{t-p}$ is a generalized logit-normal distribution of density \begin{equation} \label{eq:f_conditional} \frac{1}{\sqrt{2\pi\sigma^2}}\frac{\nu}{x_t(1-x_t^\nu)}\exp{\left[-\frac{1}{2}\left(\frac{y_t-\sum_{k=1}^p\phi_k y_{t-k}}{\sigma}\right)^2\right]}, \end{equation} where $y_t=\gamma(x_t;\nu)$. While the density in (\ref{eq:f_conditional}) is defined only for $x \in (0,1)$, the wind power generation can take values 0 and 1. We thus choose to look at the observations $x_t \in [0,1]$ as a coarsened version of $X$, see \cite{Lesaffre2007}. This coarsened data framework has been formalized by \cite{Heitjan1991} and \cite{Heitjan1993}. \subsection{Maximum Likelihood Inference} Let $\Phi=(\phi_1,...,\phi_p)^\top \in \mathbb{R}^p$.
The maximum likelihood inference is based on the likelihood function, given by \begin{equation} \label{eq:L} L(\nu, \Phi, \sigma^2|\textbf{x})=\prod_{t=1}^{N}f(x_t|x_{t-1},...,x_{t-p}, \nu,\Phi,\sigma^2), \end{equation} which is the probability of the observed data under the model $f$, assuming the realizations of $X_t|X_{t-1},...,X_{t-p}$ are independent and identically distributed. We think of $L(\nu,\Phi,\sigma^2|\textbf{x})$ as a function of $\nu$, $\Phi$ and $\sigma^2$, the data $(x_t)$ being fixed. The method of maximum likelihood chooses the values $(\nu,\Phi,\sigma^2)=(\hat{\nu},\hat{\Phi},\hat{\sigma}^2)$ to maximize $L(\nu, \Phi, \sigma^2|\textbf{x})$. The logarithm of $L$ being easier to maximize, especially when exponential distributions are involved, it is used instead of the likelihood. For model $f$ the negative log-likelihood function is \begin{equation} \label{eq:negl} \begin{split} \Tilde{l}(\nu, \Phi, \sigma^2|\textbf{x}) & = \frac{N-p}{2}\ln(\sigma^2)-(N-p)\ln(\nu) \\ & +\sum_{t=p+1}^{N}\ln(1-x_t^\nu) \\ & +\frac{1}{2\sigma^2}(\textbf{y}-\textbf{Y}\Phi)^\top(\textbf{y}-\textbf{Y}\Phi) + C, \end{split} \end{equation} where $\textbf{y}=(y_{p+1}, ..., y_N)^\top \in \mathbb{R}^{N-p}$, $\textbf{Y} \in \mathbb{R}^{(N-p) \times p}$ is the matrix with columns $B\textbf{y}, B^2\textbf{y}, ..., B^p\textbf{y}$, $B$ being the backshift operator, and $C$ is a constant which does not depend on $\nu$, $\Phi$ or $\sigma^2$. Computing the first derivatives of \eqref{eq:negl} w.r.t. the parameters of the distribution, we can retrieve stationary points. It is worth noting that those points are minimizers only if the negative log-likelihood is convex. Taking the derivative of \eqref{eq:negl} w.r.t. $\Phi$, resp.
$\sigma^2$, and setting it equal to zero, leads to the usual maximum likelihood estimators \begin{equation} \label{eq:Phi} \Hat{\Phi}=(\textbf{Y}^\top \textbf{Y})^{-1}\textbf{Y}^\top \textbf{y}, \quad \quad \Hat{\sigma}^2=\frac{(\textbf{y}-\textbf{Y}\Hat{\Phi})^\top (\textbf{y}-\textbf{Y}\Hat{\Phi})}{N-p}. \end{equation} Taking the derivative of \eqref{eq:negl} w.r.t. $\nu$, we thus need to solve \begin{equation} \label{eq:nu} -\frac{N-p}{\nu}-\sum_{t=p+1}^{N} \frac{\ln(x_t) {x_t}^\nu}{1-{x_t}^\nu} + \frac{(\textbf{u}-\textbf{U}\Phi)^\top(\textbf{y}-\textbf{Y}\Phi)}{\sigma^2} = 0, \end{equation} where $\textbf{u}=\frac{\partial \textbf{y}}{\partial \nu}$ with $u_t=\ln(x_t)(1+\frac{{x_t}^\nu}{1-{x_t}^\nu})$, $\textbf{U}=\frac{\partial\textbf{Y}}{\partial\nu}$ with columns $B\textbf{u}, B^2\textbf{u},...,B^p\textbf{u}$. Unlike $\hat{\Phi}$ and $\hat{\sigma}^2$, $\hat{\nu}$ does not have a closed-form solution and a descent algorithm is then to be used to solve \eqref{eq:nu}. \section{Batch and Recursive Algorithms} \label{sec:algos} \subsection{Batch Algorithm} We use both the closed-form solutions in \eqref{eq:Phi} for $\Hat{\Phi}$ and $\Hat{\sigma}^2$, and a Newton-Raphson algorithm to solve \eqref{eq:nu} in order to estimate the shape parameter $\nu$. The computation of the Newton-Raphson step requires the second derivative of \eqref{eq:negl} w.r.t. $\nu$, i.e. \begin{equation} \label{eq:nu2} \begin{split} \frac{\partial^2\Tilde{l}}{\partial\nu^2 } & = \frac{N-p}{\nu^2 }- \sum_{t=p+1}^{N}\ln(x_t)^2\frac{x_t^\nu}{(1-x_t^\nu)^2} \\ & +\frac{(\textbf{v}-\textbf{V}\Phi)^\top(\textbf{y}-\textbf{Y}\Phi)}{\sigma^2} + \frac{\parallel \textbf{u}-\textbf{U}\Phi \parallel_2^2}{\sigma^2}, \end{split} \end{equation} where $\textbf{v}=\frac{\partial \textbf{u}}{\partial \nu}$ with $v_t= u_t \ln(x_t) \frac{{x_t}^\nu}{1-{x_t}^\nu}$, $\textbf{V}=\frac{\partial\textbf{U}}{\partial \nu}$ with columns $B\textbf{v}, B^2\textbf{v},...,B^p\textbf{v}$. 
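For a fixed $\nu$, the closed-form estimators \eqref{eq:Phi} amount to an ordinary least-squares fit of the transformed series; a minimal sketch (the function name is ours):

```python
import numpy as np

# Sketch of the closed-form ML estimators of Phi and sigma^2 for an AR(p)
# model of the transformed series y_t, with nu held fixed.
def ar_mle(y, p):
    N = len(y)
    # Lag matrix with columns y_{t-1}, ..., y_{t-p} for t = p+1, ..., N.
    Y = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])
    yy = y[p:]
    Phi = np.linalg.solve(Y.T @ Y, Y.T @ yy)   # normal equations
    resid = yy - Y @ Phi
    sigma2 = resid @ resid / (N - p)
    return Phi, sigma2

# Usage on a simulated AR(1) series with phi = 0.8 and noise s.d. 0.1;
# the estimates should be close to the true values.
rng = np.random.default_rng(1)
y = np.zeros(20000)
for t in range(1, 20000):
    y[t] = 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
Phi, sigma2 = ar_mle(y, p=1)
assert abs(Phi[0] - 0.8) < 0.05 and abs(sigma2 - 0.01) < 0.002
```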
The full algorithm is described in Algorithm \ref{algo:diagalgo} and has shown very fast convergence on numerous simulations of samples distributed according to the generalized logit-normal distribution with different values of $\Phi$, $\sigma^2$ and $\nu$. \begin{algorithm} \caption{Batch MLE with diagonalization} \begin{algorithmic} \label{algo:diagalgo} \STATE Set $i \leftarrow 1$ and let $\nu_1=1$, $\epsilon=0.001$. \REPEAT \STATE{1. \textit{Update.} $\Phi_i=(\textbf{Y}^\top \textbf{Y})^{-1}\textbf{Y}^\top \textbf{y}$; $\sigma^2_i = \frac{(\textbf{y}-\textbf{Y}\Phi_i)^\top (\textbf{y}-\textbf{Y}\Phi_i)}{N-p}$.} \STATE{2. \textit{Compute the Newton step and decrement for} $\nu$. \newline $\Delta \nu_{nt}=-\frac{\nabla_\nu \Tilde{l}}{\nabla_\nu^2 \Tilde{l}}$; $\lambda^2=\frac{(\nabla_\nu \Tilde{l})^2}{\nabla_\nu^2 \Tilde{l}}$.} \STATE{3. \textit{Stopping criterion.} \textbf{quit} if $\lambda^2/2 \leq \epsilon$.} \STATE{4. \textit{Line search.} Choose step size $t$ by backtracking line search.} \STATE{5. \textit{Update.} $\nu_{i+1}=\nu_{i}+t\Delta\nu_{nt}$.} \UNTIL{termination test satisfied.} \end{algorithmic} \end{algorithm} \subsection{Recursive Algorithm} The batch algorithm is well suited if the data are known to be stationary to second order, that is, assuming the parameters of the distribution \eqref{eq:f} do not change over the course of time. But if, as we suspect in the case of wind power data, the time series is not stationary and the parameters are not constant, then the batch algorithm is not appropriate and alternative solutions are required. Recursive estimation allows for such a parametric time-variability and provides information not only on the existence of non-stationarity but also on the possible nature of the parametric variations (see e.g. \cite{Young1984}). As the inference relies on the likelihood function, it is straightforward to derive a recursive algorithm which only requires the first derivatives of \eqref{eq:negl} w.r.t.
the parameters. Let us introduce $\Hat{\Theta}_t = (\Hat{\Phi}_t, \Hat{\sigma}^2_t, \Hat{\nu}_t)$, the estimate of the parameters at time $t$. The recursive estimation procedure relies on a Newton-Raphson step for obtaining the estimate $\Hat{\Theta}_t$ as a function of the previous estimate $\Hat{\Theta}_{t-1}$, see e.g. \cite{Madsen2007} and \cite{Madsen2012}. Let us introduce the time-dependent negative log-likelihood objective function to be minimized at time $t$ \begin{equation} \label{eq:recursiveobj} S_t(\Theta) = -\frac{1}{n_\alpha} \sum_{j=p+1}^{t} \alpha^{t-j}\ln(f_j(\Theta)), \end{equation} where $f_j(\Theta) = f(x_j | x_{j-1}, ... , x_{j-p}; \Theta)$, $\alpha$ is a forgetting factor, $\alpha \in (0,1)$, allowing for exponential forgetting of past observations, and $n_\alpha = \frac{1}{1-\alpha}$ is the effective number of observations used for normalizing the weighted negative log-likelihood function. Applying one Newton-Raphson step we have \begin{equation} \label{eq:NRstep} \Hat{\Theta}_t = \Hat{\Theta}_{t-1} - \frac{\nabla_\Theta S_t(\Hat{\Theta}_{t-1})}{\nabla^2_\Theta S_t(\Hat{\Theta}_{t-1})}. \end{equation} As \begin{equation} \label{eq:gradient1} \begin{split} \nabla_\Theta S_t(\Hat{\Theta}_{t-1}) & = \alpha \nabla_\Theta S_{t-1}(\Hat{\Theta}_{t-1}) \\ &- (1-\alpha) \nabla_\Theta \ln(f_t(\Hat{\Theta}_{t-1})), \end{split} \end{equation} assuming that $\Hat{\Theta}_{t-1}$ minimizes $S_{t-1}(\Theta)$, we get \begin{equation} \label{eq:gradient2} \nabla_\Theta S_t(\Hat{\Theta}_{t-1}) = -(1-\alpha) \nabla_\Theta \ln(f_t(\Hat{\Theta}_{t-1})). \end{equation} From \eqref{eq:gradient1} we also get \begin{equation} \label{eq:hessian1} \begin{split} \nabla^2_\Theta S_t(\Hat{\Theta}_{t-1}) & = \alpha \nabla^2_\Theta S_{t-1}(\Hat{\Theta}_{t-1}) \\ &- (1-\alpha) \nabla^2_\Theta \ln(f_t(\Hat{\Theta}_{t-1})).
\end{split} \end{equation} As \begin{equation} \label{eq:hessian2} \begin{split} \nabla^2_\Theta \ln(f_t(\Hat{\Theta}_{t-1})) &= \frac{\nabla^2_\Theta f_t({\Hat{\Theta}_{t-1}})}{f_t({\Hat{\Theta}_{t-1}})} \\ &- \frac{\nabla_\Theta f_t(\Hat{\Theta}_{t-1})(\nabla_\Theta f_t(\Hat{\Theta}_{t-1}))^\top}{f_t({\Hat{\Theta}_{t-1}})^2}, \end{split} \end{equation} assuming $f_t$ is (almost) linear in $\Theta$ in the neighborhood of $\Hat{\Theta}_{t-1}$, the first term in \eqref{eq:hessian2} vanishes and we obtain the following approximation \begin{equation} \label{eq:hessian3} \nabla^2_\Theta \ln(f_t(\Hat{\Theta}_{t-1})) = -\textbf{h}^{}_t \textbf{h}^\top_t, \end{equation} where $\textbf{h}_t = \frac{\nabla_\Theta f_t(\Hat{\Theta}_{t-1})}{f_t({\Hat{\Theta}_{t-1}})} = \nabla_\Theta \ln(f_t(\Hat{\Theta}_{t-1}))$. Let $\Hat{\textbf{R}}_t = \nabla^2_\Theta S_t(\Hat{\Theta}_t)$ and assume that the objective criterion $S$ is smooth in the vicinity of $\Hat{\Theta}_{t}$, and the adaptation step small enough so that \begin{equation} \label{eq:assumption} \Hat{\textbf{R}}_t = \nabla^2_\Theta S_t(\Hat{\Theta}_t) \simeq \nabla^2_\Theta S_t(\Hat{\Theta}_{t-1}). \end{equation} This is a classic assumption for deriving recursive estimation methods for stochastic systems (see \cite{Ljung1983}). The two-step recursive scheme at time $t$ is then \begin{align} \Hat{\textbf{R}}_t &= \alpha \Hat{\textbf{R}}_{t-1} + (1-\alpha) \textbf{h}^{}_t \textbf{h}^\top_t, \label{eq:recursive1} \\ \Hat{\Theta}_t &= \Hat{\Theta}_{t-1} + (1-\alpha) \Hat{\textbf{R}}_t^{-1} \textbf{h}_t. \label{eq:recursive2} \end{align} Equation \eqref{eq:recursive1} derives from \eqref{eq:hessian1} and \eqref{eq:hessian3}. Equation \eqref{eq:recursive2} derives from \eqref{eq:NRstep}, \eqref{eq:gradient2} and \eqref{eq:assumption}. The final algorithm is available in Algorithm \ref{algo:rMLE}.
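The two-step scheme \eqref{eq:recursive1}--\eqref{eq:recursive2} can be sketched generically as follows (a sketch with names of our choosing; $\textbf{h}_t$ is the score of whichever density is being tracked):

```python
import numpy as np

# Generic sketch of the two-step recursive update: the R-recursion followed
# by the parameter step.  h is the score vector evaluated at the previous
# estimate; alpha is the forgetting factor.
def recursive_step(R, theta, h, alpha):
    R = alpha * R + (1 - alpha) * np.outer(h, h)
    theta = theta + (1 - alpha) * np.linalg.solve(R, h)
    return R, theta

# Toy illustration: tracking the mean of Gaussian observations, for which
# the score w.r.t. the mean is simply h = x - theta.
rng = np.random.default_rng(0)
R, theta = np.array([[1e-2]]), np.array([0.0])
for x in 2.0 + rng.standard_normal(5000):
    R, theta = recursive_step(R, theta, np.array([x - theta[0]]), alpha=0.99)
assert abs(theta[0] - 2.0) < 0.3
```

In practice $\Hat{\textbf{R}}_t$ is kept from becoming singular during the warm-up (here via a small positive initialization), mirroring the warm-up period used in Algorithm \ref{algo:rMLE}.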
\begin{algorithm} \caption{Recursive MLE} \begin{algorithmic} \label{algo:rMLE} \STATE Let $\Phi_0=\textbf{0}, \sigma_0^2=1, \nu_0=1, \textbf{h}_0=\textbf{0}, \Hat{\textbf{R}}_0=\textbf{0}_{(p+2)\times(p+2)}$. \REPEAT \STATE{1. \textit{Update.} $\Hat{\textbf{R}}_i = \alpha \Hat{\textbf{R}}_{i-1} + (1-\alpha) \textbf{h}^{}_i \textbf{h}^\top_i$.} \STATE{2. \textit{Update.} $\Hat{\Theta}_i = \Hat{\Theta}_{i-1} + (1-\alpha) \Hat{\textbf{R}}_i^{-1} \textbf{h}_i$ if $i>100+p$.} \UNTIL{$i$ reaches the forecasting time $t$.} \end{algorithmic} \end{algorithm} \section{Very-short-term Wind Power Forecasting Application} \label{sec:application} We apply the proposed models to a real dataset consisting of wind power generation from a large wind farm, Anholt in Denmark, from July 1, 2013 to August 31, 2014. Emphasis is placed on the maximum likelihood framework and its online learning derivation. For a comparison of the generalized logit-normal distribution to other distributions (e.g., Beta) for the purpose of wind power forecasting, see \cite{Pinson2012}. \subsection{Data Description} Active power is available for 110 wind turbines at a 10-minute temporal resolution. The time series are scaled individually according to the nominal power of the wind turbines. The average generation over the wind farm is then computed from the wind turbines available at each time step, in order to handle missing values. The resulting random variable is then $X_t \in [0,1]$, the average active power generated in the wind farm at time $t$. We are interested in forecasting $X_{t+1}$ (point forecasting) and its distribution (probabilistic forecasting) knowing the realization of $X_t$; the lead time is therefore 10 minutes. We split our data into two datasets: \begin{itemize} \item a training/cross-validation dataset from July 1, 2013 to March 31, 2014, resulting in 39,450 observations, \item a test dataset from April 1 to August 31, 2014, resulting in 22,029 observations.
\end{itemize} The training set is used to fit all models, the cross-validation set to select hyper-parameters if needed, and the test set to compare the proposed methodology to the benchmarks. It is worth noting that the training set is long enough for Algorithm \ref{algo:rMLE} to run recursively already during the training period, after a short warm-up of 100 iterations. \subsection{Point Forecasting} \label{sec:point} In order to evaluate and compare the performance of the proposed methods for point forecasting we use the Root Mean Square Error (RMSE). When a model requires hyper-parameters to be selected before estimating the parameters, we use the following procedure: \begin{itemize} \item The candidate models are fitted over a grid of hyper-parameter values from July 1 to October 31, 2013; \item they are then retrained in a time-series cross-validation scheme, from November 1, 2013 to March 31, 2014, in which the size of the training window increases as we move through the validation set (consistent with a leave-one-out setup); \item the hyper-parameters leading to the smallest RMSE on the cross-validation set are selected; \item finally the final model is fitted over the whole training/cross-validation set and used for forecasting on the test set. \end{itemize} \paragraph{Benchmarks} We compare our methods to three benchmarks: persistence, a normal auto-regressive (NAR) model and its recursive version. Persistence consists in taking $\hat{x}_{t+1} = x_t$. The normal AR model assumes $X_t|X_{t-1},...,X_{t-p} \sim \mathcal{N}(\mu_t,\sigma^2)$ where $\mu_t = \sum_{k=1}^p \phi_k X_{t-k}$. In this Gaussian setup the forecasts are unbounded and may be greater than 1 or lower than 0. Thus we truncate \textit{a posteriori} the out-of-range predictions so that they lie in the interval $[0,1]$. We test AR models up to lag $p=5$ and observe that no significant improvement is provided beyond lag 2 for both batch and recursive approaches. We thus select $p = 2$.
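The time-series cross-validation scheme above can be sketched as follows; `fit` and `predict` are placeholders for any candidate model's estimation and forecasting routines, and the resulting RMSE is what each point of the hyper-parameter grid is scored on.

```python
import numpy as np

def expanding_window_rmse(fit, predict, x, t0):
    """Expanding-window (rolling-origin) evaluation: for each time
    t >= t0, refit on x[:t] and forecast x[t]; report the RMSE of
    the one-step-ahead errors.  `fit` and `predict` stand for the
    candidate model's estimation and forecasting routines."""
    errors = [x[t] - predict(fit(x[:t]), x[:t]) for t in range(t0, len(x))]
    return float(np.sqrt(np.mean(np.square(errors))))
```

For instance, scoring the persistence benchmark amounts to `expanding_window_rmse(lambda h: None, lambda m, h: h[-1], x, t0)`.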
For the recursive AR model we also need to select the forgetting factor $\alpha$, which exponentially down-weights past data. It is selected in the same way, yielding $\alpha = 0.995$. \paragraph{Forecasting using generalized logit-normal distributions} Let $\delta > 0$; each value lower than $\delta$ (resp. greater than $1 - \delta$) is set to $\delta$ (resp. $1 - \delta$), and those ``corrected'' observations are considered as realizations of $X \in (0,1)$. Symmetrically, forecasts lower than $\delta$ (resp. greater than $1 - \delta$) are set to 0 (resp. 1). The value of $\delta$ is selected by cross-validation along with $p$. Algorithm \ref{algo:diagalgo} converges in 11 iterations towards the estimated values $\Hat{\nu}=1.39$, $\Hat{\Phi}=(1.363,-0.370)^\top$ and $\Hat{\sigma}^2=0.11$ for the selected combination of hyper-parameters $\delta=0.005$ and $p=2$. For Algorithm \ref{algo:rMLE}, we choose $\delta=0.005$, $p=2$ and $\alpha=0.9994$ upon cross-validation. Fig. \ref{fig:ANH_point_parameters} shows the estimated parameters of the generalized logit-normal distributions over the test period. \begin{figure}[!ht] \centerline{ \includegraphics[width=.95\columnwidth]{figures/ANH_point_recursive_vs_batch_parameters_GLAR_test.png} } \caption{Parameters of the generalized logit-normal distribution for $p=2$ and $\alpha=0.9994$: $\Hat{\Phi}$ (top), $\Hat{\sigma}^2$ (bottom left) and $\Hat{\nu}$ (bottom right).} \vspace{-3mm} \label{fig:ANH_point_parameters} \end{figure} \paragraph{Results} The point forecasting performance over the test set of the benchmarks and the proposed (GLNAR) algorithms is reported in Table \ref{tab1}. It is worth noting that the test set consists of more than 22,000 observations, a volume of data large enough to support significant conclusions. The best point forecasts are obtained by the model using adaptive generalized logit-normal distributions.
One can note that the model using a constant generalized logit-normal distribution performs worse than the recursive AR model. The assumption that seems to matter most here is therefore that of time-varying parameters. Moreover, the estimated value of the scale parameter is significantly larger in the batch setup than in the recursive one, while the shape parameter is significantly lower. This may confirm that the recursive setup is more appropriate to the characteristics of the time series and thus allows for a better discrimination between the scale and the shape parameters of the distribution. \begin{table}[!ht] \vspace{-2mm} \caption{10-minute-ahead RMSE over the test period, and respective improvements over persistence} \begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{Model} & \textbf{RMSE}& \textbf{Imp. over persist.}\\ \hline persistence &3.27\% &- \\ \hline batch NAR &2.79\% &14.68\% \\ \hline recursive NAR &2.72\% &16.82\% \\ \hline batch GLNAR &2.74\% &16.21\% \\ \hline recursive GLNAR &\textbf{2.70}\% &\textbf{17.43}\% \\ \hline \multicolumn{3}{l}{*\textit{Best forecast bolded.}} \end{tabular} \label{tab1} \end{center} \end{table} \vspace{-5mm} \subsection{Probabilistic Forecasting} Let $F_t$ be a predictive cumulative distribution function at time $t$. The Continuous Ranked Probability Score (CRPS) is defined by \begin{equation} \label{eq:CRPS} \text{CRPS} = \frac{1}{T}\sum_{t=1}^T \text{crps}(F_t,x_t) = \int_{-\infty}^{\infty}\text{BS}(y)\text{d}y, \end{equation} where \begin{equation} \label{eq:crps} \text{crps}(F_t,x_t) = \int_{-\infty}^{\infty}\{F_t(y)-\textbf{1}(y \geq x_t)\}^2 \text{d}y, \end{equation} and BS is the Brier score \begin{equation} \label{eq:BS} \text{BS}(y) = \frac{1}{T}\sum_{t=1}^T \{F_t(y)-\textbf{1}(x_t \leq y)\}^2. \end{equation} See for example \cite{Brier1950} and \cite{Gneiting2007}.
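When the predictive distributions are evaluated on a finite grid covering $[0,1]$, the identity $\text{CRPS}=\int \text{BS}(y)\,\text{d}y$ in \eqref{eq:CRPS} suggests a simple numerical sketch; the grid resolution below is an arbitrary choice.

```python
import numpy as np

def crps_from_brier(cdfs, obs, grid):
    """Numerical CRPS as the integral over `grid` of the Brier
    score BS(y) = mean_t {F_t(y) - 1(x_t <= y)}^2.  `cdfs` is a
    (T, len(grid)) array of predictive CDFs evaluated on `grid`,
    and `obs` holds the T observations."""
    indicator = (obs[:, None] <= grid[None, :]).astype(float)
    brier = np.mean((cdfs - indicator) ** 2, axis=0)  # BS(y) on the grid
    # trapezoidal rule for the integral of BS over the grid
    return float(np.sum(0.5 * (brier[1:] + brier[:-1]) * np.diff(grid)))
```

A perfect deterministic forecast (a step CDF at the observed value) scores 0, and the score degrades as probability mass is placed away from the observation.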
To evaluate the performance of the proposed models for probabilistic forecasting we use the CRPS instead of the RMSE, following the scheme described at the beginning of section \ref{sec:point}. \paragraph{Benchmarks} We compare our method to four benchmarks: climatology, probabilistic persistence, and probabilistic versions of the batch and recursive AR models. Climatology consists in computing empirical quantiles on the training set. We test different grids and choose upon cross-validation to estimate the predictive cumulative distribution from the quantiles $\{0,0.01,...,0.99,1\}$. On the test set the quantiles are updated whenever a new observation is recorded. Probabilistic persistence consists in dressing the point persistence prediction with the most recent observed values of the persistence error; the number of past errors used is chosen upon cross-validation to be 20. For probabilistic AR forecasts, the least squares estimator of the variance of the residuals is used in both batch and recursive modes, and we assume those residuals to follow a Gaussian distribution $\mathcal{N}(0,\Hat{\sigma}^2)$. The forecast distribution of $x_t$ is then a Gaussian distribution $\mathcal{N}(\Hat{x}_t,\Hat{\sigma}^2)$ where $\Hat{x}_t$ is the point forecast from the AR model. The hyper-parameters $p$ and $\alpha$ for the recursive model are selected upon cross-validation with the CRPS, which leads to $p=2$ as for point forecasting, but to a different $\alpha$, now equal to 0.983 instead of 0.995. \paragraph{Forecasting using generalized logit-normal distributions} The lag $p$ selected upon cross-validation with the CRPS remains equal to 2 in both batch and recursive algorithms, while $\delta$ and $\alpha$ change. For Algorithm \ref{algo:diagalgo}, now $\delta=0.006$, which leads to slightly different estimated parameters of the distribution: $\Hat{\nu}=1.37$ and $\Hat{\Phi}=(1.358,-0.365)^\top$, while the variance $\Hat{\sigma}^2=0.11$ remains the same.
For Algorithm \ref{algo:rMLE}, now $\delta=0.004$ and $\alpha$ decreases from 0.9994 for point forecasting to 0.9986 for probabilistic forecasting. Fig. \ref{fig:ANH_proba_parameters} shows the estimated parameters of the generalized logit-normal distributions over the test period, which exhibit higher time-variability because of the lower value of the forgetting factor. \begin{figure}[!ht] \centerline{ \includegraphics[width=.95\columnwidth]{figures/ANH_proba_recursive_vs_batch_parameters_GLAR_test.png} } \caption{Temporal evolution of the parameters of the generalized logit-normal distributions for $p=2$ and $\alpha=0.9986$: $\Hat{\Phi}$ (top), $\Hat{\sigma}^2$ (bottom left) and $\Hat{\nu}$ (bottom right).} \label{fig:ANH_proba_parameters} \vspace{-3mm} \end{figure} \paragraph{Results} The CRPS computed over the test set for all the benchmarks and the proposed models is reported in Table \ref{tab2}. The climatology's predictive cumulative distribution function $F_{t+1}$ remains unchanged whatever the value of $x_t$, which explains the very poor global performance of this method. The performance of the predictive cumulative distributions assuming a Gaussian setup and that of the approach using a constant generalized logit-normal distribution are close, as for point forecasting. However, for probabilistic forecasting, the approach using adaptive generalized logit-normal distributions outperforms the other methods. The Brier scores are plotted in Fig. \ref{fig:ANH_brier}. As expected, the methods using the generalized logit transformation perform better close to the bounds of the interval $[0,1]$. \begin{table}[!ht] \vspace{-2mm} \caption{10-minute-ahead CRPS over the test period, and respective improvements over climatology and persistence} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Model} & \textbf{CRPS} & \textbf{Imp. over clim.} & \textbf{Imp. over persist.}\\ \hline climatology &22.04\% &- &-\\ \hline prob.
persistence &1.36\% &93.85\% &- \\ \hline batch NAR &1.28\% &94.17\% &5.28\% \\ \hline recursive NAR &1.23\% &94.34\% &9.40\% \\ \hline batch GLNAR &1.21\% &94.52\% &10.90\%\\ \hline recursive GLNAR &\textbf{1.06}\% &\textbf{95.17}\% &\textbf{21.57}\%\\ \hline \multicolumn{4}{l}{*\textit{Best forecast bolded.}} \end{tabular} \label{tab2} \end{center} \end{table} \vspace{-5mm} \begin{figure}[!ht] \centerline{ \includegraphics[width=\columnwidth]{figures/ANH_brier_scores_test.png} } \caption{Brier score computed over the test set for all methods but climatology, as a function of the chosen threshold.} \label{fig:ANH_brier} \end{figure} The CRPS and the Brier score give indications about the sharpness of the distributions. In order to check the calibration we show the results of two tools: the reliability diagram in Fig. \ref{fig:reliability}, and a marginal calibration plot in Fig. \ref{fig:marginal_calibration}, i.e. the difference between the average predictive cumulative distribution function $\bar{F}$ on the test set and the empirical cumulative distribution function. For the reliability diagram, the closer the curve is to the diagonal, the better the calibration, as the empirical probabilities approach the nominal ones. See \cite{Pinson2007} and \cite{Gneiting2007} for more details about those calibration tools. One can see that for both indicators the approach using adaptive generalized logit-normal distributions outperforms the other probabilistic forecasting methods. In Fig. \ref{fig:marginal_calibration} the climatology difference is not shown because it is far larger than zero.
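The reliability diagram reduces to comparing nominal probability levels with observed frequencies. A minimal sketch, assuming predictive quantiles have already been extracted from each $F_t$:

```python
import numpy as np

def reliability_points(quantile_forecasts, obs):
    """Observed frequencies for a reliability diagram: for each
    nominal level (one column of `quantile_forecasts`, shape
    (T, L)), the proportion of observations falling below the
    corresponding predictive quantile."""
    return np.mean(obs[:, None] <= quantile_forecasts, axis=0)
```

Plotting these observed frequencies against the nominal levels gives the diagram; perfect calibration puts every point on the diagonal.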
\begin{figure}[!ht] \centerline{ \includegraphics[width=\columnwidth]{figures/ANH_qqplot_test.png} } \caption{Reliability diagram over the test set.} \vspace{-3mm} \label{fig:reliability} \end{figure} \begin{figure}[!ht] \centerline{ \includegraphics[width=\columnwidth]{figures/ANH_F.observed-F.forecast_test.png} } \caption{Marginal calibration plot over the test set.} \label{fig:marginal_calibration} \vspace{-2mm} \end{figure} Example probabilistic forecasts obtained from the adaptive generalized logit-normal approach over a 36-hour period are depicted in Fig. \ref{fig:probapred}, using prediction intervals with nominal coverage rates of 95 and 75\%. \begin{figure}[!ht] \centerline{ \includegraphics[width=\columnwidth]{figures/ANH_proba_rGLAR_36h_August.png} } \caption{Probabilistic forecasts from the recursive approach (Algorithm \ref{algo:rMLE}), based on prediction intervals with nominal coverage rates of 95 and 75\%, along with the power measurements (solid black line).} \label{fig:probapred} \vspace{-3mm} \end{figure} \section{Conclusions} \label{sec:conclusion} A generalized logit-normal distribution was considered for very-short-term wind power forecasting, in order to adequately handle the double-bounded nature of the time series. All the parameters of the distribution were estimated from the data in a maximum likelihood framework, for both batch and online setups. The adaptive version of the distribution provides only a slight improvement in the accuracy of the point forecasts compared to approaches within a Gaussian framework, though it substantially outperforms the other benchmarks when focusing on probabilistic forecasting (intervals and full predictive densities). This confirms that such a choice of distribution may be the most appropriate. While it achieves better calibration and sharpness, there is still room for improvement.
In particular, we have emphasized the importance of the double-bounded nature of the process, but in practice the upper bound may also change in time. Indeed, wind power generation is not always bounded by the nominal capacity of the wind farm, e.g. in case of curtailment. This should then be taken into account within the modelling and forecasting framework, by additionally estimating this upper bound adaptively from data. Furthermore, the proposed framework could be applied to multi-step-ahead forecasting, and makes it easy to assume other models for the conditional expectation of the transformed variable. In particular, it is straightforward to add exogenous variables to the auto-regressive model, or to generalize it with a non-linear one. This may be a way to account for the individual productions of the wind turbines in order to improve the prediction of power generation for the whole wind farm. Finally, the $\delta$ hyper-parameter, which controls the coarsened version of the distribution, was selected upon cross-validation. It could instead enter a Bayesian or a likelihood inference as a parameter to be properly estimated.
\section{Introduction} Nowadays, there is an open question about the usefulness of machine learning (MLE) techniques to test the significance of group analyses. While classification problems using MLE have been the main target in predictive or decoding analyses in neuroimaging, there is an increasing interest in inference analyses with continuous outputs based on MLE, as detailed in \cite{Cohen2011} and remarked in \cite{Reiss15}. Recently, several advances for combining p-value maps have been proposed based on the concept of \emph{prevalence} \cite{Heller07,Rosenblatt14}, beyond the fixed and mixed (random) effects models \cite{Friston02}. Common to all these approaches is the assumption of a voxel-wise model that allows a proportion of conditions or subjects to activate the voxel at some mixing proportion. This assumption, which is more realistic than those made in classic random effect approaches, e.g. homogeneity in the (binary) activation pattern \cite{Rosenblatt14}, clearly opens a new application field for modern statistics. Indeed, the concept of prevalence as a fraction of individuals correctly classified by MLE algorithms in group comparisons is not novel at all in neuroimaging, being the main focus of predictive inference. As an example, out-of-sample generalization approaches, such as Cross-Validation (CV), try to estimate on unseen new data the accuracy ($A_{cc}$) of the classifier in a binary classification problem. Despite the methods and goals of predictive CV inference being distinct from classical extrapolation procedures \cite{Lindquist13}, they are actually exploited within statistical frameworks aimed at assessing statistical significance \cite{Reiss15}. Examples include bootstrapping, binomial or permutation (``resampling'') tests \cite{Winkler16}, which have been demonstrated to be competitive outside the comfort zone of classical statistics, filling otherwise-unmet inferential needs.
In the pattern classification problem we usually assume the existence of classes ($H_1$) that are differentiated by classifiers, which are measured by their performance in terms of $A_{cc}$ or \emph{prevalence} on an independent dataset. Then, we conclude (improperly in a statistical sense) $H_1$ using empirical confidence intervals, e.g. standard deviations of the classification $A_{cc}$ from training folds. In limited sample sizes the most popular K-fold CV method \cite{Kohavi95} has been demonstrated to work sub-optimally under unstable conditions \cite{Gorriz18,Gorriz19,Varoquaux18}. In such circumstances, the predictive power of the fitted classifiers can be arguable. Moreover, recent works have partially demonstrated that, when using only a classifier's empirical $A_{cc}$ as a test statistic, the probability of detecting differences between two distributions is lower than that of a bona fide statistical test \cite{Rosenblatt16,Kim20}. Beyond the latter empirical techniques for the estimation of performance, MLE is well-framed within a data-driven statistical learning theory (SLT), which is mainly devoted to problems of estimating dependencies with limited amounts of data \cite{Vapnik82}. Although CV-MLE approaches were not originally designed to test hypotheses based on prevalence in brain mapping \cite{Friston2013}, they are theoretically grounded to provide confidence intervals in the classification of image patterns (protected inference) that can be seen as maps of statistical significance \cite{Gorriz2021}. As shown in the latter reference, this can be achieved by assessing the upper bounds of the actual error in a binary classification problem (a confidence interval), and by using simple significance tests of a population proportion within it. This definitely results in improvements to the statistical power of tests based on $A_{cc}$.
Thus, assessing with high probability the quality of the fitting function (and its generalization ability) in terms of in- and out-of-sample predictions can be conceptualized, under a hypothesis testing scenario, as the inverse problem of ``carefully rejecting $H_0$'', that is, the problem of rejecting $H_1$, and thus accepting $H_0$ (there is no effect or it is not significant). In this paper we show the connection between the classical general linear model (GLM), including the classic random effect model, and the MLE framework in the estimation of model/classifier parameters, and the subsequent analyses to achieve the degree of significance in group comparisons. In this sense, inference based on the parametric T statistic and the prevalence-based probability tests represent two different paths for assessing the same problem. Moreover, we show a novel method for achieving statistical significance using MLE and permutation tests based on concentration inequalities. This approach assesses the worst case of the actual error to propose an estimation of the observed distribution of the permuted data. \section{Methods: Classical and MLE statistical inferences}\label{sec:methods} \subsection{Background on classical statistics in neuroimaging}\label{sec:GLM} The GLM \cite{Friston02} is defined, for a single observation level, e.g. in an inter-subject comparison, as: \begin{equation}\label{eq:1} \mathbf{y}=\mathbf{X} \mathbf{\theta} + \mathbf{\epsilon} \end{equation} where $\mathbf{y}$ is the $N\times 1$ observation vector with units over time, voxels, etc., $\mathbf{\epsilon}$ is the $N\times 1$ vector of errors that is assumed to be Gaussian, $\mathbf{X}$ is the $N \times M$ matrix containing the explanatory variables or constraints and $\mathbf{\theta}$ is the $M\times 1$ vector of parameters explaining the observations $\mathbf{y}$.
Note that i) for a hierarchical observation model each level like the latter requires the estimation of the previous levels, and ii) in terms of MLE, $\mathbf{X}$ plays the role of a multidimensional label or regressors acting on the observations $\mathbf{y}$. In the classic GLM, $\mathbf{\theta}$ is usually estimated by a Maximum Likelihood (ML) criterion based on the Gaussian assumption and is given by: \begin{equation}\label{eq:2} \hat{\mathbf{\theta}}=(\mathbf{X}^T \mathbf{C}_{\epsilon}^{-1} \mathbf{X})^{-1}\mathbf{X}^T \mathbf{C}_{\epsilon}^{-1}\mathbf{y} \end{equation} Inferences about this estimate, that is, how large the components of $\mathbf{\theta}$ are and how they relate to each other, can be obtained by using a linear compound, specified by a contrast weight vector $\mathbf{c}$, and writing a T statistic as: \begin{equation}\label{eq:3} T=\frac{\mathbf{c}^T\hat{\mathbf{\theta}} }{\sqrt{\mathbf{c}^TCov(\hat{\mathbf{\theta}})\mathbf{c} }} \end{equation} where $Cov(\hat{\mathbf{\theta}})=(\mathbf{X}^T \mathbf{C}_{\epsilon}^{-1} \mathbf{X})^{-1}$. The T statistic gives us the probability of observation of the ML estimate under the null hypothesis and, when it is small enough, e.g. $p<0.05$, the linear compound is considered significantly different from zero. As an example, given a set of two parameters in $\theta=[\theta_1, \theta_2]^T$, if we select $\mathbf{c}=[1\ {-1}]$ we are assessing how large the first parameter is w.r.t.\ the second, i.e. the difference $\theta_1-\theta_2$; thus, if the T statistic provides a small probability, the latter difference is statistically significant and the observations are generated from different sources. A similar procedure could be established based on a Bayesian estimation and inference to handle complex hierarchical observation models.
The latter framework is based on the Expectation Maximization (EM) algorithm for parameter estimation with known priors and a-priori probability models, with the aim of evaluating the posterior probability map (ppm). By thresholding the ppm, a relationship between the two approaches can be established, including similarities (statistical power) and differences (specificity) \cite{Friston02}. \subsubsection{Least Squares of the GLM} \label{sec:GLMleast} The GLM can be estimated without any assumption on the noise model by simply solving the associated Least Squares (LS) problem. Therefore, if we assume that $\epsilon=0$ in the GLM, the problem is now to find the set of parameters $\theta_i$ that best explains each observation $y_i$ by: \begin{equation}\label{eq:4} y_j=\sum_{i=1}^M X_{ji} \theta_i;\quad \text{for } j=1,\ldots,N \end{equation} Thus, we need to solve the linear regression problem given in equation \ref{eq:4} to estimate the parameters $\theta_i$. The most popular estimation method is LS, in which we select the coefficients $\mathbf{\theta}$ to minimize the residual sum of squares: \begin{equation}\label{eq:5} RS(\mathbf{\theta})=\sum_{j=1}^N (y_j-\sum_{i=1}^M X_{ji} \theta_i)^2 \end{equation} The solution to this problem ($\frac{\partial RS(\mathbf{\theta})}{\partial \mathbf{\theta}}=0$), the Gauss-Markov estimate, provides the smallest variance among all linear unbiased estimates and is given by: \begin{equation}\label{eq:6} \hat{\mathbf{\theta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y} \end{equation} similar to equation \ref{eq:2} in the GLM but assuming $C_{\epsilon}=\mathbf{I}$, that is, the errors are assumed to be independently and identically distributed.
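Equations \ref{eq:2}, \ref{eq:3} and \ref{eq:6} can be condensed into a short numpy sketch; with $\mathbf{C}_{\epsilon}=\mathbf{I}$ the estimate reduces to the Gauss-Markov solution. The two-group design in the usage example is illustrative only.

```python
import numpy as np

def glm_estimate_and_t(y, X, c, C_eps=None):
    """ML estimate of theta (eq. 2) and T statistic (eq. 3) for
    y = X theta + eps with noise covariance C_eps; with C_eps = I
    this reduces to the Gauss-Markov estimate of eq. 6."""
    if C_eps is None:
        C_eps = np.eye(len(y))
    Ci = np.linalg.inv(C_eps)
    cov = np.linalg.inv(X.T @ Ci @ X)        # Cov(theta_hat)
    theta = cov @ X.T @ Ci @ y               # eq. (2) / eq. (6)
    t = (c @ theta) / np.sqrt(c @ cov @ c)   # eq. (3)
    return theta, t
```

For a two-group indicator design and contrast $\mathbf{c}=[1\ {-1}]$, `theta` holds the group means and `t` measures their difference in units of its standard error.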
\subsection{Converting the problem of the estimation of $\theta$ into a LS classification problem}\label{sec:LS} In the LS multiclass classification problem, the goal is to design $M$ linear functions $f_i(\mathbf{y})=\mathbf{w}_i^T\mathbf{y}$, given a set of response variables $\mathbf{y}_i$ and according to a suitable mean squared error (MSE) criterion w.r.t.\ some desired discrete output (i.e. labels) $\mathbf{x}_i$ which represents a binary code. Note how, in general, this pattern classification problem is the inverse problem of hierarchical modeling in neuroimaging, as shown in the following. Recently, the residual score or the classification error obtained from several methodologies beyond LS, e.g. by applying the fitted linear hyperplanes to new unseen data, has been deployed to establish a CV $A_{cc}$-based test on the responses with permuted labels, as shown elsewhere \cite{Reiss15, Gorriz2021}. \subsubsection{The inverse problem: LS for regressing an indicator matrix}\label{sec:LSinverse} Let us consider the general inverse problem, that is, given a set of observations $\{\mathbf{y}_i\}$, for $i=1,\ldots,N$, we are interested in explaining a set of ``explanatory'' binary-coded variables $\mathbf{x}_i$ (labels) by a matrix $\mathbf{W}$ of parameters. This problem, referred to in this paper as the inverse problem in the \emph{label domain}, is also known as the linear regression of an indicator matrix or linear regression model (LRM) \cite{Hastie2001}. In this model we regress the explanatory variables instead of the observed responses, i.e. instead of working in the \emph{observation domain} as in the GLM. This regression could be more accurate than the latter depending on the nature of the data to be fitted, e.g. for a low number of discrete categories in the specified design matrix $\mathbf{X}=[x_{im}]$.
If we have $M$ classes then $\mathbf{X}$ is a $N\times M$ matrix, where each row $i=1,\ldots,N$ contains a single $x_{im}=1$, for $m=1,\ldots,M$, $\mathbf{Y}$ is the $N\times P$ matrix of column responses $\mathbf{y}_i$ and $\mathbf{W}$ is a $P\times M$ coefficient matrix. Thus, we fit a linear regression model of the form: \begin{equation}\label{eq:7} \mathbf{X}=\mathbf{Y}\mathbf{W} \end{equation} where the $P$ dimension allows the inclusion of several responses (multimodality or multiframe acquisitions) given the same indicator response matrix $\mathbf{X}$. Following the same methodology as above, the best estimate is given by: \begin{equation}\label{eq:8} \hat{\mathbf{W}}=(\mathbf{Y}^T\mathbf{Y})^{-1}\mathbf{Y}^T\mathbf{X} \end{equation} which regresses inputs of observations into a novel set of labels or constraints: \begin{equation}\label{eq:9} \hat{\mathbf{X}}=\mathbf{Y}\hat{\mathbf{W}} \end{equation} The novel set $\hat{\mathbf{X}}$ can be seen as a guess on the constraints for the set of observation vectors $\mathbf{y}_i$, or an approximation of the posterior probability $p(\text{class}=m|\mathbf{y})$. Thus, it allows us to compute an error model as: \begin{equation}\label{eq:10} \epsilon_{LS}=\mathbf{X}-\hat{\mathbf{X}} \end{equation} \subsubsection{Connection between $\theta$ and $\mathbf{w}$} \label{sec:connection} For simplicity, and to connect with the GLM as shown in section \ref{sec:GLM}, let $P=1$ in the LRM; then $\mathbf{W}=\mathbf{w}$ is a $1\times M$ row vector and $\mathbf{Y}=\mathbf{y}$ is an $N\times 1$ column vector.
A simple relation between the GLM and LRM approximations can be found by taking into account that: \begin{equation}\label{eq:11} \mathbf{X}=\mathbf{y}\mathbf{w}+\mathbf{\epsilon}_{LS} \end{equation} Thus, the corresponding GLM is: \begin{equation}\label{eq:12} \mathbf{y}=(\mathbf{X}-\mathbf{\epsilon}_{LS})\hat{\theta} \end{equation} where we define $\hat{\theta}=\mathbf{w}^T(\mathbf{w}\mathbf{w}^T)^{-1}$ and the GLM noise model is derived using $\epsilon=-\epsilon_{LS}\hat{\theta}$. The scalar term of equation \ref{eq:12} can be expressed at the LS solution as: \begin{equation}\label{eq:13} (\mathbf{w}\mathbf{w}^T)^{-1}=(\mathbf{y}^T\mathbf{y})^2/((\mathbf{X}^T\mathbf{y})^T\mathbf{X}^T\mathbf{y})=\frac{(\sum_{i=1}^N y_i^2)^2}{\sum_{m=1}^M\sum_{i,j}y_{im}y_{jm}} \end{equation} where $y_{im}$ denotes the observation $i$ belonging to class $m$. Thus, the LS linear regression of the observations can be described by the GLM on the observations (a linear regression on the explanatory variables) and vice versa.\footnote{Given a GLM on the observations we can define an LRM on the explanatory variables as $\hat{\mathbf{w}}=\mathbf{\theta}^T(\mathbf{\theta}\mathbf{\theta}^T)^{-1}$ with an error $\epsilon_{LS}=-\epsilon\hat{\mathbf{w}}$.} \subsubsection{Inference of the inverse GLM based on MLE} \label{sec:inference} The LRM can be seen as a generalization of the GLM for the responses, coding $\mathbf{x}$ as a vector of continuous noisy responses (then $M=1$, instead of being an indicator matrix): \begin{equation}\label{eq:14} \mathbf{x}=\mathbf{Y}\mathbf{w}+\mathbf{\epsilon} \end{equation} which is equivalent to the inverse GLM in equation \ref{eq:1}. An inference on this model based on MLE could proceed as follows. Given a set of pairs $(\mathbf{y}_i,x_i)$, we estimate the set of parameters $\mathbf{w}$ using a similar expression as in equation \ref{eq:8}.
After the fitting process, we assess its significance under the null hypothesis on an independent set, analogously to the T-statistic inference on the GLM, using a CV $A_{cc}$-based test statistic: \begin{equation}\label{eq:15} T_{CV}=(\mathbf{x}-\mathbf{Y}\mathbf{w})^T(\mathbf{x}-\mathbf{Y}\mathbf{w})=\sum_{i=1}^N(x_i-\sum_{j=1}^PY_{ij}w_j)^2 \end{equation} The null distribution is modeled by choosing a large number of permutations $\pi$ to create artificial data sets, $(\mathbf{y}_i,x_{\pi_p})$, for $p=1,\ldots, O$, i.e. a permutation test, and evaluating the sum of squared residuals $T_{CV}$ on every unseen sample within the permuted and original sets. Consequently, the p-value is defined by\footnote{The correction factor +1 in the numerator and denominator is justified by the inclusion of the original sample set in the test.}: \begin{equation}\label{eq:16} p_{value}=\frac{ card\{ T_{CV}^{\pi}<T_{CV}\}+1}{O+1} \end{equation} where $card(.)$ is the cardinality of a set and $T_{CV}$ and $T_{CV}^{\pi}$ are the CV $A_{cc}$-based tests on the original and permuted sets, respectively. In the latter test, also known as the P-test \cite{Reiss15}, we assumed that we have a good procedure for estimating $\mathbf{w}$. However, CV is a standard procedure for estimating the actual error of the classifier that is found to be unstable in limited sample sizes \cite{Varoquaux18, Gorriz19}. Thus, we could improve this estimation by including a term to cope with the possibility that the fitting process is not as good as expected, i.e. that the resulting estimate is not a good predictor.
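The permutation scheme of equations \ref{eq:15} and \ref{eq:16} can be sketched as follows; the number of permutations $O$ and the random seed are arbitrary choices.

```python
import numpy as np

def permutation_p_value(x, Y, w, n_perm=999, seed=0):
    """P-test of eq. (16): permute the responses x, recompute the
    squared-residual statistic of eq. (15) for each permutation and
    count the permuted statistics falling below the observed one
    (the +1 terms account for the original sample set)."""
    rng = np.random.default_rng(seed)

    def T(xv):
        r = xv - Y @ w  # residuals of the fitted linear model
        return float(r @ r)

    T_obs = T(x)
    count = sum(T(rng.permutation(x)) < T_obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)
```

With this convention the smallest attainable p-value is $1/(O+1)$, reached when no permuted statistic falls below the observed one.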
In this sense, other alternatives could be tested by assessing the worst case based on concentration inequalities and the resubstitution estimate as: \begin{equation}\label{eq:17} T_{Res}=(\mathbf{x}-\mathbf{Y}\mathbf{w})^T(\mathbf{x}-\mathbf{Y}\mathbf{w})+\Delta(N,P) \end{equation} where $\Delta(N,P)$ is an upper bound of the actual risk \cite{Vapnik82, Gorriz2021,Gorriz19} with a probability of at least $1-\alpha$. \subsection{A general framework for multiclass regression} As pointed out, the columns in the specified design matrix $\mathbf{X}$ can be interpreted and coded as the labels of a multiclass classification problem for each component of the observation variable or response $\mathbf{y}$, stored in the rows of $\mathbf{Y}$. Let $\mathbf{x}$ denote the vector of labels for each observation $\mathbf{y}$; then it is straightforward to see that the following minimization is similar to the one proposed above (eq. \ref{eq:5}) for the estimation of the parameters contained in $\theta$: \begin{equation}\label{eq:18} \begin{array}{l} \mathbf{W}=arg\min_{\mathbf{W}}E[||\mathbf{X}-\mathbf{Y}\mathbf{W}||^2]=arg\min_{\mathbf{W}}E[\sum_{m=1}^M (\mathbf{x}_{m}-\mathbf{Y}\mathbf{w}_m)^2]=\\= arg\min_{\mathbf{W}} \sum_{m=1}^M \sum_{j=1}^N (x_{jm}-\sum_{i=1}^P y_{ji}w_{im})^2 \end{array} \end{equation} where $m,j,i$ index the set of labels, samples and observations, respectively, $\mathbf{W}=(\mathbf{w}_1,\ldots,\mathbf{w}_M)^T$ is the matrix of linear functions and $\mathbf{x}_j$ is the set of constraints (labels) for each observation $\mathbf{y}_j$. This is equivalent to solving $M$ LS problems, one for each class. It is worth mentioning that in the GLM the parameters $\mathbf{\theta}$ and observations $\mathbf{y}$ are considered as vectors ($P=1$). In the previous sections we have shown a simple connection between the GLM and LRM, although the goal of this work is not to use the parameters derived from the LRM at all.
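The objective in equation \ref{eq:18} decouples over the columns of $\mathbf{X}$, which is why the joint minimization is equivalent to $M$ independent LS problems. A minimal numeric check of this equivalence (synthetic data, our own variable names):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, M = 40, 4, 3
Y = rng.standard_normal((N, P))
labels = rng.integers(0, M, size=N)
X = np.eye(M)[labels]                               # N x M indicator matrix of labels

# joint minimization ||X - Y W||^2 solved in a single call ...
W_joint, *_ = np.linalg.lstsq(Y, X, rcond=None)     # P x M

# ... equals M independent least-squares fits, one per class column
W_sep = np.column_stack(
    [np.linalg.lstsq(Y, X[:, m], rcond=None)[0] for m in range(M)])

assert np.allclose(W_joint, W_sep)
```

The equivalence holds because the squared Frobenius norm is a sum of per-column squared errors, each depending on only one $\mathbf{w}_m$.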
The LRM is a naive LS model based on the minimization of the empirical risk, e.g. the MSE. In this sense, we prefer to use the Structural Risk Minimization (SRM) principle by means of support vector machines (SVM), an optimal strategy with limited sample sizes. The latter minimization is more closely related to the concentration inequalities framework, as shown in \cite{Vapnik82}. Thus, linear regression could be replaced by SVM or other predictive algorithms that employ different loss functions and measures of performance, as suggested in \cite{Reiss15}. In this sense, it would be interesting to assess the connection between the parameters estimated by SVM and the ones obtained using the GLM. In fact, for the linear SVM, after training, the same equations, i.e. equation \ref{eq:12}, could be applied to estimate $\theta_{SVM}$ in terms of the support vectors, as we did with the LRM (see the experimental section below). \section{Experimental results} In the first part of the experiments, to clearly state the problem and the solutions, we consider a simple group comparison with only a one-level analysis (a Bayesian approach to this problem is equivalent to the GLM-based inference on this single level) using a second-level design matrix that models the subject-specific effects. This is the well-known diagnostic example of the group comparison of two conditions, e.g. Alzheimer subjects vs. controls. We adjust the GLM and the equivalent problem using the LRM and SVM.
Thus, we regress the observed variables using a simple explanatory matrix $\mathbf{X}$ and a Gaussian model for the noise to obtain two parameters $\theta_1,\theta_2$ in this toy example, as follows: \begin{equation*}\label{eq:19} \mathbf{Y}|_{N\times 1}=\mathbf{X}|_{N\times 2} \mathbf{\theta}|_{2\times 1}+\mathbf{\epsilon}|_{N\times 1} \end{equation*} where, as an example, \begin{equation*}\label{eq:20} \mathbf{X}=\left( \begin{array}{cc} 1 &0 \\ 0 &1 \\ 1 &0 \\ 1 &0\\ ...&...\\ \end{array}\right) \end{equation*} is a matrix of explanatory variables containing $1$s and $0$s that indicates the class of each observation using a two-dimensional binary code. A hierarchical model could be processed in the same way by fitting the set of parameters step by step; however, we are interested in assessing the connection between $\mathbf{\theta}$ and $\mathbf{w}$ in this toy example. The objective of this part is twofold: i) the estimation of the model parameters using both methodologies and domains, linking them by the use of the theoretical connection in equation \ref{eq:12}, and ii) the assessment of how well they explain observations and labels in both domains. The latter can be tackled by showing the estimations and the group of observations in both domains and by quantitatively evaluating the classification error in the equivalent label domain, given the expected ideal values for the model parameters. In the last part of this section, we show the inference analysis derived from the two methodologies on each domain. We regress on the observations and on the labels to construct and assess the spatially extended statistical processes, which provide maps of significance, using the MRI ADNI dataset \cite{Gorriz2021}.
In this way, we compare SPM, which is based on a two-sample T-statistic similar to equation \ref{eq:3}, where significance is individually assessed at each voxel using three configurations: a cluster-defining threshold (CDT) of $P = 0.001$ (uncorrected for multiple comparisons), a cluster extent threshold equal to $10$, and FWE correction at $0.05$; and the P-tests described in section \ref{sec:inference}. \subsection{Data generation 1} An $N$-dimensional Gaussian noise vector $\mathbf{v}$ is randomly drawn with zero mean and an $N\times N$ covariance matrix with $2$-norm equal to $1$. This noise allows the definition of a vector of observations by adding the noise to a binary vector (a column in the explanatory matrix of indicators), i.e. $\mathbf{y}=\mathbf{X}_k+\mathbf{v}$ for $k\in\{1,2\}$. The design matrix is then obtained as $\mathbf{X}=[\mathbf{X}_k\ \bar{\mathbf{X}}_k]$, where $\bar{(.)}$ denotes logical negation. Once the observations are artificially drawn (see figure \ref{fig:uno}) with increasing sample size, we regress both the explanatory variables (LS or SVM) and the observations (GLM) to obtain a set of two parameters for each model, $\mathbf{\theta}=[\theta_1,\theta_2]$ and $\mathbf{w}=[w_1,w_2]$. All the methods can be employed to estimate the regressed observation variables using equations \ref{eq:1} and \ref{eq:10}, given the explanatory matrix and the estimated parameters, as shown in figure \ref{fig:dos}. In the latter we plot the distribution of the T-statistic over 1000 simulations (top), a sample of this distribution that shows the variability of the estimation using the GLM around the ideal value 1 (bottom left), and the estimated observations of the analyzed models. In connection with the previous one-sample GLM estimations, we plot the estimated parameters explaining the observations using all the methods in figure \ref{fig:tres}, together with the observations they model. Note the large variability of the GLM estimation with increasing sample size.
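The DG1 generation and the two regression directions can be sketched as follows (a simplified illustration with an identity noise covariance and a single sample size; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
Xk = rng.integers(0, 2, size=N).astype(float)     # binary indicator column X_k
X = np.column_stack([Xk, 1.0 - Xk])               # design matrix [X_k, ~X_k]
v = rng.standard_normal(N)                        # Gaussian noise, identity covariance (2-norm 1)
y = Xk + v                                        # observations y = X_k + v

# GLM direction: theta = (X^T X)^{-1} X^T y, ideally close to [1, 0]
theta = np.linalg.solve(X.T @ X, X.T @ y)
# dual (LRM) direction: w = (y^T y)^{-1} y^T X
w = (y @ X) / (y @ y)
print(theta, w)
```

With this design, $\theta_1$ and $\theta_2$ are simply the per-class sample means, so their variability shrinks like $1/\sqrt{N}$.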
Conversely, in figure \ref{fig:cuatro} we show the inverse problem to the above: how the methods estimate $\mathbf{w}$ from the point of view of the label regression. In this case it is readily seen that the one-sample GLM model provides a wrong estimation at different sample sizes, i.e. the red curve lies above the blue curve. As expected, the use of these parameters in the dual classification problem results in a larger empirical error, as shown in figure \ref{fig:cinco}. \begin{figure*} \centering \includegraphics[width=\textwidth]{uno} \caption{Data generation 1 (DG1) example} \label{fig:uno} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{randomGLMd} \includegraphics[width=0.49\textwidth]{tres} \includegraphics[width=0.49\textwidth]{dos} \caption{Estimated observations and T-statistic distribution. Note that in the GLM model we use the covariance matrix of the noise to evaluate equation \ref{eq:2}, that is, in the estimation of $\theta$. We show the comparison between the non-normalized statistics of all the estimations, that is, suppressing the covariance term in the GLM, in a random (R=1000) simulation.
This clearly demonstrates that only on average does the ML statistic converge to the ideal value $\theta_1-\theta_0=1$, unlike the single sample of this distribution shown in the bottom left.} \label{fig:dos} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cuatro} \caption{Distribution of observations and estimations of $\mathbf{\theta}$ for GLM, LRM and SVM in DG1} \label{fig:tres} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cinco} \caption{Estimations of the parameter $\mathbf{w}$ regressing the observations with increasing sample size in DG1} \label{fig:cuatro} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{seis} \includegraphics[width=0.49\textwidth]{siete} \includegraphics[width=0.49\textwidth]{ocho} \caption{Classification boundaries and empirical errors in GLM, LRM and SVM (N=1000, DG1).} \label{fig:cinco} \end{figure*} \subsection{Data generation 2} A more realistic data generation is described in the following (DG2). The previous model is a standard procedure, where two expected values ($1$ and $0$) are added to the same Gaussian noise model. Here we propose to introduce the concept of label noise in the design matrix, instead of only in the observations. Again $\mathbf{y}=\hat{\mathbf{X}}_k+\mathbf{v}$, but the matrix of indicators $\hat{\mathbf{X}}$ is a version of the original, equal to it with probability $1-t$, with $t=0.1$, and flipped with probability $t$. This allows us to control the label noise in the design matrix, following the methodology shown in \cite{Ojala2010}. We repeat the same experiments (see figures \ref{fig:unobis}-\ref{fig:cincobis}) of the previous section using this realistic data generation. Again, the observations, error and explanatory matrix (ideal and noisy versions) are shown in figure \ref{fig:unobis}.
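The DG2 label-flipping mechanism can be sketched as follows (illustrative only; here we assume the fit still uses the clean design matrix, so that flipped entries act as wrongly labeled subjects):

```python
import numpy as np

rng = np.random.default_rng(4)
N, t = 200, 0.1
Xk = rng.integers(0, 2, size=N).astype(float)
X = np.column_stack([Xk, 1.0 - Xk])               # clean design matrix used by the fit

# noisy indicator: kept with probability 1-t, flipped with probability t
flip = rng.random(N) < t
Xk_noisy = np.where(flip, 1.0 - Xk, Xk)
y = Xk_noisy + rng.standard_normal(N)             # observations built from the noisy labels

theta = np.linalg.solve(X.T @ X, X.T @ y)         # attenuated towards [1-t, t] on average
print(theta)
```

With flip probability $t$, the per-class means, and hence the GLM estimates, shrink on average from $(1,0)$ towards $(1-t,\,t)$.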
Note how this observation model is more realistic than the previous one and corresponds to a group comparison including wrongly labeled subjects or acquisitions. From figures \ref{fig:dosbis} and \ref{fig:tresbis} the same behavior can be expected as in the previous case. However, when we analyze the inverse problem we see a huge difference in the GLM estimation of $\mathbf{w}$. The dependence of the GLM estimator on the one-point sample mean provides a fluctuating estimation around the optimum value, unlike the LRM and SVM. It is also worth mentioning that, in this realistic example, the estimation of $\mathbf{w}$ using the GLM yields a huge empirical error, as shown in figure \ref{fig:cincobis}. \begin{figure*} \centering \includegraphics[width=\textwidth]{unobis} \caption{Data generation 2 (DG2) example} \label{fig:unobis} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{randomLSd} \includegraphics[width=0.49\textwidth]{tresbis} \includegraphics[width=0.49\textwidth]{dosbis} \caption{Estimated observations and T-statistic distribution (DG2). See caption in figure \ref{fig:dos}} \label{fig:dosbis} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cuatrobis} \caption{Distribution of observations and estimations of $\mathbf{\theta}$ for GLM, LRM and SVM (DG2).} \label{fig:tresbis} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cincobis} \caption{Estimations of the parameter $\mathbf{w}$ regressing the observations with increasing sample size in DG2} \label{fig:cuatrobis} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{seisbis} \includegraphics[width=0.49\textwidth]{sietebis} \includegraphics[width=0.49\textwidth]{ochobis} \caption{Classification boundaries and empirical errors in GLM, LRM and SVM (DG2)} \label{fig:cincobis} \end{figure*} In conclusion, the link between the two approaches lies in the different nature of the regression procedures.
In both domains there is an implicit classification task once the parameters that best explain the corresponding observations are derived. These parameters are fitted taking into account only the empirical data available (including a noise model or not). Therefore, $w_{m}$ for a given model $m$ can be used to regress the observations to obtain a novel data set in the label space (new regressed labels), which can be associated with (or classified into) the states of the explanatory matrix. This classification task provides an empirical error, as shown in figures \ref{fig:cinco} and \ref{fig:cincobis}. Other methods could be used as well to obtain such parameters in a (non-)linear fashion. As an example, we compared the decision boundary obtained by the LRM with that of SVM in figures \ref{fig:cinco} and \ref{fig:cincobis} to show the differences between the methods in terms of generalization ability. \subsection{Real data experiment: the ADNI dataset} Data used in the preparation of this paper were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI database contains 1.5 T and 3.0 T T1-weighted MRI scans for AD, Mild Cognitive Impairment (MCI) and cognitively normal control (NC) subjects, acquired at multiple time points. Here we only included the 1.5 T sMRI scans corresponding to the three different groups of subjects. The original database contained more than 1000 T1-weighted MRI images, comprising 229 NC and 188 AD, although for the proposed study only the first medical examination of each subject is considered, resulting in 417 gray matter (GM) images. Following the recommendation of the National Institute on Aging and the Alzheimer's Association (NIA-AA) for the use of imaging biomarkers \cite{NIA18}, we considered the group comparison NC vs. AD to establish a clear framework for comparing statistical paradigms (SPM and $T_{CV}$), since the MCI class is strictly based on clinical criteria, without including any other biomarker \cite{McKhann11}.
Demographic data of the subjects in the database are summarized in Table \ref{tab:demog}. The dataset was preprocessed using standardised neuroimaging methods and protocols implemented in the SPM software (registration in MNI space by spatial normalization and segmentation to differentiate brain tissues, e.g. GM \cite{Friston95}). Following the aforementioned methods, we fit the set of parameters using a linear SVM and evaluate the $T_{CV}$ statistic on the original set (see figure \ref{fig:seis}). As shown in this figure, the resubstitution estimate provides a more optimistic value in the $A_{cc}$ distribution than the $K$-fold based estimate. Note that this analysis is independent of the selected fold, as we are performing $\sim 10^6$ folds, one per voxel. However, both are optimistic, since the mean of the distribution is not clearly distributed around $0.5$ (it is already shifted to the right, beyond the effect due to real significant regions). The effect is even larger when the dataset is slightly imbalanced, the case of over-powered datasets, as shown at the bottom of the latter figure (using 228 vs. 188). In contrast, note how the correction based on the bound derived in \cite{Gorriz19} clearly shifts the $A_{cc}$ obtained by resubstitution to the left, resulting in a better (conservative) estimation of the statistic in the whole volume. Based on the $T_{CV}$ and $T_{Res}$ values from the original dataset, and the ones obtained using a permutation analysis ($O=1000$) for a selection of structures, e.g. the hippocampus, we can compare SPM with the previous inference approaches, as described in section \ref{sec:inference}. Note that in this paper the huge number of voxels contained within an image limits the permutation analysis to some specific structures. Results on the hippocampus are depicted in figure \ref{fig:siete}.
The permutation analysis reveals how the power of the $T_{CV}$ approach is affected in this featured region, where a real effect might be found in almost the whole structure. The statistical power of the $T_{Res}$ approach is preserved through the permutation procedure ($2058$ detected voxels vs. $1024$ voxels, as shown in the same figure). It is also worth mentioning the CDF of the errors derived in the specific region and the distribution of the p-values within it. Recall that the dataset includes advanced AD subjects; thus, the selected structure should be clearly affected by the disease. To preliminarily extend the analysis to the whole volume, we approximately simulate the null distribution outside this featured region in two steps. First, we compute the set of p-values in the hippocampus (around $2\cdot 10^3$ voxels) following equation \ref{eq:16} and determine the T threshold $T_{th}$ that approximately provides the significance level, e.g. $0.05$. Then, assuming that for any $T<T_{th}$ the probability of observation satisfies $p_{value}<0.05$, we threshold the rest of the image to obtain the significant voxels showing an effect. This approach clearly needs a multiple-comparison correction, as several dependent or independent statistical tests are being performed simultaneously at the given significance level. Therefore, we decrease the significance level down to $\alpha=0.001$ to avoid the presence of false positives in the permutation analyses and then compare with SPM in the whole volume using the aforementioned configurations. In figure \ref{fig:siete} we show the detection ability together with the control of the type I error in the $T_{Res}$ approach (map in red). Note how the permutation test affects the detection ability of the classical CV approach (map in green) and how the uncorrected voxelwise SPM approaches (in blue) tend to inflate false positives.
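The two-step extension to the whole volume can be sketched with synthetic statistics (all numbers here are illustrative, not the real data; recall that a small $T$ means a good fit, as in equation \ref{eq:15}):

```python
import numpy as np

rng = np.random.default_rng(5)
O, alpha = 1000, 0.05
# hypothetical residual statistics inside the featured region and their permutation nulls
T_region = rng.normal(5.0, 1.0, size=2000)        # voxels with a real effect (small T)
T_perm = rng.normal(10.0, 1.0, size=(2000, O))    # permuted (null) statistics per voxel

# step 1: per-voxel p-values (eq. 16) and the threshold T_th at the significance level
p = (np.sum(T_perm < T_region[:, None], axis=1) + 1) / (O + 1)
T_th = T_region[p < alpha].max()

# step 2: threshold the rest of the image, declaring any voxel with T < T_th significant
T_outside = rng.normal(9.0, 2.0, size=10000)
significant = T_outside < T_th
print(significant.mean())
```

As discussed above, this uncorrected thresholding then requires lowering the significance level (multiple-comparison correction) before comparing with SPM.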
\begin{table}[htbp] \centering \caption{Demographic details of the ADNI dataset, with group means and their standard deviations} \label{tab:demog} \begin{tabular}{lccccc} \toprule & Status & Number & Age & Gender (M/F) & MMSE\\ \midrule MRI ADNI& NC & 229 & 75.97$\pm$5.0 & 119/110 & 29.00$\pm$1.0 \\ & AD & 188 & 75.36$\pm$7.5 & 99/89 & 23.28$\pm$2.0 \\ \bottomrule \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{Acchist} \includegraphics[width=0.49\textwidth]{SliceBrowserAtlasCV} \includegraphics[width=0.49\textwidth]{SliceBrowserAtlasC} \caption{Top: distribution of voxelwise accuracies of the real dataset using $K=10$-fold CV, resubstitution and concentration inequalities \cite{Gorriz2021}. Bottom: 3D distribution of the accuracies using CV and the corrected accuracy.} \label{fig:seis} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{CVvsResubperm} \includegraphics[width=\textwidth]{hipocampo_analysis} \caption{Permutation analysis on the hippocampus. Note that $O=1000$ and the upper bound \cite{Gorriz19} was obtained with a probability at least equal to $0.05$} \label{fig:siete} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Tres_mapa} \includegraphics[width=0.49\textwidth]{TCV_mapa} \includegraphics[width=0.3\textwidth]{SPMunc0001_mapa} \includegraphics[width=0.3\textwidth]{SPMunc0001CE10_mapa} \includegraphics[width=0.3\textwidth]{SPMfwe005_mapa} \caption{Parametric and non-parametric statistical maps.
Note the trade-off in detection and control of the FWE of the $T_{Res}$ approach, compared with $T_{CV}$ and the three SPM configurations} \label{fig:siete} \end{figure*} \section{Discussion} In the context of classification for statistical inference, researchers mainly follow two strategies: i) they perform K-fold CV and assess the $A_{cc}$ over several averaged folds, or ii) they propose some kind of cross-validation-based statistic (P-test) using an estimation of the actual error of the classifier on a new set of samples (equation \ref{eq:8}). In both cases, if this squared residual (error) is small (i.e. a good classification is achieved), then it constitutes evidence against the null hypothesis. In the second approach, to simulate the null distribution they employ a technique, as shown in section \ref{sec:inference}, that is also used in frequentist inference: the permutation test. A set of permutations $\pi_p$ for $p=1,\ldots,O$ is generated and then applied to the dataset, using the same observations $\mathbf{y}$ and permuted constraints $x_{\pi_p}$. They estimate the parameters $\mathbf{w}_{\pi_p}$ and compute a set of residuals for all the permutations. The p-value is computed by dividing the number of times we randomly obtain a \emph{residual score} less than the one obtained with the original data by the number of permutations, i.e. $p_{value}=p(RS_{\pi}<RS)$. This methodology is called the CV $P$-test \cite{Reiss15}, where the LRM might be replaced by SVM or other predictive algorithms. Several limitations are found when using only the LRM for estimating the aforementioned posterior probability. Linear regression is only operative in binary classification, e.g. the regression output could be negative or even greater than one \cite{Hastie2001}. Indeed, as shown in the example, in this case there is a strong correspondence between the GLM and LRM for a single-level analysis in group comparisons.
Thus, more complex classifiers and other loss functions are needed to alleviate poor estimations of the set of parameters. Beyond that, the selected predictive algorithms build their P-tests on the CV strategy, which can be a biased estimator of the actual error in heterogeneous datasets, such as the ones used in neuroimaging \cite{Varoquaux18}. On the other hand, classical and Bayesian inference depend on the specified models when proposing the T statistic and fitting the parameters of the GLM. This is partly solved again by the use of permutation analysis in the estimation of the null distribution, but what about the definition of the T statistic? It is also described in terms of the error covariance matrix, which must be estimated from empirical data with limited sample sizes. In the toy example shown above we assumed a known covariance matrix in the formulation of the GLM. Despite that, the T-statistic following from the best guess fluctuated around the ideal value and resulted in low classification rates in the dual problem. How is frequentist or Bayesian analysis (the latter equivalent to the former at the last level) actually performed in a real scenario? Again, there are model selection and parameter fitting stages to achieve this; nonetheless, in complex scenarios with a limited sample size, heuristics are the common solution \cite{Woolrich2009}. Indeed, in the high-dimensional case or under the assumption of complex models, the performance and operation of the latter approaches are arguable \cite{Reiss15}. They can hardly estimate parameters, the computation is costly, and they tend to use heuristics to solve such issues, e.g.
in the FSL tools based on Bayesian inference, such as BET (brain extraction tool), TBSS (tract-based spatial statistics), FLIRT (FMRIB's linear image registration tool), PRELUDE/FUGUE (phase unwrapping and MRI unwarping) and MELODIC ICA, the use of heuristics is a common practice and the estimation of the full posterior distribution of the model parameters is biased. In conclusion, limited sample sizes and the selection/estimation of any specific model are still an issue in neuroimaging, especially when the model and the interaction between model parameters become too complex for an accurate posterior probability estimation, or for a feasible numerical computation of the Bayes rule. Given the connection between the two observation models, i.e. the GLM and LRM, in this paper we propose the use of an agnostic theory for the estimation of dependencies, established in the pattern classification problem with limited amounts of data, to achieve statistical inference \cite{Vapnik82,Haussler92}. \section{Conclusions} In this paper we propose the use of permutation tests and agnostic theory on the set of regressed outputs through the definition of the residual score or $A_{cc}$-based test. We employ permutation tests and a better estimation of the actual error based on concentration inequalities to provide a trade-off between the type I error and the statistical power. Previous results demonstrate the ability of such an estimator to provide maps of significance \cite{Gorriz2021}, where a random simulation on controls resulted in a nominal rate of false positives. In conclusion, we see the equivalence of the estimation in the observation and the (explanatory) label domains; thus, any test performed on the label space using an $A_{cc}$-based test is similar to the ones used in neuroimaging in the last decade. Moreover, prevalence (the scores in equations \ref{eq:15} and \ref{eq:17}) is a valid measure for statistical inference without assuming any model from the outset.
Our approach tries to compute this score using the whole available database, instead of splitting it into folds, and with the resulting set of accuracies we estimate the real one based on the upper bounds (instead of using a K-fold strategy) with probability at least $1-\alpha$. Then, a permutation analysis is derived using this measure to simulate the distribution of the null hypothesis, and finally a test can be formulated in a classic statistical sense. \section*{Acknowledgments} This work was partly supported by the MINECO/FEDER under the RTI2018-098913-B100, CV20-45250 and A-TIC-080-UGR18 projects and by the Ministerio de Universidades under the FPU Predoctoral Grant FPU 18/04902. \section*{Appendix} \subsection*{Proof of the connection:} The derivation of the connection between $\mathbf{\theta}$ and $\mathbf{w}$ is shown in the following, assuming non-singular matrices where needed. Given the optimum solution for the matrix of parameters $\hat{\mathbf{w}}=(\mathbf{y}^T\mathbf{y})^{-1}\mathbf{y}^T\mathbf{X}$ from the LRM in equation \ref{eq:7}, we readily see that the observation can be written as: \begin{equation}\label{eq:21} \mathbf{y}=\mathbf{X}\hat{\mathbf{w}}^T(\hat{\mathbf{w}}\hat{\mathbf{w}}^T)^{-1} \end{equation} The transpose of the parameter matrix $\hat{\mathbf{w}}$ can be expressed as: \begin{equation}\label{eq:22} \hat{\mathbf{w}}^T=(\mathbf{y}^T\mathbf{X})^T(\mathbf{y}^T\mathbf{y})^{-T}=(\mathbf{y}^T\mathbf{y})^{-1}\mathbf{X}^T\mathbf{y} \end{equation} Then, equation \ref{eq:21} transforms into: \begin{equation}\label{eq:23} \mathbf{y}=\mathbf{X}\hat{\mathbf{w}}^T((\mathbf{y}^T\mathbf{y})^{-1}\mathbf{y}^T\mathbf{X}(\mathbf{y}^T\mathbf{y})^{-1}\mathbf{X}^T\mathbf{y})^{-1}= (\mathbf{y}^T\mathbf{y})^{2}(\mathbf{y}^T\mathbf{X}\mathbf{X}^T\mathbf{y})^{-1} \mathbf{X}\hat{\mathbf{w}}^T \end{equation} where the leading terms in parentheses are scalars.
The first term is the squared power of the observations, $(\mathbf{y}^T\mathbf{y})^{2}=(\sum_{i=1}^Ny_i^2)^2$, and the second, given that $\mathbf{X}$ is an indicator matrix and: \begin{equation}\label{eq:24} \mathbf{y}^T\mathbf{X}=\left(\sum_{i\in C_1}y_{i1}, \sum_{i\in C_2}y_{i2},\ldots,\sum_{i\in C_M}y_{iM}\right)=(\mathbf{X}^T\mathbf{y})^T \end{equation} where $y_{im}$ denotes the observation $i$ in class $m$, can be expressed as the denominator in equation \ref{eq:13}. \subsection*{Binary Support Vector Machines:} In general, SVMs separate binary-labeled training data $(\mathbf{y},x)\in \mathbb{R}^P\times\{\pm 1\}$ by the hyperplane $f:\mathbb{R}^P\rightarrow \{\pm 1\}$: \begin{equation}\label{eq:1ap} f(\mathbf{y})=\mathbf{w}^T\mathbf{y}+w_0=(\mathbf{w}^T\ w_0)(\mathbf{y}^T\ 1)^T \end{equation} where $\mathbf{w}$ is known as the weight vector and $w_0$ as the threshold. This hyperplane is obtained in such a way \cite{Burges98} that it is maximally distant from the two classes, i.e. it is the maximal margin hyperplane. In our case, the binary labels $x$ code each row of the explanatory matrix $\mathbf{X}$, e.g. $1\rightarrow (1,0)$ and $-1\rightarrow (0,1)$, and $P=1$; thus our function $f:\mathbb{R}\rightarrow \{(0\ 1), (1\ 0)\}$ can be easily solved via the original SVM with two independent minimizations with opposite expected solutions, $w^1=-w^0$ and $w_0^1=-w_0^0$. It is worth mentioning that SVM refers to $\pm 1$ labels; thus, to compute the regressed observations we need to evaluate: \begin{equation}\label{eq:2ap} \hat{\mathbf{Y}}=(\mathbf{X}\mathbf{w}^T(\mathbf{w}\mathbf{w}^T)^{-1}+1)/2 \end{equation} \bibliographystyle{srt} \section{Introduction} Nowadays, there is an open question about the usefulness of machine learning (MLE) techniques to test the significance of group analyses.
While classification problems using MLE have been the main target in predictive or decoding analysis in neuroimaging, there is an increasing interest in inference analysis with continuous outputs based on MLE, as detailed in \cite{Cohen2011} and remarked in \cite{Reiss15}. Recently, several advances for combining p-value maps have been proposed based on the concept of \emph{prevalence} \cite{Heller07,Rosenblatt14}, beyond the fixed and mixed (random) effects models \cite{Friston02}. Common to all these approaches is the assumption of a voxel-wise model that allows a proportion of conditions or subjects to activate the voxel at some mixing proportion. This assumption, which is more realistic than those assumed in classic random effect approaches, e.g. homogeneity in the (binary) activation pattern \cite{Rosenblatt14}, clearly opens a new application field for modern statistics. Indeed, the concept of prevalence as the fraction of individuals correctly classified by MLE algorithms in group comparisons is not novel at all in neuroimaging, being the main focus of predictive inference. As an example, out-of-sample generalization approaches, such as Cross-Validation (CV), try to estimate on unseen new data the accuracy ($A_{cc}$) of the classifier in a binary classification problem. Despite the methods and goals of predictive CV inference being distinct from classical extrapolation procedures \cite{Lindquist13}, they are actually exploited within statistical frameworks aimed at assessing statistical significance \cite{Reiss15}. Examples include bootstrapping, binomial or permutation (``resampling'') tests \cite{Winkler16}, which have been demonstrated to be competitive outside the comfort zone of classical statistics, filling otherwise-unmet inferential needs.
In the pattern classification problem we usually assume the existence of classes ($H_1$) that are differentiated by classifiers, which are measured by their performance in terms of $A_{cc}$ or \emph{prevalence} on an independent dataset. Then, we conclude (improperly in a statistical sense) $H_1$ using empirical confidence intervals, e.g. standard deviations of the classification $A_{cc}$ from training folds. With limited sample sizes, the most popular K-fold CV method \cite{Kohavi95} has been demonstrated to work sub-optimally under unstable conditions \cite{Gorriz18,Gorriz19,Varoquaux18}. In such circumstances, the predictive power of the fitted classifiers can be arguable. Moreover, recent works have partially demonstrated that, when using only a classifier's empirical $A_{cc}$ as a test statistic, the probability of detecting differences between two distributions is lower than that of a bona fide statistical test \cite{Rosenblatt16,Kim20}. Beyond the latter empirical techniques for the estimation of performance, MLE is well-framed within a data-driven statistical learning theory (SLT), which is mainly devoted to problems of estimating dependencies with limited amounts of data \cite{Vapnik82}. Although CV-MLE approaches were not originally designed to test hypotheses based on prevalence in brain mapping \cite{Friston2013}, they are theoretically grounded to provide confidence intervals in the classification of image patterns (protected inference) that can be seen as maps of statistical significance \cite{Gorriz2021}. As shown in the latter reference, this can be achieved by assessing the upper bounds of the actual error in a binary classification problem (a confidence interval), and by using simple significance tests of a population proportion within it. This definitely results in improvements to the test's statistical power based on $A_{cc}$.
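The instability of K-fold estimates in small samples can be illustrated with a toy experiment (everything here is our own illustration: a nearest-class-mean classifier stands in for any MLE algorithm, and the whole CV is repeated on fresh small samples to expose the spread of the estimates):

```python
import numpy as np

rng = np.random.default_rng(8)

def kfold_acc(Y, x, K=5):
    # nearest-class-mean classifier scored with K-fold cross-validation
    idx = rng.permutation(len(x))
    accs = []
    for f in range(K):
        test = idx[f::K]
        train = np.setdiff1d(idx, test)
        mus = np.array([Y[train][x[train] == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(Y[test][:, None, :] - mus[None, :, :], axis=2)
        pred = np.argmin(d, axis=1)                 # assign to the closer class mean
        accs.append(np.mean(pred == x[test]))
    return float(np.mean(accs))

# repeat the whole CV on fresh 20-sample datasets: the estimates fluctuate widely
accs = []
for _ in range(50):
    x = np.repeat([0, 1], 10)                       # balanced binary labels
    Y = x[:, None] + rng.standard_normal((20, 2))   # small mean shift plus noise
    accs.append(kfold_acc(Y, x))
print(np.mean(accs), np.std(accs))
```

The standard deviation across repetitions is substantial relative to the mean accuracy, which is the instability referred to above.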
Thus, assessing with high probability the quality of the fitted function (and its generalization ability) in terms of in- and out-of-sample predictions can be conceptualized, under a hypothesis testing scenario, as the inverse problem of ``carefully rejecting $H_0$'', that is, the problem of rejecting $H_1$ and thus accepting $H_0$ (there is no effect, or it is not significant). In this paper we show the connection between the classical general linear model (GLM), including the classic random-effects model, and the MLE framework in the estimation of model/classifier parameters, and the subsequent analyses to achieve the degree of significance in group comparisons. In this sense, inference based on the parametric T statistic and the prevalence-based probability tests represent two different paths to the same problem. Moreover, we show a novel method for achieving statistical significance using MLE and permutation tests based on concentration inequalities. This approach assesses the worst case of the actual error to provide an estimate of the observed distribution of the permuted data. \section{Methods: Classical and MLE statistical inferences}\label{sec:methods} \subsection{Background on classical statistics in neuroimaging}\label{sec:GLM} The GLM \cite{Friston02} is defined for a single observation level, e.g. an inter-subject comparison, as: \begin{equation}\label{eq:1} \mathbf{y}=\mathbf{X} \mathbf{\theta} + \mathbf{\epsilon} \end{equation} where $\mathbf{y}$ is the $N\times 1$ observation vector with units over time, voxels, etc., $\mathbf{\epsilon}$ is the $N\times 1$ vector of errors, assumed to be Gaussian, $\mathbf{X}$ is the $N \times M$ matrix containing the explanatory variables or constraints, and $\mathbf{\theta}$ is the $M\times 1$ vector of parameters explaining the observations $\mathbf{y}$. 
Note that i) for a hierarchical observation model, each level like the latter requires the estimation of the previous levels, and ii) in terms of MLE, $\mathbf{X}$ plays the role of a multidimensional label or set of regressors acting on the observations $\mathbf{y}$. In the classic GLM, $\mathbf{\theta}$ is usually estimated by a Maximum Likelihood (ML) criterion based on the Gaussian assumption and is given by: \begin{equation}\label{eq:2} \hat{\mathbf{\theta}}=(\mathbf{X}^T \mathbf{C}_{\epsilon}^{-1} \mathbf{X})^{-1}\mathbf{X}^T \mathbf{C}_{\epsilon}^{-1}\mathbf{y} \end{equation} Inferences about this estimate, that is, how large the components of $\mathbf{\theta}$ are and how they relate to each other, can be obtained by using a linear compound, specified by a contrast weight vector $\mathbf{c}$, and writing a T statistic as: \begin{equation}\label{eq:3} T=\frac{\mathbf{c}^T\hat{\mathbf{\theta}} }{\sqrt{\mathbf{c}^TCov(\hat{\mathbf{\theta}})\mathbf{c} }} \end{equation} where $Cov(\hat{\mathbf{\theta}})=(\mathbf{X}^T \mathbf{C}_{\epsilon}^{-1} \mathbf{X})^{-1}$. The T statistic gives the probability of observing the ML estimate under the null hypothesis and, when this probability is small enough, e.g. $p<0.05$, the linear compound is considered significantly different from zero. As an example, given a set of two parameters $\mathbf{\theta}=[\theta_1, \theta_2]^T$, if we select $\mathbf{c}=[1, -1]^T$ we are assessing how large the first parameter is relative to the second, i.e. the difference $\theta_1-\theta_2$; thus, if the T statistic yields a small probability, the latter difference is statistically significant and the observations are generated from different sources. A similar procedure could be established based on Bayesian estimation and inference to handle complex hierarchical observation models. 
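Equations \ref{eq:2} and \ref{eq:3} can be illustrated with a short numerical sketch; the two-group design, the i.i.d. noise covariance and all sizes below are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-group indicator design (N x 2) with Gaussian i.i.d. noise
N = 200
X = np.zeros((N, 2))
X[: N // 2, 0] = 1.0
X[N // 2 :, 1] = 1.0
theta_true = np.array([1.0, 0.0])
C_eps = np.eye(N)                          # assumed noise covariance
y = X @ theta_true + rng.standard_normal(N)

# Equation (2): theta_hat = (X^T C^-1 X)^-1 X^T C^-1 y
Ci = np.linalg.inv(C_eps)
cov_theta = np.linalg.inv(X.T @ Ci @ X)    # Cov(theta_hat)
theta_hat = cov_theta @ (X.T @ Ci @ y)

# Equation (3): T statistic for the contrast c = [1, -1]^T
c = np.array([1.0, -1.0])
T = (c @ theta_hat) / np.sqrt(c @ cov_theta @ c)
```

With $\theta_1-\theta_2=1$ in the generating model, the contrast is clearly non-zero and the resulting $T$ is large, so the null hypothesis is rejected.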
This Bayesian framework is based on the Expectation Maximization (EM) algorithm for parameter estimation with known priors and a-priori probability models, with the aim of evaluating posterior probability maps (ppm). By thresholding the ppm, a relationship between the two approaches can be established, including similarities (statistical power) and differences (specificity) \cite{Friston02}. \subsubsection{Least Squares of the GLM} \label{sec:GLMleast} The GLM can be estimated without any assumption on the noise model by simply solving the associated Least Squares (LS) problem. Therefore, if we set $\epsilon=0$ in the GLM, the problem is now to find the set of parameters $\theta_i$ that best explains each observation $y_i$ by: \begin{equation}\label{eq:4} y_j=\sum_{i=1}^M X_{ji} \theta_i;\quad \text{for } j=1,\ldots,N \end{equation} Thus, we need to solve the linear regression problem given in equation \ref{eq:4} to estimate the parameters $\theta_i$. The most popular estimation method is LS, in which we select the coefficients $\mathbf{\theta}$ that minimize the residual sum of squares: \begin{equation}\label{eq:5} RS(\mathbf{\theta})=\sum_{j=1}^N (y_j-\sum_{i=1}^M X_{ji} \theta_i)^2 \end{equation} The solution to this problem ($\frac{\partial RS(\mathbf{\theta})}{\partial \mathbf{\theta}}=0$), the Gauss-Markov estimate, provides the smallest variance among all linear unbiased estimates and is given by: \begin{equation}\label{eq:6} \hat{\mathbf{\theta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y} \end{equation} similar to equation \ref{eq:2} in the GLM but with $\mathbf{C}_{\epsilon}=\mathbf{I}$, that is, the errors are assumed to be independently and identically distributed. 
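The equivalence between equation \ref{eq:6} and equation \ref{eq:2} with $\mathbf{C}_{\epsilon}=\mathbf{I}$ can be checked directly; a minimal sketch on synthetic data (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 120
g = rng.integers(0, 2, size=N).astype(float)
X = np.column_stack([g, 1.0 - g])       # two-group indicator design
y = X @ np.array([2.0, -1.0]) + 0.3 * rng.standard_normal(N)

# Equation (6): ordinary least squares (Gauss-Markov) estimate
theta_ls = np.linalg.solve(X.T @ X, X.T @ y)

# Equation (2) with C_eps = I reduces to exactly the same estimate
C = np.eye(N)
theta_ml = np.linalg.solve(X.T @ C @ X, X.T @ C @ y)
```

The two estimates agree to machine precision, confirming that the ML estimate under i.i.d. Gaussian noise is exactly the LS solution.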
\subsection{Converting the problem of the estimation of $\mathbf{\theta}$ into an LS classification problem}\label{sec:LS} In the LS multiclass classification problem, the goal is to design $M$ linear functions $f_i(\mathbf{y})=\mathbf{w}_i^T\mathbf{y}$, given a set of response variables $\mathbf{y}_i$, according to a suitable mean squared error (MSE) criterion w.r.t. some desired discrete output (i.e. labels) $\mathbf{x}_i$, which represents a binary code. Note how, in general, this pattern classification problem is the inverse problem of hierarchical modeling in neuroimaging, as shown in the following. Recently, the residual score or classification error obtained from several methodologies beyond LS, e.g. by applying the fitted linear hyperplanes to new unseen data, has been deployed to establish a CV $A_{cc}$-based test on the responses with permuted labels, as shown elsewhere \cite{Reiss15, Gorriz2021}. \subsubsection{The inverse problem: LS for regressing an indicator matrix}\label{sec:LSinverse} Let us consider the general inverse problem: given a set of observations $\{\mathbf{y}_i\}$, for $i=1,\ldots,N$, we are interested in explaining a set of ``explanatory'' binary-coded variables $\mathbf{x}_i$ (labels) by a matrix $\mathbf{W}$ of parameters. This problem, referred to in this paper as the inverse problem in the \emph{label domain}, is also known as the linear regression of an indicator matrix, or linear regression model (LRM) \cite{Hastie2001}. In this model we regress the explanatory variables rather than the observed responses, i.e. rather than working in the \emph{observation domain}, as in the GLM. This regression could be more accurate than the latter depending on the nature of the data to be fitted, e.g. for a low number of discrete categories in the specified design matrix $\mathbf{X}=[x_{im}]$. 
If we have $M$ classes, then $\mathbf{X}$ is an $N\times M$ matrix, where each row $i=1,\ldots,N$ contains a single $x_{im}=1$, for $m=1,\ldots,M$, $\mathbf{Y}$ is the $N\times P$ matrix of column responses $\mathbf{y}_i$, and $\mathbf{W}$ is a $P\times M$ coefficient matrix. Thus, we fit a linear regression model of the form: \begin{equation}\label{eq:7} \mathbf{X}=\mathbf{Y}\mathbf{W} \end{equation} where the $P$ dimension allows the inclusion of several responses (multimodal or multiframe acquisitions) given the same indicator response matrix $\mathbf{X}$. Following the same methodology as before, the best estimate is given by: \begin{equation}\label{eq:8} \hat{\mathbf{W}}=(\mathbf{Y}^T\mathbf{Y})^{-1}\mathbf{Y}^T\mathbf{X} \end{equation} which regresses the input observations into a novel set of labels or constraints: \begin{equation}\label{eq:9} \hat{\mathbf{X}}=\mathbf{Y}\hat{\mathbf{W}} \end{equation} The novel set $\hat{\mathbf{X}}$ can be seen as a guess on the constraints for the set of observation vectors $\mathbf{y}_i$, or an approximation of the posterior probability $p(\text{class}=m|\mathbf{y})$. Thus, it allows us to compute an error model as: \begin{equation}\label{eq:10} \epsilon_{LS}=\mathbf{X}-\hat{\mathbf{X}} \end{equation} \subsubsection{Connection between $\mathbf{\theta}$ and $\mathbf{w}$} \label{sec:connection} For simplicity, and to connect with the GLM as shown in section \ref{sec:GLM}, let $P=1$ in the LRM; then $\mathbf{W}=\mathbf{w}$ is a $1\times M$ row vector and $\mathbf{Y}=\mathbf{y}$ is an $N\times 1$ column vector. 
An easy relation between the GLM and LRM approximations can be found by noting that: \begin{equation}\label{eq:11} \mathbf{X}=\mathbf{y}\mathbf{w}+\mathbf{\epsilon}_{LS} \end{equation} Thus, the corresponding GLM is: \begin{equation}\label{eq:12} \mathbf{y}=(\mathbf{X}-\mathbf{\epsilon}_{LS})\hat{\theta} \end{equation} where we define $\hat{\theta}=\mathbf{w}^T(\mathbf{w}\mathbf{w}^T)^{-1}$ and the GLM noise model is derived using $\epsilon=-\epsilon_{LS}\hat{\theta}$. The scalar term of equation \ref{eq:12} can be expressed at the LS solution as: \begin{equation}\label{eq:13} (\mathbf{w}\mathbf{w}^T)^{-1}=(\mathbf{y}^T\mathbf{y})^2/((\mathbf{X}^T\mathbf{y})^T\mathbf{X}^T\mathbf{y})=\frac{(\sum_{i=1}^N y_i^2)^2}{\sum_{m=1}^M\sum_{i,j}y_{im}y_{jm}} \end{equation} where $y_{im}$ denotes the observation $i$ belonging to class $m$. Thus, the LS linear regression of the observations can be described by the GLM on the observations (a linear regression on the explanatory variables) and vice versa.\footnote{Given a GLM on the observations, we can define an LRM on the explanatory variables as $\hat{\mathbf{w}}=\mathbf{\theta}^T(\mathbf{\theta}\mathbf{\theta}^T)^{-1}$ with an error $\epsilon_{LS}=-\epsilon\hat{\mathbf{w}}$.} \subsubsection{Inference of the inverse GLM based on MLE} \label{sec:inference} The LRM can be seen as a generalization of the GLM for the responses, coding $\mathbf{x}$ as a vector of continuous noisy responses (then $M=1$, instead of being an indicator matrix): \begin{equation}\label{eq:14} \mathbf{x}=\mathbf{Y}\mathbf{w}+\mathbf{\epsilon} \end{equation} which is equivalent to the inverse of the GLM in equation \ref{eq:1}. An inference on this model based on MLE could proceed as follows. Given a set of pairs $(\mathbf{y}_i,x_i)$, we estimate the set of parameters $\mathbf{w}$ using an expression similar to equation \ref{eq:8}. 
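The $\mathbf{\theta}$--$\mathbf{w}$ connection of equations \ref{eq:11}--\ref{eq:13} can be verified numerically for $P=1$; a minimal sketch on synthetic data (variable names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 100, 2
labels = rng.integers(0, M, size=N)
X = np.eye(M)[labels]                          # N x M indicator matrix
y = (2.0 * labels - 1.0 + 0.5 * rng.standard_normal(N)).reshape(N, 1)

# LRM fit with P = 1 (equation 8 with Y = y): w is a 1 x M row vector
w = np.linalg.solve(y.T @ y, y.T @ X)
eps_LS = X - y @ w                             # equation (10)

# Equation (12): theta_hat = w^T (w w^T)^-1 recovers y exactly from X - eps_LS
s = (w @ w.T).item()                           # scalar w w^T
theta_hat = w.T / s
y_rec = (X - eps_LS) @ theta_hat

# Equation (13): closed form of (w w^T)^-1 at the LS solution
Xy = X.T @ y
rhs = (y.T @ y).item() ** 2 / (Xy.T @ Xy).item()
```

Both identities hold exactly: $(\mathbf{X}-\mathbf{\epsilon}_{LS})\hat{\theta}=\mathbf{y}\mathbf{w}\hat{\theta}=\mathbf{y}$ by construction, and the scalar of equation \ref{eq:13} matches $1/s$.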
After the fitting process, we assess its significance under the null hypothesis on an independent set, in the same way as the T-statistic inference on the GLM, using a CV $A_{cc}$-based test statistic: \begin{equation}\label{eq:15} T_{CV}=(\mathbf{x}-\mathbf{Y}\mathbf{w})^T(\mathbf{x}-\mathbf{Y}\mathbf{w})=\sum_{i=1}^N(x_i-\sum_{j=1}^PY_{ij}w_j)^2 \end{equation} The null distribution is modeled by choosing a large number of permutations $\pi$ to create artificial data sets $(\mathbf{y}_i,x_{\pi_p(i)})$, for $p=1,\ldots, O$, i.e. a permutation test, and evaluating the sum of squared residuals $T_{CV}$ on every unseen sample within the permuted and original sets. Consequently, the p-value is defined by\footnote{The correction factor $+1$ in the numerator and denominator is justified by the inclusion of the original sample set in the test.}: \begin{equation}\label{eq:16} p_{value}=\frac{ card\{ T_{CV}^{\pi}<T_{CV}\}+1}{O+1} \end{equation} where $card(.)$ is the cardinality of a set and $T_{CV}$ and $T_{CV}^{\pi}$ are the CV $A_{cc}$-based tests on the original and permuted sets, respectively. In the latter test, also known as the P-test \cite{Reiss15}, we assumed that we have a good procedure for estimating $\mathbf{w}$. However, CV is a standard procedure for estimating the actual error of the classifier that has been found to be unstable with limited sample sizes \cite{Varoquaux18, Gorriz19}. Thus, we could improve this estimation by including a term to cope with the possibility that the fitting process is not as good as expected, i.e. that the resulting estimate is not a good predictor. 
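A minimal sketch of the P-test of equations \ref{eq:15} and \ref{eq:16} on synthetic data; for brevity a single train/test split stands in for full cross-validation, and all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P, O = 80, 5, 499                     # train size, features, permutations

# Labels x in {0,1} and responses Y carrying a real group effect
x = rng.integers(0, 2, size=2 * N).astype(float)
Y = x[:, None] + 0.8 * rng.standard_normal((2 * N, P))
Ytr, Yte = Y[:N], Y[N:]

def t_cv(x_all):
    """Fit w on the training half (eq. 8) and return the residual score
    (eq. 15) on the held-out half."""
    xtr, xte = x_all[:N], x_all[N:]
    w = np.linalg.solve(Ytr.T @ Ytr, Ytr.T @ xtr)
    r = xte - Yte @ w
    return r @ r

T_cv = t_cv(x)
T_pi = np.array([t_cv(rng.permutation(x)) for _ in range(O)])

# Equation (16): permutation p-value with the +1 correction
p_value = (np.sum(T_pi < T_cv) + 1) / (O + 1)
```

Because the simulated effect is strong, the residual score on the original labeling is far below every permuted score and the p-value is close to its minimum attainable value $1/(O+1)$.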
In this sense, other alternatives could be tested by assessing the worst case based on concentration inequalities and the resubstitution estimate: \begin{equation}\label{eq:17} T_{Res}=(\mathbf{x}-\mathbf{Y}\mathbf{w})^T(\mathbf{x}-\mathbf{Y}\mathbf{w})+\Delta(N,P) \end{equation} where $\Delta(N,P)$ is an upper bound of the actual risk \cite{Vapnik82, Gorriz2021,Gorriz19} that holds with probability at least $1-\alpha$. \subsection{A general framework for multiclass regression} As pointed out, the columns of the specified design matrix $\mathbf{X}$ can be interpreted and coded as the labels of a multiclass classification problem for each component of the observation variable or response $\mathbf{y}$, stored in the rows of $\mathbf{Y}$. Let $\mathbf{x}$ denote the vector of labels for each observation $\mathbf{y}$; then it is straightforward to see that the following minimization is similar to the one proposed above (eq. \ref{eq:5}) for the estimation of the parameters contained in $\mathbf{\theta}$: \begin{equation}\label{eq:18} \begin{array}{l} \mathbf{W}=\arg\min_{\mathbf{W}}E[||\mathbf{X}-\mathbf{Y}\mathbf{W}||^2]=\arg\min_{\mathbf{W}}E[\sum_{m=1}^M ||\mathbf{x}_{m}-\mathbf{Y}\mathbf{w}_m||^2]\\ \qquad= \arg\min_{\mathbf{W}} \sum_{m=1}^M \sum_{j=1}^N (x_{jm}-\sum_{i=1}^P y_{ji}w_{im})^2 \end{array} \end{equation} where $m,j,i$ index the set of labels, samples and observation components, respectively, $\mathbf{W}=(\mathbf{w}_1,\ldots,\mathbf{w}_M)^T$ is the matrix of linear functions, and $\mathbf{x}_j$ is the set of constraints (labels) for each observation $\mathbf{y}_j$. This is equivalent to solving $M$ LS problems, one for each class. It is worth mentioning that in the GLM the parameters $\mathbf{\theta}$ and observations $\mathbf{y}$ are considered as vectors ($P=1$). In the previous sections we have shown a simple connection between the GLM and the LRM, although using the parameters derived from the LRM is not the goal of this work. 
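The minimization in equation \ref{eq:18} decouples across classes; a quick numerical check that the joint minimizer equals the $M$ per-column LS solutions (synthetic sizes, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(5)
N, P, M = 60, 4, 3
Y = rng.standard_normal((N, P))
X = np.eye(M)[rng.integers(0, M, size=N)]   # N x M indicator matrix

# Joint minimizer of equation (18), i.e. equation (8)
W = np.linalg.solve(Y.T @ Y, Y.T @ X)

# Equivalent: M independent LS problems, one per class column
W_cols = np.column_stack(
    [np.linalg.solve(Y.T @ Y, Y.T @ X[:, m]) for m in range(M)]
)
```

The agreement follows because the squared Frobenius norm is a sum of independent per-column squared norms sharing the same Gram matrix $\mathbf{Y}^T\mathbf{Y}$.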
The LRM is a naive LS model based on the minimization of the empirical risk, e.g. the MSE. In this sense, we prefer to use the Structural Risk Minimization (SRM) principle by means of support vector machines (SVM), an optimal strategy with limited sample sizes. The latter minimization is more closely related to the concentration-inequality framework, as shown in \cite{Vapnik82}. Thus, linear regression could be replaced by SVM or other predictive algorithms, which employ different loss functions and measures of performance, as suggested in \cite{Reiss15}. In this sense, it would be interesting to assess the connection between the parameters estimated by SVM and the ones obtained using the GLM. In fact, for the linear SVM, after training, the same equations, i.e. equation \ref{eq:12}, can be applied to estimate $\theta_{SVM}$ in terms of the support vectors, as we did with the LRM (see the experimental section below). \section{Experimental results} In the first part of the experiments, to clearly state the problem and the solutions, we consider a simple group comparison with only a one-level analysis (a Bayesian approach to this problem is equivalent to the GLM-based inference on this single level) using a second-level design matrix that models the subject-specific effects over subjects. This is the well-known diagnostic example of a group comparison between two conditions, e.g. Alzheimer's disease (AD) subjects vs. controls. We fit the GLM and the equivalent problem using the LRM and SVM. 
Thus, we regress the observed variables using a simple explanatory matrix $\mathbf{X}$ and a Gaussian model for the noise to obtain two parameters $\theta_1,\theta_2$ in this toy example, as follows: \begin{equation*}\label{eq:19} \mathbf{Y}|_{N\times 1}=\mathbf{X}|_{N\times 2} \mathbf{\theta}|_{2\times 1}+\mathbf{\epsilon}|_{N\times 1} \end{equation*} where, as an example, \begin{equation*}\label{eq:20} \mathbf{X}=\left( \begin{array}{cc} 1 &0 \\ 0 &1 \\ 1 &0 \\ 1 &0\\ \vdots&\vdots\\ \end{array}\right) \end{equation*} is a matrix of explanatory variables containing $1$s and $0$s, which indicates the class of each observation using a two-dimensional binary code. A hierarchical model could be processed the same way by fitting the set of parameters step by step; however, we are interested in assessing the connection between $\mathbf{\theta}$ and $\mathbf{w}$ in this toy example. The objective of this part is twofold: i) the estimation of the model parameters using both methodologies and domains, linking them by means of the theoretical connection in equation \ref{eq:12}, and ii) to assess how well they explain observations and labels in both domains. The latter can be tackled by showing the estimates and the groups of observations in both domains, and by quantitatively evaluating the classification error in the equivalent label domain, given the expected ideal values of the model parameters. In the last part of this section, we show the inference analysis derived from the two methodologies in each domain. We regress on the observations and on the labels to construct and assess the spatially extended statistical processes, which provide maps of significance, using the MRI ADNI dataset \cite{Gorriz2021}. 
In this way, we compare SPM, which is based on a two-sample T-statistic similar to equation \ref{eq:3}, where significance is individually assessed at each voxel using three configurations: a cluster-defining threshold (CDT) of $P = 0.001$ (uncorrected for multiple comparisons), a cluster extent threshold equal to $10$, and FWE correction at $0.05$; and the P-tests described in section \ref{sec:inference}. \subsection{Data generation 1} An $N$-dimensional Gaussian noise vector $\mathbf{v}$ is randomly drawn with zero mean and an $N\times N$ covariance matrix with $2$-norm equal to $1$. This noise allows the definition of a vector of observations by adding the noise to a binary vector (a column in the explanatory matrix of indicators), i.e. $\mathbf{y}=\mathbf{X}_k+\mathbf{v}$ for $k\in\{1,2\}$. The design matrix is then obtained as $\mathbf{X}=[\mathbf{X}_k\ \bar{\mathbf{X}}_k]$, where $\bar{(.)}$ denotes logical negation. Once the observations are artificially drawn (see figure \ref{fig:uno}) with increasing sample size, we regress both the explanatory variables (LS or SVM) and the observations (GLM) to obtain a set of two parameters for each model, $\mathbf{\theta}=[\theta_1,\theta_2]$ and $\mathbf{w}=[w_1,w_2]$. All the methods can be employed to estimate the regressed observation variables using equations \ref{eq:1} and \ref{eq:10}, given the explanatory matrix and the estimated parameters, as shown in figure \ref{fig:dos}. In the latter we plot the distribution of the T-statistic over 1000 simulations (top), a sample of this distribution that shows the variability of the estimation using the GLM around the ideal value 1 (bottom left), and the observations estimated by the analyzed models. In connection with the previous one-sample GLM estimations, we plot the estimated parameters explaining the observations using all the methods in figure \ref{fig:tres}, together with the observations they model. Note the large variability of the GLM estimation with increasing sample size. 
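The DG1 scheme described above can be sketched as follows; this is a minimal, illustrative reproduction that fits the parameters by plain LS rather than the full GLM/SVM pipeline of the experiments:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100

# N x N covariance with unit 2-norm, as specified in DG1
A = rng.standard_normal((N, N))
C = A @ A.T
C /= np.linalg.norm(C, 2)
v = rng.multivariate_normal(np.zeros(N), C)   # N-dim Gaussian noise

# Binary indicator column and its logical negation
Xk = rng.integers(0, 2, size=N).astype(float)
X = np.column_stack([Xk, 1.0 - Xk])

y = Xk + v                                    # DG1 observations
theta = np.linalg.solve(X.T @ X, X.T @ y)     # LS fit of [theta_1, theta_2]
```

The fitted pair lies near the ideal values $[\theta_1,\theta_2]=[1,0]$, with fluctuations governed by the correlated noise $\mathbf{v}$.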
Conversely, in figure \ref{fig:cuatro} we show the problem inverse to the above: how the methods estimate $\mathbf{w}$ from the point of view of the label regression. In this case it is readily seen that the one-sample GLM model provides a wrong estimate at different sample sizes, i.e. the red curve lies above the blue curve. As expected, the use of these parameters in the dual classification problem results in a larger empirical error, as shown in figure \ref{fig:cinco}. \begin{figure*} \centering \includegraphics[width=\textwidth]{uno} \caption{Data generation 1 (DG1) example} \label{fig:uno} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{randomGLMd} \includegraphics[width=0.49\textwidth]{tres} \includegraphics[width=0.49\textwidth]{dos} \caption{Estimated observations and T-statistic distribution. Note that in the GLM model we use the covariance matrix of the noise to evaluate equation \ref{eq:2}, that is, in the estimation of $\mathbf{\theta}$. We show the comparison between the non-normalized statistics of all the estimations, that is, suppressing the covariance term in the GLM, in a random (R=1000) simulation. 
This clearly demonstrates that only on average does the ML statistic converge to the ideal value $\theta_1-\theta_2=1$, unlike the single sample of this distribution shown in the bottom left.} \label{fig:dos} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cuatro} \caption{Distribution of observations and estimates of $\mathbf{\theta}$ for GLM, LRM and SVM in DG1} \label{fig:tres} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cinco} \caption{Estimates of the parameter $\mathbf{w}$ regressing the observations with increasing sample size in DG1} \label{fig:cuatro} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{seis} \includegraphics[width=0.49\textwidth]{siete} \includegraphics[width=0.49\textwidth]{ocho} \caption{Classification boundaries and empirical errors in GLM, LRM and SVM (N=1000, DG1).} \label{fig:cinco} \end{figure*} \subsection{Data generation 2} A more realistic data generation (DG2) is described in the following. The previous model is a standard procedure, where two expected values ($1$ and $0$) are added to the same Gaussian noise model. Here we propose to introduce the concept of label noise in the design matrix, instead of only in the observations. Again $\mathbf{y}=\hat{\mathbf{X}}_k+\mathbf{v}$, but the matrix of indicators $\hat{\mathbf{X}}$ is a version of the original that is equal to it with probability $1-t$, with $t=0.1$, and flipped with probability $t$. This allows us to control the label noise in the design matrix, following the methodology shown in \cite{Ojala2010}. We repeat the same experiments of the previous section (see figures \ref{fig:unobis}-\ref{fig:cincobis}) using this realistic data generation. Again, the observations, error and explanatory matrix (ideal and noisy versions) are shown in figure \ref{fig:unobis}. 
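The DG2 label-flipping mechanism can be sketched as follows; for brevity, the unit-norm noise covariance of DG1 is replaced here by i.i.d. noise, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
N, t = 200, 0.1

Xk = rng.integers(0, 2, size=N).astype(float)
flip = rng.random(N) < t                   # flip each entry with probability t
Xk_noisy = np.where(flip, 1.0 - Xk, Xk)    # noisy indicator column
X_noisy = np.column_stack([Xk_noisy, 1.0 - Xk_noisy])

# Observations built from the noisy design; i.i.d. noise used here for brevity
y = Xk_noisy + 0.3 * rng.standard_normal(N)
```

On average a fraction $t$ of the labels is flipped, which mimics a group comparison containing mislabeled subjects or acquisitions.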
Note how this observation model is more realistic than the previous one and corresponds to a group comparison including wrongly labeled subjects or acquisitions. From figures \ref{fig:dosbis} and \ref{fig:tresbis}, the same behavior can be expected as in the previous case. However, when we analyze the inverse problem we see a huge difference in the GLM estimation of $\mathbf{w}$. The dependence of the GLM estimator on the one-point sample mean yields an estimate that fluctuates about the optimum value, unlike the LRM and SVM. It is also worth mentioning that, in this realistic example, the estimation of $\mathbf{w}$ using the GLM produces a large empirical error, as shown in figure \ref{fig:cincobis}. \begin{figure*} \centering \includegraphics[width=\textwidth]{unobis} \caption{Data generation 2 (DG2) example} \label{fig:unobis} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{randomLSd} \includegraphics[width=0.49\textwidth]{tresbis} \includegraphics[width=0.49\textwidth]{dosbis} \caption{Estimated observations and T-statistic distribution (DG2). See the caption of figure \ref{fig:dos}} \label{fig:dosbis} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cuatrobis} \caption{Distribution of observations and estimates of $\mathbf{\theta}$ for GLM, LRM and SVM (DG2).} \label{fig:tresbis} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{cincobis} \caption{Estimates of the parameter $\mathbf{w}$ regressing the observations with increasing sample size in DG2} \label{fig:cuatrobis} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{seisbis} \includegraphics[width=0.49\textwidth]{sietebis} \includegraphics[width=0.49\textwidth]{ochobis} \caption{Classification boundaries and empirical errors in GLM, LRM and SVM (DG2)} \label{fig:cincobis} \end{figure*} In conclusion, the link between the two approaches lies in the different nature of the regression procedure. 
In both domains there is an implicit classification task once the parameters that best explain the corresponding observations are derived. These parameters are fitted taking into account only the empirical data available (including a noise model or not). Therefore, $\mathbf{w}_{m}$ for a given model $m$ can be used to regress the observations to obtain a novel data set in the label space (new regressed labels), which can be associated with (or classified into) the states of the explanatory matrix. This classification task yields an empirical error, as shown in figures \ref{fig:cinco} and \ref{fig:cincobis}. Other methods could be used as well to obtain such parameters in a (non)linear fashion. As an example, we compared the decision boundary obtained by the LRM with that of the SVM in figures \ref{fig:cinco} and \ref{fig:cincobis} to show the differences between methods in terms of generalization ability. \subsection{Real data experiment: the ADNI dataset} Data used in the preparation of this paper were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI database contains 1.5 T and 3.0 T T1-weighted (T1w) MRI scans of AD, Mild Cognitive Impairment (MCI), and cognitively normal control (NC) subjects, acquired at multiple time points. Here we only included the 1.5 T structural MRI scans corresponding to the three different groups of subjects. The original database contained more than 1000 T1-weighted MRI images, comprising 229 NC and 188 AD; for the proposed study, only the first medical examination of each subject is considered, resulting in 417 gray matter (GM) images. Following the recommendation of the National Institute on Aging and the Alzheimer's Association (NIA-AA) for the use of imaging biomarkers \cite{NIA18}, we considered the group comparison NC vs. AD for establishing a clear framework for comparing statistical paradigms (SPM and $T_{CV}$), since the MCI class is strictly based on clinical criteria, without including any other biomarker \cite{McKhann11}. 
The demographic data of the subjects in the database are summarized in Table \ref{tab:demog}. The dataset was preprocessed using standardised neuroimaging methods and protocols implemented in the SPM software (registration in MNI space by spatial normalization, and segmentation to differentiate brain tissues, e.g. GM) \cite{Friston95}. Following the aforementioned methods, we fit the set of parameters using a linear SVM and evaluate the $T_{CV}$ statistic on the original set (see figure \ref{fig:seis}). As shown in this figure, the resubstitution estimate provides a more optimistic value in the $A_{cc}$ distribution than the $K$-fold based estimate. Note that this analysis is independent of the selected fold, as we are performing $\sim 10^6$ analyses, one per voxel. However, both are optimistic, since the mean of the distribution is not clearly distributed around $0.5$ (it is already shifted to the right, beyond the effect due to real significant regions). The effect is even larger when the dataset is slightly imbalanced, the case of over-powered datasets, as shown at the bottom of the latter figure (using 228 vs. 188). On the contrary, note how the correction based on the bound derived in \cite{Gorriz19} clearly shifts the $A_{cc}$ obtained by resubstitution to the left, resulting in a better (conservative) estimate of the statistic in the whole volume. Based on the $T_{CV}$ and $T_{Res}$ values from the original dataset, and the ones obtained using a permutation analysis ($O=1000$) for a selection of structures, e.g. the hippocampus, we can compare SPM with the previous inference approaches, as described in section \ref{sec:inference}. Note that in this paper the huge number of voxels contained within an image limits the permutation analysis to some specific structures. Results on the hippocampus are depicted in figure \ref{fig:siete}. 
The permutation analysis reveals how the power of the $T_{CV}$ approach is affected in this featured region, where a real effect might be found in almost the whole structure. The statistical power of the $T_{Res}$ approach is preserved through the permutation procedure ($2058$ detected voxels vs. $1024$ voxels, as shown in the same figure). Also noteworthy are the CDF of the errors derived in this specific region and the distribution of the p-values within it. Recall that the dataset includes advanced AD subjects, so the selected structure should be clearly affected by the disease. To preliminarily extend the analysis to the whole volume, we approximately simulate the null distribution outside this featured region in two steps. First, we compute the set of p-values in the hippocampus (around $2\cdot 10^3$ voxels) following equation \ref{eq:16} and determine the threshold $T_{th}$ that approximately provides the significance level, e.g. $0.05$. Then, assuming that for any $T<T_{th}$ the p-value of the observation is below $0.05$, we threshold the rest of the image to obtain the significant voxels showing an effect. This approach clearly needs multiple-comparison correction, as several dependent or independent statistical tests are being performed simultaneously at the given significance level. Therefore, we decrease the significance level down to $\alpha=0.001$ to avoid the presence of false positives in the permutation analyses, and then compare with SPM in the whole volume using the aforementioned configurations. In figure \ref{fig:ocho} we show the detection ability together with the control of the type I error of the $T_{Res}$ approach (map in red). Note how the permutation test affects the detection ability of the classical CV approach (map in green) and how the uncorrected voxelwise SPM approaches (in blue) tend to inflate false positives. 
\begin{table}[htbp] \centering \caption{Demographic details of the ADNI dataset: group means with their standard deviations} \label{tab:demog} \begin{tabular}{lccccc} \toprule & Status & Number & Age & Gender (M/F) & MMSE\\ \midrule MRI ADNI& NC & 229 & 75.97$\pm$5.0 & 119/110 & 29.00$\pm$1.0 \\ & AD & 188 & 75.36$\pm$7.5 & 99/89 & 23.28$\pm$2.0 \\ \bottomrule \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{Acchist} \includegraphics[width=0.49\textwidth]{SliceBrowserAtlasCV} \includegraphics[width=0.49\textwidth]{SliceBrowserAtlasC} \caption{Top: distribution of voxelwise accuracies of the real dataset using $K=10$-fold CV, resubstitution and concentration inequalities \cite{Gorriz2021}. Bottom: 3D distribution of the accuracies using CV and the corrected accuracy.} \label{fig:seis} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{CVvsResubperm} \includegraphics[width=\textwidth]{hipocampo_analysis} \caption{Permutation analysis on the hippocampus. Note that $O=1000$ and the upper bound \cite{Gorriz19} holds with probability at least $1-\alpha=0.95$} \label{fig:siete} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Tres_mapa} \includegraphics[width=0.49\textwidth]{TCV_mapa} \includegraphics[width=0.3\textwidth]{SPMunc0001_mapa} \includegraphics[width=0.3\textwidth]{SPMunc0001CE10_mapa} \includegraphics[width=0.3\textwidth]{SPMfwe005_mapa} \caption{Parametric and non-parametric statistical maps. 
Note the trade-off between detection and control of the FWE of the $T_{Res}$ approach, compared with $T_{CV}$ and the three SPM configurations} \label{fig:ocho} \end{figure*} \section{Discussion} In the context of classification for statistical inference, researchers mainly follow two strategies: i) they perform $K$-fold CV and assess the $A_{cc}$ over several averaged folds, or ii) they propose some kind of cross-validation based statistic (P-test) using an estimate of the actual error of the classifier on a new set of samples (equation \ref{eq:15}). In both cases, if this squared residual (error) is small, i.e. a good classification is achieved, then it constitutes evidence against the null hypothesis. In the second approach, to simulate the null distribution a technique is employed that is also used in frequentist inference, the permutation test, as shown in section \ref{sec:inference}. A set of permutations $\pi_p$, for $p=1,\ldots,O$, is generated and then applied to the dataset, using the same observations $\mathbf{y}$ and permuted constraints $x_{\pi_p}$. The parameters $\mathbf{w}_{\pi_p}$ are estimated and a set of residuals is computed for all the permutations. The p-value is computed by dividing the number of times a \emph{residual score} less than the one obtained with the original labeling is randomly obtained by the number of permutations, i.e. $p\text{-value}=p(RS_{\pi}<RS)$. This methodology is called the CV-$P$ test \cite{Reiss15}, where the LRM might be replaced by SVM or other predictive algorithms. Several limitations are found when using only the LRM for estimating the aforementioned posterior probability. Linear regression is only operative in binary classification, e.g. the regressed values can be negative or even greater than one \cite{Hastie2001}. Indeed, as shown in the example, in this case there is a strong correspondence between the GLM and the LRM for a single-level analysis in group comparisons. 
Thus complex classifiers and other loss functions are needed to relieve bad estimations of the set of parameters. Beyond that, the selected predictive algorithms build their P-tests on the CV strategy, which can be a biased estimator of the actual error in heterogeneous datasets, such as the ones used in neuroimaging \cite{Varoquaux18}. On the other hand, classical and Bayesian inference depend on the specified models when proposing the T statistic and fitting the parameters of the GLM. This is partly solved, again, by the use of permutation analysis in the estimation of the null distribution, but what about the definition of the T statistic itself? It is also described in terms of the error covariance matrix, which must be estimated from empirical data with limited sample sizes. In the toy example shown above we assumed a known covariance matrix in the formulation of the GLM. Despite that, the T statistic following from the best guess fluctuated around the ideal value and resulted in low classification rates in the dual problem. How is frequentist analysis, or Bayesian analysis (equivalent to the latter at the last level), actually performed in a real scenario? Again, there are model selection and parameter fitting stages to achieve that; nonetheless, in complex scenarios with a limited sample size, heuristics are the common solution \cite{Woolrich2009}. Indeed, in the high-dimensional case or under the assumption of complex models, the performance and operation of the latter approaches is arguable \cite{Reiss15}. They can hardly estimate parameters, the computation is costly, and they tend to use heuristics to solve such issues, e.g.
in the FSL tools based on Bayesian inference, such as BET (brain extraction tool), TBSS (tract-based spatial statistics), FLIRT (FMRIB's linear image registration tool), PRELUDE/FUGUE (phase unwrapping and MRI unwarping) and MELODIC ICA, the use of heuristics is common practice and the estimation of the full posterior distribution of the model parameters is biased. In conclusion, limited sample sizes and the selection/estimation of any specific model are still an issue in neuroimaging, especially when the model and the interaction between model parameters become too complex for an accurate posterior probability estimation, or a feasible numerical computation of the Bayes rule. Given the connection between the two observation models, i.e. the GLM and the LRM, in this paper we propose the use of an agnostic theory about the estimation of dependencies, established in the pattern classification problem with limited amounts of data, to achieve statistical inference \cite{Vapnik82,Haussler92}. \section{Conclusions} In this paper we proposed the use of permutation tests and agnostic theory on the set of regressed outputs via the definition of the residual score or $A_{cc}$-based test. We employ permutation tests and a better estimation of the actual error based on concentration inequalities to provide a trade-off between the Type I error and the statistical power. Previous results demonstrated the ability of this estimator to provide maps of significance \cite{Gorriz2021}, where a random simulation on controls resulted in a nominal rate of false positives. In conclusion, we see the equivalence of the estimation in the observation and the (explanatory) label domains; thus any test performed on the label space using an $A_{cc}$-based test is similar to the ones used in neuroimaging in the last decade. Moreover, prevalence (the scores in equations \ref{eq:15} and \ref{eq:17}) is a valid measure for statistical inference without assuming any model from the outset.
Our approach computes this score using the whole available database, instead of splitting it into folds, and with the resulting set of accuracies we estimate the real one based on the upper bounds (instead of using a K-fold strategy) with probability at least $1-\alpha$. Then, a permutation analysis is derived using this measure to simulate the distribution of the null hypothesis, and finally a test can be formulated in the classic statistical sense. \section*{Acknowledgments} This work was partly supported by the MINECO/FEDER under the RTI2018-098913-B100, CV20-45250 and A-TIC-080-UGR18 projects and by the Ministerio de Universidades under the FPU Predoctoral Grant FPU 18/04902. \section*{Appendix} \subsection*{Proof of the connection:} The derivation of the connection between $\mathbf{\theta}$ and $\mathbf{w}$ is shown in the following, assuming non-singular matrices when needed. Given the optimum solution for the matrix of parameters from the LRM in equation \ref{eq:7}, $\hat{\mathbf{w}}=(\mathbf{y}^T\mathbf{y})^{-1}\mathbf{y}^T\mathbf{X}$, we readily see that the observation can be written as: \begin{equation}\label{eq:21} \mathbf{y}=\mathbf{X}\hat{\mathbf{w}}^T(\hat{\mathbf{w}}\hat{\mathbf{w}}^T)^{-1} \end{equation} The transpose of the parameter matrix $\hat{\mathbf{w}}$ can be expressed as: \begin{equation}\label{eq:22} \hat{\mathbf{w}}^T=(\mathbf{y}^T\mathbf{X})^T(\mathbf{y}^T\mathbf{y})^{-T}=(\mathbf{y}^T\mathbf{y})^{-1}\mathbf{X}^T\mathbf{y} \end{equation} Then, equation \ref{eq:21} transforms into: \begin{equation}\label{eq:23} \mathbf{y}=\mathbf{X}\hat{\mathbf{w}}^T((\mathbf{y}^T\mathbf{y})^{-1}\mathbf{y}^T\mathbf{X}(\mathbf{y}^T\mathbf{y})^{-1}\mathbf{X}^T\mathbf{y})^{-1}= (\mathbf{y}^T\mathbf{y})^{2}(\mathbf{y}^T\mathbf{X}\mathbf{X}^T\mathbf{y})^{-1} \mathbf{X}\hat{\mathbf{w}}^T \end{equation} where the leading terms in parentheses are scalars.
The first term is the squared power of the observation, $(\mathbf{y}^T\mathbf{y})^{2}=(\sum_{i=1}^Ny_i^2)^2$, and the second, given that $\mathbf{X}$ is an indicator matrix and: \begin{equation}\label{eq:24} \mathbf{y}^T\mathbf{X}=\left(\sum_{i\in C_1}y_{i1}, \sum_{i\in C_2}y_{i2},\ldots,\sum_{i\in C_M}y_{iM}\right)=(\mathbf{X}^T\mathbf{y})^T \end{equation} where $y_{im}$ denotes observation $i$ in class $m$, can be expressed as the denominator in equation \ref{eq:13}. \subsection*{Binary Support Vector Machines:} In general, an SVM separates binary labeled training data $(\mathbf{y},x)\in \mathbb{R}^P\times\{\pm 1\}$ by the hyperplane $f:\mathbb{R}^P\rightarrow \{\pm 1\}$: \begin{equation}\label{eq:1ap} f(\mathbf{y})=\mathbf{w}^T\mathbf{y}+w_0=(\mathbf{w}^T\; w_0)(\mathbf{y}^T\; 1)^T \end{equation} where $\mathbf{w}$ is known as the weight vector and $w_0$ as the threshold. This hyperplane is obtained in such a way \cite{Burges98} that it is maximally distant from the two classes, i.e. it is the maximal margin hyperplane. In our case, the binary labels $x$ code each row of the explanatory matrix $\mathbf{X}$, e.g. $1\rightarrow (1,0)$ and $-1\rightarrow (0,1)$, and $P=1$; thus our function $f:\mathbb{R}\rightarrow \{(0\;1), (1\;0)\}$ can be easily solved via the original SVM with two independent minimizations with opposite expected solutions $w^1=-w^0$ and $w_0^1=-w_0^0$. It is worth mentioning that the SVM is defined for $\pm 1$ labels, thus to compute the regressed observations we need to evaluate: \begin{equation}\label{eq:2ap} \hat{\mathbf{Y}}=(\mathbf{X}\mathbf{w}^T(\mathbf{w}\mathbf{w}^T)^{-1}+1)/2 \end{equation} \bibliographystyle{srt}
\section{INTRODUCTION} \label{sec:introduction} 2D LiDARs provide accurate range measurements with a large field of view, often greater than 200 degrees, at an affordable price, and are a popular sensor choice for many robotic tasks, including person detection. While early approaches for detecting persons in 2D range data focused on heuristics with hand-crafted features \cite{Leigh15ICRA,Arras07ICRA}, recent studies used convolutional neural networks and further improved the detection results \cite{Beyer18RAL,Jia20arXiv}. Deep learning has become an integral part of modern detection algorithms, whether from 2D range data \cite{Beyer18RAL,Jia20arXiv} or images \cite{Tan20CVPR,He17ICCV,Redmon16CVPR,Liu16ECCV,Ren15NIPS}. The success of these detectors hinges upon the availability of large and high-quality datasets. Over the past years, significant effort has gone into labeling images with bounding boxes or segmentation masks, whereas relatively little attention has been given to annotating 2D range data (see~Fig.\,\ref{fig:datasets}). The few available datasets for 2D LiDARs do not possess enough diversity in terms of the surrounding environments and sensor models. Networks trained solely on these data may not generalize well at deployment, where both the environment and the sensor specification are likely to differ from those encountered during training. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{pics/tikz/teaser.tikz} \caption{ Only a few 2D LiDAR datasets with person annotations exist, while a vast amount of image-based annotations are available. We utilize the image-based annotations to train 2D LiDAR-based person detectors by generating pseudo-labels from image-based detections. $^*$: Numbers are estimated based on person-per-image statistics of the training set.
} \label{fig:datasets} \end{figure} To overcome the limitation imposed by insufficient training data, we propose a method to automatically generate labels for training LiDAR-based person detectors using the output of an image-based detector on a calibrated camera. Given a person bounding box in an image, our method takes the 2D LiDAR points that fall within the box frustum and uses a clustering algorithm to locate the person in the LiDAR coordinate frame. The estimated locations of persons in the scene, as well as a set of negative points, are used as pseudo-labels for training a detector. This allows a robot to train or fine-tune a laser-based detector in a self-supervised fashion. We empirically demonstrate the validity of the pseudo-labels and show that they can be used for both training and fine-tuning a 2D LiDAR-based person detector. Additionally, we experiment with robust training techniques to further improve the detector performance. Our method is an effective way to boost the performance of a person detector during deployment without any additional labeling effort, and has great potential for many robotic applications. In summary, the main contributions of this work are: \begin{itemize} \item We propose a method to automatically generate pseudo-labels for training 2D LiDAR-based person detectors, leveraging the output of an image-based detector and known extrinsic camera calibration. \item We demonstrate that the generated pseudo-labels can be used to train or fine-tune a person detector and experiment with robust training techniques to further improve its performance. \item We release our code, implemented in PyTorch with an easy-to-use detector ROS node, for robotic applications.\footnote{\url{https://github.com/VisualComputingInstitute/2D_lidar_person_detection}} \end{itemize} \section{RELATED WORK} \label{sec:related_work} \subsection{LiDAR-based Person Detection} Person detection from 2D range data has a long-standing history in the robotics community.
Early approaches~\cite{Fod02ICRA,Scheutz04IROS,Schulz03IJRR} focus on tracking moving blobs in sequential LiDAR scans. These blobs are detected using manually engineered heuristics in a non-learning fashion. Later developments~\cite{Leigh15ICRA,Arras07ICRA,Pantofaru10ROS} improved the detection stage by using supervised learning techniques. In these approaches, a clustering algorithm is first applied over scan points, using clues like proximity~\cite{Leigh15ICRA} or jump distance~\cite{Arras07ICRA}. A set of hand-crafted features is then extracted for each cluster, and these features are used to train a simple classifier~(\textit{e.g.}~AdaBoost). In the case of~\cite{Leigh15ICRA,Pantofaru10ROS}, additional heuristics from tracking are employed to post-process the detected clusters. Most recent developments~\cite{Beyer16RAL,Beyer18RAL,Jia20arXiv} use deep learning techniques to detect persons directly from data, without manually engineered heuristics or features. The DROW detector~\cite{Beyer16RAL} is the first deep learning-based walking aid detector working on 2D range data and was later extended to additionally detect persons~\cite{Beyer18RAL}. The current state-of-the-art method is the DR-SPAAM detector~\cite{Jia20arXiv}, which leverages a temporal aggregation paradigm to incorporate multiple scans into the detection process, alleviating the problems associated with the low information content in a single LiDAR scan while retaining real-time computation on a mobile robotic platform. \subsection{Automatic Label Generation} Supervised learning approaches demand intense effort to manually collect and annotate data for the purpose of training and testing. Automatic label generation has been attempted to reduce the required labeling effort. 
For example, Leigh \emph{et al}\onedot \cite{Leigh15ICRA} generate positive training examples by positioning a 2D LiDAR in an open environment populated with people and negative examples by moving a LiDAR in an environment devoid of people. This method limits the training examples to a few simple scenarios and cannot be adapted for dynamic data collection at deployment time. Aguirre \emph{et al}\onedot \cite{Aguirre2019IJCIS} use Mask R-CNN~\cite{He17ICCV} with a calibrated RGB-D camera to automatically label 2D LiDAR scans for person detection. However, no evaluation of the labels' accuracy is performed. Furthermore, only very simple LiDAR-based person detectors are used and both the training and the evaluation are conducted using (potentially incorrect) generated labels, with no evaluation using real annotated data. The method also relies on depth measurements from an RGB-D camera, imposing an additional sensor requirement. Automatic label generation has also been attempted for 3D LiDARs. Piewak \emph{et al}\onedot \cite{Piewak18ECCV} and Wang \emph{et al}\onedot \cite{Wang19RAL} train point-cloud segmentation networks, using segmentation results on matching pixels as supervision. In this work, we propose to automatically generate training data for a 2D LiDAR-based person detector using a calibrated RGB camera and image-based person detections. Compared to~\cite{Leigh15ICRA}, our pseudo-labels are dynamically generated and do not rely on specific conditions of the environment. Unlike~\cite{Aguirre2019IJCIS}, our method operates on normal RGB cameras and does not require expensive segmentation on images or pixel-precise calibration between sensors. We conduct extensive experiments to analyze the quality of pseudo-labels and empirically prove their value for training state-of-the-art person detectors. \subsection{Learning with Noisy Labels} Training neural networks with imperfect datasets is becoming an increasingly relevant topic. 
Proposed methods range from robust loss functions~\cite{Ghosh17AAAI,Zhang18NIPS,Wang2019ICCV,Menon2020ICLR} and noise modeling~\cite{xiao2015learning,Goldberger17ICLR,Han18NIPSMasking} to sample selection~\cite{Pawan10NIPS,Jiang18ICML,Han18NIPS} and sample re-weighting~\cite{Ren18ICML,Shu19NIPS}. Regularization techniques~\cite{Srivastava14JMLR,Jindal16ICDM,Menon2020ICLR} have also been shown to reduce the effect of label noise. For a more thorough study on this topic, we refer readers to surveys~\cite{Zhang16AIR,Algan19arXiv,Song20arXiv}. In our work, we experiment with the~\textit{partially Huberized cross-entropy loss}~\cite{Menon2020ICLR} and the~\textit{mixup} regularization~\cite{Zhang2018ICLR} to deal with the inherent noise of pseudo-labels. These two methods were picked for their applicability\textemdash they do not rely on specific noise properties, nor do they impose additional constraints (\textit{e.g.} a small set of clean training data). \section{Generating Pseudo-Labels} \label{sec:generating_pseudo_labels} We use a calibrated camera to generate pseudo-labels for training a 2D LiDAR-based person detector. These pseudo-labels include the location of persons in the LiDAR coordinate frame $\{(p_{x, i}, p_{y,i})\}_i$, and a set of scan points that belong to the background of the scene. We first use an object detector (\textit{e.g.}~Faster~R-CNN~\cite{Ren15NIPS}) to obtain person bounding boxes. From all bounding boxes, a subset is selected using the following constraints: \begin{itemize} \item \textit{classification score} greater than a threshold $T_{c}$, \item \textit{aspect ratio}, the ratio between width and height, smaller than $T_{AR}$, \item \textit{overlap ratio} with any other bounding box smaller than $T_o$. The overlap ratio is defined as the intersection area divided by the area of the box.
\end{itemize} The goal is to select boxes from which the location of persons can be confidently extracted, rather than locating \textit{all} persons in the scene (\textit{i.e.}~we favor precision over recall in this step). Given a selected bounding box, we estimate the center location $(p_{x,i}, p_{y,i})$ of the person in the LiDAR coordinate frame. Utilizing the known camera calibration, we project the LiDAR points onto the image and extract points that fall within the bottom half of the bounding box (since the LiDAR is mounted at the height of the lower body). These points correspond either to a person or to the background (see~Fig.\,\ref{fig:qualitative_results}). To localize the person, we first run $k$-means clustering in the range space with $k=2$, which groups points into a close and a far cluster. We take the average 2D location of the points in the close cluster as the initial estimate, and iteratively refine it using a mean shift procedure with a circular kernel of 0.5~$m$ radius. The mean shift result is used as the estimated person location. The proposed method assumes that the person belongs to the foreground of the scene and is the dominant object in the cropped LiDAR scan, which is typically satisfied by the content of detection bounding boxes. LiDAR points that do not project to any bounding box (including discarded boxes) are taken as negative training samples. For increased robustness, we enlarge the width of each bounding box by 10\% when generating the negative samples. \section{Person Detection with Pseudo-Labels} \subsection{DROW3 and DR-SPAAM Detector} We experiment with two state-of-the-art person detectors, DROW3~\cite{Beyer18RAL} and its successor DR-SPAAM~\cite{Jia20arXiv}. The DROW3 detector takes as input a 2D LiDAR scan, expressed as a one-dimensional vector of range measurements. For each point, it outputs a classification label and, for the positive points, a location offset to the person center.
This is accomplished by pooling a small window of neighboring points, which are processed by a 1D convolutional neural network. DR-SPAAM improves upon this approach by introducing a spatial attention and auto-regressive model that integrates temporal information to improve detection performance. Thus, it requires the input to be a sequence of scans. We refer readers to~\cite{Beyer18RAL,Jia20arXiv} for more details. We use the generated pseudo-labels to train DROW3 and DR-SPAAM. For supervising the classification branch, we use points less than 0.4~$m$ away from an estimated person center $(p_{x}, p_{y})$ as the positive samples, and the marked out background points as the negative samples. For supervising the regression branch, we use points less than 0.8~$m$ away from an estimated person center. To increase robustness, we discard pseudo-labels with less than five surrounding positive points. Points that are neither close to a person nor marked as a background are ignored during training. \subsection{Robust Training} In the default setup, the classification branch of DROW3 and DR-SPAAM is supervised with the cross-entropy loss, which is prone to label noise in the training samples~\cite{Ghosh17AAAI,Wang2019ICCV}. To limit the influence of wrongly generated pseudo-labels, we resort to robust training techniques. We experiment with the~\textit{partially Huberized cross-entropy loss}~\cite{Menon2020ICLR}, a more robust loss function, and the~\textit{mixup} regularization~\cite{Zhang2018ICLR}. \subsubsection{Partially Huberized cross-entropy loss} The softmax cross-entropy loss is composed of a base loss (cross-entropy) with a sigmoid link. Menon \emph{et al}\onedot \cite{Menon2020ICLR} introduce a composite loss-based gradient clipping, linearizing the base loss beyond a threshold while leaving the sigmoid link untouched. 
The overall loss function takes the form: \begin{equation} l = \begin{cases} - \tau \cdot p + \log(\tau) + 1, & \text{if}\ p\leq\frac{1}{\tau} \\ - \log(p), & \text{else,} \end{cases} \end{equation} which saturates asymptotically for $p\leq\frac{1}{\tau}$, and was proven to be robust against label noise. The parameter $\tau$ should be set in proportion to the amount of noise in the training samples. In our experiments, we use $\tau=5$. \subsubsection{Mixup regularization} Given two training samples $(x_i,y_i)$ and $(x_j,y_j)$, Zhang \emph{et al}\onedot \cite{Zhang2018ICLR} proposed to construct augmented training samples: \begin{align*} \tilde{x}&=\lambda x_i + (1-\lambda)x_j \\ \tilde{y}&=\lambda y_i + (1-\lambda)y_j, \end{align*} with $\lambda\,{\sim}\,$Beta$(\alpha,\alpha)$ and the parameter $\alpha \in (0,\infty)$ controlling the augmentation strength. Unlike conventional data augmentation, which operates on a single sample (e.g. randomly flipping an image), \textit{mixup} generates virtual samples across training data. It encourages linear behavior in-between training examples, and was shown to improve the generalization of a network and increase its robustness against label noise. To adapt \textit{mixup} for training a detection network, which includes both classification and regression, we use the following multi-task loss \begin{equation} l_{total} = l_{reg} + (1 - w) \cdot l_{cls} + w \cdot l_{mixup}, \end{equation} where $l_{reg}$ and $l_{cls}$ are the regression and classification losses without \textit{mixup} regularization, $l_{mixup}$ is the classification loss with \textit{mixup}, and $w$ is a weighting factor. To avoid additional memory overhead, we split the cost into \begin{align*} l_1 &= l_{reg} + (1 - w) \cdot l_{cls}\\ l_2 &= w \cdot l_{mixup} \end{align*} and perform gradient descent over $l_1$ and $l_2$ sequentially at each training iteration. In our experiments, we use $w=0.7$ and $\alpha=0.2$.
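As an illustration (not the detectors' actual training code), the two robust-training ingredients can be written compactly, assuming the classifier already outputs probabilities $p$ through the sigmoid link:

```python
import numpy as np

def partially_huberized_ce(p, tau=5.0):
    # Partially Huberized cross-entropy: the base loss -log(p) is
    # linearized for p <= 1/tau, while the sigmoid link is untouched.
    p = np.asarray(p, dtype=float)
    return np.where(p <= 1.0 / tau, -tau * p + np.log(tau) + 1.0, -np.log(p))

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # mixup: a virtual sample as a convex combination of two real ones,
    # with the mixing weight drawn from Beta(alpha, alpha).
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# The two branches agree at p = 1/tau, so the loss is continuous there:
# -tau * (1/tau) + log(tau) + 1 = log(tau) = -log(1/tau).
tau = 5.0
assert np.isclose(float(partially_huberized_ce(1.0 / tau, tau)), np.log(tau))
```

Note that for $p\leq 1/\tau$ the gradient magnitude is capped at $\tau$, which is what limits the influence of confidently mislabeled pseudo-labels.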
\begin{figure*} \newlength{\imw} \setlength{\imw}{0.8cm} \newlength{\imh} \setlength{\imh}{1.8cm} \centering \newcommand{\impairall}[1]{% \begin{subfigure}{\imw}% \includegraphics[width=\imw,height=\imh]{pics/pl_example/#1_im.pdf}% \end{subfigure}% \begin{subfigure}{\imh}% \includegraphics[width=\imh,height=\imh]{pics/pl_example/#1_pt.pdf}% \end{subfigure}% }% \newcommand{\impair}[1]{\impairall{#1}\hfill} \newcommand{\impairend}[1]{\impairall{#1}} \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000001_12}% \impair{success/000004_2}% \impair{success/000039_1}% \impair{success/000126_0}% \impair{success/000248_0}% \impairend{success/000314_2}% \vspace{2mm}% \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000007_14}% \impair{success/000067_0}% \impair{success/000083_3}% \impair{success/000457_2}% \impair{success/000534_7}% \impairend{success/000656_0}% \vspace{2mm}% \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000384_1}% \impair{success/000437_2}% \impair{success/000487_2}% \impair{success/000593_1}% \impair{success/000650_3}% \impairend{success/000921_2}% \vspace{2mm}% \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000739_0}% \impair{success/000781_4}% \impair{success/000897_1}% \impair{success/000947_3}% \impair{success/000977_4}% \impairend{success/001020_0}% \vspace{2mm}% \rotatebox[origin=c]{90}{Failure}\hspace{2mm} \impair{failure/000431_0}% \impair{failure/000458_12}% \impair{failure/000250_0}% \impair{failure/000629_0}% \impair{failure/000677_11}% \impairend{failure/001373_1}% \vspace{2mm}% \rotatebox[origin=c]{90}{Failure}\hspace{2mm} \impair{failure/000968_2}% \impair{failure/000813_5}% \impair{failure/000759_3}% \impair{failure/000964_0}% \impair{failure/000164_2}% \impairend{failure/000615_5}% \newcommand\drawcross{% \begin{tikzpicture}% \draw[color=new_blue_dark,line width=1.5pt]% (-3pt,0) -- (3pt,0)% (0,-3pt) -- (0,3pt);% \end{tikzpicture}% } \caption{Person detections and the top-down view of the
matching pseudo-labels (\raisebox{-0.5mm}{\protect\drawcross{}}) with surrounding LiDAR points (within a 0.5 $m$ radius). The LiDAR points are overlaid on the detection images, where the color encodes the measured distance, with red being closest and white being farthest. The bottom two rows demonstrate some failure cases: heavy occlusion (columns 1-2); background distraction (columns 3-4); sparse LiDAR points at far distance (column 5); and faulty calibration or synchronization between sensors (column 6).} \label{fig:qualitative_results} \end{figure*} \section{EVALUATION} \label{sec:evaluation} We first conduct experiments to validate the quality of the pseudo-labels and then demonstrate their effectiveness for training person detectors. We present two case studies: training detectors using pseudo-labels with pre-collected data, as well as fine-tuning detectors using dynamically generated pseudo-labels. \subsection{The JackRabbot Dataset (JRDB)} Our experiments are conducted on the JackRabbot dataset~\cite{Martin2019arXiv}. The dataset was collected using a mobile robot, the \textit{JackRabbot}, in both indoor and outdoor environments. It includes point clouds from 3D LiDARs, annotated with 3D bounding boxes for persons, and RGB camera images, annotated with 2D bounding boxes. Although not directly annotated, the JackRabbot dataset contains scans from two SICK 2D~LiDARs. These LiDARs were mounted at the height of the lower legs, facing the front and the back of the robot, respectively. The dataset features a full 360\degree~scan with 1091 points, generated by combining scans from the two LiDARs, which was used as the LiDAR input in our experiments. For the evaluation of our pseudo-labels and trained detectors, we convert the annotated 3D bounding boxes into 2D LiDAR annotations by using their center as the ground truth location of persons in the scene.
Since many 3D bounding boxes are occluded in the view of the 2D LiDARs, we only keep annotations that have at least five points within a 0.5~$m$ radius. The dataset includes person detections from a Faster R-CNN detector~\cite{Ren15NIPS}, which we use as the input to our pseudo-label generation approach. However, using the included RGB images, other detectors can be applied, allowing us to potentially further improve the label quality. For generating pseudo-labels, we use $T_{c}=0.75$, $T_{AR}=0.45$, and $T_o=0.4$. The JRDB does not provide a train-validation split and the test set annotations are not publicly available. Hence, we employ a custom train-test split. We split the 27 sequences of the original train set into 17 sequences for training and 10 sequences for testing. Our train-test split is balanced with respect to person detection difficulty (assessed using a pre-trained DROW3 detector) and scene properties (indoor \textit{vs.} outdoor). We refer readers to the released code for further details. \subsection{Pseudo-Label Statistics} \label{ssec:pseudo_label_quality} To evaluate the quality of our pseudo-labels, we calculate the true positive and the true negative rate (TPR, TNR) of the classification targets generated using pseudo-labels by comparing them against the targets generated using ground truth annotations. More than 90 percent of the training samples are labeled correctly (see~Table\,\ref{table:pl_pr_tnr}), indicating the validity of the pseudo-labels. Qualitative results of pseudo-labels are shown in~Fig.\,\ref{fig:qualitative_results}. In most cases, the locations of persons are successfully estimated in both indoor and outdoor environments, with people at different distances or in different poses. Common failure cases are caused by occluding objects, persons merging with the background, sparse LiDAR measurements at large distances, or a faulty calibration between sensors.
These cases result in noisy labels, which we deal with by using a robust training loss. \begin{table}[b] \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{0.8\columnwidth}{ l ccc YY } \toprule Bounding Boxes &&&& TPR & TNR \\ \midrule Faster R-CNN Detections &&&& 91.6 & 99.4\\ 2D Annotations &&&& 90.6 & 99.1\\ \bottomrule \end{tabularx} \caption{Accuracy of pseudo-labels on the training split.} \label{table:pl_pr_tnr} \end{table} To ablate the effect of uncertainty in the image-based person detector, we also generate pseudo-labels using the annotated 2D bounding boxes. The TPR and TNR of the pseudo-labels generated using detections are higher than those generated using annotations, showing that our proposed method is robust against uncertainty in bounding boxes. Due to the sparsity of LiDAR points, it is difficult to generate pseudo-labels for far-away persons contained in 2D annotations. When using a detector, these persons are often missed or detected with low confidence, thus not leading to a (potentially false) pseudo-label. We also analyze the distance distribution of our pseudo-labels. As Fig.\,\ref{fig:pl_dist} shows, it is similar to that of the annotations. Pseudo-labels do not introduce any distance bias which may wrongly emphasize samples within a certain range. Due to the high mounting of the cameras in the JackRabbot dataset, the legs of persons close to the robot are outside the camera's field of view. Thus, no pseudo-label can be generated within a 1~$m$ distance. \begin{figure} \includegraphics[width=\linewidth]{pics/tikz/hist.tikz}% \caption{ Distance distribution of pseudo-labels and annotations. Pseudo-labels have a similar distance distribution to that of the annotations and do not introduce a distance bias. }% \label{fig:pl_dist}% \end{figure} \subsection{Training with Pseudo-Labels} \label{sec:offline_training} We examine the performance of detectors trained using either pseudo-labels or ground truth annotations.
We train the DROW3 detector for 40 epochs with a batch size of 8 scans, and the DR-SPAAM detector for 20 epochs with a batch size of 4 scans. We use the Adam optimizer~\cite{Kingma15ICLR} and a learning rate of $10^{-3}$. Starting from epoch 10 for DROW3 and epoch 5 for DR-SPAAM, we exponentially decay the learning rate until it reaches $10^{-6}$ at the end of training. Implementation and hyper-parameters are taken from~\cite{Jia20arXiv} for both detectors, with the only exception being that we enlarge the spatial window size for DR-SPAAM to 17 points (from the original 11 points). This enlarged window matches the effective opening angles, since the scans in JRDB have a higher angular resolution. Due to GPU memory constraints, we randomly crop the scans to 1000 points for DR-SPAAM at training time. We report the average precision (AP$_{\text{0.3}}$ and AP$_{\text{0.5}}$) on the test split (see~Table\,\ref{table:result_train}). A detection is considered positive if there is a ground truth within 0.3~$m$ or 0.5~$m$, respectively, and a ground truth can only be matched with a single detection. As baselines, we evaluate the released DROW3 and DR-SPAAM models from~\cite{Jia20arXiv}, which were trained on the DROW dataset. These pre-trained networks have a significantly lower AP compared to the ones trained on JRDB, confirming our initial speculation that networks trained on a single dataset may not generalize well to new environments or different LiDAR models. The pre-trained DR-SPAAM, despite having a higher score on the DROW dataset, performs worse than DROW3, showing the effect of overfitting. Networks trained using pseudo-labels, benefiting from a smaller domain gap to the test data, outperform the pre-trained networks, proving the validity of our approach. Starting from a pre-trained model improves the detector performance for pseudo-labels, but gives no clear improvement for 3D annotations.
The performance gap between pseudo-labels and annotations could be caused by two factors: label noise and fewer training samples. To study the effect of label noise, we additionally train networks with pseudo-labels, while removing falsely labeled points and correcting the regression target using ground truth annotations (see~Table\,\ref{table:result_train_pl}). Both false positives and false negatives reduce the detector performance, with the latter having a strong influence on DROW3. Correcting the regression target improves AP$_\text{0.3}$ for both networks, especially for the more powerful DR-SPAAM. Cleaning pseudo-labels increases detector performance significantly, showing that label noise is the dominant cause of the performance gap between pseudo-labels and annotations. Detectors trained using clean pseudo-labels trail the ones trained using annotations by around 2 percent AP, due to the reduced number of training samples. Fine-tuning from a pre-trained model further narrows this performance gap.
\begin{table} \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{\linewidth}{l cc YYccYY } \toprule &&& \multicolumn{2}{c}{DROW3} &&& \multicolumn{2}{c}{DR-SPAAM} \\ \cmidrule{4-5} \cmidrule{8-9} Supervision &&& AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ & & & AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ \\ \midrule Pre-trained from \cite{Jia20arXiv} &&& 65.5 & 70.8 &&& 62.5 & 68.3 \\ Pseudo-labels &&& 68.5 & 77.3 &&& 66.9 & 75.5 \\ \quad + fine-tuning from \cite{Jia20arXiv} &&& 69.0 & 77.8 &&& 69.2 & 76.7 \\ 3D Annotation &&& 76.2 & \textbf{82.9} &&& 78.5 & \textbf{84.9} \\ \quad + fine-tuning from \cite{Jia20arXiv} &&& \textbf{76.4} & 82.5 &&& \textbf{78.6} & 83.8 \\ \bottomrule \end{tabularx} \caption{Performance of DROW3 and DR-SPAAM trained using different supervision.} \label{table:result_train} \end{table} \begin{table} \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{\linewidth}{l c YYccYY } \toprule && \multicolumn{2}{c}{DROW3} && \multicolumn{2}{c}{DR-SPAAM} \\ \cmidrule{3-4} \cmidrule{6-7} Supervision && AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ && AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ \\ \midrule Pseudo-labels && 68.5 & 77.3 && 66.9 & 75.5 \\ $\ldots$ (remove FP) && 71.6 & 79.0 && 71.1 & 77.8 \\ $\ldots$ (remove FN) && 68.4 & 77.8 && 70.3 & 78.6 \\ $\ldots$ (remove FP \& FN) && 73.1 & 80.8 && 72.2 & 78.8 \\ $\ldots$ (remove FP \& FN, correct reg.) 
&& 74.6 & 80.5 && \textbf{76.5} & \textbf{81.5} \\ \quad + fine-tuning from \cite{Jia20arXiv} && \textbf{74.7} & \textbf{81.2} && 76.2 & \textbf{81.5} \\ \bottomrule \end{tabularx} \caption{Performance of DROW3 and DR-SPAAM trained with different variants of pseudo-labels.} \label{table:result_train_pl} \end{table} \begin{table} \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{\linewidth}{l cc YYccYY } \toprule &&& \multicolumn{2}{c}{DROW3} &&& \multicolumn{2}{c}{DR-SPAAM} \\ \cmidrule{4-5} \cmidrule{8-9} Training scheme &&& AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ & & & AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ \\ \midrule Cross-entropy loss &&& 68.5 & 77.3 &&& 66.9 & 75.5 \\ \quad + mixup regularization &&& 69.5 & 78.0 &&& 65.8 & 74.0 \\[0.1cm] Partially Huberized cross-entropy loss &&& \textbf{71.4} & \textbf{79.0} &&& 69.4 & 76.4 \\ \quad + mixup regularization &&& 71.1 & 78.5 &&& \textbf{70.0} & \textbf{78.3} \\ \bottomrule \end{tabularx} \caption{Performance of DROW3/DR-SPAAM trained with pseudo-labels and different robust training methods.} \label{table:result_robust} \end{table} To mitigate the problem caused by labeling noise, we experiment with two robust training methods: the \textit{partially Huberized cross-entropy loss}, and the \textit{mixup} regularization (see~Table\,\ref{table:result_robust}). Both methods improve the network performance, with the exception of applying \textit{mixup} alone on DR-SPAAM (potentially due to suboptimal hyper-parameters). Combined with robust training methods, detectors trained using pseudo-labels outperform the pre-trained detectors by a large margin, and reach a performance close to training using annotations, without using any labeled data. Pseudo-labels provide an effective way to adjust person detectors to new environments or LiDAR models. \subsection{Online Fine-Tuning with Pseudo-Labels} In practice, it is desirable to use a detector that can dynamically fine-tune itself in an online fashion during deployment. 
To study the network performance under such fine-tuning, we take a DROW3 detector, pre-trained on the DROW dataset~\cite{Jia20arXiv}, and fine-tune it on the JRDB train split for one epoch, with the partially Huberized cross-entropy loss. We use a learning rate of $5\times10^{-5}$ and a batch size of 8 scans. In the first set of experiments, we shuffle data only within each sequence (the whole train split is composed of 17 sequences) and pass the in-sequence shuffled data to the network. This mimics the situation where a mobile robot enters a new environment, curates a small amount of data, and fine-tunes itself. In the second set of experiments, we shuffle data within the whole training split, giving more diversity in each batch. The detector performance at different stages of fine-tuning is shown in~Fig.\,\ref{fig:online_training}. When the data is shuffled within the whole training split, the network performance increases significantly, from the pre-trained 70.8 percent AP$_{\text{0.5}}$ to more than 74 percent, using fewer than one hundred updates. This fast improvement implies that, even in applications with insufficient computation for a full training, it is still possible to adapt the detector and improve its performance by running a small number of updates using pseudo-labels. However, having curated training samples with enough diversity is a key prerequisite, as fluctuating performance is observed when the data is shuffled only within each sequence. Although the detector benefits from fine-tuning most of the time (having better performance than the pre-trained detector), there are adversarial samples that cause dramatic performance reductions. The same fluctuating behavior exists for fine-tuning using annotations, showing that it is an innate problem of network training, rather than one caused by pseudo-labels.
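The two shuffling schemes compared above can be sketched as follows; the function name and the index-based data representation are our own, and the actual data pipeline is more involved:

```python
import random

def make_batches(sequences, batch_size, within_sequence=True, seed=0):
    """Batch construction for the two fine-tuning settings: shuffling
    only within each recorded sequence (limited batch diversity) vs.
    shuffling over the whole train split. `sequences` is a list of
    lists of scan indices; returns a list of batches of indices."""
    rng = random.Random(seed)
    if within_sequence:
        pool = []
        for seq in sequences:
            seq = seq[:]        # sequences stay in order; shuffle inside each
            rng.shuffle(seq)
            pool.extend(seq)
    else:
        pool = [s for seq in sequences for s in seq]
        rng.shuffle(pool)       # full-split shuffle: more diverse batches
    return [pool[i:i + batch_size] for i in range(0, len(pool), batch_size)]
```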
Curriculum learning methods~\cite{Bengio09ICML,Jiang18ICML} may help mitigate this problem and relax the requirements on data collection; exploring them is an interesting direction for future research. \begin{figure} \newcommand\drawdashedline{% \begin{tikzpicture} \draw[gray, dashed, line width=0.75pt]% (-7.5pt,0) -- (7.5pt,0);% \end{tikzpicture}% } \centering \includegraphics[width=\linewidth]{pics/tikz/online_training.tikz} \caption{ Performance at different steps of fine-tuning a pre-trained DROW3 detector with the partially Huberized cross-entropy loss. Data is shuffled either over the whole training set or within each sequence, mimicking different amounts of curated data. With diverse batches, the detector performance improves significantly even with only a small number of updates. Fine-tuning longer using the default training schedule yields 79.2 and 82.3 percent AP$_\text{0.5}$ for pseudo-labels and annotations, respectively. } \label{fig:online_training} \end{figure} \section{CONCLUSION} In this paper, we proposed a method to automatically generate pseudo-labels for training 2D LiDAR-based person detectors, using bounding boxes generated by an image-based person detector with a calibrated camera. We analyzed the quality of the pseudo-labels by comparing them against ground truth annotations and demonstrated their validity. Experiments were conducted to train or fine-tune DROW3 and DR-SPAAM detectors using pseudo-labels, and these self-supervised detectors outperformed detectors trained on annotations from a different dataset. Even stronger detectors were obtained by combining pseudo-labels with robust training techniques. Our method provides an effective way to bridge the domain gap between data encountered during training and during deployment. With our method, a mobile robot equipped with a 2D LiDAR-based person detector can fine-tune the detector during deployment, improving its performance with no additional labeling effort.
With the released code, we expect our method will be useful for many robotic applications. \textbf{Acknowledgements:} We thank Hamid Rezatofighi, JunYoung Gwak, and Mihir Patel for their help with the JackRabbot dataset. This project was funded by the EU H2020 project ``CROWDBOT'' (779942). Most experiments were performed on the RWTH Aachen University CLAIX 2018 GPU Cluster (rwth0485). \section{INTRODUCTION} \label{sec:introduction} 2D LiDARs provide accurate range measurements with a large field of view, often greater than 200 degrees, at an affordable price, and are a popular sensor choice for many robotic tasks, including person detection. While early approaches for detecting persons in 2D range data focused on heuristics with hand-crafted features \cite{Leigh15ICRA,Arras07ICRA}, recent studies used convolutional neural networks and further improved the detection results \cite{Beyer18RAL,Jia20arXiv}. Deep learning has become an integral part of modern detection algorithms, whether from 2D range data \cite{Beyer18RAL,Jia20arXiv} or images \cite{Tan20CVPR,He17ICCV,Redmon16CVPR,Liu16ECCV,Ren15NIPS}. The success of these detectors hinges upon the availability of large and high-quality datasets. Over the past years, significant effort has gone into labeling images with bounding boxes or segmentation masks, whereas relatively little attention has been given to annotating 2D range data (see~Fig.\,\ref{fig:datasets}). The few available datasets for 2D LiDARs do not possess enough diversity in terms of the surrounding environments and sensor models. Networks trained solely on these data may not generalize well at deployment, where both the environment and the sensor specification are likely to differ from those encountered during training. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{pics/tikz/teaser.tikz} \caption{ Only a few 2D LiDAR datasets with person annotations exist, while a vast amount of image-based annotations are available. We utilize the image-based annotations to train 2D LiDAR-based person detectors by generating pseudo-labels from image-based detections. $^*$: Numbers are estimated based on person-per-image statistics of the training set.
} \label{fig:datasets} \end{figure} To overcome the limitation imposed by insufficient training data, we propose a method to automatically generate labels for training LiDAR-based person detectors using the output of an image-based detector on a calibrated camera. Given a person bounding box in an image, our method takes the 2D LiDAR points that fall within the box frustum and uses a clustering algorithm to locate the person in the LiDAR coordinate frame. The estimated locations of persons in the scene, as well as a set of negative points, are used as pseudo-labels for training a detector. This allows a robot to train or fine-tune a laser-based detector in a self-supervised fashion. We empirically demonstrate the validity of the pseudo-labels and show that they can be used for both training and fine-tuning a 2D LiDAR-based person detector. Additionally, we experiment with robust training techniques to further improve the detector performance. Our method is an effective way to boost the performance of a person detector during deployment without any additional labeling effort, and has great potential for many robotic applications. In summary, the main contributions of this work are: \begin{itemize} \item We propose a method to automatically generate pseudo-labels for training 2D LiDAR-based person detectors, leveraging the output of an image-based detector and known extrinsic camera calibration. \item We demonstrate that the generated pseudo-labels can be used to train or fine-tune a person detector and experiment with robust training techniques to further improve its performance. \item We release our code, implemented in PyTorch with an easy-to-use detector ROS node, for robotic applications.\footnote{\url{https://github.com/VisualComputingInstitute/2D_lidar_person_detection}} \end{itemize} \section{RELATED WORK} \label{sec:related_work} \subsection{LiDAR-based Person Detection} Person detection from 2D range data has a long-standing history in the robotics community.
Early approaches~\cite{Fod02ICRA,Scheutz04IROS,Schulz03IJRR} focus on tracking moving blobs in sequential LiDAR scans. These blobs are detected using manually engineered heuristics in a non-learning fashion. Later developments~\cite{Leigh15ICRA,Arras07ICRA,Pantofaru10ROS} improve the detection stage by using supervised learning techniques. In these approaches, a clustering algorithm is first applied over scan points, using cues like proximity~\cite{Leigh15ICRA} or jump distance~\cite{Arras07ICRA}. A set of hand-crafted features is then extracted for each cluster, and these features are used to train a simple classifier~(\textit{e.g.}~AdaBoost). In the case of~\cite{Leigh15ICRA,Pantofaru10ROS}, additional heuristics from tracking are employed to post-process the detected clusters. The most recent developments~\cite{Beyer16RAL,Beyer18RAL,Jia20arXiv} use deep learning techniques to detect persons directly from data, without manually engineered heuristics or features. The DROW detector~\cite{Beyer16RAL} is the first deep learning-based walking-aid detector working on 2D range data and was later extended to additionally detect persons~\cite{Beyer18RAL}. The current state-of-the-art method is the DR-SPAAM detector~\cite{Jia20arXiv}, which leverages a temporal aggregation paradigm to incorporate multiple scans into the detection process, alleviating the problems associated with the low information content of a single LiDAR scan while retaining real-time computation on a mobile robotic platform. \subsection{Automatic Label Generation} Supervised learning approaches demand intense effort to manually collect and annotate data for the purpose of training and testing. Automatic label generation has been attempted to reduce the required labeling effort.
For example, Leigh \emph{et al}\onedot \cite{Leigh15ICRA} generate positive training examples by positioning a 2D LiDAR in an open environment populated with people and negative examples by moving a LiDAR in an environment devoid of people. This method limits the training examples to a few simple scenarios and cannot be adapted for dynamic data collection at deployment time. Aguirre \emph{et al}\onedot \cite{Aguirre2019IJCIS} use Mask R-CNN~\cite{He17ICCV} with a calibrated RGB-D camera to automatically label 2D LiDAR scans for person detection. However, no evaluation of the labels' accuracy is performed. Furthermore, only very simple LiDAR-based person detectors are used and both the training and the evaluation are conducted using (potentially incorrect) generated labels, with no evaluation using real annotated data. The method also relies on depth measurements from an RGB-D camera, imposing an additional sensor requirement. Automatic label generation has also been attempted for 3D LiDARs. Piewak \emph{et al}\onedot \cite{Piewak18ECCV} and Wang \emph{et al}\onedot \cite{Wang19RAL} train point-cloud segmentation networks, using segmentation results on matching pixels as supervision. In this work, we propose to automatically generate training data for a 2D LiDAR-based person detector using a calibrated RGB camera and image-based person detections. Compared to~\cite{Leigh15ICRA}, our pseudo-labels are dynamically generated and do not rely on specific conditions of the environment. Unlike~\cite{Aguirre2019IJCIS}, our method operates on normal RGB cameras and does not require expensive segmentation on images or pixel-precise calibration between sensors. We conduct extensive experiments to analyze the quality of pseudo-labels and empirically prove their value for training state-of-the-art person detectors. \subsection{Learning with Noisy Labels} Training neural networks with imperfect datasets is becoming an increasingly relevant topic. 
Proposed methods range from robust loss functions~\cite{Ghosh17AAAI,Zhang18NIPS,Wang2019ICCV,Menon2020ICLR} and noise modeling~\cite{xiao2015learning,Goldberger17ICLR,Han18NIPSMasking} to sample selection~\cite{Pawan10NIPS,Jiang18ICML,Han18NIPS} and sample re-weighting~\cite{Ren18ICML,Shu19NIPS}. Regularization techniques~\cite{Srivastava14JMLR,Jindal16ICDM,Menon2020ICLR} have also been shown to reduce the effect of label noise. For a more thorough treatment of this topic, we refer readers to the surveys~\cite{Zhang16AIR,Algan19arXiv,Song20arXiv}. In our work, we experiment with the~\textit{partially Huberized cross-entropy loss}~\cite{Menon2020ICLR} and the~\textit{mixup} regularization~\cite{Zhang2018ICLR} to deal with the inherent noise of pseudo-labels. These two methods were picked for their broad applicability\textemdash they do not rely on specific noise properties, nor do they impose additional requirements (\textit{e.g.}~a small set of clean training data). \section{Generating Pseudo-Labels} \label{sec:generating_pseudo_labels} We use a calibrated camera to generate pseudo-labels for training a 2D LiDAR-based person detector. These pseudo-labels include the locations of persons in the LiDAR coordinate frame $\{(p_{x, i}, p_{y,i})\}_i$, and a set of scan points that belong to the background of the scene. We first use an object detector (\textit{e.g.}~Faster~R-CNN~\cite{Ren15NIPS}) to obtain person bounding boxes. From all bounding boxes, a subset is selected using the following constraints: \begin{itemize} \item \textit{classification score} greater than a threshold $T_{c}$, \item \textit{aspect ratio}, the ratio between width and height, smaller than $T_{AR}$, \item \textit{overlap ratio} with any other bounding box smaller than $T_o$. The overlap ratio is defined as the intersection area divided by the area of the box.
\end{itemize} The goal is to select boxes from which the location of persons can be confidently extracted, rather than locating \textit{all} persons in the scene (\textit{i.e.}~we favor precision over recall in this step). Given a selected bounding box, we estimate the center location $(p_{x,i}, p_{y,i})$ of the person in the LiDAR coordinate frame. Utilizing the known camera calibration, we project the LiDAR points onto the image and extract points that fall within the bottom half of the bounding box (since the LiDAR is mounted at the height of the lower body). These points correspond either to a person or to the background (see~Fig.\,\ref{fig:qualitative_results}). To localize the person, we first run $k$-means clustering in the range space with $k=2$, which groups points into a close and a far cluster. We take the average 2D location of the points in the close cluster as the initial estimate, and iteratively refine this estimate using a mean shift procedure with a circular kernel of 0.5~$m$ radius. The mean shift result is used as the estimated person location. The proposed method assumes that the person belongs to the foreground of the scene and is the dominant object in the cropped LiDAR scan, which is typically satisfied by the content of detection bounding boxes. LiDAR points that do not project into any bounding box (including discarded boxes) are taken as negative training samples. For increased robustness, we enlarge the width of each bounding box by 10 percent when generating the negative samples. \section{Person Detection with Pseudo-Labels} \subsection{DROW3 and DR-SPAAM Detector} We experiment with two state-of-the-art person detectors, DROW3~\cite{Beyer18RAL} and its successor DR-SPAAM~\cite{Jia20arXiv}. The DROW3 detector takes as input a 2D LiDAR scan, expressed as a one-dimensional vector of range measurements. For each point, it outputs a classification label and, for the positive points, a location offset to the person center.
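The localization procedure (2-means clustering on ranges, then mean shift refinement with a 0.5~$m$ kernel) can be sketched as follows. This is a minimal illustration: the function name, the simple 1-D $k$-means initialization, and the iteration limits are our own choices, not taken from the released code.

```python
import numpy as np

def localize_person(points_xy, ranges, radius=0.5, iters=10):
    """Estimate a person's (x, y) from the LiDAR points inside a box
    frustum: 2-means on the range values separates a close (person)
    from a far (background) cluster; the close cluster's centroid is
    then refined by mean shift with a flat circular kernel."""
    # 1-D 2-means on ranges, initialized with the min/max range.
    c_close, c_far = ranges.min(), ranges.max()
    for _ in range(20):
        assign = np.abs(ranges - c_close) <= np.abs(ranges - c_far)
        new_close = ranges[assign].mean()
        new_far = ranges[~assign].mean() if (~assign).any() else c_far
        if np.isclose(new_close, c_close) and np.isclose(new_far, c_far):
            break
        c_close, c_far = new_close, new_far

    # Initial estimate: centroid of the close cluster.
    center = points_xy[assign].mean(axis=0)

    # Mean shift with a circular kernel of `radius` meters.
    for _ in range(iters):
        in_kernel = np.linalg.norm(points_xy - center, axis=1) < radius
        if not in_kernel.any():
            break
        new_center = points_xy[in_kernel].mean(axis=0)
        if np.allclose(new_center, center):
            break
        center = new_center
    return center
```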
This is accomplished by pooling a small window of neighboring points, which are processed by a 1D convolutional neural network. DR-SPAAM improves upon this approach by introducing a spatial attention and auto-regressive model that integrates temporal information to improve detection performance. Thus, it requires the input to be a sequence of scans. We refer readers to~\cite{Beyer18RAL,Jia20arXiv} for more details. We use the generated pseudo-labels to train DROW3 and DR-SPAAM. For supervising the classification branch, we use points less than 0.4~$m$ away from an estimated person center $(p_{x}, p_{y})$ as the positive samples, and the marked background points as the negative samples. For supervising the regression branch, we use points less than 0.8~$m$ away from an estimated person center. To increase robustness, we discard pseudo-labels with fewer than five surrounding positive points. Points that are neither close to a person nor marked as background are ignored during training. \subsection{Robust Training} In the default setup, the classification branch of DROW3 and DR-SPAAM is supervised with the cross-entropy loss, which is prone to label noise in the training samples~\cite{Ghosh17AAAI,Wang2019ICCV}. To limit the influence of wrongly generated pseudo-labels, we resort to robust training techniques. We experiment with the~\textit{partially Huberized cross-entropy loss}~\cite{Menon2020ICLR}, a more robust loss function, and the~\textit{mixup} regularization~\cite{Zhang2018ICLR}. \subsubsection{Partially Huberized cross-entropy loss} The softmax cross-entropy loss is composed of a base loss (cross-entropy) with a sigmoid link. Menon \emph{et al}\onedot \cite{Menon2020ICLR} introduce composite loss-based gradient clipping, linearizing the base loss beyond a threshold while leaving the sigmoid link untouched.
The overall loss function takes the form: \begin{equation} l = \begin{cases} - \tau \cdot p + \log(\tau) + 1, & \text{if}\ p\leq\frac{1}{\tau} \\ - \log(p), & \text{else,} \end{cases} \end{equation} which remains bounded as $p$ approaches zero, and was proven to be robust against label noise. The parameter $\tau$ should be set in proportion to the amount of noise in the training samples. In our experiments, we use $\tau=5$. \subsubsection{Mixup regularization} Given two training samples $(x_i,y_i)$ and $(x_j,y_j)$, Zhang \emph{et al}\onedot \cite{Zhang2018ICLR} propose to construct augmented training samples: \begin{align*} \tilde{x}&=\lambda x_i + (1-\lambda)x_j \\ \tilde{y}&=\lambda y_i + (1-\lambda)y_j, \end{align*} with $\lambda\,{\sim}\,$Beta$(\alpha,\alpha)$ and the parameter $\alpha \in (0,\infty)$ controlling the augmentation strength. Unlike conventional data augmentation, which operates on a single sample (\textit{e.g.}~randomly flipping an image), \textit{mixup} generates virtual samples across training data. It encourages linear behavior in-between training examples, and was shown to improve the generalization of a network and increase its robustness against label noise. To adapt \textit{mixup} for training a detection network, which includes both classification and regression, we use the following multi-task loss \begin{equation} l_{total} = l_{reg} + (1 - w) \cdot l_{cls} + w \cdot l_{mixup}, \end{equation} where $l_{reg}$ and $l_{cls}$ are the regression and classification losses without \textit{mixup} regularization, $l_{mixup}$ is the classification loss with \textit{mixup}, and $w$ is a weighting factor. To avoid additional memory overhead, we split the cost into \begin{align*} l_1 &= l_{reg} + (1 - w) \cdot l_{cls}\\ l_2 &= w \cdot l_{mixup} \end{align*} and perform gradient descent over $l_1$ and $l_2$ sequentially at each training iteration. In our experiments, we use $w=0.7$ and $\alpha=0.2$.
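Both techniques are short enough to sketch directly; the function names and the scalar-probability interface are our own, with $p$ the predicted probability of the true class and $\tau=5$, $\alpha=0.2$ as in our experiments:

```python
import numpy as np

def partially_huberized_ce(p, tau=5.0):
    """Partially Huberized cross-entropy: linear in p for p <= 1/tau
    (bounded loss and gradient), standard -log(p) otherwise."""
    p = np.asarray(p, dtype=float)
    return np.where(p <= 1.0 / tau,
                    -tau * p + np.log(tau) + 1.0,
                    -np.log(np.clip(p, 1e-12, 1.0)))

def mixup_pair(x_i, y_i, x_j, y_j, alpha=0.2, rng=None):
    """mixup: convex combination of two samples, lam ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j
```

Note that the two branches of the loss agree at $p=1/\tau$, where both evaluate to $\log\tau$, so the loss is continuous.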
\begin{figure*} \newlength{\imw} \setlength{\imw}{0.8cm} \newlength{\imh} \setlength{\imh}{1.8cm} \centering \newcommand{\impairall}[1]{% \begin{subfigure}{\imw}% \includegraphics[width=\imw,height=\imh]{pics/pl_example/#1_im.pdf}% \end{subfigure}% \begin{subfigure}{\imh}% \includegraphics[width=\imh,height=\imh]{pics/pl_example/#1_pt.pdf}% \end{subfigure}% }% \newcommand{\impair}[1]{\impairall{#1}\hfill} \newcommand{\impairend}[1]{\impairall{#1}} \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000001_12}% \impair{success/000004_2}% \impair{success/000039_1}% \impair{success/000126_0}% \impair{success/000248_0}% \impairend{success/000314_2}% \vspace{2mm}% \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000007_14}% \impair{success/000067_0}% \impair{success/000083_3}% \impair{success/000457_2}% \impair{success/000534_7}% \impairend{success/000656_0}% \vspace{2mm}% \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000384_1}% \impair{success/000437_2}% \impair{success/000487_2}% \impair{success/000593_1}% \impair{success/000650_3}% \impairend{success/000921_2}% \vspace{2mm}% \rotatebox[origin=c]{90}{Success}\hspace{2mm} \impair{success/000739_0}% \impair{success/000781_4}% \impair{success/000897_1}% \impair{success/000947_3}% \impair{success/000977_4}% \impairend{success/001020_0}% \vspace{2mm}% \rotatebox[origin=c]{90}{Failure}\hspace{2mm} \impair{failure/000431_0}% \impair{failure/000458_12}% \impair{failure/000250_0}% \impair{failure/000629_0}% \impair{failure/000677_11}% \impairend{failure/001373_1}% \vspace{2mm}% \rotatebox[origin=c]{90}{Failure}\hspace{2mm} \impair{failure/000968_2}% \impair{failure/000813_5}% \impair{failure/000759_3}% \impair{failure/000964_0}% \impair{failure/000164_2}% \impairend{failure/000615_5}% \newcommand\drawcross{% \begin{tikzpicture} \draw[color=new_blue_dark,line width=1.5pt]% (-3pt,0) -- (3pt,0)% (0,-3pt) -- (0,3pt);% \end{tikzpicture}% } \caption{Person detections and the top-down view of the
matching pseudo-labels (\raisebox{-0.5mm}{\protect\drawcross{}}) with surrounding LiDAR points (within a 0.5~$m$ radius). The LiDAR points are overlaid in the detection images, where the color encodes measured distance with red being closest and white being farthest. The bottom two rows demonstrate some failure cases: heavy occlusion (columns 1-2); background distraction (columns 3-4); sparse LiDAR points at far distance (column 5); and faulty calibration or synchronization between sensors (column 6).} \label{fig:qualitative_results} \end{figure*} \section{EVALUATION} \label{sec:evaluation} We first conduct experiments to validate the quality of the pseudo-labels and then demonstrate their effectiveness for training person detectors. We present two case studies: training detectors using pseudo-labels with pre-collected data, as well as fine-tuning detectors using dynamically generated pseudo-labels. \subsection{The JackRabbot Dataset (JRDB)} Our experiments are conducted on the JackRabbot dataset~\cite{Martin2019arXiv}. The dataset was collected using a mobile robot, the \textit{JackRabbot}, in both indoor and outdoor environments. It includes point clouds from 3D LiDARs, annotated with 3D bounding boxes for persons, and RGB camera images, annotated with 2D bounding boxes. Although not directly annotated, the JackRabbot dataset contains scans from two SICK 2D~LiDARs. These LiDARs were mounted at the height of the lower legs, facing the front and the back of the robot, respectively. The dataset features a full 360\degree~scan with 1091 points, generated by combining scans from the two LiDARs, which we used as the LiDAR input in our experiments. For the evaluation of our pseudo-labels and trained detectors, we convert the annotated 3D bounding boxes into 2D LiDAR annotations by using their centers as the ground truth locations of persons in the scene.
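This conversion can be sketched in a few lines; the function and variable names below are our own, and the dataset API is not shown. Following the paper, each 3D box center becomes a ground-truth 2D location, and (as described next) only persons with at least five scan points within a 0.5~$m$ radius are kept:

```python
import math

def boxes_to_2d_annotations(box_centers_3d, scan_points_2d,
                            radius=0.5, min_points=5):
    """Convert 3D bounding boxes to 2D LiDAR ground-truth locations.

    box_centers_3d: list of (x, y, z) box centers in the LiDAR frame.
    scan_points_2d: list of (x, y) points from the 2D LiDAR scan.
    A person is kept only if at least `min_points` scan points lie
    within `radius` metres of the box center projected to the plane.
    """
    annotations = []
    for cx, cy, _cz in box_centers_3d:
        n_close = sum(
            1 for px, py in scan_points_2d
            if math.hypot(px - cx, py - cy) <= radius
        )
        if n_close >= min_points:
            annotations.append((cx, cy))
    return annotations
```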
Since many 3D bounding boxes are occluded in the view of the 2D LiDARs, we only keep annotations that have at least five points within a 0.5~$m$ radius. The dataset includes person detections from a Faster R-CNN detector~\cite{Ren15NIPS}, which we use as the input to our pseudo-label generation approach. However, other detectors can be applied to the included RGB images, potentially further improving the label quality. For generating pseudo-labels, we use $T_{c}=0.75$, $T_{AR}=0.45$, and $T_o=0.4$. The JRDB does not provide a train-validation split and the test set annotations are not publicly available. Hence, we employ a custom train-test split: we split the 27 sequences of the original train set into 17 sequences for training and 10 sequences for testing. Our train-test split is balanced with respect to person detection difficulty (assessed using a pre-trained DROW3 detector) and scene properties (indoor \textit{vs.} outdoor). We refer readers to the released code for further details. \subsection{Pseudo-Label Statistics} \label{ssec:pseudo_label_quality} To evaluate the quality of our pseudo-labels, we calculate the true positive rate and the true negative rate (TPR, TNR) of the classification targets generated using pseudo-labels by comparing them against the targets generated using ground truth annotations. More than 90 percent of the training samples are labeled correctly (see~Table\,\ref{table:pl_pr_tnr}), indicating the validity of the pseudo-labels. Qualitative results of pseudo-labels are shown in~Fig.\,\ref{fig:qualitative_results}. In most cases, the locations of persons are successfully estimated in both indoor and outdoor environments, for people at different distances and in different poses. Common failure cases involve occluding objects, persons merging with the background, sparse LiDAR measurements at large distances, and faulty calibration between sensors.
These cases result in noisy labels, which we deal with by using a robust training loss. \begin{table}[b] \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{0.8\columnwidth}{ l ccc YY } \toprule Bounding Boxes &&&& TPR & TNR \\ \midrule Faster R-CNN Detections &&&& 91.6 & 99.4\\ 2D Annotations &&&& 90.6 & 99.1\\ \bottomrule \end{tabularx} \caption{Accuracy of pseudo-labels on the training split.} \label{table:pl_pr_tnr} \end{table} To ablate the effect of uncertainty in the image-based person detector, we generate pseudo-labels using the annotated 2D bounding boxes. The TPR and TNR of the pseudo-labels generated using detections are higher than those generated using annotations, showing that our proposed method is robust against uncertainty in bounding boxes. Due to the sparsity of LiDAR points, it is difficult to generate pseudo-labels for far away persons contained in 2D annotations. When using a detector, these persons are often missed or detected with low confidence, thus not leading to a (potentially false) pseudo-label. We also analyze the distance distribution of our pseudo-labels. As Fig.\,\ref{fig:pl_dist} shows, it is similar to that of the annotations. Pseudo-labels do not introduce any distance bias which may wrongly emphasize samples within a certain range. Due to the high mounting of cameras in the JackRabbot dataset, the legs of persons close to the robot are outside the camera's field of view. Thus, no pseudo-label can be generated within 1~$m$ distance. \begin{figure} \includegraphics[width=\linewidth]{pics/tikz/hist.tikz}% \caption{ Distance distribution of pseudo-labels and annotations. Pseudo-labels have a similar distance distribution to that of the annotations and do not introduce a distance bias. }% \label{fig:pl_dist}% \end{figure} \subsection{Training with Pseudo-Labels} \label{sec:offline_training} We examine the performance of detectors trained using either pseudo-labels or ground truth annotations. 
We train the DROW3 detector for 40 epochs using a batch size of 8 scans, and the DR-SPAAM detector for 20 epochs using a batch size of 4 scans. We use the Adam optimizer~\cite{Kingma15ICLR} with a learning rate of $10^{-3}$. Starting from 10 epochs for DROW3 and 5 epochs for DR-SPAAM, we exponentially decay the learning rate until it reaches $10^{-6}$ at the end of training. Implementation and hyper-parameters are taken from~\cite{Jia20arXiv} for both detectors, with the only exception being that we enlarge the spatial window size for DR-SPAAM to 17 points (from the original 11 points). This enlarged window matches the effective opening angle, since the scans in JRDB have a higher angular resolution. Due to GPU memory constraints, we randomly crop the scans to 1000 points for DR-SPAAM at training time. We report the average precision (AP$_{\text{0.3}}$ and AP$_{\text{0.5}}$) on the test split (see~Table\,\ref{table:result_train}). A detection is considered positive if there is a ground truth within 0.3~$m$ or 0.5~$m$, and a ground truth can only be matched with a single detection. As baselines, we evaluate the released DROW3 and DR-SPAAM models from~\cite{Jia20arXiv}, which are trained on the DROW dataset. These pre-trained networks have significantly lower AP compared to the ones trained on JRDB, confirming our initial speculation that networks trained on a single dataset may not generalize well to new environments or different LiDAR models. The pre-trained DR-SPAAM, despite having a higher score on the DROW dataset, performs worse than DROW3, showing the effect of overfitting. Networks trained using pseudo-labels, benefiting from a smaller domain gap to the test data, outperform the pre-trained networks, proving the validity of our approach. Starting from a pre-trained model improves the detector performance for pseudo-labels, but gives no clear improvement for 3D annotations.
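The matching rule behind the AP computation can be sketched as a greedy assignment; this is a minimal illustration under the stated rule (a detection is positive if an unmatched ground truth lies within the radius, and each ground truth matches at most one detection), with names of our own choosing:

```python
import math

def match_detections(detections, ground_truths, radius=0.5):
    """Greedily match detections to ground-truth person locations.

    detections: list of (x, y, confidence), processed in descending
    confidence order. A detection is a true positive if an unmatched
    ground truth lies within `radius`; each ground truth can be
    matched to a single detection only.
    Returns a list of booleans (True = true positive), one per
    detection in the sorted order.
    """
    matched = [False] * len(ground_truths)
    outcomes = []
    for dx, dy, _conf in sorted(detections, key=lambda d: -d[2]):
        best, best_dist = None, radius
        for i, (gx, gy) in enumerate(ground_truths):
            dist = math.hypot(dx - gx, dy - gy)
            if not matched[i] and dist <= best_dist:
                best, best_dist = i, dist
        if best is None:
            outcomes.append(False)
        else:
            matched[best] = True
            outcomes.append(True)
    return outcomes
```

From these per-detection outcomes, a precision-recall curve (and hence AP) follows by sweeping the confidence threshold.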
The performance gap between pseudo-labels and annotations could be caused by two factors: label noise and fewer training samples. To study the effect of label noise, we additionally train networks with pseudo-labels, while removing falsely labeled points and correcting the regression target using ground truth annotations (see~Table\,\ref{table:result_train_pl}). Both false positives and false negatives reduce the detector performance, with the latter having a strong influence on DROW3. Correcting the regression target improves AP$_\text{0.3}$ for both networks, especially for the more powerful DR-SPAAM. Cleaning pseudo-labels increases detector performance significantly, showing that label noise is the dominant cause of the performance gap between pseudo-labels and annotations. Detectors trained using clean pseudo-labels trail the ones trained using annotations by around 2 percent AP, due to the reduced number of training samples. Fine-tuning from a pre-trained model further narrows this performance gap.
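The removal of falsely labeled points in this ablation can be sketched as a per-point masking step; this is a simplified illustration with our own names, and the regression-target correction is omitted:

```python
def clean_pseudo_labels(pseudo, truth, drop_fp=True, drop_fn=True):
    """Per-point cleaning of pseudo-label classification targets.

    pseudo, truth: lists of 0/1 foreground labels per scan point,
    from pseudo-labels and ground truth annotations respectively.
    Returns (labels, mask): the unchanged pseudo-labels plus a mask
    of points kept for training; falsely labeled points (false
    positives and/or false negatives) are masked out of the loss.
    """
    labels, mask = [], []
    for p, t in zip(pseudo, truth):
        is_fp = p == 1 and t == 0
        is_fn = p == 0 and t == 1
        keep = not ((drop_fp and is_fp) or (drop_fn and is_fn))
        labels.append(p)
        mask.append(keep)
    return labels, mask
```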
\begin{table} \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{\linewidth}{l cc YYccYY } \toprule &&& \multicolumn{2}{c}{DROW3} &&& \multicolumn{2}{c}{DR-SPAAM} \\ \cmidrule{4-5} \cmidrule{8-9} Supervision &&& AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ & & & AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ \\ \midrule Pre-trained from \cite{Jia20arXiv} &&& 65.5 & 70.8 &&& 62.5 & 68.3 \\ Pseudo-labels &&& 68.5 & 77.3 &&& 66.9 & 75.5 \\ \quad + fine-tuning from \cite{Jia20arXiv} &&& 69.0 & 77.8 &&& 69.2 & 76.7 \\ 3D Annotation &&& 76.2 & \textbf{82.9} &&& 78.5 & \textbf{84.9} \\ \quad + fine-tuning from \cite{Jia20arXiv} &&& \textbf{76.4} & 82.5 &&& \textbf{78.6} & 83.8 \\ \bottomrule \end{tabularx} \caption{Performance of DROW3 and DR-SPAAM trained using different supervision.} \label{table:result_train} \end{table} \begin{table} \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{\linewidth}{l c YYccYY } \toprule && \multicolumn{2}{c}{DROW3} && \multicolumn{2}{c}{DR-SPAAM} \\ \cmidrule{3-4} \cmidrule{6-7} Supervision && AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ && AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ \\ \midrule Pseudo-labels && 68.5 & 77.3 && 66.9 & 75.5 \\ $\ldots$ (remove FP) && 71.6 & 79.0 && 71.1 & 77.8 \\ $\ldots$ (remove FN) && 68.4 & 77.8 && 70.3 & 78.6 \\ $\ldots$ (remove FP \& FN) && 73.1 & 80.8 && 72.2 & 78.8 \\ $\ldots$ (remove FP \& FN, correct reg.) 
&& 74.6 & 80.5 && \textbf{76.5} & \textbf{81.5} \\ \quad + fine-tuning from \cite{Jia20arXiv} && \textbf{74.7} & \textbf{81.2} && 76.2 & \textbf{81.5} \\ \bottomrule \end{tabularx} \caption{Performance of DROW3 and DR-SPAAM trained with different variants of pseudo-labels.} \label{table:result_train_pl} \end{table} \begin{table} \centering \setlength{\tabcolsep}{2.0pt} \begin{tabularx}{\linewidth}{l cc YYccYY } \toprule &&& \multicolumn{2}{c}{DROW3} &&& \multicolumn{2}{c}{DR-SPAAM} \\ \cmidrule{4-5} \cmidrule{8-9} Training scheme &&& AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ & & & AP$_{\text{0.3}}$ & AP$_{\text{0.5}}$ \\ \midrule Cross-entropy loss &&& 68.5 & 77.3 &&& 66.9 & 75.5 \\ \quad + mixup regularization &&& 69.5 & 78.0 &&& 65.8 & 74.0 \\[0.1cm] Partially Huberized cross-entropy loss &&& \textbf{71.4} & \textbf{79.0} &&& 69.4 & 76.4 \\ \quad + mixup regularization &&& 71.1 & 78.5 &&& \textbf{70.0} & \textbf{78.3} \\ \bottomrule \end{tabularx} \caption{Performance of DROW3/DR-SPAAM trained with pseudo-labels and different robust training methods.} \label{table:result_robust} \end{table} To mitigate the problem caused by labeling noise, we experiment with two robust training methods: the \textit{partially Huberized cross-entropy loss}, and the \textit{mixup} regularization (see~Table\,\ref{table:result_robust}). Both methods improve the network performance, with the exception of applying \textit{mixup} alone on DR-SPAAM (potentially due to suboptimal hyper-parameters). Combined with robust training methods, detectors trained using pseudo-labels outperform the pre-trained detectors by a large margin, and reach a performance close to training using annotations, without using any labeled data. Pseudo-labels provide an effective way to adjust person detectors to new environments or LiDAR models. \subsection{Online Fine-Tuning with Pseudo-Labels} In practice, it is desirable to use a detector that can dynamically fine-tune itself in an online fashion during deployment. 
To study the network performance under such fine-tuning, we take a DROW3 detector, pre-trained on the DROW dataset~\cite{Jia20arXiv}, and fine-tune it on the JRDB train split for one epoch, with the partially Huberized cross-entropy loss. We use a learning rate of $5\times10^{-5}$ and a batch size of 8 scans. In the first set of experiments, we shuffle data only within each sequence (the whole train split is composed of 17 sequences) and pass the in-sequence shuffled data to the network. This mimics the situation where a mobile robot enters a new environment, curates a small amount of data, and fine-tunes itself. In the second set of experiments, we shuffle data within the whole training split, giving more diversity in each batch. The detector performance at different stages of fine-tuning is shown in~Fig.\,\ref{fig:online_training}. When the data is shuffled within the whole training split, the network performance increases significantly, from the pre-trained 70.8 percent AP$_{\text{0.5}}$ to more than 74 percent, using fewer than one hundred updates. This fast performance increase implies that, even in applications with insufficient computation for a full training, it is still possible to adapt the detector and improve its performance by running a small number of updates using pseudo-labels. However, having curated training samples with enough diversity is a key prerequisite, as fluctuating performance is observed when the data is shuffled only within each sequence. Although the detector benefits from fine-tuning most of the time (performing better than the pre-trained detector), there are adversarial samples that cause dramatic performance reductions. The same fluctuating behavior exists for fine-tuning using annotations, showing that it is an innate problem of network training, rather than one caused by pseudo-labels.
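For reference, the partially Huberized cross-entropy loss used for this fine-tuning replaces the log loss by its tangent line at low predicted probabilities, bounding the gradient and limiting the influence of mislabeled samples. The binary-case sketch below follows one standard formulation from the robust-loss literature; the exact variant and the threshold `tau` used in the paper are our assumptions:

```python
import math

def partially_huberized_ce(p, tau=2.0):
    """Partially Huberized cross-entropy for the true class.

    p: predicted probability of the correct class (0 < p <= 1).
    tau: linearization threshold (assumed hyper-parameter). For
    p <= 1/tau the log loss is replaced by its tangent line at
    p = 1/tau, so the gradient magnitude is capped at tau and
    noisy (low-probability) labels contribute a bounded penalty.
    """
    if p <= 1.0 / tau:
        return -tau * p + math.log(tau) + 1.0
    return -math.log(p)
```

The two branches meet smoothly at $p = 1/\tau$ with matching value and slope, so the loss stays continuously differentiable.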
Curriculum learning methods~\cite{Bengio09ICML,Jiang18ICML} may help mitigate this problem and thus relax the data collection requirements; these are interesting directions for future research. \begin{figure} \newcommand\drawdashedline{% \begin{tikzpicture} \draw[gray, dashed, line width=0.75pt]% (-7.5pt,0) -- (7.5pt,0);% \end{tikzpicture}% } \centering \includegraphics[width=\linewidth]{pics/tikz/online_training.tikz} \caption{ Performance at different steps of fine-tuning a pre-trained DROW3 detector with the partially Huberized cross-entropy loss. Data is shuffled either on the whole training set or within each sequence, mimicking different amounts of curated data. With diverse batches, the detector performance improves significantly even with only a small number of updates. Fine-tuning longer using the default training schedule yields 79.2 and 82.3 percent AP$_\text{0.5}$ for pseudo-labels and annotations, respectively. } \label{fig:online_training} \end{figure} \section{CONCLUSION} In this paper, we proposed a method to automatically generate pseudo-labels for training 2D LiDAR-based person detectors, using bounding boxes generated by an image-based person detector with a calibrated camera. We analyzed the quality of the pseudo-labels by comparing them against ground truth annotations and verified their validity. Experiments were conducted to train or fine-tune DROW3 and DR-SPAAM detectors using pseudo-labels, and these self-supervised detectors outperformed detectors trained with annotations from a different dataset. Even stronger detectors were obtained by combining pseudo-labels with robust training techniques. Our method provides an effective way to bridge the domain gap between data encountered during training and during deployment. With our method, a mobile robot equipped with a 2D LiDAR-based person detector can fine-tune the detector during deployment, improving its performance with no additional labeling effort.
With the released code, we expect our method to be useful for many robotic applications. \textbf{Acknowledgements:} We thank Hamid Rezatofighi, JunYoung Gwak, and Mihir Patel for their help with the JackRabbot dataset. This project was funded by the EU H2020 project ``CROWDBOT'' (779942). Most experiments were performed on the RWTH Aachen University CLAIX 2018 GPU Cluster (rwth0485).
\section{Appendix} \subsection{Network Architectures} \end{document} \section{Conclusion} In this paper, we propose the Coarse-to-Fine Flow Warping Network (C2F-FWN) to achieve both spatial and temporal consistency for HVMT, enabling us to preserve exemplary appearances as well as improve video coherence. Specifically, our coarse-to-fine flow warping can precisely model geometric deformations caused by motions to ensure spatial consistency, where we further utilize our Layout-Constrained Deformable Convolution (LC-DConv) to enhance the features used for estimating the transformation flows. To achieve temporal consistency, we propose a novel Flow Temporal Consistency (FTC) loss to learn explicit temporal consistency between successive transformation flows, which significantly improves video coherence. Experimental results on our SoloDance dataset and the iPER dataset show our superiority over other methods in terms of both spatial and temporal consistency. Ablation studies w.r.t. our FTC loss and LC-DConv demonstrate their effectiveness in improving our synthesis quality. We also demonstrate that our method can achieve flexible appearance attribute editing when provided with alterable multi-source appearance inputs, which shows promising application prospects. \subsection{Limitations and Future Work} Although our method works well in most cases, it may fail (e.g., jitters, blurs) due to errors in poses and semantic layouts, which introduce errors into our model inputs and further result in artifacts in our output results. In the future, we can utilize more accurate pose and layout estimation techniques to eliminate these errors. Besides, we currently only provide our model with a single exemplary image, which might suffer from self-occlusions and missing textures in cases of extremely large motion changes. Thus, how to attend to and aggregate multiple exemplary images for warping is also worth studying in future work.
\section{Acknowledgments} This work was supported by the National Natural Science Foundation of China (U19B2043). \section{Experiments} \subsection{Dataset} \subsubsection{SoloDance Dataset} We built a large-scale SoloDance dataset containing 179 solo dance videos with 53,700 frames. Specifically, 143 human subjects were captured, with each wearing various clothes and performing complex dances (e.g., modern, street dances) in various backgrounds. Compared to the iPER dataset \cite{liu2019liquid} that only contains 30 subjects performing simple moves (e.g., random actions, A-poses), our dataset offers more appearance variety and motion complexity. We utilized \cite{cao2017realtime} and \cite{gong2018instance} to detect body poses and semantic layouts, and further obtained foregrounds and backgrounds for each video. In our experiments, we randomly split the dataset into 153 and 26 videos for training and testing. \subsubsection{iPER Dataset} We also evaluated our method on the iPER dataset \shortcite{liu2019liquid}. The data preprocessing of the iPER dataset is the same as for our SoloDance dataset. Following the original protocol of iPER, we used 164 videos for training and the remaining 42 videos for testing. \subsection{Implementation Details} All the frames were resized and cropped to $256\times256$ to train our models. Since backgrounds are fixed and easy to generate compared to animated human foregrounds, we further cropped the frames to the central $192\times256$ body regions during evaluation to focus on the quality of the synthesized foregrounds. The design of the layout GAN in Stage 1 and the composition GAN in Stage 3 followed \cite{wang2018video}. The design of our FPNs in Stage 2 followed \cite{lin2017feature}, except that we replaced standard convolutions in the bottom-up pathways of FPN-A and FPN-M with our LC-DConv to enhance the features. Particularly, the LC-DConv was implemented based on \cite{dai2017deformable} by employing layout-constrained sampling locations.
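The layout constraint on the sampling locations can be illustrated with a toy sketch; the names below are hypothetical, the learned offset prediction of deformable convolution is not shown, and the exact constraint is our reading of the text (sampling locations are restricted by the semantic layout):

```python
def layout_constrained_samples(layout, center, offsets):
    """Keep only sampling locations consistent with the layout.

    layout: 2D list of semantic region labels; center: (row, col)
    of the convolution position; offsets: candidate (dr, dc)
    sampling offsets, e.g. produced by a deformable convolution.
    Locations falling outside the image or into a different
    semantic region than the center are discarded.
    """
    h, w = len(layout), len(layout[0])
    r0, c0 = center
    region = layout[r0][c0]
    kept = []
    for dr, dc in offsets:
        r, c = r0 + dr, c0 + dc
        if 0 <= r < h and 0 <= c < w and layout[r][c] == region:
            kept.append((r, c))
    return kept
```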
Moreover, to enable the supervision of the proposed FTC loss, we utilized \cite{ilg2017flownet} to obtain the ground-truth optical flows. We trained each stage for 10 epochs separately with Adam optimizers \cite{kingma2014adam} (learning rate: 0.0002, ${\beta}_1$: 0.5, ${\beta}_2$: 0.999) on an Nvidia RTX 2080 Ti GPU, where we set $\lambda_{1}=5$ and $\lambda_{2}=0.5$ in Eq. \ref{con:E4} to trade off the two losses. \subsection{Baselines} To evaluate our proposed approach, we made comparisons with state-of-the-art HVMT methods including a personalized method EDN \cite{chan2019everybody}, a direct generation method FSV2V \cite{wang2019few}, two feature warping methods LWGAN \cite{liu2019liquid} and SGWGAN \cite{dong2018soft}, and an image warping method ClothFlow \cite{han2019clothflow}. In our implementation, we used 3000 frames for each person to train personalized models for EDN, and used the same data as ours to train models for the other methods. \begin{figure*}[htbp] \centering \includegraphics[width=.95\textwidth]{./figures/comparison.pdf} \caption{Qualitative comparisons with other methods including EDN \shortcite{chan2019everybody}, FSV2V \shortcite{wang2019few}, LWGAN \shortcite{liu2019liquid}, SGWGAN \shortcite{dong2018soft}, ClothFlow \shortcite{han2019clothflow}, and ablated variants without the FTC loss or LC-DConv. Yellow, blue and red circles point out blurry surfaces, over-stretched clothes patterns, and black chinks (caused by misplacements), respectively. \emph{Please zoom in for a better view.}} \label{comparison} \end{figure*} \subsection{Quantitative Results} We utilized both traditional (SSIM and PSNR) and CNN-based metrics (LPIPS \cite{zhang2018unreasonable} and FID \cite{heusel2017gans}) to measure the quality of synthesized frames, which can assess the spatial consistency between synthesized and exemplary images.
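Of these metrics, PSNR has the simplest definition; a minimal grayscale sketch (8-bit pixel range assumed, names our own) is:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images.

    img_a, img_b: 2D lists of pixel intensities in [0, max_val].
    PSNR = 10 * log10(max_val^2 / MSE); higher means the
    synthesized frame is closer to the reference, i.e. better
    spatial consistency with the exemplary image.
    """
    n, sq_err = 0, 0.0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            sq_err += (a - b) ** 2
            n += 1
    mse = sq_err / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```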
We also utilized the Temporal Consistency Metric (TCM) \cite{yao2017occlusion} to evaluate temporal consistency, which is an essential factor in measuring the quality of videos rather than single frames. Specifically, TCM measures temporal consistency by calculating warping errors between successive synthesized frames, where each frame is warped by the ground-truth optical flow to compare with its neighboring frame. The quantitative results of all the methods are summarized in Table \ref{table1}. We can see that our proposed C2F-FWN significantly outperforms all the other methods, including the personalized method EDN \shortcite{chan2019everybody}, for all the metrics (especially the TCM scores) on both datasets, which indicates that our approach can achieve HVMT with better spatial and temporal consistency. \subsection{Qualitative Results} As shown in Figure \ref{comparison}, we randomly visualize some motion transfer video frames synthesized by different methods for qualitative comparisons, where our approach outperforms all the other methods. Specifically, we achieve better spatial consistency with exemplary images than others, especially the direct generation method FSV2V \shortcite{wang2019few}, as our method preserves exemplary appearance details such as decorative patterns and colors well. Besides, benefiting from our coarse-to-fine flow warping, we can capture the desired motions better than existing warping-based methods, with our warped clothes precisely aligned with the body layouts. In contrast, the feature warping method SGWGAN \shortcite{dong2018soft} can't achieve precise feature alignment with the desired motions due to its limited warping capability, which causes poor appearance details. Another feature warping method, LWGAN \shortcite{liu2019liquid}, results in blurry details on the surface of bodies and clothes (e.g., circled in yellow in Figure \ref{comparison}) because of the low-precision SMPL models.
The image warping method ClothFlow \shortcite{han2019clothflow} can't warp the exemplary images to align with the desired motions, which results in visual artifacts such as over-stretching and misplacement near the layout boundaries (e.g., circled in blue and red in Figure \ref{comparison}). Although the personalized method EDN \shortcite{chan2019everybody} can generate results comparable to ours, it often produces blurrier textures. We also show some of our multi-source appearance synthesis results in Figure \ref{appearance}, where we utilized fashion images dissimilar to our training data to extract tops and bottoms. We can see that the multi-source exemplary appearances are also well preserved, enabling flexible appearance attribute editing for HVMT. Videos of the qualitative comparisons and our synthesized results can be found in \emph{our supplementary materials}, where we show that our method also achieves better temporal consistency. \subsection{Ablation Study} We also conducted ablation studies w.r.t. our FTC loss and LC-DConv to demonstrate their effectiveness. Specifically, we implemented two variant models for comparison: one was trained without our FTC loss, and the other only adopted standard convolutions to extract features. As shown in Table \ref{table1}, our full method outperforms the two variants for all the metrics. As shown in Figure \ref{comparison}, without the two components, the clothes are warped imprecisely (e.g., circled in blue and red in Figure \ref{comparison}), which indicates the importance of the LC-DConv as well as the FTC loss for enhancing our flow warping and improving spatial consistency. Moreover, we observed that the variant without the FTC loss results in much worse video coherence than our full method, which shows our superiority in improving temporal consistency.
\textbf{Please refer to \emph{our supplementary video} for more details: \url{https://youtu.be/THuQN1GXuGI}.} \section{Introduction} Human Video Motion Transfer (HVMT) refers to the task of synthesizing videos in which one person imitates the motions of other persons, which has attractive potential applications in movies, interactive games, virtual shopping, etc. With the development of Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} and GAN-based image-to-image translation techniques \cite{wang2018high,wang2018video,park2019semantic}, HVMT works have achieved great success.\par In general, existing HVMT methods follow two main streams: personalized HVMT and general-purpose HVMT. Personalized methods \cite{chan2019everybody,liu2019neural} focus on learning the mapping from motion inputs (e.g., body poses or semantic layouts that describe the desired motions) to video frames for a specific person, with a large number of frames from this person collected as training data to fit the model to his/her appearance. To generate videos for another person, they have to perform a new round of data collection and model training, which requires massive human resources and computation costs. Recently emerged general-purpose methods \cite{wang2019few,liu2019liquid,wei2020gac} manage to solve this by providing additional appearance inputs (e.g., exemplary images that describe the desired appearances) for GANs. Thus they can generate videos for new persons by altering the input exemplary images. However, most of these methods directly utilize GANs to generate values for all the pixels from scratch without preserving their spatial consistency with pixels in the exemplary images, which results in the loss of appearance details such as decorative patterns and colors of clothes.
Besides, they either don't consider temporal consistency or only focus on implicit temporal consistency among frame images when synthesizing videos, which causes low temporal coherence in their video results. Moreover, most of them don't support HVMT with fully editable appearances, lacking flexibility and efficiency for real applications.\par In this paper, to address these limitations, we propose the Coarse-to-Fine Flow Warping Network (C2F-FWN) to ensure both spatial and temporal consistency for HVMT. For spatial consistency, our C2F-FWN synthesizes motion transfer videos through warping based on coarse-to-fine transformation flows rather than direct generation based on GANs. Thus we can precisely model geometric deformations caused by motions to preserve spatial correlations between synthesized and exemplary image pixels. Moreover, Layout-Constrained Deformable Convolution (LC-DConv) is utilized to extract deformable features for C2F-FWN, further improving spatial consistency. For temporal consistency, we propose the Flow Temporal Consistency (FTC) loss with optical flows as constraints to enforce explicit temporal consistency among transformation flows instead of frame images, fundamentally ensuring video coherence.\par \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{./figures/overview.pdf} \caption{Overview of our method. Orange, green and blue rectangles specify processes in Stages 1, 2 and 3, respectively. Black arrows denote ordinary data flows like pose detection, layout detection and body division. Orange, green and blue arrows denote data flows of the layout GAN in Stage 1, our C2F-FWN in Stage 2, and the composition GAN in Stage 3, respectively.} \label{overview} \end{figure*} In our experiments, we evaluate our method on both the iPER dataset \cite{liu2019liquid} and a large-scale SoloDance dataset collected by ourselves.
Both quantitative and qualitative results demonstrate that videos generated by our method have significantly better spatial and temporal consistency than those of existing personalized and general-purpose methods. We also show that our approach can utilize multi-source appearance inputs to enable full appearance attribute editing (e.g., changing identities, tops, bottoms, backgrounds) for HVMT, which has promising application prospects. \section{Method} \subsection{Overview} The overview of our method is shown in Figure \ref{overview}, which contains three stages: layout synthesis (Stage 1), clothes warping (Stage 2) and image composition (Stage 3). For ease of discussion, we first introduce the symbols used. Given an exemplary foreground image $FG$ describing the desired human appearance and an exemplary background image $BG$ describing the desired background appearance, we aim at synthesizing videos in which the exemplary human foreground $FG$ performs the motions described by the pose sequence $P^{1\sim{T}}$ in the exemplary background $BG$. For the exemplary foreground $FG$, we detect its semantic layout ${LO}$ and divide it to further obtain the layout $LO_{C}$ and foreground $FG_{C}$ for the clothing parts (i.e., tops and bottoms), as well as the layout $LO_{\bar{C}}$ and foreground $FG_{\bar{C}}$ for the non-clothing parts (i.e., hair, face, torso and shoes). Provided with these processed inputs, we synthesize the corresponding output video sequence $\hat{I}^{1\sim{T}}$ in the three stages described in Figure \ref{overview}. Since $\hat{I}^{1\sim{T}}$ is generated frame by frame, we take the synthesis of the $t$-th frame $\hat{I}^t$ as an example for brevity.\par \textbf{In Stage 1}, we utilize a layout GAN to generate the semantic layout $\hat{LO}^t$, which has the same motion as $P^{t}$ and the same appearance as $LO$. We further divide $\hat{LO}^t$ to obtain the clothing layout $\hat{LO}_C^t$.
\textbf{In Stage 2}, taking $\chi_1=\{{LO}_{C},FG_{C}\}$ as the appearance input and taking $\chi_2=\hat{LO}_C^t$ as the motion input, our C2F-FWN computes the transformation flow $\hat{F}^t$ to warp the exemplary clothing foreground $FG_{C}$ into the foreground $\hat{FG}_C^t$, which precisely aligns with the generated clothing layout $\hat{LO}_C^t$. \textbf{In Stage 3}, we utilize a composition GAN to generate the remaining parts including the non-clothing foreground and the background, and compose them with the clothing foreground $\hat{FG}_C^t$ from Stage 2 to generate the full frame image $\hat{I}^t$. Note that we don't generate the non-clothing parts through warping for two reasons. First, the appearance of the non-clothing parts varies sharply in different views, making it extremely hard to model their appearance changes through warping. Second, texture and color patterns of the non-clothing parts are simple and easy to generate using GANs. Therefore, we utilize the composition GAN to synthesize the non-clothing parts.\par Particularly, the layout GAN and the composition GAN follow the Vid2Vid design presented in \cite{wang2018video}. Vid2Vid is a general image-to-image translation backbone consisting of two encoders and two decoders ($E_1$,$E_2$,$D_1$,$D_2$ for brevity). $E_1$ and $E_2$ aim to encode features for two inputs $\mathcal{I}_1$ (i.e., current conditional inputs) and $\mathcal{I}_2$ (i.e., previous generated results), respectively. $D_1$ and $D_2$ aim to decode the added features of $\mathcal{I}_1$ and $\mathcal{I}_2$ to output $\mathcal{O}_1$ (i.e., a raw result) and $\mathcal{O}_2$ (i.e., an optical flow). Then we can obtain the current frame result by using $\mathcal{O}_2$ to warp the last frame result and add it to $\mathcal{O}_1$. For the \textbf{layout GAN}, $\mathcal{I}_1$ denotes the concatenated $\{P^t,LO\}$. $\mathcal{I}_2$ denotes the concatenated $\{\hat{LO}^{t-1},\hat{LO}^{t-2}\}$. 
Thus we can utilize the Vid2Vid backbone to generate $\hat{LO}^t$. Besides, to better synthesize one-hot semantic layouts rather than RGB images, we replace the image reconstruction losses of Vid2Vid with the structure-sensitive pixel-wise softmax loss introduced in human parsing works \cite{liang2018look}. Similarly, for the \textbf{composition GAN}, $\mathcal{I}_1$ denotes the concatenated $\{\hat{LO}^t,\hat{FG}_C^t,LO_{\bar{C}},FG_{\bar{C}},BG\}$ and $\mathcal{I}_2$ denotes the concatenated $\{\hat{I}^{t-1},\hat{I}^{t-2}\}$. The Vid2Vid backbone learns to automatically attach the non-clothing parts to the clothes synthesized in Stage 2, thus obtaining the full image $\hat{I}^t$.\par In the following, we present the details of our \textbf{C2F-FWN}, including coarse-to-fine flow warping, Layout-Constrained Deformable Convolution (LC-DConv) and the Flow Temporal Consistency (FTC) loss. Finally, we discuss multi-source appearance attribute editing, a capability unique to our C2F-FWN. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{./figures/stage2.pdf} \caption{Illustration of C2F-FWN. We use feature maps in different colors to differentiate the three FPNs, where blue, green, and orange feature maps specify FPN-A, FPN-M, and FPN-F, respectively. Each FPN has two pathways connected by lateral connections (horizontal arrows), where we use light and dark colors to differentiate the features of the bottom-up and top-down pathways, respectively. Steps 1$\sim$5 drawn in gold describe the procedure of our coarse flow warping.} \label{stage2} \end{figure} \subsection{Coarse-to-Fine Flow Warping} Before diving into the details of the coarse-to-fine flow warping, we first explain its motivation and mechanism. When ${LO}_{C}$ differs greatly from $\hat{LO}_C^t$ in motion, pixels in the appearance input $\chi_1$ lie far from their correlated pixels in the motion input $\chi_2$.
Estimating the whole transformation flow directly from the concatenation of such misaligned appearance and motion features would fail, because standard convolutions with limited kernel sizes can't build correlations between pixels that are far apart. Instead, our C2F-FWN first estimates a coarse Thin-Plate-Spline (TPS) flow $\hat{F}_{coarse}^t$ based on the smallest bottom-up features to coarsely warp the appearance features into the desired motion; at this scale the large receptive fields of the small feature maps make the pixel distances negligible relative to the input size. Thus the appearance and motion features become aligned. We can then concatenate the largest top-down appearance and motion features to compute the refinement flow $\hat{F}_{fine}^t$ for fine warping, where the effects of far distances have been eliminated by the preliminary feature alignment.\par As shown in Figure \ref{stage2}, C2F-FWN contains three feature pyramid networks (FPN) \cite{lin2017feature}: FPN-A, FPN-M and FPN-F, responsible for extracting pyramidal features of the appearance input $\chi_1=\{{LO}_{C},{FG}_{C}\}$, the motion input $\chi_2=\hat{LO}_C^t$ and the previously estimated transformation flow $\chi_3=\hat{F}^{t-1}$, respectively. Each FPN has two symmetrical pathways (bottom-up and top-down). Specifically, the top-down pathway is built upon the bottom-up pathway via lateral connections, with the feature maps shrinking along the bottom-up pathway and growing along the top-down pathway. Benefiting from such a symmetric design, both coarse and fine warping can be realized in the unified C2F-FWN.\par \subsubsection{Coarse Flow Warping} The procedure of our coarse flow warping is described in steps 1$\sim$5 in Figure \ref{stage2}. First, we compute the correlation map $C$, whose entries contain the pairwise similarities between the smallest bottom-up features of FPN-A and FPN-M.
The correlation map is then fed into a regression layer to compute $K{\times}2$ parameters ($\theta$), which represent the positions of $K$ control points ($K$=$3{\times}3$ in this paper). Based on TPS interpolation \cite{rocco2017convolutional}, we can generalize the mapping between the estimated $K$ control points and their corresponding predefined grid points to all the pixels of ${FG}_{C}$, and hence move each pixel to its new position to obtain the coarsely warped clothes $\hat{FG}_{C,coarse}^t$. To supervise the coarse warping, we utilize a VGG loss \cite{johnson2016perceptual} $L^{coarse}_{VGG}$ to minimize the difference between $\hat{FG}_{C,coarse}^t$ and the ground truth.\par Then, to make the TPS transformation compatible with our transformation flow, we convert it to a coarse flow $\hat{F}_{coarse}^t$ by computing, for each pixel, the position difference before and after the transformation. Let $P=(x,y)$ denote the position of a pixel in the warped clothes $\hat{FG}_{C,coarse}^t$, and let $P^{'}=(x^{'},y^{'})$ denote the position of the same pixel in the exemplary clothes ${FG}_{C}$. The coarse flow at position $(x,y)$ is then given by $\hat{F}_{coarse}^{t}(x,y)=\overrightarrow{PP^{'}}=(x^{'}-x,y^{'}-y)$, and likewise for all other positions.\par Next, we downsample $\hat{F}_{coarse}^t$ to different sizes to warp all the bottom-up features of FPN-A, roughly aligning them with the generated layout $\hat{LO}_C^t$, which represents the desired motion. We can thus compute the corresponding roughly-aligned top-down appearance features via lateral connections, with pixels located close to their correlated pixels in the top-down motion features, facilitating the subsequent estimation of the refinement flow.
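The conversion from a per-pixel coordinate mapping to a backward flow field, and the warping it drives, can be sketched as follows. This is a minimal numpy sketch: a simple translation stands in for the TPS mapping, and nearest-neighbour sampling stands in for the bilinear sampling used in practice.

```python
import numpy as np

def mapping_to_flow(mapping, H, W):
    """Build a backward flow F(x, y) = P' - P, where P' = mapping(P) is the
    source position in the exemplary image of the pixel at P in the warp."""
    ys, xs = np.mgrid[0:H, 0:W]
    xp, yp = mapping(xs, ys)                      # source coordinates P'
    return np.stack([xp - xs, yp - ys], axis=-1)  # (H, W, 2)

def backward_warp(img, flow):
    """Sample each output pixel from its source position (nearest neighbour)."""
    H, W = img.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    xp = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
    yp = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[yp, xp]

# Illustrative mapping: every output pixel samples two columns to its right
# in the source image, so image content shifts two columns to the left.
shift = lambda xs, ys: (xs + 2, ys)
flow = mapping_to_flow(shift, 4, 6)
```

The backward (sample-from-source) convention here matches the flow definition in the text: the flow vector lives at the warped position and points back to the exemplary clothes.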
\subsubsection{Fine Flow Warping} As shown in Figure \ref{stage2}, we predict the refinement flow $\hat{F}_{fine}^t$ based on the concatenation of the largest top-down features of the three FPNs, where we include the features of FPN-F to allow for learning the temporal consistency with previous transformation flows. Specifically, the refinement flow $\hat{F}_{fine}^t$ has the same size as the coarse flow $\hat{F}_{coarse}^t$ and adds pixel-wise offsets to $\hat{F}_{coarse}^t$ so that the warp precisely aligns with the generated layout $\hat{LO}_C^t$. Thus our final transformation flow is given by $\hat{F}^t=\hat{F}_{coarse}^t+\hat{F}_{fine}^t$. Using $\hat{F}^t$ to warp the exemplary clothes ${FG}_{C}$, we obtain the final warped clothes $\hat{FG}_C^t$.\par During training, we also utilize a VGG loss \shortcite{johnson2016perceptual} $L_{VGG}$ to minimize the difference between $\hat{FG}_C^t$ and the ground truth, which supervises the fine warping.\par \subsection{Layout-Constrained Deformable Convolution} Since both the coarse and refinement flows are predicted from FPN-A and FPN-M features, feature extraction in these two FPNs directly affects the quality of our warping results. In motion transfer tasks, clothing items may change to various shapes along with the body poses, so the extracted features should generalize to the corresponding variety of shapes. Unfortunately, standard convolutions sample features on a fixed regular grid, so their receptive fields remain fixed no matter how the shape changes and hence can't accommodate the geometric deformations of different shapes. Moreover, such fixed receptive fields are not large enough to accommodate the misalignment between appearance and motion features.\par \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{./figures/LD-DConv.pdf} \caption{Illustration of LC-DConv.
Here we take the LC-DConv in the first layer of FPN-M as an example; the case for FPN-A is the same except that FPN-A takes foregrounds in addition to layouts as its inputs. We let orange and dark green represent the semantic classes of tops and bottoms, respectively. In the small patch, $\checkmark$ in green and $\times$ in red denote valid and invalid sampling positions, respectively.} \label{DConv} \end{figure} Therefore, we replace all the standard convolutions in the bottom-up pathway layers of FPN-A and FPN-M with Deformable Convolutions (DConv) \cite{dai2017deformable}, which can model geometric deformations adaptively with deformable receptive fields. As shown in Figure \ref{DConv}, DConv learns additional 2D offsets to shift the regular sampling locations of the standard convolution, which enables deformable and larger receptive fields. However, the unconstrained offsets may result in invalid sampling from positions not semantically related to the output position, causing a loss of semantic information in the output feature. Therefore, our Layout-Constrained Deformable Convolution (LC-DConv) utilizes the input semantic layouts as priors to set the amplitudes of features sampled from invalid positions to zero, precisely preserving the layout boundaries and thus enhancing the semantic information in the output feature. Taking a convolution with a $3{\times}3$ kernel of dilation 1 as an example, we explain how our LC-DConv works.
Let $X(p)$ and $Y(p)$ be the input and output features at position $p$, respectively, and let $w_k$, $\Delta{p_k}$ and $p_k{\in}\{(-1,-1),(-1,0),...,(1,1)\}$, $k=1{\sim}9$, represent the $k$-th kernel weight, sampling offset and regular sampling position, respectively. The LC-DConv is then given by: \begin{equation} \begin{aligned} Y(p)=&\sum_{k=1}^K{w_k\cdot{X(p+p_k+\Delta{p_k})\cdot\Delta{m_k}}},\\ \Delta{m_k}=& \begin{cases} 0,& \text{$LO(p)\,\,{\neq}\,LO(p+p_k+\Delta{p_k})$},\\ 1,& \text{otherwise,} \end{cases} \label{con:E1} \end{aligned} \end{equation}where $K=9$ and $\Delta{m_k}$ is the modulation scalar determined by the layout prior $LO$, which decides the validity of the $k$-th offset sampling position. For FPN-A, $LO$ refers to the exemplary clothing layout ${LO}_{C}$; for FPN-M, $LO$ refers to the generated clothing layout $\hat{LO}_C^t$. As depicted in Figure \ref{DConv}, we set feature amplitudes to zero if they belong to semantic classes different from the class at the output position, which effectively avoids invalid sampling.\par \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{./figures/FTC-Loss.pdf} \caption{Illustration of FTC loss. The first image is the exemplary clothes ${FG}_{C}$. The second image is a combination of two warped clothes $\hat{FG}_C^{t-l}$ and $\hat{FG}_C^t$. We take the position $P$ in the right arm region as an example, which moves from $P_{t-l}$ to $P_t$ during time $t-l{\sim}t$. Yellow, blue and green solid arrows represent $\hat{F}^{t}$, $\hat{F}^{t-l}$ and $U$, respectively.
Blue dotted arrow denotes the resampled $\hat{F}^{t-l}$.} \label{FTC} \end{figure} \subsection{Flow Temporal Consistency Loss} Compared to other methods \cite{chan2019everybody,wang2019few} that only learn implicit temporal consistency among frame images, our FTC loss uses optical flows to enforce explicit temporal consistency among transformation flows, which also enables direct supervision on the transformation flows themselves instead of on the warped clothes. Specifically, benefiting from the flow format of our transformation, we can build the correlation between two transformation flows $\hat{F}^t$ and $\hat{F}^{t-l}$ using the optical flow $U$ between the corresponding two frames. Let $P=(x,y)$ denote the position of a pixel in the exemplary clothes ${FG}_{C}$, and let $P_t=(x_t,y_t)$ and $P_{t-l}=(x_{t-l},y_{t-l})$ denote the positions of the same pixel in the warped clothes $\hat{FG}_C^t$ and $\hat{FG}_C^{t-l}$. For this pixel, the transformation flow vectors at times $t$ and $t-l$ are given by: \begin{equation} \begin{aligned} &\hat{F}^t(x_t,y_t)=(x-x_t,y-y_t),\\ &\hat{F}^{t-l}(x_{t-l},y_{t-l})=(x-x_{t-l},y-y_{t-l}). \label{con:E2} \end{aligned} \end{equation}Note that all the flows used in this paper are backward flows. Therefore, the flow vectors at time steps $t$ and $t-l$ are located at the transformed positions $P_t$ and $P_{t-l}$ w.r.t. the warped clothes, rather than at the original position $P$ w.r.t. the exemplary clothes. This backward format ensures that each pixel in the warped clothes has a flow vector indicating the position in the exemplary clothes from which it should be sampled, which makes the warping operation valid.\par In principle, if the frames at $t$ and $t-l$ are temporally consistent, $\hat{F}^t(x_t,y_t)-\hat{F}^{t-l}(x_{t-l},y_{t-l})$ should be equal to the ground-truth optical flow vector $U(x_t,y_t)$, which points from $t$ to $t-l$ and equals $(x_{t-l}-x_t,y_{t-l}-y_t)$.
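This per-pixel consistency relation follows directly from the definitions above and can be checked numerically; a minimal sketch with illustrative coordinates:

```python
import numpy as np

# A single pixel of the exemplary clothes at P = (x, y) appears at P_t in
# frame t and at P_{t-l} in frame t-l; all flows are backward, i.e. stored
# at the warped position and pointing back towards the exemplary clothes.
x, y = 5.0, 7.0        # P, position in FG_C (illustrative values)
x_t, y_t = 9.0, 11.0   # P_t, position in the warped clothes at time t
x_tl, y_tl = 8.0, 10.0 # P_{t-l}, position in the warped clothes at time t-l

F_t = np.array([x - x_t, y - y_t])      # transformation flow stored at P_t
F_tl = np.array([x - x_tl, y - y_tl])   # transformation flow stored at P_{t-l}
U = np.array([x_tl - x_t, y_tl - y_t])  # optical flow from t to t-l

# Temporal consistency: the difference of the two transformation flow
# vectors for the same material point equals the inter-frame optical flow.
assert np.allclose(F_t - F_tl, U)
```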
As shown in Figure \ref{FTC}, to generalize this equation from a single pixel to the whole image, we resample $\hat{F}^{t-l}(x_{t-l},y_{t-l})$ from $P_{t-l}$ to the same position $P_t$ as $\hat{F}^t(x_t,y_t)$, and do the same for all the remaining pixels of $\hat{F}^{t-l}$ so that they share positions with those of $\hat{F}^t$. We realize this by using $U$ to warp $\hat{F}^{t-l}$. Thus our FTC loss is given by: \begin{equation} \begin{aligned} L_{FTC,l}=\|\hat{F}^t-\mathrm{W}_{U}(\hat{F}^{t-l})-{U}\|_1, \label{con:E3} \end{aligned} \end{equation}where $\mathrm{W}$ denotes the warping operation based on $U$. To guarantee both short-term and long-term temporal consistency, we set $l=1,3,9$ to compute FTC losses at three time scales and sum them to form our full FTC loss $L_{FTC}$.\par We further utilize a TVL1 loss \cite{fan2018end} $L_{TVL1}$ to minimize the difference between flow vectors at neighboring positions of $\hat{F}^t$, which smooths the warping. In summary, the full objective is a weighted sum of several losses, given by: \begin{equation} \begin{aligned} L_{full}=L_{VGG}+L^{coarse}_{VGG}+\lambda_{1}L_{FTC}+\lambda_{2}L_{TVL1}, \label{con:E4} \end{aligned} \end{equation}where $\lambda_{1}$ and $\lambda_{2}$ denote the weights of the FTC and TVL1 losses, respectively. \begin{table*}[htbp] \begin{center} \caption{Quantitative results tested on our SoloDance dataset and the iPER \shortcite{liu2019liquid} dataset. SSIM, PSNR and TCM are similarity metrics, the higher the better (SSIM and TCM range from 0 to 1). LPIPS and FID are distance metrics, the lower the better.
Note that TCM measures temporal consistency while other metrics measure spatial consistency.}\label{table1} \resizebox{1\textwidth}{!}{ \begin{tabular}{c"lccccc"cc"c} \thickhline Datasets & Metrics & EDN\shortcite{chan2019everybody} & FSV2V\shortcite{wang2019few} & LWGAN\shortcite{liu2019liquid} & SGWGAN\shortcite{dong2018soft} & ClothFlow\shortcite{han2019clothflow} & w/o FTC loss & w/o LC-DConv & Ours\\ \thickhline & SSIM & 0.811 & 0.721 & 0.786 & 0.763 & 0.843 & 0.849 & 0.850 & \textbf{0.879}\\ & PSNR & 23.22 & 20.84 & 20.87 & 20.54 & 22.06 & 23.05 & 23.19 & \textbf{26.65}\\ SoloDance & LPIPS & 0.051 & 0.132 & 0.106 & 0.124 & 0.072 & 0.065 & 0.063 & \textbf{0.049}\\ & FID & 53.17 & 112.99 & 86.53 & 99.24 & 76.61 & 64.92 & 61.03 & \textbf{46.49}\\ & TCM & 0.347 & 0.106 & 0.176 & 0.166 & 0.322 & 0.319 & 0.401 & \textbf{0.641}\\ \thickhline & SSIM & 0.840 & 0.780 & 0.825 & 0.818 & 0.814 & 0.824 & 0.822 & \textbf{0.849}\\ & PSNR & 23.39 & 20.44 & 21.43 & 22.41 & 21.87 & 22.76 & 22.52 & \textbf{24.27}\\ iPER\shortcite{liu2019liquid} & LPIPS & 0.076 & 0.110 & 0.091 & 0.086 & 0.088 & 0.082 & 0.084 & \textbf{0.072}\\ & FID & 56.29 & 110.99 & 77.99 & 101.99 & 71.21 & 64.40 & 63.72 & \textbf{55.07}\\ & TCM & 0.361 & 0.184 & 0.197 & 0.260 & 0.422 & 0.411 & 0.499 & \textbf{0.687}\\ \thickhline \end{tabular}} \end{center} \end{table*} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{./figures/appearance_control.pdf} \caption{Examples of our multi-source appearance synthesis. Red-edged images describe the non-clothing foreground appearances (hair, face, torso, shoes). Green-edged images from left to right describe the appearances of background, tops, bottoms. Black-edged images are our synthesized motion transfer results. 
\emph{Please zoom in for a better view.}} \label{appearance} \end{figure} \subsection{Multi-Source Appearance Attribute Editing} Compared with existing HVMT methods, C2F-FWN supports multi-source appearance attribute editing when transferring motions. As described above, we divide the exemplary appearance into the background, the clothing foreground and the non-clothing foreground. The clothing foreground can be further divided into tops and bottoms, which decide how the exemplary human subject is dressed in the synthesized videos. The non-clothing foreground can be further divided into hair, face, torso and shoes, with the first three parts deciding the human identity. With the help of semantic layouts, the background and each part of the foregrounds can be extracted from different sources to form a multi-source exemplary appearance. For example, the background can be replaced by an arbitrary fixed image, and tops and bottoms in the clothing foreground can be extracted from arbitrary fashion or portrait images, and likewise for the parts of the non-clothing foreground. With such multi-source appearance inputs, our proposed method generates the corresponding multi-source appearance in the synthesized videos, which enables full appearance attribute editing for motion transfer, as shown in Figure \ref{appearance}. This capability offers great flexibility and efficiency in real applications. For example, users can arbitrarily change their clothes and backgrounds in videos without actually wearing the clothes or performing in front of the backgrounds, enabling convenient video re-creation. \section{Related Work} \subsection{Personalized HVMT} Personalized HVMT \cite{chan2019everybody,liu2019neural,aberman2019deep,yang2020transmomo} only learns the mapping from motion inputs to video frames, with appearances learned individually by separate models. Once trained, one model can only generate videos with a specific appearance.
To generate videos with new appearances, these methods need to train new models. Although such approaches can generate high-fidelity videos, they lack efficiency in practical applications. \subsection{General-Purpose HVMT} General-purpose HVMT can be divided into direct generation methods \cite{wang2019few,wei2020gac} and warping-based methods \cite{liu2019liquid,dong2018soft,han2019clothflow}. Both utilize additional appearance inputs to control the synthesized appearances in addition to the motions. \subsubsection{Direct Generation Methods} leverage GAN-based image-to-image translation techniques \cite{wang2018high,wang2018video,park2019semantic} to generate video frames from appearance and motion inputs directly. \citeauthor{wang2019few} utilize several SPADE blocks \cite{park2019semantic} to adaptively propagate the appearance information throughout the network, which achieves appearance control by altering the appearance inputs. \citeauthor{wei2020gac} propose an appearance-consistency discriminator that forces the generator to produce appearances consistent with the alterable appearance inputs, which also achieves appearance control. However, these methods don't consider the spatial consistency between pixels in the outputs and in the appearance inputs, so they can't preserve appearance details such as textures and colors well. Moreover, they only consider temporal consistency among frame images, which is implicit and hard to learn. Owing to the mode collapse and over-fitting problems of GANs \cite{webster2019detecting}, these methods often produce low-fidelity results. \subsubsection{Warping-Based Methods} focus on generating images through warping to preserve spatial consistency. \citeauthor{dong2018soft} utilize the Thin-Plate-Spline (TPS) transformation to align features of the appearance inputs with those of the motion inputs before GAN-based generation. Similar feature warping can also yield impressive facial animation results for face video synthesis \cite{chen2020puppeteergan}.
Unfortunately, the TPS transformation is determined by a few control points, which restricts its warping capability owing to the low number of degrees of freedom; thus it can't precisely model geometric deformations. Alternatively, \citeauthor{liu2019liquid} propose liquid warping based on 3D SMPL models to achieve a similar feature alignment. However, the SMPL models \cite{loper2015smpl} only describe naked human bodies, so they can't model the surface deformations of clothes and hair. Instead of warping features, \citeauthor{han2019clothflow} propose to warp images using flows, which is similar to our approach in spirit. However, they directly estimate the dense flow field from misaligned features of the appearance and motion inputs, failing to model large deformations when the two inputs differ greatly from each other in motion. Besides, none of these warping-based methods considers the temporal consistency between the warping operations of neighboring frames, leaving them incapable of synthesizing coherent videos. Moreover, these methods all adopt standard convolutions in their networks, whose fixed receptive fields can't accommodate shape variations; hence they can't extract appropriate features for human subjects with various shapes when estimating the warping functions.
\section{Introduction} The origin of energetic particles is a long-standing problem of major importance in astrophysics. While it is widely assumed that cosmic rays (CRs) with energies up to $\sim10^{15}$~eV are produced at non-relativistic shocks of Galactic supernova remnants, higher-energy particles, in particular the so-called ultra-high-energy cosmic rays (UHECRs) with energies above $\sim10^{18}$~eV, are presumably generated in extragalactic systems with relativistic plasma outflows -- active galactic nuclei (AGN) and/or gamma-ray bursts (GRBs). Non-thermal synchrotron and inverse Compton emission in blazar jets extends over a broad energy range, from radio up to TeV $\gamma$ rays, indicating the presence of ultrarelativistic electrons. The recently established possible association of a high-energy neutrino source with the flaring blazar TXS 0506+056 \citep{2018Sci...361.1378I} shows that CR hadrons, too, can be produced in AGN. High-energy particles in AGN and GRBs are often assumed to be accelerated at shock waves associated with the jets. These shocks have Lorentz factors, $\gamma_{\rm sh}$, ranging from mildly-relativistic to ultrarelativistic values. Many such systems are magnetized, exhibiting inherently quasi-perpendicular and superluminal conditions. \textcolor{black}{Superluminal} shocks are mediated by magnetic reflection of the incoming flow off the shock-compressed magnetic field \cite[e.g.][]{langdon1988,gallant1992,hoshino1992}. Coherent gyration of particles at the shock front breaks up into charge bunches and triggers the synchrotron maser instability (SMI), which excites large-amplitude electromagnetic waves of the extraordinary mode (X-mode) that can escape towards the upstream region.
This precursor wave emission has been confirmed through one-dimensional (1D) \citep[e.g.][]{langdon1988,hoshino1991,gallant1992,hoshino1992,amato2006,plotnikov2019} and two-dimensional (2D) \citep[e.g.][]{sironi2009,sironi2011,iwamoto2017,iwamoto2018,plotnikov2018,iwamoto2019} PIC simulations. In electron-ion plasmas, interactions of the incoming electrons with the precursor waves can also generate large-amplitude longitudinal electrostatic oscillations, the so-called wakefield \citep{lyubarsky2006}. As demonstrated by \citet{hoshino2008}, a large-amplitude coherent electromagnetic wave propagating in the plasma can expel electrons in front of the wave packet and thus induce a longitudinal polarization electric field. Electron expulsion \textcolor{black}{results because} the so-called ponderomotive force is proportional to the gradient of the wave pressure and acts much more strongly on electrons than on ions. The electric field excites longitudinal electron motions that lead to electrostatic Langmuir waves. The formation of large-amplitude wakefields results from the parametric decay instability \citep[PDI; e.g.][]{kruer1988}. In this wave-wave interaction the large-amplitude electromagnetic (pump) wave decays into a Langmuir wave and a scattered electromagnetic (light) wave. If the pump-wave frequency is much larger than the plasma frequency, Forward Raman Scattering (FRS) is triggered, in which the scattered electromagnetic wave and the Langmuir wave propagate in the same direction as the pump wave. The wavelength of the Langmuir wave is close to the electron inertial length, and its phase velocity approaches the group velocity of the pump wave, which is close to the speed of light. Electrons and ions can be energized to very high energies in a manner analogous to wakefield acceleration (WFA) during the nonlinear \textcolor{black}{collapse} of the Langmuir waves \citep{hoshino2008}.
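The last statement can be quantified with the cold unmagnetized-plasma dispersion relation, $\omega^2=\omega_{\rm pe}^2+c^2k^2$, which gives a pump-wave group velocity $v_g=c\,(1-\omega_{\rm pe}^2/\omega_0^2)^{1/2}$. A minimal sketch (magnetized corrections are neglected and the frequency ratios are illustrative):

```python
import math

C = 1.0  # speed of light in normalized units

def group_velocity(omega0, omega_pe):
    """Group velocity of an EM pump wave of frequency omega0 from the cold
    unmagnetized-plasma dispersion relation omega^2 = omega_pe^2 + c^2 k^2."""
    return C * math.sqrt(1.0 - (omega_pe / omega0) ** 2)

# For omega0 >> omega_pe the group velocity, and hence the phase velocity
# of the driven Langmuir wave, approaches the speed of light.
for ratio in (2.0, 10.0, 100.0):
    print(f"omega0/omega_pe = {ratio:6.1f}: v_g/c = "
          f"{group_velocity(ratio, 1.0):.6f}")
```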
WFA was first proposed in laboratory plasmas \citep{1979PhRvL..43..267T} and later applied to UHECR acceleration \citep[e.g.][]{2002PhRvL..89p1101C}. It was then demonstrated through laser-plasma experiments and simulations \citep[e.g.][]{kuramitsu2008} that WFA produces power-law energy spectra with a spectral index of~2. Relativistic magnetized shocks have recently been studied with 2D PIC simulations for the case of pair plasmas \citep{iwamoto2017,iwamoto2018,sironi2009,plotnikov2018}, electron-ion plasmas \citep{sironi2011,stockem2012,iwamoto2019} and also mixed-composition plasmas \citep{stockem2012}. \citet{iwamoto2017} \textcolor{black}{demonstrated} that simulations need high numerical resolution to capture the precursor waves, in which case coherent waves persist even in weakly magnetized plasmas dominated by the relativistic Weibel instability \cite[e.g.][]{Kato2010,sironi2011}. In pair plasmas, the precursor wave amplitudes were found to be systematically smaller in 2D simulations \textcolor{black}{than in the 1D case, but are still} sufficient to disturb the upstream medium. 2D simulations with the magnetic field in the simulation plane showed that ordinary-mode (O-mode) waves are also excited, \textcolor{black}{which at low magnetizations are amplified by} the Weibel instability \citep{iwamoto2018}. The amplitudes in pair plasmas are in general much smaller than at ion-electron shocks \citep{iwamoto2019}. In conditions of high electron magnetization the wave energy exceeds that in pair plasmas by almost two orders of magnitude, and the 2D amplitude is close to the 1D level. This amplification at high-\textcolor{black}{$\gamma_{\rm sh}$} shocks is attributed to a positive feedback process associated with ion-electron coupling through the induced wakefields.
In the turbulent wakefields close to the shock, the electrons can be efficiently heated, so that energy equipartition between electrons and ions may be achieved before the flow arrives at the shock front. At the same time, non-thermal electrons and ions can be generated. Most \textcolor{black}{published studies address} ultra-relativistic shocks with Lorentz factors $\gamma_{\rm sh} \geq 10$. The mildly relativistic regime, $\gamma_{\rm sh} \approx 3$, has \textcolor{black}{been explored only in low-resolution studies, which \textcolor{black}{for superluminal shocks} show a very weak \citep{sironi2011} or no \citep{lyubarsky2006} wakefield.} It has been estimated that only for electron-ion shocks with $\gamma_{\rm sh}\gtrsim (m_i/m_e)^{1/3}$, where $m_i/m_e$ is the ion-to-electron mass ratio, will the electrons form the ring-like phase-space distribution unstable to SMI. \textcolor{black}{If so, one would expect little electron energization upstream of the shock in blazar jets, which has important consequences for their synchrotron and inverse Compton emission} \cite[e.g.][]{sikora2013}. Here we \textcolor{black}{revisit the efficiency of WFA and the level of electron-proton coupling at mildly-relativistic \textcolor{black}{magnetized} shocks in electron-ion plasma with unprecedentedly high-resolution 2D PIC simulations}. We also account for ion-scale corrugations of the shock surface by employing a very large computational box. \textcolor{black}{We study strictly perpendicular shocks, for which the strength of the precursor wave is expected to be the largest among all superluminal obliquities \citep{lyubarsky2006,sironi2011}.} \textcolor{black}{We assume} a shock Lorentz factor of $\gamma_{sh}\simeq 3$ and a plasma magnetization (the ratio of the Poynting flux to the kinetic energy flux) of $\sigma=0.1$.
\textcolor{black}{These values are in the range of those expected for internal shocks in AGN jets.} In this first paper we discuss the shock structure and the generation of plasma instabilities and waves. In a forthcoming publication (Ligorini et al., in preparation, Paper II) we present the particle acceleration and heating mechanisms and discuss the energy transfer from ions to electrons downstream of the shock. Section~\ref{sec:setup} presents the simulation setup. Section~\ref{sec:out-of-plane} shows results for the out-of-plane field orientation, which are compared to the case with the in-plane magnetic field in Section~\ref{sec:in-plane}. Section~\ref{sec:summary} presents a summary and conclusions of this first part of our study. \section{Simulation setup} \label{sec:setup} We use a modified version of the relativistic electromagnetic PIC code TRISTAN \citep{buneman1993} with MPI-based parallelization \citep{niemiec2008} and the option to trace individual particles. The simulation setup is shown in Fig.~\ref{fig:setup}. \textcolor{black}{An electron-ion beam flows with speed $\boldsymbol{v_0}$ in the negative $x$-direction. It bounces off a {\itshape reflective wall} at the left side of the simulation box and collides} with the incoming flow to form a shock propagating in the positive $x$-direction. 
\begin{figure} \includegraphics[width=0.99\linewidth]{plot_eps/fig1.png} \caption{Illustration of the simulation setup.} \label{fig:setup} \end{figure} \textcolor{black}{In our 2D3V simulations we use a two-dimensional spatial grid but follow all three components of the particle momenta and electromagnetic fields.} The beam carries a large-scale homogeneous magnetic field, $\boldsymbol{B_0}$, \textcolor{black}{oriented perpendicular to the shock normal, and the associated motional electric field, $\boldsymbol{E_0} = - \boldsymbol{\mathit v_0 \times B_{0}}$.} We study two configurations of the large-scale field with respect to the simulation plane, \textcolor{black}{described by the angle~$\varphi_B$ (see Fig.~\ref{fig:setup})}: the {\it out-of-plane} configuration, with $\boldsymbol{B_0}=B_{0z}\boldsymbol{\hat{z}}$ and $\varphi_B=90^{\circ}$, and the {\it in-plane} setup, with $\boldsymbol{B_0}=B_{0y}\boldsymbol{\hat{y}}$ and $\varphi_B=0^{\circ}$. \textcolor{black}{The beam Lorentz factor, $\gamma_{0}=2.03$, results in a shock Lorentz factor of $\gamma_{\rm sh}\simeq 3.3$ in the upstream rest frame. The total plasma magnetization, $\sigma = 0.1$, is expressed in terms of the simulation-frame magnetic-field strength, $B_{0}$, and the ion density, $N_{i}$, as} $\sigma = B_{0}^2/(\mu_{0} N_{i}(m_{e} +m_{i})\gamma_{0}c^2)$, where $c$ is the speed of light, $\mu_{0}$ is the permeability of free space, and $m_e$ and $m_i$ are the electron and ion masses, respectively \citep{hoshino1992}.
\textcolor{black}{The reduced ion-to-electron mass ratio, $m_{i}/m_{e} = 50$, determines the electron and ion magnetizations, $\sigma_{e}\simeq 5.1$ and $\sigma_{i}\simeq \sigma$, through $1/\sigma = 1/\sigma_{e} + 1/\sigma_{i}$.} We verified that our results do not change if a higher mass ratio of $m_{i}/m_{e} = 100$ is used. The unit of length used here is the ion skin depth, $\lambda_{\mathrm{si}} = c/\omega_{\rm pi}$, where $\omega_{\rm pi}=\sqrt{e^2N_i/(\gamma_0\epsilon_0m_{i})}$ is the relativistic ion plasma frequency. Here, $e$ is the electron charge and $\epsilon_0$ is the vacuum permittivity. Time is given in units of the inverse of the upstream ion cyclotron frequency, $\Omega_{\mathrm{ci}} = eB_{\mathrm{0}}/(m_i \gamma_0)$. We ran our 2D simulations up to $t_{\rm max}=84.3\,\Omega_{\mathrm{ci}}^{-1}$, and complementary 1D simulations reach $t_{\rm max}=163.1\,\Omega_{\mathrm{ci}}^{-1}$. The time-step is $\delta t=(1/1131)\,\omega_{\rm pi}^{-1}=(1/3556.8)\,\Omega_{\mathrm{ci}}^{-1}$. \textcolor{black}{\citet{iwamoto2017} noted that numerical investigations of magnetized shocks require high resolution, otherwise the precursor waves may be artificially damped. Based on extensive tests described in Appendix~\ref{app:conv_test}, we set the grid resolution to $\lambda_{\mathrm{se}} = 80\Delta$, where $\lambda_{\mathrm{se}}=\sqrt{m_e/m_i}\lambda_{\mathrm{si}}$ is the electron skin depth and $\Delta$ is the size of the grid cells. The corresponding ion skin depth is $\lambda_{\mathrm{si}}\simeq 566\Delta$. This resolution is twice as high as that adopted in \citet{iwamoto2017,iwamoto2018}. This unprecedentedly high resolution allowed us to detect precursor waves in the mildly-relativistic regime that were invisible at} lower resolution \citep[e.g.][]{sironi2011}. Since our convergence tests show no dependence of the results on the number of particles per cell, $N_{\rm ppc}$, here we use $N_{\rm ppc}=10$ per particle species.
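A quick numerical cross-check of these parameters, using only the relations quoted above (a minimal sketch):

```python
import math

sigma = 0.1        # total plasma magnetization
mass_ratio = 50.0  # reduced ion-to-electron mass ratio m_i/m_e

# sigma_s = B0^2 / (mu_0 N_i m_s gamma_0 c^2) for species s, so the total
# magnetization satisfies 1/sigma = 1/sigma_e + 1/sigma_i.
sigma_e = sigma * (1.0 + mass_ratio)        # electron magnetization, ~5.1
sigma_i = sigma * (1.0 + 1.0 / mass_ratio)  # ion magnetization, ~sigma

# Electron skin depth lambda_se = sqrt(m_e/m_i) * lambda_si; with
# lambda_se = 80 grid cells, the ion skin depth follows.
lam_se = 80.0
lam_si = lam_se * math.sqrt(mass_ratio)     # ~566 cells

print(f"sigma_e = {sigma_e:.2f}, sigma_i = {sigma_i:.3f}, "
      f"lambda_si = {lam_si:.0f} cells")
```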
Relativistic shock simulations are extremely prone to the numerical Cherenkov instability, which artificially heats and slows down the plasma beam \citep{yee1966,birdsall1991,hockney1981}. We minimize these unphysical effects by using Friedman filters, a fourth-order accurate FTFD field-pusher \citep{greenwood2004}, and by injecting cold plasma. \textcolor{black}{This numerical model also stabilizes the beam against the so-called finite-grid instability arising from an unresolved Debye length. The performance of the model has been extensively verified through test simulations.} \textcolor{black}{Our {\itshape moving injection layer} ensures that the simulated plasma contains all particles and waves propagating upstream, while we minimize the propagation time of the unperturbed beam.} The transverse size of our simulation box is $L_y=5760\Delta\simeq 10\lambda_{\mathrm{si}}$, \textcolor{black}{large enough to capture structures at the shock surface with a characteristic length of several $\lambda_{\mathrm{si}}$. The box length, $L_x$, increases during the simulations and reaches a final size of $L_x=160000\Delta\simeq 283\lambda_{\mathrm{si}}$.} \section{\textcolor{black}{Shocks with out-of-plane magnetic field}} \label{sec:out-of-plane} \textcolor{black}{A mildly relativistic, strictly perpendicular shock in an ion-electron plasma {with the large-scale} magnetic field pointing out of the simulation plane is followed up to $t_{\rm max}\Omega_{\mathrm{ci}}= 84.3$. Unlike for highly relativistic {flows} \citep[e.g.][]{sironi2011, iwamoto2019}, {the mildly relativistic shock is not} laminar.
At $t\Omega_{\mathrm{ci}}\sim 6.5$ it develops corrugations, visible in both the density and the {electromagnetic} field, which are fully developed at $t\Omega_{\mathrm{ci}}\sim 8.5$.} In Section~\ref{sec:linstage} we first present the structure of the semi-laminar shock at $t\Omega_{\mathrm{ci}}\simeq 7.5$ {to demonstrate that the SMI already operates at this early stage, {in line with theoretical expectations}. {In Section~\ref{sec:latestage}} \textcolor{black}{we discuss the} fully evolved rippled shock.} \begin{figure} \centering \includegraphics [width=0.99\linewidth] {plot_eps/fig2.png} \caption{Distribution at time $t\Omega_{\mathrm{ci}} = 7.5$ of the normalized electron density, $N_{\mathrm{e}}/N_{0}$ (a), \textcolor{black}{the $E_{x}$ component of the electric field (b), and the magnetic-field fluctuations, $\delta B_{z}$ (c). Logarithmic scaling is applied, which is sign-preserving for electromagnetic fields (e.g. $\mathrm{sgn}(B_{z}) \cdot \{2+\log[\max(|B_{z}|/B_{0},10^{-2})]\}$),} so that field amplitudes below $10^{-2}B_{0}$ are not resolved. Panel (d) shows the transversely averaged profile of the electric field, $\langle E_{\mathrm{x}}\rangle$, and panel (e) displays the profile of $\delta B_{\mathrm{z}}$ taken along $y/\lambda_{\mathrm{si}}=6$.} \label{fig:structure_early} \end{figure} \begin{figure} \begin{center} {\includegraphics[width=0.9\linewidth]{plot_eps/fig3a.png}} {\includegraphics[width=0.9\linewidth]{plot_eps/fig3b.png}} \end{center} \caption{Fourier power spectra for $B_{\mathrm{z}}$ \textcolor{black}{and $E_{\mathrm{x}}$ at $t\Omega_{\mathrm{ci}} = 7.5$, calculated upstream of the shock in the region $x/\lambda_{\mathrm{si}}=13-18$ (compare Fig.~\ref{fig:structure_early}).
{The solid white line represents the theoretical cutoff of the precursor waves.}} } \label{fig:fourier_early} \end{figure} \begin{figure*} \begin{center} {\includegraphics[scale=0.47]{plot_eps/fig4.png}} \end{center} \caption{Map of the normalized magnetic-field fluctuations, $\delta B_{\mathrm{z}}$, at time $t\Omega_{\mathrm{ci}} = 56.2$. The shock is located at $x/\lambda_{\mathrm{si}}\simeq 84$. Sign-preserving logarithmic scaling is applied (see Fig.~\ref{fig:structure_early}). Blue squares mark the initial positions of Regions~1 and~2, chosen for the Fourier-Laplace spectra presented in Figs.~\ref{fig:omegak_bz_oop_1} and~\ref{fig:omegak_bz_oop_2}. \textcolor{black}{Note that we show only part of the precursor, which extends up to $x/\lambda_{\mathrm{si}}\approx 177\approx ct$.}} \label{fig:full_map200} \end{figure*} \subsection{Semi-laminar shock stage} \label{sec:linstage} \textcolor{black}{Fig.~\ref{fig:structure_early} displays the early {development of} the shock front, at this time located at $x\simeq 11\lambda_{\mathrm{si}}$. The density compression by a factor of about $3$ is close to the theoretically expected value, $\kappa=3.18$, for a relativistic plasma with adiabatic index $\Gamma={2}$ \citep{gallant1992}. Upstream of the shock, at $x\gtrsim 12\lambda_{\mathrm{si}}$, one can see X-mode waves as plane-wave fluctuations in $B_z$ that move with the speed of light and have a wave vector $\boldsymbol{k}_{\rm Bz}={k}_{\rm Bz, x}\boldsymbol{\hat{x}}$. \textcolor{black}{The tip of the waves is at $x\approx 23\lambda_{\mathrm{si}}$, which is the light travel distance, $ct$, from the reflective wall for $t\Omega_{\mathrm{ci}} = 7.5$.} One can also see co-moving longitudinal fluctuations in $E_{\mathrm{x}}$ at longer wavelength (Figs.~\ref{fig:structure_early}(b) and \ref{fig:structure_early}(d)) with the same phase velocity.
The normalized amplitude of these {\it electrostatic} waves, averaged over the three oscillations observed, $E_x/B_0c\simeq 1.8\cdot 10^{-2}$, is a factor of ten smaller than that of the X-mode waves. Note that already at this very early stage the shock surface is perturbed.} The emission of X-mode \textcolor{black}{waves indicates the operation of the SMI at the shock \citep[e.g.][]{hoshino2008,iwamoto2017,iwamoto2018}. We calculated Fourier spectra of $B_{\mathrm{z}}$ and $E_{\mathrm{x}}$ for a region upstream of the shock at $x/\lambda_{\mathrm{si}}=13-18$ (see Fig.~\ref{fig:structure_early}). \textcolor{black}{The waves localized in the region $x/\lambda_{\mathrm{si}}\sim 20-23$ were emitted during the initial beam reflection off the conducting wall, when the shock had not yet formed. They are heavily affected by the initial conditions and hence not considered in our analysis.} The X-mode waves can reach the precursor only if the $x$-component of their group velocity exceeds the shock speed, which imposes a limit on the wave vector. Fig.~\ref{fig:fourier_early} demonstrates that most of the wave power is indeed observed at $k_x$ larger than that limit, supporting the association of the X-mode waves with the SMI at the shock. } \textcolor{black}{The interaction of the precursor waves with the magnetized electron-proton plasma upstream should lead to electrostatic wakefield fluctuations, which are evident at $k_{\rm Ex,x}\lambda_{\mathrm{si}}\simeq 2.5-3.0$ in the power spectrum of $E_{\mathrm{x}}$ in Fig.~\ref{fig:fourier_early}(b).} The signal at \textcolor{black}{${k}_{\rm Ex, x}\lambda_{\mathrm{si}}\sim 1-4$ and ${k}_{\rm Ex, y}\lambda_{\mathrm{si}}\sim 3-20$} is due to filamentation and is discussed in detail in Section~\ref{sec:filam}.
\textcolor{black}{In Appendix~\ref{app:lintheory} we derive the expected frequency of the X-mode waves as $\omega'/\Omega_{\mathrm{ce}}\gtrsim 1$, where $\Omega_{\mathrm{ce}}\simeq 2.25\omega_{\mathrm{pe}}$ is the electron cyclotron frequency and the prime denotes a quantity measured in the upstream frame. Wakefield generation by Raman scattering should then yield $\omega'_L\simeq \omega_{\mathrm{pe}}$ and $k'_L \simeq 1/\lambda_{\mathrm{se}}$ \citep{kruer1988, hoshino2008}. In the downstream (simulation) frame $k_{L,y}=k_{L,y}'$ and \begin{equation} \begin{aligned} k_{L,x}=&\,\gamma_0 k'_{L,x}\left(1-\beta_0\frac{\omega'_L}{c k'_{L,x}}\right) \simeq \frac{\gamma_0}{\lambda_{\mathrm{se}}}\left(1-\beta_0 \frac{\omega_{\mathrm{pe}} \lambda_{\mathrm{se}}}{c}\right) \\ =&\, \frac{\gamma_0}{\lambda_{\mathrm{si}}} \sqrt{\frac{m_i}{m_e}}\,(1-\beta_0) \simeq \frac{1.9}{\lambda_{\mathrm{si}}}, \end{aligned} \label{eq:exwavelenght} \end{equation} where we inserted our parameters to derive the last expression. Despite the poor wavenumber sampling of the signal in Fig.~\ref{fig:fourier_early}, the match is reasonable. } \subsection{Non-linear \textcolor{black}{electromagnetic} shock structure} \label{sec:latestage} \subsubsection{Precursor waves} Fig.~\ref{fig:full_map200} demonstrates that at time $t\Omega_{\mathrm{ci}} = 56.2$ the magnetic-field fluctuations extend to the far upstream. There, at $x/\lambda_{\mathrm{si}} \gtrsim 148$, one can only find waves \textcolor{black}{that were emitted very early, when the shock was still semi-laminar, and that have a very large $x$-component of their group speed.} Hence these precursor waves retain their plane-wave character. \textcolor{black}{Behind this region,} closer to the shock, the waves also have an oblique component. Near the shock and up to $x/\lambda_{\mathrm{si}}\sim 120$ (see also Fig.~\ref{fig:full_map}) \textcolor{black}{these oblique} waves form a quasi-regular pattern of oblique stripes.
\textcolor{black}{The emergence of the oblique wave component and formation of oblique stripes} are related to the ripples in the shock surface, as we demonstrate below. A~Fourier-Laplace analysis in two selected regions of the shock precursor confirms that the waves \textcolor{black}{in the upstream region} are X modes, presumably generated through SMI. {The regions are stationary in the upstream plasma rest frame, and their initial location in the simulation box is marked in Fig.~\ref{fig:full_map200}}. The electromagnetic field is correspondingly transformed to the upstream frame. The Fourier-Laplace power spectra \textcolor{black}{are shown in Figs.~\ref{fig:omegak_bz_oop_1} and~\ref{fig:omegak_bz_oop_2}. They} can be compared with the theoretical dispersion relation \textcolor{black}{for the electron SMI} discussed in Appendix~\ref{app:lintheory}. \textcolor{black}{Since the group velocity of the waves emitted at the shock is small except for the wave numbers near the light mode, the majority of the waves cannot outrun the shock and propagate upstream, and hence the dispersion relation is indicated with white dots only along $\omega' = k' c$ in Figs.~\ref{fig:omegak_bz_oop_1} and~\ref{fig:omegak_bz_oop_2}.} \begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{plot_eps/fig5.png} \end{center} \caption{\textcolor{black}{Fourier-Laplace spectrum in the \emph{upstream} frame of $B_z'$} in $\omega' - k'_x$ space taken along $y=5\lambda_{\mathrm{si}}$ in Region~1 marked in Fig.~\ref{fig:full_map200}. The time interval is $0.562\,\Omega_{\mathrm{ci}}^{-1}\simeq28.1\,\Omega_{\mathrm{ce}}^{-1}$, starting from time $t\Omega_{\mathrm{ci}}\simeq 56.23$. The angular frequency, $\omega'$, and the wave vector, $k'_x$, are normalized with the electron cyclotron frequency, $\Omega_{\mathrm{ce}}$. Overlaid with {white} dots is the dispersion relation derived in Appendix~\ref{app:lintheory}. 
} \label{fig:omegak_bz_oop_1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{plot_eps/fig6.png} \end{center} \caption{\textcolor{black}{Fourier-Laplace spectrum of $B_z'$ at $y=5\lambda_{\mathrm{si}}$ as in Fig.~\ref{fig:omegak_bz_oop_1}, but here for Region~2 marked in Fig.~\ref{fig:full_map200}.}} \label{fig:omegak_bz_oop_2} \end{figure} \textcolor{black}{The low-frequency modes of the SMI generated by ions are mostly subluminal \textcolor{black}{and do not propagate ahead of the shock}. In any case, they would not be detectable with our time window, $t\Omega_{\mathrm{ci}}\simeq 56.23$ to $t\Omega_{\mathrm{ci}}\simeq 56.79$, and our sampling every 10 time steps.} \textcolor{black}{Fig.~\ref{fig:omegak_bz_oop_1} demonstrates that in \emph{Region~1}, far ahead of the shock, the observed signal matches the theoretical dispersion relation for the electron SMI very well. In particular, the wave power lies mostly along the light mode, $\omega'=k'c$, and a few harmonic modes exist over a wide wave vector range. The signal between the harmonics arises from fluctuations in the shock-compressed magnetic field. The power spectrum in \emph{Region~2}, shown in Fig.~\ref{fig:omegak_bz_oop_2}, is heavily influenced by shock rippling and also by the non-linear evolution of the wave modes, but retains qualitative agreement with the electron maser model.} \textcolor{black}{Between Regions~1 and~2 one finds a slow transition from parallel to oblique modes, which represents a spatial mapping of the temporal development of shock rippling.} \begin{figure} \begin{center} \includegraphics [width=0.97\linewidth] {plot_eps/fig7.png} \end{center} \caption{Normalized \textcolor{black}{electron (a) and ion (b) density, as well as the fluctuations $\delta B_{\mathrm{z}}$ (c) and $E_{\mathrm{x}}$ (d), at $t\Omega_{\mathrm{ci}} = 84.3$. Logarithmic scaling is applied as in Fig.~\ref{fig:structure_early}. Panel (e) shows a close-up of the electron density in Region~A.
Panel (f) shows a profile of $\delta B_{\mathrm{z}}$ taken in Region~C marked in panel (c).} } \label{fig:full_map} \end{figure} \subsubsection{Effects of shock rippling on wave properties \label{sec:ripples}} The shock ripples visible in Figs.~\ref{fig:full_map200} and~\ref{fig:full_map} \textcolor{black}{propagate in the $-y$-direction with an average speed \textcolor{black}{$v_{\rm rippl}\approx 0.8c$}, commensurate with that of ions gyrating at the shock. Their mean separation {along the shock surface}, $\lambda_{\rm rippl}\simeq 3.3\lambda_{\mathrm{si}}\simeq 2\,r_g$, and extension in the $x$-direction, $1.6\lambda_{\mathrm{si}}\simeq r_g$, {reflect the} ion gyro-radius, $r_g$.} We associate the shock ripples with the modulation of shock-reflected ions along the shock surface, first described by \citet{burgess2007} for low-Mach-number nonrelativistic shocks. The instability occurs only in simulations with out-of-plane magnetic field, \textcolor{black}{for which the ions gyrate in the simulation plane. In contrast, parallel-propagating waves driven by ion temperature anisotropy are frequently observed with in-plane field in the regime of low Mach numbers} \citep[e.g.][]{winske1988,2014PhPl...21b2102U}, but also in studies of high-Mach-number perpendicular nonrelativistic shocks \citep{wieland2016} and ultrarelativistic perpendicular shocks with moderate magnetizations, $\sigma\lesssim 0.1$ \citep{sironi2013}. In our simulation with $\varphi_B=90^{\circ}$ the shock ripples quickly grow from small-scale fluctuations to a long-wave mode visible in Figs.~\ref{fig:full_map200} and~\ref{fig:full_map}, \textcolor{black}{in particular in the ion density.} Their structure is highly dynamic on time-scales shorter than $\Omega_{\mathrm{ci}}^{-1}$ \textcolor{black}{and driven by the magnetic-field compression and charge separation induced by the} different inertia of electrons and ions.
\textcolor{black}{Arcs of increased magnetic field, electron density, and associated electric field are generated. The maps of $B_z$ and $E_x$ in Figs.~\ref{fig:full_map}(c-d) suggest that these arcs are the origin of} the observed pattern of oblique waves. \textcolor{black}{The oblique structure of the precursor waves results from relativistic retardation and aberration of light and from precursor-wave emission in a direction normal to the local front of the arcs. The arcs move with $v_{\rm rippl}$ close to $c$ in the negative $y$-direction. Retardation of the emission in the $x$-direction in the simulation frame gives rise to the stripes of high wave intensity, seemingly oriented at an angle of $\vartheta\approx 37^{\circ}$ to the $y$-axis. Aberration provides a large $x$-component of the wave speed, so that the precursor waves can outpace the shock.} The emission normal to the arc front results \textcolor{black}{from phase bunching \textcolor{black}{of the electron distribution} \citep{hoshino1991,sprangle1977}, which requires that the frequency of the wave be} slightly higher than the plasma cyclotron frequency. \textcolor{black}{Then} the particles on average gyrate by less than $2\pi$ in a wave period and slip behind the waves. After a number of wave periods their distribution \textcolor{black}{is} bunched \textcolor{black}{in gyrophase. The wave emission will thus be defined by the structure of} the compressed magnetic field at the arcs, in which the electrons gyrate. The \textcolor{black}{combined effects of gyrophase bunching, retardation, and aberration} cause the direction of \textcolor{black}{precursor-wave} emission to vary with the evolving shape of the ripples, leading to a wide range of angles, as visible in Figs.~\ref{fig:full_map200} and~\ref{fig:full_map}(c-d) and apparent in the 2D Fourier power spectra of fluctuations in $B_{\mathrm{z}}$ and $E_{\mathrm{x}}$ shown in Figs.~\ref{fig:fourier2}(a-b) for waves in \emph{Region~B} marked in Fig.~\ref{fig:full_map}.
The dominant emission pattern, \textcolor{black}{however, comes from the average ripple profile and is compatible} with the ripple scale-length. \begin{figure} \includegraphics[width=0.99\linewidth]{plot_eps/fig8a.png} \includegraphics[width=0.99\linewidth]{plot_eps/fig8b.png} \includegraphics[width=0.99\linewidth]{plot_eps/fig8c.png} \caption{Fourier spectra of the fluctuations in $B_{\mathrm{z}}$ (a), $E_{\mathrm{x}}$ (b), and the electron density (c) for Region~B at $x/\lambda_{\mathrm{si}}=130-140$ at time $t\Omega_{\mathrm{ci}} = 84.3$ (see Fig.~\ref{fig:full_map}). The solid white line represents the wave vector cutoff of the precursor waves.} \label{fig:fourier2} \end{figure} Fig.~\ref{fig:full_map}(f) shows the averaged and smoothed profile of the $B_{\mathrm{z}}$ magnetic-field fluctuations taken along an oblique direction, as marked with a blue parallelogram in panel (c). \textcolor{black}{High-intensity patches are spaced every $(2-3.5)\lambda_{\mathrm{si}}$,} consistent with $\lambda_{\rm rippl}$. Similar wave profiles are observed in $E_{\mathrm{x}}$ and $E_{\mathrm{y}}$ (not shown). \textcolor{black}{Short-wavelength fluctuations in $B_{\mathrm{z}}$ have an associated electric field in the $x-y$ plane, approximately perpendicular to the wave vector, which suggests that they are X-mode waves. Their spectra have cutoffs, indicated as white lines in Fig.~\ref{fig:fourier2}(a-b), that arise from the requirement that the waves be faster than the shock. While the peculiar structure of the precursor waves in our mildly relativistic shock results from shock rippling, the emission mechanism appears to be generic and corresponds to the well-known electron synchrotron maser.} In addition to the dominant oblique component, the power spectrum of $B_{\mathrm{z}}$ oscillations in Fig.~\ref{fig:fourier2}(a) shows parallel waves with \textcolor{black}{${k}_{\rm Bz, x} \lambda_{\mathrm{si}} \simeq (10-30)$ that have an electric counterpart in $E_{\mathrm{y}}$, but not in $E_{\mathrm{x}}$.
The wakefield in $E_{\mathrm{x}}$ has a wave number ${k}_{\rm Ex, x}\lambda_{\mathrm{si}}\simeq 2$ and a wide distribution in ${k}_{\rm Ex, y}$, reflecting the entire range of obliquity of the precursor waves. The wave number of the wakefield agrees with that expected in the standard electron SMI scenario, as estimated in equation~\ref{eq:exwavelenght}. The wakefield non-linearly couples to magnetic-field and density perturbations in the same wave band} (Figs.~\ref{fig:fourier2}a and~\ref{fig:fourier2}c). \subsubsection{Filamentation via parametric instability \label{sec:filam}} \textcolor{black}{The Fourier spectrum of the electron density in Fig.~\ref{fig:fourier2}(c) shows significant wave power at $k_y\lambda_{\mathrm{si}}\sim 10-30$ and $k_x\lambda_{\mathrm{si}}\sim 2-4$. The corresponding density perturbations are highlighted in Fig.~\ref{fig:full_map}(e). They form oblique filamentary structures whose transverse scale is a few $\lambda_{\mathrm{se}}$.} We interpret these perturbations as a result of the parametric filamentation instability \citep{kaw1973,drake1974}, triggered when intense electromagnetic waves interact with the incoming upstream plasma. Similar filaments in density and magnetic field have recently been identified in high-resolution studies of ultrarelativistic magnetized pair shocks \citep{iwamoto2017,iwamoto2018,plotnikov2018} and electron-ion shocks \citep{iwamoto2019}. Their presence indicates coherence and self-focusing of the precursor waves in 2D systems. The filaments observed in pair plasmas largely retain their structure \textcolor{black}{during advection toward the shock. In electron-ion plasma, instead,} the filaments quickly merge to form long, ion-scale turbulent structures ahead of the shock. At our mildly relativistic shock the filaments resemble those at pair shocks.
They \textcolor{black}{are observable very far upstream, up to $x/\lambda_{\mathrm{si}}\sim 200$, but their structure is disrupted by the oblique waves, and their amplitude is only $\delta N_e/N_0\approx 0.1$. The corresponding spectral signature in the magnetic field is very weak (compare Fig.~\ref{fig:fourier2}(a)). Although at mildly relativistic shocks the precursor waves are less prominent than in the ultrarelativistic case, and the parametric instability is only weakly driven, our high-resolution simulations can still detect coherent precursor-wave emission.} \begin{figure} \begin{center} \includegraphics [width=0.99\linewidth] {plot_eps/fig9.pdf} \end{center} \caption{\textcolor{black}{Normalized profiles along the shock normal at time $t\Omega_{\mathrm{ci}} = 84.3$ of $\langle E_{\mathrm{x}}\rangle$, averaged over $y$ (a), and of $\delta B_{\mathrm{z}}$, measured in the middle of the box at $y/\lambda_{\mathrm{si}}=6$ (b).}} \label{fig:profile} \end{figure} \begin{table} \caption{Normalized precursor wave amplitudes, $\delta B/B_0$, and energy densities, $\epsilon_{\rm pe}=\delta B^2/(2\mu_0\gamma_0N_em_ec^2)$, for 2D simulations with $\varphi_B=90^{\circ}$ and $\varphi_B=0^{\circ}$ and for a 1D simulation. For comparison we also list results of \citet{gallant1992}, marked ``a'', and \citet{iwamoto2019}, marked ``b''. Here, $\delta B= \delta B_z$ for the $\varphi_B=90^{\circ}$ and 1D runs, and $\delta B= \sqrt{\delta B_{\mathrm{x}}^2+\delta B_{\mathrm{y}}^2+\delta B_{\mathrm{z}}^2}$ for $\varphi_B=0^{\circ}$. The amplitudes are averaged in a region of length $\delta x=5\lambda_{\mathrm{si}}$ located $2\lambda_{\mathrm{si}}$ upstream of the shock. The field fluctuations in \citet{gallant1992} were measured in the entire precursor region.
\citet{iwamoto2019} calculated $\delta B$ and $\epsilon_{\rm pe}$ in a region about $70\lambda_{\mathrm{si}}$ in length, located $32\lambda_{\mathrm{si}}$ upstream of the shock.} \centering \begin{tabular}{lcc} \hline \hline \noalign{\smallskip} & $\delta B/B_0$ & $\epsilon_{\rm pe}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} 2D $i-e^-$, $\varphi_B=90^{\circ}$ & $0.19\pm 0.01$ & $0.09\pm 0.005$ \\ 2D $i-e^-$, $\varphi_B=0^{\circ}$ & $0.15\pm 0.01$ & $0.07\pm 0.005$ \\ 1D $i-e^-$ & $0.18\pm 0.01$ & $0.08\pm 0.005$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1D$^{\rm a}$ $e^+\!\!\!-e^-$ & 0.46$^{+0.18}_{-0.12}$ & 0.53$^{+0.22}_{-0.15}$ \\ 2D$^{\rm b}$ $e^+\!\!\!-e^-$ & $0.064\pm 0.031$ & $0.010\pm 0.005$ \\ 1D$^{\rm b}$ $e^+\!\!\!-e^-$ & $0.10\pm 0.01$ & $0.025\pm 0.005$ \\ 2D$^{\rm b}$ $i-e^-$ & $0.50\pm 0.10$ & $0.65\pm 0.25$ \\ 1D$^{\rm b}$ $i-e^-$ & $0.75\pm 0.09$ & $1.4\pm 0.4$ \\ \noalign{\smallskip} \hline \end{tabular} \label{table1} \end{table} \subsubsection{Precursor wave amplitudes \label{precursor_waves}} \textcolor{black}{Fig.~\ref{fig:profile} shows profiles in the precursor region of $\delta B_{\mathrm{z}}$, taken at $y/\lambda_{\mathrm{si}}=6$, and of $\langle E_{\mathrm{x}}\rangle$, averaged over $y$ to filter out the contribution of the oblique large-amplitude precursor waves. These profiles can be compared to the corresponding profiles in the 1D simulation that we describe in Appendix~\ref{app:1d}.} \textcolor{black}{The magnetic-field fluctuation amplitude, \textcolor{black}{normalized to the upstream field strength, $\delta B/B_0$}, and the energy density, \textcolor{black}{normalized to the upstream \emph{electron} kinetic energy}, $\epsilon_{\rm pe}=\delta B^2/(2\mu_0\gamma_0N_em_ec^2)$, are listed in Table~\ref{table1} in comparison with the results of other studies.
For out-of-plane magnetic field the only relevant polarization is $\delta B=\delta B_{\mathrm{z}}$.} The amplitudes are averaged in the region $x/\lambda_{\mathrm{si}}=129-134$, located about $2\lambda_{\mathrm{si}}$ from the shock. \textcolor{black}{Our 1D test simulation is described in detail in Appendix~\ref{app:1d}. The amplitude of the precursor waves is comparable in the 2D and 1D simulations. In 2D, the average wave amplitude out to $x/\lambda_{\mathrm{si}}\approx 230$ is even larger than that of the strongly coherent oscillations at the tip of the precursor, which were generated in the early linear phase (see Fig.~\ref{fig:full_map200}). Fig.~\ref{fig:zoom_1d} demonstrates that in 1D the waves closer to the shock are weaker, possibly due to heating of the electrons that suppresses higher harmonics of the electron SMI \citep{amato2006}. The same should happen in a 2D simulation, and inhomogeneities at the shock may additionally reduce the wave coherency. Instead, we observe that shock rippling is highly organized and produces a semi-coherent, modulated train of oblique precursor waves. Thus, rather than being destructive, the ripples amplify the precursor-wave amplitude.} \textcolor{black}{The magnetic-field amplitude can be compared to that observed at ultra-relativistic shocks. Since the electron magnetization is a relevant parameter, in Table~\ref{table1} we list available results for shocks with $\sigma_e=5$. As expected, both $\delta B/B_0$ and $\epsilon_{\rm pe}$ are smaller than those in 1D simulations of pair shocks \citep{gallant1992}, but they are much larger than those obtained in the high-resolution 1D and 2D simulations of pair shocks by \citet{iwamoto2019}. The wave energy at our shock is also much smaller than that at ion-electron shocks with $\gamma_0=40$ \citep{iwamoto2019}, which both in 1D and 2D exceeds that in pair plasmas by almost two orders of magnitude; there, too, the 2D amplitude is close to that in 1D.
The high wave intensity at high-$\gamma_{\rm sh}$ ion-electron shocks was attributed to the so-called positive feedback, in which incoming electrons accelerated by the wakefield cause enhanced precursor-wave emission, which in turn amplifies the wakefield. At the mildly relativistic shocks described here, the wakefield does not reach a very high amplitude (see Section~\ref{sec:wakefield}), and the positive feedback is not effective. However, electromagnetic precursor-wave amplification up to the 1D level is achieved through shock rippling.} \begin{figure} \begin{center} \includegraphics [width=0.99\linewidth] {plot_eps/fig10.pdf} \end{center} \caption{Stack plot of the averaged wakefield \textcolor{black}{profile, $\langle E_x\rangle/(B_0c)$, between times $t_0\Omega_{\mathrm{ci}}=56.2$ and $t_{\rm max}\Omega_{\mathrm{ci}}=84.3$. The shock is always located at the left edge of the panel.}} \label{fig:stacked_profile} \end{figure} \subsubsection{Wakefield waves \label{sec:wakefield}} As noted before, the ponderomotive force on the upstream plasma leads to longitudinal plasma \textcolor{black}{motions and an associated electrostatic wakefield, whose average amplitude does not exceed $\langle E_{\mathrm{x}}\rangle/(B_0c)\sim 0.01$. The weakness of the wakefield reflects the relatively low amplitude of the precursor waves, which can be expressed through the so-called strength parameter, $a$, \textcolor{black}{estimated} as \citep{iwamoto2017} \begin{equation} a\simeq\gamma_0\sqrt{\sigma_e}\frac{\omega_{\mathrm{pe}}}{\omega}\frac{\delta B}{B_{\mathrm{0}}}. \label{eq:a} \end{equation} Here $\omega$ is the wave frequency. If $a \gtrsim 1$, the precursor waves can generate an intense wakefield \citep{kuramitsu2008}. Fig.~\ref{fig:fourier2}(a) indicates typical wave numbers in the range $k\approx (15-40)\lambda_{\mathrm{si}}^{-1}\simeq (2.1-5.7)\lambda_{\mathrm{se}}^{-1}$.
The dispersion relation in equation~\ref{eq:xmodedisp_approx_fin} then gives the wave frequency $\omega\approx (2.5-6)\,\omega_{\mathrm{pe}}$. The average magnetic-field amplitude is $\delta B_{\mathrm{z}}/B_0\simeq 0.19$ (see Table~\ref{table1}), and altogether we find for the strength parameter \begin{equation} a\approx (0.14-0.35) . \end{equation} The corresponding amplitude of the wakefield can be estimated following \citet{hoshino2008}, using $\xi=1/2$ as for a linearly polarized wave: \begin{equation} \frac{\langle E_{\mathrm{x}}\rangle}{B_0c}\simeq \frac{\xi a^2}{\sqrt{1+\xi a^2}}\,\frac{1}{\sqrt{\sigma_e}\gamma_0}\approx (2-13)\cdot 10^{-3}, \label{eq:wakefield} \end{equation} in agreement with our simulation result.} Shock rippling causes enhanced emission of the precursor waves at oblique angles, which in turn produces oblique Langmuir waves. \textcolor{black}{They are averaged out of the wakefield profile shown in Fig.~\ref{fig:profile}(a), and so the local wave amplitude may be} \textcolor{black}{much larger than the mean} amplitude. In fact, from time $t\Omega_{\mathrm{ci}}\sim 40$ on, when the oblique precursor-wave structure is well established, episodes of stronger semi-coherent wave emission from the shock lead to a stronger wakefield in the near-upstream region \textcolor{black}{that evolves non-linearly.} Fig.~\ref{fig:stacked_profile} shows a stack plot of averaged wakefield profiles \textcolor{black}{for a time period of $28.1\,\Omega_{\mathrm{ci}}^{-1}$, starting at $t_0\Omega_{\mathrm{ci}}=56.2$. The profiles are given in shock-centered coordinates, $x-x_\mathrm{shock}$. Far upstream of the shock, the electrostatic waves propagate away from the shock, but within $70\,\lambda_{\mathrm{si}}$ of the shock the wakefield on average moves back toward the shock. } \textcolor{black}{We interpret} this {downstream-directed motion of the wakefield} \textcolor{black}{as a result of Forward Raman Scattering (FRS)} operating at our mildly relativistic shock.
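The strength-parameter and wakefield estimates above involve only the quoted upstream parameters; a short back-of-envelope script (an illustrative sketch, with function names of our own choosing) reproduces the quoted ranges:

```python
import math

gamma0 = 2.03      # beam Lorentz factor
sigma_e = 5.1      # electron magnetization
dB_over_B0 = 0.19  # average delta B_z / B0 from Table 1
xi = 0.5           # xi = 1/2 for a linearly polarized wave

def strength_parameter(omega_over_omega_pe):
    # a ~ gamma0 * sqrt(sigma_e) * (omega_pe / omega) * (dB / B0)
    return gamma0 * math.sqrt(sigma_e) * dB_over_B0 / omega_over_omega_pe

def wakefield_amplitude(a):
    # <E_x> / (B0 c) ~ (xi a^2 / sqrt(1 + xi a^2)) / (sqrt(sigma_e) gamma0)
    return xi * a**2 / math.sqrt(1 + xi * a**2) / (math.sqrt(sigma_e) * gamma0)

# Wave frequencies omega ~ (2.5 - 6) omega_pe bracket the strength parameter:
a_lo, a_hi = strength_parameter(6.0), strength_parameter(2.5)
ex_lo, ex_hi = wakefield_amplitude(a_lo), wakefield_amplitude(a_hi)
print(a_lo, a_hi)    # ~0.15, ~0.35
print(ex_lo, ex_hi)  # ~2e-3, ~1.3e-2
```

The resulting ranges match the quoted $a\approx 0.14-0.35$ and $\langle E_x\rangle/(B_0c)\approx (2-13)\cdot 10^{-3}$.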
The ponderomotive force is proportional to the gradient of the wave pressure and can also act inside the precursor if the electromagnetic waves are modulated \citep{hoshino2008}. The enhanced emission of the precursor waves through shock rippling amplifies the waves and \textcolor{black}{triggers the nonlinear FRS. In this process} the scattered electromagnetic waves successively decay into other electromagnetic waves and Langmuir waves. {As the frequency of the scattered wave is lower than that of the pump wave, broadband precursor-wave spectra are generated, extending from the initial $\omega'\gtrsim\Omega_{\mathrm{ce}}$ down to $\omega'>\omega_{\mathrm{pe}}$. Similarly, broadband Langmuir waves are produced with $k'_L<\omega_{\mathrm{pe}}/c$ and $\omega'_L\simeq\omega_{\mathrm{pe}}$} \citep{hoshino2008}. In the upstream plasma rest frame the electromagnetic and Langmuir waves \textcolor{black}{all} have phase velocities directed upstream, \textcolor{black}{but} in the simulation frame part of these waves move toward the downstream. \textcolor{black}{The presence of a large-amplitude wakefield propagating toward the shock is of importance for electron energization upstream of the shock. We will show in Paper~II that the waves can scatter electrons and boost them toward the shock, contributing to ion-to-electron energy transfer in the precursor region. } \section{Comparison with the in-plane setup} \label{sec:in-plane} In this section we compare \textcolor{black}{the electromagnetic structure of} a mildly relativistic shock with the upstream magnetic field lying in the plane of the simulation, $\boldsymbol{B_0}=B_{\mathrm{y}}\boldsymbol{\hat{y}}$, \textcolor{black}{thus $\varphi_B=0^{\circ}$}, \textcolor{black}{with that for out-of-plane field discussed in Section~\ref{sec:out-of-plane}.
With in-plane magnetic field the shock quickly acquires its steady-state form, and so we show the structure and discuss its properties only at time $t\Omega_{\mathrm{ci}}= 84.3$.} \begin{figure} \centering \includegraphics [width=0.99\linewidth] {plot_eps/fig11.pdf} \caption{Normalized electron density (a), field fluctuations in $B_{\mathrm{y}}$ (b), $B_{\mathrm{z}}$ (c), and $E_{\mathrm{x}}$ (d), \textcolor{black}{as well as the averaged profiles of the ion temperature parallel and perpendicular to the mean magnetic field (e)}, at the final stage of the simulation, $t\Omega_{\mathrm{ci}} = 84.3$. Logarithmic scaling is applied as in Fig.~\ref{fig:structure_early}. Panel (f) shows a close-up of Region~A in the electron density plot (a). For Region~B we show Fourier spectra in Fig.~\ref{fig:fourier_B_ip}.} \label{fig:maps_inplane} \end{figure} \subsection{Shock front and downstream turbulence} \textcolor{black}{To be noted from Fig.~\ref{fig:maps_inplane} is the lower shock speed, $v=0.41c$, compared with the out-of-plane case, implied by the shock location at $x/\lambda_{\mathrm{si}}\simeq 108$.} The density compression is \textcolor{black}{$\kappa\simeq 3.2$}. \textcolor{black}{Both are consistent with a shock in a} plasma with three degrees of freedom and adiabatic index $\Gamma ={4/3}$ \citep{plotnikov2018}.
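The quoted shock speeds can be recovered from the shock positions alone; a back-of-envelope sketch (assuming the shock is launched from the reflecting wall at $x=0$, and using the unit conversion $\lambda_{\mathrm{si}}\Omega_{\mathrm{ci}}/c=\Omega_{\mathrm{ci}}/\omega_{\mathrm{pi}}=\sqrt{\sigma(1+m_e/m_i)}$):

```python
import math

sigma, mass_ratio = 0.1, 50
# Unit conversion: (lambda_si * Omega_ci) / c = Omega_ci / omega_pi.
omega_ratio = math.sqrt(sigma * (1 + 1 / mass_ratio))

def shock_speed(x_over_lambda_si, t_omega_ci):
    # Mean shock speed in units of c, assuming launch from the wall at x = 0.
    return (x_over_lambda_si / t_omega_ci) * omega_ratio

v_in_plane = shock_speed(108, 84.3)      # ~0.41 c (this section)
v_out_of_plane = shock_speed(84, 56.2)   # ~0.48 c (Fig. 4, out-of-plane run)
print(v_in_plane, v_out_of_plane)
```

The in-plane value reproduces $v=0.41c$, and the out-of-plane positions give a noticeably higher mean speed, consistent with the comparison made above.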
\textcolor{black}{Ion gyration at the shock happens in the $x-z$ plane, which suppresses the gyration-driven rippling mode seen with out-of-plane magnetic field.} Fluctuations in density and electromagnetic field can be observed together with corrugations of the shock surface that develop very early in the simulation and quickly evolve into a large-scale rippling mode with $\lambda_{\rm rippl}\simeq 5\lambda_{\mathrm{si}}$ \textcolor{black}{and amplitude $\lesssim \lambda_{\mathrm{si}}$.} They propagate along the mean magnetic field and are most likely driven by the anisotropy in the ion temperature that results from ion reflection off the shock \textcolor{black}{and was geometrically suppressed in the out-of-plane simulation.} \textcolor{black}{Fig.~\ref{fig:maps_inplane}(e) demonstrates that at the shock $T_{\mathrm{i}\, \perp} \gg T_{\mathrm{i}\, \parallel}$} \textcolor{black}{with respect to the mean magnetic field direction, which triggers the Alfv\'en Ion Cyclotron (AIC) instability that is known to produce} ripples in low-Mach-number shocks \citep[e.g.][]{winske1988,2014PhPl...21b2102U} and can generate magnetic-field fluctuations at the front of relativistic pair shocks \citep{iwamoto2018}. Shock-front corrugations are also a source of downstream turbulence, through vorticity generation in a process similar to the Richtmyer-Meshkov instability \citep[e.g.][]{2011ApJ...726...62M,2014MNRAS.439.3490M}. \begin{figure} \centering \includegraphics [width=0.99\linewidth] {plot_eps/fig12.pdf} \caption{Profiles along the shock normal of normalized \textcolor{black}{field components at time $t\Omega_{\mathrm{ci}} = 84.3$ for the in-plane upstream magnetic-field configuration. We averaged $\langle E_{\mathrm{x}}\rangle$ over $y$ (a), whereas $B_{\mathrm{x}}$, $\delta B_{\mathrm{y}}=(B_{\mathrm{y}}-B_0)$, and $B_{\mathrm{z}}$ are measured} in the middle of the box along $y/\lambda_{\mathrm{si}}=6$ (b-d) (see also Fig.~\ref{fig:profile}).
\label{fig:shock_str_z_ip}} \end{figure} \begin{figure} \centering \includegraphics [width=0.99\linewidth] {plot_eps/fig13.pdf} \caption{Enlarged view of the region $x/\lambda_{\mathrm{si}}=120-125$ \textcolor{black}{with magnetic-field and electric-field profiles plotted in black and red, respectively, and normalized as in Fig.~\ref{fig:shock_str_z_ip}. The phase (anti-)correlation between $y$ and $z$ field components indicates X-mode (a) and O-mode (b) waves.}} \label{fig:profiles_correl} \end{figure} \subsection{Upstream waves} \label{sec:ip_waves} Short-scale precursor waves and a large-scale electrostatic wakefield are evident in Fig.~\ref{fig:maps_inplane} and in the profiles covering the entire upstream region at time $t\Omega_{\mathrm{ci}} = 84.3$, which we plot in Fig.~\ref{fig:shock_str_z_ip} in the same manner as earlier for the 2D out-of-plane simulation and in Appendix~\ref{app:1d} for a 1D test run. Fig.~\ref{fig:profiles_correl} shows an enlarged view of the region $x/\lambda_{\mathrm{si}}=(120-125)$, also presenting electric-field fluctuations. \textcolor{black}{Electromagnetic waves with $B_{\mathrm{y}}$ fluctuations parallel to the large-scale magnetic field, $\boldsymbol{B_0}=B_{\mathrm{y}}\boldsymbol{\hat{y}}$, and anticorrelated with $E_{\mathrm{z}}$ fluctuations are X-mode waves. Correlated $B_{\mathrm{z}}$ and $E_{\mathrm{y}}$ fluctuations are O-mode waves. Oscillations in $B_{\mathrm{x}}$ result from an oblique propagation of the X-mode wave, as we discuss below.} The magnetic-field fluctuation amplitudes are listed in Table~\ref{table1}. The time evolution of the field amplitudes is shown in Fig.~\ref{fig:ampl_a_oop} in Appendix~\ref{app:1d}. We list the total amplitude of the magnetic-field oscillations, $\delta B=\sqrt{\delta B_{\mathrm{x}}^2+\delta B_{\mathrm{y}}^2+\delta B_{\mathrm{z}}^2}$, not differentiating between the X-mode and O-mode waves.
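As a quick arithmetic cross-check of this quadrature combination (a minimal sketch, not part of the simulation pipeline; the per-mode amplitudes used below are the values quoted in the surrounding text):

```python
import math

# X- and O-mode amplitudes relative to B0, as quoted in the text
dB_X = 0.12  # sqrt(dBx^2 + dBy^2) / B0
dB_O = 0.08  # dBz / B0

# The Cartesian components enter the total amplitude in quadrature, so the
# two (orthogonally polarized) modes combine the same way
dB_total = math.hypot(dB_X, dB_O)
print(f"delta B / B0 = {dB_total:.2f}")  # ~0.14, close to the quoted 0.15
```

The small residual difference from the quoted total reflects rounding of the individual mode amplitudes.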
The amplitude of the X-mode wave, $\delta B_X/B_0=\sqrt{\delta B_{\mathrm{x}}^2+\delta B_{\mathrm{y}}^2}/B_0\simeq 0.12$, is larger than the amplitude of the O-mode wave, $\delta B_O/B_0=\delta B_{\mathrm{z}}/B_0\simeq 0.08$. The total precursor wave amplitude, $\delta B/B_0\simeq 0.15$, is slightly smaller than the amplitudes obtained in our 2D run with \textcolor{black}{$\varphi_{\rm B}=90^{o}$} and in the 1D simulation. \textcolor{black}{With out-of-plane magnetic field strong shock ripples increase the precursor-wave amplitude to the level observed in the 1D simulation. With in-plane field we see a} similar amplification. \textcolor{black}{Fig.~\ref{fig:maps_inplane}(b) shows the emission of precursor waves in bunches that correspond to rippling features at the shock.} The rippling driven by the AIC instability is relatively weak and \textcolor{black}{cannot fully compensate for the loss of coherence caused by random fluctuations at the shock surface and the thermal damping of the waves. The precursor wave amplitude is large enough, though, to induce the wakefield and \textcolor{black}{thus} accelerate and heat particles}. Fig.~\ref{fig:ampl_a_oop} shows a roughly constant wave amplitude throughout the 2D simulation with the in-plane setup, which is due to an early formation of the shock ripples. \textcolor{black}{The linear theory of the SMI predicts X-mode emission at a level above that of O-mode waves \citep{melrose1984, lee1980, wu1979}. In earlier 2D simulations of ultrarelativistic pair-plasma shocks with in-plane magnetic field the O-mode energy was observed to exceed that in X modes at very small electron magnetizations, $\sigma_e\lesssim 10^{-2}$ \citep{iwamoto2018}, which was attributed to mode conversion from X to O modes at the turbulent shock transition. Early in} the shock evolution, charged particles gyrate in the $x-z$ plane and emit X-mode waves with $\delta B_{\mathrm{y}}$.
As plasma instabilities generate fluctuations in $B_{\mathrm{z}}$ of amplitude comparable to $B_0$, the net magnetic field undulates in the $y-z$ plane, and the X-mode waves carry both $\delta B_{\mathrm{y}}$ and $\delta B_{\mathrm{z}}$ fluctuations. \textcolor{black}{Upon transmission to the upstream region the $\delta B_{\mathrm{z}}$ components may be converted into O-mode waves.} Indeed, Fig.~\ref{fig:shock_str_z_ip} shows the tip of the $B_{\mathrm{z}}$ turbulence behind that of $B_{\mathrm{y}}$, indicating that the O-mode waves are \textcolor{black}{produced slightly later than} the X-mode waves, after the shock front has developed substantial turbulence. The slightly smaller amplitude of the O-mode waves with respect to the X-mode waves reflects the moderate level of $B_{\mathrm{z}}$ fluctuations in the shock front, $\delta B_{\mathrm{z}}/B_0\approx 1$ (compare results for $\sigma_e\gtrsim 10^{-2}$ in \citet{iwamoto2018}). \begin{figure} \centering \includegraphics[width=0.91\linewidth]{plot_eps/fig14a.pdf} \includegraphics[width=0.91\linewidth]{plot_eps/fig14b.pdf} \includegraphics[width=0.91\linewidth]{plot_eps/fig14c.pdf} \includegraphics[width=0.91\linewidth]{plot_eps/fig14d.pdf} \caption{Fourier power spectra for $B_{\mathrm{y}}$ (a), $B_{\mathrm{z}}$ (b), $E_{\mathrm{x}}$ (c), and the electron density (d) for Region B at $x/\lambda_{\mathrm{si}}=110-120$ at time $t\Omega_{\mathrm{ci}} = 84.3$ (see Fig.~\ref{fig:maps_inplane}). The white lines show the cutoff for the X-mode (a) and the O-mode (b) waves.} \label{fig:fourier_B_ip} \end{figure} Fig.~\ref{fig:fourier_B_ip} shows Fourier power spectra of $B_{\mathrm{y}}$, $B_{\mathrm{z}}$, $E_{\mathrm{x}}$, and the electron density in \emph{Region~B} marked in Fig.~\ref{fig:maps_inplane}. \textcolor{black}{Most of the wave power in $\delta B_{\mathrm{y}}$ and $\delta B_{\mathrm{z}}$ is located to the right of the theoretical cutoff calculated in Appendix~\ref{app:lintheory}, and the waves are faster than the shock.
The wave vector range of the precursor waves is similar to that in the out-of-plane setup, and an oblique component is likewise present. Coherent precursor waves are persistent in mildly relativistic shocks, albeit with smaller amplitude than in the ultrarelativistic regime.} \textcolor{black}{Also with $\boldsymbol{B_0}$ in the simulation plane we observe transverse density filaments upstream of the shock, whose amplitude, $\delta N_e/N_0\approx 0.5$, is much larger than in the out-of-plane simulation. They are better aligned with the $x$-direction, though. It appears that the parametric filamentation instability is not affected by the weak ripples at the shock.} \textcolor{black}{The wakefield is evident upstream of the shock, as for $\varphi_{\rm B}=90^{o}$. Fig.~\ref{fig:fourier_B_ip}(c) shows a signal at ${k}_{\rm Ex, x}\simeq 2.5\lambda_{\mathrm{si}}^{-1}$ that is marginally consistent with equation~(\ref{eq:exwavelenght}). The wave power is slightly less than that observed with out-of-plane magnetic field. As in Section~\ref{sec:wakefield} we take the typical wavenumber of the precursor waves, $k\approx 2.8\lambda_{\mathrm{se}}^{-1}$, and the typical frequency of the dominant X-mode waves, $\omega\approx 3.2\,\omega_{pe}$, and add the contributions from the X- and O-modes to estimate the strength parameter and calculate the amplitude of the wakefield (equation~(\ref{eq:wakefield})), \begin{equation} a\simeq 0.21, \qquad \frac{\langle E_{\mathrm{x}}\rangle}{B_0c}\approx 5\times 10^{-3}. \end{equation} Therefore, the wakefield should exert similar effects on the upstream plasma in simulations with in-plane and out-of-plane magnetic field.} \section{Summary and conclusions} \label{sec:summary} \textcolor{black}{This is the first of two articles in which we investigate mildly-relativistic magnetized shocks in electron-ion plasma. In this paper we explore with PIC simulations the electromagnetic shock structure and the production of plasma instabilities and waves.
Paper II shall be devoted to particle acceleration, heating, and the energy transfer from ions to electrons downstream of the shock. } \begin{comment} The main application of our studies is in the physics of AGN jets, that are observed to be sources of high-energy electromagnetic emission and also CR particles. In this context the focus of this work is on conditions typically assumed in the internal shock scenario of the electromagnetic emission production. We therefore assume the shock Lorentz factor of $\gamma_{sh}\simeq 2$ and plasma magnetization, $\sigma=0.1$. At this magnetization the shocks are mediated through particle reflection off the shock-compressed magnetic field and the flow energy dissipation processes involve emission of strong coherent electromagnetic radiation. Nevertheless, mildly-relativistic shocks in this parameter regime have been poorly explored and only with very-low-resolution PIC simulations, that indicated a low efficiency of particle energisation processes and a resulting very weak proton-electron coupling. However, it has been recently noticed that an appropriate scrutiny of the electromagnetic shock structure requires sufficiently high numerical resolution. Only such studies can properly quantify the amplitude of the precursor waves and their interactions with particles. The aim of this work is to re-assess the physics of mildly relativistic magnetized shocks with kinetic PIC simulations that have unprecedentedly high-resolution. In addition, our simulations take into account large-scale effects related to the proton gyration at the shock and the excitation of the corrugations along the shock surface. This is to investigate the applicability of the WFA model in AGN jets for the problem of high-energy CR origin and also to evaluate a realistic level of the proton-to-electron energy transfer in the shock. The study is performed for shocks in plasma composed of electrons and ions, without a positron content. 
{As relativistic shocks are typically superluminal, our simulations investigate strictly perpendicular shocks. The geometry of the simulations is 2D because 3D large-scale high-resolution studies are at present not feasible from the computational side. However, to evaluate realistic 3D physics} we probe two different {configurations of the mean} magnetic {field with respect to} the simulation plane, namely {the out-of-plane and the in-plane field orientation.} \end{comment} \textcolor{black}{Our high-resolution studies show that the SMI operates at mildly relativistic shocks as theoretically predicted and produces coherent emission of upstream-propagating electromagnetic waves. The waves are substantially weaker than at ultra-relativistic shocks \citep{iwamoto2017,iwamoto2018}, but reach a persistent level that is similar in 2D and 1D simulations. In 2D, shock corrugation provides wave amplification that compensates for other destructive 2D effects. Shock ripples appear for both in-plane and out-of-plane mean magnetic field, but their generation mechanism differs -- modulation of ion gyration \citep{burgess2007} for $\varphi_B=90^{o}$ and the AIC temperature-anisotropy instability with $\varphi_B=0^{o}$. In both cases} the ripples heavily influence the upstream plasma and the structure of downstream turbulence. \textcolor{black}{For out-of-plane mean magnetic field the precursor waves are of the X-mode type. Both the emission direction, about $40^{o}$ off the shock normal, and the wave amplification are caused by the shock ripples. With in-plane magnetic field the AIC-generated shock-front corrugations have a slightly lower amplitude, and the waves propagate mostly along the shock normal. Magnetic turbulence at the shock causes part of the precursor waves to be O-mode waves.
For both magnetic-field orientations we observe in the upstream plasma the electrostatic Langmuir modes -- the wakefields -- and the density filaments that result from the parametric filamentation instability. Except for brief periods, their average amplitude is moderate and smaller than at ultrarelativistic shocks.} \textcolor{black}{The important role of shock rippling had not previously been demonstrated for relativistic shocks. At perpendicular high-Lorentz-factor shocks \citet{sironi2013} found shock corrugations consistent with \citet{burgess2007} only in a limited range of plasma magnetization, $3\times 10^{-4}\lesssim\sigma \lesssim 10^{-1}$, probably on account of electron heating at Weibel filaments for $\sigma \leq 10^{-4}$ and in the SMI-mediated precursor for $\sigma > 0.1$. The ripples at mildly relativistic shocks may be similarly suppressed at low magnetizations \textcolor{black}{due to} the Weibel instability \textcolor{black}{that still operates in this regime} \citep[e.g.,][]{Kato2010}, but at $\sigma> 0.1$ one does not expect precursor wave emission stronger than for the $\sigma=0.1$ analysed here \citep[see][]{iwamoto2019}, and rippling may persistently develop.} {The same should apply to AIC-instability-generated corrugations.} \textcolor{black}{Our 2D simulations show intense coherent precursor waves generated by the SMI {irrespective of the} magnetic-field configuration. One should expect strong precursor waves, a mixture of X-mode and O-mode waves, also in 3D. The intensity of the ordinary mode is difficult to estimate, because these waves arise from local variations in the gyration direction at the shock front that may be less coherent in 3D than in the in-plane 2D simulations, possibly leading to weaker O-mode emission \citep{iwamoto2018}.
\citet{plotnikov2019} showed for {ultra}-relativistic shocks in pair plasma that at high $\sigma_e\gtrsim 1$ ($\sigma_e=5.1$ in this study) the physics of precursor-wave emission in 3D {is} better represented {with} the out-of-plane 2D {model.} This result will likely hold in electron-ion plasma, since the SMI mechanism operates as in pair plasma. As we demonstrated, at mildly relativistic shocks the precursor-wave strength and structure is significantly affected by shock rippling. Rippling along the lines of \citet{burgess2007} requires a suppression of fluctuations parallel to the mean magnetic field that might be difficult to achieve in 3D, but ripples generated through the AIC instability amplify the precursor waves to comparable amplitudes, and so precursor-wave amplification may be expected in 3D as well.} \section*{Acknowledgements} \textcolor{black}{J.N. acknowledges inspiring discussions with Marek Sikora.} This work has been supported by Narodowe Centrum Nauki through research projects DEC-2013/10/E/ST9/00662 (A.L., J.N., O.K.), UMO-2016/22/E/ST9/00061 (O.K.) and 2019/33/B/ST9/02569 (J.N.). This research was supported by PLGrid Infrastructure. Numerical experiments were conducted on the Prometheus system at ACC Cyfronet AGH. This work was supported by JSPS-PAN Bilateral Joint Research Project Grant Number 180500000671. Part of the numerical work was conducted on resources provided by the North-German Supercomputing Alliance (HLRN) under projects bbp00003, bbp00014, and bbp00033. \section*{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author.
\section{Motivation} \label{sec:mot} Uncertainty quantification has gained a great deal of prominence in nuclear theory over the past several years. From \emph{ab initio} methods to macroscopic theories, many groups are tackling the broad challenge of including meaningful uncertainties on their theoretical calculations \cite{Melendez2017,Furnstahl2015,Wesolowski2016,Schindler2009,Fukui2014,Schunck2015,Bernhard2016,Sangaline2016,Utama2016}. Although standard $\chi^2$-minimization techniques and covariance matrix propagation had been the standard for decades, Bayesian methods have recently become the gold standard in uncertainty quantification, aimed at more reliably reporting calculated uncertainties, particularly from parameter optimization, e.g. \cite{Neufcourt2018}. Building on these improvements, there has also recently been a push to investigate techniques such as Gaussian Processes (GP) \cite{Novak2014,Neufcourt2018,Neufcourt2019} and other machine learning methods \cite{Neufcourt2018,Utama2016a,Lovell2020,Wang2019,Regnier2020} to create emulators in addition to quantifying uncertainties. In few-body reaction theory, the typical means of quantifying uncertainties coming from approximations made to the theory was to compare a calculated observable (such as a differential cross section) to the same observable calculated using a more exact theory \cite{Nunes2011,Upadhyay2012,Deltuva2007,Capel2012,Titus2016}. This comparison, however, only indicates the relative uncertainty between methods, not the absolute uncertainty on a calculation. Parametric uncertainties were studied in much the same way, by comparing calculations using different parameterizations. However, these uncertainties can come from many sources.
The four main sources of uncertainty in few-body reaction theories are: \begin{enumerate} \item effective potentials (the optical potential) \item approximations made to the few-body problem \item the structure functions used \item degrees of freedom left out of the model space \end{enumerate} \noindent Our group began to explore uncertainties within few-body reactions in the first special issue of this journal aimed at bridging the gap between experiment and theory \cite{Lovell2015}. At that point, the state of the art in the field, as mentioned previously, was using standard $\chi^2$ minimization to optimize the optical potential parameters to fit elastic scattering data. Subsequently, different parameterizations or models were compared based on relative differences. Following this approach, we found only 10-30\% differences between various optical model parameterizations, even for reactions on highly exotic nuclei, such as $^{132}$Sn(d,p)$^{133}$Sn. Not only was this a largely arbitrary procedure to estimate uncertainties in the optical model, it provided no avenue to quantify the uncertainties coming from items (ii)--(iv). In the meantime, the many studies on uncertainty quantification for few-body reactions have greatly improved our understanding and helped establish the Bayesian framework in this area. This is in line with improvements made in the broader nuclear physics community. We will now briefly discuss each of the new developments and provide more detail in the next sections of this paper. First, with regard to (i) the optical potential, we have replaced the frequentist propagation of uncertainties \cite{Lovell2017} with a full Bayesian analysis \cite{Lovell2018}. This decision was strongly motivated by the direct detailed comparison of the frequentist and Bayesian methods for reactions on stable targets \cite{King2019}.
For (ii) the approximations made to the few-body problem, we have compared the distorted-wave Born approximation (DWBA) and the adiabatic wave approximation (ADWA) using both the frequentist \cite{King2018} and Bayesian \cite{Lovell2018} optimizations. Although this comparison does not take into account the model defects in either approximation, we are now able to determine whether these two models are consistent within the uncertainties propagated from the optical model. With regards to (iii) and (iv), small studies have been started to incorporate these uncertainties as well, particularly focusing on the single-particle potential in transfer reactions and analyzing the change in the uncertainties when a more exact few-body model is used. In this paper, we highlight the improvements that have been made to uncertainty quantification of few-body models with regards to the above four sources. First, we outline the physical problem, the difference in philosophy between the frequentist and Bayesian methods in our context, and the various reaction models that are used throughout this work in Section \ref{sec:stats}. In Section \ref{sec:optical}, we discuss the improvements that have been made to quantifying uncertainties coming from the parametrization of the optical model potential; in Section \ref{sec:fewbody}, we show quantified differences in approximations made to the few-body problem; in Section \ref{sec:structure}, we discuss the near-term progress that can be made on the structure functions and adding degrees of freedom to the model space that had previously been removed - both now in a Bayesian framework. Finally, we conclude and give broad remarks about the next steps for uncertainty quantification in few-body reaction theory in Section \ref{sec:outlook}. 
\section{Reaction theory and statistical considerations} \label{sec:stats} We focus on two methods of quantifying uncertainties which have seen a great deal of use in nuclear physics both historically and recently: standard $\chi^2$ minimization and covariance matrix propagation, referred to here as frequentist methods, and Bayesian methods. In both cases, the main idea is to fit a theoretical model, $\sigma ^\mathrm{th}(\mathbf{x})$ with free parameters $\mathbf{x}$, to a set of experimental data, $\sigma_i^\mathrm{exp}$ with experimental errors $\Delta \sigma _i$. The parameters, $\mathbf{x}$, that we aim to optimize are those of the optical model, an effective potential describing the interaction between a light projectile and a heavy target. The optical potentials typically have real and imaginary terms, \begin{equation} U(r) = V(r) + i W(r) + iW_s(r), \label{eq:pot} \end{equation} \noindent where the imaginary term takes into account the loss of flux to reaction channels not explicitly included in the model. The potentials contain three parts, a volume term, a surface term, and a spin-orbit term (in addition to the standard Coulomb term, e.g. \cite{Fukui2014}, when charged projectiles are considered). In the reaction models considered here, the volume term contains a real and imaginary part parameterized as a Woods-Saxon, \begin{equation} V(r) = -\frac{V_o}{1+\exp((r-R_o)/a_o)}, \end{equation} \noindent and \begin{equation} W(r) = -\frac{W_o}{1+\exp((r-R_w)/a_w)}, \end{equation} \noindent where $V_o$, $R_o$, and $a_o$ ($W_o$, $R_w$, and $a_w$) are the depth, radius, and diffuseness of the real (imaginary) term of the potential. The surface term, $W_s(r)$, typically contains only an imaginary term which is parametrized as the derivative of a Woods-Saxon (parameters $W_s$, $R_s$, and $a_s$).
The spin-orbit term is also parametrized as a derivative of a Woods-Saxon, but these parameters are typically held constant, because unpolarized elastic scattering is not very sensitive to this piece of the interaction. In total, we have nine free parameters (in each case, $R_i=r_iA^{1/3}$, where $A$ is the mass of the target nucleus and $r_i$ is the fitted parameter). These parameters have historically been fitted separately for elastic scattering of incident neutrons and protons as a function of the mass and charge of the target, and the incident particle energy, e.g. \cite{Becchetti1969,Koning2003}. Such global parameterizations are able to provide a fair description overall, but when considering one single data set, for a given target and a given beam energy, it is common to improve upon the description obtained with global potentials. This is the methodology that we employ in the current study: we begin with a global optical model parametrization (typically Becchetti and Greenlees (BG) \cite{Becchetti1969}) for the initialization of the minimization procedure and optimize the parameters with respect to a single reaction. We focus primarily on elastic scattering data, $d\sigma/d\Omega(\theta)$, but also discuss the effects of including total or reaction cross sections, $\sigma _\mathrm{tot}$ (for neutron scattering) or $\sigma_R$ (for proton scattering), and vector analyzing powers, $Re(iT_{11}(\theta))$. Note that these observables are sometimes included in fitting global optical potentials (e.g. \cite{Koning2003}). We use experimental data where available; Table \ref{tab:expdata} contains the types of reactions, the beam energies, and the original references for the experimental data used in the rest of this work.
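The Woods-Saxon forms above translate directly into code. The following is a minimal sketch (function names and the sample parameter values are illustrative, not taken from any code used in this work; the factor of 4 in the surface term is one common normalization convention that makes the peak value equal to the depth):

```python
import numpy as np

def woods_saxon(r, depth, radius, diffuseness):
    """Volume form factor: -depth / (1 + exp((r - radius)/diffuseness))."""
    return -depth / (1.0 + np.exp((r - radius) / diffuseness))

def surface_woods_saxon(r, depth, radius, diffuseness):
    """Surface form factor: derivative of a Woods-Saxon shape, scaled so
    the peak value (at r = radius) equals -depth. Conventions vary."""
    x = np.exp((r - radius) / diffuseness)
    return -4.0 * depth * x / (1.0 + x) ** 2

def optical_potential(r, V_o, R_o, a_o, W_o, R_w, a_w, W_s, R_s, a_s):
    """U(r) = V(r) + i W(r) + i W_s(r), cf. the equations above."""
    return (woods_saxon(r, V_o, R_o, a_o)
            + 1j * woods_saxon(r, W_o, R_w, a_w)
            + 1j * surface_woods_saxon(r, W_s, R_s, a_s))

# Radii scale with the target mass as R_i = r_i * A^(1/3)
A = 40  # e.g. a 40Ca target
r = np.linspace(0.1, 12.0, 200)
U = optical_potential(r, V_o=50.0, R_o=1.2 * A**(1 / 3), a_o=0.65,
                      W_o=5.0, R_w=1.3 * A**(1 / 3), a_w=0.60,
                      W_s=8.0, R_s=1.3 * A**(1 / 3), a_s=0.60)
```

At $r = R_i$ the volume form drops to half its central depth while the surface form peaks, which is why the surface term localizes absorption near the nuclear radius.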
\begin{table} \centering \begin{tabular}{l|c|c|c}
\textbf{Reaction} & \textbf{Data type} & \textbf{Energy (MeV)} & \textbf{Citation} \\ \hline
$^{40}$Ca(n,n)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 11.9 & \cite{Tornow1982} \\
$^{40}$Ca(n,n)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 13.9 & \cite{Tornow1982} \\
$^{40}$Ca(n,n)$^{40}$Ca & $Re(iT_{11}(\theta))$ & 13.9 & \cite{Tornow1982} \\
$^{40}$Ca(n,n)$^{40}$Ca & $\sigma _\mathrm{tot}$ & 14.1 & \cite{Mcdonald1964} \\
$^{40}$Ca(p,p)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 12.5 & \cite{Aoki1996} \\
$^{40}$Ca(p,p)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 14.5 & \cite{Aoki1996} \\
$^{40}$Ca(p,p)$^{40}$Ca & $Re(iT_{11}(\theta))$ & 14.5 & \cite{Aoki1996} \\
$^{40}$Ca(p,p)$^{40}$Ca & $\sigma _R$ & 14.48 & \cite{Dicello1970} \\
$^{40}$Ca(p,p)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 26.3 & \cite{Mccamis1986} \\
$^{40}$Ca(p,p)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 27.5 & \cite{Mccamis1986} \\
$^{40}$Ca(p,p)$^{40}$Ca & $Re(iT_{11}(\theta))$ & 26.3 & \cite{Mccamis1986} \\
$^{40}$Ca(p,p)$^{40}$Ca & $\sigma _R$ & 24.5 & \cite{Carlson1975} \\
$^{40}$Ca(d,d)$^{40}$Ca & $d\sigma/d\Omega(\theta)$ & 30.0 & \cite{Roche1974} \\
\end{tabular} \caption{Overview of the experimental data used in this work. The first column gives the reaction, at the energy listed in the third column. The second column lists the type of data: $d\sigma/d\Omega (\theta)$ for the differential cross section, $Re(iT_{11}(\theta))$ for the analyzing powers (polarization data), and $\sigma _{R/tot}$ for either the reaction cross section or total cross section. A reference for each data set is listed in the fourth column. } \label{tab:expdata} \end{table} \subsection{Reaction models} \label{sec:theory} There are several reaction models that we consider in this work, none of which are computationally demanding. We should emphasize that although there are several reaction models of higher complexity (e.g.
\cite{xcdcc-summers,xcdcc-summers2,xcdcc-moro-prl,deltuva2009b}) and a number of frameworks for the optical potential that go well beyond the simple parameterization introduced in Eq.~(\ref{eq:pot}) (e.g. \cite{non-local,dom,rotureau2018}), it is critical to initially explore the new statistical tools with simpler models that enable a full investigation of this statistical methodology. For most of the cases that we study, we fit the optical potential parameters to reproduce single-channel elastic scattering, which is calculated using partial wave decomposition as explained in \cite{ThompsonNunes}. For each projectile-target angular momentum, the S-matrix $S_L$ (where $1-|S_L|^2$ is the probability that the projectile is absorbed by the target) is obtained from solving the scattering equation and matching the solution to the known asymptotic behavior. From there, all reaction observables can be calculated (for details see Section 3.2 of \cite{ThompsonNunes}). Here we consider two models to predict the single-neutron transfer cross section for $A(d,p)B$, namely the one-step distorted-wave Born approximation (DWBA) and the adiabatic-wave approximation (ADWA). In DWBA, the elastic scattering of the incoming deuteron is described by an effective deuteron-target interaction, $V_{dA}$.
Instead of solving the full three-body problem, the true three-body wave function is replaced by the elastic channel (represented by the deuteron distorted wave multiplied by the corresponding bound state of the deuteron) \cite{ThompsonNunes}: \begin{equation} \textbf{T} ^\mathrm{DWBA} _\mathrm{post} = \langle \Phi_{nA} (\vec{r}_{nA}) \chi_p (\vec{R}_f) | V_{np}+\Delta | \Phi _{np}(\vec{r}_{np}) \chi _{d\vec{k}_{i}}(\vec{R}_i) \rangle, \label{eqn:TDWBA} \end{equation} \noindent with $\Phi_{np}(\vec{r}_{np})$ the bound-state deuteron wave function, $\Phi_{nA} (\vec{r}_{nA})$ the final bound-state wave function of $B$, $\chi _{d\vec{k}_{i}}(\vec{R}_i)$ the distorted wave of the $d+A$ system, $\chi_p (\vec{R}_f)$ the distorted wave of the proton interacting with $B$, $V_{np}$ the deuteron binding potential, and $\Delta$ the difference between the $A+p$ and $B+p$ optical potentials. Note that the coordinates introduced in Eq. (\ref{eqn:TDWBA}) are the standard Jacobi coordinates in the entrance ($\vec{r}_{np},\vec{R}_i$) and exit ($\vec{r}_{nA},\vec{R}_f$) channels, as in Ref.~\cite{ThompsonNunes}. However, because of its small binding energy, it is likely that deuteron breakup will occur in the field of the target, and this has been shown to influence other channels such as the transfer \cite{Nunes2011}. For this reason, many reaction theories start from the three-body $n+p+A$ scattering problem and solve it with different levels of approximation. Since solving the three-body scattering problem exactly (in a Faddeev formalism) presents challenges and requires significant computational time, it is not feasible to use these methods in the context of the Bayesian approach discussed in Section \ref{sec:bayes}. Instead we use the adiabatic approximation of Johnson and Tandy \cite{tandy} by considering that the excitation energy of the deuteron is negligible compared to the beam energy.
This approximation leads to a simplified three-body equation: \begin{equation} \left [ T_R +V_{pA}(\vec{r},\vec{R}) +V_{nA}(\vec{r},\vec{R}) -(E-\epsilon_0) \right ] \Psi ^\mathrm{ad} (\vec{r},\vec{R}) = 0, \label{eqn:ad} \end{equation} \noindent where $T_R$ is the center-of-mass kinetic energy, $V_{nA}$ and $V_{pA}$ are the neutron-target and proton-target optical potentials, $E$ is the incident beam energy, and $\epsilon_0$ is the binding energy of the deuteron. Furthermore, by considering a Weinberg expansion for $\Psi ^\mathrm{ad} (\vec{r},\vec{R})$, it is possible to integrate out the $\vec r$ variable in Eq. (\ref{eqn:ad}) and obtain a single-channel scattering equation, greatly reducing the computational time \cite{tandy}. The adiabatic wave function is then used in the post-form transfer T-matrix, \begin{equation} \textbf{T}^\mathrm{ADWA} _\mathrm{post} = \langle \Phi _{nA} (\vec{r}_{nA}) \chi _p (\vec{R}_f) |V_{np} | \Psi ^\mathrm{ad} (\vec{r}, \vec{R}) \rangle. \label{eqn:TADWA} \end{equation} This approach is usually referred to as the finite-range adiabatic wave approximation (ADWA). In both DWBA and ADWA, the transfer cross section is obtained from the norm squared of the T-matrix. \subsection{Frequentist methods} \label{sec:freq} In this work, we refer to standard $\chi^2$ minimization and propagation of the resulting covariance matrix as frequentist methods. This type of optimization has been the standard in the field for many decades. The goal is to describe a true function $\sigma (\theta)$ with a model $\sigma (\textbf{x},\theta)$ that has $N$ free parameters, $x_1, x_2, \ldots, x_N$. To constrain these parameters, we have a set of $M$ experimental data pairs, $\{ (\theta_1, \sigma _1), (\theta_2, \sigma _2), \ldots, (\theta_M, \sigma _M)\}$, each of which has an associated experimental uncertainty, $\Delta \sigma _i$.
The typical assumption is that the experimental values are uncorrelated with one another, \begin{equation} \sigma ^\mathrm{exp} _i = \sigma (\theta _i) + \epsilon_i, \end{equation} \noindent where each $\epsilon _i$ is normally distributed, \begin{equation} \epsilon _i \sim \mathcal{N}(0,(\Delta \sigma _i)^2). \end{equation} \noindent In matrix form, this is written $\sigma ^\mathrm{exp} \sim \mathcal{N}(\sigma, \Sigma)$, where $\Sigma$ is an $M \times M$ matrix with $(\Delta \sigma_i)^2$ on the diagonal. The residuals between the experimental values and the theory predictions, $\sigma_i^\mathrm{exp} - \sigma (\textbf{x},\theta_i)$, are also normally distributed as $ \mathcal{N}(0,\Sigma)$. Maximizing the likelihood function for $\textbf{x}$ is equivalent to minimizing the corresponding objective function, \begin{equation} \chi^2_{UC} = \sum \limits _{i=1} ^{M} \frac{(\sigma^\mathrm{exp} _i - \sigma(\mathbf{x},\theta_i))^2}{(\Delta \sigma_i) ^2}, \label{eqn:chiUC} \end{equation} \noindent where $UC$ stands for uncorrelated. Equation (\ref{eqn:chiUC}) is proportional to the standard $\chi^2$ function, and its minimization leads to the determination of a best-fit set of parameters, $\hat{x}_{UC}$. The 95\% confidence intervals around $\sigma(\hat{x}_{UC},\theta)$ can be constructed by assuming that the true parameter values are distributed normally around the best-fit parameter set, so parameters can be drawn from \begin{equation} \textbf{x} \sim \mathcal{N}(\hat{x}_{UC},\mathbb{C}_p)\, , \quad \mathcal{N}(\hat{x}_{UC},\mathbb{C}_p) \propto \exp \left [ -\frac{1}{2}(\textbf{x} - \hat{x}_{UC} ) ^T \mathbb{C}_p^{-1} (\textbf{x} - \hat{x}_{UC}) \right ], \label{eqn:parmdraws} \end{equation} \noindent where $\mathbb{C}_p$ is the $N \times N$ parameter covariance matrix. The goodness of fit is taken into account by scaling this covariance matrix by \begin{equation} s^2 = \frac{\chi^2_{UC}}{M-N}. \end{equation} \noindent Then $\mathbb{C}_p$ in Eq. (\ref{eqn:parmdraws}) is replaced by $s^2\mathbb{C}_p$.
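As an illustration, the uncorrelated $\chi^2$ of Eq. (\ref{eqn:chiUC}) and the goodness-of-fit scaling $s^2$ can be sketched in a few lines. The one-parameter toy model below is purely illustrative and stands in for an actual optical-model cross section:

```python
import numpy as np

# Illustrative one-parameter stand-in for an optical-model cross section;
# the functional form and numbers are hypothetical.
def model(x, theta):
    return x * np.cos(theta) ** 2 + 1.0

rng = np.random.default_rng(0)
theta = np.linspace(0.1, 3.0, 8)              # M = 8 "angles"
x_true = 2.0
dsig = 0.1 * model(x_true, theta)             # 10% experimental uncertainties
sig_exp = model(x_true, theta) + rng.normal(0.0, dsig)

def chi2_uc(x):
    """Uncorrelated chi^2, as in Eq. (chiUC)."""
    return np.sum((sig_exp - model(x, theta)) ** 2 / dsig ** 2)

# Best fit by grid search (a real analysis would use MINUIT-style minimization)
xs = np.linspace(0.5, 4.0, 2001)
x_hat = xs[np.argmin([chi2_uc(x) for x in xs])]

# Goodness-of-fit scaling of the parameter covariance: s^2 = chi2/(M - N)
s2 = chi2_uc(x_hat) / (len(theta) - 1)        # M = 8 points, N = 1 parameter
```

In the actual analysis the minimization is performed by \texttt{MINUIT} and the parameter covariance $\mathbb{C}_p$ is taken from the fit; the grid search above merely illustrates the objective function.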
However, differential elastic cross sections calculated in the standard manner, through a partial-wave decomposition, are inherently correlated across angles by the Legendre polynomials, and we can explore a different approach that includes these correlations in the fitting procedure. We can then introduce a correlated $\chi^2$ function: \begin{equation} \chi^2_C = \sum \limits _{i=1} ^{M} \sum \limits _{j=1} ^{M} \mathbb{W}_{ij} (\sigma^\mathrm{exp} _i - \sigma(\mathbf{x},\theta_i))(\sigma^\mathrm{exp} _j - \sigma(\mathbf{x},\theta_j)), \label{eqn:chiC} \end{equation} \noindent where $\mathbb{W}_{ij}$ are the $(ij)^\mathrm{th}$ elements of the inverse of the data covariance matrix defined as \begin{equation} \mathbb{W} = (\mathbb{C}_m + \Sigma)^{-1}, \label{eqn:expCovariance} \end{equation} \noindent with $\mathbb{C}_m$ being the model covariance matrix between the angles of the experimental data points and $\Sigma_{ii}=(\Delta \sigma_i)^2$. To calculate the model covariance matrix, parameter sets in the optical model are randomly sampled around the initial parametrization and run through the model. $\mathbb{C}_m$ is then explicitly calculated as the covariance between each of the angles included in the fitting procedure. The elements of $\mathbb{C}_m$ do not have to be positive, leading to interference between the different angles in the model, and therefore $\chi^2_C \leq \chi^2_{UC}$. In addition, because the model correlation matrix is not normalized, $\chi^2/M \approx 1$ no longer signals a good statistical fit. To construct the 95\% confidence intervals, once the best-fit set of parameters, $\hat{x}_{UC}$ or $\hat{x}_C$, is determined, parameter sets are sampled from Eq. (\ref{eqn:parmdraws}), where $\mathbb{C}_p$ and $s^2$ are determined from either the uncorrelated or correlated $\chi^2$, and run through the model.
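The construction of $\mathbb{C}_m$ and the correlated $\chi^2$ of Eq. (\ref{eqn:chiC}) can be sketched as follows; the toy model and all numbers are illustrative stand-ins for actual optical-model runs:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x, theta):                          # illustrative stand-in model
    return x * np.cos(theta) ** 2 + 1.0

theta = np.linspace(0.1, 3.0, 8)
dsig = np.full(theta.size, 0.2)
sig_exp = model(2.0, theta) + rng.normal(0.0, dsig)

# Model covariance C_m: sample the parameter around its nominal value,
# run each sample through the model, and take the covariance across angles.
par_samples = rng.normal(2.0, 0.2, 500)
runs = np.array([model(x, theta) for x in par_samples])
C_m = np.cov(runs, rowvar=False)              # 8 x 8, one row/column per angle

Sigma = np.diag(dsig ** 2)                    # uncorrelated experimental part
W = np.linalg.inv(C_m + Sigma)                # as in Eq. (expCovariance)

def chi2_c(x):
    """Correlated chi^2, as in Eq. (chiC)."""
    r = sig_exp - model(x, theta)
    return r @ W @ r

def chi2_uc(x):
    r = sig_exp - model(x, theta)
    return np.sum(r ** 2 / dsig ** 2)
```

Because $\mathbb{C}_m$ is positive semi-definite, $(\mathbb{C}_m+\Sigma)^{-1}$ never exceeds $\Sigma^{-1}$, which is why the correlated $\chi^2$ cannot exceed the uncorrelated one for the same residuals.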
At each evaluated angle, the top 2.5\% and bottom 2.5\% of the calculations are removed to leave 95\% intervals. \subsection{Bayesian methods} \label{sec:bayes} In contrast to frequentist methods, where probability is interpreted as the frequency of an outcome over many repeated trials, Bayesian statistics interpret probability as the degree of belief in a single occurrence, independent of all others. In addition, with Bayesian methods, we are able to introduce prior information in the statistical formulation, compare two theories, or even mix models, all within a consistent framework. We give a brief overview in the following section, but more details can be found in \cite{BayesBook,Trotta2008}. For a hypothesis, $H$ (in this work, a given model with some set of free parameters), that is constrained by some experimental data, $D$, Bayes' theorem is written as \begin{equation} p(H|D) = \frac{p(H)p(D|H)}{p(D)}, \label{eqn:bayes} \end{equation} \noindent where the prior, $p(H)$, is information that is known about the model before seeing the experimental data, and the likelihood, $p(D|H)$, contains information about the goodness of fit between the model and the data, here a Gaussian likelihood, $\exp(-\chi^2/2)$. For our $\chi^2$ function, we only consider the uncorrelated $\chi^2$ of Eq. (\ref{eqn:chiUC}). Bayes' theorem allows for the calculation of the posterior distribution, $p(H|D)$, which is the probability distribution of the fitting parameters conditional on the experimental data. The last piece of Eq. (\ref{eqn:bayes}) is $p(D)$, the Bayesian evidence, which is a weighted sum over all possible hypotheses - or models. Often, the Bayesian evidence is difficult or nearly impossible to calculate directly, so Monte Carlo methods are used to sample the posterior distribution instead. Here, we use the Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm \cite{Metropolis1953,Hastings1970}.
We begin with an initial set of optical model parameters $\textbf{x}_i$ from which the prior, $p(H_i)$, and likelihood, $p(D|H_i)$, are calculated. A new set of parameters is sampled from a normal distribution, $\textbf{x}_f \sim \mathcal{N}(\textbf{x}_i,\epsilon \textbf{x}_0)$, where $\epsilon$ is a scaling factor that controls the step size through parameter space. From the updated parameter set, $\textbf{x}_f$, the new prior and likelihood are calculated, $p(H_f)$ and $p(D|H_f)$. This new parameter set is accepted if the following condition is fulfilled: \begin{equation} \frac{p(H_f)p(D|H_f)}{p(H_i)p(D|H_i)} > R, \label{eqn:Rcondition} \end{equation} \noindent where $R$ is a random number between 0 and 1. If $\textbf{x}_f$ is accepted, it becomes the initial parameter set, and a new random set of parameters is drawn. If Eq. (\ref{eqn:Rcondition}) is not fulfilled, the parameter set $\textbf{x}_f$ is rejected and a new parameter set is drawn from $\mathcal{N}(\textbf{x}_i,\epsilon \textbf{x}_0)$. This process continues until a predetermined number of parameter sets have been accepted. There is no guarantee that the initial parameter set lies within the targeted posterior distribution. For this reason, we need a burn-in period during which a certain number of parameter sets, $N_\mathrm{burn-in}$, are discarded, regardless of whether or not these sets were accepted by the Monte Carlo condition of Eq. (\ref{eqn:Rcondition}). The end of the burn-in is signified by likelihood and parameter distributions that oscillate around a mean value rather than consistently increasing or decreasing. After the burn-in period, each accepted parameter set is directly correlated with the parameter sets accepted before it. To remove this dependency, we thin the chain: a certain number of accepted parameter sets, $N_\mathrm{thin}$, is discarded between each recorded parameter set.
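A minimal, generic Metropolis-Hastings sketch of the procedure above follows. The log-posterior here is a one-dimensional Gaussian stand-in (in the actual analysis the likelihood comes from a \texttt{fresco} run), and for simplicity the sketch records the current state at thinned intervals after burn-in, a standard variant of the scheme described above:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(x):
    # Stand-in for log[p(H) p(D|H)]: a 1-D Gaussian posterior with mean 2.0
    # and width 0.5. In practice this is log-prior - chi^2/2; everything
    # here is illustrative.
    return -0.5 * ((x - 2.0) / 0.5) ** 2

x0, eps = 1.0, 0.3                    # starting value and step-size scale
n_burn, n_thin, n_keep = 500, 5, 2000

chain, x, it = [], x0, 0
while len(chain) < n_keep:
    x_new = rng.normal(x, eps * abs(x0))          # propose from N(x_i, eps*x_0)
    if np.exp(log_post(x_new) - log_post(x)) > rng.random():  # ratio > R
        x = x_new                                 # accept: x_f becomes x_i
    it += 1
    if it > n_burn and it % n_thin == 0:          # drop burn-in, thin the chain
        chain.append(x)

chain = np.array(chain)               # samples of the posterior distribution
```

The recorded chain then approximates the posterior: its mean and standard deviation recover the parameters of the stand-in Gaussian.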
Confidence intervals in Bayesian statistics are calculated slightly differently than confidence bands in the frequentist method. In this case, 95\% confidence intervals are defined by the smallest interval, $[a,b]$, where \begin{equation} \int \limits _a ^b p(x_i|D) \, dx_i = 0.95 \label{eqn:bayesianCI} \end{equation} \noindent for the marginal posterior distribution of a given variable $x_i$. In practice, because our parameters are sampled numerically from the MCMC, the integral becomes a sum. These intervals define the region where we believe, with 95\% probability, that the true value of the cross section or optical potential parameters falls. \subsection{Numerical details} \label{sec:num} Several optical potentials are needed to calculate the single-nucleon transfer cross sections using DWBA or ADWA. In this work, we focus on calculating the $^{40}$Ca(d,p)$^{41}$Ca transfer cross section to the ground state (g.s.) at 28.4 MeV. In DWBA, the deuteron-target interaction is needed at the incident deuteron energy. For ADWA, the neutron-target and proton-target potentials are needed, which we take at half the deuteron energy. For both calculations, we additionally require the potential between the proton and the $^{41}$Ca at an energy $E-Q$, where $E$ is the incident deuteron energy and $Q$ is the $Q$-value of the reaction. However, scattering data on $^{40}$Ca are more readily available than data on $^{41}$Ca - and there is very little difference between the optical potentials on these two targets - so we constrain this channel based on $^{40}$Ca-p scattering data, as in all of our previous studies \cite{Lovell2017,Lovell2018,King2018,King2019,CatacoraRios2019}. These data are summarized in Table \ref{tab:expdata}, which also lists the other data sets that we will later use to explore further constraints on the optical potential. For all of the elastic scattering angular distribution data and total reaction cross sections, we take the experimental uncertainty to be 10\%.
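For MCMC samples, the smallest-interval construction of Eq. (\ref{eqn:bayesianCI}) reduces to scanning the sorted samples for the narrowest window containing 95\% of them; a minimal sketch, with an illustrative Gaussian example, is:

```python
import numpy as np

def shortest_interval(samples, level=0.95):
    """Smallest interval [a, b] containing `level` of the sampled posterior mass."""
    s = np.sort(np.asarray(samples))
    n = s.size
    k = int(np.ceil(level * n))           # number of samples the interval must hold
    widths = s[k - 1:] - s[:n - k + 1]    # width of every candidate interval
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

# For a symmetric Gaussian this recovers the familiar central interval.
rng = np.random.default_rng(3)
samples = rng.normal(0.0, 1.0, 20000)
a, b = shortest_interval(samples)         # approaches (-1.96, 1.96)
```

For a skewed or multimodal posterior, the interval returned by this construction differs from the equal-tail interval, which is precisely why the smallest-interval definition is used.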
In Section 3.4, we will also consider the vector analyzing powers. For those, we also take the uncertainty to be 10\%, except in cases where the value of the analyzing power at a given angle is less than 5\% of the maximum value across all angles (e.g. when the analyzing power is very close to zero). For these cases, we take the uncertainty to be 5\% of the maximum value. This provides a lower bound on the error, in line with real data. For the uncorrelated frequentist calculations, we initialize each of the parameters with the Becchetti-Greenlees (BG) \cite{Becchetti1969} optical potential parameters for neutron and proton scattering and the An-Cai (AC) \cite{AC} optical potential for deuteron scattering. During the $\chi^2$ minimization, the parameters can fall into a minimum that lies outside of their physical range. To prevent that, we fix some of the parameters during the optimization (typically those of the imaginary volume term). The potential for the incoming proton at 14.5 MeV in the frequentist optimizations was initialized with the incoming neutron parameters to ensure that the geometry was similar for both neutrons and protons. The Bayesian optimization is also initialized from the BG potential parameters. In this case, we also must define a shape for the prior distributions. As in our previous works, we take independent Gaussian priors for each of the optical potential parameters, centered at the Becchetti-Greenlees parameter values with a width of 100\% of those values. This is a very wide prior, but it allows the data to drive the optimization, instead of being driven by the shape of the prior. These assumptions about the shape of the prior were explored in \cite{Lovell2018}. The frequentist minimizations for elastic scattering use \texttt{sfresco}, a best-fit program using the \texttt{MINUIT} routines \cite{minuit}, packaged with \texttt{fresco} \cite{fresco}.
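As a sketch, the independent Gaussian priors described above can be encoded as a log-prior; the central values below are hypothetical stand-ins, not the actual Becchetti-Greenlees parameters:

```python
import numpy as np

# Hypothetical central values standing in for the BG starting parameters
# (depth in MeV, radius and diffuseness in fm); not the published numbers.
x0 = np.array([50.0, 1.2, 0.65])

def log_prior(x):
    """Independent Gaussian priors centered at the starting values,
    each with a width equal to 100% of the central value."""
    return -0.5 * np.sum(((x - x0) / x0) ** 2)
```

With widths this large, the prior contributes almost nothing to the posterior near the starting values, so the likelihood (the data) drives the optimization, as stated above.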
The Bayesian statistical tools were developed from scratch as wrapper codes that call {\sc fresco}, {\sc sfresco} \cite{fresco} and {\sc nlat} \cite{nlat}. We will refer to these wrappers collectively as {\sc QUILTR}, {\it \textbf{Q}uantified \textbf{U}ncertainties \textbf{I}n \textbf{L}ow-energy \textbf{T}heory for \textbf{R}eactions}. \section{The optical potential} \label{sec:optical} Most of our efforts over the past several years have concentrated on constraining the parameters of the optical potential; we present a summary of the results of these studies here. Although the types of studies described in Sections 3.1-3.3 have been published in some form in recent years, in this work we bring all the features together and apply them to a single new case (reactions on $^{40}$Ca) with consistent assumptions, both regarding the experimental data and the statistical methods. We use experimental data from three reactions, $^{40}$Ca(p,p), $^{40}$Ca(n,n), and $^{40}$Ca(d,d), to constrain the optical potentials needed to calculate the single-neutron transfer cross section, $^{40}$Ca(d,p)$^{41}$Ca(g.s.), using both DWBA and ADWA with the data outlined in Table \ref{tab:expdata}. As most of the comparisons in this field have historically been performed using frequentist methods, we first compare the frequentist methods using the uncorrelated and correlated $\chi^2$ functions of Eqs. (\ref{eqn:chiUC}) and (\ref{eqn:chiC}). \subsection{Confidence intervals for angular distributions} In Fig. \ref{fig:UCCB}, we show the 95\% confidence intervals for the uncorrelated (\emph{UC}, blue) and correlated (\emph{C}, red) fits for all incident and outgoing channels needed for the DWBA and ADWA calculations, (a) $^{40}$Ca(n,n) at 13.9 MeV, (c) $^{40}$Ca(p,p) at 14.5 MeV, (e) $^{40}$Ca(p,p) at 26.3 MeV, and (g) $^{40}$Ca(d,d) at 30.0 MeV.
In addition, in panels (b), (d), (f), and (h) we show the corresponding uncertainty on the differential cross sections, defined as $\varepsilon = \Delta \sigma / \overline{\sigma}$, where $\Delta \sigma$ is the width of the 95\% confidence interval and $\overline{\sigma}$ is the best-fit cross section. We see that, in all cases, the confidence intervals for the correlated fits are at least as large as (and in most cases larger than) those for the uncorrelated fits. This increase is seen even though the $\chi^2$ values are at least 5 times smaller in the correlated best fit compared to the uncorrelated best fit, due to the inclusion of the angular covariance matrix in the definition of $\chi^2_{C}$ in Eq. (\ref{eqn:chiC}). The addition of this covariance matrix also gives the best fit the flexibility of not going through all of the data points, which is why some data fall outside of the confidence intervals, in particular for both proton-scattering reactions, in Fig. \ref{fig:UCCB} (c) and (e). The standard, uncorrelated $\chi^2$ minimization requires that for every data point above the best fit, there is a data point below the best fit; this restriction is loosened when the angular covariance matrix is introduced in the correlated $\chi^2$ optimization.
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{40Can14Band.pdf} & \includegraphics[width=0.5\textwidth]{40Can14Error.pdf} \\ \includegraphics[width=0.5\textwidth]{40Cap14Band.pdf} & \includegraphics[width=0.5\textwidth]{40Cap14Error.pdf} \\ \includegraphics[width=0.5\textwidth]{40Cap26Band.pdf} & \includegraphics[width=0.5\textwidth]{40Cap26Error.pdf} \\ \includegraphics[width=0.5\textwidth]{40Cad30Band.pdf} & \includegraphics[width=0.5\textwidth]{40Cad30Error.pdf} \\ \end{tabular} \caption{95\% confidence bands (left column) and percentage uncertainty (right column) for $^{40}$Ca(n,n) at 13.9 MeV - (a) and (b) - $^{40}$Ca(p,p) at 14.5 MeV - (c) and (d) - $^{40}$Ca(p,p) at 26.3 MeV - (e) and (f) - and $^{40}$Ca(d,d) at 30.0 MeV - (g) and (h). Comparison between the uncorrelated (\emph{UC}, in blue), correlated (\emph{C}, in red), and Bayesian (\emph{B}, in green) optimizations.} \label{fig:UCCB} \end{figure} In Fig. \ref{fig:percentages}, we calculate the percentage of the experimental data that falls within various confidence intervals for the elastic-scattering reactions shown in Fig. \ref{fig:UCCB}. If the confidence intervals truthfully reproduced the uncertainties, X percent of the data would fall within the X percent confidence intervals, indicated by the solid black lines. The uncorrelated confidence intervals consistently over-predict the uncertainties for small values of the confidence intervals, below $\sim 50$\%. For the neutron and proton scattering, the trends in the uncorrelated optimization are flat compared to the solid black line. The correlated optimization systematically over-predicts the uncertainty in each case except for proton scattering at 14.5 MeV, panel (b). While over-predicting the uncertainty provides a more conservative estimate, the trends in Fig. 
\ref{fig:percentages} are not the same across the four scattering cases. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{40Can14-perband.pdf} & \includegraphics[width=0.5\textwidth]{40Cap14-perband.pdf} \\ \includegraphics[width=0.5\textwidth]{40Cap26-perband.pdf} & \includegraphics[width=0.5\textwidth]{40Cad30-perband.pdf} \\ \end{tabular} \caption{Percentage of the data that falls within a given confidence interval for the uncorrelated (UC, blue circles), correlated (C, red squares), and Bayesian (B, green triangles) minimizations, for the four elastic scattering reactions studied in this work, (a) $^{40}$Ca(n,n) at 13.9 MeV, (b) $^{40}$Ca(p,p) at 14.5 MeV, (c) $^{40}$Ca(p,p) at 26.3 MeV, and (d) $^{40}$Ca(d,d) at 30.0 MeV.} \label{fig:percentages} \end{figure} These results highlight some of the shortcomings of the frequentist methods. The uncorrelated and correlated confidence intervals do not consistently predict the uncertainties. Although the uncorrelated confidence intervals give a truer representation of the uncertainties at the 1$\sigma$ level -- 68\% -- for nucleon scattering on $^{40}$Ca, as shown here, in \cite{King2019} we found that the uncorrelated frequentist optimization consistently underpredicted the uncertainties on the elastic scattering cross sections. Thus, this test should be performed for each reaction of interest. The model covariance, as included in $\chi^2_C$, is somewhat arbitrarily defined, pointing to possible limitations of the formulation, and we ultimately do not advocate for including model correlations in this manner. In addition, these types of frequentist methods assume that the uncertainties can be described as a Gaussian distribution in parameter and observable space, which is not generally the case. In particular, this is not true for our reaction model, where the dependence of the observable (differential cross section) on the parameters (optical potential) is strongly non-linear.
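The coverage test of Fig. \ref{fig:percentages} amounts to counting the fraction of data points inside each band; a minimal sketch, with purely illustrative data, is:

```python
import numpy as np

def empirical_coverage(sig_exp, band_lo, band_hi):
    """Fraction of experimental points falling inside a given confidence band."""
    inside = (sig_exp >= band_lo) & (sig_exp <= band_hi)
    return inside.mean()

# Toy check: data drawn from the band's own error model should land in a
# nominal 95% band about 95% of the time (all numbers are illustrative).
rng = np.random.default_rng(4)
truth = np.ones(200)
data = truth + rng.normal(0.0, 0.1, truth.size)
lo, hi = truth - 1.96 * 0.1, truth + 1.96 * 0.1
cov = empirical_coverage(data, lo, hi)
```

Repeating this for a range of nominal levels produces curves like those in Fig. \ref{fig:percentages}: a truthful method tracks the diagonal, while over- or under-prediction appears as a systematic offset.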
For these reasons, we have also implemented a Bayesian optimization algorithm for fitting the optical potential. The Bayesian results (\emph{B}) are compared to the frequentist uncorrelated and correlated results in Figs. \ref{fig:UCCB} and \ref{fig:percentages}: green lines or green triangles. For the Bayesian calculations, the error $\varepsilon$ is defined again as $\varepsilon = \Delta \sigma/\overline{\sigma}$, where $\Delta \sigma$ is still the width of the 95\% confidence interval but now $\overline{\sigma}$ is the mean value of the cross sections within the 95\% confidence interval. In all cases, the confidence intervals and related uncertainties obtained with the Bayesian approach are larger than their frequentist counterparts (green compared to blue). At the lower confidence values, the uncorrelated frequentist optimization more accurately predicts the uncertainties, but if we want to look at higher confidence values -- such as 2$\sigma$ or 95\% confidence -- the truthfulness of the uncorrelated intervals falls off, and the Bayesian optimization becomes more reliable. In this work and our previous studies, we look at 95\% confidence intervals, at which point the frequentist methods tend to underpredict the uncertainties and the Bayesian optimization is more truthful. The discrepancies between the optimization methods are also seen in the propagation of the optical model uncertainties to the single-nucleon transfer cross sections, as will be discussed in Sec. \ref{sec:fewbody}. These characteristics are not unique to nucleon scattering on $^{40}$Ca; other targets are discussed in \cite{King2019}. \subsection{Correlations in parameter space} \label{sec:corr} It is worth also discussing the differences between the minima and correlations in parameter space among the three methods.
In Table \ref{tab:parameters}, we show the best-fit parameters for the uncorrelated and correlated frequentist fits along with the average parameter values from the accepted Bayesian samples. We note that, especially in the imaginary volume term, many of the parameters have to be fixed in order to prevent the minimum from going into unphysical regions of the parameter space when the frequentist optimization is used. These parameters are shown in italics in Table \ref{tab:parameters}. However, the real volume term is fairly similar between the three optimization schemes, meaning that they all lead to similar minima, the biggest differences being the geometries of the potentials. Moreover, particularly in the frequentist optimization, the parameters tend to be highly correlated, leading to similar elastic-scattering cross sections, even when the geometry is not the same. We can also constrain the Bayesian optimization to have the same parameters fixed (at the same values) as the uncorrelated frequentist optimization, and when this is done, we find essentially the same minima between the two routines. We then conclude that the prior distribution -- especially a very wide Gaussian prior, as we use here -- does not strongly affect the minima that are found in the Bayesian optimization and does not necessarily keep the minimum closer to the starting parameter set. The Bayesian optimization procedure is data-driven (or driven by the likelihood) rather than being driven by the prior distribution, and the Bayesian procedure allows the parameters to remain in the physical region of the parameter space, while adding a very minimal constraint with the prior distribution. In the bottom half of Table \ref{tab:parameters}, we list the widths of the parameter distributions, either from the parameter covariance matrix for the frequentist calculations or the standard deviation of the set of accepted parameters for the Bayesian calculations. 
In most cases, the Bayesian parameter widths are larger than both the uncorrelated and correlated frequentist widths. The largest difference is for the correlated frequentist optimization, where the imaginary surface parameter widths are sometimes larger than those found in the Bayesian optimization. Noticeably, the optimization of $^{40}$Ca(p,p) at 14.5 MeV is the only case where all of the Bayesian parameter widths are larger than the correlated frequentist widths, and this is the only case where the Bayesian confidence intervals are larger than the correlated confidence intervals across the full angular range. There is a high level of degeneracy in the parameter space, both continuous and discrete, due to the fact that different sets of optical parameters can provide the same elastic scattering distribution. One way to address this degeneracy is to introduce new parameters, resulting from a combination of optical potential parameters, that would remove the degeneracy. This is not our approach. In fact, because we choose to use wide priors, we explore a wide region of parameter space, with all its degeneracies, and allow for multimodal posteriors. Our uncertainty intervals thus have no biases toward one or another minimum. Nevertheless, by construction the priors do not favor unphysical values for the parameters, avoiding issues with negative values for radii and diffusenesses.
\begin{landscape} \begin{table} \centering \begin{tabular}{clccccccccc} \textbf{Optimization} & \textbf{Reaction} & \textbf{V ($\mu$)} & \textbf{r ($\mu$)} & \textbf{a ($\mu$)} & \textbf{W$_\mathrm{s}$ ($\mu$)} & \textbf{r$_\mathrm{s}$ ($\mu$)} & \textbf{a$_\mathrm{s}$ ($\mu$)} & \textbf{W$_\mathrm{v}$ ($\mu$)} & \textbf{r$_\mathrm{W}$ ($\mu$)} & \textbf{a$_\mathrm{W}$ ($\mu$)} \\ \hline \textbf{UC} & $^{40}$Ca(n,n) (13.9) &45.629 &1.293 &0.500 &3.959 &1.129 &\it{0.890} &\it{1.289} &\it{1.094} &\it{0.301} \\ \textbf{C} & $^{40}$Ca(n,n) & 43.284 &1.316 &0.532 & 4.303 &1.111 & 0.789 &\it{1.301} &\it{1.332} &\it{0.652} \\ \textbf{B} & $^{40}$Ca(n,n) & 44.246 & 1.317 & 0.550 & 7.268 & 1.231 & 0.484 & 1.276 & 1.095 &0.587 \\ \hline \textbf{UC} & $^{40}$Ca(p,p) (14.5) &53.286 &1.160 &0.590 &2.145 &1.279 &\it{0.892} &\it{1.189} &\it{1.070} &\it{0.757} \\ \textbf{C} & $^{40}$Ca(p,p) & 52.467 &1.267 &0.488 &2.008 &\it{1.045} &\it{0.7849} &\it{2.088} &\it{1.118} &\it{0.652} \\ \textbf{B} & $^{40}$Ca(p,p) & 51.472 & 1.207 & 0.639 & 6.471 & 1.312 & 0.371 & 0.425& 1.335 & 0.530 \\ \hline \textbf{UC} & $^{40}$Ca(p,p) (26.3) & 55.776 &1.067 &0.881 &4.206 &\it{1.034} &\it{0.721} &\it{4.421} &\it{1.3121} &\it{0.51} \\ \textbf{C} & $^{40}$Ca(p,p) & 44.676 &1.258 &0.7077 & 6.7437 &1.210 & 0.403 & \it{2.588} &\it{1.362} &\it{0.7556} \\ \textbf{B} & $^{40}$Ca(p,p) & 56.538 & 1.025 & 0.865 & 3.027 & 1.591 & 0.602 & 3.408 & 1.390 & 0.483 \\ \hline \textbf{UC} & $^{40}$Ca(d,d) (30.0) & 97.936 &1.065 &0.805 &9.045 &1.516 &0.630 &\it{2.444} &\it{1.479} &\it{0.311} \\ \textbf{C} & $^{40}$Ca(d,d) & 100.49 &1.077 &0.774 &\it{6.624} &1.413 &0.853 &\it{2.010} &\it{1.600} &\it{0.334} \\ \textbf{B} & $^{40}$Ca(d,d) & 97.026&1.080&0.782&9.697&1.486&0.676&2.617&1.045&0.555 \\ \hline & & \textbf{V ($\sigma$)} & \textbf{r ($\sigma$)} & \textbf{a ($\sigma$)} & \textbf{W$_\mathrm{s}$ ($\sigma$)} & \textbf{r$_\mathrm{s}$ ($\sigma$)} & \textbf{a$_\mathrm{s}$ ($\sigma$)} & \textbf{W$_\mathrm{v}$ 
($\sigma$)} & \textbf{r$_\mathrm{W}$ ($\sigma$)} & \textbf{a$_\mathrm{W}$ ($\sigma$)} \\ \hline \textbf{UC} & $^{40}$Ca(n,n) (13.9) &2.426 &0.045 &0.048 &0.722 &0.185 & --- & --- & --- & --- \\ \textbf{C} & $^{40}$Ca(n,n) & 2.022 &0.029 &0.076 &1.210 &0.203 &0.237 & --- & --- & --- \\ \textbf{B} & $^{40}$Ca(n,n) & 3.277 & 0.057 & 0.057 & 0.604 & 0.086 & 0.038 & 0.150 & 0.106 & 0.069 \\ \hline \textbf{UC} & $^{40}$Ca(p,p) (14.5) &1.475 &0.016 &0.024 &0.309 &0.170 & --- & --- & --- & --- \\ \textbf{C} & $^{40}$Ca(p,p) & 1.889 &0.020 &0.017 &0.202 & --- & --- & --- & --- & --- \\ \textbf{B} & $^{40}$Ca(p,p) & 2.915 & 0.047 & 0.060 & 0.648 & 0.114 & 0.053 & 0.046 & 0.154 & 0.059 \\ \hline \textbf{UC} & $^{40}$Ca(p,p) (26.3) & 2.245 &0.027 &0.023 &0.210 & --- & --- & --- & --- & --- \\ \textbf{C} & $^{40}$Ca(p,p) &1.062 & 0.027 &0.031 &1.243 &0.035 &0.050 & --- & --- & --- \\ \textbf{B} & $^{40}$Ca(p,p) & 3.622 & 0.044 & 0.050 & 0.446 & 0.096 & 0.058 & 0.285 & 0.098 & 0.058 \\ \hline \textbf{UC} & $^{40}$Ca(d,d) (30.0) & 5.361 &0.043 &0.023 &0.511 &0.015 &0.026 & --- & --- & --- \\ \textbf{C} & $^{40}$Ca(d,d) & 5.743 &0.127 &0.095 & --- &0.155 &0.148 & --- & --- & --- \\ \textbf{B} & $^{40}$Ca(d,d) & 7.269 & 0.056 & 0.038 & 0.950 & 0.042 & 0.054 & 0.299 & 0.144 & 0.051 \\ \end{tabular} \caption{Optimized parameter means, $\mu$, and posterior widths, $\sigma$, for each of the three fitting techniques (first column) and each reaction studied (incident energies, in MeV, listed in parentheses in the second column). The real volume, imaginary surface, and imaginary volume depths are listed, in MeV, in the third, sixth, and ninth columns, respectively. The corresponding radii (diffusenesses) are listed in fm in the fourth (fifth), seventh (eighth), and tenth (eleventh) columns.} \label{tab:parameters} \end{table} \end{landscape} In addition, the correlations between the parameters from the four elastic-scattering optimizations are shown in Fig. \ref{fig:correlations}.
To focus on the differences in the correlations and not the parameter values, the correlations have been normalized such that each mean is zero and the width of the distribution is one. Circular distributions indicate uncorrelated parameters, while more oval distributions indicate more correlated (or anti-correlated) parameter pairs. Historically, the optical model parameters have been found to be extremely correlated \cite{ThompsonNunes}, and these correlations can be seen in the blue and red distributions of the uncorrelated and correlated parameters. On the other hand, the Bayesian optimization shows very few correlations, except between the depth and radius of the real volume potential. This lack of correlation was also seen in \cite{King2019} and indicates that strong correlations may have been induced by the $\chi^2$ minimization procedure. \begin{figure} \centering \begin{tabular}{cc} \pdfimageresolution=100\includegraphics[width=0.45\textwidth]{40can13_scatter.pdf} & \pdfimageresolution=100\includegraphics[width=0.45\textwidth]{40cap14_scatter.pdf} \\ \pdfimageresolution=100\includegraphics[width=0.45\textwidth]{40cap26_scatter.pdf} & \pdfimageresolution=100\includegraphics[width=0.45\textwidth]{40cad30_scatter.pdf} \\ \end{tabular} \caption{Correlations between the optical model parameters for (a) $^{40}$Ca(n,n) at 13.9 MeV, (b) $^{40}$Ca(p,p) at 14.5 MeV, (c) $^{40}$Ca(p,p) at 26.3 MeV, and (d) $^{40}$Ca(d,d) at 30.0 MeV for the uncorrelated, correlated, and Bayesian optimizations in blue, red, and green, respectively.
Histograms on the diagonal and parameters in the off-diagonal scatter plots have been normalized to mean zero and standard deviation one to emphasize the differences in the correlations.} \label{fig:correlations} \end{figure} Having quantified the uncertainties coming from the optical potential as constrained by fits to the differential cross sections, we find that these uncertainties are so large as to be almost unusable. The uncertainties that we have found in all studied cases have varied anywhere from 20\% to over 100\% - and are not any smaller when they are propagated to the single-nucleon transfer cross sections. Therefore, we have also been exploring ways to reduce the uncertainties in the optical potential. In the next two subsections, we discuss further experimental constraints to shrink the uncertainties on the optical potential - and the resulting uncertainties on the elastic scattering and transfer cross sections. \subsection{Other experimental constraints on the optical potential} \label{sec:energies} Next we explore experimental conditions for elastic-scattering measurements, with the intent of reducing the uncertainties. Here we include tests on the angular range of the data fitted and the effects of including a second set of elastic angular distributions, obtained at a nearby energy. Regarding the angular range of the data included in the optimization procedure, in \cite{CatacoraRios2019} we tested whether fitting only angles forward of 100$^\circ$, or fitting a reduced set of angular data (where every other angular data point was removed), produced significant differences in the width of the uncertainty interval. Because of the correlations between the differential cross section at various angles, due to the partial wave decomposition, constraints at one angle are propagated to other angles.
Consistent with that, we find that we do not gain more information with a denser angular grid, and, as expected, constraining backward angles only affects the backward-angle uncertainty. Instead, we turn to including a second set of elastic-scattering data at a nearby energy. There are two ways to include this second set: 1) sequentially, where the Bayesian optimization is run using the first set of data and the parameter posterior distribution is used as the prior distribution to optimize over the second set of data, or 2) simultaneously, where the two data sets are both fed into the optimization routine at the same time. When the two data sets are included sequentially, we find a very small improvement in the uncertainties (decrease in the size of the confidence intervals), just as in \cite{CatacoraRios2019}. However, while \cite{CatacoraRios2019} shows a large improvement when the two data sets are included simultaneously, in the cases studied here we find only a modest improvement. This is illustrated in Fig. \ref{fig:energies}: a simultaneous fit (denoted \emph{multiple}, red) is compared to the fitting of only one data set (\emph{single}, blue) for $^{40}$Ca(n,n) at 13.9 MeV (a) and $^{40}$Ca(p,p) at 14.5 MeV (c). In panels (b) and (d), we show the percent uncertainty of the confidence intervals, $\varepsilon$. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{energies40Can14.pdf} & \includegraphics[width=0.5\textwidth]{energies40Can14Error.pdf} \\ \includegraphics[width=0.5\textwidth]{energies40Cap14.pdf} & \includegraphics[width=0.5\textwidth]{energies40Cap14Error.pdf} \\ \end{tabular} \caption{Comparison between the Bayesian optimization of a single elastic scattering data set (\emph{single}, blue) and the simultaneous optimization of two nearby energies (\emph{multiple}, red) for (a) $^{40}$Ca(n,n) at 13.9 MeV (second set at 11.9 MeV), and (c) $^{40}$Ca(p,p) at 14.5 MeV (second set at 12.5 MeV).
In (b) and (d) are the corresponding percentage uncertainties, $\varepsilon$.} \label{fig:energies} \end{figure} As is evident from the percent uncertainty in panels (b) and (d) for the neutron scattering and proton scattering respectively, no significant decrease in the uncertainty is found when the second set of experimental data is included. In particular, for the neutron scattering case, we actually see an increase of the uncertainties at backward angles. This increase is due to the discrepancies between the two data sets backwards of $100^\circ$ (shown as the black and grey symbols). In both cases, we used data that was measured by the same group with the same experimental set-up, to remove as many sources of systematic uncertainty as possible. The differences between the results here and the results in our previous work \cite{CatacoraRios2019} are mainly due to the use of mock data in the previous work and real experimental data here. We discuss this further in Section \ref{sec:dataDifferences}. \subsection{Including a variety of reaction observables} Within the Bayesian framework, we can study how various additional experimental constraints can impact - and hopefully reduce - the uncertainties in the optical potential. The first experimental constraint that we explore is adding other reaction data to the fitting procedure. We first consider vector analyzing powers, $Re(iT_{11})$, and then add total or reaction cross sections, $\sigma_\mathrm{tot}$ or $\sigma_\mathrm{R}$. The combined $\chi^2$ applies equal weights to the different sets of data. Although the vector analyzing powers should be more sensitive to the spin-orbit part of the potential than the differential cross sections or the total and reaction cross sections, for consistency within this work, we still only optimize the volume and surface terms of the optical potential. In Fig.
\ref{fig:observables}, we show the differential cross sections (left column) and analyzing powers (right column) when only the differential cross section is fit (blue), only the polarization is fit (red), and both are fit simultaneously (green) for $^{40}$Ca(n,n) at 13.9 MeV and $^{40}$Ca(p,p) at 14.5 MeV. First, we notice that in (a) and (c) the best representation of the data is when only the differential cross section is fit; likewise, in panels (b) and (d), the best representation of the polarization data is when only the polarization data is included in the optimization. These are the sorts of difficulties that can arise when addressing the real problem, with real data. In both the neutron and proton scattering cases, we see that the minimum obtained from the differential cross section is shifted significantly from the polarization minimum. When both observables are included in the Bayesian optimization (green regions), the confidence intervals and posterior distributions typically fall somewhere between the minima from the elastic cross section and polarization, as for $^{40}$Ca(p,p) at 14.5 MeV in Fig. \ref{fig:observables}(c) and (d). However, as for $^{40}$Ca(n,n) at 13.9 MeV, in panels (a) and (b), we find that the confidence intervals when both observables are included in the fitting process are similar in shape to the confidence intervals when only the analyzing power is included in the optimization. The authors in \cite{Tornow1982}, \cite{Aoki1996}, and \cite{Mccamis1986} all found that for $^{40}$Ca-nucleon scattering reactions, an optical potential of the form that we are investigating here (e.g. local, energy-dependent, and non-relativistic) was not sufficient to describe both their differential cross section data and polarization data simultaneously. These joint optimizations should be studied for other targets.
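The equal-weight combination of observables described above can be sketched minimally; the numbers below are invented for illustration and stand in for actual cross-section and analyzing-power data:

```python
import numpy as np

def chi2(model, data, sigma):
    # Standard chi-squared between model predictions and data.
    return float(np.sum(((model - data) / sigma) ** 2))

def combined_chi2(model_xs, data_xs, sig_xs, model_pol, data_pol, sig_pol):
    # Equal-weight combination: the joint likelihood is proportional to
    # exp(-(chi2_xs + chi2_pol)/2), with no relative weighting of the sets.
    return chi2(model_xs, data_xs, sig_xs) + chi2(model_pol, data_pol, sig_pol)

# Made-up numbers purely for illustration.
data_xs = np.array([100.0, 50.0, 20.0])     # differential cross section (mb/sr)
model_xs = np.array([95.0, 52.0, 21.0])
data_pol = np.array([0.10, -0.20, 0.30])    # analyzing power
model_pol = np.array([0.12, -0.18, 0.25])
total = combined_chi2(model_xs, data_xs, 0.1 * data_xs,
                      model_pol, data_pol, 0.05 * np.ones(3))
```

In a real application the relative weighting of the two data sets is itself a modeling choice; here, as in the text, both enter with equal weight.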
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{xs40Can14.pdf} & \includegraphics[width=0.5\textwidth]{pol40Can14.pdf} \\ \includegraphics[width=0.5\textwidth]{xs40Cap14.pdf} & \includegraphics[width=0.5\textwidth]{pol40Cap14.pdf} \\ \end{tabular} \caption{Comparison of the differential cross sections (a) and (c) and polarization analyzing powers (b) and (d) when only differential cross section data was fit (blue), only polarization data was fit (red), and both differential cross section and polarization data were fit (green), for $^{40}$Ca(n,n) at 13.9 MeV, (a) and (b), and for $^{40}$Ca(p,p) at 14.5 MeV, (c) and (d).} \label{fig:observables} \end{figure} We have also considered the effect of including total or reaction cross sections along with polarization. In \cite{CatacoraRios2019}, we found that adding the reaction or total cross section into the Bayesian optimization routine did not offer an improvement on the uncertainties, when compared to the use of the full angular distribution. In that work, mock data was used and thus the total (reaction) cross section did not contain additional information beyond the differential cross section over all angles (by construction, that total cross section was exactly the integral of the angular distribution included in the fit already). Although here we are using real data, we have chosen cases for which the angular distributions have full coverage, and therefore again, when including total (reaction) cross sections in the likelihood, the reduction in the uncertainties is minimal. \subsection{Differences between mock data and measured data} \label{sec:dataDifferences} Some of the results presented in this section are significantly different from those obtained in \cite{CatacoraRios2019}, particularly when multiple energies were included in the optimization.
We had previously found that fitting data at two nearby energies reduced the uncertainty on the differential cross section by as much as - and sometimes more than - 50\%. Those results were calculated using mock data generated from the Koning-Delaroche optical potential \cite{Koning2003} to ensure that we could remove any difficulties coming from discrepancies in experimental data (such as the error in the normalization, the incident energies not lining up, etc.). In Fig. \ref{fig:energiesMock}, we show the single and multiple optimization for $^{40}$Ca(n,n) at 13.9 MeV and $^{40}$Ca(p,p) at 14.5 MeV using mock data: confidence intervals in panels (a) and (c) and the percentage uncertainty in (b) and (d). Comparing this figure with Fig. \ref{fig:energies} - same calculation but with real experimental data - we see that the uncertainties are smaller when mock data are used compared to the case when real experimental data are used. In both cases though, we do not see the same degree of reduction in the uncertainty when a second energy is included in the optimization process. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{energiesMock40Can14.pdf} & \includegraphics[width=0.5\textwidth]{energiesMock40Can14Error.pdf} \\ \includegraphics[width=0.5\textwidth]{energiesMock40Cap14.pdf} & \includegraphics[width=0.5\textwidth]{energiesMock40Cap14Error.pdf} \\ \end{tabular} \caption{Comparison between the Bayesian optimization of a single elastic scattering data set (\emph{single}, blue) and the simultaneous optimization of two nearby energies (\emph{multiple}, red) for (a) $^{40}$Ca(n,n) mock data at 13.9 MeV (second set at 11.9 MeV), and (c) $^{40}$Ca(p,p) mock data at 14.5 MeV (second set at 12.5 MeV). 
In (b) and (d) are the corresponding percentage uncertainties, $\varepsilon$.} \label{fig:energiesMock} \end{figure} However, in \cite{CatacoraRios2019}, we also found that the uncertainty was strongly dependent on the percent difference between the two energies that are included in the fit. There was a greater impact at higher incident energies and when the two energies included were $\sim$ 10\% apart (if the difference in energies were larger, the two minima would be too different, as the optical potentials are strongly energy dependent; if it were smaller, the second set would offer no extra information to the optimization procedure). With real experimental data we cannot control this. Indeed, the existing data for this case are slightly farther apart than 10\%, which may also contribute to the differences seen here compared to \cite{CatacoraRios2019}. Having a targeted experimental study to measure elastic scattering at two close energies could help us understand if these results are due to using real experimental data (not mock data) or from the percent difference between the two incident energies. \section{Solving the few-body scattering problem} \label{sec:fewbody} Using the potential parameters that led to the cross sections shown in Fig. \ref{fig:UCCB}, we can propagate the uncertainties to the single-neutron transfer cross section, $^{40}$Ca(d,p)$^{41}$Ca(g.s.) at 28.4 MeV. In this section, we consider two different three-body approximations to the $(d,p)$ reaction, as discussed in Section \ref{sec:stats}, namely the distorted-wave Born approximation (DWBA) and the adiabatic wave approximation (ADWA). In DWBA, the incoming channel requires the deuteron-target distorted wave while ADWA requires as input the proton-target and neutron-target distorted waves. We compare DWBA and ADWA using both the uncorrelated frequentist and Bayesian optimizations, Fig. \ref{fig:transfer} (a) and (b) respectively.
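Propagating optical-potential uncertainties to a reaction observable amounts to pushing posterior samples through the reaction model and taking pointwise percentiles. A minimal sketch, in which a toy angular distribution stands in for a DWBA or ADWA calculation and the posterior samples are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of two optical-potential parameters
# (stand-ins for the sampled depths/geometry from the Bayesian fit).
samples = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(2000, 2))

def toy_cross_section(theta_deg, V, r):
    # Placeholder angular distribution; a real calculation would call a
    # DWBA or ADWA reaction code with the sampled parameters.
    t = np.radians(theta_deg)
    return V * np.exp(-r * t) * (1.0 + 0.3 * np.cos(3.0 * t))

angles = np.linspace(1.0, 180.0, 60)
# Push every posterior sample through the reaction model ...
curves = np.array([toy_cross_section(angles, V, r) for V, r in samples])
# ... and summarize with pointwise 95% bands, as in the figures.
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
```

The resulting bands are what get compared between the DWBA and ADWA formulations; the non-linear map from parameters to observables is handled automatically by the sampling.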
Typically, a spectroscopic factor would be extracted by normalizing the calculation to the data at forward angles. However, because there is no data near this incident energy to compare to, we consider the spectroscopic factors in each case to be unity and focus on the differences in the shape of the resulting cross sections. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{40CadpFrequentist.pdf} & \includegraphics[width=0.5\textwidth]{40CadpBayes.pdf} \end{tabular} \caption{Comparison between the $^{40}$Ca(d,p)$^{41}$Ca(g.s.) at 28.4 MeV cross section using DWBA (blue) and ADWA (red) for (a) the uncorrelated frequentist optimization and (b) the Bayesian optimization.} \label{fig:transfer} \end{figure} When the optical potential uncertainties are propagated through each reaction formulation, we see a significant difference in the shape of the angular distributions between the DWBA and ADWA calculations, for both the frequentist and Bayesian techniques. However, depending on the angular range over which the experimental data would be measured, these calculations would still be difficult to distinguish, unless a detailed angular distribution could be measured between zero and 40 degrees. For stable targets, this is certainly feasible, but when considering reactions with unstable beams, where the experimental errors are typically larger and the angular coverage is reduced due to the reaction being measured in inverse kinematics, these two models will become increasingly difficult to distinguish. (Comparisons between the uncorrelated and correlated frequentist optical potentials show similar angular dependence for the DWBA and ADWA calculations, but the confidence intervals are significantly wider when the correlated fit is propagated, as would be expected from the confidence intervals on the elastic scattering in Fig. \ref{fig:UCCB}.)
It is clear that the larger uncertainties on the elastic scattering calculations translate to larger uncertainties on the transfer cross section, even though these uncertainties are propagated in a non-linear fashion. \section{The overlap function and interplay of other degrees of freedom} \label{sec:structure} The last two items to consider are the overlap functions and the interplay of degrees of freedom that are left out of the model space. For single-nucleon transfer reactions, when we discuss the overlap function, we are thinking in particular of the bound-state wave function between the target and the transferred nucleon. Typically, the binding interaction is assumed to have a Woods-Saxon shape with a standard geometry and a depth that reproduces the nucleon-target binding energy. In \cite{Lovell2015}, we showed that changing the geometry while keeping the binding energy fixed could drastically change the resulting transfer cross section, by a factor of two or more at the peak of the angular distribution. These differences were seen when the radius parameter was changed from 1.1 fm to 1.3 fm. Between the two cases that were studied, the changes were larger in the more exotic target, $^{132}$Sn, compared to the stable target, $^{48}$Ca. We would like to study the uncertainties in the description of the bound state in a Bayesian manner. One option to incorporate these uncertainties would be to sample the geometry of the single-particle potential, constrained by narrow prior distributions for the radius and diffuseness, while imposing that the potential depth reproduces the binding energy. But one could also use constraints from other known properties. The asymptotic normalization coefficient (ANC) provides one such constraint. Previous reaction studies have shown that breakup cross sections are scaled by the ANC squared \cite{capel2006,capel2007}.
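The geometry-sampling option described above can be sketched as follows; the binding-energy function here is a toy placeholder for the actual bound-state solver, and the prior widths are assumed, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_binding_energy(V0, r0, a):
    # Placeholder for the bound-state solver: a real implementation would
    # solve the radial Schroedinger equation in a Woods-Saxon well of depth
    # V0 (MeV), radius r0 (fm) and diffuseness a (fm).
    return 0.2 * V0 * r0 / (1.0 + 2.0 * a) - 1.0

def depth_for_binding(B, r0, a, lo=1.0, hi=500.0):
    # Bisection for the depth V0 that reproduces binding energy B.
    f = lambda V: toy_binding_energy(V, r0, a) - B
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

B_target = 8.36   # approximate n + 40Ca separation energy for 41Ca(g.s.), MeV
samples = []
for _ in range(200):
    r0 = rng.normal(1.25, 0.02)   # narrow priors on the geometry,
    a = rng.normal(0.65, 0.02)    # as proposed in the text
    samples.append((depth_for_binding(B_target, r0, a), r0, a))
```

Each sampled geometry comes with a depth fixed by the binding energy, so the remaining spread in the samples is exactly the bound-state ambiguity to be propagated.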
Since the ANC can be calculated microscopically for light nuclei \cite{brida2011} and can be extracted unambiguously from peripheral reactions for many other nuclei \cite{combined}, the ANC constraint could strongly reduce the ambiguity in the single-particle potential. The effects of using the ANC as a constraint can be quantified with the current tools (similarly to what was done in \cite{CatacoraRios2019}). Were this to provide a significant constraint on the single-particle potential, it would give a strong motivation to develop a program to measure this quantity for a variety of heavy nuclei. In addition, the overlap function can be extracted from the one-body density matrix (OBDM) \cite{Ivanov2001,Timofeyuk2020,Dimitrova1997}, which has already been calculated for the $^{40}$Ca target studied in this work. Extracting the uncertainties on the overlap function using both of these methods could provide an interesting comparison between the two constraints. The last source of uncertainty in few-body reactions discussed in Section \ref{sec:mot} comes from the degrees of freedom left out of the model space. The most obvious source of this uncertainty is the reduction of the many-body model space to a few-body problem. If one could compare exact many-body calculations directly with the few-body calculations, it would allow us to construct an emulator to connect the few-body to the many-body calculations (such an emulator typically contains an uncertainty estimate, e.g. from a Gaussian process \cite{Rasmussen}). Unfortunately, this is currently not possible for any but the lightest systems, and given the computational scale of \emph{ab-initio} reaction calculations, it would take an unfeasibly long time. Instead, what can be done effectively is to test different levels of approximations in the few-body models. We have begun to explore the differences between DWBA and ADWA calculations, two of the few-body approximations that are commonly made. The study discussed in Sec.
\ref{sec:fewbody} explores the differences in the uncertainty interval of the predicted cross sections when one uses deuteron optical potential uncertainties versus nucleon optical potential uncertainties. A quantitative comparison of ADWA and DWBA can be performed more systematically using Bayesian evidence, as will be discussed briefly in the conclusions. In the framework of model comparison, ADWA and DWBA are not directly relatable in the sense that they are not nested: one is not a simplification of the other. It is best to first consider nested models. In that context, the coupled-channel Born approximation (CCBA) is a useful model, because one can easily switch off the coupling to the inelastic state and reduce the model to DWBA. With the Bayesian machinery that we have put in place, we can systematically study the effects that the addition of inelastic scattering channels has on the widths and means of the confidence intervals for the cross sections. This starting point would enable the development of the methodology needed for more complex comparisons where one model is not just a subset of another. \section{Outlook} \label{sec:outlook} In this overview, we have described, using the example of $^{40}$Ca elastic scattering and transfer reactions, recent progress made on quantifying uncertainties in few-body reaction theory. We have shown the development from frequentist $\chi^2$ minimization methods and covariance matrix propagation to a full Bayesian analysis in determining and propagating uncertainties in effective interactions. In addition, under these two frameworks, we can now directly compare how uncertainties are propagated from these interactions when various approximations to the few-body model are made, namely between the distorted-wave Born approximation and the adiabatic wave approximation. Although much work has been done, there are many exciting opportunities ahead.
The long-term vision for the future of this work involves using Bayesian methodology along two major interacting thrusts: the first concerns experimental design, and the second is focused on improving theory itself (one might call it theory design). In this section we discuss the overarching vision and specific developments that are needed for implementing that vision. One tool that we have identified as necessary to implement this long-term vision is calculating the Bayesian evidence for various theories. Described briefly in Sec. \ref{sec:theory}, the Bayesian evidence is the denominator in Bayes' theorem, Eq. (\ref{eqn:bayes}), which is a weighted sum over all possible models and provides a formal way of evaluating relative probabilities of different models. Bayesian evidence is a numeric formulation of \emph{Occam's razor} - a systematic way of showing that a simpler model which reproduces the data should be favored over a more complex model. However, computing the Bayesian evidence typically requires an integral over all of a model's parameter space, a potentially computationally demanding task, particularly as the models become more complex. In addition, the interpretation of the value of the evidence is not always straightforward. Still, being able to calculate the Bayesian evidence is an important development for our long-term uncertainty quantification vision. In the first thrust of experimental design, there are many controllable conditions in reaction experiments (beam energy, angular range, target, etc.), and the tools developed thus far can help in determining those conditions that produce optimum sensitivity to the quantities of interest, e.g. as in \cite{Melendez2020}. The work in \cite{CatacoraRios2019} is the first step along this direction, but other tools should be considered when assessing what combination of observables contains most information.
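The Occam-razor behavior of the Bayesian evidence can be seen in a toy calculation; the models, priors and data below are all invented for illustration, and a real application would integrate over the optical-model parameter space instead:

```python
import numpy as np

# Data generated by a constant, compared against a constant model M1
# and a straight-line model M2 with one extra parameter.
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 0.05 * (-1.0) ** np.arange(20)   # deterministic "noise", zero slope
sigma = 0.1

def likelihood(resid):
    # Gaussian likelihood up to a constant factor that cancels in Z1/Z2.
    return np.exp(-0.5 * np.sum((resid / sigma) ** 2))

# Evidence of M1: Z1 = int p(D|c) p(c) dc with uniform prior c ~ U[0, 2].
cs = np.linspace(0.0, 2.0, 801)
dc = cs[1] - cs[0]
Z1 = sum(likelihood(y - c) for c in cs) * dc * 0.5

# Evidence of M2: Z2 = int p(D|a,b) p(a) p(b) da db,
# with a ~ U[0, 2] and b ~ U[-2, 2].
as_ = np.linspace(0.0, 2.0, 201)
bs = np.linspace(-2.0, 2.0, 201)
da, db = as_[1] - as_[0], bs[1] - bs[0]
Z2 = sum(likelihood(y - (a + b * x)) for a in as_ for b in bs) * da * db * 0.125

bayes_factor = Z1 / Z2   # Occam's razor: the simpler model is favored here
```

The extra parameter of M2 spreads its prior mass over fits the data never asks for, so its evidence is diluted even though its best fit is at least as good; this is the penalty the text describes.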
Tools such as Principal Component Analysis (PCA) and Bayesian evidence have been successfully applied in other areas, e.g. \cite{Sangaline2016,Novak2014,Trotta2008,Fox2020}, and should be studied in the context of nuclear reactions. The Bayesian methodology discussed here can also be extremely useful in improving theoretical models. So far, due to computational considerations, the models used have been very simple (optical model, DWBA, and ADWA). However, we understand that there is important physics that these models do not contain. Ultimately, we want to increase the complexity of the models, matching the state of the art in theory to the degree called for by the data. In other words, the model should be as sophisticated as the data requires. In addition, as we include more and more data in our analysis, there will be an increasing demand on the physics included in the model to be able to describe all observables simultaneously with the same accuracy. In order to start down this path, we will need quantitative and reliable methods for model comparison. An essential tool for performing model comparisons in the Bayesian world is the Bayesian evidence. Also, model mixing is the natural sequel to model comparison, as a robust way to incorporate the best aspects of each theory. Both of these developments, though, pose specific challenges that still need to be addressed. What to do when the models are not nested, and moreover when the models do not contain similar parameters? What are the most efficient and accurate methods to perform numerical integrals over a large parameter space? These are questions that must be investigated to ensure a robust development in this area. \ack This work was supported by the National Science Foundation under Grant PHY-1403906 and the Department of Energy under Contract No. DE-FG52-08NA28552 and was performed under the auspices of the U.S. Department of Energy by Los Alamos National Laboratory under Contract 89233218CNA000001. A.E.L.
would like to thank W.J. Ong for her help in brainstorming names for the {\sc QUILTR} wrapper. We also gratefully acknowledge the support of the U.S. Department of Energy through the LANL/LDRD Program and the Center for Nonlinear Studies. \newpage \normalsize
\section{Introduction} This paper considers the Robust Distributed Model Predictive Control (RDMPC) of $M$ discrete-time linear dynamical systems with disturbances, formulated as \begin{align} x^i(t+1)&=A^ix^i(t)+B^iu^i(t)+w^i(t)\\ x^i(t)\in \mathcal{X}^i,& u^i(t)\in \mathcal{U}^i, w^i(t)\in \mathcal{W}^i, i=1,\dots,M \end{align} All of them must also respect a global coupled constraint of the form \begin{align} \sum_{i=1}^{M}(\Psi^i_xx^i(t)+\Psi^i_uu^i(t))\leq\textbf{1}_p, \quad\text{for all } t \end{align} where $x^i\in\mathbb{R}^{n_i}$, $u^i\in\mathbb{R}^{m_i}$ and $w^i\in\mathbb{R}^{n_i}$ are the state, input and disturbance of the $i^{th}$ system; $\mathcal{X}^i$ and $\mathcal{U}^i$ are the corresponding state and input constraint sets for the $i^{th}$ system; $\mathcal{W}^i\triangleq\{w^i\in \mathbb{R}^{n_i} \mid \parallel w^i \parallel \le\bar{w^i}\}$ is a compact set containing the origin in its interior; the matrices $\Psi^i_x\in\mathbb{R}^{p\times n_i}$ and $\Psi^i_u\in\mathbb{R}^{p\times m_i}$ encode the coupled constraints of all $M$ systems.\\ \indent Model predictive control has proved to be highly successful in comparison with alternative methods of multivariable control, as evidenced by its widespread adoption by the process industries \citep{MAYNE20142967}. Robust MPC (RMPC) is an active area of research, and its robust stability has been studied using the theory of input-to-state stability (ISS) \citep{JIANG2001857}\citep{Sontag199520}. Some examples of the use of ISS theory include \citep{Limon2009}\citep{5129691}. The classical Lyapunov and input-to-state definitions of stability have subtle differences, especially when the system is discontinuous; these related issues, including implications for MPC, are thoroughly explored in \citep{6376100}.\\ \indent The useful concept of the tube-based method originated with seminal papers in the 1970s and remains an active area of research \citep{Kurzhanski1993}.
Its main idea is to construct a nominal MPC system, ignoring the additive disturbance, so as to simplify the optimization problem in the prediction step. It was initially used in MPC for linear systems \citep{CHISCI20011019}, and the tube-based procedure has also been extended to control nonlinear systems \citep{doi:10.1002/rnc.1758}. Aperiodic MPC is another approach to reduce the computational burden; there are two main types of aperiodic MPC, namely event-triggered MPC \citep{ShuaiLiu2017}\citep{7922495} and self-triggered MPC \citep{BERGLIND2012342}\citep{GOMMANS20141279}\citep{6862397}. A detailed introduction to the triggering mechanism is given in \citep{6425820}. Recently, \citep{DAI20191446} proposed a novel self-triggered MPC algorithm for linear systems subject to both disturbances and state/input constraints, and a self-triggered MPC algorithm with adaptive prediction horizon for perturbed nonlinear systems was further proposed in \citep{8667353}. This recent literature shows that the tube-based method and the self-triggered mechanism can be combined well for discrete-time linear systems with bounded disturbances.\\ \indent Many complex systems require control, but traditional centralized control is unsuitable because of their complexity. The complexity gives rise to modeling and data collection issues, raises computational and communication problems, and makes centralized control impractical. A very large literature on distributed control and distributed model predictive control (DMPC) has emerged for this reason \citep{WANG20102053}. One popular topic in DMPC concerns globally coupled state/input constraints of the form $(3)$. \citep{doi:10.1080/00207170701491070} gives a sequential method, in which the subsystems are updated in a certain sequence until all systems complete their iterations. Another approach, the cooperative MPC method, is given by \citep{Trodden2014}\citep{TRODDEN201498}.
The main idea of that approach is that all systems within a cooperating set are optimized jointly, while systems outside the set follow their previously computed predictive controls.\\ \indent A reasonable approach for system $(1)-(3)$ without disturbance is proposed in \citet{bertsekas1998nonlinear}. The method can achieve overall optimality through the dual problem involving the Lagrange function, where the dual variable functions as a consensus variable in the distributed optimal control problem. The Alternating Direction Method of Multipliers (ADMM) was later revived by \citep{10.1145/2020408.2020410}, and its good numerical performance in distributed systems across many applications has recently motivated scholars to revisit the distributed optimal control problem (DOCP) using distributed ADMM \citep{FARINA20121088}\citep{7799069}. Since the computational expense can be very high owing to the large number of iteration steps in ADMM, a new method has been proposed to handle prematurely terminated ADMM algorithms through constraint tightening \citep{WANG2017184}.\\ \indent Inspired by the above work on distributed model predictive control under distributed ADMM, a novel robust self-triggered distributed model predictive control scheme is proposed in this paper for a family of discrete-time linear perturbed systems with local and global constraints. The main contributions of this paper are as follows.\\ \indent 1) We extend the tube-based method of robust MPC to distributed discrete-time linear systems. The distributed setting gives rise to issues in terminal-set construction, constraint satisfaction and stability; in particular, the disturbances make the global constraint $(3)$ hard to satisfy.
A special form of constraint tightening is established so that all the above difficulties are resolved.\\ \indent 2) A sufficient condition on the local and global constraints under bounded disturbance is given to guarantee recursive feasibility of the optimal control problem (OCP), which properly links the tube-based method in RMPC with the constraint tightening in distributed ADMM.\\ \indent 3) A parallel DMPC algorithm based on the augmented Lagrangian method (ALM) is given in this paper, which also applies a self-triggering mechanism in RDMPC to lessen the computational burden of the optimal control problem. It shows the advantage of a partially parallel algorithm over traditional centralized methods in iteration efficiency.\\ \indent The paper is organised as follows. In Section 2, the tube-based method for handling bounded disturbances is presented and some useful properties of that approach are introduced. In Section 3, a robust distributed model predictive control scheme is designed and a robust self-triggered DMPC algorithm is proposed. Main results involving recursive feasibility and stability are developed in Section 4. Section 5 gives a numerical example to illustrate the results obtained, and the conclusion of the paper is given in Section 6. \begin{notation} The $n^i$-dimensional Euclidean space, the $m^i$-dimensional Euclidean space, the set of $p \times n^i$ real matrices and the set of $p \times m^i$ real matrices are denoted $\mathbb{R}^{n_i},\mathbb{R}^{m_i},\mathbb{R}^{p\times n_i}$, and $\mathbb{R}^{p\times m_i}$, respectively. $\textbf{1}_p$ denotes the $p$-dimensional column vector whose elements are all 1. The notations $N_{[0,N-1]}, \mathbb{Z}_M$ indicate the sets $\{r\in \mathbb{N}|0\le r\le N-1\}$ and $\{r\in \mathbb{Z}|r\le M\}$, respectively. For $W \in \mathbb{R}^{n\times n}$, the notation $W>0$ indicates that $W$ is positive definite, $\parallel W \parallel$ denotes the 2-norm of $W$, and $\mathbb{S}_{++}^{n}$ denotes the space of symmetric $n\times n$ positive definite matrices.
For $W \in\mathbb{S}_{++}^{n}$ and $x\in \mathbb{R}^n$, $\parallel x\parallel^2 = x^Tx$ and $ \parallel x \parallel_W^2 = x^TWx$. Given $\mathcal{X},\mathcal{Z}\subseteq\mathbb{R}^n$ and $A\in \mathbb{R}^{n\times n}$, $A\mathcal{X} = \{Ax|x\in\mathcal{X}\}$, $\mathcal{X} \oplus \mathcal{Z} = \{x+z| x\in\mathcal{X},z\in\mathcal{Z}\}$, and $\mathcal{X} \ominus \mathcal{Z} = \{x \in \mathbb{R}^n \mid x+z\in\mathcal{X},\ \forall z\in\mathcal{Z}\}$. We denote the largest and smallest eigenvalues of a matrix as $\lambda_{max}(\centerdot)$ and $\lambda_{min}(\centerdot)$. $\mathcal{V} = \{1,2,\dots,M\}$ and $ \mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ are the vertex set and edge set of the undirected graph $G$. The $i$-step-ahead predicted value of a variable at time $t_k$ is denoted as $(t_k+i|t_k)$. \end{notation} For the sake of readability, some proofs are located in the Appendix. \section{Preliminaries and properties of tube-based method} This section reviews some results in tube-based RMPC and other related concepts. For expository reasons, we use the tube-based approach to achieve the control goal. We can take full advantage of the optimal control input sequence calculated at each sampling time instead of just utilising the first element of the sequence. The inter-sampling time is chosen by a self-triggered condition, obtained through a minimization. We first define the nominal system with a given prediction horizon $N\in\mathbb{N}$ by neglecting the disturbance in $(1)$: \begin{equation} z^i(k+l+1|k)=A^iz^i(k+l|k)+B^iu^i(k+l|k), \quad l\in N_{[0,N-1]} \end{equation} where $z^i(k+l|k)$ is the predicted nominal state at step $k$, and we denote the error between the actual state and the nominal one as $e^i(k+l|k)\triangleq x^i(k+l|k)-z^i(k+l|k)$, with initial conditions $z^i(k|k)=x^i(k|k)$ and $e^i(k|k)=0$.
The error evolves as \begin{align} e^i(k+l|k)=&A^ie^i(k+l-1|k)+w^i(k+l-1)\\ =&{A^i}^{l-1}w^i(k)+{A^i}^{l-2}w^i(k+1)+ \dots +w^i(k+l-1)\notag \end{align} \indent If we can ensure the boundedness of the above error $e^i(k+l|k)$, which we will prove in Section 4, and ignore the coupled constraints $(3)$ for the moment, then we can construct the OCP at state $x^i(t_k)$ as \begin{subequations} \begin{align} \mathbb{P}(x^i(t_k)) \triangleq &\min_{u^i(t_k)}J^i(x^i(t_k),u^i(t_k))\notag\\ =&\sum_{l=0}^{N-1}[\parallel z^i(t_k+l|t_k)\parallel_{Q^i}^2+\parallel u^i(t_k+l|t_k)\parallel_{R^i}^2]+\parallel z^i(t_k+N|t_k)\parallel_{P^i}^2 \end{align} subject to \begin{align} &z^i(t_k|t_k)=x^i(t_k)\\ &z^i(t_k+N|t_k)\in T^i_f\\ &\forall l\in\mathbb{N}_{[0,N-1]}:\notag\\ &z^i(t_k+l+1|t_k)=A^iz^i(t_k+l|t_k)+B^iu^i(t_k+l|t_k)\\ &z^i(t_k+l|t_k)\in \mathcal{Z}_l^i\\ &u^i(t_k+l|t_k)\in \mathcal{U}^i \end{align} \end{subequations} \indent The nominal predictive state constraint set is defined as $\mathcal{Z}_l^i\triangleq\mathcal{X}^i\ominus\mathcal{R}_l^i,l\in \mathbb{N}_{[1,N-1]}$, with $\mathcal{R}_l^i\triangleq\oplus_{j=0}^{l-1}{A^i}^j\mathcal{W}^i$. $Q^i>0,R^i>0$ and $P^i>0$ are the weighting matrices of the cost function.
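The error recursion above implies the bound $\parallel e^i(k+l|k)\parallel \le \sum_{j=0}^{l-1}\parallel A^i\parallel^j\,\bar{w^i}$, since each disturbance term is amplified by at most $\parallel A^i\parallel^j$. A quick numerical check of this bound, with an arbitrary example system rather than one from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.005], [0.1]])
wbar = 0.1          # disturbance bound ||w|| <= wbar
N = 10
normA = np.linalg.norm(A, 2)          # induced 2-norm of A

x = np.zeros(2)     # actual state x(k+l|k)
z = np.zeros(2)     # nominal state z(k+l|k); e(k|k) = 0 initially
u = rng.normal(size=(N, 1))           # any input sequence; it cancels in the error

for l in range(1, N + 1):
    w = rng.uniform(-1.0, 1.0, size=2)
    w *= wbar * rng.uniform() / max(np.linalg.norm(w), 1e-12)  # enforce ||w|| <= wbar
    x = A @ x + B @ u[l - 1] + w      # perturbed dynamics (1)
    z = A @ z + B @ u[l - 1]          # nominal dynamics (4)
    # e(k+l|k) = sum_{j=0}^{l-1} A^j w(k+l-1-j), hence the bound below
    bound = wbar * sum(normA ** j for j in range(l))
    assert np.linalg.norm(x - z) <= bound + 1e-12
```

This is the quantity that the sets $\mathcal{R}_l^i$ outer-bound, which is why tightening $\mathcal{X}^i$ by $\mathcal{R}_l^i$ keeps the true state feasible.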
Regarding the terminal set $\mathcal{X}^i_\varepsilon$, we first construct a set $\mathcal{X}^i_r \triangleq \{z^i \in \mathbb{R}^{n_i}| \parallel z^i \parallel_{P^i} \le r^i\}$ for the nominal system $(4)$, such that the set $\mathcal{X}^i_r$ is the maximal Robust Control Invariant Set (RCIS) under the local state feedback control law $u^i(z^i) = K^iz^i$, which must also satisfy the following basic constraints: \begin{align} &\forall z^i \in \mathcal{X}^i_r, K^iz^i \in \mathcal{U}^i\\ &\mathcal{X}^i_r \subset \mathcal{X}^i \end{align} \indent Here $K^i$ is chosen as the unconstrained LQ optimal feedback control law for the nominal system $(4)$, while $P^i$ is the solution of the Lyapunov equation \begin{align} (A^i+B^iK^i)^T P^i (A^i+B^iK^i) + Q^i + {K^i}^TR^iK^i = P^i \end{align} We then define the terminal set as $\mathcal{X}^i_\varepsilon \triangleq \{z^i \in \mathbb{R}^{n_i}| \parallel z^i \parallel_{P^i} \le \varepsilon^i\}$ with $\sqrt{1-\frac{\lambda_{min}(Q^i)}{\lambda_{max}(P^i)}} r^i \le \varepsilon^i \le r^i$. \quad We will use this condition to guarantee recursive feasibility and stability under a self-triggered mechanism.
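For a stable closed loop $A^i+B^iK^i$, the Lyapunov equation above can be solved by summing the convergent series $P^i=\sum_{j\ge 0}\big((A^i+B^iK^i)^T\big)^j\,(Q^i+{K^i}^TR^iK^i)\,(A^i+B^iK^i)^j$. A sketch with an illustrative system and a hand-picked stabilizing gain (not an actual LQ design):

```python
import numpy as np

# Illustrative system and a hand-picked stabilizing gain.
A = np.array([[0.9, 0.2], [0.0, 1.05]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.5]])
K = np.array([[-0.5, -0.75]])

Ak = A + B @ K
assert np.max(np.abs(np.linalg.eigvals(Ak))) < 1.0   # closed loop is stable

# Solve Ak^T P Ak + Qbar = P with Qbar = Q + K^T R K by summing the
# convergent series P = sum_j (Ak^T)^j Qbar Ak^j.
Qbar = Q + K.T @ R @ K
P = np.zeros((2, 2))
term = Qbar.copy()
for _ in range(10000):
    P += term
    term = Ak.T @ term @ Ak
    if np.linalg.norm(term) < 1e-14:
        break
```

In practice a dedicated discrete Lyapunov solver would be used instead of the series; the series form just makes the existence and positive definiteness of $P^i$ transparent.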
\begin{lemma}\citep{DAI20191446} The nominal predictive state difference between $t_k$ and $t_{k+1} = t_k + M_k^i$ is norm-bounded by \begin{align} ||z^i(t_{k+1}+l|t_{k+1})-z^i(t_{k+1}+l|t_k)||_\phi &= \parallel{A^i}^le^i(t_k+M_k^i|t_k)\parallel_\phi\notag \\ &\le \begin{cases} \mathcal{B}(\centerdot)||A^i||^l(1-||A^i||^{M_k^i}), & \text{if } ||A^i||\neq 1\\ \mathcal{B}(\centerdot)M_k^i, & \text{if } ||A^i||= 1 \end{cases} \end{align} where $\phi\in\mathbb{S}_{++}^{n}$ and $\mathcal{B}(\centerdot):\mathbb{S}_{++}^{n}\to\mathbb{R}$ is defined as \begin{equation} \mathcal{B}(\centerdot)\triangleq \begin{cases} \sqrt{\lambda_{max}(\centerdot)}\frac{\bar{w^i}}{1-||A^i||},& \text{if } ||A^i||\neq 1\\ \sqrt{\lambda_{max}(\centerdot)}\bar{w^i}, & \text{if } ||A^i||= 1 \end{cases} \end{equation} \end{lemma} \begin{lemma}\citep{DAI20191446} For any sampling instant $t_k$, $k\in \mathbb{N}$, let $u^*(t_k)$ be the solution of $\mathbb{P}(x^i(t_k))$. If the first $M_k^i$ steps are applied to $(1)$ in a strictly open-loop fashion, then the difference between $J^*(x^i(t_k),u^i(t_k))$ and $J^*(x^i(t_{k+1}),u^i(t_{k+1}))$ is bounded by \begin{align} &J^*(x^i(t_{k+1}),u^i(t_{k+1})) - J^*(x^i(t_k),u^i(t_k))\le g^i(M^i_k,x^i(t_k),u^{i,*}(t_k),\bar{w^i}) \\ \text{where} &\notag\\ &g^i(M^i_k,x^i(t_k),u^*(t_k),\bar{w^i}) \triangleq g_0^i(M^i_k,x^i(t_k),u^*(t_k),\bar{w^i}) -\notag\\ & \sum_{l=0}^{M_k^i-1}[||z^*(t_k+l|t_k)||_{Q^i}^2 + ||u^*(t_k+l|t_k)||_{R^i}^2]. \end{align} \end{lemma} For the discrete system, we propose the following self-triggering mechanism: \begin{subequations} \begin{align} M^i_k \triangleq \arg\min_{M_k\in \mathbb{N}_{[1,N]}}g^i(M_k,x^i(t_k),u^*(t_k),\bar{w^i})\\ \text{subject to}\quad g^i(M_k,x^i(t_k),u^*(t_k),\bar{w^i}) < 0 \end{align} \end{subequations} \indent If the optimization problem is infeasible, we set $M^i_k = 1$.
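The triggering rule above can be sketched in a few lines. In this sketch the cost-difference bound $g$ is a hypothetical stand-in (in the paper, $g^i$ depends on the OCP solution and the disturbance bound); only the selection logic is illustrated:

```python
def select_sampling_step(g, N):
    """Self-triggering rule, sketched: among steps M = 1..N with
    g(M) < 0 (guaranteed cost decrease), pick the minimizer of g;
    if no step qualifies, fall back to M = 1."""
    feasible = [M for M in range(1, N + 1) if g(M) < 0]
    if not feasible:
        return 1
    return min(feasible, key=g)

# Hypothetical cost-difference bound g(M), for illustration only.
g = lambda M: (M - 3) ** 2 - 4        # negative for M in {2, 3, 4}
assert select_sampling_step(g, 5) == 3          # g attains its minimum at M = 3
assert select_sampling_step(lambda M: 1.0, 5) == 1   # infeasible case: M = 1
```

Choosing the minimizer of $g$ subject to $g<0$ keeps the open-loop interval as long as possible while still certifying a cost decrease.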
\section{RDMPC scheme for the coupled constraints system} Our control goal is to design a robust self-triggered DMPC algorithm that stabilizes the whole system at a fast convergence rate and satisfies constraints (2) and (3), while simultaneously reducing the computational burden. \subsection{Constraint tightening} Now consider the coupled constraints $(3)$. Since the nominal system $(5)$ carries a tube error $e$ relative to the actual system $(1)$, the constraints must be tightened to preserve feasibility. Specifically, the tightened versions of $(3)$ are \begin{subequations} \begin{align} \sum_{i=1}^{M}(\Psi^i_xz^i(t_k+l|t_k)+\Psi^i_uu^i(t_k+l|t_k))\leq(1- \epsilon(l))\textbf{1}_p, \quad &\forall l \in \mathbb{N}_{[0,N-1]}\\ \sum_{i=1}^{M}(\Psi^i_Nz^i(t_k+N|t_k))\leq(1- \epsilon(N))\textbf{1}_p,\quad & \forall z^i(t_k+N|t_k) \in T^i_f \end{align} \end{subequations} where the terminal coupled-constraint coefficient matrix is $\Psi^i_N = \Psi^i_x +\Psi^i_uK^i$, and suppose $I$ subsystems satisfy $||A^i||\neq 1$ and $J$ subsystems satisfy $||A^j||= 1$ in the discrete system. We use the shorthand \begin{align} &\epsilon(l) = \notag \sum_{i=1}^{I}\parallel\Psi_x^i\parallel\bar{w^i}\frac{1-\parallel A^i \parallel^l}{1-\parallel A^i \parallel} + \sum_{j=1}^{J}\parallel\Psi_x^j\parallel l\bar{w^j} \end{align} \begin{align} &\epsilon(N) = \notag \sum_{i=1}^{I}\parallel\Psi_N^i\parallel\bar{w^i}\frac{1-\parallel A^i \parallel^N}{1-\parallel A^i \parallel} + \sum_{j=1}^{J}\parallel\Psi_N^j\parallel N\bar{w^j}\notag \end{align} \indent Obviously, the tolerances must satisfy $0 < \epsilon(1) < \dots < \epsilon(N) <1$ to ensure a gradual tightening of the terminal constraints and $0 \in T^i_f$.
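The tolerances $\epsilon(l)$ are straightforward to compute. A minimal sketch (hypothetical norms $\|\Psi_x^i\|$, $\|A^i\|$ and disturbance bounds, not the paper's data) that also checks the required monotonicity $0<\epsilon(1)<\dots<\epsilon(N)<1$:

```python
# Sketch: compute the tightening tolerances eps(l) for a mix of
# subsystems with ||A^i|| != 1 and ||A^j|| == 1.
# Each subsystem is (psi, a, wbar) = (||Psi_x^i||, ||A^i||, disturbance bound);
# all values below are hypothetical.
def eps(l, subsystems):
    total = 0.0
    for psi, a, wbar in subsystems:
        if abs(a - 1.0) > 1e-12:
            total += psi * wbar * (1 - a ** l) / (1 - a)   # geometric sum
        else:
            total += psi * wbar * l                        # ||A|| == 1 case
    return total

subs = [(0.08, 0.9, 0.3), (0.08, 1.0, 0.3)]   # one a != 1, one a == 1
tol = [eps(l, subs) for l in range(1, 6)]
assert all(tol[i] < tol[i + 1] for i in range(len(tol) - 1))  # eps increasing
assert 0 < tol[0] and tol[-1] < 1                             # within (0, 1)
```

If $\epsilon(N)\ge 1$ for the given disturbance bounds, the tightened terminal constraint would be empty, which is exactly what the condition $0<\epsilon(1)<\dots<\epsilon(N)<1$ rules out.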
Correspondingly, the tightened RDMPC formulation is: \begin{align} \mathbb{P}_{\epsilon}(x^i(t_k)) \triangleq \min_{\textbf{u}^i}\{\sum_{i=1}^{M}J^i(x^i(t_k),\textbf{u}^i(t_k)):(6b)-(6f)\ \text{and}\ (15a)\} \end{align} where $\textbf{u}^i := \{u^i(0),u^i(1),\dots,u^i(N-1)\}$ and the terminal set is chosen as $T^i_f \triangleq \mathcal{X}^i_\varepsilon$, subject to \begin{align} \sqrt{1-\frac{\lambda_{min}(Q^i)}{\lambda_{max}(P^i)}} r^i \le \varepsilon^i \le r^i. \end{align} \indent The tightened RDMPC formulation can be represented as \begin{subequations} \begin{align} \mathbb{P}_\epsilon(x(t_k)): &\min_{u^i(k)}\sum_{i=1}^{M}J^i(x^i(t_k),\textbf{u}^i(t_k))\\ s.t.\ & (6b)-(6f), i \in [1,M] \notag\\ & \sum_{i=1}^{M}f^i(x^i(t_k),\textbf{u}^i(t_k)) \le b(\epsilon) \end{align} \end{subequations} where $f^i(x^i(t_k),\textbf{u}^i(t_k))$ is an appropriate function obtained by rewriting $(15a)$, and $b(\epsilon) = [\textbf{1}_p,(1- \epsilon(1))\textbf{1}_p,\dots,(1- \epsilon(N-1))\textbf{1}_p]^T$.\\ \indent The connection between the nominal system and the real system is given in the following Lemma 3.1, whose proof is located in Appendix A.
\begin{lemma} The coupling constraints (3) of the real system are satisfied whenever (15) is satisfied by the nominal system, provided the disturbance bound satisfies \begin{align} \begin{cases} \bar{w^i}\le (\frac{1}{M\parallel\Psi_N^i\parallel}-\frac{r^i}{\sqrt{\lambda_{max}(P^i)}})\frac{1-\parallel A^i\parallel}{1-\parallel A^i\parallel^N}, & \text{if } \parallel A^i\parallel\neq 1\notag\\ \bar{w^j}\le (\frac{1}{M\parallel\Psi_N^j\parallel}-\frac{r^j}{\sqrt{\lambda_{max}(P^j)}})\frac{1}{N}, &\text{if } \parallel A^j\parallel= 1 \end{cases} \end{align} \end{lemma} \indent At the end of this subsection, we state some general assumptions that ensure initial feasibility of the OCP and carry the global constraints over from the predictive system to the real system.\\ $(A1)$: $(A^i,B^i)$ is reachable and $x^i(t_k)$ is measurable for all $i \in \mathbb{Z}_M$.\\ $(A2)$: $\mathcal{W}^i$, $\mathcal{X}^i$ and $\mathcal{U}^i$ are polytopes containing the origin in their interiors for all $i \in \mathbb{Z}_M$.\\ $(A3)$: The undirected graph $G = (\mathcal{V},\mathcal{E})$ is fully connected.\\ \subsection{Distributed ADMM form of the OCP} Let $\lambda \in\mathbb{R}^{p\times N}$ be the dual variable associated with constraints $(17b)$; the Lagrangian of the OCP can be formulated as \begin{align} \mathcal{L}({\textbf{u}^i},\lambda) := \sum_{i=1}^{M}J^i(x^i,\textbf{u}^i) + \lambda^T(\sum_{i=1}^{M}f^i(x^i,{\textbf{u}^i})-b(\epsilon)) \end{align} The dual problem is \begin{align} \max_{\lambda\ge 0} \min_{\textbf{u}^i \in \mathcal{U}} \mathcal{L}({\textbf{u}^i},\lambda) = -\min_{\lambda\ge 0} \max_{\textbf{u}^i \in \mathcal{U}} \left(-\mathcal{L}({\textbf{u}^i},\lambda)\right) = -\min_{\lambda\ge 0}\sum_{i=1}^{M}g^i(\lambda) \end{align} where \begin{align} g^i(\lambda) := \max_{\textbf{u}^i \in \mathcal{U}}-J^i(x^i,\textbf{u}^i)-\lambda^T(f^i(x^i,{\textbf{u}^i})-\frac{1}{M}b(\epsilon)) \end{align} The dual problem is not distributed, since $\lambda$ is a variable shared by all the $g^i(\lambda)$.
According to assumption $(A3)$, the dual problem $(19)$ can be rewritten as the consensus problem \begin{align} \min_{\lambda^{i}\ge 0}\sum_{i=1}^{M}g^i(\lambda^i) \quad s.t.\ \lambda^i =\lambda^j, (i,j)\in \mathcal{E} \end{align} where $\lambda^i$ is the local copy of $\lambda$ in the $i$-th system, and the conditions $\lambda^i =\lambda^j$ enforce consensus of the dual variable across the whole system. It can be further rewritten, using a new set of reference variables, in the form \begin{align} \min_{\lambda^i\ge 0}\sum_{i=1}^{M}g^i(\lambda^i) \quad s.t.\ \sum_{i=1}^{M}E^i\lambda^i = c. \end{align} The augmented Lagrangian of $(23)$ is \begin{align} \mathcal{L}_{\rho}(\lambda^i,\omega) =&\sum_{i=1}^{M}g^i(\lambda^i) + \omega^T(\sum_{i=1}^{M}E^i\lambda^i - c) + \frac{\rho}{2}\parallel \sum_{i=1}^{M}E^i\lambda^i - c \parallel^2_2. \end{align} \indent To solve problem $(22)$ via the augmented Lagrangian, the distributed ADMM iterations are given as follows: \begin{align} &(\lambda^1_{k+1},\dots,\lambda^M_{k+1})=\arg\min_{\lambda^i\ge 0}\mathcal{L}_{\rho}(\lambda^1,\dots,\lambda^M,\omega_k),\\ &\textbf{u}^i_{k+1}=\arg\min_{{\textbf{u}^i \in \mathcal{U}^i}}\mathcal{L}({\textbf{u}^i},\lambda^i_{k+1})\\ &\omega_{k+1}=\omega_k-\rho(\sum_{i=1}^{M}E^i\lambda^i_{k+1} - c) \end{align} \indent For every subproblem in $\lambda^i$, we introduce the proximal term $\frac{1}{2}\parallel \lambda^i - \lambda^i_k \parallel_s^2$ and a relaxation factor $\gamma$ to reduce the iteration burden.
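To illustrate the consensus-ADMM pattern behind these updates, the following minimal sketch solves a toy consensus problem (not the paper's dual problem: the quadratic local objectives, penalty $\rho$, and iteration count are all hypothetical): minimize $\sum_i \tfrac12(\lambda^i-a_i)^2$ subject to $\lambda^1=\dots=\lambda^M$, whose solution is the average of the $a_i$.

```python
# Toy consensus ADMM in scaled form (illustrative only):
# minimize sum_i 0.5*(lam_i - a_i)^2  s.t.  lam_1 = ... = lam_M.
# z plays the role of the consensus/reference variable, u_i the scaled duals.
def consensus_admm(a, rho=1.0, iters=200):
    M = len(a)
    z = 0.0                  # consensus variable
    u = [0.0] * M            # scaled dual variables (role of omega)
    lam = [0.0] * M          # local copies lambda^i
    for _ in range(M * 0 + iters):
        # local update: argmin_l 0.5*(l - a_i)^2 + rho/2*(l - z + u_i)^2
        lam = [(a[i] + rho * (z - u[i])) / (1 + rho) for i in range(M)]
        z = sum(lam[i] + u[i] for i in range(M)) / M   # consensus update
        u = [u[i] + lam[i] - z for i in range(M)]      # dual ascent step
    return z

a = [1.0, 2.0, 6.0, 7.0]
assert abs(consensus_admm(a) - sum(a) / len(a)) < 1e-6   # converges to mean
```

Each local update uses only the agent's own data plus the shared consensus variable, which is the property that makes the scheme distributable over the graph $\mathcal{E}$.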
Then the updating steps for $\lambda^i_{k+1}$ and $\omega_{k+1}$ become \begin{align} &\lambda^i_{k+1}=\arg\min_{\lambda^i\ge 0}\mathcal{L}_{\rho}(\lambda^1_k,\dots,\lambda^i,\dots,\lambda^M_k,\omega_k)+\frac{1}{2}\parallel \lambda^i - \lambda^i_k \parallel_s^2,\\ &\omega_{k+1}=\omega_k-\rho\gamma(\sum_{i=1}^{M}E^i\lambda^i_{k+1} - c) \end{align} \indent We note that $\textbf{u}^i_{t_k} = \{u^i_{k}(0),u^i_{k}(1),\dots,u^i_{k}(N-1)\}$ is the optimal solution of $\mathbb{P}_\epsilon(x(t_k))$, and define the applied optimal control sequence as $\textbf{u}^{i,*}(t_k) \triangleq \{ u^i_{k}(0),u^i_{k}(1),\dots,u^i_{k}(M_k) \} $, where $M_k$ is obtained from Algorithm 2. The closed-loop system under the robust self-triggered DMPC, before the next sampling instant, is \begin{align} x^i(t_k+l+1) = A^ix^i(t_k+l) + B^iu^{i,*}(t_k+l) + w^i(t_k + l) \end{align} \subsection{Self-triggered DMPC algorithm} The overall procedure of the distributed ADMM algorithm at time $t_k$ is summarized in Algorithm 1. \begin{algorithm} \caption{Consensus ADMM algorithm} \hspace*{0.02in} {\bf Input:} measured system states $x^i$, $i\in\mathbb{Z}_M$\\ \hspace*{0.02in} {\bf Output:} $\textbf{u}^{i,*}_{k},i\in\mathbb{Z}_M$\\ Initialization: choose $\rho > 0$; set $k=0$, $\lambda_0^i=0$, $\omega_0 = 0$ for all $i\in\mathbb{Z}_M,j\in N_i$.\\ \textbf{repeat}\\ \textbf{for all} $i\in\mathbb{Z}_M$ (in parallel) \textbf{do}\\ obtain $\lambda^i_{k+1}$ from (28);\\ obtain $\textbf{u}^i_{k+1}$ from (26);\\ obtain $\omega_{k+1}$ from (29);\\ \textbf{end for} $k \leftarrow k+1$ \textbf{until} the stopping criterion is satisfied \end{algorithm}\\ \indent As shown in \citep{DAI20191446}, a self-triggered mechanism can greatly reduce the computational burden, so we use it in the overall robust self-triggered DMPC scheme at time $t_k$. The framework is shown in Algorithm 2.
\begin{algorithm} \caption{Robust self-triggered DMPC } 1: Every subsystem $i$ measures its own state $x^i(t_k)$;\\ \textbf{IF} $x^i(t_k) \in \mathcal{X}^i_{\varepsilon^i}$, subsystem $i$ obtains the local feedback control law $u^i(t_k) = K^ix^i(t_k)$ and applies it until the maximum iteration step.\\ \textbf{ELSE}\\ 2: The remaining subsystems $i$ call Algorithm 1 with $x^i(t_k)$ and get $\textbf{u}^{i,*}$;\\ 3: Subsystem $i$ solves optimization problem (14a) to get $M^i_k $;\\ 4: Choose $M_k = \min\{M_k^i\}$ as the next sampling step;\\ 5: Subsystem $i$ obtains $\textbf{u}^{i,*}(x(t_k))$ and applies it to every subsystem $i$;\\ 6: Let $t_k = t_k+M_k$ and go back to step 1. \end{algorithm} \section{Main properties of the RDMPC scheme} This section presents the main properties of the RDMPC scheme in Algorithm 2, namely recursive feasibility, satisfaction of both the local state and input constraints and the global coupled constraints, and closed-loop stability. The following lemma ensures that if the OCP $(18)$ is feasible at some sampling time, then it remains feasible at the next sampling time.
\begin{lemma}[Recursive feasibility of the nominal predictive system] Assume that $\bm{u}(t_k)=[\bm{u}^1(t_k), \bm{u}^2(t_k), \dots, \bm{u}^M(t_k)]$, where $\bm{u}^i(t_k) = [u^i(t_k|t_k), u^i(t_k+1|t_k), \dots, u^i(t_k+N-1|t_k)]^T$, is a feasible solution of the OCP $\mathbb{P}_\epsilon(x(t_k))$ at time $t_k$. Then the OCP $\mathbb{P}_\epsilon(x(t_{k+1}))$ is feasible at time $t_{k+1}$, provided the disturbance bound satisfies\\ (1) the local condition \begin{align} \begin{cases} \bar{w^i}\le \frac{(r^i - \varepsilon^i)(1-||A^i||)}{\sqrt{\lambda_{max}(P^i)}(1-||A^i||^N)}, & \text{if } ||A^i|| \neq 1\notag\\ \bar{w^j}\le \frac{(r^j - \varepsilon^j)}{\sqrt{\lambda_{max}(P^j)}N}, &\text{if } ||A^j||= 1 \end{cases} \end{align} (2) the global condition \begin{align} \begin{cases} \bar{w^i}\le (\frac{1}{M\parallel\Psi_N^i\parallel}-\frac{r^i}{\sqrt{\lambda_{max}(P^i)}})\frac{1-\parallel A^i\parallel}{1-\parallel A^i\parallel^N}, & \text{if } \parallel A^i\parallel\neq 1\notag\\ \bar{w^j}\le (\frac{1}{M\parallel\Psi_N^j\parallel}-\frac{r^j}{\sqrt{\lambda_{max}(P^j)}})\frac{1}{N}, &\text{if } \parallel A^j\parallel= 1 \end{cases} \end{align}\\ \end{lemma} The proof of Lemma 4.1 is located in Appendix B. Through the recursive feasibility lemma for the nominal system, we can conclude the theorem on recursive feasibility and constraint satisfaction for the real system.
\begin{theorem}[Recursive feasibility and constraints satisfaction] If $\mathbb{P}_\epsilon(x(t_0))$ is feasible and the conditions in Lemma 4.1 are satisfied, then for the distributed system $(1)$ under Algorithm 2 it holds that \\ (1) $\mathbb{P}_\epsilon(x(t_k))$ is feasible for all $t_k$, $k \in \mathbb{N}$;\\ (2) $x^i(t)\in \mathcal{X}^i, u^i(t)\in \mathcal{U}^i$, $\sum_{i=1}^{M}(\Psi^i_xx^i(t)+\Psi^i_uu^i(t))\leq\textbf{1}_p$ for every realization $w^i(t)\in \mathcal{W}^i.$ \end{theorem} \begin{proof} By Lemma 4.1, feasibility of the initial optimization problem $\mathbb{P}_\epsilon(x(t_0))$ implies that $\mathbb{P}_\epsilon(x(t_k))$ is feasible for all $t_k$, $k \in \mathbb{N}$, which proves (1). \\ \text{\quad} To show satisfaction of the local constraints, the time horizon is divided into two parts, $k\in\mathbb{N}_{[0, t^i]}$ and $k\in\mathbb{N}_{[ t^i, \infty]}$, where $t^i$ denotes the first time the state of system $i$ enters $\mathcal{X}^i_{\varepsilon}$. For every sampling time $t_k$ before $t^i$, from (30) and (15), for $ l \in \mathbb{N}_{\le M_k}$, the closed-loop system satisfies \begin{align} x^i(t_k+l+1) &= A^ix^i(t_k+l) + B^iu^{i,*}(t_k+l) + w^i(t_k + l)\notag\\ &= A^i(z^{i,*}(t_k+l|t_k) + e^i(t_k+l|t_k)) + B^iu^{i,*}(t_k+l|t_k)+ w^i(t_k + l)\notag\\ &= z^{i,*}(t_k+l+1|t_k) + e^i(t_k+l+1|t_k)\notag\\ &\in \mathcal{Z}^i_{l+1} \oplus \mathcal{R}^i_{l+1} \subseteq \mathcal{X}^i\notag \end{align} \indent Further, according to constraints $(6f)$, $u^i(t_k+l+1) = u^{i,\star}(t_k+l+1) \in \mathcal{U}^i$.\\ \indent Now consider the sampling times after $t^i$, when the state of system $i$ has already entered $\mathcal{X}^i_{\varepsilon}$. The local control $u^i(t^i+l) = K^ix^i(t^i+l) \in \mathcal{U}^i, l \in \mathbb{N}$ is applied to the real system in a dual-mode fashion.
So the closed-loop system $i$ after $t^i$ is \begin{align} x^i(t^i+l+1) &= A^ix^i(t^i+l) + B^iK^ix^i(t^i+l) + w^i(t^i+l)\notag\\ &= (A^i + B^iK^i)x^i(t^i+l) + w^i(t^i+l)\notag \end{align} From the definition of the RCIS $\mathcal{X}^i_{\varepsilon}$, $x^i(t^i) \in \mathcal{X}^i_{\varepsilon}$ implies $(A^i + B^iK^i)x^i(t^i) \in \mathcal{X}^i_{\varepsilon}$. According to the relation between the disturbance and the RCIS, $ \sqrt{\lambda_{max}(P^i)}\bar{w^i} \le r^i - \varepsilon^i$, it holds from (8) that $ x^i(t^i+1) \in \mathcal{X}^i_{r} \subset \mathcal{X}^i$. Considering $x^i(t^i+l)\in \mathcal{X}^i_{r}$ and (B3), we have \begin{align} &\parallel(A^i + B^iK^i)x^i(t^i+l)\parallel_{P^i} \le \varepsilon^i\notag\\ &\parallel x^i(t^i+l+1)\parallel_{P^i} \le \parallel(A^i + B^iK^i)x^i(t^i+l)\parallel_{P^i} +\parallel w^i(t^i)\parallel_{P^i}\notag\\ &\le\varepsilon^i + r^i - \varepsilon^i=r^i\notag \end{align} \indent Then $x^i(t^i+l)\in\mathcal{X}^i_{r}\subset\mathcal{X}^i$ and $u^i(t^i+l) = K^ix^i(t^i+l) \in \mathcal{U}^i$ for $l \in \mathbb{N}$. \\ \indent We denote $t_{out}=\min\{t^i\}$ and $t_{in}=\max\{t^i\}$. For every sampling time $t_k$ before $t_{out}$, according to constraints $(15a)$ and Lemma 3.1, we have \begin{align} \sum_{i=1}^{M}(\Psi^i_xx^i(t_k)+\Psi^i_uu^i(t_k))\leq\textbf{1}_p\notag \end{align} \indent For every sampling time $t_k+l$ between $t_{out}$ and $t_{in}$, assume $L$ subsystems have already entered their terminal sets.
Since we take $u^i(t_k+l)=u^i(t_k+l|t_k)$ and $u^i(t_k+l|t_k)=K^ix^i(t_k+l|t_k)$ when $x^i(t_k+l|t_k)\in\mathcal{X}^i_{r}$, according to constraints $(15a)$ and Lemma 3.1 we have \begin{align} &\sum_{i=1}^{M-L}(\Psi^i_xx^i(t_k+l)+\Psi^i_uu^i(t_k+l))+ \sum_{i=1}^{L}(\Psi^i_xx^i(t_k+l)+\Psi^i_uK^ix^i(t_k+l))\notag\\ &=\sum_{i=1}^{M-L}(\Psi^i_xx^i(t_k+l|t_k)+\Psi^i_uu^i(t_k+l|t_k))+ \sum_{i=1}^{L}(\Psi^i_xx^i(t_k+l|t_k)+\Psi^i_uK^ix^i(t_k+l|t_k))\notag\\ &=\sum_{i=1}^{M-L}(\Psi^i_xx^i(t_k+l|t_k)+\Psi^i_uu^i(t_k+l|t_k))+ \sum_{i=1}^{L}(\Psi^i_xx^i(t_k+l|t_k)+\Psi^i_uu^i(t_k+l|t_k))\notag\\ &=\sum_{i=1}^{M}(\Psi^i_xx^i(t_k+l|t_k)+\Psi^i_uu^i(t_k+l|t_k))\le\textbf{1}_p\notag \end{align} \indent For every sampling time $t_k$ after $t_{in}$, according to the last conclusion of Lemma 3.1, we have \begin{align} \sum_{i=1}^{M}\Psi^i_Nx^i(t_{k}) \le (1- \epsilon(N)) \textbf{1}_p \le\textbf{1}_p \notag \end{align} Hence (2) is proved for every case of the sampling time. \end{proof} In the following part we turn our attention to the stability of the whole distributed system. First we recall a standard definition of stability for discrete-time systems. \begin{definition} \citep{Sontag199520} System $x(k+1)=f(x(k),u(k))$ is (globally) input-to-state stable (ISS) if there exist a $\mathcal{KL}$-function $\beta : \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ and a $\mathcal{K}$-function $\gamma$ such that, for each input $u \in l_{\infty}^m $ and each $\xi \in \mathbb{R}^n $, it holds that \begin{align} |x(k,\xi,u)| \le \beta(|\xi|,k) + \gamma(\parallel u\parallel_{\infty}) \end{align} for each $ k \in \mathbb{Z}_{\ge 0}$. \end{definition} For closed-loop perturbed dynamics, we introduce a standard lemma that simplifies the proof of ISS.
\begin{lemma} \citep{JIANG2001857} A system of the form $x(k+1)=f(x(k),w(k))$ is input-to-state stable (ISS) if and only if there exists a continuous ISS-Lyapunov function $V(x(k))$ such that, for $\mathcal{K}_\infty$ functions $\rho_1(\centerdot),\rho_2(\centerdot)$, and $\rho_3(\centerdot)$, and a $\mathcal{K}$ function $\alpha(\centerdot)$, it satisfies \begin{align} &\rho_1(\parallel x(k)\parallel) \le V(x(k)) \le \rho_2(\parallel x(k)\parallel) \notag\\ &V(x(k+1)) - V(x(k)) \le \alpha(\bar{w}) - \rho_3(\parallel x(k)\parallel) \notag \end{align} \end{lemma} We then state the input-to-state stability theorem for the RDMPC scheme. \begin{theorem} Given feasibility of $\mathbb{P}_\epsilon(x(t_0))$ and the conditions of Lemma 4, the closed-loop system in (30) is ISS. \end{theorem} \begin{proof} The ISS-Lyapunov function can be chosen as $V(\textbf{x}(t_k)) \triangleq J^*(\textbf{x}(t_k),\textbf{u}^*(t_k))$. Clearly $J^*(\textbf{x}(t_k),\textbf{u}^*(t_k)) = \sum_{i=1}^{M}J^*(x^i(t_{k}),u^i(t_k))$, where $\textbf{x}(t_k) = [x^1(t_{k}),x^2(t_{k}),\dots,x^M(t_{k})]$ and $\textbf{u}^*(t_k) = [u^{1,*}(t_{k}),u^{2,*}(t_{k}),\dots,u^{M,*}(t_{k})]$ is bounded by the local constraints (2), can be bounded above and below by $\mathcal{K}_\infty$ functions of $\parallel\textbf{x}(t_k)\parallel$, so the first ISS condition holds.
Then, from Lemma 2, the difference between $V(\textbf{x}(t_k))$ and $V(\textbf{x}(t_{k+1}))$ is bounded by \begin{align} V(\textbf{x}(t_{k+1})) - V(\textbf{x}(t_k))&= \sum_{i=1}^{M}\{J^*(x^i(t_{k+1}),u^i(t_{k+1})) - J^*(x^i(t_k),u^i(t_k))\}\notag\\ &\le \sum_{i=1}^{M}\{g^i(M^i_k,x^i(t_k),u^{i,*}(t_k),\bar{w^i})\}\notag\\ &\le \sum_{i=1}^{M}\{g_0^i(M^i_k,x^i(t_k),u^*(t_k),\bar{w^i}) - [||x^i(t_k)||_{Q^i}^2 + ||u^{i,*}(t_k)||_{R^i}^2]\}\notag \\ &\le \sum_{i=1}^{M}\{\beta(\bar{w^i}) - ||x^i(t_k)||_{Q^i}^2\}\notag\\ &\le \alpha(\bar{\textbf{w}}) - \sum_{i=1}^{M}||x^i(t_k)||_{Q^i}^2\notag \end{align} where the disturbance bound of the real system is defined as $\bar{\textbf{w}} = [\bar{w^1},\bar{w^2},\dots,\bar{w^M}]$, $\alpha(\bar{\textbf{w}}) \triangleq \sum_{i=1}^{M}\beta(\bar{w^i})$ is a $\mathcal{K}$ function bounding $\sum_{i=1}^{M}g^i(M^i_k,x^i(t_k),u^{i,*}(t_k),\bar{w^i})$ from above, and $\sum_{i=1}^{M}||x^i(t_k)||_{Q^i}^2$ is a $\mathcal{K}_\infty$ function of $\textbf{x}(t_k)$. \end{proof} \section{Numerical simulation} The numerical example is a 4-agent system with identical system matrices $A^{1,2,3,4} = [1.1, 0.12;0.35 ,0.0075]$, $B^{1,2,3,4} = [1.5;0.5]$. The local constraints in all subsystems are $\mathcal{X}^i = \{x^i : |x^{i,1}|\le20 , |x^{i,2}| \le 5 \}$ and $\mathcal{U}^i = \{u^i : |u^i|\le 2\}$, while the global constraint is $\parallel \sum_{i=1}^4\Psi^i_xx^i+0.01u^1 + 0.02u^2 + 0.03u^3 + 0.04u^4\parallel \le 10\textbf{1}_p$, where $\Psi^i_x=[0.08,0.02]$ and $p=1$. The bounded disturbance sets satisfy the condition of Lemma 3; we set $\mathcal{W}^i = [-0.3,0.3] \times [-0.3,0.3]$. Setting the weight matrices $Q^i = [1,0;0,1]$, $R^i = 0.1$, $i =1,2,3,4$, we obtain from the LQ feedback control law and the Riccati equation that $P^{1,2,3,4} = [1.0516, 0.0057;0.0057, 1.0015]$, $K^{1,2,3,4} = [-0.7033,-0.0710]$, and the initial states are $x^1(0) = [-19;-4], x^2(0) = [-18;-3], x^3(0) = [-10;4], x^4(0) = [-18;3]$.
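With the example data above, the pair $(P^i, K^i)$ can be recovered by iterating the discrete Riccati recursion, whose fixed point also satisfies the Lyapunov equation (8). The sketch below (pure Python, ad-hoc iteration count; last digits may differ from the reported values) checks that the resulting closed loop $A^i+B^iK^i$ is stable:

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def spectral_radius_2x2(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(abs(det))        # complex pair: |eig|^2 = det

A = [[1.1, 0.12], [0.35, 0.0075]]     # example data from the paper
B = [[1.5], [0.5]]
Q = [[1.0, 0.0], [0.0, 1.0]]
R = 0.1

P = [row[:] for row in Q]
for _ in range(500):                  # ad-hoc iteration count
    BtP = mat_mul(transpose(B), P)                 # 1x2
    s = R + mat_mul(BtP, B)[0][0]                  # scalar R + B'PB
    BtPA = mat_mul(BtP, A)                         # 1x2
    K = [[-BtPA[0][0] / s, -BtPA[0][1] / s]]       # u = Kx
    ABK = mat_add(A, mat_mul(B, K))                # closed loop A + BK
    KtRK = [[K[0][i] * R * K[0][j] for j in range(2)] for i in range(2)]
    # P <- Q + K'RK + (A+BK)' P (A+BK), the Lyapunov form of the recursion
    P = mat_add(Q, KtRK, mat_mul(transpose(ABK), mat_mul(P, ABK)))

assert spectral_radius_2x2(ABK) < 1               # closed loop is stable
assert P[0][0] >= 1.0 and P[1][1] >= 1.0          # P dominates Q on the diagonal
```

At the fixed point, the last update line is exactly the Lyapunov equation (8), so the converged $P$ is the terminal weight used in the cost.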
The prediction horizon is chosen as $N =5$ and the simulation length is $T_{run}=30$ steps. \begin{figure}[htbp] \centering{\includegraphics[width=3.5in,height=2.5in]{function.png}} \caption{Cost function of overall system under algorithm 2} \label{fig} \end{figure} \begin{figure}[htbp] \centering{\includegraphics[width=3.5in,height=2.5in]{state.png}} \caption{System trajectories under algorithm 2} \label{fig} \end{figure} \begin{figure}[htbp] \centering{\includegraphics[width=3.5in,height=2.5in]{control.png}} \caption{Control inputs trajectories under algorithm 2} \label{fig} \end{figure} \begin{figure}[htbp] \centering{\includegraphics[width=3.5in,height=2.5in]{constraints.png}} \caption{Global constraints of overall system} \label{fig} \end{figure} \indent Fig.~1 shows the sum of the cost functions of all subsystems; the objective function in $(6a)$ converges to a stable value within the algorithm tolerance.\\ \indent Fig.~2 shows the state trajectories of all subsystems under Algorithm 2 with different sequences of disturbances. The system states converge to a neighborhood of the origin, which reflects the stability performance of the proposed algorithm.\\ \indent Fig.~3 displays the control input trajectories under Algorithm 2; the sampling instants are indicated by scarlet circles, while the iteration steps are denoted by blue cross marks. The computation times of the algorithm without and with the self-triggering mechanism are $T_{run}^{a1}=1.8190s$ and $T_{run}^{a2}=1.1097s$, respectively; the difference shows that the latter clearly reduces the computational burden. \\ \indent The evolution of the global constraint function $f=\sum_{i=1}^{M}f^i$ is shown in Fig.~4, in which the blue dashed line represents the maximum value of the global constraint.
Together with Fig.~2 and Fig.~3, these results illustrate the satisfaction of the local state and input constraints for every subsystem, as well as the global constraint on the overall system. \section{Conclusions} A self-triggered DMPC scheme has been proposed for disturbed linear systems with local and global constraints. The proposed scheme uses the tube method to handle the bounded disturbance in each subsystem, and it tightens the global constraints based on the disturbance bound so that a parallel distributed ADMM method can be applied. For the closed-loop system under Algorithm 2, recursive feasibility of the OCP and ISS stability of the overall system have been established, and numerical examples demonstrate the good performance of the proposed scheme. \section{References} \bibliographystyle{apacite}
\section{Class BDI} \begin{figure}[b] \includegraphics[width=\linewidth]{FigS1.jpg} \caption{ Schematic description of the manifold enclosing a node on which the primary and the secondary topological charges are defined. A nodal structure (an enclosing manifold) is depicted in red (green). } \label{fig:nodeshp&TC} \end{figure} \subsection{Topological charges of class BDI} Let us first consider the nodal surface which generally has a cylindrical shape shown in Fig.~\ref{fig:nodeshp&TC}. The nodal surface has two types of topological charges: a 0D charge and a 1D charge. The 0D charge can be defined as follows. Due to the symmetry $\mathcal{C}=\sigma_z$, the Hamiltonian $H(\mathbf{k})$, which is a $2N\times 2N$ matrix, takes a block off-diagonal form as \begin{align} H(\mathbf{k})=\begin{pmatrix} 0&A(\mathbf{k})\\A^{T}(\mathbf{k})&0 \end{pmatrix}, \end{align} where $A(\mathbf{k})$ denotes an $N\times N$ real matrix. The 0D charge $c_{\mathrm{BDI}}(S^{0})$ is defined using $A(\mathbf{k})$ as \begin{align} c_{\mathrm{BDI}}(S^{0})=\mathrm{sign}\{\mathrm{det}A(\mathbf{k}_{\text{in}})\cdot \mathrm{det}A(\mathbf{k}_{\text{out}})\}, \label{0D_BDI} \end{align} where $S^{0}=\{\mathbf{k}_{\text{in}},\mathbf{k}_{\text{out}}\}$ and $\mathbf{k}_{\text{in}}$ ($\mathbf{k}_{\text{out}}$) indicates a momentum inside (outside) the nodal surface~\cite{bzduvsek2017doubly}. The definition of the 0D charge can be understood in terms of a band inversion process across the nodal surface. Suppose there are $N$ occupied bands and $N$ unoccupied bands, and the energies of unoccupied bands $\{E_{n\mathbf{k}}\}$~$(n=1,\cdots,N)$ are aligned as $0\leq E_{1\mathbf{k}} \leq \cdots \leq E_{N\mathbf{k}}$. 
Since $\{H(\mathbf{k}), \mathcal{C}\}=0$, for each occupied state $|u_{n\mathbf{k}}^{\text{occ}}\rangle$ with the energy $-E_{n\mathbf{k}}$, there is a relevant unoccupied state $|u_{n\mathbf{k}}^{\text{unocc}}\rangle\propto \mathcal{C} |u_{n\mathbf{k}}^{\text{occ}}\rangle$ with the energy $E_{n\mathbf{k}}$. Also $|u_{n\mathbf{k}}^{\text{occ}}\rangle$ and $|u_{n\mathbf{k}}^{\text{unocc}}\rangle$ can be chosen as \begin{gather} |u_{n\mathbf{k}}^{\text{occ}}\rangle ={1 \over \sqrt{2}} \begin{pmatrix} |u_{n\mathbf{k}}^{\uparrow}\rangle \\ |u_{n\mathbf{k}}^{\downarrow}\rangle \end{pmatrix},\; |u_{n\mathbf{k}}^{\text{unocc}}\rangle ={1 \over \sqrt{2}} \begin{pmatrix} |u_{n\mathbf{k}}^{\uparrow}\rangle \\ -|u_{n\mathbf{k}}^{\downarrow}\rangle \end{pmatrix}, \label{BDIstate} \end{gather} where $|u_{n\mathbf{k}}^{\uparrow}\rangle,~ |u_{n\mathbf{k}}^{\downarrow}\rangle(n=1,\cdots ,N)$ are $N$-dimensional vectors that satisfy $\langle u^{\uparrow}_{n\mathbf{k}}|u^{\uparrow}_{m\mathbf{k}} \rangle=\langle u^{\downarrow}_{n\mathbf{k}}|u^{\downarrow}_{m\mathbf{k}} \rangle=\delta_{nm}$. Using these $N$-dimensional vectors, $A(\mathbf{k})$ can be expressed as \begin{gather} A(\mathbf{k})=\sum_{n=1}^{N} E_{n\mathbf{k}}|u^{\uparrow}_{n\mathbf{k}}\rangle\langle u^{\downarrow}_{n\mathbf{k}}|. \label{A} \end{gather} Suppose that, at one side of the nodal surface, the highest occupied state $|u^{\mathrm{occ}}_{1\mathbf{k}}\rangle$ and the lowest unoccupied state $|u^{\mathrm{unocc}}_{1\mathbf{k}}\rangle$ are given by Eq.~(\ref{BDIstate}). 
If there is a band inversion across the nodal surface, $|u^{\mathrm{occ}}_{1\mathbf{k}}\rangle$ and $|u^{\mathrm{unocc}}_{1\mathbf{k}}\rangle$ at the other side of the nodal surface are given by \begin{gather} |u_{1\mathbf{k}}^{\text{occ}}\rangle ={1 \over \sqrt{2}} \begin{pmatrix} |u_{1\mathbf{k}}^{\uparrow}\rangle \\ -|u_{1\mathbf{k}}^{\downarrow}\rangle \end{pmatrix},\; |u_{1\mathbf{k}}^{\text{unocc}}\rangle ={1 \over \sqrt{2}} \begin{pmatrix} |u_{1\mathbf{k}}^{\uparrow}\rangle \\ |u_{1\mathbf{k}}^{\downarrow}\rangle \end{pmatrix}.\label{InvBDIstate} \end{gather} After the band inversion, $A(\mathbf{k})$ changes from Eq.~(\ref{A}) to \begin{gather} A(\mathbf{k})=-E_{1\mathbf{k}}|u^{\uparrow}_{1\mathbf{k}}\rangle\langle u^{\downarrow}_{1\mathbf{k}}|+\sum_{n=2}^{N} E_{n\mathbf{k}}|u^{\uparrow}_{n\mathbf{k}}\rangle\langle u^{\downarrow}_{n\mathbf{k}}|. \label{A'} \end{gather} Since each of $\left\{|u^{\uparrow}_{n\mathbf{k}}\rangle\right\}$ and $\left\{|u^{\downarrow}_{n\mathbf{k}}\rangle\right\}$ satisfies the orthonormality condition, $\mathrm{det}A(\mathbf{k})$ for Eq.~(\ref{A}),~(\ref{A'}) should be either $E_{1}\cdots E_{N}$ or $-E_{1}\cdots E_{N}$. The signs of the determinants are determined by relative orientations between the bases $\left\{|u^{\uparrow}_{n\mathbf{k}}\rangle\right\}$ and $\left\{|u^{\downarrow}_{n\mathbf{k}}\rangle\right\}$. For example, for Eq. (\ref{A}), the determinant of $A(\mathbf{k})$ is $E_{1\mathbf{k}}\cdots E_{N\mathbf{k}}$ when $\left\{|u^{\uparrow}_{n\mathbf{k}}\rangle\right\}$ and $\left\{|u^{\downarrow}_{n\mathbf{k}}\rangle\right\}$ have the same orientation. Regardless of the relative orientation between these bases, however, $\mathrm{det}A(\mathbf{k})$s for Eq.~(\ref{A}) and (\ref{A'}) have the opposite signs. Therefore $c_{\mathrm{BDI}}(S^{0})=-1$ when there is a band inversion across the nodal surface. 
In fact, the band inversion between the highest occupied band and the lowest unoccupied band is the only allowed change of the eigenstates across the nodal surface. At the nodal surface, both the energy of the highest occupied state and that of the lowest unoccupied state are zero. This means that the highest occupied state and the lowest unoccupied state can be discontinuous across the nodal surface. On the other hand, the other states change continuously across the nodal surface because their energies are generally non-degenerate at the nodal surface. Then the highest occupied and lowest unoccupied states at one side of the nodal surface are given by linear combinations of those at the other side of the nodal surface. However, not all linear combinations are possible, as the $\mathfrak{T}$ and $\mathcal{C}$ symmetries have to be satisfied. Considering these symmetries, we find that a band inversion is the only possible change of the eigenstates across the nodal surface. Now let us consider the 1D charge of the nodal surface. For this, we first consider the spectral flattening of the Hamiltonian by smoothly deforming the band structure so that all the energies $E_{1\mathbf{k}},\cdots,E_{N\mathbf{k}}$ become 1. After the flattening, $A(\mathbf{k})$ in Eq.~(\ref{A}) becomes an element of $\mathrm{O(N)}$. Then the 1D charge $c_{\mathrm{BDI}}(S^{1})$ can be defined as \begin{gather} c_{\mathrm{BDI}}(S^{1})=\left[A_{FB}:S^{1}\rightarrow\mathrm{O(N)}\right]. \end{gather} Here $A_{FB}$ is an off-diagonal block of the flattened Hamiltonian and $S^{1}$ is a circle encircling the nodal surface [see Fig.~\ref{fig:nodeshp&TC}]. $\left[A_{FB}:S^{1}\rightarrow\mathrm{O(N)}\right]$ denotes the homotopy equivalence class within the $N$-dimensional orthogonal group. In the case of a nodal line below the Fermi level, there are two different ways of describing its 1D charge. One is to take the representation $\mathfrak{T}=\mathcal{K}$ so that the eigenstates become real-valued.
For a nodal line formed between two occupied bands $|u_{n,\mathbf{k}}^{\mathrm{occ}}\rangle$ and $|u_{n+1,\mathbf{k}}^{\mathrm{occ}}\rangle$, its topological charge can be defined on a circle enclosing the nodal line, parametrized by an angle $\theta\in[-\pi,\pi]$. If we assume that the state $|u_{i,\mathbf{k}}^{\mathrm{occ}}\rangle$ $(i=n, n+1)$ changes continuously for $\theta\in(-\pi,\pi)$, the following relation should be satisfied at $\theta=\pm \pi$~\cite{ahn2018linking}, \begin{gather} |u_{i,\pi}^{\mathrm{occ}}\rangle=\pm|u_{i,-\pi}^{\mathrm{occ}}\rangle. \label{1stSW} \end{gather} When the state $|u_{i,\mathbf{k}}^{\mathrm{occ}}\rangle$ changes discontinuously (continuously) at $\theta=\pm \pi$, $|u_{i,\mathbf{k}}^{\mathrm{occ}}\rangle$ does (does not) undergo an orientation-reversal on $S^{1}$, which indicates the nontrivial (trivial) 1D topological charge of the nodal line~\cite{ahn2018linking}. The second way is to choose a smooth complex gauge and compute the winding number of the eigenstates. Consider an effective Hamiltonian of two occupied bands $|u^{\mathrm{occ}}_{n,\mathbf{k}}\rangle$ and $|u^{\mathrm{occ}}_{n+1,\mathbf{k}}\rangle$, \begin{gather} H_{\textrm{eff}}(\mathbf{k})=|u_{n,\mathbf{k}}^{\textrm{occ}}\rangle\langle u_{n,\mathbf{k}}^{\textrm{occ}}|-|u_{n+1,\mathbf{k}}^{\textrm{occ}}\rangle\langle u_{n+1,\mathbf{k}}^{\textrm{occ}}|. \end{gather} Since the eigenvalues of this Hamiltonian are $\pm 1$, there can be an effective chiral symmetry $\mathcal{C}_{\mathrm{eff}}$ so that $\left\{H_{\mathrm{eff}}(\mathbf{k}), \mathcal{C}_{\mathrm{eff}}\right\}=0$. One can show that the effective Hamiltonian of the $4$-band model has an effective chiral symmetry. Focusing on the $4$-band model, we can obtain the off-diagonal block $A_{\mathrm{eff}}(\mathbf{k})$ of the effective Hamiltonian after an appropriate basis transformation.
The 1D charge of the nodal line can be defined using $A_{\mathrm{eff}}(\mathbf{k})$ by \begin{align} \tilde{c}_{\textrm{1D}}(\tilde{S}^{1})&=\frac{i}{2\pi}\oint_{\tilde{S}^{1}} d\mathbf{k}\cdot \mathrm{tr}\left[(A_{\textrm{eff}}(\mathbf{k}))^{\dagger} \nabla A_{\textrm{eff}}(\mathbf{k})\right], \end{align} where $\tilde{S}^{1}$ is a circle encircling the nodal line between the two occupied bands $|u^{\mathrm{occ}}_{n,\mathbf{k}}\rangle$ and $|u^{\mathrm{occ}}_{n+1,\mathbf{k}}\rangle$. In Sec.~III.B, we show that $\tilde{c}_{\textrm{1D}}$ is quantized in the $4$-band model. \subsection{Linking structure of class BDI} \subsubsection{Case of 4 bands} Let us consider a 4-band model with energies $\pm E_{1}$, $\pm E_{2}$ ($0\leq E_{1}\leq E_{2}$). The corresponding $A(\mathbf{k})$ can be parametrized by two angles $\theta(\mathbf{k})$ and $\phi(\mathbf{k})$ as \begin{align} A_\pm(\mathbf{k})&=\frac{E_{2}\mp E_{1}}{2}\begin{pmatrix} \sin\theta(\mathbf{k}) & \cos\theta(\mathbf{k}) \\ \cos\theta(\mathbf{k}) & -\sin\theta(\mathbf{k}) \end{pmatrix}\nonumber\\ &+\frac{E_{2}\pm E_{1}}{2} \begin{pmatrix} \cos\phi(\mathbf{k}) & -\sin\phi(\mathbf{k}) \\ \sin\phi(\mathbf{k}) & \cos\phi(\mathbf{k}) \end{pmatrix}\label{Apm}, \end{align} where $\mathrm{det}\,A_\pm(\mathbf{k})=\pm E_1E_2$. We note that the two Hamiltonians described by $A_{+}(\mathbf{k})$ and $A_-(\mathbf{k})$, respectively, are related by a band inversion between $|u_{1\mathbf{k}}^{\text{occ}}\rangle$ and $|u_{1\mathbf{k}}^{\text{unocc}}\rangle$, and the corresponding band crossing points form a nodal surface. This is consistent with the fact that the 0D charge of the nodal surface is given by Eq.~(\ref{0D_BDI}). To determine the 1D charge of the nodal surface, we assume that the Hamiltonian outside (inside) the nodal surface is described by $A_-(\mathbf{k})$ $\left(A_+(\mathbf{k})\right)$. After flattening the Hamiltonian, $A_-(\mathbf{k})$ depends on $\theta(\mathbf{k})$ only.
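The parametrization in Eq.~(\ref{Apm}) can be checked numerically. In the sketch below (our illustration; the helper name \texttt{A\_pm} is ours), the determinant identity $\mathrm{det}\,A_\pm=\pm E_1E_2$ is verified for generic angles, and the flattened $A_-$ is confirmed to reduce to the $\theta$-dependent reflection alone:

```python
import numpy as np

def A_pm(theta, phi, E1, E2, sign):
    # Eq. (Apm): sign = +1 gives A_+, sign = -1 gives A_-
    refl = np.array([[np.sin(theta),  np.cos(theta)],
                     [np.cos(theta), -np.sin(theta)]])   # det = -1 part
    rot  = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])       # det = +1 part
    return 0.5*(E2 - sign*E1)*refl + 0.5*(E2 + sign*E1)*rot

E1, E2 = 0.3, 1.1
for theta in np.linspace(0.0, 2*np.pi, 7):
    for phi in np.linspace(0.0, 2*np.pi, 7):
        assert np.isclose(np.linalg.det(A_pm(theta, phi, E1, E2, +1)),  E1*E2)
        assert np.isclose(np.linalg.det(A_pm(theta, phi, E1, E2, -1)), -E1*E2)

# after spectral flattening (E1 = E2 = 1), A_- keeps only the theta-dependent
# reflection part: an O(2) element with det = -1
Af = A_pm(0.7, 0.2, 1.0, 1.0, -1)
assert np.allclose(Af @ Af.T, np.eye(2))
assert np.isclose(np.linalg.det(Af), -1.0)
```

The determinants are independent of $\theta$ and $\phi$, as required for the 0D charge.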
Then $c_{\textrm{BDI}}(S^{1})$ is given by \begin{gather} c_{\textrm{BDI}}(S^{1})=\frac{1}{2\pi}\oint_{S^{1}} d\mathbf{k}\cdot \nabla \theta(\mathbf{k}) \end{gather} where $S^{1}$ is a circle surrounding the nodal surface. Now let us show that the 1D charge $c_{\textrm{BDI}}(S^{1})$ of the nodal surface is identical to the 1D charge $\tilde{c}_{1\mathrm{D}}$ of a nodal line formed between occupied bands, which is inside the nodal surface. Since $A_+(\mathbf{k})$ is an off-diagonal block of the Hamiltonian defined inside the nodal surface, $\tilde{c}_{1\mathrm{D}}$ can be determined by $A_+(\mathbf{k})$ and the corresponding occupied states $|u_{1,2\mathbf{k}}^{\text{occ}}\rangle$. The winding number of the nodal line formed between occupied bands can be evaluated using an effective two-band Hamiltonian given by \begin{gather}\label{eq:Heff} H_{\textrm{eff}}(\mathbf{k})=|u_{1\mathbf{k}}^{\textrm{occ}}\rangle\langle u_{1\mathbf{k}}^{\textrm{occ}}|-|u_{2\mathbf{k}}^{\textrm{occ}}\rangle\langle u_{2\mathbf{k}}^{\textrm{occ}}|. \end{gather} Plugging the explicit form of $|u_{1,2\mathbf{k}}^{\text{occ}}\rangle$ into Eq.~(\ref{eq:Heff}), we find that $H_{\textrm{eff}}(\mathbf{k})$, expressed in terms of $\theta$ and $\phi$, has an effective chiral symmetry so that it can be transformed to a block off-diagonal form with the off-diagonal block $A_{\textrm{eff}}(\mathbf{k})$ given by \begin{gather} A_{\textrm{eff}}(\mathbf{k})=\frac{1}{2}i e^{-i\theta(\mathbf{k})} \begin{pmatrix} -e^{i\phi(\mathbf{k})} & 1\\1&-e^{-i\phi(\mathbf{k})}\label{Aeff} \end{pmatrix}. 
\end{gather} In terms of $A_{\textrm{eff}}(\mathbf{k})$, $\tilde{c}_{\textrm{1D}}$ is given by \begin{align} \tilde{c}_{\textrm{1D}}(\tilde{S}^{1})&=\frac{i}{2\pi}\oint_{\tilde{S}^{1}} d\mathbf{k}\cdot \mathrm{tr}\left[(A_{\textrm{eff}}(\mathbf{k}))^{\dagger} \nabla A_{\textrm{eff}}(\mathbf{k})\right]\nonumber\\ &=\frac{1}{2\pi}\oint_{\tilde{S}^{1}} d\mathbf{k}\cdot \nabla \theta(\mathbf{k}), \label{BDI4BNLch} \end{align} where $\tilde{S}^{1}$ is a circle surrounding the nodal line inside the nodal surface. Since $\theta(\mathbf{k})$ is continuously defined across the nodal surface, $\tilde{c}_{\textrm{1D}}(\tilde{S}^{1})$ and $c_{\mathrm{BDI}}(S^{1})$ are the same. To confirm that the doubly charged nature of the nodal surface originates from its linking structure with nodal lines between occupied bands, we have to show that a nontrivial $\tilde{c}_{\textrm{1D}}(\tilde{S}^{1})$ arises from the nodal line between the occupied bands. At each nodal line, $A_{+}(\mathbf{k})$ is $\theta$-independent because $E_{1}=E_{2}$. This means that, inside the nodal surface, $\theta$ can have non-trivial winding around the nodal line. On the other hand, $\theta$ cannot have non-trivial winding inside the nodal surface when there are no nodal lines, because the Hamiltonian then always has a well-defined $\theta$-dependent term inside the nodal surface. Therefore, $\tilde{c}_{\textrm{1D}}(\tilde{S}^{1})$ can be non-trivial only if $\tilde{S}^{1}$ surrounds a nodal line inside the nodal surface. \subsubsection{Cases of $2N$ bands ($N>2$)} Now we consider the general case of $2N$ bands with $N>2$. On both sides of the nodal surface, the possible forms of $A(\mathbf{k})$ are described by Eqs.~(\ref{A}) and~(\ref{A'}).
After flattening the energy spectrum, the corresponding off-diagonal blocks of a flat-band Hamiltonian with $2N$ bands are given by \begin{gather} A^{\pm}_{FB}(\mathbf{k})=\pm |u^{\uparrow}_{1\mathbf{k}}\rangle\langle u^{\downarrow}_{1\mathbf{k}}|+\sum_{n=2}^{N} |u^{\uparrow}_{n\mathbf{k}}\rangle\langle u^{\downarrow}_{n\mathbf{k}}|, \end{gather} where $A^{+}_{FB}(\mathbf{k})$ ($A^{-}_{FB}(\mathbf{k})$) corresponds to the Hamiltonian defined inside (outside) of the nodal surface. It is obvious that $A_{FB}(\mathbf{k})$ changes discontinuously across the nodal surface. To describe the topological charge, we additionally introduce $A'_{FB}(\mathbf{k})$ given by \begin{gather}\label{eq:A_prime} A'_{FB}(\mathbf{k})=-|u^{\uparrow}_{1\mathbf{k}}\rangle\langle u^{\downarrow}_{1\mathbf{k}}|+\sum_{n=2}^{N} |u^{\uparrow}_{n\mathbf{k}}\rangle\langle u^{\downarrow}_{n\mathbf{k}}|, \end{gather} which is defined inside the nodal surface. $A'_{FB}(\mathbf{k})$ and $A^{-}_{FB}(\mathbf{k})$ have the same form but are defined in different regions of the momentum space, i.e., inside and outside the nodal surface, respectively. To evaluate the 1D charge of the nodal surface, let us consider a circle $S^{1}$ surrounding it. The 1D charge is given by the homotopy equivalence class of $A^{-}_{FB}(\mathbf{k})$ defined on the circle $S^{1}$. As noted above, $A^{-}_{FB}(\mathbf{k})$ defined outside the nodal surface is continuously connected with $A'_{FB}$ defined inside the nodal surface. Hence, the 1D charge defined in terms of $A^{-}_{FB}(\mathbf{k})$ outside the nodal surface can be equivalently described by $A'_{FB}(\mathbf{k})$ defined inside the nodal surface as follows: \begin{gather} c_{\mathrm{BDI}}(S^{1})=\left[A'_{FB}:S'^{1}\rightarrow \mathrm{O(N)}\right], \end{gather} where $S'^{1}$ is a circle inside the nodal surface, which is obtained by deforming $S^{1}$ continuously. Now we ask whether $S'^{1}$ can be shrunk to a point while keeping $A'_{FB}$ well-defined on it. 
In general, such a smooth deformation is impossible when there is a nodal line inside $S'^{1}$ at the energy satisfying $-E_{1}=-E_{2}$. This is because $|u^{\mathrm{occ}}_{1\mathbf{k}}\rangle$ and $|u^{\mathrm{occ}}_{2\mathbf{k}}\rangle$ cannot be uniquely specified at the nodal line due to the degeneracy, so that $A'_{FB}$ cannot be defined there either [see Eq.~(\ref{eq:A_prime})]. Let us note that the presence of other nodal lines at the energy $-E_{n}=-E_{n+1}$ ($n>1$) does not affect $A'_{FB}(\mathbf{k})$. Let us shrink $S'^{1}$ encircling the nodal line at the energy satisfying $-E_{1}=-E_{2}$, and see how $A'_{FB}(\mathbf{k})$ changes on $S'^{1}$. There are two possible behaviors of $A'_{FB}(\mathbf{k})$ on $S'^{1}$: it either stays nearly constant or oscillates prominently along $S'^{1}$. In the former case, the homotopy equivalence class of $A'_{FB}$ defined on $S'^{1}$ is trivial, so that the 1D charge of the nodal surface is trivial. In the latter case, on the other hand, the homotopy equivalence class is non-trivial, and therefore the 1D charge of the nodal surface is non-trivial. In short, whether the 1D charge of the nodal surface is trivial or not can be determined from the behavior of $A'_{FB}(\mathbf{k})$ on a small circle encircling the nodal line at the energy satisfying $-E_{1}=-E_{2}$ inside the nodal surface. Interestingly, this information is sufficient to characterize the 1D charge of the nodal surface because the fundamental group of the orthogonal group is given by \begin{gather} \pi_{1}(\mathrm{O(N)})=\mathbb{Z}_{2}, \end{gather} when $N>2$. The off-diagonal block $A^{+}_{FB}(\mathbf{k})$ of the flattened Hamiltonian is well-defined inside the nodal surface. Therefore, $A^{+}_{FB}(\mathbf{k})$ should be nearly constant along $S'^{1}$ when $S'^{1}$ is sufficiently close to the nodal line.
As a result, one can consider the homotopy equivalence class of $A^{+}_{FB}(\mathbf{k})-A'_{FB}(\mathbf{k})$, instead of that of $A'_{FB}(\mathbf{k})$, to determine the 1D charge of the nodal surface. Since $A^{+}_{FB}(\mathbf{k})-A'_{FB}(\mathbf{k})$ is given by \begin{gather} A^{+}_{FB}(\mathbf{k})-A'_{FB}(\mathbf{k})=2|u^{\uparrow}_{1\mathbf{k}}\rangle\langle u^{\downarrow}_{1\mathbf{k}}|, \end{gather} the behavior of $|u^{\uparrow}_{1\mathbf{k}}\rangle$ and $|u^{\downarrow}_{1\mathbf{k}}\rangle$ along $S'^{1}$ determines the 1D charge of the nodal surface. If the circle $S'^{1}$ encircles a nodal line between the topmost and second topmost occupied bands, then $|u^{\mathrm{occ}}_{1\mathbf{k}}\rangle$ undergoes an orientation-reversal on $S'^{1}$. Therefore, $|u^{\uparrow}_{1\theta}\rangle$ and $|u^{\downarrow}_{1\theta}\rangle$ also undergo orientation-reversals [see Eq.~(\ref{BDIstate})]. Here $\theta$ denotes the angle parametrizing $S'^{1}$. From $\langle u^{\uparrow(\downarrow)}_{1\theta}|u^{\uparrow(\downarrow)}_{1\theta}\rangle=1$, it is easy to show that, for some $i_{0}$ and $j_{0}$, the $i_{0}$-th component of $|u^{\uparrow}_{1\pi}\rangle$ ($\left[|u_{1\pi}^{\uparrow}\rangle\right]_{i_0}$) and the $j_{0}$-th component of $|u^{\downarrow}_{1\pi}\rangle$ ($\left[|u_{1\pi}^{\downarrow}\rangle\right]_{j_0}$) satisfy \begin{gather} \left|\left[|u_{1\pi}^{\uparrow}\rangle\right]_{i_0}\right|,\left|\left[|u_{1\pi}^{\downarrow}\rangle\right]_{j_0}\right| \geq \frac{1}{\sqrt{N}}. \label{revcomp} \end{gather} Since $|u^{\uparrow}_{1\theta}\rangle$ and $|u^{\downarrow}_{1\theta}\rangle$ undergo orientation-reversals along $S'^{1}$, $\left[|u_{1\theta}^{\uparrow}\rangle\right]_{i_0}$ and $\left[|u_{1\theta}^{\downarrow}\rangle\right]_{j_0}$ cross 0 at some $\theta$. Now let us consider the $N\times N$ matrix $|u^{\uparrow}_{1\theta}\rangle\langle u^{\downarrow}_{1\theta}|$, which is proportional to $A^{+}_{FB}(\mathbf{k})-A'_{FB}(\mathbf{k})$.
From Eqs.~(\ref{1stSW}) and~(\ref{revcomp}), one can see that the $(i_{0},j_{0})$-component of $|u^{\uparrow}_{1\theta}\rangle\langle u^{\downarrow}_{1\theta}|$ oscillates between 0 and $1/N$ along $S'^{1}$. This corresponds to the case when the nodal surface carries a non-trivial 1D charge. \subsubsection{Double band inversion: topological phase transition between DCNSs and trivial NSs} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{FigS2.jpg} \caption{Schematic description of the double band inversion process and the corresponding nodal structures, based on the model (\ref{CTmodel}) with $m=1$. Blue surfaces are NSs at $E_F$ and red lines are NLs below $E_F$.} \label{fig:DBI} \end{figure} Even trivial NSs can be transformed into DCNSs via a continuous deformation of the band structure, which is referred to as a double band inversion process. Here we provide a simple continuum model for the double band inversion, applicable to systems with inversion and chiral symmetries. Let us consider a Hamiltonian $H_{\mathrm{conti}}(\mathbf{k})$, \begin{gather} H_{\mathrm{conti}}(\mathbf{k})=k_x \sigma_{x}+\left(k_{x}^{2}+k_{y}^{2}-M(k_z)\right)\sigma_{y}\tau_{y}+m\sigma_{x}\tau_{z}, \label{CTmodel} \end{gather} where $M(k_z)=M_{0}-0.1\cos k_{z}$. Note that $H_{\mathrm{conti}}(\mathbf{k})$ has the symmetries $\mathfrak{T}=\mathcal{K}$ and $\mathcal{C}=\sigma_{z}$. As $M_{0}$ increases, doubly charged nodal surfaces appear near $k_x=k_y=0$. When $M<-m$, there are no nodes either at $E_F$ or below $E_F$. After $M_0$ increases so that $M>-m$, a NS appears near $k_x=k_y=0$. When $M$ becomes larger than $0$, two NLs appear below $E_{F}$ inside the NS. When $M>m^2+1/4$, the NS is separated into two NSs, each of which surrounds one of the NLs, which means that the two NSs are doubly charged. These processes are illustrated in Fig.~\ref{fig:DBI}.
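The symmetries of the continuum model (\ref{CTmodel}) can be verified directly. In the sketch below (our illustration; the tensor ordering $\sigma\otimes\tau$ in the Kronecker products and the sampled parameter values are our assumptions), $H_{\mathrm{conti}}$ is confirmed to be a real matrix, consistent with $\mathfrak{T}=\mathcal{K}$, and to anticommute with $\mathcal{C}=\sigma_z$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_conti(kx, ky, kz, m=1.0, M0=0.5):
    # Eq. (CTmodel) with M(kz) = M0 - 0.1 cos(kz); sigma acts on the first
    # factor, tau on the second (our ordering convention)
    M = M0 - 0.1*np.cos(kz)
    return (kx*np.kron(sx, s0)
            + (kx**2 + ky**2 - M)*np.kron(sy, sy)
            + m*np.kron(sx, sz))

C = np.kron(sz, s0)                        # chiral symmetry C = sigma_z
rng = np.random.default_rng(1)
for _ in range(20):
    kx, ky, kz = rng.uniform(-1, 1, 3)
    H = H_conti(kx, ky, kz)
    assert np.allclose(H.imag, 0)                          # T = K: H is real
    assert np.allclose(C @ H + H @ C, np.zeros((4, 4)))    # {H, C} = 0
```

Both checks hold for arbitrary momenta, so the model stays in class BDI throughout the double band inversion.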
\subsection{Lattice model of a class BDI superconductor} \subsubsection{Constraints on the BdG Hamiltonian} In second quantization, a tight-binding Hamiltonian is given by \begin{gather} H_{\mathrm{normal}}=\sum_{\alpha\beta\mathbf{k}}\mathcal{H}^{\alpha\beta}_{\mathbf{k}}c^{\alpha\dagger}_{\mathbf{k}}c^{\beta}_{\mathbf{k}}, \end{gather} where $\alpha$ and $\beta$ are indices for the orbital and spin degrees of freedom. If $N$ is the number of orbital states used, $\alpha$ and $\beta$ each run from 1 to $2N$. Here $\mathcal{H}_{\mathbf{k}}$ is a $2N\times 2N$ matrix describing the band structure; we call $\mathcal{H}_{\mathbf{k}}$ the normal-state Hamiltonian. Note that $\mathcal{H}_{\mathbf{k}}=\mathcal{H}^{\dagger}_{\mathbf{k}}$ due to hermiticity. We can obtain a Hamiltonian for a superconductor by adding a pairing field between electrons to the normal-state Hamiltonian. Under the mean-field approximation, the Hamiltonian with the pairing potential added is given by~\cite{altland1997nonstandard} \begin{gather} \nonumber H_{\mathrm{SC}}=\sum_{\alpha\beta\mathbf{k}}\left(\mathcal{H}^{\alpha\beta}_{\mathbf{k}}c^{\alpha\dagger}_{\mathbf{k}}c^{\beta}_{\mathbf{k}}+\frac{1}{2}\Delta^{\alpha\beta}_{\mathbf{k}}c^{\alpha\dagger}_{\mathbf{k}}c^{\beta\dagger}_{-\mathbf{k}}\right. \\\left.+\frac{1}{2}\Delta^{\alpha\beta*}_{-\mathbf{k}}c^{\beta}_{-\mathbf{k}}c^{\alpha}_{\mathbf{k}}\right), \end{gather} where $\Delta_{\mathbf{k}}$ is called the gap function, a $2N\times 2N$ matrix. The gap function satisfies $\Delta_{\mathbf{k}}=-\Delta^{T}_{-\mathbf{k}}$ due to the fermionic statistics. The Hamiltonian $H_{\mathrm{SC}}$ can be expressed in terms of the Nambu spinor, $\begin{pmatrix} c^{\alpha}_{\mathbf{k}}\\c^{\alpha\dagger}_{-\mathbf{k}} \end{pmatrix}$.
The upper component $c^{\alpha}_{\mathbf{k}}$ of the Nambu spinor is the annihilation operator of an electron with the orbital and spin degree of freedom $\alpha$, and the lower component $c^{\alpha\dagger}_{-\mathbf{k}}$ is the annihilation operator of a hole with the same degree of freedom $\alpha$. We refer to the freedom of choosing between the electron and the hole as the particle-hole degree of freedom. Then the Hamiltonian $H_{\mathrm{SC}}$ can be rewritten as \begin{gather} H_{\mathrm{SC}}=\frac{1}{2}\sum_{\alpha\beta\mathbf{k}}\begin{pmatrix} c^{\alpha\dagger}_{\mathbf{k}}&&c^{\alpha}_{-\mathbf{k}} \end{pmatrix}\mathcal{H}_{\mathrm{BdG}}^{\alpha\beta}(\mathbf{k})\begin{pmatrix} c^{\beta}_{\mathbf{k}}\\c^{\beta\dagger}_{-\mathbf{k}} \end{pmatrix}. \end{gather} Here $\mathcal{H}_{\mathrm{BdG}}(\mathbf{k})$ is a $4N\times 4N$ Bogoliubov–de Gennes (BdG) Hamiltonian, which is given by \begin{gather} \mathcal{H}_{\mathrm{BdG}}(\mathbf{k})=\begin{pmatrix} \mathcal{H}_{\mathbf{k}}&&\Delta_{\mathbf{k}}\\ -\Delta^{*}_{-\mathbf{k}}&&-\mathcal{H}^{T}_{-\mathbf{k}} \end{pmatrix}.\label{BdGorigin} \end{gather} Due to this particular form, the BdG Hamiltonian has a particle-hole symmetry $\mathcal{P}$, \begin{gather} \mathcal{P}=\sigma_{x}\mathcal{K},\label{PHsym} \end{gather} where $\sigma_{x}$ is a Pauli matrix acting on the particle-hole space. If the system has the full spin-rotation symmetry, which is the case for class BDI, we can reduce the spin degrees of freedom in the BdG Hamiltonian. In the particle-hole space, the spin-rotation symmetry $J_{i}\ (i=x,y,z)$ is given by~\cite{altland1997nonstandard} \begin{gather} J_{i}=\begin{pmatrix} s_{i}&&0\\ 0&&-s_{i}^{T} \end{pmatrix}. \end{gather} Here $s_{i}$ are Pauli matrices acting on the spin degrees of freedom. If the superconducting system has full spin-rotation symmetry, then $\left[\mathcal{H}_{\mathrm{BdG}},J_{i}\right]=0$ for all $i=x,y,z$.
This condition changes the form of the BdG Hamiltonian to \begin{gather} \mathcal{H}_{\mathrm{BdG}}(\mathbf{k})=\begin{pmatrix} h_{\mathbf{k}}&&0&&0&&\delta_{\mathbf{k}}\\ 0&&h_{\mathbf{k}}&&-\delta_{\mathbf{k}}&&0\\ 0&&-\delta^{*}_{-\mathbf{k}}&&-h^{T}_{-\mathbf{k}}&&0\\ \delta^{*}_{-\mathbf{k}}&&0&&0&&-h^{T}_{-\mathbf{k}} \end{pmatrix}, \end{gather} where $h_{\mathbf{k}}$ and $\delta_{\mathbf{k}}$ are $N\times N$ matrices acting on the orbital degrees of freedom. Switching the second and fourth rows and columns, we obtain a block-diagonalized form of the BdG Hamiltonian. Each block matrix gives the same second-quantized Hamiltonian due to the full spin-rotation symmetry. The first block is given by \begin{gather} \mathcal{H}_{\mathrm{rBdG}}(\mathbf{k})=\begin{pmatrix} h_{\mathbf{k}}&&\delta_{\mathbf{k}}\\ \delta^{*}_{-\mathbf{k}}&&-h^{T}_{-\mathbf{k}} \end{pmatrix}.\label{rBdG} \end{gather} Let us call this $2N\times 2N$ matrix the reduced BdG Hamiltonian. $h_{\mathbf{k}}$ and $\delta_{\mathbf{k}}$ inherit the hermiticity of $\mathcal{H}_{\mathbf{k}}$ and the property of $\Delta_{\mathbf{k}}$ coming from the fermionic statistics: $h^{\dagger}_{\mathbf{k}}=h_{\mathbf{k}}$ and $\delta_{\mathbf{k}}=\delta^{T}_{-\mathbf{k}}$. The reduced BdG Hamiltonian has a form similar to that of the BdG Hamiltonian, and we can find a particle-hole symmetry $\mathcal{P}_{r}$ for the reduced BdG Hamiltonian, which is given by \begin{gather} \mathcal{P}_{r}=ir_{y}\mathcal{K},\label{Pr} \end{gather} where $r_{y}$ is a Pauli matrix acting on the reduced particle-hole space. We can check that this particle-hole symmetry satisfies $\mathcal{P}_{r}^2=-1$. To make a BdG Hamiltonian with full spin-rotation symmetry belong to class BDI, we impose three conditions on the BdG Hamiltonian~\cite{bzduvsek2017doubly}: (i) the system has an inversion symmetry $\mathcal{I}$; (ii) it has a time-reversal symmetry; (iii) the parity of the gap function is odd.
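The particle-hole symmetry of the reduced BdG Hamiltonian can be checked numerically. In the sketch below (our illustration; the random matrices \texttt{h0}, \texttt{h1}, \texttt{d0}, \texttt{d1} and the linear $k$-dependence are placeholder assumptions), $h_{\mathbf{k}}$ and $\delta_{\mathbf{k}}$ are built to satisfy $h^{\dagger}_{\mathbf{k}}=h_{\mathbf{k}}$ and $\delta_{\mathbf{k}}=\delta^{T}_{-\mathbf{k}}$, and then $\mathcal{P}_{r}=ir_{y}\mathcal{K}$ is verified to satisfy $\mathcal{P}_{r}^{2}=-1$ and $\mathcal{P}_{r}\mathcal{H}_{\mathrm{rBdG}}(\mathbf{k})\mathcal{P}_{r}^{-1}=-\mathcal{H}_{\mathrm{rBdG}}(-\mathbf{k})$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2    # hypothetical number of orbitals

h0 = rng.normal(size=(N, N)); h0 = (h0 + h0.T)/2
h1 = rng.normal(size=(N, N)); h1 = (h1 + h1.T)/2
def h(k):        # real symmetric (hence hermitian) for every k
    return h0 + k*h1

d0 = rng.normal(size=(N, N)); d0 = (d0 + d0.T)/2    # symmetric, even in k
d1 = rng.normal(size=(N, N)); d1 = (d1 - d1.T)/2    # antisymmetric, odd in k
def delta(k):    # satisfies delta_k = delta_{-k}^T
    return d0 + k*d1

def H_rBdG(k):   # Eq. (rBdG)
    return np.block([[h(k), delta(k)],
                     [delta(-k).conj(), -h(-k).T]])

Ur = np.block([[np.zeros((N, N)), np.eye(N)],
               [-np.eye(N), np.zeros((N, N))]])     # i r_y, unitary part of P_r
assert np.allclose(Ur @ Ur.conj(), -np.eye(2*N))    # P_r^2 = -1
for k in np.linspace(-1.0, 1.0, 5):
    # antiunitary particle-hole symmetry: P_r H(k) P_r^{-1} = -H(-k)
    assert np.allclose(Ur @ H_rBdG(k).conj() @ Ur.T, -H_rBdG(-k))
```

For an antiunitary operator $U\mathcal{K}$, squaring gives $UU^{*}$, which is what the first assertion evaluates.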
The full spin-rotation symmetry of the BdG Hamiltonian imposes the particle-hole symmetry $\tilde{\mathcal{P}}$. Considering $\tilde{\mathcal{P}}$ acting on the block-diagonalized BdG Hamiltonian whose first block is the reduced BdG Hamiltonian, we can choose one representation of $\tilde{\mathcal{P}}$. In this case, $\tilde{\mathcal{P}}$ is given by \begin{gather} \tilde{\mathcal{P}}=\begin{pmatrix} \mathcal{P}_{r} & 0 \\ 0 & \mathcal{P}_{r} \end{pmatrix}. \end{gather} Here $\mathcal{P}_{r}$ is the reduced particle-hole symmetry given by Eq.~(\ref{Pr}). On the other hand, we can consider $\tilde{\mathcal{P}}$ acting on the spin space and the particle-hole space. In this case, $\tilde{\mathcal{P}}$ is given by \begin{gather} \tilde{\mathcal{P}}=i \sigma_{x}\otimes s_{y} \mathcal{K},\label{Ptilde} \end{gather} where $\sigma_{i}$ and $s_{j}$ are the Pauli matrices acting on the particle-hole space and the spin space, respectively. The inversion operator $\mathcal{I}$ of the BdG Hamiltonian can be expressed using the inversion operator $\mathcal{I}_{n}$ of the normal-state Hamiltonian: \begin{gather} \mathcal{I}=\begin{pmatrix} \mathcal{I}_{n}&&0\\0&&-\mathcal{I}_{n} \end{pmatrix}=\sigma_{z}\otimes \mathcal{I}_{n}.\label{Itot} \end{gather} Note that the minus sign in front of $\mathcal{I}_{n}$ in the second diagonal block comes from condition (iii). Since $\mathcal{I}_{n}$ operates on the normal-state Hamiltonian, $\mathcal{I}_{n}$ acts on both the orbital and spin degrees of freedom. On the spin degrees of freedom, however, $\mathcal{I}_{n}$ acts trivially. Therefore, $\mathcal{I}_{n}$ is block-diagonal in the spin degrees of freedom, \begin{gather} \mathcal{I}_{n}=s_{0}\otimes\mathcal{I}_{o}, \end{gather} where $\mathcal{I}_{o}$ is an inversion operator acting on the orbital degrees of freedom. From Eqs.~(\ref{Ptilde}) and~(\ref{Itot}), $(\tilde{\mathcal{P}}\mathcal{I})^{2}=1$.
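The algebraic relations $\tilde{\mathcal{P}}^{2}=-1$ and $(\tilde{\mathcal{P}}\mathcal{I})^{2}=1$ can be confirmed with a few lines of NumPy. In the sketch below (our illustration; the orbital dimension and the choice $\mathcal{I}_{o}=\mathbb{1}$ are placeholder assumptions, and only $\mathcal{I}_{o}^{2}=1$ matters), an antiunitary operator $U\mathcal{K}$ squares to $UU^{*}$:

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

No = 2                 # hypothetical orbital dimension
Io = np.eye(No)        # a trivial orbital inversion operator with Io^2 = 1

# unitary parts of the antiunitary operators (O = U K  =>  O^2 = U U^*)
U_P = 1j*np.kron(np.kron(sx, sy), np.eye(No))   # P-tilde, Eq. (Ptilde)
U_I = np.kron(np.kron(sz, s0), Io)              # I = sigma_z x s_0 x I_o, Eq. (Itot)

dim = 4*No
assert np.allclose(U_P @ U_P.conj(), -np.eye(dim))    # P-tilde^2 = -1
U_PI = U_P @ U_I       # I is real, so P-tilde I = (U_P U_I) K
assert np.allclose(U_PI @ U_PI.conj(), np.eye(dim))   # (P-tilde I)^2 = +1
```

The check is independent of the specific $\mathcal{I}_{o}$ as long as it is real and squares to the identity.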
From $\mathcal{I}\mathcal{H}_{\mathrm{BdG}}(\mathbf{k})\mathcal{I}^{-1}=\mathcal{H}_{\mathrm{BdG}}(-\mathbf{k})$, we can deduce the inversion symmetry constraints on $h_{\mathbf{k}}$ and $\delta_{\mathbf{k}}$: \begin{gather} \mathcal{I}_{o}h_{\mathbf{k}}\mathcal{I}_{o}^{-1}=h_{-\mathbf{k}},\label{Invh}\\ \mathcal{I}_{o}\delta_{\mathbf{k}}\mathcal{I}_{o}^{-1}=-\delta_{-\mathbf{k}}.\label{Invdel} \end{gather} \subsubsection{Lattice model of a 4-band BdG Hamiltonian} Here, we explain the 4-band lattice model in the main text. We first introduce a 4-band BdG Hamiltonian belonging to class BDI on AA-stacked honeycomb layers. The lattice is made by stacking honeycomb layers with equal spacing and without any relative in-plane translation [see Figs.~2(a) and 2(b) in the main text]. The primitive Bravais vectors are given by \begin{gather} \mathbf{R}_{1,2}=\left(\frac{3}{2},\pm\frac{\sqrt{3}}{2},0\right)a,~\mathbf{R}_{3}=(0,0,c), \end{gather} where $a$ is the distance between nearest-neighboring atoms in the plane and $c$ is the distance between the layers. The relative position vectors $\mathbf{t}_{1,2,3}$ between nearest-neighboring atoms in the plane are given by \begin{gather} \mathbf{t}_{1,2}=\left(\frac{1}{2},\pm\frac{\sqrt{3}}{2},0\right)a,~\mathbf{t}_{3}=(-a,0,0). \end{gather} For convenience, we also define the relative position vectors $\mathbf{T}_{1,2,3}$ between next-nearest-neighboring atoms in the plane, \begin{gather} \mathbf{T}_{1,2}=\left(\pm\frac{3}{2},\frac{\sqrt{3}}{2},0\right)a,~\mathbf{T}_{3}=(0,-\sqrt{3}a,0). \end{gather} We place an $s$ orbital at each atomic position, represented by the black dots in Figs.~2(a) and 2(b). We consider an on-site energy $E_{\mathrm{on}}$, which is the same for all $s$ orbitals, an intra-layer nearest-neighbor hopping with amplitude $t$, and an inter-layer nearest-neighbor hopping with amplitude $t_{z}$.
They produce the normal-state Hamiltonian \begin{align} h_{2}(\mathbf{k}) &= \left(E_{\mathrm{on}}+2 t_{z} \cos(k_{z} c)\right)\mathbb{1}_{2\times 2}\nonumber \\ &+ t\sum_{i=1}^{3}\left(\cos (\mathbf{k}\cdot \mathbf{t}_{i})\tau_{x}+\sin (\mathbf{k}\cdot \mathbf{t}_{i})\tau_{y}\right),\label{hnor2} \end{align} where $\tau_{i}$ are Pauli matrices acting on the orbital degree of freedom. This system has the $C_{6}$ rotation symmetry, the inversion symmetry, and the time-reversal symmetry. In particular, the inversion symmetry is represented by $\mathcal{I}=\tau_{x}$ and the time-reversal symmetry by $\mathcal{T}=\mathcal{K}$. If we consider a $2\times 2$ reduced gap function $\delta_{2}(\mathbf{k})$ given by \begin{gather} \delta_{2}(\mathbf{k})=\psi_{0}\tau_{z},\label{d2} \end{gather} where $\psi_{0}$ is a real-valued $s$-wave order parameter, then $\delta_{2}(\mathbf{k})$ is time-reversal symmetric and satisfies Eq.~(\ref{Invdel}). This means that the reduced BdG Hamiltonian constructed from $h_{2}(\mathbf{k})$ and $\delta_{2}(\mathbf{k})$ belongs to class BDI. This reduced BdG Hamiltonian has the two symmetries \begin{gather} \mathfrak{T}=\tau_{x}\otimes r_{0}\mathcal{K},~\mathfrak{B}=i\tau_{x}\otimes r_{y}\mathcal{K}, \end{gather} where $\tau_{i}$ and $r_{j}$ are Pauli matrices; $\tau_{i}$ act on the orbital degrees of freedom and $r_{j}$ act on the reduced particle-hole space. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{FigS3.jpg} \caption{Shrinking the nodal surface. The black line is the boundary of the first Brillouin zone, the blue dot is the nodal line, and the red lines are the nodal surfaces. All of the figures are evaluated on the $k_{z}=0$ plane. (a), (b), and (c) correspond to $E_{\mathrm{on}}=-0.6, -1.5, -3$, respectively.} \label{fig:Shrk} \end{figure} In the main text, we consider two parameter sets. The only difference between them is the value of $E_{\mathrm{on}}$.
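The inversion constraints, Eqs.~(\ref{Invh}) and~(\ref{Invdel}) with $\mathcal{I}_{o}=\tau_{x}$, can be verified directly for $h_{2}(\mathbf{k})$ and $\delta_{2}(\mathbf{k})$. In the sketch below (our illustration; the numerical parameter values are placeholders), Eq.~(\ref{hnor2}) is implemented with the in-plane vectors $\mathbf{t}_{1,2,3}$ defined above:

```python
import numpy as np

t0 = np.eye(2, dtype=complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]])
tz = np.array([[1, 0], [0, -1]], dtype=complex)

a, c = 1.0, 1.0
tvecs = a*np.array([[0.5,  np.sqrt(3)/2, 0.0],
                    [0.5, -np.sqrt(3)/2, 0.0],
                    [-1.0, 0.0, 0.0]])            # t_1, t_2, t_3

def h2(k, Eon=-0.5, t=1.0, t_z=0.5):
    # Eq. (hnor2); Eon, t, t_z values here are placeholder parameters
    g = sum(np.cos(k @ ti)*tx + np.sin(k @ ti)*ty for ti in tvecs)
    return (Eon + 2*t_z*np.cos(k[2]*c))*t0 + t*g

def d2(psi0=0.3):
    return psi0*tz            # Eq. (d2)

rng = np.random.default_rng(3)
for _ in range(10):
    k = rng.uniform(-np.pi, np.pi, 3)
    assert np.allclose(h2(k), h2(k).conj().T)        # hermiticity
    # inversion constraints, Eqs. (Invh) and (Invdel), with I_o = tau_x
    assert np.allclose(tx @ h2(k) @ tx, h2(-k))
    assert np.allclose(tx @ d2() @ tx, -d2())
```

The $\tau_{y}$ term flips sign under $\tau_{x}$ conjugation exactly as $\sin(\mathbf{k}\cdot\mathbf{t}_{i})$ does under $\mathbf{k}\rightarrow-\mathbf{k}$, which is why Eq.~(\ref{Invh}) holds.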
When $E_{\mathrm{on}}$ changes from $-0.5$ to $-3$, while the other parameters are fixed, the nodal surfaces merge and then disappear, see Fig.~\ref{fig:Shrk}. We can see that doubly charged nodal surfaces can disappear after merging together, although a doubly charged nodal surface cannot disappear alone. Note that the nodal surfaces merge at $E_{\mathrm{on}}\approx -0.67$ and the single nodal surface enclosing the BZ center disappears at $E_{\mathrm{on}}\approx -2.8$. \subsubsection{Lattice model of a 6-band BdG Hamiltonian} We extend the 4-band BdG Hamiltonian to a 6-band BdG Hamiltonian by adding one more $s$ orbital inside the unit cell of the crystal structure of the 4-band model. It is located midway between adjacent honeycomb layers, at the center of the honeycomb when viewed from the top. For the added orbital, we consider an on-site energy $E'_{\mathrm{on}}$ and a nearest-neighbor hopping with amplitude $t'$ between the added orbital and the orbitals in the honeycomb layers.
Then the $3\times 3$ normal-state Hamiltonian $h_{3}(\mathbf{k})$ is given by \begin{gather} h_{3}(\mathbf{k})=\left(\begin{array}{ccc} & & [h_{3}(\mathbf{k})]_{13} \\ \multicolumn{2}{c}{\smash{\raisebox{.5\normalbaselineskip}{$h_{2}(\mathbf{k})$}}} & [h_{3}(\mathbf{k})]_{23} \\ \left[h_{3}(\mathbf{k})\right]^{*}_{13} & [h_{3}(\mathbf{k})]^{*}_{23} & [h_{3}(\mathbf{k})]_{33} \end{array} \right), \end{gather} where $h_{2}(\mathbf{k})$ is the $2\times 2$ normal-state Hamiltonian before adding the $s$ orbital, and the other components of $h_{3}(\mathbf{k})$ are given by \begin{gather} [h_{3}(\mathbf{k})]_{13}=t'\sum_{i=1}^{3}(e^{i\mathbf{k}\cdot (\mathbf{t}_{i}+\mathbf{t}_{z})}+e^{i\mathbf{k}\cdot (\mathbf{t}_{i}-\mathbf{t}_{z})}),\\ [h_{3}(\mathbf{k})]_{23}=t'\sum_{i=1}^{3}(e^{-i\mathbf{k}\cdot (\mathbf{t}_{i}+\mathbf{t}_{z})}+e^{-i\mathbf{k}\cdot (\mathbf{t}_{i}-\mathbf{t}_{z})}),\\ [h_{3}(\mathbf{k})]_{33}=E'_{\mathrm{on}},\label{hnor3} \end{gather} where $\mathbf{t}_{z}=(0,0,c/2)$. This tight-binding Hamiltonian has the inversion symmetry and the time-reversal symmetry. The inversion symmetry operator $\mathcal{I}$ is given by \begin{gather} \mathcal{I}=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \end{gather} and the time-reversal symmetry $\mathcal{T}$ is given by $\mathcal{K}$. We consider a time-reversal symmetric reduced gap function $\delta_{3}(\mathbf{k})$ satisfying Eq.~(\ref{Invdel}), \begin{gather} \delta_{3}(\mathbf{k})=\left(\begin{array}{ccc} & & \psi'_{0} \\ \multicolumn{2}{c}{\smash{\raisebox{.5\normalbaselineskip}{$\delta_{2}(\mathbf{k})$}}} & -\psi'_{0} \\ \psi'_{0} & -\psi'_{0} & 0 \end{array} \right),\label{d3} \end{gather} where $\delta_{2}(\mathbf{k})$ is the $2\times 2$ gap function given by Eq.~(\ref{d2}) and $\psi'_{0}$ is another $s$-wave order parameter. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{FigS4.jpg} \caption{NSs and Euler angles for 6-band lattice models.
(a) A cross section of the NS with $c_{\textrm{1D}}=1$ on the $k_{z}=0$ plane. The light red (white) region has the same meaning as in Fig.~2. (b) Lifted Euler angles of the flattened Hamiltonian along the dashed black lines in (a). (c) Components of $|u^{\mathrm{occ}}_{1}\rangle$ along the small black circle in (a). Red, blue and black solid (dashed) curves correspond to the first, second and third (fourth, fifth and sixth) components of $|u^{\mathrm{occ}}_{1}\rangle$. (d-f) Similar figures for the NS with $c_{\textrm{1D}}=0$.} \label{fig:BDI6B} \end{figure} To evaluate $c_{\textrm{1D}}$ in the 6-band model, we compute the homotopy equivalence class of $A_{FB}(\mathbf{k})\in \mathrm{O(3)}$ defined on a circle surrounding a NS. $A_{FB}$ can be restricted to $\mathrm{SO(3)}$ using a map $f : \left[\mathrm{O}(3)-\mathrm{SO}(3)\right]\rightarrow \mathrm{SO}(3)$ as before. In terms of the three Euler angles $\alpha, \beta, \gamma\in [0,2\pi]$, $A_{FB}$ can be written as $A_{FB}= e^{\gamma L_{3}}e^{\beta L_{1}}e^{\alpha L_{3}}$, where $L_{i}~(i=1,2,3)$ are generators of the Lie algebra $\mathfrak{so}(3)$ defined by $[L_{i}]_{jk}=-\epsilon_{ijk}$.\\ \indent To determine the homotopy equivalence class of a closed loop in $\mathrm{SO(3)}$, we examine a lifting of the closed loop to the double covering group $\mathrm{SU(2)}$, obtained by replacing $L_{j}$ with $-\frac{i}{2}\sigma_{j}$, where $\sigma_{j}$ are Pauli matrices~\cite{bzduvsek2019nonabelian}, and by substituting $\alpha, \beta, \gamma$ with $\tilde{\alpha}, \tilde{\beta}, \tilde{\gamma}\in [0,4\pi]$, respectively. The lifted loop can take the form of either a closed loop or an open line because the covering map $p:\mathrm{SU(2)}\rightarrow \mathrm{SO(3)}$ is two-to-one. In the former case, as $\mathrm{SU(2)}$ is simply connected, a closed loop in $\mathrm{SU(2)}$ can always be contracted to a point. This means that the homotopy equivalence class of the original loop is a trivial element of $\pi_{1}(\mathrm{SO(3)})$.
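Both outcomes of the lifting, a closed loop and an open line, can be demonstrated numerically. The sketch below (our construction; the helper names \texttt{so3\_exp}, \texttt{su2\_exp}, and \texttt{lift\_is\_closed} are ours) lifts a loop step by step: each small relative rotation $R_{k}^{T}R_{k+1}$ has a unique $\mathrm{SU(2)}$ preimage near the identity, which fixes the sign ambiguity of the two-to-one covering map:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def so3_exp(n, th):
    # Rodrigues formula: rotation by angle th about the unit axis n
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(th)*K + (1 - np.cos(th))*(K @ K)

def su2_exp(n, th):
    # SU(2) element covering so3_exp(n, th): cos(th/2) - i sin(th/2) n.sigma
    ns = sum(ni*si for ni, si in zip(n, sig))
    return np.cos(th/2)*np.eye(2) - 1j*np.sin(th/2)*ns

def lift_is_closed(loop):
    U = np.eye(2, dtype=complex)
    for Rk, Rk1 in zip(loop, loop[1:]):
        dR = Rk.T @ Rk1                     # small relative rotation
        th = np.arccos(np.clip((np.trace(dR) - 1)/2, -1, 1))
        if th < 1e-12:
            continue
        n = np.array([dR[2, 1] - dR[1, 2],
                      dR[0, 2] - dR[2, 0],
                      dR[1, 0] - dR[0, 1]])/(2*np.sin(th))
        U = U @ su2_exp(n, th)              # preimage continuous in t
    return np.allclose(U, np.eye(2), atol=1e-6)

z = np.array([0.0, 0.0, 1.0])
ts = np.linspace(0, 1, 400)
loop_2pi = [so3_exp(z, 2*np.pi*t) for t in ts]   # winds once about z
loop_4pi = [so3_exp(z, 4*np.pi*t) for t in ts]   # winds twice about z
assert not lift_is_closed(loop_2pi)   # open line: nontrivial pi_1 element
assert lift_is_closed(loop_4pi)       # closed loop: trivial pi_1 element
```

The $2\pi$ rotation lifts to an open line ending at $-\mathbb{1}\in\mathrm{SU(2)}$, while the $4\pi$ rotation lifts to a closed loop, in line with $\pi_{1}(\mathrm{SO(3)})=\mathbb{Z}_{2}$.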
In the latter case, on the other hand, as the difference of the two end points of the lifted open line in $\mathrm{SU(2)}$ has a fixed $L^{2}$-norm $2\sqrt{2}$, the lifted line cannot be smoothly contracted to a point by deforming the original loop in $\mathrm{SO(3)}$ continuously. This is the case when the original loop represents a non-trivial element of $\pi_{1}(\mathrm{SO(3)})$. As $\pi_{1}(\mathrm{SO(3)})=\mathbb{Z}_{2}$, the shape of the lifted line (closed or open) gives sufficient information to determine the homotopy equivalence class of the original loop. \\ \indent Figs.~\ref{fig:BDI6B}(b,~e) show the lifted Euler angles of $A_{FB}(\mathbf{k})$ for the 6-band model with the NSs in Figs.~\ref{fig:BDI6B}(a,~d). The case with different (same) lifted Euler angles at $\theta =0$ and $\theta =2\pi$ corresponds to $c_{\textrm{1D}}=1$ ($c_{\textrm{1D}}=0$). Here $\theta$ parametrizes the black dotted circle. In Figs.~\ref{fig:BDI6B}(c,~f), we plot the components of the highest occupied state $|u^{\mathrm{occ}}_{1}\rangle$ computed on a small black circle parametrized by $\theta'\in[0,2\pi]$ inside the NS shown in Figs.~\ref{fig:BDI6B}(a,~d). The opposite signs of $|u^{\mathrm{occ}}_{1}\rangle$ at $\theta'=0$ and $2\pi$ in Fig.~\ref{fig:BDI6B}(c) indicate the presence of a NL between occupied bands, marked by a blue dot in Fig.~\ref{fig:BDI6B}(a), which again confirms the linking structure of DCNSs. \subsubsection{1D charge calculation of a 2N-band model} The 1D charge of the $2N$-band case is defined by the homotopy equivalence class of $A_{FB}(\mathbf{k})\in \mathrm{SO(N)}$. Similar to the $\mathrm{SO(3)}$ case, an arbitrary element of $\mathrm{SO(N)}$ can be represented by $N(N-1)/2$ generalized Euler angles. Here, we consider the passive transformation. $A_{FB}(\mathbf{k})$ can be determined by specifying how it transforms the standard orthonormal basis $\{\mathbf{e}_{i}\}$.
The transformed basis $\{\mathbf{e}^{\mathrm{N}}_{j}\}$ is given by \begin{gather} \mathbf{e}^{\mathrm{N}}_{j}=\sum_{i=1}^{\mathrm{N}}\mathbf{e}_{i}[A_{FB}(\mathbf{k})]_{ij}. \end{gather} \indent We represent $A_{FB}(\mathbf{k})$ by a product of rotation matrices $\mathrm{M}_{i}\in\mathrm{SO(N)}$ $(i=1,\cdots,\mathrm{N-1})$. The first rotation matrix $\mathrm{M}_{1}$ turns the basis $\{\mathbf{e}_{i}\}$ into another basis $\{\mathbf{e}^{1}_{i}\}$, and we choose $\mathrm{M}_{1}$ such that the last rotated basis vector $\mathbf{e}^{1}_{\mathrm{N}}$ satisfies \begin{gather} \mathbf{e}^{1}_{\mathrm{N}}=\mathbf{e}^{\mathrm{N}}_{\mathrm{N}}. \end{gather} The second rotation matrix $\mathrm{M}_{2}$ turns the basis $\{\mathbf{e}^{1}_{i}\}$ into another basis $\{\mathbf{e}^{2}_{i}\}$, and we choose $\mathrm{M}_{2}$ such that the rotated basis vectors $\mathbf{e}^{2}_{\mathrm{N}}$ and $\mathbf{e}^{2}_{\mathrm{N-1}}$ satisfy \begin{gather} \mathbf{e}^{2}_{\mathrm{N}}=\mathbf{e}^{\mathrm{N}}_{\mathrm{N}},~\mathbf{e}^{2}_{\mathrm{N-1}}=\mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}. \end{gather} Proceeding in this way, we define all the matrices $\mathrm{M}_{i}$ so that \begin{gather} A_{FB}(\mathbf{k})=\mathrm{M}_{\mathrm{N-1}}\cdots \mathrm{M}_{2}\mathrm{M}_{1}.\label{Mprod} \end{gather} Let us consider $\mathrm{M}_{1}$ first.
The N-dimensional unit vector $\mathbf{e}^{\mathrm{N}}_{\mathrm{N}}$ can always be parametrized by $\mathrm{N}-1$ angles $\alpha^{\mathrm{N}}_{1},\cdots,\alpha^{\mathrm{N}}_{\mathrm{N-1}}$, \begin{align} \mathbf{e}^{\mathrm{N}}_{\mathrm{N}}&=\sin\alpha^{\mathrm{N}}_{1}\sin\alpha^{\mathrm{N}}_{2}\cdots\sin\alpha^{\mathrm{N}}_{\mathrm{N}-1}\mathbf{e}_{1}\nonumber \\ &+\cos\alpha^{\mathrm{N}}_{1}\sin\alpha^{\mathrm{N}}_{2}\cdots\sin\alpha^{\mathrm{N}}_{\mathrm{N}-1}\mathbf{e}_{2}\nonumber \\ &+\cos\alpha^{\mathrm{N}}_{2}\cdots\sin\alpha^{\mathrm{N}}_{\mathrm{N}-1}\mathbf{e}_{3}\nonumber \\ &+\cdots+\cos\alpha^{\mathrm{N}}_{\mathrm{N}-1}\mathbf{e}_{\mathrm{N}}.\label{Nangexp} \end{align} Let us define the $\mathrm{N(N-1)/2}$ generators $L_{ij}$ of the Lie algebra $\mathfrak{so}\mathrm{(N)}$ by \begin{gather} [L_{ij}]_{nm}= \begin{cases} -1, & \mathrm{if}~i=n,~j=m, \\ 1, & \mathrm{if}~i=m,~j=n, \\ 0, & \mathrm{otherwise}, \end{cases} \end{gather} where $i<j$. Note that the matrix exponential $e^{\theta L_{ij}}$ of a generator $L_{ij}$ is an element of the Lie group $\mathrm{SO(N)}$, and $e^{\theta L_{ij}}$ rotates the basis vectors $\mathbf{e}_{i},\mathbf{e}_{j}$ counter-clockwise through the angle $\theta$. If we define $\mathrm{M}_{1}$ by \begin{gather} \mathrm{M}_{1}=e^{-\alpha^{\mathrm{N}}_{1}L_{12}}e^{-\alpha^{\mathrm{N}}_{2}L_{23}}\cdots e^{-\alpha^{\mathrm{N}}_{\mathrm{N-1}}L_{\mathrm{N}-1,\mathrm{N}}}, \end{gather} one can show that \begin{gather} \mathbf{e}^{1}_{\mathrm{N}}=\sum_{i=1}^{\mathrm{N}}\mathbf{e}_{i}[\mathrm{M}_{1}]_{i\mathrm{N}}=\mathbf{e}^{\mathrm{N}}_{\mathrm{N}}. \end{gather} \indent To define $\mathrm{M}_{2}$, we first represent $\mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}$ in the basis $\{\mathbf{e}_{i}^{1}\}$.
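As a sanity check of this construction, the following Python sketch (our own illustration, not part of the original derivation; the helper names are ours) builds the $\mathfrak{so}\mathrm{(N)}$ generators numerically for $\mathrm{N}=4$ and verifies that the last column of $\mathrm{M}_{1}$ reproduces the angle parametrization of Eq.~(\ref{Nangexp}).

```python
import numpy as np
from scipy.linalg import expm

def so_generator(i, j, N):
    """Generator L_{ij} of so(N): entry -1 at (i, j), +1 at (j, i) (zero-based)."""
    L = np.zeros((N, N))
    L[i, j], L[j, i] = -1.0, 1.0
    return L

def M1(alphas):
    """M_1 = exp(-a_1 L_{12}) exp(-a_2 L_{23}) ... exp(-a_{N-1} L_{N-1,N})."""
    N = len(alphas) + 1
    M = np.eye(N)
    for k, a in enumerate(alphas):
        M = M @ expm(-a * so_generator(k, k + 1, N))
    return M

def unit_vector(alphas):
    """The unit vector e^N_N parametrized by N-1 angles as in Eq. (Nangexp)."""
    N = len(alphas) + 1
    v = np.empty(N)
    v[0] = np.prod(np.sin(alphas))
    for k in range(1, N):
        v[k] = np.cos(alphas[k - 1]) * np.prod(np.sin(alphas[k:]))
    return v

alphas = np.array([0.3, 1.1, 2.0])   # arbitrary test angles, N = 4
M = M1(alphas)
```

The last column of `M` coincides with `unit_vector(alphas)`, and `M` is orthogonal with unit determinant, i.e. an element of $\mathrm{SO(4)}$.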
Since $\mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}$ and $\mathbf{e}^{\mathrm{N}}_{\mathrm{N}}$ are orthogonal, $\mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}$ can be written as a linear combination of the $\mathrm{N-1}$ basis vectors $\mathbf{e}^{1}_{1},\cdots,\mathbf{e}^{1}_{\mathrm{N-1}}$. Similar to Eq.~(\ref{Nangexp}), $\mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}$ can be parametrized by $\mathrm{N-2}$ angles $\alpha^{\mathrm{N-1}}_{1},\cdots,\alpha^{\mathrm{N-1}}_{\mathrm{N-2}}$, \begin{align} \mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}=&\sin\alpha^{\mathrm{N-1}}_{1}\sin\alpha^{\mathrm{N-1}}_{2}\cdots\sin\alpha^{\mathrm{N-1}}_{\mathrm{N-2}}\mathbf{e}^{1}_{1}\nonumber \\ +&\cos\alpha^{\mathrm{N-1}}_{1}\sin\alpha^{\mathrm{N-1}}_{2}\cdots\sin\alpha^{\mathrm{N-1}}_{\mathrm{N-2}}\mathbf{e}^{1}_{2}\nonumber \\ +&\cos\alpha^{\mathrm{N-1}}_{2}\cdots\sin\alpha^{\mathrm{N-1}}_{\mathrm{N-2}}\mathbf{e}^{1}_{3}\nonumber \\ +&\cdots+\cos\alpha^{\mathrm{N-1}}_{\mathrm{N-2}}\mathbf{e}^{1}_{\mathrm{N-1}}. \end{align} $\mathrm{M}_{2}$ can be defined in a form similar to $\mathrm{M}_{1}$, \begin{gather} \mathrm{M}_{2}=e^{-\alpha^{\mathrm{N-1}}_{1}L_{12}}e^{-\alpha^{\mathrm{N-1}}_{2}L_{23}}\cdots e^{-\alpha^{\mathrm{N-1}}_{\mathrm{N-2}}L_{\mathrm{N-2},\mathrm{N-1}}},\label{M2} \end{gather} and one can show that \begin{gather} \mathbf{e}^{2}_{\mathrm{N-1}}=\sum_{i=1}^{\mathrm{N}}\mathbf{e}^{1}_{i}[\mathrm{M}_{2}]_{i,\mathrm{N-1}}=\mathbf{e}^{\mathrm{N}}_{\mathrm{N-1}}. \end{gather} Moreover, $\mathrm{M_{2}}$ in Eq.~(\ref{M2}) does not rotate the $\mathrm{N}$-th basis vector. This means that \begin{gather} \mathbf{e}^{2}_{\mathrm{N}}=\mathbf{e}^{1}_{\mathrm{N}}=\mathbf{e}^{\mathrm{N}}_{\mathrm{N}}. \end{gather} \indent Following these steps, we obtain the rotation matrices $\mathrm{M}_{1},\cdots,\mathrm{M}_{\mathrm{N-1}}$ satisfying Eq.~(\ref{Mprod}).
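The full recursion can also be tested numerically. The sketch below (illustrative code with our own helper names; the angle extraction follows the parametrization in the text) decomposes a random element of $\mathrm{SO(4)}$ and reconstructs it; note that with the passive convention used here the factors compose as $\mathrm{M}_{1}\mathrm{M}_{2}\mathrm{M}_{3}$, the order implied by the successive changes of basis.

```python
import numpy as np
from scipy.linalg import expm

def so_generator(i, j, N):
    """Generator L_{ij} of so(N): entry -1 at (i, j), +1 at (j, i) (zero-based)."""
    L = np.zeros((N, N))
    L[i, j], L[j, i] = -1.0, 1.0
    return L

def angles_from_vector(v):
    """Invert the parametrization of Eq. (Nangexp): recover n-1 angles from a unit vector."""
    v = np.asarray(v, float)
    n = len(v)
    alphas = np.empty(n - 1)
    r = 1.0                                  # running product of sines of later angles
    for k in range(n - 1, 1, -1):
        c = np.clip(v[k] / r, -1.0, 1.0) if r > 1e-12 else 1.0
        alphas[k - 1] = np.arccos(c)         # angles 2..n-1 taken in [0, pi]
        r *= np.sin(alphas[k - 1])
    alphas[0] = np.arctan2(v[0], v[1])       # first angle may need the full range
    return alphas

def rotation_from_angles(alphas, N):
    """M = exp(-a_1 L_12) exp(-a_2 L_23) ... embedded in SO(N)."""
    M = np.eye(N)
    for i, a in enumerate(alphas):
        M = M @ expm(-a * so_generator(i, i + 1, N))
    return M

def euler_decompose(A):
    """Peel off M_1, M_2, ... from A in SO(N) following the text's construction."""
    N = A.shape[0]
    Ms, B = [], A.copy()
    for col in range(N - 1, 0, -1):
        alphas = angles_from_vector(B[:col + 1, col])
        M = rotation_from_angles(alphas, N)
        Ms.append(M)
        B = M.T @ B                          # remaining factor, block-diagonal
    return Ms

# example: a random SO(4) matrix from a QR decomposition
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                            # enforce det = +1
Ms = euler_decompose(Q)
```

Multiplying the recovered factors back together reproduces the input matrix, confirming that the $\mathrm{N(N-1)/2}=6$ extracted angles parametrize $\mathrm{SO(4)}$.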
The $\mathrm{N(N-1)/2}$ angles $\alpha^{i}_{j}~(i>j)$ are the generalized Euler angles of $\mathrm{SO(N)}$~\cite{hoffman1972EulerangSO(N)}.\\ \indent The next step is lifting $A_{FB}(\mathbf{k})$ from $\mathrm{SO(N)}$ to $\mathrm{Spin(N)}$, the double covering space of $\mathrm{SO(N)}$. Lifting $A_{FB}(\mathbf{k})$ replaces $\alpha^{i}_{j}$ in each $\mathrm{M}_{k}$ by $\tilde{\alpha}^{i}_{j}\in[0,4\pi]$. Also, we have to change $L_{ij}$ in each $\mathrm{M}_{k}$ into the generators $t_{ij}$ of the Lie algebra $\mathfrak{spin}\mathrm{(N)}$, \begin{gather} t_{ij}=-\frac{1}{4}[\Gamma_{i},\Gamma_{j}], \end{gather} where $\Gamma_{i}~(i=1,\cdots,\mathrm{N})$ are $2^{[\mathrm{N}/2]}\times 2^{[\mathrm{N}/2]}$ gamma matrices which mutually anti-commute, i.e. $\{\Gamma_{i},\Gamma_{j}\}=2\delta_{ij}$~\cite{bzduvsek2019nonabelian}. \section{Class D} In class D, we have $\mathfrak{B}^2=1$. We represent this symmetry by the complex conjugation operator $\mathcal{K}$. From this symmetry, we can deduce the nodal structure of class D. Consider an effective 2-band Hamiltonian, \begin{align} H_{\mathrm{eff}}(\mathbf{k})=E_{0}(\mathbf{k})\mathbb{1}_{2\times 2}+h_{x}(\mathbf{k})\sigma_{x}+h_{y}(\mathbf{k})\sigma_{y}+h_{z}(\mathbf{k})\sigma_{z}, \end{align} which describes the neighborhood of a node. Here $E_{0}(\mathbf{k})$ and $h_{x,y,z}(\mathbf{k})$ are real-valued functions. As in class BDI, there is a symmetry that anti-commutes with the Hamiltonian: $\{H(\mathbf{k}),\mathfrak{B}\}=0$. This makes the nodal structure at $E_{0}=0$ different from that at $E_{0}\neq 0$. If $E_{0}=0$, then $h_{x}=h_{z}=0$ due to $\mathfrak{B}=\mathcal{K}$, and the only remaining constraint on the node is $h_{y}=0$. Therefore, the node at the Fermi level is a surface in 3D momentum space. If $E_{0}\neq 0$, this type of node appears between occupied bands or between unoccupied bands.
Since $\mathfrak{B}$ anti-commutes with the Hamiltonian, $\mathfrak{B}$ does not constrain the nodal structure at $E_{0}\neq 0$. This means that a node between occupied bands or between unoccupied bands is a point. \\ \subsection{Topological charges of class D} Focusing on the node at $E_{F}$, we can find two types of topological charges: a 0D charge and a 2D charge. The 0D charge is defined by the Pfaffian of the Hamiltonian. Due to the symmetry $\mathfrak{B}$, the Hamiltonian $H$ is purely imaginary. Then $H$ is skew-symmetric, \begin{gather} H^{T}=H^{*}=-H. \end{gather} Since $\mathfrak{B}$ makes the Hamiltonian even-dimensional, $iH$ is a skew-symmetric and even-dimensional real matrix, for which the Pfaffian is defined~\cite{bzduvsek2017doubly}. For a given $2N\times 2N$ skew-symmetric matrix $A$, we can always find an orthogonal matrix $Q$ such that $A=Q^{T}\Sigma Q$, where \begin{gather} \Sigma=\bigoplus_{n=1}^{N}\begin{pmatrix} 0&a_{n}\\-a_{n}&0 \end{pmatrix}. \end{gather} In that case, the Pfaffian $\mathrm{Pf}(A)$ of $A$ is given by $\prod_{n=1}^{N}a_{n}$. Since the flat-band Hamiltonian $H_{\mathrm{FB}}$ is also skew-symmetric and even-dimensional, $\mathrm{Pf}\left[iH_{\mathrm{FB}}\right]$ can be defined. The Pfaffian $\mathrm{Pf}[iH_{\mathrm{FB}}]$ is related to the determinant $\mathrm{det}[iH_{\mathrm{FB}}]$ of the flat-band Hamiltonian through the identity \begin{gather} \mathrm{det}[iH_{\mathrm{FB}}]=\left(\mathrm{Pf}[iH_{\mathrm{FB}}]\right)^2. \end{gather} \noindent For an $H_{\mathrm{FB}}$ with $2N$ bands, $\mathrm{det}[H_{\mathrm{FB}}]=(-1)^{N}$ because there are $N$ bands with energy $1$ and $N$ bands with energy $-1$; hence $\mathrm{det}[iH_{\mathrm{FB}}]=i^{2N}\,\mathrm{det}[H_{\mathrm{FB}}]=(-1)^{N}(-1)^{N}=1$. Therefore, $\mathrm{Pf}[iH_{\mathrm{FB}}]$ should be $\pm 1$, and the boundary between the sectors with $\mathrm{Pf}[iH_{\mathrm{FB}}]=+1$ and $-1$ appears where the gap closes.
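These statements are easy to verify numerically. The following sketch (our own illustration; the recursive expansion used here is adequate only for small matrices) computes the Pfaffian of a skew-symmetric matrix and checks the identity $\mathrm{det}(A)=\mathrm{Pf}(A)^{2}$, as well as the transformation rule $\mathrm{Pf}[O^{T}AO]=\mathrm{det}O\cdot\mathrm{Pf}[A]$ that underlies the 0D charge.

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional skew-symmetric matrix via the
    recursive expansion along the first row (fine for small matrices)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

# canonical block form Sigma: the Pfaffian is the product of the a_n
Sigma = np.zeros((4, 4))
Sigma[0, 1], Sigma[1, 0] = 2.0, -2.0
Sigma[2, 3], Sigma[3, 2] = 3.0, -3.0

# a generic skew-symmetric 6x6 matrix
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B - B.T
```

For a basis change $O$ that flips the sign of a single basis vector, $\mathrm{det}\,O=-1$ and the Pfaffian changes sign, which is exactly the band-inversion mechanism discussed in the text.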
From these facts, we can define the 0D charge of the node by \begin{align} c_{\mathrm{D}}(S^{0})=\mathrm{Pf}\left[iH_{\mathrm{FB}}(\mathbf{k}_{1})\right]\cdot\mathrm{Pf}\left[iH_{\mathrm{FB}}(\mathbf{k}_{2})\right]\in \{-1,1\}, \end{align} where $S^{0}=\{\mathbf{k}_{1},\mathbf{k}_{2}\}$ and $\mathbf{k}_{1}$ and $\mathbf{k}_{2}$ lie on opposite sides of the node. Similar to class BDI, the 0D charge can be explained by a band inversion across the nodal surface. For an occupied state $|u^{\mathrm{occ}}_{n\mathbf{k}}\rangle$ with energy $-E_{n\mathbf{k}}$, there is an unoccupied state $|u^{\mathrm{unocc}}_{n\mathbf{k}}\rangle\propto \mathfrak{B}|u^{\mathrm{occ}}_{n\mathbf{k}}\rangle$ with energy $E_{n\mathbf{k}}$. We can express $|u_{n}^{\mathrm{occ}}\rangle$ and $|u_{n}^{\mathrm{unocc}}\rangle$ by \begin{gather} |u_{n}^{\mathrm{occ}}\rangle =\frac{1}{\sqrt{2}}\left(|u_{n}^{\mathrm{r}}\rangle+i|u_{n}^{\mathrm{i}}\rangle\right),\label{Docc}\\ |u_{n}^{\mathrm{unocc}}\rangle =\frac{1}{\sqrt{2}}\left(|u_{n}^{\mathrm{r}}\rangle-i|u_{n}^{\mathrm{i}}\rangle\right)\label{Dunocc}, \end{gather} where $|u_{n}^{\mathrm{r}}\rangle$ and $|u_{n}^{\mathrm{i}}\rangle$ are real-valued vectors. It follows from the orthonormality of the eigenstates that $\left\{|u_{n}^{\mathrm{r}}\rangle,|u_{n}^{\mathrm{i}}\rangle:n=1,\cdots N\right\}$ is orthonormal as well. Changing the basis from $\{|u^{\mathrm{occ}}_{n}\rangle,|u^{\mathrm{unocc}}_{n}\rangle\}$ to $\left\{|u_{n}^{\mathrm{r}}\rangle,|u_{n}^{\mathrm{i}}\rangle\right\}$, we obtain \begin{gather} iH_{\mathrm{FB}}=\sum_{n=1}^{N} \left(|u_{n}^{\mathrm{i}}\rangle\langle u_{n}^{\mathrm{r}}|-|u_{n}^{\mathrm{r}}\rangle\langle u_{n}^{\mathrm{i}}|\right). \end{gather} In this basis, $iH_{\mathrm{FB}}$ is block-diagonal.
If we order the basis as $\{|u_{n}^{\mathrm{r}}\rangle,|u_{n}^{\mathrm{i}}\rangle\}$ within each block, $iH_{\mathrm{FB}}$ is given by the direct sum of $N$ copies of $\begin{pmatrix} 0&1\\-1&0 \end{pmatrix}$. Therefore, $\mathrm{Pf}\left[iH_{\mathrm{FB}}\right]=1$. Suppose that there is a band inversion between the highest occupied band and the lowest unoccupied band across the nodal surface. This amounts to adding a minus sign to $|u_{1}^{\mathrm{i}}\rangle$. Then $iH'_{\mathrm{FB}}$ after the band inversion is given by \begin{gather} iH'_{\mathrm{FB}}=O^{T}\cdot iH_{\mathrm{FB}}\cdot O,\label{iHbasis} \end{gather} where $O$ is the orthogonal matrix which changes the sign of $|u_{1}^{\mathrm{i}}\rangle$. A well-known identity for the Pfaffian is \begin{gather} \mathrm{Pf}[O^{T}\cdot iH_{\mathrm{FB}}\cdot O]=\mathrm{det}O\cdot \mathrm{Pf}[iH_{\mathrm{FB}}],\label{Pforder} \end{gather} which shows that $\mathrm{Pf}[iH_{\mathrm{FB}}]$ changes its sign across the nodal surface when there is a band inversion. Note that, across the nodal surface, the only allowed change of the eigenstates is a band inversion between the highest occupied state and the lowest unoccupied state. The reason is essentially the same as in class BDI: the symmetry $\mathfrak{B}$ does not allow mixing of the highest occupied state and the lowest unoccupied state across the nodal surface. The 2D charge of class D is defined by the Chern number. Consider a sphere $S^{2}$ surrounding a nodal surface. Then the 2D charge $c_{\mathrm{D}}(S^{2})$ of the nodal surface is given by \begin{gather} c_{\mathrm{D}}(S^2)=\frac{i}{2\pi} \sum_{n\in \mathrm{occ}}\oint_{S^2} d^{2}\mathbf{k} \cdot \nabla_{\mathbf{k}} \times \mathbf{A}^{\mathrm{occ}}_{n}(\mathbf{k}), \end{gather} where $\mathbf{A}^{\mathrm{occ}}_{n}(\mathbf{k})=\langle u^{\mathrm{occ}}_{n}(\mathbf{k})|\nabla_{\mathbf{k}}|u^{\mathrm{occ}}_{n}(\mathbf{k})\rangle$ is the Berry connection of the $n$-th occupied band~\cite{bzduvsek2017doubly}.
We can express the 2D charge as a sum of the Chern numbers of the occupied bands, \begin{gather} c_{\mathrm{D}}(S^2)=\sum_{n\in \mathrm{occ}} c^{\mathrm{occ}}_{\mathrm{A},n}(S^2),\label{2Din} \end{gather} where $c^{\mathrm{occ}}_{\mathrm{A},n}(S^2)$ is the Chern number of the $n$-th occupied band over the sphere $S^{2}$. The topological charge of a nodal point between occupied bands can also be defined using the Chern numbers of the individual bands: for a nodal point between the $n$-th and $(n+1)$-th occupied bands, we get non-trivial Chern numbers $c^{\mathrm{occ}}_{\mathrm{A},n}(S^{2})$ and $c^{\mathrm{occ}}_{\mathrm{A},n+1}(S^{2})$ of the $n$-th and $(n+1)$-th occupied bands, where $S^{2}$ surrounds the nodal point. \subsection{Linking structure of class D} The 2D charge of a nodal surface is given by \begin{gather} c_{\mathrm{D}}(S^{2}_{\mathrm{out}})=\sum_{n\in \mathrm{occ}} c^{\mathrm{occ}}_{\mathrm{A},n}(S^{2}_{\mathrm{out}}), \end{gather} where $S^{2}_{\mathrm{out}}$ is a sphere surrounding the nodal surface. For $n\geq 2$, the $n$-th occupied band is continuous across the nodal surface. This means that \begin{gather} c^{\mathrm{occ}}_{\mathrm{A},n}(S^{2}_{\mathrm{out}})=c^{\mathrm{occ}}_{\mathrm{A},n}(S^{2}_{\mathrm{in}}), \end{gather} where $S^{2}_{\mathrm{in}}$ is a sphere inside the nodal surface. However, the case $n=1$ is different. Due to the band inversion across the nodal surface, the Chern number of the highest occupied band is switched with that of the lowest unoccupied band, \begin{gather} c^{\mathrm{occ}}_{\mathrm{A},1}(S^{2}_{\mathrm{out}})=c^{\mathrm{unocc}}_{\mathrm{A},1}(S^{2}_{\mathrm{in}}). \end{gather} Using the symmetry $\mathfrak{B}$, we can deduce a simple relation between the Chern numbers of the unoccupied and occupied bands.
Since $|u^{\mathrm{unocc}}_{n\mathbf{k}}\rangle\propto \mathfrak{B}|u^{\mathrm{occ}}_{n\mathbf{k}}\rangle$, the Berry connection of the $n$-th occupied band is related to that of the $n$-th unoccupied band by \begin{gather} \left(\mathbf{A}^{\mathrm{occ}}_{n}\right)^{*}=\mathbf{A}^{\mathrm{unocc}}_{n}.\label{Aoccunocc} \end{gather} We can also deduce that $\mathbf{A}^{\mathrm{occ}}_{n}+\left(\mathbf{A}^{\mathrm{occ}}_{n}\right)^{*}=\nabla_{\mathbf{k}}\langle u^{\mathrm{occ}}_{n}|u^{\mathrm{occ}}_{n}\rangle=0$ from the normalization of the occupied state. These two equations imply that the Chern number of the $n$-th occupied band is opposite in sign to that of the $n$-th unoccupied band, \begin{gather} c_{\mathrm{A},n}^{\mathrm{unocc}}(S^{2})=-c_{\mathrm{A},n}^{\mathrm{occ}}(S^{2}).\label{cAoccunocc} \end{gather} \indent To sum up, the 2D charge of the nodal surface is given by \begin{gather} c_{\mathrm{D}}(S^{2}_{\mathrm{out}})= -c_{\mathrm{A},1}^{\mathrm{occ}}(S^{2}_{\mathrm{in}})+\sum_{n\in \mathrm{occ}-\{1\}} c^{\mathrm{occ}}_{\mathrm{A},n}(S^{2}_{\mathrm{in}}). \end{gather} Inside the nodal surface, $\sum_{n\in \mathrm{occ}} c^{\mathrm{occ}}_{\mathrm{A},n}(S^{2}_{\mathrm{in}})=0$ because there is no nodal surface inside $S^{2}_{\mathrm{in}}$. Consequently, \begin{gather} c_{\mathrm{D}}(S^{2}_{\mathrm{out}})=-2 c^{\mathrm{occ}}_{\mathrm{A},1}(S^{2}_{\mathrm{in}}).\label{Dlinkingeq} \end{gather} Therefore, the 2D charge of the nodal surface comes from the Chern number of the highest occupied band inside the node~\cite{bzduvsek2017doubly}. \begin{figure}[t] \includegraphics[width=\linewidth]{FigS5.jpg} \caption{Topological charges of the nodes of the 8-band BdG Hamiltonian belonging to class D. (a) The first Brillouin zone and the nodal surfaces (red). (b) One of the nodal surfaces. The blue dot inside the surface is a nodal point between the topmost occupied band and the second topmost occupied band.
(c) A cross section of the nodal structure of (a) at $k_{z}c/2\pi=0.293$. The light red (white) region is where the Pfaffian of $iH_{\mathrm{FB}}$ is positive (negative). (d) Winding of the Berry phase on a sphere surrounding the nodal surface in (b). $\gamma(S^{1}_{\theta})$ is the Berry phase of a band along a parallel of latitude $\theta$. The solid lines are calculated outside the nodal surface and the dashed line is calculated inside the nodal surface. The green line corresponds to the topmost and second topmost occupied bands, and the blue line corresponds to the third and fourth topmost occupied bands. The dashed orange line corresponds to the topmost occupied band.} \label{fig:Dcharge} \end{figure} \subsection{Lattice model of a class D superconductor} We start from the BdG Hamiltonian (\ref{BdGorigin}). To make a BdG Hamiltonian belonging to class D, we impose four conditions: (i) the system has inversion symmetry; (ii) the parity of the gap function is even; (iii) there is no time-reversal symmetry; (iv) there is no spin-rotation symmetry~\cite{bzduvsek2017doubly}. From the conditions (i) and (ii), we can deduce that \begin{gather} \mathcal{I}\mathcal{H}_{\mathbf{k}}\mathcal{I}^{-1}=\mathcal{H}_{-\mathbf{k}},\\ \mathcal{I}\Delta_{\mathbf{k}}\mathcal{I}^{-1}=\Delta_{-\mathbf{k}}, \end{gather} where $\mathcal{I}$ is the inversion symmetry operator of the normal-state Hamiltonian. From the condition (i), the BdG Hamiltonian has a symmetry $\mathfrak{B}=\mathcal{PI}$, where $\mathcal{P}$ is the particle-hole symmetry of Eq.~(\ref{PHsym}). From the conditions (ii) and (iii), $\mathfrak{B}^{2}=1$. We made a simple tight-binding model to check the linking structure in class D. The tight-binding model is constructed on AA-stacked honeycomb layers, the same lattice as the 4-band model of class BDI.
An $s$ orbital is located at each atomic position and all the orbitals are identical. Inside the layer, we consider the on-site energy, the nearest-neighbor hopping and the next-nearest-neighbor hopping, with the amplitudes $E_{\mathrm{on}}$, $t_{1}$, $t_{2}$, respectively. These parameters define a tight-binding Hamiltonian $\mathcal{H}_{1}(\mathbf{k})$, \begin{align} \mathcal{H}_{1}(\mathbf{k})=&\left(E_{\mathrm{on}}+2t_{2}\sum_{i=1}^{3} \cos (\mathbf{k}\cdot \mathbf{T}_{i})\right) \sigma_{0}\otimes\tau_{0}\nonumber\\ +& t_{1} \sigma_{0}\otimes \sum_{i=1}^{3}\left(\cos (\mathbf{k}\cdot \mathbf{t}_{i})\tau_{x}+\sin (\mathbf{k}\cdot \mathbf{t}_{i})\tau_{y}\right), \end{align} where $\sigma_{i}$ and $\tau_{j}$ are the Pauli matrices acting on the spin and orbital degrees of freedom, respectively. We also consider the spin-orbit coupling between next-nearest-neighboring atoms with an amplitude $v_{z}$. This gives another tight-binding Hamiltonian $\mathcal{H}_{2}(\mathbf{k})$, \begin{gather} \mathcal{H}_{2}(\mathbf{k})=v_{z}\sum_{i=1}^{3} \sin (\mathbf{k}\cdot\mathbf{T}_{i}) \sigma_{z}\otimes\tau_{z}. \end{gather} Due to this term, the only remaining spin-rotation symmetry is the one generated by $\sigma_{z}$. Between the layers, we consider only the nearest-neighbor hopping with an amplitude $t_{z}$; the resulting tight-binding Hamiltonian $\mathcal{H}_{3}(\mathbf{k})$ is given by \begin{gather} \mathcal{H}_{3}(\mathbf{k})=2 t_{z} \cos(\mathbf{k}\cdot\mathbf{R}_{3}) \sigma_{0}\otimes\tau_{0}. \end{gather} The total tight-binding Hamiltonian $\mathcal{H}(\mathbf{k})=\sum_{i=1}^{3} \mathcal{H}_{i}(\mathbf{k})$ has the inversion symmetry $\mathcal{I}$, the time-reversal symmetry $\mathcal{T}$ and the $\mathrm{U}(1)$ spin-rotation symmetry generated by $\sigma_{z}$.
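A minimal numerical sketch of this tight-binding model is given below. The in-plane nearest- and next-nearest-neighbour vectors and the stacking vector are our own assumptions for a generic AA-stacked honeycomb lattice (lattice constant and interlayer distance set to 1); the text does not fix them numerically. The symmetry checks, however, hold for any choice of real lattice vectors.

```python
import numpy as np

# Pauli matrices: sigma acts on spin, tau on the two sublattice orbitals.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Assumed geometry (not specified numerically in the text): nearest-neighbour
# vectors t_i, next-nearest (in-plane lattice) vectors T_i, stacking vector R3.
t_vecs = np.array([[0.0, 1/np.sqrt(3), 0.0],
                   [0.5, -1/(2*np.sqrt(3)), 0.0],
                   [-0.5, -1/(2*np.sqrt(3)), 0.0]])
T_vecs = np.array([[1.0, 0.0, 0.0],
                   [0.5, np.sqrt(3)/2, 0.0],
                   [-0.5, np.sqrt(3)/2, 0.0]])
R3 = np.array([0.0, 0.0, 1.0])

E_on, t1, t2, tz, vz = -4.0, -1.0, -0.5, -0.7, 0.5   # parameters used in the text

def H_normal(k):
    """Normal-state Hamiltonian H_1 + H_2 + H_3 at momentum k (4x4 matrix)."""
    k = np.asarray(k, dtype=float)
    H1 = (E_on + 2*t2*np.sum(np.cos(T_vecs @ k))) * np.kron(s0, s0)
    H1 = H1 + t1 * np.kron(s0, sum(np.cos(t @ k)*sx + np.sin(t @ k)*sy
                                   for t in t_vecs))
    H2 = vz * np.sum(np.sin(T_vecs @ k)) * np.kron(sz, sz)
    H3 = 2*tz*np.cos(R3 @ k) * np.kron(s0, s0)
    return H1 + H2 + H3
```

One can then verify that $\mathcal{H}(\mathbf{k})$ is Hermitian and satisfies $\mathcal{I}\mathcal{H}(\mathbf{k})\mathcal{I}^{-1}=\mathcal{H}(-\mathbf{k})$ with $\mathcal{I}=\sigma_{0}\otimes\tau_{x}$, as well as $\mathcal{T}\mathcal{H}(\mathbf{k})\mathcal{T}^{-1}=\mathcal{H}(-\mathbf{k})$ with $\mathcal{T}=i\sigma_{y}\otimes\tau_{0}\mathcal{K}$.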
The inversion and time-reversal symmetries are represented by \begin{gather} \mathcal{I} = \sigma_{0}\otimes\tau_{x},\\ \mathcal{T} = i\sigma_{y}\otimes\tau_{0}\mathcal{K}, \end{gather} where $\mathcal{K}$ is the complex conjugation operator. Since class D has neither time-reversal symmetry nor spin-rotation symmetry, we break them by adding appropriate gap functions. We consider two gap functions $\Delta_{1,2}(\mathbf{k})$, \begin{gather} \Delta_{1}(\mathbf{k})=\psi_{0} \sum_{n=1}^{3} e^{2n\pi i/3} \cos(\mathbf{k}\cdot\mathbf{T}_{n}) (i\sigma_{y})\otimes\tau_{0},\\ \Delta_{2}(\mathbf{k})=d_{z} \sin(2\mathbf{k}\cdot\mathbf{R}_{3})(\sigma_{x}-i\sigma_{y})(i\sigma_{y})\otimes\tau_{z}. \end{gather} Since the point group of the lattice is $D_{6h}$, each gap function belongs to one of the representations of $D_{6h}$: $\Delta_{1}(\mathbf{k})$ is a spin-singlet pairing function belonging to the $E_{2g}$ representation, and $\Delta_{2}(\mathbf{k})$ is a spin-triplet pairing function belonging to the $E_{1u}$ representation~\cite{bzduvsek2017doubly,fischer2015symmetry}. One can easily check that $\Delta_{1}(\mathbf{k})$ breaks the time-reversal symmetry and $\Delta_{2}(\mathbf{k})$ breaks the $\mathrm{U}(1)$ spin-rotation symmetry generated by $\sigma_{z}$. Also, the parity of each gap function is even. This means that the gap function $\Delta(\mathbf{k})=\Delta_{1}(\mathbf{k})+\Delta_{2}(\mathbf{k})$ turns the tight-binding Hamiltonian $\mathcal{H}(\mathbf{k})$ into a BdG Hamiltonian belonging to class D, and the symmetry $\mathfrak{B}$ is given by \begin{gather} \mathfrak{B}=s_{x}\otimes\tau_{x}\mathcal{K}. \end{gather} We set the parameters as \begin{gather} E_{\mathrm{on}}=-4,~t_{1}=-1,~t_{2}=-0.5,\nonumber\\ t_{z}=-0.7,~v_{z}=0.5,\\ \psi_{0}=0.3,~d_{z}=0.5.\nonumber \end{gather} These parameters produce tiny nodal surfaces at the edge of the first Brillouin zone (a hexagonal prism), which are illustrated in Fig.~\ref{fig:Dcharge}(a).
We illustrate one of the nodal surfaces in Fig.~\ref{fig:Dcharge}(b); the red sphere is the nodal surface. Inside the sphere, there is a nodal point between the topmost occupied band and the second topmost occupied band, marked by a blue dot in Fig.~\ref{fig:Dcharge}(b). To check the linking structure between the nodal surface and the nodal point, we first check the 0D charge of the nodal surface. The result of the Pfaffian calculation is illustrated in Fig.~\ref{fig:Dcharge}(c), which shows a cross section of the nodal surface at $k_{z}=0.293\cdot(2\pi/c)$. The light red (white) region is where $\mathrm{Pf}[iH_{\mathrm{FB}}]$ is positive (negative). Since the sign of $\mathrm{Pf}[iH_{\mathrm{FB}}]$ changes across the nodal surface, its 0D charge is non-trivial. The next step is calculating the 2D charge of the nodal surface, which is given by Eq.~(\ref{2Din}). We calculate the Chern number $c_{\mathrm{A},n}^{\mathrm{occ}}(S^{2})$ of the $n$-th occupied band using the Berry phase of the band on $S^{2}$. The Berry curvature $\mathbf{F}_{n}^{\mathrm{occ}}(\mathbf{k})=\nabla_{\mathbf{k}}\times \mathbf{A}_{n}^{\mathrm{occ}}(\mathbf{k})$ of the $n$-th occupied band is well-defined over $S^{2}$ because the $n$-th occupied band is gapped from the other bands on $S^{2}$. Therefore, we can get $c_{\mathrm{A},n}^{\mathrm{occ}}(S^{2})$ by calculating a surface integral of $\mathbf{F}_{n}^{\mathrm{occ}}(\mathbf{k})$ over a pierced sphere $S^{2}-\{x_{1},\cdots,x_{n}\}$, where $x_{1},\cdots,x_{n}\in S^{2}$. Let us pierce $S^{2}$ at its north and south poles.
Using Stokes' theorem, we obtain \begin{gather} c_{\mathrm{A},n}^{\mathrm{occ}}(S^{2})=\frac{1}{2\pi}\left(\gamma_{n}(S_{\theta =\pi}^{1})-\gamma_{n}(S_{\theta =0}^{1})\right),\label{ChernBerry} \end{gather} where $S^{1}_{\theta=0(\pi)}$ is a tiny circle surrounding the north (south) pole and $\gamma_{n}(S^{1})$ is the Berry phase of the $n$-th occupied band along $S^{1}$, given by \begin{gather} \gamma_{n}(S^{1})=i\oint_{S^{1}} d\mathbf{l}\cdot\mathbf{A}_{n}^{\mathrm{occ}}(\mathbf{k}). \end{gather} Let us denote by $S^{1}_{\theta}$ the circle which is a parallel of $S^{2}$ with latitude $\theta\in[0,\pi]$. Then Eq.~(\ref{ChernBerry}) can be rewritten as \begin{gather} c_{\mathrm{A},n}^{\mathrm{occ}}(S^{2})=\frac{1}{2\pi}\int_{0}^{\pi} d\theta\ \frac{d}{d\theta}\gamma_{n}(S^{1}_{\theta}), \end{gather} which means that we can calculate $c_{\mathrm{A},n}^{\mathrm{occ}}(S^{2})$ by tracking the continuous change of $\gamma_{n}(S^{1}_{\theta})$ as $\theta$ varies from $0$ to $\pi$. We illustrate $\gamma_{n}(S^{1}_{\theta})$ for $\theta\in[0,\pi]$ in Fig.~\ref{fig:Dcharge}(d). To calculate the 2D charge of the nodal surface, the Chern numbers of the occupied bands outside the nodal surface are needed. We display the $\gamma_{n}(S^{1}_{\theta})$ calculated outside the nodal surface as solid lines. The green line corresponds to $\gamma_{1}(S^{1}_{\theta})$ and $\gamma_{2}(S^{1}_{\theta})$, and the blue line corresponds to $\gamma_{3}(S^{1}_{\theta})$ and $\gamma_{4}(S^{1}_{\theta})$. We can check that the green line increases and winds once from $\pi$ to $-\pi$. Therefore, the 2D charge of the nodal surface is $2$. The Chern number of the highest occupied band inside the nodal surface can be calculated similarly. The dashed orange line in Fig.~\ref{fig:Dcharge}(d) corresponds to $\gamma_{1}(S^{1}_{\theta})$ inside the nodal surface.
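The latitude-circle procedure can be illustrated on a minimal example. The sketch below (our own illustration, not the 8-band model of the text) applies the same Berry-phase winding method to the two bands of a point node $H(\mathbf{k})=\mathbf{k}\cdot\boldsymbol{\sigma}$ on a small sphere; the discrete (Wilson-loop) Berry phase along each parallel is gauge invariant, and its winding from $\theta=0$ to $\pi$ yields Chern numbers $\pm 1$ for the two bands.

```python
import numpy as np

def band_state(d, band):
    """Eigenvector of H = d . sigma; band 0 is the lower band, 1 the upper."""
    dx, dy, dz = d
    H = np.array([[dz, dx - 1j*dy], [dx + 1j*dy, -dz]])
    w, v = np.linalg.eigh(H)      # eigenvalues come out in ascending order
    return v[:, band]

def berry_phase(theta, band, nphi=200):
    """Discrete Berry phase along the latitude circle at polar angle theta."""
    phis = np.linspace(0, 2*np.pi, nphi, endpoint=False)
    states = [band_state((np.sin(theta)*np.cos(p),
                          np.sin(theta)*np.sin(p),
                          np.cos(theta)), band) for p in phis]
    prod = 1.0 + 0.0j
    for a, b in zip(states, states[1:] + states[:1]):
        prod *= np.vdot(a, b)     # gauge-invariant product of overlaps around the loop
    return -np.angle(prod)

def chern(band, ntheta=60):
    """Winding of the Berry phase as theta sweeps from pole to pole."""
    thetas = np.linspace(1e-3, np.pi - 1e-3, ntheta)
    g = np.unwrap([berry_phase(t, band) for t in thetas])
    return (g[-1] - g[0]) / (2*np.pi)
```

For the 8-band model the same bookkeeping is applied band by band, with the curves of Fig.~\ref{fig:Dcharge}(d) playing the role of $\gamma_{n}(S^{1}_{\theta})$.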
It decreases and winds once from $-\pi$ to $\pi$, which means that the Chern number of the highest occupied band inside the nodal surface is $-1$. Therefore, the linking structure in class D, Eq.~(\ref{Dlinkingeq}), is satisfied in this model. \section{First-principles calculations} To simulate the electronic structure of AA-stacked bilayer graphene multilayers, we have performed density functional theory (DFT) calculations. We have used the projector augmented-wave method implemented in the Vienna ab initio simulation package (VASP)~\cite{kresse1999ultrasoft,kresse1996efficient,kresse1996efficiency} with the generalized-gradient approximation~\cite{perdew1996generalized}. The in-plane hexagonal lattice constant is 2.46~$\mathrm{\AA}$, and the interlayer distances are 2.5~$\mathrm{\AA}$ and 3.5~$\mathrm{\AA}$, respectively, as indicated in Fig.~4 of the main text. \end{document}
\section{The Voronoi tessellation in a spatial galaxy distribution: first works and basic approach} \label{sec:1} The geometrical methods based on the Voronoi diagram deal with a partitioning of space into regions determined by a specific subset of generators. It was named after Georgy F. Voronoi (April 28, 1868, Zhuravka village, Chernihiv region, Ukraine – Nov 20, 1908, Warsaw, Poland), the outstanding Ukrainian mathematician~\cite{Syta2009, Pratsyovity2018}, who studied the general n-dimensional case of these diagrams~\cite{Voronoi1907, Voronoi1908}. In 1984, Matsuda and Shima advanced the idea of applying the Voronoi tessellation method to describe the cellular structure of the local Universe~\cite{Matsuda1984}, finding a topological tendency of galaxies ``to cluster at the vertices, edges and faces of polyhedral shaped voids''. In 1987, Ling demonstrated that the Voronoi tessellation and the Minimal Spanning Tree, applied to the CfA Redshift Survey of galaxies (the first survey to map the large-scale structure of the Universe), are able to detach filamentary structures and voids \cite{Ling1987}. In 1989, Yoshioka \& Ikeuchi proposed the three-dimensional Voronoi tessellation as a model of the evolution of negative density perturbation regions, in which shells overlap, while the modeled skeleton can be compared with real observed structures and with mass distribution correlation functions \cite{Yoshioka1989}. For the first time, the Voronoi tessellation was considered in detail as a pattern of matter distribution in the Universe in the work by Icke and Weygaert~\cite{Icke1987} and a series of their following works~\cite{Icke1988, Weygaert1989, Icke1991}. These authors concluded that the regions of lower density become more spherical with evolution, and matter flows away from the expansion centers and accumulates at the borders of the packed spheres.
This leads to the partition of space into the Voronoi tessellation with nuclei at the centers of the low-density regions called voids. High-density regions -- clusters of galaxies -- lie at the vertices of adjacent cells, filaments at the edges of cells, and pancakes of the large-scale structure (LSS) are the faces of cells (Fig.~\ref{voronoi-pattern}, right). Sheth et al.~\cite{Sheth2004} developed this idea and considered the model of a void created in the frame of the Voronoi tessellation paradigm. The Voronoi tessellation can be constructed as follows. Let us consider a region of finite size in N-dimensional space (usually N = 2 or N = 3), where a fixed number of points is distributed according to some statistical law (for example, the Poisson law). Suppose that each point is the center of a spherical expanding bubble structure. If all these structures begin to expand at the same moment with the same rate, the bubbles will touch in planes that perpendicularly bisect the lines connecting the centers of expansion. These bisecting planes, in turn, intersect each other. As a result of this process, new lines are generated, which in turn intersect each other and form a network. Using the adopted terminology, we call such a center of a cell a nucleus. Each nucleus will thus be enclosed by a set of (N - 1)-dimensional planes forming a convex cell. The distribution of nuclei forms the Voronoi tessellation. The realization of Voronoi tessellations for a certain number of expanding nuclei, which is known as the Voronoi foam, can be found in \cite{Icke1987, Weygaert1989}. In the two-dimensional realization, the construction of a Voronoi cell consists of the search for all the Delaunay triangles whose three corners are nuclei (the center of the circumscribing circle of each triangle is a vertex of the Voronoi foam).
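This construction is exactly what standard computational-geometry libraries implement. The short sketch below (our own illustration using `scipy.spatial`) generates a Poisson-distributed set of nuclei and checks the statement above: the vertices of the Voronoi foam are the circumcenters of the Delaunay triangles.

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

rng = np.random.default_rng(0)
nuclei = rng.uniform(0.0, 1.0, size=(200, 2))   # Poisson-like nuclei in a unit square

vor = Voronoi(nuclei)      # cells, walls (ridges) and vertices of the foam
tri = Delaunay(nuclei)     # the dual triangulation

def circumcenter(pts):
    """Circumcenter of a 2D triangle given by the three rows of pts."""
    a, b, c = pts
    d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    ux = ((a@a)*(b[1]-c[1]) + (b@b)*(c[1]-a[1]) + (c@c)*(a[1]-b[1])) / d
    uy = ((a@a)*(c[0]-b[0]) + (b@b)*(a[0]-c[0]) + (c@c)*(b[0]-a[0])) / d
    return np.array([ux, uy])
```

Joining the circumcenters of the Delaunay triangles sharing a given nucleus traces out that nucleus's Voronoi cell, as described above.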
The program proposed by the authors~\cite{Icke1987} allows one to find all the Delaunay triangles having $N_{1}$ as a corner and to construct the Voronoi cell belonging to $N_{1}$ by joining the circumcentres of the Delaunay triangles. Having applied this procedure to all nuclei, we obtain the Voronoi tessellation. The process of forming the Voronoi tessellation is shown in Fig.~\ref{voronoi-pattern} (left). The points $N_{0}$, $N_{1}$, $N_{2}$ form a Delaunay triangle obtained in a previous search; the corresponding Voronoi vertex \textit{V} is shown within the (dashed) circumcircle of $N_{0}$, $N_{1}$, $N_{2}$, as well as stubs of the Voronoi cell walls. On the left-hand side of the diagram, the \textit{T} are a sequence of trial points, the third of which produces a circle that encompasses two nuclei, $P_{1}$ and $P_{2}$. The radius of the circumcircle of ($N_{1}$, $N_{2}$, $P_{1}$) being smaller than that of ($N_{1}$, $N_{2}$, $P_{2}$), the point $P_{1}$ is $N_{3}$, i.e. the third corner of the Delaunay triangle. Thus, the circumcenter of ($N_{1}$, $N_{2}$, $P_{1}$) is the next Voronoi vertex which, if connected with \textit{V}, produces a complete Voronoi cell wall (\cite{Icke1987}). \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{voronoi-pattern.eps} \caption{(Left) The construction of a new Delaunay triangle from two known nuclei $N_{1}$, $N_{2}$: a third nucleus $N_{3}$ is sought such that ($N_{1}$, $N_{2}$, $N_{3}$) forms a triangle whose circumsphere does not contain any other nucleus in the Voronoi tessellation.
(Right) Identification of the four quantities which were calculated in each Voronoi cell: $l_{i}$, the length of wall \textit{i}; $\alpha$, the angle between two walls meeting at a vertex; $d_{w}$, the distance between the nucleus and a wall, where the projection of the nucleus does not necessarily lie on the wall (Icke, 1987, open astronomy).} \label{voronoi-pattern} \end{figure} The obtained results could explain the heuristic models that suppose Voronoi tessellations as 3D templates for the galaxy distribution, and could reproduce a variety of galaxy clustering properties. In an ideal scenario, the LSS is organized by equal spherical voids expanding at the same rate. The walls and filaments would be found precisely between the expanding voids, and the resulting LSS web skeleton would be the Voronoi tessellation. The Voronoi tessellation method was picked up and has thrived in our research on the spatial galaxy distribution since the 1990s \cite{Vavilova1995}, which allowed us to obtain several priority results. Namely, we elaborated three main approaches to the Voronoi tessellation application: (1) to describe a cosmic web skeleton in the matter distribution as a Voronoi tessellation with nuclei at low-density regions; (2) to use the Voronoi tessellation as a tool for direct measurement of the galaxy local concentration and for the environmental description of low-populated galaxy systems such as triplets, pairs, and isolated galaxies; (3) to apply Voronoi diagrams together with machine learning methods for 3D mapping of the Zone of Avoidance of our Galaxy \cite{Vavilova2018, Vavilova2020b}, where Generative Adversarial Network (GAN) algorithms are very useful \cite{Ambrogioni2019, Elyiv2020}. In particular, Coutinho et al. \cite{Coutinho2016} performed verification of various algorithms that can reproduce the cellular structure of the Universe.
By comparing the simulated distributions with real observational data, these authors showed that the best algorithm uses the nearest-neighbour parameter between galaxies, and that network algorithms can be improved to reproduce the large-scale structure of the Universe. In Chapter 2 we give examples of how our approach works. In Chapter 3 we briefly overview various astronomical research with the Voronoi diagrams, accentuating the papers related to the large-scale structure of the Universe, and in Chapter 4 we highlight several works and software packages where the Voronoi tessellation and machine learning get along well with each other. \section{Voronoi tessellation of the first, second and third orders: identification of the low-populated galaxy groups, environment effect, and dark matter content} \label{sec:2} Because the Voronoi tessellation is a geometrical method based only on galaxy positions, it allows one to detach overdensity regions of galaxies from the background \cite{Vavilova2005b}. We tested it with various samples of galaxies. First of all, we used the Local Supercluster of galaxies, which is well studied among other galaxy superclusters, for identifying galaxy groups of various populations. It was revealed that the Voronoi tessellation method depends weakly on the richness parameter of groups, and that with an increase of this parameter the number of galaxies grows in the rich structures rather than in the poor ones \cite{Melnyk2006b}. In the first-order Voronoi tessellation, the critical parameter is the volume \textit{V} of the galaxy's Voronoi cell. This parameter characterizes the environmental density of a galaxy. The condition of cluster/group membership of a particular galaxy is a relatively small $V$; this condition is satisfied when close neighbouring galaxies surround the galaxy.
That is why the first-order Voronoi tessellation is not suitable for the identification of small isolated galaxy systems \cite{Melnyk2006b}. We used the second-order Voronoi tessellation for the identification of pairs and single galaxies. Each galaxy $i$ from set $S$ forms common cells with a certain number of neighbouring galaxies (Fig.~\ref{fig:V1}). By neighbouring galaxies of galaxy $i$ we understand only the galaxies that create common cells with this galaxy. For example, galaxy 1 creates only 4 common cells ($V_{1,2}$, $V_{1,3}$, $V_{1,4}$, $V_{1,5}$) with neighbouring galaxies 2, 3, 4, and 5, respectively. Each pair of galaxies $i$, $j$ is characterized by the dimensionless parameter $p_{i,j}$: \begin{equation} p_{i,j} = \frac{\sqrt[D]{V_{i,j}}}{m_{i,j}}, \label{eq:pij} \end{equation} where $D$ is the space dimension, $V_{i,j}$ is the area (for 2D) or volume (for 3D) of the cell, and $m_{i,j}$ is the distance between galaxies $i$ and $j$. So, contrary to the first-order tessellation, the second-order tessellation for a set \textit{S} of nuclei is the partition of the space which associates a region $V_{1,2}$ with each pair of nuclei 1 and 2 from $S$ in such a way that all points in $V_{1,2}$ are closer to 1 and 2 than to any other nucleus from $S$. The region $V_{1,2}$ is a common cell for nuclei 1 and 2. However, these nuclei do not need to lie in their common cell. For example, nuclei 1 and 5 create the common cell $V_{1,5}$, yet they do not lie in this cell. In such a way, the second-order Voronoi tessellation is suitable for the identification of single galaxies and pairs (Fig.~\ref{fig:V1}b). Let us introduce the parameter $p_{e}$, which describes only the pair environment and does not depend directly on the distance between the pair members.
We define it as the mean value of the $p_{j}(1)$ and $p_{l}(2)$ parameters of the first and second galaxy, excluding $p$ from both sets: \begin{equation} p_{e} = \frac {\sum_{j = 2}^k p_j (1) + \sum_{l = 2}^n p_l (2)} {k+n-2}, \label{eq:pe} \end{equation} where $k$ and $n$ are the numbers of neighbouring galaxies of galaxies 1 and 2 of the geometric pair, respectively. We start the sums from $j = 2$ and $l = 2$ to exclude $2p$, because the first galaxy is a neighbour of the second galaxy and vice versa. Therefore $k + n - 2$ is the number of neighbouring galaxies of the pair members, excluding the pair galaxies themselves as neighbours of each other. The parameter $p_{e}$ depends on the distribution of the neighbouring galaxies. A small value of $p_{e}$ indicates that such a pair is located in a loose environment: in that case the average volume of the common cells of the pair components with neighbouring galaxies is relatively small, while the distances to them are significant; see formula~\eqref{eq:pij} and Fig.~\ref{fig:V2}a. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{V1.eps} \caption{2D Voronoi tessellation of the first (a), second (b) and third (c) order for the same distribution of random nuclei (\cite{Elyiv2009}, open astronomy).} \label{fig:V1} \end{figure} A single galaxy is a galaxy which is not a member of any geometric pair. Single galaxies are field galaxies in the environment of geometric pairs. Every single galaxy has its own neighbours, among which there can be both single galaxies and members of geometric pairs. According to the second-order Voronoi tessellation, the larger the degree of galaxy isolation, the larger the number of its neighbours (compare Fig.~\ref{fig:V1}b with Fig.~\ref{fig:V2}b), but these neighbours are located farther away.
The best parameter to describe the isolation degree of a single galaxy, $s$, is the mean value of all parameters $p_{j}$ of this galaxy: \begin{equation} s = \frac {\sum_{j = 1}^k p_j} {k}. \label{eq:s} \end{equation} The third-order Voronoi tessellation is appropriate for the identification of galaxy triplets. It is the partition of the space which associates a region $V_{1,2,3}$ with each triplet of nuclei 1, 2, 3 in such a way that all points in $V_{1,2,3}$ are closer to nuclei 1, 2, 3 than to any other nucleus from $S$ \cite{Lindenbergh2002}. All points of the common triplet's cell are closer to the galaxies of this triplet than to other galaxies. Similarly to the parameter $p_{i,j}$ for pairs, we can set up the parameter $t_{i,j,u}$ for triplets: \begin{equation} t_{i,j,u} = \frac{\sqrt[D]{V_{i,j,u}}}{\max(m_{i,j}, m_{i,u}, m_{j,u})}, \label{eq:tiju} \end{equation} where $D$ is the space dimension, $V_{i,j,u}$ is the area (for 2D) or volume (for 3D) of the cell, and $m_{i,j}$, $m_{i,u}$, $m_{j,u}$ are the distances between the galaxies in the triplet. A geometric triplet in the third-order Voronoi tessellation contains three galaxies that have a common cell and the same maximal parameters $t_{max}(1) = t_{max}(2) = t_{max}(3) = t$. The parameter $t$ characterizes the degree of geometric triplet isolation. We can define the parameter of the triplet environment $t_{e}$ as the mean value of the parameters $t_{i}(1)$, $t_{j}(2)$, and $t_{u}(3)$, excluding $t$ from the three sets: \begin{equation} t_{e} = \frac {\sum_{i = 2}^k t_i (1) + \sum_{j = 2}^n t_j (2) + \sum_{u = 2}^q t_u (3)} {k+n+q-3}, \label{eq:te} \end{equation} where, in the case of the third-order Voronoi tessellation, $k$, $n$, and $q$ denote the numbers of neighbouring triplets which contain galaxies 1, 2, and 3, respectively. Therefore, $k + n + q - 3$ is the number of neighbouring triplets of a certain triplet that contain at least one galaxy from this triplet (see Fig.~\ref{fig:V2}).
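The pair parameter~\eqref{eq:pij} can be illustrated numerically. Below is a minimal Monte Carlo sketch (our own illustration, not the code used in the cited works), assuming random 2D nuclei in a unit square: the area $V_{i,j}$ of a second-order cell is estimated as the fraction of random sample points whose two nearest nuclei are exactly $\{i, j\}$.

```python
import numpy as np

rng = np.random.default_rng(42)
nuclei = rng.random((20, 2))      # random 2D nuclei in the unit square
M = 100_000                        # number of Monte Carlo sample points
samples = rng.random((M, 2))

# Indices of the two nearest nuclei for every sample point (sorted so that
# (i, j) with i < j labels the unordered pair).
d2 = ((samples[:, None, :] - nuclei[None, :, :]) ** 2).sum(axis=2)
pairs = np.sort(np.argsort(d2, axis=1)[:, :2], axis=1)

# V_{i,j} ~ fraction of samples whose two nearest nuclei are exactly {i, j}
# (the unit square has total area 1).
keys, counts = np.unique(pairs[:, 0] * len(nuclei) + pairs[:, 1],
                         return_counts=True)
V = {(k // len(nuclei), k % len(nuclei)): c / M for k, c in zip(keys, counts)}

def p_ij(i, j):
    """Dimensionless pair parameter p_{i,j} = V_{i,j}^(1/D) / m_{i,j}, D = 2."""
    m = np.linalg.norm(nuclei[i] - nuclei[j])
    return np.sqrt(V[(i, j)]) / m
```

The triplet parameter~\eqref{eq:tiju} can be estimated the same way by keeping the three nearest nuclei of each sample point and dividing by the maximum of the three mutual distances.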
Parameters $p$, $s$, and $t$ are the basic ones and define the isolation degree of a galaxy pair, a single galaxy, or a triplet compared to the background, respectively. Parameters $p_e$ and $t_e$ are additional ones and contain information about the distribution of the neighbouring galaxies (the environment). Similarly to the second- and third-order Voronoi tessellations, it is possible to apply higher-order Voronoi tessellations to identify galaxy quartets, quintets, etc. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{V2.eps} \caption{Different configurations of galaxies: an isolated close pair a) and an isolated single galaxy b) in the second-order tessellation; an isolated close triplet in the third-order tessellation c) (Elyiv, 2009, open astronomy).} \label{fig:V2} \end{figure} Thus, one can use galaxies as the nuclei of the Voronoi tessellation taking into account only the equatorial coordinates $\alpha$, $\delta$ and the radial velocities $V_h$ of the galaxies. For the construction of the 3D Voronoi tessellations, it is necessary to determine the distances in 3D space. The spatial distance between two galaxies can be decomposed into the projected (tangential) distance $r$ and the radial component $v$ (the difference of the radial velocities). We can determine the projected distance with relatively high accuracy. At the same time, the radial component has errors due to the inaccuracy of the radial velocity measurement of each galaxy and due to strong peculiar velocities (virial motions of galaxies in groups and clusters). As a result, the galaxy distribution in radial velocity space is extended along the radial component, the so-called fingers-of-God effect. This is attributed to the random velocity dispersion in a galaxy volume-limited sample, which causes a galaxy's velocity to deviate from the pure Hubble flow, stretching out a group of galaxies in redshift space \cite{Jackson1972, Melnyk2009}.
Various authors take this effect into account in their own ways, depending on the specifics of their problem. For example, Marinoni et al. \cite{Marinoni2002} chose a cylindrical window of clustering, which is extended along the radial component. We introduced a weight for the radial component \cite{Elyiv2009}, avoiding the problem of the non-equivalence of tangential and radial distances, in order to apply the higher-order 3D Voronoi tessellation method. An efficient way to show the advantages of the Voronoi tessellation was to apply it to the galaxy samples from the Local Supercluster \cite{Vavilova2005c, Melnyk2006b, Vavilova2015} and the Sloan Digital Sky Survey (SDSS), where for the first time we examined it for spectroscopic samples \cite{Melnyk2006c, Elyiv2009, Vavilova2009, Melnyk2012, OMill2012, Pulatova2015}. We did not consider galaxies located within $2^{\circ}$ of the sample borders, because a correct estimation of the Voronoi cell volume is not possible in this case. Selecting single galaxies and pairs by the second-order Voronoi tessellation, as well as triplets by the third-order Voronoi tessellation, we obtained 2196 geometric pairs, 1182 triplets, and 2394 single galaxies. We did not make a clear division between physical, gravitationally bound systems and non-physical ones, following the supposition that the more isolated a system is, the higher the probability that it is physical (compact pairs have $R_h < 150$ kpc and compact triplets have $R_h < 200$ kpc).
\begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{DMgroups.eps} \caption{Mass-to-luminosity ratio diagram for galaxy systems of different populations (star clusters, galaxies, galaxy groups, clusters and superclusters), where the result for the low-populated groups (Melnyk, 2009) is pointed out (Vavilova, 2015, open astronomy).} \label{fig:DM} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{triplets.eps} \caption{The interacting (VV894), most compact (KTG39), and wide triplets of galaxies, where $S_{v}$ is the rms velocity of the galaxies with respect to the triplet centre, $R_{h}$ is the harmonic mean radius of the triplet, and $\tau = 2H_{0}R_{h}/S_{v}$ is its dimensionless crossing time (Vavilova, 2015, open astronomy).} \label{fig:trio} \end{figure} Estimating the dark matter content in the low-populated groups, we obtained the following median values of the mass-to-luminosity ratio \cite{Elyiv2009}: $12\,M_{\odot}/L_{\odot}$ for the isolated pairs and $44\,M_{\odot}/L_{\odot}$ for the isolated triplets. Note that for the most compact pairs and triplets (with $R_h < 50$ and $100$ kpc, respectively) there is not a very large difference in dark matter content between pairs and triplets: $7\,M_{\odot}/L_{\odot}$ and $8\,M_{\odot}/L_{\odot}$. The mass-to-luminosity ratio diagram for galaxy systems of different populations (star clusters, galaxies, galaxy groups, including the low-populated ones, clusters, and superclusters) is presented in Fig.~\ref{fig:DM}. Several examples of isolated triplets of galaxies are given in Fig.~\ref{fig:trio}. Concerning the dark matter distribution, we conclude that for the dynamically younger sparse groups (triplets) the dark matter is more likely associated with the individual galaxy halos, whereas for the interacting and dynamically older sparse groups the dark matter lies in a common halo of the group.
Using the inverse volume of the Voronoi cell ($1/V$) as a parameter describing the local environmental density of a galaxy, we considered the volume-limited SDSS (DR5 and DR9) galaxy samples ($0.02<z<0.1$, $-24<M_r<-19.4$) \cite{Melnyk2012, Dobrycheva2013, Dobrycheva2015, Dobrycheva2017} and found that \begin{itemize} \item the early-type galaxies prefer to reside in Voronoi cells of smaller volumes (i.e., dense environments) than the late-type galaxies, which are located in the larger Voronoi cells (i.e., sparse environments); \item the relationships between the morphological types and the $u-r$, $g-i$, and $r-z$ color indices of pairs of galaxies with radial velocities $3000<V<9500$ km/s show that the Holmberg effect is not revealed; in other words, it can be considered only in a historical aspect \cite{Dobrycheva2016}; \item the properties of such small groups as pairs and triplets, where segregation by luminosity was clearly observed, fit well the Dressler effect: galaxies in isolated pairs and triplets are on average two times more luminous than isolated galaxies; \item the dependence of the color indices on the stellar magnitudes is effective for the automated morphological classification of the galaxies ($E$ -- early types, $L$ -- late types). \end{itemize} The morphological types of the galaxies were divided into two classes: Early, $E$ (elliptical and lenticular), and Late, $L$ (from $Sa$ to $Irr$). The absolute magnitude \begin{equation} M_{r} = m_{r} - 5\log(V/H_{0}) - 25 - K(z) - ext \label{eq:Mr} \end{equation} is corrected for the Galactic absorption $ext$ in accordance with \cite{Schlegel1998} and for the $K$-correction $K(z)$ according to \cite{Chilingarian2010}. Here we used the $\Lambda$CDM model of the Universe with the WMAP7 cosmological parameters ($\Omega_{M} = 0.27$, $\Omega_{\Lambda} = 0.73$, $\Omega_{k} = 0$, $H_{0} = 71$ km/s/Mpc).
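As a short numerical illustration of eq.~\eqref{eq:Mr} (our own sketch, with $\log$ read as the decimal logarithm of the distance modulus and the exact $H_0$ value being an assumption):

```python
import numpy as np

H0 = 71.0  # km/s/Mpc (WMAP7-like value; an assumption of this sketch)

def absolute_magnitude(m_r, V_kms, K=0.0, ext=0.0):
    """M_r = m_r - 5 log10(V/H0) - 25 - K(z) - ext,
    with the distance approximated by V/H0 in Mpc."""
    return m_r - 5.0 * np.log10(V_kms / H0) - 25.0 - K - ext

# e.g. a galaxy with apparent magnitude m_r = 17.0 receding at
# V = 21300 km/s sits at roughly 300 Mpc.
M_r = absolute_magnitude(17.0, 21300.0)
```

In practice $K(z)$ and $ext$ would be taken per galaxy from the cited calibrations rather than left at zero.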
In order to apply the Voronoi tessellation method, we have to pass from equatorial coordinates and velocities to the comoving $x$, $y$, $z$ coordinates for each central galaxy in the sample ($M_r < -20.7$). To do this we transform the redshift $z$ of each galaxy into the corresponding comoving distance $\chi(z)$ by integrating \begin{equation} \chi(z) = D_{H}\int_{0}^{z} \frac {dz'}{E(z')}, \label{eq:hi(z)} \end{equation} where $D_{H}=c/H_{0}$ is the Hubble distance and $E(z')$ is the dimensionless Hubble parameter, defined as \begin{equation} E(z') = \sqrt{\Omega_{M}(1+z')^3 + \Omega_{k}(1+z')^2 + \Omega_{\Lambda}}. \label{eq:E(z)} \end{equation} The coordinates $x$, $y$, $z$ of the galaxies in the comoving space are determined as \begin{equation} x = \chi(z)\cos(\theta)\cos(\phi), \quad y = \chi(z)\cos(\theta)\sin(\phi), \quad z = \chi(z)\sin(\theta), \label{eq:x} \end{equation} where $\theta$ is the declination of the galaxy, $\phi$ is its right ascension, and $\chi(z)$ is the comoving distance corresponding to redshift $z$. After obtaining the three-dimensional Cartesian coordinates of the galaxies, we divided the geometrical space occupied by the galaxy sample into mosaic cells (volumes $V$ in the 3D case). Each cell has a galaxy as a nucleus and consists of the elementary volumes of space closer to this galaxy than to any other galaxy \cite{Matsuda1984}. The use of the Voronoi tessellation to isolate groups of galaxies in three dimensions has been described in detail by Melnyk et al. \cite{Melnyk2006b}. For clarity, Fig.~\ref{fig:V1}a shows an example of the Voronoi tessellation in the two-dimensional case. We use the inverse volume ($1/V$) of the Voronoi cells to describe the density of galaxy environments: the higher $1/V$, the less isolated a galaxy is. Examples of the distributions of $E$ and $L$ galaxies vs. the inverse volume of the Voronoi cells that contain them are shown in Fig.~\ref{fig6}.
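The steps above (redshift $\to$ comoving distance $\to$ Cartesian coordinates $\to$ inverse Voronoi cell volume) can be sketched with SciPy as follows. This is our minimal illustration under the WMAP7-like parameters quoted above, not the original pipeline; cells touching the sample boundary are unbounded and are returned as NaN, which is why border galaxies are excluded in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.spatial import Voronoi, ConvexHull

# Cosmological parameters as quoted in the text (assumptions of this sketch)
Om, Ok, Ol = 0.27, 0.0, 0.73
c_kms, H0 = 299792.458, 71.0        # km/s and km/s/Mpc
D_H = c_kms / H0                    # Hubble distance in Mpc

def E(zp):
    """Dimensionless Hubble parameter E(z')."""
    return np.sqrt(Om * (1 + zp)**3 + Ok * (1 + zp)**2 + Ol)

def chi(z):
    """Comoving distance chi(z) in Mpc."""
    return D_H * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def cartesian(z, ra_deg, dec_deg):
    """Comoving Cartesian coordinates from redshift and equatorial coords."""
    r = chi(z)
    phi, theta = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([r * np.cos(theta) * np.cos(phi),
                     r * np.cos(theta) * np.sin(phi),
                     r * np.sin(theta)])

def inverse_cell_volumes(points):
    """1/V for every nucleus with a bounded Voronoi cell (NaN otherwise)."""
    vor = Voronoi(points)
    out = np.full(len(points), np.nan)
    for i, reg_idx in enumerate(vor.point_region):
        region = vor.regions[reg_idx]
        if region and -1 not in region:      # bounded cells only
            out[i] = 1.0 / ConvexHull(vor.vertices[region]).volume
    return out
```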
In \cite{Dobrycheva2015} we grouped galaxies from the SDSS sample at $z < 0.1$ into 4 logarithmic intervals $1/V<0.001$, $0.001<1/V<0.01$, $0.01<1/V<0.1$, and $1/V>0.1$ for four ranges of the redshift $0.02<z<0.04$, $0.04<z<0.06$, $0.06<z<0.08$, and $0.08<z<0.1$ (in the rows) and for different ranges of absolute stellar magnitude, $-21.5<M_{r}<-20.7$, $-22.5<M_{r}<-21.5$, and $M_{r}<-22.5$. The number of galaxies in each bin for the $E$ and $L$ types is normalized to the total number of $E$ and $L$ galaxies within the given subsample. Fig.~\ref{fig6} shows that the fraction of galaxies of spiral and late types becomes larger with increasing redshift, while the fraction of early types, on the contrary, becomes smaller. This follows the well-known evolutionary trend of a reduction in the number of galaxies with suppressed star formation at increasing redshift \cite{Cucciati2006, Tal2014}, even at comparatively low redshifts down to $z<0.1$. Also, for the brighter galaxies in the sample, the fraction of galaxies of earlier types is larger since, on average, earlier types have higher luminosities (the well-known morphological type vs. colour indices/luminosity relation) \cite{Blanton2005, Park2007, Hogg2004}. The brightest galaxies of earlier types with $M_{r}<-22.5$ appear preferentially in denser environments: the peak of the distribution of the inverse volumes of the Voronoi cells for the $E$ types lies within the interval $0.01<1/V<0.1$, while in the other intervals of $M_{r}$, for the $L$ types, the peak of the distribution is always within $0.001<1/V<0.01$ (the morphology-density relation \cite{Dressler1980, Blanton2005, Hogg2004, Dobrycheva2016}). \begin{figure}[h!]
\centering \includegraphics[width=.95\textwidth]{E_L.eps} \caption{The distribution of the number of galaxies vs. the inverse volume of the Voronoi cell (local density parameter), with the early morphological type $E$ indicated by red lines and the late type $L$ by blue lines, for different ranges of redshift; the absolute stellar magnitude of galaxies selected from the SDSS at $z < 0.1$ is $-22.5<M_{r}<-21.5$. The number of galaxies in each bin is normalized to the total number of $E$ and $L$ galaxies within the given subsample. The number of central bright $E$ and $L$ galaxies is as follows: $E$ = 1636, $L$ = 459 for $0.02<z<0.04$; $E$ = 3609, $L$ = 1247 for $0.04<z<0.06$; $E$ = 9432, $L$ = 3596 for $0.06<z<0.08$ (Dobrycheva, 2015).} \label{fig6} \end{figure} We can also determine the density of galaxies in a Voronoi cell including their faint satellites, i.e., galaxies with $M_{r}>-20.7$: $(n+1)/V$, where $n$ is the number of faint galaxies in the Voronoi cell and $V$ is the volume of the Voronoi cell. We also constructed the distributions of the early ($E$) and late ($L$) type galaxies as a function of the parameter $(n+1)/V$ in four intervals: $(n+1)/V<0.01$, $0.01<(n+1)/V<0.1$, $0.1<(n+1)/V<1$, and $(n+1)/V>1$. The number of galaxies is normalized to the number of $E$ and $L$ galaxies within the given range of $(n+1)/V$. We examined the density of galaxies only in the first two redshift intervals, since we cannot evaluate the evolution of their properties at higher $z$ because there are not enough faint galaxies. However, we can compare the galaxies' environmental density as a function of the absolute magnitude and morphological type of the bright central galaxy. Thus, the fraction of early types of central galaxies increases with increasing environmental density, while the fraction of late types decreases; that is, the earlier types are in a denser environment than the late types.
When the central galaxy is brighter, the fraction of early types in a subsample is larger \cite{Dobrycheva2015, Vavilova2020c}. \section{The Voronoi tessellation in astrophysical research} \label{sec:3} Ebeling and Wiedenmann \cite{Ebeling1993} were the first to apply the Voronoi tessellation to finding galaxy groups and clusters. Later such an approach was used by Ramella et al. \cite{Ramella2001}, Kim et al. \cite{Kim2002}, Lopes et al. \cite{Lopes2004}, Barrena et al. \cite{Barrena2005}, Melnyk et al. \cite{Melnyk2006c}, and Panko and Flin \cite{Panko2006}. Doroshkevich \cite{Doroshkevich1997} introduced it for filaments and walls (1D and 2D LSS structures), and Neyrinck \cite{Neyrinck2008} for the search for voids in the spatial galaxy distribution. We note some important earlier works concerning other applications of Voronoi diagrams to the large-scale galaxy distribution: for revealing the quasi-periodicities in a pencil-beam survey \cite{Subba1992, Ickeuchi1991}, and for a description of the constraints on the Voronoi model when applied to the isotropic cosmic microwave background \cite{Coles1990}. A significant contribution to the application of Voronoi tessellation to various astronomical tasks was made by Zaninetti, who considered the two- and three-dimensional cases of the explosion scenario, such as supernova events, and developed a dynamical method allowing one to describe the explosion phases \cite{Zaninetti1989, Zaninetti1990}. Ramella et al. \cite{Ramella2001} created a Voronoi Galaxy Cluster Finder, which uses the positions and magnitudes of galaxies to define galaxy clusters and extract their parameters: size, richness, central density, etc. The 3D Voronoi tessellation for galaxy group identification was realized by Marinoni et al. \cite{Marinoni2002} and Cooper et al. \cite{Cooper2005}. Weygaert et al. prepared a useful review of the spatial galaxy distribution and the Delaunay and Voronoi tessellations \cite{Weygaert2009, Hidding2015}.
They discussed the Delaunay Tessellation Field Estimator (DTFE) and the concept of Alpha shapes for the matter distribution; the Multiscale Morphology Filter (MMF), which uses the DTFE for the detachment of filaments, sheets, and clusters; and the Watershed Void Finder (WVF) to identify voids. The era of big data surveys (see, for example, the review by Vavilova et al. \cite{Vavilova2020a}) accelerated the application of Voronoi diagrams to the properties of the spatial galaxy distribution and the environmental influence: $z = 0.1 - 3.0$, COSMOS survey \cite{Scoville2013}; $z \leq 0.5$, Herschel-ATLAS/GAMA \cite{Burton2013}; $z < 0.1$, Coma Supercluster \cite{Cubul2014}; $z < 0.3$, ALHAMBRA survey \cite{SanRoman2018}. S{\"o}chting et al. used Voronoi tessellation within overlapping slices in the photometric redshift space ($0.2<z<3.0$). It allowed them to detach the region $z\sim0.4$ with a slow emergence of virialized clusters, in accordance with the hierarchical scenario, and to detect new superclusters as the peaks of the matter distribution up to $z = 2.9$ \cite{Sochting2012}. As for the Voronoi tessellation cluster finder algorithms, we note the work by Soares et al., who developed one to produce reliable cluster catalogs up to $z=1$ or beyond and down to $10^{13.5}$ solar masses. They built the Voronoi tessellation cluster finder in photometric redshift shells and used the two-point correlation function of the galaxies in the field to determine the density threshold for the detection of cluster candidates and to establish their significance \cite{Soares2011}. A principally new galaxy cluster finder based on a 3D Voronoi tessellation plus a maximum likelihood estimator, followed by gapping-filtering in radial velocity ($VoML+G$), was developed by Pereira et al. \cite{Pereira2017a, Pereira2017b}. They applied it successfully to find optical clusters ($R_{200}$) in the Local Universe, as did Santiago-Bautista et al.
for the identification of continuous filaments in the environment of superclusters \cite{SantiagoBautista2020}. Grokhovskaya et al. developed filtering algorithms for the multiparameter analysis of the large-scale distribution of galaxies (identification of galaxy systems and voids) in narrow slices over the entire range of redshifts of HS 47.5-22, constructing density contrast maps with an adaptive kernel and the Voronoi tessellation \cite{Grokhovskaya2019}. The 3D Voronoi tessellation application to the DEEP2 survey was first introduced by Gerke et al. \cite{Gerke2005}. Meanwhile, Shen Ying et al. \cite{Ying2015} proposed an algorithm which computes clusters of 3D points by applying a set of 3D Voronoi cells, so that a 3D point cluster pattern can be highlighted and easily recognized. Hung et al. demonstrated that Voronoi tessellation Monte Carlo mapping is beneficial for studying the environmental effect on galaxy evolution in high-redshift large-scale structures ($z\sim1$) in the ORELSE survey (Observations of Redshift Evolution in Large Scale Environments) \cite{Hung2019}. An exciting application of Voronoi tessellation was proposed by Lam et al. \cite{Lam2019}: for constructing the white dwarf luminosity functions they used proper motions and colours from the Pan-STARRS\,1\,3$\pi$ Steradian Survey Processing Version 2, and for improving the accuracy of the maximum volume method they used Voronoi tessellation space binning to recalculate the photometric/astrometric uncertainties. It helped them to estimate the disk-to-halo dark matter ratio as 100. Another non-parametric method for estimating halo concentration using Voronoi tessellation, TesseRACT, was proposed by Lang et al. \cite{Lang2015}, who showed that it works well with non-spherical halos and is more accurate at recovering intermediate concentrations for N-body halos than techniques that assume spherical symmetry.
A very interesting algorithm, the void finder ZOBOV (ZOnes Bordering On Voidness), based on the Voronoi tessellation, was proposed by Neyrinck \cite{Neyrinck2008}. This algorithm finds density depressions in the galaxy distribution without free parameters. To estimate the local density, it uses the Voronoi tessellation. One of the outputs of this algorithm is the probability that each void arises from Poisson fluctuations. However, Elyiv et al. \cite{Elyiv2015} have demonstrated a weak spot of the ZOBOV void finder: voids are the lowest-density regions, so any method that uses the positions of galaxies directly to measure the density for identifying voids is prone to shot noise, since voids are by definition the regions with a very low concentration of galaxies (Fig.~\ref{fig7}). The Void IDentification and Examination toolkit (VIDE) developed by Sutter et al. \cite{Sutter2015} includes the parameter-free void finder ZOBOV, where ``Voronoi tessellation of the tracer particles is used to estimate the density field followed by a watershed algorithm to group Voronoi cells into zones and subsequently voids''. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{el.eps} \caption{The reconstructed displacement field (top panels) and its divergence (bottom panels) obtained with the two void finders, the Uncorrelating Void Finder (left-hand panels) and the Lagrangian Zel'dovich Void Finder (right-hand panels).} \label{fig7} \end{figure} Zaninetti, in a series of works \cite{Zaninetti2010a, Zaninetti2012}, developed practical statistics for the voids between galaxies with two new survival functions and related the 3D distribution of the volumes of Poissonian Voronoi diagrams to their 2D cross-sections, assuming a gamma function for the 3D statistics of the volumes of the voids in the Local Universe.
He also conducted simulations \cite{Zaninetti2010b} of the spatial galaxy distribution using the Poissonian Voronoi polyhedra, the 2dF Galaxy Redshift Survey, and the Third Reference Catalog of Bright Galaxies; Zaninetti gives a brief overview of the current status of the research on the statistics of the Voronoi diagrams in \cite{Zaninetti2013}. Among other astronomical tasks, the Voronoi diagrams have been used for image processing, adaptive smoothing, segmentation, and signal-to-noise ratio balancing \cite{Chadha2016}; for the spatial structure of the solar wind and solar-terrestrial connections \cite{Borovsky2018}; for spectrographic data analysis in different electromagnetic regions \cite{Cappellari2002, Cappellari2003, Diehl2006}; in moving-mesh cosmology simulations \cite{Springel2010} and \cite{Weinberger2020} (AREPO Public Code); for chemical evolution in the early Universe \cite{Chiaki2016}; for star formation simulation \cite{Hubber2016}; and for the spatial distribution of lunar craters \cite{Honda2019}. For example, Cabrera et al. \cite{Cabrera2008} applied the Voronoi diagram to an image reconstruction technique for interferometric data based on the Bayesian approach. Chadha et al. proposed Voronoi compact image descriptors and showed that Voronoi partitioning improves the geometric invariance and performance of image retrieval \cite{Chadha2016}; they also developed a Voronoi-based machine learning method (a deep convolutional neural network). As for cosmological simulations, Busch and White \cite{Busch2020} used Voronoi tessellation for a hierarchical tree structure that allowed them to associate local density peaks with disjoint subsets of particles and to analyze the mass distribution at different levels of threshold.
Similarly to our work \cite{Dobrycheva2015}, in which we introduced the volume of the Voronoi cell as a parameter to study the environmental influence on galaxies from the SDSS, Paranjape \& Alam \cite{Paranjape2020} applied an inverse local number density parameter to study the physical effects of such properties as halo (subhalo) mass, large-scale environment, etc., in various cosmological dark matter models, and concluded that the Voronoi volume function gives a new mathematical instrument for galaxy evolution physics and the study of the dark sector. Neyrinck developed sectional-Voronoi algorithms in Python for cosmic-web research, because the Voronoi/Delaunay duals and origami tessellations give a wide class of spiderwebs. ``Voronoi edges are perpendicular bisectors of their corresponding Delaunay edges; the `bisector' part can be relaxed. Each Voronoi edge may be slid along its Delaunay edge, closer to one of the generators. They may not be slid entirely independently, though, since the Voronoi edges must still join vertices. There turns out to be one extra degree of freedom per generator, causing its cell to expand or contract. The result is a sectional-Voronoi diagram, a section through a higher-dimensional Voronoi tessellation. A generator's extra degree of freedom in a sectional-Voronoi diagram can be thought of as its distance from the space being tessellated. A sectional-Voronoi diagram can also be thought of as a Voronoi tessellation in which each generator may have a different additive `power' in the distance function used to determine which points are closest to the generator (thus an alternative term, power diagram). Ash and Bolker \cite{Ash1986} showed that 2D spiderwebs and sectional Voronoi tessellations are equivalent'' (cited from \cite{Neyrinck2008b}). The package is available at \url{https://github.com/neyrinck/sectional-tess} and \url{https://mybinder.org/v2/gh/neyrinck/sectional-tess/master}.
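The ``additive power'' formulation in the quotation can be illustrated directly. Below is our own small sketch (with arbitrary generators and weights, unrelated to the cited package) that assigns points to generators by the power distance $|x-g|^{2} - w_{g}$; with all $w_{g}=0$ it reduces to the ordinary nearest-generator Voronoi assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
generators = rng.random((5, 2))       # generators in the unit square
weights = rng.random(5) * 0.05        # additive "powers" (arbitrary scale)
points = rng.random((10_000, 2))      # points to classify

def power_cells(pts, gens, w):
    """Assign each point to the generator minimising |x - g|^2 - w_g."""
    d2 = ((pts[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2 - w[None, :], axis=1)

labels_power = power_cells(points, generators, weights)
labels_voronoi = power_cells(points, generators, np.zeros(5))  # w = 0
```

Non-zero weights slide the cell boundaries off the perpendicular bisectors of the Delaunay edges, exactly the extra degree of freedom per generator described above.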
Nowadays, the Voronoi diagram methods have many applications in various fields of science and technology, as well as in social sciences and visual art \cite{Aurenhammer1991, Aurenhammer2000}. They are commonly used in computational fluid dynamics, computational geometry, geolocation and logistics, game development, cartography, engineering, liquid crystal electronic technology, etc. The Voronoi tessellation was first utilized by Debnath et al. \cite{Debnath2015} for searches of particle physics beyond the Standard Model at the Large Hadron Collider at CERN. ``Since such tessellations capture the underlying probability distributions of the phase space, interesting features in the data can be detected by studying the geometrical aspects of the ensemble of Voronoi cells'' (cited from \cite{Debnath2018}). These methods allow identifying kinematic edges in two dimensions and generalize the technique for the robust detection of phase space boundaries, which could be applied to discover new physics. An interesting library, ``Voronoi Diagrams: Applications from Archaeology to Zoology'', is collected by Scot Drysdale on the website \url{https://www.ics.uci.edu/~eppstein/gina/scot.drysdale.html}. \section{The Voronoi tessellation and Machine learning} \label{sec:4} A straightforward application of the classical Voronoi diagram in machine learning is the k-nearest neighbours (k-NN) algorithm with the number of neighbours $k=1$. In the case of a classifier, the output class is chosen among the \textit{k} closest neighbours of the target object. Each of them contributes to its class with some weight. Normally the weight is inverse to the distance between the target object and the neighbour (closer neighbours have a stronger influence than farther ones) or uniform (all points in the neighbourhood are weighted equally). If $k=1$, then the object is simply assigned the class of its nearest neighbour.
From another point of view, this can be interpreted as building the Voronoi diagram with the training objects as the nuclei of the diagram. The target object is then assigned a class depending on which Voronoi cell it resides in. A set of programming codes for 1-NN visualization ($k=1$) with examples (Hover Voronoi, a demonstration of d3-Delaunay, Voronoi Labels, Voronoi neighbors, Voronoi update) is available on the website \url{https://observablehq.com/@d3/} by Mike Bostock (2018). For the color image segmentation problem in computer vision, an adaptive and unsupervised clustering approach with Voronoi diagrams was introduced, which outperforms the existing algorithms \cite{Hettiarachchi2016}. The Python library ``Pycobra'' contains several ensemble machine learning algorithms and visualization tools based on Voronoi tessellations \cite{Guedj2017}. It can be downloaded from the Python Package Index (PyPI) and Machine Learning Open Source Software (MLOSS) at \url{https://github.com/bhargavvader/pycobra}. In the case of $k>1$, we should use the concept of the higher-order Voronoi diagram, where a cell represents the set of points in space closer to a given set of $k$ nuclei than to all others (see Chapter 2 and the works by Elyiv et al. \cite{Elyiv2009}, \cite{Vavilova2020b}). In this case, the $k$-order Voronoi division of space can help us find the $k$ nearest neighbors directly; crossing a border of the higher-order Voronoi diagram corresponds to a change of the set of $k$ nearest neighbors. In k-NN regression, the output value for the target object is the average of the values of its $k$ nearest neighbors. If each neighbor has equal weight, a pre-calculated averaged value can be assigned to each cell; then, whenever the target object resides in this cell, the assignment is automatic. In all these cases, creating a Voronoi diagram on the training sample could make the application of the k-NN algorithm faster.
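The equivalence between 1-NN classification and Voronoi cell lookup described above can be sketched in a few lines (our minimal illustration on hypothetical random data, not taken from the cited packages):

```python
import random
from collections import Counter

random.seed(0)
nuclei = [(random.random(), random.random()) for _ in range(20)]  # training objects = Voronoi nuclei
labels = [random.randrange(3) for _ in range(20)]                 # a class label for each nucleus

def d2(p, q):
    """Squared Euclidean distance."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def voronoi_cell(p):
    """Index of the nucleus whose Voronoi cell contains p."""
    return min(range(len(nuclei)), key=lambda i: d2(p, nuclei[i]))

def knn_classify(p, k=1):
    """k-NN with uniform weights: majority vote among the k nearest nuclei."""
    nearest = sorted(range(len(nuclei)), key=lambda i: d2(p, nuclei[i]))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

queries = [(random.random(), random.random()) for _ in range(50)]
# For k = 1 the two procedures agree: a query inherits the label of the
# nucleus of the Voronoi cell it falls into.
assert all(knn_classify(q, 1) == labels[voronoi_cell(q)] for q in queries)
```

Precomputing the cells (or, for $k>1$, the higher-order cells with their majority labels or averaged values) is exactly the speed-up mentioned in the text.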
For example, Inkulu and Kapoor \cite{Inkulu2011} presented an algorithm covering the Voronoi diagram with hyperboxes, which supports approximate nearest neighbor (ANN) queries. Another parallel spatial range query algorithm based on Voronoi diagrams and the MR-tree, which benefits from k-NN, was developed by Fu and Liu \cite{Fu2012}. The Voronoi diagram also has wide applications in deep learning. In \cite{Balestriero2019}, the authors studied the geometry of deep artificial neural networks with piecewise affine and convex nonlinearities. They demonstrated that each layer's input space partition corresponds to a Voronoi diagram whose number of regions grows exponentially with the number of neurons. Numerical experiments for classification problems support their main theoretical results, which are expressed in terms of the deep ANN decision boundary in the input space and a measure of its curvature that depends on the network architecture, activation functions, and weights. In \cite{Igashov2020} the authors presented a deep convolutional neural network (CNN) constructed on a Voronoi tessellation of 3D molecular structures of proteins (the VoroCNN model). Both convolution and pooling operations were used as parts of the network architecture to predict local qualities of 3D protein folds. They computed the Voronoi tessellation of molecular 3D structures and converted it into a protein interaction graph. The graph's critical property is that it implicitly keeps the information about the spatial relationship between the atoms of the protein model. The authors claim that, for the presently available amounts of data and computational resources, the Voronoi tessellation is a better representation of 3D protein structure than raw volumetric data. \section{Instead of Conclusion} \label{sec:5} Today, hierarchical clustering is a common scenario for the evolution of galaxies.
The fact that galaxies are observed mostly at redshifts up to $z \sim 5$, while the most distant observed galaxy clusters are at $z \sim 2$, suggests that galaxies and sparsely populated groups were formed first, and galaxy clusters later, by subcluster merging and/or by capturing galaxies and galaxy groups. The hierarchical clustering scenario is in good agreement with the cosmological $\Lambda$CDM model. Despite its great success in explaining the formation of the large-scale structure of the Universe as a whole, this model faces potentially severe problems on the scales of the individual dark halos of galaxies and galaxy clusters; with the statistics of the distribution of galaxies of different morphological types over a wide range of redshifts; with the evolutionary properties of sparsely populated groups and galaxy clusters; and with the lack of data on the large-scale structure of the Universe behind the Zone of Avoidance of the Galaxy. In this context, we have demonstrated the perfection and elegance of the Voronoi tessellation in solving many astronomical problems, focusing on its effectiveness for describing the web of large-scale structures of the Universe and for data mining of its properties at various redshifts, from early epochs to the scales of the Local Universe. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Voronoi.eps} \caption{(Left) Vortex theory applied to the Solar system (R. Descartes, 1644) (Aurenhammer, 2000, open access). (Right) Illustration of the Voronoi tessellation for the galaxy web distribution.} \label{fig:Dec1644} \end{figure} ``God first partitioned the plenum into equal-sized portions, and then placed these bodies into various circular motions that, ultimately, formed the three elements of matter and the vortex systems'' (R. Descartes, 1644 \cite{Descartes1644}, vol.~III, article 46, as cited in \cite{Aurenhammer2000}).
``The modern view shoves baryogenesis, leptogenesis, WIMP-genesis, and all very far back in time, but builds up structure continuously, using not-very-special initial conditions and gravity (plus perhaps other forces) to develop what we see today. In between come some remarkable constructs, including Thomas Wright's hierarchy, Descartes's Voronoi tessellation of whirlpools in the ether, Alfred Russel Wallace's (yes, the evolution guy) `Goldilocks' location for the Solar system, Cornelis Easton's off-center spiral arms, and the Kapteyn Universe'' (cited from V.\,Trimble, 2014 \cite{Trimble2014}). We have combined this representation, which is consonant with ours, in Fig.~\ref{fig:Dec1644} as an illustration of partitioning space into cells for the subsequent extraction of the physical essence of the phenomena: one panel displays classical physics, the Vortex theory applied to the Solar system (Descartes, 1644), while the other gives a visualization of the galaxy distribution through a 2D Voronoi tessellation. \begin{acknowledgement} This work was partially conducted in the frame of the budgetary program of the NAS of Ukraine ``Support for the development of priority fields of scientific research'' (CPCEL 6541230). \end{acknowledgement} \input{references} \end{document}
\section{Introduction} We give an equivariant (jet coordinate free) version of the Morse cohomology estimates of J.-P. Demailly \cite{D1}, \cite{D2} for invariant $k$-jet metrics on Demailly-Semple bundles. The result has the same effect toward the Green-Griffiths conjecture on the entire curve locus in projective manifolds. M. Green and P. Griffiths \cite{GG} give several equivalent constructions of the jet bundle together with a method of calculation of their Chern classes. They also give an application toward the Kobayashi hyperbolicity conjecture. In \cite{D1} Demailly proves the existence of global (weighted) homogeneous sections of $k$-jets by holomorphic Morse inequalities. By standard facts, any entire curve $f:\mathbb{C} \to X$ satisfies the differential equations defined by such sections. It is preferable to obtain the same result using the coordinate free jet bundles of Semple and Demailly. In this article we generalize the main result of Demailly \cite{D2} from the bundles $E_{k,m}^{GG}(V^*)$ of jet differentials of order $k$ and weighted degree $m$ to the bundles $E_{k,m}(V^*)$ of invariant jet differentials of order $k$ and weighted degree $m$. Namely, Theorem 0.5 of \cite{D2} and Theorem 9.3 of \cite{D1} provide a lower bound $\frac{c^k}{k}m^{n+kr-1}$ on the number of linearly independent holomorphic global sections of $E_{k,m}^{GG} V^* \bigotimes \mathcal{O}(-m \delta A)$ for some ample divisor $A$. The group $G_k$ of local reparametrizations of $(\mathbb{C},0)$ acts on the $k$-jets with orbits of dimension $k$, so that there is an automatic lower bound $\frac{c^k}{k} m^{n+kr-1}$ on the number of linearly independent holomorphic global sections of $E_{k,m}V^* \bigotimes \mathcal{O}(-m \delta A)$. P. Griffiths and M. Green \cite{GG} provided several equivalent methods for characterizing the bundle of jets over a projective variety $X$. Later, the techniques developed by Semple, Demailly, McQuillan, Siu and Green-Griffiths led to new conjectures in K\"ahler geometry.
Demailly \cite{D1}, \cite{D2} shows the existence of global sections of the Green-Griffiths bundles of sufficiently high degree. One can define the Green-Griffiths bundle $X_k$ as \begin{equation} X_k := (J_kV \smallsetminus \{0\}) / \mathbb{C}^* \end{equation} \noindent where $J_kV$ is the bundle of germs of $k$-jets given by the Taylor expansion of $f$. The bundles $\pi_k:X_k \to X$ provide a tool to study entire holomorphic curves $ f: \mathbb{C} \to X $, since such a curve has a lift $f_{[k]}:\mathbb{C} \to X_k $ for every $ k $. Using the notation of \cite{D1}, we write $E_{k, m}^{GG}V^* =(\pi_{k})_* \mathcal{O}_{X_k}(m)$. Alternatively, the Green-Griffiths bundle associated to a pair $(X,V)$, where $X$ is a complex projective manifold (possibly singular) and $V$ is a holomorphic subbundle of the tangent bundle $T_X$ of $X$ with $\text{rank}(V)= r$, is a prolongation sequence of projective bundles \begin{equation} \mathbb{P}^{r-1} \to X_k=P({V_{k-1}}) \stackrel{\pi_k}{\longrightarrow} X_{k-1}, \qquad k \geq 1 \end{equation} \noindent obtained inductively, making $X_k$ a weighted projective bundle over $X$ (see \cite{D1} and \cite{D2} for definitions). The sequence provides a tool to study the locus of nonconstant holomorphic maps $f:\mathbb{C} \to X$ such that $f'(t) \in V$. It is a conjecture due to Green-Griffiths that the total image of all these curves is included in a proper subvariety of $X$, provided $X$ is of \textbf{general type}. By general type we mean that $K_{V}$, the canonical bundle of $V$, is big, cf. \cite{D1}. An alternative way to define $X_k$ is to identify its sections with the weighted homogeneous polynomials in the jet variables $\xi_1,..., \xi_k$ with weights $(1,2,...,k)$ respectively, and with coefficients that are germs of analytic functions on $X$. We denote by $E_{k,m}^{GG}$ the sections of total weight $m$. \begin{theorem}\cite{D1} Let $(X,V)$ be a directed projective variety such that $K_V$ is big, and let $A$ be an ample divisor.
Then for $k \gg 1$, $\delta \in \mathbb{Q}_+$ small enough and $\delta \leq c (\log k)/k$, the number of sections $h^0(X,E_{k,m}^{GG} \otimes \mathcal{O}(-m \delta A))$ has maximal growth, i.e.\ it is larger than $c_km^{n+kr-1}$ for some $m \geq m_k$, where $c,c_k >0$, $n=\dim (X)$, $r= \text{rank}(V)$. In particular entire curves $f:\mathbb{C} \to X$ satisfy many algebraic differential equations. \end{theorem} \noindent The proof of the above theorem is mainly an estimate of the curvature of a suitable metric ($k$-jet metric) $h_k$ on the Green-Griffiths jet bundles. The singularity locus of the metric $h_k$, which we denote by $\Sigma_{h_k}$, satisfies the inductive relation \begin{equation} \Sigma_{h_k} \subset \pi_k^{-1}(\Sigma_{h_{k-1}}) \cup D_k \end{equation} \noindent where $D_k=P(T_{X_{k-1}/X_{k-2}}) \subset X_k$. The divisors $D_k$ are the singularity loci of the projective jet bundles $X_k$, and their relation with the singularity of the $k$-jet metric is $\mathcal{O}_{X_k}(1)=\pi_k^*\mathcal{O}_{X_{k-1}}(1) \otimes \mathcal{O}(D_k)$. \begin{theorem} (\cite{D1}) Let $(X,V)$ be a compact directed manifold. If $(X,V)$ has a $k$-jet metric $h_k$ with negative jet curvature, then every entire curve $f:\mathbb{C} \to X$ tangent to $V$ satisfies $f_{[k]}(\mathbb{C}) \subset \Sigma_{h_k}$, where $\Sigma_{h_k}$ is the singularity locus of $h_k$. \end{theorem} \noindent The theorem provides a method to read the entire curve locus from the singularities of suitable metrics on the jet bundle; see \cite{D1} and \cite{D2} for details. Any entire curve $ f: \mathbb{C} \to X $ satisfies $Q(f_{[k]})=0$, where $f_{[k]}$ is a lift of $f$ and $Q$ a global section of the jet bundle (see \cite{D1}). As the section $Q$ involves both the manifold and jet coordinates, it is desirable to show that there are enough dual global sections along the jet fibers to pair with the given section so as to give equations on the variety $X$. We prove a Serre duality for asymptotic sections of jet bundles.
We also give a partial application to the Green-Griffiths conjecture. \vspace{0.1cm} \section{Invariant Jets vs Invariant metrics} \vspace{0.3cm} J.-P. Demailly develops the ideas of \cite{GG} and considers the jet differentials that are also invariant under changes of coordinate on $\mathbb{C}$. The bundle $J_k \to X$ of $k$-jets of germs of parametrized curves in $X$ has as its fiber at $x \in X$ the set of equivalence classes of germs of holomorphic maps $f:(\mathbb{C},0) \to (X,x)$, with the equivalence relation $f \sim g$ iff $f^{(j)}(0)=g^{(j)}(0), \ 0 \leq j \leq k$. By choosing local holomorphic coordinates around $x$, the elements of the fiber $J_{k,x}$ can be represented by the Taylor expansion \begin{equation} f(t)=tf'(0)+\frac{t^2}{2!}f''(0)+...+\frac{t^k}{k!}f^{(k)}(0)+O(t^{k+1}) \end{equation} \noindent Writing $f=(f_1,...,f_n)$ on open neighborhoods of $0 \in \mathbb{C}$, the fiber is \begin{equation} J_{k,x}=\{(f'(0),...,f^{(k)}(0))\} = \mathbb{C}^{nk} \end{equation} \noindent Let $G_k$ be the group of local reparametrizations of $(\mathbb{C},0)$, \begin{equation} t \longmapsto \phi(t)=a_1t+a_2t^2+...+a_kt^k+..., \qquad a_1 \in \mathbb{C}^* \end{equation} \noindent Its action on the $k$-jet vectors (2.2) is given by the following matrix multiplication \begin{equation} [f'(0), f''(0)/2!, ..., f^{(k)}(0)/k!]. \left[ \begin{array}{ccccc} a_1 & a_2 & a_3 & ... & a_k \\ 0 & a_1^2 & 2a_1a_2 & ... & a_1a_{k-1}+...+a_{k-1}a_1\\ 0 & 0 & a_1^3 & ... & 3a_1^2a_{k-2}+...\\ . & . & . & ... & .\\ 0 & 0 & 0 & ... & a_1^k \end{array} \right ] \end{equation} \noindent The group $G_k$ decomposes as $\mathbb{C}^* \ltimes U_k$, where $U_k$ is the unipotent radical of $G_k$. Let $E_{k,m}$ be the Demailly-Semple bundle whose fiber at $x$ consists of the $U_k$-invariant polynomials on the fiber coordinates of $J_k$ at $x$ of weighted degree $m$. Set $E_k=\bigoplus_m E_{k,m}$, the Demailly-Semple bundle of graded algebras of invariant jets.
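For illustration, we record the smallest case $k=2$ explicitly (a worked check, not spelled out in \cite{D1}): for $\phi(t)=a_1t+a_2t^2+...$ the chain rule gives
\begin{align*}
(f\circ\phi)'(0) &= a_1\, f'(0),\\
\frac{(f\circ\phi)''(0)}{2!} &= a_2\, f'(0) + a_1^2\, \frac{f''(0)}{2!},
\end{align*}
which is precisely the product of the row vector $[f'(0),\ f''(0)/2!]$ with the first two columns of the matrix in (2.4).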
Then one needs to carry out the calculations of \cite{D1}, \cite{D2} with an invariant metric. Toward this we examine the following metric \begin{equation} \left (\sum_{s=1}^k \epsilon_s \left (\ \sum_{\alpha} \mid P_{\alpha}(\xi) \mid ^2 \right )^{\frac{p}{w(P_{\alpha})}}\right )^{1/p} \end{equation} \noindent where the $P_{\alpha}$ are a set of invariant polynomials in the jet coordinates. The effect is then that the Demailly-Semple locus of the lifts of entire curves should be contained in \begin{equation} \Sigma_{h_k} \subset\ \{ P_{\alpha}=0 , \ \ \forall \alpha\} \end{equation} \noindent where the jet coordinates on the fibres $J_{k,x}$ are $\xi_1,...,\xi_k \in \mathbb{C}^n$ with $\xi_i=f^{(i)}(0)$ for an entire holomorphic curve $f : \mathbb{C} \longrightarrow X$ tangent to $V$. Besides, $\epsilon_1 \gg \epsilon_2 \gg . . . \gg \epsilon_k > 0$ are sufficiently small and $w(P_{\alpha})$ is the weight of $P_{\alpha}$. For instance, a choice of the $P_{\alpha}$'s could be the Wronskians. However, we shall attempt a slightly better choice. First let us set up a correspondence between invariant jets and non-invariant ones. Consider a change of coordinates \begin{equation} (f_1,...,f_r) \longmapsto (f_1 \circ f_1^{-1},...,f_r \circ f_1^{-1})=(t,g_2,...,g_r)=\eta \end{equation} \noindent locally defined in a neighborhood of a point, where $n-r$ of the coordinates $f_i$ of $f(t) \in V_{f(t)}$ are completely determined by the remaining $r$ coordinates. This makes the first coordinate the identity. If we differentiate successively in the new coordinates, then all the resulting fractions are invariant of degree $0$, \begin{equation} g_2'=\frac{f_2'}{f_1'}\circ f_1^{-1}, \qquad g_2''=\frac{f_1'f_2''-f_2'f_1''}{f_1'^3}\circ f_1^{-1},... \end{equation} \noindent We take the $P_{\alpha}$'s to be all the polynomials that appear in the numerators of the components when we successively differentiate (2.7) with respect to $t$.
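As a quick verification of (2.8) (a routine computation which we spell out), the chain rule applied to $g_2=f_2\circ f_1^{-1}$ gives
\begin{equation*}
g_2''=\left(\frac{f_2''f_1'-f_2'f_1''}{f_1'^2}\circ f_1^{-1}\right)\cdot (f_1^{-1})'
=\frac{f_1'f_2''-f_2'f_1''}{f_1'^3}\circ f_1^{-1},
\end{equation*}
using $(f_1^{-1})'=1/(f_1'\circ f_1^{-1})$. The numerator $f_1'f_2''-f_2'f_1''=W(f_1',f_2')$ is a Wronskian, which already indicates why the Wronskians occur among the invariant polynomials $P_{\alpha}$.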
An invariant metric in the first set of coordinates corresponds to a usual metric in the second one, subject to the condition that we take the average under the unitary changes of coordinates in $V$. A change of coordinates on the manifold $X$ acts as \begin{equation} (\psi \circ f)^{(k)}(0)= \psi'(0). f^{(k)}(0)+\text{higher order terms weighted by the epsilons} \end{equation} \noindent with $\psi$ unitary. That is, a change of variables in $X$ acts only through the first derivative, by composition with a linear map, up to the scaling epsilon factors (cf. \cite{D1}). \begin{equation} \mid (z;\xi)\mid \sim \left (\sum_s \epsilon_s \parallel \eta_s . (\eta_{11})^{2s-1}\parallel_h^{p/(2s-1)}\right )^{1/p} =\left (\sum_s \epsilon_s \parallel \eta_s \parallel_h^{p/(2s-1)} \right )^{1/p}\mid \eta_{11} \mid \end{equation} \noindent where the $\eta_s$ are the jet coordinates $\eta_s = g^{(s)}(0), 1 \leq s \leq k$, induced by $g = (t, g_2(t), . . . , g_r(t))$. The weight of $\eta_s$ can be seen, by differentiating (2.8) inductively, to be equal to $2s-1$. Therefore the above metric becomes similar to the metric used by Demailly in the new coordinates produced by the $g$'s. We need to modify the metric in (2.10) slightly to make it invariant under hermitian transformations of the vector bundle $V$. In fact the role of $\eta_{11}$ can be played by any other $\eta_{1i}$, or even by any other non-zero vector. To fix this we consider \begin{equation} \mid (z;\xi)\mid = \int_{\parallel v \parallel_1=1} \left (\sum_s \epsilon_s \parallel \eta_s \parallel_h^{p/(2s-1)} \right )^{1/p}\mid <\eta_{1}.v> \mid^2 \end{equation} \noindent where the integration only affects the last factor, averaging over all vectors $v \in V$ of unit norm. This removes the former difficulty.
The curvature is the same as for the metric in \cite{D1}, up to an extra contribution from the last factor, \begin{equation} \gamma_k(z,\eta)=\frac{i}{2\pi}\left ( w_{r,p}(\eta)+\sum_{lm\alpha}b_{lm\alpha}\left (\int_{\parallel v \parallel_1=1} v_{\alpha}\right )dz_l \wedge d\bar{z}_m +\sum_{s} \frac{1}{s}\frac{\mid \eta_s\mid^{2p/s}}{\sum_t \mid \eta_t \mid^{2p/t}} \sum c_{ij\alpha \beta}\frac{ \eta_{s\alpha}\bar{\eta}_{s\beta}}{\mid \eta_s \mid^2} dz_i \wedge d\bar{z}_j \right ) \end{equation} \noindent Namely, if $\pi_r : \mathbb{C}^{kr} \setminus \{0\} \longrightarrow \mathbb{P}(1^r, 2^r, . . . , k^r)$ is the canonical projection onto the weighted projective space $\mathbb{P}(1^r, 2^r, ... , k^r)$ and \begin{equation} \phi_{r,p}(z) := \frac{1}{p}\log \left (\sum_{s=1}^{k}|z_s|^{\frac{2p}{s}} \right ) \end{equation} \noindent for some $p > 0$, then $w_{r,p}$ is the degenerate K\"ahler form on $\mathbb{P}(1^r, 2^r, . . . , k^r)$ with \begin{equation} \pi_r^*w_{r,p} = dd^c\phi_{r,p}. \end{equation} \noindent Here $b_{lm\alpha} \in \mathbb{C}$. The contribution of the factor $\mid \eta_{11} \mid$ can be understood as the curvature of the sub-bundle of $V$ which is the orthogonal complement of the remainder. Thus \begin{equation} b_{lm\alpha}=c_{lm\alpha \alpha} \end{equation} \noindent where $c_{lm\alpha\alpha}$ is read from the coefficients of the curvature tensor of $(V,w^{FS})$, the Fubini-Study metric on $V$ (the second summand in (2.12)).
Then we need to look at the integral \begin{equation} \int_{X_k,q}\Theta^{n+k(r-1)}=\frac{(n+k(r-1))!}{n!(k(r-1))!}\int_X \int_{P(1^r,...,k^r)}w_{r,p}^{k(r-1)}(\eta)1_{\gamma_k,q}(z,\eta) \gamma_k(z,\eta)^n \end{equation} \noindent In the course of evaluating the Morse inequalities, the curvature form raised to the power $n=\dim X$ is replaced by the trace of the above tensor. If we use the polar coordinates \begin{equation} x_s=\parallel \eta_s \parallel^{2p/s}, \qquad u_s=\eta_s/\parallel \eta_s \parallel \end{equation} \noindent then integrating the curvature over the sphere yields \begin{equation} \gamma_k=\frac{i}{2\pi}\big(\sum_{lm\alpha}b_{lm\alpha} dz_l \wedge d\bar{z}_m+\sum_{s} \frac{1}{s}\sum c_{ij\alpha \beta} u_{s\alpha}\bar{u}_{s \beta}dz_i \wedge d\bar{z}_j\big) \end{equation} \noindent Because the first term is a finite sum over $1 \leq \alpha \leq r$ (the $b_{lm\alpha}$ being labeled by $\alpha$), the estimates for this new form are essentially the same as those in \cite{D1}. Therefore one expects \begin{equation} \int_{X_k,q}\Theta^{n+k(r-1)}=\frac{(\log k)^n}{n!(k!)^r}\left ( \int_X 1_{\gamma,q}\gamma^n +O((\log k)^{-1}) \right) \end{equation} \noindent where $\gamma$ is the curvature form of $\text{det}(V^*/G_k)$ with respect to the Chern connection of the determinant of the invariant metric, and $1_{\gamma,q}$ is the characteristic function of the set of those $(z, \eta)$ at which $\gamma$ has signature $(n-q, q)$. We have proved the following. \begin{prop} The analogue of Theorem 1.1 holds if the bundle $E_{k,m}^{GG}$ is replaced by $E_{k,m}$. \end{prop} \begin{proof} The proof follows from the proofs in \cite{D1} and \cite{D2}, together with the formulas (2.13)-(2.19) above. \end{proof} \begin{remark} If $P=P(f,f',...,f^{(k)})$ and $Q=Q(f,f',...,f^{(k)})$ are two local sections of the Green-Griffiths bundle, then the first invariant operator is $f \mapsto f_j'$.
Define a bracket operation as follows \begin{equation} [P,Q]=\big(d \log \frac{P^{1/\deg(P)}}{Q^{1/\deg(Q)}} \big) \times PQ=\frac{1}{\deg(P)}Q\,dP-\frac{1}{\deg(Q)}P\,dQ \end{equation} \noindent This is compatible with Merker's bracket $[P, Q]=\deg(Q)Q\,dP- \deg(P)P\, dQ$, cf. \cite{M2}. If $(V,h)$ is a Hermitian vector bundle, the equations in (2.20) define inductively $G_k$-equivariant maps \begin{equation} Q_k:J_kV \to S^{k-2} V \otimes \bigwedge^2 V, \qquad Q_k(f)=[f',Q_{k-1}(f)] \end{equation} \noindent The sections produced by the $Q_k(f)$ generate the fiber rings of the Demailly-Semple bundle. In fact, taking charts on the projective fibers, one can check that locally the ring these sections generate is equal to that of $J_k/G_k$, cf. \cite{M2}. \end{remark} \vspace{0.3cm} \section{Existence of global dual differential operators} \vspace{0.3cm} In \cite{M1} J. Merker proves the Green-Griffiths-Lang conjecture for a generic hypersurface in $\mathbb{P}^{n+1}$. He proves that for $X \subset \mathbb{P}^{n+1}(\mathbb{C})$ of degree $d$, a generic member of the universal family \begin{equation} \mathfrak{X} \subset \mathbb{P}^{n+1} \times \mathbb{P}^{\frac{(n+1+d)!}{(n+1)!\,d!}-1} \end{equation} \noindent parametrizing all such hypersurfaces, the GG conjecture holds. In Merker's proof for the hypersurface case, the theorem is established outside an algebraic subset $\Sigma \subset J_{\text{vert}}^n(\mathfrak{X})$ defined by the vanishing of certain Wronskians, using a result of Y. T. Siu \cite{S}. In order to give a similar proof of the GG conjecture for general $X$ one may use the following generalization.
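Before proceeding, we record a one-line verification that the two expressions for the bracket in (2.20) agree: since
\begin{equation*}
d \log \frac{P^{1/\deg(P)}}{Q^{1/\deg(Q)}}
=\frac{1}{\deg(P)}\frac{dP}{P}-\frac{1}{\deg(Q)}\frac{dQ}{Q},
\end{equation*}
multiplication by $PQ$ yields $\frac{1}{\deg(P)}Q\,dP-\frac{1}{\deg(Q)}P\,dQ$, which agrees with Merker's bracket $\deg(Q)Q\,dP-\deg(P)P\,dQ$ up to the overall factor $\deg(P)\deg(Q)$.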
\vspace{0.3cm} \noindent \textbf{Question:} If $X \subset \mathbb{P}^{n+1}$ is a generic member of a family $\mathfrak{X}$ of projective varieties, are there constants $c_n$ and $c_n'$ such that \begin{equation} T_{J_{\text{vert}}^n(\mathfrak{X})} \otimes \mathcal{O}_{\mathfrak{X}_k}(c_n) \otimes \pi_{0k}^* L^{c_n'} \end{equation} \noindent is generated at every point by its global sections, where $L$ is an ample line bundle on $\mathfrak{X}$? By the analogy between microlocal differential operators and formal polynomials on the symmetric tensor algebra, it suffices to show \begin{equation} H^0(X_k, Sym^{\leq m'}\tilde{V}_k \otimes \mathcal{O}_{X_k}(m) \otimes \pi_{0k}^*B) \ne 0, \qquad m' \gg m \gg k \end{equation} \noindent where $\tilde{V}_k$ is the in-homogenization of $V_k$, acting as differential operators of first order. We also wish to work over the Demailly-Semple bundle of invariant jets. To this end, by a procedure similar to the former case, one may check the holomorphic Morse estimates applied to the following metric on the symmetric powers. \begin{equation} \vert (z,\xi) \vert= \Big( \sum_{s=1}^k \epsilon_s \big( \sum_{u_i \in S^sV^*} \vert W_{u_1,...,u_s}^s \vert^2 + \sum_{iju_{\alpha}u_{\beta}}C_{iju_{\alpha}u_{\beta}} z_i\bar{z}_j u_{\alpha}\bar{u}_{\beta}\big)^{p/s(s+1)} \Big)^{1/p} \end{equation} \noindent where $W_{u_1,...,u_s}^s$ is the Wronskian \begin{equation} W_{u_1,...,u_s}^s=W(u_1 \circ f,...,u_s \circ f) \end{equation} \noindent and we regard the summand in front of $\epsilon_s$ as a metric on $S^sV^*$. We need to find estimates for the coefficients $C_{iju_{\alpha}u_{\beta}}$. Moreover, the frame $\langle u_i \rangle$ of monomials is chosen to be holomorphic and orthonormal at $0$, dual to the frame $\langle e^{\alpha}=\sqrt{l!/\alpha!}e_1^{\alpha_1}...e_r^{\alpha_r}\rangle$.
The scaling of the basis in $S^lV^*$ is to make the frame orthonormal, and it is calculated as follows: \begin{equation} \langle e^{\alpha}, e^{\beta} \rangle=\langle \sqrt{l!/\alpha!}e_1^{\alpha_1}...e_r^{\alpha_r},\sqrt{l!/\beta!}e_1^{\beta_1}...e_r^{\beta_r} \rangle = \sqrt{1/\alpha! \beta!}\langle \prod_{i=1}^le_{\eta(i)}, \sum_{\sigma \in S_l} \prod_{i=1}^le_{\eta \circ \sigma(i)} \rangle \end{equation} \noindent via the embedding $S^lV^* \hookrightarrow V^{\otimes l}$ and the map $\eta:\{1,...,l\} \to \{1,...,r\}$ taking the value $i$ exactly $\alpha_i$ times. Toward a Morse estimate with the coefficients of the curvature tensor we proceed as follows. Because the frame $\langle e_{\lambda} \rangle$ of $V$ was chosen to be orthonormal at the given point $x \in X$, we may substitute \begin{equation} \langle e_{\lambda} , e_{\mu} \rangle=\delta_{\lambda \mu}+ \sum_{ij\lambda \mu}c_{ij\lambda \mu} z_i\bar{z}_j+... \end{equation} \noindent It follows that \begin{equation} \langle e^{\alpha}, e^{\beta} \rangle= \sqrt{1/\alpha! \beta!} \left (\delta_{\alpha \beta} + \sum_{\eta \circ \sigma(i)=\eta(i)} c_{ij\alpha_{\eta(i)}\beta_{\eta(i)}}z_i\bar{z}_j+... \right ) \end{equation} \noindent The strategy is to find the scalars $C_{iju_{\alpha}u_{\beta}}$ in terms of the curvature of the metric on $V$, in order to examine an estimate of the volume \begin{equation} \int_{X_k} \Theta^{n+k(r-1)} = \frac{(n+k(r-1))!}{n!(k(r-1))!}\int_X \int_{\mathbb{P}(1^{[r]},...,k^{[r]})} \Theta_{\text{vert}}^{k(r-1)}\Theta_{hor}^n \end{equation} \noindent and to show that it is positive. However, the calculations with $\Theta$ involve more complicated estimates. We pose the question of the existence of a positive lower bound for the global sections of the bundle in (3.3) as a step toward the Green-Griffiths conjecture.
\begin{remark} In \cite{DR} the existence of dual sections to $H^0(X,E_{k,m}^{GG}V^*)$ for $m \gg k \gg 0$ has been proved using Morse inequalities, involving higher cohomologies of $X_k$ with similar coefficients, cf. loc. cit. \end{remark} \section{Serre duality for Jet bundles} \vspace{0.2cm} In \cite{D1}, \cite{D2}, the existence of asymptotic global sections of the Green-Griffiths jets on a projective variety $ X $ has been proved by using Morse estimates for the curvature of suitable metrics on these bundles. It is shown in \cite{D1} that \begin{equation} H^0(X,E_{k,m}^{GG}V^* \otimes A^{-1})=H^0(X_k,\mathcal{O}_{X_k}(m) \otimes \pi_{k}^*A^{-1}) \end{equation} \noindent is non-trivial when $m \gg k \gg 0$, where $A$ is a hermitian ample line bundle on $X$. In fact the factor $A$ can be absorbed into the other factor over $X_k$ if we assume $m \in \mathbb{Q}$. The formulation of Serre duality is based on the existence of the canonical sheaf. A technical difficulty arises here: in the definition of the relative canonical sheaf, the fibers of $X_k \to X$ have singularities. However, there is a generalization of the definition of the canonical sheaf to singular varieties, as explained in \cite{D1}, \cite{DR}. The relative canonical sheaf $K_{X_k/X}$ can be defined in a manner similar to the definition of the canonical sheaf $K_V$, where $V$ is a holomorphic subbundle of $T_X$. Serre duality can then be written as \begin{equation} H^0(X_k,(\pi_k)_*\mathcal{O}_{X_k}(m)) \bigotimes H^{k(r-1)}(X_k, K_{X_k/X} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m')) \longrightarrow \mathcal{O}_X \end{equation} \noindent for $ m, m'\gg k \gg 0 $. The non-triviality of the cohomology groups has been proved by Demailly and the author, \cite{D1}, \cite{D2} and \cite{DR}. The existence of the relative canonical sheaves $K_{X_k/X}$ makes it possible to discuss Serre duality along the fibers of jet bundles.
Because the fibers of the Green-Griffiths bundle are weighted projective spaces with appropriate weights $(1,2,...,k)$, the relative Serre duality can be interpreted as the duality for coherent sheaves on weighted projective spaces. On account of this, we review the formulation of the adjoint pair along the fibers of $E_{k,m}^{GG}V^*$. According to classical Serre duality, the dual pair associated to $H^{0} (F_x,(\pi_k)_* \mathcal{O}_{{X_k},x} (m))$ is $H^{k (r-1)} (F_x, K_{F_x} \otimes (\pi_k)_*\mathcal{O}_{{X_k},x} (- m))$, where $ K_{F_x} $ is the canonical sheaf of the fiber. In other words we have \begin{equation} H^0(\pi_{k}^{-1}(x),(\pi_k)_* \mathcal{O}_{X_k}(m))^{\vee}=H^{k(r-1)}(\pi_{k}^{-1}(x), K_{F_x} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m)) \end{equation} \noindent By the Leray spectral sequence for $\pi_k:X_k \to X$ we get the following \begin{equation} H^{k (r-1)} (X_k, K_{X_k / X} \otimes (\pi_k)_*\mathcal{O}_{X_k} (- m ')) = H^0 (X, R^{k (r-1)} (\pi_{k})_* (K_{X_k / X} \otimes (\pi_k)_*\mathcal{O}_{X_k} (-m '))) \end{equation} \noindent On account of the formalism of Serre duality for coherent sheaves on projective spaces, the candidate for the duality is \begin{equation} H^0(X_k,(\pi_k)_*\mathcal{O}_{X_k}(m)) \bigotimes H^{k(r-1)}(X_k, K_{X_k/X} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m')) \longrightarrow \mathcal{O}_X \end{equation} \noindent We have to make sure that the group $H^{k(r-1)}(X_k, K_{X_k / X} \otimes (\pi_k)_*\mathcal{O}_{X_k} (- m')) $ is non-trivial, i.e. has enough sections for $ m' \gg k \gg 0 $.
\begin{theorem} (Serre Duality for Jet Fibers) There is a Serre duality for asymptotic cohomologies, $m, m' \gg k \gg 0$, \begin{equation} H^0(X_k,(\pi_k)_*\mathcal{O}_{X_k}(m)) \bigotimes H^{k(r-1)}(X_k, K_{X_k/X} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m')) \longrightarrow \mathcal{O}_X \end{equation} \end{theorem} \begin{proof} The duality in the construction is the duality on each fiber of the jet bundle, glued by the Leray spectral sequence. In that sense, it is a duality of coherent sheaves on weighted projective spaces \cite{RT}. The adjoint pair along the fiber $\pi_k^{-1}(x)$ is given by the formula \begin{equation} H^0(\pi_{k}^{-1}(x),(\pi_k)_* \mathcal{O}_{X_k}(m))^{\vee}=H^{k(r-1)}(\pi_{k}^{-1}(x), K_{F_x} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m)) \end{equation} \noindent The degeneration of the Leray spectral sequence of the fibration $\pi_k:X_k \to X$ provides \begin{equation} H^0(X_k,(\pi_k)_*\mathcal{O}_{X_k}(m))^{\vee} = H^0(X, R^{k(r-1)}(\pi_{k})_*( K_{X_k/X} \otimes \mathcal{O}_{X_k}(-m))) \end{equation} \noindent which is equivalent to (4.6). The non-triviality of the first factor in (4.6) is proved in \cite{D1}. The non-triviality of the adjoint cohomology group in the pairing is obtained from the estimates \begin{equation} h^q(X_k, K_{X_k/X} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m'))\geq \sum_{j=q-1,q,q+1}\frac{rm^n}{r!}\int_{X(\Theta,j)}(-1)^{q-j}\Theta^n-o(m^n) \end{equation} \noindent where $\Theta$ is the curvature of a suitable $k$-jet metric on $X_k$ and $X(\Theta,j)=\{x \in X; \ \Theta \ \text{has signature} \ (n-j,j)\}$. We have considered the inequality for $q=k(r-1)$. It follows that when $ K_{X_k / X} $ is big, both of the factors in the pairing (4.6) are non-trivial for $m,m'\gg 0$ [cf. \cite{DR}, section 4]. The theorem follows. \end{proof} In \cite{M1}, J.
Merker shows that when $X$ is a hypersurface of degree $d$ in $ \mathbb{P}^{n + 1}$ and is a generic member of the universal family $\mathfrak{X} \subset \mathbb{P}^{n+1} \times \mathbb{P}^{N_d}$, the Green-Griffiths conjecture holds for $X$. His method uses ideas of Y.~T.~Siu on the existence of slanting vector fields, see \cite{S}, \cite{P}. The proof establishes the conjecture outside a certain algebraic subvariety $ \Sigma \subset J_{\text{vert}}^n (\mathfrak{X}) $ defined by Wronskians. An implication of Merker's result \cite{M1} is the following pairing: \begin{equation} H^0 \big (\mathfrak{X}_k,(\pi_k)_*\mathcal{O}_{\mathfrak{X}_k}(m) \big ) \bigotimes H^{k(r-1)}\big (\mathfrak{X}_k, K_{\mathfrak{X}_k/\mathfrak{X}} \otimes \big \langle J_{\text{vert}}^{k} (\mathfrak{X}) \big \rangle^{m'} \big ) \longrightarrow \mathcal{O}_\mathfrak{X} \end{equation} \noindent where $m' , m \gg k \gg 0$ and $\langle J_{\text{vert}}^{k} (\mathfrak{X}) \rangle^{m'}$ denotes the ring of operators of degree $m'$ generated by $J_{\text{vert}}^{k} (\mathfrak{X})$. \vspace{0.5cm} \noindent \textit{Application to Green-Griffiths Conjecture:} We note that any entire curve $f:\mathbb{C} \to X$ satisfies the equations cut out by the sections $ Q \in H^0(X_k, (\pi_k)_*\mathcal{O}_{X_k} (m)) $, i.e.\ $Q(f_{[k]})=0$ for $f_{[k]}:\mathbb{C} \to X_k$ a lift of $f$, cf. \cite{D1}. It follows that, in (4.6), after the composition for $Q \in H^0 (X_k,(\pi_k)_*\mathcal{O}_{X_k}(m))$ and $P \in H^0 (X_k,K_{X_k/X} \otimes (\pi_k)_*\mathcal{O}_{X_k}(-m'))$, the resulting function also satisfies \begin{equation} \langle Q,P \rangle (f_{[k]})=0, \qquad Q=\sum_{|\alpha|=m}A_{\alpha}(z)\xi^{\alpha}, \ P= \sum_{|\beta|=m'}B_{\beta}(z)\partial_{\xi}^{\beta} \end{equation} \noindent Therefore, the issue is to show that the image of (4.6) is a non-trivial ideal of $\mathcal{O}_X$. The above procedure plays a role similar to that of the slanting vector fields along jet fibers, due to Siu \cite{S}. 
In fact the slanting vector fields play the role of generating sections in the adjoint fiber rings of jet bundles. \section{Appendix} The transformation group of $k$-jets is a non-reductive subgroup $G_k=\mathbb{C}^* \ltimes U_k$ of $GL_k(\mathbb{C})$, where $U_k$ is the unipotent radical consisting of upper-triangular $k \times k$ matrices of a certain type. Unlike the reductive case, one cannot deduce that the ring of polynomials invariant under the action of $G_k$ or its unipotent part is finitely generated. An attempt toward this conjecture has been made in \cite{BK}. The question is whether the ring of invariants $\mathbb{C}[f'(0),f''(0),...,f^{(k)}(0)]^{G_k}$ is finitely generated, where we regard the $f^{(k)}(0)$ as germs of variables. The ring appears as the local ring of invariant sections of $J_k(X)=J_kT_X$ at a generic point $x \in X$. The projectivized bundle $J_k(X)/\mathbb{C}^*$ is the Green-Griffiths bundle. An alternative way to define these bundles is through their rings of sections. One can consider the ring of weighted homogeneous polynomials $P(z,\xi)=\sum_a A(z) \xi^a$ in the variables $\xi=(\xi_1,...,\xi_k)$ with weights $1,2,...,k$ and $a=(a_1,...,a_k)$ respectively, denoted $E_k=\oplus_m E_{k,m}^{GG}$, where $m$ stands for the weighted degree. A differential field is a field $A$ with a derivation $\delta:A \to A, \ \delta(ab)=b\delta(a) + a \delta(b)$. Let $A$ be a differential field and $B$ a differential subfield. The differential Galois group $G$ of $A/B$ is the group of all differential automorphisms of $A$ leaving $B$ fixed. The same formalism as for Galois groups of ordinary fields then applies here as well. For any intermediate differential subfield $C$, denote by $C'$ the subgroup of $G$ leaving $C$ elementwise fixed; similarly, for any subgroup $H$ of $G$, denote by $H'$ the set of elements of $A$ fixed by $H$. Call a field or group closed if it is equal to its double prime. 
Now, with these notations, the priming operation defines the Galois correspondence between closed subgroups and closed differential subfields [see \cite{K} for notations]. The Wronskian of $n$ elements $y_1,...,y_n$ in a differential ring is defined as the determinant \begin{equation} W(y_1,...,y_n)=\left| \begin{array}{cccc} y_1 & y_2 & \ldots & y_n \\ y_1' & y_2' & \ldots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \ldots & y_n^{(n-1)} \end{array} \right|. \end{equation} \noindent It is well known that $n$ elements in a differential field are linearly dependent over the field of constants if and only if their Wronskian vanishes. We call an extension of the form $A=K\langle u_1,...,u_n\rangle$, where $u_1,...,u_n$ are solutions of \begin{equation} L(y)=\frac{W(y,u_1,...,u_n)}{W(u_1,...,u_n)}=y^{(n)}+a_1y^{(n-1)}+...+a_ny=0, \end{equation} \noindent a Picard extension, cf. \cite{K}. \begin{theorem} \cite{K} We have the following \begin{itemize} \item[(1)] Let $K \subset L \subset M$ be differential fields. Suppose that $L$ is Picard over $K$ and $M$ has the same field of constants as $K$. Then any differential automorphism of $M$ over $K$ sends $L$ into itself. \item[(2)] The differential Galois group of a Picard extension is an algebraic matrix group over the field of constants. \item[(3)] If $K$ has an algebraically closed constant field of characteristic $0$, and $M$ is a Picard extension of $K$, then any differential isomorphism over $K$ between two intermediate fields extends to the whole of $M$. In particular this also holds for any differential automorphism of an intermediate field over $K$. \item[(4)] Galois theory implements a one-to-one correspondence between the intermediate differential fields and the algebraic subgroups of the differential Galois group $G$. A closed subgroup $H$ is normal iff the corresponding extension $L/K$ is normal, in which case $G/H$ is the full differential Galois group of $L$ over $K$. 
\end{itemize} \end{theorem} \noindent In fact, over a constant field of characteristic $0$, any differential isomorphism between intermediate fields extends to the whole differential field. Let $A=K\langle u_1,...,u_n\rangle$ be a Picard extension and $W$ the Wronskian of $u_1,...,u_n$. A basic fact about Wronskians is that for a differential automorphism $\sigma$ of $A$ we have $\sigma(W)=|c_{ij}|W$. Therefore $W$ is fixed by $\sigma$ if and only if $ |c_{ij}| =1$. A family of elements $(x_i)_{i \in I}$ is called differentially algebraically independent if the family $(x_i^{(j)})_{i \in I,j \geq 0}$ is algebraically independent over the field of constants; otherwise we call it dependent. An element $x$ is called differentially algebraic if the family consisting of $x$ alone is differentially algebraically dependent. An extension is called differentially algebraic if every element of it is so. Finally, we say $G$ is differentially finitely generated over $F$ if there exist elements $x_1,...,x_n \in G$ such that $G$ is generated over $F$ by the family $(x_i^{(j)})_{1 \leq i \leq n,j \geq 0}$, cf. \cite{K}. \begin{theorem} \cite{K} Let $F \subset G$ be an extension of differential fields, then \begin{itemize} \item If $G=F\langle x_1,...,x_n\rangle$ and each $x_i$ is differentially algebraic over $F$, then $G$ is finitely generated over $F$. \item If $G$ is differentially finitely generated over $F$ and $F \subset E \subset G$ is an intermediate differential field, then $E$ is also differentially finitely generated. \end{itemize} \end{theorem} \begin{theorem} The local ring of invariant sections $(E_{k, \leq m})_x^{G_k}$ is differentially finitely generated for each $m \in \mathbb{N}$. Furthermore, $\mathbb{C}[J_{k,x}(X)]^{G_k}=\mathbb{C}\langle \wp_1,...,\wp_l \rangle (\alpha_1,...,\alpha_n)$, where the $\wp_i$ are polynomials in the Wronskians. The local ring of invariants $\mathbb{C}[J_{k,x}(X)]^{U}$ under any subgroup $U \subset SL_k$ is differentially finitely generated. 
\end{theorem} \begin{proof} The fiber rings of the Green-Griffiths bundles $X_k$ and of the sheaves $E_{k,m}^{GG}(V^*)$ are differential rings. We shall consider their quotient fields. The algebraic groups $GL_k$, $SL_k$ and also $G_k$ act linearly and are differential Galois groups. As we explained, the fixed field of $SL_k$ and that of the trivial Galois group $1$ are, respectively, the field generated by the Wronskians and the whole quotient field of the fibers. Therefore the intermediate group $G_k$ also has a differentially finitely generated fixed field, where we have used the criteria of Theorems 5.1 and 5.2. Furthermore, one finds that a possible choice of generators may include the fixed generators of $SL_k$, i.e.\ the Wronskians. By the Noether normalization theorem, there exists a finite number of generators $\wp_1,...,\wp_l$ such that the ring of fibers in $J_k(X)^{G_k}$ is algebraic over $\mathbb{C}\langle \wp_1,...,\wp_l\rangle$. It follows that \begin{equation} \mathbb{C}[J_{k,x}(X)]^{G_k}=\mathbb{C}\langle \wp_1,...,\wp_l \rangle (\alpha_1,...,\alpha_n) \end{equation} \noindent as differential fields. The sections $\wp_i$ are local sections of $J_k(X)$ and, as explained, can be taken to be polynomials in the Wronskians. \end{proof} \bibliographystyle{amsplain}
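The Wronskian criterion used throughout this appendix (vanishing iff linear dependence over the constants) can be checked numerically. The following is a minimal pure-Python sketch; the helper names (`poly_derivative`, `wronskian`) are ours and purely illustrative, with polynomials encoded as coefficient lists $[a_0, a_1, \ldots]$ for $a_0 + a_1 x + \cdots$.

```python
from fractions import Fraction

def poly_derivative(coeffs):
    """Differentiate a polynomial given as a coefficient list [a0, a1, a2, ...]."""
    return [Fraction(k) * c for k, c in enumerate(coeffs)][1:] or [Fraction(0)]

def poly_eval(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def det(m):
    """Determinant by cofactor expansion (fine for the small matrices here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def wronskian(polys, x):
    """W(y_1,...,y_n) at the point x: row i holds the (i-1)-th derivatives."""
    n = len(polys)
    rows = []
    current = list(polys)
    for _ in range(n):
        rows.append([poly_eval(p, x) for p in current])
        current = [poly_derivative(p) for p in current]
    return det(rows)

# 1, x, x^2 are linearly independent: W = det [[1,x,x^2],[0,1,2x],[0,0,2]] = 2
print(wronskian([[1], [0, 1], [0, 0, 1]], Fraction(5)))   # 2
# x and 2x are dependent over the constants: the Wronskian vanishes
print(wronskian([[0, 1], [0, 2]], Fraction(5)))           # 0
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt about whether the determinant is genuinely zero.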
\section{Introduction} \label{sec:Intro} In the link prediction (LP) task, we are given a snapshot of a social network, and asked to predict future links that are most likely to emerge between nodes. LP has a wide variety of applications, \eg, recommending friends in Facebook, followers in Twitter, products in Amazon, or connections on LinkedIn. An LP algorithm typically considers current non-edges as potential edges, and ranks them by decreasing likelihoods of becoming edges in future. \subsection{Prior Work and Their Limitations} \label{sec:PriorWork} LP methods abound in the literature, and predominantly follow two approaches. The first approach relies strongly on hand-engineering node features and edge likelihoods based on the network structure and domain knowledge~\cite{Katz1997start,LibenNowellK2007LinkPred,BackstromL2011SRW}. However, such feature engineering often demands significant domain expertise. The second approach learns low dimensional node embeddings which serve as node features in LP tasks. Such embedding models include Node2Vec~\cite{grover2016node2vec}, DeepWalk~\cite{perozzi2014deepwalk}, etc., and various graph neural networks (GNN), \eg, GCN~\cite{kipf2016semi}, GraphSAGE~\cite{hamilton2017inductive}, GAT~\cite{velivckovic2017graph}, etc. \paragraph{Limited expressive power of GNNs} While deep graph representations have shown significant potential in capturing complex relationships between nodes and their neighborhoods, they lack representational power useful for LP. A key reason for this weakness is the use of symmetric aggregates over a node $u$'s neighbors, driven by the desideratum that the representation of $u$ should be invariant to a permutation of its neighbor nodes~\citep{zaheer2017deep,ravanbakhsh2016deep,qi2017pointnet}. Such networks have recently been established as low-pass filters \citep{wu2019simplifying, nt2019revisiting}, which attenuate high frequency signals. 
This prevents LP methods based on such node representations from reaching their full potential. Although recent efforts \citep{lee2019set, bloem2019probabilistic, shi2020deep, stelznergenerative, skianis2020rep,ZhangC2018LinkPredGNN} on modeling inter-item dependencies have substantially improved the expressiveness of set representations in applications like image and text processing, they offer only modest improvement for LP, as we shall see in our experiments. Among these approaches, SEAL \citep{ZhangC2018LinkPredGNN} improves upon GNN performance but does not readily lend itself to efficient top-$K$ predictions via LSH. \paragraph{Limitations of sequence driven embeddings} We could arrange the neighbors of $u$ in some arbitrary canonical order, and combine their features sequentially using, say, a recurrent neural network (RNN). This would capture feature correlations between neighbors. But now, the representation of $u$ will become sensitive to the order in which neighbors are presented to the RNN. In our experiments, we see loss degradation when neighbors are shuffled. We seek to resolve this central dilemma. An obvious attempted fix would be to present many permutations (as Monte Carlo samples) of neighbor nodes but, as we shall see, doing so in a data-oblivious manner is very inefficient in terms of space and time. \subsection{Our Proposal: \textsc{Perm\-Gnn}\xspace} In response to the above limitations in prior work, we develop \textsc{Perm\-Gnn}\xspace: a novel node embedding method specifically designed for LP. To avoid the low-pass nature of GNNs, we eschew symmetric additive aggregation over neighbors of a node $u$, instead using a recurrent network to which neighbor node representations are provided sequentially, in some order. The representation of $u$ is computed by an output layer applied on the RNN states. 
To neutralize the order-sensitivity of the RNN, we cast LP as a novel min-max optimization, equivalent to a game between an adversary that generates worst-case neighbor permutations (to maximize LP loss) and a node representation learner that refines node representations (to minimize LP loss) until they become insensitive to neighborhood permutations. To facilitate end-to-end training and avoid exploring the huge permutation space, the adversarial permutation generator is implemented as a Gumbel-Sinkhorn neural network \citep{Mena+2018GumbelSinkhorn}. Next, we design a hashing method for efficient LP, using the node representations learnt thus far. We propose a smooth optimization to compress the learned embeddings into binary representations, subject to certain hash performance constraints. Then we leverage locality sensitive hashing \citep{GionisIM1999hash} to assign the bit vectors to buckets, such that nodes likely to become neighbors share buckets. Thus, we can limit the computation of pairwise scores to within buckets. In spite of this additional compression, our hashing mechanism is accurate and fast. We evaluate \textsc{Perm\-Gnn}\xspace on several real-world datasets, which shows that our embeddings can suitably distill information from node neighborhoods into compact vectors, and offer accuracy boosts beyond several state-of-the-art LP methods\footnote{Code: \url{https://www.cse.iitb.ac.in/~abir/codes/permgnn.zip}.}, while achieving large speed gains via LSH. \subsection{Summary of Contributions} (1)~\textbf{Adversarial permutation guided embeddings:} We propose \textsc{Perm\-Gnn}\xspace, a novel node embedding method, which provides high quality node representations for LP. 
In sharp contrast to additive information aggregation in GNNs, we start with a permutation-sensitive but highly expressive aggregator of the graph neighbors and then desensitize it to permutations by optimizing a min-max ranking loss function with respect to smooth surrogates of adversarial permutations. \noindent (2)~\textbf{Hashing method for scalable predictions:} We propose an optimized binary transformation of the learnt node representations, which readily admits the use of a locality-sensitive hashing method and yields fast and accurate predictions. \noindent (3)~\textbf{Comprehensive evaluation:} We provide a rigorous evaluation to test both the representational power of \textsc{Perm\-Gnn}\xspace and the proposed hashing method, which shows that our proposal usually outperforms classical and recent methods. Further probing the experimental results reveals insightful explanations behind the success of \textsc{Perm\-Gnn}\xspace. \section{Preliminaries} \label{sec:Prelim} In this section, we describe necessary notation and the components of a typical LP system. \subsection{Notation} We consider a snapshot of an undirected social network $\Gcal=(\Vcal,\Ecal)$. Each node $u$ has a feature vector $\fb_u$. We use $\nbr(u)$ and $\nnbr(u) $ to indicate the set of neighbors and non-neighbors of $u$. Our graphs do not have self edges, but we include $u$ in $\nbr(u)$ by convention. We define $\nbr(u)=\set{u}\cup\set{v\,|\, (u,v)\in \Ecal}$, $\nnbr(u)=\set{v| v \neq u, (u,v)\not \in \Ecal}$ and also $\overline{\Ecal}$ to be the set of non-edges, \ie, $\overline{\Ecal}=\set{(u,v)\,|\, u\in\Vcal,\ v\in\nnbr(u)}$. Finally, we define $\Pi_\delta$ to be the set of permutations of the set $[\delta]=\set{1,2,..., \delta}$ and $\Pcal_\delta$ to be the set of all possible 0/1 permutation matrices of size $\delta\times\delta$. 
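The neighbor and non-neighbor sets just defined can be rendered directly in code. This is an illustrative pure-Python sketch (function names are ours), following the convention above that $u \in \nbr(u)$:

```python
def neighbors(V, E, u):
    """N(u): u itself plus every v sharing an (undirected) edge with u."""
    return {u} | {v for a, b in E for v in (a, b) if u in (a, b) and v != u}

def non_neighbors(V, E, u):
    """All v != u with (u, v) not an edge; these (u, v) are the potential edges."""
    return {v for v in V if v != u and v not in neighbors(V, E, u)}

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3)}
print(neighbors(V, E, 2))       # {1, 2, 3}
print(non_neighbors(V, E, 1))   # {3, 4}
```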
\subsection{Scoring and Ranking} Given a graph snapshot $\Gcal=(\Vcal,\Ecal)$, the goal of an LP algorithm is to identify node-pairs from the current set of non-edges $\overline{\Ecal}$ (often called potential edges) that are likely to become edges in future. In practice, most LP algorithms compute a \textbf{score} $s(u,v)$ for each potential edge $(u,v)\in\overline{\Ecal}$, which measures their likelihood of becoming connected in future. Recent network embedding methods~\cite{kipf2016semi,grover2016node2vec,Salha+2019gravity} first learn a latent representation $\xb_u$ of each node $u\in\Vcal$ and then compute scores $s(u,v)$ using some similarity or distance measure between the corresponding representations $\xb_u$ and $\xb_v$. In the test fold, some nodes are designated as \emph{query} nodes~$q$. The (current) non-neighbors $v$ of each query node are sorted by decreasing~$s(q,v)$. We are primarily interested in LP systems that can retrieve the $K$ nodes with the largest $s(q,v)$, for every query $q$, in $o(N^2)$ time. \section{Proposed Approach} \label{sec:PermGNN} In this section, we first state the limitations of GNNs. Then, we present our method for obtaining high quality node embeddings, with better representational power than GNNs. \subsection{GNNs and Their Limitations} GNNs start with a graph and per-node features $\fb_u$ to obtain a neighborhood-sensitive node representation $\xb_u$ for $u\in\Vcal$. To meaningfully compare $\xb_u$ and $\xb_v$ and compute $s(u,v)$, information from neighbors of $u$ (and $v$) should be aggregated in such a way that the embeddings become invariant to permutations of the neighbors of $u$ (and $v$). GNNs ensure permutation invariance by additive aggregation. Given an integer $K$, for each node $u$, a GNN aggregates structural information $k$ hops away from $u$ to cast it into~$\xb_u$ for $k\le K$. 
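The score-and-rank procedure described above reduces, for cosine-similarity scores over learned embeddings, to the following pure-Python sketch (illustrative names; the exhaustive scan over non-neighbors is exactly what the hashing of Section~\ref{sec:LSH} avoids):

```python
import heapq
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def top_k_predictions(embeddings, edges, q, k):
    """Rank current non-neighbors of query node q by decreasing score s(q, v)."""
    nbrs = {q} | {v for u, v in edges if u == q} | {u for u, v in edges if v == q}
    scored = ((cosine(embeddings[q], embeddings[v]), v)
              for v in embeddings if v not in nbrs)
    return [v for _, v in heapq.nlargest(k, scored)]

emb = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0], "d": [0.8, 0.3]}
# "b" is already a neighbor of "a", so only "c" and "d" are candidates
print(top_k_predictions(emb, [("a", "b")], "a", 2))   # ['d', 'c']
```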
Formally, a GNN first computes intermediate embeddings $\set{\zb_u(k)\,|\, k\in [K]}$ in an iterative manner and then computes $\xb_u$, using the following recurrent propagation rule. \begin{align} \overline{\zb}_u(k-1) &= \aggregate\big(\set{\zb_v(k-1) \,|\, v\in\nbr(u)}\big);\label{eq:gnn1}\\[-0.1ex] \zb_u(k) &= \combine_1\big(\zb_u(k-1),\; \overline{\zb}_u(k-1) \big);\label{eq:gnn3}\\[-0.1ex] \xb_u &= \combine_2(\zb_u(1),\ldots,\zb_u(K))\label{eq:gnn4} \end{align} Here, for each node $u$ with feature vector $\fb_u$, we initialize $\zb_u(0) = \fb_u$; $\aggregate$ and $\combine_{1,2}$ are neural networks. To ensure permutation invariance of the final embedding $\xb_u$, $\aggregate$ aggregates the intermediate $(k-1)$-hop information $\zb_v(k-1)$ with an additive (commutative, associative) function, guided by set function principles~\citep{zaheer2017deep}: \begin{multline} \aggregate\big(\set{\zb_v(k-1) \,|\, v\in\nbr(u)}\big) \\[-1ex] = \sigma_1\left(\textstyle \sum_{v\in\nbr(u)} \sigma_2\big(\zb_v(k-1)\big) \right). \label{eq:agg} \end{multline} Here $\sigma_1,\sigma_2$ are nonlinear activations. In theory \citep[Theorem~2]{zaheer2017deep}, if $\combine_{1,2}$ are given `sufficient' hidden units, this set representation is universal. In practice, however, commutative-associative aggregation suffers from limited expressiveness \citep{PabbarajuJ2019permute,wagstaff2019limitations,garg2020generalization,cohenkarlik2020regularizing}, which degrades the quality of $\xb_u$ and~$s(\cdot,\cdot)$, as described below. Specifically, their expressiveness is constrained from two perspectives. \paragraph{Attenuation of important network signals} GNNs are established to be intrinsically low pass filters~\cite{nt2019revisiting, wu2019simplifying}. Consequently, they can attenuate high frequency signals which may contain crucial structural information about the network. 
To illustrate, assume that the node $u$ in Eqs.~\eqref{eq:gnn1}--\eqref{eq:gnn4} has two neighbors $v$ and $w$ and $\zb_v(k-1)=[+1, -1]$ and $\zb_w(k-1)=[-1, +1]$, which induce high frequency signals around the neighborhood of $u$. In practice, these two representations may carry important signals about the network structure. However, popular choices of $\sigma_2$ often diminish the effect of each of these vectors. In fact, the widely used linear form of $\sigma_2$ \citep{hamilton2017inductive, kipf2016semi} would completely annul their effects (since $\sigma_2(\zb_v(k-1)) + \sigma_2(\zb_w(k-1))=\bm{0}$) in the final embedding~$\xb_u$, which would consequently lose capacity for encapsulating neighborhood information. \paragraph{Inability to distinguish between correlation structures} In Eq.~\eqref{eq:agg}, the outer nonlinearity $\sigma_1$ operates over the sum of all representations of neighbors of~$u$. Therefore, it cannot explicitly model the variations in the joint dependence of these neighbors. Suppose the correlation between $\zb_{v}(k-1)$ and $\zb_{w}(k-1)$ is different from that between $\zb_{v'}(k-1)$ and $\zb_{w'}(k-1)$ for $\set{v,v',w,w'} \subseteq \nbr(u)$. The additive aggregator in Eq.~\eqref{eq:agg} cannot capture the distinction. Here, we develop a mitigation approach which exploits sequential memory, e.g., LSTMs, even though they are order-sensitive, and then neutralizes the order sensitivity by presenting adversarial neighbor orders. An alternative mitigation approach is to increase the capacity of the aggregator (while keeping it order invariant by design) by explicitly modeling dependencies between neighbors, as has been attempted in image or text applications \cite{lee2019set, bloem2019probabilistic, shi2020deep, stelznergenerative}. 
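The cancellation effect described at the start of this subsection is easy to reproduce. In this minimal sketch (our own illustrative code), $\sigma_2$ is the identity and aggregation is a plain sum, as in the linear case of Eq.~\eqref{eq:agg}:

```python
def sum_aggregate(neighbor_vecs):
    """Additive (permutation-invariant) aggregation with identity sigma_2."""
    dim = len(neighbor_vecs[0])
    return [sum(v[i] for v in neighbor_vecs) for i in range(dim)]

z_v = [+1, -1]
z_w = [-1, +1]
print(sum_aggregate([z_v, z_w]))          # [0, 0]: the opposing signals cancel
# A node whose neighbors carry no signal at all aggregates to the same vector:
print(sum_aggregate([[0, 0], [0, 0]]))    # [0, 0] -- the two cases are indistinguishable
```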
\subsection{Our Model: {\protect\textsc{Perm\-Gnn}\xspace{}}} Responding to the above limitations of popular GNN models, we design \textsc{Perm\-Gnn}\xspace, the proposed adversarial permutation guided node embeddings. \subsubsection*{Overview.} \hspace{-2mm} Given a node $u$, we first compute an embedding $\xb_u$ using a \emph{sequence} encoder, parameterized by $\theta$: \begin{align} \xb_u = \rho_{\theta}\big(\set{\fb_v \,|\, v\in\nbr(u)}\big), \end{align} where $\nbr(u)$ is presented in some arbitrary order (to be discussed). In contrast to the additive aggregator, $\rho$ is modeled by an LSTM \citep{HochreiterS1997LSTM}, followed by a fully-connected feedforward neural network (see Figure~\ref{fig:PermGnnLossSchematic}). Such a formulation captures the presence of high frequency signals in the neighborhood of $u$ and the complex dependencies between the neighbors $\nbr(u)$ by combining their influence via the recurrent states of the LSTM. However, now the embedding $\xb_u$ is no longer invariant to the permutation of the neighbors $\nbr(u)$. As we shall see, we counter this by casting the LP objective as an instance of a min-max optimization problem. Such an adversarial setup refines $\xb_u$ in an iterative manner, to ensure that the resulting trained embeddings are permutation invariant (at least as far as possible in a non-convex optimization setting). \subsubsection*{\textsc{Perm\-Gnn}\xspace{} architecture.} Let us suppose $\pib = [\pi_1,...,\pi_{|\nbr(u)|}] \in \Pi_{|\nbr(u)|}$ is some arbitrary permutation of the neighbors of node~$u$. We take the features of neighbors of $u$ in the order specified by $\pib$, i.e., $\bigl(v_{\pi_1}, v_{\pi_2}, \ldots, v_{\pi_{|\nbr(u)|}}\bigr)$, and pass them into an LSTM: \begin{align} \hspace{-2mm} \yb _{u,1},..., \yb _{u,{|\nbr(u)|}} = \textsc{LSTM}_{\theta}\big(\fb_{v_{\pi_1}}, ..., \fb_{v_{\pi_{|\nbr(u)|} }}\big). 
\label{eq:lstm} \end{align} Here $\big(\yb _{u, k} \big)_{k\in [|\nbr(u)|]}$ is a sequence of intermediate representations of node $u$, which depends on the permutation~$\pib$. Such an approach ameliorates the limitations of GNNs in two ways:\\ (1) Unlike GNNs, the construction of $\yb_{\bullet}$ is not limited to symmetric aggregation, and is therefore able to capture crucial network signals including those with high frequency~\cite{borovkova2019ensemble}. \\ (2) An LSTM (indeed, any RNN variant) is designed to capture the influence of one token of the sequence on the subsequent tokens. In the current context, the state variable $\hb_{k}$ of the LSTM combines the influence of the first $k-1$ neighbors in the input sequence, \ie, $v_{\pi_1},\ldots v_{\pi_{k-1}}$, on the $k$-th neighbor $v_{\pi_k}$. Therefore, these recurrent states allow $\yb_{\bullet}$ to capture the complex dependence between the features $\fb_\bullet$.\\ Next, we compute the final embeddings $\xb_u$ by using an additional nonlinearity on top of the sequence $(\yb_{u, k} )_{k\in [|\nbr(u)|]}$ output by the LSTM: \begin{align} \xb_{u;\pib} = \sigma_{\theta} \big(\yb_{u, 1}, \yb_{u, 2}, \ldots, \yb_{u, |\nbr(u)|}\big) \in \mathbb{R}^D. \label{eq:sigma-theta-intro} \end{align} Note that the embeddings $\set{\xb_u}$ computed above depend on $\pib$, the permutation of the neighbors $\nbr(u)$ given as the input sequence to the LSTM in Eq.~\eqref{eq:lstm}. \paragraph{Removing the sensitivity to~$\pib$} One simple way to ensure permutation invariance is to compute the average of $\xb_{u;\pib}$ over all permutations~$\pib \in \Pi_{|\nbr(u)|}$, similar to~\citet{murphy2019janossy}. At a time and space complexity of at least $O(\sum_{u\in\Vcal} |\Pi_{|\nbr(u)|}|)$, this is quite impractical for even moderate-degree nodes. Replacing the exhaustive average by a Monte Carlo sample does improve representation quality, but is still very expensive. 
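The order-sensitivity that makes such averaging necessary already shows up in a one-dimensional toy recurrence standing in for the LSTM of Eq.~\eqref{eq:lstm}; the weights $a, b$ below are arbitrary illustrative values, not learned parameters:

```python
import math

def recurrent_encode(sequence, a=0.7, b=0.5):
    """Toy scalar recurrent encoder: h_k = tanh(a * x_k + b * h_{k-1})."""
    h = 0.0
    for x in sequence:
        h = math.tanh(a * x + b * h)
    return h

neighbors = [1.0, -2.0, 0.5]
h1 = recurrent_encode(neighbors)
h2 = recurrent_encode(list(reversed(neighbors)))
print(h1, h2)                 # two different values for the same neighbor set
print(abs(h1 - h2) > 1e-3)    # True: the encoding is order-sensitive
```

A plain sum of the same inputs would of course agree under both orders; it is precisely this extra expressiveness of the recurrence that comes bundled with order-sensitivity.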
{\citet{murphy2019relational} proposed a method called $\pi$-SGD, which samples one permutation per epoch. While it is more efficient than sampling multiple permutations, it shows worse robustness in practice.} \subsubsection*{Adversarial permutation-driven LP objective.} Instead of brute-force sampling, we set up a two-party game, one being the network for LP, vulnerable to $\pib$, and {the other} being an adversary, which tries to make the LP network perform poorly by choosing a `bad'~$\pib$ at each node. \begin{tcolorbox}[colframe=gray!40,boxsep=0mm,left=0pt,right=0pt,top=0pt,bottom=0pt] \begin{algorithmic}[1] \State pick initial $\pib^u$ at each node~$u$ \Repeat \State fix $\set{\pib^u: u\in\Vcal}$; optimize $\theta$ for best LP accuracy \State fix $\theta$; find next $\pib^u$ at all $u$ for worst LP accuracy \Until{LP performance stabilizes} \end{algorithmic} \end{tcolorbox} \noindent Let $\pib^u \in \Pi_{|\nbr(u)|}$ be the permutation used to shuffle the neighbors of $u$ in Eq.~\eqref{eq:lstm}. Conditioned on $\pib^u, \pib^v$, we compute the score for a node-pair $(u,v)$ as \begin{align} s_{\theta}(u,v|\pib^u,\pib^v) &=\similarity(\xb_{u;\pib^u}, \xb_{v;\pib^v}), \label{eq:SimCos} \end{align} where $\similarity(\ab,\bb)$ denotes the cosine similarity between $\ab$ and~$\bb$. To train our LP model to give high quality ranking, we consider the following AUC loss surrogate \cite{Joachims2005multivariate}: \begin{align} & \loss(\theta; \set{\pib^{w}}_{w\in\Vcal} ) \nn \\ &= \!\!\! \sum_{ \substack{(u,v)\in \Ecal\\ (r,t)\in\overline{\Ecal}}} \Big[ \Delta + s_{\theta}(r,t|\pib^r,\pib^t) - s_{\theta}(u,v|\pib^u,\pib^v) \Big]_{+} \label{eq:HingeRankingLoss} \end{align} where $\Delta$ is a tunable margin and $[a]_+=\max\set{0,a}$. 
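Eq.~\eqref{eq:HingeRankingLoss} is a sum of hinge penalties over (edge, non-edge) pairs. A minimal sketch with precomputed scores (illustrative values standing in for model outputs):

```python
def hinge_ranking_loss(scores_pos, scores_neg, margin=0.1):
    """AUC surrogate: sum of hinge penalties over (edge, non-edge) score pairs.

    scores_pos: s(u, v) for node pairs (u, v) in the edge set
    scores_neg: s(r, t) for node pairs (r, t) in the non-edge set
    """
    return sum(max(0.0, margin + sn - sp)
               for sp in scores_pos for sn in scores_neg)

# Edges separated from non-edges by more than the margin: zero loss
print(hinge_ranking_loss([0.9, 0.8], [0.1, 0.2]))   # 0.0
# A non-edge scored above an edge incurs a penalty of margin + 0.5 - 0.3
print(hinge_ranking_loss([0.3], [0.5]))             # 0.3 (up to float rounding)
```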
\begin{figure} \centering\resizebox{0.48\textwidth}{!}{ \begin{tikzpicture}[>=latex] \def\nbrhd#1#2{% \begin{tikzpicture} \node [circle,fill=gray!20,inner sep=0, outer sep=0] (#1) {$#2$}; \node [inner sep=0, outer sep=0, right=2mm of #1] (fe#1) {$\vdots$}; \node [inner sep=0, outer sep=0, above=0mm of fe#1] (fne#1) {$\bm{f}$}; \node [inner sep=0, outer sep=0, below=0mm of fe#1] (fse#1) {$\bm{f}$}; \draw (#1) to (fe#1); \draw (#1) to (fne#1); \draw (#1) to (fse#1); \draw [decorate,decoration={brace,amplitude=4pt}] (fne#1.north east) -- (fse#1.south east) node (box#1) [midway,xshift=3pt] {}; \node [anchor=center, right=2mm of box#1.east] (ffv#1) {$\bm{F}_{#2}$}; \draw (box#1.east) -- (ffv#1.west); \node [anchor=center, fill=red!10, draw=red!40, inner sep=2pt, right=3mm of ffv#1] (Pphi#1) {$T_\phi$}; \draw [->] (ffv#1.east) -- (Pphi#1.west); \node [anchor=center, draw=green!50, fill=green!10, inner sep=2pt, right=3mm of Pphi#1] (lstm#1) {$\text{LSTM}_\theta$}; \draw [->] (Pphi#1.east) -- (lstm#1); \node [anchor=center, right=4mm of lstm#1, draw=green!50, fill=green!10] (sigma#1) {$\sigma_\theta$}; \draw [->] (lstm#1.east) -- (sigma#1.west); \node [anchor=center, right=4mm of sigma#1] (xb#1) {$\bm{x}_{#2}$}; \draw [->] (sigma#1.east) -- (xb#1.west); \end{tikzpicture}} \node [inner sep=0,outer sep=0, fill=gray!8] (vplus) {\nbrhd{vpluss}{v_+}}; \node [inner sep=0,outer sep=0, fill=gray!8, below=1mm of vplus.south east, anchor=north east] (u) {\nbrhd{us}{u}}; \node [inner sep=0,outer sep=0, fill=gray!8, below=1mm of u.south east, anchor=north east] (vminus) {\nbrhd{vminuss}{v_-}}; \draw [thick] (vplus.west) -- (u.west) node [midway,xshift=-4mm] (edge) {edge}; \draw [thick,dashed] (vminus.west) -- (u.west) node [midway,xshift=-7mm] (nonedge) {non-edge}; \node [above right=2mm and 2mm of u, outer sep=0, inner sep=0] (uvplus) {$\odot$}; \draw [->] (vplus.east) -- (uvplus); \draw [->] (u.east) -- (uvplus); \node [below right=2mm and 2mm of u, outer sep=0, inner sep=0] 
(uvminus) {$\odot$}; \draw [->] (vminus.east) -- (uvminus); \draw [->] (u.east) -- (uvminus); \node [anchor=center, inner sep=0pt, outer sep=0pt, right=2mm of uvplus] (simuvplus) {$\text{sim}(u,v_+)$}; \draw [->] (uvplus.east) -- (simuvplus.west); \node [anchor=center, inner sep=0pt, outer sep=0pt, right=2mm of uvminus] (simuvminus) {$\text{sim}(u,v_-)$}; \draw [->] (uvminus.east) -- (simuvminus.west); \node [anchor=center, draw, inner sep=2pt, outer sep=0pt, right=28mm of u.east] (relu) {ReLU}; \draw [->] (simuvplus.south east) -- (relu.north west); \draw [->] (simuvminus.north east) -- (relu.south west); \node [anchor=center, inner sep=0pt, outer sep=0pt, right=3mm of relu.east] (loss) {loss}; \draw [->] (relu.east) -- (loss.west); \node [left=2mm of relu.west, inner sep=0, outer sep=0] (margin) {$\Delta$}; \draw [->] (margin.east) -- (relu.west); \node [right=4mm of u.east, fill=blue!10, draw=blue!40] (Cpsi) {$C_\psi$}; \draw [->] (u) -- (Cpsi); \node [right=3mm of Cpsi] (beeu) {$\bm{b}_u$}; \draw [->] (Cpsi) -- (beeu); \end{tikzpicture} } \caption{\textsc{Perm\-Gnn}\xspace{} min-max loss and hashing schematic.} \label{fig:PermGnnLossSchematic} \end{figure} As stated above, we aim to train LP model parameters $\theta$ in such a way that the trained embeddings $\set{\xb_u}$ become invariant to the permutations of $\nbr(u)$ for all nodes $u\in\Vcal$. This requirement suggests the following min-max loss: \begin{align} \min_{\theta} \max_{\set{\pib^{w}}_{w\in\Vcal}} \loss(\theta; \set{\pib^{w}}_{w\in\Vcal}). \label{eq:MinMaxOptHard} \end{align} \subsubsection*{Neural permutation surrogate.} As stated, the complexity of Eq.~\eqref{eq:MinMaxOptHard} seems no better than exhaustive enumeration of permutations. 
To get past this apparent blocker, just as max is approximated by softmax (a multinomial distribution), a `hard' permutation (1:1 assignment) $\pib^w$ is approximated by a `soft' permutation matrix $\Pb^w$ --- a doubly stochastic matrix --- which allows continuous optimization. Suppose $\Fb_w =\big[\fb_{v_1},\fb_{v_2},\ldots,\fb_{v_{|\nbr(w)|}}\big]$ is a matrix whose rows are formed by the features of $\nbr(w)$ presented in some canonical order. Then $\Pb^w \Fb_w$ approximates a permuted feature matrix corresponding to some permuted sequence of neighbor feature vectors. The RHS of Eq.~\eqref{eq:lstm} can be written as $\text{LSTM}_\theta(\Pb^w \Fb_w)$, which eventually lets us express loss as a function of~$\Pb^w$. We can thus rewrite the min-max optimization \eqref{eq:MinMaxOptHard}~as \begin{align} \min_{\theta} \max_{\set{\Pb^w \,|\, w\in\Vcal}} \loss(\theta; \set{\Pb^{w}}_{w\in\Vcal}), \label{eq:MinMaxOptSoft} \end{align} where the inner maximization is carried out over all `soft' permutation matrices $\Pb^w$, parameterized as follows. In deep network design, a trainable multinomial distribution is readily obtained by applying a softmax to trainable (unconstrained) logits. Analogously, a trainable soft permutation matrix $\Pb^w$ can be obtained by applying a Gumbel-Sinkhorn network `GS' \citep{Mena+2018GumbelSinkhorn} to a trainable (unconstrained) `seed' square matrix, say,~$\Ab^w$: \begin{align} &\Pb^w = \lim_{n\to\infty} \text{GS}^n(\Ab^w), \quad \text{where} \nn \\ &\text{GS}^0(\Ab^w) = \exp(\Ab^w) \quad \text{and} \nn \\ &\text{GS}^n(\Ab^w) = \ColScale\left( \RowScale\big(\text{GS}^{n-1}(\Ab^w)\big) \right). \nn \end{align} Here, $\ColScale$ and $\RowScale$ represent column and row normalization. $\text{GS}^n(\Ab^w)$ is the doubly stochastic matrix obtained by consecutive row and column normalizations of~$\Ab^w$. 
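The alternating $\RowScale$/$\ColScale$ recursion just described can be sketched in a few lines of pure Python (illustrative only; the actual Gumbel-Sinkhorn network adds Gumbel noise and operates on batched tensors):

```python
import math

def sinkhorn(logits, n_iters=50):
    """Alternate row and column normalization of exp(logits): GS^n."""
    m = [[math.exp(x) for x in row] for row in logits]
    for _ in range(n_iters):
        # RowScale: make every row sum to 1
        m = [[x / sum(row) for x in row] for row in m]
        # ColScale: make every column sum to 1
        cols = [sum(m[i][j] for i in range(len(m))) for j in range(len(m[0]))]
        m = [[m[i][j] / cols[j] for j in range(len(m[0]))] for i in range(len(m))]
    return m

# Peaked logits (low temperature) push GS^n toward a hard permutation matrix
P = sinkhorn([[9.0, 0.0], [0.0, 9.0]])
print([[round(x, 3) for x in row] for row in P])   # [[1.0, 0.0], [0.0, 1.0]]
```

The output is doubly stochastic by construction; sharpening the logits (dividing by a small temperature $\tau$) drives it toward a 0/1 permutation matrix.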
It can be shown that \begin{align} \lim_{n\to\infty} \text{GS}^n(\Ab^w) &= \argmax_{\Pb \in \Pcal_{|\nbr(w)|}}\text{Tr}\left[\Pb^\top \Ab^w\right]. \end{align} $\text{GS}^n$ thus represents a recursive differentiable operator that permits backpropagation of $\loss$ to $\set{\Ab^w}$. In practice, $n$ is a finite hyperparameter, the larger it is, the closer the output to a `hard' permutation. Allocating a separate unconstrained seed matrix $\Ab^w$ for each node $w$ would lead to an impractically large number of parameters. Therefore, we express $\Ab^w$ using a globally shared network $\Tb_\phi$ with model weights $\phi$, and the per-node feature matrix $\Fb_w$ already available. I.e., we define \begin{align} \Ab^w := \Tb_\phi(\Fb_w / \tau), \label{eq:TIntro} \end{align} where $\tau>0$ is a temperature hyperparameter that encourages $\text{GS}^n(\Ab^w)$ toward a `harder' soft permutation. The above steps allow us to rewrite optimization \eqref{eq:MinMaxOptSoft} in terms of $\theta$ and $\phi$ in the form $\min_\theta \max_\phi \loss(\theta; \phi)$. After completing the min-max optimization, the embedding $\xb_u$ of a node $u$ can be computed using some arbitrary neighbor permutation. By design, the impact on $\similarity(u,v)$ is small when different permutations are used. \section{Scalable LP by Hashing Representations} \label{sec:LSH} At this point, we have obtained representations $\xb_u$ for each node~$u$ using \textsc{Perm\-Gnn}\xspace. Our next goal is to infer some number of most likely future edges. \paragraph{Prediction using exhaustive comparisons} Here, we first enumerate the scores for all possible potential edges (the current non-edges) and then report top-$K$ neighbors for each node. Since most real-life social networks are sparse, potential edges can be $\Theta(|\Vcal|^2)$ in number. Scoring all of them in large graphs is impractical; we must limit the number of comparisons between potentially connecting node pairs to be as small as possible. 
\subsection{Data-Oblivious LSH with Random Hyperplanes} \label{sec:Hyperplanes} When for two nodes $u$ and $v$, $\similarity(u,v)$ is defined as $\cos(\xb_u, \xb_v)$ with $\xb_\bullet\in\RR^D$, the classic random hyperplane LSH can be used to hash the embeddings $\xb_\bullet$. Specifically, we first draw $H$ uniformly random hyperplanes passing through the origin in the form of their unit normal vectors $\bm{n}_h \in \mathbb{R}^D, h\in[H]$~\cite{Charikar2002lsh}. Then we set $b_u[h] = \sign(\bm{n}_h \cdot \xb_u) \in \pm1$ as a 1-bit hash and $\bm{b}_u \in \pm1^H$ as the $H$-bit hash code of node~$u$. Correspondingly, we set up $2^H$ hash buckets with each node going into one bucket. If the buckets are balanced, we expect each to have $N/2^H$ nodes. Now we limit pairwise comparisons to only node pairs within each bucket, which takes $N^2/2^H$ pair comparisons. By letting $H$ grow slowly with $N$, we can thus achieve sub-quadratic time. However, such a hashing method is data oblivious--- the hash codes are not learned from the distribution of the original embeddings $\xb_\bullet$. It performs best when the embeddings are uniformly dispersed in the $D$-dimensional space, so that the random hyperplanes can evenly distribute the nodes among several hash buckets. \subsection{Learning Data-Sensitive Hash Codes} \label{sec:HashOpt} To overcome the above limitation of random hyperplane based hashing, we devise a data-driven learning of hash codes as explored in other applications~\citep{WeissTF2009SpectralHashing}. Specifically, we aim to design an additional transformation of the vectors $\set{\xb_u}$ into compressed representations $\set{\bm{b}_u}$, with the aim of better balance across hash buckets and reduced prediction time. \subsubsection*{Hashing/compression network.} In what follows, we will call the compression network $C_\psi: \mathbb{R}^D\to [-1,1]^H$, with model parameters~$\psi$. 
We interpret $\sign\big(C_\psi(\xb_u)\big)$ as the required binary hash code $\bb_u\in\set{-1,+1}^H$, with the surrogate $\tanh(C_\psi(\xb_u))$, to be used in the following smooth optimization: \begin{multline} \hspace{-2mm}\min_\psi \textstyle \frac{\alpha}{|\Vcal|} \sum_{u\in \Vcal} \big| \bm{1}^\top \tanh(C_\psi(\xb_u)) \big| \\[-1ex] + \textstyle \frac{\beta}{|\Vcal|} \sum_{u\in\Vcal} \Big\| \big| \tanh(C_\psi(\xb_u)) \big| - \bm{1} \Big\|_1 \\ + \textstyle \frac{\gamma}{|\overline{E}|} \sum_{(u,v) \in \overline{E}} \left|\tanh(C_\psi(\xb_u)) \cdot \tanh(C_\psi(\xb_v)) \right| \label{eq:HashOpt} \end{multline} Here, $\overline{E}$ is the set of non-edges and $\alpha, \beta, \gamma \in (0,1)$, with $\alpha+\beta+\gamma=1$, are tuned hyperparameters. The final binary hash code is $\bb_u=\text{sign}(C_\psi(\xb_u))$. The salient terms in the objective above seek the following goals. \noindent{\bfseries \slshape Bit balance:} If each bit position has as many $-1$s as $+1$s, that bit evenly splits the nodes. The term $\big| \bm{1}^\top \tanh(C_\psi(\xb_u)) \big|$ tries to bit-balance the hash codes. \noindent{\bfseries \slshape No sitting on the fence:} The optimizer is prevented from setting $\bm{b}=\bm{0}$ (the easiest way to balance it) by including a term $\sum_h \big| |\bm{b}[h]| - 1 \big| = \big\| |\bm{b}| - \bm{1} \big\|_1$. \noindent{\bfseries \slshape Weak supervision:} The third term encourages currently unconnected nodes to be assigned dissimilar bit vectors. \subsubsection*{Bucketing and ranking.} Note that we do not expect the dot product between the learned hash codes $\bm{b}_u \cdot \bm{b}_v$ to be a good approximation of $\cos(\xb_u,\xb_v)$; we merely expect that node pairs with large $\cos(\xb_u,\xb_v)$ will be found in the same hash buckets. We form the buckets using the recipe of \citet{GionisIM1999hash}. We adopt the high-recall policy that node-pair $u,v$ should be scored if $u$ and $v$ share at least one bucket.
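A minimal sketch of evaluating the objective in Eq.~\eqref{eq:HashOpt}, with a random linear map standing in for the trained network $C_\psi$ (an illustrative assumption, not our actual architecture):

```python
import numpy as np

def hash_loss(X, W, non_edges, alpha=0.4, beta=0.4, gamma=0.2):
    """Evaluate the three-term hashing objective with a linear stand-in
    C_psi(x) = W x for the trained compression network."""
    B = np.tanh(X @ W.T)                                  # soft codes in (-1, 1)^H
    balance = np.mean(np.abs(B.sum(axis=1)))              # bit balance per node
    fence = np.mean(np.abs(np.abs(B) - 1.0).sum(axis=1))  # push each |bit| -> 1
    sup = np.mean([abs(B[u] @ B[v]) for u, v in non_edges])  # decorrelate non-edges
    return alpha * balance + beta * fence + gamma * sup

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))       # node embeddings x_u (D = 6)
W = rng.normal(size=(4, 6))        # H = 4 hash bits
non_edges = [(0, 1), (2, 5), (3, 7)]
print(hash_loss(X, W, non_edges))
codes = np.sign(np.tanh(X @ W.T))  # final binary codes b_u
```

All three terms vanish exactly when every code is a balanced $\pm1$ vector and non-edge codes are orthogonal.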
Algorithm~\ref{algo:LshTopK} shows how the buckets are traversed to generate and score node pairs, then placed in a heap for retrieving top-$K$ pairs. Details can be found in the Appendix. \begin{algofig} \begin{tcolorbox}[colframe=gray!40,boxsep=0mm] \footnotesize \small \begin{algorithmic}[1] \State \textbf{Input}: Graph $\Gcal=(\Vcal, \Ecal)$; binary hash-codes $\set{\bb_u}$; query nodes $\Qcal$; the number ($K$) of nodes to be recommended per query node \State \textbf{Output}: Ranked recommendation list $R_q$ for all $q\!\in\!\Qcal$ \vspace{.5mm plus.1mm} \State initialize LSH buckets \For{$u\in\Vcal$} \State add $u$ to appropriate hash buckets \EndFor \For{$q\in\Qcal$} \State initialize score heap $H_q$ with capacity~$K$ \EndFor \For{each LSH bucket $B$} \For{$(u,v)\in B $} \If{$u\in\Qcal$} \State insert $\langle v, s(u,v)\rangle$ in $H_u$; prune if $|H_u|\!>\!K$ \EndIf \If{$v\in\Qcal$} \State insert $\langle u, s(u,v)\rangle$ in $H_v$; prune if $|H_v|\!>\!K$ \EndIf \EndFor \EndFor \vspace{.5mm plus.1mm} \For{$q\in\Qcal$} \State sort $H_q$ by decreasing score to get ranked list~$R_q$ \EndFor \vspace{.5mm plus.1mm} \State \textbf{return} $\set{R_q |q \in\Qcal}$ \end{algorithmic} \end{tcolorbox} \caption{Reporting ranked list of potential edges fast.} \label{algo:LshTopK} \end{algofig} \begin{table*}[t] \begin{center} \maxsizebox{0.81\hsize}{!}{ \begin{tabular}{l|ccccc|ccccc} \hline & \multicolumn{5}{c|}{\textbf{Mean Average Precision (MAP)}} & \multicolumn{5}{c}{\textbf{Mean Reciprocal Rank (MRR)}} \\ &Twitter & \text{Google+}\xspace & Cora & Citeseer & PB &Twitter & \text{Google+}\xspace & Cora & Citeseer & PB \\ \hline\hline AA & 0.727 & 0.321 & 0.457 & 0.477 & \best{0.252} & 0.904 & {0.553} & \best{0.535} & 0.548 & 0.508 \\ CN & 0.707 & 0.292 & 0.377 & 0.401 & 0.218 & \best{0.911} & 0.553 & 0.460 & 0.462 & \best{0.516} \\\hline Node2Vec & 0.673 & 0.330 & 0.448 & 0.504 & 0.182 & 0.832 & 0.551 & 0.484 & 0.546 & 0.333 \\ DeepWalk & 0.624 & 0.288 & 0.432 & 0.458 
& 0.169 & 0.757 & 0.482 & 0.468 & 0.492 & 0.303 \\\hline GraphSAGE & 0.488 & 0.125 & 0.393 & 0.486 & 0.077 & 0.638 & 0.233 & 0.425 & 0.523 & 0.156 \\ GCN & 0.615 & 0.330 & 0.408 & 0.464 & 0.200 & 0.789 & 0.482 & 0.444 & 0.505 & 0.345 \\ Gravity & \best{0.735} & 0.360 & 0.407 & 0.462 & 0.193 & 0.881 & 0.540 & 0.438 & 0.518 & 0.330 \\\hline \textsc{Perm\-Gnn}\xspace & \best{0.735} & \best{0.385} & \best{0.480} & \best{0.560} & 0.220 & 0.880 & \best{0.581} & 0.524 & \best{0.600} & 0.397 \\ \hline % \end{tabular} } \end{center} \caption{MAP and MRR for all LP algorithms (\textsc{Perm\-Gnn}\xspace and baselines) on the ranked list of all potential edges ($K=\infty$) across all five datasets, with 20\% test set. Numbers in bold font indicate the best performer. } \label{tab:main-map-mrr} \end{table*} \section{Experiments} \label{sec:Expt} We report on a comprehensive evaluation of \textsc{Perm\-Gnn}\xspace{} and its accompanying hashing strategy. Specifically, we address the following research questions. \begin{itemize*} \item[\textbf{RQ1:}] How does the LP accuracy of \label{rq:prediction}\textsc{Perm\-Gnn}\xspace{} compare with classic and recent link predictors? Where are the gains and losses? \item[\textbf{RQ2:}] How \label{rq:PermGnnVsMultiPerm}does \textsc{Perm\-Gnn}\xspace{} compare with brute-force sampling of neighbor permutations? \item[\textbf{RQ3:}] Exactly \label{rq:permInvariance}where in our adversarially trained network is permutation insensitivity getting programmed? \item[\textbf{RQ4:}] Does \label{rq:hashingBasics}the hashing optimization reduce prediction time, compared to exhaustive computation of pairwise scores? 
\end{itemize*} \subsection{Experimental Setup} \subsubsection*{Datasets.} We consider five real-world datasets: \begin{enumerate*}[label=(\arabic*)] \item Twitter~\cite{leskovec2012learning}, \item \text{Google+}\xspace~\cite{leskovec2010kronecker}, \item Cora~\cite{getoor2005link,sen2008collective}, \item Citeseer~\cite{getoor2005link,sen2008collective} and \item PB~\cite{ackland2005mapping}. \end{enumerate*} \subsubsection*{Baselines.} We compare \textsc{Perm\-Gnn}\xspace{} with several hashable LP algorithms. Adamic Adar (AA) and Common Neighbors (CN) \cite{LibenNowellK2007LinkPred} are classic unsupervised methods. Node2Vec \cite{grover2016node2vec} and DeepWalk \cite{perozzi2014deepwalk} are node embedding methods based on random walks. Graph Convolutional Network (GCN) \cite{kipf2016variational}, GraphSAGE \cite{hamilton2017inductive}, and Gravity~\cite{Salha+2019gravity} are node embedding methods based on GNNs. We highlight that SEAL~\citep{ZhangC2018LinkPredGNN} does not readily lend itself to a hashable LP mechanism, and therefore we do not compare against it in this paper. \subsubsection*{Evaluation protocol.} Similar to the evaluation protocol of \citet{BackstromL2011SRW}, we partition the edge (and non-edge) sets into training, validation and test folds as follows. For each dataset, we first build the set of query nodes $\Qcal$, where each query node has at least one triangle around it. Then, for each $q\in\Qcal$, in the original graph, we partition the neighbors $\nbr(q)$ and the non-neighbors $\nnbr(q)$ which are within 2-hop distance from $q$ into 70\% training, 10\% validation and 20\% test sets, where the node pairs are sampled uniformly at random. We disclose the resulting sampled graph induced by the training and validation sets to the LP model. Then, for each query $q\in\Qcal$, the trained LP model outputs a top-$K$ list of potential neighbors from the test set.
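The per-query average precision and reciprocal rank used in this evaluation follow the standard definitions; a minimal sketch (tie-breaking details of our actual scripts may differ):

```python
def average_precision(ranked, relevant):
    # AP of one query's ranked list: mean of precision@k over ranks of hits,
    # normalized by the total number of relevant items
    hits, precisions = 0, []
    for k, v in enumerate(ranked, start=1):
        if v in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def reciprocal_rank(ranked, relevant):
    # RR: inverse rank of the first relevant item (0 if none is retrieved)
    for k, v in enumerate(ranked, start=1):
        if v in relevant:
            return 1.0 / k
    return 0.0

print(average_precision(["b", "a", "c"], {"a", "c"}))  # (1/2 + 2/3) / 2
print(reciprocal_rank(["b", "a", "c"], {"a", "c"}))    # 1/2
```

MAP and MRR are then the averages of these quantities over all query nodes in $\Qcal$.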
Using ground truth, we compute the average precision (AP) and reciprocal rank (RR) of each top-$K$ list. Then we average over all query nodes to get mean AP (MAP) and mean RR (MRR). \subsection{Comparative Analysis of LP Accuracy} First, we address the research question \textbf{RQ1} by comparing LP accuracy of \textsc{Perm\-Gnn}\xspace against baselines, in terms of MAP and MRR across the datasets. \subsubsection*{MAP and MRR summary.} Table~\ref{tab:main-map-mrr} summarizes LP accuracy across all the methods. We make the following observations. \begin{enumerate*}[label=(\arabic*)] \item \textsc{Perm\-Gnn}\xspace outperforms all the competitors in terms of MAP on four of the five datasets; the exception is PB, where it is outperformed by AA. Moreover, in terms of MRR, it outperforms all the baselines on the \text{Google+}\xspace\ and Citeseer datasets. \item The performance of the GNN based methods is comparable for Cora and Citeseer. Due to its weakly supervised training procedure, the overall performance of GraphSAGE is the poorest among the GNN based methods. \item The classic unsupervised predictors, \ie, AA and CN, often beat some recent embedding models. AA is the best performer in terms of MAP on PB and in terms of MRR on Twitter. Since AA and CN encourage triad completion, which is a key factor in the growth of several real-life networks, they often serve as good link predictors~\cite{SarkarCM2011LPembed}. \item The random walk based embeddings, \emph{viz.} Node2Vec and DeepWalk, show moderate performance. Notably, Node2Vec is the second best performer on Citeseer.
\end{enumerate*} \begin{figure}[t] \centering \subfloat[\text{Google+}\xspace]{ \includegraphics[width=.21\textwidth]{FIG/drill_down_Gplus_1_new.pdf} }\hspace{2mm} \subfloat[Citeseer]{ \includegraphics[width=.21\textwidth]{FIG/drill_down_citeseer_new.pdf} } \caption{Query-wise wins and losses in terms of $\text{AP}(\textsc{Perm\-Gnn}\xspace)-\text{AP}(\text{baseline})$, the gain (above x-axis) or loss (below x-axis) of AP of \textsc{Perm\-Gnn}\xspace with respect to competitive baselines. Queries $\Qcal$ are sorted by decreasing gain of \textsc{Perm\-Gnn}\xspace{} along the $x$-axis. } \label{fig:MapMrrDiff} \end{figure} \subsubsection*{Drill-down.} Next, we compare ranking performance at individual query nodes. For each query (node) $q$, we measure the gain (or loss) of \textsc{Perm\-Gnn}\xspace{} in terms of average precision, \ie, $\text{AP}(\textsc{Perm\-Gnn}\xspace)-\text{AP}(\text{baseline})$ for three competitive baselines, across \text{Google+}\xspace and Citeseer datasets. From Figure~\ref{fig:MapMrrDiff}, we observe that, for \text{Google+}\xspace and Citeseer respectively, \textsc{Perm\-Gnn}\xspace\ matches or exceeds the baselines for 60\% and 70\% of the queries. \begin{figure}[t] \centering \subfloat[Twitter] {\includegraphics[width=.21\textwidth]{FIG/permGNNvsMultiPerm_Twitter_3_new.pdf}}\hspace{2mm} \subfloat[\text{Google+}\xspace] {\includegraphics[width=.21\textwidth]{FIG/permGNNvsMultiPerm_Gplus_1_new.pdf}} \caption{Validation MAP against training epochs for Twitter and \text{Google+}\xspace. \textsc{Perm\-Gnn}\xspace converges faster than MultiPerm. } \label{fig:PermGnnVsMultiPerm} \end{figure} \subsection{\textsc{Perm\-Gnn}\xspace vs.\ Sampling Permutations} Next, we address research question \textbf{RQ2} by establishing the utility of \textsc{Perm\-Gnn}\xspace{} against its natural alternative \textbf{MultiPerm}, in which a node embedding is computed by averaging permutation-sensitive representations over several sampled permutations. 
Figure~\ref{fig:PermGnnVsMultiPerm} shows that \textsc{Perm\-Gnn}\xspace is ${>}15{\times}$ and ${>}4.5{\times}$ faster than the permutation averaging based method for the Twitter and \text{Google+}\xspace datasets. MultiPerm also occupies significantly larger RAM than \textsc{Perm\-Gnn}\xspace. % \begin{figure}[t] \centering \subfloat[Cora]{\includegraphics[width=.20\textwidth]{FIG/TrLoss_vs_ktau_cora_new.pdf}}\hspace{2mm} \subfloat[Citeseer]{\includegraphics[width=.20\textwidth]{FIG/TrLoss_vs_ktau_citeseer_new.pdf}} \caption{Effect of neighbor order perturbation on training loss. As we move away from the canonical permutation $\pi_0$, training loss increases steeply for 1Perm, but remains roughly stable for MultiPerm and \textsc{Perm\-Gnn}\xspace.} \label{fig:KtauVsMap} \end{figure} \subsection{Permutation Invariance of \textsc{Perm\-Gnn}\xspace} Here, we answer the research question \textbf{RQ3}. To that end, we first train \textsc{Perm\-Gnn}\xspace along with its two immediate alternatives: (i)~\textbf{1Perm}, where a vanilla LSTM is trained with a single canonical permutation~$\pib_0$ of the nodes; and (ii)~\textbf{MultiPerm}, where an LSTM is trained using several sampled permutations of the nodes. Then, given a different permutation $\pib$, we compute the node embedding $\xb_{u;\pib}$ by feeding the corresponding sequence of neighbors $\pib(\nbr(u))$ (sorted by the node IDs of $\nbr(u)$ assigned by~$\pib$) as input to the trained models. Finally, we use these embeddings for LP and measure the relative change in training loss. Figure~\ref{fig:KtauVsMap} shows a plot of $(\textsc{loss}(\pib)-\textsc{loss}(\pib_0))/\textsc{loss}(\pib_0)$ against the correlation between $\pib$ and the canonical order $\pib_0$, measured in terms of Kendall's $\tau$, $\KTau(\pib,\pib_0)$. It reveals that 1Perm suffers a significant rise in training loss when the input node order $\pib$ substantially differs from the canonical order $\pib_0$, \ie, when $\KTau(\pib,\pib_0)$ is low.
Both MultiPerm and \textsc{Perm\-Gnn}\xspace turn out to be permutation-insensitive across a wide range of node orderings. To probe this phenomenon, we instrument the stability of $\fb, \yb, \xb$ under different permutations. Specifically, we define $\text{insensitivity}(\zb;\pib,\pib_0)=\sum_{u\in V}\textsc{sim}(\zb_{u;\pib},\zb_{u;\pib_0})/|V|$ for any vector or sequence $\zb$. We compute the insensitivity of the input sequence $\set{\fb_v : v\in\nbr(u)}$, the intermediate LSTM output $\set{\yb}$ and the final embedding $\xb_u$ with respect to different permutations $\pib$. Figure~\ref{fig:KtauVsInsensitivity} summarizes the results, and shows that as information flows through \textsc{Perm\-Gnn}\xspace{} stages, from input feature sequence to the final embeddings, the insensitivity of the underlying signals increases. Thus, our adversarial training smoothly turns permutation-sensitive input sequences into permutation-invariant node embeddings, without any explicit symmetric aggregator. \begin{figure}[t] \centering \subfloat[Cora]{\includegraphics[width=.20\textwidth]{FIG/fyx_vs_ktau_cora_new.pdf}}\hspace{2mm} \subfloat[Citeseer]{\includegraphics[width=.20\textwidth]{FIG/fyx_vs_ktau_citeseer_new.pdf}} \caption{Insensitivity of neighborhood features $\set{\fb_v \,|\, v\in\nbr(u)}$, LSTM output $\{\yb\}$, and the resultant node embeddings $\xb_u$ with respect to neighbor order permutations.} \label{fig:KtauVsInsensitivity} \end{figure} \begin{figure}[t] \centering \subfloat[Tensorized]{ \includegraphics[width=0.40\hsize]{FIG/Hashing_Link_Prediction_Tensorized_new.pdf}}\hspace{2mm} \subfloat[Non-tensorized]{ \includegraphics[width=0.40\hsize]{FIG/Hashing_Link_Prediction_Non_Tensorized_new.pdf}} \caption{{Running time for our LSH based scalable prediction, the random-hyperplane based LSH method, and exhaustive comparison.
}} \label{fig:TopkTimeHashing} \end{figure} \subsection{Performance of Hashing Methods} Finally, we address \textbf{RQ4} by studying the performance of our LSH method (Section~\ref{sec:HashOpt}). Specifically, we compare the time spent in similarity computation and heap operations by our hashing method against random hyperplane based hashing (Section~\ref{sec:Hyperplanes}) and against exhaustive computation of pairwise scores (a slow but ``relatively perfect'' baseline). Since vectorized similarity computation inside Torch may be faster than numpy, we provide results on both implementations. Figure~\ref{fig:TopkTimeHashing} summarizes results in terms of running time. It shows that: (1)~hashing using $C_\psi$ leads to considerable savings in reporting top-$K$ node-pairs with respect to both random hyperplane based hashing {and exhaustive enumeration}, and (2)~the gains increase with increasing graph sizes (from \text{Google+}\xspace{} to PB). Because LSH-based top-$K$ retrieval may discard relevant nodes after rank $K$, it is more appropriate to study ranking degradation in terms of the decrease in NDCG (rather than MAP). Suppose we insist that NDCG be at least 85, 90, or 95\% of exhaustive NDCG. How selective is a hashing strategy, in terms of the factor of query speedup (because of buckets pruned in Algorithm~\ref{algo:LshTopK})? Table~\ref{tab:LshNdcg} shows that our hashing method provides better pruning than random hyperplanes for a given level of NDCG degradation.
\begin{table}[ht] \centering \centering \maxsizebox{\hsize}{!}{ \tabcolsep 2pt \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{|c|}{Minimum NDCG as \% of exhaustive NDCG} \\ \cline{2-7} & \multicolumn{2}{|c|}{85\%} & \multicolumn{2}{c|}{90\%} & \multicolumn{2}{c|}{95\%} \\ \cline{2-7} & Twitter & \text{Google+}\xspace & Twitter & \text{Google+}\xspace & \multicolumn{1}{c|}{Twitter} & \text{Google+}\xspace \\ \hline Our Hashing & 6.67 & 12.5 & 6.67 & 10 & \multicolumn{1}{c|}{6.25} & 5.5 \\ \hline RH & 1.78 & 3.45 & 1.78 & 3.45 & \multicolumn{1}{c|}{1.78} & 3.45 \\ \hline \end{tabular} } \caption{Speedup achieved by different hashing methods under various permitted NDCG degradation limits. } \label{tab:LshNdcg} \end{table} \section{Conclusion} \label{sec:End} We presented \textsc{Perm\-Gnn}\xspace, a novel LP formulation that combines a recurrent, order-sensitive graph neighbor aggregator with an adversarial generator of neighbor permutations. \textsc{Perm\-Gnn}\xspace{} achieves LP accuracy comparable to or better than sampling a number of permutations by brute force, and is faster to train. \textsc{Perm\-Gnn}\xspace{} is also superior to a number of LP baselines. In addition, we formulate an optimization to map \textsc{Perm\-Gnn}\xspace's node embeddings to a suitable locality-sensitive hash, which greatly speeds up reporting of the most likely edges. It would be interesting to extend \textsc{Perm\-Gnn}\xspace{} to other downstream network analyses, \eg, node classification, community detection, or knowledge graph completion. \section*{Acknowledgements} Partly supported by an IBM AI Horizons Grant. Thanks to Chitrank Gupta and Yash Jain for helping rectify an error in an earlier evaluation method. \begingroup
\section{Introduction} The estimation of the parameters of a stochastic process on the basis of its random sampling (i.e.\ the process is observed at random times) has received wide attention in the past. Such a problem is well motivated by practical considerations. Indeed, measuring instruments (classical or modern, such as satellites) may introduce random disturbances to the data. For example, transaction data in finance arrive at irregular time intervals (see e.g. \cite{ER}), and the same happens with biological signals in medicine, such as heart rate (see e.g. \cite{BB}). Other examples of random models observed at irregular, possibly random, times can be found, among many others, in climatology (see \cite{C1}, \cite{C2}) or computer science. Although quite natural, the hypothesis of random sampling for stochastic models leads to more complex estimators and, in general, some particular choices for the random observation times are considered in the literature. For instance, in \cite{DY} the authors studied a diffusion process observed at independent Poisson times, in \cite{J} the situation when the $i$th observation depends on the previous $i-1$ observations is considered, while in \cite{Vi1}, \cite{Vi2} the authors used the so-called jittered and renewal sampling. Our purpose is to analyse the asymptotic properties of the least squares estimator (LSE in the sequel) for a simple regression model driven by a standard Wiener process, i.e. \begin{equation}\label{1} Y_{\tau_{i+1}} = a\tau _{i+1} + W_{\tau_{i+1}} - W_{\tau_{i}}, \hskip0.5cm i=0,\ldots, N-1, \end{equation} where $\tau_{i}, i=0,\ldots, N$ are random times, independent of $W$, with $\tau_{0}:=0$.
We choose to work with the random sampling proposed by \cite{Vi1}, \cite{Vi2}, which includes two types of randomness: jittered sampling (the observation times are $\frac{i}{N}$, $i=1,\ldots, N$, perturbed by a ``small'' uniform random variable) or renewal sampling (the $i$th observation time is a sum of $i$ independent positive random variables, so the randomness is, in a sense, progressive). We construct a least squares estimator (LSE) for the drift parameter $a$ of the model (\ref{1}) and then analyse its asymptotic properties. Our proofs are based on a sharp calculation of the mean square of the estimator and of its conditional distribution given the random times, which is Gaussian. We also use some results of \cite{araya2019}, where the asymptotic behavior of the denominator of the LSE is obtained. We organize our paper as follows. In Section 2 we describe the model and include a discussion about the number of random observations used to define the estimator. In Section 3 we calculate exactly the mean square norm of the estimator when the number of observations is large enough, while in Section 4 we give the asymptotic distribution of the LSE. Many of our theoretical results are illustrated by numerical simulations in Section 5. \section{Preliminaries} Let us now introduce the random times considered in our model (\ref{1}). Our examples are inspired by \cite{araya2019} and \cite{Vi1}. \subsection{Random times} Let $T=1$ and let $ \tau = \lbrace \tau_{i}; i=0,\ldots,N \rbrace$ be a strictly increasing sequence of random points over time, where $N$ is the last integer such that $\tau_{N-1} \leq 1$, which exhibits one of the following two features. \begin{enumerate} \item { \bf Jittered sampling (JS)}. First, we assume that we observe a certain process at regular times with period $\delta = 1/N >0$, but contaminated by an additive noise $\nu$ which represents possible measurement errors.
Then the sequence of random times $\tau_i, \ 0 \le i \le N$, is defined as \begin{eqnarray} \label{js} \tau_{i, N}=:\tau_{i} = \dfrac{i}{N} + \nu_{i,N}, \quad i = 1, \dots, N, \quad \mbox{and} \quad \tau_0 := 0, \end{eqnarray} where $\lbrace \nu_{i,N}; \ 1 \le i \le N \rbrace$ constitutes a triangular array of independent and identically distributed random variables with a common density function $g_{N}(t)$, depending on $N$, which is assumed to be symmetric on $\left[ -\frac{1}{2N}, \frac{1}{2N} \right]$. From now on, we assume the following about $\nu_{i,N}$: \begin{itemize} \item $\mathbb{E} \left[ \nu_{i,N} \right] = 0$, and \item $\mathbb{E} \left[ \nu^{2}_{i,N} \right]$ satisfies \begin{equation}\label{2d-1} \mathbb{E} \left[ \nu^{2}_{i,N} \right]= c_{1} \frac{1}{N ^ {2}} \mbox{ with } c_{1}>0. \end{equation} \end{itemize} Some distributions that satisfy the latter statement are the uniform distribution on $\left[ -\frac{1}{2N} , \frac{1}{2N} \right]$, the triangular distribution with parameters $\left( -\frac{1}{2N} , 0 , \frac{1}{2N} \right)$ and the raised cosine distribution with parameters $\mu=0$ and $s=\frac{1}{2N}$. For instance, $c_{1}= \frac{1}{12}$ when $g_{N}$ is the uniform density over the interval $\left[ -\frac{1}{2N} , \frac{1}{2N} \right]$ and $c_{1}=\frac{1}{24} $ when $g_{N}$ is the triangular density with parameters $\left( -\frac{1}{2N} , 0 , \frac{1}{2N} \right)$. \item { \bf Renewal sampling (RP)}. In this case, the sequence $\tau$ satisfies \begin{eqnarray} \tau_{i} = \sum_{j=1}^{i} t_j, \quad i=1,2,\ldots, \quad \mbox{and} \quad \tau_{0} := 0, \label{rp} \end{eqnarray} where $ \lbrace t_j , 1 \le j \rbrace$ is a sequence of independent and identically distributed random variables with a common distribution function $G$ supported in $[0,\infty)$. In this work we assume that $G$ is the exponential distribution with parameter $\lambda =N$, i.e.
it has density function $g(t)= Ne ^ {-Nt} 1_{(0, \infty) }(t).$ The random times $\tau_{i} $ given by (\ref{rp}) actually depend also on $N$, but we still use the notation $\tau_{i,N}= : \tau_{i}$ for simplicity. \end{enumerate} \subsection{The number of observations}\label{sec22} Assume that we observe a stochastic process $Y$ at times $\tau_{1},\ldots, \tau _{ [\alpha N]}$ with $\tau_{i} <\tau _{i+1} $ for every $i\geq 1$ and with $0<\alpha \leq 1$. We want to ensure that our observation period remains, almost surely, inside the interval $[0, T]$ with $T=1$. That is, we would like the last observation $ \tau _{[\alpha N ]} $ to be almost surely less than $1$ for $N$ sufficiently large. In the case of jittered sampling, this is always true for $\alpha =1$, due to our hypothesis (\ref{2d-1}). Indeed, $\tau_{N-1} = \frac{N-1}{N} + \nu _{N-1, N}$ and $\tau_{N} = 1 + \nu _{N, N}$, and then $P (\tau_{N-1} >1)=0$ while $P(\tau_{N} >1)=\frac{1}{2}$, so $\tau_{N-1}$ is almost surely in the observation interval $[0,1]$. In this case we assume $\alpha=1$ and $Y_{\tau_{N}}=0$. On the other hand, in the situation of renewal sampling, we have $\tau _{N} \sim G (N, N) $ (by $G(a, \lambda)$ we denote the Gamma law with parameters $a>0$, $\lambda >0$) and, by a result of \cite{gaut1977}, $$\mathbb{P}(\tau_{N}>1)= \mathbb{P}( G(N, N) >1) \xrightarrow[N \to \infty]{} \frac{1}{2}.$$ In order to be sure that our observation period remains inside the interval $[0,1]$, the price to pay is to consider a slightly smaller number of observations, i.e.\ to take $\alpha \approx 1$ (which means that $\alpha<1$ is arbitrarily close to $1$). Then $\tau_{\alpha N} \sim G(\alpha N , N)$, which can be written as $\tau_{\alpha N} \sim G(M, \tilde{\alpha} M)$, where $M = \alpha N$ and $\tilde{\alpha} = 1/ \alpha$.
We have \begin{align*} \mathbb{P} \left( \tau_{\alpha N} > 1 \right) = \int_{1}^{\infty} \dfrac{(\tilde{\alpha} M )^{M}}{\Gamma(M)} x^{M-1} e^{-\tilde{\alpha} M x} dx, \end{align*} and by the change of variable $y = \tilde{\alpha} M x$, the last equation can be written as \begin{align*} \mathbb{P} \left( \tau_{\alpha N} > 1 \right) &= \dfrac{(\tilde{\alpha} M )^{M}}{\Gamma(M)} \int_{\tilde{\alpha} M}^{\infty} \left( \dfrac{y}{\tilde{\alpha} M} \right) ^{M-1} e^{-y} dy \\ &= \dfrac{\tilde{\alpha} M}{\Gamma(M)} \int_{\tilde{\alpha} M}^{\infty} y^{M-1} e^{-y} dy = \dfrac{\tilde{\alpha} M}{\Gamma(M)} \Gamma(M, \tilde{\alpha} M). \end{align*} By the result obtained by \cite{gaut1977} on the limit behavior of $\Gamma(M, \tilde{\alpha} M)$ for $\tilde{\alpha} > 1$, the last expression satisfies \begin{align*} \mathbb{P} \left( \tau_{\alpha N} > 1 \right) & \sim \dfrac{\tilde{\alpha} M}{\Gamma(M)} \cdot \dfrac{(\tilde{\alpha} M)^{M} e^{-\tilde{\alpha} M}}{(1 + \tilde{\alpha}) M} = \dfrac{\tilde{\alpha} ^{M+1} M^{M+1} e^{-\tilde{\alpha} M}}{(1 + \tilde{\alpha}) M !}, \end{align*} and from Stirling's approximation we get \begin{align*} \mathbb{P} \left( \tau_{\alpha N} > 1 \right) & \sim \dfrac{\tilde{\alpha} ^{M+1} M^{M+1} e^{-\tilde{\alpha} M}}{(1 + \tilde{\alpha}) \sqrt{2 \pi M} \left( \frac{M}{e} \right)^{M}} = \dfrac{\tilde{\alpha} ^{M+1} M^{1 - 1/2} e^{-\tilde{\alpha} M} e^{M}}{(1 + \tilde{\alpha}) \sqrt{2 \pi} } \\ &=\dfrac{\tilde{\alpha} ^{M+1} M^{1/2} e^{-M (\tilde{\alpha} -1 )}}{(1 + \tilde{\alpha}) \sqrt{2 \pi} } = \dfrac{\tilde{\alpha} ^{\alpha N+1} M^{1/2} e^{-\alpha N (\tilde{\alpha} -1 )}}{(1 + \tilde{\alpha}) \sqrt{2 \pi} } , \end{align*} and recalling that $M = \alpha N$, the last quantity goes to zero as $N\to \infty$ for $\alpha <1$, since $\tilde{\alpha} e^{1-\tilde{\alpha}}<1$ when $\tilde{\alpha}>1$.
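These two limits are easy to check by simulation: sampling $\tau_{\alpha N} \sim G(\alpha N, N)$ directly, the tail probability $\mathbb{P}(\tau_{\alpha N}>1)$ vanishes for $\alpha < 1$ and approaches $1/2$ for $\alpha = 1$. A sketch (sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 2000, 200_000

def tail_prob(alpha):
    # tau_{alpha N} ~ G(alpha*N, N): Gamma with shape alpha*N and rate N
    # (numpy parameterizes the Gamma law by scale = 1/rate)
    samples = rng.gamma(shape=alpha * N, scale=1.0 / N, size=trials)
    return (samples > 1.0).mean()

print(tail_prob(0.9))  # essentially 0: the last observation stays in [0, 1]
print(tail_prob(1.0))  # close to 1/2, matching P(G(N, N) > 1) -> 1/2
```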
Then, for every $\alpha \approx 1$ and for every $ \varepsilon >0$ there exists a set $\Omega_{\alpha, \varepsilon}\subset \Omega$ such that $\mathbb{P}(\Omega_{\alpha, \varepsilon} ) >1-\varepsilon $ and for every $\omega \in \Omega _{\alpha, \varepsilon}$ we have $\tau_{[\alpha N]} (\omega) \leq 1$ when $N$ is sufficiently large. We then assume that, in the renewal case, $\alpha \approx 1$ and we always work on the space $\Omega_{\alpha, \varepsilon} .$ A similar procedure was considered by \cite{Mi}, Theorem 3.4.1. Notice that several authors (see e.g. \cite{Vi1}) consider that the model (\ref{1}) is observed at times $\tau_{1},\ldots, \tau_{N(1)}$, where $N(1)$ is the index of the last observation time contained in the interval $[0, 1]$. Instead, we prefer to work on a smaller probability space (but still very close to $\Omega$) which guarantees that the entire observation period is contained in the unit interval $[0,1]$. \section{Least squares estimator} Let us fix $0<\alpha \leq 1$ and denote by $N_{\alpha }= [\alpha N]$ the number of observations. Consider the model \begin{equation} \label{modelo} Y_{\tau_{i+1}} = a \tau_{i+1} + \Delta W_{\tau_{i+1}}, \hskip0.5cm i=0,\ldots, N_{\alpha}-1, \end{equation} where $\Delta W_{\tau_{i+1}}= W_{\tau_{i+1}}-W_{\tau _{i}}$, with $0<\alpha \leq 1$. Actually, throughout this work we assume $\alpha =1$ in the jittered sampling case and $\alpha \approx 1$ in the renewal sampling case; see the discussion in Section \ref{sec22}. In Figures \ref{njs1} and \ref{njs2} we illustrate the behavior of the noise in (\ref{modelo}) at the random times (\ref{js}) and (\ref{rp}) (which appears to be similar to the behavior of the Brownian increment itself).
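For instance, the sampling schemes (\ref{js}) and (\ref{rp}) and the Brownian increments entering (\ref{modelo}) can be simulated as follows (uniform jitter is one admissible choice of $g_N$):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Jittered sampling (JS): tau_i = i/N + nu_{i,N}, nu uniform on [-1/(2N), 1/(2N)]
tau_js = np.arange(1, N + 1) / N + rng.uniform(-1 / (2 * N), 1 / (2 * N), N)

# Renewal sampling (RP): tau_i = t_1 + ... + t_i with t_j ~ Exp(rate N)
tau_rp = np.cumsum(rng.exponential(scale=1.0 / N, size=N))

def brownian_increments(tau):
    # Delta W_{tau_{i+1}} = W_{tau_{i+1}} - W_{tau_i} ~ N(0, tau_{i+1} - tau_i)
    dt = np.diff(np.concatenate(([0.0], tau)))
    return rng.normal(0.0, np.sqrt(dt))

dW = brownian_increments(tau_js)
print(tau_js[:3], tau_rp[:3], dW[:3])
```

Note that the jittered grid is automatically nondecreasing, since consecutive jitters can shift each gap $1/N$ by at most $\pm 1/N$.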
The LSE for the drift parameter $a$ in the model (\ref{1}) is obtained in a standard way, by minimizing the function $f(a)= \sum_{i=0}^ {N_{\alpha}-1} \left( Y_{\tau_{i+1}} - a \tau_{i+1}\right) ^ {2}, $ giving \begin{align} \hat{a}_{N} &= \dfrac{\displaystyle \sum_{i=0}^{N_{\alpha}-1} \tau_{i+1} Y_{\tau_{i+1}}}{\displaystyle \sum_{i=0}^{N_{\alpha}-1} \tau_{i+1}^2} \label{a-jt} \end{align} for both the jittered sampling (JS) and renewal sampling (RS) cases. From (\ref{modelo}) and (\ref{a-jt}) we immediately have \begin{align} \hat{a}_{N} - a &= \dfrac{\displaystyle \dfrac{1}{N} \sum_{i=0}^{N_{\alpha}-1} \tau_{i+1} \Delta W_{\tau_{i+1}}}{\displaystyle \dfrac{1}{N} \sum_{i=0}^{N_{\alpha}-1} \tau_{i+1}^2} := \dfrac{A_N}{D_N}, \quad \label{DN-js} \end{align} where, for every $N\geq 1$, \begin{equation} \label{AD} A_{N}= \dfrac{1}{N} \sum_{i=0}^{N_{\alpha}-1} \tau_{i+1} \Delta W_{\tau_{i+1}} \mbox{ and } D_{N}=\dfrac{1}{N} \sum_{i=0}^{N_{\alpha}-1} \tau_{i+1}^2. \end{equation} Our purpose is to analyse the asymptotic properties of the LSE (\ref{a-jt}), in particular its asymptotic normality. The denominator of the expression (\ref{DN-js}) has already been studied by \cite{araya2019}. Let us recall their result (see Lemma 3.2 of \cite{araya2019}). \begin{prop}\label{pp1} Let $D_{N}$ be given by (\ref{AD}). Then $D_{N}$ converges almost surely, as $N\to \infty$, to $\frac{\alpha^{3}}{3}$. \end{prop} Actually, the result of \cite{araya2019} has been obtained for $\alpha=1$, but after inspecting the proof, it is clear that the same argument holds for every $\alpha \in (0, 1)$. Therefore, in order to obtain the asymptotic behavior of the LSE, we need to analyse the sequence $A_{N}$ in (\ref{DN-js}). A first step in this direction is to evaluate the $L^ {2}(\Omega)$-norm of $A_{N}$ when $N$ is large.
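The estimator (\ref{a-jt}) and its consistency are easy to reproduce numerically. The sketch below (Python; the function name and parameter values are ours, with uniform jitter as an illustrative choice and $\tau_{0}=0$ by convention) simulates the model (\ref{modelo}) under jittered sampling and returns the least squares estimate.

```python
import math
import random

def simulate_lse(a, N, rng):
    """Simulate Y_{tau_{i+1}} = a*tau_{i+1} + (W_{tau_{i+1}} - W_{tau_i})
    on jittered sampling times (alpha = 1) and return
    hat a_N = sum tau*Y / sum tau^2."""
    num = den = prev = 0.0
    for j in range(1, N + 1):
        t = j / N + rng.uniform(-0.5, 0.5) / N   # tau_j, always > prev
        dW = rng.gauss(0.0, math.sqrt(t - prev))  # Brownian increment
        y = a * t + dW
        num += t * y
        den += t * t
        prev = t
    return num / den

a_hat = simulate_lse(2.0, 5000, random.Random(42))
```

With $N=5000$ the estimate falls within a few multiples of $\sqrt{3}/N$ of the true value $a=2$, in line with the rate obtained below.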
\begin{lemma}\label{ll1} Let $A_{N}$ be given by \eqref{DN-js}. Then, whether the $\tau_{i}$ are defined by (\ref{js}) or by (\ref{rp}), it holds that \begin{equation*} \mathbb{E} \left| N A_{N} \right| ^{2} \xrightarrow[N \to \infty]{} \frac{1}{3}\alpha ^ {3}. \end{equation*} \end{lemma} Although the above result is the same in the JS and RS cases, the proof is different. While in the JS case the limit is given by the ``deterministic part'' of the times (\ref{js}), in the RS case there is no deterministic part and both summands in (\ref{E2-RS}) contribute to the limit. The proofs can be found in the appendix. \section{Limit distribution of the LSE} Lemma \ref{ll1} shows that the sequence $(A_{N}) _{N\geq 1}$ given by (\ref{AD}) converges in $L^ {2}(\Omega)$ to zero as $N\to \infty$. We can also show that $A_{N}$ converges to zero in $L^ {p}(\Omega)$ for every $p\geq 2$ and, by a Borel-Cantelli argument, we get its almost sure convergence to zero. Indeed, via conditioning on the sampling times, $$\mathbb{E} \vert A_{N}\vert ^ {p} = \mathbb{E} \left[ \mathbb{E} \left[ \vert A_{N}\vert ^ {p}\, \Big| \, \tau \right] \right]=\mathbb{E} \left[ g( \tau _{1},\ldots, \tau_{N})\right]$$ with, for $x_{1}< x_{2}<\ldots< x_{N}$, $$g(x_{1},..., x_{N})= \mathbb{E} \left| \frac{1}{N}\sum_{i=0} ^ {N-1} x_{i+1} (W_{x_{i+1}}-W_{x_{i}} ) \right| ^ {p} \leq C_{p} \left(\mathbb{E} \left| \frac{1}{N}\sum_{i=0} ^ {N-1} x_{i+1} (W_{x_{i+1}}-W_{x_{i}} ) \right| ^ {2} \right) ^ {\frac{p}{2}},$$ since the above sum is Gaussian. This implies, together with Lemma \ref{ll1}, $$\mathbb{E} \vert A_{N}\vert ^ {p} \leq C_{p} N ^ {-p},$$ and thus, by Markov's inequality, for every $\gamma \in (0,1)$ and for every integer $p$ such that $p(1-\gamma)>1$, $$\sum_{N\geq 1} \mathbb{P}( A_{N} > N ^ {-\gamma} ) \leq C_{p} \sum_{N\geq 1} N ^ {(\gamma-1)p} <\infty.$$ Then, via Proposition \ref{pp1}, we obtain the consistency of the LSE (\ref{a-jt}). Let us now study the asymptotic limit in distribution of $A_{N}$.
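The second-moment behavior in Lemma \ref{ll1} can also be checked by Monte Carlo. The sketch below (Python; names and parameter values are ours) estimates $\mathbb{E}|NA_{N}|^{2}$ in the renewal sampling case and compares it with $\frac{\alpha^{3}}{3}$.

```python
import math
import random

def second_moment_NAN(alpha, N, reps, seed=3):
    """Monte Carlo estimate of E|N A_N|^2, where
    N A_N = sum_{i=0}^{N_alpha - 1} tau_{i+1} (W_{tau_{i+1}} - W_{tau_i})
    and tau_j = (X_1 + ... + X_j)/N with X_i ~ Exp(1) (renewal sampling)."""
    rng = random.Random(seed)
    n_obs = int(alpha * N)
    acc = 0.0
    for _ in range(reps):
        s = prev = val = 0.0
        for _ in range(n_obs):
            s += rng.expovariate(1.0)
            t = s / N
            val += t * rng.gauss(0.0, math.sqrt(t - prev))
            prev = t
        acc += val * val
    return acc / reps

m2 = second_moment_NAN(0.95, 300, 3000)   # compare with 0.95**3 / 3 ≈ 0.2858
```

The agreement is only up to a finite-sample correction of order $1/N$, consistent with the expansions in the appendix.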
To this end, we need to study the sequence $(Q_{N})_{N\geq 1}$ defined, for every $N\geq 1$, by \begin{align} \label{qn} Q_{N} &= \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \left( \tau_{j+1} - \tau_{j} \right). \end{align} This plays the role of the ``bracket'' of $A_{N}$. First, let us introduce some notation. If $\tau_{j}$ is given by (\ref{js}), then we can write it as \begin{equation}\label{3d-1} \tau_{j} = \frac{j}{N} + \frac{X_{j}}{N}, \end{equation} where $X_{j}$, $j=1, 2,\ldots$, are independent random variables and $X_{j}$ follows a symmetric probability distribution with support in $\left[ -\frac{1}{2} , \frac{1}{2} \right]$ and with density denoted by $g$. If $\tau_{j}$ is given by (\ref{rp}), then \begin{equation}\label{3d-2} \tau_{j} = \frac{1}{N} \sum_{i=1}^{j} X_{i}, \end{equation} where the $X_{i}$ are independent and follow the Gamma law $G(1, 1)$. In particular, $\tau_{i+1} - \tau_{i} = X_{i+1}/N$ is independent of $\tau_{l}$ for every $l\leq i$. \begin{prop} \label{L2-conv} Let $Q_{N}$ be given by (\ref{qn}). Then $$ \mathbb{E} \left[ \left( Q_{N} - \frac{\alpha^{3}}{3} \right)^{2} \right] \xrightarrow[N \to \infty]{} 0.$$ \end{prop} We now give the limit in distribution of the sequence $A_{N}$. \begin{prop} \label{normalidad} Let $A_{N}$ be given in \eqref{DN-js}. Then, whether the $\tau _{i}$ are defined by \eqref{js} or by \eqref{rp}, the following convergence in distribution holds: \begin{equation*} N A_{N} \xrightarrow[N \to \infty]{\mathcal{L}} N \left( 0, \frac{\alpha ^ {3}}{3} \right). \end{equation*} \end{prop} {\bf Proof: } We analyse the asymptotic behavior of the characteristic function of $NA_{N}$, denoted $\varphi_{NA_{N}}$ in the sequel.
Via conditioning, with $X_{j}$ given by (\ref{3d-1}) or by (\ref{3d-2}) and with $N'_{\alpha}= N-1$ in the JS case and $N'_{\alpha} = N_{\alpha} $ in the RS case (we display the JS case; the RS computation is identical, with the exponent again equal to $-\frac{t^{2}}{2}Q_{N}$), \begin{align} \varphi_{N A_{N}}(t) = \mathbb{E} \left[ e^{it N A_{N}} \right] &= \mathbb{E} \left[ e^{it \sum_{j=0}^{N'_{\alpha}-1} \left( \frac{j+1}{N} + \frac{X_{j+1}}{N} \right) \left( W_{\frac{j+1}{N} + \frac{X_{j+1}}{N}} - W_{\frac{j}{N} + \frac{X_{j}}{N}} \right) } \right] \nonumber \\ &= \mathbb{E} \left. \left[ \mathbb{E} \left[ e^{it \sum_{j=0}^{N'_{\alpha}-1} \left( \frac{j+1}{N} + \frac{X_{j+1}}{N} \right) \left( W_{\frac{j+1}{N} + \frac{X_{j+1}}{N}} - W_{\frac{j}{N} + \frac{X_{j}}{N} }\right) } \right| X \right] \right] \nonumber \\ &= \mathbb{E} \left[ e^{-\frac{t^{2}}{2} \sum_{j=0}^{N'_{\alpha}-1} \left( \frac{j+1}{N} + \frac{X_{j+1}}{N} \right)^{2} \left( \frac{X_{j+1}}{N} - \frac{X_{j}}{N} + \frac{1}{N} \right) } \right]=\mathbb{E} \left[ e^{-\frac{t^{2}}{2} Q_{N} } \right] \label{lim-js-bm} \end{align} with $Q_{N}$ from (\ref{qn}). Now, by Proposition \ref{L2-conv}, the sequence $Q_{N}$ converges, in $L^ {2}(\Omega)$, thus in probability, to $\frac{\alpha ^ {3}}{3}$. By the dominated convergence theorem, for every $t\in \mathbb{R}$, $$ \lim_{N \to \infty}\varphi_{N A_{N}}(t)= e^{-\frac{t^{2}}{2} \frac{\alpha^{3}}{3}},$$ which is the characteristic function of the $N \left( 0, \frac{\alpha^{3}}{3} \right)$ law, and this gives the conclusion. \hfill \vrule width.25cm height.25cm depth0cm\smallskip By Propositions \ref{pp1} and \ref{normalidad}, we immediately obtain the asymptotic normality of the LSE. We denote by $\xrightarrow{\mathcal{L}}$ the convergence in law. \begin{theorem} \label{an_dn} Consider the LSE $\hat{a}_{N}$ given by (\ref{a-jt}). Then $$N( \hat{a}_{N} - a) \xrightarrow[N \to \infty]{\mathcal{L}}N\left(0,\frac{3}{\alpha ^{3}}\right). $$ \end{theorem} \begin{remark} Let us give some heuristics that explain the convergence in law of the sequence $(NA_{N}) _{N\geq 1}$. Consider the JS case and $\alpha =1$.
Then we can write \begin{eqnarray*} NA_{N}=\sum_{i=0} ^{N-2} \tau_{i+1} (W_{\tau_{i+1}}- W_{\tau_{i}})= \int_{0}^{1} H_{N} (s) dW_{s} \end{eqnarray*} with $H_{N}(s)= \sum_{i=0} ^{ N-2} \tau_{i+1} 1_{(\tau_{i}, \tau_{i+1}]}(s)$, for $s\in [0,1]$. Intuitively, $H_{N}(s)$ converges to $s$ in $L ^{2}([0,1]\times \Omega)$ since $\vert \tau_{i}-\frac{i}{N} \vert \leq \frac{1}{N}$. Therefore $NA_{N}$ would converge in $L^{2}(\Omega)$, as $N\to \infty$, to $\int_{0}^{1} sdW_{s}$, whose law is $N(0, \frac{1}{3})$. \end{remark} Let us finish this theoretical part with some comments on the distance between the law of the sequence $(NA_{N})_{N\geq 1}$ and its limit. Recall that the distance between the laws of two random variables $X$ and $Y$ is defined as $$d (X, Y) =\sup_{h\in\mathcal{A}} \left| \mathbb{E} h(X)- \mathbb{E}h(Y) \right|, $$ where $ \mathcal{A}$ is a class of functions (its choice defines specific distances, such as the Kolmogorov, total variation or Wasserstein distances). Let $h$ be a function such that all the expectations below exist. Consider for simplicity the JS case, where $\alpha=1$. Then, by taking the conditional expectation as in the proof of Lemma \ref{ll1}, we obtain $$\mathbb{E} h(NA_{N}) = \mathbb{E} h \left( \sqrt{\mathbb{E}(NA_{N}) ^ {2}} Z\right)$$ with $Z\sim N(0,1)$. This implies that, for $N$ large (see Proposition 3.6.1 of \cite{NP}), \begin{equation} \label{6d-1}d \left(NA_{N}, \sqrt{\frac{1}{3}}Z\right) \leq C \left| \mathbb{E}(NA_{N})^ {2} -\frac{1}{3} \right| \leq C \frac{1}{N}, \end{equation} where the last bound can be obtained easily from the proof of Lemma \ref{ll1}. A similar bound as in (\ref{6d-1}) can be obtained when we replace $NA_{N}$ by $\hat{a}_{N}-a$ with $\hat{a}_{N} $ given by (\ref{a-jt}). \section{Simulation study} In this section we consider the different problems that appear when studying the limit distribution of the least squares estimator $\hat{a}_{N}$, properly normalized.
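A small experiment of this kind can be sketched as follows (Python; parameter values and the uniform jitter are our illustrative choices, with $a=2$): it replicates $N(\hat{a}_{N}-a)$ under jittered sampling and compares the empirical mean and variance with the limit law $N(0, 3)$ of Theorem \ref{an_dn} for $\alpha=1$.

```python
import math
import random

def replicate_lse_error(a, N, reps, seed=7):
    """Monte Carlo sample of N*(hat a_N - a) under jittered sampling
    (alpha = 1, uniform jitter); the limit law is N(0, 3)."""
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        num = den = prev = 0.0
        for j in range(1, N + 1):
            t = j / N + rng.uniform(-0.5, 0.5) / N
            dW = rng.gauss(0.0, math.sqrt(t - prev))
            num += t * (a * t + dW)
            den += t * t
            prev = t
        out.append(N * (num / den - a))
    return out

sample = replicate_lse_error(2.0, 400, 2000)
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / (len(sample) - 1)
```

The empirical variance comes out close to $3$, which is what the tables of the simulation study below report on a larger scale.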
First, we show the behavior of the increment of the standard Brownian motion, considering both types of random times, for different values of $N$. Next, we illustrate how the number of runs in which $ \tau_{\alpha N} > 1$ changes for different values of $\alpha$. In addition, we show by simulation the convergences proved in Propositions \ref{L2-conv} and \ref{normalidad}, which are needed to establish Theorem \ref{an_dn}. Finally, we define the error of our estimation and present the corresponding simulation results. \\ We have simulated the observations $Y_{\tau_{1}}, \dots, Y_{\tau_{N_{\alpha}}}$, considering $N=5000$ and $\alpha = 1$ for jittered sampling and $\alpha = 0.98$ for renewal sampling, and we have repeated this procedure 10000 times in order to obtain the corresponding tables and histograms. \subsection*{Increment of standard Brownian motion under observations sampled at random times:} Let us recall that the standard Brownian motion is an adapted process defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with $W_{0} = 0$; it has independent and stationary increments which follow a normal distribution, i.e. $W_{t} - W_{s} \sim N(0, t-s)$ for $s < t$. In the following figures, it is possible to notice the behavior of the increment of the standard Brownian motion for different values of $N$ and the different types of random times defined above. \begin{figure}[h!] \centering \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=.95\linewidth]{n1_js.eps} \label{n10js} \end{subfigure}% \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=.95\linewidth]{n2_js.eps} \label{n100js} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=.95\linewidth]{n3_js.eps} \label{n1000js} \end{subfigure} \caption{Behavior of the increment of standard Brownian motion under jittered sampling observations with $N=10, 100$ and $1000$, respectively.} \label{njs1} \end{figure} \begin{figure}[h!]
\centering \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=.95\linewidth]{n1_rp.eps} \label{n10rp} \end{subfigure}% \begin{subfigure}{.40\textwidth} \centering \includegraphics[width=.95\linewidth]{n2_rp.eps} \label{n100rp} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=.95\linewidth]{n3_rp.eps} \label{n1000rp} \end{subfigure} \caption{Behavior of the increment of standard Brownian motion under renewal sampling observations with $N=10, 100$ and $1000$, respectively.} \label{njs2} \end{figure} \subsection*{Number of times $\boldsymbol{\tau_{\alpha N} >1}$:} Under renewal sampling observations, for different values of $\alpha$ and $N$, we have the following experimental results concerning the number of runs in which the last observation is bigger than 1 (see the discussion in Section 2.2). \begin{table}[h!] \centering \begin{tabular}{cccc} \hline \hline & $N=100$ & $N=1000$ & $N=10000$ \\ \hline $\alpha = 0.99$ & 45 & 361 & 1624 \\ $\alpha = 0.98$ & 34 & 241 & 225 \\ $\alpha = 0.97$ & 29 & 164 & 17 \\ $\alpha = 0.96$ & 27 & 102 & 0 \\ $\alpha = 0.95$ & 24 & 50 & 0 \\ \hline \hline \end{tabular} \caption{Number of runs with $\tau_{\alpha N} >1$ for different values of $\alpha$ and $N$} \label{alpha_tau} \end{table} \subsection*{Convergence of $\boldsymbol{ Q_N}$, Proposition \ref{L2-conv}} As shown in Proposition \ref{L2-conv}, $Q_{N}$ converges in mean square, and hence in probability, to $1/3$ as $N$ goes to infinity (in the renewal case, to $\alpha^{3}/3$, which is close to $1/3$ for $\alpha = 0.98$). After simulating the value of $Q_{N}$ for different values of $N = 2, \dots , 10000$, we can visualize the shape of the convergence. In Figure \ref{conv-plot}, the straight red line represents the limiting value $1/3$. \begin{figure}[h!]
\centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{Conv_js.eps} \label{ejs} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{Conv_rp.eps} \label{erp} \end{subfigure} \caption{Convergence of the sequence $Q_N$ under different random times. Left: Jittered Sampling, Right: Renewal Sampling.} \label{conv-plot} \end{figure} \pagebreak \subsection*{Convergence of $\boldsymbol{ N A_N}$, Proposition \ref{normalidad}} The results obtained, for both types of random times, are the following. \begin{figure}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{hist_JS.eps} \label{ejs1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{hist_RP.eps} \label{erp1} \end{subfigure} \caption{Histograms of the sequence $N A_{N}$ under different random times. Left: Jittered Sampling, Right: Renewal Sampling} \label{hist-plot} \end{figure} \begin{table}[h!] \centering \begin{tabular}{ccc} \hline \hline $N A_{N}$& Mean & Variance \\ \hline Jittered Sampling & -0.003562114 & 0.3328107 \\ Renewal Sampling & 0.002266849 & 0.3348508 \\ \hline \hline \end{tabular} \caption{Mean and variance of the sequence $N A_{N}$} \label{hist-tab} \end{table} \newpage \subsection*{Convergence of $\boldsymbol{\hat{a}_N}$: Theorem \ref{an_dn}} \begin{figure}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{ad_js.eps} \label{teo_js_hist} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{ad_rp.eps} \label{teo_rp_hist} \end{subfigure} \caption{Histograms of the sequence $N (\hat{a}_{N}-a)$ under different random times. Left: Jittered Sampling, Right: Renewal Sampling} \label{teohist} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{ccc} \hline \hline Random Time & Mean & Variance \\ \hline Jittered Sampling & -0.001518635 & 3.016777 \\ Renewal Sampling & -0.01345249 & 2.982566 \\ \hline \hline \end{tabular} \caption{Mean and variance of the sequence $N (\hat{a}_{N}-a)$} \label{tab-teo} \end{table} \subsection*{Estimation Error} As shown previously, the sequence $N A_{N}$ converges in law, as $N$ grows, to a normal distribution with mean $\mu =0$ and variance $\sigma^{2} = 1/3$, so we define the estimation error, for both types of random times, as $\varepsilon = N A_{N} - \xi$, where $\xi \sim N(0,1/3)$ is an independently sampled Gaussian variable (so that $\varepsilon$ is centered with variance close to $2/3$). First, we present the behavior of the error for different values of $N$. The corresponding results, for both types of random times, are the following. \begin{figure}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{error_JS.eps} \label{ejs2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{error_RP.eps} \label{erp2} \end{subfigure} \caption{Estimation error under different values of $N$ from 1 to 5000 random times. Left: Jittered Sampling, Right: Renewal Sampling} \label{errorplot} \end{figure} Secondly, we plot the histogram of the estimation error defined previously for a fixed value of $N=5000$. \begin{figure}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{error_hist_js.eps} \label{ejs_hist} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{error_hist_rs.eps} \label{erp_hist} \end{subfigure} \caption{Histogram of the estimation error under different random times and fixed value of $N=5000$. Left: Jittered Sampling, Right: Renewal Sampling} \label{errorhist} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{ccc} \hline \hline Random Time & Mean & Variance \\ \hline Jittered Sampling & 0.003081114 & 0.6664274 \\ Renewal Sampling & -0.000968569 & 0.6666846 \\ \hline \hline \end{tabular} \caption{Mean and variance of the estimation error}\label{tab:tab-error} \end{table} \pagebreak \subsection{Conclusions} In relation to the simulation results previously shown, we can state the following conclusions. \begin{itemize} \item It is quite easy to check, in Figure \ref{njs1} for jittered sampling and Figure \ref{njs2} for renewal sampling, that as the value of $N$ increases both types of random times get closer to their deterministic, equally spaced versions. \item When working with renewal observations, the waiting times have unbounded support, so it is quite likely that $\tau_{N} > 1$; in order to avoid this, it was necessary to consider $\alpha$ close to, but smaller than, $1$ so that $\tau_{\alpha N} \leq 1$. The results given in Table \ref{alpha_tau} show that for large values of $N$ and $\alpha$ slightly below $1$, it is possible to ensure that $\tau_{\alpha N} < 1$. \item It is possible to notice that, as the value of $N$ grows, the value of $Q_{N}$ gets closer to $1/3$. \item In Figure \ref{hist-plot} and in Table \ref{hist-tab}, it is possible to see the asymptotic normality of the renormalized sequence. \item As we point out in Theorem \ref{an_dn}, the normal distribution with its corresponding parameters is shown in Figure \ref{teohist} and Table \ref{tab-teo}, for both types of random times. \item In Figures \ref{errorplot} and \ref{errorhist} and in Table \ref{tab:tab-error}, for both types of random times, the behavior of the error is as expected: it is centered, approximately normal, and its variance does not depend on the value of $N$. \end{itemize} \section{Appendix} \subsection{Proofs} {\bf Proof of Lemma \ref{ll1}:} We separate the proof according to the two situations (\ref{js}) and (\ref{rp}).
\vskip0.2cm \noindent{\bf Jittered sampling case: }In this case, as discussed in Section \ref{sec22}, we take $\alpha =1$ and $Y_{\tau_{N}}=0$. Firstly, we compute the first moment of $A_{N}$, i.e. \begin{align*} \mathbb{E} \left[ A_{N} \right] &= \mathbb{E} \left[ \mathbb{E} \left[ A_{N} \mid \tau \right] \right] \\ &= \mathbb{E} \left[ \mathbb{E} \left[ \left. \dfrac{1}{N} \sum_{j=0}^{N-2} \left( \frac{j+1}{N} + \nu_{j+1} \right) \left( W_{\frac{j+1}{N} + \nu_{j+1}} - W_{\frac{j}{N} + \nu_{j}} \right) \right| \tau \right] \right] \nonumber\\ &= \mathbb{E} \left[ \dfrac{1}{N} \sum_{j=0}^{N-2} \left( \frac{j+1}{N} + \nu_{j+1} \right) \mathbb{E} \left[ W_{\frac{j+1}{N} + \nu_{j+1}} - W_{\frac{j}{N} + \nu_{j}} \mid \tau \right] \right]\\ &= 0. \end{align*} The conditioning with respect to $\tau$ (above and throughout) means that we condition with respect to the sigma-field generated by the random variables $\nu_{i, N}, i=1,\ldots ,N$. The $L^{2}(\Omega)$ norm of $A_{N}$ can be calculated as follows \begin{align} \mathbb{E} \left[ A_{N}^{2} \right] &= Var (A_{N}) \nonumber \\ &= \mathbb{E} \left[ Var(A_{N} \mid \tau) \right] + Var(\mathbb{E} \left[ A_{N} \mid \tau \right]) = \mathbb{E} \left[ Var(A_{N} \mid \tau) \right] \nonumber \\ &= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N-2} \left( \frac{j+1}{N} + \nu_{j+1} \right)^{2} \left( \nu_{j+1} - \nu_{j} + \frac{1}{N} \right) \right] \nonumber \\ &= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N-2} \left( \frac{(j+1)^{2} \nu_{j+1}}{N^{2}} + \frac{2(j+1) \nu_{j+1}^{2}}{N} + \nu_{j+1}^{3} - \frac{(j+1)^{2} \nu_{j}}{N^{2}} - \frac{2(j+1) \nu_{j} \nu_{j+1}}{N} \right. \right. \nonumber \\ &- \left. \left. \nu_{j} \nu_{j+1}^{2} + \frac{(j+1)^{2}}{N^{3}} + \frac{2(j+1) \nu_{j+1}}{N^{2}} + \frac{ \nu_{j+1}^{2}}{N} \right) \right] \nonumber \\ &=: E_{N}^{(1)} + E_{N}^{(2)} + E_{N}^{(3)} - E_{N}^{(4)} - E_{N}^{(5)} - E_{N}^{(6)} + E_{N}^{(7)} + E_{N}^{(8)} + E_{N}^{(9)}.
\label{E2-JS} \end{align} Notice that the terms $E_{N}^{(1)}, E_{N}^{(4)}, E_{N}^{(5)}, E_{N}^{(6)}$ and $E_{N}^{(8)}$ vanish, due to the assumption that $\mathbb{E}(\nu_{i, N})= 0$ for every $i, N$ together with the independence of the $\nu_{i, N}$, and $E_{N}^{(3)}$ is also equal to zero due to the symmetry of the law of $\nu _{i, N}$. Therefore \begin{eqnarray}\label{2d-2} \mathbb{E} \left[ A_{N}^{2} \right] &=& E_{N}^{(2)}+ E_{N}^{(7)}+ E_{N}^{(9)}. \end{eqnarray} We evaluate the three summands in the right-hand side above. By (\ref{2d-1}), \begin{align} E_{N}^{(2)} &= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N-2} \dfrac{2 \nu_{j+1}^{2} (j+1)}{N} \right] = \dfrac{2}{N^{3}} \sum_{j=0}^{N-2} (j+1) \mathbb{E} \left[ \nu_{j+1}^{2} \right] \nonumber \\ &= \dfrac{2c_{1}}{ N^{5}} \left( \dfrac{N(N-1)}{2} \right)=\frac{c_{1}}{N ^ {3}} + o\left( \frac{1}{ N ^ {3}}\right). \label{JS-E2} \end{align} Next \begin{align} E_{N}^{(7)} &= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N-2} \dfrac{(j+1)^{2}}{N^{3}} \right] = \dfrac{1}{N^{5}} \left( \dfrac{(N-1)(N)(2N-1)}{6} \right) \nonumber \\ &=\frac{1}{3} \frac{1}{N ^ {2}} +o\left(\dfrac{1}{N^{2}}\right) \label{JS-E7} \end{align} and, again by (\ref{2d-1}), for $N\geq 2$, \begin{align} E_{N}^{(9)} &= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N-2} \dfrac{\nu_{j+1}^{2}}{N} \right] = \dfrac{1}{N^{3}} \sum_{j=0}^{N-2} \mathbb{E} \left[ \nu_{j+1}^{2} \right] = \dfrac{1}{N^{3}} \sum_{j=0}^{N-2}c_{1} \dfrac{1}{ N^{2}} = c_{1}\dfrac{N-1}{N^{5}}. \label{JS-E9} \end{align} From (\ref{2d-2}), (\ref{JS-E2}), (\ref{JS-E7}) and (\ref{JS-E9}), we obtain the conclusion in the JS case. \vskip0.3cm \noindent {\bf Renewal sampling case: } Recall that in this case the number of observations is $N_{\alpha}= [\alpha N]$ with $\alpha <1$ close to $1$. As before, by conditioning on $\tau$, \begin{align*} \mathbb{E} \left[ A_{N} \right] &= \mathbb{E} \left[ \mathbb{E} \left[ A_{N} \vert \tau \right] \right] = \left.
\mathbb{E} \left[ \mathbb{E} \left[ \dfrac{1}{N} \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1} \left( W_{\tau_{j+1}} - W_{\tau_{j}} \right) \right| \tau \right] \right] \\ &= \mathbb{E} \left[ \left. \dfrac{1}{N} \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1} \mathbb{E} \left[ W_{\tau_{j+1}} - W_{\tau_{j}} \right| \tau \right] \right] = 0, \end{align*} and, using that $A_{N}$ has, conditionally on $\tau$, a Gaussian distribution, we obtain \begin{align} \mathbb{E} \left[ A_{N}^{2} \right] &= Var (A_{N}) = \mathbb{E} \left[ Var(A_{N} \vert \tau) \right] + Var (\mathbb{E} \left[ A_{N} \vert \tau \right]) \nonumber \\ &= \mathbb{E} \left[ Var(A_{N} \vert \tau) \right] \nonumber = \mathbb{E} \left[ Var \left( \dfrac{1}{N} \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1} \left( W_{\tau_{j+1}} - W_{\tau_{j}} \right) \, \middle| \, \tau \right) \right] \nonumber \\ &= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \left( \tau_{j+1} - \tau_{j} \right) \right]= \mathbb{E} \left[ \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{3} - \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \tau_{j} \right] \nonumber \\ &= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \mathbb{E} \left[ \tau_{j+1}^{3} \right] - \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \mathbb{E} \left[ \tau_{j+1}^{2} \tau_{j} \right]=: E_{N}^{(1)} - E_{N}^{(2)}.
\label{E2-RS} \end{align} For the first term of \eqref{E2-RS}, we have \begin{align} E_{N}^{(1)} &= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \mathbb{E} \left[ \tau_{j+1}^{3} \right] = \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} x^{3} \dfrac{N^{j+1}}{\Gamma(j+1)} x^{j} e^{-Nx} dx\nonumber\\ &= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{N^{j+4}} \dfrac{\Gamma(j+4)}{\Gamma(j+1)} = \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \dfrac{(j+3)(j+2)(j+1) \Gamma(j+1)}{N^{3} \Gamma(j+1)} \nonumber \\ &= \dfrac{1}{N^{5}} \sum_{j=0}^{N_{\alpha}-1} (j^{3} + 6j^{2} + 11j + 6) \nonumber \\ &= \dfrac{1}{N^{5}} \left( \dfrac{(N_{\alpha}-1)(N_{\alpha})}{2} \right)^{2} + \dfrac{(N_{\alpha}-1)(N_{\alpha})(2(N_{\alpha}-1)+1)}{N^{5}} + \dfrac{11}{2} \dfrac{(N_{\alpha}-1)(N_{\alpha})}{N^{5}} + \dfrac{6N_{\alpha}}{N^{5}} \nonumber \\ &\sim \frac{\alpha^ {4}}{4N}+ \frac{3\alpha^ {3}}{2N ^ {2}} +o\left( \frac{1}{N^ {2}}\right). \label{RS-E1} \end{align} For the second term of \eqref{E2-RS}, we consider the joint density computed by \cite{araya2019} (see Table \ref{densities}) \begin{align} E_{N}^{(2)} &= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \mathbb{E} \left[ \tau_{j+1}^{2} \tau_{j} \right]= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{j+1}} \tau_{j} \tau_{j+1}^{2} \dfrac{N^{j+1}}{\Gamma(j)} \tau_{j}^{j-1} e^{-N \tau_{j+1}} d \tau_{j} d \tau_{j+1} \nonumber \\ &= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j)} \int_{0}^{\infty} \tau_{j+1}^{2} e^{-N \tau_{j+1}} \int_{0}^{\tau_{j+1}} \tau_{j}^{j} d \tau_{j} d \tau_{j+1} \nonumber \\ &= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{(j+1) \Gamma(j)} \int_{0}^{\infty} \tau_{j+1}^{j+3} e^{-N \tau_{j+1}} d \tau_{j+1}= \dfrac{1}{N^{2}} \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{(j+1) \Gamma(j)} \dfrac{\Gamma(j+4)}{N^{j+4}} \nonumber \\ &= \dfrac{1}{N^{5}} \sum_{j=0}^{N_{\alpha}-1} \dfrac{(j+3)(j+2)(j+1) j \Gamma(j)}{(j+1) \Gamma(j)}= \dfrac{1}{N^{5}}
\sum_{j=0}^{N_{\alpha}-1} (j^{3} + 5j^{2} + 6j) \nonumber \\ &= \dfrac{1}{N^{5}} \left( \dfrac{(N_{\alpha}-1)(N_{\alpha})}{2} \right)^{2} + \dfrac{5}{N^{5}} \left( \dfrac{(N_{\alpha}-1)(N_{\alpha})(2(N_{\alpha}-1)+1)}{6} \right) + \dfrac{6}{N^{5}} \left( \dfrac{(N_{\alpha}-1)(N_{\alpha})}{2} \right) \nonumber \\ &\sim \dfrac{\alpha ^ {4}}{4N} + \dfrac{7\alpha ^ {3}}{6N^{2}} +o\left( \frac{1}{N ^ {2}}\right). \label{RS-E2} \end{align} Replacing (\ref{RS-E1}) and (\ref{RS-E2}) in (\ref{E2-RS}), \begin{align} \mathbb{E} \left[ A_{N}^{2} \right] &\sim \dfrac{\alpha ^ {3}}{3N^{2}} + o\left( \frac{1}{N^ {2}}\right). \label{cte-rs} \end{align} \hfill \vrule width.25cm height.25cm depth0cm\smallskip \vskip0.2cm {\bf Proof of Proposition \ref{L2-conv}: }Again, we separately discuss the two cases of random sampling. \noindent{\bf Jittered sampling case: } Recall that in this case $\alpha =1$. First, we compute the first moment of \eqref{qn}, i.e. \begin{align*} \mathbb{E} \left[ Q_{N} \right] &= \mathbb{E} \left[ \sum_{j=0}^{N-1} \left( \frac{j+1}{N} + \frac{X_{j+1}}{N} \right)^{2} \left( \frac{X_{j+1}}{N} - \frac{X_{j}}{N} + \frac{1}{N} \right) \right] \\ &= \dfrac{1}{N^{3}} \sum_{j=0}^{N-1} \mathbb{E} \left[ (j+1)^{2} X_{j+1} - (j+1)^{2}X_{j} + (j+1)^{2} + 2(j+1)X_{j+1}^{2} - 2(j+1)X_{j} X_{j+1} \right. \\ & \left.+ 2(j+1)X_{j+1} + X_{j+1}^{3} - X_{j}X_{j+1}^{2} + X_{j+1}^{2} \right] \\ &=: Q_{N}^{(1)} - Q_{N}^{(2)} + Q_{N}^{(3)} + Q_{N}^{(4)} - Q_{N}^{(5)} + Q_{N}^{(6)} + Q_{N}^{(7)} - Q_{N}^{(8)} + Q_{N}^{(9)}, \end{align*} where each $Q_{N}^{(i)}$ denotes the expectation of the corresponding summand, including the factor $\frac{1}{N^{3}} \sum_{j=0}^{N-1}$ in front. From (\ref{3d-1}) and the hypothesis (\ref{2d-1}) on $\tau$, we have \begin{equation*} \mathbb{E}( X_{j})= 0 \mbox{ and } \mathbb{E} (X_{j}^ {2}) = c_{1}.
\end{equation*} Thus $Q_{N}^{(1)}$, $Q_{N}^{(2)}$, $Q_{N}^{(5)}$, $Q_{N}^{(6)}$, $Q_{N}^{(7)}$ and $Q_{N}^{(8)}$ are equal to zero, while the remaining terms can be computed as follows: \begin{align} Q_{N}^{(3)} &= \mathbb{E} \left[ \dfrac{1}{N^{3}} \sum_{j=0}^{N-1} (j+1)^{2} \right] = \dfrac{1}{N^{3}} \cdot \dfrac{N(N+1)(2N+1)}{6} \nonumber \\ &= \dfrac{2N^{3} + 3N^{2} + N}{6 N^{3}} \xrightarrow[N \to \infty]{} \dfrac{1}{3}, \label{q3} \end{align} \begin{align} Q_{N}^{(4)} &= \mathbb{E} \left[ \dfrac{1}{N^{3}} \sum_{j=0}^{N-1} 2(j+1)X_{j+1}^{2} \right] = \dfrac{2}{N^{3}} \sum_{j=0}^{N-1} (j+1) \mathbb{E} \left[ X_{j+1}^{2} \right] \nonumber \\ &= \dfrac{2 c_{1}}{N^{3}} \cdot \dfrac{N(N+1)}{2} = \dfrac{c_{1}(N+1)}{N^{2}} \xrightarrow[N \to \infty]{} 0 \label{q4} \end{align} and \begin{align} Q_{N}^{(9)} &= \mathbb{E} \left[ \dfrac{1}{N^{3}} \sum_{j=0}^{N-1} X_{j+1}^{2} \right]= \dfrac{1}{N^{3}} \sum_{j=0}^{N-1} \mathbb{E} \left[ X_{j+1}^{2} \right] = \dfrac{c_{1}}{N^{2}} \xrightarrow[N \to \infty]{} 0. \label{q9} \end{align} Taking into account \eqref{q3}, \eqref{q4} and \eqref{q9} we conclude \begin{equation} \label{lim-qj} \mathbb{E} \left[ Q_{N} \right] \xrightarrow[N \to \infty]{} \dfrac{1}{3}. \end{equation} Secondly, we study the second moment of \eqref{qn}, i.e. \begin{align*} \mathbb{E} \left[ Q_{N}^{2} \right] &= \dfrac{1}{N^{6}} \sum_{j=0}^{N-1} \mathbb{E} \left[ (j+1 + X_{j+1})^{4} (X_{j+1} - X_{j} + 1)^{2} \right] \\ &+ \dfrac{1}{N^{6}} \sum_{0 \leq j \neq k \leq N-1} \mathbb{E} \left[ (j+1 + X_{j+1})^{2} (k+1 + X_{k+1})^{2} (X_{j+1} - X_{j} + 1) (X_{k+1} - X_{k} + 1) \right] \\ &:= Q_{N}^{(I)} + Q_{N}^{(II)}. \end{align*} Some of the summands that compose $Q_{N}^ {(I)}$ are zero (those involving odd order moments of $X_{j}$ or $X_{j+1}$, which vanish) and the other summands of $Q_{N} ^ {(I)} $ converge to zero.
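The constant $\frac{1}{9}$ produced by the off-diagonal sum $\frac{1}{N^{6}} \sum_{0 \leq j \neq k \leq N-1} (j+1)^{2}(k+1)^{2}$ can be verified with exact integer arithmetic; the sketch below (Python; the function name is ours) evaluates it for increasing $N$.

```python
from fractions import Fraction

def off_diagonal_term(N):
    # (1/N^6) * sum_{0 <= j != k <= N-1} (j+1)^2 (k+1)^2
    #   = (1/N^6) * [ (sum_{i=1}^N i^2)^2 - sum_{i=1}^N i^4 ],
    # computed exactly with rational arithmetic.
    s2 = sum(i * i for i in range(1, N + 1))
    s4 = sum(i ** 4 for i in range(1, N + 1))
    return Fraction(s2 * s2 - s4, N ** 6)

# Deviation from the limit 1/9 shrinks as N grows (it is of order 1/N).
errors = [abs(float(off_diagonal_term(N)) - 1 / 9) for N in (10, 100, 1000)]
```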
For example \begin{align*} \mathbb{E} \left[ \dfrac{1}{N^{6}} \sum_{j=0}^{N-1} (j+1)^{4} X_{j+1}^{2} \right] &= \dfrac{1}{N^{6}} \sum_{j=0}^{N-1} (j+1)^{4} Var(X_{j+1}) \\ &= \dfrac{c_{1}}{ N^{6}} \left( \dfrac{6N^{5} + 15N^{4} + 10N^{3} - N}{30} \right) \xrightarrow[N \to \infty]{} 0 \end{align*} or \begin{align*} \mathbb{E} \left[ \dfrac{1}{N^{6}} \sum_{j=0}^{N-1} 2(j+1)^{4} X_{j} X_{j+1} \right] &= \dfrac{2}{N^{6}} \sum_{j=0}^{N-1} (j+1)^{4} \mathbb{E} \left[ X_{j} X_{j+1} \right] =0, \end{align*} while the remaining terms can be computed in the same way as the last two. \\ The summand $ Q_{N}^{(II)}$ gives the limit. Actually, the only term in $ Q_{N}^{(II)}$ with a non-vanishing contribution in the limit is the one not depending on the $X_{j}$, i.e. \begin{align*} \mathbb{E} \left[ \dfrac{1}{N^{6}} \sum_{0 \leq j \neq k \leq N-1} (j+1)^{2} (k+1)^{2} \right] &= \dfrac{1}{N^{6}} \sum_{0 \leq j \neq k \leq N-1} (j+1)^{2} (k+1)^{2}. \end{align*} To compute this term, we use the following identity \begin{align} \sum_{1 \leq i \neq j \leq N} a_{i} a_{j} &= \left( \sum_{i=1}^{N} a_{i} \right)^{2} - \sum_{i=1}^{N} a_{i}^{2}. \label{suma} \end{align} Therefore, \begin{align} &\dfrac{1}{N^{6}} \sum_{0 \leq j \neq k \leq N-1} (j+1)^{2} (k+1)^{2} = \dfrac{1}{N^{6}} \left[ \left( \sum_{j=0}^{N-1} (j+1)^{2} \right)^{2} - \sum_{j=0}^{N-1} (j+1)^{4} \right] \nonumber \\ &= \dfrac{1}{N^{6}} \left[ \left( \dfrac{N(N+1)(2N+1)}{6} \right)^{2} - \left( \dfrac{6N^{5} + 15N^{4} + 10N^{3} - N}{30} \right) \right] \nonumber \\ &= \dfrac{1}{N^{6}} \left[ \dfrac{(2N^{3} + 3N^{2} + N)^{2}}{36} - \dfrac{6N^{5} + 15N^{4} + 10N^{3} - N}{30} \right] \xrightarrow[N \to \infty]{} \dfrac{1}{9}. \label{e2} \end{align} Taking into account \eqref{lim-qj} and \eqref{e2}, we obtain the conclusion, since $\mathbb{E} [Q_{N}^{2}] \to \frac{1}{9}$ and $\mathbb{E}[Q_{N}] \to \frac{1}{3}$ imply that the variance of $Q_{N}$ vanishes. \vskip0.3cm \noindent {\bf Renewal sampling case: } As before, we start by computing the expectation of $Q_{N}$.
We have \begin{align} \mathbb{E} \left[ Q_{N} \right] &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \left( \tau_{j+1} - \tau_{j} \right) \right] \nonumber \\ &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{3} \right] - \mathbb{E} \left[\sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \tau_{j} \right]= Q_{N}^{(1)} - Q_{N}^{(2)} \label{eqn} \end{align} with \begin{align} Q_{N}^{(1)} &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{3} \right]= \sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} \tau_{j+1}^{3} \dfrac{N^{j+1}}{\Gamma(j+1)} \tau_{j+1}^{j} e^{-N \tau_{j+1}} d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j+1)} \dfrac{\Gamma(j+4)}{N^{j+4}}= \dfrac{1}{N^{3}} \sum_{j=0}^{N_{\alpha}-1} (j+3)(j+2)(j+1) \nonumber \\ &= \dfrac{1}{N^{3}} \sum_{j=0}^{N_{\alpha}-1} [j^{3} + 6j^{2} + 11j + 6] \label{eqn1} \end{align} and \begin{align} Q_{N}^{(2)} &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \tau_{j} \right] = \sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{j+1}} \tau_{j+1}^{2} \tau_{j} \dfrac{N^{j+1}}{\Gamma(j)} \tau_{j}^{j-1} e^{-N \tau_{j+1}} d \tau_{j} d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j)} \int_{0}^{\infty} \tau_{j+1}^{2} e^{-N \tau_{j+1}} \int_{0}^{\tau_{j+1}} \tau_{j}^{j} d \tau_{j} d \tau_{j+1}= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j) (j+1)} \int_{0}^{\infty} \tau_{j+1}^{j+4-1} e^{-N \tau_{j+1}} d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{N^{j+4}} \dfrac{\Gamma(j+4)}{\Gamma(j) (j+1)} = \dfrac{1}{N^{3}} \sum_{j=0}^{N_{\alpha}-1} (j+3)(j+2)j= \dfrac{1}{N^{3}} \sum_{j=0}^{N_{\alpha}-1} [j^{3} + 5j^{2} +6j] \label{eqn2}.
\end{align} Replacing \eqref{eqn1} and \eqref{eqn2} in \eqref{eqn}, we obtain \begin{align*} \mathbb{E} \left[ Q_{N} \right] &= \dfrac{1}{N^{3}} \sum_{j=0}^{N_{\alpha}-1} [j^{2} + 5j + 6] = \dfrac{1}{N^{3}} \left[ \dfrac{2N_{\alpha}^{3} + 12N_{\alpha}^{2} + 22N_{\alpha}}{6} \right] \xrightarrow[N \to \infty]{} \dfrac{\alpha^{3}}{3}. \end{align*} For the second moment of $Q_{N}$ we have \begin{align} \mathbb{E} \left[ Q_{N}^{2} \right] &= \mathbb{E} \left[ \left( \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{2} \left( \tau_{j+1} - \tau_{j} \right) \right)^{2} \right] \nonumber \\ &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{4} \left( \tau_{j+1} - \tau_{j} \right)^{2} \right] + 2 \mathbb{E} \left[ \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{2} \tau_{k+1}^{2} \left( \tau_{j+1} - \tau_{j} \right) \left( \tau_{k+1} - \tau_{k} \right) \right] \nonumber \\ &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{6} - 2 \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{5} \tau_{j} + \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{4} \tau_{j}^{2} \right] \nonumber \\ &+ 2 \mathbb{E} \left[ \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{3} \tau_{k+1}^{3} - \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{3} \tau_{k} \tau_{k+1}^{2} - \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{2} \tau_{j} \tau_{k+1}^{3} + \sum_{j<k}^{N_{\alpha}-1} \tau_{j} \tau_{j+1}^{2} \tau_{k} \tau_{k+1}^{2} \right] \nonumber \\ &:= Q_{N}^{(I)} - Q_{N}^{(II)} + Q_{N}^{(III)} + Q_{N}^{(IV)} - Q_{N}^{(V)} - Q_{N}^{(VI)} + Q_{N}^{(VII)} \label{eqnn} \end{align} and the above terms can be calculated by using the joint densities from Table \ref{densities}. We calculate first the sum $Q_{N}^{(I)} - Q_{N}^{(II)} + Q_{N}^{(III)} $.
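The Gamma-type integrals computed below all reduce to the moment formula $\mathbb{E}[\tau_{j+1}^{m}] = \Gamma(j+1+m)/(\Gamma(j+1) N^{m})$ for $\tau_{j+1} \sim \Gamma(j+1, N)$, as read off from the densities in Table \ref{densities}. A minimal Python sketch cross-checking this formula by simulation (parameter values are illustrative):

```python
import math
import random

random.seed(0)

def exact_moment(shape, rate, power):
    # Closed form: E[tau^power] = Gamma(shape + power) / (Gamma(shape) * rate^power)
    return math.gamma(shape + power) / (math.gamma(shape) * rate ** power)

def mc_moment(shape, rate, power, n=200_000):
    # Monte Carlo estimate of E[tau^power] for tau ~ Gamma(shape, rate);
    # random.gammavariate takes the scale 1/rate as second argument.
    total = sum(random.gammavariate(shape, 1.0 / rate) ** power for _ in range(n))
    return total / n

# e.g. tau_{j+1} ~ Gamma(j+1, N) with j = 3, N = 10: E[tau^6] = (j+1)...(j+6)/N^6
j, N = 3, 10.0
print(exact_moment(j + 1, N, 6), mc_moment(j + 1, N, 6))
```

For $j = 3$, $N = 10$ the closed form gives $\Gamma(10)/(\Gamma(4)\,10^{6}) = 0.06048$, and the simulated value agrees up to Monte Carlo error.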
First \begin{align} Q_{N}^{(I)} &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{6} \right]=\sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} \tau_{j+1}^{6} \dfrac{N^{j+1}}{\Gamma(j+1)} \tau_{j+1}^{j} e^{-N \tau_{j+1}} d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j+1)} \int_{0}^{\infty} \tau_{j+1}^{j+7-1} e^{-N \tau_{j+1}} d \tau_{j+1}= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{N^{j+7}} \dfrac{\Gamma(j+7)}{\Gamma(j+1)} \nonumber \\ &= \dfrac{1}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} (j+6)(j+5)(j+4)(j+3)(j+2)(j+1) \nonumber \\ &= \dfrac{1}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} [j^{6} + 21j^{5} + 175j^{4} + 735j^{3} + 1624j^{2} + 1764j + 720]. \label{e2-1} \end{align} For $Q_{N}^{(II)}, Q_{N}^{(III)}$ we use the joint density of the random vector $(\tau_{j}, \tau_{j+1})$ \begin{align} Q_{N}^{(II)} &= 2 \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{5} \tau_{j} \right]= 2 \sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{j+1}} \tau_{j+1}^{5} \tau_{j} \dfrac{N^{j+1}}{\Gamma(j)} \tau_{j}^{j-1} e^{-N \tau_{j+1}} d \tau_{j}d \tau_{j+1} \nonumber \\ &= 2 \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j)} \int_{0}^{\infty} \tau_{j+1}^{5} e^{-N \tau_{j+1}} \int_{0}^{\tau_{j+1}} \tau_{j}^{j} d \tau_{j} d \tau_{j+1} \nonumber \\ &= 2 \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j) (j+1)} \int_{0}^{\infty} \tau_{j+1}^{j+7-1} e^{-N \tau_{j+1}} d \tau_{j+1} \nonumber \\ &= 2 \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{N^{j+7}} \dfrac{\Gamma(j+7)}{\Gamma(j) (j+1)} = \dfrac{2}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} (j+6)(j+5)(j+4)(j+3)(j+2)j \nonumber \\ &= \dfrac{1}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} [2j^{6} + 40j^{5} + 310j^{4} + 1160j^{3} + 2088j^{2} + 1440j] \label{e2-2} \end{align} and \begin{align} Q_{N}^{(III)} &= \mathbb{E} \left[ \sum_{j=0}^{N_{\alpha}-1} \tau_{j+1}^{4} \tau_{j}^{2} \right] = \sum_{j=0}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{j+1}} \tau_{j+1}^{4} \tau_{j}^{2} \dfrac{N^{j+1}}{\Gamma(j)} \tau_{j}^{j-1} e^{-N
\tau_{j+1}} d \tau_{j}d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j)} \int_{0}^{\infty} \tau_{j+1}^{4} e^{-N \tau_{j+1}} \int_{0}^{\tau_{j+1}} \tau_{j}^{j+1} d \tau_{j} d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{\Gamma(j) (j+2)} \int_{0}^{\infty} \tau_{j+1}^{j+7-1} e^{-N \tau_{j+1}} d \tau_{j+1} \nonumber \\ &= \sum_{j=0}^{N_{\alpha}-1} \dfrac{N^{j+1}}{N^{j+7}} \dfrac{\Gamma(j+7)}{\Gamma(j) (j+2)} = \dfrac{1}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} (j+6)(j+5)(j+4)(j+3)(j+1)j \nonumber \\ &= \dfrac{1}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} [j^{6} + 19j^{5} + 137j^{4} + 461j^{3} + 702j^{2} + 360j]. \label{e2-3} \end{align} By putting together \eqref{e2-1}, \eqref{e2-2} and \eqref{e2-3}, \begin{align} Q_{N}^{(I)} - Q_{N}^{(II)} + Q_{N}^{(III)} = \dfrac{1}{N^{6}} \sum_{j=0}^{N_{\alpha}-1} [2j^{4} + 36j^{3} + 238j^{2} + 684j + 720] \xrightarrow[N \to \infty]{} 0. \label{lim1} \end{align} For the terms $Q_{N}^{(IV)}, Q_{N}^{(V)}, Q_{N}^{(VI)}$ and $Q_{N}^{(VII)}$ it is necessary to consider the joint densities $f_{\tau_{j+1} , \tau_{k+1}}$, $f_{\tau_{j}, \tau_{k} , \tau_{l}}$ and $f_{\tau_{j}, \tau_{k} , \tau_{l}, \tau_{m}}$ with all indices distinct. We also use the identity \begin{equation} \int_{0}^{b} x^{a} (b-x)^{c} dx = \dfrac{\Gamma(a+1) \Gamma(c+1)}{\Gamma(a+c+2)} b^{a+c+1} \label{inte}.
\end{equation} We have the following calculations \begin{align} Q_{N}^{(IV)} &= \mathbb{E} \left[ \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{3} \tau_{k+1}^{3} \right] \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{k+1}} \tau_{j+1}^{3} \tau_{k+1}^{3} \dfrac{N^{k+1}}{\Gamma(j+1) \Gamma(k-j)} \tau_{j+1}^{j} (\tau_{k+1} - \tau_{j+1})^{k-j-1} e^{-N \tau_{k+1}} d \tau_{j+1} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j+1) \Gamma(k-j)} \int_{0}^{\infty} \tau_{k+1}^{3} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{j+1}^{j+3} (\tau_{k+1} - \tau_{j+1})^{k-j-1} d \tau_{j+1} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j+1) \Gamma(k-j)} \dfrac{\Gamma(j+4) \Gamma(k-j)}{\Gamma(k+4)} \int_{0}^{\infty} \tau_{k+1}^{k+7-1} e^{-N \tau_{k+1}} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{N^{k+7}} \dfrac{\Gamma(j+4) \Gamma(k+7)}{\Gamma(j+1) \Gamma(k+4)}= \dfrac{1}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} (j+3)(j+2)(j+1) (k+6)(k+5)(k+4) \label{e2-4}, \end{align} and, via \eqref{inte} \begin{align} Q_{N}^{(V)} &= \mathbb{E} \left[ \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{3} \tau_{k} \tau_{k+1}^{2} \right] \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{k+1}} \int_{0}^{\tau_{k}} \tau_{j+1}^{3} \tau_{k} \tau_{k+1}^{2} \dfrac{N^{k+1}}{\Gamma(j+1) \Gamma(k-j-1)} \tau_{j+1}^{j} (\tau_{k} - \tau_{j+1})^{k-j-2} e^{-N \tau_{k+1}} d \tau_{j+1} d \tau_{k} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j+1) \Gamma(k-j-1)} \int_{0}^{\infty} \tau_{k+1}^{2} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{k} \int_{0}^{\tau_{k}} \tau_{j+1}^{j+3} (\tau_{k} - \tau_{j+1})^{k-j-2} d \tau_{j+1} d \tau_{k} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j+1) \Gamma(k-j-1)} \dfrac{\Gamma(j+4) \Gamma(k-j-1)}{\Gamma(k+3)} \int_{0}^{\infty} \tau_{k+1}^{2} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{k}^{k+3} d \tau_{k} d 
\tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1} \Gamma(j+4)}{\Gamma(j+1) \Gamma(k+3) (k+4)} \int_{0}^{\infty} \tau_{k+1}^{k+7-1} e^{-N \tau_{k+1}} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{N^{k+7}} \dfrac{\Gamma(j+4) \Gamma(k+7)}{\Gamma(j+1) \Gamma(k+3) (k+4)}= \dfrac{1}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} (j+3)(j+2)(j+1)(k+6)(k+5)(k+3) \label{e2-5} \end{align} and \begin{align} Q_{N}^{(VI)} &= \mathbb{E} \left[ \sum_{j<k}^{N_{\alpha}-1} \tau_{j+1}^{2} \tau_{j} \tau_{k+1}^{3} \right] \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{k+1}} \int_{0}^{\tau_{j+1}} \tau_{j+1}^{2} \tau_{j} \tau_{k+1}^{3} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j)} \tau_{j}^{j-1} (\tau_{k+1} - \tau_{j+1})^{k-j-1} e^{-N \tau_{k+1}} d \tau_{j} d \tau_{j+1} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j)} \int_{0}^{\infty} \tau_{k+1}^{3} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{j+1}^{2} (\tau_{k+1} - \tau_{j+1})^{k-j-1} \int_{0}^{\tau_{j+1}} \tau_{j}^{j} d \tau_{j} d \tau_{j+1} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j) (j+1)} \int_{0}^{\infty} \tau_{k+1}^{3} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{j+1}^{j+3} (\tau_{k+1} - \tau_{j+1})^{k-j-1} d \tau_{j+1} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j) (j+1)} \dfrac{\Gamma(j+4) \Gamma(k-j)}{\Gamma(k+4)} \int_{0}^{\infty} \tau_{k+1}^{k+7-1} e^{-N \tau_{k+1}} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{N^{k+7}} \dfrac{\Gamma(j+4) \Gamma(k+7)}{\Gamma(j) (j+1) \Gamma(k+4)} = \dfrac{1}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} (j+3)(j+2)j (k+6)(k+5)(k+4) \label{e2-6} \end{align} and finally \begin{align} Q_{N}^{(VII)} &= \mathbb{E} \left[ \sum_{j<k}^{N_{\alpha}-1} \tau_{j} \tau_{j+1}^{2} \tau_{k} \tau_{k+1}^{2} \right] \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \int_{0}^{\infty} \int_{0}^{\tau_{k+1}}
\int_{0}^{\tau_{k}} \int_{0}^{\tau_{j+1}} \tau_{j} \tau_{j+1}^{2} \tau_{k} \tau_{k+1}^{2} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j-1)} \tau_{j}^{j-1} (\tau_{k} - \tau_{j+1})^{k-j-2} e^{-N \tau_{k+1}} d \tau_{j}d \tau_{j+1} d \tau_{k} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j-1)} \int_{0}^{\infty} \tau_{k+1}^{2} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{k} \int_{0}^{\tau_{k}} \tau_{j+1}^{2} (\tau_{k} - \tau_{j+1})^{k-j-2} \int_{0}^{\tau_{j+1}} \tau_{j}^{j} d \tau_{j} d \tau_{j+1} d \tau_{k} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j-1) (j+1)} \int_{0}^{\infty} \tau_{k+1}^{2} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{k} \int_{0}^{\tau_{k}} \tau_{j+1}^{j+3} (\tau_{k} - \tau_{j+1})^{k-j-2} d \tau_{j+1} d \tau_{k} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{\Gamma(j) \Gamma(k-j-1) (j+1)} \dfrac{\Gamma(j+4) \Gamma(k-j-1)}{\Gamma(k+3)} \int_{0}^{\infty} \tau_{k+1}^{2} e^{-N \tau_{k+1}} \int_{0}^{\tau_{k+1}} \tau_{k}^{k+3} d \tau_{k} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1} \Gamma(j+4)}{\Gamma(j) (j+1) \Gamma(k+3) (k+4)} \int_{0}^{\infty} \tau_{k+1}^{k+7-1} e^{-N \tau_{k+1}} d \tau_{k+1} \nonumber \\ &= \sum_{j<k}^{N_{\alpha}-1} \dfrac{N^{k+1}}{N^{k+7}} \dfrac{\Gamma(j+4) \Gamma(k+7)}{\Gamma(j) (j+1) \Gamma(k+3) (k+4)} = \dfrac{1}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} (j+3)(j+2)j (k+6)(k+5)(k+3) \label{e2-7}. \end{align} Taking into account the results from \eqref{e2-4}, \eqref{e2-5}, \eqref{e2-6} and \eqref{e2-7}, we get \begin{align} Q_{N}^{(IV)} - Q_{N}^{(V)} - Q_{N}^{(VI)} + Q_{N}^{(VII)} &= \dfrac{2}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} \left[ (j+3)(j+2)(j+1) (k+6)(k+5)(k+4) \right. \nonumber \\ & - (j+3)(j+2)(j+1)(k+6)(k+5)(k+3) \nonumber \\ & - (j+3)(j+2)j (k+6)(k+5)(k+4) \nonumber \\ &+ \left.
(j+3)(j+2)j (k+6)(k+5)(k+3) \right] \nonumber \\ &= \dfrac{2}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} \left[ (j+3)(j+2)(j+1) (k+6)(k+5) \left[ (k+4) - (k+3) \right] \right. \nonumber \\ &+ \left. (j+3)(j+2)j (k+6)(k+5) \left[ (k+3) - (k+4) \right] \right] \nonumber \\ &= \dfrac{2}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} \left[ (j+3)(j+2) (k+6)(k+5) \left[ (j+1) - j \right] \right] \nonumber \\ &= \dfrac{2}{N^{6}} \sum_{j<k}^{N_{\alpha}-1} (j+3)(j+2)(k+6)(k+5) \nonumber \end{align} and this can be written as \begin{align} Q_{N}^{(IV)} - Q_{N}^{(V)} - Q_{N}^{(VI)} + Q_{N}^{(VII)} &= \dfrac{2}{N^{6}} \sum_{k=1}^{N_{\alpha}-1} (k+6)(k+5) \sum_{j=0}^{k-1} (j+3)(j+2) \nonumber \\ &\sim \dfrac{2}{N^{6}} \sum_{k=1}^{N_{\alpha}-1} (k+6)(k+5) \dfrac{k^{3}}{3} + o\left( \frac{1}{N} \right) \nonumber \\ & \sim \frac{2}{18} \frac{N_{\alpha}^{6}}{N^{6}} + o\left( \frac{1}{N} \right) \nonumber \\ &\xrightarrow[N \to \infty]{} \dfrac{\alpha^{6}}{9}. \label{lim2} \end{align} Finally, considering the results obtained in \eqref{lim1} and \eqref{lim2}, we can conclude. \hfill \vrule width.25cm height.25cm depth0cm\smallskip \subsection{Joint densities under Renewal Sampling} \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|} \hline Joint distribution & Probability Density Function & Support \\\hline $f_{\tau_{i} , t_{i+1}}({a, b})$ & $\dfrac{N^{i+1}}{\Gamma(i)} a^{i-1} e^{-N (a + b)}$ & $0 \leq a < \infty$ \\ & & $0 \leq b < \infty$ \\ \hline $f_{\tau_{i} , \tau_{i+1}}({a, b}) $ & $\dfrac{N^{i+1}}{\Gamma(i)} a^{i-1} e^{-N b}$ & $0 \leq a \leq b$ \\ & & $0 \leq b < \infty$ \\ \hline $f_{\tau_{i-1} , t_{i}, t_{i+1}}({a, b, c})$ & $\dfrac{N^{i+1}}{\Gamma(i-1)} a^{i-2} e^{-N (a + b + c)}$ & $0 \leq a < \infty $ \\ & & $0 \leq b < \infty$ \\ & & $0 \leq c < \infty$ \\ \hline $f_{\tau_{i-1} , \tau_{i} , \tau_{i+1}}({a, b, c})$ & $\dfrac{N^{i+1}}{\Gamma(i-1)} a^{i-2} e^{-N c}$ & $0 \leq a \leq b$ \\ & & $0 \leq b \leq c$ \\ & & $0 \leq c < \infty$ \\ \hline $f_{\tau_{j} , \tau_{j+1} , \tau_{i} , \tau_{i+1}}({a, b, c,d})$ & $\dfrac{N^{i+1}}{\Gamma(j) \Gamma(i-j-1)} a^{j-1} \left(c - b \right)^{i-j-2} e^{-N d}$ & $0 \leq a \leq b$\\ & & $0 \leq b \leq c$ \\ & & $0 \leq c \leq d$ \\ & & $0 \leq d < \infty$ \\ \hline \end{tabular} \caption{Densities under Renewal Process} \label{densities} \end{table} \noindent {\bf Acknowledgements:} This research was partially supported by Project REDES 150038, MATHAMSUD 19-MATH-06, Math AmSud 18-MATH-07 SaSMoTiDep, CONICYT - MATHAMSUD FANTASTIC 20-MATH-05. T. Roa was partially supported by Beca CONICYT-PFCHA/Doctorado Nacional/2018-21180298, S. Torres was partially supported by FONDECYT 1171335 and C. Tudor was partially supported by MEC PAI80160046. \par \newpage
\section{Introduction} PHP remains the most common server-side language on the web, especially among the long tail of medium and small size websites. Due to the amount of economic activity taking place online, PHP web applications remain a tempting target for malicious actors looking to exploit security vulnerabilities for financial gain or in pursuit of other illicit ends. In order to preempt the compromise of PHP web applications, there has been a steady and growing trend by developers, security firms and white hat hackers to find, fix and disclose PHP vulnerabilities~\cite{MITRE2020BrowseDate}. The research community has also devoted a significant amount of effort to the automated discovery of PHP vulnerabilities. Besides established approaches based on static, flow, and taint analysis~\cite{Pixy2006,Dahse2010rips,DBLP:conf/ndss/DahseH14}, data mining has proven to be another effective approach~\cite{Son2011SAFERPHP,Medeiros2014WAP,medeiros_equipping_2016,Nunes2015PhpSafe,huang_uchecker_2019}. These solutions are very efficient in analysing large quantities of code, but tend to suffer from limited detection performance, in terms of false positives or false negatives. Following recent advances in deep learning and natural language processing, security researchers started to develop deep learning based approaches to detect software vulnerabilities in C and C++ programs~\cite{Russell2018Draper,li2018vuldeepecker}. Only very recently have we seen the first applications of deep learning to PHP vulnerability discovery~\cite{Fang2019Tap,Fidalgo2020NewDekant,Guo2020Vulhunter}. These approaches apply Long Short-Term Memory (LSTM) neural networks to capture non-local dependencies over various transformations of the source code. LSTM is good at finding patterns in sequential data but is not equally well suited to learn from tree- or graph-structured data, which is a more natural representation of program semantics.
In this paper we present \DT, a deep-learning based vulnerability detection approach which aims to combine both syntactic and semantic properties of source code. The first key decision in a machine learning based vulnerability detection pipeline for source code is what code units constitute the samples. Many options are possible, ranging from lines of code to entire code bases. Depending on the technique employed, the characteristics of vulnerabilities targeted and the detection objectives, a number of options can be appropriate. In our case we selected both function- and file-level granularities, for which we collected novel datasets and explored the trade-offs between the two. Intuitively, function-level classification helps to better locate vulnerable code and reduces the amount of unrelated source code to consider during the learning phase, which improves efficiency and may increase precision. On the other hand, some vulnerabilities are due to the interaction of non-local snippets of code, and may require a whole file, or even more, for identification. Moreover, file-level samples are easier to label in an automated fashion, and therefore constitute better learning datasets. In order to learn syntactic and structural properties from source code, \DT\ transforms it into a sequence of tokens to be analysed by a Gated Recurrent Unit (GRU), a neural network related to the LSTM and able to embed sequential information, in this case about the code structure. As a novel element of our approach for PHP, we attempt to learn semantic properties of the source code by analysing the CFG with a Graph Convolutional Network (GCN), a recent neural network architecture designed to handle graph-like data structures, which during training can embed semantic and contextual information of the source code into the classification model.
For our best model, this hybrid architecture achieves a 99.92\% F1 score on synthetic data from SARD~\cite{sardphp} and an 88.12\% F1 score on real-world data from GitHub (our novel dataset). We investigate the impact of different dataset distributions for detecting multiple vulnerabilities, and the challenges in creating such datasets. The key dimensions to take into account are the nature of the samples (synthetic versus realistic), the accuracy of the labels, the balance of the classes and the overarching difficulty in generating high-quality datasets. Existing work on PHP emphasised the use of clean and synthetic datasets, and in particular SARD. We found that even a model achieving 100\% F1-score when trained and tested on different portions of the same synthetic dataset can have dismal performance when tested on realistic data. We systematically compare the performance of \DT\ and a number of existing PHP vulnerability detection tools on SARD and on our real-world dataset. \DT\ outperforms the other tools on both datasets, but the gap becomes extremely large on the real-world one, even for pre-trained models. Finally, we tested \DT\ in the wild, evaluating its execution performance and its ability to generalise to a number of real-world PHP applications not present in the training dataset. We validated the practical usefulness of \DT\ by discovering 4 novel SQL injection and Cross-site scripting vulnerabilities in deployed plugins for WordPress. In summary, our main contributions are: \begin{itemize} \item The first investigation of the use of GCN and GRU to detect vulnerabilities in PHP source code, embedding syntactic, structural and semantic information in the machine learning model. \item An analysis of the impact of dataset definition on model performance for vulnerability discovery, and the collection of new function- and file-level labelled PHP datasets.
\item An extended evaluation of \DT, our GNN, by comparing it with selected existing tools for PHP vulnerability detection and by using it in the wild, where we discovered 4 novel vulnerabilities in established WordPress plugins. \end{itemize} \section{Background} In this Section, we review the three common PHP vulnerabilities targeted by our detector, and survey automated vulnerability detection approaches for PHP source code. \subsection{PHP Vulnerabilities} A software vulnerability is a mistake made by a software developer that can be exploited by a malicious actor to gain access to a system or network \cite{MITRE2019FAQ}. The 2020 CWE Top 25 Most Dangerous Software Weaknesses \cite{MITRE2020Top} lists the types of vulnerabilities that are most frequently reported in the current web application ecosystem. Since we target PHP source code, we focus on three types of vulnerabilities particularly common in PHP web applications: SQL injection (SQLi), Cross-site scripting (XSS) and OS Command injection (OSCI). Table~\ref{tab:code_snippet} shows vulnerable snippets of code from our GIT\ dataset (described in Section~\ref{sec:dataset}) for SQLi, XSS and OSCI. These are real-world security vulnerabilities found in open source PHP applications hosted on GitHub. We highlighted the vulnerable parts of each code snippet, and we explain them below. SQLi, also known as CWE-89, is a vulnerability that occurs when an attacker manages to alter a SQL query before it is passed to a database~\cite{Clarke2009sql}. It allows attackers to read or write data from the database without authorization, or to launch Denial of Service attacks. For example, in the first code snippet of Table \ref{tab:code_snippet}, the attacker is able to perform a SQLi by providing a carefully formatted string in place of their email in the variable \mmp$email$, which is used as part of a dynamically generated SQL query in the highlighted line of code.
The string provided by the attacker may be of the form ``\mmp$'' UNION [malicious SQL query]$'', where the attacker can use a succession of malicious SQL queries to learn about the structure of the database, exfiltrate and modify data. XSS, or CWE-79, is a vulnerability where the web application fails to sanitize user input before displaying it on a web page \cite{MITRE2019CWE-79Scripting}. This vulnerability occurs due to a nonexistent or insufficient implementation of input and output sanitization, which allows attackers to inject arbitrary JavaScript into an HTML file \cite{Johns2008Xss}. A successful XSS can lead to a number of malicious activities including extracting confidential information, such as session cookies, or sending malicious requests to other web applications. Table \ref{tab:code_snippet} shows an example of an XSS vulnerability found in a real-world PHP project. The highlighted part indicates the vulnerable part of the code. In this example, the input from \mmp$POST$ is not sanitised before being printed on the same line. Therefore, an attacker can trick a victim into inserting an attacker-controlled \mmp$<script>$ tag in the page, implementing arbitrary client-side malicious behaviour. OSCI, or CWE-78, is a web application vulnerability that allows attackers to execute malicious operating system commands on the targeted server that runs the vulnerable web application \cite{LIU2012Improving}. The severity of this type of attack depends on the privilege level that the attacker gains through the injection attack. The highlighted code in Table \ref{tab:code_snippet} shows an example of a real-world OSCI vulnerability where the filename from \mmp$FILES['file']['name']$ is concatenated with a string to form a full file path. It is intended to be used to move files through the execution of the shell command \mmp$mv$.
However, due to the lack of sanitisation on the input for the file name, an attacker can exploit this vulnerability by setting a malicious filename like ``\mmp$any_name.txt; [any shell command here]$" to run arbitrary commands on the server. \begin{table*}[t] \caption{Vulnerable code snippets from GitHub projects.} \label{tab:code_snippet} \centering \begin{tabular}{p{5.6cm}p{5.6cm}p{5.6cm}} \toprule \textbf{SQL Injection (SQLi)} & \textbf{Cross-site scripting (XSS)} & \textbf{OS Command Injection (OSCI)} \\ \midrule \begin{lstlisting}[language=PHP,escapechar=@,basicstyle=\fontsize{6pt}{6pt}\selectfont\ttfamily, belowskip=-1.2 \baselineskip, aboveskip=-0.3 \baselineskip] $user_name = ($firstname AND $lastname) ? $firstname.' '.$lastname : ''; $user_email = ($email) ? $email : $this->getRandomString(); $user_color = ($color) ? $color : $this->random_color(); @\hl{\$query = 'SELECT id FROM '.\$this->table\_prefix.'users WHERE `email` = \''.\$user\_email.'\' LIMIT 1;';}@ $usercheck = $this->db->query($query); if ( isset($usercheck[0]->id) ) { $user_id = $usercheck[0]->id; $user_email = $usercheck[1]->email; } \end{lstlisting} & \begin{lstlisting}[language=PHP,escapechar=@,basicstyle=\fontsize{7pt}{3pt}\selectfont\ttfamily, belowskip=-1.2 \baselineskip, aboveskip=-0.3 \baselineskip] @\hl{print\_r(\$\_POST);}@ if($_POST) { if(isset($_POST['webdav_url'])) { OC_CONFIG::setValue('user_webdavauth_url', strip_tags($_POST['webdav_url'])); } } $tmpl = new OC_Template( 'user_webdavauth', 'settings'); $tmpl->assign( 'webdav_url', OC_Config::getValue( "user_webdavauth_url" )); return $tmpl->fetchPage(); }\end{lstlisting} & \begin{lstlisting}[language=PHP,escapechar=@, basicstyle=\fontsize{7pt}{3pt}\selectfont\ttfamily, belowskip=-1.2 \baselineskip, aboveskip=-0.3 \baselineskip] $zip = "/tmp/" . @\hl{\$\_FILES['file']['name'];}@ @\hl{\$command = "mv " .\$\_FILES['file']['tmp\_name']." 
\$zip";}@ @\hl{exec(\$command,\$output=array(),\$res);}@ if ($res) { $this->errors[] = lang::translate('gallery_error_zip_mv'); return false; } @\hl{\$command = "chmod 777 " . \$zip;}@ @\hl{exec(\$command,\$output=array(),\$res);}@ if ($res) { $this->errors[] = lang::translate('gallery_error_zip_chmod'); } \end{lstlisting}\\ \bottomrule \end{tabular} \end{table*} \subsection{Detecting Vulnerabilities in PHP} Researchers and practitioners, over the years, have developed many tools to detect vulnerabilities in PHP applications. \subsubsection{Traditional Approaches} Traditional approaches focus on the use of static, semantic and taint analysis to locate vulnerabilities. Pixy~\cite{Pixy2006} implements flow-sensitive and context-sensitive data flow analysis to detect vulnerable components in PHP web applications, mainly targeting XSS. That approach can be extended to the detection of other taint-style vulnerabilities such as SQLi and OSCI. RIPS~\cite{Dahse2010rips,DBLP:conf/ndss/DahseH14} combines taint and static analysis to locate vulnerable program points in a PHP application. However, RIPS and Pixy are unable to analyze flaws that require the analysis of multiple files, or that depend on object-oriented features of PHP, limiting their effectiveness on current web applications. phpSAFE \cite{Nunes2015PhpSafe} performs a lexical and semantic analysis of code at the Abstract Syntax Tree (AST) level, before executing an inter-procedural analysis to follow the flow of tainted variables starting from the \mmp$main$ function. The authors report being able to detect SQLi and XSS vulnerabilities with a lower false positive rate than RIPS and Pixy. Unlike previous approaches, SAFERPHP \cite{Son2011SAFERPHP} focuses on the detection of Denial of Service (DoS) and missing authorisation checks.
Besides implementing taint analysis in the process, SAFERPHP also performs inter-procedural and semantic analysis by analysing the control dependencies via the control flow graph (CFG). The analysis allows the tool to identify and verify the consistency of possible security checks in all calling contexts. \subsubsection{Data Mining Approaches} More recent approaches aim to detect PHP web application vulnerabilities using data mining techniques. WAP~\cite{Medeiros2014WAP,medeiros_equipping_2016} implements taint analysis along with a number of machine learning models to predict vulnerable PHP samples. Logistic Regression obtains the best performance, and is able to detect 8 classes of vulnerabilities, including SQLi, XSS, and OSCI. In follow-up work, DEKANT~\cite{Medeiros2016Dekant} adopts Natural Language Processing (NLP) techniques to detect vulnerabilities. In particular, it uses a Hidden Markov Model (HMM)~\cite{Rabiner1989HMM} to characterise vulnerabilities based on a set of source code slices. These code slices are marked as tainted or non-tainted and then passed on for further analysis. Like WAP, DEKANT handles a number of different vulnerability classes. In a comparison against other tools, DEKANT achieved 96\% accuracy as compared to 90\% for WAP and 18\% for Pixy. WIRECAML~\cite{Kronjee2018Wirecaml} combines data-flow analysis and machine learning to detect SQLi and XSS vulnerabilities in PHP source code. The combination of reaching definition, taint and reaching constant analysis allows the tool to extract meaningful data flow features from the CFG, and optimise the learning process of the machine learning model. The best results are obtained by a Decision Tree classifier with a precision-recall curve score of 88\% for SQLi and 82\% for XSS.
In comparison with previous static analysis approaches like Pixy, RIPS and WAP, WIRECAML achieved better detection performance in terms of precision, recall and F1-scores in all cases except non-vulnerable XSS samples, where RIPS scored best. WIRECAML was used to detect a SQLi vulnerability in a photo gallery web application called Piwigo, allowing an attacker to inject arbitrary queries via a \mmp$POST$ parameter. \subsubsection{Deep Learning Approaches} More recently, deep learning is being applied to vulnerability detection for PHP source code. TAP \cite{Fang2019Tap} proposes a static analysis approach to detecting PHP vulnerabilities based on code tokens and deep learning techniques. The tool extracts code tokens from PHP code using a custom tokenizer, and performs data flow analysis to find relevant lines of code that contain function calls. TAP uses Word2Vec to generate numerical vectors from the code tokens, and implements a sequence-based deep learning technique called Long Short-term Memory (LSTM) to train the detection model. TAP handles several classes of vulnerabilities, and in a comparison with WIRECAML and RIPS achieved the best results for accuracy, F1-score and area under the curve (AUC) on both safe and vulnerable samples. Vulhunter \cite{Guo2020Vulhunter} proposes a different approach leveraging bytecode features to represent vulnerabilities. Vulhunter generates CFGs and data-flow graphs (DFGs) and analyses them to generate potentially suspicious code slices. The code slices are transformed into bytecode slices. Like TAP, Vulhunter uses Word2Vec to generate vectors from the bytecode slices. Vectors and tokens are passed to a Bi-directional Long Short-term Memory (Bi-LSTM). The evaluation results show that Vulhunter is capable of detecting SQLi and XSS vulnerabilities with higher recall and F1-scores than RIPS. Vulhunter was used to discover two XSS vulnerabilities and one SQLi vulnerability in SEACMS and CMS Made Simple.
The approach of \cite{Fidalgo2020NewDekant} also leverages PHP bytecode to locate vulnerabilities. Code slices are translated to bytecode using the Vulcan Logic Dumper (VLD), which intercepts Zend bytecode before it is executed. Bytecode tokens are mapped to integer values understandable by a neural network using a vocabulary-based translation. The authors train a 2-layer LSTM model and achieve 95.35\% accuracy, 96.51\% precision and 96.14\% recall using RMSProp as the optimisation function during training. However, they only focus on detecting SQLi vulnerabilities and do not compare their performance with previous approaches. To the best of our knowledge, we are the first to use graph neural networks for detecting vulnerabilities in PHP source code, and to investigate the effect of synthetic versus realistic datasets on model performance. \section{\DT}\label{sec:dt} \begin{figure*}[!t] \centering \includegraphics[scale=0.12]{img/DeepTective.jpg} \caption{\DT\ Architecture}\label{fig:arch} \end{figure*} In this Section, we introduce \DT, our novel PHP vulnerability detection model. \DT\ detects SQLi, XSS and OSCI vulnerabilities within source code, at function- and file-level granularity. It is divided into two key components: a Gated Recurrent Unit (GRU) which operates on the linear sequence of source code tokens, and a Graph Convolutional Network (GCN) which operates on the Control Flow Graph (CFG) of the source code. Each component provides a different mechanism for the model to detect multiple types of vulnerabilities effectively. We combine the GRU and GCN in a novel hybrid architecture able to leverage the strengths of both techniques. \subsection{Preprocessing} Our data samples are fragments of PHP code, either a function body or a whole file. As a first step, we raise the level of abstraction of the code to a format that will conceptually help the learning process.
We extract the linear sequence of parsed tokens in order to capture syntactic dependencies, and we extract the set of intraprocedural CFGs to capture semantic dependencies. We also transform the sequence of tokens and the CFG into a format suitable for consumption by a neural network, that is, multidimensional vectors of real numbers. \subsubsection{Sequence of Tokens} We parse a sample using \cmd{phply}~\cite{phply}, a PHP parsing library built on top of \cmd{ply}~\cite{ply}, an implementation of the \cmd{yacc} and \cmd{lex} parsing tools for Python. From parsing, we obtain an ordered sequence of tokens. We remove tokens for comments, tabs, spaces, and PHP open and close tags from the sequence, as the presence or absence of a vulnerability is not affected by these. In order to focus the learning on a manageable set of interesting tokens, we conflate the long tail of user-defined functions, variables, and constant values into abstract tokens, and retain the concrete token only for the first $k$ instances found in each sample. For example, at the function-level we substitute the first 10 variable tokens in a sample with the artificial tokens \tk{VAR0} - \tk{VAR9}, and substitute all the other ones with the abstract token \tk{VAR}. At the file-level, we retain the first 200 variables instead. We also retain the concrete token for selected PHP functions such as \mmp$query, exec, strip_tags$, which are relevant to the vulnerabilities we study, and typically represent sinks or sanitizers. Next, we turn each token into a number, using the \cmd{LabelEncoder} from \cmd{scikit-learn}~\cite{LabelEncoder} which, given a vocabulary of tokens, maps each to a sequential natural number. The GRU that consumes our token sequences requires vectors of fixed length as inputs. For function-level samples we use a fixed length of 200, and for file-level samples, which tend to be significantly longer, we use 3000.
In each case, if a sample has fewer tokens we pad it with zeros, and if it has more tokens we keep the maximum number of tokens allowed, starting from the end of the token sequence. This can lead to loss of information, but is a commonly accepted approach to analyse variable-length inputs, and in particular source code, with fixed-length neural architectures. \subsubsection{CFG} At the function-level, we parse each sample into an AST using \cmd{phply}. Then we extract the CFG, where each node represents a line of code, using our adaptation of code from \cmd{wirecaml}~\cite{Kronjee2018Wirecaml}. We use the same procedure as for sequences of tokens above, but with a fixed length of 20, to turn each node into a numerical vector. To transform the control flow graph into a tensor suitable for consumption by our GCN, we collate the nodes into a 2d matrix where each row contains the features of the node corresponding to the row index. Next we represent the CFG edges as a vector of tuples $(i,j)$ representing a directed edge from node $i$ to node $j$. The file-level process is analogous, except that we use \cmd{joernphp}~\cite{Backes2017JoernPHP} to parse and extract the CFGs from the samples, as it proved to be more robust for large files. \subsection{Model Architecture} Figure~\ref{fig:arch} illustrates the overall architecture of \DT. We now describe each component and summarize the architectural parameters. \subsubsection{Embedding Layer} The role of the embedding layer is to transform each numerical input produced in the preprocessing stage into a vector of real numbers, encoding that input as a combination of factors in a lower-dimensional space. We have two embedding layers; one for the token sequence and one for the CFG representation. Both embedding layers use vectors of length 100, which are learned via backpropagation during training and initialised at random. 
More formally, these layers are simply a mapping from each numerically encoded token $t_i$ to a vector $v_i \in \mathbb{R}^{100}$. \subsubsection{GRU} We extract features from the sequence of tokens representations using a multi-layer bidirectional Gated Recurrent Unit~\cite{cho2014learning} which can learn long term dependencies between the tokens. Code patterns, such as those leading to vulnerabilities, heavily depend on the syntax of a programming language and the local context in which they appear. Most tokens carry information about the next token in the sequence. This information flow propagates until the end of a code statement. The GRU takes advantage of this information flow by learning bidirectional sequences (i.e. forwards and backwards) of code tokens throughout the source code. Internally, each layer of the GRU computes the following function for each element in the input sequence: \begin{equation*} \begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array} \end{equation*} where $h_t$ is the hidden state at time $t$, $x_t$ is the input at time $t$, $h_{(t-1)}$ is the hidden state of the layer at time $t-1$ or the initial hidden state at time $0$, and $r_t$, $z_t$, $n_t$ are the reset, update, and new gates, respectively. $\sigma$ is the sigmoid function, and $*$ is the Hadamard product. Since we are implementing a multilayer GRU, the input $x^{(l)}_t$ of the $l$-th layer ($l \geq 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by the dropout mask $\delta^{(l-1)}_t$, where each element of $\delta^{(l-1)}_t$ is a Bernoulli random variable which is $0$ with the dropout probability~\cite{Pytorch2019GRU}. The output we take from the GRU is the concatenation of the hidden states at the beginning and end of each layer.
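To make the gating equations above concrete, the following is a minimal NumPy sketch of a single forward-direction GRU step. The actual model uses PyTorch's multi-layer bidirectional implementation; the weight scales and dimensions here are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W_i, W_h, b_i, b_h):
    """One GRU step, following the reset/update/new-gate equations above.
    W_i stacks (W_ir, W_iz, W_in); W_h stacks (W_hr, W_hz, W_hn)."""
    H = h_prev.shape[0]
    gi = W_i @ x_t + b_i                      # stacked input contributions
    gh = W_h @ h_prev + b_h                   # stacked hidden contributions
    r = sigmoid(gi[:H] + gh[:H])              # reset gate r_t
    z = sigmoid(gi[H:2*H] + gh[H:2*H])        # update gate z_t
    n = np.tanh(gi[2*H:] + r * gh[2*H:])      # candidate state n_t
    return (1 - z) * n + z * h_prev           # new hidden state h_t

D, H = 100, 100                               # embedding and hidden sizes (illustrative)
W_i = rng.normal(scale=0.1, size=(3 * H, D))
W_h = rng.normal(scale=0.1, size=(3 * H, H))
b_i = np.zeros(3 * H)
b_h = np.zeros(3 * H)

h = np.zeros(H)                               # initial hidden state at time 0
for x_t in rng.normal(size=(200, D)):         # a 200-token sequence of embeddings
    h = gru_cell(x_t, h, W_i, W_h, b_i, b_h)
print(h.shape)  # (100,)
```

Since $h_t$ is a convex combination of the $\tanh$-bounded candidate state and the previous hidden state, the hidden vector stays in $(-1, 1)$ throughout the sequence.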
\subsubsection{GCN} The CFG represents the control dependencies of functions and statements in a code sample. These approximate the flow of information from untrusted sources to sensitive sinks typical of injection vulnerabilities. Therefore, we extract features from the CFG using a Graph Convolutional Network~\cite{kipf2016semi}, which is able to embed such dependencies into our model, and learn their significance via backpropagation. Internally we use three layers of a GCN followed by Edge Pooling. Let $\mathbf{X}$ be the matrix of graph node features, and $\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}$ the adjacency matrix of the graph, with inserted self-loops. The equation \begin{equation*} \mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta} \end{equation*} defines the convolved signal matrix $\mathbf{X'}$, where $\hat{D}_{ii} = \sum_{j} \hat{A}_{ij}$ denotes the diagonal degree matrix and $\mathbf{\Theta}$ denotes the convolutional filter parameters~\cite{Fey2019GCNConv}. \subsubsection{Classification} We take the output of the graph convolutional layers and flatten it using max pooling. The outputs of the graph convolutional layers are node vectors of length 4000. The max pooling scans the $i$th element of each node and selects the maximum value as the $i$th element of the output vector. Mathematically: \begin{equation*} o_i = \max_{n=1}^{N} x_{in} \end{equation*} where $o_i$ is the $i$th element of the output vector, $x_{in}$ is the $i$th element of the $n$th node in the output graph, and $N$ is the number of nodes in the graph. We combine the output vector of the GCN with the output vector of the GRU and feed them to the linear classification layers. We have 3 linear classification layers, each with a dropout of 0.3 to combat overfitting, followed by a ReLU activation function. The final output is a score vector of length 4, representing the confidence of assigning the sample to each class.
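As an illustration of the propagation rule and the max-pooling readout above, the following NumPy sketch applies one graph convolution to a toy CFG. The node features, filter width and edge list are made up for the example; the real model stacks three such layers with Edge Pooling and learns $\mathbf{\Theta}$ via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gcn_layer(X, A, Theta):
    """One GCN propagation step: X' = D^-1/2 (A + I) D^-1/2 X Theta,
    matching the equation in the text."""
    A_hat = A + np.eye(A.shape[0])            # insert self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ Theta

# Toy CFG: 5 nodes (lines of code), directed edges as (i, j) tuples.
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]
A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1                     # symmetrised for the spectral rule

X = rng.normal(size=(5, 100))                 # node feature matrix (illustrative width)
Theta = rng.normal(size=(100, 16))            # convolutional filter parameters
X1 = np.maximum(gcn_layer(X, A, Theta), 0)    # ReLU, as after our GCNConv layers

# Max-pool readout: o_i = max over nodes of the i-th feature.
o = X1.max(axis=0)
print(X1.shape, o.shape)  # (5, 16) (16,)
```

The readout yields a fixed-length vector regardless of how many nodes the CFG has, which is what allows graphs of different sizes to feed the same classification layers.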
\subsubsection{Architectural Parameters} We have tested alternative hyper-parameter settings to tune the model, and we found that the current configuration provides the best detection performance across different types of vulnerability. Table \ref{tab:deeptective_arch} shows the details of the \DT\ architecture layer by layer. \begin{table}[h] \scriptsize \caption{\DT\ architectural parameters} \label{tab:deeptective_arch} \centering \begin{tabular}{p{1.8cm} p{0.5cm} p{1.1cm} p{0.8cm} p{0.9cm} p{0.8cm}} \toprule \textbf{Layer} & \textbf{Output size} & \textbf{Edge-pool size} & \textbf{Dimension} & \textbf{Activation function} & \textbf{Dropout}\\ \midrule Embedding & 200 & None & 100 & None & None\\ GRU & 200 & None & None & None & None\\ GRU & 200 & None & None & None & None\\ GRU & 200 & None & None & None & None\\ Embedding & 2000 & None & 100 & None & None\\ GCNConv & 2000 & 2000 & None & ReLU & None\\ GCNConv & 4000 & 4000 & None & ReLU & None\\ GCNConv & 4000 & 4000 & None & ReLU & None\\ Fully connected & 1000 & None & None & ReLU & 0.3\\ Fully connected & 500 & None & None & ReLU & 0.3\\ Fully connected & 4 & None & None & ReLU & None\\ \bottomrule \end{tabular} \end{table} \section{Datasets}\label{sec:dataset} In order to evaluate a supervised vulnerability detection model, we need to build datasets with vulnerable and non-vulnerable samples. We are interested in two kinds of samples: entire files, or individual functions. We label the samples as Safe, XSS, SQLi and OSCI, where the latter 3 labels together form the Unsafe ``virtual'' label. We extract the samples from synthetic data (SARD) and real-world projects (GitHub), as detailed below. In order to support further research in the area, and facilitate the comparison between different approaches, we plan to make our datasets available to the public.
\subsection{Synthetic Samples} \newcommand\sard{SARD$^\#$} \newcommand\sardo{SARD} The Software Assurance Reference Dataset project \cite{sard} is a collection of code samples for multiple programming languages. The objective is to enable researchers and developers to evaluate alternative methods for detecting different types of bugs. Below, we consider the subset of SARD for PHP vulnerabilities~\cite{sardphp}. Each sample is a short standalone file with no external dependencies. Samples are generated by a tool called the PHP Vulnerability Test Suite Generator \cite{sardphpgenerator}. The dataset contains both safe and unsafe samples for different vulnerability types. A separate metadata file lists the line considered responsible for the vulnerability of each unsafe sample. \begin{figure}[!t] \begin{center} \hrule \begin{lstlisting}[language=PHP,basicstyle=\fontsize{8pt}{8pt}\selectfont\ttfamily]
<!DOCTYPE html><html>
<head><style><?php
$array = array();
$array[] = 'safe' ;
$array[] = $_GET['userData'] ;
$array[] = 'safe' ;
$tainted = $array[1] ;
$tainted = http_build_query($tainted); //flaw
echo $tainted ;
?></style></head>
<body><h1>Hello World!</h1></body>
</html>
\end{lstlisting} \hrule \end{center} \caption{XSS test case from the SARD dataset.}\label{fig:SARD} \end{figure} Consider the example of a web page vulnerable to XSS from SARD reported in Figure~\ref{fig:SARD}. We can immediately see that SARD samples are rather simplistic and unlikely to reflect code in real world projects (as in Table~\ref{tab:code_snippet}), so it is not clear if models learned on SARD can be transferred to other code bases. Despite that, SARD has been widely used in previous studies of vulnerabilities, including specifically for PHP~\cite{Kronjee2018Wirecaml,Fang2019Tap}. The advantages of SARD are that vulnerabilities are guaranteed to be self-contained in the samples, and each sample has very few irrelevant lines of code. This helps focus the learning process.
Besides, the labels of SARD samples are highly accurate, which is a primary concern for supervised machine learning approaches. We extract the PHP code from each SARD safe and unsafe sample for XSS, SQLi and OSCI. Some SARD samples are very similar to each other. For example, there is a variant of the listing above where the \cmd{<style>} tags are replaced with \cmd{<script>} tags. This introduces duplicates in our code-only dataset, which we remove. We only collect file-level samples for SARD as each sample is already very short, and any function, if present, only contains very limited code (typically 1 line). We denote by \sard\ our derived dataset. The numbers of samples in the original SARD dataset and in our dataset are reported in Table~\ref{tab:alldata}. \subsection{Realistic Samples} Besides the focused, synthetic samples from \sard, we want to collect a dataset representing vulnerabilities as they actually appear in realistic PHP projects. GitHub hosts source code for PHP projects of all sizes, ranging from the extremely popular WordPress framework to a beginner's first PHP snippet. In order to select representative vulnerabilities, we searched the National Vulnerability Database (NVD)~\cite{nvd} for CVE entries labelled with the CWE identifier of XSS (CWE-79), SQLi (CWE-89) and OSCI (CWE-78). We extracted from the references of each relevant CVE any GitHub commit URL, and cloned the corresponding PHP repositories. In addition, we also cloned from GitHub some of the largest and most commonly used open source PHP projects: \cmd{Moodle}, \cmd{CodeIgniter}, \cmd{Drupal}, \cmd{ILIAS}, \cmd{phpmyadmin}, \cmd{wikia}, \cmd{magento2}, \cmd{simplesamlphp} and \cmd{WordPress}. \subsubsection{Sample Extraction} We search the commit history of each cloned project for keywords related to the vulnerabilities we are interested in, including ``xss'', ``sqli'' and several variants.
There are a few commit messages that report fixing both XSS and SQLi vulnerabilities: we exclude these, as multi-label classification is beyond the scope of this project. When we come across a relevant commit, we extract the vulnerable version of the affected files, and add to each file the label for the corresponding vulnerability. These constitute our file-level positives. From the same version of the repository, we save the files not affected by the commit as our file-level negatives. To build a function-level dataset, we make the assumption that the presence in the function body of patched lines from a relevant commit implies that a function is vulnerable. We take a vulnerable file as identified above, start from the lines changed by relevant commits, and use interval trees to extract from the source code the functions containing the changed lines. These constitute our function-level positives. We also extract from each file the functions that do not contain any line changed in the commit, and save them as our function-level negatives. \subsubsection{Label Noise} The approach described above may introduce noise in the labelling of samples. Files may be mislabelled when a commit message misidentifies a vulnerability. Vulnerable files with a commit message that does not mention a vulnerability fix, and files which contain vulnerabilities not known or fixed by the developers, will be mistakenly labelled as negatives. A vulnerability-relevant commit may also include unrelated changes to non-vulnerable files. These files will be mistakenly labelled as positives. Similarly, if a vulnerable file contains changes for both vulnerable and non-vulnerable functions, the latter will be mislabelled as positives. To limit these effects, we ignore commits modifying more than 20 files, and we discard changes that only consist in deleting lines of code, as both cases are mostly associated with code refactoring. 
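The line-to-function labelling described above comes down to interval containment. The following simplified Python sketch uses hypothetical function spans and patched line numbers, and plain interval checks in place of an interval tree; in practice the function spans come from the \cmd{phply} AST.

```python
# Hypothetical function spans: (name, start_line, end_line) tuples.
functions = [
    ("render_header", 10, 25),
    ("save_comment", 30, 58),
    ("format_date", 62, 70),
]
changed_lines = {41, 44, 80}   # lines patched by a vulnerability-fix commit

positives, negatives = [], []
for name, start, end in functions:
    # A function is labelled vulnerable iff its body contains a patched line.
    if any(start <= ln <= end for ln in changed_lines):
        positives.append(name)
    else:
        negatives.append(name)

print(positives)  # ['save_comment']
print(negatives)  # ['render_header', 'format_date']
```

Note that the patched line 80 falls outside every function body, illustrating why some vulnerabilities appear only in the file-level dataset.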
We manually inspected 10\% of the files labelled as positives, and did not detect any mislabelling. \subsubsection{Datasets} We denote the file-level dataset by GIT, and the function-level dataset by GIT$^f$. The number of samples of each class in GIT\ and GIT$^f$\ is reported in Table~\ref{tab:alldata}. Note that the relation between the number of samples in the two datasets is not straightforward. GIT$^f$\ has more negative samples, as a file with a vulnerable function may contain a sizeable number of functions not affected by the commit. On the other hand, GIT\ has more positives because some vulnerabilities are not located inside function bodies. \begin{table}[!t] \caption{Number of samples in relevant datasets.} \label{tab:alldata} \begin{center} \scriptsize \begin{tabular}{l l l l l } \toprule \textbf{Dataset} & \textbf{Safe} & \textbf{XSS} & \textbf{SQLi} & \textbf{OSCI}\\ \midrule \sardo & 16240 & 4352 & 912 & 624 \\ \sard & 2928 & 960 & 288 & 250 \\ GIT & 2726 & 2117 & 604 & 7\\ GIT$^f$ & 4288 & 726 & 428 & 11\\ \bottomrule \end{tabular} \end{center} \end{table} \section{Model Evaluation} We evaluate \DT\ on the separate tasks of function classification and file classification. For each task, we train and test the model on data from SARD, GitHub, and from both. This allows us to compare the difference between using synthetic and real-world samples. Furthermore, we compare the classification performance of \DT\ with previous work, and identify interesting variations between the approaches. \subsection{Methodology} \subsubsection{Experimental Setup} For both experiments, we use Pytorch 1.5 and Torch Geometric 1.5.0 with CUDA 10.1 on top of Python 3.8.1. We train the model on a computer running an Intel Xeon Skylake CPU (40 cores), 128GB RAM and an Nvidia GTX Titan XP. We use Weights \& Biases \cite{wandb} as our main experimental management tool to track each run throughout this work.
\subsubsection{Performance Criteria} For each experiment, we report true negatives (TN), false negatives (FN), true positives (TP), false positives (FP), accuracy, precision, recall and F1-score. Accuracy measures the percentage of correctly predicted samples, but is not very significant when test classes are imbalanced, as in the case of vulnerabilities, which are very rare compared to safe samples. A trivial classifier marking everything as safe would have very high accuracy in the real world, but little use. Precision measures how many of the reported vulnerabilities are actual vulnerabilities: it tells us if it is worth investigating the results of the classifier. Recall measures the percentage of existing vulnerabilities that the classifier is able to discover: it tells us how worried we should still be after running the tool. Finally, the F1 score summarises numerically the balance between precision and recall. Note that in Tables~\ref{tab:exp_funclevel} and \ref{tab:exp_filelevel} we report only the figures for the binary classification problem where the positive classes XSS, SQLi and OSCI are merged into the Unsafe class. This is to simplify exposition, and because ultimately we care mostly about detecting vulnerabilities, irrespective of their specific label. Internally though we train our model and measure results for multi-class classification, and will report relevant details where appropriate. \subsubsection{Model Training} Since this is a multi-class classification problem, we use cross-entropy as our loss function. The training process uses a batch size of 64 along with an Adam optimiser and a learning rate of $10^{-5}$. Alongside this, we implement a learning rate scheduler that reduces the learning rate if the loss plateaus. Lastly, we split the dataset into training/validation/test sets with an 80/10/10 ratio, and stratify data according to their classes.
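The stratified 80/10/10 split can be sketched in pure Python as follows; the class counts are illustrative placeholders, and in practice library support for stratified splitting can be used instead.

```python
import random

random.seed(0)

def stratified_split(labels, fracs=(0.8, 0.1, 0.1)):
    """Split sample indices 80/10/10 while preserving the class
    proportions of `labels` in each of the three partitions."""
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, val, test = [], [], []
    for idxs in by_class.values():
        random.shuffle(idxs)                       # randomise within each class
        n = len(idxs)
        a = int(n * fracs[0])
        b = int(n * (fracs[0] + fracs[1]))
        train += idxs[:a]
        val += idxs[a:b]
        test += idxs[b:]
    return train, val, test

# Illustrative class distribution: 0 = Safe, 1 = XSS, 2 = SQLi, 3 = OSCI
labels = [0] * 800 + [1] * 120 + [2] * 60 + [3] * 20
train, val, test = stratified_split(labels)
print(len(train), len(val), len(test))  # 800 100 100
```

Stratifying per class matters here because the vulnerable classes are rare: a naive random split could leave a tiny class such as OSCI entirely absent from the validation or test partition.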
With the model and hyper-parameters in place, we train the model for 150 epochs to maximise the learning potential of our model. \subsection{Classification}\label{sec:classification} We perform two experiments to investigate different learning patterns across function-level and file-level granularity. Different granularity levels are expected to behave differently based on the information provided for the model to learn. \subsubsection{Function-Level Granularity} Table \ref{tab:exp_funclevel} shows the result of testing our function-level model on the \sard, GIT$^f$\ and combined dataset \combofu, after training on each of them respectively. \begin{table}[!t] \scriptsize \caption{Function-level granularity results.} \label{tab:exp_funclevel} \centering \begin{tabular}{p{0.8cm} p{0.7cm} p{0.2cm} p{0.2cm} p{0.2cm} p{0.2cm} p{0.7cm} p{0.7cm} p{0.5cm} p{0.4cm}} \toprule \textbf{Model}& \textbf{Testing \hspace{0.2cm}set} & \textbf{TN} & \textbf{FN} & \textbf{TP} & \textbf{FP} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} & \textbf{Recall (\%)} & \textbf{F1 (\%)} \\ \midrule Func-S& \sard & 277 & 0 & 149 & 16 & 96.38 & 90.30 & 100.0 & 94.90\\ (\sard) & GIT$^f$ & 4182 & 1133 & 42 & 105 & 76.88 & 28.57 & 3.57 & 6.35\\ & \combofu & 4455 & 1134 & 190 & 105 & 78.27 & 60.32 & 14.35 & 23.18\\ \midrule Func-G& \sard & 2361 & 1248 & 240 & 567 & 54.46 & 29.74 & 16.13 & 20.92\\ (GIT$^f$) & GIT$^f$ & 366 & 65 & 53 & 63 & 75.14 & 45.69 & 44.92 & 45.30\\ & \combofu & 2727 & 1313 & 293 & 630 & 56.74 & 31.74 & 18.24 & 23.17\\ \midrule Func-A& \sard & 290 & 0 & 149 & 3 & 99.32 & 98.03 & 100.0 & 99.0\\ (\combofu) & GIT$^f$ & 378 & 72 & 46 & 51 & 76.05 & 47.42 & 38.98 & 42.73\\ & \combofu & 661 & 70 & 197 & 61 & 85.84 & 76.36 & 73.78 & 75.05\\ \bottomrule \end{tabular} \end{table} \stitle{Func-S} The Func-S model is trained on the \sard\ dataset. It achieves great performance when testing on the \sard\ dataset itself (despite training data not overlapping with testing data). 
The precision is 90.30\% with a recall of 100\% and F1 of 94.9\%. The perfect recall score means that all vulnerable PHP samples are correctly classified as true positives. However, Func-S fails spectacularly on the real-world GIT$^f$\ dataset, with precision and recall down respectively to 28.57\% and 3.57\%. We hypothesize that this failure to generalise is due to the highly skewed and homogeneous nature of \sard\ samples on which the model is trained. In particular, the model fails to detect most of the vulnerable GIT$^f$\ samples (1133 FN). On inspection, \sard\ vulnerable samples are short and focused around the vulnerability, whereas GIT$^f$\ vulnerable functions may contain a lot of irrelevant context, and more varied vulnerability patterns. As can be expected, the performance on \combofu\ is roughly a weighted average of the preceding two. \stitle{Func-G} The results for the Func-G model are qualitatively similar, but the performance on the same distribution (the GIT$^f$ dataset) is still disappointing in absolute terms, with 45.69\% precision, 44.92\% recall and 45.30\% F1. We believe this shows that the function-level model is not appropriate for real world code. In fact, by manually inspecting GIT$^f$\ samples we can observe that although a vulnerability may in effect be present inside a function, the vulnerable line by itself is not sufficient to detect the function as vulnerable. As an extreme example, the identity function can be considered as a vulnerable instance of a function to sanitize user input: but inspecting the identity function by itself gives no clues to the presence of a vulnerability. This observation motivated us to explore file-level granularity, but first we investigate if combining \sard\ and GIT$^f$\ could introduce synergies which improve the classification performance. \stitle{Func-A} The results for the Func-A model show a noticeable improvement on \sard\ over the already high performance of Func-S.
On the other hand, the performance on GIT$^f$\ is similar but slightly worse (F1 score) than the one of Func-G, so the benefit of training across datasets was only felt in one direction. Finally, note that the jump in performance on \combofu\ is mostly an artefact of the lower number of samples available for testing, as 90\% of both \sard\ and GIT$^f$\ data is used for training. This leads to a higher weight given to the \sard\ performance in comparison to the Func-S case. Figure \ref{fig:dist_prediction_func}(A) compares the percentage of correct predictions for each fine-grained class on the \combofu\ test set, for Func-S,-G and -A. The main observation is that the Func-A model shows a reliable predictive capability, with all classes above 60\%. Overall, at the function-level, combining datasets from different data distributions allows the model to learn more vulnerability patterns, which help the model to generalise its detection ability across different code writing styles and application domains. \subsubsection{File-Level Granularity} We now want to test if providing more context to the model improves its ability to learn vulnerable code patterns. Contextual information is made available to the model by switching from function-level to file-level granularity of samples, and adapting the model as described in Section~\ref{sec:dt}. Table \ref{tab:exp_filelevel} shows the result of training our model at the file-level granularity and testing on \sard, GIT\ and their combination \combofi. 
\begin{table}[!t] \scriptsize \caption{File-level granularity results.} \label{tab:exp_filelevel} \centering \begin{tabular}{p{0.74cm} p{0.7cm} p{0.2cm} p{0.2cm} p{0.2cm} p{0.2cm} p{0.7cm} p{0.7cm} p{0.5cm} p{0.4cm}} \toprule \textbf{Model}& \textbf{Testing \hspace{0.2cm}set} & \textbf{TN} & \textbf{FN} & \textbf{TP} & \textbf{FP} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} & \textbf{Recall (\%)} & \textbf{F1 (\%)} \\ \midrule File-S& \sard & 1624 & 0 & 589 & 0 & 100 & 100.0 & 100.0 & 100.0\\ (\sard) & GIT & 1817 & 2263 & 465 & 909 & 36.89 & 33.84 & 17.05 & 22.67\\ & \combofi & 3439 & 2263 & 1054 & 911 & 55.13 & 53.64 & 31.78 & 39.91\\ \midrule File-G& \sard & 9143 & 2010 & 3878 & 7097 & 54.62 & 35.33 & 65.86 & 45.99\\ (GIT) & GIT & 251 & 44 & 229 & 22 & 83.33 & 91.24 & 83.88 & 87.40\\ & \combofi & 9396 & 2054 & 4107 & 7117 & 55.32 & 36.59 & 66.66 & 47.25\\ \midrule File-A& \sard & 1624 & 1 & 588 & 0 & 99.95 & 100.0 & 99.83 & 99.92\\ (\combofi) & GIT & 240 & 32 & 241 & 33 & 82.78 & 87.96 & 88.28 & 88.12\\ & \combofi & 1864 & 34 & 828 & 33 & 96.56 & 96.17 & 96.06 & 96.11\\ \bottomrule \end{tabular} \end{table} \stitle{File-S} The File-S model achieves perfect scores for all the metrics when testing on the \sard\ dataset itself. The improvement over the performance of Func-S, which is trained on the same dataset, must then be due entirely to the model adaptations for file-level granularity, including a larger vocabulary size. The results on GIT\ are also better than those of Func-S but still not practically useful. \stitle{File-G} A substantial improvement instead is observed on the performance of the File-G model, in particular on GIT. It achieves 91.24\% precision and 83.88\% recall, for an F1 score of 87.40\%. This shows that although GIT\ is a harder dataset to learn, consisting of highly diverse PHP files from popular projects, file-level granularity provides enough of the missing information to achieve usable performance.
\stitle{File-A} Training on the combined dataset has the effect of slightly reducing the perfect performance of File-S on \sard, but yields a larger increase over the F1 score of File-G on GIT. In particular File-A finds more real-world vulnerabilities (increase of TP) but at the price of a few more false alarms (increase of FP). \begin{figure*}[!t] \begin{tabular}{cc} \includegraphics[scale=0.75]{img/dist_prediction_funclevel.pdf} & \includegraphics[scale=0.75]{img/dist_prediction_filelevel.pdf}\\ (A) Function-level & (B) File-level \end{tabular} \caption{Distribution of correctly predicted samples across different types of vulnerability.} \label{fig:dist_prediction_func}\label{fig:dist_prediction_file} \centering \end{figure*} Figure \ref{fig:dist_prediction_file}(B) compares the percentage of correct predictions for each fine-grained class on the \combofi\ test set, for File-S, -G and -A. The average predictive capability of File-A is higher than 80\% for all classes, hence the additional contextual information provided at the file-level has a significant impact also at the multi-class classification level. \subsection{Tool Comparison}\label{sec:tools} We compared the classification performance of \DT\ File-A, our best model, with selected publicly available tools to find PHP vulnerabilities, based on machine learning (\cmd{wirecaml} and \cmd{TAP}) or static analysis (\cmd{progpilot}, \cmd{RIPS} and \cmd{WAP})~\cite{Kronjee2018Wirecaml,Fang2019Tap,progpilot,Dahse2010rips,Medeiros2014WAP}.
We ran all the tools above on the same test sets from the \sard\ and GIT\ datasets which we used in Section~\ref{sec:classification} to evaluate File-A. We measured the tools' detection performance, which is reported in Table~\ref{tab:compare_sard}. Note that \cmd{wirecaml} is made of two binary classifiers for XSS and SQLi and thus we report the performance of each individual classifier. Furthermore, vulnerabilities of the class that is not being classified by a \cmd{wirecaml} classifier were treated as safe samples when judging performance. Machine learning tools often perform better when trained and tuned using their authors' datasets. Hence, we used \cmd{wirecaml} and \cmd{TAP} trained on their respective datasets, effectively testing their ability to generalise to new datasets. \begin{table}[!t] \scriptsize \caption{Comparison: \DT\ File-A vs. selected tools.} \label{tab:compare_sard} \centering \begin{tabular}{p{1.7cm} p{0.2cm} p{0.2cm} p{0.2cm} p{0.2cm} p{0.7cm} p{0.7cm} p{0.5cm} p{0.4cm}} \toprule \textbf{Tool name} & \textbf{TN} & \textbf{FN} & \textbf{TP} & \textbf{FP} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} & \textbf{Recall (\%)} & \textbf{F1 (\%)} \\ \toprule \multicolumn{9}{c}{\textbf{A}: Results for \sard\ dataset.}\\ \toprule DeepTective & 1624 & 1 & 588 & 0 & \textbf{99.95} & \textbf{100.0} & 99.83 & \textbf{99.92}\\ \midrule TAP & 1584 & 96 & 493 & 40 & 93.85 & 92.50 & 83.70 & 87.88\\ wirecaml-XSS & 470 & 50 & 385 & 1308 & 38.64 & 22.74 & 88.51 & 36.18\\ wirecaml-SQLi & 1496 & 0 & 91 & 626 & 71.71 & 12.69 & \textbf{100.00} & 22.52\\ \midrule progpilot & 629 & 304 & 285 & 995 & 41.30 & 22.27 & 48.39 & 30.50\\ WAP & 1342 & 477 & 112 & 282 & 65.70 & 28.43 & 19.02 & 22.79\\ RIPS & 1440 & 497 & 92 & 184 & 69.23 & 33.33 & 15.62 & 21.27\\ \toprule \multicolumn{9}{c}{\textbf{B}: Results for GIT\ dataset.}\\ \toprule DeepTective & 240 & 32 & 241 & 33 & 82.78 & \textbf{87.96} & \textbf{88.28} & \textbf{88.12}\\ \midrule TAP & 233 & 262 & 11 & 40 &
44.69 & 21.57 & 4.03 & 6.79\\ wirecaml-XSS & 299 & 171 & 41 & 35 & 62.27 & 53.95 & 19.34 & 28.47\\ wirecaml-SQLi & 484 & 60 & 0 & 2 & \textbf{88.64} & 0.00 & 0.00 & 0.00\\ \midrule progpilot & 265 & 257 & 16 & 8 & 51.47 & 66.67 & 5.86 & 10.77\\ WAP & 160 & 154 & 119 & 113 & 51.10 & 51.29 & 43.59 & 47.13\\ RIPS & 256 & 225 & 48 & 17 & 55.68 & 73.85 & 17.58 & 28.40\\ \bottomrule \end{tabular} \end{table} The results show that \DT\ significantly outperformed the other tools in terms of F1 score. \cmd{TAP} achieved a high F1 on the synthetic \sard\ dataset, but showed poor performance on the realistic samples from GIT. \cmd{wirecaml-SQLi} achieved a high accuracy on GIT, but at the price of null precision and recall. Note that the same tool had perfect recall on the \sard\ dataset. On a synthetic dataset intersecting with our \sard, \cite{Fang2019Tap} measured F1 scores of 98.8\% and 97.5\% for \cmd{TAP} and \cmd{wirecaml} respectively. Our failure to replicate a similar result for those (pre-trained) tools on \sard\ points to the difficulty for some machine learning models to generalise even to related datasets. We have noted above how a perfect 100\% F1 for File-S on \sard\ translated into a poor 22.67\% F1 for the same model on GIT. That result is in line with the drop observed in the performance of all the tools above from testing on \sard\ to testing on GIT. We believe our results show that evaluating tools only on synthetic datasets is not a sufficient guarantee of practical performance, and that \DT\ File-A stands out in its ability to perform well on realistic samples. \section{Practical Experiments} In order to evaluate the practical usefulness of our model, we ran it on a number of PHP projects which we did not include in our GIT\ dataset. In particular we want to estimate the execution performance, to ensure that the tool can scale also to large projects, and assess its usability for actual vulnerability detection.
For these experiments we have chosen 13 software projects divided into two sets: 8 \textbf{popular projects} and 5 \textbf{smaller plugins}. The popular projects are drawn from the top 50 GitHub repositories (by stars) that use PHP as their primary language, and they span from a few hundred kilobytes to tens of megabytes in size. These are meant to be a representative benchmark for the execution performance. We expect the popular projects to be carefully reviewed, hence we make the assumption that they currently have no security vulnerabilities, and accordingly we assume no TP or FN for classification purposes. We also collect 5 WordPress plugin projects with a limited number of users (fewer than 20,000), to increase the likelihood of them containing an undiscovered security vulnerability. Projects with a limited user base may have a smaller development team lacking security expertise, or be subject to less scrutiny than popular projects. Below we report the execution performance and accuracy for both sets; we then dig deeper into the smaller plugins set to hunt for vulnerabilities, limiting the effort necessary to manually review positives. \subsection{Execution Performance} The size of the software projects considered varies from 110KB with 2713 lines of code (LoC) to 27MB with 242,299 lines of code. The size and LoC distribution of these software projects reflects that of real-world projects, which range from small to large in scale. To evaluate the execution performance of \DT\ across real-world software projects, we use the following performance metrics: \begin{itemize} \item\stitle{Lines of code (LoC)} The number of lines of code in each file, for all the PHP files in a specific software project. \item\stitle{Processing time} The time taken (in seconds) to process and transform a PHP file to the data structure used by our detection model. This process includes the creation of token sequences and CFGs.
\item\stitle{Inference time} The time taken to perform the classification of all the PHP files in a specific software project. \item\stitle{Time/LoC} The average total time taken (processing and inference) per line of code for a software project. \item\stitle{Time/File} The average total time taken (processing and inference) per file in a software project. \end{itemize} \begin{table*}[t] \scriptsize \caption{\DT\ execution performance.} \label{tab:execution_performance} \centering \begin{tabular}{p{3.2cm} p{0.9cm} p{0.5cm} p{0.6cm} p{1.1cm} p{0.9cm} p{1cm} p{1cm} p{1.5cm} p{1.6cm}} \toprule \textbf{Software project}& \textbf{Size (bytes)} & \textbf{PHP files} & \textbf{LoC} & \textbf{Processing time (s)} & \textbf{Inference time (s)} & \textbf{Time (s)/LoC} & \textbf{Time (s)/File} & \textbf{File-A accuracy(\%)} & \textbf{File-G accuracy(\%)}\\ \midrule \multicolumn{10}{c}{\textbf{A}: Results for popular projects}\\ \midrule Codeigniter & 7,416,704 & 669 & 138495 & 728.4836 & 7.4599 & 0.00531 & 1.10006 & 53.81 & 56.35 \\ Composer & 2,342,547 & 252 & 53518 & 384.9617 & 3.1456 & 0.00725 & 1.54011 & 55.16 & 72.22 \\ Grav & 5,955,146 & 347 & 60922 & 400.0879 & 4.0205 & 0.00663 & 1.16458 & 55.62 & 62.25\\ Guzzle & 352,741 & 32 & 4555 & 28.1737 & 0.7210 & 0.00634 & 0.90296 & 50.00 & 59.38\\ Laravel & 110,595 & 53 & 2713 & 17.1215 & 0.8413 & 0.00662 & 0.33892 & 69.81 & 75.47\\ PHPMailer & 381,439 & 55 & 2185 & 20.6959 & 0.9937 & 0.00993 & 0.39436 & 96.36 & 96.36\\ PHPUnit & 1,373,437 & 323 & 35367 & 225.5044 & 3.8504 & 0.00648 & 0.71008 & 66.80 & 83.59\\ Symphony & 27,052,061 & 2676 & 242299 & 1699.9081 & 26.2258 & 0.00712 & 0.64504 & 75.85 & 75.67\\ \midrule \multicolumn{10}{c}{\textbf{B}: Results for smaller plugins}\\ \midrule Appointment Booking Calendar & 2,826,657 & 16 & 4735 & 50.1272 & 0.8017 & 0.01076 & 3.18306 & 31.25 & 56.25\\ Payment Form for PayPal Pro & 1,005,490 & 13 & 4379 & 44.5420 & 0.7712 & 0.01035 & 3.48563 & 15.38 & 53.85\\ PayPal for Digital Goods &
149,137 & 7 & 1152 & 6.1942 & 0.5617 & 0.00586 & 0.96514 & 57.14 & 42.86\\ Sportspress & 4,834,097 & 256 & 50461 & 419.3428 & 3.4818 & 0.00838 & 1.65166 & 50.39 & 48.05\\ Simple Jobs Board & 9,783,895 & 198 & 19775 & 108.8408 & 2.3262 & 0.00562 & 0.56145 & 86.87 & 86.36\\ \midrule Total & 63,583,946 & 4897 & 620556 & 4133.9838 & 55.2009 & 0.00675 & 0.85546 & 61.98 & 71.37\\ \bottomrule \end{tabular} \end{table*} Table \ref{tab:execution_performance}-A shows the execution performance for the popular projects. Symphony has the longest processing time, 1699.91 seconds, as it has the largest number of PHP files and the most LoC. Laravel has the shortest processing time, 17.12 seconds, despite having a higher number of LoC (2713) than PHPMailer (2185). This is due to the simpler structure of the Laravel code: Laravel is a lightweight PHP framework containing the wireframe to develop a PHP web application. In terms of inference time, the data shows a consistent trend based on the number of PHP files in a project: the higher the number of PHP files, the longer it takes to perform inference, as the process operates at file-level granularity. The Time/LoC metric shows only minor differences across all the GitHub software projects. However, the Time/File metric shows a surprising pattern: Composer has the highest execution time per file even though its total number of PHP files and its overall execution time are lower than those of larger projects like Symphony and CodeIgniter. Composer \cite{Khliupko2017} is a dependency management tool for PHP projects, which allows the user to declare, update and manage external libraries. This suggests that the code complexity of Composer contributes to its high Time/File metric compared to the other GitHub projects. Table \ref{tab:execution_performance}-B shows the execution performance for the smaller plugins obtained from the WordPress plugins website.
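The derived Time/LoC and Time/File columns in Table~\ref{tab:execution_performance} follow directly from the raw measurements, spreading the total time (processing plus inference) over the LoC and file counts; a minimal Python sketch reproducing the Codeigniter row (values copied from the table):

```python
# Derived timing metrics: total analysis time (processing + inference)
# divided by LoC and by file count. Raw values from the Codeigniter row.
processing_s, inference_s = 728.4836, 7.4599
loc, files = 138495, 669

total_s = processing_s + inference_s   # 735.9435 s overall
time_per_loc = total_s / loc           # ≈ 0.00531 s per line of code
time_per_file = total_s / files        # ≈ 1.10006 s per file (table value)

print(round(time_per_loc, 5))
# → 0.00531
```

The remaining rows (and the Total row) follow the same computation.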
The LoC of each project is consistent with both the project size and the total number of PHP files. In terms of processing time and inference time, Sportspress takes much longer, with 419.34 seconds and 3.48 seconds respectively, even though it is smaller in size (bytes) than Simple Jobs Board. However, it is worth noting that Sportspress has a higher number of PHP files, and this significantly affects the execution time, as the evaluation is done at file-level granularity. Surprisingly, in terms of the Time/LoC and Time/File metrics, smaller projects like Appointment Booking Calendar and Payment Form for PayPal Pro recorded higher values than larger projects like Sportspress and Simple Jobs Board. As for the popular projects, this variance reflects the different code complexity and style across projects. \begin{figure}[h] \scriptsize \centering \includegraphics[scale=0.55]{img/ProcTime_vs_LoC.pdf} \caption{Processing time over LoC.}\label{fig:pTime_vs_LoC} \end{figure} We expect LoC and processing time to exhibit an approximately linear dependence. To verify this, we plot processing time against LoC for all software projects in Figure~\ref{fig:pTime_vs_LoC}. The figure shows a consistent pattern between processing time and LoC across all the software projects: as the LoC increases, the processing time also increases. This relationship is further demonstrated by the regression lines added in the figure. We can see that Sportspress has a near-perfect linear trend throughout all its data points. However, several outliers can also be seen in the figure, especially one that belongs to Symphony, with 2326 LoC and 50.09 seconds of processing time. This data point is far from the projected trends of all the software projects. We inspected the file representing that data point, which is FrameworkExtension.php.
This file contains many nested if-else conditions in most functions, which explains the longer processing time: the creation of the CFG for nested if-else conditions takes longer than for simpler source code. \subsubsection{Discussion} Overall, this performance analysis shows that \DT\ is an efficient model which can scale without problems to larger code bases. In the worst-case scenario it takes less than half an hour to analyse a 27MB project. Considering that this kind of vulnerability detection is an offline task that is performed only periodically on a whole project, this cost is negligible. On the other hand, the average processing time per file, under one second, is also sufficiently small to make it possible to run the model on individual files at each commit as part of a continuous integration pipeline. \subsection{Classification Performance} We established that \DT\ is usable in terms of execution performance. We now consider its usability in terms of vulnerability detection. As discussed above, we make the assumption that the popular projects currently do not have any security vulnerabilities. Hence we regard any positive reported by \DT\ as a false positive. In Table~\ref{tab:execution_performance} we report the accuracy of File-A and File-G for all projects. As observed in Section~\ref{sec:classification}, File-G has fewer FP than File-A, which translates to a higher accuracy on a vulnerability-free dataset. Hence we recommend using File-G, especially if the code base is large and the priority is to reduce false positives. We note a drop in accuracy for both models compared to the results reported in Table~\ref{tab:exp_filelevel} for GIT. This is due to the models being run on a code base different from the one they were trained on, and therefore encountering novel, unfamiliar coding patterns.
Still, \DT\ File-G has an average accuracy of 71.37\%, which we consider an encouraging result in terms of generalisation to a new dataset, especially in comparison with the poor generalisation ability of the other tools considered in Section~\ref{sec:tools}. \subsection{Vulnerability Detection} Finally, we attempt to use \DT\ to discover new vulnerabilities. We assume that the smaller plugins we considered may indeed contain vulnerabilities, as discussed above. Our priority is to minimise the manual effort spent reviewing reported positives. Machine learning techniques make no promise of completeness, so it is preferable to miss some detections and instead focus the code-reviewing effort on code more likely to contain security flaws. We follow a layered approach: we first use File-G (our practical model with fewer false positives) to detect potentially vulnerable files in the smaller plugins projects. That yields 177 potentially vulnerable files across two vulnerability classes, SQLi and XSS. Then, to better localise vulnerabilities, we apply Func-A (our most precise function-level model) to the 439 functions extracted from these 177 files. That yields 60 potentially vulnerable functions, distributed across the SQLi, XSS and OSCI classes. Table~\ref{tab:novel_vulns} shows the results of the layered approach. In the absence of ground truth, we resorted to manual inspection to verify the results. Several functions appeared suspicious (for instance, concatenating a SQL string with a variable), but we did not have sufficient familiarity with the application to determine unequivocally whether they constituted a vulnerability. We were able to confirm 4 of these functions as actual security vulnerabilities, and we responsibly disclosed our findings to the respective developers. We publicly disclosed 2 of them after they were patched, and we describe them below.
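The layered triage just described amounts to a two-stage filter. In the following minimal Python sketch, `file_g_predict`, `extract_functions` and `func_a_predict` are hypothetical stand-ins for the File-G model, the function extractor and the Func-A model; they are not part of any released interface:

```python
# Hypothetical sketch of the layered triage: a file-level model first
# flags whole files, then a function-level model narrows the report
# down to individual functions, reducing the manual review surface.
def layered_triage(php_files, file_g_predict, extract_functions, func_a_predict):
    # Stage 1: file-level model with fewer false positives.
    flagged_files = [f for f in php_files if file_g_predict(f) != "safe"]
    # Stage 2: function-level model, applied only to functions
    # extracted from the flagged files.
    candidates = [fn for f in flagged_files for fn in extract_functions(f)]
    return [(fn, label) for fn in candidates
            if (label := func_a_predict(fn)) != "safe"]

# Toy usage with stub classifiers:
files = ["a.php", "b.php", "c.php"]
file_g = lambda f: "xss" if f != "c.php" else "safe"
extract = lambda f: [f + "::f1", f + "::f2"]
func_a = lambda fn: "sqli" if fn.endswith("f2") else "safe"
print(layered_triage(files, file_g, extract, func_a))
# → [('a.php::f2', 'sqli'), ('b.php::f2', 'sqli')]
```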
\begin{table}[h] \scriptsize \caption{Layered approach results.} \label{tab:novel_vulns} \centering \begin{tabular}{p{2.5cm} p{0.4cm} p{1cm} p{0.9cm} p{1cm}} \toprule \textbf{Software project}& \textbf{TN} & \textbf{FP-SQLi} & \textbf{FP-XSS} & \textbf{FP-OSCI}\\ \midrule \multicolumn{5}{c}{\textbf{A}: File-level granularity detection using File-G}\\ \midrule Booking calendar & 9 & 7 & 0 & 0\\ Payment form paypalpro & 7 & 1 & 5 & 0\\ Paypal for digital goods & 3 & 4 & 0 & 0\\ Sportspress & 123 & 31 & 102 & 0\\ Simple Jobs Board & 171 & 4 & 23 & 0\\ \midrule \multicolumn{5}{c}{\textbf{B}: Function-level granularity detection using Func-A}\\ \midrule Booking calendar & 10 & 0 & 1 & 0\\ Payment form paypalpro & 7 & 3 & 1 & 0\\ Paypal for digital goods & 10 & 1 & 4 & 0\\ Sportspress & 275 & 9 & 22 & 0\\ Simple Jobs Board & 77 & 6 & 12 & 1\\ \bottomrule \end{tabular} \end{table} \subsubsection{CVE-2020-14092} We found a SQL injection vulnerability in the plugin ``Payment Form for PayPal Pro''. It allowed any user to perform any SQL query they wanted, including retrieving user login information. This received a CVSS score of 9.8 (critical). Figure~\ref{fig:vul1} shows the vulnerable code snippet from the source code. \begin{figure*}[!t] \begin{center} \hrule \begin{lstlisting}[language=PHP,escapechar=@,basicstyle=\fontsize{8pt}{8pt}\selectfont\ttfamily] function cp_ppp_init_ds(){ @\hl{\$query\_result = cp\_ppp\_ds( \$\_REQUEST );}@ $err = mysqli_error( $cpcff_db_connect ); if ( !is_null( mysqli_connect_error() )) $err .= mysqli_connect_error(); if ( $_REQUEST['cffaction'] == test_db_query){ print_r( ( ( empty( $err ) ) ?
$query_result:$err)); } else { $result_obj = new stdClass; if( !empty( $err ) ){ $result_obj->error = $err; } else { $result_obj->data = $query_result } @\hl{print(json\_encode(\$result\_obj));}@ } } \end{lstlisting} \hrule \end{center} \caption{SQLi vulnerability CVE-2020-14092.}\label{fig:vul1} \end{figure*} \subsubsection{CVE-2020-13892} We found an XSS vulnerability in the ``SportsPress'' plugin, which allowed authenticated users to add malicious JavaScript to the WordPress installation. This received a CVSS score of 5.4 (medium). Figure~\ref{fig:vul2} shows the vulnerable code snippet from the source code. \begin{figure*}[!t] \begin{center} \hrule \begin{lstlisting}[language=PHP,escapechar=@,basicstyle=\fontsize{8pt}{8pt}\selectfont\ttfamily] public function save(){ parent::save(); if ( isset( $_POST[ 'sportpress_events_teams_delimiter' ])) @\hl{update\_option( 'sportpress\_event\_teams\_delimiter', \$\_POST['sportpress\_event\_teams\_delimiter']);}@ } \end{lstlisting} \hrule \end{center} \caption{XSS vulnerability CVE-2020-13892.}\label{fig:vul2} \end{figure*} We tested the tools from Section~\ref{sec:tools} on these projects to see if they could detect either of the above vulnerabilities, but none succeeded. \section{Conclusions} We have presented \DT, a novel vulnerability detection approach which aims to capture contextual information from real-world vulnerabilities in order to reduce false positives and false negatives. Our approach combines a Gated Recurrent Unit, to learn long-term sequential dependencies of source code tokens, with a Graph Convolutional Network, to incorporate contextual information from the control flow graph. \DT\ exhibits scalable execution performance on large source code bases, and achieves a better classification performance than the state-of-the-art on both synthetic and realistic datasets.
Using \DT\ we were able to detect, with limited manual effort, 4 novel security vulnerabilities in WordPress plugins, which the other detection tools we tested failed to find.
\section{#1}} \renewcommand{\thesection.\arabic{equation}}}{\thesection.\arabic{equation}} \def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn \else \newpage \fi \thispagestyle{empty}\cdot@page\z@ \def\arabic{footnote}{\fnsymbol{footnote}} } \def\endtitlepage{\if@restonecol\twocolumn \else \fi \def\arabic{footnote}{\arabic{footnote}} \setcounter{footnote}{0}} \relax \hybrid \parskip=0.4em \makeatletter \newdimen\normalarrayskip \newdimen\minarrayskip \normalarrayskip\baselineskip \minarrayskip\jot \newif\ifold \oldtrue \def\oldfalse{\oldfalse} \def\arraymode{\ifold\relax\else\displaystyle\fi \def\eqnumphantom{\phantom{(\thesection.\arabic{equation}})}} \def\@arrayskip{\ifold\baselineskip\z@\lineskip\z@ \else \baselineskip\minarrayskip\lineskip1\baselineskip\fi} \def\@arrayclassz{\ifcase \@lastchclass \@acolampacol \or \@ampacol \or \or \or \@addamp \or \@acolampacol \or \@firstampfalse \@acol \fi \edef\@preamble{\@preamble \ifcase \@chnum \hfil$\relax\arraymode\@sharp$\hfil \or $\relax\arraymode\@sharp$\hfil \or \hfil$\relax\arraymode\@sharp$\fi}} \def\@array[#1]#2{\setbox\@arstrutbox=\hbox{\vrule height\arraystretch \ht\strutbox depth\arraystretch \dp\strutbox width\z@}\@mkpream{#2}\edef\@preamble{\halign \noexpand\@halignto \bgroup \tabskip\z@ \@arstrut \@preamble \tabskip\z@ \cr}% \let\@startpbox\@@startpbox \let\@endpbox\@@endpbox \if #1t\vtop \else \if#1b\vbox \else \vcenter \fi\fi \bgroup \let\par\relax \let\@sharp##\let\protect\relax \@arrayskip\@preamble} \def\eqnarray{\stepcounter{equation}% \let\@currentlabel=\thesection.\arabic{equation}} \global\@eqnswtrue \global\@eqcnt\z@ \tabskip\@centering \let\\=\@eqncr $$% \halign to \displaywidth \bgroup \eqnumphantom \@eqnsel \hskip\@centering $\displaystyle \tabskip\z@ {##}$% &\global\@eqcnt\@ne \hskip 2\arraycolsep $ \displaystyle \arraymode{##}$\hfil &\global\@eqcnt\tw@ \hskip 2\arraycolsep $\displaystyle\tabskip\z@{##}$\hfil \tabskip\@centering &{##}\tabskip\z@\cr} \makeatother 
\def\mathbb{A}{\mathbb{A}} \def\mathbb{B}{\mathbb{B}} \def\mathbb{C}{\mathbb{C}} \def\mathbb{D}{\mathbb{D}} \def\mathbb{E}{\mathbb{E}} \def\mathbb{F}{\mathbb{F}} \def\mathbb{G}{\mathbb{G}} \def\mathbb{H}{\mathbb{H}} \def\mathbb{K}{\mathbb{K}} \def\mathbb{L}{\mathbb{L}} \def\mathbb{P}{\mathbb{P}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Z}{\mathbb{Z}} \def\mathcal{A} {\mathcal{A}} \def\mathcal{B} {\mathcal{B}} \def\mathcal{C} {\mathcal{C}} \def\mathcal{D} {\mathcal{D}} \def\mathcal{E} {\mathcal{E}} \def\mathcal{F} {\mathcal{F}} \def\mathcal{G} {\mathcal{G}} \def\mathcal{H} {\mathcal{H}} \def\mathcal{I} {\mathcal{I}} \def\mathcal{J} {\mathcal{J}} \def\mathcal{K} {\mathcal{K}} \def\mathcal{L} {\mathcal{L}} \def\mathcal{M} {\mathcal{M}} \def\mathcal{N} {\mathcal{N}} \def\mathcal{O} {\mathcal{O}} \def\mathcal{P} {\mathcal{P}} \def\mathcal{Q} {\mathcal{Q}} \def\mathcal{R} {\mathcal{R}} \def\mathcal{S} {\mathcal{S}} \def\mathcal{T} {\mathcal{T}} \def\mathcal{U} {\mathcal{U}} \def\mathcal{V} {\mathcal{V}} \def\mathcal{W} {\mathcal{W}} \def\mathcal{X} {\mathcal{X}} \def\mathcal{Y} {\mathcal{Y}} \def\mathcal{Z} {\mathcal{Z}} \def{\mathfrak g}{{\mathfrak g}} \def{\mathfrak b}{{\mathfrak b}} \def{\mathfrak h}{{\mathfrak h}} \def\mathfrak{a}{\mathfrak{a}} \def\mathfrak{b}{\mathfrak{b}} \def\mathfrak{g}{\mathfrak{g}} \def\mathfrak{h}{\mathfrak{h}} \def\mathfrak{n}{\mathfrak{n}} \def\mathfrak{t}{\mathfrak{t}} \def\mathfrak{S}{\mathfrak{S}} \def\mathfrak{M}{\mathfrak{M}} \def\mathfrak{V}{\mathfrak{V}} \def\mathfrak{X}{\mathfrak{X}} \def{\theta} {{\theta}} \def{\Theta} {{\Theta}} \def{\omega} {{\omega}} \def{\alpha} {{\alpha}} \def{\beta} {{\beta}} \def{\gamma} {{\gamma}} \def\sigma {{\sigma}} \def{\Sigma}{{\Sigma}} \def\lambda{\lambda} \def\Lambda{\Lambda} \def\varepsilon{\varepsilon} \def\varsigma{\varsigma} \def\varpi{\varpi} \def\epsilon{\epsilon} \def\bf{E}{\bf{E}} \def\mathfrak{h}{\mathfrak{h}} \def\mathfrak{so}{\mathfrak{so}} 
\def\mathfrak{sp}{\mathfrak{sp}} \def\mathfrak{gl}{\mathfrak{gl}} \def{\overline}{\overline} \def{\rm ev}{{\rm ev}} \def\bar{0}{\bar{0}} \def\bar{1}{\bar{1}} \def{\rm str}\,{{\rm str}\,} \def{\rm odd}{{\rm odd}} \def{\bfit\alpha}{{\bfit\alpha}} \def{\bfit\beta}{{\bfit\beta}} \def{\bfit\gamma}{{\bfit\gamma}} \def\bnu{{\bfit\nu}} \def{\bfit\mu}{{\bfit\mu}} \def{\bfit\omega}{{\bfit\omega}} \def{\bfit\phi}{{\bfit\phi}} \def{\bfit\lambda}{{\bfit\lambda}} \def{\bfit\rho}{{\bfit\rho}} \def\partial {\partial} \def\overline {\partial } {\overline {\partial }} \def\bar{i}{\bar{i}} \def\bar{j}{\bar{j}} \def{\bar{u}}{{\bar{u}}} \def\bar{w} {\bar{w}} \def\bar{z} {\bar{z}} \def\bar{k} {\bar{k}} \def\overline{A} {\overline{A}} \def\widetilde {{\widetilde{\omega}}} \def{\widetilde{\rho}} {{\widetilde{\rho}}} \def{\rm Br}{{\rm Br}} \def{\mathop{\rm codim}}{{\mathop{\rm codim}}} \def{\rm cok}{{\rm cok}} \def{\mathop {\rm coker}}{{\mathop {\rm coker}}} \def{\rm Ch}{{\rm Ch}} \def{\rm ch}{{\rm ch}} \def{\rm Det}{{\rm Det}} \def{\rm DET}{{\rm DET}} \def{\rm diff}{{\rm diff}} \def{\rm Diff}{{\rm Diff}} \def{\rm Id}{{\rm Id}} \def\cdot{\cdot} \def{\mathop{\rm GSp}}{{\mathop{\rm GSp}}} \def{\mathop{\rm GO}}{{\mathop{\rm GO}}} \def{\rm Ker}{{\rm Ker}} \def{\rm Mat}{{\rm Mat}} \def{\rm End}{{\rm End}} \def\nabla_{\partial}{\nabla_{\partial}} \def\nabla_{\bar {\partial}}{\nabla_{\bar {\partial}}} \def{\rm Lie}{{\rm Lie}} \def{\rm Nm}{{\rm Nm}} \def{\rm Gal}{{\rm Gal}} \def\noindent{\noindent} \def\nonumber{\nonumber} \def{\mathop{\rm Pin}}{{\mathop{\rm Pin}}} \def{\rm pt}{{\rm pt}} \def{\rm rank}{{\rm rank}} \def{\mathop{\rm Res}}{{\mathop{\rm Res}}} \def{\mathop{\rm Sym}}{{\mathop{\rm Sym}}} \def{\mathop{\rm Spin}}{{\mathop{\rm Spin}}} \def{\mathop{\rm Sp}}{{\mathop{\rm Sp}}} \def{\mathop{\rm SO}}{{\mathop{\rm SO}}} \def{\mathop{\rm SL}}{{\mathop{\rm SL}}} \def{\rm Td}{{\rm Td}} \def{\rm vol}{{\rm vol}} \def{\rm Vol}{{\rm Vol}} \def\mathfrak{\mathfrak} \def{\overline} {{\overline}} \def{\rm 
tr}\,{{\rm tr}\,} \def{\rm Tr}\,{{\rm Tr}\,} \def\<{\langle} \def\>{\rangle} \def\sigma{\sigma} \def\bar{\sigma}{\bar{\sigma}} \def{\rm ad}{{\rm ad}} \def\widetilde{\widetilde} \def\mathfrak{osp}{\mathfrak{osp}} \newtheorem{te}{Theorem}[section \newtheorem{de}{Definition}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{ex}{Example}[section] \newtheorem{rem}{Remark}[section] \newtheorem{conj}{Conjecture}[section] \newtheorem{prob}{Problem}[section] \newtheorem{quest}{Question}[section] \newcommand\bqa{\begin{eqnarray}} \newcommand\eqa{\end{eqnarray}} \def\begin{eqnarray}\new\begin{array}{cc}{\begin{eqnarray}\oldfalse\begin{array}{cc}} \def\end{array}\end{eqnarray}{\end{array}\end{eqnarray}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\bse{\begin{subequations}} \def\end{subequations}{\end{subequations}} \def\begin{pmatrix}{\begin{pmatrix}} \def\end{pmatrix}{\end{pmatrix}} \def\be\label{\begin{eqnarray}\new\begin{array}{cc}\label} \def\hbar{\hbar} \def\imath{\imath} \def\square{\hfill{\vrule height6pt width6pt depth1pt} \break \vspace{.01cm}} \newcommand\cod{\operatorname{codim}} \newcommand\im{\operatorname{im}} \newcommand{\rm Inv}{\operatorname{Inv}} \newcommand\id{\operatorname{id}} \newcommand\coim{\operatorname{coim}} \newcommand\rk{\operatorname{rank}} \newcommand\ann{\operatorname{ann}} \newcommand{\noindent {\it Proof} }{\noindent {\it Proof} } \newcommand{{\mathfrak{sl}}}{{\mathfrak{sl}}} \newcommand{{\mathrm{ Ad}}}{{\mathrm{ Ad}}} \newcommand{{\mathrm{ Aut}}}{{\mathrm{ Aut}}} \newcommand{{\mathrm{ Int}}}{{\mathrm{ Int}}} \newcommand{{\mathrm{ Hom}}}{{\mathrm{ Hom}}} \newcommand{{\mathrm{ Out}}}{{\mathrm{ Out}}} \newcommand{{\mathrm{ diag}}}{{\mathrm{ diag}}} \newcounter{pac}[section] \newcommand{\npa}{\addtocounter{pac}{1} \noindent {\bf \arabic{section}.\arabic{pac}}\,\,\,} \newcounter{pacc}[subsection] 
\newcommand{\npaa}{\addtocounter{pacc}{1} \noindent {\bf \arabic{section}.\arabic{subsection}.\arabic{pacc}}\,\,\,} \setcounter{pac}{0} \setcounter{footnote}0 \begin{document} \title{\bf On quantum $\mathfrak{osp}(1|2\ell)$-Toda chain \footnote{Talk given by the second author at the ``Polivanov-90'' conference, 16--17 December 2020, Steklov Mathematical Institute of the Russian Academy of Sciences.}} \author{A.A. Gerasimov, D.R. Lebedev and S.V. Oblezin} \date{} \maketitle \renewcommand{\abstractname}{} \begin{abstract} \noindent {\bf Abstract}. The orthosymplectic superalgebra $\mathfrak{osp}(1|\,2\ell)$ is the closest analog of standard Lie algebras in the world of super Lie algebras. We demonstrate that the corresponding $\mathfrak{osp}(1|\,2\ell)$-Toda chain turns out to be an instance of a $BC_\ell$-Toda chain. The underlying reason for this relation is discussed. \end{abstract} \vspace{5 mm} \section{Introduction} Representation theory is an essential tool in finding explicit solutions of known quantum integrable systems as well as in constructing new ones. An important class of finite-dimensional quantum integrable systems allowing a representation-theoretic interpretation is provided by Toda chains. It is known that integrable Toda chains are classified by a class of root systems that includes the root systems of finite-dimensional Lie algebras as well as their affine counterparts. For the Toda chains associated with the root systems of finite-dimensional Lie algebras, the corresponding integrable systems can be solved explicitly by representation theory methods \cite{K1}, \cite{GW} (see \cite{STS} for a review). Precisely, the eigenfunctions of the quantum Hamiltonians are given by special matrix elements of principal series representations of the totally split real form of the corresponding Lie group. The resulting functions should be considered as generalized Whittaker functions associated with the corresponding finite-dimensional Lie algebras \cite{K1}, \cite{Ha}.
These functions admit quite explicit integral representations (see e.g. \cite{GLO1}). The class of integrable Toda chains is, however, a bit larger than the class of finite/affine Lie algebras and includes in particular the non-reduced root system $BC_\ell$, which combines the $B_\ell$ and $C_\ell$ root systems. The corresponding $BC_\ell$-Toda system is an important element of the web of Toda-type theories connected by various intertwining relations \cite{GLO2}. Although the $BC_\ell$ root system fits naturally into the classification of root systems of finite-dimensional Lie algebras, the problem of constructing a Lie-algebra-type object corresponding to the non-reduced $BC_\ell$ root system does not seem to have obtained a satisfactory resolution yet. However, one should recall that $BC_\ell$ root systems appear in the Cartan classification of symmetric spaces \cite{H}, \cite{L}. Still, the $BC_\ell$-Toda chain can be solved via representation theory methods using a generalization of $C_\ell$-Whittaker functions (see e.g. \cite{J} for $\ell=1$ and a remark in \cite{RS} relevant to the classical $BC_\ell$-Toda system). This unfortunately does not elucidate the question of the interpretation of $BC_\ell$-Toda eigenfunctions as standard Whittaker functions for some group-like object. One should add that the integrability of the quantum $BC_\ell$-Toda chain for generic coupling constants was proven independently in \cite{S} using Yangian representation theory (aka the quantum inverse scattering method). This, however, also does not clarify the question of the existence of a group-like structure behind the $BC_\ell$ root system. In this note we consider quantum Toda chains associated with the super Lie algebras $\mathfrak{osp}(1|2\ell)$. This series of super Lie algebras occupies a special place in the world of super Lie algebras.
In particular, it is the only instance of a simple super Lie algebra for which the corresponding category of finite-dimensional representations is semi-simple, and thus it allows direct analogs of the standard constructions of representation theory of semisimple Lie algebras \cite{Kac1}. In connection with this fact one should mention that $\mathfrak{osp}(1|2\ell)$ is the unique super Lie algebra with finitely-generated center of its universal enveloping algebra. The special properties of $\mathfrak{osp}(1|2\ell)$ make it natural to consider the associated quantum integrable systems. In this note we demonstrate that the $\mathfrak{osp}(1|2\ell)$-Toda chain may also be considered as a Toda chain associated with the $BC_\ell$ root system. This allows us to solve the $BC_\ell$-Toda chain by standard representation theory methods, i.e.\ by identifying the corresponding eigenfunctions with $\mathfrak{osp}(1|2\ell)$-Whittaker functions. The underlying reason for the appearance of the $BC_\ell$ root structure in the $\mathfrak{osp}(1|2\ell)$-Toda chain becomes clear by comparing the $BC_\ell$ root data with that of the super Lie algebra $\mathfrak{osp}(1|2\ell)$. Actually, the only difference is the opposite parity of the maximal commutative subalgebra eigenspaces in the Cartan decomposition corresponding to the short roots of the non-reduced $BC_\ell$ root system. This difference, however, does not affect the expressions for the quantum Hamiltonians of the corresponding Toda chain. The exposition of the paper goes as follows. In Section 2 we provide basic facts on the structure of the orthosymplectic super Lie algebra $\mathfrak{osp}(1|2\ell)$. In Section 3 we construct $\mathfrak{osp}(1|2\ell)$-Whittaker functions associated with representations of the super Lie algebra $\mathfrak{osp}(1|2\ell)$ and demonstrate that these functions are eigenfunctions of the quadratic quantum Hamiltonian of the $BC_\ell$-Toda chain for special values of the coupling constants.
Finally, in Section 4 we discuss the structure of the root system of $\mathfrak{osp}(1|2\ell)$ versus the $BC_\ell$ root system and provide an explanation of the apparent identification of the quadratic Hamiltonian of the $\mathfrak{osp}(1|2\ell)$-Toda chain with that of the $BC_\ell$-Toda chain. {\it Acknowledgments:} The research of the second (D.R.L.) and third (S.V.O.) authors was supported by RSF grant 16-11-10075. \section{Basic facts on the super Lie algebra $\mathfrak{osp}(1|2\ell)$} We start with the basic definition of a Lie superalgebra structure, and then we describe the algebra $\mathfrak{osp}(1|2\ell)$ explicitly. This is standard material that can be found in standard sources on superalgebras, e.g.\ \cite{Kac1}, \cite{Kac2}. The notion of a Lie superalgebra is a direct generalization of the notion of a Lie algebra to the category of vector superspaces. A vector superspace $V=V_{\bar{0}}\oplus V_{\bar{1}}$ is a $\mathbb{Z}_2$-graded vector space with the parity $p$ taking values $0$ and $1$ on $V_{\bar{0}}$ and $V_{\bar{1}}$ respectively. The tensor product structure is given by twisting the standard tensor product structure in the category of vector spaces, \begin{eqnarray}\new\begin{array}{cc} v\otimes w=(-1)^{p(v)\cdot p(w)}\,w\otimes v,\qquad v\in V, \quad w\in W, \end{array}\end{eqnarray} for homogeneous elements $v$ and $w$ with respect to the $\mathbb{Z}_2$-grading.
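As a simple added illustration of the sign rule (not in the original text): if $v$ and $w$ are both odd, then $p(v)\cdot p(w)=1$ and the twist produces a genuine sign, while any pair involving an even element commutes:

```latex
% Added illustration: the two cases of the sign rule for homogeneous
% elements of a vector superspace.
v\otimes w \,=\, -\,w\otimes v\,,\qquad v\in V_{\bar{1}},\ w\in W_{\bar{1}}\,,
\qquad\text{while}\qquad
v\otimes w \,=\, w\otimes v \quad\text{if}\quad p(v)\cdot p(w)=0\,.
```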
\begin{de} The structure of a super Lie algebra on a super vector space $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ is given by a bilinear operation $[\cdot,\,\cdot]$, called the bracket, such that for any homogeneous elements $X,\,Y,\,Z\in \mathfrak{g}$ the following hold: \begin{eqnarray}\new\begin{array}{cc} p\bigl([X,\,Y]\bigr)\,=\,p(X)\,+\,p(Y)\,, \end{array}\end{eqnarray} \begin{eqnarray}\new\begin{array}{cc} \,[X,\,Y]\,=\,-(-1)^{p(X)\cdot p(Y)}[Y,\,X]\,, \end{array}\end{eqnarray} \begin{eqnarray}\new\begin{array}{cc} \,[X,\,[Y,\,Z]](-1)^{p(X)\cdot p(Z)}\, +\,[Z,\,[X,\,Y]](-1)^{p(Y)\cdot p(Z)}\\ +\,[Y,\,[Z,\,X]](-1)^{p(X)\cdot p(Y)}\,=\,0\,. \end{array}\end{eqnarray} \end{de} We will be interested in a special instance of super Lie algebras, the ortho-symplectic super Lie algebra $\mathfrak{osp}(1|2\ell)$. To define this algebra let us first introduce the super Lie algebra $\mathfrak{gl}(1|2\ell)$. \begin{de}\label{GL1} The super Lie algebra $\mathfrak{gl}(1|\,2\ell)$ is generated by \begin{eqnarray}\new\begin{array}{cc} E_{0,\,i}\,,\quad E_{i,\,0}\,,\quad p(E_{0,\,i})\,=\,p(E_{i,\,0})\,=\,1\,,\qquad1\leq i\leq2\ell\,,\\ \text{and}\qquad E_{kl}\,,\qquad p(E_{kl})\,=\,0\,,\qquad1\leq k,\,l\leq2\ell\,, \end{array}\end{eqnarray} subject to the following relations: \begin{eqnarray}\new\begin{array}{cc}\label{glbracket} \bigl[E_{ij},\,E_{kl}\bigr]\, =\,\delta_{jk}E_{il}\,-(-1)^{p(i)p(l)}\,\delta_{il}E_{kj}\,,\qquad 0\leq i,\,j\leq2\ell\,,\quad 0\leq k,\,l\leq2\ell\,. \end{array}\end{eqnarray} \end{de} The super Lie algebra $\mathfrak{gl}(1|2\ell)$ may be identified with the super Lie algebra structure $\bigl({\rm End}(V),\,[\cdot,\,\cdot]\bigr)$ on the space ${\rm End}(V)$ of endomorphisms of the superspace \begin{eqnarray}\new\begin{array}{cc} V\,=\,\mathbb{R}^{1|2\ell}\,=\,V_{\bar{0}}\,\oplus\,V_{\bar{1}}\,,\qquad V_{\bar{0}}\,=\,\mathbb{R}^{0|2\ell}\,,\qquad V_{\bar{1}}\,=\,\mathbb{R}^{1|0}\,, \end{array}\end{eqnarray} in the following way.
Any zero-parity linear endomorphism $A\in{\rm End}(V)$ is given by a matrix of the following shape: \begin{eqnarray}\new\begin{array}{cc}\label{shape} A\, =\,\left(\begin{array}{cc} A_{11} & A_{12}\\A_{21} & A_{22} \end{array}\right)\,,\quad \begin{array}{cc} A_{11}:\,V_{\bar{1}}\,\longrightarrow\,V_{\bar{1}}\,, & A_{12}:\,V_{\bar{0}}\,\longrightarrow\,V_{\bar{1}}\,,\\ A_{21}:\,V_{\bar{1}}\,\longrightarrow\,V_{\bar{0}}\,, & A_{22}:\,V_{\bar{0}}\,\longrightarrow\,V_{\bar{0}}\,, \end{array} \end{array}\end{eqnarray} where the entries of the blocks $A_{11},\,A_{22}$ are even, while the entries of $A_{12}\,,A_{21}$ are odd, so that \begin{eqnarray}\new\begin{array}{cc} {\rm End}(V)_{\bar{0}}\,=\,\Big\{\Big(\begin{smallmatrix} A_{11}&&0\\&&\\0&&A_{22}\end{smallmatrix}\Big)\Big\}\,,\qquad {\rm End}(V)_{\bar{1}}\,=\,\Big\{\Big(\begin{smallmatrix} 0&&A_{12}\\&&\\A_{21}&&0\end{smallmatrix}\Big)\Big\}\,. \end{array}\end{eqnarray} The super bracket on ${\rm End}(V)$ is defined on homogeneous elements $X,\,Y\in{\rm End}(V)$ as follows: \begin{eqnarray}\new\begin{array}{cc}\label{bracket} [X,\,Y]\,=\,X\circ Y\,-\,(-1)^{p(X)\cdot p(Y)}Y\circ X\,. \end{array}\end{eqnarray} The description of $\mathfrak{gl}(1|2\ell)$ given in Definition \ref{GL1} is then obtained by fixing a basis of $V$, \begin{eqnarray}\new\begin{array}{cc}\label{basis} \{\varepsilon_0,\,\varepsilon_1,\,\ldots,\,\varepsilon_{2\ell}\}\subset\mathbb{R}^{1|2\ell}\,,\qquad p(\varepsilon_0)=1\,,\quad p(\varepsilon_k)=0,\quad1\leq k\leq2\ell\,. \end{array}\end{eqnarray} The generators $E_{ij}$ are identified with the elementary matrices in ${\rm End}(V)$ whose only non-zero entry, equal to $1$, is in the $i$-th row and the $j$-th column.
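As an illustration of \eqref{glbracket}, note that on two odd generators the bracket \eqref{bracket} reduces to an anticommutator of elementary matrices; for instance, for $1\leq i,\,j\leq2\ell$,
\begin{eqnarray}\new\begin{array}{cc}
\bigl[E_{0,\,i},\,E_{j,\,0}\bigr]\,=\,E_{0,\,i}E_{j,\,0}\,+\,E_{j,\,0}E_{0,\,i}\,
=\,\delta_{ij}E_{00}\,+\,E_{ji}\,,
\end{array}\end{eqnarray}
in agreement with \eqref{glbracket}, since there $p(0)=1$ and $\delta_{00}=1$.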
\begin{de} The super transposition of a matrix $A\in{\rm End}(V)$ is defined by \begin{eqnarray}\new\begin{array}{cc}\label{sTransp} A^{\top}\, =\,\left(\begin{array}{cc} A_{11} & A_{12}\\A_{21} & A_{22} \end{array}\right)^{\top}\, =\,\left(\begin{array}{cc} A_{11}^t & -A_{21}^t\\A_{12}^t & A_{22}^t \end{array}\right)\,, \end{array}\end{eqnarray} where $X^t$ is the standard transposition of a matrix $X$. \end{de} \begin{lem} Super transposition \eqref{sTransp} possesses the following properties: \begin{eqnarray}\new\begin{array}{cc} (A\,v)^t\,=\,v^tA^{\top}\,,\qquad A\in{\rm End}(V)\,,\quad v\in V\,, \end{array}\end{eqnarray} \begin{eqnarray}\new\begin{array}{cc} (A\cdot B)^{\top}\,=\,B^{\top}\cdot A^{\top}\,, \end{array}\end{eqnarray} \begin{eqnarray}\new\begin{array}{cc} (A^{\top})^{\top}\,=\Pi\,A\,\Pi^{-1}\,, \end{array}\end{eqnarray} where $\Pi$ is the parity operator with the matrix $\Big(\begin{smallmatrix}-1&&0\\&&\\0&&{\rm Id}_{2\ell}\end{smallmatrix}\Big)$. \end{lem} \noindent {\it Proof} : Given $v\in V$ let us write it in the basis $\{\varepsilon_0,\,\varepsilon_1,\ldots,\varepsilon_{2\ell}\}$: \begin{eqnarray}\new\begin{array}{cc} v\,=\,\xi\varepsilon_0\,+\,\sum_{i=1}^{2\ell}v_i\varepsilon_i\,, \end{array}\end{eqnarray} with the odd Grassmann coordinate $\xi$ and even coordinates $v_i$. Then we have \begin{eqnarray}\new\begin{array}{cc} (A\,v)_i\,=\,a_{i0}\xi\,+\,a_{i1}v_1\,+\ldots+\,a_{i,\,2\ell}v_{2\ell}\,, \end{array}\end{eqnarray} and on the other hand, \begin{eqnarray}\new\begin{array}{cc} (v^tA^{\top})_0\, =\,\xi a_{00}\,+\,v_1a_{0,1}\,+\ldots+\,v_{2\ell}a_{0,2\ell}\,,\\ (v^tA^{\top})_k\,=\,-\xi a_{k0}\,+\,v_1a_{k,1}\,+\ldots+\,v_{2\ell}a_{k,2\ell}\,,\quad1\leq k\leq2\ell\,. \end{array}\end{eqnarray} Taking into account that \begin{eqnarray}\new\begin{array}{cc} \xi a_{00}\,=\,a_{00}\xi\,,\qquad-\xi a_{k,0}\,=\,a_{k,0}\xi\,,\quad1\leq k\leq2\ell\,, \end{array}\end{eqnarray} we deduce the first assertion.
The second assertion can be verified by a straightforward computation. The third assertion follows from the definition: on the one hand, we have \begin{eqnarray}\new\begin{array}{cc} (A^{\top})^{\top}\, =\,\left(\begin{array}{cc} A_{11}^t & -A_{21}^t\\A_{12}^t & A_{22}^t \end{array}\right)^{\top}\, =\,\left(\begin{array}{cc} A_{11} & -A_{12}\\-A_{21} & A_{22} \end{array}\right)\,; \end{array}\end{eqnarray} on the other hand, in the standard basis \eqref{basis} the matrix of the parity operator reads \begin{eqnarray}\new\begin{array}{cc} \Pi\,=\,\Big(\begin{smallmatrix} -1 && 0\\&&\\0&&{\rm Id}_{2\ell} \end{smallmatrix}\Big)\,, \end{array}\end{eqnarray} so the assertion easily follows. $\Box$ Now $\mathfrak{osp}(1|2\ell)$ may be defined as a subalgebra of the general linear superalgebra $\mathfrak{gl}(1|2\ell)$. Introduce the following involution: \begin{eqnarray}\new\begin{array}{cc} \theta\,:\quad\mathfrak{gl}(1|2\ell)\,\longrightarrow\,\mathfrak{gl}(1|2\ell)\,,\qquad X\,\longmapsto\,X^{\theta}\,:=\,-JX^{\top}J^{-1}\,, \end{array}\end{eqnarray} where \begin{eqnarray}\new\begin{array}{cc} J\, =\,\Big(\begin{smallmatrix} 1 & 0 & 0\\ 0 & 0 & -{\rm Id}_{\ell}\\ 0 & {\rm Id}_{\ell} & 0 \end{smallmatrix}\Big)\,\in\,{\rm End}(V)_{\bar{0}}\,. \end{array}\end{eqnarray} \begin{de} The orthosymplectic super Lie algebra $\mathfrak{osp}(1|2\ell)$ is defined as the $\theta$-invariant subalgebra of $\mathfrak{gl}(1|2\ell)$: \begin{eqnarray}\new\begin{array}{cc}\label{ospNrep} \mathfrak{osp}(1|\,2\ell)\, =\Big\{X\,\in\,\mathfrak{gl}(1|2\ell)\,:\quad X^{\theta}\,=\,X\Big\}\\ =\,\Big\{X\, =\,\Big(\begin{smallmatrix} 0 & x & y\\ y^t & A & B\\ -x^t & C & -A^t \end{smallmatrix}\Big)\,:\quad B^t\,=\,B\,,\quad C^t\,=\,C\Big\}\,\subset\,\mathfrak{gl}(1|\,2\ell)\,. \end{array}\end{eqnarray} \end{de} According to the classification of simple super Lie algebras \cite{Kac1} one associates the root system $B_{0,\ell}$ to the super Lie algebra $\mathfrak{osp}(1|2\ell)$.
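In the simplest case $\ell=1$ the description \eqref{ospNrep} becomes completely explicit:
\begin{eqnarray}\new\begin{array}{cc}
\mathfrak{osp}(1|2)\,=\,\Big\{\Big(\begin{smallmatrix} 0 & x & y\\ y & a & b\\ -x & c & -a \end{smallmatrix}\Big)\Big\}\,,
\end{array}\end{eqnarray}
so that the even part is the three-dimensional symplectic algebra $\mathfrak{sp}(2)$ parametrized by $a,\,b,\,c$, while the odd part is two-dimensional, spanned by $x,\,y$. In general the even part of $\mathfrak{osp}(1|2\ell)$ is $\mathfrak{sp}(2\ell)$ of dimension $\ell(2\ell+1)$, and the odd part has dimension $2\ell$.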
Let $\{\epsilon_1,\ldots,\,\epsilon_{\ell}\}\subset\mathbb{R}^{\ell}$ be an orthonormal basis in $\mathbb{R}^{\ell}$ with respect to the scalar product $(\,,\,)$. Then the simple root system ${}^s\Delta^+(B_{0,\ell})$ of type $B_{0,\,\ell}$ consists of the even simple positive roots ${}^s\Delta^+_{\bar{0}}$ and the odd simple positive roots ${}^s\Delta^+_{\bar{1}}$: \begin{eqnarray}\new\begin{array}{cc}\label{OPSroot} {}^s\Delta^+_{\bar{0}}(B_{0,\,\ell})\,=\, \bigl\{{\alpha}_k\,=\,\epsilon_{\ell+1-k}-\epsilon_{\ell+2-k}\,,\quad1<k\leq\ell\bigr\}\,,\\ {}^s\Delta^+_{\bar{1}}(B_{0,\,\ell})\,=\,\bigl\{{\alpha}_1=\epsilon_{\ell}\bigr\}\,, \end{array}\end{eqnarray} indexed by $I=\{1,\,\ldots,\,\ell\}$. The simple co-roots ${\alpha}_i^{\vee},\,i\in I$ are defined in the standard way: $$ {\alpha}_i^{\vee}\,:=\,\frac{2{\alpha}_i}{({\alpha}_i,{\alpha}_i)}\,,\quad i\in I\,. $$ Note that the set $\Delta^+(B_{0,\ell})$ of positive roots contains the subsystem of even positive roots of the $C_\ell$ root system with the corresponding set of simple roots: \begin{eqnarray}\new\begin{array}{cc} {}^s\Delta^+(C_{\ell})\,=\,\bigl\{2{\alpha}_1=2\epsilon_{\ell}\,,\quad {\alpha}_k\,=\,\epsilon_{\ell+1-k}-\epsilon_{\ell+2-k}\,,\,\,1<k\leq\ell\bigr\}\, \subset\,\Delta^+(B_{0,\ell})\,. \end{array}\end{eqnarray} The Cartan matrix $A=\|A_{ij}\|$ associated with the simple root system \eqref{OPSroot} is defined by the standard formula \begin{eqnarray}\new\begin{array}{cc} A_{ij}\,=\,\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}\,,\qquad i,j\in I\,. \end{array}\end{eqnarray} Thus the Cartan matrix of ${}^s\Delta^+(B_{0,\ell})$ coincides with the standard $B_{\ell}$-type Cartan matrix \begin{eqnarray}\new\begin{array}{cc}\label{OSPcar} A\, =\,\left(\begin{array}{c|cccc} 2&-2&0& \ldots &0\\ \hline -1&2&-1&\ddots&\vdots\\ 0&\ddots&\ddots&\ddots&0\\ \vdots&\ddots&-1&2&-1\\ 0&\ldots&0&-1&2 \end{array}\right).
\end{array}\end{eqnarray} The Cartan decomposition for $\mathfrak{osp}(1|2\ell)$ reads \begin{eqnarray}\new\begin{array}{cc}\label{ospNCartan} \mathfrak{osp}(1|2\ell)(\mathbb{C})\,=\,\bigoplus_{i\in I}\mathbb{C} h_i\,\oplus\,\bigoplus_{{\alpha}\in\Delta^+_{\bar{0}}}\bigl(\mathbb{C} X_{{\alpha}}\,\oplus\,\mathbb{C} X_{-{\alpha}}\bigr)\,\oplus\,\bigoplus_{\beta\in\Delta^+_{\bar{1}}}\bigl(\mathbb{C} X_{\beta}\,\oplus\,\mathbb{C} X_{-\beta}\bigr)\,,\\ \Delta^+_{\bar{0}}\,=\,\bigl\{2\epsilon_i\,;\qquad\epsilon_i\pm\epsilon_j\,,\quad i<j\,,\quad i,\,j\in I\bigr\}\,,\qquad \Delta^+_{\bar{1}}\,=\,\bigl\{\epsilon_i\,,\quad i\in I\bigr\}\,. \end{array}\end{eqnarray} and the Cartan-Weyl relations are the following: \begin{eqnarray}\new\begin{array}{cc}\label{osprel} \bigl[X_{\epsilon_i},\,X_{\epsilon_j}\bigr]\,=\,(1+\delta_{ij})X_{\epsilon_i+\epsilon_j}\,,\qquad \bigl[X_{-\epsilon_i},\,X_{-\epsilon_j}\bigr]\,=\,-(1+\delta_{ij})X_{-\epsilon_i-\epsilon_j}\,,\\ \bigl[X_{\epsilon_i},\,X_{-\epsilon_i}\bigr]\,=\,a_{ii}\,,\qquad i\in I\,;\\ \bigl[X_{\epsilon_i-\epsilon_j},\,X_{\epsilon_j}\bigr]\,=\,X_{\epsilon_i}\,,\qquad \bigl[X_{\epsilon_i-\epsilon_j},\,X_{-\epsilon_i}\bigr]\,=\,-X_{-\epsilon_j}\,,\\ \bigl[X_{\epsilon_i},\,X_{-\epsilon_i-\epsilon_j}\bigr]\,=\,X_{-\epsilon_j}\,,\qquad \bigl[X_{-\epsilon_i},\,X_{\epsilon_i+\epsilon_j}\bigr]\,=\,X_{\epsilon_j}\,,\qquad i<j\,;\\ \bigl[X_{{\alpha}},\,X_{-{\alpha}}\bigr]\,=\,h_{{\alpha}^{\vee}}\, =\,\sum_{i\in I}\<{\alpha}^{\vee},\,\epsilon_i\>a_{ii}\,,\\ \bigl[h_{{\alpha}^{\vee}},\,X_{{\gamma}}\bigr]\,=\,{\alpha}^{\vee}({\gamma})X_{{\gamma}}\,,\qquad {\alpha},\,{\gamma}\in\Delta^+\,. 
\end{array}\end{eqnarray} The Serre relations on $X_{{\alpha}_i},\,{\alpha}_i\in{}^s\Delta^+(B_{0,\ell})$ have the following form: \begin{eqnarray}\new\begin{array}{cc}\label{Serre} {\rm ad}_{X_{{\alpha}_1}}^2(X_{{\alpha}_1})\,=\,0\,,\qquad {\rm ad}_{X_{-{\alpha}_1}}^2(X_{-{\alpha}_1})\,=\,0\,,\\ {\rm ad}_{X_{{\alpha}_i}}^{1-a_{ij}}(X_{{\alpha}_j})\,=\,0\,,\quad {\rm ad}_{X_{-{\alpha}_i}}^{1-a_{ij}}(X_{-{\alpha}_j})\,=\,0\,,\qquad i,j\in I\,,\quad i\neq j\,. \end{array}\end{eqnarray} The Cartan-Weyl generators $X_\alpha$ may be represented via the matrix embedding \eqref{ospNrep} of $\mathfrak{osp}(1|2\ell)$ as follows: \begin{eqnarray}\new\begin{array}{cc}\label{ospNgen1} X_{\epsilon_i}\,=\,E_{i,\,0}\,+\,E_{0,\,\ell+i}\,,\qquad X_{-\epsilon_i}\,=\,E_{0,\,i}\,-\,E_{\ell+i,\,0}\,; \end{array}\end{eqnarray} \begin{eqnarray}\new\begin{array}{cc}\label{ospNgen0} X_{\epsilon_i-\epsilon_j}\,=\,E_{ij}-E_{2\ell+1-i,\,2\ell+1-j}\,,\\ X_{-\epsilon_i+\epsilon_j}\,=\,E_{ji}-E_{2\ell+1-j,\,2\ell+1-i}\,,\\ X_{\epsilon_i+\epsilon_j}\,=\,E_{i,\,\ell+j}+E_{j,\,\ell+i}\,,\quad X_{-\epsilon_i-\epsilon_j}\,=\,E_{\ell+j,\,i}+E_{\ell+i,\,j}\,,\quad i<j\,,\\ X_{2\epsilon_i}\,=\,E_{i,\,\ell+i}\,,\qquad X_{-2\epsilon_i}\,=\,E_{\ell+i,\,i}\,,\qquad i\in I\,. \end{array}\end{eqnarray} The Cartan subalgebra $\mathfrak{h}\subset\mathfrak{osp}(1|\,2\ell)$ is spanned by \begin{eqnarray}\new\begin{array}{cc}\label{ospNgen2} h_i\,=\,E_{ii}\,-\,E_{\ell+i,\,\ell+i}\,,\qquad i\in I\,. \end{array}\end{eqnarray} For a class of super Lie algebras $\mathfrak{g}$ admitting a non-degenerate invariant pairing $(\,|\,)$ there is a canonical construction of the quadratic Casimir element $C_2\in\mathcal{Z}(\mathcal{U}(\mathfrak{g}))$ in the center of the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ (see e.g. \cite{Kac1}).
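As a consistency check of the matrix realization \eqref{ospNgen1}, take $\ell=1$, so that $X_{\epsilon_1}=E_{1,\,0}+E_{0,\,2}$; then the first Cartan-Weyl relation in \eqref{osprel} is verified directly:
\begin{eqnarray}\new\begin{array}{cc}
\bigl[X_{\epsilon_1},\,X_{\epsilon_1}\bigr]\,=\,2X_{\epsilon_1}^2\,
=\,2\bigl(E_{1,\,0}+E_{0,\,2}\bigr)^2\,=\,2E_{1,\,0}E_{0,\,2}\,=\,2E_{1,\,2}\,=\,2X_{2\epsilon_1}\,,
\end{array}\end{eqnarray}
since $E_{1,\,0}^2=E_{0,\,2}^2=E_{0,\,2}E_{1,\,0}=0$ and $X_{2\epsilon_1}=E_{1,\,\ell+1}=E_{1,\,2}$ by \eqref{ospNgen0}.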
Let us choose a pair $\{u_i,\,i\in I\}$, $\{u^i,\,i\in I\}$ of dual bases in the Cartan subalgebra $\mathfrak{h}\subset\mathfrak{g}$, and let $\{X_{{\alpha}},\,X^{{\alpha}},\,{\alpha}\in\Delta^+\}$ be the Cartan-Weyl generators normalized by $(X^{{\alpha}}|\,X_{{\alpha}})=1$. Then the quadratic Casimir element $C_2$ allows for the following presentation: \begin{eqnarray}\new\begin{array}{cc}\label{casimir} C_2\, =\,\sum_{i\in I}u^iu_i\, +\,\sum_{{\alpha}\in\Delta^+}\bigl((-1)^{p({\alpha})}X_{{\alpha}}X^{{\alpha}}\,+\,X^{{\alpha}}X_{{\alpha}}\bigr)\,. \end{array}\end{eqnarray} To define the quadratic Casimir element for $\mathfrak{osp}(1|2\ell)$ we shall first introduce a non-degenerate invariant pairing. The super Lie algebra $\mathfrak{osp}(1|2\ell)$ admits an invariant scalar product defined as follows: \begin{eqnarray}\new\begin{array}{cc}\label{ISP2} (X|Y)\,:=\,\frac{1}{2}{\rm str}\,\bigl(\rho_t(X)\circ \rho_t(Y)\bigr)\,,\qquad X,\,Y\in \mathfrak{osp}(1|2\ell)\,, \end{array}\end{eqnarray} where $\rho_t: \mathfrak{osp}(1|2\ell)\to {\rm End}(\mathbb{C}^{1|2\ell})$ is the tautological representation of $\mathfrak{osp}(1|2\ell)$ in $\mathbb{C}^{1|2\ell}$. The supertrace of $A\in{\rm End}(\mathbb{C}^{1|2\ell})$ of the shape \eqref{shape} is given by \begin{eqnarray}\new\begin{array}{cc} {\rm str}\,(A)\,=\,{\rm str}\,\left(\begin{array}{cc} A_{11} & A_{12}\\A_{21} & A_{22} \end{array}\right)\, =\,-{\rm tr}\,(A_{11})\,+\,{\rm tr}\,(A_{22})\,. \end{array}\end{eqnarray} The explicit form of the invariant scalar product \eqref{ISP2} may be directly derived using the matrix representation \eqref{ospNrep}.
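The invariance of the scalar product \eqref{ISP2} is a consequence of the supersymmetry of the supertrace: for homogeneous $X,\,Y\in{\rm End}(\mathbb{C}^{1|2\ell})$ one has
\begin{eqnarray}\new\begin{array}{cc}
{\rm str}\,(X\circ Y)\,=\,(-1)^{p(X)\cdot p(Y)}\,{\rm str}\,(Y\circ X)\,,
\qquad\mbox{so that}\qquad {\rm str}\,\bigl([X,\,Y]\bigr)\,=\,0\,,
\end{array}\end{eqnarray}
which entails the invariance property $([X,\,Y]|\,Z)\,=\,(X|\,[Y,\,Z])$.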
\begin{lem} The invariant scalar product \eqref{ISP2} on the super Lie algebra $\mathfrak{osp}(1|2\ell)$ is given as follows: \begin{eqnarray}\new\begin{array}{cc}\label{ISP} (h_i|h_i)=1\,,\quad i\in I\,;\\ (X_\alpha|X_{-\alpha})=\frac{2}{(\alpha,\alpha)}\,,\quad \alpha\in \Delta^+_{\bar{0}}\,; \qquad (X_\beta|X_{-\beta})=1\,, \quad \beta\in \Delta^+_{\bar{1}}\,, \end{array}\end{eqnarray} with the rest of the products being zero. \end{lem} \noindent {\it Proof} : The validity of \eqref{ISP} may be checked directly. For example, using \eqref{ospNgen2} we have \begin{eqnarray}\new\begin{array}{cc} (h_i|h_j)=\frac{1}{2}{\rm str}\,\,\Big(\rho_t(E_{ii}-E_{\ell+i,\ell+i}) \rho_t(E_{jj}-E_{\ell+j,\ell+j})\Big)\, =\,\delta_{ij}\,. \end{array}\end{eqnarray} Similarly, using \eqref{ospNgen1} we obtain \begin{eqnarray}\new\begin{array}{cc} (X_{\epsilon_i}|X_{-\epsilon_j})\, =\,\frac{1}{2}{\rm str}\,\Big(\rho_t(E_{i,0}+E_{0,\ell+i}) \rho_t(E_{0,j}-E_{\ell+j,0})\Big)\, =\,\delta_{ij}\,. \end{array}\end{eqnarray} Similarly one checks the expressions for the remaining products. $\Box$ \begin{prop} The following expression provides the quadratic Casimir element \eqref{casimir} for $\mathfrak{osp}(1|2\ell)$: \begin{eqnarray}\new\begin{array}{cc}\label{Gcasimir} C_2\, =\,\sum_{i\in I}\Big(h_{i}^2\, -\,X_{\epsilon_i}X_{-\epsilon_i}\,+\,X_{-\epsilon_i}X_{\epsilon_i}\Big)\\ +\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}\frac{({\alpha},\,{\alpha})}{2}\bigl(X_{{\alpha}}X_{-{\alpha}}\, +\,X_{-{\alpha}}X_{{\alpha}}\bigr)\,. \end{array}\end{eqnarray} \end{prop} \noindent {\it Proof} : Using the expressions \eqref{ISP} for the invariant pairing we have \begin{eqnarray}\new\begin{array}{cc}\label{ISP1} h^i\,=\,h_i\,,\quad i\in I\,;\\ X^{\alpha}\,=\,\frac{(\alpha,\alpha)}{2}\,X_{-\alpha}\,,\quad \alpha\in \Delta^+_{\bar{0}}\,;\qquad X^{\beta}\,=\,X_{-\beta}\,,\quad \beta\in \Delta^+_{\bar{1}}\,.
\end{array}\end{eqnarray} Substituting \eqref{ISP1} into \eqref{casimir} we arrive at \eqref{Gcasimir}, which completes the proof. $\Box$ In the following it will be more convenient to use another set of notation for the generators, adapted to the matrix form \eqref{ospNrep}: \begin{eqnarray}\new\begin{array}{cc} y_i\,=\,X_{\epsilon_i}\,,\qquad x_i\,=\,X_{-\epsilon_i}\,;\\ a_{ii}\,=\,h_i\,, \quad b_{ii}\,=\,X_{2\epsilon_i}\,,\quad c_{ii}\,=\,X_{-2\epsilon_i}\,,\qquad i\in I\,;\\ a_{ij}\,=\,X_{\epsilon_i-\epsilon_j}\,,\qquad a_{ji}\,=\,X_{-\epsilon_i+\epsilon_j}\,,\\ b_{ij}\,=\,X_{\epsilon_i+\epsilon_j}\,,\quad c_{ij}\,=\,X_{-\epsilon_i-\epsilon_j},\quad i<j\,. \end{array}\end{eqnarray} In addition to \eqref{osprel} the even part of $\mathfrak{osp}(1|2\ell)$ satisfies the following relations: \begin{eqnarray}\new\begin{array}{cc}\label{spNrels} [b_{ij},b_{kl}]=0, \qquad [c_{ij},c_{kl}]=0, \qquad [b_{ij},c_{kl}]=\delta_{jk}a_{il},\\ \,[a_{ii},b_{kl}]=(\delta_{ik}+\delta_{il})b_{kl}, \qquad [a_{ii},c_{kl}]=-(\delta_{ik}+\delta_{il})c_{kl}\,. \end{array}\end{eqnarray} Using this notation the quadratic Casimir element \eqref{Gcasimir} may be written as follows: \begin{eqnarray}\new\begin{array}{cc}\label{Gcasimir1} C_2\, =\,\sum_{i=1}^{\ell}(a_{ii}^2+x_iy_i-y_ix_i)\, +\,2(c_{ii}b_{ii}+b_{ii}c_{ii})\\ +\,\sum_{i<j}(a_{ij}a_{ji}+a_{ji}a_{ij})\,+\,(b_{ij}c_{ij}+c_{ij}b_{ij})\,. \end{array}\end{eqnarray} From now on we will consider the real form $\mathfrak{osp}(1|2\ell)(\mathbb{R})$ of the orthosymplectic super Lie algebra, such that the generators $a_{ii},\,b_{ii},\,c_{ii},\,i\in I$, $a_{ij},\,a_{ji},\,b_{ij},\,c_{ij},\,i<j$ as well as $x_i$ and $y_i$ are defined to be real. \section{The $\mathfrak{osp}(1|\,2\ell)$-Whittaker function} In this section we construct the Whittaker function associated with the super Lie algebra $\mathfrak{osp}(1|2\ell)$.
There is a classical approach to the construction of Whittaker functions associated with semisimple Lie algebras \cite{J}, \cite{K1}, \cite{K2}, \cite{H}. Below we give a modified version of this construction due to Kazhdan-Kostant (see \cite{E}). Given a super Lie algebra ${\mathfrak g}$, let $\mathcal{U}({\mathfrak g})$ be the corresponding universal enveloping algebra and let $\mathcal{Z}(\mathcal{U}(\mathfrak{g}))\subset\mathcal{U}({\mathfrak g})$ be its center. A $\mathcal{U}(\mathfrak{g})$-module $\mathcal{V}$ admits an infinitesimal character $\zeta$ if there is a homomorphism $\zeta: \mathcal{Z}(\mathcal{U}(\mathfrak{g}))\rightarrow \mathbb{C}$ such that $zv=\zeta(z)v$ for all $z\!\in\!\mathcal{Z}(\mathcal{U}(\mathfrak{g}))$ and $v\in\mathcal{V}$. Given a character $\chi$ of a nilpotent super Lie subalgebra $\mathfrak{n}\subset \mathfrak{g}$, \begin{eqnarray}\new\begin{array}{cc} \chi\,:\quad\mathfrak{n}\longrightarrow\,\mathbb{C}^{1|1}\,, \end{array}\end{eqnarray} we define a Whittaker vector $\psi\in\mathcal{V}$ by the following relations: \begin{eqnarray}\new\begin{array}{cc} X\cdot\psi\,=\,\chi(X)\,\psi\,,\qquad\forall X\in\mathfrak{n}\subset\mathfrak{g}\,. \end{array}\end{eqnarray} The Whittaker vector $\psi\in\mathcal{V}$ is called cyclic if it generates $\mathcal{V}$: $\mathcal{U}({\mathfrak g})\,\cdot\psi=\mathcal{V}$. A $\mathcal{U}({\mathfrak g})$-module $\mathcal{V}$ is called a Whittaker module if it contains a cyclic Whittaker vector.
A pair of $\mathcal{U}({\mathfrak g})$-modules $\mathcal{V}$ and $\mathcal{V}'$ is called dual if there exists a non-degenerate pairing \begin{eqnarray}\new\begin{array}{cc} \<\,,\,\>\,:\quad\mathcal{V}\times\mathcal{V}'\,\longrightarrow\,\mathbb{C}^{1|1}\,, \end{array}\end{eqnarray} which is $\mathbb{C}$-antilinear in the first variable and $\mathbb{C}$-linear in the second one, and such that \begin{eqnarray}\new\begin{array}{cc}\label{hermitean} \<X\cdot v',\,v\>\,=\,-(-1)^{p(v')\cdot p(X)}\<v'\,,X\cdot v\>\,,\qquad v\in\mathcal{V},\,\, v'\in\mathcal{V}'\,,\quad X\in\mathfrak{g}\,. \end{array}\end{eqnarray} Now we restrict ourselves to the case of the orthosymplectic super Lie algebra $\mathfrak{osp}(1|2\ell)$. Let $\mathcal{V}_\lambda$ be a $\mathcal{U}(\mathfrak{osp}(1|\,2\ell))$-module with an infinitesimal character admitting a vector $v_{\lambda}\in\mathcal{V}_{\lambda}$ defined by (see the notations of \eqref{ospNCartan}, \eqref{osprel}): \begin{eqnarray}\new\begin{array}{cc}\label{hwvec} h_{{\alpha}^{\vee}}\cdot v_{\lambda}\,=\,{\alpha}^{\vee}(\lambda)v_{\lambda}\,,\quad X_{{\alpha}}\cdot v_{\lambda}\,=\,0\,,\qquad\forall X_{{\alpha}}\in\mathfrak{n}_+\subset\mathfrak{osp}(1|2\ell)\,,\quad{\alpha}\in\Delta^+\,, \end{array}\end{eqnarray} where $\lambda$ is an element of the space dual to the Cartan subalgebra $\mathfrak{h}\subset \mathfrak{osp}(1|2\ell)$. The value of the quadratic Casimir element $C_2$ on $\mathcal{V}_{\lambda}$ is uniquely determined by \eqref{hwvec}.
Indeed, let us rewrite the Casimir element $C_2$ from \eqref{Gcasimir} as follows: \begin{eqnarray}\new\begin{array}{cc}\label{Gcasimir11} C_2\,=\,\sum_{i\in I}\Big(a_{ii}^2\, -\,a_{ii}\,+\,2X_{-\epsilon_i}X_{\epsilon_i}\Big)\, +\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}\frac{({\alpha},\,{\alpha})}{2}\Big(h_{{\alpha}^{\vee}}\, +\,2X_{-{\alpha}}X_{{\alpha}}\Big)\\ =\,\sum_{i\in I}\Big(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\, +\,2X_{-\epsilon_i}X_{\epsilon_i}\Big)\, +\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\,X_{-{\alpha}}X_{{\alpha}}\,, \end{array}\end{eqnarray} where \begin{eqnarray}\new\begin{array}{cc}\label{rho1} \rho(q)\, =\,\frac{1}{2}\Big(\sum_{{\alpha}\in\Delta^+_{\bar{0}}}{\alpha}(q)\, -\,\sum_{\beta\in\Delta^+_{\bar{1}}}\beta(q)\Big)\,. \end{array}\end{eqnarray} Thus using \eqref{hwvec} $C_2$ takes the following value on $v_{\lambda}\in\mathcal{V}_{\lambda}$: \begin{eqnarray}\new\begin{array}{cc}\label{CasimirValue} C_2(v_{\lambda})\,=\,\sum_{i\in I}\bigl(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\bigr)(v_{\lambda})\, =\,(\lambda,\lambda+2\rho)\,v_{\lambda}\,. \end{array}\end{eqnarray} In the following we consider those $\mathcal{V}_\lambda$ that admit a structure of Whittaker module and such that the action of the Cartan subalgebra $\mathfrak{h}\subset\mathfrak{osp}(1|2\ell)$ integrates to an action of the corresponding maximal torus $H$. Precisely, let $\mathcal{V}_\lambda$ and $\mathcal{V}'_\lambda$ be a dual pair of Whittaker modules cyclically generated by Whittaker vectors $\psi_R\in\mathcal{V}_{\lambda}$ and $\psi_L\in\mathcal{V}_{\lambda}'$.
Explicitly, the Whittaker vectors $\psi_R\in\mathcal{V}_{\lambda}$ and $\psi_L\in\mathcal{V}'_\lambda$ are defined by the following conditions: \begin{eqnarray}\new\begin{array}{cc}\label{whittchar} X_{{\alpha}}\cdot\psi_R\,=\,\chi_R(X_{{\alpha}})\,\psi_R\,,\qquad X_{-{\alpha}}\cdot\psi_L\,=\,\chi_L(X_{-{\alpha}})\,\psi_L\,,\qquad \forall {\alpha}\in\Delta^+\,, \end{array}\end{eqnarray} where $\chi_R:\,\mathfrak{n}_+\to\mathbb{C}^{1|1}$ and $\chi_L:\,\mathfrak{n}_-\to\mathbb{C}^{1|1}$ are the characters of the opposite nilpotent super Lie subalgebras $\mathfrak{n}_{\pm}\subset\mathfrak{osp}(1|2\ell)$: \begin{eqnarray}\new\begin{array}{cc} \chi_R\,:\quad\mathfrak{n}_+\,=\,\Big(\bigoplus_{{\alpha}\in\Delta^+_{\bar{0}}}\mathbb{C} X_{{\alpha}}\,\oplus\,\bigoplus_{\beta\in\Delta^+_{\bar{1}}}\mathbb{C} X_{\beta}\Big)\,\longrightarrow\,\mathbb{C}^{1|1}\,,\\ \chi_L\,:\quad\mathfrak{n}_-\,=\,\Big(\bigoplus_{{\alpha}\in\Delta^+_{\bar{0}}}\mathbb{C} X_{-{\alpha}}\,\oplus\,\bigoplus_{\beta\in\Delta^+_{\bar{1}}}\mathbb{C} X_{-\beta}\Big)\,\longrightarrow\,\mathbb{C}^{1|1}\,. \end{array}\end{eqnarray} \begin{lem}\label{Nchar} \begin{itemize} \item[(i)] The function $\chi_R:\,\mathfrak{n}_+\to\mathbb{C}^{1|1}$ defined by \begin{eqnarray}\new\begin{array}{cc}\label{whittvecR} \chi_R\bigl(X_{\epsilon_{\ell}}\bigr)\,=\,\imath^{3/2}\xi_{{\alpha}_1}^+\,\in\, \imath^{3/2}\mathbb{R}^{1|0}\,,\quad \chi_R\bigl(X_{2\epsilon_{\ell}}\bigr)\,=\,\imath(\xi_{{\alpha}_1}^+)^2\,\in\,\imath\mathbb{R}^{0|1}\,,\\ \chi_R\bigl(X_{\epsilon_{\ell+1-k}-\epsilon_{\ell+2-k}}\bigr)\,=\, \imath\xi_{{\alpha}_k}^+\,\in\,\imath\mathbb{R}^{0|1}\,, \quad1<k\leq\ell\,,\\ \chi_R\bigl(X_{\epsilon_k}\bigr)\,=\,\chi_R\bigl(X_{{\alpha}}\bigr)\,=\,0\,,\qquad 1\leq k<\ell\,,\quad{\alpha}\in\Delta^+_{\bar{0}}\setminus\bigl({}^s\Delta^+_{\bar{0}}\cup\{2\epsilon_{\ell}\}\bigr)\,, \end{array}\end{eqnarray} is a character of the super Lie subalgebra $\mathfrak{n}_+\subset\mathfrak{osp}(1|\,2\ell)$.
\item[(ii)] Similarly, the function $\chi_L:\,\mathfrak{n}_-\to\mathbb{C}^{1|1}$ defined by \begin{eqnarray}\new\begin{array}{cc}\label{whittvecL} \chi_L\bigl(X_{-\epsilon_{\ell}}\bigr)\,=\,\imath^{3/2}\xi_{{\alpha}_1}^-\,\in\, \imath^{3/2}\mathbb{R}^{1|0}\,,\qquad \chi_L\bigl(X_{-2\epsilon_{\ell}}\bigr)\,=\,\imath(\xi_{{\alpha}_1}^-)^2\,\in\,\imath\mathbb{R}^{0|1}\,,\\ \chi_L\bigl(X_{-\epsilon_{\ell+1-k}+\epsilon_{\ell+2-k}}\bigr) \,=\,\imath\xi_{{\alpha}_k}^-\,\in\,\imath\mathbb{R}^{0|1}\,,\quad1<k\leq\ell\,,\\ \chi_L\bigl(X_{-\epsilon_k}\bigr)\,=\,\chi_L\bigl(X_{-{\alpha}}\bigr)\,=\,0\,,\qquad 1\leq k<\ell\,,\quad{\alpha}\in\Delta^+_{\bar{0}}\setminus\bigl({}^s\Delta^+_{\bar{0}}\cup\{2\epsilon_{\ell}\}\bigr)\,, \end{array}\end{eqnarray} is a character of the super Lie subalgebra $\mathfrak{n}_-\subset\mathfrak{osp}(1|\,2\ell)$. \end{itemize} \end{lem} \noindent {\it Proof} : We provide the proof in the case of $\chi_R$; the case of $\chi_L$ can be treated in a similar way. Let us verify that \eqref{whittvecR} defines a character $\chi_R$ of the super subalgebra $\mathfrak{n}_+\subset\mathfrak{osp}(1|2\ell)$ by checking the compatibility of \eqref{whittvecR} with the appropriate Cartan-Weyl relations \eqref{osprel}: \begin{eqnarray}\new\begin{array}{cc}\label{osprelN} [X_{\epsilon_i},\,X_{\epsilon_i}]\,=\,2X_{\epsilon_i}^2\,=\,2X_{2\epsilon_i}\,,\quad [X_{\epsilon_i},\,X_{\epsilon_j}]\,=\,X_{\epsilon_i+\epsilon_j}\,,\qquad i\neq j\,,\quad i,j\in I\,,\\ \bigl[X_{\epsilon_i-\epsilon_{i+1}},X_{\epsilon_{i+1}}\bigr]=X_{\epsilon_i},\quad \bigl[X_{\epsilon_i-\epsilon_{i+1}},[X_{\epsilon_i-\epsilon_{i+1}},X_{2\epsilon_{i+1}}]\bigr] =2X_{2\epsilon_i},\quad1\leq i<\ell\,,\\ \bigl[X_{\epsilon_i-\epsilon_j},\,X_{2\epsilon_j}\bigr]\,=\,X_{\epsilon_i+\epsilon_j}\,,\qquad i<j\,,\\ \bigl[X_{\epsilon_i-\epsilon_j},\,X_{\epsilon_j-\epsilon_k}\bigr]\,=\,X_{\epsilon_i-\epsilon_k}\,,\qquad i<j<k\,, \end{array}\end{eqnarray} and with the Serre relations \eqref{Serre}: \begin{eqnarray}\new\begin{array}{cc}\label{SerreN} {\rm ad}_{X_{\epsilon_{\ell}}}^2(X_{\epsilon_{\ell}})\,=\,0\,,\quad {\rm ad}_{X_{\epsilon_{\ell}}}^3(X_{\epsilon_{\ell-1}-\epsilon_{\ell}})\, =\,{\rm ad}_{X_{\epsilon_{\ell-1}-\epsilon_{\ell}}}^2(X_{\epsilon_{\ell}})\,=\,0\,,\\ {\rm ad}_{X_{2\epsilon_{\ell}}}^2(X_{\epsilon_{\ell-1}-\epsilon_{\ell}})\, =\,{\rm ad}_{X_{\epsilon_{\ell-1}-\epsilon_{\ell}}}^3(X_{2\epsilon_{\ell}})\,=\,0\,,\\ {\rm ad}_{X_{\epsilon_{i-1}-\epsilon_i}}^2(X_{\epsilon_i-\epsilon_{i+1}})\, =\,{\rm ad}_{X_{\epsilon_i-\epsilon_{i+1}}}^2(X_{\epsilon_{i-1}-\epsilon_i})\,=\,0\,,\quad1<i<\ell\,. \end{array}\end{eqnarray} From the defining relations \eqref{whittvecR} we see that $\chi_R$ takes non-zero values only on the simple root generators $X_{\epsilon_{\ell}}$, $X_{\epsilon_k-\epsilon_{k+1}},\,1\leq k<\ell$ and on the special non-simple root generator $X_{2\epsilon_{\ell}}$. The latter follows from the first relation from \eqref{osprelN} for $i=\ell$: \begin{eqnarray}\new\begin{array}{cc} [X_{\epsilon_{\ell}},\,X_{\epsilon_{\ell}}]\,=\,2X_{\epsilon_{\ell}}^2\, =\,2X_{2\epsilon_{\ell}}\,. \end{array}\end{eqnarray} Indeed, given $X_{\epsilon_{\ell}}\cdot\psi_R=\imath^{3/2}\,\xi_{{\alpha}_1}^+\psi_R$ one readily deduces \begin{eqnarray}\new\begin{array}{cc} 2X_{\epsilon_{\ell}}^2\cdot\psi_R\,=\,2X_{\epsilon_{\ell}}\cdot(X_{\epsilon_{\ell}}\cdot\psi_R)\, =\,2X_{\epsilon_{\ell}}\cdot(\imath^{3/2}\xi_{{\alpha}_1}^+\psi_R)\, =\,-2\imath^{3/2}\xi_{{\alpha}_1}^+(X_{\epsilon_{\ell}}\cdot\psi_R)\\ =\,2\imath(\xi_{{\alpha}_1}^+)^2\psi_R\,, \end{array}\end{eqnarray} which matches with $2X_{2\epsilon_{\ell}}\cdot\psi_R\,=\,2\imath(\xi_{{\alpha}_1}^+)^2\psi_R$.
Similarly, the first relation from \eqref{osprelN} for $1\leq i<\ell$ yields \begin{eqnarray}\new\begin{array}{cc} \chi_R(X_{2\epsilon_i})\,=\,\frac{1}{2}\chi_R(X_{\epsilon_i})^2\, =\,0\,,\qquad1\leq i<\ell\,, \end{array}\end{eqnarray} and the other relation in the first line of \eqref{osprelN} entails \begin{eqnarray}\new\begin{array}{cc} \chi_R(X_{\epsilon_i+\epsilon_j})\,=\,\chi_R(X_{\epsilon_i}X_{\epsilon_j}+X_{\epsilon_j}X_{\epsilon_i})\, =\,0\,,\qquad i<j\,. \end{array}\end{eqnarray} The Serre relations imply that $\dim\mathfrak{n}_+=|\Delta^+|=\ell^2+\ell=\ell(\ell+1)$. Thus the rest of the defining relations \eqref{whittvecR} follow from the fact that given $X_{{\alpha}}\in\mathfrak{n}_+$ and $X_{\beta},X_{{\gamma}}\in\mathfrak{n}_+$, such that ${\alpha}=\beta+{\gamma}$ and not both $\beta,{\gamma}$ are odd, we have \begin{eqnarray}\new\begin{array}{cc} \chi_R(X_{{\alpha}})\, =\,\chi_R\bigl(X_{\beta}X_{{\gamma}}-X_{{\gamma}}X_{\beta}\bigr)\,=\,0\,. \end{array}\end{eqnarray} Namely, the last line of \eqref{osprelN} for each $1\leq i<j<k\leq\ell$ implies \begin{eqnarray}\new\begin{array}{cc} \chi_R(X_{\epsilon_i-\epsilon_k})\, =\,\chi_R\bigl(X_{\epsilon_i-\epsilon_j}X_{\epsilon_j-\epsilon_k} -X_{\epsilon_j-\epsilon_k}X_{\epsilon_i-\epsilon_j}\bigr)\, =\,0\,. \end{array}\end{eqnarray} Then by the second and the third lines of \eqref{osprelN}, for each $1\leq i<k\leq\ell$ we have \begin{eqnarray}\new\begin{array}{cc} \chi_R(X_{\epsilon_i})\, =\,\chi_R\bigl(X_{\epsilon_i-\epsilon_k}X_{\epsilon_k}-X_{\epsilon_k}X_{\epsilon_i-\epsilon_k}\bigr)\,=\,0\,.
\end{array}\end{eqnarray} Finally, we check that the remaining relations \begin{eqnarray}\new\begin{array}{cc} \bigl[X_{\epsilon_i-\epsilon_j},\,X_{2\epsilon_j}\bigr]\,=\,X_{\epsilon_i+\epsilon_j}\,,\\ {\rm ad}_{X_{\epsilon_i-\epsilon_j}}^2(X_{2\epsilon_j})\, =\,\bigl[X_{\epsilon_i-\epsilon_j},[X_{\epsilon_i-\epsilon_j},X_{2\epsilon_j}]\bigr]\, =\,2X_{2\epsilon_i}\,,\qquad i<j\,, \end{array}\end{eqnarray} are consistent with the defining relations \eqref{whittvecR}. This completes our proof. $\Box$ \begin{rem} Our choice of the characters \eqref{whittvecL}, \eqref{whittvecR} in Lemma \ref{Nchar} is compatible with the notion of unitary operators in the case of super Hilbert spaces (see \cite{DM}). Note however that in our case we do not require a Hilbert space structure but only an invariant pairing. \end{rem} \begin{de} Let $\mathcal{V}_\lambda$ and $\mathcal{V}'_\lambda$ be a dual pair of cyclic Whittaker modules with the action of the Casimir element given by \eqref{CasimirValue}. The $\mathfrak{osp}(1|2\ell)$-Whittaker function is defined by \begin{eqnarray}\new\begin{array}{cc}\label{Gwhittaker} \Psi_{\lambda}(e^q)\, =\,e^{-\rho(q)}\<\psi_L\,,\,e^{-h_q} \cdot \psi_R\>\,,\qquad h_q\,=\,\sum_{i\in I}q_ia_{ii}\,, \end{array}\end{eqnarray} where $\rho$ is the half-sum of positive even roots minus the half-sum of positive odd roots, given by \eqref{rho1}.
\end{de} \begin{prop} The $\mathfrak{osp}(1|\,2\ell)$-Whittaker function \eqref{Gwhittaker} is a solution to the following eigenvalue problem: \begin{eqnarray}\new\begin{array}{cc}\label{eigenvalue} \mathcal{H}_2^{\mathfrak{osp}(1|2\ell)}\cdot\Psi_{\lambda}(e^q) \,=\,-(\lambda+\rho)^2\,\Psi_{\lambda}(e^q)\,, \end{array}\end{eqnarray} \begin{eqnarray}\new\begin{array}{cc}\label{BCham} \begin{array}{c} \mathcal{H}_2^{\mathfrak{osp}(1|2\ell)}\, =-\,\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\, +\,2\!\!\sum_{{\alpha}_i\in{}^s\!\Delta^+}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)}\, +\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2\,e^{2{\alpha}_1(q)}\,, \end{array} \end{array}\end{eqnarray} where $\rho$ is given by \eqref{rho1}, and ${}^s\!\Delta^+={}^s\!\Delta^+(B_{0,\ell})$ is defined in \eqref{OPSroot}. \end{prop} \noindent {\it Proof} : On the one hand, by our construction we read from \eqref{CasimirValue}: \begin{eqnarray}\new\begin{array}{cc}\label{CasimirValue1} \<\psi_L\,,e^{-h_q}\,C_2\,\psi_R\>\, =\,(\lambda,\lambda+2\rho)\,\<\psi_L\,,e^{-h_q}\,\psi_R\>\,. \end{array}\end{eqnarray} On the other hand, the action of the Casimir element $C_2\in\mathcal{Z}(\mathcal{U}(\mathfrak{osp}(1|2\ell)))$ is equivalent to the action on \eqref{Gwhittaker} of a certain second-order differential operator. Namely, from \eqref{Gcasimir11} we take \begin{eqnarray}\new\begin{array}{cc} C_2\, =\,\sum_{i\in I}\Big(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\, +\,2X_{-\epsilon_i}X_{\epsilon_i}\Big)\, +\,\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\,X_{-{\alpha}}X_{{\alpha}}\,, \end{array}\end{eqnarray} and substituting this into \eqref{CasimirValue1} we obtain: \begin{eqnarray}\new\begin{array}{cc} \sum_{i\in I} \bigl\<\psi_L\,,e^{-h_q}\bigl(a_{ii}^2\,+\,2\rho(\epsilon_i)a_{ii}\bigr)\,\psi_R\bigr\>\\ =\,\sum_{i\in I}\Big\{\frac{\partial^2}{\partial q_i^2}\, -\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\Big\}\<\psi_L\,,e^{-h_q}\cdot\psi_R\>\,.
\end{array}\end{eqnarray} Taking into account the defining equations \eqref{whittvecL}, \eqref{whittvecR} and the hermitian property \eqref{hermitean} of $\<\,,\,\>$ we find \begin{eqnarray}\new\begin{array}{cc} 2\sum_{i\in I}\bigl\<\psi_L\,,e^{-h_q}X_{-\epsilon_i}X_{\epsilon_i}\,\psi_R\bigr\>\\ =\,-2\sum_{i\in I}e^{q_i} (-1)^{p(X_{-\epsilon_i})\,p(\psi_L)}\bigl\<X_{-\epsilon_i}\psi_L\,\,,e^{-h_q}\,X_{\epsilon_i}\, \psi_R\bigr\>\,\\ =\,-2(-1)^{p(X_{-\epsilon_{\ell}})\,p(\psi_L)}e^{q_{\ell}} \bigl\<\imath^{3/2}\xi_{{\alpha}_1}^-\psi_L\,,e^{-h_q}\imath^{3/2}\xi_{{\alpha}_1}^+\psi_R\bigr\>\,\\ =\,-2(-1)^{p(X_{-\epsilon_{\ell}})\,p(\psi_L)}(-1)^{p(\xi_{{\alpha}_1}^+)\cdot p(\psi_L)} \imath^{-3/2}\imath^{3/2}\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+ e^{{\alpha}_1(q)}\bigl\<\psi_L\,,e^{-h_q}\cdot\psi_R\bigr\>\,\\ =\,-2\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+e^{\alpha_1(q)}\,\bigl\<\psi_L\,,e^{-h_q}\cdot \psi_R\bigr\>\,. \end{array}\end{eqnarray} Here we use the fact that $\<\,,\,\>$ is $\mathbb{C}$-antilinear in the first variable and $\mathbb{C}$-linear in the second variable. In a similar way we derive \begin{eqnarray}\new\begin{array}{cc} \sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\bigl\<\psi_L\,,e^{-h_q}X_{-{\alpha}}X_{{\alpha}} \psi_R\bigr\>\\ =\,-\sum_{{\alpha}\in\Delta^+_{\bar{0}}}({\alpha},\,{\alpha})\,e^{{\alpha}(q)} \bigl\<X_{-{\alpha}}\,\psi_L\,, e^{-h_q}X_{{\alpha}}\psi_R\bigr\>\\ =\,-2\sum_{i=2}^{\ell}\imath^{-1}\imath\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+e^{{\alpha}_i(q)} \bigl\<\psi_L\,,e^{-h_q}\psi_R\bigr\>\\ -\,4\imath^{-1}\imath(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2e^{2{\alpha}_1(q)} \bigl\<\psi_L\,,e^{-h_q}\psi_R\bigr\>\,.
\end{array}\end{eqnarray} Collecting the contributions above we obtain the following: \begin{eqnarray}\new\begin{array}{cc} \<\psi_L\,,e^{-h_q}\,C_2\,\psi_R\>\, =\,\Big\{\sum_{i\in I}\Big(\frac{\partial^2}{\partial q_i^2}\, -\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\Big)\\ -\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2e^{2{\alpha}_1(q)}\, -\,2\sum_{i=1}^{\ell}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)} \Big\} \<\psi_L\,,e^{-h_q}\cdot\psi_R\>\,. \end{array}\end{eqnarray} Now we observe that \begin{eqnarray}\new\begin{array}{cc} e^{-\rho(q)}\frac{\partial}{\partial q_i}e^{\rho(q)}\, =\,\frac{\partial}{\partial q_i}\,+\,\rho(q)'_{q_i}\, =\,\frac{\partial}{\partial q_i}\,+\,\rho(\epsilon_i)\,,\\ e^{-\rho(q)}\frac{\partial^2}{\partial q_i^2}e^{\rho(q)}\, =\,\frac{\partial^2}{\partial q_i^2}\,+\,2\rho(q)'_{q_i}\frac{\partial}{\partial q_i}\, +\,\bigl(\rho(q)'_{q_i}\bigr)^2\\ =\,\frac{\partial^2}{\partial q_i^2}\,+\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\, +\,\rho(\epsilon_i)^2\,, \end{array}\end{eqnarray} hence we deduce the following: \begin{eqnarray}\new\begin{array}{cc} \sum_{i\in I}e^{-\rho(q)}\Big\{\frac{\partial^2}{\partial q_i^2}\, -\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\Big\}e^{\rho(q)}\\ =\,\sum_{i\in I}\Big\{\frac{\partial^2}{\partial q_i^2}\,+\,2\rho(\epsilon_i)\frac{\partial}{\partial q_i}\, +\,\rho(\epsilon_i)^2\, -\,2\rho(\epsilon_i)\Big(\frac{\partial}{\partial q_i}\,+\,\rho(\epsilon_i)\Big)\Big\}\\ =\,\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\,-\,\rho^2\,. 
\end{array}\end{eqnarray} Finally, we collect all the contributions and substitute them into \eqref{CasimirValue1} to deduce the following: \begin{eqnarray}\new\begin{array}{cc} \Big\{\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\, -\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2e^{2{\alpha}_1(q)}\, -\,2\sum_{{\alpha}_i\in{}^s\!\Delta^+}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)} -\,\rho^2\Big\}\cdot\Psi_{\lambda}(e^q)\\ =\,(\lambda,\lambda+2\rho)\,\Psi_{\lambda}(e^q)\,, \end{array}\end{eqnarray} where ${}^s\!\Delta^+={}^s\!\Delta^+(B_{0,\ell})$. This entails the assertion \eqref{eigenvalue}, since $(\lambda,\lambda+2\rho)\,+\,\rho^2\, =\,(\lambda,\lambda)\,+\,2(\lambda,\rho)\,+\,(\rho,\rho)\, =\,(\lambda+\rho)^2$. $\Box$ \begin{rem} In the special case $\lambda=\imath\mu-\rho$, the eigenvalue equation \eqref{eigenvalue} reads \begin{eqnarray}\new\begin{array}{cc}\label{BCham1} \mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\,\Psi_{\lambda}(e^q)\, =\,\mu^2\Psi_{\lambda}(e^q)\,,\\ \mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\, =\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\, +\,2\!\!\sum_{{\alpha}_i\in{}^s\!\Delta^+}\xi_{{\alpha}_i}^-\xi_{{\alpha}_i}^+\,e^{{\alpha}_i(q)}\, +\,4(\xi_{{\alpha}_1}^-\xi_{{\alpha}_1}^+)^2\,e^{2{\alpha}_1(q)}\,. \end{array}\end{eqnarray} \end{rem} Let us introduce the corresponding couplings: \begin{eqnarray}\new\begin{array}{cc} g_i^2\,=\,\xi^-_{{\alpha}_i}\,\xi^+_{{\alpha}_i}\,, \qquad i\in I\,. \end{array}\end{eqnarray} \begin{lem}\label{INDEP} The $\mathfrak{osp}(1|2\ell)$-Whittaker function \eqref{Gwhittaker} depends on $\xi^{\pm}_{{\alpha}_i}$ only via the products $g_{i}^2$, $i\in I$.
\end{lem} \noindent {\it Proof} : The $\mathfrak{osp}(1|2\ell)$-Whittaker function \eqref{Gwhittaker} \begin{eqnarray}\new\begin{array}{cc} \Psi_{\lambda}(e^q|\xi^{\pm}_{{\alpha}_i})\,=\, e^{-\rho(q)}\<\psi_L\,,\,e^{-h_q}\cdot\psi_R\>\,,\qquad h_q\,=\,\sum_{i\in I}q_ia_{ii}\,, \end{array}\end{eqnarray} satisfies the following obvious relation: for any $Q=\exp\bigl\{\sum\limits_{i=1}^\ell \theta_i h_i\bigr\}\in H$, \begin{eqnarray}\new\begin{array}{cc} \<\psi_L\,,\,Qe^{-h_q}Q^{-1}\cdot\psi_R\>\, =\,\<\psi_L\,,\,e^{-h_q}\cdot\psi_R\>\,. \end{array}\end{eqnarray} The adjoint action of $Q$ on the left and right Whittaker vectors $\psi_L,\,\psi_R$ \eqref{whittvecL}, \eqref{whittvecR} rescales the eigenvalues $\xi_{{\alpha}_i}^{\pm}$ of the corresponding $\mathfrak{n}_{\pm}$-characters as follows: \begin{eqnarray}\new\begin{array}{cc} \xi_{{\alpha}_1}^{\pm}\longrightarrow \xi_{{\alpha}_1}^{\pm} e^{\pm\theta_{\ell}}\,,\qquad \xi_{{\alpha}_i}^{\pm}\longrightarrow \xi_{{\alpha}_i}^{\pm} e^{\pm(\theta_{\ell+1-i}-\theta_{\ell+2-i})}\,,\quad1<i\leq\ell\,. \end{array}\end{eqnarray} The invariance of the Whittaker function under this transformation implies that the $\mathfrak{osp}(1|2\ell)$-Whittaker function depends on $\xi_{{\alpha}_i}^{\pm}$ only via the quadratic combinations $\xi_{{\alpha}_i}^+\xi_{{\alpha}_i}^-$. $\Box$ \begin{lem} Consider a specialization of the $\mathfrak{osp}(1|2\ell)$-Toda chain obtained by fixing arbitrary real values $g_i^2=\kappa_i^2\in \mathbb{R}$ of the couplings.
Then by a linear change of the variables $q_i$ one can bring the quadratic Hamiltonian \begin{eqnarray}\new\begin{array}{cc} \begin{array}{c} \mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\, =\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\, +\,2\sum_{i=2}^{\ell} \kappa_i^2\,e^{{\alpha}_i(q)}\, +\,2\kappa_1^2\,e^{{\alpha}_1(q)}\, +\,4\kappa_1^4\,e^{2{\alpha}_1(q)}\,, \end{array} \end{array}\end{eqnarray} to the following canonical form: \begin{eqnarray}\new\begin{array}{cc}\label{BCcan} \mathcal{H}_2^{\mathfrak{osp}(1|\,2\ell)}\, =\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\, +\,\sum_{i=2}^{\ell}e^{{\alpha}_i(q)}\, +\,e^{{\alpha}_1(q)}\,+\,e^{2{\alpha}_1(q)}. \end{array}\end{eqnarray} \end{lem} \noindent {\it Proof} : Indeed, it is easy to check that the transformation of variables \begin{eqnarray}\new\begin{array}{cc} q_k\,\longmapsto\,q_k\,-\,(\ell+1-k)\ln2\, -\,\sum_{j=1}^{\ell+1-k}\ln \kappa^2_{j}\,,\qquad 1\leq k\leq\ell\,, \end{array}\end{eqnarray} applied to the Hamiltonian above gives \eqref{BCcan}. $\Box$ \section{On $\mathfrak{osp}(1|2\ell)$ as a Lie algebra of type $BC_\ell$} Let us recall the construction of the Toda chain associated with a general root system. Let $\Delta$ be a rank $\ell$ root system realized as a set of vectors in $V=\mathbb{R}^{\ell}$. Choose an orthogonal basis $\{\epsilon_i,\,i\in I\}$ in $V$ and the dual basis $\{\epsilon^i,\,i\in I\}$ in $V^*$, both indexed by $I=\{1,\ldots,\ell\}$. Then elements $q\in V^*$ admit the decomposition $q=\sum\limits_{i=1}^\ell q_i\epsilon^i$. Let ${}^s\!\Delta^+$ be the set of simple positive roots in $\Delta$.
The quadratic quantum Hamiltonian of the Toda chain associated with the root system $\Delta$ is given by \begin{eqnarray}\new\begin{array}{cc}\label{qHam} \mathcal{H}_2^{\Delta^+}\, =\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}+ \sum_{\alpha\in{}^s\!\Delta^+}\,g^2_\alpha \,e^{\alpha(q)} \end{array}\end{eqnarray} with the coupling constants $g_\alpha^2$. Note that the Hamiltonian depends only on the structure of the simple positive roots ${}^s\!\Delta^+$. Now let us specialize this expression to the case of the $BC_\ell$ root system, the unique non-reduced root system satisfying the basic axioms of root systems of finite-dimensional Lie algebras (for a description of the $BC_\ell$ root system see e.g. \cite{H}, \cite{L}). The set of simple positive roots of the $BC_{\ell}$ root system is given by \begin{eqnarray}\new\begin{array}{cc}\label{BC2} {}^s\Delta^+(BC_{\ell})\, =\,\bigl\{2\epsilon_{\ell};\,\, \epsilon_{\ell}\,,\quad \epsilon_{i}-\epsilon_{i+1}\,,\,\, 1\leq i<\ell\bigr\}\,. \end{array}\end{eqnarray} The Cartan matrix $A=\|A_{ij}\|$ associated with the set of simple positive roots is defined via the standard formula \begin{eqnarray}\new\begin{array}{cc}\label{CAM} A_{ij}=\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}\,,\qquad i,j\in I\, . \end{array}\end{eqnarray} Note that the Cartan matrix corresponding to $BC_\ell$ is degenerate. For example, for $\ell=3$ the Cartan matrix is given by (cf.\ \eqref{OSPcar}): \begin{eqnarray}\new\begin{array}{cc} A\,= \left(\begin{smallmatrix} 2&4&-2&0\\ 1&2&-1&0\\ -1&-2&2&-1\\ 0&0&-1&2 \end{smallmatrix}\right). \end{array}\end{eqnarray} Using the general formula \eqref{qHam} for the $BC_\ell$ root system we arrive at the following: \begin{eqnarray}\new\begin{array}{cc}\label{BCcanG} \mathcal{H}_2^{BC_\ell}\, =\,-\sum_{i\in I}\frac{\partial^2}{\partial q_i^2}\, +\,\sum_{i=1}^{\ell-1}g_i^2\,e^{q_i-q_{i+1}}\, +\,g_\ell^2\,e^{q_{\ell}}\, +\,g_{\ell+1}^2\,e^{2q_{\ell}}.
\end{array}\end{eqnarray} It is clear that the specialization of the quadratic Hamiltonian \eqref{BCcanG} of the $BC_\ell$-Toda chain to the case of the coupling constants $g_i^2=1$, $1\leq i\leq\ell+1$, coincides with the quadratic Hamiltonian \eqref{BCcan} of the $\mathfrak{osp}(1|2\ell)$-Toda chain. On the other hand, for generic values of $g_i$ in \eqref{BCcanG} it is not possible to transform the Hamiltonian \eqref{BCcanG} into the Hamiltonian with $g_i^2=1$ by a linear change of the variables $q_i$. Thus the $\mathfrak{osp}(1|2\ell)$-Toda chain realizes a special class of $BC_\ell$-Toda chains. This raises the question of the underlying reason for this phenomenon. It is easy to see that the simple positive roots \eqref{BC2} of $BC_\ell$ and those of $B_{0,\ell}$ \eqref{OPSroot} are closely related. There are, however, two differences. First, the short simple root of $\mathfrak{osp}(1|2\ell)$ has odd parity, while in the $BC_\ell$ root system it is an even root. Second, while in the case of the super Lie algebra $\mathfrak{osp}(1|2\ell)$ the corresponding root system includes the roots $\pm 2\epsilon_\ell$, these roots are not simple and thus do not enter the expression for the corresponding Cartan matrix. If, however, we formally add the root $2\epsilon_\ell$ to the set of positive simple roots, then the corresponding Cartan matrix constructed according to \eqref{CAM} precisely coincides with the Cartan matrix of the $BC_\ell$ root system. The fact that in the case of $\mathfrak{osp}(1|2\ell)$ the terms of the Cartan decomposition \eqref{ospNCartan} corresponding to short roots are odd does not actually manifest itself in the expressions for the Hamiltonians of the corresponding Toda chain. Indeed, according to Lemma \ref{INDEP} the eigenvalues $\xi_{{\alpha}_i}^{\pm}$ in \eqref{whittvecR}, \eqref{whittvecL} enter the expressions for the quantum Hamiltonians only via the combinations $g_i^2=\xi_{{\alpha}_i}^+\xi_{{\alpha}_i}^-$. Therefore the $B_{0,\ell}$-Toda chain turns out to be a special case of the $BC_\ell$-Toda chain.
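The coincidence of the Cartan matrices noted above, as well as the degeneracy of the $BC_\ell$ Cartan matrix, can be checked numerically. The following sketch computes \eqref{CAM} for the simple roots \eqref{BC2} with the root $2\epsilon_\ell$ adjoined (the ordering of the roots, with $\epsilon_\ell$ first and $2\epsilon_\ell$ second, is our assumption):

```python
import numpy as np

def bc_cartan(ell):
    """Cartan matrix A_ij = 2(a_i, a_j)/(a_i, a_i) for the roots
    (eps_ell, 2 eps_ell, eps_{ell-1} - eps_ell, ..., eps_1 - eps_2)
    in an orthonormal basis of R^ell."""
    e = np.eye(ell)
    roots = [e[ell - 1], 2 * e[ell - 1]]
    roots += [e[i] - e[i + 1] for i in range(ell - 2, -1, -1)]
    return np.array([[2 * a @ b / (a @ a) for b in roots] for a in roots])
```

The first two rows of the resulting $(\ell+1)\times(\ell+1)$ matrix are proportional, which makes the degeneracy (vanishing determinant) manifest.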
We have checked this explicitly for the quadratic Hamiltonian in Section 3 above. It is natural to wonder whether we might treat the super Lie algebra $\mathfrak{osp}(1|2\ell)$ as a proper candidate for the Lie algebra structure associated with the $BC_\ell$ root system. Such an identification has at least one obvious caveat. The root system $BC_\ell$ allows embeddings of the root systems $B_\ell$ and $C_\ell$, which have isomorphic Weyl groups $W_{B_\ell}\simeq W_{C_\ell}$. It would be natural to expect the same property for the corresponding Lie algebras, i.e.\ a candidate for the Lie algebra associated with $BC_\ell$ should allow embeddings of the Lie algebras $\mathfrak{so}_{2\ell+1}$ and $\mathfrak{sp}_{2\ell}$ associated with the root systems $B_\ell$ and $C_\ell$, respectively. While there indeed exists an embedding $\mathfrak{sp}_{2\ell}\subset \mathfrak{osp}(1|2\ell)$, the super Lie algebra $\mathfrak{osp}(1|2\ell)$ does not allow an embedding of $\mathfrak{so}_{2\ell+1}$.
\section{Introduction} Pencil sketching is a kind of drawing with a highly realistic style. It is not only a popular form of art creation but also the essential basis of other art forms such as oil painting. To draw a pencil sketch, artists need accurate contour-construction ability and superb shading skills, which require long-term professional training. Therefore, there has always been a strong demand for pencil sketch rendering algorithms. The existing pencil drawing algorithms are mainly implemented by Non-Photorealistic Rendering (NPR) \cite{rosin2012image} and can be further divided into 3D model-based sketching and 2D image-based sketching \cite{lu2012combining}. 3D models provide the complete geometric information of the objects, and the lighting condition in the 3D scene is fully controllable. Thus 3D model-based sketching can accurately capture the spatial structure and render the shading texture according to the lighting condition. However, in most application scenarios we cannot obtain 3D models but only 2D natural images, so there is a greater demand for image-based sketch rendering algorithms. For 2D natural images, the geometric information is often incomplete, and the light components are usually complicated and noisy, which makes sketch rendering hard. Generally, procedural rendering algorithms can maintain structures well, control the rendering effect in a fine-grained manner, and are often interpretable. In recent years, many deep learning methods for image style transfer have been developed. Results of these methods usually have a stronger style than those of procedural rendering algorithms. However, due to the complexity of pencil drawing, neural methods do not perform well at capturing pencil sketch texture \cite{li2019im2pencil}.
Deep-learning-based style translation usually suffers from structure distortion and artifacts, which is a serious defect for the pencil drawing task, since it requires preserving structures and producing high-quality textures. Besides, neural networks' parameter-control mechanisms are usually high-level and difficult to interpret. Whether procedural algorithms or neural style transferrers, existing methods can only produce the final result without offering the drawing process. In fact, there exists very limited research on auto-drawing that includes the drawing process. Our algorithm implements image-to-pencil translation with a drawing process for the first time, which is a key novelty of our method. As shown in Figure \ref{process}, given an image (in the rightmost column), our algorithm produces a pencil sketch by drawing one stroke at a time (Figure \ref{process} only shows the drawing process in stages; the whole process can be found in the supplementary material). Our algorithm can generate high-quality details, and the final result has a strong style. Besides, we can produce pencil drawings with clearly different visual effects by adjusting the strokes' properties. Comparison with existing pencil drawing algorithms shows that our method performs favorably in terms of texture quality, style, and user evaluation. Our work is inspired by the observation of real pencil drawings, and our algorithm models real artists' drawing techniques; thus the interpretability of our method is stronger than that of other methods. Since pencil drawing has many different styles and artists use various drawing tools \cite{dodson1990keys}, we only simulate the most popular sketching method: drawing strokes with diverse directions, shades, lengths, and widths on a canvas to gradually form a picture.
For the strokes' direction, artists usually use the tangent direction of objects' edges/contours to guide the strokes; for the strokes' shade, artists usually adjust their pencil sketches' contrast to be higher than under real light conditions to make their works more visually impactful \cite{kelen1974leonardo, hoffman1989vision}. We divide the pencil sketching task into two steps. In the first step, we develop a parameter-controlled pencil stroke generation mechanism based on pixel-scale statistics of real pencil drawings. In the second step, we develop a framework that guides the arrangement of strokes on the canvas. Together, these two steps implement the pencil sketch auto-drawing technique. In this work, our main contribution is a novel image-to-pencil translation method that generates high-quality results and offers the drawing process. \section{Related Work} \subsection{Non-Photorealistic Rendering} There is a rich research history on non-photorealistic texture rendering of pencil drawings. 3D models provide all the geometric information and light conditions and are thus convenient for pencil drawing rendering. \cite{lake2000stylized} presented a pencil-sketch texture-mapping technique and proposed pencil shading rendering. \cite{lee2006real} detected contours from 3D models and imitated human contour drawing; to express shading, they mapped oriented textures onto objects' surfaces. \cite{praun2001real} achieved real-time hatching rendering over arbitrarily complex surfaces using 3D models. These 3D model-based methods usually obtain satisfactory results. However, when the 3D structure and light conditions are not available, these methods cannot work. 2D image-based methods mainly include the following typical algorithms. Sousa and Buchanan presented an observational model of blenders and kneaded erasers \cite{sousa1999observational} and simulated artists' and illustrators' graphite pencil rendering techniques in \cite{sousa1999computer}.
\cite{chen2004example} proposed a composite sketching approach for portrait drawing. \cite{durand2001decoupling} presented an interactive system that allows users to produce drawings in a variety of styles, including pencil sketching. \cite{mao2002automatic} detected the input image's local structure orientation and adopted line integral convolution (LIC) to render sketch texture. \cite{yamamoto2004enhanced} divided the input image into several layers of successive intensity ranges, rendered each layer, and finally added them together. \cite{li2003feature} analyzed the image moments and texture of each region, using the captured geometric feature attributes to implement pencil drawing rendering. Others proposed improved LIC-based methods \cite{chen2008novel, gao2010automatic, kong2018hybrid, chen2017stylebank}. Pencil sketch rendering can also be implemented by image analogies \cite{hertzmann2001image}. \cite{lu2012combining} proposed a novel two-stage system combining both line and tone for pencil drawing production and obtained a significantly better effect than the above methods. \subsection{Drawing with Process} All previous works on the image-to-pencil task can only generate a final result without offering the drawing process. Here we review some related stroke-based rendering methods that do have a process. \cite{fu2011animated} proposed an algorithm that takes a human pre-drawn line drawing as input to derive a stroke order and animate the sketching automatically, but this method cannot recover the input line drawing well. \cite{ha2018a} presented an RNN-based method trained on a dataset of human-drawn simple images to draw stick figures. StrokeNet \cite{zheng2018strokenet} can generate a sequence of only a few strokes to write Chinese characters; however, the strokes as well as their sequence are very different from those of human writing.
\cite{huang2019learning} adopted the model-based Deep Deterministic Policy Gradient (DDPG) algorithm to train a neural agent that learns to do oil painting with a process. However, this method cannot be directly applied to pencil drawing because pencil strokes' characteristics and fusion mode are distinct from those of oil painting: the strokes of pencil drawing are lines while oil painting strokes are color blocks, and newly drawn strokes cannot cover the old ones in pencil drawing while oil painting strokes can. Besides, the sparsity of lines makes it hard to train pencil drawing neural agents. Deep reinforcement learning (DRL) requires a massive number of parameters during training, so the network's input size is very limited. The oil agent of \cite{huang2019learning} can only handle $128 \times 128$ images and is unable to generate fine-grained details, while our algorithm has no restriction on the size of the input image and can generate high-quality details. \begin{figure} \includegraphics[width=\linewidth]{real-pencil.pdf} \caption{Three real pencil drawings. It can be seen from the zoomed-in areas that the texture of pencil drawings actually consists of sets of parallel strokes.} \label{real} \end{figure} \section{Stroke Simulation} Lines are the fundamental elements of pencil sketching. Since pencil drawing strokes are lines, we regard the “line" and the “stroke" as the same concept in this article. \subsection{Observation and Statistics} The analysis of real pencil drawings can be performed globally or locally. Statistics of global features mainly concern the histogram. \cite{lu2012combining} counted and fitted the histogram distribution of several real pencil drawings and then performed histogram matching to transfer the tone of input images. The analysis of local features mainly concerns texture. \cite{sousa1999observational} observed the absorptive and dispersive properties of blenders and erasers, and studied their interaction with lead material deposited on the drawing paper.
\cite{hertzmann2000illustrating} analyzed different hatching styles of pencil sketches. \cite{xie2007efficient} studied the graphite distribution in pencil drawings and made three assumptions about its local distribution characteristics. Generally, local features are more important than global features because they better reflect the characteristics of pencil drawings, while the global histogram distribution of various artists' works is often personalized. Our observation and analysis methods are entirely based on local features. Three real pencil drawings are shown in Figure \ref{real}. It can be seen from the zoomed-in areas that the texture is composed of many parallel curves. This pattern is also prevalent and evident in the remaining regions of these drawings. Within any group of parallel curves, the lines have a high degree of similarity: the distance between any two adjacent lines and the shade, length, and width of each line are very close. Therefore, we can perform statistical analysis on these parallel lines. To facilitate the statistics, we cut patches containing just one set of parallel curves from real pencil drawings and then rotate each patch until the lines' direction is nearly horizontal, as shown in Figure \ref{patch}(a). We notice that the lines are slightly curved and not strictly parallel, which brings great difficulty to statistics in the horizontal direction, as shown by the red dotted line marked with \emph{x} in Figure \ref{patch}(a). However, the lines' curving does not affect the distribution of gray values in the vertical direction, as shown by the red dotted line marked with \emph{y} in Figure \ref{patch}(a). So we collect statistics in the vertical direction. For the pixels on the red dotted line marked with \emph{y} in Figure \ref{patch}(a), the gray values are shown in Figure \ref{patch}(b).
We draw red dotted lines at all peak points in (b); this shows that the gray-value distribution curve between any two adjacent peak points is close to the letter V, as shown in Figure \ref{patch}(c). In fact, each “V" curve in Figure \ref{patch}(b) corresponds to a stroke in Figure \ref{patch}(a). The same statistics are performed on all the columns of pixels. Suppose there are \emph{n} columns; then each stroke corresponds to \emph{n} “V"s. For the \emph{n} “V"s corresponding to a particular stroke, the gray values of the pixels at the same positions (the yellow points in Figure \ref{patch}(c)) can be assumed to be independent and identically distributed. Thus, the gray value's mean and variance at each position of the “V" can be calculated. \begin{figure}[h] \includegraphics[width=\linewidth]{patch.pdf} \caption{(a) is a patch cut from a real pencil drawing; (b) shows the gray values of the pixels on the red dotted line marked with \emph{y} in (a); (c) is the fitting of the gray value's periodic change in (b); (d) is the illustration of stroke bending.} \label{patch} \end{figure} We use the following method for fitting. We define two variables that determine a “V" curve: the width \emph{W} and the mean value \emph{G} of the central pixel's gray value. As shown in Figure \ref{patch}(c), \emph{W} is the number of pixels of the “V" curve, and the red dot indicates its central pixel, whose gray value has mean \emph{G}.
Assume a pixel on this curve is \emph{d} pixels away from the central red pixel; then the gray value's mean and variance of this pixel can be calculated by the following equations: \begin{equation} mean(d)=G+(255-G)\times\frac{2d}{W-1} \end{equation} \begin{equation} variance(d)=(255-G)\times\cos{\frac{\pi d}{W-1}} \end{equation} Now suppose we can straighten the lines in Figure \ref{patch}(a) in the horizontal direction, and assume the length of a line is \emph{L}; then the line can be represented by a gray-value matrix with \emph{W} rows and \emph{L} columns. For the pixels in one specific row, the gray values can be considered independent and identically Gaussian distributed. Now we define a matrix \emph{F} of shape $(W, 2)$ to record the gray-value distribution of a line. Elements $(w, 1)$ and $(w, 2)$ of \emph{F} indicate the gray value's mean and variance of all the pixels in row \emph{w} of the line. As long as \emph{G} and \emph{W} are specified, the distribution matrix \emph{F} of the line can be calculated. \subsection{Stroke Generation} We first simulate a straight line and then bend it in the vertical direction to get a more natural effect. To draw a straight line, we need to specify three parameters: the central pixel's gray-value mean \emph{G}, the line width \emph{W}, and the line length \emph{L}. First, we use \emph{G} and \emph{W} to calculate the distribution matrix \emph{F} of the line. Then, for every row, the gray value of each pixel is randomly generated according to \emph{F}. As shown in Figure \ref{imitation}(a), we have drawn some lines with width $W=7$ pixels but with different \emph{G}. These straight lines look too rigid, so we adjust their shape further.
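Equations (1)–(2) and the row-wise Gaussian sampling of a straight stroke translate directly into code; the following is a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def distribution_matrix(G, W):
    """Matrix F of shape (W, 2): per-row (mean, variance) of a stroke,
    following mean(d) = G + (255-G)*2d/(W-1) and
    variance(d) = (255-G)*cos(pi*d/(W-1))."""
    F = np.zeros((W, 2))
    c = (W - 1) / 2.0                      # index of the central row
    for w in range(W):
        d = abs(w - c)                     # distance from the central pixel
        F[w, 0] = G + (255 - G) * 2 * d / (W - 1)
        F[w, 1] = (255 - G) * np.cos(np.pi * d / (W - 1))
    return F

def straight_stroke(G, W, L, rng=None):
    """Sample a W x L grayscale stroke; pixels in each row are i.i.d.
    Gaussian with that row's mean and variance from F."""
    rng = np.random.default_rng() if rng is None else rng
    F = distribution_matrix(G, W)
    scale = np.sqrt(np.maximum(F[:, 1:], 0.0))   # std. dev., guard rounding
    stroke = rng.normal(F[:, :1], scale, size=(W, L))
    return np.clip(stroke, 0, 255)
```

Note that the mean rises to pure white (255) and the variance falls to zero at the stroke's border rows, so the stroke fades smoothly into the paper.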
\begin{figure}[h] \includegraphics[width=\linewidth]{simulation.pdf} \caption{Stroke generation steps} \label{imitation} \end{figure} By observing realistic pencil drawing strokes, as shown in Figure \ref{imitation}(d), we found that the head/tail of a stroke is thinner and lighter than the middle part. This is because when the pencil tip just touches the paper's surface, or when it is about to leave, its pressure on the paper is less than when the middle part of the line is drawn. Lines are not entirely straight but slightly curved because artists draw lines by swinging their wrists, so the movement of the pencil tip on paper is essentially a circular motion with a large radius. We bend the previously generated straight lines twice to achieve these effects. For the first bending, as shown in Figure \ref{patch}(d), the yellow dots on the X-axis indicate the pixels in a particular row of the line. These pixels will be shifted onto the blue circle. Using the midpoint of the line as the origin of a coordinate system, pixels with different abscissas on the line are deviated by different amounts in the Y direction. Assuming the blue circle has radius \emph{R}, a pixel with abscissa \emph{x} is shifted by $\delta y(x)$ pixels in the Y direction, where $R=\frac{L^{2}}{4W}$ and $\delta y(x)=\frac{x^{2}}{2R}$. Since $\delta y(x)$ is usually not an integer, we perform linear interpolation in the Y direction to realize this operation. After bending, some pixels will fall outside the matrix with \emph{W} rows and \emph{L} columns; we simply discard them. The blank part of the matrix is filled with pure white pixels. Now we have the curves shown in Figure \ref{imitation}(b). The purpose of the first bending operation is to make the head and tail of the lines sharp.
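The bending operation can be sketched with a single helper; the backward-mapping linear interpolation and the keep_overflow flag (which preserves out-of-matrix pixels, as needed for the second bending described next) are our implementation choices:

```python
import numpy as np

def bend_stroke(stroke, keep_overflow=False):
    """Bend a W x L straight stroke: the column at abscissa x (measured
    from the stroke's midpoint) is shifted down by dy(x) = x^2 / (2R),
    R = L^2 / (4W); sub-pixel shifts use linear interpolation.  Pixels
    leaving the W-row matrix are discarded unless keep_overflow=True;
    blanks are filled with white (255)."""
    W, L = stroke.shape
    R = L ** 2 / (4.0 * W)
    dy_max = ((L - 1) / 2.0) ** 2 / (2.0 * R)
    H = W + int(np.ceil(dy_max)) if keep_overflow else W
    out = np.full((H, L), 255.0)
    for col in range(L):
        x = col - (L - 1) / 2.0
        dy = x ** 2 / (2.0 * R)
        for tgt in range(H):
            src = tgt - dy                      # backward mapping
            i0 = int(np.floor(src))
            frac = src - i0
            v0 = stroke[i0, col] if 0 <= i0 < W else 255.0
            v1 = stroke[i0 + 1, col] if 0 <= i0 + 1 < W else 255.0
            out[tgt, col] = (1.0 - frac) * v0 + frac * v1
    return out
```

The central column ($x=0$) stays fixed, while the ends are pushed down and partly off the matrix, which is what sharpens the head and tail.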
The second bending is almost the same as the first, but we preserve the pixels that fall outside the matrix. The purpose of the second bending is to further increase the curvature. Now we have the curves shown in Figure \ref{imitation}(c). It is worth noting that these curves are essentially still straight lines, only looking more natural. We model the strokes as such quasi-straight lines rather than arbitrarily shaped curves because, on the one hand, real pencil strokes are mostly like this and, on the other hand, straight lines are convenient for our subsequent work. \begin{figure} \includegraphics[width=\linewidth]{pipeline.pdf} \caption{Schematic illustration of our algorithm. \emph{I} is the input. \emph{ETF} is the visualization of the edge tangent flow vector field \cite{kang2007coherent}. \{$\alpha_{1}$, $\alpha_{2}$, \ldots, $\alpha_{n}$\} are the area divisions of the input according to the direction of the \emph{ETF} vectors. \emph{Q} is the quantization result of \emph{I}. \{$\beta_{1}$, $\beta_{2}$, \ldots, $\beta_{n}$\} are the stroke drawing results of each area. \emph{A} is the aggregation of \{$\beta_{1}$, $\beta_{2}$, \ldots, $\beta_{n}$\}. \emph{G} is the gradient map of \emph{I}. \emph{T} is the edge map generated from \emph{G}. \emph{R} is the final result obtained by multiplying \emph{A} and \emph{T}.} \label{pipeline} \end{figure} \section{Guided Stroke Drawing} Now we introduce how to do sketching by drawing one stroke at a time on the canvas. To determine a stroke, we need to know the line's width \emph{W}, length \emph{L}, central pixel's gray-value mean \emph{G}, starting point's coordinates, and direction. For the line width \emph{W}, we currently fix the width of all strokes to a constant value (we discuss the influence of the line width \emph{W} in Section 4.5 User Control). We will utilize the local characteristics of the input image to determine the other parameters.
\begin{figure*} \includegraphics[width=\linewidth]{self-compare.pdf} \caption{Some images used for algorithm introduction. (a) is the result of drawing lines only in the horizontal direction; (b) is the visualization of the edge tangent flow vector field \cite{kang2007coherent}; (c) and (d) are the results of drawing lines without/with extension, respectively; (e) is the edge map obtained by \cite{lu2012combining}; (f) is the drawing result with 5-pixel-wide lines, while lines in (g) are 9 pixels wide.} \label{compare} \end{figure*} \subsection{Grayscale Guidance} To reduce the difficulty of the task, in this section we temporarily do not consider how to determine the direction of the strokes but only show how to draw strokes in a fixed direction. Figure \ref{compare}(a) is the result of drawing strokes only in the horizontal direction; in the following, we introduce how to draw it. We first adjust the histogram distribution of the input gray image to improve its tone, using contrast limited adaptive histogram equalization (CLAHE) \cite{zuiderveld1994contrast} to enhance the contrast of the input. Next, we uniformly quantize the image into several gray levels, whose gray values we denote by \{$G_{1}$, $G_{2}$, ..., $G_{n}$\}. These values will be used as the strokes' central-pixel gray-value means \emph{G}. As shown in Figure \ref{pipeline}, \emph{I} is the input and \emph{Q} is the result after quantization. Then we use \emph{Q} to search for and determine the strokes' parameters by scanning in the horizontal direction. Take the first row of pixels in \emph{Q} as an example. We search out all the intervals in the first row where the pixels' gray value is less than or equal to $G_{1}$, use the starting point and length of each interval as the starting point and length of a stroke to be drawn, and use $G_{1}$ as the strokes' central-pixel gray-value mean \emph{G}. Then we can draw several strokes in the horizontal direction (the width \emph{W} is specified in advance).
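The quantization and per-row interval search just described can be sketched as follows (a minimal NumPy sketch; the function names are ours, and the actual stroke rendering is omitted):

```python
import numpy as np

def quantize(gray, n_levels):
    """Uniformly quantize a grayscale image into gray levels G_1..G_n."""
    levels = np.linspace(0, 255, n_levels)
    idx = np.rint(gray / 255.0 * (n_levels - 1)).astype(int)
    return levels[idx], levels

def row_intervals(q_row, G):
    """All maximal runs in a quantized row whose value is <= G; each run
    yields the (start, length) of one stroke with central-pixel mean G."""
    mask = np.r_[False, q_row <= G, False].astype(np.int8)
    edges = np.flatnonzero(np.diff(mask))       # run boundaries
    return [(int(a), int(b - a)) for a, b in zip(edges[::2], edges[1::2])]
```

Here the boolean mask is padded with `False` on both sides so that runs touching the image border are still closed by a boundary.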
Here we define a new random variable $D\sim N(W,1)$, which is used to generate the lines' pixel distance in the vertical direction. Having searched for and drawn strokes in the first row, each time we move down \emph{D} rows in the vertical direction and repeat the same operation as for the first row until reaching the bottom of the quantization image \emph{Q}. In this way, we can draw all the strokes with central-pixel gray-value mean $G_{1}$. Then we draw strokes for \{$G_{2}$, ..., $G_{n}$\} in the same way. In this process, the coverage regions of different strokes will overlap; the gray value of the pixels in an overlapped region is determined by the minimum (darkest) value. Now we obtain the drawing result shown in Figure \ref{compare}(a). Although we only introduced the method of drawing strokes in the horizontal direction, different directions are equivalent: we can rotate the input image clockwise by an angle before drawing strokes in the horizontal direction and, after the drawing is done, rotate it back counterclockwise. In this way, we can draw strokes in every direction. \subsection{Direction Guidance} We have introduced how to draw strokes in a fixed direction. Now suppose we can divide the picture into several areas such that only strokes in the same area have the same direction. Then we can use the method of Section 4.1 Grayscale Guidance to draw strokes for every area according to the area division. By observing real pencil drawings, it is easy to find that the direction of strokes usually follows the tangents of the edges \cite{kelen1974leonardo}. Therefore, we hope to use the edges of objects to guide the direction of the strokes nearby. However, it is difficult to predict the structure of an object from a 2D image. In fact, we do not need accurate predictions but only an estimation, and we can use the input image's gradient information for this estimation because gradients and edges are often closely related.
Under natural lighting, the change in light intensity at objects' edges is usually more apparent than in flat areas, so the gradient vector field can offer suggestions for determining the direction of the strokes. We use the edge tangent flow (\emph{ETF}) proposed by \cite{kang2007coherent} to estimate the stroke direction at each pixel. The construction of the \emph{ETF} is as follows: first, compute the modulus and direction of the gradient vector at each pixel; then rotate these vectors counterclockwise by 90 degrees; finally, adjust the vectors' directions iteratively so that vectors with small modulus tend toward the direction of nearby vectors with larger modulus. At the end, the directions of all the vectors are roughly parallel to the tangents of the edges. A visualization of the edge tangent flow vector field is shown in Figure \ref{compare}(b); the small red arrows point in the direction of the \emph{ETF} vectors. We can now divide the input image into several areas according to the \emph{ETF}. We uniformly quantize the directions in $0 \sim 2 \pi$ into \emph{n} values, regarding vectors with a phase difference of $\pi$ as having the same direction. After quantizing the directions, pixels with the same direction are grouped into one area. As shown in the ``Area division'' box in Figure \ref{pipeline}, \{$\alpha_{1}$, $\alpha_{2}$, \ldots, $\alpha_{n}$\} indicate the area divisions (pixels belonging to a given area are white, while the others are black). After drawing strokes with a different direction for each area, we obtain \emph{n} results, indicated by \{$\beta_{1}$, $\beta_{2}$, \ldots, $\beta_{n}$\} in the ``Stroke drawing'' box in Figure \ref{pipeline}. \subsection{Area Merging and Detail Enhancement} We now aggregate \{$\beta_{1}$, $\beta_{2}$, \ldots, $\beta_{n}$\} into one picture, as shown in Figure \ref{compare}(c). There are obvious defects at the boundaries between different areas.
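A much-simplified sketch of this ETF construction follows. It is illustrative only: the actual method of Kang et al.\ uses a separable kernel with spatial, magnitude, and direction weights, which we reduce here to a plain magnitude-weighted average over a square neighbourhood; all names are ours.

```python
import numpy as np

def edge_tangent_flow(img, iterations=3, radius=2):
    """Toy ETF: take the image gradient, rotate it 90 degrees to get
    tangent vectors, then iteratively align each vector with nearby
    vectors of larger gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))  # np.gradient: row, column
    mag = np.hypot(gx, gy)
    # Rotate gradients 90 degrees counterclockwise: (gx, gy) -> (-gy, gx).
    tx, ty = -gy, gx
    h, w = img.shape
    for _ in range(iterations):
        ntx, nty = tx.copy(), ty.copy()
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                wgt = mag[y0:y1, x0:x1]  # larger magnitude, larger weight
                # Flip neighbours pointing the opposite way (a phase
                # difference of pi counts as the same direction).
                sign = np.sign(tx[y0:y1, x0:x1] * tx[y, x]
                               + ty[y0:y1, x0:x1] * ty[y, x])
                sign[sign == 0] = 1
                sx = np.sum(wgt * sign * tx[y0:y1, x0:x1])
                sy = np.sum(wgt * sign * ty[y0:y1, x0:x1])
                n = np.hypot(sx, sy)
                if n > 1e-9:
                    ntx[y, x], nty[y, x] = sx / n, sy / n
        tx, ty = ntx, nty
    return tx, ty

img = np.zeros((8, 8))
img[4:, :] = 255.0  # a horizontal edge; tangents should become horizontal
tx, ty = edge_tangent_flow(img, iterations=1, radius=1)
```

Quantizing `arctan2(ty, tx)` modulo $\pi$ into $n$ bins then gives the area divisions $\alpha_1, \ldots, \alpha_n$.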
Besides, the area division produces a large number of very short strokes, which look like noise. Both problems are solved by extending the head and tail of every stroke by \emph{2W} pixels (\emph{W} is the stroke width), as shown in Figure \ref{compare}(d). \emph{A} in Figure \ref{pipeline} denotes the aggregation of the strokes drawn in different directions, obtained as \begin{equation} A=\min({\beta_{1}}, {\beta_{2}}, \ldots, {\beta_{n}}), \end{equation} where the minimum is taken pixelwise. Now the strokes in different directions merge well and appear more continuous. However, extending the strokes also causes a loss of detail clarity; for example, the lady's teeth and eyes in Figure \ref{compare}(d) are very unclear. We therefore need to enhance the details of the sketch. As shown in Figure \ref{pipeline}, \emph{G} is the gradient map of the input image and \emph{T} is the edge map obtained from \emph{G}. There are many algorithms for generating an edge map from a gradient map; here we adopt the linear convolution method of \cite{lu2012combining} because the resulting edge map looks more like part of a pencil drawing than those of other algorithms, as shown in Figure \ref{compare}(e). Finally, we multiply the edge map \emph{T} and the drawing result \emph{A} to obtain the final output \emph{R}, expressed as $R=A\cdot T$ and shown in Figure \ref{compare}(f). \subsection{Process Reconstruction} The stroke search method introduced above first draws strokes in different directions for each area and then integrates them into the final output, which may seem an odd drawing order. However, since all stroke parameters are determined and recorded during the search, we can rearrange the strokes' drawing sequence into a more realistic and meaningful order. We call this ``Process Reconstruction''.
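The aggregation and detail-enhancement steps, $A=\min(\beta_1,\ldots,\beta_n)$ and $R=A\cdot T$, can be sketched as follows (our own toy code; we treat both images as reflectances in $[0,1]$ for the multiplication):

```python
import numpy as np

def merge_and_enhance(betas, edge_map):
    """Merge the per-direction drawing results by taking the per-pixel
    minimum (darkest wins), then multiply by the edge map T for
    detail enhancement: A = min(beta_1, ..., beta_n), R = A * T."""
    A = np.minimum.reduce(betas)
    # Normalize to [0, 1] so multiplying two images darkens, never brightens.
    R = (A / 255.0) * (edge_map / 255.0)
    return A, np.clip(R * 255.0, 0, 255).astype(np.uint8)

b1 = np.full((2, 2), 200.0)
b2 = np.array([[100.0, 255.0], [255.0, 50.0]])
T = np.full((2, 2), 255.0)  # a blank edge map leaves A unchanged
A, R = merge_and_enhance([b1, b2], T)
print(A)  # [[100. 200.] [200.  50.]]
```

With a real edge map, dark values of $T$ along contours darken $R$ exactly where the extended strokes had washed the details out.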
When artists draw sketches, they usually draw the outlines first (often long or dark lines) and then the details (often short and lighter lines). To imitate this, we use an index \emph{S} to measure whether each stroke is more likely to be an outline or a detail: \begin{equation} S=(255-G)\times\sum\nolimits_{i \in D}T_{i}, \end{equation} where $G$ is the stroke's central pixel gray value mean, $D$ is the set of pixels the stroke covers, and $T_i$ is the absolute value of the gradient at pixel $i$. The larger $S$ is, the more likely the stroke is an outline. We therefore reconstruct the drawing process by sorting the strokes by $S$ in descending order. Three examples are shown in Figure \ref{process}: when only about 20\% of the total strokes have been drawn, the drawn objects are already almost recognizable. The whole drawing process (video demo) and more results can be found in the supplementary material. \subsection{User Control} \subsubsection{Fineness} The fineness of our pencil drawings is controlled by the stroke width and the quantization orders. Our stroke search is essentially a sampling procedure in which the stroke width \emph{W} is the sampling interval: the wider the strokes, the lower the sampling frequency and the rougher the pencil drawing. Figure \ref{compare}(g) is the result for $W=9$, while Figure \ref{compare}(f) uses $W=5$. The influence of the quantization order is as in the general case: the larger the quantization order, the less obvious the block effect. We usually fix \emph{W} at 5 pixels to obtain finer details. We fix the quantization order of the direction at $10$, because too few directions harm texture fluency while more directions do not improve the visual experience. The quantization order of the gray level can be chosen from 8--16 depending on the input.
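The ordering by $S$ can be sketched as follows (a toy sketch; the stroke record format is our own assumption):

```python
import numpy as np

def reconstruction_order(strokes, grad_mag):
    """Order strokes from outline-like to detail-like using
    S = (255 - G) * sum of |gradient| over the stroke's pixels,
    so darker, edge-hugging strokes are drawn first."""
    def score(stroke):
        G, pixels = stroke["G"], stroke["pixels"]
        return (255 - G) * sum(grad_mag[y, x] for y, x in pixels)
    return sorted(strokes, key=score, reverse=True)

grad = np.zeros((4, 4))
grad[1, :] = 10.0  # a strong horizontal edge along row 1
strokes = [
    {"G": 200, "pixels": [(3, 0), (3, 1)]},          # faint detail stroke
    {"G": 40, "pixels": [(1, 0), (1, 1), (1, 2)]},   # dark outline stroke
]
ordered = reconstruction_order(strokes, grad)
print(ordered[0]["G"])  # 40
```

Replaying strokes in this order yields the animated drawing process.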
Inputs with more low-frequency components require more quantization orders. A more detailed explanation of the hyperparameters' influence on the drawing result can be found in the supplementary material. \subsubsection{Color space conversion} All of the above operations are performed on the gray image; coloring only requires a color space conversion. We convert the original image to the YUV color space, replace the Y channel with our gray output, and then convert back to the RGB color space to obtain a colored pencil drawing. This coloring method is the same as in \cite{lu2012combining}. The second column from the right in Figure \ref{process} shows our RGB result. \begin{figure} \includegraphics[width=\linewidth]{compare.pdf} \caption{Comparison with several existing algorithms} \label{C} \end{figure} \section{Experimental Results} In this section, we compare our results with several representative methods to demonstrate the effectiveness of our algorithm. The pictures being compared are all taken from the original papers. Due to the page limit, we compare with only three methods in this article; more comparisons, and comparisons with more methods (especially neural methods), can be found in the supplementary material. First, we compare our proposed method with a classic LIC-based method. In the first row of Figure \ref{C}, (a) is the input, (b) is the result of \cite{mao2002automatic}, and (c) is our result. Since an LIC image is obtained by low-pass filtering a white-noise input image, (b) introduces too much noise into the original image and looks dirty. The texture direction of (b) is very dull, with only three directions (30\degree, 90\degree, 135\degree), and the changes in texture direction have nothing to do with semantic information. In the zoomed-in area (red border), the sea's texture direction in (b) is 135\degree while (c)'s is 0\degree, which is more in line with the texture of sea ripples.
In the zoomed-in area (blue border), the texture direction of (b) is rigidly fixed at 30\degree, while (c)'s texture looks very fluent: (c) conveys the feeling of clouds floating in the wind, and its overall visual effect is much cleaner and clearer than (b). The comparison shows that our method performs better in terms of aesthetic perception and is closer to artists' technique. Since we use the method of \cite{lu2012combining} to extract edges and enhance details, we compare with \cite{lu2012combining} in the second row of Figure \ref{C}. The method of \cite{lu2012combining} can depict the input's contours and edges well, but its texture looks too smooth, more like a little noise added to the gray image. Besides, it uses histogram matching to fix the pencil drawing's tone, making many areas appear too pale, such as the girls' arms and the baby's head in (e), with only contours but no texture. In our result (f), the texture direction of the girls' arms and facial skin is highly consistent with the actual structure of the human muscles, which is aesthetic and vivid. The background in our result (f) has a strong style, while that of (e) lacks texture and is very close to a gray image. We now compare with \cite{li2019im2pencil}, the state of the art among neural methods. As shown in the third row of Figure \ref{C}, (g) is the input image and (h) is the result of \cite{li2019im2pencil}. Li et al.'s method can produce different kinds of style combinations; for example, $L_1$+$S_2$ means using the first type of outline and the second type of shading. We therefore chose one of their style combinations that achieves relatively good effects for the comparison. Observing (h), it is not difficult to find apparent artifacts along the edges and borders, such as the girl's shoulder in the zoomed-in area (blue border). These artifacts deteriorate the fineness of their pencil drawing.
(i) is our result: our method preserves objects' contours clearly and portrays tiny details. In terms of texture, the zoomed-in area (red border) shows that (h)'s texture cannot establish a direct connection with semantics (the folds of the clothes): it can only reflect the shade of the input, not the input's structure. Our method displays the shade and the folds of the cloth well at the same time. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Group} & \multirow{2}{*}{Evaluation} & \multicolumn{4}{c|}{Score (Average)} \\ \cline{3-6} & & {[}1{]} & {[}2{]} & {[}3{]} & {[}4{]} \\ \hline \multirow{5}{*}{1st} & Stroke/Texture & 66.4 & 52.8 & 73.1 & 88.6 \\ \cline{2-6} & Tone/Contrast & 78.1 & 72.6 & 84.5 & 87.7 \\ \cline{2-6} & Stereoscopy & 54.3 & 68.9 & 80.4 & 84.2 \\ \cline{2-6} & Authenticity & 60.7 & 65.3 & 83.1 & 95.3 \\ \cline{2-6} & Overall & 68.5 & 74.1 & 83.2 & 92.7 \\ \hline 2nd & Overall & 72.0 & 80.4 & 84.2 & 87.3 \\ \hline \end{tabular} \caption{People in the 1st group were asked to rate the results in terms of Stroke/Texture, Tone/Contrast, Stereoscopy, Authenticity, and Overall Perception, while the 2nd group only needed to rate the last term. [1], [2], [3], [4] represent the methods of \cite{mao2002automatic}, \cite{lu2012combining}, \cite{li2019im2pencil}, and ours, respectively. Our method has the highest score in the opinions of both groups.} \end{table} \section{User study} We investigated the preferences of two groups of people (100 in each group) regarding different pencil drawing algorithms. People in the first group (including ten recognized artists in our city and ninety professors/lecturers/graduates in painting-related majors from several universities) have received professional training in painting, while those in the second group have not. We gave ten sets of pictures to every participant.
Every set of pictures included an input and the corresponding results of four pencil drawing algorithms. Participants did not know which method produced each drawing, and the ordering of the four methods was shuffled in every set. For the first group of participants, we provided a series of subjective evaluation indicators and asked them to rate each indicator for each result; for the second group, we only asked them to rate every result based on their overall perception. Scores were restricted to $0\sim 100$, and participants were asked to give distinct scores. The feedback results are shown in Table 1. After completing the above survey, we used A, B, C, and D to anonymously represent the four algorithms and let each participant choose their favorite algorithm, counting the number of people who voted for each one. Then we showed our results' drawing process, informed participants that the other three methods cannot generate a drawing process, and let them choose their favorite algorithm again. The survey results are shown in Table 2: most participants chose our method as their favorite, and after watching our drawing process, even more voted for it. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Group} & \multirow{2}{*}{See Drawing Process} & \multicolumn{4}{c|}{User Preference} \\ \cline{3-6} & & {[}1{]} & {[}2{]} & {[}3{]} & {[}4{]} \\ \hline \multirow{2}{*}{1st} & Before & 3 & 14 & 25 & 58 \\ \cline{2-6} & After & 2 & 9 & 12 & 77 \\ \hline \multirow{2}{*}{2nd} & Before & 2 & 19 & 27 & 52 \\ \cline{2-6} & After & 0 & 8 & 11 & 81 \\ \hline \end{tabular} \caption{User Preference indicates how many people voted for each algorithm.
[1], [2], [3], [4] represent the methods of \cite{mao2002automatic}, \cite{lu2012combining}, \cite{li2019im2pencil}, and ours, respectively.} \end{table} \section{Conclusions} In this work, we statistically analyze and explain the texture of real pencil drawings and propose a controllable pencil stroke generation mechanism. On this basis, we implement an image-to-pencil automatic drawing algorithm: we use the edge tangent flow vector field to guide the direction of the strokes, the gray image to determine the location, length, and shade of the strokes, and the edge map for detail enhancement. Our method is a procedural mathematical algorithm with good interpretability. Comparisons with other pencil drawing algorithms show that our method performs better in terms of texture quality, style, and user evaluation. Our most prominent advantage is that we can reproduce the drawing process. \section{Acknowledgments} This work was supported by the National Natural Science Foundation of China (U20B2072, 61976137, U1611461).
\section{Introduction} Little is known about the complemented subspace structure of Lorentz sequence spaces $d(\textbf{w},p)$. Until recently, the only nontrivial complemented subspace discussed in the literature was $\ell_p$ (\cite{ACL73}). Then, in \cite{Wa20} it was shown that for certain weights $\textbf{w}$ (see Theorem \ref{NUC-sum} below), the space $d(\textbf{w},p)$ contains a 1-complemented subspace isomorphic to $(\bigoplus_{n=1}^\infty\ell_\infty^n)_p$. Up to now, these were the only nontrivial complemented subspaces known to exist. In this short note we show that each Lorentz sequence space admits a 1-complemented subspace $Y$ distinct from $\ell_p$ (\S2). We also give an explicit representation of $Y$ for the case $\textbf{w}=(n^{-\theta})_{n=1}^\infty$ ($0<\theta<1$), as the $\ell_p$-sum of finite-dimensional copies of $d(\textbf{w},p)$ (\S3). Finally, as an application we find a sixth distinct element in the lattice of closed ideals in the operator algebra $\mathcal{L}(d(\textbf{w},p))$, where only five were previously known in the general case (\S4). Let us now set up the main notation. We begin by fixing $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$. Denote by $\Pi$ the set of all permutations of $\mathbb{N}$, and denote by $\mathbb{W}$ the set of all sequences $\textbf{w}=(w_n)_{n=1}^\infty\in c_0\setminus\ell_1$ satisfying $$ 1=w_1\geq w_2\geq w_3\geq\cdots>0. $$ Fix $1\leq p<\infty$ and $\textbf{w}\in\mathbb{W}$. For each $(a_n)_{n=1}^\infty\in\mathbb{K}^\mathbb{N}$ we set $$ \left\|(a_n)_{n=1}^\infty\right\|_{d(\textbf{w},p)}:=\sup_{\pi\in\Pi}\left(\sum_{n=1}^\infty|a_{\pi(n)}|^pw_n\right)^{1/p}, $$ and let $d(\textbf{w},p)$ denote the linear space of all $(a_n)_{n=1}^\infty\in\mathbb{K}^\mathbb{N}$ with $\|(a_n)_{n=1}^\infty\|_{d(\textbf{w},p)}<\infty$ endowed with the norm $\|\cdot\|_{d(\textbf{w},p)}$, called a {\bf Lorentz sequence space}.
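For finitely supported sequences, this norm is easy to evaluate numerically: by the rearrangement inequality, the supremum over permutations is attained by pairing $|a_n|$ in decreasing order with the (already decreasing) weights $w_n$. A small illustrative sketch, ours rather than part of the paper:

```python
import numpy as np

def lorentz_norm(a, w, p):
    """Compute ||a||_{d(w,p)} for a finitely supported sequence a: the
    supremum over permutations is attained by sorting |a_n| in
    decreasing order against the decreasing weights w_n."""
    a_hat = np.sort(np.abs(np.asarray(a, dtype=float)))[::-1]
    w = np.asarray(w, dtype=float)[: len(a_hat)]
    return float(np.sum(a_hat ** p * w) ** (1.0 / p))

# w_n = n^{-1/2} and p = 1: ||(1, 2)|| = 2 * 1 + 1 * 2^{-1/2}
w = [n ** -0.5 for n in range(1, 5)]
print(lorentz_norm([1.0, 2.0], w, 1))  # ≈ 2.7071
```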
Recall that if $(a_n)_{n=1}^\infty\in c_0$ then there exists a ``decreasing rearrangement'' $(\hat{a}_n)_{n=1}^\infty$ of $(|a_n|)_{n=1}^\infty$. In this case, the rearrangement inequality gives us $$ \left\|(a_n)_{n=1}^\infty\right\|_{d(\textbf{w},p)}=\left(\sum_{n=1}^\infty\hat{a}_n^pw_n\right)^{1/p}\;\;\;\text{ for all }(a_n)_{n=1}^\infty\in c_0. $$ Since $d(\textbf{w},p)\subset c_0$ as linear spaces (although not as normed spaces), this gives an alternative formulation of the Lorentz sequence space norm. For each $i,k\in\mathbb{N}$ we define $$ W_k:=\sum_{n=1}^kw_n\;\;\text{ and }\;\;w_i^{(k)}:=\frac{1}{W_k}\sum_{n=(i-1)k+1}^{ik}w_n $$ and $\textbf{w}^{(k)}:=(w_i^{(k)})_{i=1}^\infty$. It is readily apparent that $\textbf{w}^{(k)}\in\mathbb{W}$. When $p$ is clear from context, we also set $$ d_i^{(k)}:=\frac{1}{W_k^{1/p}}\sum_{n=(i-1)k+1}^{ik}d_n. $$ It is routine to verify that $(d_i^{(k)})_{i=1}^\infty$ is a normalized basic sequence isometrically equivalent to the $d(\textbf{w}^{(k)},p)$ basis. If necessary, we may sometimes abuse this notation; for instance, if $(j_k)_{k=1}^\infty$ is a sequence in $\mathbb{N}$, then we could write $((d_i^{(j_k)})_{i=1}^k)_{k=1}^\infty$ for appropriately-translated successive normalized constant-coefficient blocks of lengths $j_k$. Our main tool for finding complemented subspaces of $d(\textbf{w},p)$ is the fact that every constant-coefficient block basic sequence of a symmetric basis spans a 1-complemented subspace (cf.\ e.g.\ \cite[Proposition 3.a.4]{LT77}). We will use this well-known fact freely and without further reference. \section{Lorentz sequence spaces contain at least two nontrivial complemented subspaces} The first discovery of a nontrivial complemented subspace in $d(\textbf{w},p)$ came almost half a century ago, with the following result.
\begin{theorem}[{\cite[Lemma 1]{ACL73}}]\label{ACL-subsequence-lemma} Fix $1\leq p<\infty$ and $\textbf{\emph{w}}\in\mathbb{W}$, and let $$ x_i=\sum_{n=p_i}^{p_{i+1}-1}a_nd_n,\;\;\;i\in\mathbb{N}, $$ form a seminormalized block basic sequence in $d(\textbf{\emph{w}},p)$. If $a_n\to 0$ then $(x_i)_{i=1}^\infty$ admits a subsequence equivalent to the canonical $\ell_p$ basis and spanning a complemented subspace of $d(\textbf{\emph{w}},p)$. \end{theorem} \noindent By taking sufficiently long constant-coefficient blocks, it follows that $d(\textbf{w},p)$ contains a 1-complemented copy of $\ell_p$. The following was shown much later. \begin{theorem}[{\cite[Theorem 4.3]{Wa20}}]\label{NUC-sum} Let $1\leq p<\infty$ and $\textbf{\emph{w}}=(w_n)_{n=1}^\infty\in\mathbb{W}$. If $$ \inf_{k\in\mathbb{N}}\frac{\sum_{n=1}^{2k}w_n}{\sum_{n=1}^kw_n}=1 $$ then $d(\textbf{\emph{w}},p)$ admits a 1-complemented subspace spanned by constant-coefficient blocks and isomorphic to $(\bigoplus_{n=1}^\infty\ell_\infty^n)_p$. \end{theorem} \noindent Our main result in this section, which owes much to ideas of William B.\ Johnson, generalizes this to all Lorentz sequence spaces, as follows. \begin{theorem}\label{nontrivial-complemented} Let $1\leq p<\infty$ and $\textbf{\emph{w}}\in\mathbb{W}$. Then there exists an increasing sequence $(N_k)_{k=1}^\infty\in\mathbb{N}^\mathbb{N}$ such that $((d_i^{(k)})_{i=1}^{N_k})_{k=1}^\infty$ spans a 1-complemented subspace $Y$ which contains no isomorph of $d(\textbf{\emph{w}},p)$ and which is not isomorphic to $\ell_p$. \end{theorem} To prove it, we need a few preliminaries. \begin{lemma}\label{subsymmetric-complement} If $1<p<\infty$ then every complemented subspace of $L_p[0,1]$ with a subsymmetric basis $(x_n)_{n=1}^\infty$ is isomorphic to either $\ell_p$ or $\ell_2$. \end{lemma} \begin{proof} The case $p=2$ is trivial since every infinite-dimensional complemented subspace of $L_2[0,1]$ is isomorphic to $\ell_2$.
For the case $p>2$ recall from \cite[Corollary 6]{KP62} that every seminormalized basic sequence in $L_p[0,1]$, $p\in(2,\infty)$, admits a subsequence equivalent to $\ell_p$ or $\ell_2$, and so since $(x_n)_{n=1}^\infty$ is also subsymmetric then it is in fact equivalent to $\ell_p$ or $\ell_2$. In case $1<p<2$, since $(x_n)_{n=1}^\infty$ is complemented in $L_p[0,1]$, its corresponding sequence of biorthogonal functionals $(x_n^*)_{n=1}^\infty$ is contained in $L_{p'}[0,1]$, where $\frac{1}{p}+\frac{1}{p'}=1$. Since $p'>2$, a subsequence of $(x_n^*)_{n=1}^\infty$ is equivalent to $\ell_{p'}$ or $\ell_2$, whence by subsymmetry $(x_n)_{n=1}^\infty$ is equivalent to $\ell_p$ or $\ell_2$. \end{proof} \begin{lemma}\label{canonical-copy-complemented} Let $X$ be a Banach space whose canonical isometric copy in $X^{**}$ is complemented. Then for any free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, the canonical copy of $X$ in $X^\mathcal{U}$ is complemented in $X^\mathcal{U}$. \end{lemma} \begin{proof} Let $q:X\to X^{**}$ denote the canonical embedding, and define the norm-1 linear operator $V:\ell_\infty(X)\to X^{**}$ by the rule $$ V(x_n)_{n=1}^\infty=\underset{\mathcal{U}}{\text{weak*-lim}}\,qx_n, $$ which exists by the weak*-compactness of $B_{X^{**}}$ together with the fact that if $K$ is a compact Hausdorff space then for each $(k_n)_{n=1}^\infty\in K^\mathbb{N}$ the (unique) limit $\lim_\mathcal{U}k_n$ exists in $K$. Note that if $\lim_\mathcal{U}x_n=0$ then $V(x_n)_{n=1}^\infty=0$ and so $V$ induces an operator $\widehat{V}:X^\mathcal{U}\to X^{**}$ which agrees with $V$ along the diagonal. In particular, $\widehat{V}$ sends the canonical copy of $X$ in $X^\mathcal{U}$ isomorphically to the canonical copy of $X$ in $X^{**}$. \end{proof} \begin{theorem}\label{complemented-in-Lp} Fix $1\leq p<\infty$, and let $(x_n)_{n=1}^\infty$ be a basis for a Banach space $X$ whose canonical copy in $X^{**}$ is complemented. 
If the finite-dimensional spaces $[x_n]_{n=1}^N$, $N\in\mathbb{N}$, are uniformly complemented in $L_p(\mu)$ for some measure $\mu$, then $X$ is complemented in $L_p[0,1]$. \end{theorem} \begin{proof} Let $X_N=[x_n]_{n=1}^N$ and let $P_N:X\to X_N$ denote the basis projection onto $X_N$. By the uniform complementedness of the $X_N$, we can find uniformly bounded linear operators $A_N:X_N\to L_p(\mu)$ and $B_N:L_p(\mu)\to X_N$ such that $B_NA_N$ is the identity on $X_N$. Let $\mathcal{U}$ be any free ultrafilter on $\mathbb{N}$. Define the bounded linear operators $A:X\to L_p(\mu)^\mathcal{U}$ by the rule $Ax=(A_NP_Nx)_\mathcal{U}$, and $B:L_p(\mu)^\mathcal{U}\to X^\mathcal{U}$ by $B(y_N)_\mathcal{U}=(B_Ny_N)_\mathcal{U}$. Let $x\in\text{span}(x_n)_{n=1}^\infty$ so that, for some $k\in\mathbb{N}$, $$ BAx=(P_1x,\cdots,P_kx,x,x,\cdots)_\mathcal{U}=x^\mathcal{U}. $$ By continuity, $BA$ is the canonical injection of $X$ into $X^\mathcal{U}$. Since this is complemented by Lemma \ref{canonical-copy-complemented}, we have the identity on $X$ factoring through $L_p(\mu)^\mathcal{U}$. It was proved in \cite[Theorem 3.3]{He80} that ultrapowers preserve $L_p$ lattice structure, and in particular $L_p(\mu)^\mathcal{U}$ is isomorphic to $L_p(\nu)$ for some measure $\nu$. Although $L_p(\nu)$ itself is nonseparable, we can pass to the closed sublattice generated by $AX$ to find a space isomorphic to a separable $L_p$ space containing a complemented copy of $X$. Due mostly to a famous result of Lacey and Wojtaszczyk, it is known that separable infinite-dimensional $L_p$ spaces are isomorphic to either $\ell_p$ or $L_p[0,1]$ (\cite[\S4, p15]{JL01}). This means an isomorph of $X$ is complemented in $L_p[0,1]$. \end{proof} An immediate corollary to Lemma \ref{subsymmetric-complement} and Theorem \ref{complemented-in-Lp} is as follows. \begin{corollary}\label{no-uniformly-complemented-copies} Let $1<p<\infty$ and $\textbf{\emph{w}}\in\mathbb{W}$.
Then no $L_p(\mu)$ space contains uniformly complemented copies of $[d_n]_{n=1}^N$, $N\in\mathbb{N}$. \end{corollary} We are now ready to prove the main result of this section. \begin{proof}[Proof of Theorem \ref{nontrivial-complemented}] Fix $k\in\mathbb{N}$, and note that $(d_i^{(k)})_{i=1}^\infty$ is isometrically equivalent to the $d(\textbf{w}^{(k)},p)$ basis. Consider first the case $p=1$. Then we can choose the $N_k$'s large enough that each $(d_i^{(k)})_{i=1}^{N_k}$ fails to be $k$-equivalent to the $\ell_1^{N_k}$ basis, and hence $((d_i^{(k)})_{i=1}^{N_k})_{k=1}^\infty$ fails to be equivalent to the canonical $\ell_1$ basis. As $\ell_1$ has a unique unconditional basis by a result of Lindenstrauss and Pe\l{}czy\'{n}ski, it follows that $Y$ is not isomorphic to $\ell_1$. Next, consider the case $1<p<\infty$. By Corollary \ref{no-uniformly-complemented-copies} we can select the $N_k$'s large enough that $[d_i^{(k)}]_{i=1}^{N_k}$ fails to be $k$-complemented in $\ell_p$. As the spaces $[d_i^{(k)}]_{i=1}^{N_k}$ are all 1-complemented in $Y$, it follows that $Y$ is not isomorphic to $\ell_p$. It remains to show that $Y$ contains no isomorph of $d(\textbf{w},p)$. Suppose towards a contradiction that it does. As $(d_n)_{n=1}^\infty$ is weakly null (cf.\ e.g.\ \cite[Proposition 1]{ACL73}) we can use the gliding hump method together with symmetry to find a normalized block sequence of $((d_i^{(k)})_{i=1}^{N_k})_{k=1}^\infty$ equivalent to $(d_n)_{n=1}^\infty$. However, every such block sequence is also a block sequence w.r.t.\ $(d_n)_{n=1}^\infty$ with coefficients tending to zero. By Theorem \ref{ACL-subsequence-lemma} it follows that $(d_n)_{n=1}^\infty$ admits a subsequence equivalent to $\ell_p$, which is impossible.
\end{proof} \section{A special case} In this section we show that when $\textbf{w}=(n^{-\theta})_{n=1}^\infty$ for some fixed $0<\theta<1$, the space $Y$ described in Theorem \ref{nontrivial-complemented} can be chosen to be isomorphic to the space $$ Y_{\textbf{w},p}:=\left(\bigoplus_{N=1}^\infty D_N\right)_p, $$ where $D_N:=[d_n]_{n=1}^N$ for each $N\in\mathbb{N}$. As usual, we require some preliminaries. \begin{lemma}\label{integral-estimate} Let $0<\theta<1$ and $j,k\in\mathbb{N}$. Then $$ ((j+1)/k+1)^{1-\theta}-((j+1)/k)^{1-\theta} \leq\frac{\sum_{n=j+1}^{j+k}n^{-\theta}}{\sum_{n=1}^kn^{-\theta}} \leq\frac{(j/k+1)^{1-\theta}-(j/k)^{1-\theta}}{2^{1-\theta}-1}. $$ \end{lemma} \begin{proof} Observe that the map $$ f(t)=(1+1/t)^{1-\theta}-(1/t)^{1-\theta} $$ is increasing on $[1,\infty)$, and hence has a minimum $f(1)=2^{1-\theta}-1$. Hence, \begin{align*} ((j+1)/k+1)^{1-\theta}-((j+1)/k)^{1-\theta} &\leq\frac{(j+k+1)^{1-\theta}-(j+1)^{1-\theta}}{k^{1-\theta}-\theta} \\&=\frac{\int_{j+1}^{j+k+1}t^{-\theta}\;dt}{1+\int_1^kt^{-\theta}\;dt} \\&\leq\frac{\sum_{n=j+1}^{j+k}n^{-\theta}}{\sum_{n=1}^kn^{-\theta}} \\&\leq\frac{\int_j^{j+k}t^{-\theta}\;dt}{\int_1^{k+1}t^{-\theta}\;dt} \\&=\frac{(j+k)^{1-\theta}-j^{1-\theta}}{(k+1)^{1-\theta}-1} \\&=\frac{(j/k+1)^{1-\theta}-(j/k)^{1-\theta}}{(1+1/k)^{1-\theta}-(1/k)^{1-\theta}} \\&\leq\frac{(j/k+1)^{1-\theta}-(j/k)^{1-\theta}}{2^{1-\theta}-1}. \end{align*} \end{proof} \begin{lemma}\label{uniformly-equivalent-weights} Let $0<\theta<1$ and $\textbf{\emph{w}}=(w_n)_{n=1}^\infty=(n^{-\theta})_{n=1}^\infty\in\mathbb{W}$. Then $$ \frac{1-\theta}{2}\cdot w_i \leq w_i^{(k)} \leq\frac{2-2^\theta}{2^{1-\theta}-1}\cdot w_i \;\;\;\text{ for all }i,k\in\mathbb{N}. $$ In particular, if $1\leq p<\infty$ then there is a constant $C\in[1,\infty)$, depending only on $\theta$, such that $$ (d_n)_{n=1}^\infty\approx_C(d_i^{(k)})_{i=1}^\infty\;\;\;\text{ for all }k\in\mathbb{N}. $$ \end{lemma} \begin{proof} We can assume $i,k\geq 2$. 
Observe that $$ t\mapsto t-(t-1)^{1-\theta}\cdot t^\theta $$ is decreasing on $[2,\infty)$ and hence has the maximum $2-2^\theta$. Also, the function $$ t\mapsto t-(t-1/2)^{1-\theta}\cdot t^\theta $$ is decreasing on $[2,\infty)$ and hence has infimum $$ \lim_{t\to\infty}\left(t-(t-1/2)^{1-\theta}\cdot t^\theta\right)=\frac{1-\theta}{2}. $$ Thus, by the above together with Lemma \ref{integral-estimate}, \begin{align*} \frac{1-\theta}{2}\cdot i^{-\theta} &\leq\left(i-(i-1/2)^{1-\theta}\cdot i^\theta\right)i^{-\theta} \\&=i^{1-\theta}-(i-1/2)^{1-\theta} \\&\leq(i+1/k)^{1-\theta}-(i-1+1/k)^{1-\theta} \\&\leq\frac{\sum_{n=(i-1)k+1}^{ik}n^{-\theta}}{\sum_{n=1}^kn^{-\theta}} \\&\leq\frac{i^{1-\theta}-(i-1)^{1-\theta}}{2^{1-\theta}-1} \\&=\frac{i-(i-1)^{1-\theta}\cdot i^\theta}{2^{1-\theta}-1}\cdot i^{-\theta} \\&\leq\frac{2-2^\theta}{2^{1-\theta}-1}\cdot i^{-\theta}. \end{align*} \end{proof} \begin{remark}\label{disjoint-vectors} Suppose $x=\sum_{n\in A}a_nd_n$ and $y=\sum_{n\in B}b_nd_n$ for finite and disjoint sets $A,B\subset\mathbb{N}$, where $(a_n)_{n\in A}$ and $(b_n)_{n\in B}$ are sequences of scalars. Then $$ \|x+y\|^p\leq\|x\|^p+\|y\|^p. $$ \end{remark} \begin{lemma}\label{last-step} Let $(j_k)_{k=1}^\infty$ be a sequence of positive integers, and for each $k$ set $$J_k=j_1+2j_2+3j_3+\cdots+kj_k.$$ Suppose that there are constants $A,B\in(0,\infty)$ such that \begin{equation}\label{A-constant} w_i^{(j_k)}\leq Aw_i \end{equation} and \begin{equation}\label{B-constant} Bw_i\leq\frac{1}{W_{j_k}}\sum_{n=J_{k-1}+(i-1)j_k+1}^{J_{k-1}+ij_k}w_n \end{equation} for all $i=1,\cdots,k$ and all $k\in\mathbb{N}$. Then $((d_i^{(j_k)})_{i=1}^k)_{k=1}^\infty$ is equivalent to the canonical $Y_{\textbf{\emph w},p}$ basis. \end{lemma} \begin{proof} Due to \eqref{A-constant} we have $(d_i^{(j_k)})_{i=1}^k\lesssim_A d(\textbf{w},p)^k$.
Now, using Remark \ref{disjoint-vectors}, for any finitely-supported scalar sequence $((a_i^{(k)})_{i=1}^k)_{k=1}^\infty$, \begin{align*} \left\|\sum_{k=1}^\infty\sum_{i=1}^k a_i^{(k)}d_i^{(j_k)}\right\|^p &\leq\sum_{k=1}^\infty\left\|\sum_{i=1}^ka_i^{(k)}d_i^{(j_k)}\right\|^p \\&\leq A^p\sum_{k=1}^\infty\left\|(a_i^{(k)})_{i=1}^k\right\|_{d(\textbf{w},p)}^p \\&=A^p\left\|((a_i^{(k)})_{i=1}^k)_{k=1}^\infty\right\|_{Y_{\textbf{w},p}}^p. \end{align*} For the reverse inequality, let $(\hat{a}_i^{(k)})_{i=1}^k$ denote the decreasing rearrangement of $(|a_i^{(k)}|)_{i=1}^k$. Then, applying \eqref{B-constant}, \begin{align*} \left\|\sum_{k=1}^\infty\sum_{i=1}^k a_i^{(k)}d_i^{(j_k)}\right\|^p &=\left\|\sum_{k=1}^\infty\sum_{i=1}^k\frac{a_i^{(k)}}{W_{j_k}^{1/p}}\sum_{n=J_{k-1}+(i-1)j_k+1}^{J_{k-1}+ij_k}d_n\right\|^p \\&\geq\sum_{k=1}^\infty\sum_{i=1}^k\frac{\hat{a}_i^{(k)p}}{W_{j_k}}\sum_{n=J_{k-1}+(i-1)j_k+1}^{J_{k-1}+ij_k}w_n \\&\geq B\sum_{k=1}^\infty\sum_{i=1}^k\hat{a}_i^{(k)p}w_i \\&=B\left\|((a_i^{(k)})_{i=1}^k)_{k=1}^\infty\right\|_{Y_{\textbf{w},p}}^p. \end{align*} \end{proof} \begin{theorem}\label{main-equivalent} Let $(j_k)_{k=1}^\infty$ and $(J_k)_{k=1}^\infty$ be as in Lemma \ref{last-step}. Suppose there is $M\in[1,\infty)$ such that $$ \frac{J_{k-1}}{j_k}\leq M,\;\;\;\text{ for all }k=2,3,4,\cdots. $$ Then $((d_i^{(j_k)})_{i=1}^k)_{k=1}^\infty$ is equivalent to the canonical $Y_{\textbf{\emph w},p}$ basis. \end{theorem} \begin{proof} Due to Lemma \ref{last-step}, it suffices to show that \eqref{B-constant} and \eqref{A-constant} both hold. To do this, fix an arbitrary $k\in\mathbb{N}$. We may assume without loss of generality that $j_k\geq 2$.
Now, by Lemma \ref{integral-estimate}, \begin{align*} \frac{1}{W_{j_k}}\sum_{n=J_{k-1}+(i-1)j_k+1}^{J_{k-1}+ij_k}w_n &\geq\left(\frac{J_{k-1}+(i-1)j_k+1}{j_k}+1\right)^{1-\theta}-\left(\frac{J_{k-1}+(i-1)j_k+1}{j_k}\right)^{1-\theta} \\&=\left(\frac{J_{k-1}}{j_k}+i+\frac{1}{j_k}\right)^{1-\theta}-\left(\frac{J_{k-1}}{j_k}+i-1+\frac{1}{j_k}\right)^{1-\theta} \\&\geq\left(\frac{J_{k-1}}{j_k}+i\right)^{1-\theta}-\left(\frac{J_{k-1}}{j_k}+i-1+\frac{1}{2}\right)^{1-\theta} \\&=i^\theta\left[\left(\frac{J_{k-1}}{j_k}+i\right)^{1-\theta}-\left(\frac{J_{k-1}}{j_k}+i-\frac{1}{2}\right)^{1-\theta}\right]w_i \end{align*} Applying the Mean Value Theorem to the function $x\mapsto(\phi+x)^{1-\theta}$, $\phi\in[1,\infty)$, we can find $x_\phi\in(-1/2,0)$ such that $$ \phi^{1-\theta}-(\phi-1/2)^{1-\theta}=\frac{(1-\theta)(\phi+x_\phi)^{-\theta}}{2} \geq\frac{(1-\theta)\phi^{-\theta}}{2}. $$ Hence, letting $\phi=J_{k-1}/j_k+i$, we have \begin{align*} i^\theta\left[\left(\frac{J_{k-1}}{j_k}+i\right)^{1-\theta}-\left(\frac{J_{k-1}}{j_k}+i-\frac{1}{2}\right)^{1-\theta}\right] &\geq i^\theta\left[\frac{(1-\theta)(J_{k-1}/j_k+i)^{-\theta}}{2}\right] \\&=\frac{1-\theta}{2}\left(\frac{i}{J_{k-1}/j_k+i}\right)^\theta \\&\geq\frac{1-\theta}{2}\left(\frac{1}{M+1}\right)^\theta \end{align*} This proves \eqref{B-constant}, and \eqref{A-constant} follows immediately from Lemma \ref{uniformly-equivalent-weights}. \end{proof} Taking inductively $j_1=1$ and $j_{k+1}=J_k$, the following is now immediate. \begin{corollary} Let $1\leq p<\infty$, $0<\theta<1$, and $\textbf{\emph{w}}=(w_n)_{n=1}^\infty=(n^{-\theta})_{n=1}^\infty\in\mathbb{W}$. Then $d(\textbf{\emph{w}},p)$ admits a 1-complemented subspace isomorphic to $Y_{\textbf{\emph w},p}$. 
\end{corollary} \section{Application to the lattice of closed ideals} In \cite{KPSTT12} it was shown (among other results) that the lattice of closed ideals for the operator algebra $\mathcal{L}(d(\textbf{w},p))$ can be put into a chain: $$ \{0\} \subsetneq\mathcal{K}(d(\textbf{w},p)) \subsetneq\mathcal{SS}(d(\textbf{w},p)) \subsetneq\mathcal{S}_{d(\textbf{w},p)}(d(\textbf{w},p)) \subsetneq\mathcal{L}(d(\textbf{w},p)). $$ Here, $\mathcal{K}$ denotes the compact operators, $\mathcal{SS}$ the strictly singular operators, and $\mathcal{S}_{d(\textbf{w},p)}$ the ideal of operators which fail to be bounded below on any isomorph of $d(\textbf{w},p)$. While in \cite[Corollary 2.7]{Wa20}, for the special case where $1<p<2$ and $\textbf{w}\in\mathbb{W}\cap\ell_{2/(2-p)}$, a chain of distinct closed ideals of continuum cardinality was identified lying between $\mathcal{K}(d(\textbf{w},p))$ and $\mathcal{SS}(d(\textbf{w},p))$, in the general case the only distinct elements known were those of the above chain. For an operator $T$, let $\mathcal{J}_T$ denote the class of operators factoring through $T$. If $Z$ is any Banach space, we then set $\mathcal{J}_Z=\mathcal{J}_{Id_Z}$. By Theorem \ref{new-ideal} below, we can extend the chain above as follows: \begin{multline*} \{0\} \subsetneq\mathcal{K}(d(\textbf{w},p)) \subsetneq\mathcal{SS}(d(\textbf{w},p)) \subsetneq(\overline{\mathcal{J}_{\ell_p}}\vee\mathcal{SS})(d(\textbf{w},p)) \\\subsetneq\mathcal{S}_{d(\textbf{w},p)}(d(\textbf{w},p)) \subsetneq\mathcal{L}(d(\textbf{w},p)).
\end{multline*} Furthermore, by \cite[Corollary 3.2 and Theorem 5.3]{KPSTT12} together with the fact that $d(\textbf{w},p)$ has the approximation property, any additional distinct closed ideals in $\mathcal{L}(d(\textbf{w},p))$ must lie between $\mathcal{K}(d(\textbf{w},p))$ and $\mathcal{SS}(d(\textbf{w},p))$, or else between $(\overline{\mathcal{J}_{\ell_p}}\vee\mathcal{SS})(d(\textbf{w},p))$ and $\mathcal{S}_{d(\textbf{w},p)}(d(\textbf{w},p))$. To prove Theorem \ref{new-ideal}, we need a couple of preliminary results. \begin{proposition}\label{proper-ideal} Let $X$ and $Z$ be infinite-dimensional Banach spaces such that $Z^2\approx Z$, and $X$ fails to be isomorphic to a complemented subspace of $Z$. Then $\overline{\mathcal{J}_Z}(X)$ is a proper ideal in $\mathcal{L}(X)$. Furthermore, if $P\in\mathcal{L}(X)$ is a projection with image isomorphic to $Z$, then $$\mathcal{J}_P(X)=\mathcal{J}_Z(X).$$ \end{proposition} \begin{proof} Since $Z^2\approx Z$, \cite[Lemma 2.2]{KPSTT12} guarantees that $\mathcal{J}_Z(X)$ is an ideal in $\mathcal{L}(X)$. Suppose towards a contradiction that $Id_X\in\mathcal{J}_Z(X)$. Then $Id_X=AB$ for operators $A\in\mathcal{L}(Z,X)$ and $B\in\mathcal{L}(X,Z)$. By \cite[Lemma 2.1]{KPSTT12}, $BX$ is complemented in $Z$ and isomorphic to $X$, which contradicts our hypotheses. It follows that $\mathcal{J}_Z(X)$ is a proper ideal in $\mathcal{L}(X)$. Recall that the closure of a proper ideal in a unital Banach algebra is again proper; in particular, $\overline{\mathcal{J}_Z}(X)$ is a proper ideal in $\mathcal{L}(X)$. To prove the ``furthermore'' part, assume $A\in\mathcal{L}(Z,X)$ and $B\in\mathcal{L}(X,Z)$. Let $Q:Z\to X$ be the canonical embedding so that $PQ=Id_Z$ and hence $AB=APQB\in\mathcal{J}_P(X)$. It follows that $\mathcal{J}_Z(X)\subseteq\mathcal{J}_P(X)$, and the reverse inclusion is even more obvious.
\end{proof} For the next result, $\mathcal{F}$ denotes the class of finite-rank operators and $\mathcal{E}$ the class of inessential operators. Recall also that a basis $\mathcal{B}$ is called {\it semi-spreading} whenever every subsequence of $\mathcal{B}$ is dominated by $\mathcal{B}$ itself. In particular, the unit vector basis of $\ell_p$ is semi-spreading. \begin{proposition}[{\cite[Corollary 3.8]{LLR04}}]\label{factorable-ops} Let $Z$ be a Banach space with a semi-spreading basis $(z_n)$, and let $X$ be a Banach space with basis $(x_n)$ such that any seminormalized block sequence of $(x_n)$ contains a subsequence equivalent to $(z_n)$ and spanning a complemented subspace of $X$. Then $$ \{0\}\subsetneq\overline{\mathcal{F}}(X)=\mathcal{K}(X)=\mathcal{SS}(X)=\mathcal{E}(X)\subsetneq\overline{\mathcal{J}_Z}(X), $$ and any additional distinct closed ideals must lie between $\overline{\mathcal{J}_Z}(X)$ and $\mathcal{L}(X)$. \end{proposition} In the proof of what follows, we use the fact that if $\mathcal{I}$ and $\mathcal{J}$ are ideals in $\mathcal{L}(X)$, then $\overline{\mathcal{I}}\vee\overline{\mathcal{J}}=\overline{\mathcal{I}+\mathcal{J}}$. \begin{theorem}\label{new-ideal} Fix $1\leq p<\infty$ and $\textbf{\emph{w}}\in\mathbb{W}$. Let $Y$ be as in Theorem \ref{nontrivial-complemented}, and $P_Y\in\mathcal{L}(d(\textbf{\emph{w}},p))$ any continuous linear projection onto $Y$. Then $$P_Y\in\mathcal{S}_{d(\textbf{\emph{w}},p)}(d(\textbf{\emph{w}},p))\setminus(\overline{\mathcal{J}_{\ell_p}}\vee\mathcal{SS})(d(\textbf{\emph{w}},p)).$$ \end{theorem} \begin{proof} Let $P_{\ell_p}\in\mathcal{L}(d(\textbf{w},p))$ be any projection onto an isomorphic copy of $\ell_p$ spanned by basis vectors of $Y$. (Such a copy exists by Theorem \ref{ACL-subsequence-lemma}.) By Theorem \ref{nontrivial-complemented}, $Y$ contains no isomorph of $d(\textbf{w},p)$ and hence $P_Y\in\mathcal{S}_{d(\textbf{w},p)}(d(\textbf{w},p))$.
Since $\mathcal{S}_{d(\textbf{w},p)}(d(\textbf{w},p))$ is the unique maximal ideal in $\mathcal{L}(d(\textbf{w},p))$, and $\mathcal{J}_{P_{\ell_p}}(d(\textbf{w},p))=\mathcal{J}_{\ell_p}(d(\textbf{w},p))$ by Proposition \ref{proper-ideal}, it is sufficient to prove that $P_Y\notin(\overline{\mathcal{J}_{P_{\ell_p}}}\vee\mathcal{SS})(d(\textbf{w},p)).$ Next, we claim that $P_Y\in(\overline{\mathcal{J}_{P_{\ell_p}}}\vee\mathcal{SS})(d(\textbf{w},p))$ only if $Id_Y\in(\overline{\mathcal{J}_{\ell_p}}\vee\mathcal{SS})(Y)$. To prove it, fix $\epsilon>0$, and suppose there are $A,B\in\mathcal{L}(d(\textbf{w},p))$ and $S\in\mathcal{SS}(d(\textbf{w},p))$ such that $$ \left\|AP_{\ell_p}B+S-P_Y\right\|<\epsilon. $$ Let $J_Y:Y\to d(\textbf{w},p)$ be an embedding satisfying $P_YJ_Y=J_Y$, or $P_YJ_Y=Id_Y$ when viewed as an operator in $\mathcal{L}(Y)$. Composing with $P_Y$ on the left and $J_Y$ on the right, we have $$ \left\|P_YAP_{\ell_p}BJ_Y+P_YSJ_Y-Id_Y\right\|_{\mathcal{L}(Y)}<\|P_Y\|\cdot\epsilon\cdot\|J_Y\|. $$ On the other hand, since $AP_{\ell_p}=A|_YP_{\ell_p}$ and $P_{\ell_p}=P_{\ell_p}P_Y$, we have $$ P_YAP_{\ell_p}BJ_Y =(P_YA|_Y)P_{\ell_p}(P_YBJ_Y) $$ and hence $$ \left\|(P_YA|_Y)P_{\ell_p}(P_YBJ_Y)+P_YSJ_Y-Id_Y\right\|_{\mathcal{L}(Y)}<\|P_Y\|\cdot\epsilon\cdot\|J_Y\|. $$ Since $\mathcal{J}_{\ell_p}(Y)=\mathcal{J}_{P_{\ell_p}}(Y)$ by Proposition \ref{proper-ideal}, where $P_{\ell_p}$ is likewise viewed as an operator in $\mathcal{L}(Y)$, from the above together with the ideal property of $\mathcal{SS}$, the claim follows. Let $\mathcal{B}_Y=((d_i^{(k)})_{i=1}^{N_k})_{k=1}^\infty$ denote the canonical basis of $Y$ from Theorem \ref{nontrivial-complemented}. Note that since $\mathcal{B}_Y$ is made up of constant-coefficient blocks of $(d_n)$ of increasing length, any seminormalized block sequence of $\mathcal{B}_Y$ contains a subsequence equivalent to the unit vector basis of $\ell_p$ by Theorem \ref{ACL-subsequence-lemma}.
In fact, in \cite[Lemma 15]{CL74} this result was refined to show that we can choose that subsequence to span a complemented subspace of $d(\textbf{w},p)$, and hence of $Y$ itself. We can therefore apply Proposition \ref{factorable-ops} to conclude that $\mathcal{SS}(Y)\subset\overline{\mathcal{J}_{\ell_p}}(Y)$. Meanwhile, again by Proposition \ref{proper-ideal}, $\overline{\mathcal{J}_{\ell_p}}(Y)$ is a proper ideal in $\mathcal{L}(Y)$, which means $Id_Y\notin\overline{\mathcal{J}_{\ell_p}}(Y)$. Since $\mathcal{SS}(Y)\subset\overline{\mathcal{J}_{\ell_p}}(Y)$, it follows that $Id_Y\notin(\overline{\mathcal{J}_{\ell_p}}\vee\mathcal{SS})(Y)$, and so, by the claim above, $P_Y\notin(\overline{\mathcal{J}_{P_{\ell_p}}}\vee\mathcal{SS})(d(\textbf{w},p))$ as desired. \end{proof}
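The norm underlying all of the above is concrete enough to compute by hand: $\|a\|_{d(\textbf{w},p)}=\big(\sum_n \hat{a}_n^{\,p} w_n\big)^{1/p}$, where $(\hat{a}_n)$ denotes the decreasing rearrangement of $(|a_n|)$. The following Python sketch of this quantity for finite sequences, with $w_n=n^{-\theta}$, is purely illustrative and not part of any proof:

```python
def lorentz_norm(a, theta, p):
    """d(w,p) norm with w_n = n**(-theta): the weighted l_p norm of the
    decreasing rearrangement of the absolute values of a."""
    a_hat = sorted((abs(x) for x in a), reverse=True)
    return sum(x ** p * (n + 1) ** (-theta) for n, x in enumerate(a_hat)) ** (1.0 / p)

# The norm is rearrangement-invariant: it depends only on the multiset {|a_n|}.
assert abs(lorentz_norm([3.0, -1.0, 2.0], theta=0.5, p=2)
           - lorentz_norm([2.0, 3.0, 1.0], theta=0.5, p=2)) < 1e-12
```

The sorting step is exactly the rearrangement $(\hat{a}_i^{(k)})$ used in the proof of Lemma \ref{last-step}.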
\section{Introduction} This document is a template for \LaTeXe. If you are reading a paper or PDF version of this document, please download the electronic file \texttt{ifacconf.tex}. You will also need the class file \texttt{ifacconf.cls}. Both files are available on the IFAC web site. Please stick to the format defined by the \texttt{ifacconf} class, and do not change the margins or the general layout of the paper. It is especially important that you do not put any running header/footer or page number in the submitted paper.\footnote{ This is the default for the provided class file.} Use \emph{italics} for emphasis; do not underline. Page limits may vary from conference to conference. Please observe the page limits of the event for which your paper is intended. \section{Procedure for Paper Submission} Next we see a few subsections. \subsection{Review Stage} For submission guidelines, follow instructions on paper submission system as well as the event website. Note that conferences impose strict page limits, so it will be better for you to prepare your initial submission in the camera ready layout so that you will have a good estimate for the paper length. Additionally, the effort required for final submission will be minimal. \subsection{Equations} Some words might be appropriate describing equation~(\ref{eq:sample}), if we had but time and space enough. \begin{equation} \label{eq:sample} {{\partial F}\over {\partial t}} = D{{\partial^2 F}\over {\partial x^2}}. \end{equation} See \cite{Abl:56}, \cite{AbTaRu:54}, \cite{Keo:58} and \cite{Pow:85}. \subsubsection{Example.} This equation goes far beyond the celebrated theorem ascribed to the great Pythagoras by his followers. \begin{thm} The square of the length of the hypotenuse of a right triangle equals the sum of the squares of the lengths of the other two sides. \end{thm} \begin{pf} The square of the length of the hypotenuse of a right triangle equals the sum of the squares of the lengths of the other two sides. 
\end{pf} Of course LaTeX manages equations through built-in macros. You may wish to use the \texttt{amsmath} package for enhanced math capabilities. \subsection{Figures} To insert figures, use the \texttt{graphicx} package. Although other graphics packages can also be used, \texttt{graphicx} is simpler to use. See Fig.~\ref{fig:bifurcation} for an example. \begin{figure} \begin{center} \includegraphics[width=8.4cm]{bifurcation} \caption{Bifurcation: Plot of local maxima of $x$ with damping $a$ decreasing} \label{fig:bifurcation} \end{center} \end{figure} Figures must be centered, and have a caption at the bottom. \subsection{Tables} Tables must be centered and have a caption above them, numbered with Arabic numerals. See table~\ref{tb:margins} for an example. \begin{table}[hb] \begin{center} \caption{Margin settings}\label{tb:margins} \begin{tabular}{cccc} Page & Top & Bottom & Left/Right \\\hline First & 3.5 & 2.5 & 1.5 \\ Rest & 2.5 & 2.5 & 1.5 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Final Stage} Authors are expected to mind the margins diligently. Papers need to be stamped with event data and paginated for inclusion in the proceedings. If your manuscript bleeds into margins, you will be required to resubmit and delay the proceedings preparation in the process. \subsubsection{Page margins.} See table~\ref{tb:margins} for the page margins specification. All dimensions are in \emph{centimeters}. \subsection{PDF Creation} All fonts must be embedded/subsetted in the PDF file. Use one of the following tools to produce a good quality PDF file: \subsubsection{PDFLaTeX} is a special version of LaTeX by Han The Thanh which produces PDF output directly using Type-1 fonts instead of the standard \texttt{dvi} file. It accepts figures in JPEG, PNG, and PDF formats, but not PostScript. Encapsulated PostScript figures can be converted to PDF with the \texttt{epstopdf} tool or with Adobe Acrobat Distiller.
\subsubsection{Generating PDF from PostScript} is the classical way of producing PDF files from LaTeX. The steps are: \begin{enumerate} \item Produce a \texttt{dvi} file by running \texttt{latex} twice. \item Produce a PostScript (\texttt{ps}) file with \texttt{dvips}. \item Produce a PDF file with \texttt{ps2pdf} or Adobe Acrobat Distiller. \end{enumerate} \subsection{Copyright Form} IFAC will put in place an electronic copyright transfer system in due course. Please \emph{do not} send copyright forms by mail or fax. More information on this will be made available on IFAC website. \section{Units} Use SI as primary units. Other units may be used as secondary units (in parentheses). This applies to papers in data storage. For example, write ``$15\,\mathrm{Gb}/\mathrm{cm}^2$ ($100\,\mathrm{Gb}/\mathrm{in}^2$)''. An exception is when English units are used as identifiers in trade, such as ``3.5 in disk drive''. Avoid combining SI and other units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity in an equation. The SI unit for magnetic field strength $\mathbf{H}$ is $\mathrm{A}/\mathrm{m}$. However, if you wish to use units of $\mathrm{T}$, either refer to magnetic flux density $\mathbf{B}$ or magnetic field strength symbolized as $\mu_0\,\mathbf{H}$. Use the center dot to separate compound units, e.g., ``$\mathrm{A} \cdot \mathrm{m}^2$''. \section{Helpful Hints} \subsection{Figures and Tables} Figure axis labels are often a source of confusion. Use words rather than symbols. As an example, write the quantity ``Magnetization'', or ``Magnetization M'', not just ``M''. Put units in parentheses. Do not label axes only with units. For example, write ``Magnetization ($\mathrm{A}/\mathrm{m}$)'' or ``Magnetization ($\mathrm{A} \mathrm{m}^{-1}$)'', not just ``$\mathrm{A}/\mathrm{m}$''. 
Do not label axes with a ratio of quantities and units. For example, write ``Temperature ($\mathrm{K}$)'', not ``$\mbox{Temperature}/\mathrm{K}$''. Multipliers can be especially confusing. Write ``Magnetization ($\mathrm{kA}/\mathrm{m}$)'' or ``Magnetization ($10^3 \mathrm{A}/\mathrm{m}$)''. Do not write ``Magnetization $(\mathrm{A}/\mathrm{m}) \times 1000$'' because the reader would not know whether the axis label means $16000\,\mathrm{A}/\mathrm{m}$ or $0.016\,\mathrm{A}/\mathrm{m}$. \subsection{References} Use Harvard style references (see the end of this document). With \LaTeX, you can process an external bibliography database using \texttt{bibtex},\footnote{In this case you will also need the \texttt{ifacconf.bst} file, which is part of the \texttt{ifacconf} package.} or insert it directly into the reference section. Footnotes should be avoided as far as possible. Please note that the references at the end of this document are in the preferred referencing style. Papers that have not been published should be cited as ``unpublished''. Capitalize only the first word in a paper title, except for proper nouns and element symbols. \subsection{Abbreviations and Acronyms} Define abbreviations and acronyms the first time they are used in the text, even after they have already been defined in the abstract. Abbreviations such as IFAC, SI, ac, and dc do not have to be defined. Abbreviations that incorporate periods should not have spaces: write ``C.N.R.S.'', not ``C. N. R. S.'' Do not use abbreviations in the title unless they are unavoidable (for example, ``IFAC'' in the title of this article). \subsection{Equations} Number equations consecutively with equation numbers in parentheses flush with the right margin, as in (\ref{eq:sample}). To make your equations more compact, you may use the solidus ($/$), the $\exp$ function, or appropriate exponents. Use parentheses to avoid ambiguities in denominators.
Punctuate equations when they are part of a sentence, as in \begin{equation} \label{eq:sample2} \begin{array}{ll} \int_0^{r_2} & F (r, \varphi ) dr d\varphi = [\sigma r_2 / (2 \mu_0 )] \\ & \cdot \int_0^{\infty} \exp(-\lambda |z_j - z_i |) \lambda^{-1} J_1 (\lambda r_2 ) J_0 (\lambda r_i ) d\lambda \end{array} \end{equation} Be sure that the symbols in your equation have been defined before the equation appears or immediately following. Italicize symbols ($T$ might refer to temperature, but T is the unit tesla). Refer to ``(\ref{eq:sample})'', not ``Eq. (\ref{eq:sample})'' or ``equation (\ref{eq:sample})'', except at the beginning of a sentence: ``Equation (\ref{eq:sample}) is \ldots''. \subsection{Other Recommendations} Use one space after periods and colons. Hyphenate complex modifiers: ``zero-field-cooled magnetization''. Avoid dangling participles, such as, ``Using (1), the potential was calculated'' (it is not clear who or what used (1)). Write instead: ``The potential was calculated by using (1)'', or ``Using (1), we calculated the potential''. A parenthetical statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) Avoid contractions; for example, write ``do not'' instead of ``don't''. The serial comma is preferred: ``A, B, and C'' instead of ``A, B and C''. \section{Conclusion} A conclusion section is not required. Although a conclusion may review the main points of the paper, do not replicate the abstract as the conclusion. A conclusion might elaborate on the importance of the work or suggest applications and extensions. \begin{ack} Place acknowledgments here. \end{ack} \section{Introduction} \vspace{-0.4cm} Control design for complex dynamical systems relies heavily on accurate system models. A way to obtain such models is through first-principles modeling.
While this method provides generic models with clear physical interpretation, it requires a considerable amount of time and user expertise. Another way to model the dynamical behaviour of a system is through data-driven system identification. Within this field there are numerous methods that require the user to make critical decisions (e.g. precisely selecting the model structure within prediction error methods (PEM)). In contrast, machine learning strategies for system identification can automatically select or define model structures and features. Non-parametric machine learning methods such as Gaussian Process based Bayesian estimators \cite{pillonetto_RKHS:2014}, Support Vector Machines (SVM) \cite{ming_guang_study_SVM:2004} and Artificial Neural Networks (ANN) \cite{Goodfellow_et_al:2016}, \cite{billings_nonlinear:2013} describe large model spaces that can represent complex dynamical MIMO structures. However, the models obtained via these methods often lack interpretability and fail to generalize to unseen data or to regions of the operating space not covered by the data. On the other hand, parametric machine learning methods, also known as symbolic regression, such as Tree Adjoining Grammar Guided Genetic Programming (TAG3P) \cite{Dhruv_thesis:2020} and Equation Discovery (EQ) \cite{Ferariu_patelli:2009}, perform automated structure selection and yield time-domain solutions that directly represent the temporal modes of the system. In the doctoral thesis \cite{Dhruv_thesis:2020}, the author proposes a convenient way of defining the model set search space through a novel Tree Adjoining Grammar modelling framework and turns the critical decision of selecting the right model structure into an automated evolutionary procedure based on Genetic Programming. Moreover, this thesis shows how the proposed method can discover physical relations directly from data (Duffing oscillator).
This latest development of the modeling framework focused on the single-input single-output (SISO) polynomial NARMAX model set but also included a considerable number of variations (e.g. the ability to embed $\mathrm{sin(\cdot)}$, $\mathrm{cos(\cdot)}$ or $\mathrm{abs(\cdot)}$ nonlinear operators, and a TAG representation of Box-Jenkins models).\\ The current paper focuses on a novel grammar that extends the TAG modelling framework to multi-input multi-output (MIMO) polynomial NARMAX models. It is common for dynamic systems to have output channels with coupled dynamics. Our main contribution is defining a framework where the multi-output candidate models are represented by a single compact syntactic tree. In this way, the dynamic modes are created, evolved and parametrized with respect to all output signals at once, thus accounting for probable output dynamic coupling. Moreover, as our second contribution, a Matlab identification toolbox, publicly available at \texttt{github.com/stefan-nechita/TAG\_Toolbox}, is provided. Using the toolbox, the user can easily select the structure search space in terms of NARMAX (sub) model set(s), as well as custom-made nonlinear building blocks. We have validated the modeling framework and Matlab implementation on two SISO and one MIMO nonlinear benchmark models. \\ The paper is structured as follows. Section \ref{section:Preliminaries} details the novel TAG modeling framework. Section \ref{section:Dual optimisation problem} describes the optimisation approach that drives the automated GP structure search procedure introduced in Section \ref{section:Genetic_programming_algorithm}. Section \ref{section:Results} shows the identification results for several benchmark models. In Section \ref{section:Conclusions} we draw conclusions on our results and present several future research directions.
\vspace{-0.2cm} \section{Model Structure via TAG} \label{section:Preliminaries} \vspace{-0.35cm} The symbolic regression identification problem consists of determining an appropriate dynamic structure and corresponding parameters of a data generating system. The solution space is described as $\mathcal{S} = \mathcal{W} \times \mathcal{P}$, where $\mathcal{W}$ is the structure space and $\mathcal{P} \subseteq \mathbb{R}^n $ is the parameter space, with $n$ arbitrarily large, but finite. Hence, a dual optimization problem naturally arises. For the proposed identification approach, TAG is used to describe the structure space $\mathcal{W}$. This section briefly presents the TAG modeling framework, followed by a novel grammar proposal for MIMO polynomial NARMAX models. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Grammar} \caption{Elementary trees $I \cup A$ of the extended $G_\mathrm{NARMAX}$} \label{fig:Grammar_MIMO_e-NARMAX} \end{figure} \begin{table}[t] \begin{center} \captionsetup{width=9cm} \caption{ Sub-model sets included in $G_\mathrm{NARMAX}$} \begin{tabular}{ccc} Sub-model & Grammar & Elementary trees \\ \hline Input Poly. & $G_\mathrm{IP}$ & $\beta_1,\beta_4,\alpha_1$ \\ LTI & $G_\mathrm{LTI}$ & $\beta_1,\beta_2,\beta_7,\alpha_1$ \\ poly-NARX & $G_\mathrm{NARX}$ & $\beta_1,\beta_2,\beta_4,\beta_5,\beta_7,\alpha_1$ \\ ext-NARX & $G_\mathrm{extNARX}$ & $\beta_1,\beta_2,\beta_4,\beta_5,\beta_7,\beta_8,\alpha_{1,2,3,4}$\\ exp-NARX & $G_\mathrm{expNARX}$ & $\beta_1,\beta_2,\beta_4,\beta_5,\beta_7,\beta_8,\alpha_{1,5,6}$\\ \hline \label{table:sub_model_Grammars} \vspace{-0.2cm} \end{tabular} \end{center} \end{table} For a complete definition see \cite{kallmeyer:2009} and \cite{Dhruv_thesis:2020}.
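Before the formal description, the central idea developed next — a derived tree $\gamma$ encodes a model structure, and an interpreter function $\mathcal{E}(\gamma)$ turns it into an executable model — can be previewed with a toy sketch. This is a hypothetical minimal illustration in Python (trees as nested tuples), not the toolbox's actual TAG machinery:

```python
def interpret(gamma):
    """Toy interpreter E(gamma): a derived tree, stored as nested tuples
    ('+', l, r) / ('*', l, r) / ('u', delay) / ('y', delay) / ('c', value),
    is translated into a callable one-step-ahead map over past data."""
    kind = gamma[0]
    if kind == 'c':                       # constant coefficient
        return lambda u, y: gamma[1]
    if kind == 'u':                       # delayed input u(k - d)
        return lambda u, y: u[-gamma[1]]
    if kind == 'y':                       # delayed output y(k - d)
        return lambda u, y: y[-gamma[1]]
    left, right = interpret(gamma[1]), interpret(gamma[2])
    if kind == '+':
        return lambda u, y: left(u, y) + right(u, y)
    if kind == '*':
        return lambda u, y: left(u, y) * right(u, y)
    raise ValueError(kind)

# gamma encodes the toy NARX structure y(k) = 0.5*y(k-1) + u(k-1)**2
gamma = ('+', ('*', ('c', 0.5), ('y', 1)), ('*', ('u', 1), ('u', 1)))
model = interpret(gamma)
assert model([0.0, 2.0], [1.0]) == 0.5 * 1.0 + 2.0 ** 2
```

The grammar's role, formalized below, is to constrain which such trees may be generated in the first place, so that every tree in the language is a valid model structure.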
In short, a candidate model described by a TAG can be seen as an oriented graph encoded by its derived tree $\gamma$, which has a root node $v_\mathrm{r}$, edges to its intermediate nodes $v_\mathrm{int}$ and leaves $v_\mathrm{l}$, all arranged in a purely (one-to-many) top-to-bottom fashion. The derived tree $\gamma$ is constructed based on its derivation tree $\Gamma_\gamma$. The latter is formed by oriented (ordered) connections of elementary trees ($\beta$ and $\alpha$). The elementary trees are the ``building blocks'' of any TAG tree structure. In the case of system identification, they correspond to elementary algebraic operations on signals, time operators such as the time shift (e.g. $\beta_7$), and elementary nonlinear functions such as $\beta_8$ with $\alpha_1 \hdots \alpha_6$. Together with the label sets, the elementary trees form a TAG $G$. The structure of the elementary trees defines the rules that a certain grammar imposes on the shape of the derived trees $\gamma$ (i.e. it defines which model structures can be generated from the elementary operations). Each such derived tree $\gamma$ represents a function $\mathcal{F}_\gamma$ via an interpreter function $\mathcal{E}(\gamma)$ that translates the tree structure into the mathematical function $\mathcal{F}_\gamma$. In our context, the design of the elementary trees defines the TAG language $\mathfrak{L}(G)$ (all the trees $\gamma$ that can be generated) and thus directly defines the model set, where $\mathcal{F}_\gamma = \mathcal{E}(\gamma)$ represents a model structure. Therefore, elementary trees can be designed such that a TAG represents, via its language, an entire model set. TAGs are highly important as they encode only valid model representations and can substantially increase the efficiency of GP-based system identification. \vspace{-0.35cm} \subsection{TAG p-NARMAX modeling framework} \vspace{-0.3cm} Within this paper we focus on the discrete-time MIMO polynomial NARMAX model set.
Such a noise structure often provides enough flexibility to represent many dynamic systems in practice. Further, we consider systems of the form: \begin{equation} \begin{array}{ll} Y(k) =& \mathcal{F}(\lbrace u_i(k-j) \rbrace_{j=1}^{n_u},\lbrace y_i(k-m) \rbrace_{m=1}^{n_y}, \\ & \lbrace \xi_i(k-l)\rbrace_{l=1}^{n_\xi}), i \in \mathrm{r}_{\lbrace u,y,\xi \rbrace} \end{array} \label{eq:def:MIMO_p_NARMAX_general_form} \end{equation} where $U(k)$, $Y(k)$ and $\Xi(k)$ are the input, output and process noise signals respectively, with dimension $\mathrm{r}_{\lbrace u,y,\xi \rbrace} \times 1$, $\mathrm{r}_{\lbrace u,y,\xi \rbrace} \in \mathbb{N}$, and $n_u$, $n_y$ and $n_\xi$ are finite discrete time-delays with $n_u , n_\xi \in \mathbb{N}\cup \lbrace 0 \rbrace$, $n_y \in \mathbb{N}$, and $k \in \lbrace 1,\hdots, \mathrm{N} \rbrace$ with $\mathrm{N}$ the finite number of time samples. If (\ref{eq:def:MIMO_p_NARMAX_general_form}) is restricted to polynomial relations, a suitable way to represent it within the TAG modeling framework is as follows: \vspace{-0.1cm} \begin{equation} \begin{array}{l} \hspace{-0.1cm} Y(k)= \sum\limits^{p}_{i=1}C_i \prod\limits^{q_u}_{j=0} \prod\limits^{b_{i,j}}_{s_u=1} {L_{U,i,j}U(k-j)} \times \\ \hspace{-0.1cm} \prod\limits^{q_y}_{m=1} \prod\limits^{a_{i,m}}_{s_y=1} {L_{Y,i,m}Y(k-m)} \prod\limits^{q_\xi}_{l=1} \prod\limits^{d_{i,l}}_{s_\xi=1} {L_{\Xi,i,l}\Xi(k-l)} +\Xi(k) \\ \end{array} \label{eq:def:MIMO_p_NARMAX_TAG_suitable_form} \end{equation} \vspace{-0.cm} where $L_{\lbrace U,Y,\Xi \rbrace}$ is a so-called \textit{linking array} defined as: \begin{equation} \begin{array}{l} L_{X} \in \mathbb{R}^{1 \times \mathrm{r}}, \mathrm{r} = \mathrm{dim}(X), L_X = \begin{bmatrix} \mathrm{l}_{i} \end{bmatrix}_{i=1}^{\mathrm{r}}, \mathrm{l}_{i} \in \lbrace 0, 1 \rbrace\\ L_{X} \neq 0_{1 \times \mathrm{r}} \end{array} \label{eq:def:linking_array} \vspace{-0.1cm} \end{equation} and $p \in \mathbb{N}$.
The operation $\prod\limits^{g_i}_{s=1} {L_{X,i,s}X(k-i)}$ is defined as a right-hand-side matrix multiplication with $\prod\limits^{0}_{s=1} {L_{X,i,s}X(k-i)} = 1$, where $X(k-i)$ is the value of signal $X$ at time moment $k-i$, $s$ is a selector operator counter, $L_{X,i,s}$ is a random linking array generated by (\ref{eq:def:linking_array}) and $g_i$ is the number of right-hand-side multiplications of $X(k-i)$ with itself (i.e. the right-hand-side matrix power $X(k-i)^{g_i}$). The form (\ref{eq:def:MIMO_p_NARMAX_TAG_suitable_form}) can represent polynomial terms considering as variables all data channels and their time-shifted representatives $u_i(k-j)$, $y_i(k-m)$ and $\xi_i(k-l)$. Therefore, a given function $\mathcal{F}(\cdot)$ within the model set (\ref{eq:def:MIMO_p_NARMAX_TAG_suitable_form}) can be represented by a derived tree $\gamma$. \begin{prop}{TAG for MIMO p-NARMAX models\\} Let $G_\mathrm{NARMAX}$ = $\langle N, T, S, I, A \rangle$ be a TAG with \begin{itemize} \vspace{-0.1cm} \item $N = \lbrace expr_0, expr_1, expr_2, \mathrm{op}, \mathrm{par}\rbrace$, \item $T = \lbrace U, Y, \Xi, +, C, \times, q^{-1}, L_Y, L_U,L_\Xi \rbrace$ , where $L_Y$, $L_U$ and $L_\Xi$ are ``linking arrays'', $U$, $Y$, $\Xi$ are the input, output and process noise signals and $C$ is the parameter vector. \item $S = \lbrace expr_0 \rbrace$, \item $I = \lbrace \alpha_1 \rbrace$, \item $A = \lbrace \beta_1, \beta_2, \beta_3, \beta_4, \beta_5, \beta_6, \beta_7 \rbrace$, where the elementary trees $\beta_i$ and $\alpha_1$ are depicted in Figure \ref{fig:Grammar_MIMO_e-NARMAX}. \end{itemize} \vspace{-0.2cm} The model set $M(G_{\mathrm{NARMAX}})$ represents the set of all polynomial models defined by Equation (\ref{eq:def:MIMO_p_NARMAX_TAG_suitable_form}) with $p, n_y, n_\xi \in \mathbb{N}$ and $n_u \in \mathbb{N}\cup\lbrace 0 \rbrace$.
\label{proposal:MIMO_NARMAX_grammar} \end{prop} \vspace{-0.3cm} Proposition \ref{proposal:MIMO_NARMAX_grammar} represents our main contribution to the TAG-based modeling framework. As described in \cite{Dhruv_thesis:2020}, the TAG that represents the polynomial NARMAX model set can be enhanced or extended by considering $\mathrm{sin}(\cdot)$, $\mathrm{cos}(\cdot)$, $\mathrm{abs}(\cdot)$, $\mathrm{inv}(\cdot)$ and $\mathrm{exp}(\cdot)$ functions over the polynomial variables listed above. This modeling extension is enabled in the TAG modeling framework by considering the $\beta_8$ auxiliary tree and $\alpha_{2\hdots6}$ initial trees depicted in the lower part of Figure \ref{fig:Grammar_MIMO_e-NARMAX}. Similarly, other functions can be added. Moreover, sub-model sets included in $G_\mathrm{NARMAX}$ can be considered by selecting specific constituent elementary trees. Further extensions to the existing noise structure can be achieved directly, as discussed in \cite{Dhruv_thesis:2020}, by extending the elementary trees with further elements over the noise structure. A list of useful model sets is shown in Table \ref{table:sub_model_Grammars}. The user can choose the constituent elementary trees (i.e. the model set search space) in the toolbox file \texttt{TAG\_MandatoryDefinition.m}. New elementary trees can be designed following the patterns in \texttt{CreateAuxTree.m} and \texttt{CreateInitTree.m}. \vspace{-0.2cm} \section{Identification Problem} \vspace{-0.3cm} \label{section:Dual optimisation problem} Given a flexible model structure, we would like to obtain an estimate of the underlying data generating system by finding a structure of adequate complexity that achieves a desired level of approximation. This minimization can be formally defined as a dual optimization problem.
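Schematically, the dual problem pairs an outer search over candidate structures with an inner parameter fit per structure. The following Python sketch is a deliberately simplified illustration (an exhaustive outer loop with hypothetical closed-form inner fits), whereas the actual procedure uses Genetic Programming for the outer search:

```python
def dual_search(candidates, fit, error):
    """Outer problem: enumerate candidate structures w; for each, solve the
    inner sub-problem (parameter estimation) and score the fitted model.
    Returns the (structure, theta) pair with the smallest post-fit error."""
    best = None
    for w in candidates:
        theta = fit(w)           # inner: theta_hat = argmin_theta J_sub
        e = error(w, theta)      # outer: J(w, theta_hat)
        if best is None or e < best[0]:
            best = (e, w, theta)
    return best[1], best[2]

# Tiny demo on data from y = 2*u + 1, with two hypothetical structures.
data_u = [0.0, 1.0, 2.0, 3.0]
data_y = [1.0, 3.0, 5.0, 7.0]

def fit(w):
    if w == 'affine':   # closed-form least squares for y = theta*u + 1
        return sum(u * (y - 1.0) for u, y in zip(data_u, data_y)) / sum(u * u for u in data_u)
    return 0.0          # 'constant': y = theta (fit omitted for brevity)

def error(w, theta):
    model = (lambda u: theta * u + 1.0) if w == 'affine' else (lambda u: theta)
    return max(abs(model(u) - y) for u, y in zip(data_u, data_y))

w_best, theta_best = dual_search(['constant', 'affine'], fit, error)
assert w_best == 'affine' and abs(theta_best - 2.0) < 1e-12
```

The separation mirrors the formal problem below: the outer objective is evaluated only at the inner minimizer $\hat{\theta}$ of each candidate structure.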
Consider a TAG $G_{\mathrm{Model}}$, its equivalent model set $\mathcal{W}_{\mathrm{Model}}$, and a data generating system $\mathcal{F}_{\gamma_0}(\theta_0)$ described by a tree $\gamma_0 \in \mathfrak{L}(G_{\mathrm{Model}})$ with the true parameters $\theta_0$ that yield the true output sequence $Y_0(w_{\gamma_0} \vert \theta_0,D_\mathrm{N}) = Y_0(k)$, where $D_{\mathrm{N}} = \lbrace U(k), Y_{0}(k) \rbrace_{k=1}^\mathrm{N}$ is a data set of length $\mathrm{N}$ with $U(k)$ the input sequence and $Y_{0}(k)$ the stochastic response. Let $\mathcal{F}_{\hat\gamma}(\hat{\theta})=\mathcal{E}(\hat{\gamma})$ be a candidate model represented by the tree $\hat{\gamma}$ and its assigned set of parameters $\hat{\theta}$. For the data set $D_{\mathrm{N}}$, the model $\mathcal{F}_{\hat{\gamma}}(\hat{\theta})$ yields the one-step-ahead prediction response $\hat{Y}_\mathrm{p}(w_{\hat{\gamma}} \vert \hat{\theta},D_\mathrm{N}) = \hat{Y}_\mathrm{p}(k)$ and the simulation response $\hat{Y}_\mathrm{s}(w_{\hat{\gamma}} \vert \hat{\theta},D_\mathrm{s,N}) = \hat{Y}_\mathrm{s}(k)$, where $D_\mathrm{s,N} = \lbrace U(k), \hat{Y}_{\mathrm{s}}(k) \rbrace_{k=1}^\mathrm{N}$. The two responses generate an error point $E=(E_{\mathrm{s}},E_{\mathrm{p}})\in \mathbb{R}^2$, where $E_{\mathrm{s}}$ is the root mean square simulation error ($\mathrm{RMS}_\mathrm{s}$) produced by $\hat{Y}_{\mathrm{s}}(k)$ and $E_{\mathrm{p}}$ is the root mean square prediction error ($\mathrm{RMS}_\mathrm{p}$) produced by $\hat{Y}_{\mathrm{p}}(k)$. The main aim of the identification strategy is to minimize the error point $E$.
Therefore the identification procedure searches for the solution of the following dual optimization problem: \begin{equation} \begin{array}{lll} \underset{w_{\gamma}}{\text{min}} & J(w_{\gamma},\hat{\theta})= & \mathrm{min} \left( E \left(w_\gamma, \hat{\theta} \right) \right) \\ \text{s.t.} & & \\ \hat{\theta} = \underset{\theta}{\text{min}} & J_{\mathrm{sub}}(\theta)= & \omega_{\mathrm{s}}E_{\mathrm{s}} ( \theta )+ \omega_{\mathrm{p}}E_{\mathrm{p}} ( \theta ) \end{array} \label{eq:minimisation_problem} \end{equation} \begin{equation} \begin{array}{l} \hspace{-0.25cm} E_{\mathrm{s}}(\theta) = \frac{1}{\mathrm{r}_y}\sum\limits_{\mathrm{i}=1}^{\mathrm{r}_y}\sqrt{\frac{1}{\mathrm{N}}\mathrm{e}^{\top}_{\mathrm{i,s}} \mathrm{e_{i,s}}}, \quad \hspace{-0.2cm} E_{\mathrm{p}}(\theta)= \frac{1}{\mathrm{r}_y}\sum\limits_{\mathrm{i}=1}^{\mathrm{r}_y} \sqrt{\frac{1}{\mathrm{N}} \mathrm{e}^{\top}_{\mathrm{i,p}} \mathrm{e_{i,p}}} \end{array} \label{eq:Error_RMS_sim_pred} \end{equation} where \begin{equation} \begin{array}{lll} \mathrm{e_{{i,\lbrace s,p \rbrace}}} & = & [ y_{0,\mathrm{i}}(k) - \hat{y}_{\mathrm{i,\lbrace s,p \rbrace}}(w_\gamma,k \vert \hat{\theta},D_\mathrm{N}) ]_{k=1}^{\mathrm{N}}, \end{array} \end{equation} $\omega_{\mathrm{s}}$ is the simulation error weight and $\omega_{\mathrm{p}}$ is the prediction error weight. The weight values determine which parameter estimation procedure can be deployed to solve the sub-optimization problem; they are detailed further below. The $\mathrm{RMS}_\mathrm{p}$ is the error produced by a candidate model that has access to the past real system input and output data ($U(k)$ and $Y_0(k-n_y)$), while the $\mathrm{RMS}_\mathrm{s}$ is the error produced by a candidate model that uses the real input signal $U(k)$ and its own past simulated output values $\hat{Y}_\mathrm{s}(k-n_y)$.
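The channel-averaged RMS errors above can be sketched as follows (a Python illustration with lists of channels; the function name is ours):

```python
import math

def rms_error(Y0, Yhat):
    """Average over the r_y output channels of the per-channel RMS error:
    (1/r_y) * sum_i sqrt(e_i^T e_i / N), with e_i the error of channel i."""
    r_y = len(Y0)       # number of output channels
    N = len(Y0[0])      # number of samples per channel
    total = 0.0
    for y0, yh in zip(Y0, Yhat):
        e = [a - b for a, b in zip(y0, yh)]
        total += math.sqrt(sum(v * v for v in e) / N)
    return total / r_y
```

$E_\mathrm{s}$ and $E_\mathrm{p}$ are then two calls of this function, applied to the simulated and to the predicted responses respectively.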
Minimizing the $\mathrm{RMS}_\mathrm{p}$ forces the candidate model to approximate the one step ahead behaviour of the data generating system, while minimizing the $\mathrm{RMS}_\mathrm{s}$ forces it to approximate the dynamical, self-feeding modes of the data generating system. In short, the $\mathrm{RMS}_\mathrm{s}$ is the metric that measures how well the candidate model performs autonomously and offers a much stronger indication of how well the candidate model approximates the data generating system. \vspace{-0.3cm} \section{Estimation via Genetic programming} \vspace{-0.3cm} \label{section:Genetic_programming_algorithm} To solve the multi-objective dual optimization problem described above, we designed a Genetic Programming (GP) algorithm that evolves a population of tree structures through TAG designed crossover and mutation genetic operators, performs parameter estimation for each structure and sorts each generation based on the two fitness criteria $\mathrm{RMS}_\mathrm{s}$ and $\mathrm{RMS}_\mathrm{p}$ using the multi-objective non-dominated sorting algorithm. \vspace{-0.3cm} \subsection{Main Algorithm} \vspace{-0.35cm} The main steps of the GP algorithm are presented in Algorithm \ref{alg:TAG3P}. The GP is initialized by defining the genetic parameters: population size ($\mathrm{Pop}$), number of generations ($\mathrm{Gen}$), maximum number of auxiliary trees that can be used in each derivation tree ($\mathrm{Complexity}$) and crossover parameter ($\mu \in [0\%,100\%]$). Inside the iterative loop, the crossover, mutation, interpreter function, parameter estimation, evaluation and non-dominated sorting procedures are executed sequentially in order to propose, construct, evaluate and sort new dynamical structures. At the end, the solution is considered to be the first Pareto front of the last generation. Since within the Pareto solution the models do not dominate each other in terms of the two considered fitness criteria, any of them can be selected as a final candidate model that minimizes problem (\ref{eq:minimisation_problem}).
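The Pareto front selection at the heart of the sorting step can be sketched as follows (a minimal Python illustration of dominance and of extracting the first front; the full NSGA-II sorting additionally ranks the remaining fronts and computes crowding distances):

```python
def dominates(a, b):
    """a Pareto dominates b: no worse in every objective and strictly better
    in at least one; objectives here are (E_s, E_p), both minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """The candidates not dominated by any other candidate."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Illustrative (E_s, E_p) error points for four candidate structures.
pop = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.15), (0.25, 0.40)]
front = first_front(pop)   # the last point is dominated by all the others
```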
Algorithm \ref{alg:TAG3P} is implemented in the \texttt{TAG3P\_main.m} file. Next we will explain the main procedures in detail. \\ \begin{algorithm}[t] \caption{TAG GP main} \label{alg:TAG3P} \begin{algorithmic} \State Define $\mathrm{Pop}$ \Comment{Define Population Size} \State Define $\mathrm{Complexity}$ \Comment{Define maximum complexity} \State Define $\mathrm{Gen}$ \Comment {Define the maximum number of generations} \State $\mathrm{G(1)}$ $\gets$ RandomPopulation \Comment {Generate a random population of trees} \State $\mathrm{G(1)}$ $\gets$ Interpreter($\mathrm{G(1)}$) \Comment{Construct the candidate model} \State $\mathrm{G(1)}$ $\gets$ ParameterEstimation($\mathrm{G(1)}$) \State $\mathrm{G(1)}$ $\gets$ Evaluate($\mathrm{G(1)}$) \Comment{Compute $E_{\mathrm{s}}$ and $E_{\mathrm{p}}$ for $\mathrm{G(1)}$} \While{$i\leq$ $\mathrm{Gen}$ } \State $\mathrm{Q_1}$ $\gets$ CrossoverOffsprings($\mathrm{G(i)}$) \Comment{$\mathrm{Card}( \mathrm{Q_1}) = \mathrm{Pop}$ } \State $\mathrm{Q_2}$ $\gets$ MutationOffsprings($\mathrm{G(i)}$) \Comment{$\mathrm{Card}( \mathrm{Q_2}) = \mathrm{Pop}$ } \State $\mathrm{Q_{1,2}}$ $\gets$ Interpreter($\mathrm{Q_{1,2}}$) \Comment{see \texttt{CreateTreeFunction.m}} \State $\mathrm{Q_{1,2}}$ $\gets$ ParameterEstimation($\mathrm{Q_{1,2}}$) \State $\mathrm{Q_{1,2}}$ $\gets$ Evaluate( $\mathrm{Q_{1,2}}$) \Comment{Compute $E_{\mathrm{s}}$ and $E_{\mathrm{p}}$ for $\mathrm{Q_{1,2}}$} \State $\mathrm{R}$ $\gets$ $\mathrm{G(i)}$ $\cup$ $\mathrm{Q_{1}}$ $\cup$ $\mathrm{Q_{2}}$ \State $\mathrm{R}$ $\gets$ NSGA-II($\mathrm{R}$) \Comment{Sorting R into Pareto fronts} \State $\mathrm{G(i+1)}$ $\gets$ $\mathrm{R}(1:\mathrm{Pop})$ \Comment{Select the first $\mathrm{Pop}$ candidates from the \begin{flushright} first Pareto fronts of $\mathrm{R}$ \end{flushright}} \EndWhile \State \textbf{Save} $\mathrm{G}(\mathrm{Gen})$\Comment{collect the Pareto solution} \end{algorithmic} \end{algorithm} \vspace{-0.6cm} \subsection{Crossover and Mutation genetic
operators} \vspace{-0.3cm} In crossover, two parents (individuals of the population) have their genotypes combined in order to form new individuals called offspring. Through crossover, no new information is added to the population. By exchanging strings of genotype between individuals, over generations, the genes that yield smaller fitness values tend to become more frequent in the population. In this way, a local exploration of the search space is performed: via crossover, the population refines its search around a local minimum. As described in \cite{Hoai_TAG_language_GP:2003}, within TAG3P+ a sub-tree crossover is defined as follows. Two parent trees are selected: $\gamma_1$, drawn at random from the first $\mu$ structures of $\mathrm{G}(i)$, and $\gamma_2$, drawn at random from the entire $\mathrm{G}(i)$. A point in each of the two derivation trees is then chosen at random, subject to the constraint that each sub-tree can be adjoined to the other parent tree. For each parent, the derivation tree $\Gamma$ is split in two parts, $\Gamma_{\mathrm{STEM}}$ and $\Gamma_{\mathrm{TAIL}}$. The offspring are created by adjoining the stem of the first parent with the tail of the second and vice versa. The \texttt{TAGCrossover.m} file hosts the implementation of the crossover operator.\\ In mutation, an offspring is proposed by eliminating or adjoining elementary trees starting from a derivation tree $\Gamma \in \mathrm{G}(i)$. In our implementation, one offspring is created by mutation for each structure of $\mathrm{G}(i)$. By random addition or deletion of elementary trees to or from the parent derivation tree, the mutation operator is the procedure through which the evolution process performs a global exploration of the search space. The mutation genetic operator is implemented in \texttt{TAGMutation.m}. Both crossover and mutation functions are called inside the main loop in \texttt{TAG\_GP\_Step1.m}.
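The stem/tail recombination can be sketched as follows (a deliberately simplified Python illustration in which a derivation tree is flattened to a sequence of elementary-tree labels and every split point is assumed adjoinable; the real TAG crossover must check the adjunction constraint):

```python
import random

def subtree_crossover(parent1, parent2, rng=random):
    """Split each parent into a STEM and a TAIL at a random point and
    recombine the parts crosswise into two offspring."""
    cut1 = rng.randrange(1, len(parent1))
    cut2 = rng.randrange(1, len(parent2))
    child1 = parent1[:cut1] + parent2[cut2:]   # stem of 1 + tail of 2
    child2 = parent2[:cut2] + parent1[cut1:]   # stem of 2 + tail of 1
    return child1, child2

# Hypothetical derivation trees, written as elementary-tree label sequences.
g1 = ["alpha1", "beta1", "beta3", "beta8"]
g2 = ["alpha1", "beta2", "beta5"]
c1, c2 = subtree_crossover(g1, g2, random.Random(0))
```

Note that the two offspring together contain exactly the genetic material of the two parents, which is why crossover explores locally rather than injecting new information.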
\vspace{-0.35cm} \subsection{Parameter estimation procedures} \vspace{-0.3cm} Every model constructed through crossover, mutation or random generation requires optimization of its parameters to assess its accuracy in terms of (\ref{eq:minimisation_problem}). The parameter estimation can be performed with respect to both the simulation and the prediction error (non-zero $\omega_\mathrm{s}$ and $\omega_\mathrm{p}$ weights, e.g. the swarm-optimization approach CMA-ES of \cite{hansen_CMA:2001}; see also the \texttt{CMAES.m} file) or with respect to the prediction error only ($\omega_\mathrm{s}=0$ and $\omega_\mathrm{p}=1$, e.g. the least squares procedure; see the \texttt{ParEst\_LS.m} file). Considering both $\mathrm{RMS}_\mathrm{s}$ and $\mathrm{RMS}_\mathrm{p}$ in parameter estimation turns the sub-optimization problem into a non-convex optimization problem, making it difficult and time-consuming to solve. If only the prediction error is considered, any model defined by a function $\mathcal{F}_\gamma$ with $\gamma \in \mathfrak{L}(G_\mathrm{NARMAX})$ can be rewritten as (\ref{eq:LeastSquaresForm}) \begin{equation} \begin{array}{l} \Psi=\Phi\Theta + E_\Theta\\ \end{array} \label{eq:LeastSquaresForm} \end{equation} where, for $p$ polynomial terms as described in (\ref{eq:def:MIMO_p_NARMAX_TAG_suitable_form}), $\Psi \in \mathbb{R}^{\mathrm{N} \times n_y}$ is the model output data set, $\Phi \in \mathbb{R}^{\mathrm{N}\times p}$ collects the evolution of each polynomial term over $D_\mathrm{N}$ and $\Theta \in \mathbb{R}^{p\times n_y}$ is the parameter matrix. The set of parameters that minimizes the sub-optimization problem (\ref{eq:minimisation_problem}) is computed as in (\ref{eq:LeastSquaresParameters}). \begin{equation} \begin{array}{l} \hat{\Theta} = \left( \Phi^\top\Phi \right)^{-1}\Phi^\top \Psi.
\end{array} \label{eq:LeastSquaresParameters} \end{equation} Within this report, we have opted to use the least squares method for parameter estimation during the genetic evolution process because it is considerably faster than parameter estimation methods that consider both simulation and prediction error. In the toolbox, the parameter estimation procedure is called inside the main loop in \texttt{TAG\_GP\_Step2.m}. Moreover, the toolbox user has the option to choose between three parameter estimation procedures: least squares, the swarm optimization CMA-ES or an unconstrained iterative method (see \texttt{ParEst\_fminunc.m}). \vspace{-0.3cm} \subsection{Multi-objective non-dominated sorting} \vspace{-0.3cm} The evolution of dynamical structures as presented above can be guided by a multi-objective criterion. In the presented algorithm, we have considered only the simulation and prediction errors ($E_\mathrm{s}, E_\mathrm{p}$), but criteria such as the derivation tree complexity (see \cite{Dhruv_thesis:2020}) or the cardinality of the set of parameters can also be included. In the toolbox, the $E_\mathrm{s}$ and $E_\mathrm{p}$ values are computed in the \texttt{TAG\_GP\_Step3.m} file. In the multi-objective genetic programming literature, most evolutionary strategies base their selection on the Pareto optimality criterion. We further present the Pareto dominance definition \cite{emmerich_tutorial:2018}. \begin{defn}{\textit{Pareto dominance}}\\ Given two vectors in the objective space, $O^{(1)},O^{(2)} \in \mathbb{R}^{m}$, the point $O^{(1)}$ is said to \textit{Pareto dominate} the point $O^{(2)}$ ($O^{(1)} \prec_{\mathrm{Pareto}} O^{(2)}$) if and only if $\forall i \in \lbrace 1,\ldots,m \rbrace : O^{(1)}_i \leq O^{(2)}_i$ and $\exists j \in \lbrace 1,\ldots,m \rbrace: O^{(1)}_j < O^{(2)}_j$. In case $O^{(1)} \prec_{\mathrm{Pareto}} O^{(2)}$, the first vector is not worse in any of the objectives and better in at least one objective than the second vector.
\label{def:Pareto_dominance} \end{defn} \vspace{-0.2cm} Based on the Pareto dominance $\prec_{\mathrm{Pareto}}$, one can group a set of candidates into fronts. Each candidate has a dominance level, determined by the number of other candidates that Pareto dominate it. A Pareto front, $F_i$, can be seen as a contour on which all the candidates have the same dominance level. The order of dominance sorts the Pareto fronts between themselves. The Pareto optimal solution is the front with the best dominance level, also known as the set of non-dominated solutions. A way to construct the Pareto fronts for a given set of dynamical structures is the NSGA-II algorithm detailed in \cite{deb_fast:2002} (see \texttt{NSGAII.m}). The NSGA-II algorithm is called in the \texttt{TAG\_GP\_Step4} file. In every generation, for the sorting procedure, the new models constructed through crossover and mutation are benchmarked against a distinct data set $D_{\mathrm{N}}^{\mathrm{test}}$. \vspace{-0.3cm} \section{Results} \vspace{-0.3cm} \label{section:Results} We tested the TAG3P identification algorithm on two SISO benchmark models and one MIMO benchmark model. For each model we considered three distinct data set categories: $D_\mathrm{N}^\mathrm{est}$ for parameter estimation, $D_{\mathrm{N}}^\mathrm{test}$ for multi-objective sorting and $D_{\mathrm{N}}^\mathrm{val}$ for computing the validation $\mathrm{RMS}_\mathrm{s}$ and $\mathrm{RMS}_\mathrm{p}$ metrics described in Equation (\ref{eq:Error_RMS_sim_pred}). These metrics are used to compare the results obtained through the proposed method with the ones presented in the literature. For all benchmark systems, the comparison is shown in Table \ref{table:Results_and_comparison}. For each benchmark model, out of the Pareto solution, we have selected the candidate model that yields the lowest average simulation error over the $D_{\mathrm{N}}^\mathrm{test}$ data sets.
For the MIMO benchmark model described in \cite{StirTank_TOTH:2010}, the authors measured the performance of their identification method in terms of the Best Fit Rate ($\mathrm{BFR}$). Thus, for the MIMO case, alongside the $\mathrm{RMS_s}$ value we have also computed a $\mathrm{BFR}$ metric. \begin{table}[t] \begin{center} \captionsetup{width=9cm} \caption{$\mathrm{RMS_s}$ and $\mathrm{RMS_p}$ results of the TAG3P Matlab Toolbox over the benchmark models in comparison with other system identification strategies from the literature.} \label{table:Results_and_comparison} \vspace{-0.2cm} \begin{tabular}{lcc} \multicolumn{1}{c}{Bouc-Wen hysteresis model}& $\mathrm{RMS_s}$ & $\mathrm{RMS_p}$ \\ \hline \textbf{TAG3P -} $G_\mathrm{NARX}$ & $6.52\mathrm{e-}5$ & $7.37\mathrm{e-}6$\\ Full PLNSS \cite{BoucWen_ESFAHANI:2017} & $1.20\mathrm{e-}5$ & - \\ Decoupled PLNSS \cite{BoucWen_ESFAHANI:2017} & $1.40\mathrm{e-}5$ & - \\ LMN - NARX \cite{BoucWen_BELZ:2017} & - & $9.86\mathrm{e-}6$\\ LMN - NFIR \cite{BoucWen_BELZ:2017} & $1.63\mathrm{e-}4$ & - \\ \hline \multicolumn{1}{c}{Coupled electric drive} & $\mathrm{RMS_s}$ & $\mathrm{RMS_p}$ \\ \hline \textbf{TAG3P -} $G_\mathrm{extNARX}$ & $1.28\mathrm{e-}2$ & $3.27\mathrm{e-}3$\\ TAG3P \cite{Dhruv_thesis:2020} & $1.2\mathrm{e-}1$ & $3.73\mathrm{e-}3$\\ GA + DE \cite{CEDrive_Ayala:2014} & $1.8\mathrm{e-}1$ & $4.0\mathrm{e-}2$ \\ \hline \multicolumn{1}{c}{Continuous Stirred Tank Reactor} & $\mathrm{RMS_s}$ & $\mathrm{BFR}$ \\ \hline \textbf{TAG3P -} $G_\mathrm{expNARX}$ & $1.6749$ & $92.80\%$ \\ LPV-OBF \cite{StirTank_TOTH:2010} & - & $97.54\%$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \captionsetup{width=9cm} \caption{ TAG and genetic parameters used for the benchmark problems.} \vspace{-0.2cm} \begin{tabular}{ccccc} Benchmark model & TAG & $\mathrm{Pop}$&$\mathrm{Gen}$ &$\mathrm{Complexity}$ \\ \hline BoucWen Osc. & $G_\mathrm{NARX}$ & $36$ &$350$ &$150$ \\ Coupled El.
drive & $G_\mathrm{extNARX}$ & $50$ &$400$ &$150$ \\ Stirred Tank model & $G_\mathrm{expNARX}$ & $60$ &$350$ &$120$ \\ \hline \label{table:Benchmark_results_set_ip} \vspace{-0.2cm} \end{tabular} \end{center} \end{table} \vspace{-0.7cm} \subsection{SISO benchmark models} \vspace{-0.3cm} \subsubsection{Bouc-Wen model} The Bouc-Wen model has been used over the last decades to represent hysteretic effects in mechanical engineering. The current benchmark represents a Bouc-Wen model with synthetic input and output data. This system is challenging to identify for a series of reasons. One of them is that the system possesses a dynamic nonlinearity that is governed by a non-measurable internal variable. To identify this model we used the TAG and genetic programming parameters described in the first entry of Table \ref{table:Benchmark_results_set_ip}.\\ For the parameter estimation and testing data sets ($D_\mathrm{N}^\mathrm{est}$, $D_{\mathrm{N}}^\mathrm{test}$) we have generated $5$ data sets of $\mathrm{N}=4096$ samples each using the algorithm indicated in \cite{BoucWen:2020}. As validation data set $D_\mathrm{N}^\mathrm{val}$ we used the sine sweep data set provided by the authors. \vspace{-0.3cm} \subsubsection{Coupled electric drive model} The coupled electric drive system consists of two electric motors that drive a pulley using a flexible belt. The pulley is held by a spring, resulting in a lightly damped dynamic mode. The drive control for the pulley is designed only for tracking the speed reference signal. A pulse counter is used to measure the angular speed of the pulley; thus, the sign of the velocity is unknown. The available data sets are short ($\mathrm{N}=500$), which, together with the absolute value component of the velocity profile, makes this system interesting from an identification point of view.
Because of the known absolute value behaviour of the output signal, to identify this model we used the extended TAG and genetic programming parameters listed in the second entry of Table \ref{table:Benchmark_results_set_ip}.\\ As described in \cite{CEDrive:2017}, the estimation data set $D^{\mathrm{est}}_\mathrm{N}$ contained the data expressed by $\mathrm{u11}$ as input and $\mathrm{z11}$ as output. The testing $D_{\mathrm{N}}^\mathrm{test}$ and validation $D_{\mathrm{N}}^\mathrm{val}$ data sets contained the data expressed by $\mathrm{u12}$ as input and $\mathrm{z12}$ as output.\\ The identification results, for the same validation data set, are fairly similar to those of the TAG3P implementation (in Mathematica) described in \cite{Dhruv_thesis:2020}. Figure \ref{figure:results:CEdrive_model} shows the validation output signal, the simulated and predicted outputs of the candidate model, and the corresponding simulation and prediction error signals. \begin{figure} \centering \vspace{-0.3cm} \includegraphics[width=0.5\textwidth]{CEdrive_model} \caption{Coupled electric drive system validation output compared with the simulation and prediction responses of the candidate model, together with the simulation and prediction errors on $D_{\mathrm{N}}^\mathrm{val}$} \label{figure:results:CEdrive_model} \end{figure} \vspace{-0.35cm} \subsection{MIMO benchmark model} \vspace{-0.4cm} \subsubsection{Continuous Stirred Tank Reactor model} \begin{figure}[t] \centering \vspace{-0.2cm} \includegraphics[width=0.5\textwidth]{StirTank_Simulation_noise} \vspace{-0.6cm} \caption{CSTR validation output compared with the simulation responses of the candidate model on $D_{\mathrm{N}}^\mathrm{val}$} \label{figure:results:BoucWen_model} \end{figure} The main contribution of this paper is the extension of the TAG modeling framework to MIMO complex models. For this we tested the TAG3P MIMO identification procedure on an ideal, simulated, Continuous Stirred Tank Reactor (CSTR) that is fully described in \cite{StirTank_TOTH:2010}.
In short, the CSTR models the chemical conversion of an inflow substance into a product. The chemical conversion is described by a highly nonlinear dynamic relation between the input signals $U = [ \mathrm{Q}_1, \mathrm{T_c}, \mathrm{C_1}]^\top$ (input flow, coolant temperature and concentration of the inflow) and the output signals $Y = [\mathrm{T_2}, \mathrm{C_2}]^\top$ (temperature and concentration in the reactor). Since the benchmark model is fully known, $10$ estimation and testing data sets were generated. These have a length of $\mathrm{N}=1000$; the signals $\mathrm{Q}_1$ and $\mathrm{T_c}$ were taken as pseudo random binary signals (PRBS) switching between $\pm 10\%$ of their nominal values, and $\mathrm{C_1}$ as a slow variation, starting from nominal, toward $10$ equidistant operational points in the interval $50 \% - 150 \%$ of nominal. In this way we have excited the system components around their operating values. Over the synthetic output signals $\mathrm{T_2}$ and $\mathrm{C_2}$, uniformly distributed noise of amplitude $0.5$ and $2$, respectively, has been added. This addition mimics a sensor signal to noise ratio of $63.68$ for $\mathrm{T_2}$ and $45.56$ for $\mathrm{C_2}$. This experiment design did not consider potential costs of the materials if the input signals were to be applied to a real reactor. The data set $D_\mathrm{N}^\mathrm{val}$ ($\mathrm{N}=500$) was designed to test how well the candidate model describes the global behavior of the reactor and it is similar to the global validation data set presented in \cite{StirTank_TOTH:2010}. Because of the known inverse and exponential terms within the model equations, to identify this model we used the extended TAG and genetic programming parameters listed in the third entry of Table \ref{table:Benchmark_results_set_ip}.
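The excitation design described above can be sketched as follows (a Python illustration; nominal values, seed and the start of the slow variation are placeholders of ours, not the benchmark's actual values):

```python
import random

def prbs(n, nominal, dev=0.10, rng=random):
    """Pseudo random binary signal switching between nominal*(1 +/- dev)."""
    return [nominal * (1 + dev * rng.choice([-1, 1])) for _ in range(n)]

def staircase(n, nominal, levels=10, lo=0.50, hi=1.50):
    """Slow variation over `levels` equidistant set points between
    lo*nominal and hi*nominal, each held for n//levels samples."""
    points = [nominal * (lo + (hi - lo) * i / (levels - 1)) for i in range(levels)]
    hold = n // levels
    return [points[min(k // hold, levels - 1)] for k in range(n)]

def add_uniform_noise(y, amplitude, rng=random):
    """Additive, uniformly distributed output noise of the given amplitude."""
    return [v + rng.uniform(-amplitude, amplitude) for v in y]

rng = random.Random(1)
N = 1000
Q1 = prbs(N, nominal=100.0, rng=rng)   # placeholder nominal input flow
C1 = staircase(N, nominal=1.0)         # placeholder nominal concentration
```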
\vspace{-0.2cm} \section{Conclusion and Future Work} \vspace{-0.3cm} \label{section:Conclusions} As presented in Table \ref{table:Results_and_comparison}, the new Matlab implementation of the TAG3P identification strategy could identify the three benchmark models with various degrees of fidelity. The results for the Bouc-Wen oscillator and the Coupled Electric Drive show $\mathrm{RMS_s}$ and $\mathrm{RMS_p}$ values on par with other literature solutions, while for the Parallel Wiener-Hammerstein model the obtained $\mathrm{RMS_s}$ is smaller than that of the Best Linear Approximation but larger, by a factor of 3, than that of the specialized Parallel W-H solution proposed in \cite{PWH_SCHOUKENS:2015}. In the case of the MIMO CSTR system, the BFR metric shows that the proposed TAG modeling framework can obtain a valid model from data. In all cases, the TAG guided genetic programming can provide reliable candidates that represent complex SISO or MIMO nonlinear systems. Moreover, it shows a fine trade-off between the performance of the identified model and the number of critical decisions the user has to take. Finally, the paper introduced and made available the first version of the Matlab Toolbox for the TAG3P identification strategy.\\ As future work, in terms of the modeling framework and model space selection, the current TAG MIMO framework and Matlab implementation offer enough flexibility for proposing a genetic programming guided identification procedure for systems that can be described by polynomial nonlinear state space models. The aim of such a framework is to enable the genetic evolution to automatically select the dynamic structure and the number of states. \vspace{-0.3cm}
\section{Introduction} \label{sec:intro} In order to explain the observed baryon asymmetry of the universe \cite{Canetti2012}, a new source of charge-parity (CP) violation beyond the single complex phase in the Standard Model (SM) Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix is needed \cite{Sakharov1991a}. At the same time, new CP-violating physics with direct couplings to quarks or leptons is becoming increasingly tightly constrained by measurements that set stringent upper limits on the electric dipole moments (EDMs) of the neutron~\cite{Abel2020} and electron~\cite{Andreev2018a}. These improved limits have led particle theorists to consider models in which the additional CP violation is sequestered in a hidden or dark sector that does not couple directly to SM fermions \cite{CorderoCid2016, Azevedo2018, Carena2019, Okawa2019, Carena2020,Keus2020, CorderoCid2020}. However, it does remain possible to evade the EDM limits through cancellations among contributions to the EDMs while maintaining large CP-violating phases. We focus here on models with CP violation in an extended Higgs sector in which the couplings of the additional Higgs bosons to fermions are not suppressed. Such a cancellation for the electron EDM (eEDM) was demonstrated in a CP-violating two Higgs doublet model (2HDM) in Ref.~\cite{Kanemura2020}, in which CP-violating contributions from neutral Higgs boson couplings cancel between two different Barr-Zee diagrams. If one considers three Higgs doublet models (3HDMs) in the presence of CP violation in the charged Higgs sector, another type of cancellation takes place in both the eEDM and neutron EDM (nEDM). Quite similarly to the Glashow-Iliopoulos-Maiani (GIM) mechanism~\cite{Glashow1970} that suppresses flavor-changing neutral currents (FCNCs) induced by loop diagrams involving a sum over fermions, the cancellation in the 3HDM involves a sum over the two charged Higgs bosons of the model and becomes exact when these are degenerate in mass. 
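The flavor of this cancellation can be illustrated with a toy numerical example (ours, not the EDM formulas derived later in this paper): schematically, a fermion EDM from charged-Higgs loops is a sum $\sum_{i=2,3}{\rm Im}(c_i)\,F(m^2_{H_i^\pm})$ over the two charged Higgs bosons, and unitarity of the charged-Higgs mixing forces ${\rm Im}(c_3)=-{\rm Im}(c_2)$, so the two terms cancel as the masses become degenerate:

```python
import math

def loop_F(x):
    """Placeholder loop function; any smooth, mass-dependent weight works
    for this illustration (real EDM loop functions are more involved)."""
    return math.log(x) / (x - 1) if x != 1 else 1.0

def toy_edm(m2_2, m2_3, im_c=1.0, m2_ref=1.0):
    # Schematic two-boson sum with Im(c_3) = -Im(c_2), as forced by
    # unitarity of the charged-Higgs mixing matrix.
    return im_c * loop_F(m2_2 / m2_ref) - im_c * loop_F(m2_3 / m2_ref)

exact = toy_edm(4.0, 4.0)   # degenerate masses: the two terms cancel exactly
split = toy_edm(4.0, 4.8)   # ~10% mass splitting: contribution suppressed
```

The suppressed residual for a modest splitting is the quantitative origin of the ${\cal O}(10\%)$ degeneracy requirement discussed next.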
However, even away from this limit and depending on the other model parameters, chiefly the charged Higgs boson Yukawa couplings, mass splittings of ${\cal O}(10\%)$ are still sufficient to evade current bounds on the electron and neutron EDMs. Under these conditions, we show that significant parameter space regions remain in a 3HDM, including regions with one or both charged Higgs bosons being lighter than $m_t$ (the top quark mass), while simultaneously satisfying both theoretical and experimental constraints. {In this paper we focus on CP-violating effects within the charged Higgs sector of the 3HDM, in which natural flavour conservation (NFC)~\cite{Glashow1977,Paschos} ensures the absence of tree-level FCNCs via Higgs interactions. In models of this kind, the Higgs sector CP violation arises from four physical CP-violating phases in the scalar potential. These phases generically lead to CP violation in both the neutral and charged Higgs sectors. Previous studies of CP violation in extended Higgs sectors have primarily focused on CP violation in the neutral scalar sector; indeed, this is the only type of Higgs sector CP violation that is possible in 2HDMs with NFC. (Removing the requirement of NFC does, however, open the possibility of CP violation in the couplings of the single charged Higgs boson of the Aligned 2HDM~\cite{Jung2014}.) The cancellation mechanism that we study in this paper does not appear in the Aligned 2HDM because that model contains only a single physical charged Higgs boson. To highlight the novel cancellation mechanism for CP-odd observables like the nEDM and eEDM, in this work we will limit ourselves to the case in which CP violation appears only in the charged Higgs sector. We will show that it is possible to do this through a judicious choice of three of the four physical CP-violating phases in the scalar potential, leaving one physical phase free to control the CP violation in the charged Higgs sector.
We leave a full analysis including CP violation in the neutral scalar sector of the 3HDM to future work. } This paper is organized as follows. In Sec.~\ref{sec:model}, we describe the 3HDM and define our notation. In Sec.~\ref{sec:edms}, we describe the calculation of the electron and neutron EDMs. In Sec.~\ref{sec:hiding}, we explain the physics behind the aforementioned cancellation mechanism. In Sec.~\ref{sec:numerics} we present numerical results showing the allowed CP-violating parameter space given the current EDM constraints along with constraints from $\bar{B} \to X_s \gamma$ and searches for charged Higgs bosons at colliders. In Sec.~\ref{sec:conclusions}, we conclude. Details of our implementation of the $\bar{B} \to X_s \gamma$ and EDM constraints are given in two appendices. \section{Three Higgs doublet model} \label{sec:model} The model contains three scalar SU(2)$_L$ doublets, denoted $\Phi_1$, $\Phi_2$, $\Phi_3$, with \begin{eqnarray} \Phi_i = \doublet{\phi^+_i}{(v_i + \phi^{0,r}_i + i\phi^{0,i}_i)/\sqrt{2}}.\ \end{eqnarray} The vacuum expectation values (VEVs) $v_i$ of the three Higgs doublets can be chosen real through an independent rephasing of each doublet. They are constrained by the $W^\pm$ boson mass to satisfy $v = \sqrt{v^2_1 + v^2_2 + v^2_3} \approx 246$ GeV. In order to avoid FCNCs through Higgs interactions, we will impose NFC~\cite{Glashow1977,Paschos} by allowing each type of fermion to couple to only a single Higgs doublet. To this end, we require that the scalar potential be invariant under three $Z_2$ symmetries that each act on one of the $\Phi_i$.\footnote{Only two of these $Z_2$ symmetries need to be imposed by hand as the third follows accidentally. The Type-I version of the model can be achieved by imposing only a single $Z_2$ symmetry, which opens the possibility of additional terms in the scalar potential. 
We do not consider this possibility here, though.} The transformation assignments of the fermions under the three $Z_2$ symmetries then dictate the Yukawa structure of the model according to the five physically-distinct ``types'' given in Tab.~\ref{tab:types}. \begin{table} \centering \begin{tabular}{c c c c} \hline \hline Model & $\Phi_1$ & $\Phi_2$ & $\Phi_3$ \\ \hline Type-I & -- & $u, d, \ell$ & -- \\ Type-II & $d, \ell$ & $u$ & -- \\ Type-X or Lepton-specific & $\ell$ & $u,d$ & -- \\ Type-Y or Flipped & $d$ & $u, \ell$ & -- \\ Type-Z or Democratic & $d$ & $u$ & $\ell$ \\ \hline \hline \end{tabular} \caption{The five types of 3HDM subject to NFC. The table indicates which Higgs doublet is responsible for generating the mass of each type of fermion, wherein $u(d)$ refers to an up(down)-type quark and $\ell$ to a (charged) lepton.} \label{tab:types} \end{table} The most general SU(2)$_L \times $U(1)$_Y$ invariant potential subject to these $Z_2$ symmetries is~\cite{Cree2011} \begin{eqnarray} V &=& m^2_{11}\Phi^\dagger_1\Phi_1 + m^2_{22}\Phi^\dagger_2\Phi_2 + m^2_{33}\Phi^\dagger_3\Phi_3 \nonumber\\ &-& [m^2_{12}\Phi^\dagger_1\Phi_2 + m^2_{13}\Phi^\dagger_1\Phi_3+ m^2_{23}\Phi^\dagger_2\Phi_3 + \text{h.c.}] \nonumber\\ &+& \frac{1}{2}\lambda_{1}(\Phi^\dagger_1\Phi_1)^2 + \frac{1}{2}\lambda_{2}(\Phi^\dagger_2\Phi_2)^2 + \frac{1}{2}\lambda_{3}(\Phi^\dagger_3\Phi_3)^2 \nonumber\\ &+& \lambda_{12}(\Phi^\dagger_1\Phi_1)(\Phi^\dagger_2\Phi_2) + \lambda_{13}(\Phi^\dagger_1\Phi_1)(\Phi^\dagger_3\Phi_3)+ \lambda_{23}(\Phi^\dagger_2\Phi_2)(\Phi^\dagger_3\Phi_3) \nonumber\\ &+& \lambda'_{12}(\Phi^\dagger_1\Phi_2)(\Phi^\dagger_2\Phi_1) + \lambda'_{13}(\Phi^\dagger_1\Phi_3)(\Phi^\dagger_3\Phi_1) + \lambda'_{23}(\Phi^\dagger_2\Phi_3)(\Phi^\dagger_3\Phi_2) \nonumber\\ &+& \frac{1}{2}[\lambda^{\prime\prime}_{12}(\Phi^\dagger_1\Phi_2)^2 + \lambda^{\prime\prime}_{13}(\Phi^\dagger_1\Phi_3)^2 + \lambda^{\prime\prime}_{23}(\Phi^\dagger_2\Phi_3)^2 + \text{h.c.}], \end{eqnarray} where 
we have retained the terms $m^2_{ij}$ that break the $Z_2$ symmetries softly. The potential contains six complex parameters: the three soft-breaking masses, $m^2_{12}$, $m^2_{13}$, and $ m^2_{23}$, and three quartic couplings, $\lambda^{\prime\prime}_{12}$, $\lambda^{\prime\prime}_{13}$, and $\lambda^{\prime\prime}_{23}$. Only four of the six CP-violating phases are physical, as the other two can be eliminated by phase rotations of $\Phi_1$, $\Phi_2$, and $\Phi_3$.\footnote{Only relative phase rotations are physically meaningful. A common overall phase rotation of all three doublets corresponds to the U(1)$_Y$ hypercharge symmetry and has no effect on the potential. This overall phase rotation can be used to choose one of the VEVs to be real and positive; we apply this to $v_3$.} Instead of removing the imaginary part of two of the six complex parameters, we use this phase freedom to make all three VEVs real and positive with no loss of generality. This choice requires that we fix the imaginary parts of $m_{13}^2$ and $m_{23}^2$ as follows~\cite{Cree2011}: \begin{subequations} \begin{eqnarray} \label{EqAr:realvevs} {\text{Im}}(m^2_{13}) &=& -\frac{v_2}{v_3}{\text{Im}}(m^2_{12}) + \frac{v_1 v_2^2}{2 v_3}{\text{Im}}(\lambda^{\prime\prime}_{12}) + \frac{v_1v_3}{2}{\text{Im}}(\lambda^{\prime\prime}_{13}),\\ {\text{Im}}(m^2_{23}) &=& \frac{v_1}{v_3}{\text{Im}}(m^2_{12}) - \frac{v_1^2v_2}{2v_3}{\text{Im}}(\lambda^{\prime\prime}_{12}) + \frac{v_2v_3}{2}{\text{Im}}(\lambda^{\prime\prime}_{23}). \end{eqnarray} \end{subequations} The remaining four independent complex phases are responsible for the Higgs sector CP violation in the form of mixing between the two would-be CP-odd and the three would-be CP-even neutral Higgs states, as well as a complex phase in the charged Higgs mass matrix, which results in CP violation in the couplings of the charged Higgs mass eigenstates. 
For our purposes in this paper, we specialize to a constrained version of the model in which we turn off CP violation in the neutral scalar sector; this is achieved by imposing the following three relations: \begin{subequations} \label{EqAr:CPsplit} \begin{eqnarray} {\text{Im}}(\lambda^{\prime\prime}_{13}) &=& -\frac{v^2_2}{v_3^2}{\text{Im}}(\lambda^{\prime\prime}_{12}), \\ {\text{Im}}(\lambda^{\prime\prime}_{23}) &=& \frac{v^2_1}{v_3^2}{\text{Im}}(\lambda^{\prime\prime}_{12}), \\ {\text{Im}}(m^2_{12}) &=& v_1v_2{\text{Im}}(\lambda^{\prime\prime}_{12}). \end{eqnarray} \end{subequations} This leaves only one independent CP-violating parameter, which can be taken as ${\text{Im}}(\lambda^{\prime\prime}_{12})$. The remaining CP-violating phase appears in the charged Higgs mass matrix. Finally, minimizing the potential also allows three real parameters to be eliminated in favor of the (real) VEVs~\cite{Cree2011}: \begin{subequations} \begin{eqnarray} m^2_{11} &=& \frac{v_2}{v_1}{\text{Re}}(m^2_{12}) + \frac{v_3}{v_1}{\text{Re}}(m^2_{13}) - \frac{v_1^2}{2}\lambda_1 \nonumber \\ &&- \frac{v_2^2}{2}[\lambda_{12} + \lambda'_{12} + {\text{Re}}(\lambda^{\prime\prime}_{12})] - \frac{v_3^2}{2}[\lambda_{13} + \lambda'_{13} + {\text{Re}}(\lambda^{\prime\prime}_{13})], \\ m^2_{22} &=& \frac{v_1}{v_2}{\text{Re}}(m^2_{12}) + \frac{v_3}{v_2}{\text{Re}}(m^2_{23}) - \frac{v_2^2}{2}\lambda_2 \nonumber \\ &&- \frac{v_1^2}{2}[\lambda_{12} + \lambda'_{12} + {\text{Re}}(\lambda^{\prime\prime}_{12})] - \frac{v_3^2}{2}[\lambda_{23} + \lambda'_{23} + {\text{Re}}(\lambda^{\prime\prime}_{23})], \\ m^2_{33} &=& \frac{v_1}{v_3}{\text{Re}}(m^2_{13}) + \frac{v_2}{v_3}{\text{Re}}(m^2_{23}) - \frac{v_3^2}{2}\lambda_3 \nonumber \\ &&- \frac{v_1^2}{2}[\lambda_{13} + \lambda'_{13} + {\text{Re}}(\lambda^{\prime\prime}_{13})] - \frac{v_2^2}{2}[\lambda_{23} + \lambda'_{23} + {\text{Re}}(\lambda^{\prime\prime}_{23})].
\end{eqnarray} \end{subequations} \subsection{Charged Higgs sector} CP violation in the charged Higgs sector emerges from the mixing of the gauge eigenstates $\phi_i^+$ ($i=1,2,3$) to form the charged Higgs mass eigenstates. Following the notation of Ref.~\cite{Cree2011}, we define a mixing matrix $U$ according to \begin{eqnarray} \left( \begin{array}{c} \phi_1^+ \\ \phi_2^+ \\ \phi_3^+ \end{array} \right) = U^{\dagger} \left( \begin{array}{c} G^+ \\ H_2^+ \\ H_3^+ \end{array} \right), \label{eq:Udefinition} \end{eqnarray} where $G^+$ is the charged Goldstone boson, and $H_2^+$ and $H_3^+$ are the physical charged Higgs mass eigenstates. Here, $U$ is obtained by diagonalizing the charged Higgs mass-squared matrix, $V \supset \phi_i^- (\mathcal{M}^2_{H^\pm})_{ij} \phi_j^+$, where~\cite{Cree2011} \begin{equation} \mathcal{M}^2_{H^\pm} = \left( \begin{array}{ccc} \frac{v_2}{v_1}A_{12} + \frac{v_3}{v_1}A_{13} & -A_{12}+i B & -A_{13}-i\frac{v_2}{v_3} B \\ -A_{12}-i B & \frac{v_1}{v_2}A_{12} + \frac{v_3}{v_2}A_{23} & -A_{23}+i\frac{v_1}{v_3} B \\ -A_{13}+i\frac{v_2}{v_3} B & -A_{23}-i\frac{v_1}{v_3} B & \frac{v_1}{v_3}A_{13} + \frac{v_2}{v_3}A_{23} \\ \end{array} \right), \end{equation} with \begin{eqnarray} A_{12} &=& {\text{Re}}(m^2_{12}) - \frac{v_1v_2}{2}[\lambda'_{12} + {\text{Re}}(\lambda^{\prime\prime}_{12})], \\ \nonumber A_{23} &=& {\text{Re}}(m^2_{23}) - \frac{v_2v_3}{2}[\lambda'_{23} + {\text{Re}}(\lambda^{\prime\prime}_{23})], \\ \nonumber A_{13} &=& {\text{Re}}(m^2_{13}) - \frac{v_1v_3}{2}[\lambda'_{13} + {\text{Re}}(\lambda^{\prime\prime}_{13})], \\ \nonumber B &=& -{\text{Im}}(m^2_{12}) + \frac{v_1v_2}{2}{\text{Im}}(\lambda^{\prime\prime}_{12}). \end{eqnarray} Notice that CP violation enters only via $B$. When we turn off CP violation in the neutral Higgs sector by imposing Eqs.~(\ref{EqAr:CPsplit}), $B$ becomes \[ B = -\frac{v_1v_2}{2}{\text{Im}}(\lambda^{\prime\prime}_{12}). \] We now diagonalize the charged Higgs mass matrix.
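Before performing the rotation, one can verify numerically that $\mathcal{M}^2_{H^\pm}$ annihilates the VEV direction $(v_1, v_2, v_3)$ for any $A_{ij}$ and $B$, i.e., that a massless charged Goldstone mode is always present. A stdlib-only sketch with arbitrary illustrative inputs:

```python
# Sanity check (a sketch, not the paper's code): the charged Higgs
# mass-squared matrix quoted above always has (v1, v2, v3) as a null
# eigenvector -- the Goldstone direction -- independently of A_ij and B.

v1, v2, v3 = 1.3, 0.7, 2.1               # illustrative VEVs (any positive values)
A12, A13, A23, B = 0.4, -0.2, 0.9, 0.15  # illustrative parameters

M = [
    [v2/v1*A12 + v3/v1*A13, -A12 + 1j*B,           -A13 - 1j*v2/v3*B],
    [-A12 - 1j*B,           v1/v2*A12 + v3/v2*A23, -A23 + 1j*v1/v3*B],
    [-A13 + 1j*v2/v3*B,     -A23 - 1j*v1/v3*B,     v1/v3*A13 + v2/v3*A23],
]

vev = [v1, v2, v3]
residual = [sum(M[i][j] * vev[j] for j in range(3)) for i in range(3)]
assert max(abs(r) for r in residual) < 1e-12  # M . (v1, v2, v3) = 0
```

The cancellation holds row by row and term by term, which is why it survives for any parameter values.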
We perform the rotation in two stages, starting with rotating to the Higgs basis using the rotation matrix \begin{eqnarray} U_1 &=& \left( \begin{array}{ccc} \sin\gamma & 0 & \cos\gamma \\ 0 & 1 & 0 \\ -\cos\gamma & 0 & \sin\gamma \end{array} \right) \left( \begin{array}{ccc} \cos\beta & \sin\beta & 0 \\ -\sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{array} \right), \end{eqnarray} where we define the angles $\beta$ and $\gamma$ in terms of the VEVs: \begin{equation}\label{tanbetatangamma} \tan \beta = \frac{v_2}{v_1}, \qquad \tan \gamma = \frac{\sqrt{v^2_1 + v^2_2}}{v_3}. \end{equation} This rotation isolates the charged Goldstone boson, yielding the following mass matrix: \begin{equation} \mathcal{M}^{\prime 2}_{H^\pm} = U_1\mathcal{M}^2_{H^\pm}U^\dagger_1 = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & \mathcal{M}^{\prime 2}_{22} & \mathcal{M}^{\prime 2}_{23} \\ 0 & \mathcal{M}^{\prime 2*}_{23} & \mathcal{M}^{\prime 2}_{33} \\ \end{array} \right), \label{Chmatrix} \end{equation} where \cite{Cree2011}:\footnote{We correct two typos in the expression for $\mathcal{M}^2_{22}$ in Eq.~(A8) of Ref.~\cite{Cree2011}.} \begin{eqnarray} \mathcal{M}^{\prime 2}_{22} &=& \frac{v^2_{12}}{v_1v_2}A_{12} + \frac{v^2_2v_3}{v_1v^2_{12}}A_{13} + \frac{v_1^2v_3}{v_2v^2_{12}}A_{23}, \\ \mathcal{M}^{\prime 2}_{33} &=& \frac{v_1v^2}{v_3v^2_{12}}A_{13} + \frac{v_2v^2}{v_3v_{12}^2}A_{23}, \\ \mathcal{M}^{\prime 2}_{23} &=& \frac{v_2v}{v_{12}^2}A_{13} - \frac{v_1v}{v_{12}^2}A_{23} + i\frac{v}{v_3}B, \end{eqnarray} and $v_{12}^2=v_1^2+v_2^2$. The next step is to diagonalize the matrix in Eq.~(\ref{Chmatrix}). 
We do it with the matrix $U_2$, \begin{equation} U_2 = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & e^{-i\delta} & 0 \\ 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta e^{i\delta} \\ 0 & -\sin\theta e^{-i\delta} & \cos\theta \\ \end{array} \right), \end{equation} where the CP-violating phase $\delta$ is given by \begin{equation} \delta = \text{phase}(\mathcal{M}^{\prime 2}_{23}), \end{equation} with $0 \leq \delta < 2\pi$. For later convenience, we choose the mixing angle $\theta$ to lie in the range $-\pi/2 \leq \theta \leq 0$,\footnote{In the Democratic (or Type-Z) 3HDM, the coupling of $H_2^+$ to leptons goes to zero when $\theta = 0$, likewise the coupling of $H_3^+$ to leptons goes to zero when $\theta = -\pi/2$.} so that either $H_2^{\pm}$ or $H_3^{\pm}$ can be the lighter physical charged Higgs boson. The full rotation matrix in Eq.~(\ref{eq:Udefinition}) is then given explicitly by \cite{Cree2011}: \begin{equation} U^{\dagger} = (U_2 U_1)^{\dagger} = \left( \begin{array}{ccc} s_\gamma c_\beta & -c_\theta s_\beta e^{i\delta} - s_\theta c_\gamma c_\beta & s_\theta s_\beta e^{i\delta} - c_\theta c_\gamma c_\beta \\ s_\gamma s_\beta & c_\theta c_\beta e^{i\delta} - s_\theta c_\gamma s_\beta & -s_\theta c_\beta e^{i\delta} - c_\theta c_\gamma s_\beta \\ c_\gamma & s_\theta s_\gamma & c_\theta s_\gamma \end{array} \right), \label{eq:Uexplicit} \end{equation} where $s_{\beta} = \sin\beta$, $c_{\beta} = \cos\beta$ and similarly for the other mixing angles. We give the explicit form for $U^{\dagger}$ rather than $U$ for later convenience in writing the Yukawa couplings. {For simplicity, we assume that the masses of all the extra neutral scalars, that is, $H_{2,3}$ and $A_{2,3}$, are larger than those of the charged Higgs bosons, and we take the alignment limit so that the tree-level couplings of the 125~GeV Higgs boson $h$ are identical to those of the SM Higgs boson. 
In other words, we focus on the physics related to the charged Higgs sector so that our input parameters are the following six:} \[ M_{H^\pm_2}, M_{H^\pm_3}, \tan\beta, \tan\gamma, \theta, \delta. \] {Notice that our definitions of $\tan\beta$ and $\tan\gamma$ differ from those in Ref.~\cite{Akeroyd2017}, where a similar analysis of the 3HDM was performed.} \subsection{The Yukawa Lagrangian} In what follows, we will focus on the Democratic 3HDM,\footnote{Hereafter, we adopt this nomenclature in preference to Type-Z, as the former was introduced in Ref.~\cite{Cree2011} prior to the latter in Ref.~\cite{Akeroyd2017}.} in which $\Phi_1$ gives mass to down-type quarks, $\Phi_2$ gives mass to up-type quarks, and $\Phi_3$ gives mass to charged leptons. This version of the model gives rise to the most interesting EDM phenomenology arising from CP violation in the charged Higgs mixing matrix. We will comment on the EDMs in the other versions of the 3HDM in Sec.~\ref{sec:hiding}. The Yukawa Lagrangian takes the form \begin{equation}\label{YukLag} \mathcal{L}_\text{Yukawa} = - \{ \bar{Q}_L\Phi_1\mathcal{G}_dd_R + \bar{Q}_L\tilde{\Phi}_2\mathcal{G}_uu_R + \bar{L}_L\Phi_3\mathcal{G}_l l_R + \text{h.c.} \}, \end{equation} where $\tilde \Phi$ is the conjugate doublet given by $i \sigma^2 \Phi^*$. Here, $\mathcal{G}_f$ are the Yukawa matrices, which are determined in terms of the fermion mass matrices $\mathcal{M}_f$ by $\mathcal{M}_f = \mathcal{G}_f v_i/\sqrt{2}$, where $v_i$ is the VEV of the doublet that gives mass to fermion type $f$ (in the Democratic model, $v_1$ for down-type quarks, $v_2$ for up-type quarks, and $v_3$ for charged leptons). The Yukawa couplings of the charged Higgs bosons are given by~\cite{Grossman1994}: \begin{eqnarray} \mathcal{L}^\text{charged}_\text{Yukawa} &=& -\frac{\sqrt{2}}{v} \left\{ [X_2\bar{u}_LV \mathcal{M}_dd_R + Y_2\bar{u}_R\mathcal{M}_uVd_L + Z_2\bar{\nu}_L \mathcal{M}_ll_R]H^+_2 \right. \\ \nonumber &+& \left.
[X_3\bar{u}_LV \mathcal{M}_dd_R + Y_3\bar{u}_R\mathcal{M}_uVd_L + Z_3\bar{\nu}_L\mathcal{M}_ll_R]H^+_3 + \text{h.c.} \right\}, \end{eqnarray} where $V$ is the CKM matrix and the coupling coefficients $X_i$, $Y_i$ and $Z_i$ are given in terms of the elements of the charged Higgs mixing matrix $U^{\dagger}$ in Eq.~(\ref{eq:Uexplicit}) by \begin{equation} X_i = \frac{U^\dagger_{1i}}{U^\dagger_{11}}, \qquad Y_i = -\frac{U^\dagger_{2i}}{U^\dagger_{21}}, \qquad Z_i = \frac{U^\dagger_{3i}}{U^\dagger_{31}}, \label{XYZ} \end{equation} where $i = 2, 3$. Note that these expressions are for the Democratic 3HDM. The coupling coefficients for the other types of 3HDM are collected in Tab.~\ref{tab:couplingfactors}.\footnote{In all types of 3HDM except the Democratic one, taking the limit $\tan\gamma \to \infty$ (i.e., $v_3 \to 0$) recovers the corresponding 2HDM plus a third, inert, doublet.} \begin{table} \centering \begin{tabular}{c ccc} \hline \hline Model & $X_i$ & $Y_i$ & $Z_i$ \\ \hline Type-I & $\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $-\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ \\ Type-II & $\frac{U^\dagger_{1i}}{U^\dagger_{11}}$ & $-\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $\frac{U^\dagger_{1i}}{U^\dagger_{11}}$ \\ Type-X or Lepton-specific & $\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $-\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $\frac{U^\dagger_{1i}}{U^\dagger_{11}}$ \\ Type-Y or Flipped & $\frac{U^\dagger_{1i}}{U^\dagger_{11}}$ & $-\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ \\ Type-Z or Democratic & $\frac{U^\dagger_{1i}}{U^\dagger_{11}}$ & $-\frac{U^\dagger_{2i}}{U^\dagger_{21}}$ & $\frac{U^\dagger_{3i}}{U^\dagger_{31}}$ \\ \hline \hline \end{tabular} \caption{Coefficients $X_i$, $Y_i$, and $Z_i$ appearing in the Yukawa Lagrangian for the $H_i^+$ couplings to down-type quarks, up-type quarks, and (charged) leptons, respectively, with $i = 2,3$. 
The matrix $U^{\dagger}$ is defined in Eq.~(\ref{eq:Uexplicit}).} \label{tab:couplingfactors} \end{table} {In Figs.~\ref{Fig:HpmBRs1}--\ref{Fig:HpmBRs2}, we show the branching ratios (BRs) of $H^\pm_2$ (upper panels) and $H^\pm_3$ (lower panels) as a function of $\tan\beta$, in the Type-II, -X, -Y, and Democratic 3HDMs.\footnote{We have used {\tt CalcHEP} \cite{Belyaev2013} to produce these plots. We will use it again to calculate widths and cross sections to compare against experimental constraints.} In Fig.~\ref{Fig:HpmBRs1} we take $M_{H_2^\pm}=100$ GeV and $M_{H_3^\pm}=150$ GeV, while in Fig.~\ref{Fig:HpmBRs2} we take $M_{H_2^\pm}=200$ GeV and $M_{H_3^\pm}=250$ GeV, with $\theta = -\pi/4$ and $\delta = 0$ in both. The solid and dotted curves show the cases of $\tan\gamma = 2$ and $5$, respectively. We can see that a light charged Higgs boson (with $M_{H^\pm_i}<m_t$) predominantly decays to $\tau\nu$, although $cs$ dominates for some types in specific $\tan\beta$ regions. Furthermore, the decay into $cb$ becomes relevant for higher $\tan\beta$ in the Type-Y and Democratic models. For a heavy charged Higgs boson (with $M_{H^\pm_i}>m_t$), the decay into $tb$ is overwhelmingly dominant except for Type-X at large $\tan\beta$, where $\tau\nu$ dominates instead. In the Democratic model, meanwhile, $\tau\nu$ dominates for large values of $\tan\gamma$. (Notice that, here, the 3HDM parameter values are chosen so we can directly compare with Figs.~1 and 2 of \cite{Akeroyd2017}, where the parametrization of $\tan\beta$ and $\tan\gamma$ is, however, chosen differently from our work.\footnote{Furthermore, we use here the labeling $H_{2,3}^\pm$ in place of $H_{1,2}^\pm$ in Ref.~\cite{Akeroyd2017}, respectively.}) {We do not show the BRs for the charged Higgs bosons in the Type-I 3HDM because they are independent of $\tan\beta$, are the same for both of the charged Higgs bosons, and depend very little on the charged Higgs boson mass for $M_{H_i^\pm}<m_t$.
The most important BRs for the masses shown are to $cs$ (close to 70\%) and $\tau \nu$ (a little less than 30\%), with sub-dominant decays to $cb$ (just over 1\%) and $\mu \nu$ (around 0.1\%). When the charged Higgs boson masses in the Type-I 3HDM are above the top quark mass, decays to $tb$ become overwhelmingly dominant; e.g., for the parameter choices of Fig.~\ref{Fig:HpmBRs2}, all of the decay BRs to fermion pairs other than $tb$ are below 0.2\%.} \begin{figure} \centering \includegraphics[scale=0.4]{Fig1b.pdf} \includegraphics[scale=0.4]{Fig1c.pdf} \includegraphics[scale=0.4]{Fig1d.pdf} \includegraphics[scale=0.4]{Fig1e.pdf}\\ \includegraphics[scale=0.4]{Fig1g.pdf} \includegraphics[scale=0.4]{Fig1h.pdf} \includegraphics[scale=0.4]{Fig1i.pdf} \includegraphics[scale=0.4]{Fig1j.pdf} \caption{BRs of $H_2^+$ (upper panels) and $H_3^+$ (lower panels) as a function of $\tan\beta$ in, from left to right, the Type-II, -X, -Y, and Democratic 3HDMs. We take $M_{H_2^+}=100$ GeV, $M_{H_3^+}=150$ GeV, $\theta=-\pi/4$ and $\delta = 0$. The value of $\tan\gamma$ is 2 (5) for the solid (dotted) curves.} \label{Fig:HpmBRs1} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{Fig2b.pdf} \includegraphics[scale=0.4]{Fig2c.pdf} \includegraphics[scale=0.4]{Fig2d.pdf} \includegraphics[scale=0.4]{Fig2e.pdf}\\ \includegraphics[scale=0.4]{Fig2g.pdf} \includegraphics[scale=0.4]{Fig2h.pdf} \includegraphics[scale=0.4]{Fig2i.pdf} \includegraphics[scale=0.4]{Fig2j.pdf} \caption{As in Fig.~\ref{Fig:HpmBRs1} but for $M_{H_2^+}=200$ GeV and $M_{H_3^+}=250$ GeV.} \label{Fig:HpmBRs2} \end{figure} \subsection{Collider constraints} Charged Higgs boson production in hadronic collisions can be described by the subprocesses $gg,q\bar q\to t\bar b H^-$ + c.c. for both light ($M_{H^\pm_i}<m_t$) and heavy ($M_{H^\pm_i}>m_t$) states \cite{Guchait2002,Assamagan2004}, since in the former case the dominant channel is $gg,q\bar q\to t\bar t\to t \bar b H^-$ + c.c.
(i.e., $t$-quark pair production and decay) while in the latter case, it is $bg\to tH^-$ + c.c. (i.e., Higgs-strahlung off $b$-quarks).\footnote{Recall that $b$-(anti)quarks are produced inside protons by gluon splitting.} Since the Higgs-strahlung cross section is much smaller than the one for top-antitop quark production, a light charged Higgs boson is severely constrained, while direct searches for a heavy one leave it largely unconstrained. However, when $M_{H^\pm_i}\approx M_{W^\pm}\approx 80$~GeV, the $t\to bW^+$ background overwhelms the $t\to b H^+$ signal, so that, even at the current Large Hadron Collider (LHC), this mass region is still allowed for a charged Higgs state in a 3HDM, no matter its decay mode \cite{Akeroyd2018,Akeroyd2020a}. Of relevance to our analysis are the constraints coming from $H^\pm\to\tau\nu$ \cite{Sirunyan2019}, $cb$ \cite{Sirunyan2018} and $cs$ \cite{Aad2013} searches at the LHC (with the first channel generally being more constraining than the second and third ones), which have been performed by both ATLAS and CMS. In Fig.~\ref{Fig:munuconstraints}, we fix the values of $M_{H_2^\pm}=80$ GeV, $M_{H_3^\pm}=170$~(200)~GeV in the upper (lower) panels and $\tan\beta = 20$.\footnote{Comparing the upper and lower left panels of Fig.~\ref{Fig:munuconstraints} shows that the cross section times BR of $H_2^{\pm}$ is essentially unaffected by the mass of the heavier $H_3^{\pm}$, once it is at least comparable to the top quark mass.} We tested the region $-0.6<\theta <0$, $0.4<\tan\gamma<2.6$ against CMS searches for $H^\pm\to\tau\nu$~\cite{Sirunyan2019}.\footnote{{Although values of $\tan\gamma >2.6$ are allowed, we have chosen this region to better show the tension between the excluded areas for $H_2^\pm$ and $H_3^\pm$, respectively.} } In the case of $H_2^\pm$, values of $\theta$ closer to zero are preferred; this is in tension with the cross section for $H_3^\pm$, which favors $\theta\lesssim -0.4$.
However, we can ease this tension by choosing $\tan\gamma\lesssim 2$, as the BRs of both charged Higgs states to $\tau\nu$ are then smaller (see Fig.~\ref{Fig:HpmBRs1}). Lower values of $M_{H_3^\pm}$ also increase the cross section for $H^\pm_3\to\tau\nu$, making it harder to satisfy collider limits. This is manifest for the case of $M_{H^\pm_3}=150$ GeV, shown in Fig.~\ref{Fig:80150collider}, a scenario that is excluded by the $H^\pm_3\to\tau\nu$ results. For this value of $M_{H^\pm_3}$, we should also compare to the collider limits for $H^\pm_3\to cb$ and $cs$; however, these are less constraining than $\tau\nu$. When $m_t < M_{H^\pm_2} < M_{H^\pm_3}$, the BR of $H^\pm_2$ to $\tau\nu$ only dominates over the BR to $tb$ for small values of $\tan\beta$, as can be seen in Fig.~\ref{Fig:HpmBRs2}. Later in this work, when we consider the masses of the charged Higgs bosons to be larger than the top-quark mass, we take $\tan\beta>10$, and then this region readily satisfies collider limits. Overall, notice that there is no significant interference between $H^\pm_2$ and $H^\pm_3$, unless their mass difference is comparable to either of their widths, which is never the case for the benchmark points that we will study.\footnote{{When both charged Higgs boson masses are lower than $m_t$, their widths become very small, so that very strong fine-tuning of their masses would be needed to achieve overlap of the lineshapes and thus interference in top quark decays involving on-shell $H^\pm_2$ and $H^\pm_3$.
For example, if we take the parameter values of Fig.~\ref{Fig:lowmass-summary} (lower panels), the width of $H_2^\pm$ is around 5 MeV and the width of $H_3^\pm$ is 0.9 MeV (0.74 MeV) for $\delta = 0.8\pi$ $(0.95\pi)$.}} \begin{figure} \centering \includegraphics[scale=0.45]{sigtaunuH2_80_170tb20cont.pdf}~ \includegraphics[scale=0.45]{sigtaunuH3_80_170tb20cont.pdf}\\ \includegraphics[scale=0.45]{sigtaunuH2_80_200tb20cont.pdf}~ \includegraphics[scale=0.45]{sigtaunuH3_80_200tb20cont.pdf}\\ \caption{{Contour plots of $\sigma(pp\to tbH_i^\pm)\times {\rm BR}(H_i^\pm\to\tau\nu)$ for $H_2^{\pm}$ (left panels) and $H_3^{\pm}$ (right panels) on the ($\theta,\tan\gamma$) plane, for $\tan\beta = 20$, $\delta = 0.9\pi$, $M_{H_2^\pm} = 80$~GeV, and $M_{H_3^\pm}=170$~GeV (upper) and 200~GeV (lower). In the case of $H_2^\pm$ ($H_3^\pm$), the resulting rate is higher for lower (higher) values of $\theta$. The area in white is excluded by CMS~\cite{Sirunyan2019}. The lower the mass of $H_3^\pm$, the more challenging it is to find viable parameter space satisfying the direct experimental search bounds for both charged scalars.} } \label{Fig:munuconstraints} \end{figure} \begin{figure} \centering \hspace*{-0.5truecm} \includegraphics[scale=0.4]{sigtaunuH3_80_150cont.pdf}~ \includegraphics[scale=0.4]{sigcbH3_80_150cont.pdf}~ \includegraphics[scale=0.4]{sigcsH3_80_150cont.pdf} \caption{Production cross sections times BR for a 150~GeV heavier charged Higgs $H^\pm_3$ decaying to $\tau\nu$ (left), $cb$ (middle) and $cs$ (right) in the ($\tan\gamma,\tan\beta$) plane, for $M_{H^\pm_2} = 80$~GeV, $\theta=-0.5$, and $\delta = 0.95\pi$. The white area represents the upper limits from LHC searches in Refs.~\cite{Sirunyan2019} (CMS), \cite{Sirunyan2018} (CMS), and \cite{Aad2013} (ATLAS), respectively. Notice that the $H^\pm_3\to\tau\nu$ limits strongly exclude almost all of this scenario. The $H^\pm_2$ signal on the other hand is well below the collider limits. 
} \label{Fig:80150collider} \end{figure} Charged Higgs boson parameters can also be constrained indirectly via measurements of the top-quark width, $\Gamma_t$, whenever $M_{H_i^\pm}<m_t$. {We add to the SM top quark width~\cite{Zyla2020} the partial width from the decays $t\to H^\pm_ib$, where \cite{Akeroyd2018} \begin{equation} \Gamma(t\to H^\pm_ib) = \frac{G_Fm_t}{8\sqrt{2}\pi}\left[ m_t^2|Y_i|^2 + m_b^2|X_i|^2\right] \left(1 - M_{H_i^\pm}^2/m_t^2\right)^2, \end{equation} with $X_i$, $Y_i$ given in Eq.~\eqref{XYZ}.} The constraint is obtained by measuring $\Gamma_t$ through the reconstruction of the Breit-Wigner (BW) resonance from the top quark's visible decay products.\footnote{Notice that, for our analysis, constraints obtained from measuring the single-top cross section are inapplicable, as these assume that $t\to bW^+$ is the only possible top-quark decay channel.} According to Refs.~\cite{Zyla2020,ATLAS2019}, the most precise measurement to date is $\Gamma_t=(1.9\pm 0.5)$~GeV. As can be seen from Fig.~\ref{Fig:topwidth}, to prevent the top-quark width from becoming too large, we need to select lower values of $\tan\beta$. Low values of $\tan\gamma$ generally cause $\Gamma_t$ to blow up. However, in a scenario where the masses of the two charged Higgs bosons are close to the top-quark mass, we can still find very low values of $\tan\gamma$ that give an allowed $\Gamma_t$ value. This will be relevant for finding parameter space that satisfies all constraints: the top-quark width, collider searches for $H^\pm_i$ states, EDMs, and $\bar{B}\to X_s\gamma$. \begin{figure} \centering \includegraphics[scale=0.6]{twidth-mH3Z.pdf} \\ \includegraphics[scale=0.6]{twidth-tglt1.pdf}~ \includegraphics[scale=0.6]{twidth-160170.pdf} \caption{Predicted top quark width $\Gamma_t$ as a function of $M_{H_3^+}$ (upper) and $\tan\gamma$ (lower) in the Democratic 3HDM, for $\theta = -\pi/4$ and various values of $\tan\beta$.
In the upper panel $M_{H_2^+} = 85$~GeV, $\tan\gamma = 4$, and $\delta = 0.85 \pi$. The lower left panel has the same values of $M_{H_2^+}$ and $\delta$ but sets $M_{H_3^+} = 500$~GeV. In the lower right panel $M_{H_2^+} = 160$~GeV, $M_{H_3^+} = 170$~GeV, and $\delta = 0.9\pi$. The allowed range of top quark widths from Refs.~\cite{Zyla2020,ATLAS2019} lies in the shaded red area. } \label{Fig:topwidth} \end{figure} \subsection{Perturbativity constraints} The ranges of $\tan\beta$ and $\tan\gamma$ can be constrained by requiring that the Yukawa couplings in Eq.~(\ref{YukLag}) remain sufficiently perturbative. We adopt the approach introduced for the 2HDM in Ref.~\cite{Barger1990}, which requires the decay width of the charged Higgs boson into $t \bar b$, computed above the kinematic threshold, to be no larger than $M_{H^+}/2$. For example, at low $\tan\beta$, this leads to a constraint in the 2HDM Type-I of the form~\cite{Barger1990} \begin{equation} \Gamma(H^+ \to t \bar b) \simeq \frac{3 G_F m_t^2}{4 \sqrt{2} \pi \tan^2\beta} M_{H^+} < \frac{1}{2} M_{H^+}, \qquad {\rm or} \qquad \tan\beta \gtrsim 0.34, \end{equation} where we have used $m_t = 173$~GeV. In the 2HDM Type-II, we can use the same approach to find an upper bound on $\tan\beta$, where at large $\tan\beta$ the bottom quark Yukawa dominates, and we have \begin{equation} \Gamma(H^+ \to t \bar b) \simeq \frac{3 G_F m_b^2 \tan^2\beta}{4 \sqrt{2} \pi} M_{H^+} < \frac{1}{2} M_{H^+}, \qquad {\rm or} \qquad \tan\beta \lesssim 125, \end{equation} where we used $m_b \approx 4$~GeV (using the running bottom quark mass at the weak scale would yield an even higher upper bound on $\tan\beta$). These bounds are generally loose compared to the ranges of $\tan\beta$ usually adopted in collider searches. Nevertheless, we will adapt them to the 3HDM with this in mind. However, the presence of two charged Higgs bosons in the 3HDM makes a direct adaptation of the above analysis rather opaque.
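As a quick numerical cross-check of the two 2HDM bounds quoted above (a sketch; the precise value of $G_F$ used here is our assumption, taken at its standard value):

```python
# Cross-check of the 2HDM perturbativity bounds on tan(beta), using
# m_t = 173 GeV and m_b = 4 GeV as in the text, and G_F = 1.1664e-5 GeV^-2
# (the G_F value is our assumption; the text only quotes the resulting bounds).
import math

G_F, m_t, m_b = 1.1664e-5, 173.0, 4.0

# Type-I, low tan(beta):
# 3 G_F m_t^2 / (4 sqrt(2) pi tan^2 beta) * M < M/2  =>  lower bound
tanb_min = math.sqrt(3 * G_F * m_t**2 / (2 * math.sqrt(2) * math.pi))

# Type-II, large tan(beta):
# 3 G_F m_b^2 tan^2 beta / (4 sqrt(2) pi) * M < M/2  =>  upper bound
tanb_max = math.sqrt(2 * math.sqrt(2) * math.pi / (3 * G_F * m_b**2))

assert 0.33 < tanb_min < 0.35    # text quotes tan(beta) >~ 0.34
assert 120.0 < tanb_max < 130.0  # text quotes tan(beta) <~ 125
```

Both numbers reproduce the quoted bounds at the stated precision.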
Instead, we interpret the constraints as upper bounds on the Yukawa couplings themselves, so that, applied to the 2HDM equivalent of Eq.~(\ref{YukLag}), these bounds on $\tan\beta$ are equivalent to imposing $\mathcal{G}_t \lesssim 3.07$ and $\mathcal{G}_b \lesssim 2.90$. For uniformity we impose $\mathcal{G}_f \lesssim 3$ and derive constraints on $v_1 = v \cos\beta \sin\gamma$, $v_2 = v \sin\beta \sin\gamma$ and $v_3 = v \cos\gamma$ in the Democratic 3HDM using the $m_t$ and $m_b$ values quoted above (plus $m_{\tau} = 1.78$~GeV). We find \begin{equation} \sin\beta \sin\gamma \gtrsim 0.33, \qquad \cos\beta \sin\gamma \gtrsim 0.0077, \qquad \tan\gamma \lesssim 290. \end{equation} The first two constraints yield an absolute lower bound on $\tan\gamma$, \begin{equation} \tan\gamma \gtrsim 0.35. \end{equation} Later in this paper, we will show plots for $\tan\gamma = 1$ and 2. For $\tan\gamma = 1$, the perturbativity analysis above requires $0.53 \lesssim \tan\beta \lesssim 92$ and the allowed $\tan\beta$ range expands as $\tan\gamma$ increases. \section{Calculation of EDMs in the 3HDM} \label{sec:edms} In this section, we compute the dominant contributions to the electron and neutron EDMs from CP violation in the charged Higgs sector of the 3HDM. All our results are obtained by a straightforward generalization of the charged Higgs contributions to EDMs that can arise in the 2HDM and are already available in the literature. \subsection{Electron EDM from charged Higgs bosons in the 3HDM} Experimental sensitivity to the eEDM has improved by more than an order of magnitude in recent years, with a current upper bound from the ACME collaboration of~\cite{Andreev2018a}: \begin{equation} \label{de} |d_e| \leq 1.1 \times 10^{-29} \, e \, \text{cm}\, \, (90\% \,\, \text{C.L.}). \end{equation} The charged Higgs bosons in the 3HDM give rise to contributions to the eEDM via the CP violation in their couplings to fermion pairs. 
The one-loop contribution involving a charged Higgs loop is subdominant due to suppression by the tiny electron Yukawa coupling. The dominant contribution comes from the two-loop Barr-Zee type diagrams as shown in Fig.~\ref{fig:barrzeetb}, first calculated in Ref.~\cite{BowserChao1997} in the 2HDM (see also Ref.~\cite{Jung2014}). \begin{figure}[t] \begin{center}\resizebox{0.5\textwidth}{!}{\includegraphics{barrzeebloop}}\end{center} \caption{One of the Barr-Zee type diagrams that give the dominant charged Higgs boson contribution to the eEDM in the 3HDM.} \label{fig:barrzeetb} \end{figure} The charged Higgs sector also appears in the Barr-Zee type diagrams of Fig.~\ref{fig:barrzeeHp}, where $\phi^0$ is any of the neutral scalars in the model. It was pointed out in Ref.~\cite{Kanemura2020}, in the context of the Aligned 2HDM, that these diagrams can contribute significantly and lead to interesting cancellations with the diagrams of Fig.~\ref{fig:barrzeetb}. In the 3HDM scenario that we consider here, where CP violation is present in the charged Higgs sector but not in the neutral Higgs sector (which we have integrated out), these diagrams do not contribute to the eEDM because the $\phi^0 ee$ and $\phi^0 H_i^+ H_i^-$ couplings contain no CP phase.\footnote{It can be seen that, in the absence of neutral (pseudo)scalar sector CP violation, the latter coupling cannot contain a CP-violating phase because this term is Hermitian by itself and hence must have a real coefficient in the Lagrangian.} The couplings $\phi^0 H_2^+ H_3^-$ do contain non-trivial CP phases, but these couplings do not appear in the diagrams of Fig.~\ref{fig:barrzeeHp} because the photon coupling to the charged Higgs boson is diagonal. \begin{figure}[t] \resizebox{\textwidth}{!}{\includegraphics{barzeehploop}} \caption{Two of the Barr-Zee type diagrams for the eEDM involving a charged Higgs boson in the loop. 
These do not contribute in the 3HDM when CP violation is turned off in the neutral Higgs sector, as we assume in this paper.} \label{fig:barrzeeHp} \end{figure} Under our assumption that the neutral Higgs sector is CP-conserving and that CP violation appears only in the charged Higgs sector, the dominant Barr-Zee type contribution of the charged Higgs to the eEDM in the 2HDM~\cite{BowserChao1997,Jung2014} can be generalized to the 3HDM as follows: \begin{eqnarray} \frac{d_e(M_{H^{\pm}_2},M_{H^{\pm}_3})_{BZ}}{2} &=& - m_e \frac{12 G^2_F M^2_W}{(4\pi)^4} |V_{tb}|^2 \nonumber \\ &\times & \bigg[{\text{Im}} (- Y_2^*Z_2) \left(q_t F_t(z_{H^{\pm}_2},z_W) + q_b F_b(z_{H^{\pm}_2},z_W)\right) \nonumber \\ &+& {\text{Im}} (- Y_3^*Z_3) \left(q_t F_t(z_{H^{\pm}_3},z_W) + q_b F_b(z_{H^{\pm}_3},z_W)\right) \bigg], \label{eq: deforH} \end{eqnarray} where $q_t = 2/3$ and $q_b = -1/3$ are the quark electric charges, $z_{a} = M_a^2/m_t^2$, and~\cite{BowserChao1997,Jung2014} \begin{eqnarray} F_q(z_{H^{\pm}_i},z_W) &=& \frac{T_q(z_{H^{\pm}_i}) - T_q(z_W)}{z_{H^{\pm}_i} - z_W}, \nonumber \\ T_t(z) &=& \frac{1 - 3z}{z^2} \frac{\pi^2}{6} + \bigg(\frac{1}{z} - \frac{5}{2}\bigg) \log z - \frac{1}{z} - \bigg(2 - \frac{1}{z}\bigg) \bigg(1 - \frac{1}{z}\bigg) \text{Li}_2(1 - z), \nonumber \\ T_b(z) &=& \frac{2z - 1}{z^2} \frac{\pi^2}{6} + \bigg(\frac{3}{2} - \frac{1}{z}\bigg) \log z + \frac{1}{z} - \frac{1}{z} \bigg(2 - \frac{1}{z}\bigg) \text{Li}_2(1 - z). \end{eqnarray} Note that the original calculation of Ref.~\cite{BowserChao1997} was done setting $m_b = 0$ so that only the contribution involving the top-quark Yukawa couplings $m_t Y_i/v$ appears.
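The loop functions $T_t$, $T_b$ and $F_q$ above are straightforward to evaluate numerically. The following stdlib-only sketch (not the code used for this paper's results) implements them with a series-based dilogarithm; the sample masses at the end are illustrative assumptions:

```python
# Numerical sketch of the Barr-Zee loop functions T_t, T_b, F_q.
import math

def li2(x):
    """Li_2(x): power series for |x| <= 1, plus the inversion formula
    Li_2(x) = -pi^2/6 - log(-x)^2/2 - Li_2(1/x) for x < -1."""
    if x < -1.0:
        return -math.pi**2 / 6 - 0.5 * math.log(-x)**2 - li2(1.0 / x)
    return sum(x**k / k**2 for k in range(1, 2000))

def T_t(z):
    return ((1 - 3*z) / z**2 * math.pi**2 / 6
            + (1/z - 5/2) * math.log(z) - 1/z
            - (2 - 1/z) * (1 - 1/z) * li2(1 - z))

def T_b(z):
    return ((2*z - 1) / z**2 * math.pi**2 / 6
            + (3/2 - 1/z) * math.log(z) + 1/z
            - (1/z) * (2 - 1/z) * li2(1 - z))

def F(T, z_H, z_W):
    # F_q(z_H, z_W) = (T_q(z_H) - T_q(z_W)) / (z_H - z_W)
    return (T(z_H) - T(z_W)) / (z_H - z_W)

# Closed-form checks at z = 1, where log(1) = 0 and Li_2(0) = 0:
assert abs(T_t(1.0) - (-math.pi**2 / 3 - 1)) < 1e-12
assert abs(T_b(1.0) - (math.pi**2 / 6 + 1)) < 1e-12

# Example evaluation for z_H = (150/173)^2, z_W = (80.4/173)^2
# (illustrative masses in GeV; the W mass value is our assumption):
F_t_sample = F(T_t, (150.0 / 173.0)**2, (80.4 / 173.0)**2)
```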
Keeping the non-zero bottom mass would introduce additional contributions proportional to $m_b X_i/v$, which could become important at large values of $\tan\beta$. Finally, all other eEDM contributions at the loop level that are purely fermionic or induced by gauge bosons \cite{Pospelov1991,Chang1991} remain identical to those in the SM and are negligible compared to the current experimental bound. \subsection{Neutron EDM from charged Higgs bosons in the 3HDM} The current measurement of the nEDM at the Paul Scherrer Institute with ultra-cold neutrons (UCN) provides the upper limit~\cite{Abel2020}:\footnote{{If the (unrealistic) assumption is made that the nEDM is the sole contribution to the atomic EDM of mercury, the most recent measurement of the latter yields a comparable limit, $|d_n| < 1.8 \times 10^{-26} e$~cm at 90\% C.L. \cite{Graner2016ses}.}} \begin{equation}\label{dn} |d_n| \leq 1.8 \times 10^{-26} \, e \, \text{cm} \, \, (90\% \,\, \text{C.L.}). \end{equation} CP violation from charged Higgs boson exchange enters this observable through a variety of effective operators. Jung and Pich~\cite{Jung2014} point out three types of effective operators through which the charged Higgs boson contributes to the nEDM in the 2HDM: four-fermion operators involving the up- and down-type quarks, induced by CP-violating Higgs exchange; the Weinberg operator (the CP-violating three-gluon operator), which is suppressed neither by quark masses nor by CKM matrix elements; and the Barr-Zee type two-loop diagrams contributing to the EDMs and chromo-electric dipole moments (CEDMs) of the up- and down-type quarks. The light quark masses suppress the contributions of the four-fermion operators and of the up- and down-type quark (C)EDMs.
\begin{figure}[t] \resizebox{0.5\textwidth}{!}{\includegraphics{weinbergoperator}} \resizebox{0.5\textwidth}{!}{\includegraphics{bCEDM.png}} \caption{Left panel: Two-loop charged Higgs boson contribution to the Weinberg operator. Right panel: One-loop charged Higgs boson contribution to the bottom quark CEDM. } \label{fig:bCEDM} \end{figure} This leaves the Weinberg operator, the charged Higgs contribution to which is shown in the left panel of Fig.~\ref{fig:bCEDM}. Following Ref.~\cite{Jung2014}, we compute this using an effective field theory approach~\cite{Braaten1990}, which amounts to computing only the one-loop short-distance piece at the high scale $\mu_{tH} = m_t$, which is the bottom quark CEDM shown in the right panel of Fig.~\ref{fig:bCEDM}. The contribution of the Weinberg operator to the nEDM is~\cite{Jung2014}: \begin{equation}\label{formula1} |d_n(C_W)/e| = \left[1.0 {+1.0 \atop -0.5} \right] \times 20\, \text{MeV} \, C_W(\mu_h), \end{equation} where the sign is unknown, and the theoretical uncertainty on the magnitude is a factor of two. In our numerical results, we follow Ref.~\cite{Jung2014} and use the central theoretical value. The Wilson coefficient $C_W$ evaluated at the hadronic scale $\mu_h \sim 1$~GeV is expressed as \begin{equation}\label{formula2} C_W(\mu_h) = \eta^{\kappa_W}_{c-h} \eta^{\kappa_W}_{b-c} \bigg( \eta^{\kappa_W}_{t-b} C_W(\mu_{tH}) + \eta^{\kappa_C}_{t-b} \frac{g^3_s(\mu_{b})}{8\pi^2 m_b} \frac{d^C_b(\mu_{tH})}{2} \bigg), \end{equation} where $C_W(\mu_{tH}) = 0$ because there is no short-distance contribution to the Weinberg operator involving the charged Higgs boson at the scale $m_t$. $d_b^C(\mu_{tH})$ is the short-distance contribution to the bottom quark CEDM, given below. 
The running of these short-distance contributions down to the scale $\mu_b = m_b$ is accomplished by the factors of $\eta_{t-b} = \alpha_s(\mu_{tH})/\alpha_s(\mu_b)$ raised to the appropriate power $\kappa_i = \gamma_i/(2 \beta_0)$, where $\gamma_W = N_C + 2 n_f$ and $\gamma_C = 10 C_F - 4 N_C$ are the leading order (LO) anomalous dimensions of the Weinberg and $b$-quark CEDM operators, respectively, and $\beta_0 = (11 N_C - 2 n_f)/3$ is the one-loop QCD beta-function coefficient. Here, $N_C = 3$, $C_F = 4/3$, and $n_f$ is the number of active quark flavors involved in the QCD running at the relevant scale (e.g., between the top and bottom masses, $n_f = 5$). At the scale $\mu_b$, the bottom quark is integrated out and the operators are matched; the remaining Weinberg operator is then run down to the hadronic scale $\mu_h$ in two steps (integrating out the charm quark at $\mu_c = m_c$), giving rise to two more factors, $\eta_{b-c}^{\kappa_W}$ and $\eta_{c-h}^{\kappa_W}$, in which the running of $\alpha_s$ and the exponent are evaluated with the appropriate value of $n_f$. At LO, $\alpha_s(\mu)$ is given by: \begin{equation} \alpha_s(\mu) = \frac{\alpha_s(M_Z)}{v(\mu)}, \end{equation} with \begin{equation} v(\mu) = 1 - \beta_0 \frac{\alpha_s(M_Z)}{2\pi} \log \bigg(\frac{M_Z}{\mu} \bigg). \end{equation} Finally, the high-scale one-loop charged Higgs boson contribution to the bottom quark CEDM in the right panel of Fig.~\ref{fig:bCEDM} has been calculated in the 2HDM in Ref.~\cite{Jung2014} (see also references therein). By adapting this to the 3HDM, one obtains \begin{eqnarray} \frac{d^C_b(\mu_{tH})}{2} &=& - \frac{G_F}{\sqrt{2}} \frac{1}{16\pi^2} |V_{tb}|^2 m_b(\mu_{tH}) \left[{\text{Im}}(-X_2Y_2^*) x_{tH_2} \bigg(\frac{\log(x_{tH_2})}{(x_{tH_2} - 1)^3} + \frac{(x_{tH_2} - 3)}{2(x_{tH_2} - 1)^2}\bigg) \right. \nonumber \\ && \qquad \qquad \qquad \left.
+ {\text{Im}}(-X_3Y_3^*) x_{tH_3} \bigg(\frac{\log(x_{tH_3})}{(x_{tH_3} - 1)^3} + \frac{(x_{tH_3} - 3)}{2(x_{tH_3} - 1)^2}\bigg) \right], \label{formula7} \end{eqnarray} where $x_{tH_i} = m_t^2 / M_{H^{\pm}_i}^2$. Again, purely fermionic and gauge contributions \cite{Jung2014} remain identical to those in the SM and are negligible compared to the current experimental bound. \section{Cancellation in the charged Higgs contributions to the EDMs} \label{sec:hiding} The CP-violating phase in the charged Higgs mixing matrix is responsible for generating CP-violating observables in this model. The effects of this CP-violating phase in processes involving virtual charged Higgs boson exchange can be arbitrarily suppressed by making the two physical charged Higgs bosons sufficiently degenerate in mass, thereby avoiding constraints from EDMs. This can be understood as a consequence of an analogue of the GIM mechanism~\cite{Glashow1970}, in particular, when $H^\pm_2$ and $H^\pm_3$ become degenerate, both the mixing angle $\theta$ and the CP-violating phase $\delta$ in their mixing matrix become non-physical. Any internal charged Higgs propagator that begins and ends on a fermion line brings with it one factor of $X_i^*$, $Y_i^*$ or $Z_i^*$ and one factor of $X_i$, $Y_i$ or $Z_i$. The combinations $X_iX_i^*$, $Y_iY_i^*$, and $Z_iZ_i^*$ are purely real and cannot contribute to CP-odd observables, leaving only the combinations $X_iY_i^*$, $X_iZ_i^*$ and $Y_iZ_i^*$ (or their complex conjugates) which can have an imaginary part. Consider, for example, $X_iY_i^*$, which is given in the Democratic 3HDM in terms of the unitary rotation matrix in Eq.~(\ref{eq:Uexplicit}) by \begin{equation} X_i Y_i^* = - \frac{U^{\dagger}_{1i} U_{i2}}{U^{\dagger}_{11} U_{12}}, \end{equation} where $i = 2$ or 3. The denominator is real by construction since $U_{1j} = v_j/v$. 
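The RG ingredients of Eq.~(\ref{formula2}) and the loop function of Eq.~(\ref{formula7}) are straightforward to evaluate. The following sketch is purely illustrative: the numerical inputs ($\alpha_s(M_Z)=0.118$, the quark masses) are indicative values we have inserted, and each $\eta$ factor uses a single $n_f$ over its interval, as in the LO treatment described above.

```python
import math

ALPHA_MZ, MZ = 0.118, 91.19   # reference coupling and scale (indicative inputs)
NC, CF = 3, 4.0 / 3.0

def beta0(nf):
    return (11 * NC - 2 * nf) / 3.0

def alpha_s(mu, nf):
    # LO solution run from M_Z with a fixed n_f over the interval
    v = 1.0 - beta0(nf) * ALPHA_MZ / (2 * math.pi) * math.log(MZ / mu)
    return ALPHA_MZ / v

def eta(mu_hi, mu_lo, nf):
    return alpha_s(mu_hi, nf) / alpha_s(mu_lo, nf)

def kappa_W(nf):
    return (NC + 2 * nf) / (2 * beta0(nf))      # Weinberg-operator exponent

def kappa_C(nf):
    return (10 * CF - 4 * NC) / (2 * beta0(nf))  # b-quark CEDM exponent

def cedm_loop(x):
    # Loop function multiplying Im(-X_i Y_i^*) in d_b^C, with x = m_t^2/M_{H_i^+}^2
    return x * (math.log(x) / (x - 1) ** 3 + (x - 3) / (2 * (x - 1) ** 2))

mt, mb, mc, muh = 173.0, 4.18, 1.27, 1.0
# Running factors entering C_W(mu_h) (with C_W(mu_tH) = 0, only kappa_C appears
# in the t-b interval):
eta_tb_C = eta(mt, mb, 5) ** kappa_C(5)
eta_bc_W = eta(mb, mc, 4) ** kappa_W(4)
eta_ch_W = eta(mc, muh, 3) ** kappa_W(3)
print(eta_tb_C, eta_bc_W, eta_ch_W, cedm_loop((mt / 200.0) ** 2))
```

A more careful treatment would match $\alpha_s$ across flavor thresholds; for the LO estimate used here the simple one-interval running per $\eta$ factor suffices.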
Computation of CP-odd observables in this context always involves a sum over the two charged Higgs bosons that can appear in the contributing diagrams, yielding \begin{equation} \sum_{i = 2}^3 {\rm Im} (X_i Y_i^*) f(M_{H^+_i}) = - \frac{1}{U_{11}^{\dagger} U_{12}} \left[ {\rm Im} (U_{12}^{\dagger} U_{22}) f(M_{H^+_2}) + {\rm Im} (U_{13}^{\dagger} U_{32}) f(M_{H^+_3}) \right], \label{eq:gimmech1} \end{equation} where $f(M_{H^+_i})$ represents the dependence of the diagram on the charged Higgs boson mass. We can trivially add zero in the form of ${\rm Im} (U_{11}^{\dagger} U_{12}) f(m)$ inside the square brackets. Then, in the limit $M_{H^\pm_2} = M_{H^\pm_3} \equiv m$, Eq.~(\ref{eq:gimmech1}) becomes \begin{equation} \sum_{i = 2}^3 {\rm Im} (X_i Y_i^*) f(m) = - \frac{1}{U_{11}^{\dagger} U_{12}} {\rm Im} \left[ \sum_{i = 1}^3 U_{1i}^{\dagger} U_{i2} \right] f(m) = - \frac{1}{U_{11}^{\dagger} U_{12}} {\rm Im} (\delta_{12}) f(m) = 0, \end{equation} where $\delta_{12}$ is the $(1,2)$ element of the Kronecker delta. This also shows that ${\rm Im}(X_2 Y_2^*) = - {\rm Im}(X_3 Y_3^*)$, due to the unitarity of the charged Higgs mixing matrix, and similarly for the imaginary parts of $X_iZ_i^*$ and $Y_iZ_i^*$. The form of Eq.~(\ref{eq:gimmech1}) also implies that, for small non-zero mass splitting $\Delta M_{H^\pm} \ll M_{H^\pm}$, CP-violating amplitudes must be linear in $\Delta M_{H^\pm} /M_{H^\pm}$, where $\Delta M_{H^\pm} \equiv M_{H^\pm_3} - M_{H^\pm_2}$ and $M_{H^\pm} \equiv (M_{H^\pm_3} + M_{H^\pm_2})/2$.\footnote{{The degeneracy of the charged Higgs boson masses favored by the avoidance of EDM constraints raises the possibility of interesting interference effects in direct collider production of on-shell charged Higgs bosons, if their mass splitting is comparable to or smaller than the decay widths of the two charged Higgs bosons so that the BW lineshapes of their decay products overlap in phase space. 
Unfortunately, for the case when both charged Higgs boson masses are below $m_t$, not only are their decay widths extremely narrow (as illustrated already), but it is also very difficult (maybe impossible) to find a viable set of model parameters that are not already ruled out by collider searches for which such a degeneracy can be achieved. For charged Higgs boson masses above $m_t$, however, collider constraints are much less stringent and the decay widths are larger, so that such a lineshape overlap could offer interesting future possibilities for experimental exploration. For example, for the set of values in Fig.~17, the width of $H_3^\pm$ is of the order of 1 GeV when $M_{H^\pm_2}=M_{H^\pm_3}=200$ GeV and of the order of $10^{2}$ GeV for $M_{H^\pm_2}=M_{H^\pm_3}=800$ GeV whereas the width of $H_2^\pm$ is of the order of $10^{-3}$ GeV when $M_{H^\pm_2}=M_{H^\pm_3}=200$ GeV and of the order of 0.1 GeV for $M_{H^\pm_2}=M_{H^\pm_3}=800$ GeV. } } In this paper, we focus on the Democratic 3HDM because CP violation in the charged Higgs sector gives rise to interesting contributions to the EDMs of both the electron and neutron. In the other types of 3HDM, the effects of charged Higgs CP violation are more limited because, in these models, at least two of $X_i$, $Y_i$, and $Z_i$ become identical (see Tab.~\ref{tab:couplingfactors}). In particular, the dominant charged Higgs contribution to the eEDM, proportional to ${\rm Im}(-Y_i^* Z_i)$, is zero in the Type-I and Type-Y (Flipped) 3HDMs because in those models $Y_i = Z_i$. Similarly, the dominant charged Higgs contribution to the nEDM, proportional to ${\rm Im}(-X_i Y_i^*)$, is zero in the Type-I and Type-X (Lepton-specific) 3HDMs because in those models $X_i = Y_i$. In the Type-II 3HDM, $X_i = Z_i$, so that this model also leads to CP-violating charged Higgs boson contributions to both the electron and neutron EDMs. 
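The unitarity argument behind this GIM-like cancellation is easy to verify numerically. The sketch below builds a generic unitary $3\times 3$ matrix with a real first row (mimicking $U_{1j} = v_j/v$; the parametrization itself is purely illustrative, not the model's mixing matrix) and checks that the $i=2$ and $i=3$ imaginary parts are equal and opposite, so the sum vanishes for degenerate masses.

```python
import cmath
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def R(i, j, ang):
    # Real rotation in the (i, j) plane
    M = [[1.0 if a == b else 0.0 for b in range(3)] for a in range(3)]
    c, s = math.cos(ang), math.sin(ang)
    M[i][i] = M[j][j] = c
    M[i][j], M[j][i] = s, -s
    return M

theta, beta, gamma, delta = 0.7, 0.4, 0.3, 1.1   # illustrative angles and CP phase
D = [[1, 0, 0], [0, cmath.exp(1j * delta), 0], [0, 0, 1]]
# Unitary by construction, with a real first row (as U_{1j} = v_j/v in the model):
U = matmul(R(1, 2, theta), matmul(D, matmul(R(0, 1, beta), R(0, 2, gamma))))

# Column orthogonality: sum_i U^dagger_{1i} U_{i2} = (U^dagger U)_{12} = delta_{12} = 0
terms = [U[i][0].conjugate() * U[i][1] for i in range(3)]
assert abs(sum(terms)) < 1e-12
# The i=1 term is real (real first row), hence Im(term_2) = -Im(term_3):
print(terms[1].imag, terms[2].imag)
```

With degenerate masses, $f(M_{H^+_2}) = f(M_{H^+_3})$ factors out of the sum and the two imaginary parts cancel exactly, as in the text.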
\section{Numerical results} \label{sec:numerics} We now present our results for the Democratic 3HDM as a function of the relevant coupling parameters ($\theta$, $\tan\beta$, $\tan\gamma$, and $\delta$) and masses ($M_{H^{\pm}_{2}}$ and $M_{H^{\pm}_{3}}$) against the eEDM and nEDM constraints. We will also impose the constraints from direct searches for charged Higgs bosons, as well as from the measurement of BR($\bar B \to X_s \gamma$), which provides the most stringent indirect constraint on the charged Higgs masses. Details of our implementation of the $\bar B \to X_s \gamma$ constraint are given in Appendix~\ref{sec:bsg}. To start with, it is instructive to compare the 3HDM results with those available in the literature for the analogous case in a 2HDM, which we do by presenting the nEDM and eEDM constraints against the Yukawa coupling combinations Im$(X_iY_i^*)$ and Im$(Y_i^*Z_i)$ ($i=2$). In Fig.~\ref{2hdmnedm}, we show the Aligned 2HDM results in the plane of the charged Higgs mass and the imaginary part of the relevant combination of Yukawa coupling factors, to be compared to Figs.~3, 4, and 5 of Ref.~\cite{Jung2014},\footnote{Herein, there is no subscript 2 for the couplings and masses of the 2HDM, as only one charged Higgs state is present in the model.} updated using the latest nEDM and eEDM experimental limits as given in Eqs.~(\ref{dn}) and (\ref{de}), respectively. The shaded areas in Fig.~\ref{2hdmnedm} represent the viable parameter regions in both cases. The newest bounds from both nEDM and eEDM induce a strong suppression on the allowed parameter space corresponding to the imaginary contributions of the couplings $X_2Y_2^*$ and $Y_2^*Z_2$. In Figs. \ref{3hdmnedm300} and \ref{3hdmnedm500}, we show the 3HDM cases as a function of $M_{H_2^{\pm}}$ with $M_{H^{\pm}_3} = 85$ and 300 GeV, respectively. 
We can then see that the parameter space is generally enlarged in the Democratic 3HDM with respect to the Aligned 2HDM, particularly in the $M_{H_2^\pm}=M_{H_3^\pm}$ limit, clearly illustrating the aforementioned cancellation mechanism between the two charged Higgs states of the 3HDM. It is worth noticing here that, while in the exact mass degeneracy case there is virtually no constraint applicable to the Democratic 3HDM from either nEDM or eEDM, even when the $M_{H_2^\pm}=M_{H_3^\pm}$ condition is lifted, there are substantial differences in the values allowed for the Yukawa couplings between the two scenarios at both small and large values of the lightest charged Higgs boson mass. Next, we consider the effect of the coupling parameters $\theta$, $\tan\beta$, $\tan\gamma$, and $\delta$ for various scenarios for the charged Higgs masses $M_{H^{\pm}_2}$ and $M_{H^{\pm}_3}$ within the Democratic 3HDM. We consider two classes of mass scenarios: the first in which either or both $H_i^\pm$ masses are lighter than $ m_t$ (in Sec.~\ref{sec: lightmass}) and the second in which they are both heavier than $m_t$ (in Sec.~\ref{sec: heavymass}). Explicit expressions for the parameter combinations ${\rm Im}(-X_2 Y_2^*)$ and ${\rm Im}(-Y_2^* Z_2)$ that enter the calculations of the EDMs are given in Appendix~\ref{sec:appB}; in particular we note that these quantities are proportional to $\sin\delta$ and to the product $\sin\theta \cos\theta$, so that the CP-violating effects are largest when $\delta = \pi/2$ or $3\pi/2$ and $\theta = -\pi/4$. \begin{figure}[t!] \centering \begin{subfigure} { \includegraphics[scale=.45]{figure4innedm.png} } \hfill { \includegraphics[scale=.45]{figure3ineedm.png} } \caption{Constraint from the nEDM (left) and the eEDM (right) on $| {\rm Im}(X Y^*)|$ and $| {\rm Im}(Y^*Z)|$, respectively, in the Aligned 2HDM as a function of the charged Higgs mass ($M_{H^+}$). The blue shaded region is allowed. 
} \label{2hdmnedm} \vspace*{0.75cm} \subfigure { \includegraphics[scale=.45]{m1300imxy.png} } \hfill { \includegraphics[scale=.45]{m1300imxyeedm.png} } \caption {Constraint from the nEDM (left) and the eEDM (right) on $|{\rm Im}(X_2 Y_2^*)|$ and $|{\rm Im}(Y_2^*Z_2)|$ in the 3HDM as a function of the mass of $H_2^+$. $M_{H_3^+}$ is fixed to be 85~GeV. The structure of the model forces ${\rm Im}(X_3Y_3^{*}) = - {\rm Im}(X_2Y_2^{*})$ and ${\rm Im}(Y_3^*Z_3) = -{\rm Im}(Y_2^*Z_2)$. } \label{3hdmnedm300} \vspace*{0.75cm} \subfigure { \includegraphics[scale=.45]{m1500imxy.png} } \hfill { \includegraphics[scale=.45]{m1500imxyeedm.png} } \end{subfigure} \caption {Same as in Fig.~\ref{3hdmnedm300} but for $M_{H^{\pm}_3} = 300$ GeV. } \label{3hdmnedm500} \end{figure} \subsection{Light charged Higgses} \label{sec: lightmass} \subsubsection{The $M_{H_2^\pm}<m_t<M_{H_3^\pm}$ case} In Fig.~\ref{Fig:80,200EDM}, we show the constraints from $\bar{B}\to X_s\gamma$, eEDM and nEDM on the $[\delta,\theta]$ plane, for $M_{H_2^+} = 80$~GeV, $M_{H_3^+} = 200$~GeV, and small values of $\tan\beta$ and $\tan\gamma$ so as to be compliant with collider limits, as seen previously. Notice that the $\bar{B}\to X_s\gamma$ constraint is satisfied within the green and grey shaded areas while the two EDM constraints are satisfied outside the corresponding closed curves. {The shaded areas correspond to the $\pm 2 \sigma$ allowed region of BR($\bar B \to X_s \gamma$), with the green (grey) area corresponding to values below (above) the experimental central value.} From these plots, we learn that we need $\delta$ to be very close to $\delta = n \pi$ to satisfy all three constraints at once. That is, we are forced to find solutions very close to the CP-conserving limit; furthermore, the constraint from $\bar B \to X_s \gamma$ tends to favour $\delta \simeq \pi$. 
In Fig.~\ref{fig:deltatheta20}, we show the effect of varying $\tan\gamma$ and increasing the mass of the heavier charged Higgs state while keeping $M_{H_2^\pm} = 80$~GeV and fixing $\tan\beta = 20$. As can be seen, increasing $M_{H_3^\pm}$ from 200 to 500 GeV makes it more difficult to find regions that can survive all constraints, in line with the requirements of the aforementioned cancellation mechanism. Comparing with Fig.~\ref{Fig:80,200EDM} we also see that larger values of $\tan\beta$ lead to tighter constraints from the nEDM while larger values of $\tan\gamma$ lead to tighter constraints from the eEDM. In Fig.~\ref{Fig:80,200constraints}, we show the same constraints on the $[\tan\gamma,\tan\beta]$ plane instead, {for $\theta = -0.3$ and two characteristic values of $\delta$ chosen to be very close to $\pi$, i.e., $\delta = 0.975 \pi$ and $0.985\pi$. We have also added here the constraints from the top-quark width and perturbativity of the $H_i^+ b\bar t$ vertex. The allowed region is the portion of the green and grey shaded areas that lies to the right of the black dotted line and above the blue curve. For all the parameter regions shown, the collider limits are satisfied. We can see that, for $\tan\gamma > 1.5$ and $\tan\beta > 8$, we can satisfy all other constraints for these values of $\delta$.} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{t51d80200.png}\includegraphics[scale=0.5]{t101d80200.png} \caption{The allowed regions from $\bar{B} \to X_s \gamma$ (within the green and grey shaded areas), eEDM (outside the blue curves), and nEDM (outside the red curves) in the [$\delta, \theta$] plane, with $M_{H_2^+} = 80$~GeV, $M_{H_3^+} = 200$~GeV, $\tan\gamma = 1$, and $\tan\beta = 5$ (left) or 10 (right). 
{Here, the shaded areas correspond to the $\pm 2 \sigma$ allowed region of BR($\bar B \to X_s \gamma$), with the green (grey) area corresponding to values below (above) the experimental central value.} } \label{Fig:80,200EDM} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{t201d85200.png} \includegraphics[scale=0.5]{t201d85500.png}\\ \includegraphics[scale=0.5]{t202d85200.png} \includegraphics[scale=0.5]{t202d85500.png} \caption{The allowed regions from $\bar{B} \to X_s \gamma$ (within the green and grey shaded areas), eEDM (outside the blue curves), and nEDM (outside the red curves) in the [$\delta, \theta$] plane, with $M_{H_2^+} = 80$~GeV and $\tan\beta = 20$. $M_{H_3^+} = 200$~GeV in the left panels and 500~GeV in the right panels. Here, $\tan\gamma = 1$ in the upper panels and 2 in the lower panels. } \label{fig:deltatheta20} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.522]{alllimit80200_tmin03_bg_d0975.png} \includegraphics[scale=0.522]{alllimit80200_tmin03_bg_d0985.png} \caption{The allowed regions from $\bar{B} \to X_s \gamma$ (within the green and grey shaded areas), eEDM (above the blue line) and nEDM (to the right of the red line) in the [$\tan \gamma,\tan \beta$] plane, with $M_{H_2^+} = 80$~GeV, $M_{H_3^+} = 200$~GeV, $\theta = -0.3$, and $\delta = 0.975\pi$ (left) or $0.985\pi$ (right). We also show constraints from the top-quark width (black dotted line) and perturbativity (orange dashed line), wherein the region to the right of the respective curves is allowed.} \label{Fig:80,200constraints} \end{figure} \subsubsection{The $M_{H^\pm_2}<M_{H^\pm_3}<m_t$ case} Similarly to the previous case, also here we need low values of $\tan\beta$ to satisfy the top-quark width measurements. 
However, this is in tension with the region of parameter space that simultaneously satisfies the constraints from $\bar{B}\to X_s\gamma$, the eEDM, and the nEDM; nevertheless, as can be seen in Fig.~\ref{Fig:edmlow}, a somewhat wider interval of $\delta$ around $\pi$ opens up for large values of $\tan\beta$ and $\tan\gamma$. A broader band satisfying the $\bar{B}\to X_s\gamma$ constraint also appears for lower values of $M_{H^\pm_3}$, while keeping $M_{H^\pm_2}=80$~GeV. This, again, is in tension with the aforementioned experimental constraints; in this case, however, it is the collider limit on $H^\pm\to\tau\nu$ that becomes too restrictive on the $H^\pm_3$ properties as its mass decreases. This can be avoided by keeping $M_{H^\pm_3}=170$~GeV and instead increasing $M_{H^\pm_2}$, which is what we do in Fig.~\ref{Fig:lowmass-summary}. In the upper panels of this figure, we show the case $M_{H^\pm_2}=80$~GeV and $M_{H^\pm_3}=170$~GeV. In this case, the top-quark width measurement is very constraining, and very low values of $\tan\gamma$ are ruled out. In the lower panels, we show the case $M_{H^\pm_2}=160$~GeV and $M_{H^\pm_3}=170$~GeV. Here, the top-quark width measurement is less constraining, and very low values of $\tan\gamma$ are allowed. With the two charged Higgs masses closer to being degenerate, a larger range of the CP-violating phase $\delta$ also becomes allowed. 
\begin{figure} \centering \includegraphics[scale=0.5]{t50dot5d80150.png}\includegraphics[scale=0.5]{t50dot5d80170.png} \includegraphics[scale=0.5]{t51d80150.png}\includegraphics[scale=0.5]{t51d80170.png} \includegraphics[scale=0.5]{t101d80150.png}\includegraphics[scale=0.5]{t101d80170.png} \caption{The allowed regions from $\bar{B} \to X_s \gamma$ (within the green and grey shaded areas), eEDM (outside the blue curves), and nEDM (outside the red curves) in the [$\delta, \theta$] plane, with $M_{H_2^+} = 80$~GeV and $M_{H_3^+} = 150$ (left) or 170 (right)~GeV. From top to bottom, $(\tan\beta, \tan \gamma) = (5, 0.5)$; $(5, 1)$; and $(10, 1)$. } \label{Fig:edmlow} \end{figure} \begin{figure} \centering \includegraphics[scale=0.522]{alllimit80170_tmin03_bg_d096.png} \includegraphics[scale=0.522]{alllimit80170_tmin03_bg_d0985.png}\\ \includegraphics[scale=0.522]{alllimit160170_tmin05_bg_d08.png} \includegraphics[scale=0.522]{alllimit160170_tmin05_bg_d095.png} \caption{The allowed regions from $\bar{B} \to X_s \gamma$ (within the green and grey shaded areas), eEDM (above the blue line), and nEDM (to the right of the red line) in the [$\tan \gamma,\tan \beta$] plane, with $M_{H_3^+} = 170$~GeV. In the upper panels $M_{H_2^+} = 80$~GeV, $\theta = -0.3$, and $\delta = 0.96\pi$ (left) or $0.985\pi$ (right). In the lower panels $M_{H_2^+} = 160$~GeV, $\theta = -0.5$, and $\delta = 0.8\pi$ (left) or $0.95\pi$ (right). We also show constraints from the top-quark width (black dotted line) and perturbativity (orange dashed line), wherein the region to the right of the respective curves is allowed. 
} \label{Fig:lowmass-summary} \end{figure} \subsection{Heavy charged Higgses}\label{sec: heavymass} In the case that both the $H_2^\pm$ and $H_3^\pm$ masses are heavier than the top-quark mass, collider searches no longer significantly limit the parameter space, so we present the $\bar{B} \to X_s \gamma$, eEDM and nEDM constraints on the [$M_{H^{\pm}_2},M_{H^{\pm}_3}$] plane with different choices for the mixing parameters ($\tan\beta$, $\tan \gamma$, $\theta$, and $\delta$). We choose the parameters $\theta = - 0.476\pi$ $(-\pi/4)$, $\tan\beta = 20\; (40)$ and $\tan \gamma =1\; (2)$ to plot from Fig.~\ref{fig:theta2105} to Fig.~\ref{fig:theta49}. Specifically, Figs.~\ref{fig:theta2105}--\ref{fig:theta219} are plotted for three different $\delta$ values for the same $\theta=-0.476\pi$, where $\delta = 0.5\pi$ (maximum CP-violating scenario), 0.85$\pi$, and 0.9$\pi$ (two choices closer to the CP-conserving limit). In Fig.~\ref{fig:theta2105}, the two bottom panels clearly show that the most constraining limit comes from the nEDM when $\tan\beta = 40$. For the choice of $\tan \beta = 20$ and $\tan\gamma = 2$, the top right panel shows instead that the eEDM constraint is the one limiting most of the parameter space. In Figs.~\ref{fig:theta2185} and~\ref{fig:theta219}, a large expanse of parameter space is allowed by both the eEDM and nEDM constraints. In fact, here, EDM constraints no longer strictly limit the parameter space so that $\bar{B} \to X_s \gamma$ becomes the essential constraint, especially as $\delta$ gets close to $\pi$. The typical funnel shape of the allowed region along the mass diagonal for the EDM constraints illustrates again the impact of the GIM-like cancellation mechanism driven by the charged Higgs mass degeneracy, the more so the smaller their absolute values. 
Such a cancellation is not present in the $\bar{B} \to X_s \gamma$ constraint, since this observable receives both real and imaginary contributions from $X_iY_i^*$ terms, with the real components of $X_2Y^*_2$ and $X_3Y^*_3$ not being strongly correlated in the way their imaginary parts are; the corresponding shape thus departs from the funnel one and depends more on a judicious choice of $\theta$ for given values of $\tan\beta$ and $\tan\gamma$. In the case of $\theta = -\pi/4$, three similar figures, Figs.~\ref{fig:theta405}, \ref{fig:theta485}, and \ref{fig:theta49}, are presented for $\delta = 0.5\pi$, $0.85\pi$ and $0.9\pi$, respectively. For this $\theta$ value, it is intriguing to note that even the exact degeneracy case between $H_2^\pm$ and $H_3^\pm$ fails the $\bar B\to X_s\gamma$ constraint for the smallest $\delta$ choice. In contrast, for the other $\delta$ values, the main effect is a significant restriction of the parameter space allowed by $\bar B\to X_s\gamma$ along the $M_{H_2^\pm}=M_{H_3^\pm}$ diagonal while, conversely, the EDM constraints are less invasive. This feature is quite general, holding irrespective of the value of $\tan\beta$, so long as $\tan\gamma$ remains small. \begin{figure} \centering \includegraphics[scale=0.5]{m1m2minpi212010d5pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi212020d5pibsgnedmeedm.png}\\ \includegraphics[scale=0.5]{m1m2minpi214010d5pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi214020d5pibsgnedmeedm.png} \caption{The allowed regions from $\bar{B} \to X_s \gamma$ (within the green and grey shaded areas), eEDM (between the blue lines), and nEDM (between the red lines) in the [$M_{H^{\pm}_2} , M_{H^{\pm}_3} $] plane, for $\theta = -0.476\pi$ and $\delta = 0.5\pi$ (i.e., maximal CP violation), with $\tan\beta = 20$ (upper panels) or 40 (lower panels) and $\tan\gamma = 1$ (left panels) or 2 (right panels). 
} \label{fig:theta2105} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{m1m2minpi212010d85pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi212020d85pibsgnedmeedm.png}\\ \includegraphics[scale=0.5]{m1m2minpi214010d85pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi214020d85pibsgnedmeedm.png} \caption{Same as Fig.~\ref{fig:theta2105} but with $\delta=0.85\pi$. } \label{fig:theta2185} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{m1m2minpi212010d9pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi212020d9pibsgnedmeedm.png}\\ \includegraphics[scale=0.5]{m1m2minpi214010d9pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi214020d9pibsgnedmeedm.png} \caption{Same as Fig.~\ref{fig:theta2105} but with $\delta=0.9\pi$. } \label{fig:theta219} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{m1m2minpi42010d5pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi42020d5pibsgnedmeedm.png}\\ \includegraphics[scale=0.5]{m1m2minpi44010d5pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi44020d5pibsgnedmeedm.png} \caption{Same as Fig.~\ref{fig:theta2105} but with $\theta=-\pi/4$. } \label{fig:theta405} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{m1m2minpi42010d85pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi42020d85pibsgnedmeedm.png}\\ \includegraphics[scale=0.5]{m1m2minpi44010d85pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi44020d85pibsgnedmeedm.png} \caption{Same as Fig.~\ref{fig:theta2185} but with $\theta=-\pi/4$. } \label{fig:theta485} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{m1m2minpi42010d9pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi42020d9pibsgnedmeedm.png}\\ \includegraphics[scale=0.5]{m1m2minpi44010d9pibsgnedmeedm.png} \includegraphics[scale=0.5]{m1m2minpi44020d9pibsgnedmeedm.png} \caption{Same as Fig.~\ref{fig:theta219} but with $\theta=-\pi/4$. 
} \label{fig:theta49} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper, we have studied a version of the 3HDM, called Democratic, wherein the down-type quarks, up-type quarks, and charged leptons each acquire their mass from only one of the three Higgs doublet VEVs, in the presence of explicit CP violation in the charged Higgs sector, which consists of two physical states, each with mass varying from 80~GeV to the TeV scale. While an enlarged neutral Higgs sector also exists in this framework, consisting, in addition to the SM-like Higgs state already discovered, of four other neutral Higgs states, two CP-even and two CP-odd, these have been assumed to be sufficiently heavy compared to the charged Higgs bosons so as not to significantly affect the low energy phenomenology of the 3HDM. In particular, we showed that it is possible to isolate the effects of CP violation to the charged Higgs sector only, and derived the conditions on the complex parameters of the scalar potential required to achieve this. We have studied the charged Higgs sector in terms of the following experimental observables, all very sensitive to new CP-violating effects emerging alongside those contained in the CKM matrix: BR$(\bar B\to X_s\gamma)$, the eEDM, and the nEDM. We have tested the parameter space of the 3HDM, mapped in terms of the two charged Higgs boson masses $M_{H_2^\pm}$ and $M_{H_3^\pm}$ and four parameters entering their Yukawa couplings, $\tan\beta$, $\tan\gamma$, $\theta$, and the CP-violating phase $\delta$, against experimental measurements of these three observables. 
In doing so, we have discovered a sort of GIM-like cancellation mechanism between the two charged Higgs contributions to the eEDM and nEDM, driven by the unitarity of the charged Higgs mixing matrix.\footnote{An equivalent cancellation occurs in the CP-odd asymmetries in $\bar B \to X_s \gamma$, considered in Ref.~\cite{Akeroyd2020}.} Such a cancellation becomes exact when $M_{H_2^\pm} = M_{H_3^\pm}$. As a consequence, it is then possible to evade the experimental constraints enforced through the aforementioned CP-odd observables whatever the values of $\tan\beta$, $\tan\gamma$, $\theta$, and $\delta$. Even in less fine-tuned conditions, when we lift the charged Higgs boson mass degeneracy, interesting phenomenology emerges. Specifically, light ${H_2^\pm}$ and/or ${H_3^\pm}$ states, with mass below $m_t$, are still allowed not only by the BR$(\bar B\to X_s\gamma)$, eEDM and nEDM constraints but also by those induced by the experimental measurements of the top quark decay width and direct searches for top quark decays to charged Higgs bosons with subsequent charged Higgs decays to $\tau^+\nu$, $c\bar s$, and $c\bar b$ at the LHC, as well as the theoretical requirement of perturbativity of the Yukawa couplings. While this is really only possible near the CP-conserving limit and when the lightest of the two charged Higgs states has a mass close to $M_{W}$ (so as to be unconstrained by the LHC, owing to the overwhelming irreducible $W^\pm$ background herein), as well as for small values of $\tan\beta$ and $\tan\gamma$, it nonetheless opens up the possibility of searching for the corresponding signals at the LHC, wherein one could attempt to isolate CP-violating asymmetries in the top-quark decay rates between the positively and negatively charged $H^\pm_i$ ($i=2,3$) channels. 
The region of viable 3HDM parameter space in which these two states are both heavier than the top quark is much larger in comparison, and the mass difference $M_{H_2^\pm} - M_{H_3^\pm}$ can be up to 200~GeV or so, albeit for selected values of the other parameters so that the constraint from BR($\bar B \to X_s \gamma$) can be satisfied. In this case, while it may be difficult to access $H^\pm_i$ signals in direct searches at the LHC, and consequently the possible CP-violating nature of the 3HDM, the latter could well be established in CP asymmetries of ${\bar B}\to X_s/X_d\gamma$ observables at $B$-factories (e.g., Belle-II), as demonstrated in Ref.~\cite{Akeroyd2020}, which in fact can capture striking signals in the case of light charged Higgs bosons too. \acknowledgments This work was supported by the grant H2020-MSCA-RISE-2014 No.\ 645722 (NonMinimalHiggs). H.E.L.\ was also supported by the Natural Sciences and Engineering Research Council of Canada. S.M. is supported in part through the NExT Institute and the STFC Consolidated Grant No. ST/L000296/1. D.R.-C. is supported by the Royal Society Newton International Fellowship NIF/R1/180813 and by the National Science Centre (Poland) under the research Grant No. 2017/26/E/ST2/00470. D.R.-C. and M.S. thank Carleton University for hospitality during the initial stages of this work. The authors thank Shinya Kanemura and Kei Yagyu for useful conversations. \newpage
\section{Introduction} Networks are usually seen as a parsimonious model to describe the backbone architecture of complex systems \cite{latora}. Accordingly, comparing different systems boils down to comparing their architectures, leading to the notion of network similarity measure \cite{distance0,distance1,distance2,distance3,distance4}. In graph theory, two graphs are isomorphic if there exists a vertex permutation which maps one network onto the other, naturally leading to a binary (and, in real-world systems, not very useful) notion of similarity. More useful approaches proceed by projecting networks onto a suite of properties summarised in some vector ${\bf p}$ (e.g. degree distribution, centrality vectors, eigenspectra, etc.) and subsequently constructing a similarity metric $\cal D$ by which two networks $A$ and $B$ are close in the space spanned by $\bf p$ if ${\cal D}(A,B)=||{\bf p}_A - {\bf p}_B||$ is ``small''. Other ideas include the formalisation of graph kernels \cite{distance0}, comparing networks by comparing the statistics of random walks running over them \cite{distance5, distance6}, or using statistical approaches such as estimating topological correlations between networks. While in all these approaches we typically have ${\cal D}(A,B)={\cal D}(B,A)$, i.e. a symmetrical relation, in many cases this undirected relation hides an actual direction (whether causal or not). As an example, consider social networks. The different layers of the social network of an individual are typically correlated: my friends offline tend to be also my friends on Facebook. However, such a {\it relation} is directional: when a new link --i.e., a new social relationship-- is created offline, it is likely that such a link will be replicated within the individual's online social networks too (Facebook, Instagram), but it is proportionally less likely that the direction of influence is inverted.
So the offline and the online social network of a person are probably similar, but such similarity has a direction. Furthermore, in many cases such influence is not direct (not causal). Sometimes, there is a hidden network $\cal C$ that indeed confounds or mediates the relation between $\cal A$ and $\cal B$. For instance, the Facebook and Instagram networks of a certain individual are correlated not because there is a direct, causal relationship between them, but because both these networks are indeed related to the actual (offline) social network of the person.\\ In this work we are interested in understanding and disambiguating when the {\it relation} between two networks $\cal A$ and $\cal B$ (where for instance ${\cal A}\ r \ {\cal B}$ if ${\cal D}({\cal A},{\cal B})<\epsilon$) is a direct one or is underpinned by the hidden interaction with a third network $\cal C$. In particular, $\cal C$ can be independent of $\cal A$ and $\cal B$ (leading to a direct relation ${\cal A}\ r \ {\cal B}$). $\cal C$ can also act as a hidden {\it mediator} or {\it confounding} factor (${\cal A}\ r\ {\cal C}$, ${\cal C}\ r\ {\cal B}\Rightarrow {\cal A}\ r\ {\cal B})$. Finally, ${\cal C}$ can act as a suppressor such that $[{\cal A}\oplus {\cal C}]\ r\ {\cal B}$, where $\oplus$ is to be defined below but conceptually means that ${\cal A}$ and $\cal C$ interact synergistically. The terms {\it mediator}, {\it confounder} and {\it suppressor} are inspired by the information-theoretic framework described in \cite{mackinnon}.\\ In what follows we address these questions by introducing a set-theoretical approach where concepts such as network mediation or network suppression emerge naturally. We benchmark our theory with simple generative models and then apply it to a range of empirical networks, where we unveil and discuss the concomitant roles of mediation and suppression.
\\ \section{Theory} Let $\mathcal{A, B, C}$ be three unweighted networks with adjacency matrices $\bf A, B, C$, all with the same node set and respective edge sets $a,b$ and $c$ (i.e. they can also be identified with the layers of a multiplex network). Let us define the network-Jaccard index of two networks $\text{NJ}({\mathcal A,B})$ as the Jaccard index over their edge sets \begin{equation} \text{NJ}({\cal A,B}):= J(a,b)= \frac{|a \cap b|}{|a \cup b|} \label{eq:Jaccard} \end{equation} $\text{NJ}({\mathcal A,B})$ is a similarity metric, and a distance can be easily defined as $d({\mathcal A,B})=1-\text{NJ}({\cal A,B})$. This quantity alone can be used to initially establish if two networks are related. Regardless of whether such a relation is effectively undirected or is instead causal (influence), in order to explore whether it is underpinned by a third network $\cal C$ we need to quantify the effect of conditioning it on $\cal C$. Let us then define the partial network-Jaccard index $\text{NJ}_p(\mathcal{A,B}|\mathcal{C})$ of two networks $\mathcal{A, B}$ conditioned on a third one $\mathcal{C}$ as the Jaccard index over the edge subsets of $\mathcal{A}$ and $\mathcal{B}$ formed by those edges which are absent in $\mathcal{C}$: \begin{equation} \text{NJ}_p(\mathcal{A,B}|\mathcal{C}) = \frac{|(a\cap b) \setminus c|}{|(a\cup b) \setminus c|} \label{eq:PartJaccard} \end{equation} Let us examine intuitively the effect of conditioning with respect to $\mathcal{C}$ in this way. Suppose initially that $\mathcal{C}$ is totally independent of $\mathcal{A}$ and $\mathcal{B}$. Then we may expect that the Jaccard index, on average, will be the same if evaluated just on the links which are absent in $\mathcal{C}$, so $\text{NJ}_p(\mathcal{A,B}|\mathcal{C})\approx \text{NJ}({\cal A,B})$. Suppose on the other hand that $\mathcal{A}$ is influencing $\mathcal{B}$ indirectly, with the mediation of $\mathcal{C}$.
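As a concrete illustration, both indices can be computed directly on edge sets; the following is a minimal sketch (not code from this work, with function names of our choosing):

```python
# Minimal sketch of the network-Jaccard index NJ (Eq. \ref{eq:Jaccard}) and
# its partial version NJ_p (Eq. \ref{eq:PartJaccard}), on undirected networks.
def edge_set(adj):
    """Edge set {(i, j), i < j} of a symmetric 0/1 adjacency matrix."""
    n = len(adj)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]}

def nj(a, b):
    """NJ(A, B) = |a & b| / |a | b| on edge sets a, b."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def nj_partial(a, b, c):
    """NJ_p(A, B | C): Jaccard index restricted to edges absent from C."""
    union = (a | b) - c
    return len((a & b) - c) / len(union) if union else 0.0
```

For instance, for $a=\{(0,1),(1,2)\}$ and $b=\{(1,2),(2,3)\}$ one gets $\text{NJ}=1/3$, while conditioning on $c=\{(1,2)\}$ removes the only shared edge and yields $\text{NJ}_p=0$.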
Then, intuitively, removing the links of $\mathcal{C}$ would effectively push the partial Jaccard index towards zero. A similar scenario takes place if $\mathcal{A}$ and $\mathcal{B}$ are undirectedly related through a direct relation to a confounding factor $\mathcal{C}$. Finally, $\mathcal{C}$ could be suppressing the influence of $\mathcal{A}$ on $\mathcal{B}$. For example, imagine that $\mathcal{B}$ somewhat depends on whether $\mathcal{A}$ and $\mathcal{C}$ interact synergistically, e.g. if links in $\mathcal{B}$ are more likely to occur when they appear in one of $\mathcal{A}$ and $\mathcal{C}$ but not in the other (probabilistic XOR gate); then removing the links of $\mathcal{C}$ will enhance the partial Jaccard index.\\ To distinguish these three scenarios, we define the Jaccard net difference \begin{equation} \Delta[\mathcal{A,B; C}] :=\Delta= \text{NJ}_p(\mathcal{A,B}|\mathcal{C}) - \text{NJ}({\cal A,B}). \end{equation} Intuitively, if ${\cal C}$ is independent of the relation between ${\cal A}$ and ${\cal B}$ then $\Delta \approx 0$, if it mediates or confounds such relation then $\Delta <0$, and if it acts as a suppressor then $\Delta >0$.\\ \noindent In what follows we construct simple generative models of independent, mediated and suppressor interactions, detailed as algorithms; we prove that these correctly generate the three types of trivariate relations, and depict numerical simulations of the outcome for finite networks. \begin{figure}[htb!] \centering \includegraphics[width=0.75\textwidth]{fig1.png} \caption{{\bf Pure models. }$\text{NJ}_p(\mathcal{A,B}|\mathcal{C})$ vs $\text{NJ}({\cal A,B})$, calculated on $1000$ realizations of triplets of networks of $N=50$ nodes wired such that $\mathcal{C}$ plays no role (green crosses), a mediating role (violet dots) or a suppressing role (red crosses) in the relation between $\mathcal{A}$ and $\mathcal{B}$.
These interactions are constructed using the generative models described in Algorithms 1, 2 and 3 ($p=0.5$ in every case, and $q=1$). For completeness, we depict the histograms $P(\Delta)$ which certify that these algorithms generate networks where $\cal C$ plays an independent role ($\Delta \approx 0$), a mediating role ($\Delta < 0$) or a suppressing role ($\Delta >0$).} \label{fig:toy_numerics} \end{figure} \subsection{Independence} A simple generative model of independence is given by three independent, Erdos-Renyi-type models, where in each of the networks each possible link independently occurs with probability $p$, see Algorithm 1. \begin{algorithm}[h!] \caption{{\textsc{Uncorrelated}()}}\label{euclid} \begin{flushleft} \textbf{Output}: 3 Erdos-Renyi adjacency matrices $\bf A, B, C$ which are uncorrelated ($\Delta \approx 0$) \end{flushleft} \vspace{-4mm} \begin{algorithmic}[1] \State ${\bf A} \gets {\bf 0}$ \State ${\bf B} \gets {\bf 0}$ \State ${\bf C} \gets {\bf 0}$ \For{\texttt{$i=1 \ {\bf to} \ N$}} \For{\texttt{$j=i+1 \ {\bf to} \ N$}} \If{\textsc{rand} $<p$} $A_{ij}, A_{ji} \gets 1$ \EndIf \If{\textsc{rand} $<p$} $B_{ij}, B_{ji} \gets 1$ \EndIf \If{\textsc{rand} $<p$} $C_{ij}, C_{ji} \gets 1$ \EndIf \EndFor \EndFor \State \Return {\bf A,B,C} \end{algorithmic} \vspace{3mm} \end{algorithm} The following theorem can be easily proved. \begin{thm} \label{thm:uncorr} Let $\bf A$, $\bf B$ and $\bf C$ be as in Algorithm 1. Then $\mathbb{E}(\Delta)=0$ and the expected values of $\text{NJ}_p({\cal A,B}|{\cal C})$ and $\text{NJ}({\cal A,B})$ are equal to $p/(2-p)$. \end{thm} \noindent The proof of this theorem is given in the appendix. We conclude that $\text{NJ}_p$ and $\text{NJ}$ coincide on average for uncorrelated networks generated in this way, as partialization with respect to an independent network $\mathcal C$ does not have any effect.
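The prediction of Theorem 1 is easy to check numerically. The sketch below (ours, not code from this work) averages $\text{NJ}$ and $\Delta$ over independently generated ER($p$) triplets:

```python
import random

def er_edges(n, p, rng):
    """Edge set of an Erdos-Renyi G(n, p) graph."""
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def nj(a, b):
    """Network-Jaccard index on edge sets."""
    u = a | b
    return len(a & b) / len(u) if u else 0.0

def delta(a, b, c):
    """Jaccard net difference: NJ_p(A, B | C) - NJ(A, B)."""
    u = (a | b) - c
    njp = len((a & b) - c) / len(u) if u else 0.0
    return njp - nj(a, b)

rng = random.Random(0)
n, p, reps = 50, 0.5, 500
njs, deltas = [], []
for _ in range(reps):
    a, b, c = er_edges(n, p, rng), er_edges(n, p, rng), er_edges(n, p, rng)
    njs.append(nj(a, b))
    deltas.append(delta(a, b, c))
mean_nj = sum(njs) / reps        # close to p / (2 - p) = 1/3
mean_delta = sum(deltas) / reps  # close to 0: C is independent of A and B
```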
In Fig.\ref{fig:toy_numerics} we illustrate this case for finite networks with $N=50$ nodes and $p=0.5$, finding that indeed $\Delta \approx 0$ and that $\text{NJ}_p({\cal A,B}|{\cal C})\approx \text{NJ}({\cal A,B}) \approx 1/3$, in good agreement with the theorem. \begin{figure}[!htb] \centering \includegraphics[width=0.55\textwidth]{matrices.png} \caption{{\bf Adjacency matrix cartoons displaying mediation and suppression.} (Top) Network $\cal C$ is mediating the relation between $\cal A$ and $\cal B$ (Algorithm 2). (Bottom) Network $\cal C$ acts as a suppressor between $\cal A$ and $\cal B$ (Algorithm 3).} \label{fig:matrices} \end{figure} \subsection{Mediation} Suppose now that $\mathcal{A}$ and $\mathcal{B}$ are both dependent on $\mathcal{C}$, i.e. wherever there is a link in $\mathcal{C}$, there is also a link in $\mathcal{A}$ and in $\mathcal{B}$ (see Fig.\ref{fig:matrices} for an illustration of such case, and Algorithm 2 for a formal recipe of this generative model). \begin{algorithm} \caption{{\textsc{Mediated}()}}\label{euclid} \begin{flushleft} \textbf{Output}: 3 Erdos-Renyi adjacency matrices $\bf A, B, C$ where $\bf C$ mediates the relation between $\bf A$ and $\bf B$ ($\Delta < 0$) \end{flushleft} \vspace{-4mm} \begin{algorithmic}[1] \State ${\bf A} \gets {\bf 0}$ \State ${\bf B} \gets {\bf 0}$ \State ${\bf C} \gets {\bf 0}$ \For{\texttt{$i=1 \ {\bf to} \ N$}} \For{\texttt{$j=i+1 \ {\bf to} \ N$}} \If{\textsc{rand} $<p$} $A_{ij}, A_{ji} \gets 1$ \EndIf \If{\textsc{rand} $<p$} $B_{ij}, B_{ji} \gets 1$ \EndIf \If{\textsc{rand} $<p$} $C_{ij}, C_{ji},A_{ij}, A_{ji},B_{ij}, B_{ji} \gets 1$ \EndIf \EndFor \EndFor \State \Return {\bf A,B,C} \end{algorithmic} \vspace{3mm} \end{algorithm} \noindent This describes a situation where $\mathcal{C}$ mediates the relation between $\mathcal{A}$ and $\mathcal{B}$ (or, alternatively, $\mathcal{C}$ is confounding that relation).
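A compact reimplementation of the mediated model, working directly on edge sets (a sketch in the spirit of Algorithm 2, not code from this work):

```python
import random

def mediated_triplet(n, p, rng):
    """Triplet in the spirit of Algorithm 2: every edge drawn for C is also
    forced into A and B, on top of independent ER(p) noise in A and B."""
    a, b, c = set(), set(), set()
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            if rng.random() < p: a.add(e)
            if rng.random() < p: b.add(e)
            if rng.random() < p:          # mediator edge, copied into A and B
                c.add(e); a.add(e); b.add(e)
    return a, b, c

def delta(a, b, c):
    """NJ_p(A, B | C) - NJ(A, B) on edge sets."""
    u, up = a | b, (a | b) - c
    nj = len(a & b) / len(u) if u else 0.0
    njp = len((a & b) - c) / len(up) if up else 0.0
    return njp - nj

a, b, c = mediated_triplet(50, 0.5, random.Random(1))
d = delta(a, b, c)  # negative: partializing on C destroys the shared edges
```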
Partializing with respect to $\cal C$ removes the dependence between ${\cal A}$ and ${\cal B}$ due to $\cal C$, which intuitively leads to $\Delta<0$. The following theorem can be proved: \begin{thm} \label{thm:mediated} Let $\bf A$, $\bf B$ and $\bf C$ be as in Algorithm 2. If $\bf A$ and $\bf B$ share at least one edge besides the common edges shared with $\bf C$, then $\Delta<0$. \end{thm} \noindent The proof of this theorem is also given in the appendix. In Fig.\ref{fig:toy_numerics} we show numerical results for finite networks with $N=50$, with $p=0.5$.\\ \subsection{Suppression} Finally, let us consider the case where $\cal B$ depends on the interaction of $\cal A$ and $\cal C$ such that an edge occurs in $\cal B$ with a certain probability if it appears in $\cal A$ but not in $\cal C$, or alternatively if it appears in $\cal C$ but not in $\cal A$ (see Fig. \ref{fig:matrices} for an illustration). This is akin to a probabilistic XOR gate. Then on average $\text{NJ}_p(\mathcal{A,B}|\mathcal{C})> \text{NJ}(\mathcal{A,B})$, i.e. partializing with respect to $\cal C$ in this case evidences suppression effects.\\ \begin{algorithm} \caption{{\textsc{Suppression}()}}\label{euclid} \begin{flushleft} \textbf{Output}: 3 Erdos-Renyi adjacency matrices $\bf A, B, C$ where $\bf C$ acts as a suppressor between $\bf A$ and $\bf B$ ($\Delta > 0$) \end{flushleft} \vspace{-4mm} \begin{algorithmic}[1] \State ${\bf A} \gets {\bf 0}$ \State ${\bf B} \gets {\bf 0}$ \State ${\bf C} \gets {\bf 0}$ \For{\texttt{$i=1 \ {\bf to} \ N$}} \For{\texttt{$j=i+1 \ {\bf to} \ N$}} \If{\textsc{rand} $<p$} $A_{ij}, A_{ji} \gets 1$ \EndIf \If{\textsc{rand} $<p$} $B_{ij}, B_{ji} \gets 1$ \EndIf \If{\textsc{rand} $<p$} $C_{ij}, C_{ji} \gets 1$ \EndIf \If{$A_{ij}*C_{ij}=0$ \& $A_{ij} + C_{ij}>0$ \& \textsc{rand} $<q$} $B_{ij}, B_{ji} \gets 1$ \EndIf \EndFor \EndFor \State \Return {\bf A,B,C} \end{algorithmic} \vspace{3mm} \end{algorithm} \noindent Algorithm 3 encapsulates the generative model.
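The suppressor model can be sketched analogously (our edge-set reimplementation in the spirit of Algorithm 3, not code from this work):

```python
import random

def suppressor_triplet(n, p, q, rng):
    """Triplet in the spirit of Algorithm 3: an edge is additionally forced
    into B with probability q when it lies in exactly one of A, C (XOR)."""
    a, b, c = set(), set(), set()
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            if rng.random() < p: a.add(e)
            if rng.random() < p: b.add(e)
            if rng.random() < p: c.add(e)
            if ((e in a) != (e in c)) and rng.random() < q:
                b.add(e)
    return a, b, c

def delta(a, b, c):
    """NJ_p(A, B | C) - NJ(A, B) on edge sets."""
    u, up = a | b, (a | b) - c
    nj = len(a & b) / len(u) if u else 0.0
    njp = len((a & b) - c) / len(up) if up else 0.0
    return njp - nj

a, b, c = suppressor_triplet(50, 0.5, 1.0, random.Random(2))
d = delta(a, b, c)  # positive: removing C's edges reveals the A-B overlap
```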
The following theorem can be proved: \begin{thm} Let $\bf A$, $\bf B$ and $\bf C$ be as in Algorithm 3. Then $\mathbb{E}(\Delta)>0$. \end{thm} \noindent The proof of this theorem is also given in the appendix. In Fig.\ref{fig:toy_numerics} we show numerical results for finite networks with $N=50$, with $p=0.5$ and $q=1$, which are in full agreement with the theorem. \begin{figure*}[!htb] \centering \includegraphics[width=0.48\textwidth]{res_diagram.png} \includegraphics[width=0.48\textwidth]{res_shuf_diagram.png} \caption{(Left panel) Values of $\Delta$ obtained by averaging 50 realizations of an interpolating model giving a blend of mediation and suppression (see the text), where each network has $N=300$ nodes and a varying model parameter $p$, as a function of the interpolation parameter $\mu$ (in each of the $N(N-1)/2$ steps we apply Algorithm 2 with probability $1-\mu$ and Algorithm 3 with probability $\mu$). (Right panel) Revised version of the left panel, this time selectively removing all suppression and mediation effects according to our theoretical framework. The original curve, for an interpolating model generating a blend of mediation and suppression, is depicted in black. The blue curve is a pure randomisation, which generates $\Delta\approx 0$. The dotted green line corresponds to a selective rewiring that removes all hidden suppression: in that case the curve always stays below zero (increasing $\mu$ increases the weight of Algorithm 3, whose effect is then selectively rewired away, effectively randomising the networks and pushing $\Delta$ towards zero). The pink curve is the result of a selective rewiring that removes all hidden mediation: in that case the curve is pushed to the regime $\Delta >0$. As $\mu$ increases, the weight of Algorithm 3 (generating suppression) increases, hence pushing $\Delta$ to larger values.
The dashed yellow and pink lines are the result of selective rewiring applied to the randomised networks, and only highlight the residual values of suppression or mediation which occur by chance (as a finite-size effect) in randomised networks.} \label{fig:interpol} \end{figure*} \subsection{Coexistence of mediation and suppression effects} When we abandon ideal cases where only suppression or only mediation is present, and move towards a mixture of the two, it becomes evident that both effects can be hidden, and a single $\Delta$ cannot in principle tell us whether the system features {\it only} one of the two mechanisms. To investigate the coexistence of both mechanisms, we run a simulation in which Algorithms 2 and 3 above are combined: in each step, with probability $1-\mu$ we apply Algorithm 2 and with probability $\mu$ we apply Algorithm 3. The resulting model linearly interpolates between mediation and suppression, such that the measurable net difference is $\Delta = (1-\mu) \Delta_{\text{med}} + \mu\Delta_{\text{syn}}$, where $\Delta_{\text{med}}$ and $\Delta_{\text{syn}}$ are hidden. The results are depicted in the left panel of Fig.\ref{fig:interpol}, for different instances of the parameter $p$ and $q=1$. We can have negative, null or positive values of $\Delta$ underpinned by a balance of both mediation and suppression mechanisms; in fact, for this concrete set of parameters, the effect of mediation on $\Delta$ is slightly stronger than the effect of suppression (this imbalance gets more pronounced for $q<1$). This simple interpolating model thus leads us to conclude that, in real cases, we might for instance naively measure $\Delta <0$ and misleadingly conclude that there is only mediation when in fact both mediation and suppression could be at play. Accordingly, a single measure of the combined effects of suppression and mediation is not enough to resolve the simultaneous presence of both.\\ \begin{algorithm}[h!]
\caption{{\textsc{Null}()}}\label{Algo4} \begin{flushleft} {\bf Input}: 3 adjacency matrices {\bf A, B, C} and mode $X$ (mediation (M) or suppression (S))\\ {\bf Output}: Null model net difference $\Delta_{W,X}$ \end{flushleft} \vspace{-3mm} \begin{algorithmic}[1] \State ${\bf B}2 \gets \textsc{FullRewire}({\bf B})$ \State $\Delta_X \gets \textsc{SelectiveRewire}({\bf A,B,C}, X)$ \State $\Delta_{RX} \gets \textsc{SelectiveRewire}({\bf A,B2,C}, X)$ \State $\Delta_{W,X} \gets [\Delta_X - \Delta_{RX}]/X_{\text{max}}$ \State \Return ${\Delta_{W,X}}$ \end{algorithmic} \vspace{3mm} \begin{algorithmic}[1] \Function{\textsc{FullRewire}}{$\bf G$} \For{$\ell_{ij} \in {\bf G}$} \If{\textsc{rand} $<p$} $\ell_{ij} \gets \ell_{kl}$, $\ell_{kl} \gets \ell_{ij}$ \EndIf \EndFor \State \Return {\bf G} \EndFunction \end{algorithmic} \vspace{3mm} \begin{algorithmic}[1] \Function{\textsc{SelectiveRewire}}{{\bf A, G, C}, X} \If{$X=S$} \For{$\ell_{ij} \in {\bf G}$} \If{$\ell_{ij}=1 \ \& \ A_{ij}=1\ \& \ C_{ij}\neq1$} $\ell_{kl} \gets \ell_{ij}; \ \ell_{ij} \gets 0 $ \EndIf \EndFor \EndIf \If{$X=M$} \For{$\ell_{ij} \in {\bf G}$} \If{$\ell_{ij}=1 \ \& \ A_{ij}=1\ \& \ C_{ij}=1$} $\ell_{kl} \gets \ell_{ij}; \ \ell_{ij} \gets 0 $ \EndIf \EndFor \EndIf \State $\Delta' \gets \text{NJ}_p({\bf A,G}|{\bf C}) - \text{NJ}({\bf A,G})$ \State \Return $\Delta'$ \EndFunction \end{algorithmic} \end{algorithm} \noindent In order to disentangle both effects we now introduce Algorithm~\ref{Algo4}, which constructs null models for both mediation (M) and suppression (S). To construct a surrogate where all suppression has been removed, starting from $\cal A,B,C$, we perform a selective rewiring in $\cal B$, where only those links in $\cal B$ which are also present in $\cal A$ but not in $\cal C$ (or that also appear in $\cal C$ but not in $\cal A$) are rewired randomly.
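Our reading of the \textsc{SelectiveRewire} step can be sketched as follows (a simplified edge-set version, not code from this work, in which the targeted links of $\cal B$ are moved to uniformly random unoccupied node pairs, preserving the edge count; names and details are ours):

```python
import random

def selective_rewire(a, b, c, n, mode, rng):
    """mode='S': rewire B-edges lying in exactly one of A, C (removes suppression);
       mode='M': rewire B-edges lying in both A and C (removes mediation)."""
    if mode == 'S':
        target = {e for e in b if (e in a) != (e in c)}
    else:
        target = {e for e in b if e in a and e in c}
    kept = b - target
    # candidate positions for the rewired links: any pair not already kept
    free = [(i, j) for i in range(n) for j in range(i + 1, n) if (i, j) not in kept]
    return kept | set(rng.sample(free, len(target)))
```

$\Delta$ is then recomputed on the rewired layer and compared with the same operation applied to a fully randomised ${\bf B}2$, as in Algorithm~\ref{Algo4}.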
Similarly, to construct a surrogate where all mediation has been removed, starting from $\cal A,B,C$, we perform a selective rewiring in $\cal B$, where only those links in $\cal B$ which are present both in $\cal A$ and in $\cal C$ are rewired randomly.\\ We then compute again the net Jaccard difference on the rewired versions, which are labelled $\Delta_S$ (applied to the case where suppression is removed) and $\Delta_M$ (applied to the case where mediation is removed), respectively. The heuristic is then simple: if there is e.g. hidden suppression in the data (respectively mediation), then $\Delta_S < \Delta$ (respectively $\Delta_M > \Delta$), whereas if such a mechanism is absent then $\Delta_S\approx \Delta$ ($\Delta_M\approx\Delta$).\\ Now, we also need to take into account finite-size effects, which inevitably add spurious mediation and suppression effects (i.e. triplets of purely random, uncorrelated networks will show small but non-zero mediation and suppression due to chance). To counterbalance such effects, we also proceed to selectively remove suppression and mediation from a completely randomly rewired version of $\cal B$, which we call $\bf B2$, leading to two new indices: $\Delta_{RS}$ and $\Delta_{RM}$. We can finally combine these to produce normalised indices of mediation ($\bar m$) and suppression ($\bar s$) by dividing by the maximum possible value of suppression (mediation) attainable by applying a generative model such as Algorithm $3$ ($2$) to the triplet of networks, i.e.
$$ {\bar m}= \frac{\Delta_{S}-\Delta_{RS}}{M_{\text{max}}}, \ \ {\bar s}= \frac{\Delta_{M}-\Delta_{RM}}{S_{\text{max}}}.$$ Accordingly, the role that $\cal C$ plays in the relation of $\cal A$ and $\cal B$ is described by the pair $(\bar m, \bar s)$.\\ Additionally, a significance value for these indices can be defined as: $$\sigma_{\{S,M\}} = \left\lvert \frac{\Delta_{\{S,M\}}-\Delta_{R\{S,M\}}}{\text{std}(\Delta_{R\{S,M\}})} \right\rvert.$$ It is worth stressing that for large networks finite-size effects become weaker and $\Delta_{R\{S,M\}}$ tends to zero; correspondingly, any non-zero amount of suppression or mediation becomes highly significant.\\ \begin{figure}[!htb] \centering \includegraphics[width=0.85\textwidth]{hists.png} \caption{{\bf Illustration of Algorithm 4. } We illustrate the procedure of disentangling mediation and suppression in two examples. (a) Selective rewiring applied to a triplet of independent networks generated by Algorithm 1, with $N=300$ nodes and $p=0.3$. The original value of the net Jaccard difference (close to zero) is denoted $\Delta_{0}$. Each selective rewiring is repeated 500 times, and histograms associated with each process are built. The selective removal of suppression and mediation yields only a marginal change in $\Delta$, similar to the one obtained on a randomised version, which indicates that the amount of suppression and mediation in this configuration is residual and only due to finite-size effects, as expected since $\cal C$ is independent of $\cal A, B$ by construction. (b) Similar to panel (a) but applied to the C Elegans triplet, where network $\cal C$ is assigned to the electrical layer. The actual value $\Delta_{0}<0$, initially suggesting mediation.
The selective removal of suppression (mediation) significantly pushes the histograms towards more negative (positive) values of $\Delta$ --much more than the same selective removal performed on a randomisation-- suggesting that there exists a significant amount of both mediation and suppression. All histograms are built from $500$ independent rewiring realisations. } \label{fig:random_hists} \end{figure} \noindent For illustration, in the left panel of Fig.\ref{fig:random_hists} we show the effects of the sequence of selective rewirings on $\Delta$ applied to a particular example of three independent Erdos-Renyi networks with $N=300$ nodes and wiring probability $p=0.3$ (i.e., Algorithm 1). The original value of $\Delta$ is very close to zero, as are the ones obtained from a full randomisation of $\cal B$. Since in this example the networks are independent, any mediation or suppression is only a spurious residual due to finite-size effects, and this residual is flagged in similar terms by a selective rewiring on the actual network $\cal B$ ($\Delta_X$) and on its full randomisation ($\Delta_{RX}$); hence the violet and green histograms overlap, and similarly the pink and pale blue ones. As an additional illustration, we applied the sequence of selective rewirings to the results of the interpolating model. Results are shown in the right panel of Fig.\ref{fig:interpol}.
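The interpolating model can be sketched as follows (our simplified edge-set reimplementation, not code from this work: at $\mu=0$ it reduces to the mediated model of Algorithm 2 and at $\mu=1$ to the suppressor model of Algorithm 3):

```python
import random

def blended_triplet(n, p, q, mu, rng):
    """Per node pair, apply the mediated wiring (Algorithm 2) with probability
    1 - mu and the XOR-type suppressor wiring (Algorithm 3) with probability mu."""
    a, b, c = set(), set(), set()
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            if rng.random() < p: a.add(e)
            if rng.random() < p: b.add(e)
            if rng.random() < mu:                  # suppressor step
                if rng.random() < p: c.add(e)
                if ((e in a) != (e in c)) and rng.random() < q:
                    b.add(e)
            else:                                  # mediated step
                if rng.random() < p:
                    c.add(e); a.add(e); b.add(e)
    return a, b, c

def delta(a, b, c):
    """NJ_p(A, B | C) - NJ(A, B) on edge sets."""
    u, up = a | b, (a | b) - c
    nj = len(a & b) / len(u) if u else 0.0
    njp = len((a & b) - c) / len(up) if up else 0.0
    return njp - nj

rng = random.Random(3)
d_med = delta(*blended_triplet(60, 0.5, 1.0, 0.0, rng))  # pure mediation: negative
d_sup = delta(*blended_triplet(60, 0.5, 1.0, 1.0, rng))  # pure suppression: positive
```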
\section{Empirical networks} We now turn to real-world networks and consider four types of 3-layer multiplex networks, including (i) different modes of social interaction on Twitter during the 2014 New York City Climate March (NYC), (ii) different types of social interaction --proximity, phone call/text message, Facebook-- as collected in Denmark (Copenhagen), (iii) different interpersonal relations inside a corporation (Lazega law firm) and (iv) different synaptic junctions in a brain network (C Elegans); see the appendix for details.\\ To begin with, in Fig.\ref{fig:J} we confirm that, with the exception of the pair NYC Retweets vs Replies, all other possible pairs of layers in the four examples are indeed genuinely related --i.e., they show substantially more similarity than a null model. In each case we plot $\text{NJ}({\cal A,B})$ (blue bars) and, as a reference, black lines display the average result of $\text{NJ}({\cal A,B})$ ($\pm$ one standard deviation) after $\cal A,B$ have been appropriately randomised. We confirm that the similarity between each pair of networks is not the result of a finite-size effect, and thus exploring the role of a third network ($\cal C$) is justified.\\ We then turn to analyse the role of $\cal C$. For illustration, the whole selective rewiring process described in Algorithm 4 is depicted in detail for a specific example (the case of the C Elegans multiplex, where we explore the role that the electrical synapses layer plays in the relation between the monadic and the polyadic layers) in the right panel of figure \ref{fig:random_hists}. We provide the original value $\Delta_{0}$ and the distributions of the $\Delta$ values obtained after each of the rewiring procedures, concluding that this network indeed shows non-negligible mediation and suppression effects.\\ \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{J.png} \caption{{\bf Similarity between pairs of real-world networks.
} Values of NJ$({\cal A,B})$ computed on the four empirical multiplex networks considered (for each multiplex, we consider the three pair permutations). As a reference, black horizontal lines display NJ$_{\text{null}}({\cal A,B})$, the average over several randomisations of layers $\cal A, B$ (red lines correspond to $\pm$ one standard deviation). We conclude that all pairs of layers are genuinely related, with the possible exception of the pair NYC Replies vs Retweets.} \label{fig:J} \end{figure} \noindent The normalised indices of mediation and suppression for the rest of the permutations in all the real-world multiplex networks are reported in figure \ref{fig:med_syn_2d}. The first thing we observe is that overall there is substantially more mediation than suppression, although we also observe the latter mechanism. All effects are statistically significant ($\sigma\gg1$ in every case, data not shown) except for the suppression in the Lazega advice layer and the NYC Retweet layer, where $\sigma \approx 2$ in both cases.\\ In the case of the Copenhagen multiplex, the only layer which plays a significant role in the relation of the other two is the phone/sms layer, which we show displays both mediation and suppression effects, although the former is notably stronger. For the proximity network we considered values averaged over the whole four weeks, and used an adjacency matrix with a density comparable to that of the Facebook links, corresponding to the closest proximity range. The phone network was also built irrespective of the timing of the interactions.\\ In the case of the Lazega law firm, all three layers display very high mediation, but such effect is notably stronger for the co-working network, i.e. within this firm dyadic friendships are related to dyadic advisory relations, and this is mediated by the fact that these pairs are co-working.
Only the friendship layer displays a suppressor effect (the one played by the advice layer is non-significant), i.e. pairs of individuals that are not co-working can have an advisory relationship (or otherwise) {\it because} they are friends, while co-working pairs will also have an advisory relationship without the need of being friends.\\ In the case of the Twitter triplet (NYC), only the Replies network shows a mediating effect.\\ Finally, in the nervous system multiplex (C Elegans) we can see that all layers display some amount of mediation and suppression. The electrical synapses layer is the one displaying the strongest suppressing effect, whereas the monadic chemical layer is the one that displays the largest mediation role. The increased suppression role of the layer of electrical synapses reflects the evidence that chemical and electrical synapses closely interact and serve related functions \cite{pereda_celegans}, so that when either of the chemical layers is taken as $\cal{C}$ the presence of the other chemical layer accounts for a reduced suppression/mediation.\\ \begin{figure}[!htb] \centering \includegraphics[width=0.85\textwidth]{m_s_plane.png} \caption{Normalised indices of mediation and suppression for four real-world multiplex networks. For each multiplex we permute the role of $\cal C$ across the three layers. Dot clouds are the result of repeating the rewiring procedures over 500 realisations (for multiplexes with a large number of nodes the figure displays very little dispersion, and the cloud is only perceptible for the Lazega triplet, which is indeed the smallest multiplex).} \label{fig:med_syn_2d} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{bluetooth_cop.png} \caption{{\bf Modulating mediation.
} (Outer panel) $(\bar m,\bar s)$-plane for the role of the proximity network with respect to the relation between the Facebook and the phone/sms networks (Copenhagen multiplex), for different proximity networks reconstructed by changing the spatial scale over which links are defined. The network displays a very clear mediating effect only when links represent close physical proximity (yellow); for other spatial scales such effect is lost. (Inset panel) The link average weight distribution (Received Signal Strength Indicator, RSSI) is depicted in black. According to this distribution, we build seven non-overlapping RSSI windows. RSSI is inversely related to distance (the larger the signal strength, the closer two nodes are), so these windows represent different spatial scales. The right panel describes $\bar m + \bar s$ for each spatial scale, further emphasising that only at the closest spatial scale is the proximity network playing a strong mediating role.} \label{fig:copmod} \end{figure} \noindent As a final analysis, and in order to show how suppression and mediation can be functionally modulated within a particular real-world example, we examine the role played by the proximity layer in the relation between the Facebook and phone calls/sms layers when that layer is systematically varied. In this multiplex, the proximity network is originally reconstructed using Bluetooth signal strength between participants, by assigning a link between each pair of nodes whose relative Bluetooth strength, averaged over the whole period of the recording (four weeks), belongs to a given range. In order to build different proximity networks (each of them accounting for a different spatial scale) while keeping the edge density constant, we build non-overlapping Bluetooth intensity ranges by taking into account the original Bluetooth intensity distribution (see the inset of Fig.\ref{fig:copmod}).
In this way, ranges are non-uniform but the number of edges in each range is the same, hence the resulting proximity networks all have the same edge density while describing different scales of physical proximity. The intuition is that only the smallest scale yields a meaningful proximity network, and for larger spatial scales the resulting proximity networks do not really imply any real interaction between the nodes. Then, for each resulting proximity network, we estimate the role it plays in the relation between the other two networks and plot it in the $(\bar m, \bar s)$-plane. Results are shown in the outer panel of Fig.\ref{fig:copmod}. For proximity networks describing large spatial scales, the network is essentially independent of the other two, and it is only when the proximity network captures the smallest spatial scale (i.e. when links describe real physical proximity) that an indirect effect (notably, mediation) gets amplified. \section{Discussion} In this paper we have proposed a simple strategy to assess the role that a given network might play in shaping the relation between two other networks, thus enlarging the paradigm of network similarity beyond the classical pairwise comparison. This approach is aligned with a recent endeavour that aims at going beyond dyadic interactions in the characterisation of complex systems \cite{physrep}, and takes inspiration from the causal mediation literature \cite{mackinnon, lizierIID}. We make use of a set-theoretic approach to define a similarity metric between a pair of networks and to further explore whether such a relation is independent of, mediated by or suppressed by a third network which might be hidden. We introduce simple generative models that, as we prove, produce pure mediation and suppression. We then explore the coexistence of mediation and suppression and develop a procedure to disentangle both indirect effects.
The whole methodology is subsequently applied to a range of real-world, 3-layer multiplex networks, and we unveil previously unnoticed mediation and suppression effects in social and brain networks.\\ We hope this work sparks further research in several areas. First, the simplicity and tractability of the approach make it easily applicable across disciplines. Second, our approach can be readily generalised to consider not just isolated triplets of networks. Indeed, one can sequentially apply this protocol to a multiplex network with an arbitrary number of layers, or to a temporal network, and accordingly derive concepts of causality and directionality in this context. \section*{Acknowledgements} LL acknowledges funding from EPSRC EP/P01660X/1. SS was supported by MIUR project PRIN 2017WZFTZP “Stochastic forecasting in complex systems”. DM was supported by the Belgian Embassy in the United Kingdom through the Belgian Chair at the University of London and by the Flemish Fund for Research (FWO) through a sabbatical bench fee. \section*{Appendix: Proofs of theorems} \subsection*{Proof of theorem 1} \proof{Since $\bf A$, $\bf B$ and $\bf C$ are ER(p), all links are independent. We then use expected values, which should be representative as $N$ is large, to directly compute the expected size of the different sets in Eq.~\ref{eq:Jaccard}. With a total of $\ell :=L(L-1)/2$ independent trials, it is easy to see that the expected sizes are $$\mathbb{E}(|a|)=\mathbb{E}(|b|)=2\ell p,\ \mathbb{E}(|a\cap b|)=2\ell p^2.$$ Since $|a\cup b|=|a| + |b| - |a\cap b|$ we also have $$\mathbb{E}(|a\cup b|)=2\ell (2p- p^2),$$ such that $$\mathbb{E}(J(A,B))=\frac{p}{2-p}.$$ Consider now $J_p(A,B|C)$. On the one hand, we have $(a\cap b)\setminus c=(a\cap b) \cap c'$, where $c'$ is the complement of $c$. 
Since all networks are independent, $$\mathbb{E}(|(a\cap b)\setminus c|) = 2\ell p^2(1-p).$$ On the other hand, we have $$\mathbb{E}(|(a\cup b)\setminus c|)=2\ell \cdot (1-p) \cdot \underbrace{Prob(a\cup b)}_{p(2-p)}=2\ell\cdot(1-p)\cdot(2p-p^2),$$ so that altogether $\mathbb{E}(J_p(A,B|C))=\mathbb{E}(J(A,B))=\frac{p}{2-p}$, and $\mathbb{E}(\Delta)=\mathbb{E}(J_p(A,B|C)) - \mathbb{E}(J(A,B))=0$. \qed } \subsection*{Proof of theorem 2} \proof{In this case the proof uses basic arguments of set theory. We aim to prove that $J_p(A,B|C)<J(A,B)$, i.e. $$\frac{|(a\cap b)\setminus c|}{|(a\cup b)\setminus c|} < \frac{|a\cap b|}{|a \cup b|}.$$ Let us define the residual sets $R_a=a\setminus c$ and $R_b=b\setminus c$, and the residuals' intersection $R_i = R_a \cap R_b$ and union $R_u=R_a \cup R_b$. $R_a$ and $R_b$ are clearly non-empty according to Algorithm 2. $R_i$ is not guaranteed to be non-empty as $A$ and $B$ are probabilistic, so we need to assume in what follows that $R_i \neq \O$. $R_u$ is trivially non-empty according to Algorithm 2.\\ Since for any three sets $x$, $y$ and $z$ we have that $(x\cap y)\setminus z= (x\setminus z)\cap(y\setminus z)$ and $(x\cup y)\setminus z= (x\setminus z)\cup(y\setminus z)$, we also have that $|(a\cap b)\setminus c|=|R_i|$ and $|(a\cup b)\setminus c|=|R_u|$. Incidentally, our previous assumption can now be easily interpreted: we need to assume that networks $A$ and $B$ share at least one edge besides the common edges shared with $C$, which is why this assumption is stated in the theorem.\\ Now, since in this case $c \subset (a\cap b)$ it is easy to see that $a \cap b = c \cup R_i$ and $a\cup b = c\cup R_u$. Also by construction we have $c \cap R_i = \O$ and therefore $|a \cap b| = |c| + |R_i|$. Similarly, also by construction $c \cap R_u = \O$ and therefore $|a \cup b| = |c \cup R_u|= |c| + |R_u|$. 
Therefore we aim to prove that $$\frac{|R_i|}{|R_u|}<\frac{|c| + |R_i|}{|c| + |R_u|}.$$ Since $R_i$ and $R_u$ are respectively the intersection and the union of two sets, it trivially follows that $|R_i|\leq |R_u|$, with equality only when $a=b$; otherwise the inequality is strict. Let us indeed assume $A\neq B$, thus enforcing the strict inequality. Rearranging terms: \begin{eqnarray} && |R_i|< |R_u| \nonumber \\ \iff && \frac{1}{|R_i|} > \frac{1}{|R_u|}\nonumber \\ \iff && \frac{|c|}{|R_i|} > \frac{|c|}{|R_u|}, \ (|c|>0)\nonumber \\ \iff && 1+\frac{|c|}{|R_i|} > 1+\frac{|c|}{|R_u|} \nonumber \\ \iff && \frac{|R_i|+|c|}{|R_i|} > \frac{|R_u|+|c|}{|R_u|} \nonumber \\ \iff && \frac{|R_i|}{|R_u|}< \frac{|c| + |R_i|}{|c| + |R_u|} \end{eqnarray} \qed} \subsection*{Proof of theorem 3} \proof{ The first thing to observe is that in {\it Algorithm 3} the condition $c \subset (a\cap b)$ is not met, and therefore Theorem~\ref{thm:mediated} does not hold in this case. Let us now define the following sets:\\ Let $a_p\subset a$ be the subset of edges in $a$ such that $a_p \cap c=\O$. The elements of this subset will be in $b$ with probability $q$, hence on average $|a_p|q$ edges of $a_p$ will be in $b$.\\ Let $c_p\subset c$ be the subset of edges in $c$ such that $c_p \cap a=\O$. The elements of this subset will be in $b$ with probability $q$, hence on average $|c_p|q$ edges of $c_p$ will be in $b$.\\ Let $r\subset b$ be the subset of edges in $b$ which are neither in $a$ nor in $c$, i.e. $r\cap (a\cup c)=\O$.\\ By symmetry, we have $\mathbb{E}(|a_p|)=\mathbb{E}(|c_p|)$. 
According to Algorithm 3, we have\\ $$\mathbb{E}(|(a\cap b)\setminus c|) = \mathbb{E}(|a\cap b|)=\mathbb{E}(|a_p|) q,$$ $$\mathbb{E}(|(a\cup b)\setminus c|)= \mathbb{E}(|a_p|) + \mathbb{E}(|r|),$$ $$\mathbb{E}(|a\cup b|)=\mathbb{E}(|a_p|) + \mathbb{E}(|c_p|)q + \mathbb{E}(|r|), $$ and thus $$\mathbb{E}(J_p(A,B|C))=\frac{\mathbb{E}(|a_p|) q}{\mathbb{E}(|a_p|) + \mathbb{E}(|r|)},$$ whereas $$\mathbb{E}(J(A,B))= \frac{\mathbb{E}(|a_p|) q}{\mathbb{E}(|a_p|) + \mathbb{E}(|c_p|)q + \mathbb{E}(|r|)}.$$ Therefore as long as $\mathbb{E}(|c_p|)q>0$, we have $\mathbb{E}(J_p(A,B|C)) > \mathbb{E}(J(A,B))$, yielding $\mathbb{E}(\Delta)>0$. Since $A$ and $C$ are independent, $c_p$ is non-empty with high probability, so given a value of $q$, for sufficiently large $N$ this condition is always met. \qed } \section*{Appendix: empirical networks} \begin{table}[htb] \centering \begin{tabular}{ |p{2cm}||p{1.99cm}|p{1.35cm}|p{2.5cm}|p{2.3cm}|p{2.cm}| } \hline \multicolumn{6}{|c|}{Summary of empirical networks} \\ \hline {\bf Triplet} & {\bf Type} & $N$ &{\bf Network} $\#1$&{\bf Network} $\#2$&{\bf Network} $\#3$\\ \hline C. elegans & brain & 279 & monadic (1639 edges) & polyadic (3193 edges) & electrical (1031 edges) \\ \hline NYC& Twitter &102439 & retweet (213754 edges) & mentions (131679 edges) & replies (8062 edges)\\ \hline Lazega law firm & social (offline) & 71 & cowork (892 edges) & friendship (575 edges) & advice (1104 edges)\\ \hline Copenhagen & social (offline/online) & 751 & proximity (13020 edges) & Facebook (12847 edges) & calls/sms (1760 edges)\\ \hline \end{tabular} \caption{{\bf Summary of network specificities.} The first three examples are multiplex networks collected from \url{comunelab.fbk.eu/data.php}, whereas the fourth one is collected from \url{icon.colorado.edu}. 
The C. elegans multiplex describes the Caenorhabditis elegans connectome, where layers correspond to different synaptic junctions: chemical monadic ("MonoSyn"), polyadic ("PolySyn") and electrical ("ElectrJ") \cite{celegans, muxviz}. The NYC multiplex describes Twitter activity during an exceptional event, the NYC Climate March in 2014, and layers correspond to retweets, mentions and replies \cite{nyc}. The Lazega law firm multiplex depicts three kinds of social relationships (co-work, friendship and advice) between partners and associates of a corporate law partnership \cite{lazega1,lazega2}. Finally, the Copenhagen multiplex describes social interaction in three layers corresponding to phone calls and text messages (merged), Facebook friendships, and proximity as measured by the strength of the Bluetooth signal \cite{copenhagen}. } \label{tab:nets} \end{table}
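To make the set-theoretic quantities used throughout the proofs concrete, here is a minimal Python sketch of the plain and conditional Jaccard similarities over edge sets. The function names and the toy edge sets are ours, purely for illustration; note that in the toy example $C$ is a subset of $a\cap b$, so, consistently with Theorem 2, the conditional similarity drops below the plain one (a mediation-like effect).

```python
def jaccard(a, b):
    """Plain Jaccard similarity J(A,B) between two edge sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def conditional_jaccard(a, b, c):
    """Conditional Jaccard J_p(A,B|C): similarity of A and B
    after removing the edges of C from both intersection and union."""
    union = (a | b) - c
    return len((a & b) - c) / len(union) if union else 0.0

# Toy example: C covers part of the edges shared by A and B.
A = {(1, 2), (2, 3), (3, 4), (4, 5)}
B = {(1, 2), (2, 3), (3, 4), (5, 6)}
C = {(1, 2), (2, 3)}

j = jaccard(A, B)                    # |A∩B|/|A∪B| = 3/5
jp = conditional_jaccard(A, B, C)    # 1/3, strictly smaller
```

With these toy sets, $\Delta = J_p - J < 0$, which is the signature of mediation in the terminology of the paper.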
\section{Introduction}\label{sec:introduction} 3D depth-sensing devices have become ubiquitous over the last decades, finding applications in robotics, 3D scanning, autonomous driving, drones, agriculture, biology, geology, and atmospheric science, as well as in military, law enforcement, and even gaming uses. The LIDAR technology has a wide market of its own, and its active laser illumination makes it resilient to low-light environments and harsh weather conditions such as rain and fog. A LIDAR camera illuminates the scene using a laser and senses the scattered light using a photo-detector. The LIDAR camera's performance and accuracy depend on external parameters, such as the distance and reflectivity of the target, and on various internal parameters, which control the behavior of its inner components. For example, the laser driver electronics affect the laser's rise time, fall time, and the general shape of the transmitted signal. Another example is the working point of the photo-detector, which controls the amplification of the received signal.\\ To calculate an object's distance, a binary code pulse, denoted $\alpha$, which is a sequence of light pulses, is transmitted by the LIDAR camera transmitter. The pulse hits the target and is reflected back to the receptor (we denote the received signal by $\alpha'$). The round-trip travel time of the light is best estimated by the location of the peak of the correlation between $\alpha$ and $\alpha'$, denoted $t_{argmax}$. Denoting the receiver's sampling rate by $f$, the speed of light by $c$, and the system delay\footnote{The system delay might change as a function of the camera parameters, and is used here as an averaged anchor.} in mm by $d$, the estimated distance of the object is given by $\Delta =\dfrac{1}{2} \cdot (\dfrac{t_{argmax}}{f} \cdot c-d)$, ignoring triangulation corrections. However, the method described above is hard to rely on: the back-scattered pulse of bits behaves differently than the transmitted one. 
The behavior is affected by the distance, the camera's parameters, ambient light, thermal noise, and even the previously transmitted bits. This unexplained behavior causes uncontrollable inaccuracies in the depth estimation. \begin{figure}[h] \centering \includegraphics[width =\columnwidth]{figures/real_code.png} \caption{The real transmitted code: each horizontal row is a set of pixels representing the transmitted code, which was transmitted 64 times in a row. } \end{figure} \begin{figure}[h] \centering \includegraphics[width =\columnwidth]{figures/worst_percent_inliers_from_data.png} \caption{Demonstration of the instability of the back-scattered code: each horizontal row is a set of pixels representing the back-scattered code, sampled 64 times with the same transmitted code and camera parameters. It can be seen that the code is phase-shifted from the original transmitted code. Finding a reliable estimation of the distance to the object is hard due to the amount of noise and the inconsistency between the different rows of the back-scattered codes, which makes the correlation between the transmitted code and the back-scattered one unreliable.} \end{figure} Since the number of parameters is vast, finding the optimal set of parameters for a specific task might be challenging. In many cases, each camera component is optimized separately to achieve maximal performance in terms of SNR. However, once combined, the parameters that are optimal per component might not be optimal in terms of the camera's overall performance. The optimal working point of the depth camera depends on the application it is used for, as well as on the scene itself and the objects we wish to capture. 3D scanning, for example, will require maximal depth accuracy, while autonomous driving applications might settle for lower accuracy in exchange for object detection at longer distances. 
For a given objective, searching the space of all possible parameter values for its optimum is costly, inefficient, and impractical when the space is sufficiently large. To find such an optimum, one might utilize standard mathematical optimization techniques such as gradient approaches. However, an analytical formulation of the gradient with respect to the parameters might not be available, and numerical gradient calculations become costly as the number of parameters grows. Besides, some parameters might only take discrete values, which causes inaccurate numerical gradient estimations. \\ To push research on laser-based semantic segmentation, \cite{behley2019semantickitti} annotated the entire KITTI Vision Odometry Benchmark, providing a 360\textdegree{} field of view of dense LIDAR depth points, to generate three new benchmark tasks. Such extensive annotated LIDAR depth data can serve many purposes; e.g., \cite{wu2017squeezeseg} leveraged it to teach a deep network model to perform real-time semantic segmentation of road objects. \\ Generative Adversarial Networks (GANs) are two different neural networks, a Generator and a Discriminator, that train against each other in an adversarial process. The Discriminator is fed with real independent data and with the Generator model's output, and its goal is to distinguish between real and fake data. The Generator, on the other hand, tries to fool the Discriminator and make it classify its outputs as real. The end goal of this process is two well-trained models: the Discriminator is an expert in deciding which samples are drawn from the real data distribution, while the Generator is an expert in generating new unreal samples that have the same distribution as the real data, as described in \cite{gans,dcgans}. An essential capability of GANs is to create an output conditioned by some attributes fed as an additional input to both models. 
The Generator will then generate different classes of the original distribution, depending on the condition. Such a model is called a CGAN \cite{cgans}.\\ We show here how to use this ability of GANs to learn how the LIDAR back-scattered code (the signal that returns from the object to the camera detector) behaves and is distributed as a function of the camera parameters. Our model learns an interpolation and extrapolation of the parameter space to obtain a continuous and differentiable forward function out of discrete real-world parameter options. We then optimize the parameters to obtain an optimal calibration of the camera. We focus on short-range objects and low-power LIDAR for energy-saving and eye-safe products. \section{Related Work} Improving LIDAR performance is crucial for enhancing depth measurements in many of the mentioned uses. \cite{xu2019depth} used a model that receives a LIDAR scan together with an RGB image as input, modeling the geometric constraints between the surface normal and depth and combining this information with a calculation of the confidence level of sparse LIDAR measurements to lower the impact of noise. \citet*{8959801} optimized the six external calibration parameters with an algorithm that aims to align the edges detected in the LIDAR point cloud with the corresponding color image, combined with minimizing the depth difference between the measured LIDAR data and a depth map derived from a monocular image. Another enhancement of LIDAR performance was achieved by \cite{9105252}, involving deep learning methods to optimize the depth estimation of dense 3D point clouds, which on its own is limited when sampling at low rates. 
Signal generation using GANs has been explored before: \cite{8852227} and \cite{hartmann2018eeggan} developed generative models to generate EEG-like brain signals for purposes like data augmentation, supersampling, and restoration of corrupted data; all of these can be similarly used in the field of sensing and computational imaging. \cite{caccia2019deep} touched both aspects in their research and generated a 2D point map out of LIDAR scans using GANs, reaching high-quality samples. To improve the sparse pixel depth estimates acquired by a LIDAR, \cite{yang2019dense} inferred the posterior distribution of an image depth map, obtaining a probability over depth for each pixel in the image.\\ The depth estimation field contains various methods for reconstructing the 3D structure of a specific scene. \cite{DBLP:journals/corr/LainaRBTN16} constructed a fully convolutional network with residual blocks that models the ambiguous mapping between monocular images and depth maps and helps estimate the depth map of a scene given only a single RGB image. \cite{DBLP:journals/corr/abs-1804-06278} took it one step further and, with a deep network, predicted the reconstruction of a piece-wise planar depth map. The model receives a single RGB image as input and infers the set of plane parameters and corresponding plane segmentation masks. \cite{Aleotti2018GenerativeAN} used GANs for monocular depth estimation with the following method: the Generator infers a depth map given an input 2D image, while the Discriminator has to distinguish between the Generator's output and target frames acquired with a stereo rig. 
\section{Proposed Method} Our research splits into two different parts: \begin{enumerate} \item Creating a fully unsupervised Forward model of the LIDAR camera that learns the behavior and distribution of the back-scattered signals and their dependence on the camera's parameters.\\ \item Defining a robust metric and creating a fully unsupervised Inverse model that allows us to traverse smoothly through the parameter space and find the optimal camera calibration. \end{enumerate} \subsection{Part 1: Creating a fully unsupervised forward model using CGAN} To understand the behavior of the camera's light radar (LIDAR), we collected an extensive database of back-scattered codes related to a specific object and the same transmitted code, but with a variety of different camera parameters, having multiple samples per set of parameters to capture the distribution of the different behaviors. \subsubsection{Training a Semi-Supervised CGAN} We trained a conditional GAN to distinguish between real back-scattered codes and fake ones, conditioned on a set of eight different continuous camera parameters normalized to the range $[0,1]$, denoted $C$. The Generator model receives as input a set of camera parameters and some random noise $z\sim \mathcal{N}(0,1)$ for keeping the output stochastic. The Generator outputs an alleged code that is plausible to be back-scattered from the object to the camera's receptor.\\ Neither the ground-truth distance of the object nor the real transmitted code is fed to the network; both are learned inherently during the training phase, making the learning fully unsupervised. \\ The Discriminator model, unlike the original CGAN model, receives only a single input vector, without any condition, and outputs two different objects, similar to what was done in \cite{8803807}: \begin{enumerate} \item Validity score: how confident the Discriminator is that the code vector is a real back-scattered code from the object. 
\item Parameters estimation: an eight-dimensional continuous parameter vector, the most plausible to have caused such a back-scattered code, denoted $\hat{C}$. \end{enumerate} \begin{figure*}[h] \centering \includegraphics[width = 1\textwidth]{figures/model_bs.png} \caption{Diagram of our CGAN model - Forward model of the camera. "BS" stands for "Back-Scattered"} \end{figure*} \subsubsection{Custom losses} For stabilizing the adversarial process, we used a Wasserstein GAN \cite{wgans} and applied another loss for the Discriminator, namely the "Gradient Penalty" loss, which enforces the second norm of the Discriminator's gradient to be bounded by 1 \cite{DBLP:journals/corr/GulrajaniAADC17}. Additionally, for the Discriminator, we added an MSE loss over the predicted parameters, $\hat{C}$, encouraging them to be similar to those set while $\alpha$ was transmitted. \\ For the Generator, we added three new terms, in addition to the basic adversarial loss, divided into two types: \begin{enumerate} \item An adversarial loss for the camera parameters $\hat{C}$ that the Discriminator predicts when given the code generated by the Generator, compared to the camera parameters $C$ that were fed as input to the Generator. \item A first-norm loss between the $1^{st}$ and $2^{nd}$ moments of the distribution of the bits generated by the Generator and the $1^{st}$ and $2^{nd}$ moments of the distribution of the bits in the real data, respectively, under the same camera parameters, $C$. \\ \\ To further stabilize the delicate joint distribution of the conditions and the generated data of the CGAN, we first trained the Generator with a loss consisting only of the $1^{st}$- and $2^{nd}$-moment distribution differences, and after several epochs we gradually increased the weight of the adversarial losses. 
The notion derives from curriculum learning, where the neural network is first fed easy samples and, step by step, learns to handle greater and more complicated tasks, as shown in \cite{10.1145/1553374.1553380}. \\ \end{enumerate} $D_{loss} =\underbrace{ {\mathbb E}[D(G(z\given C))[0]]}_{\text{Adversarial loss}} - \underbrace{{\mathbb E}[D(x\given C)[0]]}_{\text{loss over real samples}}\\ \tab + \underbrace{\lambda_{GP} \cdot {\mathbb E} [(\norm{ \nabla_{x}D(x\given C)[0]} _{2}-1)^2]}_{\text{Gradient penalty loss}}\\ \tab +\underbrace{\lambda_{parameters} \cdot \norm{D(x\given C)[1]-C}_{2}}_{\text{Predicted conditions loss}} $ \\ \\ \\ $G_{loss} = \underbrace{-{\alpha \cdot \mathbb E}[D(G(z\given C))[0]]}_{\text{Adversarial loss}} \\ \tab + \underbrace{\alpha \cdot \lambda_{parameters} \cdot \norm{D(G(z\given C))[1]-C}_{2}}_{\text{Adversarial predicted conditions loss}} \\ \tab+ \underbrace{\lambda_{mean} \cdot \norm{{\mathbb E}[G(z \given C)] - {\mathbb E}[x \given C]}}_{\text{1}_{\text{st}}\text{ Moment distribution similarity loss}} \\ \tab + \underbrace{\lambda_{variance} \cdot \norm{{\textrm{Var}}\,{}[G(z \given C)] - {\textrm{Var}}\,{}[x \given C]}}_{\text{2}_{\text{nd}}\text{ Moment distribution similarity loss}}$ \\ \\ \\ where:\\ $D(x)=\text{[-validity score, predicted parameters]}$\\ $\alpha= \begin{cases*} \text{min}\left(1, \dfrac{\text{\#iterations}}{\text{Constant}}\right) & \text{\#iterations}$> \#threshold$ \\ 0 & otherwise \end{cases*}$\\ \subsubsection{Visualization of the training phase} To visually track the convergence process, we plotted, along with the training, the real back-scattered codes and the ones generated by our CGAN for the five most stable and the five most unstable camera parameter sets in the training data. Each row represents such a binary code, and the codes that correspond to the same camera parameters are stacked together to show how similar they are. 
A large difference between two consecutive rows indicates high variance and noisy back-scattered codes. The cleaner and more constant the rows, the more stable the back-scattered code, which is reflected in a better and more reliable estimation of the depth. Using this visualization, we could track both the correctness of the output distribution and the behavior of the back-scattered codes conditioned on the camera's parameters.\\ \begin{figure}[h] \includegraphics[width = \columnwidth]{figures/gan_train_output.png} \caption{The outputs of the Generator are on the left; the real back-scattered codes corresponding to the same camera parameters are on the right. The upper half of each side shows the generated/back-scattered codes of the five most stable parameters, reflected in more concrete and clear columns, which means the code is stable. The lower halves show the generated/back-scattered codes of the five most unstable parameters, reflected in a much noisier output with higher variance and unclear data.} \end{figure} \subsubsection{Models architecture} Both the Generator and the Discriminator consist of ten 1-dimensional convolution blocks with only two channels and 7-sized kernels in all layers, which means that the receptive field captured 67 bits around the area of each transmitted bit. That allows the model to consider the causal connections between different hidden effects that happen along a sequence of a transmitted pulse of code. The padding method is circular, since the transmitted and back-scattered codes are cyclic. The nonlinear function at the end of each block is LeakyReLU, with a slope of 0.2 at nonpositive values. The Generator's last block ends with a Sigmoid activation to push all values towards 0 or 1, as they are supposed to represent a back-scattered binary code. The Generator's first layer and the Discriminator's last layer are fully connected layers that adjust to the required output size. 
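The circular padding mentioned above is worth illustrating: since the codes are cyclic, a convolution must treat the signal as a ring rather than a segment, so no edge of the code is handled differently from its interior. Below is a minimal NumPy sketch of a circularly padded 1-D convolution; the function name, the toy code, and the smoothing kernel are ours, for illustration only (the paper's model is a standard deep-learning implementation).

```python
import numpy as np

def circular_conv1d(x, kernel):
    """1-D correlation with circular (wrap-around) padding, so the
    output has the same length as the cyclic input code."""
    k = len(kernel)
    pad = k // 2  # assumes an odd-sized kernel, as in the paper (k = 7)
    # Wrap the signal around itself instead of zero-padding.
    xp = np.concatenate([x[-pad:], x, x[:pad]])
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

code = np.array([1, 0, 0, 1, 1, 0, 1, 0], dtype=float)  # toy cyclic code
kernel = np.ones(3) / 3.0                               # smoothing kernel
out = circular_conv1d(code, kernel)
```

Here `out[0]` averages `code[-1]`, `code[0]`, and `code[1]`, i.e. the first bit "sees" the last bit of the ring, which is exactly the property the cyclic codes require.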
\section{Part 2: Creating a fully unsupervised inverse model} The trained CGAN, which imitates the distribution of the LIDAR outputs, gives us a differentiable forward model. Nevertheless, we need a reliable metric to decide what an optimal set of parameters is. A single extremely noisy sample, an "outlier," in a batch of back-scattered codes is enough to dramatically affect the mean and variance of the estimated depth. That makes the metric of checking the mean and variance of the same bits of back-scattered codes, corresponding to the same transmitted code and camera parameters, insufficient and even unreliable in terms of successful depth estimation. To overcome this problem, we formed a new metric to evaluate the back-scattered code quality, the "Inliers rate", denoted $R$: \\ $R = 100 \cdot \dfrac{\#\text{samples s.t.} \norm{\Delta - median(\Delta)} \leq \delta_{in} }{\#\text{samples}}$, where $\delta_{in}$ was set to 0.03m.\\ \subsection{The optimization algorithm} The camera parameters are designed to lie in $[0,1]$ after normalization, yet the Generator model can receive an unbounded input; hence, the output of the optimization process might exceed these boundaries. \\ To solve this issue, we set the optimization object to be the inverse of a Sigmoid $\left(f(x) = \dfrac{1}{1+ e^{-\text{x}}}\right)$ of the parameters. At every iteration of the optimization process, the optimization object passes through a Sigmoid to obtain the current-step camera parameters in the range $[0,1]$.\\ We then calculate the correlation $\rho$ and find its peak using a differentiable approximation of the $argmax$ function to obtain the travel time $t_{argmax}$ of light to and from the object. The approximation assumes a normal distribution of the outputs of the correlation. It is implemented as the sum of a softmax function of the values, multiplied by the set of integers ranging from 1 to the code length $L$. 
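The differentiable peak estimate and the distance formula from the introduction can be sketched together as follows. This is a NumPy illustration under our own assumptions: the toy correlation values and the constants passed to `distance` are made up, while in the real camera $f$, $c$, and $d$ come from the hardware.

```python
import numpy as np

def soft_argmax(rho):
    """Differentiable approximation of argmax: a softmax-weighted
    average of the indices 1..L of the correlation values."""
    w = np.exp(rho - rho.max())  # numerically stabilised softmax
    w /= w.sum()
    return np.dot(w, np.arange(1, len(rho) + 1))

def distance(t_argmax, f, c, d):
    """Depth estimate: Delta = 1/2 * (t_argmax / f * c - d)."""
    return 0.5 * (t_argmax / f * c - d)

# Toy correlation, sharply peaked at index 3 (1-based).
rho = np.array([0.1, 0.2, 9.0, 0.3, 0.1])
t = soft_argmax(rho)  # close to 3, but a smooth function of rho
```

Because `soft_argmax` is smooth in `rho`, gradients can flow from the distance estimate back to the Generator input, which is what makes the optimization loop of Part 2 possible.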
\\ For the optimization loss metric, we chose a combination of two different losses: \begin{enumerate} \item $loss_{median}$: the median of all the estimated distances along the optimization batch is a reliable candidate when the data is not too noisy, hence we treated the median as our anchor. Since there are a few extreme outliers, we used the natural log of the difference between each distance and the median distance as our loss. This encourages all the results to crowd together around the median without suffering heavy penalties caused by a few samples. To adapt this to the cyclic LIDAR code transmission, we took the deviation to be the minimum of the former term and the difference between the maximal distance that the camera is capable of calculating and the former term. \\ \item $loss_{variance}$: to sharpen the results, we added another loss that encourages the variance of the same bits along the batch axis to be as low as possible, which promises a more stable result. \end{enumerate} The optimization algorithm used SGD with a stop condition of $R > 97\%$. Assuming the batch output is normally distributed, $\Delta \sim \mathcal{N}(median(\Delta),\sigma^2)$, the objective of the optimization function reaches its optimum as $\sigma\to 0$. \\ Throughout the optimization procedure, the object's distance was never introduced to the system, and the model reaches its goal of minimizing the distribution variance of the estimated distances around the real distance in an unsupervised manner. 
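The inliers-rate metric and the median loss above can be sketched in NumPy. This is a simplified illustration: the toy batch of depth estimates is ours, $\delta_{in}=0.03$ m follows the text, and for clarity the sketch omits the cyclic-wrap minimum term of the full $loss_{median}$.

```python
import numpy as np

def inliers_rate(deltas, delta_in=0.03):
    """Inliers rate R: percentage of depth estimates within
    delta_in (metres) of the batch median."""
    med = np.median(deltas)
    return 100.0 * np.mean(np.abs(deltas - med) <= delta_in)

def median_loss(deltas):
    """Simplified median loss: log of the deviation from the median,
    so a few extreme outliers are penalised only logarithmically."""
    med = np.median(deltas)
    return np.mean(np.log(1.0 + np.abs(deltas - med)))

# Toy batch: most estimates cluster around 1.00 m, one extreme outlier.
deltas = np.array([1.00, 1.01, 0.99, 1.00, 5.00])
rate = inliers_rate(deltas)  # four of five samples lie within 3 cm
```

The logarithm is the key design choice: an outlier 4 m away contributes only $\log(5)$ to the loss rather than a squared-error term that would dominate the batch.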
\begin{algorithm}[!h] \setstretch{1.5} \caption{Finding the optimal camera parameters} $\text {Denoting the Batch Size} = n$: \\ $\text{opt} \xleftarrow{} x\sim \mathcal{U}(0,1)^{\text{num parameters}}$ \\ $\widetilde{\text{opt}} \xleftarrow{} \ln{\frac{\text{opt}}{1-\text{opt}}}$\\ \While{$R \leq th$}{ $\text{opt} = \dfrac{1}{1+ e^{-\widetilde{\text{opt}}}} $ \\ $\rho = corr(\alpha,\text{ } G(z|\text{opt}))$ \\ $t_{argmax} = \sum_{i=1}^{L} \dfrac {e^{\rho_{i}}}{\sum_{j=1}^{L} e^{\rho_{j}}}\cdot i$ \\ $\Delta =\dfrac{1}{2} \cdot \left(\dfrac{t_{argmax}}{f} \cdot c-d\right)$ \\ $loss_{median} = \dfrac{1}{n} \cdot \sum_{i=1}^{n} \log(1+\min(\norm{\Delta-median(\Delta)},\dfrac{1}{2} \cdot \left(\dfrac{L}{f} \cdot c-d\right)- \norm{\Delta-median(\Delta)}))$ \\ $loss_{variance} = var(G(z|\text{opt}))$ \\ $loss = \beta \cdot loss_{median} + (1- \beta) \cdot loss_{variance}$ \\ $\widetilde{\text{opt}} = \widetilde{\text{opt}} - \lambda_{LR} \cdot \dfrac{\partial {loss}}{\partial {\widetilde{\text{opt}}}}$ \\ $R = \% \norm{\Delta - median(\Delta)} \leq \delta_{in}$ } return opt \end{algorithm} \section{Experimental Results} Running the whole algorithm pipeline on both simulated data and real camera data has shown a significant improvement in the distribution of the estimated distances of the objects. Training the CGAN on the real data was a much more challenging task, due to the many real-world factors that intervene in the system and dramatically affect the joint distribution of the camera parameters and the object's scenario and characteristics.\\ All experiments were conducted on hard and noisy examples, i.e., low laser power, which makes the algorithm useful for efficient LIDAR uses, consuming low power, saving energy, and significantly lowering the danger of harming the human eye. 
\\ The training of the CGAN takes only several hours on a single GPU, and the optimization process takes several seconds to half a minute to find the optimal camera parameters.\\ The following diagrams show the difference in the distance estimation distribution before and after using our algorithm, transmitting the same code 50,000 times, first with one constant set of random camera parameters and then with the optimized parameters. The depth estimation distribution with a random set of camera parameters is broad, with many outliers. After applying our algorithm, the optimal parameters yield a much more stable depth estimation, reflected in a much narrower distribution around the real object's distance, with almost no outliers. \\ \begin{figure}[h] \centering \includegraphics[width = \columnwidth]{figures/sim_data_corrected_distances_distribution_before_and_after_optimization.png} \caption{Distribution of estimated distances, before and after using our algorithm on simulated data.} \end{figure} \begin{figure}[h] \centering \includegraphics[width = \columnwidth]{figures/real_data_corrected_distances_distribution_before_and_after_optimization.png} \caption{Distribution of estimated distances, before and after using our algorithm on real data.} \end{figure} The following image shows two batches of 512 back-scattered codes: on the left, at the beginning of the optimization procedure, and on the right, at the end. On each side, all the rows correspond to the same set of parameters, which differs between the sides. Both sides correspond to the same transmitted binary code. It can be seen that before the optimization phase, the variance between the rows is high and the density of each white/black part is lower, which implies an incorrect and distorted back-scattered code.\\ After the optimization procedure, the rows are much more similar to each other, with much more solid groups of bits. 
The back-scattered code is much more similar to the transmitted one, and hence the depth estimation distribution is significantly narrower and more reliable. \begin{figure}[h] \includegraphics[width = \columnwidth]{figures/before_and_after_optimization_new.png} \caption{Comparing the back-scattered code quality before and after the optimization process} \end{figure} \section{Conclusion} A framework for optimizing the working point of a LIDAR camera in order to enhance its depth estimation stability was introduced. The framework consists of two optimization processes: (i) teaching the Generator the forward model, thereby creating a simulator of the real system; (ii) optimizing the parameters of the camera to minimize the noise in the depth estimation. The resulting Generator can be a valuable tool as a simulator, with the important qualities of fast inference and a differentiable output with respect to the input. The framework is flexible: for any given starting point and objective, we can optimize the parameters and achieve better performance. The concept itself can be extended to various systems in which a group of parameters controls the system's behavior. For the specific use case of depth estimation stability, having a differentiable forward model opens new possibilities when dealing with learning tasks. Given a neural network that receives the back-scattered signal as input and reconstructs the depth, one could jointly learn the reconstruction network and the camera parameters.\\ We have shown that a Generator can serve as a fast simulator for a real system, aided by the first- and second-moment losses, which help capture the low-order statistics, and by the Discriminator for the higher orders. In addition, we employed the Wasserstein GAN technique and condition reconstruction to avoid mode collapse and to learn the joint distribution.
\\ Moreover, given a differentiable objective, the Generator makes it easy to optimize the system's parameters, improving the current conditions (camera parameters) to a better set. Once a trained Generator is available, it can be treated as a flexible block, and replacing the objective is straightforward. \\ Looking at the big picture, the same concept is not limited to 3D imaging devices: it can be extended to other systems with a large number of parameters and a differentiable objective. Moreover, this GAN training can produce a useful simulator for other tasks (that do not require optimization).\\ An open question is how much data each system requires for training; we would like to use a smaller dataset and still obtain a simulator that faithfully reproduces the real product's behavior. \\ Future work may include on-the-fly optimization of the camera parameters given the distances and reflectivities in a scene. Another enhancement would be replacing the depth estimation with a neural network model and optimizing that model together with the camera conditions. \ifpeerreview \else \section*{Acknowledgments} The authors would like to thank... \fi
\chapter*{Abstract} \rfoot[e1]{\thepage} \fancypagestyle{plain}{ \fancyfoot[R]{\thepage} } \pagestyle{fancy} The relevance of hidden symmetries is explored at the level of classical and quantum mechanics in a variety of physical systems related to conformal and superconformal invariance. Hidden symmetries, which correspond to integrals of motion that are nonlinear in the momenta, generally lead to nonlinear algebras. First, analyzing the $\mathfrak{sl}(2,\mathbb{R})$ symmetry, it is concluded that the asymptotically free (at infinity) and the harmonically confined models are two different forms of dynamics described by the same symmetry algebra. A mapping between these two dynamics is constructed, and its applications are studied in one-, two- and three-dimensional systems. Second, rational extensions of the conformal mechanics model of de Alfaro, Fubini and Furlan (AFF) are derived by employing the generalized Darboux transformation. In general, the obtained systems have an almost equidistant spectrum with some gaps inside, and their spectral properties imply the presence of hidden symmetries. The supersymmetric extensions of the AFF model are also studied, and the origin of the hidden bosonized superconformal symmetry of the quantum harmonic oscillator is established. Finally, a three-dimensional generalization of the AFF system is considered. The model describes a particle with electric charge $e$ in the Dirac monopole background of magnetic charge $g$, subjected to the central potential $\frac{m\omega^2}{2}r^{2}+\frac{\alpha}{2mr^2}$. When $ \alpha=(eg)^2 $, the classical trajectories are periodic for arbitrary initial conditions, and at the quantum level the spectrum acquires a peculiar degeneracy. These characteristics are described by hidden symmetries, which can be obtained from the model without the harmonic term by means of the mentioned mapping.
A complementary spin-orbit coupling term gives rise to a supersymmetric extension of the system, characterized by superconformal symmetry. The spectrum-generating operators of the new model are shown to be nonlocal. \vskip 1cm \textcolor{blue}{\underline{\emph{Keywords:}}} Hidden symmetries; (Super-)Conformal symmetry; de Alfaro, Fubini and Furlan model; Harmonic oscillator; Supersymmetric quantum mechanics; Rationally extended systems; Darboux duality; Klein four-group; Dirac monopole. \pagestyle{fancy} \renewcommand\headrulewidth{0pt} \lhead{}\chead{}\rhead{}\cfoot{} \rfoot{\vspace*{0\baselineskip}\thepage} \pagenumbering{roman} \chapter*{Dedicatory} ${}$ \vspace{2cm} \begin{flushright} \emph{``... and plausible reality suddenly falls upon me...\\ I half rise, energetic, convinced, human,\\ and I am going to write these verses to convince ourselves of the contrary...''} \vspace{0.5 cm} \'Alvaro de Campos (Fernando Pessoa), \emph{La Tabaquer\'ia}. Excerpt. \vspace{2cm} Dedicated to those who supported (and put up with) me throughout the whole process. \end{flushright} \chapter*{Acknowledgements} The work on this Thesis was mainly supported by CONICYT (now ANID) scholarship 21170053. I would also like to thank the University of Santiago for the sponsorship of the research projects FCI-PM-02 and USA 1899, and the FONDECYT Project 1190842, in which I participated. I am especially grateful for all the support and help provided by my Thesis supervisor, Professor Mikhail S. Plyushchay, who has been with me at all times. In the same vein, I would like to thank Professor Andreas Wipf for his warm hospitality and help during my stay at the Friedrich-Schiller University, Jena, Germany, in 2019. Finally, I want to thank my family and friends for their emotional support. They are always in my thoughts. \chapter*{Notations} Here we summarize some common notations used in the manuscript. In this Thesis we use $\hbar=c=1$.
\\ \underline{\textit{Geometry}}:\\ $g_{\mu\nu}$ and $\eta_{\mu\nu}$: The general metric tensor and the Minkowski metric tensor.\\ $x_\mu=g_{\mu\nu}x^{\nu}=\sum_{\nu}g_{\mu\nu}x^{\nu}\,$ and $g_{\mu\nu}x^{\mu}x^{\nu}=\sum_{\mu,\nu}g_{\mu\nu}x^{\mu}x^{\nu}\,$: The Einstein summation convention.\\ $\zeta^\mu$: A Killing vector component.\\ $A\wedge B$ and $d$: The exterior product and the exterior derivative, respectively.\\ $ \pounds_{X}{T}$: The Lie derivative of a tensor field $T$ along the flow of the vector field $X$.\\ $i_{X}\omega\equiv \omega(X,\underbrace{\ldots\ldots\ldots}_{r-1\text{ entries}})$: The contraction between a vector field and a differential $r$-form $\omega$,\\ ${}\qquad\qquad\qquad\qquad\qquad\,\,$ which, in turn, is a differential $(r-1)$-form.\\ \underline{\textit{Classical mechanics}}:\\ $\mathcal{M}$: The configuration space.\\ $T\mathcal{M}_q$: The tangent space at $q \in \mathcal{M}$.\\ $T_*\mathcal{M}_q$: The cotangent space at $q \in \mathcal{M}$.\\ $T\mathcal{M}$: The tangent bundle.\\ $T_*\mathcal{M}$: The cotangent bundle.\\ $q^i$ and $\dot{q}^{i}=\frac{dq^i}{dt}$: The generalized coordinates on $\mathcal{M}$ and their velocities.\\ $\mathcal{L}$, $p_i=\frac{\partial\mathcal{L}}{\partial \dot{q}^{i}}$ and $H$: The Lagrangian, the canonical momenta and the Hamiltonian.\\ $\omega=dq^i\wedge dp_i$: The symplectic two-form.
\\ \underline{\textit{Supersymmetric quantum mechanics}}: \\ $H$: The quantum Hamiltonian.\\ $L$: A dimensionless quantum Hamiltonian.\\ $\psi_*$, $\widetilde{\psi}_*$: Two linearly independent eigenstates of $L$, with eigenvalue $\lambda_*$\,.\\ $W(\underbrace{\ldots\ldots\ldots}_{n \text{ entries}})$: The generalized Wronskian of $n$ functions.\\ $\breve{L}$: A dimensionless supersymmetric partner of $L$.\\ $A^\pm$: The first order mutually conjugate intertwining operators.\\ $\mathbb A_{n}^\pm$: The higher order mutually conjugate intertwining operators.\\ $\Omega_*(x)\,,\breve{\Omega}_*(x)$: The Jordan states constructed by means of $\psi_*$ and $\widetilde{\psi}_*\,,$ respectively.\\ $\mathcal{H}:$ A matrix-valued super-Hamiltonian operator.\\ $\mathcal{Q}_a:$ A supercharge.\\ $\mathcal{N}$: The number of supercharges.\\ Pauli matrices: $ \sigma_1=\left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right)\,,\qquad \sigma_2=\left(\begin{array}{cc} 0 & -i\\ i & 0 \end{array}\right)\,,\qquad \sigma_3=\left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right)\,. $\\ $\Pi_\pm=\frac{1}{2}(1\pm \sigma_3)$: Projectors onto the $\sigma_3$ subspaces.
\\ \underline{\textit{Conformal mechanics}}:\\ $H$, $D$, and $K$: Generators of the $\mathfrak{so}(2,1)$ algebra.\\ $\mathcal{J}$ and $\mathcal{J}_\pm$: Generators of the $\mathfrak{sl}(2,\mathbb R)$ algebra.\\ $H_\nu:$ The Hamiltonian of an asymptotically free conformal invariant system.\\ $\mathscr{H}_\nu$ and $\mathcal{C}_\nu^\pm$: The Hamiltonian of the de Alfaro, Fubini and Furlan model and its ladder operators.\\ $\mathfrak{S}$: The conformal bridge transformation operator.\\ \underline{\textit{Rationally extended systems}}:\\ $\Delta_\pm$: The positive-negative Darboux scheme.\\ $A_{(\pm)}^\pm$: The self-conjugate intertwining operators of the positive-negative Darboux scheme.\\ $L_{(\pm)}$: The rationally extended system associated with the positive-negative Darboux scheme.\\ $\mathcal{A}^\pm\,,$ $\mathcal{B}^\pm\,,$ and $\mathcal{C}^\pm\,$: The spectrum-generating ladder operators of the ABC-type.\\ $\mathfrak{A}_i^\pm\,,$ $\mathfrak{B}_i^\pm\,,$ and $\mathfrak{C}_i^\pm\,$: The extended families of ladder operators of the ABC-type.\\ $\mathfrak{S}_z^\pm$: The extended families of intertwining operators.\\ $\mathcal{U}_{0,z}^{(2\theta(z)-1)}$ and $\mathcal{I}_{N,z}^{(1-2\theta(z-N))}$: The extended subsets of generators of a nonlinear superalgebra.\\ \underline{\textit{Three-dimensional conformal mechanics in a monopole background}}:\\ $\nu=eg$: Here $e$ and $g$ are the particle's electric charge and the monopole's magnetic charge, respectively.\\ $\alpha:$ The coupling of the conformal mechanics potential.\\ $\mbf{I}_1$, $\mbf{I}_2$, $\mbf{a}$ and $\mbf{a}^\dagger$: Dynamical integrals for the case $\alpha=\nu^2$.
\\ $\mbf{J}$: The Poincar\'e vector integral.\\ $T^{(ij)}, T^{[ij]}$: Symmetric and anti-symmetric tensor integrals.\\ \underline{\textit{A charge-monopole superconformal model}}:\\ $\mbf{K}=\mbf{J}+\frac{1}{2}\,\mbfgr{\sigma}$: The total angular momentum.\\ $k=j\pm 1/2$: The eigenvalue of $\mbf{K}^2$.\\ $\pm \omega \,\mbfgr{\sigma}\cdot \mbf{J}$: The spin-orbit coupling.\\ $\Theta$, $\Theta^\dagger$, $\Xi$ and $\Xi^\dagger$: Scalar intertwining operators.\\ $\mathcal{H}$ and $\breve{\mathcal{H}}:$ Pauli-type supersymmetric Hamiltonians in the exact and the spontaneously broken phase.\\ $\mathcal{Q}$, $\mathcal{Q}^\dagger$, $\mathcal{W}$, $\mathcal{W}^\dagger$: Nilpotent fermionic operators. \\ $\mathcal{R}$, $\mathcal{G}$ and $\mathcal{G}^\dagger$: The $R$-symmetry generator and the lowering and raising supersymmetric ladder operators. \\ $\mathcal{P}_\pm$: Projectors onto subspaces with fixed $k$. \\ $\mathcal{B}$ and $\mathcal{F}$: Generic bosonic and fermionic three-dimensional generators.\\ $\mathscr{B}$ and $\mathscr{F}$: Generic bosonic and fermionic one-dimensional generators.\\ \newpage \rfoot[e1]{} \fancypagestyle{plain}{ \fancyfoot[R]{} } \pagestyle{fancy} \tableofcontents \cleardoublepage \listoftables \cleardoublepage \listoffigures \chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} \setcounter{page}{1} \pagenumbering{arabic} \rfoot[e1]{\thepage} \fancypagestyle{plain}{ \fancyfoot[R]{\thepage} } \pagestyle{fancy} Symmetries play a very important role in the construction of the fundamental theories that we have in physics nowadays; general relativity and the Standard Model of particle physics are two prominent examples. In this Thesis, we study hidden symmetries that control nontrivial aspects of classical dynamics, as well as spectral peculiarities in quantum and supersymmetric quantum mechanics models.
From a classical mechanics perspective, Noether's theorem reveals that behind the invariance of the action under a symmetry transformation, there exists a conservation law. In general, the principle of least action assumes the existence of a Lagrangian $\mathcal{L}$, which in mechanics depends on the generalized coordinates and their velocities. Geometrically, these coordinates belong to a configuration space $\mathcal{M}$, whose points are usually denoted by $q$, and their associated velocities are vectors that live in the tangent space $T\mathcal{M}_{q}$ at $q$, which, in turn, are generated by the action of a particular tangent vector field. Then, naturally, the Lagrangian is a function on the tangent bundle $T\mathcal{M}=\cup_{q\in \mathcal{M} }T\mathcal{M}_{q}$ of $\mathcal{M}$ \textcolor{red}{[\cite{Nakahara,Sundermayer}]}. In this framework, a symmetry is a one-parametric transformation generated by some conserved vector field. To compare transformations associated with two different vector fields, say $X=X^\mu\frac{\partial}{\partial q^\mu}$ and $Y=Y^\mu\frac{\partial}{\partial q^\mu}$, we compute the Lie derivative\footnote{The Lie derivative evaluates the change of a tensor field (including scalar functions, vector fields and one-forms) along the flow defined by another vector field \textcolor{red}{[\cite{Nakahara}]}.} of $Y$ along the flow of $X$, denoted by $ \pounds_{X}Y$, and it is not difficult to show that this operation reduces to the usual commutator of two vector fields, $[X,Y]\in T\mathcal{M}$. This gives rise to a Lie algebra of vector fields on $T\mathcal{M}$ \textcolor{red}{[\cite{Nakahara}]}. On the other hand, when we pass to the Hamiltonian formalism, the dynamical variables considered are the generalized coordinates and their canonical momenta $p_i=\frac{\partial \mathcal{L}}{\partial \dot{q}_i}$.
One can show that under a general change of coordinates, the $p_i$ transform as the components of a covector in the cotangent space $T_*\mathcal{M}_{q}$ at $q$ \textcolor{red}{[\cite{Nakahara}]}. Then the phase space is naturally identified with the cotangent bundle $T_*\mathcal{M}=\cup_{q\in \mathcal{M} }T_*\mathcal{M}_{q}$ with local coordinates $(q^i,p_i)$ on it \textcolor{red}{[\cite{Arnold, Nakahara, Sundermayer}]}. Here, the symplectic form $\omega=dq^i\wedge dp_i$ encodes the Poisson bracket structure. Namely, with a given function $F=F(q,p)$ on the phase space, a Hamiltonian vector field, $$X_F= \frac{\partial F}{\partial p_i}\frac{\partial}{\partial q^i} -\frac{\partial F}{\partial q^i}\frac{\partial}{\partial p_i}\,,$$ is associated, such that the contraction $i_{X_F}\omega\equiv \omega(X_F,.)=dF$. For two Hamiltonian vector fields $X_F$ and $X_G$, it then follows that $ \pounds_{X_F}X_G=X_{\{F,G\}}$ and $ \pounds_{X_F}G=\{G,F\}$. If $ F $ is identified as the Hamiltonian of the system, then the last relation corresponds to the equation of motion for $ G $ \textcolor{red}{[\cite{Arnold,Sundermayer}]}. In this formalism, a symmetry transformation is a flow produced by a Hamiltonian vector field whose generating function in phase space is conserved in time. As is known, the Lie algebra mentioned above is an instance of a more abstract concept. A Lie group is a smooth manifold with an additional group structure, and any Lie group gives rise to a Lie algebra, which is its tangent space at the identity \textcolor{red}{[\cite{Nakahara,Gilmore}]}. When a group ``acts'' on some target space (which could be the group manifold itself), an explicit form of its elements is required. This leads us to representation theory. In Hamiltonian classical mechanics, the target space is $T_*\mathcal{M}$, the Lie algebra generators are identified with the Hamiltonian vector fields, and the group action corresponds to Hamiltonian flows.
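The Poisson bracket structure just described is easy to probe numerically. The following sketch (ours, for illustration only, not part of the Thesis) evaluates the canonical Poisson bracket by central finite differences and checks the angular momentum algebra $\{L_i,L_j\}=\epsilon_{ijk}L_k$ for a particle in three dimensions:

```python
def poisson(F, G, q, p, h=1e-5):
    """Canonical Poisson bracket {F,G} = sum_i (dF/dq_i dG/dp_i - dF/dp_i dG/dq_i),
    evaluated at the phase-space point (q, p) by central differences."""
    n = len(q)
    def dq(func, i):
        qp, qm = list(q), list(q)
        qp[i] += h; qm[i] -= h
        return (func(qp, p) - func(qm, p)) / (2 * h)
    def dp(func, i):
        pp, pm = list(p), list(p)
        pp[i] += h; pm[i] -= h
        return (func(q, pp) - func(q, pm)) / (2 * h)
    return sum(dq(F, i) * dp(G, i) - dp(F, i) * dq(G, i) for i in range(n))

# Components of the angular momentum L = r x p
L1 = lambda q, p: q[1] * p[2] - q[2] * p[1]
L2 = lambda q, p: q[2] * p[0] - q[0] * p[2]
L3 = lambda q, p: q[0] * p[1] - q[1] * p[0]
```

Since the $L_i$ are bilinear in $(q,p)$, the central differences are exact up to rounding, and the closure of the algebra holds to machine precision at any sample point.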
In the case of quantum theory, in accordance with the celebrated Wigner theorem \textcolor{red}{[\cite{Wigner0,Wigner,Waimberg1}]}, we look for irreducible unitary representations of the quantum symmetry group of the system, and the target space is the Hilbert space generated by the eigenstates of the quantum Hamiltonian operator. In fact, the ``algebraic'' approach claims that the entire Hilbert space can be generated by the action of the symmetry operators on an arbitrary solution of the corresponding Schr\"odinger equation, i.e., the spectrum of the system is explained by symmetry. Symmetries are intrinsic properties of the geometry that characterizes a given manifold. Suppose we have a space-time manifold with a metric structure $ds^2=g_{\mu \nu}dx^{\mu}dx^{\nu}$. If $ds^2$ is invariant under a certain change of coordinates, we have an ``isometry'', which, in accordance with the discussion above, is generated by a particular vector field, called a Killing vector field \textcolor{red}{[\cite{Nakahara}]}. We can ask for mechanical systems that respect the isometries of the space-time where they live, which gives rise to important physical consequences. For example, the construction of an action principle in Minkowski space that is invariant under the Poincar\'e group transformations $x^\mu\rightarrow y^{\mu}=\Lambda^{\mu}_\nu x^\nu+a^\mu$, where $\Lambda^{\mu}_\nu $ are the Lorentz transformations, is the same as imposing the relativity postulates. In this way, Poincar\'e invariant quantum field theories involve in their description field operators which provide certain representations of this symmetry group \textcolor{red}{[\cite{Waimberg2,Sundermayer}]}.
The isometry condition for infinitesimal transformations $x^\mu \rightarrow x^\mu+\zeta^\mu$ corresponds to the Killing equation $$\frac{\partial g_{\mu\nu}}{\partial x^\lambda}\zeta^\lambda+ g_{\mu\lambda} \frac{\partial\zeta^\lambda}{\partial x^\nu}+ g_{\lambda\nu} \frac{\partial\zeta^\lambda}{\partial x^\mu}=0\,,$$ and for Poincar\'e transformations in Minkowski space its solutions are given by $\zeta^\mu=a^\mu+\omega^{\mu\nu} x_\nu$, where $\omega^{\mu \nu}$ is an antisymmetric matrix. To obtain the corresponding Killing vector fields we use the fact that Poincar\'e transformations admit the unitary representation $\exp(i(a^\mu T_\mu-\frac{1}{2}\omega^{\mu\nu}M_{\mu\nu}))$, where $ T_\mu$ and $M_{\mu\nu}$ are our candidates for the generators of translations and Lorentz transformations, respectively. To identify them we compare $y^{\mu}=x^\mu+\zeta^\mu$ with $$ \exp(i(a^\nu T_\nu-\frac{1}{2}\omega^{\alpha\beta}M_{\alpha\beta}))x^\mu \approx x^\mu+i(a^\nu T_\nu-\frac{1}{2}\omega^{\alpha\beta}M_{\alpha\beta})x^\mu\,, $$ which implies that $T_\mu=i\partial_\mu$ and $ M_{\mu\nu}=i(x_\mu\partial_{\nu}-x_\nu\partial_{\mu})+\Sigma_{\mu\nu}\,.$ Here the $ \Sigma_{\mu \nu} $ are operators that do not act on the coordinates, but their representations tell us about the spin of the corresponding fields. The notion of Killing vectors is generalized to the so-called conformal Killing vectors, which are related to coordinate changes such that $ ds^2 \rightarrow \Omega(x) ds^2 $, where $\Omega(x)$ is the conformal factor. Such transformations correspond, in particular, to dilatations $x^\mu\rightarrow c x^\mu$ and special conformal transformations $x^\mu\rightarrow (x^\mu-b^\mu x^2)/(1-2b^\nu x_\nu+b^2x^2)$ \textcolor{red}{[\cite{Francesco}]}.
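Since the Minkowski metric is constant, the Killing equation above reduces there to $\partial_\mu\zeta_\nu+\partial_\nu\zeta_\mu=0$. The following sketch (an illustration of ours, not part of the text) checks by central differences that $\zeta^\mu=a^\mu+\omega^{\mu\nu}x_\nu$ solves it for an arbitrary antisymmetric $\omega^{\mu\nu}$, while a symmetric $\omega^{\mu\nu}$ fails:

```python
ETA = [1.0, -1.0, -1.0, -1.0]   # diagonal Minkowski metric, signature (+,-,-,-)

def zeta_upper(x, a, omega):
    """Killing vector components zeta^mu = a^mu + omega^{mu nu} x_nu."""
    return [a[m] + sum(omega[m][n] * ETA[n] * x[n] for n in range(4))
            for m in range(4)]

def zeta_lower(x, a, omega):
    """Lower the index: zeta_mu = eta_{mu nu} zeta^nu."""
    z = zeta_upper(x, a, omega)
    return [ETA[m] * z[m] for m in range(4)]

def killing_residual(x, a, omega, h=1e-5):
    """Max over (mu, nu) of |d_mu zeta_nu + d_nu zeta_mu|; zero for an isometry."""
    def d(mu, nu):
        xp, xm = list(x), list(x)
        xp[mu] += h; xm[mu] -= h
        return (zeta_lower(xp, a, omega)[nu] - zeta_lower(xm, a, omega)[nu]) / (2 * h)
    return max(abs(d(m, n) + d(n, m)) for m in range(4) for n in range(4))
```

Because $\zeta^\mu$ is linear in $x$, the finite differences are exact up to rounding, and the residual vanishes precisely when $\omega_{\mu\nu}$ (with both indices lowered) is antisymmetric.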
Conformal symmetry, as well as conformal field theories, has made a huge contribution to different areas of physics, such as condensed matter, electrodynamics, and gravity, to mention just a few examples \textcolor{red}{[\cite{Ginsparg,Jackiw}]}. The two-dimensional case is special in this context. Indeed, consider the change of coordinates $$x^1\rightarrow x^1+f_1(x^1,x^2)\,,\qquad x^2\rightarrow x^2+f_2(x^1,x^2)\,,$$ in flat space. This transformation can be shown to be of the conformal type if and only if $f_1(x^1,x^2)$ and $f_2(x^1,x^2)$ satisfy the Cauchy-Riemann equations, i.e., they are the real and imaginary parts of a holomorphic function. In the case of infinitesimal transformations, however, we can be less restrictive. To see this better, it is natural to take the complex coordinate $z=x^{1}+ix^{2}$, together with its complex conjugate $\bar{z}$, and consider the infinitesimal transformation $z\rightarrow z+\varepsilon(z)$, where $\varepsilon(z)$ is assumed to be a meromorphic function which admits a Laurent expansion around $z=0$. In this situation a (primary) field $ \phi (z, \bar{z}) $ transforms infinitesimally as $\delta \phi=-(\varepsilon \partial_z+\bar{\varepsilon} \partial_{\bar{z}})\phi$, from which we identify the symmetry generators $l_n= -z^{n + 1}\partial_z $ and $ \bar{l}_n=-\bar{z}^{n + 1} \partial_{\bar{z }} $, with $ n \in \mathbb Z $. They produce a direct sum of two copies of the infinite-dimensional Witt algebra, while the global conformal group that maps the complex plane onto itself is obtained from the subalgebra $\mathfrak{sl}(2,\mathbb{C})=\mathfrak{sl}(2,\mathbb R)\oplus\mathfrak{sl}(2,\mathbb R)$, which, in turn, is generated by $\{l_0,\, \bar{l}_0 ,\, l _\pm,\,\bar{l} _\pm\}$ \textcolor{red}{[\cite{Francesco}]}. Using these properties one can introduce a conformal field theory that does not even need a specific action principle.
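The Witt algebra relation $[l_m,l_n]=(m-n)\,l_{m+n}$ for the generators $l_n=-z^{n+1}\partial_z$ can be verified directly on a test function. A short sketch (purely illustrative, ours) using $f(z)=e^z$, whose derivatives are available in closed form:

```python
import cmath

def l(n, z, df):
    """Action of the Witt generator l_n = -z^{n+1} d/dz on a function with
    known derivative df, evaluated at the point z."""
    return -z ** (n + 1) * df(z)

def l_on_lf(m, n, z):
    """l_m acting on (l_n f) for f = exp: uses d/dz of -z^{n+1} e^z."""
    d_lnf = lambda w: -((n + 1) * w ** n + w ** (n + 1)) * cmath.exp(w)
    return -z ** (m + 1) * d_lnf(z)

def commutator(m, n, z):
    """[l_m, l_n] f evaluated at z, for f = exp."""
    return l_on_lf(m, n, z) - l_on_lf(n, m, z)
```

Since everything is computed analytically, the identity $[l_m,l_n]f=(m-n)\,l_{m+n}f$ holds at any nonzero sample point to machine precision.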
This action-free approach corresponds to the so-called conformal bootstrap \textcolor{red}{[\cite{Polyakov1}]}. Theories of this type, ``minimal models'', as they are often called \textcolor{red}{[\cite{Polyakov2, Francesco}]}, appear in the study of critical points in second-order phase transition phenomena, and their main advantage is that the two- and three-point correlation functions can be calculated by symmetry arguments alone. On the other hand, conformal theories in higher dimensions became popular after Maldacena's famous article \textcolor{red}{[\cite{Maldacena}]}, where a duality between a gravity theory in AdS (type $IIB$ string theory in $ AdS_5 \times S^5 $) and a conformal field theory on the boundary ($ \mathcal{N} = 4 $ supersymmetric Yang-Mills) was shown. This AdS/CFT correspondence, along with holographic techniques, has found applications not only in black hole physics but also in other areas such as QCD \textcolor{red}{[\cite{Ammon,App1,Brod2}]}. Beyond the Standard Model, supersymmetry has been postulated, based on transformations that relate bosons and fermions \textcolor{red}{[\cite{Waimberg3}]}. These models refer to an action principle defined in ``super-space'', where bosonic and fermionic quantities (the latter described by Grassmann variables) live together. To overcome the Coleman-Mandula theorem, ``space-time and internal symmetries cannot be combined in any but a trivial way'', see \textcolor{red}{[\cite{Pelc}]}, the concept of symmetry is generalized to a $ \mathbb Z_2 $-graded algebra, or superalgebra, which is characterized by the supercommutator $ [ [A, B]] $, \begin{itemize} \item $ [[A,B]]=[A,B]$ if $A$ and $B$ are bosonic generators, \item $ [[A,B]]=[A,B]$ if one generator is bosonic while the other is fermionic, \item $ [[A,B]]=\{A,B\}=AB+BA$ if both generators are fermionic. 
\end{itemize} To discriminate between bosonic and fermionic objects it is necessary to introduce a grading operator $ \Gamma $, $\Gamma^2=1$, that commutes with all bosonic generators and anti-commutes with the fermionic ones. The conserved quantities that generate the supersymmetry transformations are called supercharges and are fermionic operators. For the study of supersymmetry outside the framework of quantum field theory, the concepts of pseudo-classical mechanics \textcolor{red}{[\cite{psedoclasical2,psedoclasical1,Casalbuoni}]} and its quantum version, supersymmetric quantum mechanics \textcolor{red}{[\cite{Witten1, Witten2,Cooper}]}, were introduced. The latter has become an invaluable tool in the study of solvable potentials, and is closely related to the theory of integrable classical field systems and their solitonic and finite-gap type solutions \textcolor{red}{[\cite{MatSal}]}. Details of this formalism are presented in the next chapter. At this point it is clear that symmetries govern physics, and in this context the notion of hidden symmetries becomes relevant \textcolor{red}{[\cite{Cariglia}]}. To explain it, let us consider classical mechanics again. If, regardless of the initial conditions, the nature of the trajectories in some system turns out to be ``special'' (in a geometric sense), this should indicate the presence of hidden symmetries. From the perspective of symmetry transformations, these objects mix the coordinate and velocity (momentum) variables in the Lagrangian (Hamiltonian) formalism. At the quantum level, hidden symmetries can explain peculiar properties of the physical spectrum, such as degeneracy. Take, for example, the case of the Kepler-Coulomb problem, where we know that the system is invariant under rotations and that the particle trajectories, being conic sections, lie in the plane orthogonal to the angular momentum vector.
We also know that the geometric properties are determined by the energy and the angular momentum itself, but there is one more special property, the orientation of the trajectory, which is given by the so-called Laplace-Runge-Lenz vector, a quantity of second order in the canonical momenta. This vector integral is also relevant at the quantum level because it explains the ``accidental'' degeneracy in the spectrum of the hydrogen atom model \textcolor{red}{[\cite{Pauli}]}. From now on, integrals of motion that are nonlinear in the canonical momenta and different from the Hamiltonian, like the Laplace-Runge-Lenz vector just mentioned, will be called hidden symmetries. To study the geometric interpretation of these objects, which are usually related to Killing tensors and conformal Killing tensors \textcolor{red}{[\cite{Cariglia}]}, a good approach is the Eisenhart-Duval lift \textcolor{red}{[\cite{Cariglia2}]}, the procedure by which classical trajectories are identified with the null geodesics of a non-trivial geometry with two extra dimensions. Some other well-known examples where these objects play a key role are the three-dimensional isotropic harmonic oscillator \textcolor{red}{[\cite{Jauch,Frad}]}, the anisotropic harmonic oscillator \textcolor{red}{[\cite{Bonatsos,deBoer}]}, the Higgs oscillator \textcolor{red}{[\cite{Zhedanov, Evnin}]}, nonlinear supersymmetry \textcolor{red}{[\cite{Plyunonlinear}]} and a charged particle in a monopole background \textcolor{red}{[\cite{PlyWipf,InzPlyWipf2}]}. In the general case, hidden symmetries satisfy nonlinear algebras. The first examples of nonlinear algebras introduced in the field theory literature were the infinite $W$ algebras \textcolor{red}{[\cite{Zamolodchikov}]}, which are necessary to study the nature of the infinite-dimensional groups that appear in two-dimensional conformal models.
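The conservation of the Laplace-Runge-Lenz vector along Kepler orbits, discussed above, is easy to check numerically. A plain-Python sketch (illustrative only, not from the Thesis; units with unit mass and coupling, fourth-order Runge-Kutta integration in the orbital plane):

```python
def deriv(s):
    """Kepler equations of motion in the plane: H = |p|^2/2 - 1/|r|."""
    x, y, px, py = s
    r3 = (x * x + y * y) ** 1.5
    return [px, py, -x / r3, -y / r3]

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = deriv([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def lrl(s):
    """Laplace-Runge-Lenz vector A = p x L - r/|r| in the orbital plane."""
    x, y, px, py = s
    L = x * py - y * px                 # angular momentum (out of plane)
    r = (x * x + y * y) ** 0.5
    return (py * L - x / r, -px * L - y / r)
```

Along a bound orbit the two components of `lrl` stay constant to the accuracy of the integrator, fixing the orientation of the ellipse, exactly as described above.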
The systems listed above are examples of elementary models whose associated integrals of motion satisfy finite $W$ algebras, which, in turn, have played a relevant role in the understanding of their infinite counterparts \textcolor{red}{[\cite{deBoer}]}. In the particular case of one-dimensional quantum mechanics, the supersymmetric algorithm allows us to build families of solvable potentials that have spectral peculiarities, perfectly encoded in hidden symmetries. A good example is provided by the rational deformations of the harmonic oscillator, characterized by a potential of the form $ x^2-2 (\ln W (x))'' $, where $W (x)$ is a polynomial that is regular on the real line \textcolor{red}{[\cite{Krein,Adler}]}. Systems of this nature are important in the field of exceptional orthogonal polynomials, see for example \textcolor{red}{[\cite{Dubov,Quesne2012,Gomez2}]}. The spectrum of systems of this kind is divided into $ g $ subsets of equidistant energy levels, isolated from each other. The first $(g-1)$ subsets, or bands, have a finite number of levels, while the last band has an infinite number of equidistant discrete levels. In \textcolor{red}{[\cite{CarPly}]}, the spectrum-generating ladder operators for these systems were built, and they turned out to be higher-order symmetry operators. This Thesis reviews in a self-contained manner the results obtained within the framework of a three-year research project, in which we address the following problems: \newpage \underline{\emph{a) Connection between different mechanical systems through symmetries}} \\ The $\mathfrak{so}(2,1)$ conformal algebra $$ [D,H]=iH\,,\qquad [D,K]=-iK\,,\qquad [K,H]=2iD\,, $$ describes different quantum systems with continuous spectrum; that is, $ H $ could represent the Hamiltonian of a free particle, Calogero models, the monopole-charge system, etc.
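At the classical level, the free-particle realization $H=p^2/2$, $D=qp/2$, $K=q^2/2$ (the dynamical integrals evaluated at $t=0$) obeys the Poisson-bracket counterpart of the algebra above, $\{D,H\}=H$, $\{D,K\}=-K$, $\{K,H\}=2D$, consistent with $[\,\cdot\,,\,\cdot\,]=i\{\,\cdot\,,\,\cdot\,\}$. A numerical sketch of ours (not from the Thesis), using central differences:

```python
def pb(F, G, q, p, h=1e-6):
    """One-dimensional canonical Poisson bracket {F,G} by central differences."""
    dFq = (F(q + h, p) - F(q - h, p)) / (2 * h)
    dFp = (F(q, p + h) - F(q, p - h)) / (2 * h)
    dGq = (G(q + h, p) - G(q - h, p)) / (2 * h)
    dGp = (G(q, p + h) - G(q, p - h)) / (2 * h)
    return dFq * dGp - dFp * dGq

# Free-particle realization of the conformal generators (t = 0 parts)
H = lambda q, p: 0.5 * p * p
D = lambda q, p: 0.5 * q * p
K = lambda q, p: 0.5 * q * q
```

Because the generators are quadratic, the central differences are exact up to rounding, and the three defining relations hold at any phase-space point.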
This algebra is isomorphic to the $ \mathfrak{sl} (2, \mathbb R) $ algebra, $$ [\mathcal{J}_0,\mathcal{J}_\pm]=\pm \mathcal{J}_\pm\,,\qquad [\mathcal{J}_-,\mathcal{J}_+]=2 \mathcal{J}_0\,, $$ where $ \mathcal {J}_0 $ is a compact generator that represents the Hamiltonian of a confined system, such as the harmonic oscillator. We address the problem of establishing a mapping between these two forms of dynamics associated with the conformal algebra. Such a transformation would be useful, particularly, for mapping conserved quantities that are easier to identify for one system than for the other. \underline{\emph{b) Hidden and bosonized supersymmetry}}\\ In quantum mechanics, the reflection operator $ \mathcal{R} $ is defined by $ \mathcal{R} x = -x \mathcal{R} $ and $ \mathcal{R} p = -p \mathcal {R} $. If we choose the supersymmetric grading operator $\Gamma$ to be $ \mathcal{R} $, we can construct bosonized supersymmetric systems \textcolor{red}{[\cite{Boson1,PlyPara,Gamboa2,CorNiePly,Boson2,Boson3,jakubsky}]} which do not employ fermionic degrees of freedom. We focus on the origin of the hidden bosonized superconformal symmetry of the harmonic oscillator in one dimension \textcolor{red}{[\cite{Hiden1,BalSchBar,CarPly2,Hiden3}]}, that is, we build an unconventional supersymmetric system that, after a nonlocal transformation of the Foldy-Wouthuysen type and a dimensional reduction \textcolor{red}{[\cite{jakubsky}]}, produces the superalgebra we are looking for. \underline{\emph{c) Hidden symmetries in rationally extended conformal mechanics}}\\ The simplest conformal invariant system that one can construct is $$ S=\int_{t_1}^{t_2}\left(\frac{1}{2}\dot{q}^2-\frac{g}{2 q^2}\right) dt\,,\qquad q>0\,, $$ where $g$ is a dimensionless constant that should be non-negative in classical mechanics and satisfy $g\geq -1/4$ at the quantum level.
This model does not have a well-defined invariant ground state, and to eliminate this deficiency, de Alfaro, Fubini, and Furlan used a particular change of coordinate and time to transform the latter action into $$ S=\int_{\tau _1}^{\tau_2}\left(\frac{1}{2}\dot{y}^2-\frac{g}{2 y^2}-\frac{\omega^2}{2}y^2 \right)d\tau\,,\qquad y>0\,. $$ The corresponding Hamiltonian is compact and has a well-defined ground state at the quantum level, see \textcolor{red}{[\cite{AFF}]}. This system, called the de Alfaro, Fubini and Furlan (AFF) model, and its supersymmetric extensions \textcolor{red}{[\cite{SCM1,SCM2,SCM3,SCM4,SCM5}]} have attracted great attention over the years in a variety of fields, such as particle dynamics in black hole backgrounds \textcolor{red}{[\cite{BlackHold1,BlackHold2,BlackHold3,BlackHold5,Galaj}]}, cosmology \textcolor{red}{[\cite{DGH,PioWal}]}, the nonrelativistic AdS/CFT correspondence \textcolor{red}{[\cite{GAdS1,GAdS2, BarFue, Jack}]}, the QCD confinement problem \textcolor{red}{[\cite{App1,Brod2}]}, the physics of Bose-Einstein condensates \textcolor{red}{[\cite{App2,App3}]} and anyon statistics \textcolor{red}{[\cite{leinaas1,leinaas2, mackenzie}]}. We apply the generalized Darboux-Crum-Krein-Adler (DCKA) transformation \textcolor{red}{[\cite{Moutard1,Moutard2,Darboux,Crum,Krein,Adler,MatSal}]} to the AFF model to construct rational deformations of this system. The objective is to follow the approach given in \textcolor{red}{[\cite{CarPly}]} to find the ladder operators that generate the spectrum of these systems.
\underline{\emph{d) Hidden symmetries in three-dimensional conformal mechanics}}\\ Consider a charged particle moving in a magnetic field generated by a Dirac monopole, i.e., in a monopole background \textcolor{red}{[\cite{Sakurai}]}, which is also subject to a central potential of the form $ V(\mbf{r}) = \frac{\alpha}{2m \mbf{r}^2} $. In \textcolor{red}{[\cite{PlyWipf}]} it was shown that the system has hidden symmetries when $ \alpha = (eg)^2 $, where $ e $ and $ g $ are the particle's electric charge and the monopole magnetic charge, respectively. It was also shown that the system allows an $ \mathcal{N} = 4 $ supersymmetric extension. We investigate the possibility of obtaining hidden integrals of motion when the central potential is replaced by $ V(\mbf{r}) = \frac{\alpha}{2m \mbf{r}^2} +\frac{m \omega^2 \mbf{r}^2}{2} $, and we look for possible supersymmetric extensions. The results obtained from problem \emph{a)} are used to investigate this problem. The results of the investigation of the listed problems were reported in the articles \textcolor{red}{[\cite{CarInzPly,InzPly1,InzPly2,InzPly3,InzPlyWipf1,InzPlyWipf2}]}. The subsequent main part of the Thesis is organized as follows. In Chap. \ref{ChSUSY} we review the supersymmetric quantum mechanics formalism as well as the generalized Darboux transformations and their confluent extensions. In Chap. \ref{ChConformal} we revisit the one-dimensional conformal mechanics model of de Alfaro, Fubini and Furlan \textcolor{red}{[\cite{AFF}]}, as well as its $\mathcal{N}=2$ supersymmetric extension, leading us to the $\mathfrak{osp}(2,2)$ superconformal symmetry. In Chap. \ref{ChBridge}, based on \textcolor{red}{[\cite{InzPlyWipf1}]}, we consider the conformal bridge transformation and its applications to models in one and two dimensions. In Chap. \ref{ChHiddenboson}, we explain the origin of the hidden bosonic superconformal symmetry of the harmonic oscillator \textcolor{red}{[\cite{InzPly1}]}. 
In Chap. \ref{ChRQHO} we review the results of ref. \textcolor{red}{[\cite{CarInzPly}]}, where rational extensions of the conformal mechanics model characterized by the potential $ \frac{m(m+1)}{x^2}$ with $m=1,2,\ldots$, as well as its spectrum-generating ladder operators, are constructed. In Chap. \ref{ChNonLinearSUSY}, following \textcolor{red}{[\cite{InzPly2}]}, we consider supersymmetric extensions of the rationally deformed system of Chap. \ref{ChRQHO}, as well as its complete spectrum-generating nonlinear superalgebra. In Chap. \ref{ChKlein} we exploit a discrete Klein four-group symmetry of the Schr\"odinger equation for the AFF model to generalize the construction of rationally extended systems and the spectrum-generating ladder operator sets to the case in which the integer parameter $m$ is replaced by a real number $\nu\geq-1/2$ \textcolor{red}{[\cite{InzPly3}]}. In this case, the confluent Darboux transformations appear naturally. Chaps. \ref{Chapmono1} and \ref{Chapmono2} are devoted to the investigation, in light of hidden symmetries, of the conformal mechanics in a monopole background as well as its supersymmetric extension, which is characterized by a three-dimensional realization of the $\mathfrak{osp}(2,2)$ superconformal symmetry \textcolor{red}{[\cite{InzPlyWipf2}]}. The Thesis ends with the Conclusion and Outlook. In the Appendix, some technical details are collected. \chapter{Supersymmetric quantum mechanics } \label{ChSUSY} The application of supersymmetric ideas in nonrelativistic quantum mechanics has given us a better understanding of the problem of solvable potentials and their associated hidden symmetries. In this context, the main technique is the factorization method \textcolor{red}{[\cite{Infeld,Cooper}]}, which relates a particular quantum mechanical system to another one (the so-called superpartner). 
In the one-dimensional case, the formalism for constructing such operators (starting from a well-known quantum system) is known as the Darboux-Crum-Krein-Adler transformation \textcolor{red}{[\cite{Moutard1,Moutard2,Darboux,Crum,Krein,Adler,MatSal}]}. The algorithmic procedure involves a given number of eigenstates of the original system, typically called ``seed states'', and in its confluent extension Jordan states are also considered \textcolor{red}{[\cite{Jordan2,Jordan1,Jordan3}]}. In this chapter we revisit these methods. Generalizations to higher spatial dimensions can be formulated in different ways, see \textcolor{red}{[\cite{HSUSY1,HSUSY2,HSUSY3,HSUSY4,HSUSY5,HSUSY6}]}. In this Thesis, we only consider the approach based on a given Dirac Hamiltonian, whose square produces a supersymmetric Hamiltonian operator \textcolor{red}{[\cite{Cooper}]}. \section{The one-dimensional case} In one-dimensional systems, the factorization method consists in introducing intertwining operators of the form \begin{equation} A=\sqrt{\frac{\hbar^2}{2m}}\frac{d}{dx}+W(x)\,,\qquad A^\dagger=-\sqrt{\frac{\hbar^2}{2m}}\frac{d}{dx}+W(x)\,, \end{equation} which satisfy \begin{eqnarray} \label{susy1} & AA^\dagger=H_+\,,\qquad A^\dagger A=H_-\,,\qquad H_\pm=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}+W(x)^2\pm \frac{\hbar}{\sqrt{2m}}W'(x)\,. \end{eqnarray} Here $W(x)$ is called the superpotential and $H_{\pm}$ are the superpartner systems. Now, let us assume we know a function $\psi_*$ such that $H_-\psi_*=0$. 
This defines a nonlinear Riccati-type equation for $W$, \begin{equation} W(x)^2-\frac{\hbar}{\sqrt{2m}}\frac{dW}{dx}=u(x)\,,\qquad u(x)= \frac{\hbar^2}{2m}\frac{1}{\psi_*}\frac{d^2}{dx^2}\psi_*\,, \end{equation} a particular solution of which is \begin{equation} \label{intert1} W(x)=-\frac{\hbar}{\sqrt{2m}}\frac{\psi_{*}'}{\psi_{*}}=-\frac{\hbar}{\sqrt{2m}}\ln(\psi_*)'\quad \Rightarrow\quad A=\frac{\hbar}{\sqrt{2m}}\left(\frac{d}{dx}-\ln(\psi_*)'\right)= \frac{\hbar}{\sqrt{2m}}\psi_*\frac{d}{dx}\frac{1}{\psi_{*}}\,, \end{equation} which in turn implies $A\psi_{*}=0$. This result allows us to conclude the following: for a given well-known physical system we can select one of the two linearly independent (formal) zero-energy solutions to recognize the associated superpotential and use it to construct a new quantum mechanical system given by $H_+$. From the first equation in (\ref{intert1}) it can be concluded that $\psi_*$ must not have zeros in the domain of $H_-$ in order to obtain, \emph{a priori}, a regular superpotential and, \emph{a posteriori}, a well-defined superpartner in the same domain. If the selected state does not fulfill this condition we call the resulting system a ``virtual system'' that makes no physical sense\footnote{Such virtual systems are useful in the context of higher order supersymmetry, see for example \textcolor{red}{[\cite{Arancibia,Plyushchay,CarInzPly}]}. }. One typically refers to $\psi_*$ as a seed state. On the other hand, the action of the operator $A$ on other eigenfunctions of $H_-$ produces eigenstates of $H_+$. To prove this statement we use Eq. (\ref{susy1}) to deduce the intertwining relations \begin{equation} \label{Intertwiningrelation0} AH_-=H_+A\,,\qquad A^\dagger H_+=H_-A^\dagger\,. \end{equation} Then, if $\psi_\lambda$ is an eigenstate of $H_-$ with eigenvalue $\lambda$ we get \begin{equation} H_-\psi_{\lambda}=\lambda\psi_{\lambda}\qquad\Rightarrow\qquad H_+\,(A\psi_{\lambda})=\lambda\,(A\psi_{\lambda})\,. 
\end{equation} Of course these relations also work for the second linearly independent solution of the form \begin{equation}\label{tildepsi} \widetilde{\psi}_{\lambda}=\psi_{\lambda} \int^x\frac{d\zeta}{(\psi_{\lambda}(\zeta))^2}\,, \end{equation} which together with $\psi_\lambda$ satisfies $W(\psi_\lambda,\widetilde{\psi}_\lambda)=1$, where $W(.,.)$ is the Wronskian of two functions. It is not difficult to show that the operator $A^\dagger$ annihilates the state $A \widetilde{\psi}_*=1/\psi_*$, which is one of the zero eigenvalue solutions of $H_+$\footnote{Note that with this method we only obtain one of the two linearly independent solutions since $A\psi_*=0$. To obtain the second linearly independent solution we should extend the transformation by applying it to Jordan states.}. Knowing this, one can say something about the spectrum of the latter Hamiltonian in correspondence with the behavior of the seed state. First, acting on physical states of $H_-$, the operator $A$ produces physical states of $H_+$; second, if $\psi_*$ is a physical state, then the spectrum of $H_+$ does not have this energy level. On the other hand, if the seed state is nonphysical, two things could happen: 1) $1/\psi_*$ is normalizable and the system $H_+$ possesses an extra level, or 2) $1/\psi_{*}$ is nonphysical and both systems are isospectral. Finally, suppose we have a given number of differential operators denoted by $ I_i $, each of them of a certain differential order $ d_i $, which together with $H_-$ span a symmetry algebra. In this context, it is not difficult to show the relation $[H_+,AI_iA^\dagger]=A[H_-,I_i]A^\dagger$, which means that when the operator $I_i$ is an integral of motion of $H_-$, then $AI_iA^\dagger$ (of differential order $d_i+2$) is an integral of motion of $H_+$, and the system is described (in the general case) by a certain nonlinear deformed algebra. In conclusion, the method not only serves to map states but also to obtain hidden integrals of motion of the generated system. 
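As a consistency check of the formulas above (our own sketch, not part of the original text), the factorization (\ref{susy1}), the Riccati relation, the annihilation $A\psi_*=0$ and the unit Wronskian of $\psi_\lambda$ and $\widetilde{\psi}_\lambda$ can all be verified with a computer algebra system; we use Python/SymPy and set $\hbar/\sqrt{2m}=1$, taking $\psi_*=e^{-x^2/2}$ as an illustrative seed:

```python
import sympy as sp

x, zeta = sp.symbols('x zeta')

# --- factorization (susy1) for a generic superpotential W(x) ---
W = sp.Function('W')(x)
f = sp.Function('f')(x)                       # arbitrary test function
A    = lambda g, w: sp.diff(g, x) + w*g       # A  =  d/dx + W
Adag = lambda g, w: -sp.diff(g, x) + w*g      # A† = -d/dx + W
assert sp.simplify(A(Adag(f, W), W) - (-sp.diff(f, x, 2) + (W**2 + sp.diff(W, x))*f)) == 0
assert sp.simplify(Adag(A(f, W), W) - (-sp.diff(f, x, 2) + (W**2 - sp.diff(W, x))*f)) == 0

# --- concrete seed psi_* = exp(-x^2/2): W = -(ln psi_*)' = x ---
psi = sp.exp(-x**2/2)
W0 = -sp.diff(sp.log(psi), x)
assert sp.simplify(W0 - x) == 0
assert sp.simplify(W0**2 - sp.diff(W0, x) - sp.diff(psi, x, 2)/psi) == 0  # Riccati
assert sp.simplify(A(psi, W0)) == 0           # A annihilates the seed

# --- second solution (tildepsi): Wronskian(psi, psi~) = 1 for any psi ---
h = sp.Function('psi')
h_t = h(x)*sp.Integral(1/h(zeta)**2, (zeta, 0, x))   # lower limit arbitrary
assert sp.simplify(h(x)*sp.diff(h_t, x) - sp.diff(h(x), x)*h_t) == 1
```

The last check differentiates the unevaluated integral via the Leibniz rule, so it holds for an arbitrary function $\psi_\lambda$.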
This procedure is known as ``Darboux-dressing''. We now have the complete picture to extend our superpartner systems to supersymmetric quantum mechanics. We use our Hamiltonians and intertwiners to construct the $2\times 2$ matrix operators \begin{equation} \mathcal{H}=\left(\begin{array}{cc} H_+ & 0\\ 0 & H_- \end{array}\right)\,,\qquad \mathcal{Q}_1=\left(\begin{array}{cc} 0 & A\\ A^\dagger & 0 \end{array}\right)\,,\qquad \mathcal{Q}_2=i\sigma_3\mathcal{Q}_1\,, \end{equation} which satisfy the $\mathcal{N}=2$\footnote{Here $\mathcal{N}$ indicates the number of true fermionic integrals.} Poincar\'e superalgebra \begin{equation} \label{Poincare0} [\mathcal{H},\mathcal{Q}_{a}]=0\,,\qquad \{\mathcal{Q}_a,\mathcal{Q}_{b}\}=2\delta_{ab}\mathcal{H}\,, \end{equation} with $\mathbb Z_2$ grading operator $\Gamma=\sigma_3$. In the case in which the state $\psi_*$ (or $1/\psi_*$) is the physical ground state of $H_-$ ($H_+$), the spinor $(0,\, \psi_*)^t$ (or $(1/\psi_*,\, 0)^t$) is the supersymmetric invariant ground state of $\mathcal{H}$. Otherwise supersymmetry is spontaneously broken. The method described in this paragraph is called the Darboux transformation and is the first step in an iterative process. In the next step we can produce a third new Hamiltonian by taking a seed state from $H_+$, and so on. The final form of the method after several steps is called the ``Darboux-Crum-Krein-Adler transformation'' (DCKA transformation for short), whose details are explored in the following section. \section{DCKA transformation} \label{Chap1Darbux} Let us start with the equation \begin{equation} \label{Sch} L\psi_\lambda=\lambda\psi_\lambda\,,\qquad L= -\frac{d^2}{dx^2}+V(x)\,, \end{equation} corresponding to the eigenvalue problem of a Schr\"odinger type operator $L$. In this section we treat Eq. (\ref{Sch}) as a formal second order differential equation on some interval $(a,b)$. 
Consider now a set of solutions $\psi_{k}$ corresponding to eigenvalues $\lambda_{k}$, $k=1,\ldots,n$. We use them as seed states for our DCKA transformation and generate the new Schr\"odinger operator \begin{equation} \label{Dar} \breve{L}\Psi_{\lambda}= \lambda\Psi_\lambda\,,\qquad \breve{L}= -\frac{d^2}{dx^2}+V(x)-2\frac{d^2}{dx^2} \ln W(\psi_{1},\ldots,\psi_{n})\,. \end{equation} If the set of the seed states is chosen in such a way that the generalized Wronskian of $n$ functions \begin{equation} W(f_1(x),\ldots,f_n(x))=\text{det}\left(\frac{d^{j-1} f_{i}(x)}{dx^{j-1}}\right)\,,\qquad i,j=1,\ldots,n\,, \end{equation} takes nonzero values on $(a,b)$, then the potential of the generated system will also be nonsingular there. In the general case, solutions of (\ref{Dar}) are obtained from solutions of Eq. (\ref{Sch}) as follows \begin{equation} \label{Darstates} \Psi_{\lambda}=\frac{W(\psi_{1},\ldots,\psi_{n},\psi_{\lambda})}{W(\psi_{1},\ldots,\psi_{n})}=\mathbb A_{n}\psi_{\lambda}\,, \end{equation} where $\mathbb A_{n}$ is the differential operator of order $n$ defined recursively as \begin{equation} \label{generic-inter} \mathbb A_{n}=A_n\ldots A_1\,,\qquad A_k=\mathbb A_{k-1}\psi_k\frac{d}{dx}\left(\frac{1}{\mathbb A_{k-1}\psi_k}\right), \qquad k=1,\ldots,n,\qquad \mathbb A_0=1\,. \end{equation} Note that this operator is the natural generalization of (\ref{intert1}) with $\hbar/\sqrt{2m}=1$, and by construction, $\ker\mathbb A_n=\text{span}\{\psi_{1},\ldots,\psi_{n}\}$. The operator $\mathbb A_n$ and its Hermitian conjugate $\mathbb A_n^\dagger$ intertwine the operators $L$ and $\breve{L}$, \begin{equation} \label{inter-gen} \mathbb A_nL=\breve{L}\mathbb A_n\,,\qquad \mathbb A_n^\dagger \breve{L}=L\mathbb A_n^\dagger\,, \end{equation} and satisfy the relations \begin{equation} \label{poly1} \mathbb A_n^\dagger\mathbb A_n=\prod_{k=1}^{n}(L-\lambda_k)\,,\qquad \mathbb A_n\mathbb A_n^\dagger=\prod_{k=1}^{n}(\breve{L}-\lambda_k)\,. 
\end{equation} From the first equation in (\ref{poly1}) one can find that $\ker\mathbb A_n^\dagger=\text{span}\{\mathbb A_n\widetilde{\psi}_{1},\ldots,\mathbb A_n\widetilde{\psi}_{n}\}$. Similarly to (\ref{Darstates}), $\mathbb A_{n}^\dagger \Psi_{\lambda}=\psi_{\lambda}$ for $\Psi_\lambda\notin \text{ker}\,\mathbb A_{n}^\dagger$, and $$ \mathbb A_{n}^\dagger \widetilde{(\mathbb A_n\widetilde{\psi}_k)}=\psi_k\in \text{ker}\,\mathbb A_n\,. $$ Following the same approach as in the previous section, we can also use the pair $L$ and $\breve{L}$ and their corresponding intertwining operators to construct an $\mathcal{N}=2$ superextended system described by the $2\times 2$ matrix Hamiltonian and the supercharges given by \begin{equation}\label{Hlambda*} \mathcal{H}= \left( \begin{array}{cc} H_1\equiv \breve{L}-\lambda_*& 0 \\ 0 & H_0\equiv L-\lambda_* \end{array} \right),\qquad \mathcal{Q}_1= \left( \begin{array}{cc} 0& \mathbb A_n \\ \mathbb A_n^\dagger & 0 \end{array} \right), \qquad \mathcal{Q}_2=i\sigma_3\mathcal{Q}_1\,, \end{equation} where $\lambda_{*}$ is a constant associated with the energy levels of the seed states. These generators produce the (anti)commutation relations \begin{equation}\label{N2susy} [\mathcal{H},\mathcal{Q}_a]=0\,, \quad \{\mathcal{Q}_a,\mathcal{Q}_b\}=2\delta_{ab}P_n(\mathcal{H}+\lambda_*)\,, \end{equation} which for $n=1$ correspond to the $\mathcal{N}=2$ Poincar\'e supersymmetry, while for $n=2,3,\ldots$ they describe a nonlinear deformation of it (here $P_{n}(\eta)$ denotes a polynomial of order $n$ in $\eta$). Examples of this kind of systems will be the main focus of Chap. \ref{ChNonLinearSUSY}. The iterative nature of the DCKA transformation allows us to derive some useful Wronskian identities for a given set of eigenstates. They are shown in Appendix \ref{ApenWI}. 
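To make the construction concrete, here is a worked example of ours (not taken from the text): a DCKA transformation of the free particle $L=-d^2/dx^2$ with the two seed states $\cosh x$ and $\sinh 2x$ (eigenvalues $-1$ and $-4$). Their Wronskian $2\cosh^3 x$ is nodeless, the transformed potential is the reflectionless $-6/\cosh^2 x$, and the Crum formula (\ref{Darstates}) maps plane waves to eigenstates of $\breve L$:

```python
import sympy as sp
from sympy import wronskian

x, k = sp.symbols('x k', real=True)

# seed states of L = -d^2/dx^2 with eigenvalues -1 and -4
psi1, psi2 = sp.cosh(x), sp.sinh(2*x)

W12 = sp.simplify(wronskian([psi1, psi2], x))
assert sp.simplify(W12 - 2*sp.cosh(x)**3) == 0      # nodeless on the real line

V_new = sp.simplify(-2*sp.diff(sp.log(W12), x, 2))
assert sp.simplify(V_new + 6/sp.cosh(x)**2) == 0    # breve{V} = -6 sech^2 x

# map a plane wave (eigenvalue k^2) with the Wronskian (Crum) formula
psi_k = sp.exp(sp.I*k*x)
Psi = wronskian([psi1, psi2, psi_k], x) / W12
residual = -sp.diff(Psi, x, 2) + V_new*Psi - k**2*Psi
assert sp.simplify(residual.rewrite(sp.exp)) == 0   # eigenstate of breve{L}
```

The choice of seed states here is ours, made so that everything stays elementary; the same script applies verbatim to any other nonsingular seed set.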
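The superalgebra (\ref{N2susy}) follows from the factorized form of the operators alone; here is a minimal sketch of ours with noncommuting symbols standing in for $\mathbb A_n$ and $\mathbb A_n^\dagger$, independent of the concrete system:

```python
import sympy as sp

# noncommutative stand-ins for the intertwiners A_n and A_n^dagger
A, Ad = sp.symbols('A Ad', commutative=False)

H  = sp.Matrix([[A*Ad, 0], [0, Ad*A]])      # diag(A A†, A† A), cf. (poly1)
Q1 = sp.Matrix([[0, A], [Ad, 0]])
Q2 = sp.I*sp.diag(1, -1)*Q1                 # Q_2 = i sigma_3 Q_1

comm  = lambda X, Y: (X*Y - Y*X).expand()
acomm = lambda X, Y: (X*Y + Y*X).expand()

assert comm(H, Q1) == sp.zeros(2, 2)
assert comm(H, Q2) == sp.zeros(2, 2)
assert acomm(Q1, Q2) == sp.zeros(2, 2)
# {Q_a, Q_a} = 2 diag(A A†, A† A), which by (poly1) is the polynomial P_n
assert acomm(Q1, Q1) == (2*H).expand()
assert acomm(Q2, Q2) == (2*H).expand()
```

Nothing beyond associativity of the operator products is used, which is why the same relations hold for every order $n$ of the transformation.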
\section{Jordan states and confluent Darboux transformation} Jordan states correspond to functions that are annihilated by a certain polynomial of the Schr\"odinger operator $L$ \textcolor{red}{[\cite{Jordan1}]}. They were used, for example, in the construction of isospectral deformations of the harmonic oscillator \textcolor{red}{[\cite{Car2,CarPly,InzPly1}]}, and they can also be used to construct solutions of the KdV equation \textcolor{red}{[\cite{Correa2016,JM1}]}. These Jordan states will play a key role throughout this manuscript. Here we focus our attention on building solutions of the fourth order differential equation $(L-\lambda_*)^2\chi_*=0$. Let us take an eigenstate $\psi_*$ with eigenvalue $\lambda_*$ as a seed state of the Darboux transformation. The corresponding intertwining operators are \begin{equation} A_{\psi_*}=\psi_*\frac{d}{dx}\left(\frac{1}{\psi_*}\right)\,,\qquad A_{\psi_*}^\dagger=-\frac{1}{\psi_*}\frac{d}{dx}\psi_*\,. \end{equation} According to Eq. (\ref{poly1}), their product gives us the shifted Schr\"odinger operator $A_{\psi_*}^\dagger A_{\psi_*}=L-\lambda_*$, whose kernel is spanned by the linearly independent states $\psi_*$ and $\widetilde{\psi}_*$. The problem of constructing Jordan states then reduces to solving the equations \begin{equation}\label{Omega12} A_{\psi_*}^\dagger A_{\psi_*}\Omega_*=(L-\lambda_*)\Omega_*=\psi_*\,,\qquad A_{\psi_*}^\dagger A_{\psi_*}\breve{\Omega}_*=(L-\lambda_*)\breve{\Omega}_*=\widetilde{\psi}_*\,. \end{equation} Their solutions are given, up to a linear combination of $\psi_*$ and $\widetilde{\psi}_*$, by particular solutions of the respective inhomogeneous equations, \begin{eqnarray} \label{omega1} \Omega_*=\psi_*\int_{a}^{x}\frac{d\zeta}{\psi_*^2(\zeta)}\int_{\zeta}^{b}\psi_*^2(\eta)d\eta \,,\qquad \breve{\Omega}_*=\psi_*\int_{a}^{x}\frac{d\zeta}{\psi_*^2(\zeta)}\int_{\zeta}^{b}\psi_* (\eta)\widetilde{\psi}_*(\eta)d\eta\,. 
\end{eqnarray} Here the integration limits are chosen coherently with the region where the operator $L$ is defined, and we have the relations \begin{eqnarray} \label{JorWronskian} W(\psi_*,\Omega_*)=\int_{x}^{b}\psi_{*}^2d\zeta\,,\qquad W(\psi_*,\breve{\Omega}_*)=\int_{x}^{b}\psi_{*}\widetilde{\psi}_*d\zeta\,, \end{eqnarray} which will be useful to produce nonsingular confluent Darboux transformations. Let us now inspect the role of the Jordan states (\ref{omega1}) in the DCKA transformation generated by a set of the seed states $\{\psi_n\}$. The intertwining operator (\ref{generic-inter}) and Eqs. (\ref{inter-gen}) and (\ref{Omega12}) give us the relations \begin{equation} \label{Dar-jor} \mathbb A_n\psi_{*}=(\breve{L}-\lambda_{*})\mathbb A_n\Omega_{*}\,,\qquad \mathbb A_n\widetilde{\psi}_{*}=(\breve{L}-\lambda_{*})\mathbb A_n\breve{\Omega}_{*}\,. \end{equation} If the state $\psi_*$ (or $\widetilde{\psi}_*$) is annihilated by $\mathbb A_n$, i.e., if the set of the seed states $\{\psi_n\}$ includes $\psi_*$ (or $\widetilde{\psi}_*$), the function $\mathbb A_n\Omega_{*}$ (or $\mathbb A_n\breve{\Omega}_{*}$) will be an eigenstate of $\breve{L}$ with eigenvalue $\lambda_*$, which can be used to produce another Darboux transformation if we consider $\breve{L}$ as an intermediate system. Otherwise, the indicated function is a Jordan state of $\breve{L}$, and in correspondence with (\ref{omega1}) we have \begin{eqnarray} \label{omega2} &\mathbb A_n\Omega_*=(\mathbb A_n\psi_*)\int_{a}^{x}\frac{d\zeta}{(\mathbb A_n\psi_*)^2(\zeta)}\int_{\zeta}^{b}(\mathbb A_n\psi_*)^2(\eta)d\eta \,,\qquad&\\& \mathbb A_n\breve{\Omega}_*=(\mathbb A_n\psi_*)\int_{a}^{x}\frac{d\zeta}{(\mathbb A_n\psi_*)^2(\zeta)}\int_{\zeta}^{b}(\mathbb A_n\psi_*) (\eta)\widetilde{\mathbb A_n\psi_*}(\eta)d\eta\,& \end{eqnarray} up to a linear combination with $\mathbb A_n\psi_*$ and $\widetilde{\mathbb A_n\psi_*}$. 
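A quick verification of (\ref{Omega12}) and (\ref{omega1}) (our example, not from the text): for the seed $\psi_*=e^{-x^2/2}$, an eigenstate of $L=-d^2/dx^2+x^2$ with $\lambda_*=1$, and $(a,b)=(0,\infty)$, the inner integral is elementary while the outer one can be kept unevaluated and differentiated by the Leibniz rule:

```python
import sympy as sp

x, zeta = sp.symbols('x zeta', real=True)

# seed: psi_* = exp(-x^2/2), eigenstate of L = -d^2/dx^2 + x^2, lambda_* = 1
psi = sp.exp(-x**2/2)

# Omega_* from (omega1) with (a,b) = (0, oo); the inner integral is
# int_zeta^oo psi_*^2 d(eta) = (sqrt(pi)/2) erfc(zeta)
inner = sp.sqrt(sp.pi)/2*sp.erfc(zeta)
Omega = psi*sp.Integral(sp.exp(zeta**2)*inner, (zeta, 0, x))

# (L - lambda_*) Omega_* = psi_*
residual = -sp.diff(Omega, x, 2) + (x**2 - 1)*Omega - psi
assert sp.simplify(sp.expand(residual)) == 0
```

The unevaluated integral drops out of the residual with coefficient $-\psi_*''+(x^2-1)\psi_*=0$, which is exactly the mechanism behind (\ref{Omega12}).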
Having in mind that Jordan states appear naturally in the confluent generalized Darboux transformations \textcolor{red}{[\cite{Jordan1}]}, one can directly consider a generalized Darboux transformation based on the following set of seed states: $(\psi_1,\Omega_1,\ldots,\psi_n,\Omega_n)$. This generates a Darboux-transformed system which we denote by $\widehat{L}_{[2n]}$. The intertwining operator $\mathbb A_{2n}^{\Omega}$, a differential operator of order $2n$, is built according to the same rule (\ref{generic-inter}), but with the inclusion of Jordan states into the set of generating functions. By construction, this operator annihilates the chosen $2n$ seed states, and one can show that \begin{equation} \label{Polly2} (\mathbb A_{2n}^\Omega)^\dagger\mathbb A_{2n}^\Omega=\prod_{i=1}^{n}(L-\lambda_i)^2\,,\qquad \mathbb A_{2n}^\Omega(\mathbb A_{2n}^\Omega)^\dagger=\prod_{i=1}^{n}(\widehat{L}_{[2n]}-\lambda_i)^2\,. \end{equation} This, in particular, means that $\ker(\mathbb A_{2n}^\Omega)^\dagger=\text{span}\{\mathbb A_{2n}^\Omega\widetilde{\psi}_{1}, \mathbb A_{2n}^\Omega\breve{\Omega}_{1},\ldots,\mathbb A_{2n}^{\Omega}\widetilde{\psi}_{n},\mathbb A_{2n}^{\Omega}\breve{\Omega}_{n}\}$. \section{A three-dimensional example} \label{SecSUSY3d} Unlike the one-dimensional case, three-dimensional supersymmetric quantum mechanics does not have a unique generalization. Here, following \textcolor{red}{[\cite{Cooper}]}, we begin with a charged massless Dirac particle in a four-dimensional Euclidean space. 
Assuming the presence of an external electromagnetic field, the Dirac equation takes the form ($\hbar=e=c=1$) \begin{equation} \label{DiracEq} \gamma^\mu P_\mu\Psi=0\,, \qquad P_\mu= -i\partial_\mu+ A_\mu\,,\qquad\mu=0,1,2,3\,, \end{equation} where $A_\mu$ is the associated $U(1)$ gauge potential, the metric is just $\delta_{\mu\nu}$ and $\gamma^\mu$ are the Euclidean gamma matrices \begin{eqnarray} \gamma_{i}=\left(\begin{array}{cc} 0 & -i\sigma_i\\ i\sigma_i & 0 \end{array}\right)\,,\qquad \gamma_{0}=\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)\,, \qquad \Gamma=\gamma_5=\gamma_1\gamma_2\gamma_3\gamma_0= \left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right)\,. \end{eqnarray} Assuming that the gauge field does not depend on $t$, we can look for stationary solutions of the form $\Psi=e^{i\lambda t}\Phi(\mbf{r})$. Expanding equation (\ref{DiracEq}) in terms of this ansatz we get \begin{eqnarray} \left( \begin{array}{cc} 0 & \lambda\\ \lambda & 0 \end{array}\right)\Phi= \mathcal{Q}_1\Phi\,,\qquad \mathcal{Q}_1=\left( \begin{array}{cc} 0 & \,\mbfgr{\sigma}\cdot(\mbfgr{\nabla}+i\mbf{A})-A_0\\ -\,\mbfgr{\sigma}\cdot(\mbfgr{\nabla}+i\mbf{A})-A_0 & 0 \end{array}\right) \,. \end{eqnarray} By applying $\mathcal{Q}_1$ from the left, the Schr\"odinger equation $ \lambda^2\Phi=\mathcal{H}\Phi\,$ is obtained, where \begin{eqnarray} \label{Pauli1} \mathcal{H}=(\mbf{p}+\mbf{A})^2+A_0^2 +\Pi_+\,\mbfgr{\sigma}\cdot (\mbf{E}+\mbf{B})+\Pi_-\,\mbfgr{\sigma}\cdot(\mbf{E}-\mbf{B})\,,\qquad \Pi_\pm=\frac{1}{2}(1\pm \Gamma)\,, \end{eqnarray} is a Pauli Hamiltonian operator with $\mbf{E}=-\mbfgr{\nabla} A_0$ and $\mbf{B}=\mbfgr{\nabla} \cross \mbf{A}$. The operator $\mathcal{Q}_1$, together with $\mathcal{Q}_2=i\Gamma\mathcal{Q}_1$ and $\mathcal{H}$, produces a three-dimensional realization of the $\mathcal{N}=2$ Poincar\'e supersymmetry with grading operator $\Gamma$. 
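These conventions are easy to check numerically; the following sketch (ours, not part of the text) verifies the Euclidean Clifford algebra and a product ordering of the four gamma matrices that reproduces the chiral grading $\mathrm{diag}(1,1,-1,-1)$:

```python
import numpy as np

s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]  # Pauli sigma_i
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[Z2, I2], [I2, Z2]])
g = [g0] + [np.block([[Z2, -1j*si], [1j*si, Z2]]) for si in s]    # g[mu], mu = 0..3

# Euclidean Clifford algebra {gamma_mu, gamma_nu} = 2 delta_{mu nu}
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2*(mu == nu)*np.eye(4))

# a product ordering that reproduces the chiral grading diag(1, 1, -1, -1)
Gamma = g[1] @ g[2] @ g[3] @ g[0]
assert np.allclose(Gamma, np.diag([1, 1, -1, -1]).astype(complex))
assert np.allclose(Gamma @ Gamma, np.eye(4))
```

Any of the $4!$ orderings of the product differs from this one at most by a sign, so the check also fixes the phase conventions unambiguously.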
Furthermore, in the dual case $\mbf{E}=\mbf{B}$ (antidual case $\mbf{E}=-\mbf{B}$) the system possesses the nontrivial bosonic integral of motion $\mbfgr{\mathcal{S}}^-=\Pi_-\,\mbfgr{\sigma}$ ($\mbfgr{\mathcal{S}}^+=\Pi_+\,\mbfgr{\sigma}$), and the commutation relations $[\mathcal{S}_i^-,\mathcal{Q}_a]$ ($[\mathcal{S}_i^+,\mathcal{Q}_a]$) with $a=1,2$ produce three more pairs of supercharges. The system described by (\ref{Pauli1}) has been studied in detail in \textcolor{red}{[\cite{HSUSY3}]}, where the authors show that the dual and anti-dual cases are the only ones that admit extensions of the Poincar\'e supersymmetry. In \textcolor{red}{[\cite{PlyWipf}]}, the case of a dual and anti-dual dyon (where the magnetic field is due to a Dirac magnetic monopole) was considered, and it was shown that the system possesses the exceptional $\mathcal{N}=4$ superconformal algebra $D(1,2;\alpha)$ \textcolor{red}{[\cite{HSUSY2}]}. \section{Remarks} The tools considered in this chapter are going to be our principal methods for the rest of this Thesis. When we study one-dimensional potentials, our initial system for the DCKA transformation will always be a Newton-Hooke conformal invariant particle \textcolor{red}{[\cite{NH1,NH2,NH3,NH4}]}, the properties of which are described in Chaps. \ref{ChConformal} and \ref{ChBridge}. Our principal target is to study the hidden symmetries of the nontrivial resulting systems and their supersymmetric extensions. This is the main content of Chaps. \ref{ChHiddenboson}-\ref{ChKlein}. In Chaps. \ref{Chapmono1} and \ref{Chapmono2} we study a three-dimensional generalization of the system introduced in Chap. \ref{ChConformal}, as well as its supersymmetric extensions. The resulting system will have a superconformal symmetry that can be reinterpreted in accordance with Sec. \ref{SecSUSY3d}, but with a nontrivial gauge potential. 
\chapter{ One-dimensional conformal mechanics } \label{ChConformal} As noted in the introduction, conformal invariance appears as a natural extension of the Poincar\'e symmetry of space-time, and involves the set of transformations that perform the change $g_{\mu \nu}dx^{\mu}dx^{\nu}\rightarrow\Omega(x)g_{\mu \nu}dx^{\mu}dx^{\nu}$, where $g_{\mu\nu}$ is the metric tensor and $\Omega(x)$ the conformal factor \textcolor{red}{[\cite{Francesco,Sundermayer}]}. The transformations that accomplish this (preservation of angles) are the space-time dilatations and the special conformal transformations. Some examples of space-time manifolds that allow this extension are flat (Minkowski) space, as well as de Sitter (dS) and Anti-de Sitter (AdS) spaces \textcolor{red}{[\cite{Francesco,Nakahara}]}. The $\mathfrak{so}(2,1)$ conformal algebra is given by \begin{equation} \label{so(2,1)preambulo} [D,H]=iH\,,\qquad [D,K]=-iK\,,\qquad [K,H]=2iD\,, \end{equation} where $H$, $D$ and $K$ are the generators of time translations, dilatations and special conformal transformations; for details we recommend \textcolor{red}{[\cite{SCM5}]}. 
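As a minimal illustration (our own, not from the text), the algebra (\ref{so(2,1)preambulo}) can be realized by $2\times 2$ matrices and checked numerically:

```python
import numpy as np

# a 2x2 (non-unitary) realization of [D,H] = iH, [D,K] = -iK, [K,H] = 2iD
D = 0.5j*np.diag([1., -1.])
H = np.array([[0., 1.], [0., 0.]], dtype=complex)
K = np.array([[0., 0.], [1., 0.]], dtype=complex)

comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(D, H), 1j*H)
assert np.allclose(comm(D, K), -1j*K)
assert np.allclose(comm(K, H), 2j*D)
```

In this toy realization $H$ and $K$ are nilpotent, so it only illustrates the commutation relations; the physically relevant representations appear below.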
Taking the linear combinations \begin{equation} \label{Jintermsofso2} \mathcal{J}_0=\frac{1}{2}(\alpha^{-1} H+\alpha K)\,,\qquad \mathcal{J}_1=\frac{1}{2}( \alpha^{-1} H- \alpha K)\,,\qquad \mathcal{J}_2=D\,,\ \end{equation} where $\alpha$ is a constant that compensates the dimensions of $K$ and $H$, we obtain the Lorentz algebra in $(2+1)$-dimensional Minkowski space, with metric $\eta^{\mu\nu}=\text{diag}(-1,1,1)$, given by \begin{equation} [\mathcal{J}_{\mu},\mathcal{J}_{\nu}]=-i\epsilon_{\mu\nu\rho}\mathcal{J}^{\rho}\,,\qquad \epsilon^{012}=1\,, \end{equation} which, in turn, is isomorphic to the $\mathfrak{sl}(2\,,\mathbb R)$ algebra \textcolor{red}{[\cite{Mikisl2R}]}, \begin{equation} \label{sl(2R)preambulo} [\mathcal{J}_0,\mathcal{J}_\pm]=\pm \mathcal{J}_\pm\,,\qquad [\mathcal{J}_-,\mathcal{J}_+]=2 \mathcal{J}_0\,,\qquad \mathcal{J}_\pm=\mathcal{J}_1\pm i\mathcal{J}_2= \frac{1}{2\alpha}( H- \alpha^2 K \pm i2\alpha D) \,. \end{equation} This algebra has the automorphisms $ \mathcal{J}_0\rightarrow \mathcal{J}_0\,,$ $\mathcal{J}_\pm\rightarrow -\mathcal{J}_\pm\,,$ and $\mathcal{J}_0\rightarrow -\mathcal{J}_0\,, \mathcal{J}_\pm\rightarrow -\mathcal{J}_\mp\,,$ and the Casimir element is given by \begin{equation} \label{CasimirInvariant} \mathscr{F}=-\mathcal{J}_\mu \mathcal{J}^\mu=\mathcal{J}_0^2-\frac{1}{2}(\mathcal{J}_+\mathcal{J}_-+ \mathcal{J}_-\mathcal{J}_+)=KH-D^2\,. \end{equation} One of the objectives of this Thesis is to study models that have both this symmetry and some supersymmetric extensions of it. We also study possible nonlinear extensions of the (super)conformal algebra, realized in terms of hidden symmetries. This chapter is devoted to the analysis of classical and quantum conformal mechanical models. In Sec. \ref{SecAFF} we review the theory behind the de Alfaro, Fubini, and Furlan (AFF) model, presented in \textcolor{red}{[\cite{AFF}]}, which aims at a well-defined one-dimensional quantum system with a conformal invariant ground state. In Sec. 
\ref{SecOSP22Conformal} we use the tools developed in Chap. \ref{ChSUSY} to construct the $\mathfrak{osp} (2,2)$ supersymmetric extension of the AFF model. \section{The de Alfaro, Fubini and Furlan model} \label{SecAFF} Consider the one-dimensional system given by the action \textcolor{red}{[\cite{AFF}]} \begin{equation} \label{conformalaction} I[q]=\int \mathcal{L}(q,\dot{q}) dt\,, \quad \mathcal{L}=\frac{1}{2}\left(\dot{q}^2-\frac{g}{q^2}\right)\,, \end{equation} where $q$ takes values on the positive real line and has dimension $[q]=[\sqrt{t}]$, while $g$ is a dimensionless coupling constant which classically is assumed to be positive to avoid the ``problem of fall to the center''. This action could represent, for example, a two-particle Calogero model with the center-of-mass degree of freedom omitted \textcolor{red}{[\cite{Calogero1,Calogero2}]}. One can show that the action (\ref{conformalaction}) is invariant under time translations $t\rightarrow t+\alpha$, space-time dilatations \begin{equation} q\rightarrow e^{\frac{\beta}{2}}q\,,\qquad t\rightarrow e^{\beta}t\,, \end{equation} and the special conformal transformations \begin{equation} q\rightarrow \frac{q}{1-\gamma t}\,,\qquad t\rightarrow \frac{t}{1-\gamma t}\,, \end{equation} where $\alpha$, $\beta$ and $\gamma$ are the parameters of the corresponding transformations. This symmetry is generated by the Hamiltonian $H_g$, the dilatation generator $D$, and the generator of special conformal transformations $K$, \begin{equation} \label{conformalgenerators} H_g=\frac{1}{2}(p^2+\frac{g}{q^2})\,,\qquad D=\frac{1}{4}(qp+pq) -H_gt\,,\qquad K=\frac{1}{2}q^2-2Dt-H_gt^2\,, \end{equation} where $p=\dot{q}$. These are integrals of motion that satisfy an equation of the form $\frac{d}{dt}{A}=\frac{\partial A}{\partial t} + \{A,H\}=0$, where $\{,\}$ denotes Poisson brackets. 
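Both statements are easy to verify symbolically; the following sketch of ours checks the dynamical-integral condition for $D$ and $K$ and the classical $\mathfrak{so}(2,1)$ brackets they close with $H_g$:

```python
import sympy as sp

q, g = sp.symbols('q g', positive=True)
p, t = sp.symbols('p t', real=True)

PB = lambda f, h: sp.diff(f, q)*sp.diff(h, p) - sp.diff(f, p)*sp.diff(h, q)

H = (p**2 + g/q**2)/2
D = (q*p)/2 - H*t          # (1/4)(qp + pq) -> qp/2 classically
K = q**2/2 - 2*D*t - H*t**2

# classical so(2,1): {D,H} = H, {D,K} = -K, {H,K} = -2D
assert sp.simplify(PB(D, H) - H) == 0
assert sp.simplify(PB(D, K) + K) == 0
assert sp.simplify(PB(H, K) + 2*D) == 0

# D and K are dynamical integrals: partial_t A + {A, H} = 0
assert sp.simplify(sp.diff(D, t) + PB(D, H)) == 0
assert sp.simplify(sp.diff(K, t) + PB(K, H)) == 0
```

Note that the brackets close at any value of $t$, not only at $t=0$, which is the hallmark of dynamical integrals.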
We often call objects of this type ``dynamical integrals'', and in this case they obey the classical version of the $\mathfrak{so}(2,1)$ algebra \begin{equation} \label{so(2,1) cap 1} \{D,H_g\}=H_g\,,\qquad \{D,K\}=-K\,,\qquad \{H_g,K\}=-2D\,, \end{equation} and the Casimir invariant (\ref{CasimirInvariant}) takes the value $\mathscr{F}=\frac{1}{4}g$. The last relation in (\ref{conformalgenerators}) gives us the solution of the corresponding Euler-Lagrange equation derived from (\ref{conformalaction}), \begin{equation}\label{q(t)} q(t)=\sqrt{2(at^2+2bt+c)}=\sqrt{2\left(a(t+\frac{b}{a})^2+\frac{\mathscr{F}}{a}\right)}\,, \end{equation} where the real-valued constants $a$, $b$ and $c$ correspond to the values of the integrals $H_g$, $D$ and $K$, respectively (for a given initial configuration). Note that in the case of $ g = 0 $, $H_g$ takes the form of the Hamiltonian of a free particle, but defined on the restricted domain $\mathbb R^+$. The notable difference between this system and the free particle $H_{f}$, which lives in $ \mathbb R $, is that the latter has two additional integrals of motion, namely the momentum $p$ and the Galileo boost generator $\chi=\tilde{q}-pt$, with $\tilde{q}\in \mathbb R$. 
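The solution (\ref{q(t)}) can also be verified directly (our check): with $(a,b,c)$ the values of $(H_g,D,K)$, one has $g=4\mathscr{F}=4(ac-b^2)$, and the Euler-Lagrange equation $\ddot q=g/q^3$ holds identically:

```python
import sympy as sp

t = sp.symbols('t', real=True)
a, b, c = sp.symbols('a b c', positive=True)

g = 4*(a*c - b**2)                      # g = 4F, with F = KH - D^2 = ac - b^2
q = sp.sqrt(2*(a*t**2 + 2*b*t + c))

# Euler-Lagrange equation of L = (1/2)(qdot^2 - g/q^2): qddot = g/q^3
assert sp.simplify(sp.diff(q, t, 2) - g/q**3) == 0
```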
These two integrals generate the Heisenberg algebra and, together with the generators $D_{f}=\frac{\chi p}{2}$ and $K_{f}=\frac{\chi^2}{2}$, lead to the Schr\"odinger symmetry \textcolor{red}{[\cite{Niedfree,Duval,Henkel,GAdS1,Aizawa}]}, \begin{eqnarray} & \label{so(2,1) free} \{D_{f},H_{f}\}=H_{f}\,,\qquad \{D_{f},K_{f}\}=-K_{f}\,,\qquad \{H_{f},K_{f}\}=-2D_{f}\,,&\\& \label{Schr free cap 1} \{\chi,p\}=1\,,\qquad \{H_{f},p\}=\{K_{f},\chi\}=0\,,\qquad \{H_{f},\chi\}=-p\,,\qquad \{K_{f},p\}=\chi\,,\qquad &\\& \{D_{f},\chi\}=-\frac{1}{2}\chi\,,\qquad \label{Schr free cap 1 2} \{D_{f},p\}=\frac{1}{2}p\,.& \end{eqnarray} The model (\ref{conformalaction}) has a problem at the quantum level, as we explain below. In the Schr\"odinger picture, the quantum versions of the generators (\ref{conformalgenerators}) are given by ($\hbar=1$) \begin{equation} \label{Qso(2,1)} H_\nu=\frac{1}{2}\left(-\frac{d^2}{dq^2}+\frac{\nu(\nu+1)}{q^2}\right)\,,\qquad D=\frac{1}{4i}\left(q\frac{d}{dq}+\frac{d}{dq}q\right)\,,\qquad K=\frac{q^2}{2}\,, \end{equation} where we have parameterized $g$ as $\nu(\nu+1)$. Obviously, the operators $ D $ and $ K $ are not integrals of motion; however, they can be promoted to dynamical integrals, in the sense of the Heisenberg equation $\frac{d O}{dt}=\frac{\partial O }{\partial t}-i[O,H_\nu ] $, by means of the unitary transformation \begin{equation} \label{recipe} O \rightarrow {}_{H}O=e^{-iH_\nu t } O e^{iH_\nu t}\,. \end{equation} This is the general recipe for moving from the Schr\"odinger picture to the Heisenberg picture, and in this framework the operators $ D $ and $ K $ are replaced by ${}_HD=D-H_\nu t$ and ${}_HK=K-2D t-H_\nu t^2$, respectively. The Hamiltonian $H_\nu$ is self-adjoint for $\nu\geq 0$ and admits self-adjoint extensions for $\nu\geq -1/2$ \textcolor{red}{[\cite{Landau,kirsten}]}. 
In these cases, $H_\nu$ has a continuous spectrum $E=\kappa^2/2$, with $\kappa \in \mathbb R$, in the domain $\{ \psi\in L^2((0,\infty),dq)\vert \psi(0^+)=0\}$ and the physical eigenstates are given by \begin{equation} \label{states calogero} \psi_\nu (q;\kappa)=\sqrt{q}J_{\nu+\frac{1}{2}}(\kappa q)\,, \end{equation} where $J_{\alpha}(\zeta)$ is the Bessel function of the first kind \begin{equation} J_\alpha(\zeta)= \sum_{n=0}^{\infty}\frac{(-1)^{n}} {n!\Gamma(n+\alpha+1)}\left(\frac{\zeta}{2}\right)^{2n+\alpha}\,. \end{equation} From here it is not difficult to show that the state $ e^{i \alpha D} \psi_\nu (q; \kappa) $ corresponds to the energy $ e ^ {\alpha} E $, which implies that the only scale-invariant solutions are those with zero energy eigenvalue, which in this case are given by the nonphysical solutions $ q^{\nu + 1} $ and $ q^{-\nu} $, the first of which is not bounded at infinity, while the second diverges at $ q = 0 $. This means that conformal symmetry is spontaneously broken at the quantum level. To find a conformally invariant model with a well-defined ground state, the proposal in \textcolor{red}{[\cite{AFF}]} is to consider the following change of variables at the classical level \begin{equation} \label{trans} y(t)=\frac{q(t)}{\sqrt{u+vt+wt^2}}\,, \qquad d\tau=\frac{dt}{u+vt+wt^2}\,, \end{equation} where $u>0$, $v$ and $w>0$ are real constants with dimensions $[u]=1$, $[v]=1/t$ and $[w]=1/t^2$, and $y>0$. This is in fact related to a change of coordinates in an AdS${}_2 $ space, where $t$ is not a good global coordinate, in contrast to $ \tau $ \textcolor{red}{[\cite{BlackHold2}]}. 
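The zero-energy statement can be spot-checked symbolically (our addition; a sympy sketch): $q^{\nu+1}$ and $q^{-\nu}$ are formal, generally nonphysical, solutions of $H_\nu\psi=0$.

```python
# sympy check: q^(nu+1) and q^(-nu) are formal zero-energy solutions of
# H_nu = (1/2)(-d^2/dq^2 + nu(nu+1)/q^2).
import sympy as sp

q, nu = sp.symbols('q nu', positive=True)
H = lambda psi: (-sp.diff(psi, q, 2) + nu*(nu + 1)/q**2*psi)/2

residuals = [sp.simplify(H(psi)) for psi in (q**(nu + 1), q**(-nu))]
print(residuals)
```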
Under the transformation (\ref{trans}), action (\ref{conformalaction}) takes the form \begin{eqnarray} \label{conformalaction2} &\int\mathcal{L}(y,y')d\tau + \frac{1}{4}\int d\tau\frac{d}{d\tau}[(v+2wt(\tau))q^2(t(\tau))]= I[y]+I_{surface}\,,\,\,& \end{eqnarray} where $\mathcal{L}(y,y')=\frac{1}{2}(y'^2-\frac{g}{y^2}-\omega^2y^2)\,,$ $y'=\frac{dy}{d\tau}$, and $\omega^2=(4wu-v^2)/4$. {}The action $I[y]=\int \mathcal{L}d\tau$ defines the so-called de Alfaro, Fubini and Furlan (AFF) model, from which we obtain the new time translation generator \begin{eqnarray} \label{mostgeneralH} \mathscr{H}_g=\frac{1}{2}\left(p^2+\frac{g}{y^2}+\omega^2 y^2\right)\,,\qquad p=y'\,. \end{eqnarray} The evolution parameter $\tau=\frac{1}{\omega}\arctan\left(\frac{v+2wt}{2\omega}\right)$ varies in the finite interval $(-\frac{\pi}{2\omega},\frac{\pi}{2\omega})$, and the new Hamiltonian (\ref{mostgeneralH}) is conjugate to this good global time coordinate. As $\omega$ is a dimensionful parameter, $[\omega]=[1/t]$, (\ref{mostgeneralH}) breaks the manifest scale invariance of the original system (\ref{conformalaction}), and via such a basic mechanism the mass and length scales are introduced in holographic QCD (often referred to as ``AdS/QCD'') \textcolor{red}{[\cite{App1,Brod2}]}. In spite of the introduced scale, the action of the new system is conformally invariant, as we will now see. 
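The relation between the two evolution parameters can be verified directly (our addition; a sympy sketch): $\tau=\frac{1}{\omega}\arctan\big(\frac{v+2wt}{2\omega}\big)$ with $\omega^2=(4wu-v^2)/4$ integrates $d\tau=dt/(u+vt+wt^2)$.

```python
# sympy check: d tau/dt = 1/(u + v t + w t^2) for
# tau = (1/omega) arctan((v + 2 w t)/(2 omega)), omega^2 = (4 w u - v^2)/4.
import sympy as sp

t, u, v, w = sp.symbols('t u v w', positive=True)
omega = sp.sqrt(4*w*u - v**2)/2       # assumes 4 w u > v^2 (omega real)
tau = sp.atan((v + 2*w*t)/(2*omega))/omega

residual = sp.simplify(1/sp.diff(tau, t) - (u + v*t + w*t**2))
print(residual)
```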
The dilatation generator $\mathscr{D}$ and the conformal transformation generator $\mathscr{K}$ associated with the action $I[y]$ are given by the explicitly time-dependent integrals \begin{eqnarray} \label{NHgenD} &\mathscr{D}=\frac{1}{2}\left(yp\cos(2\omega \tau)+\left(\omega y^2- \mathscr{H}_g{\omega}^{-1}\right)\sin(2\omega \tau)\right)\,,&\\ &\mathscr{K}= \cos(2\omega \tau)\frac{y^2}{2}+\frac{\mathscr{H}_g}{\omega^2}\sin^2(\omega\tau) -\frac{\sin(2\omega\tau)}{2\omega}yp\,,\label{NHgenH}& \end{eqnarray} which generate the Newton-Hooke symmetry, \textcolor{red}{[\cite{NH1,NH2,NH3,NH4}]}, \begin{equation}\label{NHalg} \{\mathscr{H}_g,\mathscr{D}\}=-(\mathscr{H}_g-2\omega^2 \mathscr{K})\,,\qquad \{\mathscr{H}_g,\mathscr{K}\}=-2\mathscr{D}\,,\qquad \{\mathscr{D},\mathscr{K}\}=-\mathscr{K}\,, \end{equation} whose Casimir invariant is $\mathscr{F}=\mathscr{K}\mathscr{H}_g-\mathscr{D}^2-\omega^2\mathscr{K}^2=g/4$. Using Eqs. (\ref{NHgenD}) and (\ref{NHgenH}), one can find the solution to the equation of motion of the system (\ref{conformalaction2}), \begin{equation}\label{ytau} y^2(\tau)=\frac{2}{\omega^2}(a\sin^2(\omega\tau)+\omega b \sin(2\omega\tau)+\omega^2c\cos(2\omega \tau))\,, \end{equation} where $a>0$, $b$ and $c>0$ are constants corresponding to the values of the integrals $\mathscr{H}_g$, $\mathscr{D}$ and $\mathscr{K}$, respectively, and obeying the relation $ac-b^2-\omega^2c^2=g/4$. {}From the explicit form of the solution we see that it is periodic with period $T=\pi/\omega$, independent of the value of the coupling constant\footnote{The system given by the Hamiltonian (\ref{mostgeneralH}) is an isoperiodic deformation of the half-harmonic oscillator of frequency $\omega$ \textcolor{red}{[\cite{Aso}]}.} $g$. 
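The quoted solution can be checked symbolically (our addition; a sympy sketch): writing $Y=y^2$, the equation of motion $y''=g/y^3-\omega^2y$ is equivalent to $\tfrac{1}{2}YY''-\tfrac{1}{4}Y'^2+\omega^2Y^2=g$, with $g=4(ac-b^2-\omega^2c^2)$.

```python
# sympy check that y^2(tau) from the text solves y'' = g/y^3 - omega^2 y,
# in the equivalent form Y Y''/2 - (Y')^2/4 + omega^2 Y^2 = g for Y = y^2.
import sympy as sp

tau, a, b, c, w = sp.symbols('tau a b c omega', positive=True)
Y = (2/w**2)*(a*sp.sin(w*tau)**2 + w*b*sp.sin(2*w*tau)
              + w**2*c*sp.cos(2*w*tau))
g = 4*(a*c - b**2 - w**2*c**2)

residual = sp.simplify(sp.expand_trig(
    Y*sp.diff(Y, tau, 2)/2 - sp.diff(Y, tau)**2/4 + w**2*Y**2 - g))
print(residual)
```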
The finite interval in which the evolution parameter $\tau$ varies corresponds to the period of the motion of the system (\ref{conformalaction2}), and one can consider $\tau$ as the compact evolution parameter that takes values on the closed interval $[-\frac{\pi}{2\omega},\frac{\pi}{2\omega}]$ with identified ends. As in the previous case, if one sets $ g = 0 $, $ \mathscr{H}_g $ is formally reduced to the Hamiltonian of the harmonic oscillator, however the object is defined in $ \mathbb R ^ + $ (we will call it the \emph{half-harmonic oscillator}). If in this case we extend the domain to the entire real line, i.e., we replace $y\rightarrow \tilde{y}\in \mathbb R$, the resulting system $\mathscr{H}_{\text{os}}$ has the additional dynamical integrals \begin{equation} \label{chigamapegama} \chi_\omega=\tilde{y}\cos(\omega \tau)-\frac{p}{\omega}\sin(\omega \tau)\,, \qquad P_\omega=\omega \tilde{y}\sin(\omega \tau)+p\cos(\omega \tau)\,. \end{equation} They are identified as the initial conditions of the oscillatory motion, and in terms of them the Hamiltonian reads $\mathscr{H}_{\text{os}}=\frac{1}{2}(P_\omega^2+\omega^2\chi_\omega^2)$. 
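That $\chi_\omega$ and $P_\omega$ are indeed dynamical integrals can be verified directly (our addition; a sympy sketch): $\frac{d}{d\tau}X=\frac{\partial X}{\partial\tau}+\{X,\mathscr{H}_{\text{os}}\}=0$ with the canonical Poisson bracket in $(\tilde y,p)$.

```python
# sympy check: chi_omega and P_omega satisfy dX/dtau + {X, H_os} = 0,
# i.e. they are (explicitly time-dependent) integrals of motion.
import sympy as sp

y, p, tau, w = sp.symbols('y p tau omega', positive=True)
H = (p**2 + w**2*y**2)/2
chi = y*sp.cos(w*tau) - (p/w)*sp.sin(w*tau)
P = w*y*sp.sin(w*tau) + p*sp.cos(w*tau)

pb = lambda A, B: sp.diff(A, y)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, y)
residuals = [sp.simplify(sp.diff(X, tau) + pb(X, H)) for X in (chi, P)]
print(residuals)
```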
The generators (\ref{chigamapegama}), together with the generators $\mathscr{H}_{\text{os}}$, $\mathscr{D}_{\text{os}}=\frac{\chi_\omega P_\omega}{2}$ and $\mathscr{K}_{\text{os}}=\frac{1}{2}\chi_{\omega}^2$, produce the following Poisson bracket relations \begin{eqnarray} \label{Schr free os cap 1} & \{\mathscr{H}_{\text{os}},\mathscr{D}_{\text{os}}\}= -(\mathscr{H}_{\text{os}}-2\omega^2 \mathscr{K}_{\text{os}})\,,\quad \{\mathscr{H}_{\text{os}},\mathscr{K}_{\text{os}}\}=-2\mathscr{D}_{\text{os}}\,,\quad \{\mathscr{D}_{\text{os}},\mathscr{K}_{\text{os}}\}=-\mathscr{K}_{\text{os}}\,, &\\& \{\chi_\omega ,P_\omega\}=1\,,\qquad \{\mathscr{H}_{\text{os}}, P_\omega\}=\omega^2\chi_\omega\,,\qquad \{\mathscr{H}_{\text{os}}, \chi_\omega\}=P_\omega\,,\qquad \{\mathscr{K}_{\text{os}},\chi_\omega\}=0\,,&\\& \{\mathscr{K}_{\text{os}},P_\omega\}=\chi_\omega\,, \qquad \{\mathscr{D}_{\text{os}},\chi_\omega\}=-\frac{1}{2}\chi_\omega\,,\qquad \{\mathscr{D}_{\text{os}},P_\omega\}=\frac{1}{2}P_\omega\,.& \end{eqnarray} Note that if instead of $\mathscr{H}_{\text{os}}$ we consider $\hat{\mathscr{H}}=\mathscr{H}_{\text{os}}-\omega^2\mathscr{K}_{\text{os}}=\frac{1}{2}P_{\omega}^2$, one gets the algebraic relations (\ref{so(2,1) free}), (\ref{Schr free cap 1}) and (\ref{Schr free cap 1 2}), which means that the generators $\{\mathscr{H}_{\text{os}}\,, \mathscr{D}_{\text{os}}\,,\mathscr{K}_{\text{os}}\,,\chi_\omega\,,P_\omega \}$ are just another basis for the Schr\"odinger symmetry. In fact, by taking the limit $\omega\rightarrow 0$ we recover the free particle generators. According to \textcolor{red}{[\cite{Dirac}]}, starting from a given symmetry algebra, one can freely designate a particular generator or a linear combination of generators as Hamiltonian, leading to different forms of dynamics. This terminology was introduced in the context of special relativity; however, the two models discussed above provide good examples in nonrelativistic mechanics. 
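Several of these brackets can be verified symbolically (our addition; a sympy sketch) by computing the Poisson brackets of the composite generators in the canonical $(\tilde y,p)$ variables, at arbitrary $\tau$:

```python
# sympy check of a few of the quoted Poisson-bracket relations for the
# oscillator basis H_os, D_os = chi P/2, K_os = chi^2/2.
import sympy as sp

y, p, tau, w = sp.symbols('y p tau omega', positive=True)
chi = y*sp.cos(w*tau) - (p/w)*sp.sin(w*tau)
P = w*y*sp.sin(w*tau) + p*sp.cos(w*tau)
H, D, K = (P**2 + w**2*chi**2)/2, chi*P/2, chi**2/2

pb = lambda A, B: sp.diff(A, y)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, y)
results = [sp.simplify(e) for e in (pb(chi, P) - 1,
                                    pb(H, D) + (H - 2*w**2*K),
                                    pb(H, K) + 2*D,
                                    pb(D, K) + K)]
print(results)
```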
At the quantum level, the AFF Hamiltonian takes the form \begin{eqnarray} \label{Lg} \mathscr{H}_{\nu}=\frac{1}{2}\left(-\frac{d^2}{dy^2}+\omega^2y^2+\frac{g(\nu)}{y^2}\right)\,,\qquad g(\nu)=\nu(\nu+1)\,, \end{eqnarray} which, like $H_\nu$ in (\ref{Qso(2,1)}), is well defined in the domain $\{ \psi\in L^2((0,\infty),dy)\vert \psi(0^+)=0\}$ for $\nu\geq-1/2$, with a spectrum bounded from below \textcolor{red}{[\cite{Falomir1,Falomir2}]}. The normalized eigenstates of the system and their respective energies are given by \begin{equation} \label{AFF states} \psi_{\nu,n}(y)=\sqrt{\frac{2n!\omega^{\nu+\frac{3}{2}}}{\Gamma(n+\nu+\frac{3}{2})}}\,\,y^{\nu+1}L_{n}^{(\nu+\frac{1}{2})}(\omega y^2)e^{-\frac{\omega y^2}{2}}\,,\qquad E_{\nu,n}=\omega(2n+\nu+\frac{3}{2})\,, \end{equation} where \begin{equation} \label{Laguerre} L_n^{(\alpha)}(\eta)= \sum_{j=0}^{n}\frac{\Gamma(n+\alpha+1)}{\Gamma(j+\alpha+1)}\frac{(-\eta)^{j}}{j!(n-j)!}\,, \end{equation} are the generalized Laguerre polynomials. Note that $g$ in (\ref{Lg}) vanishes for $\nu=0$ and for $\nu=-1$ (where we have some problems with boundary conditions), and in both cases $\mathscr{H}_{\nu}$ looks like a harmonic oscillator Hamiltonian. Indeed, the well known relations \begin{equation} \label{hermiteLaguerre} H_{2n+1}(\eta)= (-1)^{n}2^{2n+1}n!\,\eta\, L_{n}^{(1/2)}(\eta^2)\,,\qquad H_{2n}(\eta) = (-1)^{n}2^{2n}n!\, L_{n}^{(-1/2)}(\eta^2)\,, \end{equation} where the functions $H_{n}(\eta)$ are the Hermite polynomials, show us that in the first case the eigenfunctions (\ref{AFF states}) become the odd eigenstates of the harmonic oscillator (vanishing at the origin), while in the second case they take the form of the even eigenstates of that system (which do not vanish at $x=0$, thereby violating the imposed boundary conditions). Instead of performing a direct quantization of the generators $\mathscr{K}$ and $\mathscr{D}$, it is worth considering complex combinations of them. 
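The eigenvalue relation can be spot-checked symbolically (our addition; a sympy sketch, using unnormalized states since normalization does not affect the eigenvalue):

```python
# sympy check: y^(nu+1) L_n^(nu+1/2)(omega y^2) exp(-omega y^2/2) is an
# eigenstate of the AFF Hamiltonian with E = omega(2n + nu + 3/2), n = 0, 1, 2.
import sympy as sp

y, w, nu = sp.symbols('y omega nu', positive=True)
H = lambda u: (-sp.diff(u, y, 2) + w**2*y**2*u + nu*(nu + 1)/y**2*u)/2

residuals = []
for n in range(3):
    psi = y**(nu + 1)*sp.assoc_laguerre(n, nu + sp.Rational(1, 2), w*y**2) \
        * sp.exp(-w*y**2/2)
    E = w*(2*n + nu + sp.Rational(3, 2))
    residuals.append(sp.simplify(H(psi) - E*psi))
print(residuals)
```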
In particular, in the Schr\"odinger picture we construct \begin{eqnarray} \label{Cpm} &\mathcal{C}_\nu^\pm= \mathscr{H}_\nu-2\omega^2\mathscr{K}\pm 2i\omega \mathscr{D}=\left(\mathscr{H}_\nu-\omega^2y^2\pm \omega \left(y\frac{d}{dy}+\frac{1}{2}\right) \right)\,, & \end{eqnarray} which, together with $\mathscr{H}_\nu$, produce the commutation relations \begin{equation} \label{sl2RAFF} [\mathscr{H}_{\nu},\mathcal{C}_\nu^\pm]=\pm 2\omega\mathcal{C}_\nu^\pm\,,\qquad [\mathcal{C}_\nu^-,\mathcal{C}_\nu^+]=4\omega\mathscr{H}_\nu\,, \end{equation} and by using the identification $\mathscr{H}_\nu=2\omega \mathcal{J}_0$ and $\mathcal{C}_\nu^\pm=2\omega \mathcal{J}_\pm$, we recognize the $\mathfrak{sl}(2,\mathbb R)$ algebra (\ref{sl(2R)preambulo}). On the Hilbert space of the AFF system, the states (\ref{AFF states}) correspond to an infinite-dimensional unitary irreducible representation of the $\mathfrak{sl}(2,\mathbb R)$ algebra of the discrete type series $\mathcal{D}^+_\alpha$ with $\alpha=\frac{1}{2}\nu+\frac{3}{4}$, and the Casimir operator takes the value $\mathscr{F}_\nu=\mathcal{J}^\mu\mathcal{J}_\mu=-\alpha(\alpha-1)= \frac{3}{16}-\frac{1}{4}\nu(\nu+1)$, \textcolor{red}{[\cite{Mikisl2R}]}. As the operators $\mathcal{C}_\nu^\pm $ are not integrals of motion, when we go to the Heisenberg picture it is necessary to replace the operators $\mathcal{C}_\nu^\pm$ by the dynamical integrals ${}_H\mathcal{C}_\nu^\pm=e^{\mp i 2\omega t} \mathcal{C}_\nu^\pm$. Relations (\ref{sl2RAFF}) clearly show us that $\mathcal{C}_\nu^\pm$ are ladder operators which change the energy by $\pm 2\omega$. 
Their action can be computed by means of the corresponding recurrence relations of the Laguerre polynomials, \begin{eqnarray} & \nonumber y\frac{d}{dy}L^\alpha_n(y)-yL^\alpha_n(y)+\alpha L_n^\alpha(y)= (n+1)L_{n+1}^{\alpha-1}(y)\,,\qquad \frac{d}{dy}L_n^\alpha(y)-L_n^\alpha(y)=-L_n^{\alpha+1}(y)\,, \nonumber &\\ &\frac{d}{dy}L^{\alpha}_{n}(y)=-L_{n-1}^{\alpha+1}(y)\,,\qquad y\frac{d}{dy}L_n^\alpha(y)+\alpha L_n^\alpha(y)= (n+\alpha)L_n^{\alpha-1}(y)\,.\label{recurrence-relations}& \end{eqnarray} Using this we get \begin{eqnarray} \label{Conpsi} & \mathcal{C}_\nu^-\psi_{\nu,n}=2\omega\sqrt{n(n+\nu+\frac{1}{2})}\,\psi_{\nu,n-1}\,,&\\& \mathcal{C}_\nu^+\psi_{\nu,n}=2\omega\sqrt{(n+1)(n+\nu+\frac{3}{2})}\,\psi_{\nu,n+1}\,,& \end{eqnarray} from where we see that the lowering operator $\mathcal{C}_\nu^-$ annihilates the ground state of the system. In the next section we will show that these operators have their own origin in supersymmetric quantum mechanics. \section{The $\mathfrak{osp}(2|2)$ superconformal symmetry} \label{SecOSP22Conformal} The aim of this section is to construct an $\mathcal{N}=2$ super-extension of the AFF model (\ref{Lg}). To this end, we apply the method introduced in Chap. \ref{ChSUSY}. For the construction let us use the ground state $\psi_{\nu,0}\propto y^{\nu+1}e^{-\omega y^2/2}$ as a seed state for the first order Darboux transformation. The associated intertwining operators are \begin{equation} \label{Anu} A_{\nu}^-=\frac{1}{\sqrt{2}}\left(\frac{d}{dy}+\omega y-\frac{\nu+1}{y}\right)\,,\qquad A_{\nu}^+=(A_{\nu}^-)^\dagger\,, \end{equation} which produce \begin{equation} \label{producA} A_\nu^+A_{\nu}^-=\mathscr{H}_{\nu}-\omega (\nu+\frac{3}{2}):=H_-\,,\qquad A_\nu^-A_{\nu}^+=\mathscr{H}_{\nu+1}-\omega(\nu+\frac{1}{2}):=H_+\,,\end{equation} and the intertwining relations take the form (\ref{Intertwiningrelation0}). 
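The operator products (\ref{producA}) can be verified symbolically (our addition; a sympy sketch) by acting on a generic function $f(y)$:

```python
# sympy check of A+ A- = H_nu - omega(nu + 3/2) and
# A- A+ = H_{nu+1} - omega(nu + 1/2), acting on a generic f(y).
import sympy as sp

y, w, nu = sp.symbols('y omega nu', positive=True)
f = sp.Function('f')(y)

Am = lambda u: (sp.diff(u, y) + w*y*u - (nu + 1)/y*u)/sp.sqrt(2)
Ap = lambda u: (-sp.diff(u, y) + w*y*u - (nu + 1)/y*u)/sp.sqrt(2)
H = lambda u, mu: (-sp.diff(u, y, 2) + w**2*y**2*u + mu*(mu + 1)/y**2*u)/2

r1 = sp.simplify(Ap(Am(f)) - (H(f, nu) - w*(nu + sp.Rational(3, 2))*f))
r2 = sp.simplify(Am(Ap(f)) - (H(f, nu + 1) - w*(nu + sp.Rational(1, 2))*f))
print(r1, r2)
```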
Using the recurrence relations (\ref{recurrence-relations}) satisfied by the Laguerre polynomials, one gets the explicit action of $A_\nu^\pm$ on the eigenstates (\ref{AFF states}), \begin{equation} \label{Aonpsi} A_\nu^-\psi_{\nu,n}=-\sqrt{2n\omega}\,\psi_{\nu+1,n-1} \,,\qquad A_{\nu}^+\psi_{\nu+1,n-1}=-\sqrt{2n\omega }\,\psi_{\nu,n}\,. \end{equation} With the help of (\ref{Anu}) we can construct the matrix generators \begin{eqnarray} \label{Poincare1}& \mathcal{H}_\nu^{e}=\left(\begin{array}{cc} \mathscr{H}_{\nu+1}-\omega (\nu+1/2) & 0 \\ 0 & \mathscr{H}_{\nu}-\omega(\nu+3/2) \end{array}\right)\,, &\\& \mathcal{Q}_\nu^{1}=\left(\begin{array}{cc} 0 & A_\nu^-\\ A_\nu^+ & 0 \end{array}\right)\,,\quad \mathcal{Q}_\nu^{2}=i\Gamma \mathcal{Q}_\nu^{1},& \end{eqnarray} where $\Gamma=\sigma_3$ is the $\mathbb Z_2$ grading operator. These generators produce the Poincar\'e supersymmetry (\ref{Poincare0}). The operator $ \mathcal{H}_\nu^e \, $ has the spectrum $ 2 \omega n $, $n=0,1,\ldots,$ and the unique ground state $ (0 ,\, \, \psi_{\nu, 0})^{t} $ is annihilated by all generators in (\ref{Poincare1}), therefore supersymmetry is in the exact phase. On the other hand, the system (\ref{Lg}) possesses the nonphysical solutions $\psi_{\nu,n}^-=\psi_{\nu,n}(iy)$ with eigenvalues $-E_{\nu,n}$\footnote{ The stationary Schr\"odinger equation $\mathscr{H}_\nu \psi_{\nu,n} = E \psi_{\nu,n}$ has a discrete symmetry group, and the transformation defined as $ y \rightarrow iy$ and $ E_{\nu, n} \rightarrow -E _ {\nu,n} $ is an element of this group, see Chap. \ref{ChKlein}. The nonphysical eigenstates produced by the action of the mentioned group can be used in the Darboux transformations, resulting in new solvable systems.}. Then, instead of the ground state we could select the function $ \psi_{\nu, 0}^- \propto y^{\nu + 1}e^{\omega y^2/2} $ as a seed state. 
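The action (\ref{Aonpsi}), including the normalization factors, can be spot-checked numerically (our addition; a sympy sketch at the sample values $\nu=1/2$, $\omega=1$):

```python
# Numerical spot-check of A- psi_{nu,n} = -sqrt(2 n omega) psi_{nu+1,n-1}
# for the normalized eigenstates, at nu = 1/2, omega = 1.
import sympy as sp

y = sp.symbols('y', positive=True)
w, nu = sp.Integer(1), sp.Rational(1, 2)

def psi(m, n):      # normalized AFF eigenstates, m in place of nu
    N = sp.sqrt(2*sp.factorial(n)*w**(m + sp.Rational(3, 2))
                / sp.gamma(n + m + sp.Rational(3, 2)))
    return N*y**(m + 1)*sp.assoc_laguerre(n, m + sp.Rational(1, 2), w*y**2) \
        * sp.exp(-w*y**2/2)

Am = lambda u: (sp.diff(u, y) + w*y*u - (nu + 1)/y*u)/sp.sqrt(2)

errs = []
for n in (1, 2):
    expr = Am(psi(nu, n)) + sp.sqrt(2*n*w)*psi(nu + 1, n - 1)
    errs += [abs(float(expr.subs(y, yv))) for yv in (sp.Rational(1, 3), 1, 2)]
print(max(errs))
```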
The resulting intertwining operators are \begin{equation} \label{Bnu} B_\nu^{-}= \frac{1}{\sqrt{2}}\left(\frac{d}{dy}-\omega y-\frac{\nu+1}{y}\right)\,,\qquad B_\nu^{+}= (B_\nu^-)^\dagger \,. \end{equation} Their products give us \begin{equation} \label{ProductB} B_\nu^+B_\nu^-=\mathscr{H}_\nu+\omega (\nu+\frac{3}{2})=H_-+\omega(2\nu+3)\,,\qquad B_\nu^-B_\nu^+=\mathscr{H}_{\nu+1}+\omega(\nu+\frac{1}{2})=H_++\omega(2\nu+1)\,, \end{equation} and in terms of $H_\pm$ the intertwining relations take the form \begin{equation} B_\nu^-H_-=(H_+-2\omega)B_\nu^-\,,\qquad B_\nu^+H_+=(H_-+2\omega)B_\nu^+\,. \end{equation} Consistently with this, the action of the operators $B_\nu^\pm$ on the eigenstates is \begin{equation} \label{Bonpsi} B_{\nu}^-\psi_{\nu,n}=-\sqrt{(2n+2\nu+3)\omega}\, \psi_{\nu+1,n} \,,\qquad B_{\nu}^+\psi_{\nu+1,n}=-\sqrt{(2n+2\nu+3)\omega}\, \psi_{\nu,n}\,. \end{equation} Just like we did with $ A_\nu^\pm $, we can also use $ B_\nu^\pm $ to build other matrix operators \begin{eqnarray} \label{Rnu} & \mathcal{H}_{\nu}^{b}=\left(\begin{array}{cc} \mathscr{H}_{\nu+1}+\omega(\nu+1/2) & 0 \\ 0 & \mathscr{H}_{\nu}+\omega (\nu+3/2) \end{array}\right) \,,&\\& \label{Snu} \mathcal{S}_\nu^{1}=\left(\begin{array}{cc} 0 & B_\nu^-\\ B_\nu^+ & 0 \end{array}\right)\,,\qquad \mathcal{S}_\nu^{2}=i\Gamma \mathcal{S}_\nu^{1}\,,& \end{eqnarray} which again will satisfy the $\mathcal{N}=2$ Poincar\'e supersymmetry, but now in the spontaneously broken phase\footnote{The seed state $\psi_{\nu,0}^-$ is nonphysical and $1/\psi_{\nu,0}^-$ does not satisfy the boundary condition at the origin. }; the spectrum of $\mathcal{H}_\nu^{b}$ is $\omega(2n+2\nu+3)$, $n=0,1,\ldots,$ and there is no physical eigenstate which is simultaneously annihilated by both odd operators $\mathcal{S}_\nu ^{a}$. 
On the other hand, one can reinterpret the object $\mathcal{H}_\nu^{b}$ as a linear combination of $\mathcal{H}_\nu^{e}$ and the nontrivial integral \begin{equation} \mathcal{R}_\nu= \frac{1}{2\omega}(\mathcal{H}_\nu^{e}- \mathcal{H}_\nu^{b}) = \frac{1}{2}\sigma_3 -(\nu+1)\,, \end{equation} which plays the role of the $R$-symmetry generator. Now, recall that the system (\ref{Lg}) has the two second-order ladder operators (\ref{Cpm}), which are constructed from $A_\nu^\pm$ and $B_\nu^\pm$ as follows \begin{eqnarray}& \label{FactorCnu+1} B_\nu^-A_\nu^+=\mathcal{C}_{\nu+1}^+\,,\qquad A_\nu^-B_\nu^+=\mathcal{C}_{\nu+1}^-\,, &\\& A_\nu^+B_\nu^-=\mathcal{C}_{\nu}^+\,,\qquad B_\nu^+A_\nu^-=\mathcal{C}_{\nu}^- \,. \label{FactorCnu}& \end{eqnarray} By using this structure together with Eqs. (\ref{Bonpsi}) and (\ref{Aonpsi}), it is easy to check the relations (\ref{Conpsi}). Also, by means of Eqs. (\ref{producA}) and (\ref{ProductB}), together with the intertwining relations for $ A_\nu^\pm $ and $B_\nu^\pm $, it is easy to derive the $ \mathfrak{sl}(2,\mathbb R) $ algebra (\ref{sl2RAFF}). Returning to the matrix operators, the relations (\ref{FactorCnu+1})-(\ref{FactorCnu}) show us that the anti-commutator between the generators $\mathcal{S}_\nu^{a}$ and $\mathcal{Q}_\nu^{a}$ produces the even operators \begin{equation} \label{Gnu} \mathcal{G}_\nu^\pm=\left(\begin{array}{cc} \mathcal{C}_{\nu+1}^\pm & 0 \\ 0 & \mathcal{C}_{\nu}^\pm \end{array}\right)\,, \end{equation} which are the corresponding super-extensions of the ladder operators of the systems $\mathcal{H}_\nu^{e}$ and $\mathcal{H}_\nu^{b}$. 
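The factorizations (\ref{FactorCnu}) can be verified symbolically (our addition; a sympy sketch) by composing the first-order operators on a generic function $f(y)$ and comparing with $\mathcal{C}^\pm_\nu=\mathscr{H}_\nu-2\omega^2\mathscr{K}\pm 2i\omega\mathscr{D}$, $\mathscr{K}=y^2/2$, $\mathscr{D}=\frac{1}{4i}(y\frac{d}{dy}+\frac{d}{dy}y)$:

```python
# sympy check of the factorizations A+ B- = C+ and B+ A- = C-,
# acting on a generic function f(y).
import sympy as sp

y, w, nu = sp.symbols('y omega nu', positive=True)
f = sp.Function('f')(y)

Am = lambda u: (sp.diff(u, y) + w*y*u - (nu + 1)/y*u)/sp.sqrt(2)
Ap = lambda u: (-sp.diff(u, y) + w*y*u - (nu + 1)/y*u)/sp.sqrt(2)
Bm = lambda u: (sp.diff(u, y) - w*y*u - (nu + 1)/y*u)/sp.sqrt(2)
Bp = lambda u: (-sp.diff(u, y) - w*y*u - (nu + 1)/y*u)/sp.sqrt(2)

H = lambda u: (-sp.diff(u, y, 2) + w**2*y**2*u + nu*(nu + 1)/y**2*u)/2
D = lambda u: (2*y*sp.diff(u, y) + u)/(4*sp.I)   # (y d/dy + d/dy y)/(4i)
Cp = lambda u: H(u) - w**2*y**2*u + 2*sp.I*w*D(u)
Cm = lambda u: H(u) - w**2*y**2*u - 2*sp.I*w*D(u)

r_plus = sp.simplify(Ap(Bm(f)) - Cp(f))
r_minus = sp.simplify(Bp(Am(f)) - Cm(f))
print(r_plus, r_minus)
```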
Then, all together the generators $\{\mathcal{H}_\nu^{e}\,,\mathcal{G}_\nu^\pm\,,\mathcal{R}_\nu\,,\mathcal{Q}_\nu^{a}\,,\mathcal{S}_\nu^{a}\} $ satisfy the superalgebraic relations \begin{eqnarray}\label{HRQ0} &[\mathcal{H}_\nu^{e},\mathcal{R}_\nu]=[\mathcal{H}_\nu^{e},\mathcal{Q}_\nu^a]=0\,,&\\ \label{evencommutation} &[\mathcal{H}_\nu^{e},\mathcal{G}_\nu^{\pm}]=\pm2\omega \mathcal{G}_\nu^{\pm}\,, \qquad [\mathcal{G}_\nu^{-},\mathcal{G}_\nu^{+}]=4\omega\left(\mathcal{H}^{e}_\nu-\omega\mathcal{R}_\nu\right)\,,&\\ \label{evenodd} &[\mathcal{H}_\nu^{e},\mathcal{S}_\nu^a]=-2 i \omega \epsilon^{ab}\mathcal{S}_\nu^b\,,\qquad [\mathcal{R}_\nu,\mathcal{Q}_\nu^a]=-i\epsilon^{ab}\mathcal{Q}_\nu^b\,, \qquad [\mathcal{R}_\nu,\mathcal{S}_\nu^a]=-i\epsilon^{ab}\mathcal{S}^b_\nu\,,&\\ \label{fq1} &[\mathcal{G}_\nu^-,\mathcal{Q}_\nu^a]=\omega (\mathcal{S}_\nu^a+i\epsilon^{ab}\mathcal{S}_\nu^b), \qquad [\mathcal{G}_\nu^+,\mathcal{Q}_\nu^a]=-\omega (\mathcal{S}_\nu^a-i\epsilon^{ab}\mathcal{S}_\nu^b)\,,&\\ \label{fq3} &[\mathcal{G}_\nu^-,\mathcal{S}_\nu^a]=\omega (\mathcal{Q}_\nu^a-i\epsilon^{ab}\mathcal{Q}_\nu^b)\,, \qquad [\mathcal{G}_\nu^+,\mathcal{S}_\nu^a]=-\omega (\mathcal{Q}_\nu^a+i\epsilon^{ab}\mathcal{Q}_\nu^b)\,,&\\ \label{anti1} &\{ \mathcal{Q}_\nu^a,\mathcal{Q}_\nu^b\}=2\delta^{ab}\mathcal{H}_\nu^{e}\,, \qquad \{ \mathcal{S}_\nu^a,\mathcal{S}_\nu^b\}=2\delta^{ab}(\mathcal{H}_\nu^{e} -2\omega \mathcal{R}_\nu)\,,&\\ \label{anti2} &\{\mathcal{Q}^a_\nu,\mathcal{S}^b_\nu\}=\delta^{ab}(\mathcal{G}_\nu^{+}+\mathcal{G}_\nu^-)+ i\epsilon^{ab}(\mathcal{G}_\nu^+-\mathcal{G}_\nu^-)\,.\label{QSGG} \end{eqnarray} From here we realize that the operators $\mathcal{G}_\nu^\pm$ and $\mathcal{S}_\nu^a$ are not integrals of motion, and in the Heisenberg picture we have instead the dynamical integrals $ {}_{H}\mathcal{G}_\nu^\pm= e^{\mp 2i\omega t}\mathcal{G}_\nu^\pm$ and $ {}_{H}\mathcal{S}_\nu^a =e^{-i\sigma_3 \omega t}\mathcal{S}_\nu^a$. 
The superalgebra (\ref{HRQ0})-(\ref{QSGG}) is identified with the $\mathfrak{osp}(2|2)$ superconformal symmetry \textcolor{red}{[\cite{InzPly1,InzPly2,InzPly3}]}, and has the automorphism $f=f^{-1}$ given by the transformations $\mathcal{H}_{\nu}^{e}\rightarrow \mathcal{H}_{\nu}^{e}-2\omega\mathcal{R}_{\nu}=\mathcal{H}_{\nu}^b$, $\mathcal{R}_{\nu}\rightarrow -\mathcal{R}_\nu$, $\mathcal{G}_{\nu}^\pm\rightarrow \mathcal{G}_{\nu}^{\pm}$, $\mathcal{Q}_\nu^1\rightarrow -\mathcal{S}_{\nu}^{1}$, $\mathcal{Q}_\nu^2\rightarrow \mathcal{S}_{\nu}^{2}$, $\mathcal{S}_\nu^1\rightarrow -\mathcal{Q}_{\nu}^{1}$, $\mathcal{S}_\nu^2\rightarrow \mathcal{Q}_{\nu}^{2}$. The transformation $f$ shows us what would happen with the superalgebra if we had chosen $\mathcal{H}_\nu^{b} $ instead of $ \mathcal{H}_\nu^{e}$ as our time translation generator. For future applications, we present the superalgebraic structure in terms of the nilpotent fermionic operators \begin{eqnarray} \mathcal{Q}_\nu =\left(\begin{array}{cc} 0 & A_\nu^- \\ 0 & 0 \end{array}\right)\,,\qquad \mathcal{W}_\nu =\left(\begin{array}{cc} 0 & 0\\ B_\nu^+ & 0 \end{array}\right)\,, \end{eqnarray} and their Hermitian conjugates, as follows, \begin{eqnarray} \label{Ospnil1} &[\mathcal{H}_\nu^{e},\mathcal{G}_\nu^{\pm}]=\pm 2\omega\mathcal{G}_\nu^{\pm}\,, \qquad [\mathcal{G}_\nu^{-},\mathcal{G}_\nu^{+}]=4\omega(\mathcal{H}^{e}_\nu-\omega\mathcal{R}_\nu)\,,&\\ &[\mathcal{H}_\nu^{e},\mathcal{W}_\nu]=- 2\omega\mathcal{W}_\nu\,,\qquad [\mathcal{R}_\nu,\mathcal{Q}_\nu]=\mathcal{Q}_\nu\,,\qquad [\mathcal{R}_\nu,\mathcal{W}_\nu]= -\mathcal{W}_\nu\,,\qquad &\\& \{\mathcal{Q}_\nu,\mathcal{Q}_\nu^\dagger\}=\mathcal{H}_\nu^{e}\,,\qquad \{\mathcal{W}_\nu,\mathcal{W}_\nu^\dagger\}=\mathcal{H}_\nu^{e}-2\omega \mathcal{R}_\nu\,, &\\& \{\mathcal{Q}_\nu,\mathcal{W}_\nu\}=\mathcal{G}_\nu^- \,,\qquad [\mathcal{G}_\nu^-,\mathcal{Q}_\nu^\dagger]= 2 \omega \mathcal{W}_\nu \,,\qquad [\mathcal{G}_\nu^-,\mathcal{W}_\nu^\dagger]= 
2\omega \mathcal{Q}_\nu \,, \label{Ospnilf} \end{eqnarray} together with the corresponding Hermitian-conjugate relations. In this basis, the automorphism takes the form $\mathcal{H}_\nu^{e}\rightarrow\mathcal{H}_\nu^{b}\,,$ $\mathcal{G}_\nu^{\pm}\rightarrow\mathcal{G}_\nu^{\pm}\,,$ $\mathcal{R}_\nu\rightarrow -\mathcal{R}_\nu\,,$ $\mathcal{Q}_\nu\leftrightarrow \mathcal{W}_\nu\,.$ As in the bosonic case, one can use this structure as an approach to the study of the super-harmonic oscillator system, whose corresponding $\mathfrak{osp}(2|2)$ generators are \begin{equation} \label{ospos} \{\mathcal{H}_{os}\,,\mathcal{G}^\pm\,,\mathcal{R}\,,\mathcal{Q}^{a}\,,\mathcal{S}^{a}\} =\{\mathcal{H}_\nu^{e}\,,\mathcal{G}_\nu^\pm\,,\mathcal{R}_\nu\,,\mathcal{Q}_\nu^{a}\,,\mathcal{S}_\nu^{a}\} |_{\nu=-1, y\rightarrow \tilde{y}}\,, \end{equation} where $\tilde{y}\in \mathbb R$. The super-Hamiltonian $\mathcal{H}_{os}=\text{diag}(\mathscr{H}_{os}+\omega,\mathscr{H}_{os}-\omega)$ is a composition of two copies of the harmonic oscillator Hamiltonian, shifted relative to each other. On the other hand, from the perspective of the Darboux transformation, the seed states used to construct the fermionic operators $\mathcal{Q}^{a}$ and $\mathcal{S}^{a}$ are $\psi_{0}(\tilde{y}) \propto e^{-\omega\tilde{y}^2/2}$ and $\psi_{0}(i\tilde{y})\propto e^{\omega\tilde{y}^2/2}$ respectively, see \textcolor{red}{[\cite{InzPly1}]}, and as a consequence, both resulting systems $\mathcal {H}_{os} $ and $ \mathcal{H}_{os} -2\omega \mathcal{R}_0 $ have the exact Poincar\'e supersymmetry, in contrast to the AFF case, since $ \psi_{0} (i \tilde{y})^{- 1} \propto \psi_{0} (\tilde{y}) $. Finally, the intertwining operators reduce to the usual harmonic oscillator ladder operators, \begin{equation} A^\pm |_{\nu=-1, y\rightarrow \tilde{y}}=a^\pm\,,\qquad B^\pm |_{\nu=-1, y\rightarrow \tilde{y}}=-a^\mp\,,\qquad a^\pm =\frac{1}{\sqrt{2}}\left(\omega \tilde{y}\mp \frac{d}{d\tilde{y}} \right)\,. 
\end{equation} A radical difference from the super-extended AFF model is that for the super-harmonic oscillator system we can also build the additional operators \begin{equation} \label{Fysigma} \mathcal{F}^\pm=\left(\begin{array}{cc} a^\pm & 0\\ 0 & a^\pm \end{array}\right)\,,\qquad \Sigma_{1}=\frac{1}{2}\sigma_{1}\,,\qquad \Sigma_{2}=-\frac{1}{2}\sigma_{2}\,, \end{equation} that supplement the $\mathfrak{osp}(2|2)$ superalgebra with the (anti)-commutation relations \begin{eqnarray} \label{qho3} &[\mathcal{H}_{os},\mathcal{F}^{\pm}]=\pm \omega \mathcal{F}^{\pm}\,,\qquad [\mathcal{F}^\mp,\mathcal{G}^{\pm}]=\mp \omega \mathcal{F}^{\pm}\,,\qquad [\mathcal{F}^-,\mathcal{F}^{+}]=\omega \mathbb{I}\,,&\\ &\{\Sigma_{a},\Sigma_b\}=\frac{1}{2}\delta_{ab}\mathbb{I}\,,\qquad [\mathcal{H}_{os},\Sigma_a]=-i\omega \epsilon_{ab}\Sigma_{b} \,,\qquad [\mathcal{R},\Sigma_a]=i\epsilon_{ab}\Sigma_{b}\,,&\\ &\{\Sigma_{a},\mathcal{Q}_b\}=\frac{1}{2}[\delta_{ab}(\mathcal{F}^{+}+\mathcal{F}^{-}) -i\epsilon_{ab}(\mathcal{F}^{+}-\mathcal{F}^{-})]\,,&\\ &\{\Sigma_{a},\mathcal{S}_b\}=\frac{1}{2}[ \delta_{ab}(\mathcal{F}^{+}+\mathcal{F}^{-})+ i\epsilon_{ab}(\mathcal{F}^{+}-\mathcal{F}^{-})]\,,&\\ &[\mathcal{F}^-,\mathcal{Q}_a]= \omega(\Sigma_a+i\epsilon_{ab}\Sigma_b)\, ,\qquad [\mathcal{F}^+,\mathcal{Q}_a]=- \omega(\Sigma_a-i\epsilon_{ab}\Sigma_b)\,,&\\ \label{qho4} &[\mathcal{F}^-,\mathcal{S}_a]=\omega(\Sigma_a-i\epsilon_{ab}\Sigma_b) \,,\qquad [\mathcal{F}^+,\mathcal{S}_a]=- \omega(\Sigma_a+i\epsilon_{ab}\Sigma_b)\,,&\\ \label{SigGJ0} &[\Sigma_{a},\mathcal{F}^\pm]=[\Sigma_{a},\mathcal{G}^\pm]=0\,.& \end{eqnarray} Again, the operators (\ref{Fysigma}) do not commute with $\mathcal{H}_{\text{os}}$, so in the Heisenberg picture we will have the dynamical integrals ${}_H\mathcal{F}^\pm= e^{\mp i \omega t}\mathcal{F}^\pm$ and ${}_H\Sigma_a= e^{-i \sigma_3 \omega t}\Sigma_a$. 
Note that the generators $\{\mathcal{F}^\pm, \mathbb{I},\Sigma_{a}\}$ produce an ideal sub-superalgebra, which we identify with the natural super-extension of the Heisenberg symmetry. In fact, the superalgebraic structure generated by (\ref{ospos}), along with Eqs. (\ref{qho3})-(\ref{SigGJ0}), is a semi-direct sum of this super-Heisenberg symmetry and the superalgebra $\mathfrak{osp}(2|2) $, corresponding to an $ \mathcal{N} = 2 $ super-extension of the Schr\"odinger symmetry \textcolor{red}{[\cite{beckers1, beckers2, InzPly1}]}. \section{The zero frequency limit} \label{Sec0omegalimit} In this section we take the limit $ \omega \rightarrow 0 $ of the supersymmetric generators introduced in the last section, obtaining new $ \mathcal{N} = 2 $ super-extended systems. We start with the supersymmetric AFF model generators, but now we consider the basis \begin{eqnarray} & \hat{\mathcal{D}}_\nu=\frac{i}{4\omega}(\mathcal{G}_\nu^--\mathcal{G}_\nu^+)\,,\qquad \hat{\mathcal{K}}_\nu= \frac{1}{4\omega^2} (\mathcal{H}_\nu^{e}+\mathcal{H}_\nu^{b}-\mathcal{G}_\nu^{+}-\mathcal{G}_\nu^{-})\,,\qquad \hat{\mathcal{H}}_\nu=\frac{1}{2}(\mathcal{H}_\nu^{e}+\mathcal{H}_\nu^{b})-\omega^2 \hat{\mathcal{K}}_\nu\,, &\\& \xi_\nu^a= \frac{1}{2\omega}\epsilon^{ab}(\mathcal{Q}_\nu^b-\mathcal{S}_\nu^b)\,,\qquad \pi_\nu^a= \frac{1}{2}\epsilon^{ab}(\mathcal{Q}_\nu^b+\mathcal{S}_\nu^b)\,,\qquad \mathcal{Z}_\nu= \frac{1}{2}\mathcal{R}_\nu\,.& \end{eqnarray} The generators defined in this way satisfy \begin{eqnarray} & [\hat{\mathcal{D}}_\nu,\hat{\mathcal{H}}_\nu]=i\hat{\mathcal{H}}_\nu\,,\qquad [\hat{\mathcal{D}}_\nu,\hat{\mathcal{K}}_\nu]=-i\hat{\mathcal{K}}_\nu\,,\qquad [\hat{\mathcal{K}}_\nu,\hat{\mathcal{H}}_\nu]=2i\hat{\mathcal{D}}_\nu\,,&\\& \{\xi_\nu^{a},\xi_\nu^{b}\}=2\hat{\mathcal{K}}_\nu\delta^{ab}\,,\qquad \{\pi_\nu^{a},\pi_\nu^{b}\}=2\hat{\mathcal{H}}_{\nu}\delta^{ab}\,,\qquad \{\pi_\nu^{a},\xi_\nu^{b}\}=2\hat{\mathcal{D}}_\nu\delta^{ab}+2\epsilon^{ab}\mathcal{Z}_\nu\,, &\\&\label{superS3} 
[\hat{\mathcal{D}}_\nu,\pi_\nu^a]=\frac{i}{2}\pi_\nu^a\,,\quad [\hat{\mathcal{D}}_\nu,\xi_\nu^a]=-\frac{i}{2}\xi_\nu^a\,,\quad [\mathcal{Z}_\nu,\pi_\nu^a]=-\frac{i}{2}\epsilon_{ab}\pi_\nu^b\,,\quad [\mathcal{Z}_\nu,\xi_\nu^a]=-\frac{i}{2}\epsilon_{ab}\xi_\nu^b\,, &\\& \label{superS3+} [\hat{\mathcal{H}}_\nu,\xi_\nu^a]=-i\pi_\nu^a\,,\qquad [\hat{\mathcal{K}}_\nu,\pi_\nu^a]=i\xi_\nu^a\,. \end{eqnarray} This is the usual way in which the superalgebra $ \mathfrak{osp}(2|2) $ is presented for supersymmetric extensions of the conformal model (\ref{conformalaction}) at the quantum level \textcolor{red}{[\cite{Leiva,SCM5}]}. So it is not a surprise that in the zero frequency limit we get \begin{eqnarray} & \hat{\mathcal{H}}_\nu|_{\omega=0}=\frac{1}{2}\left(p^2+\frac{(\nu+1)^2}{y^2}\right)\mathbb{I}+\frac{\nu+1}{2y^2}\sigma_3\,,&\\& \hat{\mathcal{D}}_\nu|_{\omega=0}= \frac{1}{4i}\left(y\frac{d}{dy}+\frac{d}{dy}y\right)\mathbb{I}:=\mathcal{D}\,,\qquad \hat{\mathcal{K}}_\nu|_{\omega=0}= \frac{y^2}{2}\mathbb{I}:=\mathcal{K}\,, &\\& \xi_\nu^a |_{\omega=0}=\frac{y}{\sqrt{2}}\sigma_a\,,\qquad \pi_\nu^a|_{\omega=0}=\frac{1}{\sqrt{2}} \left( p\sigma_a -\frac{\nu+1}{y}\epsilon_{ab}\sigma_b\right)\,, & \end{eqnarray} where $\mathbb{I}=\text{diag}(1,1)$ and $p=-i\frac{d}{dy}$. We can repeat this procedure for the super-Schr\"odinger symmetry, which we have derived for the super-harmonic oscillator system. 
In this case the generators \begin{equation} \label{freeparticlesupergen} \{\hat{\mathcal{H}}_\nu|_{\omega=0}\,,\hat{\mathcal{D}}_\nu|_{\omega=0}\,, \hat{\mathcal{K}}_\nu|_{\omega=0} \,, \mathcal{Z}_\nu\,,\xi_\nu^a|_{\omega=0}\,,\pi_\nu^a |_{\omega=0}\}|_{\nu=-1, y\rightarrow \tilde{y}}= \{\mathcal{H}_0, \mathcal{D},\mathcal{K},\mathcal{Z}\,, \xi_a\,,\pi_a\} \end{equation} reflect the superconformal symmetry of the super-extended free particle, which in turn includes the additional integrals $\Sigma_1=\frac{1}{2}\sigma_1$, $\Sigma_2=-\frac{1}{2}\sigma_2$ and \begin{equation} \mathcal{P}=\frac{i}{2}(\mathcal{F}^+-\mathcal{F}^-)|_{\omega=0,y\rightarrow \tilde{y}}= -\frac{i}{\sqrt{2}}\frac{d}{d\tilde{y}} \mathbb{I} \,,\qquad \mathcal{X}=\frac{1}{2\omega}(\mathcal{F}^++\mathcal{F}^-)|_{\omega=0,y\rightarrow \tilde{y}}= \frac{\tilde{y}}{\sqrt{2}} \mathbb{I} \,. \end{equation} Together, the generators $\{\mathcal{H}_0, \mathcal{D},\mathcal{K},\mathcal{X}\,,\mathcal{P}\,, \xi_a\,,\pi_a\,,\Sigma_a\}$ produce the super-Schr\"odinger symmetry, now for the super-free particle system \textcolor{red}{[\cite{Aizawa,InzPly1}]}, \begin{eqnarray} \label{superS1} & [\mathcal{D},\mathcal{H}_0]=i\mathcal{H}_0\,,\qquad [\mathcal{D},\mathcal{K}]=-i\mathcal{K}\,,\qquad [\mathcal{K},\mathcal{H}_0]=2i\mathcal{D}\,,\qquad [\mathcal{X},\mathcal{P}]=\frac{1}{2}i\mathbb{I}\,, &\\&\label{superS2} [\mathcal{H}_0,\mathcal{X}]=-i\mathcal{P}\,,\qquad [\mathcal{K},\mathcal{P}]=i\mathcal{X}\,,\qquad [\mathcal{D},\mathcal{P}]=\frac{i}{2}\mathcal{P}\,,\qquad [\mathcal{D},\mathcal{X}]=-\frac{i}{2}\mathcal{X}\,, &\\&\label{superSH3} [\mathcal{D},\pi_a]=\frac{i}{2}\pi_a\,,\quad [\mathcal{D},\xi_a]=-\frac{i}{2}\xi_a\,,\quad [\mathcal{Z},\pi_a]=-\frac{i}{2}\epsilon_{ab}\pi_b\,,\quad [\mathcal{Z},\xi_a]=-\frac{i}{2}\epsilon_{ab}\xi_b\,, &\\&\label{superSH3+} [\mathcal{H}_0,\xi_a]=-i\pi_a\,,\qquad [\mathcal{K},\pi_a]=i\xi_a\,, &\\& \label{superS4} [\mathcal{Z},\Sigma_a]=\frac{i}{2}\epsilon_{ab}\Sigma_b\,,\qquad 
[\mathcal{P},\pi_a]=-i\Sigma_a\,,\qquad [\mathcal{X},\xi_a]=i\Sigma_a\,, &\\&\label{superS5} \{\Sigma_a,\pi_{b}\}=\delta_{ab}\mathcal{P}\,, \qquad \{\Sigma_a,\xi_{b}\}=\delta_{ab}\mathcal{X}\,,\qquad \{\Sigma_a,\Sigma_b\}=\frac{1}{2}\delta_{ab}\mathbb{I}\,, &\\&\label{superS6} \{\pi_a,\pi_b\}=2\delta_{ab}\mathcal{H}_0\,,\qquad \{\xi_a,\xi_b\}=2\delta_{ab}\mathcal{K}\,,\qquad \{\pi_a,\xi_b\}=2\delta_{ab}\mathcal{D}+2\epsilon_{ab}\mathcal{Z}\,. \end{eqnarray} \section{Remarks} In this chapter we have considered one-dimensional conformal and $\mathcal{N}=2$ superconformal mechanical models. In the bosonic case, there are many models that share the same conformal symmetry; examples include the charged particle in a Dirac monopole background, the Landau problem, the rational Calogero model of $ N $ particles, geodesic motion in extremal black hole backgrounds, and the free particle and the harmonic oscillator in $ d $ dimensions, to name a few. In particular, some systems in various dimensions are especially rich thanks to the presence of conformal symmetry. Such is the case of the rational Calogero model, which is not only integrable but also superintegrable, see \textcolor{red}{[\cite{CorLefPly}]} and references therein. On the other hand, higher extensions of superconformal models are also a regular topic in the scientific literature \textcolor{red}{[\cite{SCM1,SCM2,SCM3,SCM4,SCM5}]}. In Sec.~\ref{SecAFF} we have emphasized that the models (\ref{conformalaction}) and (\ref{conformalaction2}) represent two different forms of dynamics associated with the conformal algebra. In the next chapter we will show that there is a non-unitary mapping between the two models. We call it the conformal bridge transformation, and it might be useful to obtain hidden symmetries of higher-dimensional conformally invariant models. 
\chapter{The conformal bridge} \label{ChBridge} As we highlighted in the previous chapter, conformally invariant systems with or without a harmonic potential are just two different dynamical phases of the same algebraic structure. However, there seems to be no direct relationship at the eigenstate level, because one of the Hamiltonians is a non-compact generator, in contrast to the other Hamiltonian (the harmonically trapped one), which is compact. The objective of this chapter is to show that there is a non-unitary transformation that effectively maps one quantum mechanical system to the other, albeit in an unorthodox way. To do so, let us start with the algebra (\ref{so(2,1)preambulo}) without specifying a particular form of the generators. Then we construct the operators \begin{equation} \label{Generalbridge} \mathfrak{S}=e^{-\alpha K} e^{\frac{H}{2\alpha}} e^{i\ln(2)D}\,,\qquad \mathfrak{S}^{-1}= e^{-i\ln(2)D}e^{-\frac{H}{2\alpha}}e^{\alpha K}\,, \end{equation} which from now on we will call the ``conformal bridge'', because by means of the Baker-Campbell-Hausdorff formula \begin{equation} e^{A}Be^{-A}=B+[A,B]+\frac{1}{2!}[A,[A,B]]+\ldots\,, \end{equation} one can show that \begin{eqnarray} \label{ConfBrid chapter 2} & \mathfrak{S}(H)\mathfrak{S}^{-1}=\alpha \mathcal{J}_-\,, \qquad \mathfrak{S}(D)\mathfrak{S}^{-1}=-i\mathcal{J}_0\,, \qquad \mathfrak{S}(K)\mathfrak{S}^{-1}=-\frac{1}{\alpha}\mathcal{J}_+\,. & \end{eqnarray} Here, $\mathcal{J}_0$ and $\mathcal{J}_\pm$ correspond to the generators of the $\mathfrak{sl}(2,\mathbb R)$ algebra given in (\ref{sl(2R)preambulo}). Note that the transformed generators in (\ref{ConfBrid chapter 2}) still satisfy the $\mathfrak{so}(2,1)$ symmetry, i.e., the transformation is an automorphism of the algebra.
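The adjoint (Baker-Campbell-Hausdorff) series underlying (\ref{ConfBrid chapter 2}) can be illustrated numerically; the following sketch, which is illustrative only and not part of the construction above, compares $e^{A}Be^{-A}$ with the truncated series for random matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(4, 4))   # small norm: fast convergence of the series
B = rng.normal(size=(4, 4))

# left-hand side: similarity transformation e^A B e^{-A}
lhs = expm(A) @ B @ expm(-A)

# right-hand side: truncated adjoint series B + [A,B] + [A,[A,B]]/2! + ...
comm = lambda X, Y: X @ Y - Y @ X
rhs, term = B.copy(), B.copy()
for n in range(1, 30):
    term = comm(A, term) / n        # builds ad_A^n(B)/n! iteratively
    rhs = rhs + term

assert np.allclose(lhs, rhs)
```

Since the transformation acts through this adjoint series, any (anti)commutation relation among the transformed generators is preserved, which is the content of the automorphism property.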
As we showed in the previous chapter, in the one-dimensional case $H$ could represent the Hamiltonian of a free particle or that of the model (\ref{conformalaction}), while $\mathcal{J}_0$ could be the Hamiltonian of a harmonic oscillator or that of the AFF model. Therefore, the conformal bridge transformation produces a mapping between these two forms of dynamics as follows: the formal eigenstates of $ -iD $ are transformed into those of $\mathcal {J} _0 $, while eigenstates of $H$ are mapped to eigenstates of the lowering operator $\mathcal{J}_-$, which are in turn coherent states for $\mathcal{J}_0$. Of course, these statements remain true for any other higher-dimensional representation of the generators. In the following sections we explore the scope of this transformation with examples in one and two dimensions. The content of this chapter is based on \textcolor{red}{[\cite{InzPlyWipf2}]}. Here we only consider the basic elements and the most important results related to quantum mechanical examples, even though the construction can be extended to the classical level, as we briefly discuss in Sec.~\ref{Remarks3}. \section{Free particle/harmonic oscillator conformal bridge} Let us identify $\alpha$ with $\omega$ and $H$, $K$ and $D$ with the free particle conformal symmetry generators in the Schr\"odinger picture ($\hbar=m=1$), \begin{equation} \label{free} H=-\frac{1}{2}\frac{d^2}{dx^2}:=H_0\,,\qquad D=\frac{1}{2i}\left(x\frac{d}{dx}+\frac{1}{2}\right)\,,\qquad K=\frac{x^2}{2}\,.
\end{equation} Then, the conformal bridge takes the form \begin{equation} \label{U0KH+} \mathfrak{S}=\exp(-\omega\frac{x^2}{2})\exp(-\frac{1}{4\omega}\frac{d^2}{dx^2})\exp( \ln\sqrt{2}\,\left(x\frac{d}{dx}+\frac{1}{2}\right))\,, \end{equation} while $\mathcal{J}_0$ and $\mathcal{J}_\pm$ are the symmetry generators of the harmonic oscillator, \begin{equation} \label{harmonicoscillatorgen} 2\omega \mathcal{J}_0=\frac{1}{2}\left(-\frac{d^2}{dx^2}+\omega^2x^2\right):=H_{os}\,,\qquad 2\omega \mathcal{J}_\pm=-(a^\pm)^2\,,\qquad a^\pm=\frac{1}{\sqrt{2}}\left(\omega x\mp \frac{d}{dx}\right)\,. \end{equation} As we saw in the previous chapter, these operators are well defined for $x\in \mathbb R$, and there are many more symmetries for these systems. In the case of the free particle we have the momentum operator $p=-i\frac{d}{dx}$ and the Galilean boost, which in the Schr\"odinger picture at $t=0$ is just $x$. These objects are connected with the Heisenberg generators $a^\pm$, appearing in (\ref{harmonicoscillatorgen}), via the conformal bridge as follows, \begin{equation} \label{Heisenbergmap} \mathfrak{S} (x) \mathfrak{S}^{-1}=\frac{1}{\omega}a^+ \,,\qquad \mathfrak{S}(p) \mathfrak{S}^{-1}=-ia^-\,, \end{equation} and, therefore, the transformation is also an automorphism of the Schr\"odinger symmetry. For the sake of simplicity, we set $\omega=1$ for the rest of this chapter. The relation (the inverse Weierstrass transformation of a monomial) \begin{equation} \label{the wellknown relation} e^{-\frac{1}{4}\frac{d^2}{dx^2}}x^{n}=2^{-n}H_n(x)\,, \end{equation} where $H_n(x)$ are the Hermite polynomials, implies that \begin{equation} \psi_{n}(x)=\frac{1}{\sqrt{\pi^{1/2}n!}}H_{n}(x)e^{-\frac{x^2}{2}}=(2\pi)^{\frac{1}{4}} \sqrt{2^{n}n!}\,\, \mathfrak{S}\left(\frac{x}{\sqrt{2}}\right)^{n}\,, \end{equation} which are the eigenstates of $H_{os}$ with eigenvalues $E_{n}=\omega(n+\frac{1}{2})$.
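Relation (\ref{the wellknown relation}) can be checked directly: on polynomials the exponential series of $-\frac{1}{4}\frac{d^2}{dx^2}$ truncates, so the computation is exact. A minimal sympy sketch (illustrative, not part of the original derivation):

```python
import sympy as sp

x = sp.symbols('x')

def weierstrass_inv(poly):
    """Apply exp(-(1/4) d^2/dx^2) to a polynomial; the series truncates."""
    result, term, k = sp.Integer(0), sp.expand(poly), 0
    while term != 0:
        result += term
        k += 1
        term = sp.expand(sp.Rational(-1, 4) * sp.diff(term, x, 2) / k)
    return sp.expand(result)

# e^{-(1/4) d^2/dx^2} x^n = 2^{-n} H_n(x)
for n in range(8):
    assert weierstrass_inv(x**n) == sp.expand(sp.hermite(n, x) / 2**n)
```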
With the exception of $1$ and $x$, which are annihilated by $H_0$, the functions $x^n$ are not solutions of the free particle Schr\"odinger equation. They are in fact rank $n$ Jordan states of zero energy, as the relations \begin{eqnarray} & (H)^{j}x^{2l}=\left(-\frac{1}{2}\right)^{j}\frac{(2l)!}{(2l-2j)!}\,x^{2(l-j)}\,,&\\& (H)^{j}x^{2l+1}=\left(-\frac{1}{2}\right)^{j}\frac{(2l+1)!}{(2l+1-2j)!}\,x^{2(l-j)+1}\,, \end{eqnarray} valid for $j=0,\ldots,l$, show. If we set $j=l$ in these formulas, a subsequent application of $H$ from the left produces $0$ on the right-hand side, and for this reason the nonlocal operator in (\ref{the wellknown relation}) produces a polynomial of order $n$. These Jordan states also satisfy the equation $2iDx^{n}=(n+1/2)x^{n}$, so it is not really a surprise that after applying $ \mathfrak{S} $ one obtains the harmonic oscillator eigenstates. Now let us turn our attention to the plane waves. We know that the functions $e^{i\frac{\kappa} {\sqrt{2}} x}$ are eigenstates of the free particle with energy $E=\kappa^2/4$, and the application of $\mathfrak{S}$ produces \begin{equation} \mathfrak{S}e^{i\frac{\kappa} {\sqrt{2}} x}=2^{\frac{1}{4}} \exp(-\frac{x^2}{2}+\frac{\kappa^2}{4}+i\kappa x)=(2\pi)^{\frac{1}{4}} \sum_{n=0}^{\infty}\sqrt{\frac{2^{n}}{n!}}\,\,(i\kappa)^n\psi_{n}(x):=\psi_{CS}(x,\kappa)\,. \end{equation} These functions are eigenstates of $a^-$ and $(a^-)^2$, and, up to a normalization factor, they are coherent states of $H_{os}$ \textcolor{red}{[\cite{Schrodinger,Klauder,Gazeau}]}. In fact, by applying the evolution operator $U=e^{-iH_{os}t}$ one gets $\psi_{CS}(x,\kappa,t)=e^{-\frac{i t}{2}}\psi_{CS}(x,\kappa e^{-it})$, which is a solution of the harmonic oscillator time-dependent Schr\"odinger equation. To obtain the over-complete set of coherent states, an analytic continuation of $\kappa$ to complex values must be performed.
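The Jordan-state structure of the monomials described above is easy to verify symbolically; in the following illustrative sketch, $H$ is the free-particle Hamiltonian and we check that $x^{2l}$ is annihilated only after $l+1$ applications of $H$, while every monomial is a formal eigenstate of $2iD$:

```python
import sympy as sp

x = sp.symbols('x')
H = lambda f: sp.expand(-sp.diff(f, x, 2) / 2)   # free-particle Hamiltonian H_0

def H_pow(f, j):
    for _ in range(j):
        f = H(f)
    return f

for l in range(1, 5):
    # x^{2l} is a rank-l Jordan state of zero energy:
    assert H_pow(x**(2*l), l) != 0          # l applications: nonzero constant
    assert H_pow(x**(2*l), l + 1) == 0      # one more application annihilates it
    # and x^n is a formal eigenstate of 2iD = x d/dx + 1/2
    n = 2*l + 1
    lhs = x*sp.diff(x**n, x) + sp.Rational(1, 2)*x**n
    assert sp.expand(lhs - (n + sp.Rational(1, 2))*x**n) == 0
```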
From these results one can formulate a general recipe: \begin{itemize} \item Under the conformal bridge transformation, the formal eigenstates of the operator $2iD$, which are also the rank $n$ Jordan states of zero energy, are mapped to normalizable eigenstates of $J_0$. \item Eigenstates of the Hamiltonian $H$ are transformed into coherent states of the system $J_0$, which are eigenstates of $J_-$ and preserve their form under time evolution. To obtain the overcomplete set, negative energy solutions (complex $\kappa$) should also be considered in this map. \item The conformal bridge also serves to map other symmetries from one system to the other, as was the case for the generators of the Heisenberg algebra (\ref{Heisenbergmap}). \end{itemize} In \textcolor{red}{[\cite{InzPlyWipf2}]} it is shown how to obtain squeezed states by applying the conformal bridge to Gaussian packets, and there is also an interesting discussion of the relation between this transformation and the Stone-von Neumann theorem \textcolor{red}{[\cite{Tak}]}. Nevertheless, we prefer not to dwell on these details here. \section{Conformal bridge and the AFF model} \label{conformal mechanics bridge} Let us now take $H$ to be the Hamiltonian operator of the two-body Calogero model with the center-of-mass degree of freedom omitted, \begin{eqnarray}\label{Hnudef} H=\frac{1}{2}\left(-\frac{d^2}{dx^2}+\frac{\nu(\nu+1)}{x^2}\right):=H_\nu\,, \end{eqnarray} while $D$ and $K$ take the same form as in (\ref{free}). With this choice, the conformal bridge is also labeled by $\nu$, \begin{equation} \mathfrak{S}_\nu=\exp(-\frac{x^2}{2}) \exp(\frac{1}{4}\left(-\frac{d^2}{dx^2}+ \frac{\nu(\nu+1)}{x^2}\right)) \exp( \ln\sqrt{2}\,\left(x\frac{d}{dx}+\frac{1}{2}\right))\,, \end{equation} and the operators $J_0$, $J_\pm$ are now \begin{equation} 2\omega J_0=\frac{1}{2}\left(-\frac{d^2}{dx^2}+\frac{\nu(\nu+1)}{x^2}+x^2 \right):=\mathcal{H}_\nu\,,\qquad 2\omega J_\pm=-(a^\pm)^2+\frac{\nu(\nu+1)}{2x^2}:=\mathcal{C}_\nu^\pm\,.
\end{equation} Following the recipe described above, we look for the zero energy solutions and their Jordan states, and thus consider the set of functions $x^{\nu+1+2n}$, $n=0,1,\ldots$. The function with $n=0$ represents a formal eigenstate, diverging at infinity, of the differential operator $H_\nu$ with $\nu\geq -1/2$, with eigenvalue $E=0$. For $n\geq 1$ these functions are the Jordan states of rank $n$ corresponding to the same eigenvalue of $H_\nu$. The functions $x^{\nu+1+2n}$ are at the same time eigenstates of the operator $2i D$ with eigenvalues $\nu+2n+3/2$. The Jordan states with $n\geq 1$ satisfy the relations \begin{eqnarray} \label{j-th_element1+} (H_{\nu})^{j}x^{\nu+1+2n}= \frac{(-2)^{j}\Gamma(n+1)}{\Gamma(n+1-j)}\frac{\Gamma(n+\nu+3/2)}{\Gamma(n+\nu+3/2-j)}x^{\nu+1+2(n-j)}\,, \quad j=0,1,\ldots,n\,, \end{eqnarray} which can be proved by induction. Eq. (\ref{j-th_element1+}) extends to the case $j=n+1$, giving $(H_{\nu})^{n+1}x^{\nu+1+2n}=0$ due to the appearance of a simple pole in the denominator. \vskip0.1cm Using relation (\ref{j-th_element1+}) one can compute the conformal bridge transformation of the functions $x^{2n +\nu+1}$, which gives \begin{eqnarray} \mathfrak{S}_\nu \left( \frac{x}{\sqrt{2}} \right)^{\nu+1+2n}=2^{\frac{-1}{4}}(-1)^{n}\sqrt{n!\Gamma(n+\nu+3/2)}\,\,\psi_{\nu,n}(x)\,, \end{eqnarray} where the eigenstates $\psi_{\nu,n}(x)$ correspond to (\ref{AFF states}) (with $\omega=1$ and $y=x$). On the other hand, application of the operator $\mathfrak{S}_\nu$ to the eigenstates (\ref{states calogero}) (with $x=q$) of the system $H_\nu$ gives \begin{eqnarray} \label{coherent0} &\mathfrak{S}_{\nu}\psi_{\kappa,\nu}(\frac{1}{\sqrt{2}}x)=2^{\frac{1}{4}}e^{-\frac{1}{2}x^2+\frac{1}{4}\kappa^2} \sqrt{x}J_{\nu+1/2}(\kappa x):=\phi_\nu(x,\kappa)\,.& \end{eqnarray} These are the coherent states of the AFF model \textcolor{red}{[\cite{Perelomov}]}, which satisfy \begin{equation} \mathcal{J}_-\phi_\nu(x,\kappa) =-\frac{1}{4}\kappa^2\phi_\nu(x,\kappa)\,.
\end{equation} By allowing $\kappa>0$ to become a complex parameter $z$, coherent states with complex eigenvalues of the operator $\mathcal{J}_-$ can be constructed. Application of the evolution operator $e^{-it \mathcal{H}_\nu}$ to these states gives the time-dependent coherent states \begin{eqnarray} \phi_{\nu}(x,z,t)=2^{1/4}\sqrt{x}J_{\nu+1/2}(z(t) x)e^{-x^2/2+z^2(t)/4-it}\,, \end{eqnarray} where $z(t)=z e^{-it}$. In the case of $\nu=0$, these time-dependent coherent states of the AFF model are the odd Schr\"odinger cat states of the quantum harmonic oscillator \textcolor{red}{[\cite{Dodonov}]}, \begin{equation} \phi_{0}(x,z,t)\propto e^{-\frac{x^2}{2}+\frac{z^2(t)}{4}-\frac{it}{2}}\sin(z(t)x)\,. \end{equation} \section{The conformal bridge and the Landau problem} The generalization of the conformal bridge between the free particle and the harmonic oscillator to the $d$-dimensional case is straightforward: since the problem is separable in Cartesian coordinates, the conformal bridge operator is just $\mathfrak{S}(\mbf{r})=\mathfrak{S}(x_1)\ldots\mathfrak{S}(x_d)$. Each $\mathfrak{S}(x_i)$ touches only the objects constructed in terms of $x_i$ and $p_i=-i\frac{d}{dx_i}$, leaving the other coordinates invariant. Moreover, as both systems possess the $\mathfrak{so}(d)$ symmetry, the generalized angular momentum tensor $M_{ij}=x_ip_j-x_jp_i$ remains intact under the similarity transformation. On the other hand, a nontrivial relation between the two-dimensional free particle, whose conformal symmetry generators are \begin{eqnarray} \label{2dimensionConf} H=\frac{1}{2}(p_x^2+p_y^2)\,,\qquad D=\frac{1}{2}(xp_x+yp_y -i)\,,\qquad K=\frac{1}{2}(x^2+y^2)\,, \end{eqnarray} and the Landau problem in the symmetric gauge can be established by means of the two-dimensional conformal bridge operator \begin{equation}\label{Sxy} \mathfrak{S}(x,y)=\mathfrak{S}(x) \mathfrak{S}(y)\,, \end{equation} with $\mathfrak{S}(x)$ and $\mathfrak{S}(y)$ of the form (\ref{U0KH+}).
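Before turning to the Landau problem, one may check that the two-dimensional conformal generators indeed close the $\mathfrak{so}(2,1)$ algebra. A short sympy sketch (illustrative only, realizing $p_j=-i\partial_j$ on a test function):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# generators realized as differential operators (p_j = -i d/dx_j)
H = lambda g: -(sp.diff(g, x, 2) + sp.diff(g, y, 2)) / 2
D = lambda g: (x*sp.diff(g, x) + y*sp.diff(g, y) + g) / (2*sp.I)
K = lambda g: (x**2 + y**2) / 2 * g

comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

# so(2,1): [D, H] = iH, [D, K] = -iK, [K, H] = 2iD on a test function
assert sp.expand(comm(D, H, f) - sp.I*H(f)) == 0
assert sp.expand(comm(D, K, f) + sp.I*K(f)) == 0
assert sp.expand(comm(K, H, f) - 2*sp.I*D(f)) == 0
```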
This is the subject of this section. Consider now the Landau problem for a scalar particle on $\mathbb R^2$. In the symmetric gauge $\vec{A}=\frac{1}{2}B(-q_2,q_1)$, the Hamiltonian operator (in units $c=m=\hbar=1$) is given by \begin{equation} \label{Landau} H_{\text{L}}=\frac{1}{2}\vec{\Pi}^{2},\qquad \Pi_j=-i\frac{\partial}{\partial q_j}-eA_j\,, \qquad [\Pi_1,\Pi_2]=ieB\,. \end{equation} Assuming $\omega_c=eB>0$, this operator can be factorized as \begin{eqnarray} &H_{\text{L}}=\omega_c(\mathcal{A}^+\mathcal{A}^-+\frac{1}{2})\,, &\\& \label{Landaulad} \mathcal{A}^\pm=\frac{1}{\sqrt{2\omega_c}}(\Pi_1\mp i\Pi_2)\,,\qquad [\mathcal{A}^-, \mathcal{A}^+]= 1\,.& \end{eqnarray} Setting $\omega_c=2$, we can identify the $q_i$ with dimensionless variables $q_1=x$, $q_2=y$. Then we present $\mathcal{A}^\pm$ as linear combinations of the usual ladder operators $a_x^\pm$ and $a_y^\pm$ (whose form corresponds to the third equation in (\ref{harmonicoscillatorgen})), in terms of which we also define the operators $\mathcal{B}^\pm$, \begin{equation} \mathcal{A}^\pm=\frac{1}{\sqrt{2}} (a_y^\pm \pm ia_x^\pm)\,,\qquad \mathcal{B}^\pm=\frac{1}{\sqrt{2}}(a_y^\pm \mp ia_x^\pm)\,. \end{equation} The operators $\mathcal{B}^\pm$ satisfy the relation $[\mathcal{B}^-,\mathcal{B}^+]=1$ and commute with $\mathcal{A}^\pm$. They are integrals of motion, and their non-commuting Hermitian linear combinations $\mathcal{B}^++\mathcal{B}^-$ and $i(\mathcal{B}^+-\mathcal{B}^-)$ are identified with the coordinates of the center of the cyclotron motion. In terms of the ladder operators $a_x^\pm$, $a_y^\pm$ the Hamiltonian $H_{\text{L}}$ takes the form of a linear combination of the Hamiltonian of the isotropic oscillator, $H_{\text{iso}}$, and the angular momentum operator $M$, \begin{equation}\label{HLM3iso} H_{\text{L}}=H_{\text{iso}} -M\,,\qquad H_{\text{iso}}=a_x^+a_x^-+a_y^+a_y^-+1\,,\qquad M=xp_y-yp_x=-i(a_x^+a_y^--a_y^+a_x^- )\,.
\end{equation} On the other hand, $H_{\text{iso}}$ and $M$ are presented in terms of $\mathcal{A}^\pm$ and $\mathcal{B}^\pm$ as follows, \begin{equation}\label{M3Hiso} M= \mathcal{B}^+\mathcal{B}^-- \mathcal{A}^+\mathcal{A}^-\,,\qquad H_{\text{iso}}=\mathcal{B}^+\mathcal{B}^-+ \mathcal{A}^+\mathcal{A}^-+1\,, \end{equation} and we have the commutation relations $ [M,\mathcal{B}^\pm]=\pm\mathcal{B}^\pm,$ $ [M,\mathcal{A}^\pm]=\mp\mathcal{A}^\pm.$ By taking into account the invariance of the angular momentum under the similarity transformation, we find that its linear combination with the dilatation operator is transformed into the Hamiltonian of the Landau problem, \begin{equation} \mathfrak{S}(x,y)(2iD-M)\mathfrak{S}^{-1}(x,y)=H_{\text{L}}\,. \end{equation} Let us now introduce a complex coordinate in the plane, \begin{equation} w=\frac{1}{\sqrt{2}}(y+ix)\,, \quad \text{and}\qquad \bar{w}=\frac{1}{\sqrt{2}}(y-ix)\,. \end{equation} The elements of the conformal algebra and the angular momentum operator then take the form \begin{equation} \label{conformalcomplex} H=-\frac{\partial^2}{\partial w \partial \bar{w}}\,,\quad D=\frac{1}{2i}\left(w\frac{\partial}{\partial w}+\bar{w}\frac{\partial}{\partial \bar{w}}+1\right)\,,\quad K=w\bar{w}\,,\quad M=\bar{w}\frac{\partial}{\partial \bar{w}}-w\frac{\partial}{\partial w}\,, \end{equation} and we find that the operator (\ref{Sxy}) generates the similarity transformations \begin{eqnarray} & \mathfrak{S}(x,y)w\mathfrak{S}^{-1}(x,y)=\mathcal{A}^+\,, \qquad \mathfrak{S}(x,y)\left(\frac{\partial}{\partial w}\right) \mathfrak{S}^{-1}(x,y)=\mathcal{A}^-\,, &\label{wASig}\\ & \mathfrak{S}(x,y)\bar{w}\mathfrak{S}^{-1}(x,y)=\mathcal{B}^+ \,, \qquad \mathfrak{S}(x,y)\left(\frac{\partial}{ \partial \bar{w}}\right)\mathfrak{S}^{-1}(x,y)=\mathcal{B}^-\,, &\label{wBSig}\\ & \mathfrak{S}(x,y)\left(w\frac{\partial} {\partial w}\right)\mathfrak{S}^{-1}(x,y)= \mathcal{A}^+\mathcal{A}^-\,,\qquad \mathfrak{S}(x,y)\left(\bar{w}\frac {\partial} {\partial
\bar{w}}\right)\mathfrak{S}^{-1}(x,y) =\mathcal{B}^+\mathcal{B}^-\,. &\label{wdwSig} \end{eqnarray} Observe that each pair of relations in (\ref{wASig}) and (\ref{wBSig}) has a form similar to that of the one-dimensional transformation (\ref{Heisenbergmap}), in which, however, the coordinate and momentum are Hermitian operators. Simultaneous eigenstates of the operators $w\frac{\partial }{\partial w}$ and $\bar{w}\frac{\partial }{\partial\bar{w}}$, which satisfy the relations $w\frac{\partial}{\partial w}\phi_{n,m}=n\phi_{n,m}$ and $\bar{w}\frac{\partial }{\partial\bar{w}}\phi_{n,m}=m\phi_{n,m}$ with $n,m=0,1,\ldots$, are \begin{equation} \phi_{n,m}(x,y)= w^n(\bar{w})^{m}=2^{-(n+m)/2}\sum_{k=0}^{n} \sum_{l=0}^{m}{n\choose k}{m\choose l}(i)^{n-m+l-k}y^{k+l}x^{n+m-k-l}\,, \end{equation} where the binomial theorem has been used. Employing Eq. (\ref{conformalcomplex}) we find that \begin{eqnarray} & M\phi_{n,m}=(m-n)\phi_{n,m}\,,\qquad 2iD\phi_{n,m}=(n+m+1)\phi_{n,m}\,,&\\& \label{KyHenphi} K\phi_{n,m}=\phi_{n+1,m+1}\,,\qquad H\phi_{n,m}=-nm\phi_{n-1,m-1}\,. & \end{eqnarray} The last equality shows that $\phi_{0,m}$ and $\phi_{n,0}$ are the zero energy eigenstates of the two-dimensional free particle, while the $\phi_{n,m}$ with $n,m>0$ are the Jordan states corresponding to the same zero energy value. Application of the operator $\mathfrak{S}(x,y)$ to these functions yields \begin{equation} \label{landaustates} \mathfrak{S}(x,y)\phi_{n,m}(x,y)=2^{2(n+m)+\frac{1}{2}}e^{-\frac{(x^2+y^2)}{2}}H_{n,m}(y,x) =\psi_{n,m}(x,y)\,, \end{equation} where \begin{equation} H_{n,m}(y,x)=2^{-(n+m)}\sum_{k=0}^{n} \sum_{l=0}^{m}{n\choose k}{m\choose l}(i)^{n-m+l-k}H_{k+l}(y)H_{n+m-k-l}(x)\,, \end{equation} are the complex Hermite polynomials, see \textcolor{red}{[\cite{Hermite}]}.
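The action of $H$, $2iD$, $K$ and $M$ on the monomials $\phi_{n,m}$ follows from elementary differentiation and can be confirmed symbolically; a brief illustrative sketch treating $w$ and $\bar w$ as independent variables:

```python
import sympy as sp

w, wb = sp.symbols('w wbar')

H = lambda f: -sp.diff(f, w, wb)                       # H = -d^2/(dw dwbar)
M = lambda f: wb*sp.diff(f, wb) - w*sp.diff(f, w)
twoiD = lambda f: w*sp.diff(f, w) + wb*sp.diff(f, wb) + f
K = lambda f: w*wb*f

phi = lambda n, m: w**n * wb**m

for n in range(4):
    for m in range(4):
        f = phi(n, m)
        assert sp.expand(M(f) - (m - n)*f) == 0
        assert sp.expand(twoiD(f) - (n + m + 1)*f) == 0
        assert sp.expand(K(f) - phi(n + 1, m + 1)) == 0
        assert sp.expand(H(f) + n*m*phi(n - 1, m - 1)) == 0
```

In particular, the last assertion reproduces the Jordan-state chain: $H$ lowers both indices and annihilates $\phi_{n,m}$ as soon as one index reaches zero.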
These functions are eigenstates of the operators $H_{\text{L}}$, $M$ and $H_{\text{iso}}$, \begin{eqnarray} & H_{\text{L}}\psi_{n,m}=(2n+1)\psi_{n,m}\,,\qquad M\psi_{n,m}=(m-n)\psi_{n,m}\,,&\\& H_{\text{iso}}\psi_{n,m}=(n+m+1)\psi_{n,m}\,,& \end{eqnarray} and we note that $\psi_{n,n}$ is rotationally invariant. Eqs. (\ref{wASig}), (\ref{wBSig}), and (\ref{KyHenphi}) show that the operators $\mathcal{A}^\pm$ and $\mathcal{B}^{\pm}$ act as ladder operators for the indices $n$ and $m$, respectively, while the operators $\hat{\mathcal{J}}_\pm=-\frac{1}{2}((a_x^\pm)^2+(a_y^\pm)^2)$ increase or decrease $n$ and $m$ simultaneously by one. Application of the operator $\mathfrak{S}(x,y)$ to exponential functions of the most general form $e^{\alpha w+\beta \bar{w}}$ with $\alpha,\beta \in \mathbb{C}$ gives here, similarly to the one-dimensional case, the coherent states of the Landau problem as well as of the isotropic harmonic oscillator, \begin{eqnarray} \label{coherentLandau} \begin{array}{lcl} \psi_{\text{L}}(x,y,\alpha,\beta)&=& \mathfrak{S}(x,y) e^{\frac{1}{\sqrt{2}}((\alpha+\beta)y +i(\alpha-\beta)x)}=\sqrt{2} e^{-\frac{(x^2+y^2)}{2}+(\alpha+\beta)y +i(\alpha-\beta)x-\alpha\beta}\\ &=& \sum_{n=0}^{\infty}\sum_{l=0}^{n}\frac{1}{n!}{n \choose l}\alpha^{l}\beta^{n-l}\psi_{l,n-l}(x,y)\,. \end{array} \end{eqnarray} Applying to them, in particular, the evolution operator $e^{-itH_{\text{L}}}$, we obtain a time-dependent solution of the Landau problem, \begin{equation} \psi_{\text{L}}(x,y,\alpha,\beta,t)=e^{-i t}\psi_{\text{L}}(x,y,\alpha e^{-2it},\beta)\,, \end{equation} whereas under rotations these states transform as \begin{equation} e^{i\varphi M}\psi_{\text{L}}(x,y,\alpha,\beta)=\psi_{\text{L}}(x,y,\alpha e^{-i \varphi},\beta e^{i \varphi})\,.
\end{equation} As the function $e^{\alpha w+\beta \bar{w}}$ is a common eigenstate of the differential operators $\frac{\partial}{\partial w}$ and $\frac{\partial}{\partial \bar{w}}$ with eigenvalues $\alpha$ and $\beta$, respectively, our transformation yields \begin{equation} \mathcal{A}^-\psi_{\text{L}}(x,y,\alpha,\beta)=\alpha\psi_{\text{L}}(x,y,\alpha,\beta)\,,\qquad \mathcal{B}^-\psi_{\text{L}}(x,y,\alpha,\beta)=\beta\psi_{\text{L}}(x,y,\alpha,\beta)\,, \end{equation} which provides another explanation of why the wave functions (\ref{coherentLandau}) are the coherent states for the planar harmonic oscillator as well as for the Landau problem. \section{Remarks} \label{Remarks3} Note that if we apply $\mathfrak{S}$ from the right to the equations in (\ref{ConfBrid chapter 2}), we get intertwining relations of the form \begin{eqnarray} & \mathfrak{S}H=\alpha \mathcal{J}_-\mathfrak{S}\,, \qquad \mathfrak{S}D=-i\mathcal{J}_0\mathfrak{S}\,, \qquad \mathfrak{S}K=-\frac{1}{\alpha}\mathcal{J}_+\mathfrak{S}\,, & \end{eqnarray} which are very similar to the usual intertwining relations of supersymmetric quantum mechanics; here, however, the ``intertwining operators'' are nonlocal and non-unitary operators described by an infinite series of powers of second derivatives. This is the reason why non-normalizable functions are mapped to bound states and vice versa. One can go further and try to obtain a classical version of the conformal bridge by using ``Hamiltonian flows'' of the form \begin{equation}\label{TransCan} f(\alpha)=\exp(\alpha F)\star f:=f+\sum_{n=1}^\infty \frac{\alpha^n}{n!}\{F,\{\ldots,\{F,f\underbrace{\}\ldots\}\}}_{n}=:T_F(\alpha)(f)\,, \end{equation} where $F$ represents a symmetry generator, $\alpha$ is a transformation parameter and $f=f(q,p)$ corresponds to a function on phase space.
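The flow $T_F(\alpha)$ can be made concrete for the free-particle generators. In the sketch below (illustrative only; the sign convention for the bracket is chosen so that the $H_0$-flow advances the coordinate along the classical trajectory) the series terminates because the iterated brackets vanish:

```python
import sympy as sp

q, p, t = sp.symbols('q p t')

# Poisson bracket; sign chosen so the H0-flow advances along the trajectory
def pb(F, G):
    return sp.diff(F, p)*sp.diff(G, q) - sp.diff(F, q)*sp.diff(G, p)

def flow(F, alpha, f, order=20):
    """T_F(alpha)(f) = f + sum_n alpha^n/n! {F,{...,{F,f}...}} (nested brackets)."""
    result, term = f, f
    for n in range(1, order):
        term = pb(F, term)
        if term == 0:           # the series terminates on these generators
            break
        result += alpha**n/sp.factorial(n)*term
    return sp.expand(result)

H0, K = p**2/2, q**2/2
assert flow(H0, t, q) == sp.expand(q + t*p)   # free motion: x -> x + t p
assert flow(K, t, p) == sp.expand(p - t*q)    # the K-flow shifts the momentum
```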
The composed transformation \begin{equation}\label{Tabg0} T_{\beta\alpha\gamma}:=T_K(\beta)\circ T_{H_0}(\alpha) \circ T_D(\gamma)= T_K(\beta)\circ T_D(\gamma) \circ T_{H_0}(2\alpha)\,, \end{equation} with the choice \begin{equation}\label{abgfix} \alpha=\frac{i}{2}\,,\qquad \beta=-i\,,\qquad \gamma=-\ln 2\,, \end{equation} is the classical analog of the operator (\ref{Generalbridge}) (the generators should be fixed at $t=0$). This is a complex canonical transformation, so one should expect some relation with $\mathcal{PT}$ symmetry \textcolor{red}{[\cite{PT1,PT2,PT3}]}. Actually, in the case of the classical bridge between the free particle and the harmonic oscillator, the function $T_{iD}(\tau)(x)$, i.e., the ``imaginary'' flow of $x$ generated by $D$, is the one that is mapped to a complex combination of the position and momentum of the harmonic oscillator. Besides, the transformation of the free particle trajectory does not have a clear interpretation. Finally, it is worth emphasizing that Hamiltonians of the form $xp$ have found application in mathematics, namely in the study of the Riemann hypothesis, see \textcolor{red}{[\cite{Connes,Berry,Regniers,Sierra,Bender2017}]}. \chapter{Hidden bosonized superconformal symmetry} \label{ChHiddenboson} It is well known that the one-dimensional quantum harmonic oscillator system is characterized by a bosonized superconformal symmetry \textcolor{red}{[\cite{Hiden1,BalSchBar,CarPly2,Hiden3}]}; however, the origin of this symmetry had not been clarified until the article \textcolor{red}{[\cite{InzPly1}]} appeared, and this chapter summarizes the main results of that work. We show that this supersymmetry can be derived by applying a nonlocal transformation (of the nature of a Foldy-Wouthuysen transformation) to a particular super-extended system.
The latter system itself cannot be obtained directly from a given superpotential, i.e., it is outside of the Darboux transformation scheme; however, its generators are, in fact, linear combinations of the $\mathfrak{osp}(2|2)$ symmetry generators that the super-harmonic oscillator system possesses. They were introduced in Chap.~\ref{ChConformal}, Sec.~\ref{SecOSP22Conformal}. The mentioned system can also be obtained by taking a certain limit in an isospectral deformation of the harmonic oscillator, produced by means of a confluent Darboux transformation. \section{Dimensionless generators} So far we have turned our attention to Hamiltonian operators of the form \begin{equation} \label{Hdimensionless} H=\frac{1}{2}\left(-\frac{d^2}{dy^2}+ V(y) \right)\,,\qquad [y]=\sqrt{t}\,, \end{equation} where $ V (y) $ is the potential of the harmonic oscillator or that of the AFF model. However, when we work with the DCKA transformation, it is worth using dimensionless operators. For this reason, we consider the change of variables $x=\sqrt{\omega} y$, in terms of which the Hamiltonian (\ref{Hdimensionless}) takes the form $H=\frac{\omega}{2}L$, where, depending on the situation we are looking at, the operator $ L $, as well as its eigenstates and spectrum, could be \begin{equation} \label{Lqhosc} L_{\text{os}}=-\frac{d^2}{dx^2}+x^2\,,\qquad \psi_{n}(x)=\frac{H_{n}(x)e^{-\frac{x^2}{2}}}{\sqrt{\pi^{1/2}n!}}\,,\qquad E_n=2n+1\,, \end{equation} or \begin{eqnarray} &\label{AFFless} L_\nu=-\frac{d^2}{dx^2}+x^2+\frac{\nu(\nu+1)}{x^2}\,, &\\& \psi_{\nu,n}(x)=\sqrt{\frac{2n!}{\Gamma(n+\nu+\frac{3}{2})}}\,\,x^{\nu+1}L_{n}^{(\nu+\frac{1}{2})}( x^2)e^{-\frac{ x^2}{2}}\,,\qquad E_{\nu,n}=4n+2\nu+3\,. \end{eqnarray} It is also convenient to redefine the first-order ladder operators of the harmonic oscillator as \begin{equation} \label{Laderdimensionless1} a^{\pm}=\mp\frac{d}{dx}+x,\qquad [a^-,a^+]=2\,,\qquad [L_{\text{os}}, a^\pm]=\pm 2a^\pm\,.
\end{equation} and similarly the second-order ladder operators of the AFF system, which are now given by \begin{eqnarray} &\label{Ladderdimensionless} \mathcal{C}_\nu^{\pm}=-(a^\pm)^2+\frac{\nu(\nu+1)}{x^2}\,,&\\& [L_\nu,\mathcal{C}_\nu^{\pm}]=\pm \Delta E \mathcal{C}_\nu^{\pm}\,,\qquad [\mathcal{C}_\nu^{-},\mathcal{C}_\nu^{+}] =8L_{\nu}\,,\qquad \Delta E=4\,. & \end{eqnarray} In the Heisenberg picture the operators $a^\pm$ and $\mathcal{C}_\nu^{\pm}$ are respectively replaced by ${}_Ha^{\pm}= e^{\mp 2it} a^{\pm}$ and ${}_H \mathcal{C}^{\pm}= e^{\mp 4it} \mathcal{C}^{\pm}$, which are dynamical integrals of motion for the corresponding systems. \section{Hidden superconformal symmetry of the quantum harmonic oscillator}\label{SecHiden} In this section we show how the aforementioned superconformal symmetry appears for the one-dimensional bosonic harmonic oscillator system, the Hamiltonian of which is given by (\ref{Lqhosc}). As the ladder operators (\ref{Laderdimensionless1}) anticommute with the reflection operator $\mathcal{R}$, defined by $\mathcal{R}^2=1$ and $\mathcal{R}x=-x\mathcal{R}$, and their anticommutator produces $\{a^+,a^-\}=2L_{\text{os}}$, it is clear that if one sets $\mathcal{R}$ as the $\mathbb Z_2$-grading operator, then: \begin{itemize} \item $a^\pm$ are identified as odd, fermionic generators, \item $L_{\text{os}}$ and the quadratic operators $(a^{\pm})^2$ are identified as even, bosonic generators, since $[\mathcal{R},L_{\text{os}}]=[\mathcal{R},(a^\pm)^2]=0$. \end{itemize} Consider now the dynamical integrals of motion \begin{equation}\label{localGen} J_0=\frac{1}{4}L_{\text{os}},\qquad J_\pm=\frac{1}{4}e^{\mp4it}(a^{\pm})^2\,,\qquad \alpha_{\pm}=\frac{1}{4}e^{\mp i2t}a^{\pm}\,.
\end{equation} They produce the (anti)commutation relations \begin{eqnarray} \label{sl2}& [J_0,J_\pm]=\pm J_\pm,\qquad [J_-,J_+]=2J_0\,, &\\& \label{osp(21.1)} \{\alpha_{+},\alpha_-\}=\frac{1}{2}J_0\,, \qquad\{\alpha_{\pm},\alpha_\pm\}=\frac{1}{2}J_\pm\,, &\\& \label{osp(21.3)} [J_0,\alpha_{\pm}]=\pm\frac{1}{2}\alpha_{\pm},\qquad [J_\pm,\alpha_{\mp}]=\mp\alpha_{\pm}\,.& \end{eqnarray} The superalgebra (\ref{sl2}), (\ref{osp(21.1)}), (\ref{osp(21.3)}) describes the hidden superconformal $\mathfrak{osp}(1|2)$ symmetry of the quantum harmonic oscillator \textcolor{red}{[\cite{Hiden1,BalSchBar}]}. The set of even integrals $J_0$, $J_\pm$ generates the $\mathfrak{sl}(2,\mathbb R)$ subalgebra (\ref{sl2}), and relations (\ref{osp(21.3)}) mean that the fermionic generators $\alpha_\pm$ form a spin-$1/2$ representation of this Lie subalgebra. One can extend this superalgebra by introducing the fermionic operators \begin{equation}\label{beta} \beta_{\pm}=i\mathcal{R}\alpha_{\pm}\,, \end{equation} which give rise to the additional super-algebraic relations \begin{eqnarray} \label{beta1} & [J_0,\beta_{\pm}]=\pm\frac{1}{2}\beta_{\pm}\,,\qquad [J_{\pm},\beta_{\mp}]=\mp\beta_\pm\,, &\\& \qquad\{\beta_{\pm},\beta_{\pm}\}=\frac{1}{2}J_{\pm}\,,\qquad \{\beta_{+},\beta_{-}\}=\frac{1}{2}J_0\,, \qquad \{\alpha_{\pm},\beta_{\mp}\}=\mp\frac{i}{2}Z\,, &\\& [Z,\alpha_{\pm}]=\frac{i}{2}\beta_{\pm}\,, \qquad [Z,\beta_{\pm}]=-\frac{i}{2}\alpha_\pm\,,\label{betaf} & \end{eqnarray} where \begin{equation} Z=-\frac{1}{4}\mathcal{R}\,. \end{equation} However, this extension is nonlocal, since $\mathcal{R}$ can be presented as $\mathcal{R}=\sin (\frac{\pi}{2}L_{\text{os}})$. We will soon show that the superalgebra given by Eqs. (\ref{sl2})-(\ref{osp(21.3)}) and (\ref{beta1})-(\ref{betaf}) is just another basis for the $\mathfrak{osp}(2|2)$ superconformal algebra presented in Chap.~\ref{ChConformal}, Sec.~\ref{SecOSP22Conformal}.
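The building blocks of this hidden superalgebra can be tested numerically in a truncated oscillator basis; a minimal sketch (illustrative only; the truncation edge is excluded from the comparison where the cut distorts products):

```python
import numpy as np

N = 40
n = np.arange(N)
ap = np.diag(np.sqrt(2*(n[:-1] + 1)), -1)   # a^+ in the oscillator basis ([a^-, a^+] = 2)
am = ap.T                                    # a^-
R = np.diag((-1.0)**n)                       # reflection (parity) operator
L = np.diag(2.0*n + 1)                       # L_os with spectrum E_n = 2n + 1

ac = lambda X, Y: X @ Y + Y @ X
co = lambda X, Y: X @ Y - Y @ X

B = N - 2   # stay away from the truncation edge
assert np.allclose(ac(ap, R), 0)                            # a^± are odd w.r.t. R
assert np.allclose(ac(am, R), 0)
assert np.allclose(ac(ap, am)[:B, :B], (2*L)[:B, :B])       # {a^+, a^-} = 2 L_os
assert np.allclose(co(L, ap), 2*ap)                         # [L_os, a^+] = 2 a^+
assert np.allclose(co(am, ap)[:B, :B], 2*np.eye(N)[:B, :B]) # [a^-, a^+] = 2
```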
\section{Extended system with super-Schr\"odinger symmetry and nonlocal Foldy-Wouthuysen transformation}\label{Sec3Hiden} The approach based on a nonlocal Foldy-Wouthuysen transformation and a subsequent reduction was used to clarify the origin of the hidden bosonized supersymmetry (the one lying outside the conformal symmetry) in \textcolor{red}{[\cite{Gamboa2,jakubsky}]}, and in this section we demonstrate that the bosonized superconformal symmetry introduced above can be ``extracted'' from the symmetry generators of the extended quantum harmonic oscillator system described by the matrix Hamiltonian \begin{equation}\label{Hextbr} \mathcal{H}= \left( \begin{array}{cc} L_{\text{os}} & 0 \\ 0& L_{\text{os}} \end{array} \right). \end{equation} It is natural to identify the diagonal matrix $\Gamma=\sigma_3$ as a $\mathbb Z_2$-grading operator, implying that the Hamiltonian (\ref{Hextbr}) is an even generator, while the anti-diagonal integrals $\sigma_a$, $a=1,2$, can be considered as odd supercharges. The peculiarity of the system (\ref{Hextbr}) is that the anticommutator of these supercharges produces not the Hamiltonian but a central element, $\{\sigma_a,\sigma_b\}=2\delta_{ab}\mathbb{I}$, $\mathbb{I}=\text{diag}\,(1,1)$. On the other hand, all the energy levels of the extended system $\mathcal{H}$ (including the lowest one, $ E_0 = 1> 0 $) are doubly degenerate. Furthermore, neither the supercharges nor the Hamiltonian annihilate any eigenstate or linear combination of eigenstates, so the system is in a spontaneously broken supersymmetric phase.
Additionally, one can also construct the dynamical integrals \begin{eqnarray} &\label{J+-t} \qquad \mathcal{J}_\pm=\frac{1}{4}e^{\mp i4t}\left( \begin{array}{cc} (a^{\pm})^2 & 0 \\ 0& (a^{\pm})^2 \end{array} \right)=\left( \begin{array}{cc} J_\pm & 0 \\ 0& J_\pm \end{array} \right), &\\& \label{C+-t} \mathcal{F}_\pm=\frac{1}{4}e^{\mp i2t}\left( \begin{array}{cc} a^{\pm}& 0 \\ 0 & a^{\pm} \end{array} \right)=\left( \begin{array}{cc} \alpha_\pm& 0 \\ 0 & \alpha_\pm \end{array} \right), &\\& \mathcal{Q}_\pm=\frac{1}{4}e^{\mp i2t} \left( \begin{array}{cc} 0& a^\pm \\ a^\pm& 0 \end{array} \right)= \left( \begin{array}{cc} 0& \alpha_\pm \\ \alpha_\pm & 0 \end{array} \right), \qquad \mathcal{S}_\pm=i\sigma_3\mathcal{Q}_\pm. \label{J+-t2}& \end{eqnarray} Diagonal operators $\mathcal{J}_\pm$ and $\mathcal{F}_\pm$ are identified here as even generators, and antidiagonal dynamical integrals $\mathcal{Q}_\pm$ and $\mathcal{S}_\pm$ are odd. All these generators produce the superalgebra\,: \begin{eqnarray} & \label{supmat1} [\mathcal{J}_0,\mathcal{J}_\pm]= \pm \mathcal{J}_\pm\,, \qquad [\mathcal{J}_-, \mathcal{J}_+]=2\mathcal{J}_0\,, &\\& \label{supmat2} [\mathcal{J}_0,\mathcal{F}_\pm]=\pm\frac{1}{2}\mathcal{F}_\pm\,, \qquad [\mathcal{J}_\pm,\mathcal{F}_\mp]=\mp\mathcal{F}_ \pm\,, \qquad [\mathcal{F}_-,\mathcal{F}_+]=\frac{1}{2}\mathcal{I}\,, &\\& \label{supmat2+} [\mathcal{J}_0,\mathcal{Q}_\pm]=\pm\frac{1}{2}\mathcal{Q}_\pm\,,\quad [\mathcal{J}_0,\mathcal{S}_\pm]=\pm\frac{1}{2}\mathcal{S}_\pm\,,\quad [\mathcal{J}_\pm,\mathcal{Q}_\mp]=\mp \mathcal{Q}_\pm\,,\quad [\mathcal{J}_\pm,\mathcal{S}_\mp]=\mp \mathcal{S}_\pm\,, &\\& \label{supmat3} \{\Sigma_a,\Sigma_b\}=2\delta_{ab}\,\mathcal{I}\,, \qquad \{\Sigma_1,\mathcal{Q}_\pm\}=\mathcal{F}_\pm\,, \qquad \{\Sigma_2,\mathcal{S}_\pm\}=\mathcal{F}_\pm\,, &\\& \label{supmat4} \{\mathcal{Q}_\pm,\mathcal{Q}_\pm\}=\frac{1}{2}\mathcal{J}_\pm\,, \quad \{\mathcal{Q}_+,\mathcal{Q}_-\}=\frac{1}{2}\mathcal{J}_0\,,\quad 
\{\mathcal{S}_\pm,\mathcal{S}_\pm\}=\frac{1}{2}\mathcal{J}_\pm\,, \quad \{\mathcal{S}_+,\mathcal{S}_-\}=\frac{1}{2}\mathcal{J}_0\,, &\\& \label{supmat5} \{\mathcal{Q}_+,\mathcal{S}_-\}=-\frac{i}{2}\mathcal{Z}\,, \qquad \{\mathcal{Q}_-,\mathcal{S}_+\}=\frac{i}{2}\mathcal{Z}\,, &\\& \label{supmat6+} [\mathcal{Z},\Sigma_a]=\frac{i}{2}\epsilon_{ab}\Sigma_b\,,\qquad [\mathcal{Z},\mathcal{Q}_{\pm}]=\frac{i}{2}\mathcal{S}_{\pm}\,, \qquad [\mathcal{Z},\mathcal{S}_{\pm}]=-\frac{i}{2}\mathcal{Q}_\pm\,, &\\& \label{supmat6} [\mathcal{F}_\pm,\mathcal{Q}_\mp]=\mp\frac{1}{4} \Sigma_1,\qquad [\mathcal{F}_\pm,\mathcal{S}_\mp]=\mp\frac{1}{4} \Sigma_2\,, & \end{eqnarray} where \begin{eqnarray}\label{mathcalJ0} & \mathcal{J}_0=\frac{1}{4}\mathcal{H}= \left( \begin{array}{cc} J_0 & 0 \\ 0& J_0 \end{array} \right) \,,&\\& \label{Sigmadef} \Sigma_1=\frac{1}{2}\sigma_1\,,\qquad \Sigma_2=-\frac{1}{2}\sigma_2\,, \qquad \mathcal{Z}=-\frac{1}{4}\sigma_3\,, \qquad \mathcal{I}=\frac{1}{4}\mathbb{I}\,.& \end{eqnarray} All (anti)commutators between generators that are not displayed here vanish. The system presented here cannot be obtained by the usual Darboux transformation procedure, since there is no superpotential for which both superpartner potentials are exactly $x^2$, see \textcolor{red}{[\cite{InzPly1}]}.
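The purely matrix-valued sector of this superalgebra can be verified directly. The sketch below (numpy; the generator names are our own shorthand for the $2\times 2$ parts in (\ref{Sigmadef})) checks $\{\Sigma_a,\Sigma_b\}=2\delta_{ab}\mathcal{I}$ from (\ref{supmat3}) and the first relation $[\mathcal{Z},\Sigma_a]=\frac{i}{2}\epsilon_{ab}\Sigma_b$ of (\ref{supmat6+}):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0 + 0j, -1.0])

Sigma = [s1 / 2, -s2 / 2]        # Sigma_1, Sigma_2 of Eq. (Sigmadef)
Z = -s3 / 4                      # generator mathcal{Z}
Ical = np.eye(2) / 4             # central charge mathcal{I}
eps = [[0, 1], [-1, 0]]          # antisymmetric symbol epsilon_{ab}

comm = lambda A, B: A @ B - B @ A
acomm = lambda A, B: A @ B + B @ A

for a in range(2):
    # {Sigma_a, Sigma_b} = 2 delta_ab * mathcal{I}   (Eq. (supmat3))
    for b in range(2):
        assert np.allclose(acomm(Sigma[a], Sigma[b]), 2 * (a == b) * Ical)
    # [Z, Sigma_a] = (i/2) epsilon_ab Sigma_b        (Eq. (supmat6+))
    rhs = (1j / 2) * (eps[a][0] * Sigma[0] + eps[a][1] * Sigma[1])
    assert np.allclose(comm(Z, Sigma[a]), rhs)
print("matrix sector of the superalgebra verified")
```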
However, when considering the change of basis \begin{eqnarray} & \mathcal{H}_{os}=2(\mathcal{J}_0-\mathcal{Z}) \,,\qquad \mathcal{G}^\pm=-2\mathcal{J}_\pm \,, &\\& \label{Qa osi} \mathcal{Q}_1=2\sqrt{2}\left( Re(\mathcal{Q}_-)+Im(\mathcal{S}_-)\right)\,,\qquad \mathcal{Q}_2=2\sqrt{2}\left(Re(\mathcal{S}_-)-Im(\mathcal{Q}_-)\right)\,, &\\& \label{Qb osi} \mathcal{S}_1= 2\sqrt{2}\left(Re(\mathcal{Q}_-)- Im(\mathcal{S}_-)\right)\,,\qquad \mathcal{S}_2= 2\sqrt{2}\left( Re(\mathcal{S}_-)+ Im(\mathcal{Q}_-)\right)\,, & \end{eqnarray} and identifying $2\mathcal{Z}$ with the generator of the $R$ symmetry, one realizes that the generators defined in this way satisfy the $\mathfrak{osp}(2|2)$ superconformal algebra (\ref{HRQ0})-(\ref{QSGG}). In fact, the generators (\ref{Qa osi})-(\ref{Qb osi}) coincide with $\mathcal{Q}_a$ and $\mathcal{S}_a$ in (\ref{ospos}) at $t = 0$\footnote{Putting $\omega = 1$ and therefore $y = x$.}, and in addition, the operators $\Sigma_a$ and $\mathcal{F}_\pm$ are, up to proportionality factors, the generators of the super-extended Heisenberg symmetry. With this information at hand we identify (\ref{supmat1})-(\ref{supmat6}) as another expression of the super-Schr\"odinger symmetry. Comparing with the previous section, it is clear that the matrix integrals $\mathcal{J}_0$, $\mathcal{J}_\pm$, $\mathcal{Z}$, $\mathcal{Q}_\pm$, $\mathcal{S}_\pm$ of the extended system (\ref{Hextbr}) are analogous to the corresponding integrals $J_0$, $J_\pm$, $Z$, $\alpha_\pm$, $\beta_\pm$ of the quantum harmonic oscillator. Because of the extension, the nonlocal integrals $Z$ and $\beta_\pm$ of the system (\ref{Lqhosc}) are replaced here by the corresponding local matrix integrals $\mathcal{Z}$ and $\mathcal{S}_\pm$.
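The bosonic $\mathfrak{sl}(2,\mathbb R)$ and Heisenberg parts of the algebra act identically in both diagonal entries of the matrix generators, so they can be tested in a single copy of the oscillator. The sketch below (numpy) uses a finite $N$-level truncation of the oscillator, which is our own device, so the relations are checked away from the truncation edge, where they hold exactly:

```python
import numpy as np

N = 40                                           # truncation size (our assumption)
n = np.arange(N)
am = np.sqrt(2) * np.diag(np.sqrt(n[1:]), k=1)   # a^- with [a^-, a^+] = 2
ap = am.T.copy()
Los = ap @ am + np.eye(N)                        # L_os = a^+ a^- + 1
assert np.allclose(np.diag(Los), 2 * n + 1)      # spectrum E_n = 2n + 1, E_0 = 1

J0, Jm, Jp = Los / 4, am @ am / 4, ap @ ap / 4   # J_0, J_-, J_+
Fm, Fp = am / 4, ap / 4                          # alpha_-, alpha_+

def comm(A, B):
    return A @ B - B @ A

k = N - 4    # compare away from the truncation edge
assert np.allclose(comm(J0, Jp)[:k, :k], Jp[:k, :k])        # [J_0, J_+] = +J_+
assert np.allclose(comm(J0, Jm)[:k, :k], -Jm[:k, :k])       # [J_0, J_-] = -J_-
assert np.allclose(comm(Jm, Jp)[:k, :k], 2 * J0[:k, :k])    # [J_-, J_+] = 2 J_0
# [F_-, F_+] = (1/2) * (1/4) * I = I/8, the scalar analog of (1/2) mathcal{I}
assert np.allclose(comm(Fm, Fp)[:k, :k], np.eye(N)[:k, :k] / 8)
print("sl(2) and Heisenberg relations verified")
```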
The anti-commutator of the additional fermionic integrals $\Sigma_a$ with $\Sigma_b$ generates the central charge $\mathcal{I}$, and via the anti-commutators with the odd dynamical integrals $\mathcal{Q}_\pm$ and $\mathcal{S}_\pm$ they produce the additional bosonic integrals $\mathcal{F}_\pm$, see Eq. (\ref{supmat3}). The comparison of the symmetries and generators of the systems (\ref{Hextbr}) and (\ref{Lqhosc}) indicates that the local $\mathfrak{osp}(1|2)$ and nonlocal $\mathfrak{osp}(2|2)$ hidden superconformal symmetries of the quantum harmonic oscillator can be obtained by a certain projection (reduction) of the local symmetries of the matrix system (\ref{Hextbr}). To find the exact relation between these two systems and their symmetries, we apply to the extended system a unitary transformation $\mathcal{O}\mapsto \widetilde{\mathcal{O}}=U\mathcal{O}U^\dagger$ generated by the nonlocal matrix operator \begin{equation}\label{Utrans} U=U^\dagger=U^{-1}=\frac{1}{2} \left( \begin{array}{cc} 1+\mathcal{R} & 1-\mathcal{R} \\ 1-\mathcal{R}& 1+\mathcal{R} \end{array} \right). \end{equation} Under this transformation, the central element $\mathcal{I}$ and the generators of the $\mathfrak{sl}(2,\mathbb R)$ subalgebra, $\mathcal{J}_0$ and $\mathcal{J}_\pm$, do not change, while the other generators take the following form\,: \begin{equation} \widetilde{\mathcal{Z}}=\frac{1}{4}\left( \begin{array}{cc} -\mathcal{R}& 0 \\ 0& \mathcal{R} \end{array} \right)\,, \end{equation} \begin{equation} \widetilde{\mathcal{Q}_\pm}= \left( \begin{array}{cc} \alpha_\pm& 0 \\ 0& \alpha_\pm \end{array} \right),\qquad \widetilde{\mathcal{S}_\pm}= \left( \begin{array}{cc} i\mathcal{R}\alpha_\pm& 0 \\ 0& -i\mathcal{R}\alpha_\pm \end{array} \right) =\left( \begin{array}{cc} \beta_\pm& 0 \\ 0& -\beta_\pm \end{array} \right), \end{equation} \begin{equation} \widetilde{\Sigma_1}=\frac{1}{2}\sigma_1\,,\qquad \widetilde{\Sigma_2}=-\frac{1}{2}\sigma_2\mathcal{R}\,, \qquad \widetilde{\mathcal{F}_\pm}=\sigma_1\alpha_\pm \,.
\end{equation} Note that the transformation diagonalizes the dynamical odd integrals $\mathcal{Q}_\pm$ and $\mathcal{S}_\pm$, which initially had anti-diagonal form. Therefore, the transformation is of the same nature as the Foldy-Wouthuysen transformation for a Dirac particle in an external electromagnetic field \textcolor{red}{[\cite{FW}]}. On the other hand, the transformed even, $\widetilde{\mathcal{Z}}$, and odd, $\widetilde{\mathcal{S}_\pm}$, generators of the super-extended Schr\"odinger symmetry of the system (\ref{Hextbr}) take a nonlocal form. We can reduce (or, in other words, project) the transformed system and its symmetries to the proper subspace of eigenvalue $+1$ of the matrix $\sigma_3$, which corresponds, according to Eq. (\ref{mathcalJ0}), to the single (non-extended) quantum harmonic oscillator system. In this procedure (which can be carried out with the projector $\Pi_+=\frac{1}{2}(1+\sigma_3)$) we lose the operators $\widetilde{\mathcal{F}_\pm}$ and $\widetilde{\Sigma_{b}}$ because they are anti-diagonal, but, on the other hand, we retrieve all the generators of the bosonized superconformal symmetry given in the previous section. \section{Two-step isospectral Darboux chain}\label{Sec5} As we have indicated previously, the extended system (\ref{Hextbr}) cannot be produced by means of the usual supersymmetric algorithm based on some superpotential $W(x)$. In this section we show that an option to generate this system is through a two-step confluent Darboux transformation: the extended system obtained in this way has a set of true and dynamical integrals of motion, and after taking a certain limit, these integrals give us the generators of the super-extended Schr\"odinger symmetry related to (\ref{Hextbr}).
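Before constructing the Darboux chain, it is instructive to verify numerically the key property of the transformation (\ref{Utrans}) from the previous section. In a truncated oscillator basis (our own device) the reflection operator becomes $\mathcal{R}=\mathrm{diag}\,((-1)^n)$, and the sketch below (numpy) confirms that $U^2=1$, that $U$ maps the anti-diagonal structure $\sigma_1\otimes a^-$ into diagonal form, in agreement with $\widetilde{\mathcal{Q}_\pm}$, and that $\mathcal{Z}$ acquires its nonlocal form $\widetilde{\mathcal{Z}}$:

```python
import numpy as np

N = 30
n = np.arange(N)
am = np.sqrt(2) * np.diag(np.sqrt(n[1:]), k=1)   # a^- in the oscillator basis
R = np.diag((-1.0) ** n)                         # reflection: R a^- R = -a^-
I = np.eye(N)
O = np.zeros((N, N))

# U = (1/2) [[1+R, 1-R], [1-R, 1+R]] in block form
U = 0.5 * np.block([[I + R, I - R], [I - R, I + R]])
assert np.allclose(U @ U, np.eye(2 * N))         # U = U^dagger = U^{-1}

Q = np.block([[O, am], [am, O]])                 # anti-diagonal odd operator
Qt = U @ Q @ U
assert np.allclose(Qt, np.block([[am, O], [O, am]]))    # diagonalized: diag(a^-, a^-)

Z = np.block([[-I, O], [O, I]]) / 4              # mathcal{Z} = -(1/4) sigma_3
Zt = U @ Z @ U
assert np.allclose(Zt, np.block([[-R, O], [O, R]]) / 4)  # nonlocal tilde-Z
print("Foldy-Wouthuysen-type diagonalization verified")
```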
Consider the function $\psi_0(x)$, the normalized ground state of (\ref{Lqhosc}), and the function $\chi_0(x)$, given by \begin{equation} \chi_0(x;\mu)=\mu\widetilde{\psi_0(x)}+\Omega_0\,, \end{equation} where $\Omega_0$ is a Jordan state of energy $E=1$, whose form corresponds to (\ref{omega1}), and $\mu$ is a real constant. By construction, $\chi_0$ satisfies $H_-^2\chi_0=0$ with $H_-=a^+ a^-=L_{\text{os}}-1$, and the application of $a^-$ to it gives us \begin{equation}\label{varphi0} \varphi_{-0}(x;\mu)=a^{-}\chi_0(x;\mu)=\frac{\mu+I_0(x)}{\psi_0(x)}=\mu\psi_{-0}(x)+\widetilde{\psi_{-0}(x)}\,,\qquad I_0(x)=\int_{-\infty}^x (\psi_0(t))^2dt\,, \end{equation} where $\psi_{-0}(x)=e^{x^2/2}$ is a nonphysical eigenstate of $L_{\text{os}}$ with negative energy $E=-1$ and $\widetilde{\psi_{-0}(x)}$ is its linearly independent partner constructed according to (\ref{tildepsi}). If we choose the value of the parameter $\mu$ in one of the infinite intervals $(-\infty,-1)$ or $(0,\infty)$, the function $\varphi_{-0}(x;\mu)$ is nodeless on the whole real line; it is a nonphysical eigenstate of $H_+=a^-a^+$ with zero eigenvalue, $H_+\varphi_{-0}(x;\mu)=0$, and we can use it as the seed state for a new Darboux transformation, which produces the first order differential operators \begin{equation} A^-_\mu=\varphi_{-0}(x;\mu)\frac{d}{dx}\frac{1}{\varphi_{-0}(x;\mu)}= \frac{d}{dx}+W(x;\mu)\, , \qquad A^+_\mu= (A^-_\mu)^\dagger\,, \end{equation} where \begin{equation} W(x;\mu)=-(\ln \varphi_{-0}(x;\mu))'=- x -\frac{\psi_0(x)}{\varphi_{-0}(x;\mu)}\,. \end{equation} These operators factorize $H_+$ and \begin{equation}\label{H-muWron} H_\mu=H_++2W'=H_--2\left(\ln (I_0(x)+ \mu)\right)''\,, \end{equation} $A^+_\mu A^-_\mu=H_+$, $A^-_\mu A^+_\mu=H_\mu$, and intertwine them, $A^-_\mu H_+=H_\mu A^-_\mu$, $A^+_\mu H_\mu=H_+ A^+_\mu$.
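The stated properties of $\varphi_{-0}$ and $W$ can be confirmed symbolically. The sketch below (sympy) uses the explicit ground state $\psi_0=\pi^{-1/4}e^{-x^2/2}$, for which $I_0(x)=(1+\mathrm{erf}(x))/2$; it checks that $H_+\varphi_{-0}=0$, that the factorization identity $W^2-W'=x^2+1$ (equivalent to $A^+_\mu A^-_\mu=H_+$) holds, that $1/\varphi_{-0}$ is a zero mode of the deformed Hamiltonian (\ref{H-muWron}), and the limiting behavior of $W$ for large $\mu$ (for definiteness we take $\mu>0$, inside the allowed interval):

```python
import sympy as sp

x = sp.Symbol('x', real=True)
mu = sp.Symbol('mu', positive=True)          # mu in the allowed interval (0, infinity)

psi0 = sp.pi ** sp.Rational(-1, 4) * sp.exp(-x**2 / 2)   # normalized ground state
I0 = (1 + sp.erf(x)) / 2                                 # I_0(x) = int_{-oo}^x psi_0^2
phi = (mu + I0) / psi0                                   # varphi_{-0}(x; mu)

# H_+ = a^- a^+ = -d^2/dx^2 + x^2 + 1 annihilates varphi_{-0}
assert sp.simplify(-sp.diff(phi, x, 2) + (x**2 + 1) * phi) == 0

# W = -(ln varphi_{-0})'; the factorization A^+ A^- = H_+ means W^2 - W' = x^2 + 1
W = -sp.diff(sp.log(phi), x)
assert sp.simplify(W**2 - sp.diff(W, x) - x**2 - 1) == 0

# H_mu = H_- - 2 (ln(I_0 + mu))'' has the zero-energy state 1/varphi_{-0}
Vmu = x**2 - 1 - 2 * sp.diff(sp.log(I0 + mu), x, 2)
gs = 1 / phi
assert sp.simplify(-sp.diff(gs, x, 2) + Vmu * gs) == 0

# as mu -> infinity, W -> -x, so that A^-_mu = d/dx + W -> -a^+
assert sp.limit(W + x, mu, sp.oo) == 0
print("Darboux data for H_mu verified")
```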
Considering the second order differential operators given by the composition of the first order Darboux generators, \begin{equation}\label{secondA} \mathbb A^-_\mu=A^-_\mu a^-\,,\qquad \mathbb A^+_\mu=a^+A^+_\mu\,, \end{equation} we find that they intertwine the Hamiltonian operators $H_-$ and $H_\mu$, \begin{equation}\label{A2inter} \mathbb A^-_\mu H_-=H_\mu \mathbb A^-_\mu\,,\qquad \mathbb A^+_\mu H_\mu=H_- \mathbb A^+_\mu\,, \end{equation} and also satisfy the relations $\mathbb A^+_\mu\mathbb A^-_\mu=(H_-)^2$, $\mathbb A^-_\mu\mathbb A^+_\mu=(H_\mu)^2$. By construction, \begin{equation} \ker\,(\mathbb A^-_\mu)=\text{span}\,\{\psi_0(x),\chi_0(x;\mu)\}\,. \end{equation} The Darboux-deformed oscillator system described by the Hamiltonian operator $H_\mu$ is \emph{completely isospectral} to the system $H_-$. Its eigenstates with eigenvalues $E=2n$, $n=1,2,\ldots$, are obtained by the mapping $\mathbb A^-_\mu$\,: $\psi_n(x)\mapsto \psi_n(x;\mu)=\mathbb A^-_\mu\psi_n(x)$, $H_\mu\psi_n(x;\mu)=2n\psi_n(x;\mu)$. The (not normalized) ground state of zero energy of the system $H_\mu$ is described by the wave function $\psi_0(x;\mu)=\frac{1}{\varphi_{-0}(x;\mu)}$, where $\varphi_{-0}(x;\mu)$ corresponds to (\ref{varphi0}). Thus we obtain the completely isospectral pair $H_-$ and $H_\mu$, from which we compose the extended system described by the matrix Hamiltonian operator \begin{equation}\label{calHmu} \mathcal{H}_\mu=\left( \begin{array}{cc} H_\mu & 0 \\ 0 & H_- \end{array}\right). \end{equation} On the other hand, $A_\mu^-$ and $A^+_\mu$ intertwine $H_+=H_-+2$ and $H_\mu$, which implies \begin{equation}\label{Amuinter} A^-_\mu H_-=(H_\mu-2) A^-_\mu\,,\qquad A^+_\mu (H_\mu-2)=H_- A^+_\mu\,. \end{equation} For this system we have, in fact, three Darboux schemes\,: \begin{itemize} \item Scheme $(\psi_0(x),\chi_0(x;\mu))$, which provides us with the intertwining operators $\mathbb A^\pm_\mu$. \item Scheme $(\varphi_{-0}(x;\mu))$, the intertwining operators of which are $A^\pm_\mu$.
\item Scheme $(\psi_0(x),\psi_1(x), a^+\chi_0(x;\mu))$, which gives us the third order intertwining operators $\mathcal{A}^-_\mu=A^-_\mu(a^-)^2=\mathbb A^-_\mu a^-$ and $\mathcal{A}^+_\mu=(\mathcal{A}^-_\mu)^\dagger$, which satisfy $\mathcal{A}^-_\mu H_-=(H_\mu+2)\mathcal{A}^-_\mu$, $\mathcal{A}^+_\mu (H_\mu+2)=H_-\mathcal{A}^+_\mu$. \end{itemize} Using the intertwining operators of these three Darboux schemes, we construct the odd integrals \begin{equation} \mathcal{Q}_{\mu 1}=\left( \begin{array}{cc} 0 & \mathbb A^- _\mu \\ \mathbb A^+_\mu & 0 \end{array}\right),\quad \mathcal{Q}_{\mu 2}=i\sigma_3\mathcal{Q}_{\mu 1}\,, \quad \mathcal{S}_{\mu 1}=\left( \begin{array}{cc} 0 & A^-_\mu \\ A^+_\mu & 0 \end{array}\right),\quad \mathcal{S}_{\mu 2}=i\sigma_3\mathcal{S}_{\mu 1}\,, \end{equation} \begin{equation} \mathcal{L}_{\mu 1}=\left( \begin{array}{cc} 0 & \mathcal{A}^-_\mu \\ \mathcal{A}^+_\mu& 0 \end{array}\right),\qquad \mathcal{L}_{\mu 2}=i\sigma_3\mathcal{L}_{\mu 1}\,, \end{equation} and by means of the relations \begin{eqnarray} & \mathbb A^-_\mu A^+_\mu=A^-_\mu a^-A^+_\mu\,,\qquad A^+_\mu \mathbb A^-_\mu=(H_-+2) a^-\,,&\\& \mathcal{A}^-_\mu A^+_\mu=A^-_\mu (a^-)^2 A^+_\mu\,,\qquad A^+_\mu \mathcal{A}^-_\mu=(H_-+2)(a^-)^2\,,& \end{eqnarray} we also construct the diagonal (even) operators \begin{equation} \mathcal{F}_{\mu-}=\left( \begin{array}{cc} A_\mu^-a^-A_\mu^+ & 0 \\ 0 & (H_-+2)a^- \end{array}\right),\qquad \mathcal{J}_{\mu-}=\left( \begin{array}{cc} A_\mu^-(a^-)^2A_\mu^+ & 0 \\ 0 & (H_-+2)(a^-)^2 \end{array}\right), \end{equation} and the Hermitian conjugate operators $\mathcal{F}_{\mu+}$ and $\mathcal{J}_{\mu+}$. With respect to the Hamiltonian $\mathcal{H}_\mu$, the only time-independent integrals are the supercharges $\mathcal{Q}_{\mu a}$, $a=1,2$. To obtain the dynamical integrals one should unitarily transform the other operators with $U(t)=\exp\left(i\mathcal{H}_\mu t\right)$\,.
The generators considered here produce a kind of nonlinear deformation of the super-Schr\"odinger symmetry. We are not interested here in the explicit form of this nonlinear superalgebra; we just note that when $\mu\rightarrow \pm \infty$, we have $(\ln(I_0(x) + \mu))' \rightarrow 0$. As a result, in either of the two limits the Hamiltonian $H_\mu$ transforms into $H_-$, and the matrix Hamiltonian transforms into the extended Hamiltonian (\ref{Hextbr}) shifted by minus the unit matrix\,: $\mathcal{H}_\mu\rightarrow \mathcal{H} -\mathbb{I}$. In this limit we also have $A^\pm_\mu\rightarrow -a^\mp$, and we find that the constructed operators transform as follows\,: \begin{eqnarray} \mathcal{Q}_{\mu 1}\rightarrow -(\mathcal{H}-1)\sigma_1\,,&&\quad \mathcal{Q}_{\mu 2}\rightarrow (\mathcal{H}-1)\sigma_2\,,\\ \mathcal{S}_{\mu a}\rightarrow-\breve{\mathcal{S}}_a\,,&&\quad \mathcal{L}_{\mu a}\rightarrow-(\mathcal{H}-2+\sigma_3) \widehat{\mathcal{Q}}_a\,,\\ \mathcal{F}_{\mu -}\rightarrow (\mathcal{H}-\sigma_3)\mathcal{F}_-\,,&& \quad \mathcal{F}_{\mu +}\rightarrow \mathcal{F}_+(\mathcal{H}-\sigma_3)\,,\\ \mathcal{J}_{\mu -}\rightarrow (\mathcal{H}-\sigma_3)\mathcal{J}_-\,,&& \quad \mathcal{J}_{\mu +}\rightarrow \mathcal{J}_+(\mathcal{H}-\sigma_3)\,. \end{eqnarray} In such a way we reproduce all the corresponding integrals of the system (\ref{Hextbr}) that generate the super-extended Schr\"odinger symmetry lying behind the hidden superconformal symmetries $\mathfrak{osp}(1|2)$ and $\mathfrak{osp}(2|2)$ of the single quantum harmonic oscillator. The isospectral deformation $V_\mu(x)$ of the harmonic oscillator potential is illustrated by Figure \ref{HidenFig1}, while Figure \ref{HidenFig2} illustrates the action of the intertwining operators $\mathbb A^\pm_\mu$ and $\mathcal{A}^\pm_\mu$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{figure1.eps} \includegraphics[scale=0.7]{figure2.eps} \caption[Behavior of the potential, Sec.
5.4 ]{\small{ On the left: the isospectrally deformed potential $V_\mu$ at $\mu=1$ and $\mu=-3$ is shown by continuous red and dashed black lines, respectively. On the right: the difference $V_\mu(x)-x^2$, given by the last term in Eq. (\ref{H-muWron}), is shown for the same values $\mu=1$ and $\mu=-3$. As the modulus of the deformation parameter $\mu$ increases, the amplitudes of the minimum and maximum of the difference $V_\mu(x)-x^2$ decrease, and in both limits $\mu \rightarrow\pm \infty$ the deformed potential $V_\mu(x)$ transforms into the harmonic potential $V=x^2$, shown on the left by the continuous blue line. }} \label{HidenFig1} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3]{figure3.eps} \caption[Intertwining operators behavior, Sec. 5.4 ]{\small{Mapping of the eigenstates of the systems $H_-$ and $H_\mu$ by the intertwining operators $\mathbb A^\pm_\mu$ and $\mathcal{A}^\pm_\mu$ via the eigenstates of the intermediate system $H_+$. The ground state $\mathbb A^-_\mu\widetilde{\psi_0 }$ of $H_\mu$ is obtained by applying $\mathbb A^-_\mu$ to the nonphysical eigenstate $\widetilde{\psi_0 }$ of $H_-$. It can also be generated (not shown) by the action of $\mathcal{A}^-_\mu$ on the nonphysical eigenstate $\widetilde{\psi_1}$ of $H_-$ via the nonphysical eigenstate $\psi_{-0}$ of $H_+$. }} \label{HidenFig2} \end{center} \end{figure} In conclusion of this section we note that the Hamiltonian (\ref{calHmu}) and the second order intertwining operators $\mathbb A^\pm_\mu$ can be presented in an alternative form which corresponds to the anomaly-free scheme of quantization of classical systems with second-order supersymmetry \textcolor{red}{[\cite{Plyushchay}]}. For this we introduce the quasi-amplitude \textcolor{red}{[\cite{Brezh}]} \begin{equation} \Xi(x)=\sqrt{\psi_{-0}(x)\varphi_{-0}(x;\mu)}\,. \end{equation} It is the square root of the product of two nonphysical eigenstates of eigenvalue $-1$ of the quantum harmonic oscillator $L_{\text{os}}$.
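With the overall normalization fixed below by $\mathcal{W}(x)=1/(2\Xi^2)=\frac{1}{2}(\ln(I_0(x)+\mu))'$, one has $\Xi^2=\sqrt{\pi}\,e^{x^2}(I_0(x)+\mu)$, and one can check numerically that this quasi-amplitude obeys an Ermakov-Pinney-type equation, $-\Xi''+(x^2+1)\Xi=1/(4\Xi^3)$. The following sketch does so (plain Python with finite differences; the sample value of $\mu$, the step size and the tolerance are our choices):

```python
import math

MU = 1.0  # sample value of the deformation parameter (our choice)

def Xi(x):
    # Xi^2 = sqrt(pi) * exp(x^2) * (I_0(x) + mu), normalization from W = 1/(2 Xi^2)
    I0 = (1 + math.erf(x)) / 2
    return math.sqrt(math.sqrt(math.pi) * math.exp(x**2) * (I0 + MU))

def residual(x, h=1e-4):
    # residual of -Xi'' + (x^2 + 1) Xi = 1/(4 Xi^3), Xi'' from central differences
    d2 = (Xi(x + h) - 2 * Xi(x) + Xi(x - h)) / h**2
    return -d2 + (x**2 + 1) * Xi(x) - 1 / (4 * Xi(x) ** 3)

for x in (-1.0, 0.3, 1.2):
    assert abs(residual(x)) < 1e-5
print("Ermakov-Pinney equation satisfied")
```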
The rescaled function $\Xi(x)/\sqrt{\mu}$ transforms in the limit $\mu\rightarrow \pm \infty$ into the nonphysical eigenstate $\psi_{-0}$. This function satisfies the Ermakov-Pinney equation \textcolor{red}{[\cite{ermakov1,ermakov2,ermakov3,ermakov4}]} \begin{equation} -\Xi''+(x^2+1)\Xi=\frac{1}{4\Xi^3}\,. \end{equation} In terms of the quasi-amplitude, one can define the first order differential operators \begin{equation}\label{Axi} A^-_\Xi=\Xi(x)\frac{d}{dx}\frac{1}{\Xi(x)}=\frac{d}{dx}-x-\mathcal{W}(x)\,, \qquad A^+_\Xi=(A^-_\Xi)^\dagger\,, \end{equation} where \begin{equation}\label{calWsup} \mathcal{W}(x)=\frac{1}{2\Xi^2(x)}= \frac{1}{2}\left(\ln (I_0(x)+\mu)\right)'\,. \end{equation} Then the matrix Hamiltonian $\mathcal{H}_\mu$ and the intertwining operator $\mathbb A^-_\mu$ can be presented in the form \begin{equation} \mathcal{H}_\mu=A^-_\Xi A^+_\Xi +\mathcal{W}^2 - 2\mathcal{W}'\sigma_3\,, \qquad \mathbb A^-_\mu=-(A^-_\Xi-\mathcal{W})(A^+_\Xi+\mathcal{W})\,. \end{equation} In the anomaly-free scheme of quantization, the function $\mathcal{W}(x)$ plays the role of the superpotential for the corresponding classical system with second order supersymmetry \textcolor{red}{[\cite{PlyPara,KliPly,Plyushchay}]}. \section{Remarks} Along with the harmonic oscillator, many other bosonic systems possess hidden bosonized supersymmetry, and the idea of using the Foldy-Wouthuysen transformation in this context is not new, see \textcolor{red}{[\cite{Boson1,PlyPara,Gamboa2,Boson2,Boson3,jakubsky}]}. In fact, one can use the transformation (\ref{Utrans}) on the generators of the super-extended free particle given in Chap. \ref{ChConformal}, Sec. \ref{Sec0omegalimit}, to obtain the hidden superconformal symmetry of the bosonic free particle. The exotic feature here is that the supersymmetric system from which we extract the bosonic superalgebra does not, in principle, correspond to any Darboux transformation scheme.
However, it is possible to obtain such a system starting from the classical level: consider a classical system described by the Hamiltonian \begin{equation}\label{Comm1} H=p^2+W^2 +W'[\theta^+,\theta^-] \end{equation} with the superpotential $W(x)=\sqrt{x^2+c^2}$, where $c>0$ is a constant, while $\theta^+$ and $\theta^-=(\theta^+)^*$ are Grassmann variables with the nonzero Poisson bracket $\{\theta^+,\theta^-\}_{{}_{PB}}=-i$, which after quantization are realized as the fermionic creation-annihilation operators $\theta^\pm\rightarrow \sigma_\pm=\frac{1}{2}(\sigma_1\pm i\sigma_2)$. A direct quantum analog of this system is a composition of two isospectral systems and is in a phase of spontaneously broken supersymmetry, with the nonsingular superpartner potentials $V_\pm=x^2+c^2\pm x/\sqrt{x^2+c^2}$. The spectrum of the subsystems is different from that of the quantum harmonic oscillator. On the other hand, if before quantization we perform the canonical transformation $x\rightarrow X=x+ N \partial G(x,p)/\partial p$, $p\rightarrow P=p - N\partial G(x,p)/\partial x$, $\theta^\pm\rightarrow \Theta^\pm=e^{\pm iG(x,p)}\theta^\pm$, where $N=\theta^+\theta^-$ and $G=\frac{1}{2}\arcsin \big((p^2-x^2-c^2)/(p^2+x^2+c^2)\big)$ \textcolor{red}{[\cite{KliPly,InzPly1}]}, we obtain the canonically equivalent form of the Hamiltonian, $H=P^2+X^2 +c^2$. In the canonically transformed system, the new classical Grassmann variables $\Theta^\pm$ completely decouple and are odd integrals of motion with the Poisson bracket $\{\Theta^+,\Theta^-\}_{{}_{PB}}=-i$. The quantization of the canonically transformed system gives us exactly the extended quantum system (\ref{Hextbr}), shifted just by the additive constant $c^2$. Another possibility is a ``naive'' application of the conformal bridge.
To do so, let us start by setting the super-Schr\"odinger symmetry generators for the super-free particle system, \begin{eqnarray} & \mathcal{H}= -\frac{1}{2}p^2\mathbb{I}\,,\qquad \mathcal{D}= \frac{1}{4}\{ x, p\} \mathbb{I}\,,\qquad \mathcal{K}= \frac{x^2}{2} \mathbb{I}\,, \qquad \mathcal{Z}=-\frac{\sigma_3}{4}\,,&\\& \mathcal{P}=p\mathbb{I}\,,\qquad \mathcal{X}=x\mathbb{I}\,,\qquad \Sigma_1=\sigma_1\,,\qquad \Sigma_2=-\sigma_2\,, \qquad \pi_{a}=p\Sigma_{a}\,,\qquad \xi_{a}=x\Sigma_{a}\,, & \end{eqnarray} where $p=i\frac{d}{dx}$. The conformal bridge transformation produces \begin{eqnarray} & \mathfrak{S}\mathcal{H} \mathfrak{S}^{-1}=-\frac{(a^-)^{2}}{2}\mathbb{I}\,,\qquad \mathfrak{S}\mathcal{D} \mathfrak{S}^{-1}=-\frac{i}{4}L_{\text{os}}\mathbb{I}\,,\qquad \mathfrak{S}\mathcal{K}\mathfrak{S}^{-1}=\frac{(a^+)^{2}}{2}\mathbb{I}\,, &\\& \mathfrak{S}\mathcal{Z} \mathfrak{S}^{-1}=\mathcal{Z}\,,\qquad \mathfrak{S}\mathcal{X} \mathfrak{S}^{-1}=\frac{a^+}{\sqrt{2}}\mathbb{I} \,,\qquad \mathfrak{S}\mathcal{P}\mathfrak{S}^{-1}=-i\frac{a^-}{\sqrt{2}}\mathbb{I} \,, &\\& \mathfrak{S}\Sigma_a\mathfrak{S}^{-1}=\Sigma_a \,,\qquad \mathfrak{S}\xi_a \mathfrak{S}^{-1}=\frac{a^+}{\sqrt{2}}\Sigma_a \,,\qquad \mathfrak{S}\pi_a\mathfrak{S}^{-1}=-i\frac{a^-}{\sqrt{2}}\Sigma_a \,, \end{eqnarray} which, up to complex proportionality constants, match the generators presented in Sec. \ref{Sec3Hiden} at $t = 0$. The conformal bridge works in this case because $\mathcal{D}$ and its transformed version are matrix generators containing two copies of the same differential operator. In the general case the superpartners are different from each other and the transformation fails. \chapter{Rationally extended conformal mechanics} \label{ChRQHO} As we have shown in Chap. \ref{ChSUSY}, Sec. \ref{Chap1Darbux}, the DCKA transformation allows us to construct new quantum systems starting from a well known original one.
In this context, the systems that appear due to these transformations applied to the harmonic oscillator are the rationally extended harmonic oscillators, that is, a harmonic potential plus a regular rational function of $x$; to obtain a well-defined system, we have to follow certain rules for selecting the set of seed states of the transformation. The selection rule that gives us a regular potential is known as the Krein-Adler theorem \textcolor{red}{[\cite{Krein,Adler,Dubov,Quesne2012,Gomez2}]}. In the research carried out in the article \textcolor{red}{[\cite{CarInzPly}]}, we found new selection rules to construct completely isospectral rational extensions of the AFF model with integer coupling constant $m(m+1)$, where $m=1,2,\ldots$, as well as deformations with gaps in their spectrum. We also learned how to construct the spectrum-generating ladder operators of these deformed systems by using what we call Darboux dualities. The content presented in this chapter is a summary of the results obtained in \textcolor{red}{[\cite{CarInzPly}]}, an article that in turn was inspired by previous research on rational deformations of the harmonic oscillator \textcolor{red}{[\cite{CarPly}]}. Before starting, let us explain what a Darboux duality is with a simple example: consider the half-harmonic oscillator Hamiltonian $L_0$\footnote{This Hamiltonian is formally given by (\ref{Lqhosc}), but defined on the domain $\{ \psi\in L^2((0,\infty),dy)\vert \psi(0^+)=0\}$. The physical states are the odd eigenstates of the harmonic oscillator system.}. When the first $m$ physical states are taken as seed states for the DCKA transformation, it is not difficult to show that the resulting system is the AFF model $L_m$, defined in (\ref{AFFless}), shifted by the constant $-2m$.
Now, by performing the transformation $x \rightarrow ix$ in the physical eigenstates, we produce new nonphysical solutions, and when the first $m$ functions obtained in this way are taken as seed states for the DCKA transformation, the resulting system is again $L_m$, but now shifted by the positive constant $2m$. So both Darboux transformation schemes generate essentially the same quantum system, and in this sense we call them dual Darboux schemes. The intertwining operators of the two dual schemes are independent of each other, and it can be shown that the operators constructed by means of products of these intertwiners are equivalent to powers of the $\mathfrak{sl}(2,\mathbb R)$ generators \textcolor{red}{[\cite{CarInzPly}]}. Here we study rationally extended systems built on the basis of the half-harmonic oscillator and, for simplicity, we introduce the following notation for the physical and nonphysical eigenstates of the quantum harmonic oscillator system (from now on, QHO)\,: \begin{equation} \label{notation} n\equiv \psi_n(x),\qquad -n\equiv \psi_{n}^-=\psi_n(ix)\,,\qquad \widetilde{n}\equiv \widetilde{\psi_n}\,, \qquad \widetilde{-n}\equiv \widetilde{\psi_{n}^-}. \end{equation} \section{Generation of rationally extended systems} \label{rationallyextededinteger} Rational deformations (extensions) of the QHO system are constructed following the Krein-Adler theorem \textcolor{red}{[\cite{Krein,Adler}]}, which ensures that the Wronskian of the seed states (henceforth, of the Darboux scheme) $(n_1,n_1+1,\ldots,n_\ell,n_\ell+1)$, where the numbers $n_j\in \mathbb N $, $j=1,\ldots,\ell$, indicate the chosen seed states in the notation (\ref{notation}), does not have zeros on the real axis.
The corresponding DCKA transformation produces \begin{equation} \label{DQHO} L_{(n_1,n_1+1,\ldots,n_\ell,n_\ell+1)}=L +4\ell + \frac{F(x)}{Q(x)}, \end{equation} where $F(x)$ and $Q(x)$ are even polynomials, with $Q(x)$ taking positive values on the real line and having degree two units higher than that of $F(x)$. According to Chap. \ref{ChSUSY}, the spectrum of the system (\ref{DQHO}) almost coincides with the QHO spectrum: there are missing energy levels (gaps) related to the energy levels corresponding to the seed states. \vskip0.1cm On the other hand, deformations of the AFF model $L_m$ can be obtained from the half-harmonic oscillator by considering the scheme $(n_1,n_1+1,\ldots,n_\ell,n_\ell+1,2k_1+1,\ldots, 2k_m+1)$, where even indexes inside the set $n_1,n_1+1,\ldots,n_\ell,n_\ell+1$ represent nonphysical eigenstates of $L_{0}$ and $k_i$, $i=1,\ldots,m$, are identified with the $m$ odd states which were not considered in the first set of $2n_\ell$ states. The Hamiltonian operator \begin{equation} \label{REIO} L_{(n_1,n_1+1,\ldots,n_ \ell,n_\ell+1,2k_1+1,\ldots, 2k_m+1)}=L_{m}+ 2m +4\ell + \frac{\widetilde{F(x)}}{\widetilde{Q(x)}}, \end{equation} appears as the final result of the DCKA transformation, where the polynomials $\widetilde{F(x)}$ and $\widetilde{Q(x)}$ have properties similar to those of $F(x)$ and $Q(x)$ in (\ref{DQHO}). Note that in this way we can only construct deformations of $L_m$. Rational deformations of $L_\nu$ with arbitrary values of the parameter $\nu$ cannot be connected with the harmonic oscillator as we did here; their construction is discussed in Chap. \ref{ChKlein}. In general such a system has gaps in its spectrum. If, however, the set $n_1,n_1+1,\ldots,n_ \ell,n_\ell+1,2k_1+1,\ldots, 2k_m+1$ contains all the $\ell+m$ odd indexes from $1$ to $2k_m+1$, the generated deformed AFF system has no gaps in its spectrum, and we obtain a system completely isospectral to $L_{0}+4\ell+2m$.
Such completely isospectral (gapless) deformations in the QHO case are only possible if we include Jordan states in the construction. \vskip0.1cm The mirror diagram method developed and used in \textcolor{red}{[\cite{CarInzPly}]} is a technique by which a dual scheme composed of nonphysical ``negative'' eigenstates (\ref{notation}) is derived from a ``positive'' scheme composed of physical states of $L_{\text{os}}$, and vice versa. This can be done by using the algorithmic procedure described in Appendix \ref{MirrorHarmonic}, and the final picture is the following: \begin{itemize} \item For a given positive scheme $\Delta_+\equiv (l_1^+,\ldots,l_{n_+}^+)$, where the numbers $l_i^+$, $i=1,\ldots, n_+$, label the chosen seed states, one gets the negative scheme $\Delta_-=(-\check{0},\ldots,-\check{n}_i^-= l_i^+-l_{n_+}^+, \ldots,-l_{n_+}^+)$, where $-\check{n}_i^-$ means that the corresponding number $-n_i^-$ is omitted from the set $\Delta_-$. \item If we have instead a negative scheme $\Delta_-\equiv (-l_{1}^-,\ldots,-l_{n_-}^-)$, where the numbers $-l_{j}^-$, $j=1,\ldots, n_-$, label the chosen seed states, one obtains the positive scheme $\Delta_+=(\check{0}, \ldots,\check{n}^+_{j}=l_{n_-}^- -l_{j}^-,\ldots,l_{n_-}^-)$, where the symbols $\check{n}_j^+$ again represent the states missing from the list of the chosen seed states. \end{itemize} Obviously, a Darboux scheme must be constructed in such a way that the generated Hamiltonian is a non-singular operator, that is, by means of the rules discussed above. Then, having two dual schemes at hand, the relation \begin{equation}\label{genDual} e^{-n_+x^2/2}W(\Delta_-)=e^{n_-x^2/2} W(\Delta_+)\,, \end{equation} is valid modulo a multiplicative constant. From here one can see that the Hamiltonians of the dual schemes satisfy \begin{equation} \label{L+L-} L_{(+)}-L_{(-)}=2N\,,\qquad N\equiv n_+ +n_-=l_{n_+}^++1=l_{n_-}^-+1\,, \end{equation} where $L_{(+)}$ and $ L_{(-)}$ correspond to \begin{equation} \label{L(+-)form} L_{(\pm)}= -\frac{d^2}{dx^2}+V(x)-2\frac{d^2}{dx^2} \ln W(\Delta_\pm)\,.
\end{equation} On the other hand, the intertwining operators $\mathbb A_{n_+}^-$ and $\mathbb A_{n_-}^-$ that correspond to each scheme are constructed following (\ref{generic-inter}); however, we prefer to use the more generic notation \begin{equation} \mathbb A_{n_+}^-=A_{(+)}^-\,,\qquad \mathbb A_{n_-}^-=A_{(-)}^-\,,\qquad A_{(+)}^+=(A_{(+)}^-)^\dagger\,,\qquad A_{(-)}^+=(A_{(-)}^-)^\dagger\,. \end{equation} By means of the negative scheme we do not eliminate any energy level from the spectrum; instead, energy levels can (but need not) be introduced in its lower part. In the special case of completely isospectral deformations of the (shifted) $L_m$ systems, all $m$ seed states composing the negative scheme are nonphysical odd eigenstates of $L_0$, and the transformation does not introduce any additional energy level. The construction of the mirror diagram can be better understood with the following example. Consider the illustration in Figure \ref{defAFFFigure1}. In the upper line we have represented the first twelve physical eigenstates of the harmonic oscillator by circles, where the black ones are the seed states of the positive scheme $(1,4,5,10,11)$, which produces a system of the type (\ref{REIO}). In a similar way, the first twelve nonphysical states with negative energy are indicated by the circles in the bottom line, and the marked ones are the seed states of the corresponding dual negative scheme. In general, when considering a scheme of the form $(\ldots,N)$, in the upper line we order from left to right all the physical states from $\psi_{0}$ to $\psi_N$, while in the bottom line we set from right to left all the states from $\psi_{0}^-$ to $\psi_N^-$. After marking the states of the positive scheme, the construction of the negative scheme proceeds by means of a sort of ``anti-reflection'' transformation with respect to an imaginary central line parallel to the other two.
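The mirror rule described above is purely combinatorial: one reflects the seed-state labels through the largest one and keeps the complementary set. A minimal sketch (Python; the helper name \texttt{mirror} is our own, and labels follow the notation (\ref{notation}) with signs dropped) works in both directions:

```python
def mirror(scheme):
    """Map a positive Darboux scheme to its dual negative one (and vice versa):
    reflect the labels through the largest one and take the complement."""
    top = max(scheme)
    omitted = {top - l for l in scheme}
    return tuple(sorted(set(range(top + 1)) - omitted))

# the positive scheme (1, 4, 5, 10, 11) of Figure defAFFFigure1
print(mirror((1, 4, 5, 10, 11)))                  # -> (2, 3, 4, 5, 8, 9, 11)
# the rule is an involution: applying it twice returns the original scheme
assert mirror(mirror((1, 4, 5, 10, 11))) == (1, 4, 5, 10, 11)
# sizes obey n_+ + n_- = N = l_{n_+}^+ + 1, cf. Eq. (L+L-)
s = (1, 4, 5, 10, 11)
assert len(s) + len(mirror(s)) == max(s) + 1
```

In particular, `mirror((1, 2, 3))` returns `(3,)`, reproducing the simplest dual pair used in the next section.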
The construction of the positive scheme from the negative one is analogous. \begin{figure}[H] \begin{center} \includegraphics[scale=0.16]{figure1Isolader.eps} \caption[Mirror diagram, Sec. 6.1 ]{\small{A mirror diagram example.}} \label{defAFFFigure1} \end{center} \end{figure} \vskip-0.5cm This construction seems to be related to the Maya diagram formalism; for a review see \cite{gomez2019}. However, our technique is completely based on the existence of the first-order ladder operators and their relationship with the Darboux transformation (this is the key to its generalization to the AFF model in Chap. \ref{ChKlein}), whereas for Maya diagrams it is important to study the properties of an additional structure called the pseudo-Wronskian, which we do not introduce in our work. \section{Spectrum-generating ladder operators: completely isospectral case} \label{SecIsocase} In this section we explore the possibilities of constructing spectrum-generating ladder operators for rationally extended isospectral systems. We start with the simplest example and then expand the ideas to the general case. Consider the simplest deformed AFF system generated via the Darboux transformation based on the nonphysical eigenstate $\psi_{3}^{-}=(2x^3+3x)e^{x^2/2}$ of the half-harmonic oscillator $L_{0}$. The resulting Hamiltonian takes the form \begin{eqnarray} \label{reio3} L_{(-)} :=L_1 -2 +8\frac{2 x^2-3}{(2 x^2+3)^2}\,. \end{eqnarray} By the method of the mirror diagram we find that, up to a constant shift, the system can alternatively be generated by the DCKA transformation based on the set $(1,2,3)$\footnote{The state $\psi_{2}$ is not a physical state of the half-harmonic oscillator $L_{0}$.}, \begin{equation} L_{(+)}:= L_{(-)}+8\,.\end{equation} The intertwiners of the negative scheme are \begin{equation}\label{A+-(-3)} A_{(-)}^-=\psi^{-}_3\frac{d}{dx}\frac{1}{\psi^{-}_3} =\frac{d}{dx}-x -\frac{1}{x}-\frac{4x}{2x^2+3}\,,\qquad A_{(-)}^+=(A_{(-)}^-)^\dagger\,.
\end{equation} They provide the factorization relations $A_{(-)}^{+}A_{(-)}^{-}=L_0+7\,,$ $A_{(-)}^{-}A_{(-)}^{+}=L_{(-)}+7=L_{(+)}-1$. In correspondence with them, $A_{(-)}^{-}$ intertwines the Hamiltonian operators $L_{0}$ and $L_{(-)}$, \begin{equation} A_{(-)}^{-}L_{0}=L_{(-)}A_{(-)}^{-}=(L_{(+)}-2\Delta E)A_{(-)}^{-}\,, \qquad \Delta E= 4\,, \end{equation} and the intertwining relation for $A_{(-)}^{+}$ is obtained by Hermitian conjugation. The systems $L_{0}$ and $L_{(+)}$ are also intertwined by the third order operators $A^{\pm}_{(+)}$, where the operator $A^{-}_{(+)}$ is uniquely specified by its kernel\,: $\ker A^{-}_{(+)}=\text{span}\,\{\psi_1,\psi_2,\psi_3\}$. We have the intertwining relation $A^{-}_{(+)}L_{0}=L_{(+)}A^{-}_{(+)}=(L_{(-)}+8)A^{-}_{(+)}$, and the conjugate relation for $A^{+}_{(+)}$. To construct ladder operators for this deformed system we can ``Darboux dress'' the ladder operators of the half-harmonic oscillator, which are nothing other than $(a^\pm)^2$. The first pair of operators produced in this way is \begin{equation} \label{reio7} \mathcal{A}^{\pm}=A_{(-)}^{-}(a^{\pm})^{2}A_{(-)}^{+}\,. \end{equation} These operators together with the Hamiltonian $L_{(-)}$ generate a nonlinear deformation of the conformal symmetry given by the commutation relations \begin{equation} [L_{(-)},\mathcal{A}^{\pm}]=\pm 4 \mathcal{A}^{\pm}, \qquad [\mathcal{A}^-,\mathcal{A}^+]=16\left(L_{(-)}+3\right)\left(L_{(-)}+7\right)\left(L_{(-)}+{1}/{2}\right). \end{equation} The roots of the fourth order polynomial in the relation \begin{equation} \mathcal{A}^{+}\mathcal{A}^{-}= (L_{(-)}+7)(L_{(-)}+3)(L_{(-)}-1)(L_{(-)}-3)\,, \end{equation} correspond to the eigenvalues of the eigenstates of $L_{(-)}$ which belong to the kernel of the lowering operator, \begin{equation} \ker\mathcal{A}^{-}=\text{span}\,\{ A_{(-)}^{-} \widetilde{\psi_{3}^{-}},\,A_{(-)}^{-}\psi_1^{-}\,, A_{(-)}^{-}\psi_0,\, A_{(-)}^{-}\psi_1 \}.
\end{equation} The last state $\,A_{(-)}^{-}\psi_1=A_{(+)}^-\psi_5$ describes here the ground state of $L_{(-)}$, of eigenvalue $E=3$, while the other states are nonphysical. On the other hand, the roots in the product \begin{equation} \mathcal{A}^{-}\mathcal{A}^{+}=(L_{(-)}+11)(L_{(-)}+7)(L_{(-)}+3)(L_{(-)}+1)\,, \end{equation} correspond to the eigenvalues of the eigenstates of $L_{(-)}$ which appear in the kernel of the raising ladder operator, \begin{equation} \ker\mathcal{A}^{+}=\text{span}\{ A_{(-)}^{-}\psi_5^{-},\, A_{(-)}^{-}\widetilde{\psi_{3}^{-}},\, A_{(-)}^{-}\psi_1^{-},\,A_{(-)}^{-}\psi_0^{-}\}\,. \end{equation} All the states in this kernel are nonphysical. In correspondence with the described properties, the ladder operators (\ref{reio7}) are spectrum-generating ladder operators for the system $L_{(-)}$\,: acting with them on any physical eigenstate of $L_{(-)}$, we can generate any other physical eigenstate. The kernels of the ladder operators contain here the same nonphysical eigenstates $A_{(-)}^{-}\widetilde{\psi_{3}^{-}}$ and $A_{(-)}^{-}\psi_1^{-}$. Below we shall see that in the case of \textbf{non-isospectral} rational deformations of the AFF system the kernels of the analogs of such lowering and raising ladder operators contain some common physical eigenstates; see for example Figure \ref{defAFF1Figure2} in the next section. In a similar way, one can construct ladder operators for $L_{(-)}$ via Darboux-dressing of $(a^\pm)^2$ by the third order intertwining operators, \begin{equation} \mathcal{B}^{\pm}=A^{-}_{(+)}(a^{\pm})^2 A^{+}_{(+)}\,,\qquad [L_{(-)},\mathcal{B}^{\pm}]=\pm 4\mathcal{B}^{\pm}\,. \end{equation} However, these differential operators of order $8$ are not independent and reduce to the fourth order ladder operators (\ref{reio7}) multiplied by second order polynomials in the Hamiltonian, \begin{equation} \mathcal{B}^{-}=\mathcal{A}^{-}(L_{(-)}+1)(L_{(-)}+5)\qquad \text{and} \qquad \mathcal{B}^{+}=(\mathcal{B}^{-})^\dagger\,.
\end{equation} As the first and third order operators $A_{(-)}^{\pm}$ and $A_{(+)}^{\pm}$ intertwine the half-harmonic oscillator with the system $L_{(-)}$ with a nonzero relative shift, we can construct yet another pair of ladder operators for the quantum system $L_{(-)}$, \begin{eqnarray}\label{C+-iso} & \mathcal{C}^{-}=A_{(+)}^{-}A_{(-)}^{+}\,, \qquad \mathcal{C}^{+}=A_{(-)}^{-}A_{(+)}^{+} \,, &\\& [L_{(-)},\mathcal{C}^{\pm}]=\pm 8\,\mathcal{C}^{\pm}\,,\qquad [\mathcal{C}^-,\mathcal{C}^+]=32\left(L^3_{(-)}+6L^2_{(-)}-L_{(-)}+30\right)\,. & \end{eqnarray} The kernel of the lowering ladder operator is \begin{equation} \ker\mathcal{C}^{-}=\text{span}\,\{(\psi_{(-)}^{-})^{-1},\,A_{(-)}^{-}\psi_1,\,A_{(-)}^{-}\psi_2,\,A_{(-)}^{-}\psi_3 \}\,. \end{equation} Here $A_{(-)}^{-}\psi_1=A_{(+)}^-\psi_5$ and $A_{(-)}^-\psi_3=A_{(+)}^-\psi_7$ are the ground state and the first excited state of $L_{(-)}$. On the other hand, all the states in the kernel of the raising ladder operator are nonphysical\,: \begin{equation} \ker \mathcal{C}^{+}=\text{span}\,\{A_{(-)}^{-}\psi_7^{-},\,A_{(-)}^{-}\psi_2^{-},\,A_{(-)}^{-} \psi_1^{-},\,A_{(-)}^{-}\psi_0^{-}\}\,. \end{equation} As a result, the space of states of $L_{(-)}$ separates into two subspaces, on each of which the ladder operators $\mathcal{C}^{+}$ and $\mathcal{C}^{-}$ act irreducibly. One subspace is spanned by the even eigenstates, and the other corresponds to the odd eigenstates. The ladder operators $\mathcal{C}^{\pm}$, unlike $\mathcal{A}^{\pm}$, are therefore not spectrum-generating operators for the system $L_{(-)}$. Notice that from the point of view of their basic properties, the ladder operators $\mathcal{C}^{\pm}$ are similar to the operators $(a^\pm)^4$ in the case of the half-harmonic oscillator $L_0$.
The essential difference here, however, is that the ladder operators $\mathcal{C}^{\pm}$ are independent of the spectrum-generating ladder operators $\mathcal{A}^{\pm}$ and have the same differential order equal to four. We shall see that for \textbf{non-isospectral} rational extensions of the AFF systems the direct analogs of the operators $\mathcal{C}^{\pm}$ constitute an inseparable part of the set of the spectrum-generating operators. \vskip0.1cm The described properties of this particular example extend to the general case of isospectral deformations and can be summarized as follows. No matter which set of $m$ odd nonphysical eigenstates of the quantum harmonic oscillator we select, the lower order ladder operators $\mathcal{A}^\pm$ obtained by Darboux-dressing of the ladder operators of the half-harmonic oscillator are spectrum-generating operators for the rationally deformed AFF system. Their commutator is a polynomial of order $2m+1$ in the corresponding Hamiltonian, with which they produce a deformation of the conformal $\mathfrak{sl}(2,\mathbb R)$ symmetry of the type of a $W$-algebra \cite{deBoer}. Other spectrum-generating ladder operators, which can be constructed on the basis of other DCKA schemes via the Darboux-dressing procedure, act on physical states in the same way as the operators $\mathcal{A}^\pm$ of order $2(m+1)$, and are equal to them modulo a multiplicative factor in the form of a polynomial in the Hamiltonian operator of the system. The ladder operators $\mathcal{C}^{\pm}$ constructed by ``gluing'' intertwining operators of the two dual schemes are not spectrum-generating. In particular, for the isospectral deformation of the system $L_{l_m+1}$ based on the set of the seed states $(-(2l_1+1),-(2l_2+1),\ldots, -(2l_m+1))$ with $0\leq l_1<l_2<\ldots<l_m$, $l_m\geq 1$, the operator ${\mathcal{C}}^-$ annihilates the lowest $l_m+1$ states in the spectrum of the system.
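The algebraic content of the worked example above can be verified symbolically. The following sympy sketch (our bookkeeping, assuming $L_0=-d^2/dx^2+x^2$ and $L_1=-d^2/dx^2+x^2+2/x^2$, as elsewhere in the thesis) checks that $\psi_3^-$ is a nonphysical eigenstate of $L_0$ with eigenvalue $-7$, that the superpotential in (\ref{A+-(-3)}) is $(\ln\psi_3^-)'$, that the factorization $A_{(-)}^+A_{(-)}^-=L_0+7$ holds, that the Darboux partner potential reproduces (\ref{reio3}), and that the difference of the two quartic products of $\mathcal{A}^\pm$ reproduces the cubic commutator $16(L_{(-)}+3)(L_{(-)}+7)(L_{(-)}+1/2)$.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
psi = (2*x**3 + 3*x)*sp.exp(x**2/2)        # nonphysical eigenstate psi_3^- of L_0

# L_0 psi = -7 psi, i.e. -psi'' + x^2 psi + 7 psi = 0
assert sp.simplify(-sp.diff(psi, x, 2) + x**2*psi + 7*psi) == 0

# superpotential W = (ln psi)' matches the first order intertwiner
W = sp.simplify(sp.diff(psi, x)/psi)
assert sp.simplify(W - (x + 1/x + 4*x/(2*x**2 + 3))) == 0

# factorization A^+ A^- = L_0 + 7  <=>  W' + W^2 = x^2 + 7
assert sp.simplify(sp.diff(W, x) + W**2 - x**2 - 7) == 0

# Darboux partner potential x^2 - 2 (ln psi)'' equals the potential of L_(-)
V = sp.simplify(x**2 - 2*sp.diff(sp.log(psi), x, 2))
V_reio3 = x**2 + 2/x**2 - 2 + 8*(2*x**2 - 3)/(2*x**2 + 3)**2
assert sp.simplify(V - V_reio3) == 0

# the quartic products A^- A^+ and A^+ A^- give the stated cubic commutator
Lam = sp.symbols('Lambda')
comm = sp.expand((Lam+11)*(Lam+7)*(Lam+3)*(Lam+1)
                 - (Lam+7)*(Lam+3)*(Lam-1)*(Lam-3))
assert sp.simplify(comm - 16*(Lam+3)*(Lam+7)*(Lam+sp.Rational(1, 2))) == 0
```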
\section{Spectrum-generating ladder operators: non-isospectral case} \label{sectionSG} As in the previous section, here we explore the construction of spectrum-generating ladder operators for non-isospectral deformations of the AFF system through a particular example, and then generalize the ideas. Let us start with the positive scheme $(1,4,5,10,11)$ that we have already used as an example to explain the mirror diagram technique in Sec. \ref{rationallyextededinteger}. There we already obtained the corresponding negative scheme, which is $(-2,-3,-4,-5,-8,-9,-11)$. After performing the DCKA transformation using the positive scheme, we obtain the Hamiltonian operator \begin{equation}\label{(1,4,5,10,11)} L_{(+)}:=-\frac{d^2}{dx^2}+x^2-2(\ln W(1,4,5,10,11))''\,, \end{equation} where \begin{eqnarray} \begin{array}{ll} W(1,4,5,10,11)\propto & x e^{-\frac{5}{2}x^2} (467775+4x^2(155925-93555x^2+8x^4(62370 -21945x^2+\\ & +4x^4(735+1145x^2-504x^4+358x^6-88x^8+8x^{10})))) \end{array}\,. \end{eqnarray} The graph of the resulting potential and the quantum spectrum of the system (\ref{(1,4,5,10,11)}) are shown in Figure \ref{defAFF1Figure2}. \begin{figure}[hbt] \begin{center} \includegraphics[scale=0.25]{figure2Isolader.eps} \caption[Behavior of the potential and physical states annihilated by the spectrum-generating ladder operators, Sec. 6.3 ]{\small{Potential of the system (\ref{(1,4,5,10,11)}). The energy levels of the corresponding physical states annihilated by the ladder operators $\mathcal{B}^-$, $\mathcal{B}^+$, $\mathcal{A}^-$, $\mathcal{A}^+$, and $\mathcal{C}^-$ are indicated from left to right.} } \label{defAFF1Figure2} \end{center} \end{figure} \vskip-0.6cm The potential has three local minima, and the system supports three separated states in its spectrum, organized in two ``valence bands'' of one and two states. On the other hand, the dual scheme produces the same Hamiltonian operator but shifted by a constant, $L_{(+)}-L_{(-)}=6\Delta E=24$.
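The overall structure of the Wronskian above can be confirmed symbolically: stripping the factor $e^{-5x^2/2}$ from $W(1,4,5,10,11)$ must leave an odd polynomial of degree $\sum_i k_i-\binom{5}{2}=21$, in agreement with the displayed form $x\,e^{-5x^2/2}\times(\text{even polynomial of degree }20)$. A sympy sketch using the unnormalized oscillator eigenfunctions $H_k(x)e^{-x^2/2}$:

```python
import sympy as sp

x = sp.symbols('x')
ks = [1, 4, 5, 10, 11]                       # seed indices of the positive scheme
fs = [sp.hermite(k, x)*sp.exp(-x**2/2) for k in ks]

# Every Wronskian entry carries a factor e^{-x^2/2}; stripping it from each of
# the 5 columns multiplies the determinant by e^{5x^2/2}, leaving a polynomial.
ent = lambda i, j: sp.simplify(sp.diff(fs[j], x, i)*sp.exp(x**2/2))
ratio = sp.expand(sp.Matrix(5, 5, ent).det())  # = W(1,4,5,10,11) * e^{5x^2/2}

assert sp.Poly(ratio, x).degree() == sum(ks) - 5*4//2   # degree 21
assert sp.expand(ratio.subs(x, -x) + ratio) == 0        # odd function of x
```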
The fact that the mutual shift of both Hamiltonians is proportional to the difference of two consecutive energy levels in the spectrum of the AFF model allows us to use below exactly the same rule for the construction of the ladder operators of the type $\mathcal{C}^{\pm}$ as in the previous section. As we shall see, the number of physical states annihilated by the lowering operator $\mathcal{C}^-$ in this case is exactly six. Later we shall also see that in some cases of the rational gapped deformations of the AFF systems, the mutual shift of the corresponding Hamiltonian operators can be equal to a half-integer multiple of $\Delta E$, and then the procedure for the construction of the ladder operators of the type $\mathcal{C}^{\pm}$ will require some modification. In the DCKA construction of the Hamiltonian operator $L_{(+)}$, the energy levels corresponding to the physical seed eigenstates of the half-harmonic oscillator $L_{0}$ were removed from the spectrum, producing two gaps. In the (up to a constant shift) equivalent system $L_{(-)}$, based on nonphysical seed eigenstates of $L_{0}$, energy levels were added below the ground-state energy of $L_{0}$. The intertwining operators $A_{(+)}^\pm$ associated with the positive scheme have differential order five, while the operators $A_{(-)}^\pm$, obtained from the negative scheme of seven seed states, have differential order seven. The three lowest physical states of the system (\ref{(1,4,5,10,11)}), which correspond to the three separated energy levels, can be presented in two equivalent forms \begin{eqnarray} \phi_0= A_{(-)}^{-}\widetilde{\psi_8^-}=A^-_{(+)}\psi_3\,,\qquad \phi_1= A_{(-)}^{-}\widetilde{\psi_4^-}=A^-_{(+)}\psi_7\,, \qquad \phi_2= A_{(-)}^{-}\widetilde{\psi_2^-}=A^-_{(+)}\psi_9\,, \end{eqnarray} where the equalities hold modulo a nonzero constant multiplier.
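The spectral bookkeeping behind these statements is elementary: the physical levels of $L_{(+)}$ are the odd $L_0$-indices $n$ (energies $E_n=2n+1$) not used as seed states, which for the scheme $(1,4,5,10,11)$ leaves $n=3$, then $n=7,9$, and the equidistant tail $n=13,15,\ldots$ A short sketch (function names ours):

```python
def surviving_levels(scheme, nmax=30):
    """Odd L_0-indices (physical half-harmonic oscillator states) that
    survive the DCKA transformation based on the given positive scheme."""
    return [n for n in range(1, nmax, 2) if n not in set(scheme)]

def bands(levels):
    """Group surviving odd indices into bands of consecutive levels (step 2)."""
    grouped = [[levels[0]]]
    for n in levels[1:]:
        if n - grouped[-1][-1] == 2:
            grouped[-1].append(n)
        else:
            grouped.append([n])
    return grouped

b = bands(surviving_levels([1, 4, 5, 10, 11]))
# two separated "valence bands" of one and two states, then the equidistant part
print(b[0], b[1], b[2][0])   # [3] [7, 9] 13
```

The indices $3$, $7$, $9$ reproduce the $L_0$-labels of $\phi_0$, $\phi_1$, $\phi_2$ in the formulas above, and $13$ that of the first state of the equidistant part.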
We have here the intertwining relations \begin{equation} A^-_{(+)}L_0=L_{(+)}A^-_{(+)}=(L_{(-)}+24)A^-_{(+)}\,, \qquad A_{(-)}^{-}L_0=L_{(-)}A_{(-)}^{-}=(L_{(+)}-24)A_{(-)}^{-}, \end{equation} and the conjugate relations for $A^+_{(+)}$ and $A_{(-)}^{+}$. Let us turn now to the construction of the ladder operators for the system under consideration. As in the isospectral case, here we have two ways to realize the Darboux-dressing of the ladder operators $-\mathcal{C}^\pm_0=(a^{\pm})^{2}$. Using $A^\pm_{(+)}$ for this purpose, we obtain the operators of order twelve: \begin{equation}\label{BcalAladder} \mathcal{B}^{\pm}=A^-_{(+)}(a^{\pm})^{2}A^+_{(+)}\,,\qquad [L_{(-)},\mathcal{B}^{\pm}]=\pm\Delta E\mathcal{B}^{\pm}\,. \end{equation} The kernel of $\mathcal{B}^{-}$ contains three physical states, $\phi_0$, $\phi_1$ and $\phi_3= A_{(-)}^{-}\psi_1=A_{(+)}^{-}\psi_{13}$, among nine other nonphysical solutions with negative energy. They correspond to the ground state, the lowest state in the isolated ``valence band'', and the first state in the equidistant part of the spectrum, see Figure \ref{defAFF1Figure2}. On the other hand, $\mathcal{B}^+$ annihilates $\phi_0$, the upper state $\phi_2$ in the valence band, and ten other nonphysical states. Then, due to the incapacity of these operators to connect the isolated states with the equidistant part of the spectrum, it is obvious that $\mathcal{B}^\pm$ are not spectrum-generating. We can also construct ladder operators by using $A_{(-)}^{\pm}$ instead, \begin{equation}\label{AaADarboux} \mathcal{A}^{\pm}=A_{(-)}^{-}(a^{\pm})^{2}A_{(-)}^{+}\,, \qquad [L_{(+)},\mathcal{A}^{\pm}]=\pm\Delta E \mathcal{A}^{\pm}\,. \end{equation} These are also not spectrum-generating operators, because the jumps they produce do not allow one to overcome the gaps. The operator $\mathcal{A}^{+}$ detects all the states in both separated valence bands by annihilating them.
In addition to the indicated physical states, the lowering operator $\mathcal{A}^{-}$ also annihilates the lowest state in the half-infinite equidistant part of the spectrum. Therefore, the essential difference of the non-isospectral rational deformations of the AFF model from their isospectral rational extensions is that there is no pair of spectrum-generating ladder operators constructed via the Darboux-dressing procedure. This situation is similar to that in the rationally extended QHO systems \cite{CarPly}. We now construct the ladder operators $\mathcal{C}^\pm$ by ``gluing'' the intertwining operators of different types. As in the case of the isospectral deformations, they also will not be spectrum-generating operators, but together with any pair of the ladder operators ${\mathcal{B}^{\pm}}$ or $\mathcal{A}^\pm$ they will form a spectrum-generating set. So, in accordance with (\ref{C+-iso}), let us consider \begin{equation}\label{C+-AAdefin} \mathcal{C}^{-}=A_{(+)}^{-}A_{(-)}^+ \,, \qquad \mathcal{C}^{+}=A_{(-)}^-A_{(+)}^{+}\,, \qquad [L_{(-)},\mathcal{C}^{\pm}]=\pm 6\Delta E\mathcal{C}^{\pm}\,. \end{equation} They are independent of the ladder operators constructed via the Darboux-dressing procedure, and their commutator $[\mathcal{C}^-, \mathcal{C}^+]$ is a certain polynomial of order $11$ in the Hamiltonian $L_{(-)}$. The operators $\mathcal{C}^{\pm}$ divide the Hilbert space of the system into six infinite subsets on which they act irreducibly: $\mathcal{C}^{-}$ transforms a physical eigenstate into the physical eigenstate six levels below, and annihilates six eigenstates of the spectrum. The operator $\mathcal{C}^{+}$ does not annihilate any physical state here, and raises an arbitrary state six levels up. Therefore they connect the separated states with the equidistant part of the spectrum.
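The counting of states annihilated by $\mathcal{C}^-$ can be reproduced by a naive selection rule (an assumption on our part, but one that matches the set of states indicated in the caption of Figure \ref{Figure3}): $\mathcal{C}^-$ shifts the $L_0$-index by $-2N=-12$, and a physical eigenstate is annihilated precisely when the target level is absent from the spectrum. In a sketch:

```python
# Levels of L_(+): odd L_0-indices n with the physical seeds 1, 5, 11 removed.
levels = [n for n in range(1, 40, 2) if n not in {1, 5, 11}]
spectrum = set(levels)

# C^- shifts the L_0-index n by -2N = -12; a state is assumed annihilated
# exactly when the target level is missing from the spectrum.
annihilated = [i for i, n in enumerate(levels) if n - 12 not in spectrum]
print(annihilated)   # -> [0, 1, 2, 3, 5, 8], six states
```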
As a result, the pair $\mathcal{C}^{\pm}$ together with any pair of the ladder operators, $\mathcal{B}^\pm$ or $\mathcal{A}^\pm$, constitutes a spectrum-generating set. Figure \ref{Figure3} illustrates the action of the ladder operators and shows how we can use them to obtain a particular state starting from an arbitrary one. \begin{figure}[h] \begin{center} \includegraphics[scale=0.40]{Figura4Isolader.eps} \end{center} \caption[Spectrum-generating ladder operators behavior, Sec. 6.3]{{\label{Figure3}} \small{On the left: the numbers on the left correspond to the indices of the physical eigenstates $\psi_{2l+1}$ of the half-harmonic oscillator that are mapped ``horizontally'' by the operator $\mathbb A^-_{(+)}$ into eigenstates $\Psi_n$ of the system (\ref{(1,4,5,10,11)}). Lines show the action of the ladder operators, coherently with their structure (\ref{AaADarboux}), (\ref{BcalAladder}) and (\ref{C+-AAdefin}). The marked set of states $0,\, 1,\, 2,\, 3,\, 5,\,8$ on the right corresponds to the six eigenstates of $L_{(+)}$ annihilated by $\mathcal{C}^-$. On the right: horizontal lines correspond to the energy levels of $L_{(+)}$. Upward and downward arrows represent the action of the raising and lowering ladder operators, respectively. As shown in the figure on the right, following appropriate paths, any eigenstate can be transformed into any other eigenstate by subsequently applying the corresponding ladder operators.} } \end{figure} The described picture generalizes directly to the case when the index of the last seed state used in the corresponding DCKA transformation is odd. Then the corresponding scheme based on eigenstates of $L_0$ is of the form $(\ldots,2l_m,2l_m+1)$, and the dual scheme is $(\ldots,-(2l_m+1))$.
Following the same notation as in the particular examples, the Hamiltonian operators generated in these two dual schemes are mutually shifted by the separation $\Delta E=4$ of the energy levels in the equidistant part of the spectrum times the integer $l_m+1$\,: $L_{(+)}-L_{(-)}=4l_m+4$, see (\ref{L+L-}). The picture is then the following: \begin{itemize} \item Operators $\mathcal{A}^\pm=A^-_{(-)}(a^\pm)^2 A^+_{(-)}$ are of differential order $2n_-+2$. Raising and lowering operators of this kind annihilate all the states in the isolated valence bands, in the sense of groups of energy levels separated by a gap from the equidistant part of the spectrum. They act as regular ladder operators in the equidistant part of the spectrum. \item Operators $\mathcal{B}^\pm= A^-_{(+)}(a^\pm)^2 A^+_{(+)}$ are of differential order $2n_++2$. $\mathcal{B}^-$ annihilates the lowest state in each valence band and the lowest state in the equidistant part of the spectrum. The raising operator $\mathcal{B}^+$ annihilates the highest state in each valence band. In the equidistant part of the spectrum they act in the same way as $\mathcal{A}^\pm$. \item Operators $\mathcal{C}^\pm$ of the form (\ref{C+-AAdefin}) have differential order $n_-+n_+=2l_m+2$, and their commutation with the Hamiltonian produces \begin{equation}\label{LCpm(lm+1)} [L_{(-)}, \mathcal{C}^\pm]=\pm (l_m+1)\Delta E\,\mathcal{C}^\pm\,. \end{equation} The lowering operator $\mathcal{C}^-$ annihilates $l_m+1$ physical states, among which we find all of the isolated states and some excited states of the equidistant part. The raising operator $\mathcal{C}^+$ does not annihilate any physical state.
\end{itemize} \vskip0.1cm When we have the schemes $(\ldots,2l_m-1,2l_m)\sim (\dots,-2l_m)$ generating a gapped rational extension of some AFF system, the corresponding Hamiltonian operators are mutually shifted by the distance $L_{(+)}-L_{(-)}=4l_m+2=(l_m+\frac{1}{2})\Delta E$, which is a half-integer multiple of the energy spacing in the equidistant part of the spectrum and in the valence bands with more than one state. In this case the procedure for the construction of the ladder operators $\mathcal{A}^\pm$ and $\mathcal{B}^\pm$, as well as their properties, are similar to those in the systems generated by the schemes $(\ldots,2l_m,2l_m+1)\sim (\dots,-(2l_m+1))$. However, the situation with the construction of the ladder operators of the type $\mathcal{C}^\pm$ is essentially different in this case. We can still construct the operators $\mathcal{C}^\pm$ of the form (\ref{C+-AAdefin}). Such operators will be of odd differential order $2l_m+1$, and their commutation relations with any of the Hamiltonian operators $L_{(+)}$ and $L_{(-)}$ will be of the form $[L, \mathcal{C}^\pm]=\pm (4l_m+2)\mathcal{C}^\pm$. This means that these operators, acting on physical eigenstates of $L$, produce nonphysical eigenstates, except when the lowering operator $\mathcal{C}^-$ acts on the states from its kernel. The squares of these operators do not have the indicated deficiency and, together with the ladder operators $\mathcal{A}^\pm$ or $\mathcal{B}^\pm$, form a set of spectrum-generating operators. This picture can be compared with the case of the half-harmonic oscillator $L_0$, where the first order differential operators $a^\pm$ have properties similar to those of the described operators $\mathcal{C}^\pm$.
In this case, however, we can slightly modify the construction of the ladder operators of the $\mathcal{C}^\pm$ type by taking \begin{equation}\label{Cpmnew} \widetilde{\mathcal{C}}^-=A_{(+)}^-(a^-)A_{(-)}^+\,,\qquad \widetilde{\mathcal{C}}^+=A_{(-)}^-(a^+)A_{(+)}^+\,. \end{equation} These ladder operators satisfy the commutation relations $[L_{(\pm)},\widetilde{\mathcal{C}}^\pm]=\pm 4(l_m+1)\widetilde{\mathcal{C}}^\pm$, and transform physical states into other physical states with different energy. To conclude this section, let us summarize the structure of the nonlinearly deformed conformal symmetry algebras generated by the different pairs of the corresponding ladder operators and the Hamiltonians of the rationally deformed conformal mechanics systems. The commutators of the ladder operators $\mathcal{A}^\pm$, $\mathcal{B}^\pm$ and $\mathcal{C}^\pm$ with the Hamiltonian operators are given, respectively, by Eqs. (\ref{AaADarboux}), (\ref{BcalAladder}) and (\ref{C+-AAdefin}) with $\Delta E=4$. The commutation relations of the form (\ref{AaADarboux}) are also valid for the case of the isospectral deformations discussed in the previous section. To write down the commutation relations between raising and lowering operators of the same type in the general case, let us introduce the polynomial functions \begin{equation} P_{n_+}(x)=\prod_{k=1}^{n_+}(x-2n_k-1)\,, \qquad R_{n_-}(x)=\prod_{l=1}^{n_-}(x+2n_l+1)\,, \end{equation} where $n_k>0$ are the indices of the corresponding seed states in the positive scheme and $-n_l<0$ are the indices of the seed states in the negative scheme. With this notation, we have the relations $A_{(+)}^+A_{(+)}^-=P_{n_+}(L_0)$, $A_{(+)}^-A_{(+)}^+=P_{n_+}(L_{(+)})=P_{n_+}(L_{(-)}+2(n_-+n_+))$, and $A_{(-)}^+A_{(-)}^-=R_{n_-}(L_0)$, $A_{(-)}^-A_{(-)}^+=R_{n_-}(L_{(-)})$.
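As an illustration of these relations, consider the isospectral example of the previous section: the positive scheme $(1,2,3)$ gives $P_{n_+}(x)=(x-3)(x-5)(x-7)$, the negative scheme $(-3)$ gives $R_{n_-}(x)=x+7$, and $2N=2(n_++n_-)=8$. The difference of the product $R_{n_-}P_{n_+}$ evaluated at $L_{(-)}+2N$ and at $L_{(-)}$ then reproduces the commutator $[\mathcal{C}^-,\mathcal{C}^+]=32(L_{(-)}^3+6L_{(-)}^2-L_{(-)}+30)$ quoted in (\ref{C+-iso}). A sympy check of this bookkeeping (ours):

```python
import sympy as sp

x = sp.symbols('x')
P = (x - 3)*(x - 5)*(x - 7)   # positive scheme (1, 2, 3): eigenvalues 3, 5, 7
R = x + 7                     # negative scheme (-3): eigenvalue -7
H = sp.expand(R*P)
N2 = 8                        # 2N = 2(n_+ + n_-) for this example

comm = sp.expand(H.subs(x, x + N2) - H)
assert sp.simplify(comm - 32*(x**3 + 6*x**2 - x + 30)) == 0
```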
Then we obtain \begin{eqnarray} & [\mathcal{A}^-,\mathcal{A}^+]=(x+1)(x+3) R_{n_-}(x)R_{n_-}(x+4) \big\vert_{x=L_{(-)}-4}^{x=L_{(-)}}\,, \label{AARH}&\\& [\mathcal{B}^-,\mathcal{B}^+]=(x+1)(x+3)P_{n_+}(x+4)P_{n_+}(x) \big\vert_{x=L_{(-)}+2N-4}^{x=L_{(-)}+2N}\,, \label{BBPH}&\\& [\mathcal{C}^-,\mathcal{C}^+]=R_{n_-}(x) P_{n_+}(x)\big\vert_{x=L_{(-)}}^{x=L_{(-)}+2N}\,, \label{CCPRH}& \end{eqnarray} where $N=n_-+n_+$, the evaluation bar denotes the difference $F(x)\big\vert_{x=a}^{x=b}=F(b)-F(a)$, and relation (\ref{AARH}) is also valid in the case of isospectral deformations. In the case of the non-isospectral deformations given by the dual schemes $(\ldots,2l_m-1,2l_m)\sim (\dots,-2l_m)$, the corresponding modified operators (\ref{Cpmnew}) satisfy the commutation relation \begin{equation}\label{tilCCPRH} [\widetilde{\mathcal{C}}^-, \widetilde{\mathcal{C}}^+]=(x+1)R_{n_-}(x+2) P_{n_+}(x)\big\vert_{x=L_{(-)}-2}^{x=L_{(-)}+2N}\,. \end{equation} Thus, in any rational deformation of the conformal mechanics model we considered, each pair of conjugate ladder operators of the types $\mathcal{A}^\pm$, $\mathcal{B}^\pm$ or $\mathcal{C}^\pm$ generates a nonlinear deformation of the conformal $\mathfrak{sl}(2,\mathbb R)$ symmetry. The commutation relations between ladder operators of different types, of the form $[\mathcal{A}^\pm, \mathcal{C}^\pm]$, etc., are considered in the next chapter, and taking them into account naturally gives rise to different nonlinearly extended versions of the superconformal $\mathfrak{osp}(2|2)$ symmetry \cite{InzPly2}. \section{Remarks} The construction of the spectrum-generating ladder operators can also be explored by using intertwining operators between the final rationally extended model and some intermediate system in the Darboux chain. This possibility was explored in \cite{CarInzPly}. In any case, the final conclusion is that one always has a triad of pairs of ladder operators $\mathcal{A}^\pm$, $\mathcal{B}^\pm$ and $\mathcal{C}^\pm$ which behave as described above.
The only difference here is the number of nonphysical states that appear in the corresponding kernels. An unresolved question for us is whether there is any relationship between rationally extended systems and other systems of quantum mechanics, such as the conformal model (\ref{conformalaction}) or a $\mathcal{PT}$ deformation of it \cite{JM1,JM2,plyushchay2020exotic}; we have in mind something like the conformal bridge. It can be speculated that if such a relationship exists, it would be useful in applications related to integrable systems with an infinite number of degrees of freedom, since $\mathcal{PT}$-symmetric systems have opened new branches in the search for solitonic solutions of the KdV equation and other integrable models \cite{Correa2016,JM2,Cen}. In the next chapter we continue with the rationally extended AFF models characterized by integer coupling constants, as well as with extended QHO systems, but now from the perspective of supersymmetric quantum mechanics. \chapter{Nonlinear supersymmetries in rationally extended systems} \label{ChNonLinearSUSY} We now turn to the study of the extensions and deformations of the superconformal and super-Schr\"odinger symmetries that appear in the $\mathcal {N} = 2$ super-extended systems described by the superpartners ($L_\text{os}$, $L_{def}$) and ($L_{0}$, $L_{m,def}$). Here $L_{def}$ and $L_{m,def}$ correspond to rational deformations of the QHO system and of the AFF model with integer values of the parameter $\nu=m$, $m\in \mathbb N$, respectively. As we have seen in the last chapter, the rational deformations of the QHO system and of the AFF model are characterized, in the general case, by a finite number of missing energy levels, or gaps, in their spectra, and the description of such systems requires more than a couple of spectrum-generating operators.
It is because of this expansion of the sets of ladder operators, whose differential order exceeds two, that nonlinearly deformed superconformal and super-Schr\"odinger structures appear. This chapter, based on the article \cite{InzPly2}, is devoted to the description of the complete sets of generators of the indicated symmetries. At this point, we will again take advantage of the Darboux duality property of the QHO system. \section{Basic intertwining operators} According to \cite{CarPly,CarInzPly}, it is necessary first to associate with each of the dual schemes two basic pairs of intertwining operators. Here we discuss the general properties of such operators without taking care of the concrete nature of the system built by the DCKA transformation. On the way, however, some important distinctions between rational deformations of the AFF model and of the harmonic oscillator have to be taken into account, and for this reason it is convenient to speak of two classes of systems. We distinguish between them by introducing the class index $c$, where $c = 1$ and $c=2$ correspond to the deformed harmonic oscillator and the deformed AFF conformal mechanics model, respectively. As already established in the previous chapter, we denote the Hamiltonian produced by the positive scheme $\Delta_+$ (negative scheme $\Delta_-$) by $L_{(+)}$ ($L_{(-)}$), and the corresponding intertwining operators by $A_{(+)}^-$ and $(A_{(+)}^-)^\dagger \equiv A_{(+)}^+$ ($A_{(-)}^-$ and $(A_{(-)}^-)^\dagger \equiv A_{(-)}^+$), see Sec.~\ref{rationallyextededinteger}. These operators satisfy the relations \begin{eqnarray} \label{L+L-N} &L_{(+)}-L_{(-)}=2N\,,\qquad N=n_++n_-\,, &\\& \label{inter0} A_{(+)}^-L=L_{(+)}A_{(+)}^-\,, \qquad A_{(-)}^-L=L_{(-)}A_{(-)}^-\,,& \end{eqnarray} and the corresponding Hermitian conjugate relations for $A_{(+)}^+$ and $A_{(-)}^+$.
Here $L$ could be $L_\text{os}$ or $L_{0}$, depending on the class index $c$ of the rationally deformed system $L_{(\pm)}$ that we want to study. Applying the operator identities (\ref{inter0}) to an arbitrary physical or nonphysical (formal) eigenstate $\varphi_n$ of $L$ different from any seed state of the positive scheme, and using Eq. (\ref{L+L-}), one can derive the equality \begin{equation} \label{relation-operators} A_{(-)}^-\varphi_n=A_{(+)}^-\varphi_{n+N}\,, \end{equation} which is valid modulo a multiplicative constant. As a result, both operators acting on the same state of the harmonic oscillator produce different states of the new system. We have seen this behavior before in the last chapter, Sec. \ref{sectionSG}. The Hermitian conjugate operators $A_{(-)}^+$ and $A_{(+)}^+$ do a similar job, but in the opposite direction. Eq. (\ref{relation-operators}) suggests that some peculiarities should be taken into account for class 2 systems\,: the infinite potential barrier at $x=0$ implies that the physical states of the $L_{0}$ and $L_{(\pm)}$ systems are described by odd wave functions. Then, in order for $A_{(+)}^-$ to transform physical states of $L_{0}$ into physical states of $L_{(\pm)}$, we must take $n + N$ to be odd for odd $n$ in (\ref{relation-operators}). This means that $A_{(-)}^-$ transforms physical states into physical states only if $N$ is even. In the case of odd $N$, it is necessary to take $A_{(-)}^-a^-$ or $A_{(-)}^-a^+$ as a physical intertwining operator. It is convenient to take this peculiarity into account by denoting the remainder of the division $N/c$ by $r(N,c)$\,: it takes the value $1$ in the class $c=2$ of the systems with odd $N$, and equals zero in all other cases.
The products of the described intertwining operators are of the form (\ref{poly1}), and for further analysis it is useful to write them down explicitly: \begin{eqnarray} \label{A-A-A+A+Poly} &A_{(\pm)}^{+}A_{(\pm)}^{-}=P_{n_\pm}(L)\,,\qquad A_{(\pm)}^{-}A_{(\pm)}^{+}=P_{n_\pm}(L_{(\pm)})\,,&\\ \label{polyA} &P_{n_+}(\eta)\equiv \prod_{k=1}^{n_+}(\eta-2l_k^+-1)\,, \qquad P_{n_-}(\eta)\equiv \prod_{k=1}^{n_-}(\eta+2l_k^-+1)\,.& \end{eqnarray} Here $l_k^+$ are the indices of the physical states with eigenvalues $2l_k^++1$ in the set $\Delta_+$, and $-l^-_k$ correspond to the nonphysical states with eigenvalues $-2l^-_k-1$ in the negative scheme $\Delta_-$. In the same vein, it is useful to write \begin{eqnarray} \label{ak} &(a^+)^k(a^-)^k=T_k(L_0),\qquad (a^-)^k(a^+)^k=T_k(L_0+2k)\,,&\\ \label{Tk} &T_{k}(\eta)\equiv \prod_{s=1}^{k}(\eta-2s+1)\,, \qquad T_{k}(\eta+2k)\equiv \prod_{s=1}^{k}(\eta+2s-1)\,.& \end{eqnarray} We also have the operator identities \begin{eqnarray} \label{ide} (a^-)^{N}=(-1)^{n_-}A_{(-)}^+A_{(+)}^-\,, \qquad f(L_{(-)})A_{(+)}^-(a^+)^{n_-}=(-1)^{n_-}h(L_{(-)})A_{(-)}^-(a^-)^{n_+}\,, \end{eqnarray} and their Hermitian conjugate versions, where $f(\eta)$ and $h(\eta)$ are polynomials whose explicit structure is given in Appendix \ref{show}. In one-gap deformations of the harmonic oscillator and in gapless deformations of $L_1$ these polynomials reduce to $1$.
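The identities (\ref{ak}) can be checked directly as differential-operator identities. The following sympy sketch (assuming the standard conventions $a^\pm=\mp d/dx+x$ and $L_\text{os}=-d^2/dx^2+x^2$) verifies the case $k=2$ by applying both sides to a generic function:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

am = lambda g: sp.diff(g, x) + x*g         # a^- = d/dx + x
ap = lambda g: -sp.diff(g, x) + x*g        # a^+ = -d/dx + x
L  = lambda g: -sp.diff(g, x, 2) + x**2*g  # L_os = -d^2/dx^2 + x^2

# (a^+)^2 (a^-)^2 = T_2(L) = (L - 1)(L - 3)
lhs = ap(ap(am(am(f))))
rhs = L(L(f)) - 4*L(f) + 3*f
assert sp.simplify(sp.expand(lhs - rhs)) == 0

# (a^-)^2 (a^+)^2 = T_2(L + 4) = (L + 1)(L + 3)
lhs2 = am(am(ap(ap(f))))
rhs2 = L(L(f)) + 4*L(f) + 3*f
assert sp.simplify(sp.expand(lhs2 - rhs2)) == 0
```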
\section{Extended sets of ladder and intertwining operators} \label{interladder} Actually, instead of three types of ladder operators, we have a total of three families of operators \begin{eqnarray} \label{genlad} &\mathfrak{A}_{k}^\pm\equiv A_{(-)}^-(a^\pm)^k A_{(-)}^+\,,\qquad \mathfrak{B}_{k}^\pm\equiv A_{(+)}^-(a^\pm)^k A_{(+)}^+\,, &\\ &\mathfrak{C}_{N\pm k'}^-\equiv A_{(+)}^-(a^\mp)^{k'}A_{(-)}^+ \,,\qquad \mathfrak{C}_{N\pm k'}^+\equiv (\mathfrak{C}_{N\pm k'}^-)^\dagger\,,&\label{genlad+} \end{eqnarray} where, formally, $k$ can take any nonnegative integer value and $k'$ is such that $N-k'\geq 0$; otherwise the operators (\ref{genlad+}) reduce to $\mathfrak{A}_k^\pm$ \cite{InzPly2}. Due to the relations (\ref{A-A-A+A+Poly})-(\ref{ide}), one concludes that at $k=0$ and $N-k'=0$ all these operators reduce to certain polynomials in $L_{(\pm)}$. These objects are generated by taking commutators between two arbitrary representatives of the spectrum-generating set described in the previous chapter, and they behave like powers of the ladder operators of the QHO system. Calculations with these operators are discussed in detail in Appendix~\ref{apen-red}, so this chapter contains only the main results. Independently of the class of the system, or of whether the operators are physical or not, the three families $\mathfrak{D}_{\rho,j}^\pm=( \mathfrak{A}_{j}^\pm, \mathfrak{B}_{j}^\pm,\mathfrak{C}_{j}^\pm )$, $\rho=1,2,3$, $j=1,2,\ldots$, satisfy commutation relations of the form \begin{equation} \label{sl2rh} [L_{(\pm)},\mathfrak{D}_{\rho,j}^\pm]=\pm2j \mathfrak{D}_{\rho,j}^\pm \,, \qquad [\mathfrak{D}_{\rho,j}^-,\mathfrak{D}_{\rho,j}^+]=\mathcal{P}_{\rho,j}(L_{(-)})\,, \end{equation} where $\mathcal{P}_{\rho,j}(L_{(-)})$ is a certain polynomial in the corresponding Hamiltonian operator of the system, whose order is equal to the differential order of $\mathfrak{D}_{\rho,j}^\pm$ minus one, see Appendix \ref{apen-red}.
Algebra (\ref{sl2rh}) can be considered as a deformation of $\mathfrak{sl}(2,\mathbb R)$\,, \textcolor{red}{[\cite{JM2}]}. \vskip0.1cm Of all the operators that can be built, our objective is to single out those that are physical and cannot be written as products of lower-order elements belonging to other families or to the same one. Having this in mind, we have the following assertions related to the three families: \begin{itemize} \item From (\ref{sl2rh}) one concludes that $2j\propto \Delta E=2c$. Then, for the $\mathfrak{A}$ and $\mathfrak{B}$ families, the physical operators are those whose index is $j=lc$ with $l \in \mathbb N$, while for the $\mathfrak{C}$ family the index should be $j=N+r(N,c)+cs$, where $s$ is an integer such that $j>0$. \item For isospectral deformations of the AFF model, the spectrum-generating set is given by any pair of the conjugate operators $\mathfrak{A}^{\pm}_{2}$, $\mathfrak{B}^{\pm}_{2}$, or $\mathfrak{C}^{\pm}_{2}$. \item Due to Eq. (\ref{ide}) one realizes that the basic operators in the general case are \begin{eqnarray} \label{ladgen} \left\{ \begin{array}{cc} \mathfrak{A}_{k}^\pm\,, & 0<k<N\,,\\ \mathfrak{B}_{k}^\pm\,, & 0<k<N\,,\\ \mathfrak{C}_{k}^\pm\,, & 0<k<2N+r(N,c)\,.\\ \end{array} \right. \end{eqnarray} \item For one-gap deformations of the harmonic oscillator, the set of basic ladder operators can be reduced further to the set \begin{eqnarray} \label{basicsubsetonegap} \left\{ \begin{array}{cc} \mathfrak{A}_{k}^\pm\,, & 0<k<n_+\,,\\ \mathfrak{B}_{k}^\pm\,, & 0<k<n_-\,,\\ \mathfrak{C}_{k}^\pm\,, & M<k<n_+\,,\\ \end{array} \right. \qquad M=\left\{\begin{array}{ccc} \max\,(n_-,n_+) & \text{if} & n_-\neq n_+\,,\\ N/2 & \text{if} & n_-=n_+\,, \end{array} \right. \end{eqnarray} where the relations $\mathfrak{A}_{n_+}^{\pm}=(-1)^{n_-}\mathfrak{C}_{n_+}^{\pm}$ and $\mathfrak{B}_{n_-}^{\pm}=(-1)^{n_-}\mathfrak{C}_{n_-}^{\pm}$ were taken into account.
\end{itemize} As is obvious from their explicit form, any of the basic elements belonging to one of the three families of ladder operators can be constructed by ``gluing'' two different intertwining operators associated with an alternative DCKA transformation, which are of the form $A_{(\pm)}a^{\pm}$ and $A_{(\pm)}a^{\mp}$, so their number should also be reduced. Indeed, for general deformations only the operators \begin{eqnarray} \label{genA} \left\{ \begin{array}{cc} A_{(\pm)}^-(a^\pm)^{n}\,, & 0\leq n<N\,,\\ A_{(\pm)}^-(a^\mp)^{n}\,,& 0<n<N+r(N,c)\,, \end{array} \right. \end{eqnarray} and their Hermitian conjugate counterparts can be considered as basic, see Appendix \ref{apen-red}. One can note that the total number of the basic intertwining operators, $\#_f=2[(4N-2+r(N,c))/c]$, is greater than the number of the basic ladder operators, $\#_{lad} = 2[(4N-3+r(N,c))/c]$, which can be constructed with their help. In the particular case of gapless deformations of the AFF model, the indicated set of Darboux generators can be reduced to those which produce, by the ``gluing'' procedure, one conjugate pair of the spectrum-generating ladder operators of the form $\mathfrak{D}_{\rho,2}^\pm$. For $c=1$ one-gap systems, identity (\ref{ide}) allows us to reduce further the set of the basic intertwining operators, which, together with the corresponding Hermitian conjugate ones, is given by either of the two options, \begin{equation} \label{frakS} \mathfrak{S}_{z}\equiv \left\{ \begin{array}{lcc} A_{(-)}^-{(a^+)}^{|z|}\,, & -N<z\leq 0\,,\\\vspace{-0.4cm} \\ A_{(-)}^-{(a^-)}^{z}\,, & 0< z \leq n_+\,,\\\vspace{-0.4cm} \\ A_{(+)}^-{(a^+)}^{N-z} \,, & n_+ < z \leq N\,,\\\vspace{-0.4cm} \\ A_{(+)}^-{(a^-)}^{N-z}\,, & N<z<2N\,, \\ \end{array} \right.\\\qquad \text{or}\qquad \mathfrak{S}_{z}^{'}\equiv \mathfrak{S}_{N-z}\,, \end{equation} see Appendix \ref{apen-red}.
Here we have reserved the values $z=0$ and $z=N$ of the index $z$ for the intertwining operators of the dual schemes: in the first choice, $\mathfrak{S}_{0}=A_{(-)}^-$ and $\mathfrak{S}_{N}=A_{(+)}^-$, while for the second choice we have $\mathfrak{S}_{0}'=A_{(+)}^-$ and $\mathfrak{S}_{N}'=A_{(-)}^-$. Written in this way, these operators satisfy the intertwining relations $ \mathfrak{S}_{z}L=(L_{(-)}+2z)\mathfrak{S}_{z}$ or $\mathfrak{S}_{z}'L=(L_{(+)}-2z)\mathfrak{S}_{z}'$, and their Hermitian conjugate versions. Then, to study supersymmetry, we have to choose either the positive or the negative scheme to define the $\mathcal{N}=2$ super-extended Hamiltonian. We take $\mathfrak{S}_{z}$ if we work with the negative scheme, and $\mathfrak{S}_{z}'$ if the positive scheme is chosen for the construction of the super-extension. \section{Supersymmetric extensions} \label{susyextension} For each of the two dual schemes, one can construct an $\mathcal{N}=2$ super-extended Hamiltonian following the recipe given in Chap. \ref{ChSUSY}, equation (\ref{Hlambda*}). The task is to choose appropriately $H_1=\breve{L}-\lambda^*$ and $H_0=L-\lambda^*$. We put $\breve{L}=L_{(+)}$ and $\lambda^*=\lambda_+= 2l_1^++1$ for the positive scheme, and choose $\breve{L}=L_{(-)}$ and $\lambda^*=\lambda_-= -2l_1^--1$ for the negative scheme. For both options, we set $L=L_\text{os}$ if we are dealing with a rational extension of the harmonic oscillator, and $L=L_{0}$ if we work with a deformation of the AFF model. We name the matrix Hamiltonian associated with the negative scheme $\mathcal{H}$, and denote by $\mathcal{H}'$ the Hamiltonian of the positive scheme. The spectrum of these systems can be found using the properties of the corresponding intertwining operators described in Sec. \ref{Dar}, see also refs. \textcolor{red}{[\cite{CarPly,CarInzPly}]}.
The two Hamiltonians are connected by the relation $\mathcal{H}-\mathcal{H}'=-N(1+\sigma_3)-\lambda_-+\lambda_+$, and $\sigma_3$ plays the role of the $\mathcal{R}$-symmetry generator for both super-extended systems. In this section we finally construct the corresponding spectrum-generating superalgebra for $\mathcal{H}$ and $\mathcal{H'}$. The resulting structures are based on the physical operators $\mathfrak{D}_{\rho,j}^\pm$. As we shall see, the supersymmetric versions of the $c=1$ systems are described by a nonlinearly extended super-Schr\"odinger symmetry whose bosonic generators are differential operators of even and odd orders, while in the case of the $c=2$ systems we obtain a nonlinearly extended superconformal symmetry in which the bosonic generators are of even order only. \vskip0.1cm We construct a pair of fermionic operators on the basis of each intertwining operator from the set (\ref{genA}) and their Hermitian conjugate counterparts. Let us first consider the extended nonlinear super-Schr\"odinger symmetry of a one-gap deformed harmonic oscillator, and then generalize the picture. If we choose the negative scheme, then we use $\mathfrak{S}_z$ defined in (\ref{frakS}) to construct the set of operators \begin{eqnarray} \label{gencharge} \mathcal{Q}_1^{z}= \left( \begin{array}{cc} 0& \mathfrak{S}_z \\ \mathfrak{S}_z^\dagger & 0 \end{array} \right)\,, \qquad \mathcal{Q}_2^{z}= i\sigma_3\mathcal{Q}_1^{z}\,, \qquad -N<z<2N\,. \end{eqnarray} They satisfy the (anti)commutation relations \begin{equation} \label{SUSY} [\mathcal{H},\mathcal{Q}_a^z]=2iz\epsilon_{ab}\mathcal{Q}_{b}^{z}\,, \qquad \{\mathcal{Q}_a^z,\mathcal{Q}_b^z\}=2\delta_{ab}\mathbb{P}_{z}(\mathcal{H},\sigma_3)\,,\qquad [\Sigma,\mathcal{Q}_{a}^z]=-i\epsilon_{ab}\mathcal{Q}_{b}^{z}\,, \end{equation} where $\Sigma= \frac{1}{2}\sigma_3$ and $\mathbb{P}_z$ are some polynomials whose structure is described in Appendix \ref{apen-comm}.
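The graded structure of relations (\ref{SUSY}) depends only on the definition $\mathcal{Q}_2^z=i\sigma_3\mathcal{Q}_1^z$ and on the block-antidiagonal form of $\mathcal{Q}_1^z$, not on the concrete intertwiner. A minimal sketch with a random matrix standing in for $\mathfrak{S}_z$ (an assumption of this illustration; the real $\mathfrak{S}_z$ is a differential operator):

```python
import numpy as np

# Structural check of the sigma_3-graded relations in (SUSY) for a generic
# block-antidiagonal fermionic operator; S is a random stand-in (assumption).
rng = np.random.default_rng(0)
D = 6
S = rng.standard_normal((D, D))            # placeholder for frak{S}_z
Z = np.zeros((D, D)); I = np.eye(D)

Q1 = np.block([[Z, S], [S.T, Z]])          # Q_1^z
s3 = np.block([[I, Z], [Z, -I]])           # sigma_3
Q2 = 1j * s3 @ Q1                          # Q_2^z = i sigma_3 Q_1^z
Sig = 0.5 * s3                             # Sigma = sigma_3 / 2

anti = lambda A, B: A @ B + B @ A
comm = lambda A, B: A @ B - B @ A

# {Q_a, Q_b} = 2 delta_ab diag(S S^T, S^T S): the mixed pair anticommutes
P = np.block([[S @ S.T, Z], [Z, S.T @ S]])
assert np.allclose(anti(Q1, Q1), 2 * P)
assert np.allclose(anti(Q2, Q2), 2 * P)
assert np.allclose(anti(Q1, Q2), np.zeros((2*D, 2*D)))

# [Sigma, Q_a] = -i epsilon_ab Q_b  (epsilon_12 = -epsilon_21 = 1)
assert np.allclose(comm(Sig, Q1), -1j * Q2)
assert np.allclose(comm(Sig, Q2),  1j * Q1)
print("graded relations verified")
```

The polynomial $\mathbb{P}_z(\mathcal{H},\sigma_3)$ appears here as the block-diagonal matrix $\operatorname{diag}(SS^\dagger,S^\dagger S)$, in agreement with the factorization relations (\ref{A-A-A+A+Poly}).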
For the choice of the positive scheme to fix the extended Hamiltonian, according to (\ref{frakS}), the corresponding fermionic operators are given by $\mathcal{Q}_1^{'z}\equiv \mathcal{Q}_1^{N-z}$. They satisfy relations of the same form (\ref{SUSY}) but with the replacements $\mathcal{H}\rightarrow \mathcal{H}'$, $\Sigma=\frac{1}{2}\sigma_3 \rightarrow \Sigma'=-\frac{1}{2}\sigma_3$, $\mathbb{P}_{z}(\mathcal{H},\sigma_3)\rightarrow \mathbb{P}_{z}'(\mathcal{H}',\sigma_3)=\mathbb{P}_{N-z}(\mathcal{H}'-N(1+\sigma_3)-\lambda_-+\lambda_+,\sigma_3)$, $\mathcal{Q}_{1}^{z}\rightarrow \mathcal{Q}_{2}^{'z}$ and $\mathcal{Q}_{2}^{z}\rightarrow \mathcal{Q}_{1}^{'z}$. The fermionic operators $\mathcal{Q}_a^{0}$ (or $\mathcal{Q}_a^{'0}$) are the supercharges of the (in general nonlinear) $\mathcal{N}=2$ Poincar\'e supersymmetry, which are integrals of motion of the system $\mathcal{H}$ (or $\mathcal{H}'$), and $\mathbb{P}_{0}= P_{n_-}(\mathcal{H} + \lambda_{-})\,$ (or $\mathbb{P}_{0}=P_{n_+}(\mathcal{H}' + \lambda_{+})$) with the polynomials $P_{n_\pm}$ defined in (\ref{polyA}). The operators $\mathcal{Q}_a^{'0}$ are analogous here to the supercharges $Q_\nu^{a}$ in the linear case, see Chap. \ref{ChConformal}. On the other hand, we have here the fermionic operators $\mathcal{Q}_a^{'N}$ as analogs of the dynamical integrals $\mathcal{S}^a_\nu$ there. We recall that in the simple linear case considered in section \ref{SecOSP22Conformal}, the interchange between the positive and negative schemes corresponds to an automorphism of the superconformal algebra, and this observation will be helpful for the analysis of the nonlinearly extended super-Schr\"odinger structures. Here, actually, each of the $(\#_f-2)/2$ pairs of fermionic operators distinct from the supercharges provides a possible dynamical extension of the super-Poincar\'e symmetry. As we will see, all of them are necessary to obtain a closed nonlinear spectrum-generating superalgebra of the super-extended system.
To construct any extension of the deformed Poincar\'e supersymmetry, we calculate $\{\mathcal{Q}_{a}^{0},\mathcal{Q}_{b}^{z} \}$ in the negative scheme, or $\{\mathcal{Q}_{a}^{'0},\mathcal{Q}_{b}^{'z} \}$ in the positive one. In the first case we have \begin{equation} \label{Cn+kQN} \{\mathcal{Q}_a^{0},\mathcal{Q}_{b}^{z}\}=\delta_{ab}(\mathcal{G}_{-z}^{(2\theta(z)-1)}+ \mathcal{G}_{+z}^{(2\theta(z)-1)})+i\epsilon_{ab}(\mathcal{G}_{-z}^{(2\theta(z)-1)}- \mathcal{G}_{+z}^{(2\theta(z)-1)})\,, \end{equation} where $z\in (-N,0)\cup(0,2N)$, $\theta(z)=1$ for $z>0$ and $\theta(z)=0$ for $z<0$, and $\mathcal{G}^{(2\theta(z)-1)}_{\pm z}$ are given by \begin{equation} \label{superC} \mathcal{G}_{+z}^{(2\theta(z)-1)}= \left( \begin{array}{cc} \mathfrak{S}_{0}(\mathfrak{S}_{z})^{\dagger} & 0 \\ 0& (\mathfrak{S}_{z})^{\dagger}\mathfrak{S}_{0} \end{array} \right)\,,\qquad \mathcal{G}_{-z}^{(2\theta(z)-1)}=(\mathcal{G}_{+z}^{(2\theta(z)-1)})^\dagger\,. \qquad \end{equation} Following definition (\ref{frakS}), one finds directly that $\mathfrak{S}_{0}(\mathfrak{S}_{z})^{\dagger}$ is equal to $\mathfrak{A}_{|z|}^-$ when $-N<z<0$, while for $0<z\leq n_+$ this operator is equal to $\mathfrak{A}_{z}^+$, and it takes the form of $\mathfrak{C}_{z}^+$ for $n_+<z<2N$. The operators $(\mathfrak{S}_{z})^{\dagger}\mathfrak{S}_{0}$ reduce to \begin{equation} \label{S*S+} (\mathfrak{S}_{z})^{\dagger}\mathfrak{S}_{0}= \left\{ \begin{array}{lcc} P_{n_-}(L-2|z|)(a^-)^{|z|}\,, & -N<z<0\,,\\\vspace{-0.4cm} \\(a^+)^{z}P_{n_-}(L)\,, & 0<z\leq n_+\,,\\\vspace{-0.4cm} \\ (-1)^{n_-}(a^+)^zT_{N-z}(L+2N)\,, & n_+<z<N\,,\\\vspace{-0.4cm} \\ (-1)^{n_-}(a^+)^z \,, & N\leq z<2N\,. \end{array} \right.\\ \end{equation} Note that $\mathcal{G}_{\pm k}^{(-1)}$ and $\mathcal{G}_{\pm k}^{(+1)}$ with $k=|z|\leq n_-$ are two different matrix extensions of the same operator $\mathfrak{A}_k^\pm$.
For a super-extended system based on the positive scheme, we obtain \begin{equation} \label{Q0'Qk'} \{\mathcal{Q}_a^{'0},\mathcal{Q}_{b}^{'z}\}= \delta_{ab}(\mathcal{G}^{'(2\theta(z)-1)}_{-z}+\mathcal{G}^{'(2\theta(z)-1)}_{+z}) -i\epsilon_{ab}(\mathcal{G}^{'(2\theta(z)-1)}_{-z}- \mathcal{G}^{'(2\theta(z)-1)}_{+z})\,, \end{equation} where, again, $z\in (-N,0)\cup(0,2N)$, and $\mathcal{G}^{'(2\theta(z)-1)}_{\pm z}$ are given by \begin{equation} \label{superC'} \mathcal{G}^{'(2\theta(z)-1)}_{-z}= \left( \begin{array}{cc} \mathfrak{S'}_{0}(\mathfrak{S'}_{z})^{\dagger} & 0 \\ 0& (\mathfrak{S'}_{z})^{\dagger}\mathfrak{S'}_{0} \end{array} \right)\,,\qquad \mathcal{G}^{'(2\theta(z)-1)}_{+z}=(\mathcal{G}^{'(2\theta(z)-1)}_{-z})^\dagger\,. \qquad \end{equation} Now, $\mathfrak{S'}_{0}(\mathfrak{S'}_{z})^{\dagger}=\mathfrak{B}^{+}_{|z|}$ when $-N<z<0$, while for positive index $z$ this operator reduces to $\mathfrak{B}^{-}_{z}$ when $0<z\leq n_-$, and to $\mathfrak{C}^{-}_{z}$ when $n_-<z<2N$. For the other matrix element we have \begin{eqnarray} (\mathfrak{S}_{z}')^{\dagger}\mathfrak{S}_{0}'= \left\{ \begin{array}{lcc} (a^+)^{|z|}P_{n_+}(L)\,, & -N<z<0\,,\\\vspace{-0.4cm} \\(a^-)^{z}P_{n_+}(L)\,, & 0<z\leq n_-\,,\\\vspace{-0.4cm} \\ (-1)^{n_-}T_{N-z}(L)(a^-)^z\,, & n_-<z<N\,,\\\vspace{-0.4cm} \\ (-1)^{n_-}(a^-)^z\,, & N<z<2N\,. \end{array} \right. \end{eqnarray} Here, again, there are two different matrix extensions of the operators of the $\mathfrak{B}$-family, given by $\mathcal{G}_{\pm k}^{'(+1)}$ and $\mathcal{G}_{\pm k}^{'(-1)}$ when $k\leq n_-$. By comparing the two schemes one can note two other special features. It turns out that $\mathcal{G}_{\pm k}^{(1)}=\mathcal{G}_{\pm k}^{'(1)}$ when $k\geq N$, and this corresponds to the automorphism discussed in section~\ref{SecOSP22Conformal}. In the same way, for $\max(n_-,n_+)<k<N$, the operators $\mathcal{G}_{\pm k}^{(1)}$ and $\mathcal{G}^{'(1)}_{\pm k}$ are different matrix extensions of $\mathfrak{C}^{\pm}_k$.
{}From here on we do not specify whether we have the super-extended system corresponding to the negative or the positive scheme, and will just use, respectively, the unprimed or primed notations for the operators of the alternative dual schemes. In particular, we have \begin{equation} [\mathcal{H},\mathcal{G}_{\pm k}^{(2\theta(z)-1)}]=\pm 2k \mathcal{G}_{\pm k}^{(2\theta(z)-1)} \,,\qquad k\equiv|z|\,, \qquad z \in (-N,0)\cup(0,2N)\,, \end{equation} which shows explicitly that our new bosonic operators have the nature of ladder operators for the super-extended system $\mathcal{H}$. The commutators $[\mathcal{G}_{-k}^{(1)},\mathcal{G}_{+k}^{(1)}]$ and $[\mathcal{G}_{-k}^{(-1)},\mathcal{G}_{+k}^{(-1)}]$ produce polynomials in $\mathcal{H}$ and $\sigma_3$, which can be calculated by using the polynomials $\mathcal{P}_{\rho,j}$ defined in (\ref{sl2rh}). The algebra generated by $\mathcal{H}$, $\mathcal{G}_{\pm k}^{(2\theta(z)-1)}$ and $\sigma_3$ is identified as a deformation of $\mathfrak{sl}(2,\mathbb R)\oplus \mathfrak{u}(1)$, where the concrete form of the deformation depends on the system $\mathcal{H}$ and on $z$. Each of these nonlinear bosonic algebras expands further up to a certain closed nonlinear deformation of the superconformal $\mathfrak{osp}(2|2)$ algebra generated by the subset of operators \begin{equation} \label{U1} \mathcal{U}_{0,z}^{(2\theta(z)-1)}\equiv \{\mathcal{H},\sigma_3, \mathbb{I},\mathcal{G}_{\pm |z|}^{(2\theta(z)-1)}, \mathcal{Q}_{a}^{0}, \mathcal{Q}_{a}^{z}\}\,,\qquad z \in (-N,0)\cup(0,2N)\,, \end{equation} see Appendix \ref{apen-comm}. The deficiency of any of these nonlinear superalgebras is that none of them is a spectrum-generating algebra for the super-extended system\,: application of the operators from the set (\ref{U1}) and of their products does not allow one to connect two arbitrary eigenstates in the spectrum of $\mathcal{H}$.
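The ladder nature expressed by $[\mathcal{H},\mathcal{G}^{(2\theta(z)-1)}_{\pm k}]=\pm 2k\,\mathcal{G}^{(2\theta(z)-1)}_{\pm k}$ can be illustrated with finite matrices: any $G$ obeying $[H,G]=2kG$ maps an $H$-eigenstate of eigenvalue $E$ to one of eigenvalue $E+2k$. The diagonal $H$ and shift matrix below are toy stand-ins (assumptions), not the actual super-extended operators:

```python
import numpy as np

# Toy model: H diagonal with equidistant spectrum 2n+1, G a k-step raiser.
D, k = 8, 2
H = np.diag(np.arange(D, dtype=float) * 2 + 1)       # spectrum step Delta E = 2
shift = np.eye(D, k=-1)                              # one-step raising matrix
G = np.linalg.matrix_power(shift, k)                 # raises the level by k steps

assert np.allclose(H @ G - G @ H, 2 * k * G)         # [H, G] = 2k G

v = np.zeros(D); v[3] = 1.0                          # eigenstate with E_3 = 7
w = G @ v                                            # raised state
assert np.allclose(H @ w, (7 + 2 * k) * w)           # eigenvalue shifted by 2k
print("ladder property verified")
```

The same computation read backwards (with $G^\dagger$) lowers the eigenvalue by $2k$, which is the content of the $\mathcal{G}_{-k}$ half of the relation.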
To find the spectrum-generating superalgebra for this kind of super-extended system, one can try to include into the superalgebra simultaneously the operators $\mathcal{G}^{(1)}_{\pm N}$ and, say, $\mathcal{G}^{(1)}_{\pm 1}$ or $\mathcal{G}^{(-1)}_{\pm 1}$. The operators $\mathcal{G}^{(1)}_{\pm N}$ provide us with a matrix extension of the operators $\mathfrak{C}_{N}^\pm$, which are ladder operators for the deformed subsystems $L_{(-)}$ or $L_{(+)}$. Analogously, the operators $\mathcal{G}^{(1)}_{\pm 1}$ or $\mathcal{G}^{(-1)}_{\pm 1}$ supply us with matrix extensions of the ladder operators $\mathfrak{A}_{ 1}^\pm$ or $\mathfrak{B}_{ 1}^\pm$ ($\mathfrak{A}_{ 2}^\pm$ or $\mathfrak{B}_{ 2}^\pm$) when the systems $L_{(\pm)}$ are of the class $c=1$ or $c=2$ with even (odd) $N$. Therefore, it is enough to unify the sets of generators $\mathcal{U}_{0,1}^{(1)}$ and $\mathcal{U}_{0,N}^{(1)}$. Having in mind the commutation relations between the operators of the three families $\mathfrak{A}$, $\mathfrak{B}$ and $\mathfrak{C}$, one finds, however, that the commutators of the operators $\mathcal{G}^{(1)}_{\pm N}$ with $\mathcal{G}^{(1)}_{\pm 1}$ generate other bosonic matrix operators $\mathcal{G}^{(1)}_{\pm k}$. The commutation of these operators with the supercharges $\mathcal{Q}_{a}^{0}$ generates the rest of the fermionic operators we considered, see Appendix \ref{apen-comm} for details. The set of higher-order generators is completed by considering all non-reducible bosonic and fermionic generators, i.e., those which do not decompose into products of other generators. In correspondence with what was noted above, we finally arrive at two different extensions of the sets of operators with index less than $N$. For this reason it is also convenient to introduce the operators \begin{eqnarray} \label{G_k} &\mathcal{G}_{\pm k}^{(0)}\equiv \Pi_- (a^\pm)^k, \qquad k=1,\ldots,N-1, \qquad \Pi_-=\frac{1}{2}(1-\sigma_3)\,,& \end{eqnarray} which help us to fix the bosonic set of generators in a unique way.
For our purposes we choose to write all the operators $\mathcal{G}_{\pm k}^{(-1)}$ in terms of $\mathcal{G}_{\pm k}^{(1)}$ and $\mathcal{G}_{\pm k}^{(0)}$ when $k\leq n_+$ in the negative scheme, and when $k\leq n_-$ in the extended system associated with the positive scheme. For indices outside the indicated scheme-dependent range, we neglect the operators $\mathcal{G}_{\pm k}^{(-1)}$ because they are not basic, in correspondence with the discussion on the reduction of ladder operators in the previous Sec. \ref{interladder}. As a result, we have to drop from (\ref{U1}) all the operators $\mathcal{G}_{\pm |z|}^{(2\theta(z)-1)}$ with $z\in (-N,0)$. By taking anticommutators of the fermionic operators $\mathcal{Q}_{a}^{N}$ with $\mathcal{Q}_{a}^{z}$, $z\neq 0$, we produce bosonic dynamical integrals $\mathcal{J}_{\pm |z-N|}^{(1-2\theta(z-N))}$, which have exactly the same structure as the even generators $\mathcal{G}_{\pm |z|}'^{(2\theta(z)-1)}$ in the extension associated with the dual scheme. In this way we obtain the subsets of operators \begin{eqnarray} \label{In11} \mathcal{I}_{N,z}^{(1-2\theta(z-N))}\equiv \{\mathcal{H},\sigma_3,\mathbb{I}, \mathcal{J}_{\pm |z-N|}^{(1-2\theta(z-N))}, \mathcal{Q}_{a}^{N},\mathcal{Q}_{a}^{z}\}\,, \qquad z \in (-N,0)\cup(0,2N)\,, \end{eqnarray} which also generate closed nonlinear superalgebraic structures. With the help of (\ref{G_k}), we find, similarly to the subsets (\ref{U1}), that a part of the sets (\ref{In11}) can also be reduced.
Having in mind the ordering relation between $n_-$ and $n_+$, the super-extended systems associated with the negative schemes can finally be characterized by the following irreducible, in the sense of Sec. \ref{interladder}, subsets of symmetry generators\,: \begin{table}[H] \begin{center} \begin{tabular}{|c| c|} \hline $n_-\leq n_+$ &$n_+<n_-$ \\ \hline $ \mathcal{U}_{0,k}^{(1)}\,,\qquad 0<k<2N$ & $\mathcal{U}_{0,k}^{(1)}\,,\qquad k\in (0,n_+)\cup (n_-,2N) $\\ $ \mathcal{I}_{N,z}^{(1-2\theta(N-z))}\,,\qquad z\in(-N,0)\cup(n_+,N)$ &$ \mathcal{I}_{N,z}^{(1-2\theta(N-z))}\,,\qquad z\in(-N,0)\cup[n_+,N)$ \\ \hline \end{tabular} \caption{Subsets of symmetry generators.} \label{Tabla 1} \end{center} \end{table} \noindent For more details, see Appendix \ref{apen-red}. A similar result can be obtained for super-extended systems associated with positive schemes, where the roles played by the families $\mathfrak{A}$ and $\mathfrak{B}$, and by the numbers $n_-$ and $n_+$, are interchanged. Finally, we arrive at the following picture. Any operator that can be generated via the (anti)commutation relations and does not belong to the subsets appearing in Table \ref{Tabla 1} can be written as a product of the basic generators. For the super-extensions of rationally deformed one-gap harmonic oscillator systems we have considered, the spectrum-generating algebra is composed of the sets $\mathcal{U}_{0,k}^{(1)}$ and $\mathcal{I}_{N,z}^{(1-2\theta(N-z))}$ and of those operators generated by them via (anti)commutation relations which cannot be written as products of the basic generators. It is worth stressing that in this set of generators the only true integrals of motion, in addition to $\mathcal{H}$ and $\sigma_3$, are the supercharges $\mathcal{Q}_a^{0}$, while the rest have to be promoted to dynamical integrals by unitarily transforming them with the evolution operator.
For gapless rational extensions of the systems of class $c=2$, only the subset $\mathcal{U}_{0,2}^{(1)}$ has to be considered instead of the family of sets $\mathcal{U}_{0,k}^{(1)}$. For super-extensions of rationally deformed systems of arbitrary form, in the sense of the class $c$ and of an arbitrary number of gaps and their dimensions, the identification of their generalized super-Schr\"odinger or superconformal structures is realized in a similar way. The procedure is based on the sets of operators (\ref{ladgen}) and (\ref{genA}), which include the operators (\ref{basicsubsetonegap}) and (\ref{frakS}) of the discussed one-gap case as subsets. As a result, for every irreducible pair of ladder operators (\ref{ladgen}) with index less than $N$ we have two super-extensions, which are related by operators of the form (\ref{G_k}). When we put together the subsets containing the spectrum-generating set of operators, we obtain all the other structures. We would like to end this section by highlighting some peculiarities of the simplest systems that can be treated with this machinery. \vskip0.1cm \emph{Peculiarities of one-gap deformations of the QHO}\,: The super-extended Hamiltonian constructed on the basis of the negative scheme with $n_-=1$ is characterized by unbroken $\mathcal{N}=2$ Poincar\'e supersymmetry, whose supercharges, being first-order differential operators, generate a Lie superalgebra. The $\mathfrak{B}$ family of ladder operators, in the sense of (\ref{basicsubsetonegap}), does not play any role in this scheme. On the other hand, the super-Hamiltonian provided by the positive scheme possesses $n_+$ singlet states, while the ground state is a doublet. The $\mathcal{N}=2$ super-Poincar\'e algebra of such a system is nonlinear, as its supercharges are of differential order $n_+=2\ell \geq 2$.
\vskip0.1cm \emph{Peculiarities of gapless deformations of $L_1$}\,: The negative scheme produces a super-Hamiltonian with spontaneously broken supersymmetry, all of whose energy levels are doubly degenerate; its $\mathcal{N}=2$ super-Poincar\'e algebra is linear. To construct the spectrum-generating algebra we only need a matrix extension of the operators $\mathfrak{A}_2^\pm$. In a super-extended system produced by the positive scheme, $n_+>1$ physical and nonphysical states of $L_{0}$ of positive energy (the latter being even eigenstates of the harmonic oscillator) are used as seed states for the DCKA transformation. Its supersymmetry is spontaneously broken, and the $\mathcal{N}=2$ super-Poincar\'e algebra is nonlinear. The nonlinearly deformed super-Poincar\'e symmetry cannot be expanded to a spectrum-generating superalgebra by combining it with a matrix extension of $\mathfrak{A}^\pm_2$, but this can be done by using matrix extensions of the $\mathfrak{B}_2^\pm$ or $\mathfrak{C}_2^\pm$ ladder operators, see (\ref{superC'}). The resulting spectrum-generating superalgebra is a certain nonlinear deformation of the $\mathfrak{osp}(2|2)$ superconformal symmetry. \vskip0.1cm \section{Example 1: Gapless deformation of the AFF model} The example considered here corresponds to the same system analyzed in the previous chapter, in Sec. \ref{SecIsocase}. By construction, the super-Hamiltonian and its spectrum are \begin{equation} \label{hamilisodef} \mathcal{H}= \left( \begin{array}{cc} H_1& 0 \\ 0 & H_0 \end{array}\right),\qquad \mathcal{E}_{n}=4n+10\,, \qquad n=0,1,\ldots\,, \end{equation} where $H_1=L_{(-)}+7$, with $L_{(-)}$ given in (\ref{reio3}), and $H_0=L_{0}+7$.
Due to the complete isospectrality of $H_1$ and $H_0$, all the energy levels of the system (\ref{hamilisodef}), including the lowest one $\mathcal{E}_{0}=10>0$, are doubly degenerate, and we have here the case of spontaneously broken $\mathcal{N}=2$ super-Poincar\'e symmetry generated by the Hamiltonian $\mathcal{H}$, the supercharges $\mathcal{Q}^0_a$ constructed in terms of $A_{(-)}^\pm$, and by $\Sigma=\frac{1}{2}\sigma_3$. The generators that should be considered for the super-extension correspond to \begin{equation} \mathcal{U}_{0,2}^{(1)}=\{\mathcal{H},\mathbb{I},\mathcal{G}_{\pm2}^{(1)},\sigma_3,\mathcal{Q}_a^{0},\mathcal{Q}_a^{2} \}\,, \end{equation} where \begin{eqnarray} & \label{Q2Cpm2} \mathcal{Q}^z_1= \left( \begin{array}{cc} 0& A^-_{(-)}(a^-)^z \\ (a^+)^zA^+_{(-)} & 0 \end{array}\right),\,\, z=0,2\,, &\\& \mathcal{G}_{-2}^{(1)}= \left( \begin{array}{cc} A_{(-)}^-(a^-)^2A^+_{(-)}& 0 \\ 0 & H_0(a^-)^2 \end{array}\right),&\\& \mathcal{Q}^z_2=i\sigma_3 \mathcal{Q}^z_1\,,\qquad\mathcal{G}_{+2}^{(1)}=(\mathcal{G}_{-2}^{(1)})^\dagger\,, & \end{eqnarray} and the explicit form of $A_{(-)}^\pm$ is given in (\ref{A+-(-3)}).
The complete set of superalgebraic relations they satisfy is \begin{eqnarray} \label{nonlinear3} &[\mathcal{H},\mathcal{Q}_a^{0}]=0\,,\qquad [\mathcal{H},\mathcal{Q}_a^{2}]=4i\epsilon_{ab}\mathcal{Q}_b^{2}\,,\qquad [\sigma_3,\mathcal{Q}_a^{z}]=-2i\epsilon_{ab}\mathcal{Q}_b^{z}\,,\quad z=0,2\,,&\\ \label{nonlinear2} &\{\mathcal{Q}_a^{0},\mathcal{Q}_b^{0}\}=2\delta_{ab}\mathcal{H}\,,\qquad \{\mathcal{Q}_a^{0},\mathcal{Q}_b^{2}\}=\delta_{ab}(\mathcal{G}_{-2}^{(1)}+\mathcal{G}_{+2}^{(1)})+i \epsilon_{ab}(\mathcal{G}_{-2}^{(1)}-\mathcal{G}_{+2}^{(1)})\,,&\\ \label{nonlinear1} &[\mathcal{H},\mathcal{G}_{\pm 2}^{(1)}]=\pm4\mathcal{G}_{\pm2}^{(1)}\,,\qquad [\mathcal{G}_{\mp 2}^{(1)},\mathcal{Q}_a^{0}]=\pm 2(\mathcal{Q}_a^{2}\mp i\epsilon_{ab}\mathcal{Q}_b^{2})\,,&\\ \label{nonlinear4} &[\mathcal{G}_{-2}^{(1)},\mathcal{G}_{+2}^{(1)}]=8(\mathcal{H}-4)(\mathcal{H}(2\mathcal{H}-9)+\Pi_- (\mathcal{H}^2-4\mathcal{H}+24))\,,&\\ \label{nonlinear5} &[\mathcal{G}_{\mp 2}^{(1)},\mathcal{Q}_a^{2}]= \pm 2(-80 + 4 \mathcal{H} + \mathcal{H}^2)(\mathcal{Q}_a^{0}\pm i\epsilon_{ab}\mathcal{Q}_b^{0})\,,&\\ \label{nonlinear7} &\{\mathcal{Q}_a^{2},\mathcal{Q}_b^{2}\}=2\delta_{ab} (\eta+1)(\eta+3)(\eta+7)|_{\eta=\mathcal{H}+2\sigma_3-9}\,,& \end{eqnarray} where $\Pi_-=\frac{1}{2}(1-\sigma_3)$. The common eigenstates of $\mathcal{H}$ and $\mathcal{Q}^0_1$ are \begin{equation} \label{states-3} \Psi_{n}^{+}= \left( \begin{array}{c} (\mathcal{E}_n)^{-1/2} A_{(-)}^-\psi_{2n+1} \\ \psi_{2n+1} \end{array} \right), \qquad \Psi_{n}^{-}=\sigma_3\Psi_{n}^+, \end{equation} where $\mathcal{Q}^0_1\Psi^\pm_n=\pm\sqrt{\mathcal{E}_n}\Psi^\pm_n$, and we have the relations $\Psi_{n}^\pm=(\mathcal{G}_{+2}^{(1)})^n\Psi_0^{\pm}$ and $\mathcal{G}_{-2}^{(1)}\Psi_0^{\pm}=0$.
As a result, one can generate the complete set of eigenstates of the system by applying the generators of the superalgebra to any of the two ground states $\Psi_0^+$ or $\Psi_0^-$, and therefore the restricted set of generators we have chosen is the complete spectrum-generating set for the super-extended system (\ref{hamilisodef}). \vskip0.1cm The complete set of (anti)commutation relations (\ref{nonlinear3})-(\ref{nonlinear7}) corresponds to a nonlinear deformation of the superconformal algebra $\mathfrak{osp}(2|2)$. The first relation from (\ref{nonlinear1}) and equation (\ref{nonlinear4}) represent a nonlinear deformation of $\mathfrak{sl}(2,\mathbb{R})$, with the commutator $[\mathcal{G}_{-2}^{(1)},\mathcal{G}_{+2}^{(1)}]$ being a cubic polynomial in $\mathcal{H}$. {}From the superalgebraic relations it follows that, as in the linear case of the superconformal $\mathfrak{osp}(2|2)$ symmetry discussed in Chap.~\ref{ChConformal}, Sec.~\ref{SecOSP22Conformal}, here the extension of the set of generators $\mathcal{H}$, $\mathcal{Q}^0_a$ and $\Sigma$ of the $\mathcal{N}=2$ Poincar\'e supersymmetry by any one of the dynamical integrals $\mathcal{Q}^2_a$, $a=1,2$, $\mathcal{G}_{+2}^{(1)}$ or $\mathcal{G}_{-2}^{(1)}$ recovers the complete set of generators of the nonlinearly deformed superconformal $\mathfrak{osp}(2|2)$ symmetry. \vskip0.2cm Since we deal with a gapless deformation of the AFF model, here, similarly to the case of the non-deformed superconformal $\mathfrak{osp}(2|2)$ symmetry, the super-extension based on the positive scheme is characterized by essentially different physical properties. The positive scheme of the system corresponds to the states $(1,2,3)$, and in this case we identify $\mathcal{H}'=\text{diag}\,(L_{(+)}-3,L_{0}-3)$ as the extended Hamiltonian. This $\mathcal{H}'$ is related to $\mathcal{H}$ defined by Eq. (\ref{hamilisodef}) by the equality $\mathcal{H}'=\mathcal{H}-6+4\sigma_3$.
For the extended system $\mathcal{H}'$, the supercharges ${\mathcal{Q}'}_a^{0}$ have a form similar to $\mathcal{Q}_a^{0}$ in (\ref{Q2Cpm2}), but with $A^\pm_{(-)}$ replaced by the third-order intertwining operators $A^\pm_{(+)}$, constructed with the formula (\ref{generic-inter}). Being differential operators of the third order, they satisfy the relations $[\mathcal{H}', {\mathcal{Q}'}_a^{0}]=0$ and $\{{\mathcal{Q}'}_a^{0},{\mathcal{Q}'}_b^{0}\}=2\delta_{ab}P_{n_+}(\mathcal{H}'+3)$ with $P_{n_+}(\mathcal{H}'+3)=\mathcal{H}'(\mathcal{H}'-2)(\mathcal{H}'-4)$. The linear $\mathcal{N}=2$ super-Poincar\'e algebra of the system (\ref{hamilisodef}) is replaced here by a nonlinearly deformed superalgebra whose anticommutator is a third-order polynomial in the Hamiltonian. This system has two nondegenerate states $(0,\psi_{1})^t$ and $(0,\psi_{3})^t$ of energies $0$ and $4$, respectively, and both of them are annihilated by both supercharges ${\mathcal{Q}'}_a^{0}$. All higher energy levels $\mathcal{E}'_n=4n$ with $n=2,3,\ldots$ are doubly degenerate. Thus, the nonlinearly deformed $\mathcal{N}=2$ super-Poincar\'e symmetry of this system can be identified as partially unbroken \textcolor{red}{[\cite{KliPly}]}, since the supercharges have differential order three but annihilate only two nondegenerate physical states. Here, instead of the spectrum-generating set $\mathcal{U}_{0,2}^{(1)}$ formed by true and dynamical integrals, the same role is played by the set of integrals $\mathcal{U}_{0,2}'^{(1)}=\{\mathcal{H}',\mathcal{G}'^{(1)}_{\pm 2}, \mathbb{I},\sigma_3, {\mathcal{Q}'}_a^{0}, {\mathcal{Q}'}_a^{2}\}$, where the fermionic generators are ${\mathcal{Q}'}_a^{z}=\mathcal{Q}_a^{4-z}$ with $z=0,2$, in accordance with (\ref{frakS}) and (\ref{gencharge}).
The bosonic dynamical integrals $\mathcal{G}'^{(1)}_{\pm 2}$ are given here by \begin{eqnarray} \label{Cpm2Hprime} \mathcal{G}_{-2}'^{(1)}= \left( \begin{array}{cc} A_{(+)}^-(a^+)A^+_{(-)}& 0 \\ 0 & (L_{0}-1)(a^-)^2 \end{array}\right),\qquad \mathcal{G}_{+2}'^{(1)}=(\mathcal{G}_{-2}'^{(1)})^\dagger\,, \end{eqnarray} where the equations in (\ref{superC'}) have been used for the case of the present positive scheme. They are generated via the anticommutation of ${\mathcal{Q}'}^0_a$ with ${\mathcal{Q}'}^2_b$. The set of operators $\mathcal{U}_{0,2}'^{(1)}$ generates the nonlinearly deformed superconformal $\mathfrak{osp}(2|2)$ symmetry given by a superalgebra of the form (\ref{nonlinear3})--(\ref{nonlinear7}), but with coefficients that are polynomials of higher order in the Hamiltonian $\mathcal{H}'$ in comparison with the case of the system (\ref{hamilisodef}). \section{Example 2: Rationally extended harmonic oscillator}\label{SecDefQHO} The example we discuss in this section corresponds to the rational extension of the QHO based on the dual schemes $(1,2)\sim (-2)$, for which $N=3$. Different aspects of this system were extensively studied in the literature \textcolor{red}{[\cite{CarPly,CarInzPly}]}. Here, we investigate it in the light of the nonlinearly extended super-Schr\"odinger symmetry. The Hamiltonian produced via the Darboux transformation based on the negative scheme is \begin{equation} \label{H_-2} L_{(-)}=-\frac{d^2}{dx^2}+x^2+8\frac{2 x^2-1 }{(1 + 2 x^2)^2}-2\,, \end{equation} whose spectrum is $E_0=-5$, $E_{n+1}=2n+1$, $n=0,1,\ldots$. In this system a gap of size 6 separates the ground-state energy from the equidistant part of the spectrum, in which the levels are separated from each other by the distance $\Delta E=2$.
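The spectrum quoted for (\ref{H_-2}) can be cross-checked by direct numerical diagonalization. The finite-difference sketch below (grid size, box and tolerance are our assumptions, not part of the text) should reproduce the isolated level $E_0=-5$ followed by the equidistant levels $1,3,5,\ldots$:

```python
import numpy as np

# Second-order finite-difference discretization of
# L_(-) = -d^2/dx^2 + x^2 + 8(2x^2-1)/(1+2x^2)^2 - 2
# on a box with Dirichlet boundary conditions (grid parameters are assumptions).
x = np.linspace(-7.0, 7.0, 1201)
h = x[1] - x[0]
V = x**2 + 8*(2*x**2 - 1)/(1 + 2*x**2)**2 - 2

main = 2.0/h**2 + V                       # diagonal of the tridiagonal matrix
off  = -np.ones(len(x) - 1)/h**2          # off-diagonal from -d^2/dx^2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.sort(np.linalg.eigvalsh(H))

print(np.round(E[:5], 3))                 # expect values close to -5, 1, 3, 5, 7
```

The gap of size 6 between $E_0$ and $E_1$, and the spacing $\Delta E=2$ in the equidistant part, are directly visible in the output.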
The pair of ladder operators of the $\mathfrak{C}$-family connects here the isolated ground state with the equidistant part of the spectrum, and together with the ladder operators $\mathfrak{A}^\pm_1$ they form the complete spectrum-generating set of operators for the system. The intertwining operators of the negative scheme are \begin{equation} A_{(-)}^-=\frac{d}{dx}-x-\frac{4x}{2x^2+1},\qquad A_{(-)}^+\equiv (A_{(-)}^{-})^{\dagger}\,. \end{equation} We also have the intertwining operators $A_{(+)}^\pm$ constructed on the basis of the seed states of the positive scheme $(1,2)$. These four operators satisfy their respective intertwining relations of the form (\ref{inter0}), and their alternate products (\ref{polyA}) reduce here to the polynomials $P_{n_-}(L_{(-)})=L_{(-)}+5\equiv H_1$, $P_{n_-}(L)=L+5\equiv H_0$ and $P_{n_+}(L_{(+)})=(L_{(+)}-3)(L_{(+)}-5)$, $P_{n_+}(L)=(L+3)(L+5)$, where $L=L_\text{os}$ is the Hamiltonian operator of the harmonic oscillator, and $L_{(+)}$ is the Hamiltonian produced by the positive scheme, which is related to $L_{(-)}$, according to (\ref{L+L-}), by $L_{(+)}-L_{(-)}=6$. Here, the eigenstate $A_{(-)}^{-}\widetilde{\psi_{-2}}=1/\psi_{-2}$ is the isolated ground state of zero energy of the shifted Hamiltonian operator $H_1$. The super-extended Hamiltonian and its spectrum are \begin{equation} \label{superdefHO} \mathcal{H}= \left( \begin{array}{cc} H_1& 0 \\ 0 & H_0 \end{array}\right),\qquad \mathcal{E}_{0}=0\,, \qquad \mathcal{E}_{n+1}=2n+6\,,\qquad n=0,1,\ldots\,. \end{equation} The ground state of zero energy is non-degenerate and is given by $(A_{(-)}^- \widetilde{\psi_{-2}},0)^t$.
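The first-order intertwining relation satisfied by $A_{(-)}^-$ can be verified on an arbitrary function. This is a consistency sketch assuming the standard Darboux form $A_{(-)}^- L = L_{(-)} A_{(-)}^-$ with $L=-d^2/dx^2+x^2$ and $L_{(-)}$ as in (\ref{H_-2}):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)

# Superpotential read off from A^-_{(-)} = d/dx - W(x)
W = x + 4*x/(2*x**2 + 1)

A  = lambda g: sp.diff(g, x) - W*g                                # A^-_{(-)}
L  = lambda g: -sp.diff(g, x, 2) + x**2*g                         # harmonic oscillator
Lm = lambda g: (-sp.diff(g, x, 2)
                + (x**2 + 8*(2*x**2 - 1)/(1 + 2*x**2)**2 - 2)*g)  # L_{(-)} of (H_-2)

# A^-_{(-)} L = L_{(-)} A^-_{(-)} holds on an arbitrary smooth function f(x)
assert sp.simplify(sp.expand(A(L(f)) - Lm(A(f)))) == 0
```

Since the check holds identically in $f$, it verifies the operator identity itself, not just its action on particular eigenstates.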
Other energy levels are doubly degenerate and correspond to eigenstates of the extended Hamiltonian (\ref{superdefHO}) and of the supercharge $\mathcal{Q}^0_1$, see below\,: \begin{equation} \Psi_{n+1}^{+}= \left( \begin{array}{c} (\mathcal{E}_{n+1})^{-1/2} A_{(-)}^-\psi_{n} \\ \psi_{n} \end{array} \right),\qquad \Psi_{n+1}^{-}=\sigma_3\Psi_{n+1}^{+}\,. \end{equation} The system (\ref{superdefHO}) is characterized by unbroken $\mathcal{N}=2$ Poincar\'e supersymmetry. Now we use the construction of Sec. \ref{susyextension} to produce the generators of the extended nonlinearly deformed super-Schr\"odinger symmetry of the system. Following (\ref{gencharge}) and (\ref{superC}), we construct the odd operators $\mathcal{Q}_{a}^{z}$ with $z=-2,-1,0,\ldots,5$, and the matrix bosonic ladder operators $\mathcal{G}_{\pm k}^{(1)}$ with $k=1,\ldots,5$. We must also consider the operators $\mathcal{G}_{\pm k}^{(0)}$ with $k=1,2$ defined in (\ref{G_k}). To obtain all the ingredients, we have to use the version of relation (\ref{reqgen1}) for this system, translated to the supersymmetric extension of $\mathfrak{C}_{N+k}^\pm$, which is \begin{equation} \label{req} \mathcal{G}_{\pm(3l+n)}^{(1)}=(-\mathcal{G}_{\pm 3}^{(1)})^l\mathcal{G}_{\pm n}^{(1)} \,,\qquad n=3,4,5\,,\qquad l=0,1,\ldots\,.
\end{equation} Then we generate the even part of the superalgebra\,: \begin{equation} \label{slr1} [\mathcal{H},\mathcal{G}_{\pm n}^{(1)}]=\pm 2n \mathcal{G}_{\pm n}^{(1)}\,, \qquad [\mathcal{H},\mathcal{G}_{\pm l}^{(0)}]=\pm 2l\mathcal{G}_{\pm l}^{(0)}\,, \end{equation} \begin{equation} \label{slr3} [\mathcal{G}_\alpha^{(1)},\mathcal{G}_\beta^{(1)}]=P_{\alpha,\beta} \mathcal{G}_{\alpha+\beta}^{(1)}+ M_{\alpha,\beta}\mathcal{G}_{\alpha+\beta}^{(0)}\,, \quad \alpha,\beta=\pm1,\ldots,\pm5\,, \end{equation} \begin{equation} \label{slr4} [\mathcal{G}_\alpha^{(0)},\mathcal{G}_\beta^{(1)}]=\Pi_-(F_{\alpha,\beta}\mathcal{G}_{\alpha+\beta}^{(1)}+ N_{\alpha,\beta}\mathcal{G}_{\alpha+\beta}^{(0)})\,, \quad \alpha=1,2\,,\quad\beta=\pm1,\ldots,\pm5\,, \end{equation} \begin{equation} [\mathcal{G}_{-1}^{(0)},\mathcal{G}_{+1}^{(0)}]=2\Pi_-,\qquad [\mathcal{G}_{\pm 1}^{(0)},\mathcal{G}_{\mp 2}^{(0)}]= \pm6\mathcal{G}_{\pm 1}^{(0)}\,, \qquad [\mathcal{G}_{-2}^{(0)},\mathcal{G}_{+2}^{(0)}]=8\Pi_-(\mathcal{H}-5)\,, \end{equation} where we put $\mathcal{G}_0^{(1)}=\mathcal{G}_0^{(0)}=1$ and $\Pi_-=\frac{1}{2}(1-\sigma_3)$; the structure functions $P_{\alpha,\beta}$, $F_{\alpha,\beta}$, $M_{\alpha,\beta}$ and $N_{\alpha,\beta}$ are polynomials in $\mathcal{H}$, some of which reduce to numerical coefficients, and their explicit form is listed in Appendix \ref{list}. We note that in Eqs. (\ref{slr3}) and (\ref{slr4}) the operators $\mathcal{G}_{\pm n}^{(1)}$ with $1<n\leq 7$ can appear, where for $n>5$ we use relation (\ref{req}) (admitting $\mathcal{G}_{\pm3}^{(0)}$ as coefficients in the algebra). Additionally, we note that the operators $\mathcal{G}_{\pm m}^{(0)}$ with $m>2$, in both equations where they appear, are absorbed into the generators $\mathcal{G}_{\pm m}^{(1)}$.
For the eigenstates we have the relations \begin{eqnarray} \label{C3psi} &\Psi_{3j+k}^\pm= (\mathcal{G}_{+3}^{(1)})^j\Psi_k^\pm\,, \qquad \Psi_{0}=\mathcal{G}_{-3}^{(1)}\Psi_1^\pm,\qquad j=1,2,\ldots\,,\qquad k=1,2,3\,,&\\ \label{C2psi} &\Psi_{j}^\pm= (\mathcal{G}_{+ 1}^{(1)})^j\Psi_1^\pm\,, \qquad \mathcal{G}^{(1)}_{\pm 1}\Psi_{0}=\mathcal{G}_{-1}^{(1)}\Psi_{1}^\pm=0\,.& \end{eqnarray} Eq. (\ref{C3psi}) shows that we can connect the isolated ground state with the equidistant part of the spectrum using $\mathcal{G}_{\pm 3}^{(1)}$, which are not spectrum-generating operators. Eq. (\ref{C2psi}) indicates that the states in the equidistant part of the spectrum can be connected by $\mathcal{G}_{\pm 1}^{(1)}$, but this part of the spectrum cannot be connected by them with the ground state. Thus we have to use a combination of both pairs of these operators. On the other hand, the odd operators $\mathcal{Q}_a^z$ satisfy relations (\ref{SUSY}), where $\mathbb{P}_0=\mathcal{H}$, and, therefore, we again have the linear $\mathcal{N}=2$ Poincar\'e supersymmetry as a sub-superalgebra generated by $\mathcal{H}$, $\mathcal{Q}^0_a$ and $\Sigma$. The general anti-commutation structure is given by \begin{equation} \label{susy3} \{\mathcal{Q}_a^{n},\mathcal{Q}_b^{m}\}=\delta_{ab}(\mathbb{C}_{nm}+(\mathbb{C}_{nm})^{\dagger})+ i\epsilon_{ab}(\mathbb{C}_{nm}-(\mathbb{C}_{nm})^{\dagger})\,, \end{equation} where $\mathbb{C}_{n,m}=\mathbb{C}_{n,m}(\mathcal{G}_{|n-m|}^{(1)},\mathcal{G}_{|n-m|}^{(0)})$ is in general a linear combination of the indicated ladder operators with coefficients that are polynomials in $\mathcal{H}$, $\mathcal{G}_{\pm3}^{(0)}$ and $\sigma_3$. Some of these relations define ladder operators, see Eq. (\ref{Cn+kQN}). For $n=N=3$ and $m=-1,-2$ we can use (\ref{superC'}), knowing that $\mathcal{Q}_{a}'^{z}=\mathcal{Q}_{a}^{3-z}$, see Sec. \ref{susyextension}. For the structure of the anti-commutation relations with other combinations of indices, see Appendix \ref{list}.
To complete the description of the generated nonlinear supersymmetric structure, we write down the commutators between the independent lowering operators and the supercharges\,: \begin{equation} \label{susy4} [\mathcal{G}_{-m}^{(1)},\mathcal{Q}_a^{n}]=\mathbb{Q}_{m,n}^{1}(\mathcal{Q}_{a}^{n-m}+i \epsilon_{ab}\mathcal{Q}_b^{n-m})+\mathbb{Q}_{m,n}^{2}(\mathcal{Q}_{a}^{m+n}-i \epsilon_{ab}\mathcal{Q}_b^{m+n})\,, \end{equation} \begin{equation} \label{susy5} [\mathcal{G}_{-m}^{(0)},\mathcal{Q}_a^{n}]=\mathbb{G}_{m,n}^{1}(\mathcal{Q}_{a}^{n-m}+i \epsilon_{ab}\mathcal{Q}_b^{n-m})+\mathbb{G}_{m,n}^{2}(\mathcal{Q}_{a}^{m+n}- i\epsilon_{ab}\mathcal{Q}_b^{m+n})\,. \end{equation} Here $\mathbb{Q}_{m,n}^{j}$ and $\mathbb{G}_{m,n}^{j}$ with $j=1,2$ are polynomials in $\mathcal{H}$ or numerical coefficients, some of which are listed in the sets of general commutation relations in Appendix \ref{apen-comm}, while others are given explicitly in Appendix \ref{list}. Since the odd fermionic operators are Hermitian, we have $[\mathcal{G}_{+m}^{(1)},\mathcal{Q}_a^{z} ]=-([\mathcal{G}_{- m}^{(1)}, \mathcal{Q}_a^{z} ])^{\dagger}$, and we do not write these commutators explicitly. In matrix language, Eq. (\ref{susy4}) can be written as \begin{equation} [\mathcal{G}_{-m}^{(1)},\mathcal{Q}_a^{n}]= \left( \begin{array}{cc} 0& \mathfrak{S}_{n+m}^- \\ \mathfrak{S}_{n-m}^+ & 0 \end{array} \right)\,, \end{equation} and an important point here is that the number $n-m$ can take values less than $-2$ and $n+m$ can be greater than $5$, whereas the fermionic operators are defined with the index $z$ taking integer values in the interval $I=[-2,+5]$. It is necessary to remember that we cut the series of $\mathfrak{S}_z^\pm$ because the operators outside the defined interval reduce to combinations (products) of other basic operators.
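The Hermitian-conjugation rule used above, $[\mathcal{G}_{+m}^{(1)},\mathcal{Q}_a^{z}]=-([\mathcal{G}_{-m}^{(1)},\mathcal{Q}_a^{z}])^\dagger$, is a general consequence of $\mathcal{G}_{+m}^{(1)}=(\mathcal{G}_{-m}^{(1)})^\dagger$ and the Hermiticity of the supercharges. A quick numerical sketch, with random finite matrices standing in for the operators:

```python
import numpy as np

rng = np.random.default_rng(1)

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

n = 8
# Generic complex matrix playing the role of a lowering operator G_{-m}
G = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
# Hermitian matrix playing the role of a supercharge Q_a^z
Q = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
Q = Q + Q.conj().T

# [G^dagger, Q] = -([G, Q])^dagger whenever Q is Hermitian
assert np.allclose(comm(G.conj().T, Q), -comm(G, Q).conj().T)
```

The identity holds for any pair of operators of this type, which is why only the lowering-side commutators need to be listed.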
In this way, we formally apply the definition of $\mathfrak{S}_z^\pm$ outside of the indicated interval and use the relations in Appendix \ref{apen-red} to show that these ``new'' generated operators reduce to combinations of operators with index values in the interval $I$ and of the generators $\mathfrak{C}_{\pm 3}$. Finally, the subsets which produce closed sub-superalgebras here are those defined by $\mathcal{U}_{0,z}^{(1)}$ in (\ref{U1}), with $z=1,\ldots,5$, in addition to $\mathcal{I}_{N,-k}^{(1)}$ given in (\ref{In11}) with $k=1,2$. With respect to the positive scheme, the super-Hamiltonian is given by $\mathcal{H}'=\text{diag}\, (L_{(+)}-3,L_0-3)$. It has two positive energy singlet states of the form $(0,\psi_n)^t$ with $n=1,2$; besides, there are two ground states $\Psi_0^+=(\phi_0,\psi_0)^t$ and $\Psi_0^-=\sigma_3\Psi_0^+$ of energy $-2$. According to the construction from the previous section, the fermionic operators here are $\mathcal{Q}^{'z}_a=\mathcal{Q}^{3-z}_a$, and the basic subsets which generate closed sub-superalgebras are $\mathcal{U}_{0,k}'^{(1)}$ and $\mathcal{I}_{N,l}'^{(1-2\theta(l))}$ with $k=3,4,5$ and $l=-1,-2,4,5$. One can note that, considering $\mathcal{G}_{\pm 3}^{(1)}$ as coefficients, the subset $\{\mathcal{H}, \mathcal{G}_{\pm 3}^{(1)},\sigma_3, \mathcal{Q}_a^{-2},\mathcal{Q}_a^{1},\mathcal{Q}_a^{4}, \mathbb{I} \} $ also generates a closed nonlinear superalgebraic structure. \section{Remarks} In fact, the construction in Sec. \ref{susyextension} offers more possibilities: in principle, the choice of the constant $\lambda_*$ in the Hamiltonian (\ref{Hlambda*}) can be modified in such a way that another pair of fermionic operators in the scheme (\ref{gencharge}) will be true integrals of the motion. As a result, the super-extended system will have a different spectrum. We schematically discussed this picture in the original work \textcolor{red}{[\cite{InzPly2}]}.
Another possibility is to choose $L_0 = L_{(-)}$ and $L_{[n]} = L_{(+)}$; as a consequence, the intertwining operators will be the ladder operators in (\ref{ladgen}). One can expect that the use of intermediate systems in the DCKA procedure will provide lower order intertwining operators; however, this is still an open problem. Finally, the discussion in these last two chapters involved AFF models with integer coupling constant $m(m+1)$, so the next natural step is to try to generalize to the case $\nu(\nu+1)$ with real $\nu$ equal to or greater than $-1/2$. This is the objective of the next chapter. \chapter{The Klein four-group and Darboux duality} \label{ChKlein} The invariance of the QHO eigenvalue problem under the discrete transformation $(x, E) \rightarrow (ix, -E)$ was the basis of the construction presented in the last two chapters. The presence of nonphysical eigenstates gives rise to the so-called Darboux duality, which was the key to building the spectrum-generating ladder operators for extended rational systems. In this chapter we demonstrate that the Schr\"odinger equation for the AFF model with $\nu\geq-1/2$ has an even larger discrete symmetry group, which will be responsible for the generalization of the Darboux duality for these systems. Such a discrete group has its particular consequences when it acts on eigenstates and (super)symmetry generators. With the generalization of the Darboux duality at hand, constructing spectrum-generating ladder operators for rational deformations of the general AFF models, as well as their nonlinear algebras, is straightforward. It is interesting to recall that when $\nu$ is a half-integer number, the Jordan states associated with confluent Darboux transformations naturally enter the framework. In particular, some deformed systems undergo structural changes when we set $\nu = \ell - 1/2$ with $\ell=0,1,\ldots\,$.
The results contained in this chapter were reported in our work \textcolor{red}{[\cite{InzPly3}]}. \section{The Klein four-group in AFF model} \label{secK4group} Parameterizing the coupling constant in the parabolic form $g=\nu(\nu+1)$, which is symmetric with respect to $\nu=-\frac{1}{2}$, we artificially induce the invariance of the equation \begin{equation} \label{timedependent1} \left(-\frac{\partial^2}{\partial x^2}+ x^2+ \frac{\nu(\nu+1)}{x^2}\right)\psi=i\frac{\partial}{\partial t}\psi \end{equation} with respect to the transformation $\rho_1:\nu\rightarrow-\nu-1$. Equation (\ref{timedependent1}) is also invariant with respect to the transformation $\rho_2:(x,t)\rightarrow(ix,-t)$. These two transformations generate the Klein four-group as a symmetry of equation (\ref{timedependent1}): $K_4\simeq \mathbb Z_2\times \mathbb Z_2=\{1,\rho_1,\rho_2,\rho_1\rho_2=\rho_2\rho_1\}$, where each element is its own inverse. At the level of the stationary Schr\"odinger equation, the action of $\rho_2$ reduces to the transformation $\rho_2:(x,E_{\nu,n})\rightarrow(ix,-E_{\nu,n})$, which means that $\rho_2$ is a completely broken $\mathbb Z_2$ symmetry, for which the transformed eigenstates $\rho_2(\psi_{\nu,n})=\psi_{\nu,n}(ix)$ with eigenvalues $-E_{\nu,n}$ are nonphysical solutions. The transformation $\rho_1$ at the same level of the stationary Schr\"odinger equation implies that the energy eigenvalues change as $E_{\nu,n}\rightarrow\rho_1(E_{\nu,n})=E_{-\nu-1,n}=4n-2\nu+1$. The difference between the original energy level and the transformed one is $E_{\nu,n}-E_{-\nu-1,n}=\Delta E\cdot (\nu+1/2)$, where $\Delta E=4$ is the distance between two consecutive levels. So, if we take $\nu=\ell-1/2$ with $\ell=0,1,\ldots$, we obtain $\rho_1(E_{\ell-1/2,n})= E_{\ell-1/2,n-\ell}$, and find that physical energy levels with $n\geq\ell$ are transformed into physical energy levels but lowered by $4\ell$.
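The bookkeeping of how $\rho_1$ acts on the spectrum can be checked with elementary symbolic algebra. The sketch below assumes the AFF spectrum in the normalization implied by the text, $E_{\nu,n}=4n+2\nu+3$, so that $E_{-\nu-1,n}=4n-2\nu+1$:

```python
import sympy as sp

nu, n, ell = sp.symbols('nu n ell')

# Assumed AFF spectrum: E_{nu,n} = 4n + 2nu + 3
E = lambda v, m: 4*m + 2*v + 3
rho1 = lambda v: -v - 1          # rho_1 : nu -> -nu - 1

# rho_1 is an involution: rho_1^2 = 1
assert sp.simplify(rho1(rho1(nu)) - nu) == 0

# Transformed level and the stated difference Delta E * (nu + 1/2), Delta E = 4
assert sp.expand(E(rho1(nu), n) - (4*n - 2*nu + 1)) == 0
assert sp.expand(E(nu, n) - E(rho1(nu), n) - 4*(nu + sp.Rational(1, 2))) == 0

# For nu = ell - 1/2: rho_1(E_{ell-1/2,n}) = E_{ell-1/2,n-ell}, i.e. lowered by 4*ell
half = ell - sp.Rational(1, 2)
assert sp.expand(E(rho1(half), n) - E(half, n - ell)) == 0
```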
Under the action of $\rho_1$, the eigenstates in (\ref{AFFless}) are transformed into the functions \begin{eqnarray} \label{psi-nu-1} &\rho_1(\psi_{\nu,n})= \sqrt{\frac{n!}{\Gamma(n-\nu+1/2)}}x^{-\nu}L_n^{(-\nu-1/2)}(x^2)e^{-x^2/2}:=\psi_{-\nu-1,n} \,.& \end{eqnarray} In the case of $\nu\neq \ell-1/2$, the functions (\ref{psi-nu-1}) do not satisfy the boundary condition at $x=0$ because of the presence of the factor $x^{-\nu}$, and they are nonphysical, formal eigenstates of $\mathcal{H}_{\nu}$. The case of $\nu= \ell-1/2$ requires, however, a separate consideration. To analyze this case, we observe that \begin{eqnarray} & \rho_1(\psi_{\ell-1/2,n})=\sqrt{\frac{n!}{\Gamma(n-\ell+1)}}x^{-\ell+1/2}L_n^{(-\ell)}(x^2)e^{-x^2/2}\,. & \end{eqnarray} Due to the poles of the Gamma function, this expression vanishes when $n<\ell$, i.e., $\rho_1$ annihilates the first $\ell$ eigenstates of the system. On the other hand, the identity \begin{equation} \frac{(-\eta)^{m}}{m!}L_{n}^{(m-n)}(\eta)=\frac{(-\eta)^{n}}{n!}L_{m}^{(n-m)}(\eta)\,, \end{equation} with integer $m$ and $n$, which follows from (\ref{Laguerre}), allows us to write $\rho_{1}(\psi_{\ell-1/2,n})=(-1)^{\ell}\psi_{\ell-1/2,n-\ell}$ when $n\geq\ell$, which is consistent with the change of the energy eigenvalues under the transformation $\rho_1$. In conclusion, $\rho_1$ corresponds to a symmetry which is just the identity operator when $\ell=0$, while for $\ell\geq 1$ this symmetry annihilates the $\ell$ lowest physical eigenstates, but restores them by acting on the higher eigenstates~\footnote{This is similar to the picture of Hilbert's hotel after the departure of the clients from the first $\ell$ rooms with numbers $n=0,\ldots,\ell-1$, with simultaneous translation of the clients from the rooms with numbers $n=\ell,\ell+1,\ldots$, into the rooms with numbers $n-\ell$.
Note that the power $(\mathcal{C}_\nu^-)^{\ell}$ of the lowering generator of conformal symmetry with $\nu=\ell-\frac{1}{2}$ acts on physical eigenstates in a way similar to $\rho_1$, but it violates the normalization of the states.}. {}From this point of view, in the case of half-integer $\nu$, the transformation $\rho_1$ does not produce anything new. Nevertheless, we can also construct a finite set of nonphysical solutions of the same singular nature as in (\ref{psi-nu-1}), given by the functions \begin{eqnarray} \label{halfintegernonphysical} & \psi_{-\ell-1/2,k}:= \rho_1\left(\sqrt{\frac{\Gamma(k+\ell+1)}{k!}}\psi_{\ell-1/2,k}\right)= x^{-\ell+1/2}L_k^{(-\ell)}(x^2)e^{-x^2/2}, \quad k=0,\ldots,\ell-1, & \end{eqnarray} singular at $x=0$, whose corresponding eigenvalues are $E_{-\ell-1/2,k}=4k-2\ell+2$. \vskip0.1cm We note that the combined transformation $\rho_1\rho_2(\psi_{\nu,n})$ always produces nonphysical solutions for all values of $\nu$ due to the presence of $\rho_2$. Wave eigenfunctions transformed by the $K_4$ generators $\rho_2$ and $\rho_1\rho_2$ diverge exponentially at infinity, and for the following consideration it is convenient to introduce a special common notation for them: $\psi_{r(\nu),n}(x)$, with $r(\nu)=-\nu-1$ for functions that vanish at infinity, and $\psi_{r(\nu),-n}(x)=\psi_{r(\nu),n}(ix)$ for functions that diverge when $x\rightarrow \infty$. In the case of $\nu=\ell-1/2$, $\ell\geq 1$, we have $E_{-\ell-1/2,\ell-n-1}=-E_{-\ell-1/2,n}$ for $n<\ell$, and one finds that (\ref{halfintegernonphysical}) and their partners in the sense of Eq. (\ref{tildepsi}) are related with the nonphysical eigenstates produced by $\rho_2$ and their partners, \begin{equation} \label{tilderelation} \psi_{-\ell-1/2,\ell-1-n}\propto \widetilde{\psi}_{-\ell-1/2,-n}\,,\qquad \widetilde{\psi}_{-\ell-1/2,n}\propto \psi_{-\ell-1/2,-\ell+1-n}\,. \end{equation} Now, let us study the quantum conformal symmetry of the AFF model from the perspective of the discrete Klein four-group.
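The Laguerre identity quoted above, which underlies the relation $\rho_{1}(\psi_{\ell-1/2,n})=(-1)^{\ell}\psi_{\ell-1/2,n-\ell}$, can be verified directly for a range of integer indices; a brief symbolic sketch:

```python
import sympy as sp

eta = sp.symbols('eta')

# Check (-eta)^m/m! * L_n^{(m-n)}(eta) == (-eta)^n/n! * L_m^{(n-m)}(eta)
# for integer indices, including negative upper parameters of the
# generalized Laguerre polynomials
for m in range(5):
    for n in range(5):
        lhs = (-eta)**m / sp.factorial(m) * sp.assoc_laguerre(n, m - n, eta)
        rhs = (-eta)**n / sp.factorial(n) * sp.assoc_laguerre(m, n - m, eta)
        assert sp.expand(lhs - rhs) == 0
```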
Keep in mind that under these transformations, the $\mathfrak{sl}(2,\mathbb R)$ ladder operators $\mathcal{C}_\nu^\pm$ introduced in (\ref{Ladderdimensionless}) change as \begin{equation} \rho_1(\mathcal{C}_\nu^{\pm})=\mathcal{C}_\nu^{\pm}\,, \qquad \rho_2(\mathcal{C}_\nu^{\pm})=\rho_1\rho_2(\mathcal{C}_\nu^{\pm})=-\mathcal{C}_\nu^{\mp}\,, \end{equation} so what we have here is a group of automorphisms of the conformal algebra. Knowing that $\mathcal{C}_\nu^-$ annihilates the ground state, we can use the $K_4$ group to obtain the kernels of $\mathcal{C}_\nu^\pm$ in the case $\nu\geq-1/2$, \begin{eqnarray} \label{kerCnu} \ker\,\mathcal{C}_{\nu}^-=\text{span}\,\{ \psi_{\nu,0},\psi_{-\nu-1,0} \}\,,\qquad \ker\,\mathcal{C}_{\nu}^+=\text{span}\,\{ \psi_{\nu,-0},\psi_{-\nu-1,-0} \}\,. \end{eqnarray} For $\nu=-1/2$, the kernels of $\mathcal{C}_{-1/2}^\pm$ are similar to (\ref{kerCnu}), but with the states $\psi_{-\nu-1,0}$ and $\psi_{-\nu-1,-0}$ replaced, respectively, by the Jordan states \begin{eqnarray} \label{Jordan0} &\Omega_{-1/2,0}=\left(a-\frac{1}{2}\ln x\right)\psi_{-1/2,0}\,,\qquad \Omega_{-1/2,-0}=\left(b-\frac{1}{2}\ln x\right)\psi_{-1/2,-0}\,,& \end{eqnarray} where $a$ and $b$ are constants. In the context of the Darboux transformations, the equations in (\ref{kerCnu}) indicate that the second order differential operators $-\mathcal{C}_\nu^\pm$ are generated by the choice of the seed states $(\psi_{\nu,\mp0},\psi_{-\nu-1,\pm0})$, and by means of Eq. (\ref{Darstates}) we can write the equalities \begin{eqnarray} \label{WrC-} \mathcal{C}_\nu^\mp\phi_{r(\nu),z}=-\frac{W(\psi_{\nu,\pm0},\psi_{-\nu-1,\pm0}, \phi_{r(\nu),z})}{W(\psi_{\nu,\pm0},\psi_{-\nu-1,\pm0})} \,, \end{eqnarray} where $\phi_{r(\nu),z}$ with $z=\pm n$, $n\in \mathbb N$, corresponds to an eigenstate or a Jordan state of $L_\nu$. The Wronskian form of these equalities is useful to find the action of the ladder operators on the states $\widetilde{\psi}_{r(\nu),\pm 0}$ and $\breve{\Omega}_{-1/2,0}$.
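The denominator in (\ref{WrC-}) can be computed in closed form. A sketch assuming the standard unnormalized forms $\psi_{\nu,0}\propto x^{\nu+1}e^{-x^2/2}$ and $\psi_{-\nu-1,0}\propto x^{-\nu}e^{-x^2/2}$ (normalization constants dropped, so the result holds up to those factors) recovers $W(\psi_{\nu,0},\psi_{-\nu-1,0})=-(2\nu+1)e^{-x^2}$, which vanishes only at $\nu=-1/2$, precisely where the two kernel states collide and the Jordan states take over:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
nu = sp.symbols('nu', real=True)

# Lowest eigenstate of L_nu and its rho_1-image, normalization constants dropped
psi_nu = x**(nu + 1) * sp.exp(-x**2/2)    # psi_{nu,0}
psi_r1 = x**(-nu) * sp.exp(-x**2/2)       # psi_{-nu-1,0}

# Wronskian W(f,g) = f g' - f' g
Wr = psi_nu*sp.diff(psi_r1, x) - sp.diff(psi_nu, x)*psi_r1
assert sp.simplify(Wr + (2*nu + 1)*sp.exp(-x**2)) == 0
```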
Using some Wronskian identities from Appendix \ref{ApenWI}, specifically Eqs. (\ref{ide2}) and (\ref{tech1}), as well as the relations \begin{equation} \label{tools1} W(\psi_{\nu,\pm0},\psi_{-\nu-1,\pm0})=-(2\nu+1) e^{\mp x^2}\,,\qquad W(\psi_{-1/2,\pm0},\Omega_{-1/2,\pm0})=e^{\mp x^2}\,, \end{equation} one can find that \begin{eqnarray} \label{tools2} \mathcal{C}_\nu^-\widetilde{\psi}_{r(\nu),0}\propto\psi_{r(-\nu-1),-0}\,,\qquad \mathcal{C}_\nu^+\widetilde{\psi}_{r(\nu),-0}\propto\psi_{r(-\nu-1),0}\,,\\ \label{tools3} \mathcal{C}_{-1/2}^\mp\widetilde{\psi}_{-1/2,\pm0}\propto\Omega_{-1/2,\mp0}\,,\qquad \mathcal{C}_{-1/2}^\mp\breve{\Omega}_{-1/2,\pm0}\propto\psi_{-1/2,\mp0}\,. \end{eqnarray} So far, we have seen that the Jordan states should play some role in the case of half-integer $\nu$; however, let us first consider the general case. For this, we use (\ref{Dar-jor}) and the $\mathfrak{sl}(2,\mathbb R)$ algebra to prove the relations \begin{equation} \label{Jordann} \Omega_{r(\nu),\pm n}\propto(\mathcal{C}_{\nu}^{\pm})^n\Omega_{r(\nu),\pm 0}\,,\qquad \breve{\Omega}_{r(\nu),\pm n}\propto(\mathcal{C}_{\nu}^{\pm})^n\breve{\Omega}_{r(\nu),\pm 0}\,. \end{equation} Thus, the ladder operators act on these states in a way similar to how they act on eigenstates of $L_\nu$, but with a difference when $n=0$. When $\nu\not=-1/2$, we obtain the relations $ \mathcal{C}_{\nu}^{\pm}\Omega_{r(\nu),\mp 0}\propto \widetilde{\psi}_{r(-\nu-1),\pm 0} $ and $\mathcal{C}_{\nu}^{\pm}\breve{\Omega}_{r(\nu),\mp 0}\propto \Omega_{r(-\nu-1),\pm 0}$. Due to (\ref{tilderelation}), one can make the identification $\breve{\Omega}_{-\ell-1/2,\pm0}=\Omega_{-\ell-1/2,\mp(\ell-1)}$, so in the half-integer case $\nu=\ell-1/2$ with $\ell\geq 1$ we obtain \begin{equation} \label{ConJordan} \mathcal{C}^{\pm}_{\ell-1/2}\Omega_{\ell-1/2,\mp0}\propto\psi_{-\ell-1/2,\mp (\ell-1)}\,,\qquad \mathcal{C}^{\pm}_{\ell-1/2}\Omega_{-\ell-1/2,\mp0}\propto\psi_{\ell-1/2,\mp (\ell-1)}\,.
\end{equation} Acting on these relations by $(\mathcal{C}_{\ell-1/2}^\pm)^{\ell}$, we obtain zero, and conclude that \begin{eqnarray} \label{spanChalf} \begin{array}{ll} \ker (\mathcal{C}_{\ell-1/2}^\pm)^{\ell+k}&=\text{span}\{\psi_{\ell-1/2,\mp0},\ldots,\psi_{\ell-1/2,\mp(\ell+k-1)},\psi_{-(\ell-1/2)-1,\mp0},\ldots,\\ &\qquad\qquad\psi_{-(\ell-1/2)-1,\mp(\ell-1)},\Omega_{\ell-1/2,\mp0},\ldots,\Omega_{\ell-1/2,\mp(k-1)}\}\, \end{array} \end{eqnarray} for $k=1,2,\ldots$. The whole picture is summarized in Figure \ref{figure0}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.5]{figure1IsoDual.eps} \caption[Ladder operators action on eigenstates and Jordan states, Sec. 8.1 ]{\small{The action of the ladder operators depending on the value of $\nu$. Diagram \emph{a)} illustrates the case of half-integer $\nu=\ell-1/2$ with $\ell=1,\ldots,$ where it is shown how Jordan states can be related to eigenstates by the action of $\mathcal{C}_{\nu}^\pm$. Diagram \textit{b)} corresponds to non-half-integer values of $\nu$. In \textit{c)}, it is indicated how the case with $\nu=-1/2$ can be obtained from \textit{b)} by changing the corresponding states. The shapes with borders highlighted in blue (red) represent the states annihilated by $\mathcal{C}_{\nu}^-$ ($\mathcal{C}_{\nu}^+$).}} \label{figure0} \end{center} \end{figure} \section{Superconformal symmetry and the Klein four-group} \label{section3.2} Here, we inspect the action of the Klein four-group on a supersymmetric extension of the AFF model. To do so, we must pay attention to the intertwining operators $A_\nu^\pm$ and $B_\nu^\pm$ introduced in Chap. \ref{ChConformal}, Eqs. (\ref{Anu}) and (\ref{Bnu}) (with $\omega = 1$). Acting on them, the group produces \begin{eqnarray} \label{rho1} \rho_1(A^\mp_{\nu})=-B^\pm_{\nu-1}\,,\qquad \rho_1(B^\mp_{\nu})=-A^\pm_{\nu-1}\,,\\ \label{rho2} \rho_2(A^\pm_\nu)=-iB^\pm_\nu\,,\qquad \rho_2(B^\pm_\nu)=-iA^\pm_\nu \,.
\end{eqnarray} These relations are valid for $\nu>-1/2$, while for $\nu=-1/2$ the transformation $\rho_1$ reduces to the identity. The symmetry generators of the super-extended AFF model, namely $\{\mathcal{H}_{\nu}^{e},\mathcal{R}_\nu,\mathcal{C}_\nu^\pm,\mathcal{Q}_\nu^{a},\mathcal{S}_\nu^{b}\}$, were defined in Eqs. (\ref{Poincare1}), (\ref{Rnu}), (\ref{Snu}) and (\ref{Gnu}). The basic building blocks of these objects are the intertwining operators $A_\nu^\pm$ and $B_\nu^\pm$, so the role of the Klein four-group at the supersymmetric level is at hand. Nevertheless, before applying the relations (\ref{rho1})-(\ref{rho2}) to the supersymmetric generators, it is convenient to remember that the corresponding superalgebra (\ref{HRQ0})-(\ref{QSGG}) has the automorphism $f=f^{-1}$, which corresponds to the transformations $\mathcal{H}_{\nu}^{e}\rightarrow \mathcal{H}_{\nu}^{e}-4\mathcal{R}_{\nu}=\mathcal{H}_{\nu}^b$, $\mathcal{R}_{\nu}\rightarrow -\mathcal{R}_\nu$, $\mathcal{G}_{\nu}^\pm\rightarrow \mathcal{G}_{\nu}^{\pm}$, $\mathcal{Q}_\nu^1\rightarrow -\mathcal{S}_{\nu}^{1}$, $\mathcal{Q}_\nu^2\rightarrow \mathcal{S}_{\nu}^{2}$, $\mathcal{S}_\nu^1\rightarrow -\mathcal{Q}_{\nu}^{1}$, $\mathcal{S}_\nu^2\rightarrow \mathcal{Q}_{\nu}^{2}$.
Then, the action of $\rho_1$ gives us \begin{eqnarray} \label{gentransformed1} &\rho_1(\mathcal{H}_{\nu}^{e})=\sigma_1(\mathcal{H}_{\nu-1}^{e}-4\mathcal{R}_{\nu-1})\sigma_1\,,\qquad \rho_1(\mathcal{G}_{\nu}^\pm)=\sigma_1(\mathcal{G}_{\nu-1}^\pm)\sigma_1\,,&\\ &\rho_1(\mathcal{R}_{\nu})=\sigma_{1}(-\mathcal{R}_{\nu-1})\sigma_{1}\,,&\label{gentransformed1+}\\ &\rho_1(\mathcal{Q}_{\nu}^1)=\sigma_1(-\mathcal{S}_{\nu-1}^1)\sigma_1\,,\qquad \rho_1(\mathcal{Q}_{\nu}^2)=\sigma_1(\mathcal{S}_{\nu-1}^2)\sigma_1\,,\label{gentransformed1++}& \\ &\rho_1(\mathcal{S}_{\nu}^1)=\sigma_1(-\mathcal{Q}_{\nu-1}^1)\sigma_1\,,\qquad \rho_1(\mathcal{S}_{\nu}^2)=\sigma_1(\mathcal{Q}_{\nu-1}^2)\sigma_1\,,\label{gentransformed2}& \end{eqnarray} which in fact is a combination of the shift $\nu\rightarrow \nu-1$, the action of $f$ and the unitary rotation. The transformed generators (\ref{gentransformed1})-(\ref{gentransformed2}) still satisfy the same superconformal algebra, i.e., $\rho_1$ is an automorphism of the $\mathfrak{osp}(2|2)$ symmetry; however, the new generators describe another super-extended system: unlike the initial system $\mathcal{H}_\nu^{e}$, in the transformed one the $\mathcal{N}=2$ Poincar\'e supersymmetry is spontaneously broken in the case of $\nu>-1/2$, see Chap. \ref{ChConformal}. The only exception to this rule corresponds to the case $\nu=-1/2$, where the transformed Hamiltonian reduces to $\sigma_1\mathcal{H}_{-1/2}^{e}\sigma_1$, and represents a unitarily transformed super-Hamiltonian with the unbroken $\mathcal{N}=2$ Poincar\'e supersymmetry.
On the other hand, one can verify that when $\rho_1$ acts on the Hamiltonian $\mathcal{H}_\nu^{b}$, it produces $\sigma_1(\mathcal{H}_{\nu-1}^{e})\sigma_1$, and this time the $\mathcal{N}=2$ Poincar\'e supersymmetry of the system is changed from the spontaneously broken phase (in the case of $\nu>-1/2$) to the phase of unbroken supersymmetry, with the only exception of the system $\mathcal{H}_{-1/2}^b$ with unbroken supersymmetry, which is unitarily transformed into $\sigma_1\mathcal{H}_{-1/2}^{b}\sigma_1$. This action of the transformation $\rho_1$ on super-extended systems can be compared with the case of the non-extended AFF system, where $\rho_1$ acts identically on its Hamiltonian and on the generators of the conformal symmetry, though, as we saw, it acts nontrivially on the eigenstates of the system. On the other hand, the action of $\rho_2$ produces \begin{eqnarray} \label{gentransformedrho2} &\qquad \rho_2(\mathcal{H}_{\nu}^{e})= -\mathcal{H}_{\nu}^{b}\,,\quad \rho_2(\mathcal{G}_{\nu}^\pm)= -\mathcal{G}_{\nu}^{\mp}\,,\quad \rho_2(\mathcal{R}_{\nu})= \mathcal{R}_\nu\,,&\\ &\rho_2(\mathcal{Q}_\nu^1)= -i\mathcal{S}_{\nu}^{1}\,, \qquad \rho_2(\mathcal{Q}_\nu^2)= -i\mathcal{S}_{\nu}^{2}\,,&\\ &\rho_2(\mathcal{S}_\nu^1)= -i\mathcal{Q}_{\nu}^{1}\,,\qquad \rho_2(\mathcal{S}_\nu^2)= -i\mathcal{Q}_{\nu}^{2}\,.& \end{eqnarray} The transformed Hamiltonian operator is similar here to the Hamiltonian produced by the automorphism $f$, but multiplied by $-1$. This correlates with the anti-Hermitian nature of the transformed fermionic generators of the superalgebra. Accordingly, the spectrum of the transformed matrix Hamiltonian is negative, not bounded from below, and each of its levels is doubly degenerate for $\nu\geq-1/2$.
In correspondence with the described picture, the application of the combined transformation $\rho_2\rho_1$ is just another automorphism of the superconformal algebra (\ref{HRQ0})-(\ref{anti2}), which produces anti-Hermitian odd generators, and $\rho_2\rho_1(\mathcal{H}_{\nu}^{e})=\sigma_1(-\mathcal{H}_{\nu-1}^{e})\sigma_1$. The discrete spectrum of the transformed Hamiltonian is not bounded from below and is given by the numbers $\mathcal{E}_n=-4n$, $n=0,1,\ldots$, where each negative energy level is doubly degenerate, while the non-degenerate zero energy level corresponds to the state $(\psi_{\nu,0},0)^t$. \section{Dual Darboux schemes} \label{Mirror} With the new set of nonphysical solutions, in this section we extend the idea of dual schemes for the AFF model with $\nu \geq-1/2$. As we have shown in Sec. \ref{secK4group}, the case in which $\nu$ takes half-integer values is special, because the Jordan states become relevant through the properties of the conformal symmetry generators\footnote{ The operators $\mathcal{C}_\nu^\pm$ can be interpreted as the second order intertwining operators associated with the seed states $(\psi_{-\nu-1,0},\psi_{\nu,0})$ for $\nu>-1/2$, and with the confluent scheme $(\Omega_{-1/2,0},\psi_{-1/2,0})$ when $\nu=-1/2$.}, which are simultaneously the ladder operators for the corresponding AFF systems, see equation (\ref{ConJordan}). For this reason, we start first with the case where $\nu$ is not a half-integer. Let us choose a generic set of physical and nonphysical eigenstates of $L_{\nu}$ as seed states, \begin{eqnarray} \label{unioncollection} \{\alpha\}=(\psi_{\nu,k_1},\ldots,\psi_{\nu,k_{N_1}},\psi_{-\nu-1,l_1},\ldots,\psi_{-\nu-1,l_{N_2}}) \,,\qquad k_{i},l_{j}= \pm0,\pm1,\ldots\,, \end{eqnarray} where $i=1,\ldots,N_1$ and $j=1,\ldots,N_2$, and, for simplicity, we suppose that $|k_1|<\ldots<|k_{N_1}|$ and $|l_1|<\ldots<|l_{N_2}|$.
Let us assume that in the scheme (\ref{unioncollection}) there are no repeated states and that all $k_i$ and $l_j$ carry the same sign. Let us also define the index number \begin{equation} n_N=\max\,(|k_1|,\ldots,|k_{N_1}|,|l_1|,\ldots,|l_{N_2}|)\,, \end{equation} which can correspond to a state with index $\nu$ or $-\nu-1$. By means of the algorithm described in Appendix \ref{DualAFF1} one can show that \begin{eqnarray} \label{eqschemes3} &W(\{\alpha\})=e^{-(n_{N}+1)x^2}W(\{\Delta_-\})\,,&\\\nonumber &\{\Delta_-\}:=(\psi_{-\nu-1,-0},\psi_{\nu,-0},\ldots,\check{\psi}_{-\nu-1,-r_i}, \check{\psi}_{\nu,-s_j},\ldots,\psi_{-\nu-1,-n_N},{\psi}_{\nu,-n_N} )\,,& \end{eqnarray} is satisfied, where the marked states $\check{\psi}_{-\nu-1,-r_i}$ and $\check{\psi}_{\nu,-s_j}$, with $r_i=n_{N}-k_i$ and $s_j=n_{N}-l_j$, are omitted from the set $\{\Delta_-\}$. Conversely, if $k_i$ and $l_j$ carry the minus sign, we have the equality \begin{eqnarray} \label{eqschemes4} &W(\{\alpha\})=e^{(n_{N}+1)x^2}W(\{\Delta_+\})\,,&\\\nonumber &\{\Delta_+\}:=(\psi_{-\nu-1,0},\psi_{\nu,0},\ldots,\check{\psi}_{-\nu-1,r_i}, \check{\psi}_{\nu,s_j},\ldots,{\psi}_{-\nu-1,n_N},{\psi}_{\nu,n_N} )\,,& \end{eqnarray} where now $r_i=n_N-|k_i|$ and $s_j=n_N-|l_j|$. These relations are also valid if one of the numbers ${N_1}$ or ${N_2}$ is equal to zero, which means that the corresponding scheme contains only states of the same kind with respect to the first index, $-\nu-1$ or $\nu$, respectively. When considering $\nu=\ell-1/2$ with $\ell=0,1,2,\ldots$, some repeated states could appear due to $\rho_{1}(\psi_{\ell-1/2,n})=(-1)^{\ell}\psi_{\ell-1/2,n-\ell}$, and one could then expect the Wronskian to vanish. However, in the general case this object takes the form $\Lambda(\nu) f(x;\nu)$, and it is only the overall factor $\Lambda(\nu)$ that vanishes in these special cases (see the example (\ref{W(nu2-nu-1,2)}) below).
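The index bookkeeping of the mirror rule can be packaged in a small routine. The helper below is hypothetical (not from the text) and implements the rule of Eq. (\ref{eqschemes3}) for non-negative seed indices; the $\pm0$ distinction of the text is not kept, so $-0$ is printed as $0$:

```python
# Hypothetical helper: given seed indices k_i (states psi_{nu,k_i}) and
# l_j (states psi_{-nu-1,l_j}), all non-negative, build the dual negative
# scheme, in which the entries r_i = n_N - k_i (family -nu-1) and
# s_j = n_N - l_j (family nu) are omitted.
def dual_scheme(k_indices, l_indices):
    n_N = max(list(k_indices) + list(l_indices))
    omit_r = {n_N - k for k in k_indices}
    omit_s = {n_N - l for l in l_indices}
    scheme = []
    for n in range(n_N + 1):
        if n not in omit_r:
            scheme.append(('-nu-1', -n))
        if n not in omit_s:
            scheme.append(('nu', -n))
    return scheme, n_N + 1   # dual scheme and the exponent in exp(-(n_N+1)x^2)

# Seed (psi_{nu,2}, psi_{-nu-1,2}): four dual states and prefactor exp(-3x^2),
# cf. Eq. (Interpoeg)
scheme, power = dual_scheme([2], [2])
assert scheme == [('-nu-1', -1), ('nu', -1), ('-nu-1', -2), ('nu', -2)]
assert power == 3

# Seed (psi_{nu,2}, psi_{nu,3}): six dual states and exp(-4x^2),
# cf. Eq. (Adlerkreineg)
scheme, power = dual_scheme([2, 3], [])
assert len(scheme) == 6 and power == 4
```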
To obtain a deformed AFF system with the potential modified by $ -2 (\ln f (x; \nu))'' $ for half-integer $ \nu $, as well as its dual scheme, we will have relations analogous to (\ref{eqschemes3}) and (\ref{eqschemes4}), but with each state of the form $ \psi_{-\nu-1, \pm(\ell + k)} $ replaced by $ \Omega_{\ell-1 / 2, \pm k} $, which means that we are dealing with the confluent Darboux transformation, see Appendix \ref{AFFhalf} for a detailed derivation. The general rules of the Darboux duality can be summarized and better understood with the examples presented diagrammatically in Fig. \ref{Kleinfigure1}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.28]{figure2IsoDual.eps} \caption[Two mirror diagrams, Sec. 8.3]{\small{Two ``mirror diagrams'' corresponding to dual schemes for the conformal mechanics model. The numbers $\pm n$ indicate the states $\psi_{\nu,\pm n}$, and symbols $\pm\bar{n}$ correspond to the states $\psi_{-\nu-1,\pm n}$.} } \label{Kleinfigure1} \end{center} \end{figure} \vskip-0.5cm These types of diagrams are read in the same way as for the harmonic oscillator mirror diagram presented in Chap.
\ref{ChRQHO} and in this case they correspond to the following Wronskian relations: \begin{eqnarray} \label{Interpoeg} W(\psi_{-\nu-1,2},\psi_{\nu,2})=e^{-3x^2}W(\psi_{-\nu-1,-1},\psi_{\nu,-1},\psi_{-\nu-1,-2},\psi_{\nu,-2})\,,\\ \label{Adlerkreineg} W(\psi_{\nu,2},\psi_{\nu,3})=e^{-4x^2}W(\psi_{\nu,-0},\psi_{\nu,-1},\psi_{-\nu-1,-2},\psi_{\nu,-2},\psi_{-\nu-1,-3},\psi_{\nu,-3})\,, \end{eqnarray} whose explicit forms are \begin{eqnarray}\label{W(nu2-nu-1,2)} \begin{array}{ll} W(\psi_{\nu,2},\psi_{-\nu-1,2})=&(2\nu+1)e^{-x^2}\big(45 - 72 \nu +16 (-4x^6 + x^8) \\ & +8x^4(15 - 4 \nu (1 + \nu)) + \nu^2 (-7 +2 \nu (2 + \nu))\big), \end{array}\\ \label{W(nu2nu3)} \begin{array}{ll} W(\psi_{\nu,2},\psi_{\nu,3})=&e^{-x^2} x^{ 3 + 2 \nu} \big(16 x^8 - 32 x^6 (5 + 2 \nu) + 24 x^4 (5 + 2 \nu)^2 - \\ & 8 x^2 (3 + 2 \nu) (5 + 2 \nu) (7 + 2 \nu) + (3 + 2 \nu) (5 + 2 \nu)^2 (7 + 2 \nu)\big). \end{array} \end{eqnarray} The transformation which relates the AFF systems described by $L_\nu$ with $L_{\nu+m}$ can also be understood within this picture. Furthermore, using a diagram similar to those in Fig. \ref{Kleinfigure1}, one can show that the schemes $\{\Delta_+\}=(\psi_{r(\nu),0},\ldots,\psi_{r(\nu),m-1})$ and $\{\Delta_-\}=(\psi_{r(\nu),-0},\ldots,\psi_{r(\nu),-(m-1)})$ are dual. \section{Rationally deformed AFF systems} \label{Ladders} A rational deformation of the AFF model can be generated by taking a set of the seed states \begin{equation}\label{alpKA} \{\alpha_{KA}\}=(\psi_{\nu,l_1},\psi_{\nu,l_1+1},\ldots,\psi_{\nu,l_m},\psi_{\nu,l_m+1})\,, \end{equation} composed of $m$ pairs of neighbouring physical states. The Krein-Adler theorem \textcolor{red}{[\cite{Krein,Adler}]} guarantees that the resulting system described by the Hamiltonian operator of the form \begin{eqnarray} \label{deformed1} &L^{KA}_{(\nu,m)}=L_{\nu+m}+4m+\frac{F_\nu(x)}{Q_\nu(x)}& \end{eqnarray} is nonsingular on $\mathbb R^+$.
Here $F_\nu(x)$ and $Q_\nu(x)$ are real-valued polynomials, $Q_\nu(x)$ has no zeros on $\mathbb R^+$, its degree is two more than that of $F_\nu(x)$, and so the last rational term in (\ref{deformed1}) vanishes at infinity. The spectrum of the system (\ref{deformed1}) is the equidistant spectrum of the AFF model with the energy levels corresponding to the seed states removed. Consequently, any gap in the resulting system has the size $12+8k$, where $k=0,1,\ldots$, corresponding to $k+1$ merged adjacent pairs in the set (\ref{alpKA}) which produce a given gap. An example of this kind of system is generated by the scheme $(\psi_{\nu,2},\psi_{\nu,3})$, whose dual negative scheme is given by equation (\ref{Adlerkreineg}). \vskip0.1cm Another class of rationally extended AFF systems is provided by isospectral deformations generated by the schemes of the form \begin{equation}\label{isoscheme} \{\alpha_{iso}\}=(\psi_{\nu,-s_1},\ldots,\psi_{\nu,-s_m})\,, \end{equation} which contains the states of the form $\rho_2(\psi_{\nu,n}(x))=\psi_{\nu,n}(ix)$. As the functions used in this scheme are proportional to $x^{\nu+1}$ and do not have real zeros other than $x=0$, one obtains a system regular on $\mathbb R^+$ of the form \begin{eqnarray} \label{deformed2} &L^{iso}_{(\nu,m)}=L_{\nu+m}+2m+f_\nu(x)\,,& \end{eqnarray} where $f_\nu(x)$ is a rational function vanishing at infinity \textcolor{red}{[\cite{Grand}]}, and one can find that the potential of the system (\ref{deformed2}) is a convex function on $\mathbb R^+$. In this case the transformation does not remove or add energy levels, and, consequently, the initial system $\mathcal{H}_\nu$ and the deformed system (\ref{deformed2}) are completely isospectral superpartners. Some concrete examples of the systems (\ref{deformed2}) with integer values of $\nu$ were considered in the two previous chapters, see also \textcolor{red}{[\cite{CarInzPly}]}.
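The explicit Wronskian (\ref{W(nu2nu3)}) lends itself to a quick numerical cross-check. The sketch below assumes the standard normalization of the AFF eigenfunctions, $\psi_{\nu,n}(x)=x^{\nu+1}e^{-x^2/2}L_n^{(\nu+1/2)}(x^2)$ (a different normalization only changes an overall constant), and verifies that $W(\psi_{\nu,2},\psi_{\nu,3})$ is proportional to the quoted closed form; the chosen value of $\nu$ is illustrative.

```python
import numpy as np
from scipy.special import genlaguerre

nu = 0.25  # illustrative non-half-integer value

def psi(n, x):
    # assumed normalization of the AFF eigenfunctions:
    # psi_{nu,n}(x) = x^{nu+1} e^{-x^2/2} L_n^{(nu+1/2)}(x^2)
    return x**(nu + 1) * np.exp(-x**2 / 2) * genlaguerre(n, nu + 0.5)(x**2)

def dpsi(n, x):
    # derivative of psi_{nu,n} via the product rule
    L = genlaguerre(n, nu + 0.5)
    p = x**(nu + 1) * L(x**2)
    dp = (nu + 1) * x**nu * L(x**2) + 2 * x**(nu + 2) * L.deriv()(x**2)
    return (dp - x * p) * np.exp(-x**2 / 2)

def rhs(x):
    # closed form of W(psi_{nu,2}, psi_{nu,3}) quoted in the text,
    # up to an overall normalization-dependent constant
    a, b, c = 3 + 2*nu, 5 + 2*nu, 7 + 2*nu
    poly = 16*x**8 - 32*x**6*b + 24*x**4*b**2 - 8*x**2*a*b*c + a*b**2*c
    return np.exp(-x**2) * x**(3 + 2*nu) * poly

xs = np.array([0.5, 0.9, 1.4, 2.1])
W = psi(2, xs) * dpsi(3, xs) - dpsi(2, xs) * psi(3, xs)
ratios = W / rhs(xs)
# the ratio must be x-independent (a pure normalization constant)
assert np.allclose(ratios, ratios[0], rtol=1e-8)
```

The ratio comes out $x$-independent, and the quoted polynomial has no zeros at the sample points, in agreement with the nonsingularity on $\mathbb R^+$ guaranteed by the Krein-Adler theorem.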
\vskip0.1cm Consider yet another generalized Darboux scheme which allows us to interpolate between different rationally deformed AFF systems. For this we assume that the initial AFF system is characterized by the parameter $\nu=\mu+m$, where $-1/2<\mu\leq 1/2$ and $m$ can take any non-negative integer value. For these ranges of values of the parameter $\nu$, the real zeros of the functions $\psi_{\mu+m,n-m}$ are located between the zeros of $\psi_{-(\mu+m)-1,n}$, so that, in the spirit of the Krein-Adler theorem, we can consider the scheme \begin{equation} \label{Interpol} \{\gamma_{\mu}\}=(\psi_{-(\mu+m)-1,n_1},\psi_{(\mu+m),n_1-m},\ldots, \psi_{-(\mu+m)-1,n_{N}},\psi_{(\mu+m),n_{N}-m})\,, \end{equation} which includes $2N$ states and where we suppose that $n_i-m\geq 0$ for all $i=1,\ldots, N$. The DCKA transformation based on the set (\ref{Interpol}) produces the system \begin{equation} \label{Intersys} L_{\mu+m}^{def}:=L_{\mu+m}-2(\ln W(\gamma_\mu))''=L_{\mu+m}+4N+h_{\mu+m}(x)/q_{\mu+m}(x)\,, \end{equation} where the constant $4N$ is provided by the Gaussian factor in the Wronskian, and the last term is a rational function vanishing at infinity and having no poles on the whole real line, including the origin, if and only if $-1/2<\mu\leq1/2$, see Appendix \ref{apWron}. Let us analyze now some special values of $\mu$. \textit{The case $\mu=0$}\,: by virtue of the relations between Laguerre and Hermite polynomials mentioned in Chap.~\ref{ChConformal}, see equation (\ref{hermiteLaguerre}), in this case we obtain the systems which were generated in \textcolor{red}{[\cite{CarInzPly}]} and discussed in Chap. \ref{ChRQHO}; we refer to the systems (\ref{REIO}).
\textit{The case $\mu=1/2$}\,: we have here the relation \begin{equation}\rho_1(\psi_{m+1/2,n_i})=\psi_{-m-3/2,n_i}=(-1)^{m+1}\psi_{m+1/2,n_i-m-1}\,, \end{equation} due to which the scheme (\ref{Interpol}) transforms into \begin{equation} \{\gamma_{{1}/{2}}\}=(\psi_{1/2+m,n_1-m-1},\psi_{1/2+m,n_1-m},\ldots, \psi_{1/2+m,n_{N}-m-1},\psi_{1/2+m,n_{N}-m})\,, \end{equation} which corresponds to (\ref{alpKA}) with $l_i=n_i-m-1$. We additionally suppose that $n_i-m-1\not=n_{i-1}-m$, since otherwise the Wronskian vanishes. Note that when $\mu\not=1/2$, the image of the states $\psi_{\mu+m,n_i-m-1}$ under the Darboux mapping (\ref{Darstates}) is a physical state, but in the case $\mu=1/2$ such states are mapped into zero since the argument $\psi_{1/2+m,n_i-m-1}$ appears twice in the Wronskian of the numerator. \textit{The case $\mu=-1/2$}\,: this case was not included in the range of $\mu$ from the beginning due to the relation $\rho_1(\psi_{m-1/2,n_i})=\psi_{-m-1/2,n_i}=(-1)^{m}\psi_{m-1/2,n_i-m}$, which would imply the appearance of repeated states in the scheme (\ref{Interpol}) and the vanishing of the corresponding Wronskian. However, in Appendix \ref{apWron} we show that the limit relation $\lim_{\mu\to-1/2}{W(\{\gamma_\mu\})}/{(\mu+\frac{1}{2})^N}\propto W(\{\gamma\})$ is valid, where the scheme $\{\gamma\}$ is \begin{equation} \label{Interpol2} \{\gamma\}=(\psi_{m-1/2,n_1-m},\Omega_{m-1/2,n_1-m},\ldots,\psi_{m-1/2,n_N-m},\Omega_{m-1/2,n_N-m})\,, \end{equation} which corresponds to a non-singular confluent Darboux transformation \textcolor{red}{[\cite{Jordan1}]}. In light of this last comment, we conclude that when $-1/2\leq\mu<1/2$, the states $\psi_{-(\mu+m)-1,n_i}$ (and $\Omega_{m-1/2,n_i-m}$ in the case $\mu=-1/2$) are nonphysical states.
This means that only the physical states $\psi_{\mu+m,n_i-m}$ indicate the energy levels removed under the corresponding Darboux transformation, i.e., there are gaps of the minimum size $2\Delta E=8$, where $\Delta E=4$ is the distance between energy levels of the AFF model, which can merge to produce energy gaps of the size $8+4k$. On the other hand, when $\mu=1/2$, we have a typical Krein-Adler scheme with gaps of the size $12+8k$. To give an example, we put $m=0$, which means $\nu=\mu$, and consider the scheme $(\psi_{-\nu-1,2},\psi_{\nu,2})$, whose Wronskian is given in (\ref{W(nu2-nu-1,2)}), with $-1/2<\nu\leq1/2$; in the case of $\nu=-1/2$ we have the scheme $(\psi_{-1/2,2},\Omega_{-1/2,2})$. The potential of the rationally deformed AFF system generated by the corresponding Darboux transformation is shown in Figs. \ref{potential1} and \ref{potential12}. \begin{figure}[H] \begin{center} \includegraphics[scale=0.78]{figure3IsoDual.eps} \caption[First picture of potential and ground state behavior with respect to $\nu$, Sec. 8.3]{\small{On the left, the potential produced by the associated Darboux transformation applied to the AFF model is shown for the three indicated values of the parameter $\nu$, as a function of the dimensionless coordinate $x$. For $\nu=-1/2$, the corresponding limit is taken, and the resulting system has an attractive potential with a (not shown) potential barrier at $x=0$. For $\nu=0$, we obtain a rationally extended half-harmonic oscillator. The case $\nu=1/2$ corresponds to the Krein-Adler scheme $(\psi_{1/2,1},\psi_{1/2,2})$ with a gap equal to $12$. On the right, the ground states of the corresponding generated systems are shown as functions of the dimensionless coordinate $x$.}} \label{potential1} \end{center} \end{figure} \vskip-0.5cm \begin{figure}[H] \begin{center} \includegraphics[scale=0.78]{figure4Isodual.eps} \caption[Second picture of potential and ground state behavior with respect to $\nu$, Sec.
8.3]{\small{On the left, the potential of deformed systems with $\nu$ close to $1/2$ is shown. On the right, the ground states of the corresponding systems are displayed. }} \label{potential12} \end{center} \end{figure} As seen from the figures, the first minimum of the potential grows in its absolute value, its position moves to $0$, and it disappears at $\nu=1/2$, while the local maximum near zero also grows, its position approaches zero, and it goes to infinity in the limit. Moreover, the first maximum of the ground state vanishes as $\nu$ approaches the limit value $1/2$. Consistently with the described behavior of the potential, the image of the Darboux-transformed state $\psi_{\nu,1}$, which is the first excited state of the new system when $-1/2\leq \nu<1/2$, vanishes when $\nu\rightarrow 1/2$, the corresponding energy level disappears from the spectrum at $\nu=1/2$, and the size of the gap increases from $8$ to $12$. The three described selection rules for choosing the seed states correspond to the negative scheme (\ref{isoscheme}), which generates isospectral deformations, the positive Krein-Adler scheme (\ref{alpKA}), and the positive interpolating scheme (\ref{Interpol}). Then we can apply the Darboux duality to obtain the corresponding dual schemes for them. The positive and negative dual schemes will be used in the next section to construct complete sets of the spectrum-generating ladder operators for the rationally deformed conformal mechanics systems. \section{Intertwining and ladder operators}\label{SecIntLad} In this section we construct the intertwining and ladder operators of the rationally deformed systems obtained by means of the seed state selection rules detailed above. For simplicity, we do not consider here the schemes that contain Jordan states.
However, we have relations (\ref{Polly2}) and (\ref{spanChalf}), and relations (\ref{eqschemes3}) and (\ref{eqschemes4}), which were extended to such cases with the corresponding substitutions. This means that the properties summarized below are also valid for the schemes containing Jordan states. Suppose that the positive (negative) scheme possesses $n_+$ ($n_-$) seed states. Then the generated Hamiltonians $L_{(\pm)}$ satisfy the relation \begin{equation} \label{dualL} L_{(+)}-L_{(-)}=\Delta E(n_{n_+}+1)=2(n_++n_-)\,,\qquad \Delta E=4\,, \end{equation} where $n_{n_+}$ is the largest quantum number in the positive scheme. Let us denote by $A_{(+)}^\pm$ and $A_{(-)}^\pm$ the intertwining operators of the positive and negative schemes being differential operators of the orders $n_+$ and $n_-$, respectively. They satisfy the intertwining relations \begin{equation} \label{inter-relation} A_{(\pm)}^-L_{\nu}=L_{(\pm)}A_{(\pm)}^-\,,\qquad A_{(\pm)}^+L_{(\pm)}= L_{\nu}A_{(\pm)}^+\,. \end{equation} As the states $\widetilde{\psi}_{r(\nu),\pm n}$ behave asymptotically as $e^{\pm x^2/2}$, the states produced from them by application of the differential operators $A_{(\pm)}^-$ will carry the same exponential factor. Having this asymptotic behavior in mind, let us suppose that $\psi_{r(\nu),-l_*}$ and $\psi_{r(\nu),n_*}$ are some arbitrary states from the negative and positive scheme, respectively. By using (\ref{inter-relation}), we obtain the relations \begin{eqnarray} \label{complement1} &A_{(-)}^-\widetilde{\psi}_{r(\nu),-l_*}=A_{(+)}^-\rho_1(\psi_{r(\nu),n_{n_+}-l_*})\,,\qquad A_{(+)}^-\widetilde{\psi}_{r(\nu),n_*}=A_{(-)}^-\rho_1(\psi_{r(\nu),-(n_{n_+}-n_*)})\,,\quad& \end{eqnarray} on both sides of which the functions satisfy the same second order differential equation and have the same behaviour at infinity.
Note that in the dual schemes in (\ref{eqschemes3}) and (\ref{eqschemes4}), the indices $n_{n_+}-l_*$ and $-(n_{n_+}-n_*)$ are in correspondence with the indices $r_i$ and $s_j$ of the states omitted from the positive and negative scheme, respectively. This helps us to find that \begin{equation} \ker \big(A_{(-)}^+A_{(+)}^-\big)=(\psi_{\nu,0},\psi_{-\nu-1,0},\ldots, \psi_{\nu,n_{n_+}},\psi_{-\nu-1,n_{n_+}})=\ker\, (\mathcal{C}_{\nu}^-)^{n_{n_+}+1}\,, \end{equation} from which we obtain the identities \begin{equation} \label{powerC} A_{(-)}^+A_{(+)}^-=(-1)^{n_{n_+}+1-n_+}(\mathcal{C}_{\nu}^-)^{n_{n_+}+1}\,, \qquad A_{(+)}^+A_{(-)}^-=(-1)^{n_{n_+}+1-n_+}(\mathcal{C}_{\nu}^+)^{n_{n_+}+1}\,. \end{equation} Finally, to have a complete picture we write the relations \begin{eqnarray} \label{complement3} A_{(-)}^-\psi_{r(\nu),k'}=A_{(+)}^-\psi_{r(\nu),n_{n_+}+1+k'}\,,\qquad A_{(+)}^-\psi_{r(\nu),-k'}=A_{(-)}^-\psi_{r(\nu),-(n_{n_+}+1+k')}\,. \end{eqnarray} Note that in the case $\nu=0$, the first equation reduces to (\ref{relation-operators}). In the case of the dual schemes where $\nu=m-1/2$, similar relations are obtained but with $\psi_{-\mu-m-1,\pm n_i}$ and $\widetilde{\psi}_{-\mu-m-1,\pm n_i}$ replaced by $\Omega_{m-\frac{1}{2},\pm(n_i-m)}$ and $\breve{\Omega}_{m-\frac{1}{2},\pm(n_i-m)}$ when required.
With the help of the described intertwining operators, we can construct three types of ladder operators for $L_{(\pm)}$ which are given by: \begin{eqnarray} \label{ladders} &\mathcal{A}^{\pm}=A_{(-)}^-\mathcal{C}_{\nu}^\pm A_{(-)}^+\,, \quad \mathcal{B}^{\pm}=A_{(+)}^-\mathcal{C}_{\nu}^\pm A_{(+)}^+\,,\quad \mathcal{C}^{+}=A_{(-)}^-A_{(+)}^+\,,\quad \mathcal{C}^{-}=A_{(+)}^-A_{(-)}^+\,.& \end{eqnarray} Let us denote these operators in the compact form $\mathcal{F}_a^{\pm}=(\mathcal{A}^\pm, \mathcal{B}^\pm,\mathcal{C}^\pm)$, $a=1,2,3$, and use (\ref{dualL}) and (\ref{inter-relation}) to obtain the commutation relations \begin{eqnarray} \label{defsl2R} &[L_{(\pm)},\mathcal{F}_a^{\pm}]=\pm R_a\mathcal{F}_a^{\pm}\,,\qquad [\mathcal{F}_{a}^-,\mathcal{F}_a^{+}]=\mathcal{P}_a(L_{(\pm)})\,,&\\\nonumber& \begin{array}{ll} R_1=R_2=4\,,&\mathcal{P}_1=(\eta+2\nu+3)(\eta-2\nu+1)P_{n_-}(\eta)P_{n_-}(\eta+4)|_{\eta=L_{(-)}-4}^{\eta=L_{(-)}}\,,\\ &\mathcal{P}_2=(\eta+2\nu+3)(\eta+2\nu+1)P_{n_+}(\eta)P_{n_+}(\eta+4)|_{\eta=L_{(+)}-4}^{\eta=L_{(+)}}\,,\\ R_3=4(n_{n_+}+1)\,,&\mathcal{P}_3=P_{n_+}(\eta)P_{n_-}(\eta)|_{\eta=L_{(+)}-4}^{\eta=L_{(-)}}\,, \end{array}& \end{eqnarray} where \begin{eqnarray} \label{Poly2} P_{n_-}(y)=\prod_{i=1}^{n_-}(y-\lambda_{i}^-)\,,\qquad P_{n_+}(y)=\prod_{i=1}^{n_+}(y-\lambda_{i}^+)\,, \end{eqnarray} and $\lambda_{i}^\pm$ are the corresponding eigenvalues of the seed states in the positive and negative schemes. Equations (\ref{defsl2R}) are three different but related copies of the nonlinearly deformed conformal algebra $\mathfrak{sl}(2,\mathbb R)$. One can verify that the commutators between generators with different values of the index $a$ do not vanish, and therefore the complete structure is rather complicated.
Similarly to the non-deformed case, by means of a unitary transformation produced by $U=e^{-itL_{(\pm)}}$ we obtain the integrals of motion ${}_H\mathcal{F}_a^\pm(t)=e^{\mp iR_a t}\mathcal{F}_a^\pm$, and by linear combinations of them construct the Hermitian generators $\mathfrak{D}_a(t)=(\mathcal{F}_a^-(t)-\mathcal{F}_a^+(t))/(i2R_a)$ and $ \mathfrak{K}_a(t)=(\mathcal{F}_a^+(t)+\mathcal{F}_a^-(t)+2L_{(\pm)})/R_a^2$, which generate three copies of a nonlinear deformation of the Newton-Hooke algebra, \begin{eqnarray} [L_{(\pm)},\mathfrak{D}_a]=-i\left(L_{(\pm)}-\frac{(R_a)^2}{2}\mathfrak{K}_a\right)\,,\qquad [L_{(\pm)},\mathfrak{K}_a]=-2i\mathfrak{D}_a\,,\\\nonumber [\mathfrak{D}_a,\mathfrak{K}_a]=\frac{1}{iR_a^3}\left(\mathcal{P}_a(L_{(\pm)})- 2R_aL_{(\pm)}+R_a^3\mathfrak{K}_a\right)\,, \end{eqnarray} which are hidden symmetries of the system described by $L_{(\pm)}$. In the isospectral case, the operators $\mathcal{A}^\pm$ are the spectrum generating ladder operators, whose action on physical eigenstates of $L_{(\pm)}$ is similar to that of $\mathcal{C}_\nu^\pm$ in the AFF model. On the other hand, in rationally extended gapped systems obtained by Darboux transformations based on the schemes not containing Jordan states, the separated states have the form $A_{(-)}^-\widetilde{\psi}_{-\nu-1,-l_j}=A_{(+)}^-\psi_{\nu,n_{n_+}-l_j}$, where the states $\psi_{-\nu-1,-l_j}$ belong to the negative scheme and $\psi_{\nu,n_{n_+}-l_j}$ are the omitted states in the corresponding dual positive scheme. Since by construction the separated states belong to the kernel of $A_{(-)}^+$, the operators $\mathcal{A}^\pm$ and $\mathcal{C}^-$ will always annihilate all of them. In summary, the resulting picture is more or less the same as we had for the cases analyzed in the previous chapters.
We have three pairs of ladder operators: $ \mathcal{B}^\pm $ detect the upper and lower energy levels of each isolated valence band, the $ \mathcal{A}^\pm $ operators annihilate all the isolated states, and the $ \mathcal{C}^\pm $ operators connect the isolated states with the equidistant part of the spectrum. \section{An example} \label{Examples} In this section we will apply the machinery of the dual schemes and the construction of nonlinear deformations of the conformal algebra to a nontrivial example of rationally extended systems with gaps. Remember that if we take $\nu=\mu+m$, we replace $\psi_{-(\mu+m)-1,\pm n}$ by $\Omega_{m-1/2,\pm(n-m)}$ with $n>m$ when $\mu\rightarrow-1/2$ in each of the relations that we have in the following, see Sec. \ref{Mirror}. Consider a system generated on the basis of the Darboux-dual schemes \begin{equation} (\psi_{\nu,2},\psi_{\nu,3})\sim(\psi_{\nu,-0}, \psi_{\nu,-1},\psi_{\nu,-2},\psi_{-\nu-1,-2},\psi_{\nu,-3},\psi_{-\nu-1,-3})\,. \end{equation} Here, $n_+=2$, $n_-=6$, $n_{n_{+}}=n_{n_{-}}=3$ and $n_-+n_+=2(n_{n_+}+1)=8= 2\Delta E$. The positive scheme, whose Wronskian is given explicitly in (\ref{W(nu2nu3)}), corresponds to the Krein-Adler scheme that provides us with the system \begin{eqnarray} \label{DeformedA-K} &L_{(+)}=-\frac{d^2}{dx^2}+V_{(+)}(x)\,,& \end{eqnarray} whose potential $V_{(+)}$ is plotted in Fig. \ref{Potential2}. The spectrum of the system, $\mathcal{E}_{\nu,0}=2\nu+3$, $\mathcal{E}_{\nu,1}=2\nu+7$, $\mathcal{E}_{\nu,n}=2\nu+4(n+2)+3$, $n=2,\ldots$, is characterized by the presence of a gap of size $3\Delta E=12$, which appears between the first and second excited states. The negative scheme generates the shifted Hamiltonian operator $L_{(-)}=L_{(+)}-4\Delta E$.
In terms of the intertwining operators $A_{(+)}^\pm$ and $A_{(-)}^\pm$ of the respective positive and negative schemes, the physical eigenstates of (\ref{DeformedA-K}) are given by \begin{eqnarray} \Psi_j&=&A_{(+)}^-\psi_{\nu,j}=A_{(-)}^-\widetilde{\psi}_{-\nu-1,j-3}\,,\qquad j=0,1\,,\\ \Psi_j&=&A_{(+)}^-\psi_{\nu,j+2}=A_{(-)}^-\psi_{\nu,j-2}\,,\qquad j=2,3,\ldots\,. \end{eqnarray} \begin{figure}[H] \begin{center} \includegraphics[scale=0.6]{myfigure.eps} \caption[Behavior of the potential and physical states annihilated by the spectrum-generating ladder operators, Sec. 8.6 ] {\small{The resulting potential with $\nu=1/3$ and energy levels of the system. The energy levels of the physical states annihilated by the ladder operators $\mathcal{A}^-$, $\mathcal{A}^+$, $\mathcal{B}^-$, $\mathcal{B}^+$, and $\mathcal{C}^-$ are indicated from left to right.}} \label{Potential2} \end{center} \end{figure} \noindent The explicit form of the polynomials (\ref{Poly2}) for the system is \begin{eqnarray}\label{Pn+y} P_{n_+}(\eta)=(\eta-11-2\nu)(\eta-15-2\nu)\,,\\ P_{n_-}(\eta)=(\eta+9-2\nu)(\eta+13-2\nu)\prod_{n=0}^{3}(\eta+4n+3+2\nu)\,,\label{Pn-y} \end{eqnarray} and so, $A_{(\pm)}^+A_{(\pm)}^-=P_{n_\pm}(L_{\nu})$ and $A_{(\pm)}^-A_{(\pm)}^+=P_{n_\pm}(L_{(\pm)})$. The spectrum-generating ladder operators are given by Eq. (\ref{ladders}), and the nonlinearly deformed conformal algebras generated by each corresponding pair of the ladder operators and the Hamiltonian $L_{(+)}$ are obtained from (\ref{defsl2R}) by using the polynomials (\ref{Pn+y}) and (\ref{Pn-y}). To clarify the physical nature of the ladder operators, one can inspect their corresponding kernels by using relations (\ref{tools2}) and (\ref{powerC}). As a result, one gets that the physical eigenstates annihilated by these operators are the ones indicated in Fig. \ref{Potential2}, and all other functions in the respective kernels are nonphysical solutions.
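As a quick arithmetic cross-check of (\ref{Pn+y}) and (\ref{Pn-y}), their roots should reproduce the eigenvalues of the seed states of the two dual schemes. The sketch below assumes the AFF eigenvalues $E_{\mu,n}=2\mu+3+4n$ for $\psi_{\mu,n}$, with the sign flipped for the negative-index states $\psi_{\mu,-n}$; the value of $\nu$ is the illustrative one used in the figure.

```python
import numpy as np

nu = 1/3  # illustrative value (the one used in the figure)

def E(mu, n):
    # assumed AFF eigenvalue of psi_{mu,n}; psi_{mu,-n} carries -E(mu, n)
    return 2*mu + 3 + 4*n

# seed eigenvalues of the positive scheme (psi_{nu,2}, psi_{nu,3})
roots_plus = [E(nu, 2), E(nu, 3)]
# seed eigenvalues of the dual negative scheme
# (psi_{nu,-0}, psi_{nu,-1}, psi_{nu,-2}, psi_{-nu-1,-2}, psi_{nu,-3}, psi_{-nu-1,-3})
roots_minus = [-E(nu, n) for n in range(4)] + [-E(-nu - 1, 2), -E(-nu - 1, 3)]

def P_plus(e):
    return (e - 11 - 2*nu) * (e - 15 - 2*nu)

def P_minus(e):
    return ((e + 9 - 2*nu) * (e + 13 - 2*nu)
            * np.prod([e + 4*n + 3 + 2*nu for n in range(4)]))

assert all(abs(P_plus(r)) < 1e-9 for r in roots_plus)
assert all(abs(P_minus(r)) < 1e-9 for r in roots_minus)
```

The check also makes the orders visible: $P_{n_+}$ has two factors and $P_{n_-}$ has six, matching $n_+=2$ and $n_-=6$.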
\section{Remarks} Note that the group $ K_4 $ can also be discussed in the context of the Schr\"odinger equation \begin{equation} \label{timedependent2} \left(-\frac{\partial^2}{\partial x^2}+ \frac{\nu(\nu+1)}{x^2}\right)\psi=i\frac{\partial}{\partial t}\psi\,, \end{equation} the stationary solutions of which are $\Psi_\nu(x,t;\kappa)=\psi_\nu(x;\kappa)e^{-i\kappa^2 t}$, where $\psi_\nu(x;\kappa)=\sqrt{x}J_{\nu+\frac{1}{2}}(\kappa x)$. The transformation $\rho_2$ gives us the modified Bessel functions, while $\rho_1$ produces singular solutions when $\nu$ is not a half-integer. In the case $\nu=\ell-1/2$ with $\ell=0,1,2\ldots\,,$ we have that $\rho_1(\psi_{\ell-1/2}(x;\kappa))=\sqrt{x}J_{-\ell}(\kappa x)=(-1)^{\ell}\psi_{\ell-1/2}(x;\kappa)$. \chapter{Three-dimensional conformal mechanics in a monopole background} \label{Chapmono1} The conformal algebra shown in Chap. \ref{ChConformal} can be realized in higher-dimensional models. In the same sense, the conformal bridge is an algebraic construction, independent of the realization. This means that it also works for these higher-dimensional generalizations. In this chapter, we will study a direct generalization of the AFF model in three dimensions, whose Hamiltonian is \begin{equation} \label{ClassicalH} H=\frac{\mbfgr{\pi}^2}{2m}+\frac{m\omega^2 r^2}{2}+\frac{\alpha}{2mr^2}\,, \end{equation} where $\omega>0$, $\mbfgr{\pi}=\mbf{p}-e\mbf{A}$, $\mbf{A}$ is a U(1) gauge potential of a Dirac magnetic monopole at the origin with charge $g$, $\nabla\cross\mbf{A}=\mbf{B}=g\mbf{r}/r^3$, and the coupling $\alpha$ should be chosen appropriately to prevent a fall to the center, see below. We solve the Hamiltonian equations, study the conformal Newton-Hooke symmetry of the system, and investigate a hidden symmetry which appears in the special case $\alpha=\nu^2$, $\nu=eg$.
The results of this chapter are based on the article \textcolor{red}{[\cite{InzPlyWipf1}]} which was inspired by the line of reasoning used in \textcolor{red}{[\cite{PlyWipf}]} to identify the hidden symmetry and characterize the particle's trajectories. \section{Classical case} \label{Classicalsection} The particle's coordinates and kinetic momenta obey the Poisson bracket relations $ \{r_i,\pi_j\}=\delta_{ij}\,,$ $\{r_i,r_j\}=0\,,$ and $\{\pi_i,\pi_j\}=e\epsilon_{ijk}B_{k}\,,$ which give rise to the equations of motion \begin{equation} \label{CanonEq} \dot{\mbf{r}}=\frac{1}{m}\mbfgr{\pi}\,,\qquad \dot{\mbfgr{\pi}}=\frac{1}{mr^3}(\alpha\mbf{n}-\nu\,\mbf{r} \times\mbfgr{\pi})-m\omega^2\mbf{r}\,, \end{equation} where $\mbf{n}={\mbf{r}}/{r}$. From (\ref{CanonEq}) we derive the equations $ \frac{d r}{dt}=\frac{1}{m}\pi_r\,,$ and $\dot{\mbf{n}}=\frac{1}{mr^2}\,\mbf{J}\cross\mbf{n}\,,$ where we denote $\pi_r=\mbf{n}\cdot\mbfgr{\pi}$, and \begin{equation} \label{ClassicPoincare} \mbf{J}=\mbf{r}\cross\mbfgr{\pi}-\nu\mbf{n} \, \end{equation} is the conserved Poincar\'e vector identified as the angular momentum of the system, \begin{equation} \{J_i,J_j\}=\epsilon_{ijk}J_k\,,\qquad \{J_i,r_j\}=\epsilon_{ijk}r_k\,,\qquad \{J_i,\pi_j\}=\epsilon_{ijk}\pi_k\,. \end{equation} In terms of this conserved quantity the Hamiltonian can be presented in the form \begin{equation}\label{HAFF} H=\frac{\pi_r^2}{2m}+\frac{\mathscr{L}^2}{2mr^2}+\frac{m\omega^2r^2}{2}\,, \qquad\mathscr{L}^2 :=\mbf{J}^2-\nu^2+\alpha\,, \end{equation} which reveals that the variables $r$ and $\pi_r$, $\{r,\pi_r\}=1$, behave like $y$ and $p$ in the one-dimensional AFF model (\ref{mostgeneralH}). From (\ref{HAFF}) one also reads off the following assertions: \begin{itemize} \item There is no fall to the center if $\mathscr{L}^2> 0$, i.e. $\alpha>0$, which we will assume from now on.
\item The possible values of the angular momentum $J$ and energy obey the relation $ \frac{\mathscr{L}\omega}{H}\leq 1\,.$ \item The turning points for the radius are \begin{equation} \label{returning1} r_{\pm}^2=\frac{H}{m\omega^2}(1\pm \rho)\,,\qquad 0\leq \rho=\sqrt{1- \frac{\mathscr{L}^2\omega^2}{H^2}}<1\,,\qquad r_+r_-=\frac{\mathscr{L}}{m\omega}\,. \end{equation} \end{itemize} On the other hand, to solve the equations of motion it is worth parameterizing $\mbf{n}$ as \begin{eqnarray} & \label{n(phi)} \mbf{n}(t)= \mbf{n}_\parallel+\mbf{n}_{\bot}(t) =-\nu\,\frac{\mbf{J}}{J^2} +\mbf{n}_{\bot}(t),\qquad \mbf{J}\cdot\mbf{n}_{\bot}(t)=0\,,\qquad \mbf{J}\cdot\mbf{n}=\mbf{J}\cdot\mbf{n}_\parallel=-\nu. &\\& \label{nparaort} \mbf{n}_{\bot}(t)=\mbf{n}_{\bot}(0)\cos\varphi(t) +\,\hat{\hskip-1mm\vJ}\cross\mbf{n}_{\bot}(0)\sin\varphi(t)\,.& \end{eqnarray} From (\ref{nparaort}) and the equation of motion for $ \mbf{n} $ we get $ \dot{\varphi} = \frac{J}{mr^2} $. These relations involve a clockwise rotation of $ \mbf{n}_{\bot} $ from the perspective of vector $ \mbf{J} $. Thus, if $ \mbf{J} $ is oriented along $ \mbf{e}_z $, and $ \nu <0 $, $ 0 <\theta <\pi/2 $, where $ \theta = \arccos(- \nu / J ) $, the path of the particle is on the upper sheet of the cone and $ \mbf{n}_{\bot} $ rotates clockwise in the horizontal plane. If on the other hand $ \mbf{J} $ is oriented along $ - \mbf{e}_z $ and $ \nu> 0 $, $ \pi / 2 <\theta <\pi $, then the path is again on the upper sheet of the cone, but the vector $ \mbf{n}_{\bot} $ rotates counterclockwise in the $ (x, y) $ plane looking from $ \mbf{e}_z $. We also note that when $J=\nu$, then $\theta=\pi$ so $\mbf{n}$ is co-linear to $\mbf{J}$ and there is no rotation at all. In the following we exclude that case. 
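The conservation of the Poincar\'e vector (\ref{ClassicPoincare}) and of $H$ can be confirmed by direct numerical integration of the equations of motion (\ref{CanonEq}). The following sketch uses units $m=\omega=1$ together with illustrative values of $\nu$, $\alpha$ and of the initial data, and propagates the system with a fourth-order Runge-Kutta step.

```python
import numpy as np

# illustrative parameters, units m = omega = 1
m, w, nu, alpha = 1.0, 1.0, 0.5, 0.7

def rhs(state):
    # equations of motion for (r, pi), cf. (CanonEq)
    r, p = state[:3], state[3:]
    rn = np.linalg.norm(r)
    n = r / rn
    dp = (alpha * n - nu * np.cross(r, p)) / (m * rn**3) - m * w**2 * r
    return np.concatenate([p / m, dp])

def rk4(state, dt, steps):
    for _ in range(steps):
        k1 = rhs(state); k2 = rhs(state + dt/2*k1)
        k3 = rhs(state + dt/2*k2); k4 = rhs(state + dt*k3)
        state = state + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return state

def J(state):
    # Poincare vector J = r x pi - nu n
    r, p = state[:3], state[3:]
    return np.cross(r, p) - nu * r / np.linalg.norm(r)

def H(state):
    r, p = state[:3], state[3:]
    r2 = r @ r
    return p @ p / (2*m) + m*w**2*r2/2 + alpha/(2*m*r2)

s0 = np.array([1.0, 0.0, 0.3, 0.1, 1.2, 0.0])  # illustrative initial data
s1 = rk4(s0, 1e-3, 4000)                        # integrate up to t = 4
assert np.allclose(J(s1), J(s0), atol=1e-8)     # J conserved
assert abs(H(s1) - H(s0)) < 1e-8                # H conserved
```

For these data $\mathscr{L}^2>0$, so the radius stays between the turning points (\ref{returning1}) and the integration never approaches the origin.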
The corresponding solutions for the angular and radial variables are \begin{eqnarray} & \label{r(t)1} r^2(t)=\frac{H}{m\omega^2}(1- \rho\cos(2\omega t))\,, \qquad \label{phi(t)} \varphi(t)=\frac{J}{\mathscr{L}} \arctan(\frac{r_\mathrm{max}}{r_\mathrm{min}}\tan(\omega t))\,,& \end{eqnarray} where the initial conditions $r(t=0)=r_-:=r_\text{min}$ and $\varphi(t=0)=0$ are assumed (also we redefine $r_+:=r_\text{max}$). By expressing time through $\varphi(t)$ and inserting it into $r^2(t)$, we obtain the trajectory equation \begin{eqnarray} \frac{1}{r^2(\varphi)}=\frac{mH}{\mathscr{L}^2}\left[1+\rho\cos(\frac{2\mathscr{L}}{J}\varphi)\right]\,, \end{eqnarray} which shows that the angular period is $\pi J/\mathscr{L}$. The condition for a periodic trajectory is \begin{equation} \frac{2\mathscr{L}}{J}2\pi l_r=2\pi l_a\quad \Longleftrightarrow \quad \frac{2\mathscr{L}}{J}=\frac{l_a}{l_r}, \qquad l_r,l_a=1,2,\ldots\,. \end{equation} From the definition of $\mathscr{L}$ in (\ref{HAFF}) we find that the trajectories are closed for arbitrary values of $J$ if and only if $\alpha=\nu^2$. On the other hand, when $\alpha\neq \nu^2$, the trajectory will be closed only for special values of the angular momentum given by the condition \begin{equation} \label{alpha nu J} \alpha=\nu^2 +\left(\frac{1}{4}\frac{l_a^2}{l_r^2}-1\right)J^2\,, \end{equation} and in this case the condition $\frac{\mathscr{L}\omega}{H}\leq 1\,$ takes the form $\frac{l_a}{l_r}\leq \frac{2H}{\omega J}$. Figure \ref{figure1} illustrates several particular orbits lying on the corresponding conical surface in the general case $\alpha\neq \nu^2$ and in the special case $\alpha= \nu^2$. Trajectories $\mbf{r}(\varphi)$ are shown there for fixed values of $H$, $\mbf{J}$ and $\nu$, but for different values of $\alpha$.
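The consistency between the closure condition and (\ref{alpha nu J}) is elementary to check numerically: for a fixed value of $J$, the coupling $\alpha$ computed from (\ref{alpha nu J}) must reproduce $2\mathscr{L}/J=l_a/l_r$ through $\mathscr{L}^2=J^2-\nu^2+\alpha$, as in the following sketch (the values of $\nu$ and $J$ are illustrative, chosen so that $\alpha>0$ in all cases).

```python
import numpy as np

nu, Jmag = 0.5, 0.4   # illustrative values of nu and |J|

for la, lr in [(1, 1), (1, 2), (2, 3), (3, 2), (2, 1)]:
    # coupling read off from the closed-trajectory condition (alpha nu J)
    alpha = nu**2 + ((la**2 / (4 * lr**2)) - 1) * Jmag**2
    # scrL^2 = J^2 - nu^2 + alpha, cf. (HAFF)
    L2 = Jmag**2 - nu**2 + alpha
    assert L2 > 0 and alpha > 0
    # closure condition 2 scrL / J = l_a / l_r is recovered
    assert abs(2 * np.sqrt(L2) / Jmag - la / lr) < 1e-12
```

The last pair $(l_a,l_r)=(2,1)$ gives back $\alpha=\nu^2$ and $\mathscr{L}=J$, the case in which all trajectories close.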
\begin{figure}[H] \begin{center} \includegraphics[scale=0.350]{Cono8_a-2}\hskip8mm \includegraphics[scale=0.350]{Cono4_n1-1}\hskip8mm \includegraphics[scale=0.350]{Cono3_n1-2}\hskip8mm\\ \includegraphics[scale=0.350]{Cono2_n2-3}\hskip8mm \includegraphics[scale=0.350]{Cono6_n3-2}\hskip8mm \includegraphics[scale=0.350]{Cono7_n2-1} \caption[Classical trajectories 1, Sec. 9.1]{\small{ The depicted trajectories correspond to the vector $\mbf{J}$ oriented along $\mbf{e}_z$. The first figure in the top row represents the generic case with a non-closed trajectory. The other figures are examples of closed trajectories with parameters satisfying the relation (\ref{alpha nu J}); the quotients $l_a/l_r=\{1,\,1/2,\,2/3,\,3/2,\,2\}$ are sequentially shown. The last value $l_a/l_r=2$ corresponds to the special case $\alpha=\nu^2$.}} \label{figure1} \end{center} \end{figure} \vskip-0.7cm Below we shall see that when $\alpha=\nu^2$, the projection of the trajectory shown on the last plot to the plane orthogonal to $\mbf{J}$ is an ellipse centered at the origin of the coordinate system, similarly to the case of the three-dimensional isotropic harmonic oscillator. This corresponds to a fundamental universal property of the magnetic monopole background which we discuss in the last section. Since the force center is located at the center of the projected elliptical trajectory, the angular period $P_a$ is twice the radial period $P_r$, $P_a/P_r=2$, similarly to the isotropic harmonic oscillator. This is different from the picture of the finite orbits in the Kepler problem, where the force center is at one of the foci, and as a result $P_a=P_r$. This similarity with the isotropic oscillator and contrast to the Kepler problem are also reflected in the spectra of the systems at the quantum level. As we have the AFF model form of the Hamiltonian in (\ref{HAFF}), we can immediately write the rest of the Newton-Hooke conformal algebra generators.
They are given by \begin{eqnarray} \label{KyD1} &\mathcal{D}=\frac{1}{2}\left(rp_r\cos(2\omega \tau)+\left(m \omega r^2- H{\omega}^{-1}\right)\sin(2\omega \tau)\right)\,,&\\ &\mathcal{K}= \cos(2\omega \tau)m\frac{r^2}{2}-\frac{H}{\omega^2}\sin^2(\omega\tau) -\frac{\sin(2\omega\tau)}{2\omega}rp_r\,.\label{KuD2}& \end{eqnarray} Together with $H$ they satisfy the algebra (\ref{sl2RAFF}). The Casimir invariant corresponds to $\mathscr{F}=\frac{\mathscr{L}^2}{4}$. To conclude this part of the analysis, we comment on the limit $\omega\rightarrow 0$. In this case the generators $H$, $\mathcal{D}$ and $\mathcal{K}$ take the form \begin{equation} \label{freemotion} H_0=\frac{\pi_r^2}{2m}+\frac{\mathscr{L}^2}{2mr^2}\,,\qquad D_0=\frac{1}{2}r\pi_r-H_0t\,,\qquad K_0=\frac{mr^2}{2}-2D_0t-H_0t^2\,, \end{equation} and satisfy the conformal algebra. The case $\alpha=0$ of the system $H_0$ corresponds to a geodesic motion on the dynamical cone \textcolor{red}{[\cite{Plymono1,Plymono2}]}. The special case of $\alpha=\nu^2$, on the other hand, was studied in \textcolor{red}{[\cite{PlyWipf}]}. It was shown there that the trajectory of the particle, projected to the plane orthogonal to $\mbf{J}$, is a straight line along which the projected particle's motion takes place with constant velocity. Consistently with these peculiar properties, in the special case $\alpha=\nu^2$ the system with $H_0$ possesses a hidden symmetry described by the integral of motion $\mbf{V}=\mbfgr{\pi}\times\mbf{J}$, which is a sort of Laplace-Runge-Lenz vector; the particle's trajectory lies in the plane that is orthogonal to $\mbf{V}$ and parallel to $\mbf{J}$ \textcolor{red}{[\cite{PlyWipf}]}. In Fig. \ref{figure2Cono} some plots of the trajectories are shown for the system (\ref{freemotion}). \begin{figure}[H] \begin{center} \includegraphics[scale=0.350]{ConoFree_n3-7.eps}\hskip8mm \includegraphics[scale=0.350]{ConoFree_n1-2.eps}\hskip8mm \includegraphics[scale=0.350]{ConoFree_n2-1.eps} \caption[Classical trajectories 2, Sec.
9.1]{\small{Each plot represents a trajectory for a specific value of $\alpha$ chosen according to (\ref{alpha nu J}) with the vector $\mbf{J}$ oriented along $\mbf{e}_z$. From left to right the cases $l_a/l_r=\{3/2,\,1/2,\,2\}$ are shown, where the last plot corresponds to the special case $\alpha=\nu^2$.} } \label{figure2Cono} \end{center} \end{figure} \subsection{ The case $\alpha=\nu^2$\,: hidden symmetries}\label{hidclassym} In the case $\alpha=\nu^2$ the particle described by the Hamiltonian (\ref{ClassicalH}), which now is \begin{equation} \label{ClassicHnu} H=\frac{\mbfgr{\pi}^2}{2m}+\frac{m\omega^2}{2}r^2+\frac{\nu^2}{2mr^2}\,, \end{equation} admits vector integrals of motion responsible for the closed nature of the trajectories for an arbitrary choice of initial conditions. The integrals are derived by an algebraic approach, as in Fradkin's construction for the isotropic three-dimensional harmonic oscillator \textcolor{red}{[\cite{Frad}]}. Let us first introduce the vector quantities \begin{align} \mbf{I}_1&=\mbfgr{\pi}\cross\mbf{J}\cos(\omega t)+\omega m \mbf{r}\cross\mbf{J}\sin(\omega t)\,, \label{I1}\\ \mbf{I}_2&=\mbfgr{\pi}\cross\mbf{J}\sin(\omega t)-\omega m \mbf{r}\cross\mbf{J}\cos(\omega t)\,. \label{I2} \end{align} Using the corresponding equations of motion for $\mbf{r}$ and $\mbfgr{\pi}$, it is not difficult to show that $\dot{\mbf{I}}_i=0$, so they are dynamical integrals of motion. Evaluating these integrals at the initial conditions gives \begin{equation}\label{I(0)} \mbf{I}_1(0)=\frac{J^2}{r_\mathrm{min}} \mbf{n}_{\bot}(0)\,,\qquad \mbf{I}_2(0)=m\omega r_\mathrm{min} \mbf{J}\cross\mbf{n}_{\bot}(0)\,, \end{equation} thus, $\mbf{I}_1$ and $\mbf{I}_2$ are orthogonal to each other.
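These conservation laws can be verified numerically. The following sketch (an illustrative check, not part of the thesis derivations; it assumes the conventions $m=1$, $\mbfgr{\pi}=m\dot{\mbf{r}}$, magnetic force $\nu\,(\dot{\mbf{r}}\times\mbf{r})/r^3$, and Poincar\'e vector $\mbf{J}=m\,\mbf{r}\times\dot{\mbf{r}}-\nu\,\mbf{r}/r$) integrates the equations of motion with a fourth-order Runge--Kutta scheme and checks that $\mbf{J}$ and $\mbf{I}_1$ remain constant and that the orbit closes after one period $2\pi/\omega$:

```python
import numpy as np

# Illustrative check with assumed conventions (m = 1, pi = m v,
# monopole field nu r/r^3, J = r x pi - nu r/|r| conserved).
m, omega, nu = 1.0, 1.0, 0.5

def accel(r, v):
    # m r'' = -m w^2 r + nu^2 r/(m r^4) + nu (v x r)/r^3   (alpha = nu^2 case)
    d = np.linalg.norm(r)
    return -omega**2*r + nu**2*r/(m**2*d**4) + (nu/m)*np.cross(v, r)/d**3

def rk4(r, v, dt):
    k1r, k1v = v, accel(r, v)
    k2r, k2v = v + dt/2*k1v, accel(r + dt/2*k1r, v + dt/2*k1v)
    k3r, k3v = v + dt/2*k2v, accel(r + dt/2*k2r, v + dt/2*k2v)
    k4r, k4v = v + dt*k3v, accel(r + dt*k3r, v + dt*k3v)
    return (r + dt/6*(k1r + 2*k2r + 2*k3r + k4r),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

def J_vec(r, v):
    return m*np.cross(r, v) - nu*r/np.linalg.norm(r)

def I1_vec(t, r, v):
    J = J_vec(r, v)
    return np.cross(m*v, J)*np.cos(omega*t) + omega*m*np.cross(r, J)*np.sin(omega*t)

r0 = np.array([1.0, 0.0, 0.3])
v0 = np.array([0.0, 1.1, 0.0])     # v0 . r0 = 0: t = 0 is a radial turning point
J0, I10 = J_vec(r0, v0), I1_vec(0.0, r0, v0)
assert abs(I10 @ J0) < 1e-9        # I_1 is orthogonal to J

r, v, steps = r0.copy(), v0.copy(), 6000
dt = 2*np.pi/(omega*steps)
for _ in range(steps):
    r, v = rk4(r, v, dt)

assert np.linalg.norm(J_vec(r, v) - J0) < 1e-6   # Poincare vector conserved
assert np.linalg.norm(I1_vec(2*np.pi/omega, r, v) - I10) < 1e-6  # I_1 conserved
assert np.linalg.norm(r - r0) < 1e-6             # orbit closes with period 2 pi/omega
```

The initial data are chosen with $\dot{\mbf{r}}_0\cdot\mbf{r}_0=0$, so that $t=0$ is a radial turning point, as assumed in (\ref{I(0)}).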
On the other hand, the lengths of these vectors are also dynamical integrals, which for the initial conditions take the form \begin{equation} \vert\mbf{I}_{1}\vert =m\omega\sqrt{J^2-\nu^2}\,r_\mathrm{max}\,,\qquad \vert\mbf{I}_{2}\vert =m\omega\sqrt{J^2-\nu^2}\,r_\mathrm{min}\,,\label{lengthint} \end{equation} where we have taken into account Eqs. (\ref{I(0)}) and the second equation in (\ref{returning1}). The sum of their squares, however, is a true integral of motion whose value is a function of $H$ and $J$, \begin{equation}\label{I12+I22} \mbf{I}_1^2+\mbf{I}_2^2=2mH(J^2-\nu^2)\,. \end{equation} These vectors point in the direction of the semi-axes of the elliptic trajectory in the plane orthogonal to $\mbf{J}$. The semi-major and semi-minor axes point along the vectors $r\mbf{n}_{\bot}(0)$ and $r\,\hat{\hskip-1mm\vJ}\cross \mbf{n}_{\bot}(0)$, and their lengths are equal to $r_\mathrm{max}\sqrt{1-\nu^2/J^2}$ and $r_\mathrm{min}\sqrt{1-\nu^2/J^2}$, respectively. As shown in \textcolor{red}{[\cite{InzPlyWipf1}]}, in the general case $\alpha\neq \nu^2$, the periodic change of the scalar product of $\mbf{I}_1$ and $\mbf{I}_2$, which are then no longer integrals, implies a precession of the orbit, see Fig. \ref{figure1}. Using the definitions of $\mbf{I}_1$ and $\mbf{I}_2$ in (\ref{I1}) and (\ref{I2}), we can express the position $\mbf{r}(t)$ of the particle as follows, \begin{equation} \label{r(t)2} \mbf{r}(t)=\frac{1}{m\omega J^2}\left( \mbf{J}\cross \mbf{I}_{1}\sin \omega t-\mbf{J}\times\mbf{I}_{2}\cos \omega t -\nu \frac{\sqrt{I_1^2\sin^2\omega t+I_2^2\cos^2\omega t}}{\sqrt{J^2-\nu^2}}\mbf{J}\right)\,, \end{equation} where we again see that $\mbf{I}_{1}=\mbf{I}_1(0)$ and $\mbf{I}_{2}=\mbf{I}_2(0)$ correspond to the orthogonal pair that defines the elliptic trajectory in the plane. \vskip0.1cm Alternatively, one can follow a more algebraic approach to extract information on the trajectories without explicitly solving the equations of motion.
It is well known from the seminal paper \textcolor{red}{[\cite{Frad}]} that for the three-dimensional isotropic harmonic oscillator all symmetries of the trajectories are encoded in a tensor integral of motion. In the rest of this subsection we construct an analogous tensor for the system at hand and use it to find the trajectories by linear algebra techniques. We begin with the tensor integrals \begin{equation} T^{ij}=T^{(ij)}+T^{[ij]}\,,\qquad T^{(ij)}=\frac{1}{2}(I_{1}^{i}I_1^j+I_{2}^i I_2^j)\,,\qquad T^{[ij]}=\frac{1}{2}(I_{1}^{i}I_2^j-I_{1}^j I_2^i)\,.\label{symm_tensor} \end{equation} Unlike the vectors $\mbf{I}_1$ and $\mbf{I}_2$, but like the quadratic expression (\ref{I12+I22}), they are true integrals of motion, not depending explicitly on time, $\frac{d}{dt}T^{ij}=\{T^{ij},H\}=0$. In accordance with (\ref{I12+I22}), their components satisfy the relations \begin{equation} \text{tr}(T)= m(J^2-\nu^2)H\,,\qquad \epsilon_{ijk}T^{[jk]}=m\omega(J^2-\nu^2)J_{i}\,. \end{equation} As the anti-symmetric part of $T^{ij}$ is related to the Poincar\'e integral, we only need to use the symmetric part $T^{(ij)}$, which is related but not identical to Fradkin's tensor. Since the vectors (\ref{I1}), (\ref{I2}) are orthogonal to each other and to $\mbf{J}$, we immediately conclude that $\mbf{J},\mbf{I}_1$ and $\mbf{I}_2$ are eigenvectors of $T^{(ij)}$ with eigenvalues equal, respectively, to zero and \begin{align} \lambda_1=\frac{1}{2}\vert\mbf{I}_1\vert^2&=\frac{1}{2}m^2\omega^2(J^2-\nu^2)r^2_\mathrm{max}\,,\\ \lambda_2=\frac{1}{2}\vert\mbf{I}_2\vert^2&=\frac{1}{2}m^2\omega^2(J^2-\nu^2)r^2_\mathrm{min}\,. \label{eigenvalues} \end{align} One can also show that the quadratic form $\mbf{r}^T T\mbf{r}$ is time-independent, \begin{equation} 2r_i T^{ij}r_j=(\mbf{I}_1\cdot\mbf{r})^2+(\mbf{I}_2\cdot\mbf{r})^2=(J^2-\nu^2)^2\,.
\label{quad_form} \end{equation} In a coordinate system with orthonormal base $\mbf{e}_x=\hat{\mbf{I}}_1,\mbf{e}_y=\hat{\mbf{I}}_2$ and $\mbf{e}_z=\hat{\mbf{J}}$, the quadratic form (\ref{quad_form}) simplifies to \begin{equation} 2(\lambda_1 x^2+\lambda_2 y^2)=(J^2-\nu^2)^2\,. \end{equation} With $r_\mathrm{max}r_\mathrm{min}=J/(m\omega)$ one ends up with the equation for an ellipse in the plane orthogonal to $\mbf{J}$: \begin{equation} \frac{x^2}{r^2_\mathrm{min}}+\frac{y^2}{r^2_\mathrm{max}}=\frac{J^2-\nu^2} {J^2} \,. \end{equation} The lengths of the semi-major axis and semi-minor axis of the ellipse are $r_\mathrm{max}\sqrt{1-\nu^2/J^2}$ and $r_\mathrm{min}\sqrt{1-\nu^2/J^2}$, in accordance with what was found above. Finally, the components of the symmetric tensor integral $T_{(ij)}$ satisfy the Poisson bracket relations \begin{eqnarray} & \{J_i,T_{(jk)}\}=\epsilon_{ijl}T_{(lk)}+\epsilon_{ikl}T_{(jl)}\,, &\\& \{T_{(ij)},T_{(lk)}\}=m(\epsilon_{ils}\mathcal{F}_{jk}+\epsilon_{iks}\mathcal{F}_{jl}+ \epsilon_{jls}\mathcal{F}_{ik}+\epsilon_{jks}\mathcal{F}_{il})J_s\,,& \end{eqnarray} where $ \mathcal{F}_{ij}=\tfrac{1}{4}m\omega^2(J^2-\nu^2)^2\delta_{ij}-HT_{(ij)}\,. $ In fact, the quantum version of the tensor $T_{(ij)}$ was already considered in \textcolor{red}{[\cite{Vinet}]}, but this is the first time that it has been obtained and used at the classical level. \section{Quantum theory of the model with $\alpha=\nu^2$} \label{Quantumsection} The quantum theory of the system with Hamiltonian (\ref{ClassicHnu}) is discussed in detail in \textcolor{red}{[\cite{Mcin,Vinet,InzPlyWipf1}]} and here we summarize the results. We shall use the units in which $m=1$ and $\hbar=1$. \vskip0.1cm In the coordinate representation the basic commutation relations are \begin{equation} [\hat{r}_i,\hat{r}_j]=0\,,\qquad [\hat{r}_i,\hat{\Pi}_j]=i\delta_{ij}\,,\qquad [\hat{\Pi}_i,\hat{\Pi}_j]=i\nu\epsilon_{ijk}\frac{\hat{r}_k}{r^3}\,.
\end{equation} In what follows we omit the hat symbol to simplify the notation. The Hamiltonian (\ref{ClassicHnu}) can be written as \begin{equation} \label{Qm Hamiltonian} H=\frac{1}{2}\left[-\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)+ \frac{1}{r^2}\mbf{J}\!\,^2+\omega^2 r^2\right]\,, \end{equation} where $\mbf{J}$ is just the quantum version of the Poincar\'e integral (\ref{ClassicPoincare}), the components of which generate the $\mathfrak{su}(2)$ symmetry. The Dirac quantization condition implies that $\nu=eg$ must take an integer or half-integer value \textcolor{red}{[\cite{Plymono1,Plymono2}]}. Using the standard angular momentum treatment we obtain \begin{equation} \label{so(3) rep1} {\mbf{J}}\!\,^2\mathcal{Y}_{j}^{j_3}=j(j+1)\mathcal{Y}_{j}^{j_3}\,,\quad J_3\mathcal{Y}_{j}^{j_3}=j_3\mathcal{Y}_{j}^{j_3}\,,\quad J_\pm\mathcal{Y}_{j}^{j_3}=c_{jj_3}^\pm\mathcal{Y}_{j}^{j_3\pm1}\,, \end{equation} with $J_\pm=J_1\pm i J_2$, and \begin{equation} j=|\nu|,|\nu|+1,\ldots\,,\qquad j_3=-j,\ldots,j\,,\qquad c_{jj_3}^\pm=\sqrt{(j\pm j_3+1)(j\mp j_3)}\,, \end{equation} where the indicated values for $j$ correspond to a super-selection rule. The case $\nu=0$ corresponds just to the isotropic quantum harmonic oscillator. Excluding the zero value for $\nu$, i.e. implying that $\vert \nu\vert$ takes any nonzero integer or half-integer value, the first relation in (\ref{so(3) rep1}) automatically provides the necessary inequality $\mbf{J}^2=j(j+1)>\nu^2$. The functions $\mathcal{Y}_{j}^{j_3}=\mathcal{Y}_{j}^{j_3}(\theta,\varphi;\nu)$ are the (normalized) monopole harmonics \textcolor{red}{[\cite{monoharm,Lochak,Plymono1,Plymono2}]}, which are well defined functions if and only if the combination $j\pm\nu$ is in $\mathbb N_0=\{0, 1,2,\ldots\}$.
Then, the eigenstates and the spectrum of $H$ are given by \begin{align} \psi_{n,j}^{j_3}(\mbf{r} )&=f_{n,j}(\sqrt{\omega}r) \mathcal{Y}_j^{j_3}(\theta,\varphi )\,,\nonumber \\ f_{n,j}(x)&=\bigg(\frac{2n!}{\Gamma(n+j+3/2)}\bigg)^{1/2}\omega^{3/4} \,x^{j}L_{n}^{(j+1/2)}(x^2)\,e^{-x^2/2}\,,\label{wavefunction}\\ E_{n,j}&=\Big(2n+j+\tfrac{3}{2}\Big)\,\omega\,,\nonumber \end{align} where $L_{n}^{(j+1/2)}(y)$ are the generalized Laguerre polynomials. The degeneracy of each level depends on $\nu$ and is given by \begin{eqnarray} \label{degeneracy} \mathfrak{g}(\nu,N)= \left\{ \begin{array}{ccc} \tfrac{1}{2}(N+\nu+1)(N-\nu+2)\,, & j-\nu &\text{even } \\ \\ \tfrac{1}{2}(N-\nu+1)(N+\nu+2)\,, & j-\nu &\text{odd} \end{array} \right.\,,\qquad N=2n+j\,. \end{eqnarray} It is remarkable that the system possesses $2\vert \nu \vert +1$ degenerate ground states. The ground states here are not invariant under the action of the total angular momentum $\mbf{J}$, although the Hamiltonian operator commutes with $\mbf{J}$ and hence is spherically symmetric. Thus we see some analog of spontaneous breaking of rotational symmetry in the magnetic monopole background. This is of course in contrast to the isotropic harmonic oscillator in three dimensions, which has a unique spherically symmetric ground state and symmetry algebra $\mathfrak{su}(3)$. According to \textcolor{red}{[\cite{Vinet}]} the symmetry algebra for the system under investigation is $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$. We do not dwell further on these interesting aspects of symmetry but rather turn to the construction of spectrum-generating ladder operators. Note that the coefficients of the radial, $n$, and angular momentum, $j$, quantum numbers in the energy eigenvalue $E_{n,j}=(2n+j+\frac{3}{2})\,\omega$ correspond to the ratio $P_a/P_r=l_a/l_r=2$ between the classical angular and radial periods in the special case $\alpha=\nu^2$ under investigation.
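The level counting behind (\ref{degeneracy}) can be reproduced by brute-force enumeration. The following sketch (an illustrative check for integer $\nu\geq 0$, not part of the original derivation) counts the states $(n,j,j_3)$ with $2n+j=N$, $j\geq\nu$ and $j-\nu\in\mathbb N_0$, and compares the result with the closed formula:

```python
# Brute-force check of the degeneracy formula (illustrative, integer nu >= 0):
# states (n, j, j3) with 2n + j = N, j >= nu, j - nu a non-negative integer.
def degeneracy(nu, N):
    return sum(2*j + 1 for j in range(nu, N + 1) if (N - j) % 2 == 0)

def formula(nu, N):
    if (N - nu) % 2 == 0:   # equivalently j - nu even, since N - j = 2n
        return (N + nu + 1)*(N - nu + 2)//2
    return (N - nu + 1)*(N + nu + 2)//2

assert all(degeneracy(nu, N) == formula(nu, N)
           for nu in range(4) for N in range(nu, nu + 9))
# the ground level N = nu carries 2|nu| + 1 states
assert all(degeneracy(nu, nu) == 2*nu + 1 for nu in range(4))
```

For half-integer $\nu$ the same counting applies with $j-|\nu|\in\mathbb N_0$; the ground level $N=|\nu|$ always carries the $2|\nu|+1$ states mentioned above.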
This can be compared with the structure of the principal quantum number $N=n_r+l+1$ defining the spectrum in the quantum model of the hydrogen atom, where the corresponding classical periods are equal. The explicit wave functions in (\ref{wavefunction}) are specified by the discrete quantum numbers $n$, $j$ and $j_3$. Our target now is to identify the ladder operators for the radial, $n$, and angular momentum, $j$, quantum numbers (we already have the ladder operators for $j_3$), which are based on the conformal and hidden symmetries of the system. In the algebraic approach we do not fix the representation for the position and momentum operators and thus use Dirac's ket notation for eigenstates. \vskip0.2cm \noindent \emph{Ladder operators for $n$.} Let us first consider the quantum version of the $\mathfrak{sl}(2,\mathbb R)$ symmetry, \begin{equation} \label{Qsl2r} [H,\,{\mathcal{C}}]=-2\omega\,{\mathcal{C}}\,,\qquad [H,\,{\mathcal{C}}^\dagger]=2\omega\,{\mathcal{C}}^\dagger \,,\qquad [\,{\mathcal{C}},\,{\mathcal{C}}^\dagger]=4\omega H\,, \end{equation} where the generators $\,{\mathcal{C}},\,{\mathcal{C}}^\dagger$ are the quantum versions of combinations of Newton-Hooke symmetry generators in the Schr\"odinger picture at $t=0$, i.e., \begin{equation} \label{ladern} \,{\mathcal{C}}=H-\omega^2r^2- \frac{i\omega}{2}(\mbf{r}\cdot\mbfgr{\pi}+\mbfgr{\pi}\cdot\mbf{r})\,, \end{equation} and their action on the eigenstates is \begin{eqnarray} & \,{\mathcal{C}}\ket{n,j,j_3}=\omega\, d_{n,j}\ket{n-1,j,j_3}\quad,\quad \,{\mathcal{C}}^\dagger\ket{n,j,j_3}=\omega\, d_{n+1,j}\ket{n+1,j,j_3}\,,&\\& d_{n,j}=\sqrt{2n(2n+2j+1)}\,.\label{d-coeff}& \end{eqnarray} \emph{Ladder operators for $j$.} We introduce the complex vector operator \begin{equation} \mbf{a}=\frac{1}{2}(\mbf{b}\cross\mbf{J}-\mbf{J}\cross\mbf{b})=(\mbf{b}\cross\mbf{J}-i\mbf{b})\,,\qquad \mbf{b}=\frac{1}{\sqrt{2}} (\mbfgr{\pi}-i\omega m \mbf{r})\,, \label{b} \end{equation} together with its Hermitian conjugate.
The vector operator $\mbf{a}$ is the quantum version of the complex classical quantity $\frac{1}{\sqrt{2}} (\mbf{I}_1+i\mbf{I}_2)$ in the Schr\"odinger picture at $t=0$, and its components satisfy the relations \begin{eqnarray} &\label{comHaJa} [H,a_{i}]=-\omega a_i\,,\qquad [J_i,a_{j}]=i\epsilon_{ijk}a_{k}\,,\qquad [a_{i},a_j]=-i\epsilon_{ijk}\,{\mathcal{C}} J_k\,, &\\& \label{comaa} [a_{i}^\dagger,a_{j}]=-\omega[(2\mbf{J}^2+1-\nu^2)\delta_{ij}-J_{i}J_j]-iH\epsilon_{ijk}J_k\,. & \end{eqnarray} The action of these operators is computed algebraically in \textcolor{red}{[\cite{InzPlyWipf1}]}, and for us it is sufficient to consider $a_3$ and $a_3^\dagger$ and their actions on the ket-states \begin{align} \label{eta on psi1} a_3\ket{n,j,j_3}&=A_{n,j,j_3}\ket{n,j-1,j_3}+B_{n,j,j_3}\ket{n-1,j+1,j_3}\,, \\ a_3^\dagger\ket{n,j,j_3}&=A_{n,j+1,j_3}\ket{n,j+1,j_3}+B_{n+1,j-1,j_3}\ket{n+1,j-1,j_3}\,, \end{align} where the squares of the positive coefficients are \begin{align} \label{Anlm1} \big(A_{n,j,j_3}\big)^2&=\omega(2n+2j+1)\, \frac{(j^2-j^2_3)(j^2-\nu^2)}{(2j)^2-1}\,, & \big(B_{n,j,j_3}\big)^2&=\frac{2n}{2n+2j+3}\big(A_{n,j+1,j_3}\big)^2\,. \end{align} We see that the operators $a_3$ and $a_3^\dagger$ change the quantum numbers $n$ and $j$, but the result is a superposition of the two eigenstate vectors. Their action is depicted in Fig. \ref{figure2Esqueme}. \begin{figure}[H] \begin{center}\includegraphics[scale=0.3]{Esquema.eps}\hskip5mm \caption[Angular ladder operators action, Sec. 9.2]{\small{The circles represent the first two quantum numbers of the eigenstates $\ket{n,j,j_3}$. Red arrows indicate the action of $a_3$ and blue arrows correspond to the action of $a_3^\dagger$.
Note that some circles have two outgoing arrows of the same color, which means that the action of the raising/lowering operator on those states produces a superposition of two states.} } \label{figure2Esqueme} \end{center} \end{figure} Clearly, if we are working in a representation where $H$, $\mbf{J}^2$ and $J_3$ are simultaneously diagonalized, it is rather natural to try to find ladder operators that map a given eigenstate into just one eigenstate with a different quantum number $j$, and not into a superposition of eigenstates (having in mind the picture of the usual harmonic oscillator). To find such operators we introduce the nonlocal operator \begin{eqnarray} & \mathscr{J}=\sqrt{\mbf{J}^2+\frac{1}{4}}-\frac{1}{2}\,,\qquad \mathscr{J}\ket{n,j,j_3}=j\ket{n,j,j_3}\,, \end{eqnarray} and construct the operators \begin{equation} \mathscr{T}_{\pm}=\omega(\mathscr{J}+\tfrac{1}{2})a_3\pm(H-\omega)a_3\mp a_3^\dagger\,{\mathcal{C}}\, \label{defLadder2} \end{equation} together with their Hermitian conjugates. Actually $\mathscr{T}_\pm$ and $\mathscr{T}_\pm^\dagger$ are the third components of the vector operators $\mathbfcal{T}_\pm$ and $\mathbfcal{T}_\pm^\dagger$ which are given by (\ref{defLadder2}) wherein $a_3$ and $a_3^\dagger$ are replaced by $\mbf{a}$ and $\mbf{a}^\dagger$ on the right hand side. But in what follows it suffices to consider $\mathscr{T}_\pm$ and $\mathscr{T}_\pm^\dagger$, which are ladder operators for the energy, \begin{equation} [H,\mathscr{T}_\pm]=-\omega\mathscr{T}_\pm\,,\qquad [H,\mathscr{T}_\pm^\dagger]=\omega\mathscr{T}_\pm^\dagger\,. \end{equation} They decrease and increase the angular momentum according to \begin{align} \label{ActionA-} \mathscr{T}_+\ket{n,j,j_3}&=\omega(2j+1)A_{n,j,j_3}\ket{n,j-1,j_3}\,,\\ \label{ActionA+} \mathscr{T}_-\ket{n,j,j_3}&=\omega(2j+3)B_{n,j,j_3}\ket{n-1,j+1,j_3}\,, \end{align} and the analogous Hermitian conjugate relations.
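The building blocks of these constructions can be tested directly. The following sketch (an illustrative symbolic check with $m=\hbar=\omega=1$; it assumes the radial representations $H\to-\frac{1}{2}(\partial_r^2+\frac{2}{r}\partial_r)+\frac{j(j+1)}{2r^2}+\frac{r^2}{2}$ on monopole harmonics and $\mbf{r}\cdot\mbfgr{\pi}+\mbfgr{\pi}\cdot\mbf{r}\to-i(2r\partial_r+3)$) verifies $H\psi_{n,j}^{j_3}=E_{n,j}\psi_{n,j}^{j_3}$ and the ladder action $\,{\mathcal{C}}\,\psi_{n,j}^{j_3}=d_{n,j}\,\psi_{n-1,j}^{j_3}$ with the coefficients (\ref{d-coeff}) for small quantum numbers:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def psi(n, j):
    # normalized radial eigenfunction f_{n,j}(r) for omega = 1
    C = sp.sqrt(2*sp.factorial(n)/sp.gamma(n + j + sp.Rational(3, 2)))
    return C * r**j * sp.assoc_laguerre(n, j + sp.Rational(1, 2), r**2) * sp.exp(-r**2/2)

def H(g, j):
    # radial Hamiltonian: J^2 -> j(j+1) on the monopole harmonics
    return -(sp.diff(g, r, 2) + 2*sp.diff(g, r)/r)/2 + j*(j + 1)*g/(2*r**2) + r**2*g/2

def Cop(g, j):
    # C = H - r^2 - (1/2)(2 r d/dr + 3)   (omega = 1)
    return H(g, j) - r**2*g - (2*r*sp.diff(g, r) + 3*g)/2

for n in (1, 2):
    for j in (1, 2):
        # H psi_{n,j} = (2n + j + 3/2) psi_{n,j}
        assert sp.simplify(H(psi(n, j), j) - sp.Rational(4*n + 2*j + 3, 2)*psi(n, j)) == 0
        # C psi_{n,j} = sqrt(2n(2n + 2j + 1)) psi_{n-1,j}
        d = sp.sqrt(2*n*(2*n + 2*j + 1))
        assert sp.simplify(Cop(psi(n, j), j) - d*psi(n - 1, j)) == 0
```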
These nonlocal objects were inspired by a similar construction presented in \textcolor{red}{[\cite{DiracOs3}]} for the three-dimensional isotropic harmonic oscillator. Now one can generate in a simple way all eigenstates of the commuting observables $H,\mbf{J}^2$ and $J_3$ by acting with the local ladder operators $\,{\mathcal{C}},\,{\mathcal{C}}^\dagger, J_\pm$ and with the nonlocal ladder operators $\mathscr{T}_+,\mathscr{T}_{+}^\dagger$ on just one eigenstate. The same can be achieved with local ladder operators when one uses $a,a^\dagger$ instead of $\mathscr{T}_+,\mathscr{T}_+^\dagger$, but then the recursive construction gets more involved, since $a,a^\dagger$ map into a superposition of eigenstates. \subsection{The conformal bridge in monopole background} \label{Conformal bridge in mono} Here we show how the generators of the conformal symmetry as well as the hidden symmetry of the quantum system (\ref{Qm Hamiltonian}) can be obtained from generators of the corresponding symmetries of the quantum system studied in \textcolor{red}{[\cite{PlyWipf}]}. This will be realized by means of the conformal bridge transformation introduced in Chap. \ref{ChBridge}. Similarly to the classical case, in the limit $\omega\rightarrow 0$ the quantum version of the generators (\ref{freemotion}) has the form \begin{eqnarray} & \label{spin0monopole} H_0=\frac{1}{2}\left(\mbfgr{\pi}^2+\frac{\nu^2}{r^2}\right)=\frac{1}{2} \left(-\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)+ \frac{1}{r^2}\mbf{J}\!\,^2\right)\,,& \\& D_0=\frac{1}{4}(\mbf{r}\cdot\mbfgr{\pi}+\mbfgr{\pi}\cdot\mbf{r} ) - H_0t\,,\qquad K_0=\frac{1}{2} r^2 -2D_0t-H_0t^2\,.&\label{K0r2} \end{eqnarray} They produce the quantum conformal algebra \begin{equation} \label{quantumconformal} [D_0,H_0]=iH_0\,,\qquad [D_0,K_0]=-iK_0\,,\qquad [K_0,H_0]=2iD_0\,.
\end{equation} The Hamiltonian $H_0$ is a non-compact generator of the conformal algebra $\mathfrak{sl}(2,\mathbb R)$ with a continuous spectrum $(0,\infty)$. In the same limit and in the quantum version, the vector integrals $\mbf{I}_1$ and $\mbf{I}_2$ transform into the vectors \begin{equation} \label{LRL-G} \mbf{I}_1\rightarrow\frac{1}{2}\left(\mbfgr{\pi}\cross\mbf{J}-\mbf{J}\cross\mbfgr{\pi}\right):=\mbf{V}\,,\qquad \frac{\mbf{I}_2}{\omega}\rightarrow\frac{1}{2}\big((\mbfgr{\pi} t-\mbf{r})\cross\mbf{J}-\mbf{J}\cross(\mbfgr{\pi} t-\mbf{r})\big):=\mbf{G}\,, \end{equation} which we identify, respectively, as the Laplace-Runge-Lenz vector and the Galilei boost generator for the system $H_0$ \textcolor{red}{[\cite{PlyWipf}]} in the Weyl-ordered form. The commutator relations of the vectors $\mbf{V}$ and $\mbf{G}$ with the generators of the conformal algebra are \begin{eqnarray} & [H_0,G_i]=-iV_i\,,\qquad [K_0,V_i]=iG_i\,,\qquad [H_0,V_i]=[K_0,G_i]=0\,,&\\& [D_0,V_i]=\frac{i}{2}V_i\,,\qquad [D_0,G_i]=-\frac{i}{2}G_i\,.& \end{eqnarray} In order to go in the opposite direction, i.e., to recover our system $H$ and its symmetry generators starting from the generators (\ref{spin0monopole}), (\ref{K0r2}) and (\ref{LRL-G}), we implement the conformal bridge transformation \textcolor{red}{[\cite{InzPlyWipf2}]}, \begin{equation} \mathfrak{S}=e^{-\omega K_0}e^{\frac{1}{2\omega}H_0}e^{i\ln 2 D_0}\,, \end{equation} where the generators are fixed at $t=0$.
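The radial representation of these generators can be checked symbolically. The following sketch (illustrative; it uses the assumed radial forms $H_0\to-\frac{1}{2}(\partial_r^2+\frac{2}{r}\partial_r)+\frac{j(j+1)}{2r^2}$, $D_0\to-\frac{i}{4}(2r\partial_r+3)$ and $K_0\to\frac{1}{2}r^2$ at $t=0$) verifies the conformal algebra (\ref{quantumconformal}) on an arbitrary power of $r$, and also that the homogeneous functions $r^{j+2n}\mathcal{Y}_j^{j_3}$ are annihilated by $H_0^{n+1}$ and are eigenfunctions of $2iD_0$ with eigenvalue $2n+j+3/2$:

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)
j = sp.symbols('j', positive=True)

def H0(g):
    # radial part of H_0, with J^2 -> j(j+1)
    return -(sp.diff(g, r, 2) + 2*sp.diff(g, r)/r)/2 + j*(j + 1)*g/(2*r**2)

def D0(g):
    # D_0 at t = 0:  (r.pi + pi.r)/4 -> -(i/4)(2 r d/dr + 3)
    return -sp.I*(2*r*sp.diff(g, r) + 3*g)/4

def K0(g):
    # K_0 at t = 0:  r^2/2
    return r**2*g/2

g = r**a  # generic power of r; the commutators are checked on it
assert sp.simplify(D0(H0(g)) - H0(D0(g)) - sp.I*H0(g)) == 0        # [D0,H0] = iH0
assert sp.simplify(D0(K0(g)) - K0(D0(g)) + sp.I*K0(g)) == 0        # [D0,K0] = -iK0
assert sp.simplify(K0(H0(g)) - H0(K0(g)) - 2*sp.I*D0(g)) == 0      # [K0,H0] = 2iD0

# chi = r^(j+2n): H0^(n+1) chi = 0 and 2i D0 chi = (2n + j + 3/2) chi
for n in range(3):
    chi = r**(j + 2*n)
    h = chi
    for _ in range(n + 1):
        h = H0(h)
    assert sp.simplify(h) == 0
    assert sp.simplify(2*sp.I*D0(chi) - (2*n + j + sp.Rational(3, 2))*chi) == 0
```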
A similarity transformation generated by $\mathfrak{S}$ yields \begin{eqnarray} \label{bridge 1} & \mathfrak{S}(\mbf{J})\mathfrak{S}^{-1}=\mbf{J}\,,\qquad \mathfrak{S}(\mbf{V})\mathfrak{S}^{-1}=\mbf{a}\,,\qquad \mathfrak{S}(\omega\mbf{G})\mathfrak{S}^{-1}=-i\mbf{a}^\dagger\,,&\\ & \label{bridge 2} \mathfrak{S}(H_0)\mathfrak{S}^{-1}=\frac{1}{2}\,{\mathcal{C}}\,\qquad \mathfrak{S}(2i\omega D_0)\mathfrak{S}^{-1}=H \,,\qquad \mathfrak{S}(\omega^2 K_0)\mathfrak{S}^{-1}=-\frac{1}{2}\,{\mathcal{C}}^\dagger\,, & \end{eqnarray} where $H=H_0+\omega^2K_0$ is the quantum Hamiltonian (\ref{Qm Hamiltonian}). Then, as we know from Chap. \ref{ChBridge}, the eigenstates of $H$ are mapped from the rank $n$ Jordan states of zero energy of $H_0$, which also satisfy the equation $2i\omega D_0\chi_{n,j}^{j_3}=\omega(2n+j+3/2)\chi_{n,j}^{j_3}$. Besides, the coherent states are obtained from the wave-type eigenstates of $H_0$. On one hand, the mentioned Jordan states are \begin{equation} \chi_{n,j}^{j_3}(r,\theta,\phi)=r^{j+2n}\mathcal{Y}_{j}^{j_3}(\theta,\phi)\,, \end{equation} and after the transformation we get \begin{eqnarray} &\mathfrak{S}\chi_{n,j}^{j_3}=\frac{(-1)^n}{\sqrt{2}}\left(\frac{2}{\omega}\right)^{n+\frac{j}{2}+\frac{3}{4}} \left[n! \Gamma(n+j+3/2)\right]^{\frac{1}{2}}\psi_{n,j}^{j_3}\,. 
& \end{eqnarray} On the other hand, the corresponding eigenstates of $H_0$ are \begin{eqnarray} \label{Jstates} &\phi_j^{j_3}(\mbf{r};\kappa)=\frac{1}{\sqrt{r}}J_{j+\frac{1}{2}}(\kappa r)\mathcal{Y}_{j}^{j_3} =\sum_{n=0}^{\infty}\frac{(-1)^{n}(\kappa/2)^{2n+j+1/2}}{n!\Gamma(n+j+3/2)}\chi_{n,j}^{j_3}(\mbf{r})\,,& \end{eqnarray} and the normalized coherent states of $H$ are \begin{eqnarray} \label{coherent} &\zeta^{j_3}_{j}(\mbf{r};\kappa)= N\mathfrak{S}\phi_j^{\,j_3}(\mbf{r};\frac{\kappa}{\sqrt{2}})=\sqrt{2}Ne^{-\frac{\omega r^2}{2}+\frac{\kappa^2}{4\omega}}\phi_j^{\,j_3}(\mbf{r};\kappa)\\\nonumber&= \frac{N}{\omega^{1/2}}\sum_{n=0}^{\infty} \frac{1} {\sqrt{n!\Gamma(n+j+\frac{3}{2})}}\left(\frac{\kappa}{2\sqrt{\omega}}\right)^{2n+j+ 1/2}\psi_{n,j}^{j_3}(\mbf{r})\,, & \end{eqnarray} where $N^2=\sqrt{\omega}/(I_{j+\frac{1}{2}}(\frac{|\kappa|^2}{2\omega}))$, $I_{j+\frac{1}{2}}(z)$ is the modified Bessel function of the first kind, and the modulus appears in its argument because $\kappa$ admits an analytic extension to complex values, as is usual for coherent states. \section{Remarks} As we have shown, hidden symmetries appear only when $ \alpha = \nu^2 $. In this case one always has closed trajectories, the angular period is twice the radial period, and, moreover, the projected dynamics in the plane orthogonal to the Poincar\'e vector turns out to be similar to that of the three-dimensional isotropic harmonic oscillator. In fact, such an interesting ``coincidence'' is actually a universal property of the monopole background. Consider the system described by the Hamiltonian \begin{equation} \label{centralpotentialH} H=\frac{\mbfgr{\pi}^2}{2m}+\frac{\nu^{2}}{2mr^{2}}+ U(r)\,, \end{equation} where $ U (r) $ is an arbitrary central potential.
The dynamical variables $ \mbf{r} \cross \mbf{J} $ and $ \mbfgr{\pi} \cross \mbf{J} $ satisfy the same equations of motion as the vector variables $ \mbf{r} \cross \mbf{L} $ and $ \mbf{p} \cross \mbf{L} $ when $ \nu = eg = 0 $, where $ \mbf{L} $ is the usual angular momentum: \begin{table}[H] \begin{center} \begin{tabular}{|c| c|} \hline $\nu\not=0$ &$\nu=0 $\\\hline $\frac{d}{dt}(\mbf{r}\cross\mbf{J})=\frac{1}{m}\mbfgr{\pi}\cross\mbf{J} $& $\frac{d}{dt}(\mbf{r}\cross\mbf{L}) =\frac{1}{m}\mbf{p}\cross\mbf{L}$\\\hline $\frac{d}{dt}(\mbfgr{\pi}\cross\mbf{J})=U'(r)\,\mbf{n}\cross \mbf{J} $& $\frac{d}{dt}(\mbf{p}\cross\mbf{L})=U'(r)\,\mbf{n}\cross \mbf{L}$\\ \hline \end{tabular} \caption{Comparison of dynamics in the presence and absence of the monopole charge.} \label{Tabla 2} \end{center} \end{table} \noindent \vskip-0.5cm Therefore, the motion in the plane orthogonal to $ \mbf{J} $ is equivalent to the dynamics obtained in the absence of the magnetic monopole source, and if we know the solutions $ \mbf{r} = \mbf{r} (t) $ and $ \mbf{p} = \mbf{p} (t) $ in the case $ \nu = 0 $, the dynamics for $ \mbfgr{\pi} \cross \mbf{J} $ and $ \mbf{r}\cross \mbf{J} $ is at hand, \begin{equation} \mbf{r}(t)=\frac{1}{J^2}\left(\mbf{J}\cross(\mbf{r}(t)\cross\mbf{J})-\nu\,\frac{|\mbf{r}(t)\cross\mbf{J}|}{\sqrt{J^2-\nu^2}}\,\mbf{J}\right)\,. \end{equation} On the other hand, if we take the system $ \widetilde {H}_\nu = \frac{1}{2m}\mbfgr{\pi} ^ 2 + \widetilde{U} (r) $ with arbitrary central potential $ \widetilde {U} (r) $, the corresponding dynamical problem is reduced to that of the system (\ref{centralpotentialH}) with central potential $ U (r) = \widetilde{U} (r) - \nu^2 / 2mr^ 2 $. The indicated similarities and relations allow us, in particular, to immediately identify the analog of the Laplace-Runge-Lenz vector (\ref{LRL-G}) for a particle in the monopole background in the cases $ \widetilde{U} = 0 $, $ U = 0 $ and for the Kepler problem with $ U = q / r $.
This was done previously in \textcolor{red}{[\cite{Plymono2,PlyWipf}]} and \textcolor{red}{[\cite{Vinet}]} using different approaches. In the next chapter we will study how to extend this picture to supersymmetric quantum mechanics. \chapter{A charge-monopole superconformal model} \label{Chapmono2} In this chapter we extend our system by means of an additional contribution in the Hamiltonian (\ref{Qm Hamiltonian}) that involves spin degrees of freedom. The supplemented term describes a strong long-range spin-orbit coupling, and one of its direct consequences is the appearance of two independent subsets of energy levels. In one of these subsets, or towers, infinitely degenerate energy levels appear, while in the other the levels have finite degeneracy. The system is studied in detail in Sec. \ref{sectionSOC}. In Sec. \ref{osp22 extension} we show that thanks to this term, the system introduced earlier supports a factorization in terms of intertwining operators that naturally leads us to a supersymmetric extension, which is nothing more than a three-dimensional realization of the superalgebra $ \mathfrak{osp} (2 \vert 2) $. Finally, in Sec. \ref{DimRed} it is shown that by means of certain dimensional reductions it is possible to obtain supersymmetric AFF models in their exact and spontaneously broken supersymmetric phases. Something special about the models obtained in this way is that the coupling constant in the potential is $j (j + 1)$, where $j$ can take integer or half-integer values, starting from $|\nu| = |eg|$. \section{Introducing spin degrees of freedom: Spin-orbit coupling} \label{sectionSOC} Let us consider the following two Hamiltonians with strong spin-orbit coupling \begin{equation} \label{spinorbitH} H_{\pm\omega}= \frac{1}{2}\left(\mbfgr{\pi}^2 +\omega^2 r^2+\frac{\nu^2}{r^2}\right)\pm\omega\,\mbfgr{\sigma}\cdot\mbf{J} =H\pm\omega\,\mbfgr{\sigma}\cdot\mbf{J}\,.
\end{equation} The Hamiltonians $H_{\pm\omega}$ are similar to those which appear as subsystems of the nonrelativistic limit of the supersymmetric Dirac oscillator discussed in \textcolor{red}{[\cite{DiracOs1,DiracOs2}]}. Thus the eigenvalue problems can be solved in the same way as in those references, but with the usual spherical harmonics replaced by the monopole harmonics. Actually, if we chose a spin-orbit coupling $\omega'\,\mbfgr{\sigma}\cdot\mbf{J}$ with $0\leq \omega<\omega'$, then the spectra of both Hamiltonians would be unbounded from below. On the other hand, for $0\leq \omega'<\omega$ all energies would have finite degeneracy. Only in the very particular case $\omega'=\omega$, which we consider here, are the spectra bounded from below, with half of the energies having finite degeneracy whereas the other half have infinite degeneracy. This reminds us of the BPS limits in field theory, where different interactions balance and supersymmetry is observed. The operators $H$ and $\,\mbfgr{\sigma}\cdot\mbf{J}$ commute, and as a consequence $H_{\pm \omega}$ commute with the ``total angular momentum'' \begin{equation} \mbf{K}=\mbf{J}+\mbf{s}=\mbf{J}+\tfrac{1}{2}\,\mbfgr{\sigma}\,,\qquad [K_i,K_j]=i\epsilon_{ijk}K_k\,. \end{equation} The possible eigenvalues of $\mbf{K}^2$ are $k(k+1)$. It is well-known how to construct the simultaneous eigenstates of $\mbf{K}^2$ and $K_3$: \begin{equation} \ket{n,k,k_3,\pm} =\sum_{m_s}C^{kk_3}_{jj_3\tiny\frac{1}{2} m_s} \ket{n,j,j_3} \otimes\ket{\tfrac{1}{2},m_s}_{k=j\pm\tiny\frac{1}{2}}\,, \label{eigenHpm} \end{equation} where the Clebsch-Gordan coefficients \begin{equation} C^{kk_3}_{jj_3\tiny\frac{1}{2} m_s}=\bra{j,j_3,\tfrac{1}{2},m_s}k,k_3\rangle\, \end{equation} on the right hand side are nonzero only if $j_3+m_s=k_3$ and if the triangle rule holds, which means that the total angular momentum $k$ is either $j+\frac{1}{2}$ or $j-\frac{1}{2}$.
In the first case the eigenstates of the total angular momentum are denoted by $\ket{\dots,k,k_3,+}$ and in the second case by $\ket{\dots,k,k_3,-}$. The sums (\ref{eigenHpm}) contain just two terms, since the eigenvalue $m_s$ of the third spin-component $s_3=\frac{1}{2}\sigma_3$ is either $\frac{1}{2}$ or $-\frac{1}{2}$. In the coordinate representation the wavefunctions corresponding to these kets are given by \vskip-0.5cm \begin{eqnarray} & \label{Wspin+-} \bra{\mbf{r}}\ket{n,k,k_3,\pm}=f_{n,j}(\sqrt{\omega}r)\bra{\mbf{n}}\ket{k,k_3,\pm}\,,&\\& \label{Omega} \bra{\mbf{n}}\ket{k,k_3,\pm}= \frac{1}{\sqrt{2k+1\mp 1}} \left(\begin{array}{cc} \pm \sqrt{k\pm k_3+(1\mp 1)/2}\,\mathcal{Y}_{k\mp 1/2}^{k_3-1/2}(\theta,\varphi;\nu)\\ \sqrt{k\mp k_3 +(1\mp 1)/2}\,\mathcal{Y}_{k\mp 1/2}^{k_3+1/2}(\theta,\varphi;\nu) \end{array}\right):=\Omega_{k}^{k_3\,\pm}\,. & \end{eqnarray} If $\nu=eg$ is integer-valued then $j$ is a non-negative integer and $k$ a positive half-integer. If $eg$ is half-integer, then $j$ is a positive half-integer and $k$ is in $\mathbb N_0$. The vector in (\ref{eigenHpm}) is a simultaneous eigenstate of $\mbf{J}^2$ with eigenvalue $j(j+1)$, of $\mbf{K}^2$ with eigenvalue $k(k+1)$, of $H$ with eigenvalue $(2n+j+\frac{3}{2})\omega$, where $j=k\mp 1/2$, and finally of the operator $\,\mbfgr{\sigma}\cdot\mbf{J}$: \begin{equation} \,\mbfgr{\sigma}\cdot\mbf{J}\ket{n,k,k_3,\pm}=\big(\pm(k+\tfrac{1}{2})-1\big) \ket{n,k,k_3,\pm}\,. \end{equation} As a consequence the action of the Hamiltonians in (\ref{spinorbitH}) on these states is \begin{align} H_{+\omega}\ket{n,k,k_3,\pm}&=\omega\left(2n+k+\tfrac{1}{2} \pm k \right) \ket{n,k,k_3,\pm}\,,\label{Hket+}\\ H_{-\omega}\ket{n,k,k_3,\pm}&=\omega\left(2n+k+\tfrac{5}{2} \mp(k+1)\right) \ket{n,k,k_3,\pm}\,. 
\label{Hket-} \end{align} We see that the discrete eigenvalues of both Hamiltonians $H_{\pm\omega}$ fall into two families: in one family all energies are infinitely degenerate, and in the other family they all have finite degeneracy (due to their dependence on the quantum number $k$). More explicitly, for $k=j\mp\frac{1}{2}$ the eigenvalues of $H_{\mp\omega}$ have infinite degeneracy, and for $k=j\pm\frac{1}{2}$ they have finite degeneracy $\mathfrak{g}(N,\nu)=N^2-\nu^2$, where $N=n+j+1$. A similar peculiar behavior is observed in the Dirac oscillator spectrum \textcolor{red}{[\cite{DiracOs1}]}. The operators $K_\pm=K_1\pm iK_2$ are the ladder operators for the magnetic quantum number $k_3$. The ladder operators for the radial quantum number are given in (\ref{ladern}), and their action on the simultaneous eigenstates reads \begin{align} \,{\mathcal{C}}\ket{n,k,k_3,\pm}&=\omega d_{n,j}\ket{n-1,k,k_3,\pm}\,,\\ \,{\mathcal{C}}^\dagger\ket{n,k,k_3,\pm}&=\omega d_{n+1,j}\ket{n+1,k,k_3,\pm}\,, \end{align} with the coefficients defined in (\ref{d-coeff}). Thus, as for the spin-zero particle system in the monopole background, we can easily construct local ladder operators for $n$ and $k_3$. But again, finding ladder operators for $k$ is more difficult. One way to proceed is to follow the ideas employed for the Dirac oscillator in \textcolor{red}{[\cite{DiracOs1,DiracOs2,DiracOs3}]}. First we decompose the total Hilbert space into two subspaces, $\mathscr{H}=\mathscr{H}^{(+)}\oplus \mathscr{H}^{(-)}$, where each $\mathscr{H}^{(\pm)}$ is spanned by the states $\ket{n,k,k_3,\pm}$.
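The spin-orbit eigenvalues that underlie these two towers can be checked by direct diagonalization in a single multiplet. The following sketch (illustrative; it builds the standard spin matrices and evaluates $\mbfgr{\sigma}\cdot\mbf{J}$ as $\sum_i\sigma_i\otimes J_i$ on $\mathbb C^2\otimes\mathcal H_j$) confirms the eigenvalues $\pm(k+\frac{1}{2})-1$ with multiplicities $2k+1$ for $k=j\pm\frac{1}{2}$:

```python
import numpy as np

def jmats(j):
    # standard spin-j matrices (Jx, Jy, Jz) in the basis |j, m>, m = j, ..., -j
    dim = int(round(2*j)) + 1
    m = np.array([j - i for i in range(dim)])
    jp = np.zeros((dim, dim))
    for i in range(1, dim):
        jp[i - 1, i] = np.sqrt(j*(j + 1) - m[i]*(m[i] + 1))
    jm = jp.T
    return (jp + jm)/2, (jp - jm)/(2*1j), np.diag(m)

def sigma_dot_J(j):
    # sigma . J = sum_i sigma_i (x) J_i on C^2 (x) H_j
    sigmas = [2*s for s in jmats(0.5)]
    Js = jmats(j)
    return sum(np.kron(s, J) for s, J in zip(sigmas, Js))

for j in (1.0, 1.5, 2.0, 2.5):
    ev = np.sort(np.linalg.eigvalsh(sigma_dot_J(j)))
    # -(k+1/2)-1 = -j-1 on the k = j-1/2 multiplet (2j states);
    # +(k+1/2)-1 =  j   on the k = j+1/2 multiplet (2j+2 states)
    expected = np.sort(np.array([-j - 1]*int(round(2*j)) + [j]*int(round(2*j + 2))))
    assert np.allclose(ev, expected)
```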
Actually we can construct nonlocal operators which project orthogonally onto these subspaces, \begin{align} \mathscr{P}_{+}&=\tfrac{1}{2}+\sqrt{\mbf{K}^2+\tfrac{1}{4}}- \sqrt{\mbf{J}^2+\tfrac{1}{4}}\,,\\ \mathscr{P}_{-}&=\tfrac{1}{2}-\sqrt{\mbf{K}^2+\tfrac{1}{4}}+ \sqrt{\mbf{J}^2+\tfrac{1}{4}}\,, \end{align} and reproduce or annihilate the eigenstates, \begin{equation} \mathscr{P}_{\pm}\big\vert_{\mathscr{H}^{(\pm)}} =\mathbbm{1}\big\vert_{\mathscr{H}^{(\pm)}}\,,\qquad \mathscr{P}_{\pm}\big\vert_{\mathscr{H}^{(\mp)}}=0\,. \end{equation} In the next step we introduce the operators \begin{equation} \label{PAP+} \mathcal{A}_{\pm}=\mathscr{P}_\pm \mathscr{T}_\pm\mathscr{P}_\pm\,, \end{equation} where the nonlocal $\mathscr{T}_\pm$ have been defined in (\ref{defLadder2}). The presence of the projectors ensures that $\mathcal{A}_\pm$ act only on eigenstates in $\mathscr{H}^{(\pm)}$, and their action on these eigenstates can be computed straightforwardly using the relations (\ref{ActionA-}) and (\ref{ActionA+}): \begin{align} \label{nonlocalAaction} &\mathcal{A}_+ \ket{n,k,k_3,+}= (k-1)\sqrt{n+k}\,\Lambda_{k,k_3,j}\, \ket{n,k-1,k_3,+}\,, \\ &\mathcal{A}_{-} \ket{n,k-1,k_3,-}= (k+1)\sqrt{n}\,\Lambda_{k,k_3,j} \, \ket{n-1,k,k_3,-}\,, \end{align} with \[ \Lambda_{k,k_3,j}=\frac{\omega^{3/2}}{k}\sqrt{2(k^2-k_3^2)(j^2-\nu^2)}\,. \] These relations mean that the operators $\mathcal{A}_\pm$ and their adjoints act as ladder operators for the quantum number $k$. Together with the operators $K_\pm,\,{\mathcal{C}},\,{\mathcal{C}}^\dagger$ they generate all eigenstates in the full Hilbert space from just two eigenstates, one from each subspace $\mathscr{H}^{(\pm)}$. \section{The $\mathfrak{osp}(2\vert 2)$ superconformal extension} \label{osp22 extension} In this section we construct and analyze supersymmetric partners of the Hamiltonians $H_{\pm \omega}$ by introducing factorizing operators.
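Before proceeding, note that the projector property of $\mathscr{P}_{\pm}$ stated above can be verified directly on eigenvalues: on simultaneous eigenstates one has $\sqrt{x(x+1)+\tfrac14}=x+\tfrac12$, so $\mathscr{P}_{+}$ takes the value $1$ on $\mathscr{H}^{(+)}$ (where $j=k-\tfrac12$) and $0$ on $\mathscr{H}^{(-)}$ (where $j=k+\tfrac12$). A minimal numerical sketch (the helper name \texttt{proj\_plus} is illustrative, not from the text):

```python
from math import sqrt

def proj_plus(k, j):
    # Eigenvalue of P_+ = 1/2 + sqrt(K^2 + 1/4) - sqrt(J^2 + 1/4) on a
    # simultaneous eigenstate: K^2 -> k(k+1), J^2 -> j(j+1), and
    # sqrt(x(x+1) + 1/4) = x + 1/2 for x >= 0.
    return 0.5 + sqrt(k * (k + 1) + 0.25) - sqrt(j * (j + 1) + 0.25)

# On H^(+) (j = k - 1/2) the projector acts as the identity,
# while on H^(-) (j = k + 1/2) it annihilates the state.
for k2 in range(1, 10):          # k = k2/2 runs over (half-)integer values
    k = k2 / 2
    assert abs(proj_plus(k, k - 0.5) - 1.0) < 1e-12
    assert abs(proj_plus(k, k + 0.5)) < 1e-12
```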
From these we obtain two $\mathcal{N}=2$ super-Poincar\'e quantum systems which are related to each other by a common integral of motion which generates an $R$-symmetry. Supplementing the supercharges of one of these systems by the supercharges of the other, we extend the $\mathcal{N}=2$ super-Poincar\'e symmetry up to the $\mathfrak{osp}(2\vert 2)$ superconformal symmetry realized by the three-dimensional system of a spin-1/2 particle in a monopole background. Consider the first-order scalar operators \begin{equation} \label{intertwiningQ} \Theta=i\,\mbfgr{\sigma}\cdot\mbf{b}-\frac{1}{\sqrt{2}}\frac{\nu}{r}\,,\qquad \Xi=i\,\mbfgr{\sigma}\cdot\mbf{b}^\dagger-\frac{1}{\sqrt{2}}\frac{\nu}{r}\,, \end{equation} and their adjoints $\Theta^\dagger$ and $\Xi^\dagger$. The products of these operators with their adjoints are \begin{equation} H_{[1]}:= \Theta\Theta^\dagger=H_{+\omega}+\tfrac{3}{2}\omega\,,\qquad \breve{H}_{[1]}:= \Xi\Xi^\dagger=H_{-\omega}-\tfrac{3}{2}\omega\,, \label{H0} \end{equation} where $H_{\pm\omega}$ are given in (\ref{spinorbitH}). The associated superpartners take the form \begin{align} \label{H0a} H_{[0]}&:= \Theta^\dagger \Theta=\breve{H}_{[1]}-\nu\left(\tfrac{1}{r^2}+2\omega \right)\sigma_r \,, \\ \label{H0b} \breve{H}_{[0]}&:=\Xi^\dagger \Xi =H_{[1]}-\nu\left(\tfrac{1}{r^2}-2\omega \right)\sigma_r \,, \end{align} wherein the projection of $\,\mbfgr{\sigma}$ onto the radial unit vector appears, \begin{equation} \label{sigman} \sigma_r=\mbf{n}\cdot\,\mbfgr{\sigma}=\left(\begin{array}{cc} \cos\theta & e^{-i\varphi}\sin\theta\\ e^{i\varphi}\sin\theta & -\cos\theta \end{array}\right)\,. \end{equation} The first-order operators satisfy the intertwining relations \begin{eqnarray} &\label{thetainter} \Theta H_{[0]}=H_{[1]}\Theta\,,\qquad \Theta^\dagger H_{[1]}=H_{[0]}\Theta^\dagger\,, &\\& \label{Xiinter} \Xi\breve{H}_{[0]}=\breve{H}_{[1]}\Xi\,,\qquad \Xi^\dagger \breve{H}_{[1]}=\breve{H}_{[0]}\Xi^\dagger\,.
& \end{eqnarray} To compute the action of the intertwining operators $\Theta^\dagger$ and $\Xi^\dagger$ on eigenstates of $H_{\pm \omega}$ it is useful to express them in the form \begin{align} \label{Qnu} \Theta^\dagger =\frac{\sigma_r}{\sqrt{2}}\left(- \frac{1}{r}\frac{\partial}{\partial r} r+\omega r+ \frac{1+\,\mbfgr{\sigma}\cdot\mbf{J}}{r}\right)\,, \\ \label{Wsigmaform} \Xi^\dagger =\frac{\sigma_r}{\sqrt{2}}\left(- \frac{1}{r}\frac{\partial}{\partial r} r-\omega r+ \frac{1}{r}(1+\,\mbfgr{\sigma}\cdot\mbf{J}) \right)\,. \end{align} The strategy is then to apply these operators directly to the eigenstates of $H_{\pm\omega}$ in their coordinate representation (\ref{Wspin+-}), obtaining in this way the eigenstates of the systems $H_{[0]}$ and $\breve{H}_{[0]}$. The action of the operators $\Theta$ and $\Xi$ on these new eigenvectors follows from the intertwining relations (\ref{thetainter})-(\ref{Xiinter}). The final result is \begin{align} \Theta^\dagger\ket{n,k,k_3,\pm}&=\pm\sqrt{2\omega(n+1+\beta_{\pm}k)} \,\Vert n+\beta_{\mp},k,k_3,\pm\rangle\,,\quad \beta_\pm=\tfrac{1}{2}(1\pm 1)\,,\label{qdonpsi} \\ \Theta\Vert n,k,k_3,\pm\rangle&=\pm \sqrt{2\omega(n+\beta_{\pm}(k+1))}\,\ket{n-\beta_{\mp},k,k_3,\pm}\,, \label{Qonphi+} \\ \Xi^\dagger\ket{n,k,k_3,\pm}&=\pm\sqrt{2\omega (n+\beta_{\mp}(k+1))}\, \Vert n-\beta_\pm,k,k_3,\pm\rangle\,, \label{W+onpsi1} \\ \Xi\,\Vert n,k,k_3,\pm\rangle &=\pm\sqrt{2\omega (n+1+\beta_\mp k)}\,\,\ket{n+\beta_{\pm},k,k_3,\pm}\,. \label{W+onphi2} \end{align} In the coordinate representation the normalized spinors $\Vert n,k,k_3,\pm\rangle$ have the explicit form \begin{align} \langle\mbf{r}\Vert n,k,k_3,\pm\rangle&=f_{n,j\pm1}(\sqrt{\omega}r)\,\sigma_r\Omega_{k}^{k_3\,\pm}\,, \label{spinors2a}\end{align} where $\Omega_{k}^{k_3\,\pm}$ are given in (\ref{Omega}).
From these equations it is easy to show that \begin{align} H_{[0]}\Vert n,k,k_3,\pm\rangle&=2\omega(n+\beta_{\pm}(k+1))\Vert n,k,k_3,\pm\rangle\,, \\ \breve{H}_{[0]} \,\Vert n,k,k_3,\pm\rangle&=2\omega(n+1+k\beta_{\mp})\,\Vert n,k,k_3,\pm\rangle\,, \label{eigenbreve} \end{align} and to note that, on the one hand, the states $\Vert 0,k,k_3,-\rangle$ are zero-modes of $H_{[0]}$ since they are annihilated by $\Theta$, while on the other hand $\Xi^\dagger$ as well as $\breve{H}_{[1]}$ annihilate the set of states $\ket{0,k,k_3,+}$. Having at hand the eigenstates $\Vert n,k,k_3,\pm\rangle$, one may find spectrum-generating ladder operators. In this context Eqs. (\ref{qdonpsi}), (\ref{Qonphi+}), (\ref{W+onpsi1}) and (\ref{W+onphi2}) can be used to construct such operators for the quantum number $n$. They read \begin{align} \tilde{\mathcal{C}}=\Xi^\dagger \Theta\,,\qquad \tilde{\mathcal{C}}^\dagger=\Theta^\dagger \Xi\,, \end{align} and act on the eigenvectors $\Vert \dots\rangle$ as follows: \begin{align} \tilde{\mathcal{C}}^\dagger\,\Vert n,k,k_3,\pm\rangle&=2\omega d_{n+1,j\pm 1} \, \Vert n+1,k,k_3,\pm\rangle\,,\nonumber\\ \tilde{\mathcal{C}}\,\Vert n,k,k_3,\pm\rangle&=2\omega d_{n,j\pm 1} \, \Vert n-1,k,k_3,\pm\rangle\,. \label{W+Q} \end{align} In fact, the first-order operators $\Theta$ and $\Xi^\dagger$ factorize the second-order ladder operator (\ref{ladern}) considered earlier, according to $\,{\mathcal{C}}=\Theta\Xi^\dagger$. Having constructed lowering and raising operators for $n$, we are still missing ladder operators for $k$ and $k_3$. For the latter we may of course use $K_\pm$, since $\Theta$, $\Xi$ and their adjoints are scalar operators with respect to $\mbf{K}$.
But once more, for the angular momentum quantum number $k$ we can introduce nonlocal ``dressed'' operators \begin{eqnarray} \label{nonlocaldressed} &\tilde{\mathcal{A}}_-=\Theta\sqrt{\frac{1}{H_{[1]}}}\mathcal{A}_-\sqrt{\frac{1}{H_{[1]}}}\Theta^\dagger\,,\qquad \tilde{\mathcal{A}}_+=\Xi\sqrt{\frac{1}{\breve{H}_{[1]}}}\mathcal{A}_+\sqrt{\frac{1}{\breve{H}_{[1]}}}\Xi^\dagger\,,&\qquad \end{eqnarray} and their adjoint operators, where $\mathcal{A}_\pm$ have been given in (\ref{PAP+}). The operators $\tilde{\mathcal{A}}_\pm$ are the analogs of $\mathcal{A}_\pm$ for the vectors $\Vert n,k,k_3,\pm\rangle$, as we can see from the equations \begin{align} \label{nonlocalAaction2} &\tilde{\mathcal{A}}_+ \Vert n,k,k_3,+\rangle= (k-1)\sqrt{n+k}\,\Lambda_{k,k_3,j}\, \Vert n,k-1,k_3,+\rangle \,, \\ &\tilde{\mathcal{A}}_{-}\Vert n,k-1,k_3,-\rangle= (k+1)\sqrt{n}\,\Lambda_{k,k_3,j} \, \Vert n-1,k,k_3,-\rangle\,. \end{align} In a final step we combine the four $2\times 2$ matrix Hamiltonians introduced above into two $4\times 4$ matrix super-Hamiltonians as follows: \begin{eqnarray} \label{superH} \mathcal{H}=\left(\begin{array}{cc} H_{[1]} & 0\\ 0 & H_{[0]} \end{array}\right)\,,\qquad \breve{\mathcal{H}}=\left(\begin{array}{cc} \breve{H}_{[1]} & 0\\ 0 & \breve{H}_{[0]} \end{array}\right)\,. \end{eqnarray} In the limit $\nu\rightarrow 0$ they turn into different versions of the Dirac oscillator in the nonrelativistic limit, see \cite{DiracOs1}.
Both operators commute with the $\mathbb Z_2$-grading operator $\Gamma=\sigma_3\otimes \mathbb{I}_{2\times 2}$, $[\Gamma,\mathcal{H}]=[\Gamma,\breve{\mathcal{H}}]=0$, and their difference is the (bosonic) integral of motion \begin{equation} \mathcal{R}=\frac{1}{2\omega}(\mathcal{H}-\breve{\mathcal{H}})= (\mbf{J}\cdot \,\mbfgr{\sigma} +\tfrac{3}{2})\Gamma-2\nu\sigma_r\Pi_-= \left(\begin{array}{cc} \,\mbfgr{\sigma}\cdot\mbf{J}+\frac{3}{2} & 0\\ 0&-(\,\mbfgr{\sigma}\cdot\mbf{J}+2\nu\sigma_r+\frac{3}{2}) \\ \end{array}\right), \label{rwisym} \end{equation} where $\Pi_-$ is a projector, \begin{equation}\label{Pi+-} \Pi_\pm=\tfrac{1}{2}(1\pm\Gamma)\,. \end{equation} In the fermionic sectors of the systems $\mathcal{H}$ and $\breve{\mathcal{H}}$ we have the nilpotent operators \begin{eqnarray} \label{QyW} & {\mathcal{Q}}=\left(\begin{array}{cc} 0 & \Theta\\ 0 & 0 \end{array}\right)\,,\qquad {\mathcal{W}}=\left(\begin{array}{cc} 0 & 0 \\ \Xi^\dagger & 0 \end{array}\right)\,,& \end{eqnarray} which satisfy $\{\Gamma,\mathcal{Q}\}=\{\Gamma,\mathcal{W}\} =0$, and their adjoint operators. The even integral $\mathcal{R}$ in (\ref{rwisym}) generates an $R$-symmetry for both systems. Bearing in mind that $\mathcal{H}$ and $\breve{\mathcal{H}}$ can be diagonalized simultaneously, from now on we treat $\mathcal{H}$ as the Hamiltonian of the super-extended system and $\breve{\mathcal{H}}=\mathcal{H}-2\omega\mathcal{R}$ as its integral. Then, by anti-commuting ${\mathcal{Q}}$ and ${\mathcal{W}}$ we obtain the bosonic generator \begin{equation} \mathcal{G}=\{\mathcal{W},\mathcal{Q}\}=\left(\begin{array}{cc} \mathcal{C} & 0\\ 0 & \tilde{\mathcal{C}} \end{array}\right)\,, \qquad [\Gamma,\mathcal{G}]=0\,, \end{equation} together with its adjoint. They are composed of the ladder operators of the sub-systems $H_{[1]}$ and $H_{[0]}$ of our system $\mathcal{H}$.
Taken together with the operators \begin{equation} \mathcal{K}_i= \left(\begin{array}{cc} K_i & 0\\ 0 & K_i \end{array}\right)\,, \qquad i=1,2,3\,, \end{equation} with respect to which they are scalars, these generators obey the $\mathfrak{osp}(2\vert 2)$ superalgebra mentioned in Chap. \ref{ChConformal}, Eqs. (\ref{Ospnil1})-(\ref{Ospnilf}), and therefore this construction may be considered as a generalization of the super-extended AFF model to three dimensions. The common eigenstates of $\mathcal{H}$, $\mathcal{R}$, $\Gamma$, $\mathcal{K}_3$ and $\mbfgr{\mathcal{K}}^2$ are given by \begin{eqnarray} \label{supervectors} \ket{n,k,k_3,\pm,1}=\left(\begin{array}{cc} \ket{n,k,k_3,\pm}\\ 0 \end{array}\right) \,,\qquad \ket{n,k,k_3,\pm,-1}=\left(\begin{array}{cc} 0\\ \Vert n,k,k_3,\pm\rangle \end{array}\right)\,, \end{eqnarray} which satisfy the eigenvalue equations \begin{align} \label{spectalEq} \mathcal{H}\ket{n,k,k_3,\pm,\gamma}&=2\omega\big(n+\tfrac{1}{2}(1+\gamma)+\beta_\pm(k+ \tfrac{1}{2}(1-\gamma))\big)\ket{n,k,k_3,\pm, \gamma }\,, \\ \Gamma \ket{n,k,k_3,\pm,\gamma}&= \gamma \ket{n,k,k_3,\pm,\gamma} \,,\qquad \gamma=\pm 1 \,, \\ \label{REq} \mathcal{R} \ket{n,k,k_3,\pm,\gamma}&= [\pm(k+\tfrac{1}{2})+\tfrac{\gamma}{2}] \ket{n,k,k_3,\pm,\gamma}\,, \\ \label{Konsusy} \mbfgr{\mathcal{K}}^2\ket{n,k,k_3,\pm,\gamma}&= k(k+1) \ket{n,k,k_3,\pm,\gamma}\,,\\ \mathcal{K}_3\ket{n,k,k_3,\pm,\gamma}&=k_3\ket{n,k,k_3,\pm,\gamma}\,. \end{align} The operators $\mathcal{Q}$ and $\mathcal{Q}^\dagger$ ($\mathcal{W}$ and $\mathcal{W}^\dagger$) defined in (\ref{QyW}) interchange the state vectors $\ket{n,k,k_3,\pm,\gamma}$ and $\ket{n,k,k_3,\pm,-\gamma}$ according to the rules in (\ref{qdonpsi}), (\ref{Qonphi+}) and (\ref{W+onpsi1}), (\ref{W+onphi2}).
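The spectral formula in (\ref{spectalEq}) can be cross-checked against the eigenvalues of the diagonal blocks: for $\gamma=+1$ it must reproduce $H_{[1]}=H_{+\omega}+\tfrac{3}{2}\omega$ acting via (\ref{Hket+}), and for $\gamma=-1$ the eigenvalues $2\omega(n+\beta_\pm(k+1))$ of $H_{[0]}$. A minimal numerical sketch (the function names are illustrative, not from the text):

```python
def energy_super(n, k, s, gamma, w=1.0):
    # Spectrum of the super-Hamiltonian: 2w(n + (1+gamma)/2 + beta_s(k + (1-gamma)/2)).
    beta = 1.0 if s == +1 else 0.0    # beta_+ = 1, beta_- = 0
    return 2 * w * (n + (1 + gamma) / 2 + beta * (k + (1 - gamma) / 2))

def energy_H1(n, k, s, w=1.0):
    # H_[1] = H_{+w} + 3w/2 acting on |n,k,k3,±>: w(2n + k + 1/2 ± k) + 3w/2.
    return w * (2 * n + k + 0.5 + s * k) + 1.5 * w

def energy_H0(n, k, s, w=1.0):
    # H_[0] eigenvalue 2w(n + beta_s (k+1)) on the spinors ||n,k,k3,±>.
    beta = 1.0 if s == +1 else 0.0
    return 2 * w * (n + beta * (k + 1))

# The gamma = +1 branch reproduces H_[1], the gamma = -1 branch H_[0].
for n in range(4):
    for k2 in range(1, 8):
        k = k2 / 2
        for s in (+1, -1):
            assert abs(energy_super(n, k, s, +1) - energy_H1(n, k, s)) < 1e-12
            assert abs(energy_super(n, k, s, -1) - energy_H0(n, k, s)) < 1e-12
```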
The ground states of $\mathcal{H}$ ($\breve{\mathcal{H}}$), which are given by $\ket{0,k,k_3,-,-1}$ ($\ket{0,k,k_3,+,+1}$), are invariant under the transformations generated by these fermionic operators; therefore the quantum system $\mathcal{H}$ exhibits unbroken $\mathcal{N}=2$ Poincar\'e supersymmetry. Finally, the spectrum-generating ladder operators for the supersymmetric system are given by the operators $\mathcal{G}$ and $\mathcal{G}^\dagger$ (associated with $n$), $\mathcal{K}_\pm$ that change $k_3$, and the nonlocal matrix operators \begin{eqnarray} \left( \begin{array}{cc} \mathcal{A}_\pm &0\\ 0 & \tilde{\mathcal{A}}_\pm \end{array}\right)\,,\qquad \left( \begin{array}{cc} \mathcal{A}_\pm^\dagger &0\\ 0 & \tilde{\mathcal{A}}_\pm^\dagger \end{array}\right)\,,\qquad \end{eqnarray} related to the angular quantum number $k$. \section{Dimensional reductions} \label{DimRed} The system studied in the last section and the one presented in Chap. \ref{ChConformal}, Sec. \ref{SecOSP22Conformal} share the same symmetry, and in this section we will show that they are related by a dimensional reduction. For the sake of simplicity we set $\omega = 1$ here and denote $x=\sqrt{\omega}\,r=r$. The first step is to note that the Hamiltonian $\mathcal{H}$ can be presented in the following form \begin{equation} \label{HintermsRK} \mathcal{H}=\frac{1}{2}\left[-\frac{1}{x^2}\frac{\partial}{\partial x}\left(x^2\frac{\partial}{\partial x}\right) + x^2\right]\mathbb{I}_{4\cross 4}+ \frac{1}{2x^2}(\mbfgr{\mathcal{K}}^2-\Gamma\mathcal{R}+\tfrac{3}{4})+\mathcal{R}\,.
\end{equation} Then, to perform the reduction we introduce the set of equations \begin{eqnarray} \label{reduction1} & (\mbfgr{\mathcal{K}}^2-k(k+1))\ket{\chi,\pm}=0\,,\qquad (\mathcal{K}_3-k_3)\ket{\chi,\pm}=0\,,\qquad &\\& \label{reduction2} \mathcal{P}_{\pm}\ket{\chi,\pm}=0\,,\qquad \mathcal{P}_{\pm}=\frac{1}{2k+1}(\Pi_\pm+k\mp \mathcal{R})\,, & \end{eqnarray} where $k=j\pm\tfrac{1}{2}$ and $k_3=j_3\pm\tfrac{1}{2}$. Here, the most general form of $\ket{\chi,\pm}$ is \begin{equation}\label{anbn} \ket{\chi,\pm}=\sum_{n=0}^{\infty}a_n^\pm\ket{n,k,k_3,\pm,1}+b_n^\pm \ket{n,k,k_3,\pm,-1}= \sum_{n=0}^{\infty} \left(\begin{array}{cc} a_n^\pm \ket{n,k,k_3,\pm}\\ b_n^\pm \Vert n,k,k_3,\pm\rangle \end{array}\right)\,, \end{equation} and, effectively, the operators $\mathcal{P}_{\pm}$ are projectors onto the orthogonal subspaces spanned by $\ket{\chi,-}$ and $\ket{\chi,+}$. These states satisfy \begin{equation} \mathcal{H}\ket{\chi,-}= \frac{1}{x}\mathcal{H}_j^{-}x\otimes\mathbb{I}_{2\cross 2}\ket{\chi,-}\,,\qquad \mathcal{H}\ket{\chi,+}= \sigma_1(\frac{1}{x}\mathcal{H}_{j+1}^{+}x)\sigma_1\otimes\mathbb{I}_{2\cross 2}\ket{\chi,+}\,, \end{equation} where $\mathcal{H}_j^{-}=\mathcal{H}_j^{e}$ and $\mathcal{H}_j^{+}=\mathcal{H}_j^{b}$ are the one-dimensional supersymmetric extensions of the AFF model in the exact and spontaneously broken phases, see Chap. \ref{ChConformal}, Sec. \ref{SecOSP22Conformal}.
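One can verify on eigenvalues that the operators $\mathcal{P}_{\pm}$ of (\ref{reduction2}) indeed act as complementary projectors: substituting $\Pi_{\pm}\to\tfrac{1}{2}(1\pm\gamma)$ and the $\mathcal{R}$ eigenvalue $\pm(k+\tfrac{1}{2})+\tfrac{\gamma}{2}$ from (\ref{REq}), $\mathcal{P}_{+}$ annihilates the $s=+$ states of $\ket{\chi,+}$ and is the identity on the $s=-$ states, and vice versa. A short sketch (helper names are illustrative, not from the text):

```python
def R_eigen(k, s, gamma):
    # Eigenvalue of R on |n,k,k3,s,gamma>, Eq. (REq): ±(k + 1/2) + gamma/2.
    return s * (k + 0.5) + gamma / 2

def P_reduction(k, s, gamma, sign):
    # Eigenvalue of P_sign = (Pi_sign + k -+ R)/(2k+1) on |n,k,k3,s,gamma>;
    # Pi_± = (1 ± Gamma)/2 contributes (1 ± gamma)/2.
    pi = (1 + sign * gamma) / 2
    return (pi + k - sign * R_eigen(k, s, gamma)) / (2 * k + 1)

# P_+ annihilates the s = + states entering |chi,+> and acts as the
# identity on the s = - states; for P_- the roles are exchanged.
for k2 in range(1, 8):
    k = k2 / 2
    for gamma in (+1, -1):
        assert P_reduction(k, +1, gamma, +1) == 0.0
        assert P_reduction(k, -1, gamma, +1) == 1.0
        assert P_reduction(k, -1, gamma, -1) == 0.0
        assert P_reduction(k, +1, gamma, -1) == 1.0
```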
Moreover, if we denote by $\mathcal{B}_{a}$ and $\mathcal{F}_{b}$ the bosonic and fermionic generators of the three-dimensional system, respectively (where, for instance, $\mathcal{B}_{1}$ is the Hamiltonian), and in the same vein by $\mathscr{B}_{j,a}^{\pm}$ and $\mathscr{F}_{j,b}^{\pm}$ their analogs for the one-dimensional system in the respective supersymmetric phases, we get \begin{eqnarray} & \mathcal{B}_{a}\ket{\chi,-}=\frac{1}{x}\mathscr{B}_{j,a}^{-}x\otimes\mathbb{I}_{2\cross 2}\ket{\chi,-}\,,\qquad \mathcal{F}_{b}\ket{\chi,-}=\frac{1}{x}\mathscr{F}_{j,b}^{-}x\otimes\sigma_r\ket{\chi,-}\,, & \\ & \mathcal{B}_{a}\ket{\chi,+}=\sigma_1(\frac{1}{x}\mathscr{B}_{j+1,a}^{+}x)\sigma_1\otimes\mathbb{I}_{2\cross 2}\ket{\chi,+}, \quad \mathcal{F}_{b}\ket{\chi,+}=\sigma_1(\frac{1}{x}\mathscr{F}_{j,b}^{+}x)\sigma_1\otimes\sigma_r\ket{\chi,+}.\quad & \end{eqnarray} In these equations the generators take the form of a direct product of two matrix operators: in the case of bosonic (fermionic) operators one has $x^{-1}Bx\otimes \mathbb{I}_{2\cross2}$ ($x^{-1}Fx\otimes \sigma_r$), where $B$ ($F$) is a particular bosonic (fermionic) operator of the one-dimensional AFF model in its corresponding supersymmetric phase. Note that in the odd sector we still have angular dependence due to $\sigma_r$. To complete the reduction we introduce the operators \begin{equation} \mathcal{O}_{\pm}=\left(\begin{array}{cc} \ket{v}\bra{k,k_3,\pm} & 0\\ 0 & \ket{v}\bra{k,k_3,\pm}\sigma_r\ \end{array}\right)\,,\qquad \ket{v}=\left(\begin{array}{c} 1\\ 1 \end{array}\right), \end{equation} and their adjoints, as well as the unitary operator \begin{equation} U=\left(\begin{array}{cccc} 1 & 0 & 0 &0 \\ 0 & 0 & 1 & 0\\ 0 & 1&0 &0 \\ 0 & 0 & 0 & 1 \end{array}\right)\,,\qquad UU^\dagger=1\,,\qquad \text{det}\, U=-1\,.
\end{equation} The operators $\mathcal{O}_{\pm}$ effectively integrate out the angular variables, so the bosonic generators do not change, but the fermionic generators are transformed into \begin{eqnarray} & \mathcal{O}_{-}\mathcal{F}_{b}\mathcal{O}_{-}^\dagger\ket{\Psi,-}= \frac{1}{x}\mathscr{F}_{j,b}^{-}x\otimes\sigma_1\,\ket{\Psi,-}\,, &\\& \mathcal{O}_{+}\mathcal{F}_{b}\mathcal{O}_{+}^\dagger\ket{\Psi,+}= \sigma_1(\frac{1}{x}\mathscr{F}_{j+1,b}^{+}x)\sigma_1\otimes\sigma_1\,\ket{\Psi,+}\,, & \end{eqnarray} where $\mathcal{O}_\pm\ket{\chi,\pm}=\ket{\Psi,\pm}$. On the other hand, by means of the unitary transformation produced by $U$, we are able to present the bosonic and fermionic generators, already transformed by $\mathcal{O}_\pm$, in the form $\mathbb{I}_{2\cross2} \otimes x^{-1} Bx$ and $\sigma_1 \otimes x^{-1} F x$, respectively. From these expressions one simply extracts the one-dimensional generators by means of the projectors $\Pi_\pm$, and it is also easy to show that the objects $\Pi_\pm U\ket{\Psi, \pm}$ take the form of the eigenstates of the supersymmetric AFF model divided by $x$. In summary, we have two schemes of dimensional reduction, each made up of a projection onto a subspace with fixed $k$, the integration over the remaining angular variables, and a unitary transformation. Let us denote these two schemes by $\delta_{\pm}=\{\mathcal{P_\pm},\mathcal{O}_\pm, U\}$. Then, by applying the scheme $\delta_-$ ($\delta_+$) to our three-dimensional $\mathcal{N}=2$ $\mathfrak{osp}(2|2)$ superconformal system we obtain a super-extension of the AFF model in the exact (spontaneously broken) supersymmetric phase, and there is a one-to-one correspondence between the bosonic and fermionic generators of the three-dimensional model and those of the one-dimensional model. \section{Remarks} We end this chapter with a comment related to supersymmetry and the Dirac Hamiltonian.
From the nilpotent operators $\mathcal{Q}^\pm$ given in (\ref{QyW}) a Hermitian supercharge can be constructed, which has the form \begin{equation} \label{PB-Dirac Op} \mathcal{Q}_{0}=-\sqrt{2}(\mathcal{Q}^++\mathcal{Q}^-)=\gamma^{i}(p_i-e\mathscr{A}_i)+e\gamma^{0}\mathscr{A}_0\,, \end{equation} where $ \mathscr{A}_{0}=\frac{g}{r}\,,$ $\mathscr{A}_{i}=A_{i}-i\frac{\omega}{e}\gamma^{5}\,r_i\,, $ and $\gamma^{5}=\Gamma$ is our grading operator from Sec. \ref{osp22 extension}. The operator (\ref{PB-Dirac Op}) can then be viewed as a parity-breaking Euclidean Dirac operator with components of the gauge potential satisfying the relations $-\partial_i \mathscr{A}_{0}=\epsilon_{ijk}\partial_{j}\mathscr{A}_{k} =gr_i/r^3$. Hence we are dealing with a new type of parity-breaking dyon background. Actually, the $\gamma^{5}$ terms do not allow for an $\mathcal{N}=4$ supersymmetric extension and we only have $\mathcal{N}=2$ supersymmetry, with the second supercharge given by $i\sqrt{2}(\mathcal{Q}^+-\mathcal{Q}^-)=i\gamma^5\mathcal{Q}_{0}$. It is interesting that parity-breaking Dirac operators are related in this way to supersymmetric quantum mechanics. In this context it is not clear whether a (pseudo)classical supersymmetric system exists whose quantization would produce our three-dimensional superconformal system, or whether we have here a kind of classical anomaly \cite{clasanomaly}. Also, the fact that the ground state is infinitely degenerate may be due to this parity-breaking term. \chapter*{Conclusions and Outlook} \label{Conclusion} \addcontentsline{toc}{chapter}{Conclusions} In conclusion, we recall the problems \emph{a) to d)} that were originally listed in the introduction, now in the light of the obtained results. This will also allow us to point out interesting problems for further research.
\underline{\emph{a) Connection between different mechanical systems through symmetries}} \\ We addressed the problem of establishing a mapping between the two forms of dynamics (in the sense of Dirac \cite{Dirac}) associated with the conformal algebra. The indicated mapping is the conformal bridge transformation introduced in \cite{InzPlyWipf1} (Chap. \ref{ChBridge}), which relates an asymptotically free system with a harmonically confined one. The transformation maps rank-$n$ Jordan states of zero energy (and eigenstates) of the first system to eigenstates (coherent states) of the second. The conformal bridge also maps symmetry generators from one system to the other. Owing to its general nature, this mapping provides a new approach to the study of higher-dimensional (in the sense of degrees of freedom) conformally invariant systems, such as the Calogero model \cite{Calogero1,Calogero2}. Actually, we have already shown its applicability for the Landau problem analyzed in Chap. \ref{ChBridge}, as well as for the monopole background model in Chap. \ref{Chapmono1}. A fairly natural question is whether there is an analog transformation at the level of supersymmetric quantum mechanics, in such a way that fermionic integrals of motion could be included in this mapping. There could also be some relationship between this transformation and the Riemann hypothesis, since Hamiltonians of the form $xp$ have been used in this direction \cite{Connes,Berry,Regniers,Sierra,Bender2017}. \underline{\emph{b) Hidden and bosonized supersymmetry}}\\ We wanted to establish the origin of the hidden bosonized superconformal symmetry of the harmonic oscillator in one dimension \cite{Hiden1,BalSchBar,CarPly2,Hiden3}. It was shown that such a bosonized supersymmetry originates from a nontrivial supersymmetric system, via the nonlocal Foldy-Wouthuysen transformation \cite{InzPly1} (Chap.
\ref{ChHiddenboson}). The only true fermionic integrals of that system are the trivial Pauli matrices, and the other operators are dynamical integrals, in the sense of the total Heisenberg equation. In contrast to the usual super-harmonic oscillator, the system has spontaneously broken supersymmetry. We explained the nature of this system through a confluent Darboux transformation and within the scheme of anomaly-free quantization for second-order supersymmetry \cite{PlyPara,KliPly,Plyushchay}. The question of what happens in higher-dimensional cases remains open; however, we think that the conformal bridge transformation could provide an answer. \underline{\emph{c) Hidden symmetries in rationally extended conformal mechanics}}\\ The objective was to find the spectrum-generating ladder operators for rational deformations of the AFF model and its supersymmetric extensions. We have used the DCKA transformation to produce a rational extension of the AFF model. The nature of the resulting Hamiltonians depends on the choice of the seed states: we can produce isospectral and non-isospectral rational deformations that have an arbitrary number of gaps of different sizes in their spectra. Starting from the harmonic oscillator \cite{CarInzPly} (Chap. \ref{ChRQHO}), we implemented an algorithmic procedure that takes a set of seed states for the DCKA transformation (they may be physical or nonphysical, but not a mixture) and produces a new set of seed states of a different nature. Both Darboux schemes essentially generate the same system, up to an additive constant. This is what we called a Darboux duality for the harmonic oscillator, and we have used it to construct the spectrum-generating ladder operators for rational deformations of the AFF model with potential $x^2 + m(m+1)/x^2$, where $m = 0,1,\ldots$.
These ladder operators fall into three categories: operators of type $\mathcal{A}$, which act irreducibly on the equidistant part of the spectrum but annihilate all separated states; operators of type $\mathcal{B}$, which act similarly to $\mathcal{A}$ on the equidistant part of the spectrum but annihilate only the upper (raising operator) and lower (lowering operator) states in each separated band; and finally operators of type $\mathcal{C}$, which connect the separated part of the spectrum with its equidistant part. These results are analogous to what was obtained for rational extensions of the harmonic oscillator in \cite{CarPly}. This phenomenon, in which different possible choices of Darboux schemes produce the same system, also appears in the context of deformations of the free particle, specifically in the construction of the so-called reflectionless potentials; see \cite{MatSal} for a background on the subject. The main difference between these systems and the rational deformations of the harmonic oscillator (as well as deformations of the AFF model) is that there the Darboux schemes produce the same potential without any additive constant. This implies that the Darboux dressing procedure there provides the true integrals of motion, which are the so-called Lax-Novikov integrals; see \cite{Boson3,Arancibia,Arancibia2,Arancibia2014,plyushchay2020exotic} for more information. The next step was to study the complete nonlinear supersymmetry that characterizes the rational super-extensions of the AFF model and the harmonic oscillator \cite{InzPly2} (Chap. \ref{ChNonLinearSUSY}). By means of a set of algebraic relations, we have obtained a large chain of new higher-order dynamical integrals that act irreducibly in the system, in a way similar to how powers of the first-order ladder operators act in the case of the simplest harmonic oscillator.
We stopped the generation of integrals when we realized that certain objects can be written in terms of more basic elements; otherwise one would obtain an infinite-dimensional algebra of the $W$ type, see \cite{deBoer} and references therein. With the fermionic generators we have a similar picture. Despite having so many new operators, which we cannot avoid because they arise from the commutation relations between operators of the types $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$, the role they play is not clear, since the spectrum-generating set was already built. Perhaps there is a more basic structure behind this construction, hidden in the virtual systems produced by the Darboux chain, but this is still an open question. In this context, another interesting problem to investigate is whether these higher-order generators can be obtained by means of a quantization prescription of a pseudo-classical system; however, one must bear in mind that higher-order supersymmetry presents a quantum anomaly \cite{KliPly,Plyushchay}. In \cite{InzPly3} (Chap. \ref{ChKlein}) we extended the Darboux duality to the case of the AFF model with potential $x^2 + \nu(\nu+1)/x^2$, where $\nu\geq-1/2$. This is possible due to the Klein four-group associated with the Schr\"odinger equation of the model. Having the Darboux duality for this system allows us to extend the notion of the three classes of ladder operators described above to any possible deformation of the AFF model. We have not considered spectrum-generating algebras and supersymmetric extensions for these cases, so this remains an open problem. Within all this, the cases in which $\nu$ is a half-integer are really special: when this happens, the confluent Darboux transformation is involved in some of the recipes for constructing rationally extended potentials, and some rational extensions undergo significant structural changes.
Such changes are reflected both in the available energy levels and in the number of physical states, as well as in the kernels of the spectrum-generating ladder operators, where now nonphysical states and Jordan states appear. On the other hand, systems very similar to these, but without the harmonic term, have appeared in a completely different context, through the so-called $\mathcal{PT}$ regularization \cite{Correa2016,JM1,JM2}. These models are intimately related to the Korteweg-de Vries equation due to the Lax pair formalism \cite{MatSal} and help to provide new types of solutions. It would be interesting to clarify whether there is a generalization of the conformal bridge for deformed systems, which could provide new knowledge related to integrable models. \underline{\emph{d) Hidden symmetries in three-dimensional conformal mechanics}}\\ For this problem, we have considered a particle with electric charge $e$ in a Dirac monopole background, i.e., a $U(1)$ external vector potential $\mbf{A}$, the curl of which gives the spherically symmetric magnetic field produced by a monopole source with charge $g$; see details in \cite{Sakurai,Mcin,Vinet,InzPlyWipf1} and in the references cited there. The particle was also subjected to a central potential of the form $V(\mbf{r}) = \frac{\alpha}{2m \mbf{r}^2} +\frac{m \omega \mbf{r}^2}{2}$. We investigated the possibility of obtaining hidden integrals of motion for this system, and we also looked for a possible supersymmetric extension of this model. It was found that the system has hidden symmetries when $\alpha=(eg)^2$. At the classical level they control the periodic nature of the trajectory, while in the quantum case these integrals reveal the nature of the spectrum degeneracy of the system.
To construct the hidden integrals at the classical level, we used the fact that the projection of the particle's trajectory onto the plane orthogonal to the Poincar\'e vector integral (the modified angular momentum of the system) is analogous to the orbit of the three-dimensional harmonic oscillator. Actually, we demonstrated that this is a universal property of this background, i.e., if we replace the harmonic trap by an arbitrary central potential, the dynamics in the mentioned plane will be the same as would occur in the absence of the monopole charge. It is also necessary to emphasize that the system has the $\mathfrak{sl}(2, \mathbb R)$ symmetry and is connected with an $\mathfrak{so}(2,1)$ invariant system previously analyzed in \cite{PlyWipf} by means of the conformal bridge transformation. This provides another way to obtain the integrals of the hidden symmetries. Inspired by the so-called ``Dirac oscillator'' proposed in \cite{DiracOs1, DiracOs2, DiracOs3}, we introduced a special spin-orbit coupling term into the Hamiltonian of our system in the monopole background (Chap. \ref{Chapmono2}), and this naturally led us to the construction of a supersymmetric extension. The resulting model is a three-dimensional realization of the $\mathfrak{osp}(2|2)$ superconformal symmetry, and some of its interesting properties appear in the following list: \begin{itemize} \item In the limit $\nu\rightarrow 0$, the Hamiltonian of our system takes the form $$ \mathcal{H}_{\text{DO}}= \frac{1}{2}\left(\mbf{p}^2+\omega^2\mbf{r}^2\right)\mathbb{I}_{4\cross 4}+\omega\Gamma( \,\mbfgr{\sigma}\cdot \mbf{L} + \frac{3}{2})\,,\qquad \Gamma=\sigma_3\otimes \mathbb{I}_{2\times 2}\,, $$ which is identified with the mentioned Dirac oscillator Hamiltonian in the non-relativistic limit.
\item In the limit $\omega\rightarrow 0$, our Hamiltonian operator is transformed into $$ \mathcal{H}_{\text{dyon}}= \frac{1}{2}\left((\mbf{p}-e\mbf{A})^2+\frac{\nu^2}{r^2}\right)\mathbb{I}_{4\times 4}+\frac{\nu}{r^2}\,\mbfgr{\sigma}\cdot\mbf{r} \Pi_-\,,\qquad \Pi_\pm=\frac{1}{2}(1\pm \Gamma)\,, $$ which is interpreted as the Pauli Hamiltonian of a supersymmetric dyon $(c=1)$ \textcolor{red}{[\cite{PlyWipf}]}. This system has the exceptional superconformal symmetry $ D (2; 1, \alpha) $ with $ \alpha = 1/2 $, which is larger than the $ \mathfrak{osp} (2 | 2) $ superalgebra, so we believe that some important structures are still missing in our construction. \item The system has two classes of energy levels organized in two independent towers. The eigenvalues associated with one of these towers are infinitely degenerate, while the energies in the other tower have finite degeneracy. \item Through the application of two different dimensional reduction schemes, the system is transformed into the super-extended AFF model. One scheme gives us the extended system in the spontaneously broken supersymmetric phase, while the other scheme produces the system in the exact supersymmetric phase. \end{itemize} This type of system opens an interesting line of research, which consists in exploring the supersymmetric structure of a Dirac Hamiltonian that breaks parity symmetry (since the Hermitian supercharges of our model can be interpreted in this way), and searching for applications for systems with infinitely degenerate ground energy.
\section{Introduction} Transverse single-spin asymmetries (TSSAs) have been puzzling physicists for more than three decades. They are among the most intriguing observables in hadronic physics since the first Fermilab measurements of the $p+\mathrm{Be} \to \Lambda^{\uparrow}+X$ reaction\cite{Bunce:1976yb}. Since then, TSSAs have been observed in many different reactions, including meson production in $pp$ collisions and in SIDIS. The experimental results contradict the predictions of perturbative Quantum Chromodynamics (pQCD) combined with the naive collinear parton model, in which the asymmetries were expected to be extremely small\cite{Kane:1978nd}. For a comprehensive introduction to the problem we refer the reader to the review \cite{DAlesio:2007bjf}. In this paper we focus on the transverse single-spin asymmetry for pion production in nucleon-nucleon scattering. It is often called the analyzing power and denoted by $A_N$. Such measurements were performed at Fermilab by the E581/E704 Collaborations\cite{Adams:1991cs}. Later, similar measurements at higher energy were performed at RHIC\cite{Adams:2006uz}. Unambiguous effects were measured, and they triggered renewed interest in TSSAs. A popular approach to describing the observed spin effects is based on an extension of the collinear parton model that includes the partons' transverse motion. It utilizes the Transverse Momentum Dependent (TMD) factorization scheme. However, the factorization theorem has not been proven in general for this case\cite{Rogers:2010dm}. It has so far only been proven for some classes of processes: Drell-Yan $(q + \bar{q} \to l^{+} + l^{-})$\cite{Collins:1984kg} and semi-inclusive DIS\cite{Collins:1981va}. The $k_T$-dependent factorization is, therefore, an assumption, although a well-accepted one. Efforts are ongoing to establish the theoretical basis more firmly.
We refer the reader to papers discussing universality \cite{Collins:2002kn,Metz:2002iz,Bomhof:2004aw,Boer:2003cm} and the evolution of TMD parton distributions\cite{Henneman:2001ev,Kundu:2001pk}. Moreover, the dominance of $k_T$ effects over other contributions is disputed. For example, effects of parton virtuality and target-mass corrections could be of the same order of magnitude as the transverse parton motion\cite{Moffat:2017sha}. Two mechanisms for TSSAs have been proposed in the framework of the non-collinear parton model. The first is the Collins mechanism, in which the transversity distribution in combination with a spin-dependent, chiral-odd Fragmentation Function (FF) can give rise to a TSSA\cite{Collins:1992kk}. The Collins FF describes the azimuthal asymmetry of a fragmented hadron with respect to the struck quark polarization. The works \cite{Anselmino:2012rq,Ma:2004tr} suggested that it is difficult to explain the large TSSAs entirely in terms of the Collins effect. The second mechanism was suggested by Sivers\cite{Sivers:1989cc}. The idea is that the parton distributions are asymmetric in the intrinsic transverse momentum $k_T$ within the proton. The Sivers effect can exist both for quarks and gluons. This intrinsic asymmetry is represented by the Sivers function of the unpolarized partons in a transversely polarized proton. Calculations based on the Sivers effect for the E704 data and other results can be found in \cite{Anselmino:2013rya,Anselmino:1994tv}. Another direction of investigation is the twist-3 approach. It was pointed out that three-parton correlators may give rise to TSSAs\cite{Efremov:1981sh}. Qiu and Sterman examined higher-twist contributions due to the interference between quark and gluon fields in the initial polarized proton\cite{Qiu:1998ia}. A similar study was performed by Kanazawa and Koike for quark-gluon interference in the final state\cite{Kanazawa:2000hz}.
In the present paper we propose an alternative mechanism for the TSSA in $pp\to\pi X$, based on the existence of a novel effective interaction induced by instantons. Instantons describe sub-barrier transitions between classical QCD vacua with different topological charges. In previous work\cite{Kochelev:2013zoa} we calculated the TSSA for quark-quark scattering and showed that such a mechanism gives a significant TSSA. However, the generalization of that result to the case of real hadron scattering is not straightforward. A calculation in the standard, pQCD-like way with the introduction of fragmentation functions is not self-consistent: the extraction of FFs requires evolution equations and was done in the framework of pQCD without considering an additional non-perturbative low-energy interaction. The new vertex may give a significant contribution to the evolution\cite{Kochelev:2015pqd}. Reanalyzing the data with the new vertex and a modified evolution would not give new information, since we would introduce more parameters. Fortunately, the low-energy effective interaction generated by instantons provides another solution. It contains a pion-quark-gluon vertex. In this case, we do not need any fragmentation function and, as a result, we reduce the number of parameters in the model. The formation of the pion happens at short distances of order the instanton size $\approx 0.3$~fm, which is smaller than the distances of confinement dynamics. The other important consequence is the breaking of pQCD factorization: the scattering of partons and hadronization are coherent at the instanton scale. This might be a cornerstone of various phenomena observed in high-energy reactions with transferred momenta in the few-GeV range. This paper has the following structure. Section \ref{section:vertex} gives a brief introduction to the instanton-generated interaction. In section \ref{section:Xsec} we discuss the calculation of the pion production cross section and then, in section \ref{section:ssa}, calculate the TSSA.
Section \ref{section:discussion} is dedicated to the numerical analysis and discussion. \section{Instanton generated interaction}\label{section:vertex} Our calculation of the TSSA is based on the presence of an intrinsic spin flip during the quark-gluon interaction already at the quark level. The generating functional for such a non-perturbative interaction was obtained previously\cite{Kochelev:1996pv}. Later it was generalized in order to preserve chiral invariance\cite{Diakonov:2002fq}. The generalized interaction Lagrangian has the form \begin{equation}\label{eq:Lagrangian} \mathcal{L}_I=-i g_s \frac{\mu_a}{4 m_q} \bar{\psi} \, t^a [\sigma_{\mu\nu} e^{i\gamma_5 \vec{\tau}\cdot \vec{\phi}/F_{\pi}}] \psi \, G^a_{\mu\nu}, \end{equation} where $g_s$ is the strong coupling constant, $\mu_a$ is the anomalous quark chromomagnetic moment (AQCM), $m_q$ is the constituent quark mass, $t^a$ are the $SU(3)$ color matrices, and $\sigma_{\mu \nu}=\frac{1}{2}[\gamma_\mu , \gamma_\nu]$. $\vec{\tau}$ are the Pauli matrices acting in flavor space, $\vec{\phi}$ is the pion field, $F_\pi=93$ MeV is the pion decay constant, and $G^a_{\mu\nu}$ is the gluon field strength. This effective interaction is obtained by expanding the 't~Hooft interaction in a power series in the gluon field strength, assuming a large spatial size of the gluon fluctuations. Based on the Lagrangian (\ref{eq:Lagrangian}), the full interaction vertex is \begin{equation}\label{eq:int_vertex} U_\mu^a = i g_s t^a \left( \gamma_\mu - \sigma_{\mu \nu} q_\nu F(k_1,k_2,q) e^{i \gamma_5 \vec{\tau} \cdot \vec{\phi} / F_{\pi} } \right). \end{equation} The first term $\gamma_\mu$ corresponds to the usual pQCD interaction. The second term comes from the effective low-energy action, Eq.~(\ref{eq:Lagrangian}). Here $k_{1,2}$ are the momenta of the incoming and outgoing quarks, and $q=k_2-k_1$.
The form factor $F$ is calculated in the instanton liquid model\cite{Schafer:1996wv,Diakonov:2002fq}: \begin{equation} \begin{split} F(k_1,k_2,q)&= \frac{\mu_a}{2m_q} \Phi_q \left(\frac{|k_1|\rho_c}{2} \right) \Phi_q \left(\frac{|k_2|\rho_c}{2} \right) F_g(|q|\rho_c), \\ \Phi_q(z)&=-z\frac{d}{dz}(I_0(z)K_0(z)-I_1(z)K_1(z)), \\ F_g(z)&=\frac{4}{z^2}-2K_2(z), \end{split} \end{equation} where $I_{\nu}(z)$ and $K_{\nu}(z)$ are the modified Bessel functions and $\rho_c \approx 1.67$ GeV$^{-1}$ ($1/3$~fm) is the average instanton size. In our calculations all quarks are on mass shell, therefore $\Phi_q=1$ and we omit it in what follows. The AQCM $\mu_a$, calculated in the framework of the instanton liquid model\cite{Kochelev:1996pv}, is \begin{equation}\label{eq:AQCM_value_inst_model} \mu_a=-\frac{3\pi (m_q\rho_c)^2}{4\alpha_s(\rho_c)}. \end{equation} In pQCD the AQCM appears only through higher-order $\alpha_s$ corrections and therefore has a small value, $\mu_{pQCD}=\alpha_s/2\pi \approx 10^{-2}$. In contrast, the instanton-generated AQCM is of order $1$. Moreover, the instanton liquid model fixes the sign of the AQCM, which in turn determines the sign of the observed TSSA. Eq.~(\ref{eq:AQCM_value_inst_model}) is obtained in the massless chiral limit. One should not be confused by the fact that $\mu_a$ increases with the quark mass: $m_q$ is the constituent mass, and this equation cannot be applied to the heavy $c$, $b$ and $t$ quarks. If we expand the exponent in Eq.~(\ref{eq:int_vertex}) into a series and truncate it at the second term, we get three types of vertices: the traditional perturbative one, the chromomagnetic one, and the vertex with a pion, \begin{equation}\label{eq:int_vertex_expanded} \begin{split} U_\mu^a = i g_s t^a \Bigg( &\gamma_\mu - \sigma_{\mu \nu} q_\nu F(k_1,k_2,q) \\ & - i \frac{\vec{\tau} \vec{\phi}}{F_{\pi}} \gamma_5 \sigma_{\mu \nu} q_\nu F(k_1,k_2,q) \Bigg). \end{split} \end{equation} We neglect the higher-order terms.
Their contribution to the cross section is expected to be suppressed in the large $N_c$ limit by a factor $1/N_c$, because $F_{\pi}\sim \sqrt{N_c}$\cite{Witten:1979kh}. Moreover, due to the increasing number of final particles, it should be suppressed at large $x_F$, where the TSSA is observed. \section{Calculation of cross section}\label{section:Xsec} We are interested in the process $p^\uparrow p\to\pi X$. Three parton subprocesses contribute to the cross section; they are shown in Fig.~\ref{fig:Xsection_contributions}. The diagram (a) was calculated before in \cite{Kochelev:2015pha} under the assumption that the pion is produced in the same kinematic region as the quark $q_{+}$, i.e., the pion and the quark fly in approximately the same direction. In the present work we implement a more rigorous treatment of the phase space and calculate the additional contributions shown in panels (b) and (c) of Fig.~\ref{fig:Xsection_contributions}. The contribution (b) has the chromomagnetic vertex on the bottom quark line instead of the perturbative one. The diagram (c), $q+q\to 2\pi+2q$, is essentially different: in our model the pion appears directly in the interaction vertex, so we should also consider the process where a pion is part of the unobserved inclusive state $X$. As a first step we study the partonic cross section and its features. Then we calculate the hadronic cross section as a convolution of the partonic one with the parton densities. \begin{figure} \centering \includegraphics[width=0.9 \columnwidth]{Xsection_contributions_quarks.pdf} \caption{Contributions to the pion production cross section and notation for momenta. The small dot denotes the perturbative vertex. The white blob denotes the instanton-induced interaction; it corresponds to the second term in Eq.~(\ref{eq:int_vertex_expanded}).
The shaded blob corresponds to the last term in Eq.~(\ref{eq:int_vertex_expanded}), the one with the pion.} \label{fig:Xsection_contributions} \end{figure} \subsection{Parton cross section} In the massless limit the parton cross section is \begin{equation}\label{eq:parton_Xsec} d\hat{\sigma}=\frac{|\mathcal{M}|^2}{2\hat{s}}d\hat{R}_i, \end{equation} where $d\hat{R}_i$ is the phase space for $i$ particles; in our case it can be the three-particle $d\hat{R}_3$ or the four-particle $d\hat{R}_4$. We use the hat symbol to emphasize that the phase space is expressed in terms of momenta and energies calculated in the \emph{parton} c.m. frame, which moves with respect to the hadron c.m. frame. $\hat{s}$ is the total energy squared of the colliding partons. In the calculation we use the following Sudakov decomposition of the momentum vectors: \begin{equation}\label{eq:momenta_decomposition} \begin{split} k &= x p_+ + \beta_k p_- + k_\perp, \\ q_+ &= \alpha_+ p_+ +\beta_+ p_- + q_{+ \perp}, \\ q_- &= \alpha_- p_+ +\beta_- p_- + q_{- \perp}, \\ q &= \alpha p_+ +\beta p_- + q_{\perp}, \\ l &= \alpha_l p_+ + z p_- + l_\perp. \end{split} \end{equation} $x$ and $z$ are the fractions of the longitudinal momenta of the initial quarks carried by the pions with momenta $k$ and $l$, respectively. $p_+$ and $p_-$ are light-cone vectors: \begin{align} p_+&=(\sqrt{\hat{s}}/2,\sqrt{\hat{s}}/2,0_{\perp}),& \quad p_- &= (\sqrt{\hat{s}}/2,-\sqrt{\hat{s}}/2,0_{\perp}), \nonumber \\ \hat{s} &= (p_+ + p_-)^2,& \quad p_+^2 &= p_-^2=0. \end{align} Using this momentum decomposition, the phase space $d\hat{R}_3$ becomes \begin{align} d\hat{R}_3 = \frac{1}{4(2\pi)^5} \frac{dx \, d^2\!k_{\perp} \, d^2\!q_{\perp}}{x (1-x) \hat{s}}. \end{align} Integration over the transverse transferred momentum $q_{\perp}$ can be transformed into integration over the invariant mass $M_k^2=(k+q_{+})^2$: \begin{equation}\label{eq:dR3} d\hat{R}_3= \frac{1}{2^7 \pi^5} \frac{dx \, d^2\!k_{\perp}}{x^2 \hat{s}}\int_0^{E^2_{\text{sph}}} \!\!\!\!\!\!dM_k^2 \int_0^\pi \!\!\!\!\!
d\tilde{\phi}, \end{equation} where $E_{\rm sph}$ is the sphaleron energy and $\tilde{\phi}$ is the azimuthal angle of the auxiliary vector $\tilde{q}_\perp=x q_\perp - k_\perp$ (see Appendix~\ref{appendix:PS} for details). The sphaleron energy $E_{\rm sph} = \frac{3\pi}{2\rho_c}$\cite{Zahed:2002sy,Diakonov:2002fq} determines the height of the potential barrier between different topological vacua. The instanton describes tunneling through that barrier; therefore the instanton-induced vertex works only at energies below the height of the barrier. For the diagram of Fig.~\ref{fig:Xsection_contributions}(c) we need the 4-particle phase space. Using the Sudakov decomposition (\ref{eq:momenta_decomposition}), it is \begin{equation} d\hat{R}_4 = \frac{dx dz d^2 \! k_\perp \, d^2 \! l_\perp \, d^2\! q_\perp}{8 (2\pi)^8 \hat{s} x (1-x) z (1-z)}. \end{equation} Similarly to $d\hat{R}_3$, we change the integration over transverse momenta into integration over the invariant mass $M_{l}^2=(l+q_{-})^2$, $d^2l_\perp \to d M_{l}^2 d\phi$. Notice that here we replace $d^2l_\perp$, not $d^2q_\perp$: \begin{equation} d\hat{R}_4 = \frac{dx dz d^2 \! k_\perp \, d^2\! q_\perp \, dM^2_{l}}{2^{11} \pi^7 \hat{s} x (1-x)}. \end{equation} The next step is the calculation of the transition amplitudes $\mathcal{M}_{(a,b,c)}$, where the letters correspond to the panels of Fig.~\ref{fig:Xsection_contributions}. The amplitude for the first diagram, Fig.~\ref{fig:Xsection_contributions}(a), is \begin{equation} \begin{split} |\mathcal{M}_{(a)}|^2 =& \sum_{f,s,c} g_s \frac{C(q^2)}{F_\pi} (\bar{u}_{q_+} \sigma_{\mu \lambda} q_\lambda \gamma_5 t^a u_{p_+}) \\ & \times (\bar{u}_{q_-} i \gamma_\nu t^{a'} u_{p_-}) D_{\mu\nu}^{aa'}(q^2) \times \Big[ h.c. \Big], \end{split} \end{equation} where $ \sum_{f,s,c}$ is shorthand for averaging over spin and color and summation over flavor for the corresponding pion ($\pi^0$, $\pi^{\pm}$). $D_{\mu\nu}^{aa'}(q^2)=-i g_{\mu\nu}\delta^{aa'}/q^2$ is the gluon propagator.
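As a standalone numerical cross-check (an illustration, not part of the original derivation), the normalizations implicit in the on-shell simplification $\Phi_q=1$ and in the pointlike limit $F_g(0)=1$ of the instanton form factors defined above can be verified directly from the integral representations of the modified Bessel functions, using only the Python standard library:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def bessel_i(n, z):
    # I_n(z) = (1/pi) * int_0^pi exp(z cos t) cos(n t) dt  (integer n)
    return simpson(lambda t: math.exp(z * math.cos(t)) * math.cos(n * t),
                   0.0, math.pi, 2000) / math.pi

def bessel_k(n, z):
    # K_n(z) = int_0^inf exp(-z cosh t) cosh(n t) dt, tail truncated at t = 14
    return simpson(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(n * t),
                   0.0, 14.0, 20000)

def phi_q(z, h=1e-3):
    # quark form factor: Phi_q(z) = -z d/dz [ I0(z)K0(z) - I1(z)K1(z) ]
    g = lambda x: (bessel_i(0, x) * bessel_k(0, x)
                   - bessel_i(1, x) * bessel_k(1, x))
    return -z * (g(z + h) - g(z - h)) / (2.0 * h)

def f_g(z):
    # gluon form factor: F_g(z) = 4/z^2 - 2 K_2(z)
    return 4.0 / z ** 2 - 2.0 * bessel_k(2, z)
```

For small arguments both form factors approach unity, and $F_g$ falls off monotonically at larger momenta, which is what drives the suppression of the chromomagnetic vertex at high $|q|\rho_c$.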
$C(q^2)$ can be thought of as an effective coupling: \begin{equation} C(q^2) = g_s \frac{\mu_a}{2 m_q} F_g(q^2) = - \frac{3 \pi^{3/2} \rho_c^2 m_q}{4 \sqrt{\alpha_s(\rho_c)}} F_g(q^2). \end{equation} Note that $\alpha_s$ in the nonperturbative vertex is taken at the instanton size scale. This is the reason why we keep one $g_s$ inside $C(q^2)$ and the other, from the perturbative vertex, outside: they are supposed to be taken at different scales. In what follows we omit the $q$-dependence of $C$ for brevity. We are interested in forward scattering. For such kinematics, to simplify the calculation, we use Gribov's decomposition of $g_{\mu\nu}$ in the gluon propagator, \begin{equation} g_{\mu\nu}=\frac{2 p_{+\mu} p_{-\nu}}{\hat{s}}+\frac{2 p_{+\nu} p_{-\mu}}{\hat{s}}+g^{\perp}_{\mu\nu} \approx \frac{2 p_{+\mu} p_{-\nu}}{\hat{s}}. \end{equation} Such a decomposition allows us to isolate the contributions to the amplitude that are leading in powers of $\hat{s}$ and to factorize the fermion traces. Using it, we get for the amplitude (see Appendix \ref{appendix:amplitudes}) \begin{align} |\mathcal{M}_{(a)}|^2&= \sum_{f} \frac{8}{9} g_s^2 \frac{C^2}{F_{\pi}^2} \frac{\hat{s}^2 (1-x)}{q_{\perp}^2}. \end{align} We keep the sum over flavor to indicate that the expressions for $\pi^\pm$ and $\pi^0$ are different. In the case of the diagram of Fig.~\ref{fig:Xsection_contributions}(b), the only difference is in the trace over the bottom line, \begin{align}\label{eq:M1_amplitude} |\mathcal{M}_{(b)}|^2 &= \sum_{f,s,c} \frac{C^2}{F_\pi} (\bar{u}_{q_+} \sigma_{\mu \lambda} q_\lambda \gamma_5 t^a u_{p_+}) \nonumber \\ & \times (\bar{u}_{q_-} i \sigma_{\nu \rho} q_\rho t^{a'} u_{p_-}) D_{\mu\nu}^{aa'}(q) \times \Big[ h.c. \Big] \\ & = \sum_{f} \frac{8}{9} \frac{C^4}{F_{\pi}^2} \hat{s}^2 (1-x). \nonumber \end{align} Notice that now the amplitude is proportional to $C^2$, not $g_s C$.
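The equality of the two forms of the effective-coupling prefactor quoted above, $g_s\mu_a/(2m_q)$ with $\mu_a$ from Eq.~(\ref{eq:AQCM_value_inst_model}) versus the closed form $-3\pi^{3/2}\rho_c^2 m_q/(4\sqrt{\alpha_s})$, follows from $g_s=\sqrt{4\pi\alpha_s}$. A minimal sketch (arbitrary illustrative parameter values) confirms the algebra numerically:

```python
import math

def coupling_prefactor_from_mu(alpha_s, m_q, rho_c):
    # g_s * mu_a / (2 m_q), with mu_a = -3 pi (m_q rho_c)^2 / (4 alpha_s)
    g_s = math.sqrt(4.0 * math.pi * alpha_s)
    mu_a = -3.0 * math.pi * (m_q * rho_c) ** 2 / (4.0 * alpha_s)
    return g_s * mu_a / (2.0 * m_q)

def coupling_prefactor_closed(alpha_s, m_q, rho_c):
    # closed form quoted in the text: -3 pi^{3/2} rho_c^2 m_q / (4 sqrt(alpha_s))
    return -3.0 * math.pi ** 1.5 * rho_c ** 2 * m_q / (4.0 * math.sqrt(alpha_s))
```

Both functions agree for any positive $\alpha_s$, $m_q$, $\rho_c$, which is just the statement that the only $\alpha_s$-dependence left in $C(q^2)$ is the single factor $1/\sqrt{\alpha_s(\rho_c)}$.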
The amplitude for the two-pion contribution $q + q \to 2\pi + 2q$, Fig.~\ref{fig:Xsection_contributions}(c), is very similar to the case with one pion vertex; now the trace over the bottom fermion line is similar to the upper one: \begin{equation} \begin{split} |\mathcal{M}_{(c)}|^2 &= \sum_{f,s,c} -\frac{C^2}{F_{\pi}^2} (\bar{u}_{q_+} \sigma_{\mu \lambda} q_\lambda \gamma_5 t^a u_{p_+}) \\ &\times (\bar{u}_{q_-} \sigma_{\nu\rho} q_{\rho} \gamma_5 t^{a'} u_{p_-}) D_{\mu\nu}^{aa'} \times \Big[ h.c. \Big] \\ &=\sum_{f} \frac{8 C^4}{9 F_{\pi}^4} \hat{s}^2 (1-x)(1-z). \end{split} \end{equation} The final formulas for the contributions to the parton cross section shown in Fig.~\ref{fig:Xsection_contributions} are \begin{align}\label{eq:partoc_Xsec_abc} d\hat{\sigma}_{(a)} &= \sum_{f}\int\limits_{0}\limits^{E^2_{\rm sph}} \!\!\! dM_k^2 \!\! \int\limits_{0}\limits^{\pi} \!\!\! d\tilde{\phi} \, \frac{g_s^2 C^2}{9(2\pi)^5 F_{\pi}^2} \frac{1-x}{q_\perp^2 x^2}dx d^2k_\perp, \\ d\hat{\sigma}_{(b)} &=\sum_{f} \int\limits_{0}\limits^{E^2_{\rm sph}} \!\!\! dM_k^2 \!\! \int\limits_{0}\limits^{\pi} \!\!\! d\tilde{\phi} \, \frac{ C^4}{9(2\pi)^5F_{\pi}^2} \frac{1-x}{x^2}dx d^2k_\perp, \\ d\hat{\sigma}_{(c)} &= \sum_{f} \!\! \int\limits_{0}\limits^{E^2_{\rm sph}} \!\!\! dM_k^2 \!\! \int\limits_{0}\limits^{\pi} \!\!\! d\tilde{\phi} \, \frac{C^4 E^2_{\text{sph}}}{9 \, 2^{10} \pi^7 F_{\pi}^4} \frac{(1-x)}{x^2} \, dx \, d^2k_\perp. \end{align} The detailed derivation of these equations is given in Appendix \ref{appendix:amplitudes}. \subsection{\texorpdfstring{$pp\to\pi X$}{pp to pi X} cross-section} The next step is to calculate observables at the hadron level. The differential hadron cross section is a convolution of the parton distribution functions (PDFs) with the parton cross section \begin{equation} E_k \frac{d \sigma}{d^3k} = \sum_f \int^{x_a^{\max}}_{x_a^{\min}} \!\!\!\!\!\! dx_a \int^{x_b^{\max}}_{x_b^{\min}} \!\!\!\!\!\!
dx_b \, f(x_a) f(x_b) \frac{2 E_k}{\sqrt{s}x_a} \frac{d \hat\sigma}{dx d^2 k_{\perp}}. \end{equation} The flavor sum $\sum_f$ indicates the proper summation for the corresponding pion. The explicit formula for the $\pi^0$ production cross section is \begin{equation}\label{eq:hadron_xsec_final} \begin{split} E_{k} \frac{d \sigma}{d^3k} = &3\iint \! dx_a dx_b \Big( f_u(x_a)+f_d(x_a) \Big) \\ \times & \Big(f_u(x_b)+f_d(x_b)\Big) \frac{\sqrt{x_T^2 + x_F^2}}{x_a} \frac{d \hat\sigma_{(c)}}{dx d^2 k_{\perp}} \\ & + \iint \! dx_a dx_b \Big( f_u(x_a)+f_d(x_a) \Big) \\ & \times \Big(f_u(x_b)+f_d(x_b)\Big) \frac{\sqrt{x_T^2 + x_F^2}}{x_a} \frac{(d \hat\sigma_{(a)}+d \hat\sigma_{(b)})}{dx d^2 k_{\perp}}. \end{split} \end{equation} The factor $3$ in the first line is the result of the summation over the unobserved pions ($\pi^\pm , \pi^0$) in the inclusive state $X$, produced from the bottom vertex of Fig.~\ref{fig:Xsection_contributions}(c). In the case of $\pi^+$ production the cross section is \begin{equation} \begin{split} E_{k} \frac{d \sigma}{d^3k} = & 6 \iint \! dx_a dx_b f_u(x_a) \Big(f_u(x_b)+f_d(x_b)\Big) \\ &\times \frac{\sqrt{x_T^2 + x_F^2}}{x_a} \frac{d \hat\sigma_{(c)}}{dx d^2 k_{\perp}} \\ &+ 2 \iint \! dx_a dx_b f_u(x_a) \Big(f_u(x_b)+f_d(x_b)\Big) \\ &\times \frac{\sqrt{x_T^2 + x_F^2}}{x_a} \frac{(d \hat\sigma_{(a)}+d \hat\sigma_{(b)})}{dx d^2 k_{\perp}}. \end{split} \end{equation} The $\pi^-$ cross section is obtained by the replacement $f_u(x_a) \to f_d(x_a)$. In order to determine the integration limits for $x_{a,b}$, notice that one can reduce the $2\to 3$ and $2\to 4$ parton subprocesses to the $2 \to 2$ case by combining all particles except the detected pion into an effective particle with invariant mass squared $X^2$. We cannot neglect this invariant mass, since it is of the order of $s$. From \begin{equation} \hat{s}+\hat{t}+\hat{u}=X^2, \end{equation} one can relate $x_a$ and $x_b$.
Using $X^2 \geq 0$, the maximum and minimum values of $x_a$ and $x_b$ are (see Appendix~\ref{appendix:int_limits}): \begin{align}\label{eq:xa_xb_limits} &x_a^{\min} = \frac{4 x_F^2}{4x_F - x_T^2}; &x_a^{\max}=1;\\ &x_b^{\min} = \frac{k^2_{\perp}/x}{x_a(1-x)s}; &x_b^{\max}=1, \end{align} where $x=x_F/x_a$. \section{Single-spin asymmetry}\label{section:ssa} Consider the scattering of a proton with transverse polarization vector $\vec{a}$ and momentum $p_+$ off an unpolarized proton with momentum $p_-$. In this semi-inclusive process a pion with momentum $k$ is produced. For the TSSA calculation it is crucial to define the coordinate system, because the sign of the TSSA depends on it. We choose the standard right-handed coordinate system: the polarized proton moves in the $+z$ direction and its polarization vector is along the $y$ axis, Fig.~\ref{fig:kinematics_ssa}. A positive TSSA means that more pions are produced in the $+x$ half-space when the proton spin points in the $+y$ direction. \begin{figure} \includegraphics[width=0.7 \columnwidth]{drawing_ssa.pdf} \caption{Kinematics of the TSSA. The polarized proton with momentum $p_{+}$ moves in the $+z$ direction. The polarization vector $a$ points in the $+y$ or $-y$ direction. The pion momentum lies in the $zx$ plane.} \label{fig:kinematics_ssa} \end{figure} The transverse single-spin asymmetry (or analyzing power) is defined as \begin{equation}\label{eq:ssa_definition} A_N=\frac{d\sigma_{\uparrow} - d\sigma_{\downarrow}} {d\sigma_{\uparrow} + d\sigma_{\downarrow}} = \frac{d\Delta\sigma}{ 2 d\sigma}. \end{equation} The arrows $\uparrow$ and $\downarrow$ denote the proton spin polarization vector pointing in the $+y$ and $-y$ direction, respectively. We consider only tree-level diagrams for the unpolarized cross section in the denominator of Eq.~(\ref{eq:ssa_definition}). As will be shown later, this is enough to reproduce the cross-section data. Moreover, we expect that higher orders are suppressed by the instanton density and $\alpha_s$.
The polarized hadron cross section is related to the parton cross section by a convolution with the polarized PDFs: \begin{equation} d\sigma_{\uparrow} = \sum_{f} \iint dx_a dx_b \, f_{a^\uparrow /A^\uparrow}(x_a) f(x_b) d\hat{\sigma}_{\uparrow} \end{equation} \begin{equation} \Delta\sigma = \Delta_T f_{a/A} \otimes f_b \otimes \Delta \hat{\sigma}. \end{equation} $\Delta_T f_{a/A}$ is the transversity distribution -- the difference between the probabilities to find the parton $a$ polarized parallel and antiparallel to the polarization of the hadron $A$. A transverse polarization state can be represented as a superposition of helicity states: \begin{equation}\label{eq:spin_decomposition} |\!\!\uparrow\downarrow \rangle = \frac{1}{\sqrt{2}} (| + \rangle \pm i | - \rangle). \end{equation} Using this, we can rewrite the difference of the amplitudes with opposite transverse polarizations as a product of helicity amplitudes: \begin{equation}\label{eq:helicity_TSSA} |\mathcal{M}_{\uparrow}|^2-|\mathcal{M}_{\downarrow}|^2 = 2 \Im (\mathcal{M}_{+} \mathcal{M}^*_{-}). \end{equation} Here $\pm$ denotes the helicity of the initial parton in the polarized proton; we sum over the polarizations of the other particles. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{dp_cs_fig_v2.pdf} \end{center} \caption{The set of considered diagrams that give contributions to the total scattering amplitude. The notation for interaction vertices is the same as in Fig.~\ref{fig:Xsection_contributions}.} \label{fig:all_diagrams_in_TSSA} \end{figure} $\mathcal{M}_{\pm}$ has five parts, shown in Fig.~\ref{fig:all_diagrams_in_TSSA}. Up to now both $\mathcal{M}_+$ and $\mathcal{M}_-$ contain spin-flip and non-flip amplitudes, but only the interference between the spin-flip (a,d,e) and non-flip (b,c) diagrams survives in the TSSA. In this light, one can think of $\mathcal{M}_+ \mathcal{M}^*_-$ as a product of spin-flip and non-flip amplitudes. The leading contribution to $\Delta\sigma$ comes from the interference between the (a) and (b+c) diagrams.
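The identity in Eq.~(\ref{eq:helicity_TSSA}) is purely algebraic and can be checked for arbitrary complex helicity amplitudes. A short sketch (illustrative amplitudes only, no physics input):

```python
import math

def transverse_asymmetry(m_plus, m_minus):
    # build |up> = (|+> + i|->)/sqrt(2) and |down> = (|+> - i|->)/sqrt(2),
    # then take the difference of squared moduli, Eq. (spin_decomposition)
    m_up = (m_plus + 1j * m_minus) / math.sqrt(2.0)
    m_dn = (m_plus - 1j * m_minus) / math.sqrt(2.0)
    return abs(m_up) ** 2 - abs(m_dn) ** 2

def interference_formula(m_plus, m_minus):
    # right-hand side of Eq. (helicity_TSSA): 2 Im(M_+ M_-^*)
    return 2.0 * (m_plus * m_minus.conjugate()).imag
```

The two functions coincide for any pair of complex amplitudes, making explicit the statement that the TSSA vanishes unless the spin-flip and non-flip amplitudes have a relative phase.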
We expect that the interference between the (b+c) and (d+e) diagrams is suppressed by an additional power of $\alpha_s$; moreover, because they have the same structure, the phase shift between them is small. The upper line must contain an odd number of chromomagnetic vertices, and the bottom line an even number (or only perturbative ones). First we consider the case with all perturbative vertices on the bottom line. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Delta_sigma_and_momentum_notation.pdf} \caption{The leading contribution to the TSSA. The left loop diagram is denoted $\mathcal{B}_1$, the right one $\mathcal{B}_2$. The tree-level diagram is $\mathcal{A}$.} \label{fig:delta_sigma_leading_interf} \end{center} \end{figure} We use the momentum notation shown in Fig.~\ref{fig:delta_sigma_leading_interf}. The Sudakov decomposition of the momentum vectors is as before, Eq.~(\ref{eq:momenta_decomposition}); the new vector $q_0$ is decomposed as \begin{align} q_0&=\alpha_0 p_+ + \beta_0 p_- + q_{0\perp}. \end{align} $\Delta \hat{\sigma}$ is proportional to the interference of the spin-flip and non-flip diagrams, Eq.~(\ref{eq:helicity_TSSA}): \begin{equation} \Delta\hat{\sigma} = \frac{1}{2\hat{s}} 2 \cdot 2 \overline{\sum_{s,c}} \Im[\mathcal{A_+} (\mathcal{B}_{1-}+\mathcal{B}_{2-})^*]d\hat{R}_3. \end{equation} The factor $1/2\hat{s}$ is the flux of the initial particles. In the numerator, the first factor of 2 appears because $ \mathcal{A}_{+} (\mathcal{B}_{1-} + \mathcal{B}_{2-})^*=\mathcal{A}^*_{-} (\mathcal{B}_{1+}+\mathcal{B}_{2+}) $ \cite{Dixon:2013uaa}, and the second comes from Eq.~(\ref{eq:helicity_TSSA}). $\overline{\sum}_{s,c}$ symbolically denotes averaging over spin and color states. The three-particle phase space $d\hat{R}_3$ was calculated above. Using the Gribov decomposition of $g_{\mu\nu}$, we factorize the diagrams into upper and lower parts.
The interference between the first and second diagrams of Fig.~\ref{fig:delta_sigma_leading_interf} is \begin{equation} \begin{split} \mathcal{A}_{+} \mathcal{B}_{1-}^* =& \frac{1}{2\cdot9} \Big( \frac{2}{\hat{s}} \Big)^3 g_s^3 \frac{C^3}{F_{\pi}^2} \int \! \! \frac{d^4q_0}{(2\pi)^4} \\ & \times \frac {\mathrm{Tr}[t^a t^b t^c] \mathrm{Tr}[t^{a'} t^{b'} t^{c'}] \delta_{a a'}\delta_{b b'}\delta_{c c'} (U_1 D)} {q^2 q_0^2 (q-q_0)^2 (p_+ + q_0 - k)^2 (p_{-} - q_0)^2}. \end{split} \end{equation} $U_1$ and $D$ are the products of gamma matrices corresponding to the upper and bottom fermion lines, respectively. The factors 2 and 9 in the denominator come from averaging over the spins of the unpolarized quark and over the color states. The color trace is \begin{equation} \mathrm{Tr}[t^a t^b t^c] \mathrm{Tr}[t^{a'} t^{b'} t^{c'}] \delta_{a a'}\delta_{b b'}\delta_{c c'}=-2/3. \end{equation} We calculate the imaginary part by putting the fermions in the loop on mass shell. After collecting all factors of $i$ and the signs in the vertices, \begin{align} \Im (\mathcal{A B}_1^*) =& -\frac{2/3}{2\cdot 9}\Big(\frac{2}{\hat{s}}\Big)^3 g_s^3 \frac{C^3}{F_{\pi}^2} \int \! \! \frac{d^2 q_{0\perp} d\alpha_0 d\beta_0}{(2\pi)^4} \frac{\hat{s}(-2\pi i)^2}{2\cdot 2i} \nonumber \\ & \times \frac {\delta((p_- -q_0)^2) \delta((p_+ +q_0-k)^2) U_1 \, D}{q^2 q_0^2 (q_0-q)^2} \nonumber \\ =& -\frac{g_s^3}{54 \hat{s}^4 \pi^2}\frac{C^3}{F_{\pi}^2} \int \! \! \frac{d^2 q_{0\perp}}{(1-x)}\frac {U_1 \, D}{q_{\perp}^2 q_{0\perp}^2 (q_{0\perp}-q_{\perp})^2}, \label{eq:ImAB1_initial} \end{align} where $d^4q_0 = \frac{\hat{s}}{2}d\alpha_0 d\beta_0 d^2q_{0\perp}$ was used. Notice that the loop integral in $\mathcal{AB}_1^*$ is restricted by the sphaleron energy, similarly to the phase space integral, $(p_+ + q_0)^2<E_{\textrm{sph}}^2$.
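The color factor $-2/3$ quoted above can be verified by brute force with explicit Gell-Mann matrices, $t^a=\lambda^a/2$. A pure-Python sketch (no external libraries) carrying out the triple sum over the contracted adjoint indices:

```python
# Gell-Mann matrices lambda^1..lambda^8 as 3x3 complex lists
s8 = 3 ** -0.5
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s8, 0, 0], [0, s8, 0], [0, 0, -2 * s8]],
]

def mm(A, B):
    # 3x3 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(A):
    return A[0][0] + A[1][1] + A[2][2]

# SU(3) generators t^a = lambda^a / 2
t = [[[x / 2 for x in row] for row in m] for m in lam]

def color_factor():
    # sum_{a,b,c} Tr[t^a t^b t^c] * Tr[t^a t^b t^c]
    total = 0
    for a in range(8):
        for b in range(8):
            for c in range(8):
                T = tr(mm(mm(t[a], t[b]), t[c]))
                total += T * T
    return total
```

This reproduces $-2/3$, consistent with $\mathrm{Tr}[t^at^bt^c]=\tfrac{1}{4}(d^{abc}+if^{abc})$ together with $\sum d^2=40/3$ and $\sum f^2=24$ for $SU(3)$.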
For the upper fermion line we have \begin{align} U_1 =& \bar{u}_{p_+}(-) \cancel{q}_{0\perp} \cancel{p}_- \gamma_5 (\cancel{p}_+ + \cancel{q}_0 - \cancel{k}) (\cancel{q}_{\perp} - \cancel{q}_{0\perp}) \cancel{p}_- \nonumber \\ & \times (\cancel{p}_+ + \cancel{q} - \cancel{k}) \gamma_5 \cancel{p}_- \cancel{q}_{\perp} u_{p_+}(+) \nonumber \\ =& -2 (1-x)^2 \hat{s}^3 (q_{0\perp}^2 q_x - q_{\perp}^2 q_{0 x}), \end{align} where the subscript $x$ denotes the component of a vector along the $x$-axis, and $u(\pm)$ is the spinor of a quark in the corresponding helicity state. For the bottom quark line with all perturbative vertices the trace is \begin{align} D = \text{Tr}[(\cancel{p}_- -\cancel{q})\cancel{p}_+ \cancel{p}_- \cancel{p}_+ (\cancel{p}_--\cancel{q}_0) \cancel{p}_+]= 2 \hat{s}^3. \end{align} The second contribution shown in Fig.~\ref{fig:delta_sigma_leading_interf} is given by \begin{align}\label{eq:ImAB2} \Im \mathcal{A B}_2^* = -\frac{g_s^3}{54 \hat{s}^4 \pi^2}\frac{C^3}{F_\pi^2} \int \! \! d^2 q_{0\perp}\frac {U_2 \, D}{q^2_\perp q_{0\perp}^2 (q_{0\perp}-q_\perp)^2}, \end{align} \begin{align} U_2 =& \bar{u}_{p_+}(-) \cancel{q}_{0\perp} \cancel{p}_- (\cancel{p}_+ + \cancel{q}_{0\perp}) (\cancel{q}_{\perp} - \cancel{q}_{0\perp}) \cancel{p}_- \gamma_5 \nonumber \\ &\times (\cancel{p}_+ + \cancel{q} - \cancel{k}) \gamma_5 \cancel{p}_- \cancel{q}_{\perp} u_{p_+}(+) \nonumber \\ =& 2 (1-x) \hat{s}^3 (q_{0\perp}^2 q_x - q_{\perp}^2 q_{0 x}). \end{align} The absence of an additional $(1-x)$ in the trace $U_2$ in comparison with $U_1$ is compensated by the lack of $(1-x)$ in the denominator of Eq.~(\ref{eq:ImAB2}); the trace $D$ is the same. Therefore, $\Im(\mathcal{AB}_1^*)$ and $\Im (\mathcal{AB}_2^*)$ differ in sign and in the integration limits over $d^2q_{0\perp}$. The loop integral in $\mathcal{AB}_1$ is limited by the sphaleron energy; in contrast, the loop integral in $\mathcal{AB}_2$ has no such limit.
Because the integrands have the same absolute value but opposite signs, we can exclude the part of the integration region where they cancel. The nonzero contribution comes from the region where $(p_+ + q_0)^2>E^2_{\textrm{sph}}$. Combining these observations, the final result is \begin{equation}\label{eq:TSSA_parton} \begin{split} \frac{d\Delta \hat{\sigma}}{dx d^2k_{\perp}} =& \frac{g_s^3}{27 \cdot 2^6 \pi^7} \frac{C^3}{F_\pi^2} \int_0^{E^2_{\textrm{sph}}}\!\!\!\!\!\!\!\!\!\!dM_{k}^2 \!\!\int_{0}^{\pi} \!\!\!d\tilde{\phi} \int\!\!d^2q_{0\perp} \\ &\times \theta(M_0^2-E^2_{\textrm{sph}}) \frac{(1-x)}{x^2} \frac{(q_{x}q_{0\perp}^2 - q_{0x}q_{\perp}^2)} {q_{\perp}^2 q_{0\perp}^2 (q_{0\perp}-q_{\perp})^2}, \end{split} \end{equation} where $M_0^2=(p_+ + q_0)^2$. \begin{figure} \includegraphics[width=.9\linewidth]{down_line_ssa_additional.pdf} \caption{Additional contributions to the TSSA from diagrams with chromomagnetic vertices on the bottom line.} \label{fig:CM_vertex_down_line} \end{figure} We also calculated the contributions to the TSSA from the diagrams with chromomagnetic vertices on the bottom line (Fig.~\ref{fig:CM_vertex_down_line}). There are three possible combinations, which give for the trace: \begin{align} D_{1} & = 2 s^3 (q_{\perp} \cdot q_{0 \perp}), \\ D_{2} & = 2 s^3 (q^2 - (q_{\perp} \cdot q_{0 \perp})), \\ D_{3} & = 2 s^3 (q_0^2 - (q_{\perp} \cdot q_{0 \perp})). \end{align} One should substitute these expressions instead of $D$ in Eqs.~(\ref{eq:ImAB1_initial}) and (\ref{eq:ImAB2}), replacing the couplings $g_s$ and $C$ accordingly. \section{Numerical results and discussion}\label{section:discussion} For the numerical estimates we use the parameters provided by the instanton liquid model of the QCD vacuum\cite{Schafer:1996wv,Diakonov:2002fq}. We choose $F_{\pi}=93$~MeV, $m_q=90$~MeV, $\rho_c=1.6$~GeV$^{-1}$ ($0.32~\textrm{fm}$). This corresponds to an AQCM value $\mu_a=-0.45$ and $\alpha(\rho_c)\approx 0.6$.
For the perturbative coupling we use \begin{equation} \alpha_s(q^2)=\frac{4\pi}{9 \ln (q^2/\Lambda_{\mathrm{QCD}}^2)} \theta(q^2-1/\rho_c^2), \end{equation} where $\Lambda_{\mathrm{QCD}}=200$~MeV. The choice of $\Lambda_{\mathrm{QCD}}$ does not significantly affect the numerical results. The step function $\theta$ ``switches off'' the perturbative interaction at momenta below the instanton scale. It regularizes the cross section, removing the Landau pole, and effectively works as a phenomenological gluon mass. Such a procedure can be justified in terms of the potential between quarks: in the Cornell potential the linear term starts to dominate the Coulomb-like term from one-gluon exchange at distances larger than $0.3$~fm. First, we discuss the results for the parton cross section and the TSSA in $qq\to \pi^0 X$ in order to demonstrate the dynamics unaffected by PDFs. In what follows we use $k_t=k_{\perp}$. Fig.~\ref{fig:parton_cs_vs_x_k} shows the contributions of the different diagrams from Fig.~\ref{fig:Xsection_contributions} to the $\pi^0$ production cross section. One can see that, for the chosen parameters, the contributions of diagrams (a) and (c) are of the same order, while the contribution of (b) is smaller. The slope of the cross section in $k_t$ is determined by the shape of the form factor $F_g$. All three contributions have a similar dependence on $x$. As expected, at high $k_t$ the diagram (a) with the perturbative vertex dominates. $E_{\textrm{sph}}$ in Eq.~(\ref{eq:partoc_Xsec_abc}) determines the minimal $k_t$ at which the whole quark-pion system has nonzero transverse momentum. When $|k_t| > E_{\textrm{sph}}/2$, the exchanged gluon must have nonzero transverse momentum $q_\perp$ at any $x$. At $|k_t| \leq E_{\textrm{sph}}/2$, the momentum $q_\perp$ can vanish and the integral diverges. We avoid this by the cutoff of the perturbative coupling $\alpha_s$ described above. This determines the transition from the ``flat'' behavior of the cross section at small $k_t<1.5$~GeV to a falling one.
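As a cross-check of the parameters quoted above, the cut-off running coupling can be sketched in a few lines. This is a minimal numerical illustration of the formula in the text, assuming the stated values $\Lambda_{\mathrm{QCD}}=200$~MeV and $\rho_c=1.6$~GeV$^{-1}$ (the function name \texttt{alpha\_s} is our own):

```python
import math

LAMBDA_QCD = 0.2   # GeV, Lambda_QCD = 200 MeV as quoted in the text
RHO_C = 1.6        # GeV^-1, average instanton size rho_c

def alpha_s(q2):
    """One-loop running coupling 4*pi / (9 * ln(q^2/Lambda^2)) with the
    theta-function infrared cutoff at the instanton scale 1/rho_c^2."""
    if q2 <= 1.0 / RHO_C**2:   # theta(q^2 - 1/rho_c^2): switched off below
        return 0.0
    return 4.0 * math.pi / (9.0 * math.log(q2 / LAMBDA_QCD**2))
```

Evaluating just above the cutoff, $q^2 = 1/\rho_c^2 \approx 0.39$~GeV$^2$, gives $\alpha_s \approx 0.6$, consistent with the value $\alpha(\rho_c)\approx 0.6$ quoted above.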
\begin{figure}[tb] \begin{minipage}{.49\columnwidth} \centering \includegraphics[width=\columnwidth]{qq_Xsec_x_02.pdf} (a) \end{minipage}% \hspace{0.0em} \begin{minipage}{.48\columnwidth} \centering \includegraphics[width=\columnwidth]{qq_Xsec_kt_1.pdf} (d) \end{minipage}\\ \begin{minipage}{.49\columnwidth} \centering \includegraphics[width=\columnwidth]{qq_Xsec_x_05.pdf} (b) \end{minipage} \hspace{0em} \begin{minipage}{.48\columnwidth} \centering \includegraphics[width=\columnwidth]{qq_Xsec_kt_2.pdf} (e) \end{minipage} \\ \begin{minipage}{.48\columnwidth} \centering \includegraphics[width=\columnwidth]{qq_Xsec_x_08.pdf} (c) \end{minipage} \hspace{0em} \begin{minipage}{.49\columnwidth} \centering \includegraphics[width=\columnwidth]{qq_Xsec_kt_4.pdf} (f) \end{minipage} \caption{Differential cross section $qq \to \pi^0 X$ as a function of $k_t$ (left column) and $x$ (right column). The solid line is the contribution of Fig.~\ref{fig:Xsection_contributions}(a), the dashed line that of Fig.~\ref{fig:Xsection_contributions}(b), and the dotted line that of the two-pion process of Fig.~\ref{fig:Xsection_contributions}(c). Parameters are as described in the text.} \label{fig:parton_cs_vs_x_k} \end{figure} \begin{figure}[tbh] \includegraphics[width=.48\columnwidth]{qq_SSA_vs_x.pdf} \includegraphics[width=.49\columnwidth]{qq_SSA_vs_kt.pdf} \caption{Pion production asymmetry from scattering of constituent quarks. Left: TSSA as a function of $x$ for different $k_{\perp}$. Right: TSSA as a function of $k_{\perp}$ for $x=0.2,~0.5,~0.8$.} \label{fig:ssaQ} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=.8\columnwidth]{ssaQvsx_D.pdf} \caption{Contribution of diagrams with the AQCM vertex on the bottom line to the parton-level TSSA. The solid line is the total result, the long-dashed line is the result with perturbative bottom vertices.
Lines denoted $D_{i}$ correspond to the contributions depicted in Fig.~\ref{fig:CM_vertex_down_line}.} \label{fig:ssaQ_D} \end{figure} Fig.~\ref{fig:ssaQ} shows the asymmetry for parton scattering. It is evident that the TSSA changes sign at some $k_t$. This is due to the competition of the two terms in Eq.~(\ref{eq:TSSA_parton}): $q_x q_{0\perp}^2$ and $q_{0x} q_{\perp}^2$. At small $k_t$ the first term dominates; $q$ grows with $k_t$, and the second term eventually overcomes the first one. The TSSA reaches a large value, $\sim 10\%$, at high $x$ and $k_t$. However, for small $k_t$, $A_N$ has a peak at smaller $x$ and changes sign at larger $x$. Fig.~\ref{fig:ssaQ_D} shows the contribution to the TSSA of the diagrams with chromomagnetic vertices on the bottom line from Fig.~\ref{fig:CM_vertex_down_line}. These contributions almost cancel out, and the final result does not change significantly. In order to calculate the hadron cross section and asymmetry, we use the set of PDFs provided by the NNPDF Collaboration~\cite{Nocera:2014gqa}. The results in the figures are obtained with NLO parton densities (valence + sea quarks) taken at the scale $Q^2=1$~GeV$^2$. Our result for the cross section is depicted in Fig.~\ref{fig:cs_rhic} and agrees with the RHIC data. Similar pQCD calculations are usually sensitive to the choice of fragmentation functions and scale. Good agreement between forward-rapidity data and an NLO pQCD calculation was reported in \cite{Adamczyk:2012xd}. However, the DSS fragmentation function~\cite{deFlorian:2007aj} was used there, which includes previous RHIC data in its fit. Calculations with other fragmentation functions, which do not include the RHIC forward-rapidity data in their analysis, usually underestimate the cross section by a factor of 2~\cite{deFlorian:2007aj}. Overall, for RHIC forward kinematics our model gives predictions similar to pQCD but with fewer parameters. We now turn to the TSSA.
In a non-relativistic framework the transverse and longitudinal polarized distributions are equal, $\Delta_T f = \Delta_L f$, since rotations in spin space between different bases commute with spatial operations. Relativistically, however, $\Delta_T f$ and $\Delta_L f$ differ. Therefore any difference between the helicity and transversity PDFs is related to the relativistic nature of parton dynamics inside hadrons. Unfortunately, the transversity distribution is poorly known~\cite{Radici:2016lam}. Instead we use the helicity parton densities $\Delta_L f$ from NNPDF as an estimate. There is evidence that the longitudinal and transverse distributions are of the same order~\cite{Barone:2001sp,Gockeler:2005cj,Aoki:1996pi}. Moreover, the nucleon's tensor charge has a strong scale dependence and, as a result, the transversity distribution may inherit this strong evolution~\cite{Wakamatsu:2008ki,Barone:2001sp}. In our estimates we do not consider the evolution of the transversity and unpolarized PDFs. \begin{figure}[tb] \includegraphics[width=.8\columnwidth]{STAR_pi0_Xsec.pdf} \caption{Differential cross section for $\pi^0$ production vs $k_t$ at RHIC. Data are from \cite{Adamczyk:2012xd,Adams:2006uz}.} \label{fig:cs_rhic} \end{figure} \begin{figure}[tb] \includegraphics[width=0.8\columnwidth]{STAR_pi0_AN.pdf} \caption{TSSA for $\pi^0$ production at RHIC. Data are from \cite{Abelev:2008af,Adamczyk:2012xd}.} \label{fig:ssa_rhic} \end{figure} \begin{figure}[tbh] \includegraphics[width=1\columnwidth]{AN_xFbins.pdf} \caption{TSSA at individual $x_F$ bins. Data are from \cite{Abelev:2008af}.} \label{fig:rhic_ssa_xFbins} \end{figure} Fig.~\ref{fig:ssa_rhic} shows the results for $A_N$ at RHIC energies for the neutral pion. Our model predictions are close to the data at $\eta=3.3$ and slightly underestimate them. At higher rapidity the discrepancy becomes larger. $A_N$ rises with $x_F$, with a maximum asymmetry of $\approx 10\%$ at $x_F=0.8$.
Although the model gives the correct trend of growing asymmetry, the theoretical curves are shifted in $k_t$ in comparison with the experimental points. One also sees a dependence on pseudorapidity, whereas the data show no such effect. The reason for this behavior in our model is that, for the same $x_F$, $k_t$ decreases with $\eta$. It is evident from Fig.~\ref{fig:ssaQ} that for $k_t$ of $1$--$2$~GeV the asymmetry becomes small or changes sign. This is what happens at $\eta=3.7$ when $x_F\approx 0.4$ in Fig.~\ref{fig:ssa_rhic}. In the case $\eta=3.3$ it occurs at lower $x_F$ and is less noticeable. Fig.~\ref{fig:rhic_ssa_xFbins} shows the predictions of our model at different $x_F$. Results from the fit of the Sivers function \cite{Anselmino:2005ea} and the twist-3 fit from \cite{Kouvaris:2006zy} are also shown. Notice that our model, in contrast with the others, demonstrates an asymmetry growing with $k_t$. Similarly to Fig.~\ref{fig:ssa_rhic}, our theoretical curves are shifted to higher $k_t$ with respect to the data points. A possible reason for the ``shifted'' results is interference with other diagrams that we neglected in the calculation. This effect requires further study. An additional contribution to the TSSA induced by instantons was suggested in \cite{Ostrovsky:2004pd} and \cite{Qian:2011ya,Qian:2015wyq}. It is based on the results of \cite{Moch:1996bs}, where the effects of instantons in the unpolarized DIS process were calculated. In that mechanism the effect arises from a phase shift in the quark propagator in the instanton field. This contribution might be complementary to the effect calculated here, and the interplay between them could be the reason for the overall shift of the TSSA to the region of higher $k_t$. The results for the cross section are sensitive to the value of the constituent quark mass $m_q$, because the non-perturbative coupling is proportional to $m_q$. In order to describe the cross-section data we take $m_q=90$~MeV.
This agrees with the Single Instanton Approximation, where $m_q=86$~MeV~\cite{Faccioli:2001ug}. However, the constituent quark masses from the Diakonov-Petrov model ($m_q=350$~MeV)~\cite{Diakonov:2002fq} and the Mean Field Approximation ($m_q=170$~MeV)~\cite{Schafer:1996wv} are too large. The question of how the proposed mechanism interplays with the factorization approach requires additional study. In our model fragmentation and hard rescattering are coherent. It is clear that the instanton-generated vertices are suppressed at high enough $k_t$; factorization is then restored, and fragmentation must arise from some other process, not coherent with the hard rescattering. If we assume that this incoherent process is completely contained in the fitted fragmentation functions, it is impossible to study the intermediate kinematic region where both of them are at work. We need a model for fragmentation. A possible answer is to calculate the fragmentation functions within our model, in a way similar to NJL models~\cite{Yang:2016gnd,Nam:2012af}. If the model gives reasonable results for the fragmentation function, it will be possible to study the interplay between the coherent and incoherent regimes. \section{Conclusion}\label{section:conclusion} We calculated the TSSA and cross section for pion production in $pp$ scattering at RHIC energies using the instanton-induced effective interaction. The proposed framework requires fewer parameters than the traditional pQCD approach, where one needs to parameterize and fit the pion fragmentation function. The predictions of the model for the cross section are consistent with the experimental data. Our model produces a large asymmetry at RHIC kinematics, of the same magnitude as in experiment, although it is shifted to the region of higher $k_t$ with respect to the data. A remarkable outcome of our approach is the increase of the asymmetry with the transverse momentum of the final particle at given kinematics. This growth is replaced by a slow decrease at $k_t>5$~GeV.
Such behavior comes from the rather soft power-like form factor of the effective vertices and the small average instanton size, $\rho_c \approx 1/3$~fm, in the QCD vacuum. A similar dependence of the asymmetry on $k_t$ is seen in experiment and was not expected in the models based on TMD factorization and ad hoc parametrizations of the Sivers and Collins functions. Another feature of the approach is that $A_N$ does not depend on the c.m.\ energy. The energy independence of the TSSA is observed experimentally and contradicts the naive expectation that spin effects in the strong interaction should vanish at high energy. Moreover, the sign of the TSSA is defined by the sign of the AQCM. The proposed mechanism breaks factorization and cannot be treated as an additional contribution to the Sivers distribution function or to the Collins fragmentation function. In the framework of this model, the asymmetry in SIDIS and in $pp$ is generated by distinct diagrams and in general could be different. If this effect takes place, the Sivers and Collins functions are not universal at small transverse momenta. This phenomenon requires further study. \begin{acknowledgments} In memory of N.I. Kochelev, a great mentor and scientist, who proposed the idea of this work. The study was supported by the National Natural Science Foundation of China, Grants No.~11975320 (P.M.Z.) and No.~11875296 (N.K.). N.K. thanks the Chinese Academy of Sciences President's International Fellowship Initiative for support via Grant No.~2020PM0073. \end{acknowledgments}
\section{Introduction} Quantum computing is experiencing the transition from a scientific to an engineering field, with the promise to revolutionize an extensive range of applications demanding high-performance computing. The major areas include artificial intelligence, autonomous driving, cryptography, drug development, chemistry, and financial optimization. Many implementation approaches have been pursued for quantum computing systems; the current main streams are based on superconducting, photonic, trapped-ion, and semiconductor qubits. Semiconductor-based quantum computing, specifically using CMOS technologies, is promising as it provides potential for the integration of qubits with their control and readout circuits on a single chip. This paves the way for the realization of a large-scale quantum computing system with many qubits (e.g., over 1000) for solving practical problems. Quantum computing, first envisioned by Richard Feynman and Paul Benioff in the 1980s \cite{feynman82, benioff80}, has passed several important milestones to reach the current state of development. The major landmarks include the invention of Shor's algorithm for prime number factorization and discrete logarithm on a quantum computer \cite{shor94, shor97}, the development of Grover's algorithm for efficient search in large databases \cite{grover96}, the use of semiconductor quantum dots to implement qubits \cite{loss98}, a silicon-based quantum computer architecture \cite{kane98}, the first spin qubit in silicon \cite{pla12}, the first CMOS spin qubit \cite{maurand16}, and the proposal of using cryogenic CMOS circuits for control and readout of qubits \cite{charbon17}, \cite{patra18}. Recently, Google announced it has achieved the milestone of \emph{quantum supremacy} \cite{preskill12}: in 200 seconds, its Sycamore quantum processor completed a task the equivalent of which would take a state-of-the-art supercomputer much longer to complete \cite{arute19}.
These achievements, along with the long-term vision for the future of quantum computing, have led to growing global interest and increasing amounts of investment by governments, established companies, and start-ups in this field, e.g., the launch of the US National Quantum Initiative \cite{raymer19, monroe19} and the EU Quantum Technologies Flagship Program \cite{eu_qc3}. Since the first realization of semiconductor qubits using quantum dots \cite{loss98}, many scientific research works have been devoted to improving the quality of these qubits by using different semiconductor materials (e.g., GaAs, SiGe, and Si) and isotopes (e.g., $\rm ^{28}$Si) \cite{zwanenburg13}. Furthermore, several qubit architectures, including the spin-1/2 qubit \cite{levy02}, the singlet-triplet qubit \cite{koppens05} and its subset the hybrid spin/charge qubit \cite{shi12, kim14, kim15-1}, and the exchange interaction qubit \cite{DiVincenzo00-2}, have been proposed to improve the performance and simplify the realization of qubits and logic quantum gates. Most of these developments attempt to improve the performance of individual qubits, e.g., their decoherence time, manipulation time, and fidelity. In a large-scale quantum computing system, however, there are other considerations which can be even more critical than the performance of qubits. For instance, the quality of the interface between qubits and classical electronic circuits for control and readout is critical to the performance of a \emph{large-scale} quantum computer. Quantum computers can outperform classic computers by virtue of running quantum algorithms \cite{Montanaro06}. These algorithms are realized using \emph{quantum circuits}, which are sequences of elementary quantum gates applied to qubits. Semiconductor qubits and quantum gates have limited coherence time, e.g., due to environmental noise, which is usually much shorter than the time required to execute quantum algorithms.
Multiple physical qubits can be employed to construct a logical qubit with much higher performance \cite{fowler12}. The associated quantum error correction improves the fault tolerance of quantum algorithms required for large-scale quantum computation. The hardware overhead of redundant qubits is a challenge in the implementation of quantum error correction. In the Noisy Intermediate-Scale Quantum (NISQ) era, quantum computers are expected to be realized using a \emph{limited} number (e.g., 50--100) of \emph{imperfect} qubits. However, the size of quantum circuits will be limited by noise in the quantum gates and, as a result, the quantum computers can outperform classic computers only in a few computational tasks \cite{preskill18}. CMOS technology can provide potential for the implementation of high-quality qubits, as a result of the high silicon purity achieved in advanced processes \cite{Vandersypen17, blokhina20}. Large arrays of qubits can be implemented in a compact chip area using nanometer-scale transistors. This allows the integration of redundant qubits for quantum error correction with target qubits on the same chip. Furthermore, the use of cryogenic CMOS circuits for control and readout of qubits enables the opportunity of integrating the qubits and their interface circuits on a single chip. This perspective, however, entails dealing with numerous new challenges, including the lack of precise cryogenic models of CMOS devices (e.g., transistors, inductors, capacitors, resistors, interconnects), process variations, the effects of control circuits on qubit performance, crosstalk between multiple paths of RF control signals, and decoherence of qubits arising from the noise of readout circuits. In this paper, we present an overview and future perspective of CMOS quantum computing, exploring developed semiconductor qubit structures, quantum gates, as well as control and readout circuits, with a focus on the promises and challenges of CMOS implementation.
In Section~\ref{sec:CMOS_Qubits}, we investigate semiconductor qubit structures and quantum gates. The interface of classic and quantum electronics is elaborated in Section~\ref{sec:quantum_classic_interface}, where we discuss qubit control and readout circuits, as well as architectures for large-scale qubit arrays. In Section~\ref{sec:future}, we present future trends in CMOS quantum computing toward the realization of scalable quantum computers. \IEEEPARstart{}{} \section{CMOS Qubits} \label{sec:CMOS_Qubits} The fundamental building block of a quantum computer is a qubit, operating based on the \emph{superposition} of two basic quantum states. The qubit's state can be expressed in terms of the basic quantum states, $\Ket{0}$ and $\Ket{1}$, as $\Ket{\psi} = \alpha_0 \Ket{0} + \alpha_1 \Ket{1}$, where $\alpha_0$ and $\alpha_1$ are complex-valued coefficients. The measured qubit state is a random outcome with a probability of $|\alpha_0|^2$ for $\Ket{0}$ and $|\alpha_1|^2$ for $\Ket{1}$, subject to the constraint $|\alpha_0|^2 + |\alpha_1|^2 = 1$ \cite{nielsen10}. The second fundamental feature of qubits, first noted by Albert Einstein in 1935 \cite{einstein35}, is quantum \emph{entanglement}. The quantum state of each qubit in a pair or group of qubits cannot be described independently of the state of the others. That is, the physical properties of entangled qubits, e.g., position, spin, and polarization, are correlated, and if one is measured, those of the others also collapse. Entanglement plays an essential role in quantum computing. Qubits have other special features that distinguish them from classic bits. In a perfectly isolated qubit, there exists a definite phase relation between different states, and the system is said to be coherent. In practice, however, due to interactions of the qubit with the physical environment, the coherence is lost with time through a process called quantum decoherence.
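The superposition and normalization rules above can be made concrete in a few lines. The following is a generic illustrative sketch (the amplitudes and phase are arbitrary choices, not taken from any specific qubit discussed here):

```python
import cmath
import math

# Single-qubit state |psi> = a0|0> + a1|1>, stored as two complex amplitudes.
# Illustrative choice: |a0|^2 = 0.25, |a1|^2 = 0.75, with a relative phase
# exp(i*pi/4) that does not affect the measurement statistics.
a0 = math.sqrt(0.25)
a1 = math.sqrt(0.75) * cmath.exp(1j * math.pi / 4)

# Born rule: measurement yields |0> with probability |a0|^2 and |1> with |a1|^2.
p0 = abs(a0) ** 2
p1 = abs(a1) ** 2

assert math.isclose(p0 + p1, 1.0)   # the normalization constraint
```

The relative phase between $\alpha_0$ and $\alpha_1$ is exactly what is lost during decoherence: it leaves the single-measurement probabilities unchanged but carries the quantum information exploited by interference between gates.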
Moreover, the qubits are fragile and their quantum state is lost upon measurement. Therefore, the measurement is not deterministically repeatable \cite{nielsen10}. The physical implementation criteria of quantum computers have been laid down by David DiVincenzo in 2000 \cite{DiVincenzo00} as follows: 1) scalable physical system with well-characterized qubits, 2) the ability to initialize the state of the qubits, 3) decoherence times much longer than the gate operation time, 4) a universal set of quantum gates, and 5) a qubit-specific measurement capability. Solid-state quantum dot qubits can be constructed based on the charge or spin of electrons (or holes). In a charge qubit, the quantum states can be defined based on the position of the charge in a quantum dot, while the spin qubits operate based on the polarization of the electron's spin. Several physical implementations have been proposed for these qubits, each featuring different benefits and challenges in terms of operation time, decoherence time, control, and readout requirements. We discuss these qubit structures with a focus on the CMOS implementation for large-scale quantum computing. \begin{figure}[tbh] \centering \includegraphics[width=\columnwidth]{cmos_qubits.pdf} \caption{Qubit structure candidates for CMOS implementation (a) double quantum dot charge qubit \cite{hayashi03}, (b) isolated double quantum dot charge qubit \cite{gorman05}, (c) MOS spin qubit \cite{hwang17}, (d) SOI CMOS spin qubit \cite{maurand16}, (e) hybrid charge/spin qubit \cite{shi12}.} \label{cmos_qubits} \end{figure} \subsection{Charge Qubits} The solid-state charge qubits have been extensively studied in the literature \cite{hayashi03, peterson10, petta04, hu05, gorman05, shinkai09, hollenberg04, giounanlis19, blokhina20}. A charge qubit, in its simplest form, can be constructed using a double quantum dot, in which the quantum states are defined by the excess electron occupation in the left and right quantum dots [Fig. 
\ref{cmos_qubits}(a)]. The two quantum dots are connected through an inter-dot tunneling barrier. The quantum dots are weakly coupled to the source and drain via tunneling barriers. The voltages applied to the left, right, and middle gates control the charge transport through the tunneling barriers. The drain-source voltage can be used to set the operating mode of the qubit. The quantum state of the qubit can be read directly using the drain-source current (for the discussed structure) \cite{hayashi03} or using a charge sensor implemented as a single electron transistor (SET) or quantum point contact (QPC) \cite{peterson10}. This structure should be operated at very low cryogenic temperatures, e.g., $\ll$\,1\,K, to exhibit the quantum effects. The early solid-state qubits were fabricated using GaAs, while, later, silicon-based structures achieved longer coherence times as a result of specific physical features of silicon \cite{zwanenburg13}. The charge qubit based on the double quantum dot was first demonstrated in GaAs/AlGaAs hetero-structure and achieved 1\,ns coherence time \cite{hayashi03}. The coherence time improved to 7\,ns using one-electron quantum dots and QPC charge detector \cite{peterson10}. This qubit structure can be made compatible with standard CMOS processes \cite{giounanlis19, blokhina20, bashir20 }. A charge qubit based on an isolated double quantum dot is shown in Fig.\,\ref{cmos_qubits}(b), where the operations are performed using capacitively coupled elements \cite{gorman05}. The quantum state readout is achieved using a SET device integrated with the qubit structure. A prototype implemented in a silicon-on-insulator (SOI) wafer with a phosphorous-doped active region exhibits a long coherence time of 200\,ns. This is attributed to the weak coupling of the isolated qubit to charge noise in the surrounding gates. However, the operational time is relatively short as a result of low inter-dot coupling. 
This structure requires extra fabrication steps in a conventional CMOS process. The longest coherence times have been achieved using trapped-ion charge qubits. These structures, however, require extra fabrication steps for precise implantation of donors, which is not fully compatible with current CMOS processes \cite{hollenberg04}. The charge qubits offer several advantages for CMOS quantum computing. The structure based on the double quantum dot is compatible with standard CMOS processes. Their readout can be performed directly through the drain-source current of the quantum dot or charge sensors integrated with the qubit. Control of the charge qubit can be performed using gate voltage pulses which can be generated with high accuracy in CMOS processes. The quantum gates can be realized as electrostatically coupled quantum dots \cite{shinkai09}. In advanced CMOS, the charge qubits have the potential to achieve fast operating times and maintain coherence significantly longer than the response delay of control and readout circuits. However, there are some challenges that must be addressed for reliable quantum computing using charge qubits. The effects of charge noise on the decoherence and fidelity of charge qubits should be evaluated. Furthermore, the power consumption budget of cooling systems in a large-scale quantum computer limits the minimum temperature of qubits, e.g., 4\,K rather than $\ll$1\,K. This would degrade the performance of the qubits, which is yet to be investigated. \subsection{Spin Qubits} The early proposals of silicon quantum computers were based on spin qubits \cite{loss98, kane98}. The solid-state spin qubits can be realized as donor-bound spins or electron spins in quantum dots. The donor-based spin qubits use electrons bound to individual donor atoms, e.g., phosphorous, at cryogenic temperatures. 
Using these spin qubits in high-purity silicon, coherence times exceeding 1\,s have been reported \cite{tyryshkin12}, and it has been demonstrated that they can be implemented in MOS-like processes with extra fabrication steps \cite{pla12}. The need for precise localization of the donors with respect to electrostatic gates is a major challenge in the use of these qubits for large-scale quantum computing. The spin qubits based on quantum dots must be excited by a magnetic field to control the spin of electrons (or holes). Several variations of spin qubits based on quantum dots are presented in the literature \cite{levy02, maurand16, hanson07, bluhm19, vandersypen19, hwang17, morton11}. A spin qubit based on a MOS double-quantum-dot structure is proposed in \cite{hwang17} [Fig.\,\ref{cmos_qubits}(c)]. The structure includes four gates (G1--G4) which can be individually tuned to form a quantum dot. A double quantum dot can be created under G1 and G2, which is tunnel-coupled to an electron reservoir under G3 and G4. A SET device is integrated with the structure for charge sensing. The qubit is compatible with standard CMOS processes. Another CMOS spin qubit is proposed in \cite{maurand16} [Fig.\,\ref{cmos_qubits}(d)]. It is implemented in an SOI process, with two p-channel transistors, one operating as a hole spin qubit, and the other used for the spin readout. There is a buried oxide (BOX) layer between the channel and the silicon substrate, resembling a fully depleted SOI (FDSOI) process. The measured coherence time is 60\,ns. A higher performance is expected by using n-type transistors with electrons as charge carriers. This structure is promising for the realization of scalable quantum computing circuits using a standard FDSOI process. Recently, some spin-based qubit structures have been implemented in advanced CMOS and SOI processes \cite{oda16, clarke16, franceschi16, vinet18}, but a standard design procedure and comprehensive characterization are yet to be developed.
\subsection{Hybrid Charge/Spin Qubits} The hybrid qubit can be viewed as a combination of a spin qubit and a charge qubit \cite{shi12, kim14, kim15-1}. The charge-like characteristics promote a high-speed operation of the qubit, while a long coherence time is achieved due to its spin-like features. The spin operation leads to suppressed charge noise effects because the variations of charge distributions are confined to a single quantum dot \cite{shi12}. The hybrid qubit can be controlled electrically, without the need for magnetic fields required for the manipulation of spin qubits \cite{kim14}. This is an attractive feature for fast operation, scalability, and integration with classical electronics. A possible implementation of this qubit structure is shown in Fig.\,\ref{cmos_qubits}(e), where the gate voltages are used to control the qubit characteristics, and the QPC is used for the quantum state readout. A technique should be developed to control the number of electrons in quantum dots. The hybrid qubit performance and electrical control features are promising for CMOS large-scale quantum computing. \subsection{Comparison of Qubits for Large-Scale Integration} It is noted that both the charge and spin qubits can be realized using quantum dot structures in CMOS processes. The choice between the two depends on several considerations regarding their performance and their interactions with interface circuits. The spin qubits have received more attention because of superior physical properties, mainly the longer coherence time, when considered as \emph{standalone} components. In a \emph{large-scale} quantum computing scenario, however, there are many other considerations which can be even more critical than the coherence time. We briefly discuss the most important aspects. \begin{enumerate} \item Coherence time: Spin qubits feature longer coherence time compared to charge qubits. This offers more time to perform quantum operations.
\item Operational time: The charge qubits operate faster than the spin qubits. This allows more quantum operations to be completed within the decoherence time of qubits. \item Sensitivity to charge/spin noise: Semiconductor quantum dots are prone to charge and spin noise, resulting in fluctuating electric and magnetic fields in the qubit, respectively. These effects appear as dephasing and decoherence of the qubit state \cite{kuhlmann13}. The noise can also degrade the fidelity of quantum operations. The charge noise is dominant at low frequencies and follows the $1/f$ spectrum \cite{kuhlmann13, paladino14}, which unfortunately increases at cryogenic temperatures for CMOS transistors \cite{patra18}. The charge qubit features fast operation resulting from strong coupling to electric fields. However, this feature also leads to strong coupling of charge noise, which degrades the coherence time of the qubit. The spin qubit offers superior noise performance, as a result of the weak coupling of spins to the environment. Some techniques based on symmetric operation and gate pulse engineering have been developed to mitigate the qubit noise \cite{reed16, martins16, yang19, kim15}. \item Quality of the qubit coupling for the realization of quantum gates: The weak interactions of the spin qubits with the environment, which are beneficial to their long coherence times, make inter-qubit operations challenging \cite{schulman12}. The charge qubits, on the other hand, can achieve stronger coupling to each other, enabling the realization of quantum gates through arrays of quantum dots \cite{jones18}. \item Qubit uniformity in the presence of transistor mismatch: In a large-scale quantum computer with many qubits, mismatch of the qubit characteristics can degrade the quality of quantum operations. Qubit structures implemented in advanced CMOS experience greater mismatch at cryogenic temperatures \cite{hart19, hart20}.
Furthermore, process-voltage-temperature (PVT) variations can cause the qubits to deviate significantly from their optimal operating conditions. Such issues can be mitigated using digital calibration and error correction techniques in CMOS processes \cite{bashir20, esmailiyan20}. \item High-temperature operation of qubits: A large-scale quantum computer requires huge power consumption by the dilution refrigerator to maintain the qubits at the low temperatures essential for their proper operation. This motivates increasing the operating temperature of the qubits, e.g., from $\ll$\,1\,K to 4\,K, to reduce the power consumption as well as the size of the cooler. However, this degrades the qubit performance in terms of decoherence, charge noise, and fidelity \cite{petit18}. The effect of increased temperature on the performance of the charge and spin qubits, as well as on quantum gates fabricated using these structures, should be evaluated to ensure the feasibility of high-temperature operation of CMOS quantum computing circuits. \item Integration of qubits with readout circuits: The charge qubit readout can be performed using charge sensors (e.g., SET or QPC) integrated with the qubit structure. The readout of spin qubits, however, is more complicated and requires the presence of a magnetic field and a spin-to-charge conversion. Therefore, charge qubits are more favorable for a large-scale system. \item Control circuits: The control of charge qubits can be performed electrically using gate pulses, which can be accurately generated by the CMOS circuitry. For the spin qubits, however, a magnetic field is also required. The operating frequency of readout circuits can be as low as $<$\,1\,GHz for the charge qubits, while it should be much higher, e.g., 10--20\,GHz, for the spin qubits \cite{bardin20}. A lower operating frequency eases the realization of compact, low-power integrated readout circuits for large-scale qubit arrays.
Furthermore, the control circuits introduce external noise into the qubits, which can degrade their performance \cite{reilly15, dijk19}. Therefore, the sensitivity of spin and charge qubits to such effects should be evaluated for their application in CMOS quantum computing. \end{enumerate} \subsection{Quantum Gates} \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{cmos_quantum_gates.pdf} \caption{Semiconductor quantum gates: (a) electrostatic interaction approach, (b) exchange interaction approach, (c) array of coupled quantum dots, (d) two quantum dot arrays with electrostatic coupling \cite{bashir20}.} \label{cmos_quantum_gates} \end{figure} Quantum gates are operators that evolve the quantum state of qubits \cite{nielsen10, hidary19, sutor19}. Semiconductor-based quantum gates can be realized through interactions between quantum dots \cite{barence95, burkard99, fujisawa11, veldhorst14, Veldhorst15, eenink19, petit20, yang20, zajac16, braakman13, lawrie20, jones18, mills19, bauerle18}. We can consider three main approaches to the physical realization of quantum gates. These methods, shown in Fig.\,\ref{cmos_quantum_gates}, include the electrostatic interaction gates, the exchange interaction between spin-based qubits, and arrays of coupled quantum dots. These can be considered potential candidates for CMOS implementation, although many challenges must still be resolved for their reliable operation. \subsubsection{Electrostatic Interaction Quantum Gates} The most straightforward method for the construction of quantum gates is through the electrostatic interaction between quantum dot-based qubits. As shown in Fig.\,\ref{cmos_quantum_gates}(a), two double quantum dots with electrostatic coupling can be used to realize quantum gates \cite{shinkai09, fujisawa11}. In a charge-based operation, coherent oscillations in one double quantum dot are strongly influenced by the electron position in the other double quantum dot.
The two electrons can simultaneously tunnel in a correlated manner. These coherent oscillations can be interpreted as two-qubit operations. The quantum gate function can be controlled by the coupling strength, which depends on the physical spacing of the two structures and the voltages applied to the gate terminals of the qubits. Multiple two-qubit operations including CROT, SWAP, and FLIP have been achieved using this technique \cite{shinkai09, fujisawa11}. This structure is scalable and amenable to CMOS processes. The maximum coupling strength is limited by the minimum spacing rules of the CMOS process. \subsubsection{Exchange Interaction Quantum Gates} The exchange interaction (Heisenberg interaction) is a quantum mechanical effect resulting from an overlap between the wave functions of two electrons. This effect has been proposed as a means of performing quantum operations with spin qubits \cite{levy02, DiVincenzo00-2}. Single-qubit gates based on spin qubits require stringent control of the magnetic fields applied to the spin and are very slow. An exchange interaction technique has been proposed to realize universal quantum gates \cite{DiVincenzo00-2}. Single- and two-qubit gates are implemented using three-state spin qubits whose interactions are controlled through the time duration of the pulses [Fig.\,\ref{cmos_quantum_gates}(b)]. This structure requires 3$\times$ more devices and about 10$\times$ more clock cycles, which can be readily accommodated by current CMOS processes benefiting from the small transistor area and accurate clock generation. This technique is particularly useful for reconfigurable and scalable quantum computation as it permits a selective choice of single- and multi-qubit quantum operations by turning on/off the coupling between qubits. Using this approach, CNOT, CZ, and SWAP gates have been implemented based on spin as well as hybrid qubits \cite{shi12, DiVincenzo00-2, veldhorst14, Veldhorst15, eenink19, petit20, yang20}.
While the operation of single qubits above 1\,K has been demonstrated, it is more challenging for quantum gates. Recently, two-qubit logic quantum circuits operating at 1.1\,K and 1.5\,K have been presented in \cite{petit20, yang20}. \subsubsection{Array of Coupled Quantum Dots} An array of coupled quantum dots [Fig.\,\ref{cmos_quantum_gates}(c)] can be used to implement various quantum gates \cite{giounanlis19, blokhina20, zajac16, braakman13, lawrie20}. Some of the quantum dots can be exploited as charge injectors or charge sensors. A major challenge is the tuning of large quantum dot arrays. This structure is amenable to CMOS technology and lends itself to large-scale integration. The coupled quantum dots can be roughly realized as transistors with a shared drain/source terminal. However, as fewer gate terminals are used for each quantum dot compared to standard structures, it can be difficult to control all features of the quantum dots, e.g., the number of electrons in the dots, the dot potentials, and the width of the depletion layer. The required gate pulse voltages can be generated with high accuracy using CMOS circuits. Quantum dot arrays have recently been implemented in standard CMOS processes \cite{gong19, guevel20, bashir20}, but a detailed qubit characterization has not been reported. This can be an important research endeavor in the future. The quantum dot arrays can also be realized using an electrically controllable exchange interaction between spins in adjacent quantum dots \cite{jones18, mills19}. Structures based on two-dimensional quantum dot arrays have been developed using this approach \cite{mukhopadhyay18, mortemousque18, riggelen20} and can be used in the realization of more complex quantum operations. In \cite{bashir20}, two quantum dot arrays with electrostatic coupling at the middle are used to realize quantum gates [Fig.\,\ref{cmos_quantum_gates}(d)].
Each row is constructed by transistor-like devices operating as quantum dots, imposers, and interface devices. The two arrays are used to realize charge qubits, while their entanglement is controlled by the dot-to-dot distance in the interaction area. This structure is implemented using an FDSOI CMOS process. \section{Quantum-Classical Electronic Interface} \label{sec:quantum_classic_interface} The interface between qubits and the classical electronics for the control and readout of their states is of critical importance in a large-scale quantum computer. We discuss the qubit control and qubit readout circuits and proceed with interface architectures for large-scale qubit arrays. \subsection{Qubit Control Circuits} In a quantum computer, individual gate bias voltages, control pulses, and, in some cases, microwave signals should be routed to every qubit. The effects of classical electronic control on the qubit operation have been investigated in the literature \cite{dijk19, reilly15, hornibrook15}. For a quantum computer with many qubits (e.g., $>$\,100), it is essential to integrate the control circuits with the qubits. This provides several advantages, including a reduced number of control signal paths and lower interconnect density, as well as mitigation of the latency and synchronization issues of high-frequency clock propagation over long distances. The cryogenic control circuits can be integrated with qubits, which enables a compact CMOS implementation \cite{dijk19, vandersypen17-2, veldhorst17, bardin19, bashir19, bashir20, esmailiyan20, patra20_ISSCC, guevel20}. The qubit operation and fidelity are affected by inaccuracies of the control signal, including frequency fluctuations, amplitude and phase noise, jitter, bandwidth, and operating speed \cite{dijk19}. Thermal noise generated by the control circuits is reduced by virtue of operating them at cryogenic temperatures. This improves the fidelity of the qubits, but also imposes new challenges in the cryogenic circuit design.
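As a quantitative illustration (not from the cited works) of how a control-signal frequency error limits gate fidelity, the textbook two-level Rabi formula gives the peak flip probability $\Omega^2/(\Omega^2+\Delta^2)$ for Rabi frequency $\Omega$ and drive-qubit detuning $\Delta$; the numbers below are hypothetical examples:

```python
def max_flip_probability(rabi_hz: float, detuning_hz: float) -> float:
    """Peak spin-flip probability of a driven two-level system:
    P_max = Omega^2 / (Omega^2 + Delta^2), with Rabi frequency Omega
    and drive-qubit detuning Delta (standard Rabi formula)."""
    om2 = rabi_hz ** 2
    return om2 / (om2 + detuning_hz ** 2)

# On resonance a full flip is possible; a detuning equal to the
# (hypothetical) 1 MHz Rabi frequency caps the flip probability at 50%.
print(max_flip_probability(1e6, 0.0))  # 1.0
print(max_flip_probability(1e6, 1e6))  # 0.5
```

This makes explicit why even small drifts of the control frequency relative to the qubit transition directly translate into operation errors.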
The low-frequency phase/frequency noise of a signal generator is dominated by the flicker ($1/f$) noise of transistors, which increases at cryogenic temperatures \cite{patra18}. Furthermore, any mismatch between the frequency of the control signal and the Larmor frequency of the qubit degrades its fidelity during idle operation. The presence of spurious tones leads to unwanted Rabi oscillations and reduces the fidelity \cite{dijk19}. These effects impose stringent phase/frequency noise and spectral purity requirements on the control signal source. Using microwave burst pulses with optimized frequency, amplitude, and pulse duration, the charge qubit control can be performed close to its sweet-spot energy level, where the energy difference between the qubit states is insensitive to detuning energy variations \cite{kim15}. This approach requires fine control of the gate voltages of the quantum dots (e.g., better than 0.1\,mV). This can be achieved using high-resolution digital-to-analog converters (DACs), but with extra circuit and power consumption overheads for large arrays of qubits. Another important issue in a many-qubit system arises from the coupling between the control signal paths of different qubits, which can translate common noise into correlated errors between the qubits. This effect should be properly evaluated and mitigated to maintain the qubit performance. The operation of control circuits should be fast compared to the decoherence time of qubits and quantum gates. Considering the typical decoherence times of qubits, the speed of current CMOS processes can meet this requirement. Furthermore, the magnetic field required for the control of spin qubits can be implemented using an on-chip transmission line excited by a microwave pulse signal [known as the electron spin resonance (ESR) technique]. In a large-scale qubit array, however, this requires extra routing and increases the control complexity.
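To give a feeling for the DAC requirement quoted above, a minimal sketch follows; the 1\,V gate-bias range is an assumed example (only the 0.1\,mV resolution figure comes from the text):

```python
import math

def dac_bits_required(voltage_range_v: float, lsb_v: float) -> int:
    """Smallest DAC word length whose LSB over the given voltage
    range does not exceed lsb_v."""
    return math.ceil(math.log2(voltage_range_v / lsb_v))

# An assumed 1 V gate-bias range at 0.1 mV steps needs 10000 levels,
# i.e. at least a 14-bit DAC (2**14 = 16384 >= 10000).
print(dac_bits_required(1.0, 1e-4))  # 14
```

Replicating such a converter per qubit is what drives the circuit-area and power overheads mentioned above.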
The charge qubits are therefore preferable from the control viewpoint in a large-scale realization. \subsection{Qubit Readout Circuits} The readout of qubits is challenging as a consequence of their fragile quantum states. The readout process should be fast, reliable, and scalable so that it can be used in a practical quantum computer. In the case of charge-based qubits, a charge sensor can be integrated with the qubit to detect the presence of an electron in each quantum dot and generate a current which can be measured using a sensitive amplifier [Fig.\,\ref{qubit_readout}(a)]. The readout time is limited by constraints on the minimum noise added by the readout amplifier and the coupling strength between the quantum dot and the charge sensor device. The frequency of the readout circuits for charge qubits can be low ($<$\,1\,GHz), which allows their readout using mixed-signal circuits with low power consumption and compact chip area in CMOS processes \cite{esmailiyan20}. Spin states are difficult to read directly, but can be converted to charge states via a spin-to-charge conversion followed by a single-shot charge readout \cite{elzerman04}. As shown in Fig.\,\ref{qubit_readout}(b), a magnetic field is applied to split the two spin quantum states. The dot potential is tuned such that the electron leaves the dot to the reservoir in the spin-down state, while it stays in the dot in the spin-up state. Therefore, the spin state is correlated with the charge state, which can be read using the charge sensor (here a QPC). The spin-to-charge conversion can also be achieved using Pauli spin blockade, which is appropriate for the exchange-interaction spin qubits \cite{petta05}. As the number of qubits is increased, this approach faces challenges due to the required proximity of a charge sensor to each qubit and the need for separate readout amplifier circuits for individual qubits.
RF reflectometry is another readout technique for charge and spin qubits, in which a reactive resonator circuit is coupled to the quantum dot to measure changes in its impedance with the quantum state \cite{petersson10, crippa19, west19, schaal19}. The qubit has a state-dependent quantum capacitance which introduces a shift in the frequency response (e.g., the phase of the reflection coefficient) of the resonator circuit [Fig.\,\ref{qubit_readout}(c)]. This shift can be measured to detect the quantum state. The resonance circuit and other readout circuitry can be integrated in a CMOS implementation. A highly stable and low-phase-noise reference signal, e.g., generated using direct digital synthesis (DDS), is required to enable accurate measurement of the small phase shift resulting from the capacitance change of the quantum dots. This approach is promising for large-scale qubit arrays as a shared readout circuit can be used by multiple qubits. Furthermore, a frequency multiplexing architecture can be realized by using multiple resonator circuits for the simultaneous readout of multiple qubits \cite{petersson10}. The frequency of the readout circuits is typically in the range of 10--20\,GHz for the spin qubits. An implementation challenge arises from the large size of the separate resonator circuits required for each qubit. Furthermore, the parasitic coupling between the frequency-multiplexed qubits can be high at such frequencies. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{qubit_readout.pdf} \caption{The qubit readout techniques: (a) charge sensing \cite{kim15}, (b) spin-to-charge conversion \cite{elzerman04}, (c) RF reflectometry \cite{crippa19}.} \label{qubit_readout} \end{figure} \subsection{Interface to Large-Scale Qubit Arrays} A large-scale quantum computer can be realized using two-dimensional qubit arrays.
Such structures can be implemented in CMOS processes with nanoscale feature size, which allows the integration of qubits with their control and readout circuits on a single chip. The number of qubits that can be integrated is limited by the chip footprint and the power consumption of the qubits. A significant challenge arises from the requirements on the routing of control signals and gate bias voltages to the qubits. The architectures shown in Fig.\,\ref{qubit_arrays} have been proposed to address the independent control and readout required for every qubit as well as the chip area limitations. \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{qubit_arrays.pdf} \caption{Control and readout interface architectures for large-scale qubit arrays: (a) crossbar (DRAM-like) dense qubit array \cite{veldhorst17, li18}, (b) sparse qubit array \cite{vandersypen17-2}.} \label{qubit_arrays} \end{figure} In a dense qubit array, a crossbar (DRAM-like) network shown in Fig.\,\ref{qubit_arrays}(a) can be used to route the control and readout signals to the qubits \cite{veldhorst17, li18, schaal19}. The row lines (RLs) and column lines (CLs) enable the identification of unique points on the grid for read/write operations. The qubit lines (QLs) connect the plunger gates through vias to the qubits. In the structure proposed in \cite{veldhorst17}, comprising 480 qubits, a single floating gate is used to define each qubit and another floating gate provides the exchange coupling between qubits to perform quantum operations. Qubits implemented in CMOS processes are expected to exhibit mismatched features, e.g., Rabi oscillation frequencies, due to process variations. Therefore, considering the low tolerance levels of qubits, their gate bias voltages and control signals must be independently calibrated \cite{vandersypen17-2}. This has been realized using extra control lines to tune each qubit in the DRAM-like structure.
In the crossbar network presented in \cite{li18}, a spin qubit module is used. It combines global charge control, local tuning, and electron shuttling between dots with alternating local magnetic fields and global ESR control. In a sparse qubit array, the qubits are arranged in arrays of smaller size, as shown in Fig.\,\ref{qubit_arrays}(b) \cite{vandersypen17-2, boter19}. This allows the allocation of more space for the routing and control/readout circuitry. The control circuit transistors can be placed on the same layer as the qubit transistors, to be directly integrated with the CMOS fabrication. The local control/readout electronics include digital-to-analog and analog-to-digital converters as well as vector modulators. The optimum array size and the performance of the interface electronics should be determined based on the features of the CMOS process, the number of qubits, the control clock frequency, and the available power budget. A fairly large number of cryogenic quantum controller circuits implemented in standard bulk CMOS and FDSOI processes have recently been reported in the literature \cite{bardin19, bashir19, bashir20, esmailiyan20, pauka19, vandijk20_TCASI, patra20_ISSCC, vandijk20_JSSC, guevel20, charbon21}. These include a 4--8\,GHz controller for transmon qubits in 28-nm CMOS \cite{bardin19}, a 2.4\,GHz controller for charge qubits in 22-nm FDSOI \cite{bashir19, bashir20, esmailiyan20}, a controller in 28-nm FDSOI wire-bonded to a 30-qubit GaAs chip \cite{pauka19}, a 2--20\,GHz controller for frequency-multiplexed spin and transmon qubits in 22-nm FinFET \cite{patra20_ISSCC, vandijk20_JSSC}, and a single chip comprising a double quantum dot qubit, 2.8\,GHz control circuits, and readout circuits in 28-nm FDSOI \cite{guevel20}. \section{Future Trends} \label{sec:future} The future of quantum computing has been envisioned by many experts representing different viewpoints.
We should make a distinction between short- and long-term expectations to distinguish hype from reality \cite{alexeev20}, \cite{corcoles20}. In the NISQ era, the number of qubits (50--100) is less than what can provide a breakthrough in the state-of-the-art computing power \cite{preskill18}. Furthermore, noisy qubits need extensive assistance from quantum error correction and/or classical machine learning algorithms to mitigate imperfections in the operation of the qubits and their control/readout circuitry. From another perspective, a wide range of application scenarios can be envisioned, from large quantum computers providing cloud-based services to customers, e.g., Google's quantum computing playground and IBM's ``Q Experience", to affordable quantum computers for corporate and personal users. Three levels of development can be envisioned for quantum computing. This starts from the conventional scenario in which the quantum processing unit operates at $<$\,0.1\,K, while the control/readout electronics operates at room temperature. In the next level, the control/readout electronics is moved to the cryogenic temperature of 4\,K. At the ultimate level, both the quantum processing unit and the control/readout circuits operate at the high cryogenic temperature of 4\,K (so-called ``hot qubits"), enabling their full integration on a single chip, as shown in Fig.\,\ref{QCCHIP} \cite{staszewski20}. An important vision is that \emph{current CMOS technologies can provide all the elements required to enable the realization of personal quantum computers.} CMOS processes provide nanoscale transistors to implement qubits, precise control and readout operations, as well as intensive digital circuits for quantum error correction and calibration \cite{charbon17, Vandersypen17}. One of the key challenges in the realization of quantum error correction is the significant overhead of the required extra qubits ($\sim$100$\times$) \cite{preskill18, hidary19}.
This can be feasible using CMOS, as large arrays of qubits can be implemented in a compact chip area. Furthermore, digital calibration techniques can be used to further compensate for the errors. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{QCCHIP.pdf} \caption{Single-chip CMOS quantum processor paradigm where qubits and control/readout circuits are operated at the cryogenic temperature of 4\,K to enable their full integration \cite{staszewski20}.} \label{QCCHIP} \end{figure} The implementation of qubits in standard CMOS processes can be an important milestone in quantum computing. Almost all qubits reported so far are fabricated under special, well-controlled laboratory conditions. To this end, two lines of activity should be pursued. Qubit and quantum gate structures amenable to mainstream CMOS processes should be developed. This can be complemented by fine-tuning the foundry CMOS processes specifically for quantum applications \cite{govoreanu19}. Furthermore, qubits are usually reported as individual elements with optimized performance (e.g., decoherence time). We emphasize that the definition of a high-quality qubit can be different when it must operate in a \emph{large-scale array integrated with control and readout circuits.} In such operational conditions, there are many other performance metrics which should be considered, including the qubit's interaction with other qubits in the array, resilience against noise and disturbances from the control circuits, coupling to the readout circuits, and tolerance of their parasitic elements. This indicates the need for a new design approach for qubit structures amenable to large-scale integration with control and readout circuits.
Recently, several cryogenic CMOS circuits have been presented for quantum computing, including transistor modeling \cite{incandela18, beckers18, bonen19, hart19, hart20, beckers20, yang20_EDL, paz20}, on-chip passive device modeling \cite{patra20_JEDS}, as well as voltage references \cite{homulle18}, readout amplifiers \cite{gong19}, circulators \cite{ruffino20}, oscillators \cite{gong20}, and single-electron injection and detection circuits \cite{bashir20, esmailiyan20}. Furthermore, several control/readout circuits have been reported for large-scale qubit arrays \cite{bardin19, bashir19, bashir20-2, pauka19, vandijk20_TCASI, patra20_ISSCC, vandijk20_JSSC, guevel20}. In \cite{bashir20}, charge qubit structures have been implemented using a standard 22-nm FDSOI CMOS process. These qubits are integrated with their control and readout circuits \cite{esmailiyan20}. Many auxiliary qubits can be realized on a single chip to perform quantum error correction for the main qubits. This is a promising approach to large-scale quantum computing. A major challenge in the design of these qubits and their interface circuitry is the lack of accurate cryogenic models for transistors \cite{charbon21}. This has motivated foundries to develop such models, which are expected to become available in the near future. Another requirement is the development of structures and optimum design approaches (e.g., for low power consumption) for these circuits. Finally, the integration of the CMOS qubit arrays with their control/readout circuitry and the mitigation of the interface design issues (e.g., noise, parasitic elements, loading effects, undesired coupling) is the last mile toward the realization of an integrated quantum computer system-on-chip (SoC). \ifCLASSOPTIONcaptionsoff \newpage \fi
\section*{Introduction} The ground state of the Laplace equation in a regular polygon with Dirichlet boundary conditions at the $n$ sides has a natural expression as a Neumann series of Bessel and trigonometric functions, $$\psi_n (r,\theta) = J_0(\lambda_n r) +2 {\sum}_{k=1}^\infty h_{k,n} J_{kn} (\lambda_n r) \cos(kn\theta), $$ with coefficients $h_{k,n}$ to be found and eigenvalue $-\lambda_n^2$ that scales with the area. For the equilateral triangle and the square, the solutions are known as sums of a few trigonometric functions of the coordinates $x=r\cos\theta $ and $y=r\sin\theta $. Such solutions have a corresponding Neumann expression \cite{Molinari97}. For the square of area $\pi$: \begin{align} J_0(\sqrt{2\pi}r)+2{\sum}_{k=1}^\infty J_{4k}(\sqrt{2\pi}r) \cos(4k\theta) = \tfrac{1}{2}\cos (x \sqrt{2\pi}) + \tfrac{1}{2}\cos(y\sqrt{2\pi}) \end{align} The triangle requires some work to establish the equivalence: \begin{align} J_0(\lambda_3 r) + 2{\sum}_{k=1}^\infty \frac{\cos(k\pi/2 -\pi/6)}{\cos(\pi/6)} J_{3k}(\lambda_3 r) \cos(3k\theta ) = \tfrac{2}{3\sqrt 3} \sin (\tfrac{4\pi}{3R_3} x +\tfrac{2\pi}{3} ) \\ -\tfrac{2}{3\sqrt 3} \sin [\tfrac{2\pi}{3R_3}(x+y\sqrt 3) -\tfrac{2\pi}{3} ] -\tfrac{2}{3\sqrt 3} \sin [\tfrac{2\pi}{3R_3}(x-y\sqrt 3)-\tfrac{2\pi}{3}] \nonumber \end{align} where, for area $\pi$, $\lambda^2_3 = \frac{4\pi}{\sqrt 3}$ and $R_3=\tfrac{2}{3}\sqrt{\pi \sqrt 3}$.
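The square identity is easy to spot-check numerically. The short Python sketch below (an illustration, not part of the derivation) computes integer-order Bessel functions from the standard integral representation $J_m(z)=\frac{1}{\pi}\int_0^\pi \cos(mt - z\sin t)\,dt$ and compares the truncated Neumann series with the closed form:

```python
import math

def bessel_j(n, z, steps=800):
    # J_n(z) = (1/pi) * int_0^pi cos(n*t - z*sin(t)) dt; the trapezoidal
    # rule converges extremely fast for this periodic integrand.
    h = math.pi / steps
    f = lambda t: math.cos(n * t - z * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for i in range(1, steps):
        s += f(i * h)
    return s * h / math.pi

def series(r, theta, kmax=20):
    # truncated Neumann series for the square of area pi
    lam = math.sqrt(2 * math.pi)
    s = bessel_j(0, lam * r)
    for k in range(1, kmax + 1):
        s += 2 * bessel_j(4 * k, lam * r) * math.cos(4 * k * theta)
    return s

def closed_form(r, theta):
    lam = math.sqrt(2 * math.pi)
    x, y = r * math.cos(theta), r * math.sin(theta)
    return 0.5 * math.cos(lam * x) + 0.5 * math.cos(lam * y)

for r, theta in [(0.3, 0.2), (0.8, 1.1), (1.1, 2.7)]:
    assert abs(series(r, theta) - closed_form(r, theta)) < 1e-8
```

The series converges rapidly because $J_{4k}(z)$ decays super-exponentially in $k$ for fixed $z$, so a handful of terms reproduces the trigonometric sum to machine precision.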
In \cite{Molinari97} I also obtained a sum that generalizes the integrable cases $n=3,4$: \begin{align}\label{sumn} f_n(x,y) =& J_0(r) +2 {\sum}_{k=1}^\infty \frac{\cos[nk \tfrac{3\pi}{2}-\tfrac{\pi}{2n}]}{\cos(\tfrac{\pi}{2n})} J_{nk}( r) \cos(nk \theta) \\ =&\frac{1}{n} {\sum}_{\ell=0}^{n-1} \frac{\cos[ r\cos (\theta+\tfrac{2\pi}{n}\ell)+\tfrac{\pi}{2n}] }{\cos (\tfrac{\pi}{2n})} \nonumber \\ =&\frac{1}{n} {\sum}_{\ell=0}^{n-1} \frac{\cos[x \cos (\tfrac{2\pi}{n}\ell) - y \sin (\tfrac{2\pi}{n}\ell)+\tfrac{\pi}{2n}] }{\cos (\tfrac{\pi}{2n})} \nonumber \end{align} For $n\to\infty $ the Riemann sum in the second line is $\int_0^{2\pi} \frac{dt}{2\pi} \cos (r\cos t ) = J_0(r)$; for $n=2$ it is $f_2(x,y)=\cos x$. For $n=6$: \begin{align} f_6(x,y)=J_0(r) +2 \sum_{k=1}^\infty (-1)^k J_{6k}(r) \cos(6k \theta) =\tfrac{1}{3}\cos x +\tfrac{2}{3}\cos(\tfrac{1}{2} x)\cos(\tfrac{\sqrt 3}{2} y) \end{align} The functions $f_n$ are eigenfunctions of the Laplace operator with eigenvalue $-1$ but, for $n>4$, they no longer vanish on the boundary of an $n$-polygon. \begin{figure} \begin{center} \includegraphics*[width=5cm]{Hexagon2.eps} \includegraphics*[width=5cm]{Hexagon.eps}\\ \includegraphics*[width=5cm]{Eptagon1.eps} \includegraphics*[width=5cm]{Eptagon2.eps}\\ \caption{\label{fig1} Left: contour plots of $f_6$ and $f_7$ in eq.~\eqref{sumn}. Right: the separatrix line $f_6(x,y)=-1/3$ (Kagom\'e lattice; the equation can be written as $0=\cos(x/2)[\cos(x/2)+\cos(\sqrt 3 y/2)]$, and the sites are the double zeros), and $f_7(x,y)=-1.9633 ...$ (Mathematica).} \end{center} \end{figure} I only remark that the level curves $f_n (x,y)=C_n$ are closed around the origin (where $f_n=1$) up to a separatrix with $n$ self-intersections, with values $C_5=-0.334909$, $C_6=-1/3$, $C_7=0.19633$, etc. The lines are shown in Fig.~\ref{fig1}. The Laplacian in polygons has a long history.
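The hexagonal case can likewise be verified numerically; this sketch (an illustration, not part of the derivation) again uses the integral representation $J_m(z)=\frac{1}{\pi}\int_0^\pi \cos(mt-z\sin t)\,dt$ and also checks the separatrix value $C_6=-1/3$ at a saddle point of $f_6$:

```python
import math

def bessel_j(n, z, steps=800):
    # J_n(z) = (1/pi) * int_0^pi cos(n*t - z*sin(t)) dt, trapezoidal rule
    h = math.pi / steps
    f = lambda t: math.cos(n * t - z * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for i in range(1, steps):
        s += f(i * h)
    return s * h / math.pi

def f6_series(r, theta, kmax=12):
    s = bessel_j(0, r)
    for k in range(1, kmax + 1):
        s += 2 * (-1) ** k * bessel_j(6 * k, r) * math.cos(6 * k * theta)
    return s

def f6_closed(x, y):
    return math.cos(x) / 3 + (2 / 3) * math.cos(x / 2) * math.cos(math.sqrt(3) * y / 2)

for r, theta in [(0.5, 0.3), (1.7, 1.2), (3.0, 2.2)]:
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert abs(f6_series(r, theta) - f6_closed(x, y)) < 1e-8

# On the line cos(x/2) = 0 (e.g. x = pi) the closed form reduces to
# cos(x)/3 = -1/3, consistent with the separatrix value C_6.
assert abs(f6_closed(math.pi, math.pi / math.sqrt(3)) + 1 / 3) < 1e-12
```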
The ground state beyond the square cannot be expressed as a finite sum of trigonometric functions, and has been investigated analytically and numerically in a $1/n$ expansion (see for example \cite{Molinari97, GS2011,Jones2017}).\\ In this paper I generalize the identity \eqref{sumn}, and obtain a number of new formulas for Neumann series whose sums contain a finite number of terms. For certain values of the parameters, they are identities that are found in the tables by Gradshteyn and Ryzhik \cite{GR}, Prudnikov, Brychkov and Marichev \cite{prudnikov}, a recent paper by Al-Jarrah, Dempsey and Glasser \cite{Glasser}, and two old papers by Takizawa and Kobayasi \cite{Takizawa, Kobayasi}. In the latter two, the Neumann series appear as correlation functions for the heat flow in coupled harmonic oscillators. \section*{The summation formula} The source equation of the various sums in this paper is: \begin{align} \boxed{ \sum_{k=-\infty}^{+\infty} J_{kn+p}(z) e^{ikny} =\frac{1}{n} \sum_{\ell=0}^{n-1} e^{i z\sin (y+\tfrac{2\pi}{n}\ell )-ip (y+\tfrac{2\pi}{n}\ell )} } \label{pippo} \end{align} For $y=0$ and even $n$ it is eq.~1 in \cite{Takizawa}. Sums of this sort are tabulated only for $n=1,2$ in \cite{prudnikov}. \begin{proof} The result follows from the Fourier integral of a Bessel function of integer order. For fixed $z\in \mathbb C$, the series $ \sum_{k=-\infty}^\infty e^{ik\theta} J_{kn+p} (z)$ is uniformly convergent in $\theta $ by the bound $|J_{\pm m}(z)|\le C |z/2|^m/m!$ (Nielsen, see \S3.13 in \cite{Watson}).
\begin{align*} \sum_{k=-\infty}^\infty e^{ikny} J_{kn+p} (z) =\sum_{k=-\infty}^\infty e^{ikny} \int_0^{2\pi} \frac{d\theta}{2\pi} e^{iz\sin\theta -i(kn+p)\theta}\\ =\sum_{k=-\infty}^\infty e^{ikny} \sum_{j=0}^{n-1} \int_{\frac{2\pi}{n}j}^{\frac{2\pi}{n}(j+1)} \frac{d\theta}{2\pi} e^{iz\sin\theta-ip\theta} e^{-ikn\theta} \end{align*} The sums are exchanged: $= \sum_{j=0}^{n-1} \sum_{k=-\infty}^\infty e^{ikny} \int_0^{\frac{2\pi}{n}} \frac{d\theta}{2\pi} e^{iz\sin (\theta+\frac{2\pi}{n}j) -ip(\theta+\frac{2\pi}{n}j)} e^{-ikn\theta} $. The functions $\sqrt{n/2\pi} \,e^{ikny} $ form a complete orthonormal basis in L$^2(0,2\pi/n)$. The infinite sum is the Fourier representation of $\exp[iz\sin (y+\frac{2\pi}{n}j) -ip(y+\frac{2\pi}{n}j)]$. \end{proof} \subsection*{1} The case $p=0$ and $y=\frac{\pi}{2}+\alpha $ is an extension with angle $\alpha $ of equations 19 and 20 in \cite{Glasser}, where $\alpha =0$. With $J_{-m}(z)=(-)^mJ_m(z)$: \begin{align} J_0(z)+2\sum_{k=1}^{+\infty} e^{i kn\frac{\pi}{2}} J_{kn}(z) \cos(kn\alpha) =\frac{1}{n} \sum_{\ell=0}^{n-1} e^{ i z\cos (\alpha +\tfrac{2\pi}{n}\ell )} \label{1p0} \end{align} For $n=1$, separation of the even and odd parity parts in $z$ gives the Jacobi expansions (eqs. 5.7.10.4 and 5 in \cite{prudnikov}): \begin{align} J_0(z)+2\sum_{k=1}^{\infty} (-)^k J_{2k}(z) \cos(2k\alpha) = \cos (z\cos\alpha)\\ \sum_{k=0}^{\infty} (-)^k J_{2k+1}(z) \cos[(2k+1)\alpha ]= \tfrac{1}{2}\sin ( z \cos \alpha ) \end{align} If $\alpha $ is replaced by $\alpha +\pi/2$ they are eqs.
8.514.5 and 6 in \cite{GR} and 10.4, 10.5 in \cite{Kor}: \begin{align} J_0(z)+2\sum_{k=1}^{\infty} J_{2k}(z) \cos(2k\alpha) = \cos (z\sin\alpha)\\ \sum_{k=0}^{\infty} J_{2k+1}(z) \sin[(2k+1)\alpha ]= \tfrac{1}{2}\sin ( z \sin \alpha ) \end{align} \subsection*{1.1} With $n$ replaced by $2n$, eq.~\eqref{1p0} becomes: \begin{align} J_0(z)+2\sum_{k=1}^{\infty} (-)^{kn} J_{2kn}(z) \cos(2kn\alpha) =\frac{1}{2n} \sum_{\ell=0}^{2n-1} \cos [ z\cos (\alpha +\tfrac{\pi}{n}\ell )] \label{1p2} \end{align} Since the terms $\ell$ and $n+\ell $ coincide, the sum can be replaced by $2\sum_{\ell=0}^{n-1}$. The value $y=\frac{\pi}{2}$ yields eq.~(23) in \cite{Glasser}. \\ For $n=1$, the derivative of \eqref{1p2} with respect to $\alpha$ at $\alpha =\frac{\pi}{4}$ is: \begin{align} 2J_2(z)-6J_6(z)+10J_{10}(z) -14J_{14}(z)+ ... = z\tfrac{\sqrt 2}{4} \sin (z\tfrac{\sqrt 2}{2}) \end{align} For $n=2$, eq.~\eqref{1p2} gives \begin{align} J_0(z)+2\sum_{k=1}^\infty J_{4k}(z) \cos(4k\alpha) =\frac{1}{2} [\cos (z\sin \alpha) +\cos(z\cos \alpha)]\label{13} \end{align} The values $\alpha=0, \frac{\pi}{4}$ give eqs. 5.7.1.19. The case $n=3$, $\alpha =0$ gives eq. 5.7.1.21 in \cite{prudnikov}.
\\ The derivative of \eqref{13} in $\alpha$ is: \begin{align} \sum_{k=1}^\infty k J_{4k}(z) \sin(4k\alpha) =\frac{z}{16} [\sin (z\sin \alpha)\cos \alpha- \sin(z\cos \alpha)\sin \alpha] \label{15} \end{align} The expansion in small $\alpha $ gives: \begin{align} \sum_{k=1}^\infty k^2 J_{4k}(z) =\frac{z}{64} (z - \sin z) \end{align} \subsection*{1.2} For $n$ replaced by $2n+1$, eq.\eqref{1p0} is: \begin{align*} J_0(z)+2\sum_{k=1}^{\infty} e^{i(2n+1)k\frac{\pi}{2}} J_{(2n+1)k}(z) \cos[(2n+1)k\alpha ] =\frac{1}{2n+1} \sum_{\ell=0}^{2n} e^{ i z\cos (\alpha +\tfrac{2\pi}{2n+1}\ell )} \end{align*} The parts that are even and odd under the exchange $z\to -z$ are: \begin{align*} &J_0(z)+2\sum_{k=1}^{\infty} (-)^k J_{(4n+2)k}(z) \cos[(4n+2)k\alpha ] =\frac{1}{2n+1} \sum_{\ell=0}^{2n} \cos[z\cos (\alpha +\tfrac{2\pi}{2n+1}\ell )]\\ &2\sum_{k=0}^{\infty} (-)^{n+k} J_{(2n+1)(2k+1)}(z) \cos[(2n+1)(2k+1)\alpha ] =\frac{1}{2n+1} \sum_{\ell=0}^{2n} \sin [z\cos (\alpha +\tfrac{2\pi}{2n+1}\ell )] \end{align*} Examples of the second equation are \begin{align} &\sum_{k=0}^{\infty} (-)^k J_{6k+3}(z) \cos[(6k+3)\alpha ] =-\frac{1}{6} \sum_{\ell=0}^{2} \sin [z\cos (\alpha +\tfrac{2\pi}{3}\ell )] \label{hexagon_triangle}\\ &\sum_{k=0}^{\infty} (-)^k J_{10k+5}(z) \cos[(10k+5)\alpha ] =\frac{1}{10} \sum_{\ell=0}^{4} \sin [z\cos (\alpha +\tfrac{2\pi}{5}\ell )] \label{deca} \end{align} The first equation with $\alpha =\pi$ is eq.22 in \cite{Glasser}.\\ Both sums are eigenfunctions of the Laplacian with eigenvalue $\lambda=-1$ (see Fig. 2).
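The small-$\alpha$ limit and \eqref{hexagon_triangle} can likewise be confirmed numerically; a self-contained sketch (illustrative only, with ad hoc truncation limits):

```python
from math import cos, sin, factorial, pi

def besselJ(n, z, terms=40):
    """Integer-order Bessel function J_n(z), n >= 0, via its power series."""
    return sum((-1)**m * (z / 2)**(2*m + n) / (factorial(m) * factorial(m + n))
               for m in range(terms))

def weighted_sum(z, kmax=12):
    # sum_{k>=1} k^2 J_{4k}(z), to be compared with (z/64)(z - sin z)
    return sum(k*k * besselJ(4*k, z) for k in range(1, kmax + 1))

def hexagon_lhs(z, alpha, kmax=10):
    # sum_{k>=0} (-1)^k J_{6k+3}(z) cos((6k+3) alpha)
    return sum((-1)**k * besselJ(6*k + 3, z) * cos((6*k + 3) * alpha)
               for k in range(kmax + 1))

def hexagon_rhs(z, alpha):
    # -(1/6) sum_{l=0..2} sin(z cos(alpha + 2 pi l / 3))
    return -sum(sin(z * cos(alpha + 2*pi*l/3)) for l in range(3)) / 6
```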
The sum on the right-hand side of \eqref{deca}, without the factor $\tfrac{1}{10}$ and with $z=r$, $x=r\cos\alpha $ and $y=r\sin\alpha $, is \begin{align} f(x,y) = \sin x -2\sin (x \cos\tfrac{\pi}{5})\cos (y \sin\tfrac{\pi}{5}) +2\sin (x \cos\tfrac{2\pi}{5})\cos (y \sin\tfrac{2\pi}{5}) \label{DECX} \end{align} \begin{figure} \begin{center} \includegraphics*[width=4.5cm]{HEXA11.eps} \includegraphics*[width=4.5cm]{HEXA12.eps}\\ \includegraphics*[width=4.5cm]{POLY10.eps} \includegraphics*[width=4.5cm]{DECA.eps} \caption{Contour plots of the sums \eqref{hexagon_triangle} and \eqref{DECX}. The first is the ground state of the equilateral triangle (no nodal lines) and an excited state of the hexagon. The second function is zero on a line close to the first zero of $J_5(r)$.} \end{center} \end{figure} \subsection*{1.3} In \eqref{pippo} with $p=0$, multiply by $\exp(i\beta)$ ($\beta $ real) and take the real part. The left-hand side becomes: \begin{align*} &J_0(x)\cos\beta +{\sum}_{k=1}^\infty J_{kn}(x) \; {\rm Re}[e^{i\beta} (e^{ikny} + e^{-ikn (y+\pi) })]\\ &=J_0(x)\cos\beta +2 {\sum}_{k=1}^\infty \cos(\beta - kn\tfrac{\pi}{2}) J_{kn}(x) \cos[kn (y+\tfrac{\pi}{2})] \end{align*} The identity \eqref{sumn} is obtained when $\beta=\tfrac{\pi}{2n}$ and $y +\tfrac{\pi}{2}=\theta+\pi$.
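As a cross-check of \eqref{DECX} (illustrative only): the closed form should coincide with the sum of the five unit plane waves $\sum_{\ell=0}^{4}\sin[r\cos(\alpha+\tfrac{2\pi}{5}\ell)]$, which in turn equals ten times the Bessel series in \eqref{deca}.

```python
from math import cos, sin, factorial, pi

def besselJ(n, z, terms=40):
    """Integer-order Bessel function J_n(z), n >= 0, via its power series."""
    return sum((-1)**m * (z / 2)**(2*m + n) / (factorial(m) * factorial(m + n))
               for m in range(terms))

def f_closed(x, y):
    # closed form of the decagonal eigenfunction, eq. (DECX)
    return (sin(x) - 2*sin(x*cos(pi/5))*cos(y*sin(pi/5))
            + 2*sin(x*cos(2*pi/5))*cos(y*sin(2*pi/5)))

def f_planewaves(r, alpha):
    # five unit plane waves: sum_{l=0..4} sin(r cos(alpha + 2 pi l / 5))
    return sum(sin(r * cos(alpha + 2*pi*l/5)) for l in range(5))

def f_bessel(r, alpha, kmax=6):
    # ten times the series in eq. (deca); the tail decays extremely fast
    return 10 * sum((-1)**k * besselJ(10*k + 5, r) * cos((10*k + 5) * alpha)
                    for k in range(kmax + 1))
```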
\subsection*{2} Parseval's identity is applied to \eqref{pippo}: \begin{align*} \sum_{k\in\mathbb Z} J^2_{kn+p}(x) &= \frac{1}{n^2}\sum_{k\ell} e^{ip \tfrac{2\pi}{n}(k-\ell) } \int_0^{2\pi} \frac{dy}{2\pi} e^{ix \sin (y-\tfrac{\pi}{n}(k-\ell)) - ix\sin (y+\tfrac{\pi}{n}(k-\ell)) }\nonumber\\ &= \frac{1}{n^2}\sum_{k\ell} e^{ip \tfrac{2\pi}{n}(k-\ell) } \int_0^{2\pi} \frac{dy}{2\pi} e^{-i2x \cos y \sin(\tfrac{\pi}{n}(k-\ell)) } \nonumber\\ &= \frac{1}{n^2}\sum_{k\ell} e^{ip \tfrac{2\pi}{n}(k-\ell) } J_0[2x \sin(\tfrac{\pi}{n}(k-\ell))]\nonumber \\ &=\frac{1}{n}+\frac{2}{n^2}\sum_{k=1}^{n-1} k \cos(\tfrac{2\pi}{n}kp) J_0(2x \sin \tfrac{\pi k}{n} ) \end{align*} The last sum is unchanged if $k$ is replaced by $n-k$: \begin{align} \sum_{k\in\mathbb Z} J^2_{kn+p}(x) =\frac{1}{n}+\frac{1}{n}\sum_{k=1}^{n-1} \cos(\tfrac{2\pi}{n}kp) J_0(2x \sin \tfrac{\pi k}{n} ) \label{Parseval0} \end{align} The left-hand side is $J_p^2(x)+\sum_{k=1}^\infty [J^2_{kn+p}(x)+J^2_{kn-p}(x)] $. \\ For the special case $p=0$ and $n\to 2n$ in \eqref{Parseval0}, the sum is amenable to eq.29 in \cite{Glasser}: \begin{align} J_0^2(x)+ 2\sum_{k=1}^{\infty} J_{2kn}^2 (x)= \frac{1}{2n}+ \frac{1}{2n} J_0(2x) + \frac{1}{n} \sum_{k=1}^{n-1} J_0 (2x\cos \tfrac{\pi}{2n}k ) \label{Parseval1} \end{align} \subsection*{2.1} If $n\to 2n$ and $p=n$ in \eqref{Parseval0}, with simple steps one obtains: \begin{align} \sum_{k=0}^\infty J^2_{(2k+1)n}(x) = \frac{1}{4n}+ \frac{(-)^n}{4n} J_0(2x) +\frac{1}{2n}\sum_{\ell=1}^{n-1} (-1)^\ell J_0(2x \sin\tfrac{\pi\ell}{2n}) \label{Parseval2} \end{align} \subsection*{3} In eq.\eqref{pippo} the variable $y$ is shifted to $y+2t$.
The equation is multiplied by $e^{iz'\sin y - iqy}$ and integrated in $y$: \begin{align} \sum_{k=-\infty}^{+\infty} J_{p+kn}(z)& J_{q-kn} (z') e^{i(kn+p)2t} \nonumber \\ =&\frac{1}{n} \sum_{\ell=0}^{n-1}e^{-ip\tfrac{2\pi}{n}\ell} \int_0^{2\pi} \frac{dy}{2\pi}e^{i z\sin (y+2t+\tfrac{2\pi}{n}\ell )+iz'\sin y -i(p+q)y} \label{pippo2} \end{align} In the integral, the shift $y$ to $y-t-\frac{\pi}{n}\ell $ changes the exponent to $$ i (z+z')\sin y \cos(t+\tfrac{\pi}{n}\ell )+i(z-z')\cos y\sin (t+\tfrac{\pi}{n}\ell ) -i(p+q)(y-t-\tfrac{\pi}{n}\ell ) $$ \subsection*{3.1} With $z=z'$ we obtain eq.1 in \cite{Kobayasi}: \begin{align} \sum_{k=-\infty}^{+\infty} J_{p+kn}(z) J_{q-kn} (z) e^{2iknt } =\frac{1}{n} \sum_{\ell=0}^{n-1}e^{-i(p-q)(t+\tfrac{\pi}{n}\ell )} J_{p+q}[2z \cos(t+\tfrac{\pi}{n}\ell ) ] \end{align} For $n=1,2$ it is (after a shift of the index $k$ and a renaming of parameters in the first identity): \begin{align} &\sum_{k=-\infty}^{+\infty} J_{k}(z) J_{p-k} (z) e^{2ikt } =e^{ipt} J_p(2z \cos t )\\ &\sum_{k=-\infty}^{+\infty} J_{p+2k}(z) J_{q-2k} (z) e^{4ikt } =\tfrac{1}{2}e^{-i(p-q)t} [J_{p+q}(2z \cos t )+ i^{p-q} J_{p+q}(2z \sin t)] \end{align} The first one is eq.8.530 \cite{GR}. The second one, for $t=\frac{\pi}{4}, \frac{\pi}{2}$, becomes: \begin{align} &\sum_{k=-\infty}^{+\infty} (-)^kJ_{p+2k}(z) J_{q-2k} (z) = J_{p+q}(z \sqrt 2 ) \cos [(p-q)\tfrac{\pi}{4}]\\ &\sum_{k=-\infty}^{+\infty} J_{p+2k}(z) J_{q-2k} (z) = \tfrac{1}{2} J_{p+q}(2z) \end{align} For $p=q$ the first one is eq. 5.7.11.25 \cite{prudnikov}. The second one holds for $p+q\neq 0$; for $q=-p$ the term $J_{p+q}(0)=1$ survives and adds $\tfrac{(-)^p}{2}$ to the right-hand side. \subsection*{3.2} Eq.\eqref{pippo2} with $p=q=0$ and $t=0$ is: \begin{align} J_0(z)J_0(z')+2\sum_{k=1}^\infty (-)^{kn} J_{kn}(z) J_{kn}(z') =\frac{1}{n} \sum_{\ell=0}^{n-1} \int_0^{2\pi}\frac{dy}{2\pi} e^{i z\sin (y+\tfrac{2\pi}{n}\ell )+iz'\sin y} \nonumber \end{align} For $n=1, 2$ they are eqs. 5.7.11.1 and 5.7.11.18 in \cite{prudnikov} and, for $z=z'$: eqs. 31, 32 in \cite{Glasser}.
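The two $t$-specializations above can also be verified numerically; in this sketch (illustrative only) negative orders are reduced with $J_{-n}(z)=(-)^nJ_n(z)$ and the bilateral sum is truncated symmetrically.

```python
from math import cos, sin, factorial, sqrt, pi

def besselJ(n, z, terms=40):
    """Integer-order Bessel function J_n(z); uses J_{-n} = (-1)^n J_n for n < 0."""
    if n < 0:
        return (-1)**(-n) * besselJ(-n, z, terms)
    return sum((-1)**m * (z / 2)**(2*m + n) / (factorial(m) * factorial(m + n))
               for m in range(terms))

def prod_sum(p, q, z, sign=False, kmax=15):
    # sum over k = -kmax..kmax of (+-1)^k J_{p+2k}(z) J_{q-2k}(z)
    return sum(((-1)**k if sign else 1) * besselJ(p + 2*k, z) * besselJ(q - 2*k, z)
               for k in range(-kmax, kmax + 1))
```

For example, with $p=2$, $q=3$ (so $p+q\neq 0$) the alternating sum reproduces $J_5(z\sqrt{2})\cos(-\pi/4)$ and the plain sum reproduces $\tfrac12 J_5(2z)$.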
A new example is: \begin{align} \sum_{k=1}^\infty J_{4k}(x) J_{4k}(y) =\tfrac{1}{8} [J_0(x+y) +J_0(x-y)- 4J_0(x)J_0(y)+ 2J_0 (\sqrt{x^2+y^2})] \end{align} \rem{+++++++ \subsection*{3.3} Eq.\eqref{pippo2} with $p=q$ and $n=2p$ is: \begin{align} J_p(x)J_p(y)+&\sum_{k=1}^\infty (-1)^p[J_{2kp+p}(x) J_{2kp-p}(y)+ J_{2kp-p}(x)J_{2kp+p}(y)] \\ &=\frac{1}{2p} \sum_{\ell=0}^{2p-1} \int_0^{2\pi} \frac{ds}{2\pi}e^{i (x+y)\sin s\cos(\tfrac{\pi}{2p}\ell )+i(x-y)\cos s\sin(\tfrac{\pi}{2p}\ell) -2ips} \nonumber\\ &=\frac{1}{2p} \sum_{\ell=0}^{2p-1} \int_0^{2\pi} \frac{ds}{2\pi}e^{i R_\ell \sin (s+\theta_\ell) -2ips} \nonumber\\ &=\frac{1}{2p} \sum_{\ell=0}^{2p-1} \frac{J_{2p}(R_\ell )}{R_\ell^{2p}} [(x+y)\cos(\tfrac{\pi}{2p}\ell) - i (x-y)\sin(\tfrac{\pi}{2p}\ell) ]^{2p} \end{align} where $R_\ell^2 = x^2+y^2 +2xy \cos(\tfrac{\pi}{p}\ell)$, $(x+y)\cos(\tfrac{\pi}{2p}\ell) =R_\ell \cos\theta_\ell$, $(x-y)\sin(\frac{\pi}{2p}\ell) =R_\ell \sin\theta_\ell$. +++++++} \subsection*{4} Multiplication of \eqref{pippo} by $\exp (-a y)$ ($a>0$) with $p=0$ and $n=1$, and integration on $\mathbb R^+$ give: \begin{align} \frac{1}{a}J_0(z)+ \sum_{k=1}^\infty J_{2k}(z) \frac{2a}{a^2+4k^2} + \sum_{k=0}^\infty J_{2k+1}(z) \frac{2i(2k+1)}{a^2+(2k+1)^2} = \int_0^\infty dy\, e^{i z\sin y-ay} \nonumber \end{align} The integral on the right-hand side is evaluated by series expansion, with eqs.3.895.1 and 3.895.4 \cite{GR}. The even and odd terms are: \begin{align} &\frac{1}{a} J_0 (z) + \sum_{k=1}^\infty J_{2k}(z) \frac{2a}{a^2+4k^2} =\sum_{k=0}^\infty (-)^k \frac{z^{2k}}{a(a^2+4)...(a^2+4k^2)}\\ &\sum_{k=0}^\infty J_{2k+1}(z) \frac{2(2k+1)}{a^2+(2k+1)^2} = \sum_{k=0}^\infty (-)^k \frac{z^{2k+1}}{(a^2+1)(a^2+9)...(a^2+(2k+1)^2)} \end{align} Further identities can be obtained by differentiation, or by integration against other functions. Here I limited myself to simple and, hopefully, useful examples.
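The two Laplace-transform identities of this last subsection admit the same kind of numerical confirmation; a sketch (illustrative only, with ad hoc truncations):

```python
from math import factorial

def besselJ(n, z, terms=40):
    """Integer-order Bessel function J_n(z), n >= 0, via its power series."""
    return sum((-1)**m * (z / 2)**(2*m + n) / (factorial(m) * factorial(m + n))
               for m in range(terms))

def even_lhs(a, z, kmax=15):
    # (1/a) J_0(z) + sum_{k>=1} J_{2k}(z) * 2a / (a^2 + 4k^2)
    return besselJ(0, z) / a + sum(besselJ(2*k, z) * 2*a / (a*a + 4*k*k)
                                   for k in range(1, kmax + 1))

def even_rhs(a, z, mmax=25):
    # sum_m (-1)^m z^(2m) / [a (a^2+4)(a^2+16)...(a^2+4m^2)]
    total, denom = 0.0, a
    for m in range(mmax + 1):
        total += (-1)**m * z**(2*m) / denom
        denom *= a*a + 4*(m + 1)**2   # append the next factor a^2 + 4k^2
    return total

def odd_lhs(a, z, kmax=15):
    # sum_{k>=0} J_{2k+1}(z) * 2(2k+1) / (a^2 + (2k+1)^2)
    return sum(besselJ(2*k + 1, z) * 2*(2*k + 1) / (a*a + (2*k + 1)**2)
               for k in range(kmax + 1))

def odd_rhs(a, z, mmax=25):
    # sum_m (-1)^m z^(2m+1) / [(a^2+1)(a^2+9)...(a^2+(2m+1)^2)]
    total, denom = 0.0, a*a + 1
    for m in range(mmax + 1):
        total += (-1)**m * z**(2*m + 1) / denom
        denom *= a*a + (2*m + 3)**2   # append the next odd-square factor
    return total
```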
\section{Introduction}\label{sec:intro} The study of quantitative versions of Helly-type questions was initiated by B\'ar\'any, Katchalski and Pach in \cite{BKP82}, where the \emphdef{Quantitative Volume Theorem} is shown, stating the following. \label{page:QVT} \emph{Assume that the intersection of any $2d$ members of a finite family of convex sets in $\Re^d$ is of volume at least one. Then the volume of the intersection of all members of the family is of volume at least $c(d)$, a constant depending on $d$ only.} In \cite{BKP82}, it is proved that one can take $c(d)=d^{-2d^2}$ and conjectured that it should hold with $c(d)=d^{-cd}$ for an absolute constant $c>0$. It was confirmed with $c(d)\approx d^{-2d}$ in \cite{MR3439267}, whose argument was refined by Brazitikos \cite{Bra17}, who showed that one may take $c(d)\approx d^{-3d/2}$. For more on quantitative Helly-type results, see the surveys \cite{HW18survey,DGFM19survey} and the recent papers \cite{RoSo17, MR3602856, SaXuSo}. In the present note, we continue the study started in \cite{DFN2019} (where a quantitative variant of the Colorful Helly Theorem is shown) by proving variants of the Fractional Helly Theorem, the $(p,q)$-Theorem and related results. \noshow{ The \emphdef{Colorful Helly Theorem}\label{page:chelly} found by Lov\'asz \cite{Lo74} (and with the first published proof by B\'ar\'any \cite{MR676720}) states the following. \emph{If $\Ci_1,\dots, \Ci_{d+1}$ are finite families (color classes) of convex sets in $\Re^d$, such that for any colorful selection $C_1\in \Ci_1,\dots, C_{d+1}\in \Ci_{d+1}$, the intersection $\bigcap\limits_{i=1}^{d+1} C_i$ is non-empty, then for some $j$, the intersection $\bigcap\limits_{C\in \Ci_j} C$ is also non-empty.} In \cite{DFN2019}, the following quantitative variant is shown. \begin{theorem}[Quantitative Colorful Helly Theorem]\label{thm:colorfulqellipsoid} Let $\mathcal{C}_1,\linebreak[0]\ldots,\mathcal{C}_{3d}$ be finite families of convex sets in $\Re^d$. 
Assume that for any colorful choice of $2d$ sets, $C_{i_k}\in \mathcal{C}_{i_k}$ for each $1\leq k\leq 2d$ with $1\leq i_1<\ldots<i_{2d}\leq 3d$, the intersection $\bigcap\limits_{k=1}^{2d} C_{i_k}$ is of volume at least one. \noindent Then, there is an $i$ with $1\leq i \leq 3d$ such that $\vol{\bigcap\limits_{C\in \mathcal{C}_i}C}\geq c(d)$ with some $c(d)>0$. \end{theorem} } The \emphdef{Fractional Helly Theorem} due to Katchalski and Liu \cite{KL79} (see also \cite[Chapter 8]{MR1899299}) states the following. \label{page:fracHelly} \emph{Fix a dimension $d$, and an $\alpha\in(0,1)$, and let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$ with the property that among the subfamilies of $\mathcal{C}$ of size $d+1$, there are at least $\alpha \binom{|\mathcal{C}|}{d+1}$ for which the intersection of the $d+1$ members is nonempty. Then, there is a subfamily $\mathcal{C}^\prime\subset\mathcal{C}$ of size $|\mathcal{C}^{\prime}|\geq\frac{\alpha}{d+1} |\mathcal{C}|$ such that the intersection of all members of $\mathcal{C}^\prime$ is nonempty.} Our first main result is the following, a Quantitative Fractional Helly Theorem (QFH in short). \begin{theorem}[QFH]\label{thm:QFHvolFew} For every dimension $d \geq 1$ and every $\alpha\in(0,1)$, there is a $\beta\in(0,1)$ such that the following holds. \noindent Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$. Assume that among all subfamilies of size $3d+1$, there are at least $\alpha \binom{|\mathcal{C}|}{3d+1}$ for which the intersection of the $3d+1$ members is of volume at least one. \noindent Then, there is a subfamily $\mathcal{C}^\prime\subset\mathcal{C}$ of size at least $\beta |\mathcal{C}|$ such that the intersection of all members of $\mathcal{C}^\prime$ is of volume at least $d^{-cd^2}$ with a universal constant $c>0$.
\end{theorem} \begin{remark}[Ellipsoids and volume]\label{rem:ellipsoidvsvolume} A well-known consequence of John's Theorem is that the volume of any compact convex set $K$ in $\Re^d$ with non-empty interior is at most $d^d$ times larger than the volume of the largest volume ellipsoid contained in $K$. More precise bounds for this volume ratio are known (cf. \cite{Ba97}), but we will not need them. Thus, \emph{from this point on, we phrase our results in terms of the volume of ellipsoids contained in intersections}. The benefit is that this is how the proofs actually ``find volume'': they produce ellipsoids of large volume. \end{remark} In this spirit, we re-state our result in terms of ellipsoids. It immediately yields Theorem~\ref{thm:QFHvolFew} by Remark~\ref{rem:ellipsoidvsvolume}. \begin{theorem}[QFH in terms of ellipsoids]\label{thm:QFHfew} For every dimension $d \geq 1$ and every $\alpha\in(0,1)$, there is a $\beta\in(0,1)$ such that the following holds. \noindent Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$. Assume that among all subfamilies of size $3d+1$, there are at least $\alpha \binom{|\mathcal{C}|}{3d+1}$ for which the intersection of the $3d+1$ members contains an ellipsoid of volume one. \noindent Then, there is a subfamily $\mathcal{C}^\prime\subset\mathcal{C}$ of size at least $\beta |\mathcal{C}|$ such that the intersection of all members of $\mathcal{C}^\prime$ contains an ellipsoid of volume at least $c^{d^2}d^{-5d^2/2+d}$, where $c$ is the universal constant from Theorem~\ref{lem:qhellyell}. \end{theorem} \begin{remark} The lower bound on $\beta$ obtained by our proof is $\frac{\alpha}{C^d}$ with a universal constant $C>1$. Note that in the case of the classical Fractional Helly Theorem, the sharp bound on $\beta$ shown in \cite{Ka84} by Kalai is $\beta = 1 - (1-\alpha)^\frac{1}{d+1}$. \end{remark} Holmsen \cite{holmsen19} recently showed that, in an abstract setting, a colorful Helly-type result implies a fractional one.
In \cite{SaXuSo}, the authors combine Holmsen's result with a Quantitative Colorful Helly Theorem (\cite[Proposition~1.4]{DFN2019}, \cite[Corollary~1.0.3]{SaXuSo}) to obtain a quantitative fractional result. We note that this approach works only when the size of the subfamilies considered is large, namely $\frac{d(d+3)}{2}$. \begin{proposition}[QFH -- large subfamilies]\label{prp:QFHmany} For every dimension $d \geq 1$ and every $\alpha \in (0,1)$ the following holds. Let $\mathcal{C}$ be a finite family of convex bodies in $\Re^d$. Assume that among all subfamilies of size $\frac{d(d+3)}{2}$, there are at least $\alpha\binom{|\mathcal{C}|}{\frac{d(d+3)}{2}}$ for which the intersection of the $\frac{d(d+3)}{2}$ members contains an ellipsoid of volume one. Then, there is a subfamily $\mathcal{C}' \subset \mathcal{C}$ of size at least $\beta|\mathcal{C}|$ such that the intersection of all members of $\mathcal{C}'$ contains an ellipsoid of volume one, where $\beta=\frac{2\alpha}{d(d+3)}$. \end{proposition} \begin{remark} In general, the value of $\beta$ we obtain is better than the one Holmsen's abstract result \cite{holmsen19} would yield, which is roughly $\left(\frac{\alpha}{3d^4}\right)^{\left(d/\sqrt{2}\right)^{d^2}}$. On the other hand, in \cite{holmsen19}, it is shown that $\beta \to 1$ as $\alpha \to 1$. \end{remark} Next, we turn to the $\mathbf{(p,q)}$-\textbf{Theorem}, a strong generalization of Helly's Theorem, conjectured by Hadwiger and Debrunner in 1957 \cite{hadwiger1957variante} and proved by Alon and Kleitman \cite{alon1992piercing} in 1992. It states that \textit{there is a function $H$ such that if we have a family $\mathcal{C}$ of convex sets in $\Re^d$, and among any $p$ of them there are $q$ with a nonempty intersection ($p \geq q \geq d+1$), then there is a set $T$ with $|T| \leq H(p,q,d)$ such that every member of $\mathcal{C}$ contains at least one point from $T$}. Observe that for any $p\geq q\geq d+1$, we have that $H(p,q,d)\leq H(p,d+1,d)$.
Thus, to show the existence of the function $H$, it is sufficient to show that $H(p,d+1,d)$ is finite for all $p\geq d+1$. \begin{definition}[Quantitative $v$-transversal number] For a $v>0$, we say that a family $T$ of ellipsoids of volume $v$ is a \emph{quantitative $v$-transversal} of a family $\mathcal{C}$ of convex bodies in $\Re^d$, if every $C \in \mathcal{C}$ contains at least one ellipsoid from $T$. The quantitative $v$-transversal number $\tau = \tau (\mathcal{C}, v)$ for $\mathcal{C}$ is the smallest cardinality of a quantitative $v$-transversal. If there is a $C \in \mathcal{C}$ that contains no ellipsoid of volume $v$, then we set $\tau (\mathcal{C}, v)=\infty$. \end{definition} Our second main result follows. We phrase it without reference to $q$ in the definition of $H$; in fact, we fix $q$ at the smallest possible value for which we can prove the statement, $q=3d+1$. \begin{theorem}[Quantitative \texorpdfstring{$(p,q)$}{(p,q)}-Theorem -- small lower bound on $p$]\label{thm:pqsmallp} For every dimension $d \geq 1$ and every positive integer $p \geq 3d+1$, there is an integer $H = H(p,d)$ such that the following holds. Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$, each containing an ellipsoid of volume one, and assume that among any $p$ members of $\mathcal{C}$, there exist $3d+1$ whose intersection contains an ellipsoid of volume one. Then, $\mathcal{C}$ has quantitative $v$-transversal number at most $H(p,d)$ with $v=c^{d^2}d^{-5d^2/2+d}$, where $c$ is the universal constant from Theorem~\ref{lem:qhellyell}. \end{theorem} The following variant is presented in \cite{SaXuSo}, where the lower bound on $p$ is larger than in Theorem~\ref{thm:pqsmallp}, namely $\frac{d(d+3)}{2}$, and, in return, we obtain a quantitative 1-transversal, that is, there is no loss in the volume of the ellipsoids.
\begin{proposition}[Quantitative \texorpdfstring{$(p,q)$}{(p,q)}-Theorem -- large lower bound on $p$, {\cite[Theorem~5.0.1]{SaXuSo}}]\label{prp:pqlargep} For every dimension $d \geq 1$ and every positive integer $p \geq \frac{d(d+3)}{2}$, there is an integer $H = H(p,d)$ such that the following holds. Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$, each containing an ellipsoid of volume one, and assume that among any $p$ members of $\mathcal{C}$, there exist $\frac{d(d+3)}{2}$ whose intersection contains an ellipsoid of volume one. Then, $\mathcal{C}$ has quantitative $1$-transversal number at most $H(p,d)$. \end{proposition} For results where the number of selected sets is much larger and, in return, one obtains a good approximation of the volume (and not just the volume of the largest ellipsoid contained in a set), see \cite{MessinaSoberon20} and \cite{MR3602856}. The structure of the paper is the following. In Section~\ref{sec:prelim}, we introduce some of our tools, mostly from \cite{DFN2019}. We prove Proposition~\ref{prp:QFHmany} in Section~\ref{sec:QFHmany}. Next, as a preparation for the $(p,q)$ results, in Section~\ref{sec:QEpsilonNet}, we prove a quantitative version of the \textbf{Selection Lemma} and the \textbf{Weak Epsilon Net Theorem} (see the statements of these two classical results in that section). Next, in Section~\ref{sec:pqlargep}, we give a short direct proof of Proposition~\ref{prp:pqlargep}. The arguments in these sections are modeled on proofs of the corresponding classical (i.e., non-quantitative) results: the combinatorial core of those arguments is extracted, and then combined with geometric facts from \cite{DFN2019}, which guarantee uniqueness of certain ellipsoids contained in a convex body (see Lemmas~\ref{lem:lowest} and \ref{lem:dsquare}). In contrast, the proofs of Theorems~\ref{thm:QFHfew} and \ref{thm:pqsmallp} require further ideas, which are presented in Sections~\ref{sec:QFHfew} and \ref{sec:pqsmallp}.
The difficulty in proving these results, where ``rough approximation'' is required (that is, the number of sets to be selected is $O(d)$ and not $O(d^2)$ and, in return, there is a substantial loss of volume), is explained at the beginning of Section~\ref{sec:pqsmallp}. \section{Preliminaries}\label{sec:prelim} We will rely on the following quantitative Helly theorem that is phrased in terms of the volume of an \emph{ellipsoid} contained in a convex body. \begin{theorem}[Quantitative Helly Theorem]\label{lem:qhellyell} Let $C_1,\ldots,C_n$ be convex sets in $\Re^d$. Assume that the intersection of any $2d$ of them contains an ellipsoid of volume at least one. Then $\bigcap\limits_{i=1}^n C_i$ contains an ellipsoid of volume at least $c^dd^{-3d/2}$ with an absolute constant $c>0$. \end{theorem} Theorem~\ref{lem:qhellyell} was stated in \cite{MR3439267} for volumes of intersections and not volumes of ellipsoids with the weaker bound $c^dd^{-2d}$, which was an improvement of the volume bound $d^{-2d^2}$ given by B\'ar\'any, Katchalski, and Pach \cite{BKP82}. The proof in \cite{MR3439267} clearly yields containment of ellipsoids as stated herein, and the argument was later refined by Brazitikos \cite{Bra17}, who obtained the bound $c^dd^{-3d/2}$ as stated above. Another one of our key tools is the following observation from \cite{DFN2019}. \begin{lemma}[{\cite[Lemma~3.2]{DFN2019}}]\label{lem:hellyklee} Assume that the origin-centered Euclidean unit ball $\Ball$ is the largest volume ellipsoid contained in the convex set $C$ in $\Re^d$. Let $E$ be another ellipsoid in $C$ of volume at least $\delta\vol{\Ball}$ with $0<\delta<1$. Then there is a translate of $\frac{\delta}{d^{d-1}} \Ball$ which is contained in $E$. \end{lemma} We state an immediate corollary of the Fractional Helly Theorem (see page~\ref{page:fracHelly}). \begin{proposition}\label{prop:fhellyklee} For every dimension $d \geq 1$ and every $\alpha \in (0,1)$, the following holds.
\noindent Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$ and let $L$ be a convex set in $\Re^d$. Assume that among all subfamilies of size $d+1$, there are at least $\alpha \binom{|\mathcal{C}|}{d+1}$ for which the intersection of the $d+1$ members contains a translate of $L$. \noindent Then, there is a subfamily $\mathcal{C}^\prime\subset\mathcal{C}$ of size at least $\frac{\alpha}{d+1} |\mathcal{C}|$ such that the intersection of all members of $\mathcal{C}^\prime$ contains a translate of $L$. \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:fhellyklee}] We use the following operation, the \emph{Minkowski difference} of two convex sets $A$ and $B$: \[ A\sim B:=\bigcap_{b\in B} (A-b). \] It is easy to see that $A\sim B$ is the set of those vectors $t$ such that $B+t\subseteq A$. Now, replace each convex set $C$ in $\mathcal{C}$ by $C\sim L$, and apply the Fractional Helly Theorem (see page~\ref{page:fracHelly}). \end{proof} The following definition and two lemmas introduce the unique lowest ellipsoid of volume one contained in a convex body. \begin{definition} For an ellipsoid $E$, we define its \emph{height} as the largest value of the orthogonal projection of $E$ on the last coordinate axis. \end{definition} \begin{lemma}[Uniqueness of Lowest Ellipsoid, {\cite[Lemma~2.5]{DFN2019}}]\label{lem:lowest} Let $C$ be a convex body that contains an ellipsoid of volume one. Then there is a unique ellipsoid of volume one such that every other ellipsoid of volume one in $C$ has larger height. We call this ellipsoid the \emph{lowest ellipsoid} in $C$. \end{lemma} \begin{lemma}[Lowest ellipsoid determined by $O(d^2)$ members of an intersection {\cite[Lemma~3.1]{DFN2019}}]\label{lem:dsquare} Let $C_1,\dots, C_{n}$ be a finite family of convex bodies in $\Re^d$ whose intersection contains an ellipsoid of volume one.
Then, there are $d(d+3)/2-1$ indices $i_1,\dots,i_{d(d+3)/2-1}\in[n]$ such that $\bigcap\limits_{i=1}^{n}C_i$, and $\bigcap\limits_{j=1}^{d(d+3)/2-1}C_{i_j}$ have the same unique lowest ellipsoid. \end{lemma} \section{Proof of Proposition~\ref{prp:QFHmany} -- QFH for large subfamilies} \label{sec:QFHmany} The following argument follows closely the proof of Theorem~8.1.1. in \cite{MR1899299}, the only difference is the use of the unique lowest ellipsoid of a convex body (Lemma~\ref{lem:lowest}) instead of its lexicographic minimum. Let $\mathcal{C} = \{C_1, \ldots, C_n\}$. We call an index set $I \in \mybinom{[n]}{\frac{d(d+3)}{2}}$ \emph{good}, if the corresponding intersection $\cap_{i \in I} C_i$ contains an ellipsoid of volume at least one. We say that a $\left(\frac{d(d+3)}{2}-1\right)$-element subset $S \subset I$ of a good index set $I\in \mybinom{[n]}{\frac{d(d+3)}{2}}$ is a \emph{seed} of $I$, if $\cap_{i \in I}C_i$ and $\cap_{i \in S}C_i$ have the same lowest ellipsoid. By Lemma~\ref{lem:dsquare}, all good index sets have a seed. Since we have $\alpha\mybinom{n}{\frac{d(d+3)}{2}}$ good index sets and only $\mybinom{n}{\frac{d(d+3)}{2}-1}$ possible seeds, there is a $\left(\frac{d(d+3)}{2}-1\right)$-tuple $S\in\mybinom{[n]}{\frac{d(d+3)}{2}-1}$ which is the seed of at least \[ \frac{\alpha\mybinom{n}{\frac{d(d+3)}{2}}}{\mybinom{n}{\frac{d(d+3)}{2}-1}} = \alpha\frac{n-\frac{d(d+3)}{2} + 1}{\frac{d(d+3)}{2}} \] good index sets. Every such good index set has the form $S \cup \{i\}$ for an $i$. So we have $\alpha\frac{n-\frac{d(d+3)}{2} + 1}{d(d+3)/2}$ convex bodies containing the lowest ellipsoid of $\cap_{i \in S}C_i$, plus the $(\frac{d(d+3)}{2} -1)$ bodies from $S$. Hence, the lowest ellipsoid of $\cap_{i \in S}C_i$ is contained in at least \[ \alpha\frac{n+1-d(d+3)/2}{d(d+3)/2} + \frac{d(d+3)}{2} -1 \geq \frac{2\alpha n}{d(d+3)} \] convex bodies among the $C_i$, completing the proof of Proposition~\ref{prp:QFHmany}. 
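The final counting step, $\alpha\frac{n+1-m}{m}+m-1 \geq \frac{2\alpha n}{d(d+3)}$ with $m=\frac{d(d+3)}{2}$, can be spot-checked mechanically; a small sketch (illustrative only, not part of the proof):

```python
def qfh_count_bound_holds(d, alpha, n):
    """Check  alpha*(n+1-m)/m + m - 1  >=  2*alpha*n/(d*(d+3))  with  m = d(d+3)/2.
    The difference equals (m-1)*(1 - alpha/m) >= 0 whenever 0 < alpha <= 1."""
    m = d * (d + 3) // 2
    return alpha * (n + 1 - m) / m + m - 1 >= 2 * alpha * n / (d * (d + 3))

# exhaustive spot check over small dimensions, densities and family sizes
ok = all(qfh_count_bound_holds(d, alpha, n)
         for d in range(1, 7)
         for alpha in (0.1, 0.5, 0.9)
         for n in range(d * (d + 3) // 2, 300))
```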
\section{Proof of Theorem~\ref{thm:QFHfew} -- QFH for small subfamilies}\label{sec:QFHfew} Let $\mathcal{C} = \{C_1, \ldots, C_n\}$. We call an index set $I \in \binom{[n]}{3d+1}$ \emph{good}, if the corresponding intersection $\cap_{i \in I} C_i$ contains an ellipsoid of volume at least one. We say that a $2d$-element subset $S \subset I$ of a good index set $I\in \binom{[n]}{3d+1}$ is a \emph{seed} of $I$, if the volume of the John ellipsoid of $\cap_{i \in S}C_i$ is at most $c^{-d}d^{3d/2}$ times the volume of the John ellipsoid of $\cap_{i \in I}C_i$, where $c$ is the absolute constant from Theorem~\ref{lem:qhellyell}. By Theorem~\ref{lem:qhellyell}, all good index sets have a seed. Since we have $\alpha\binom{n}{3d+1}$ good index sets and only $\binom{n}{2d}$ possible seeds, there is a $(2d)$-tuple $S\in\binom{[n]}{2d}$ which is the seed of at least \[\frac{\alpha\binom{n}{3d+1}}{\binom{n}{2d}} \geq \gamma \binom{n}{d+1}\] good index sets. Here $\gamma$ depends on $\alpha$ and $d$, but not on $n$. Let $I_1, \ldots, I_{\gamma \binom{n}{d+1}}$ be good index sets whose seed is $S$. Denote the John ellipsoid of the intersection $\cap_{i \in S}C_i$ by $\mathcal{E}$ and the John ellipsoid of $\cap_{i \in I_j}C_i$ by $\mathcal{E}_j$. By Lemma~\ref{lem:hellyklee}, for every $j$, there is a $v_j \in \Re^d$ such that $c^dd^{-5d/2 + 1}\mathcal{E} + v_j \subseteq \mathcal{E}_j$. Thus, we have shown that at least $\gamma\binom{n}{d+1}$ of the $(d+1)$-wise intersections contain a translate of $c^dd^{-5d/2 + 1}\mathcal{E}$. We can apply \Href{Proposition}{prop:fhellyklee} with $L=c^dd^{-5d/2 + 1}\mathcal{E}$, which implies that there are $\frac{\gamma}{d+1} n$ sets $C_i$ whose intersection contains a translate of $c^dd^{-5d/2 + 1}\mathcal{E}$. And, since $\mathcal{E}$ has volume at least one, this ellipsoid has volume at least $c^{d^2}d^{-5d^2/2+d}$, completing the proof of \Href{Theorem}{thm:QFHfew}.
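The claim that $\gamma$ depends only on $\alpha$ and $d$ amounts to the ratio $\binom{n}{3d+1}\big/\bigl[\binom{n}{2d}\binom{n}{d+1}\bigr]$ being bounded below independently of $n$; in fact it is nondecreasing in $n$, so its value at $n=3d+1$ serves as the bound. A quick check (illustrative only, not part of the proof):

```python
from math import comb

def seed_ratio(d, n):
    # binom(n, 3d+1) / (binom(n, 2d) * binom(n, d+1))
    return comb(n, 3*d + 1) / (comb(n, 2*d) * comb(n, d + 1))

# the ratio is nondecreasing in n, hence bounded below by its value at n = 3d+1
monotone = all(seed_ratio(d, n + 1) >= seed_ratio(d, n)
               for d in range(1, 5)
               for n in range(3*d + 1, 120))
```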
\section{Roadmap to \texorpdfstring{$(p,q)$}{(p,q)}: Selection Lemma and Weak Epsilon Net} \label{sec:QEpsilonNet} In \cite{Alon02}, partly by extracting the combinatorial arguments presented in earlier works, it is shown in an abstract setting (that is, working with hypergraphs in general, and not specifically with convex sets in $\Re^d$) that a $(p,q)$-theorem may be obtained from a fractional Helly-type theorem in the following manner. First, combined with a Tverberg-type theorem, a fractional Helly-type theorem yields a selection lemma \cite[Proposition~11]{Alon02}, which in turn yields a weak epsilon-net theorem \cite[Theorem~9]{Alon02} using a greedy algorithm. On the other hand, if a hypergraph satisfies the $(p,q)$ condition, then the hypothesis of the fractional Helly-type theorem holds. It follows that the fractional transversal number of the hypergraph is bounded from above, which, combined with a weak epsilon-net, yields that its transversal number is bounded as well, which is what the $(p,q)$-theorem states. In the rest of the paper, we will mark where we follow this path, and where we do not. In this section, we state the above-listed classical (non-quantitative) results along with their quantitative analogs. \textbf{Tverberg's Theorem} (see \cite{Tv66}, and for a simpler proof \cite{Tv81}) states the following. \emph{For every dimension $d\geq1$ and integer $r \geq 1$, if $m \geq (r-1)(d+1)+1$ and $\{x_1,\dots,x_m\}$ is a set of points in $\Re^d$ of size $m$, then there is a partition $\cup_{i = 1}^r I_i = [m]$ such that $\cap_{i=1}^r \conv{\{x_j\;:\; j \in I_i\}}$ is not empty.} It is shown in \cite[Proposition~10]{Alon02} that in an abstract setting, a fractional Helly-type theorem yields a Tverberg-type theorem. However, the Tverberg number (the lower bound on $m$) obtained there is very large. Luckily, we have a quantitative Tverberg theorem due to Sarkar, Xue and Sober\'on with a much better Tverberg number.
\begin{theorem}[Quantitative Tverberg Theorem {\cite[Theorem~4.1.2]{SaXuSo}}]\label{thm:QTverberg} For every dimension $d\geq1$ and integer $r \geq 1$, if $m \geq (r-1)\left(\frac{d(d+3)}{2}+1\right)+1$ and $\{E_1, \ldots, E_m\}$ is a multiset of ellipsoids of volume one, then there is a partition $\cup_{i = 1}^rI_i = [m]$ such that $\cap_{i=1}^r \conv{\{E_j\;:\; j \in I_i\}}$ contains an ellipsoid of volume one. \end{theorem} We will need the following form of the above result. \begin{cor}[Quantitative Tverberg Theorem with equal parts]\label{thm:QTverbergEqual} For every dimension $d \geq 1$, if $a = \left(\frac{d(d+3)}{2}-1\right)\left(\frac{d(d+3)}{2}+1\right)+1 $, $b = \frac{d(d+3)}{2}$ and $\{E_1, \ldots, E_{ab}\}$ is a multiset of ellipsoids of volume one, then there is a partition $\cup_{i = 1}^bI_i = [ab]$ such that $|I_1| = \ldots = |I_b| = a$ and $\cap_{i=1}^b \conv{\{E_j\;:\; j \in I_i\}}$ contains an ellipsoid of volume one. \end{cor} \begin{proof}[Proof of Corollary~\ref{thm:QTverbergEqual}] Pick any $a$-element subset $I \subset [ab]$ and use Theorem~\ref{thm:QTverberg} with $r = b$. It yields a partition $\cup_{i=1}^bI'_i = I$ such that $\cap_{i=1}^b \conv{\{E_j\;:\; j \in I'_i\}}$ contains an ellipsoid of volume one. Now complete the parts of the obtained partition from the remaining $(ab-a)$ indices into $a$-element disjoint index sets $I_1 \supset I'_1, I_2 \supset I'_2, \ldots, I_b \supset I'_b$. Since $\conv{\{E_j\;:\; j \in I'_i\}} \subset \conv{\{E_j\;:\; j \in I_i\}}$, the partition $\cup_{i = 1}^bI_i = [ab]$ will have the desired properties. \end{proof} The \textbf{Selection Lemma} (the planar version first proved by Boros and F\"uredi \cite{BF84}, the general version due to B\'ar\'any \cite{MR676720}) states the following. \emph{ For every dimension $d \geq 1$, there exists a $\lambda \in (0,1)$ with the following property.
If $\{x_1,\dots,x_n\}$ is a multiset of points in $\Re^d$, then there is a subset $\mathcal{H} \subseteq \binom{[n]}{d+1}$ with $|\mathcal{H}| \geq \lambda \binom{n}{d+1}$ such that $\cap_{I \in \mathcal{H}}\conv{\{x_j\;:\; j \in I\}}$ is not empty.} \begin{lemma}[Quantitative Selection Lemma]\label{thm:QSelection} For every dimension $d \geq 1$, there exists an integer $a(d)$ and a real number $\lambda(d) \in (0,1)$ with the following property. If $n \geq a$ and $\{E_1, \ldots, E_n\}$ is a multiset of volume one ellipsoids in $\Re^d$, then there is a subset $\mathcal{H} \subseteq \binom{[n]}{a}$ with $|\mathcal{H}| \geq \lambda \binom{n}{a}$ such that $\cap_{I \in \mathcal{H}}\conv{\cup\{E_j\;:\; j \in I\}}$ contains an ellipsoid of volume one. \end{lemma} The proof follows closely the proof of \cite[Proposition~11]{Alon02}. \begin{proof}[Proof of Lemma~\ref{thm:QSelection}] Let $a = \left(\frac{d(d+3)}{2}-1\right)\left(\frac{d(d+3)}{2}+1\right)+1 $ and $b = \frac{d(d+3)}{2}$ as in Corollary~\ref{thm:QTverbergEqual}. Let $\mathcal{S} = \left\{\conv{\cup\{E_i\;:\; i \in I\}}\;:\; I \in \binom{[n]}{a}\right\}$. Our plan is to show that a positive fraction of $b$-tuples in $\mathcal{S}$ has an intersection which contains an ellipsoid of volume one in order to apply Proposition~\ref{prp:QFHmany} to $\mathcal{S}$. Let \begin{align*} T = \bigg\{\{I_1, \ldots, I_b\}\;:\; I_i \in \binom{[n]}{a}, I_i \cap I_j = \emptyset \text{ for } i \neq j \text{ and } \\ \cap_{i = 1}^b\conv{\cup\{E_j\;:\; j \in I_i\}} \text{ contains an ellipsoid of volume } 1\bigg\}. \end{align*} Observe that $|T|$ is at least $\binom{n}{ab}$, since, according to Corollary~\ref{thm:QTverbergEqual}, for each $J \in \binom{[n]}{ab}$ there exist pairwise disjoint $I_1, \ldots, I_b \in \binom{J}{a}$ such that $\cap_{i = 1}^b\conv{\cup\{E_j\;:\; j \in I_i\}}$ contains an ellipsoid of volume one, and so each $J$ contributes a different $b$-tuple in $T$.
Hence \[ |T| \geq \binom{n}{ab} \geq \left(\frac{n}{ab}\right)^{ab} \geq \frac{1}{(ab)^{(ab)}}\mybinom{\binom{n}{a}}{b}, \] which means that we can apply Proposition~\ref{prp:QFHmany} to $\mathcal{S}$ and conclude that a $\lambda(d)=\beta\left(\frac{1}{(ab)^{(ab)}}, d\right)$ fraction of the members of $\mathcal{S}$ have an intersection that contains an ellipsoid of volume $1$. This completes the proof of Lemma~\ref{thm:QSelection}. \end{proof} The \textbf{Weak Epsilon Net Theorem}, proved by Alon, B\'ar\'any, F\"uredi and Kleitman \cite{ABFK92}, states the following. \emph{ For every dimension $d \geq 1$ there exists a function $f: (0,1] \rightarrow \Re$ with the following property. For any $\varepsilon\in(0,1]$, if $\mathcal{C}$ is a finite family of convex bodies in $\Re^d$, and $w:\Re^d\rightarrow[0,1]$ is a weight function such that $\sum_{x\in C} w(x)\geq \varepsilon$ for all $C\in\mathcal{C}$, and $\sum_{x\in \Re^d}w(x)=1$, then there is a set $S\subset\Re^d$ such that each $C \in \mathcal{C}$ contains an element of $S$, and $S$ is of size at most $f(\varepsilon)$. } \begin{theorem}[Existence of quantitative weak $\varepsilon$-nets]\label{thm:weakepsilonnet} For every dimension $d \geq 1$ there exists a function $f: (0,1] \rightarrow \Re$ with the following property. For any $\varepsilon\in(0,1]$, if $\mathcal{C}$ is a finite family of convex bodies in $\Re^d$, and $\Elli$ is a finite family of volume one ellipsoids in $\Re^d$, and $w:\Elli\rightarrow[0,1]$ is a weight function on this family of ellipsoids such that $\sum_{E\in\Elli, E\subseteq C} w(E)\geq \varepsilon$ for all $C\in\mathcal{C}$, and $\sum_{E\in\Elli} w(E)=1$, then there is a family $S$ of ellipsoids of volume one such that each $C \in \mathcal{C}$ contains a member of $S$, and $S$ is of size at most $f(\varepsilon)$. 
\end{theorem} \begin{proof}[Sketch of the proof of Theorem~\ref{thm:weakepsilonnet}] The existence of quantitative weak $\varepsilon$-nets follows from the Quantitative Selection Lemma (Lemma~\ref{thm:QSelection}), using a greedy algorithm. A very similar proof can be found in \cite[Theorem~10.4.2]{MR1899299}. \end{proof} We note that by using Theorem~\ref{thm:QTverberg}, we obtain a smaller $f(\varepsilon)$ than we would if we used \cite[Theorem~9]{Alon02}. \section{Proof of Proposition~\ref{prp:pqlargep} -- Quantitative \texorpdfstring{$(p,q)$}{(p,q)}-Theorem with a large bound on \texorpdfstring{$p$}{p}} \label{sec:pqlargep} Let $\mathcal{C}$ be a finite family of convex bodies in $\Re^d$ as in Proposition~\ref{prp:pqlargep}. We will represent $\mathcal{C}$ as a finite hypergraph. For each subfamily of $\mathcal{C}$, fix one volume one ellipsoid contained in the intersection of that subfamily, if there is such an ellipsoid. This way, we obtain a finite family of volume one ellipsoids. This family of ellipsoids, denote it by $\Elli$, will be the vertex set of our hypergraph. For each $C\in\mathcal{C}$, consider the subfamily of ellipsoids in $\Elli$ that are contained in $C$. The edges of the hypergraph will be the subfamilies of ellipsoids obtained this way. We denote this hypergraph by $(\Elli,\mathcal{H})$. By \cite[Theorem~8]{Alon02} and Proposition~\ref{prp:QFHmany}, the fractional transversal number (see the definition therein, or in \cite[Section~10]{MR1899299}) of $(\Elli,\mathcal{H})$ is bounded from above by some $T>0$ that depends on $d$ only. Our Quantitative Weak $\varepsilon$-Net Theorem, Theorem~\ref{thm:weakepsilonnet}, applied with $\varepsilon=1/T$, now completes the proof of Proposition~\ref{prp:pqlargep}. In summary, the proof of Proposition~\ref{prp:pqlargep} mainly follows the classical line of reasoning that yields the $(p,q)$-Theorem. As we will see in the next section, to obtain Theorem~\ref{thm:QFHfew}, some other ideas are required. 
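The pipeline just described (a fractional transversal of total weight $T$, normalized into a weight function to which the weak $\varepsilon$-net theorem applies with $\varepsilon=1/T$) can be illustrated on a toy finite hypergraph. The following Python sketch is ours, purely for illustration; the three-vertex hypergraph stands in for $(\Elli,\mathcal{H})$ and is not taken from the paper.

```python
from fractions import Fraction

# Toy finite hypergraph standing in for (E, H): three "ellipsoid" vertices and
# one edge per convex body (the example itself is invented for illustration).
vertices = [0, 1, 2]
edges = [{0, 1}, {1, 2}, {0, 2}]

# A fractional transversal: weight 1/2 on each vertex covers every edge with
# total weight >= 1, so the fractional transversal number is at most T = 3/2.
phi = {v: Fraction(1, 2) for v in vertices}
T = sum(phi.values())
assert all(sum(phi[v] for v in e) >= 1 for e in edges)

# Normalizing phi to a probability weight w, every edge carries weight >= 1/T:
# exactly the hypothesis of the weak epsilon-net theorem with epsilon = 1/T.
w = {v: phi[v] / T for v in vertices}
eps = 1 / T
assert sum(w.values()) == 1
assert all(sum(w[v] for v in e) >= eps for e in edges)

# Linear programming duality (used in the next section): the fractional
# matching with 1/2 on each edge certifies that T = 3/2 is optimal.
m = [Fraction(1, 2)] * len(edges)
for v in vertices:
    assert sum(m[i] for i, e in enumerate(edges) if v in e) <= 1
assert sum(m) == T
```

The matching at the end is the dual certificate: since every fractional matching value is a lower bound on every fractional transversal value, equality at $3/2$ pins down both optima.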
\section{Proof of Theorem~\ref{thm:pqsmallp} -- Quantitative \texorpdfstring{$(p,q)$}{(p,q)}-Theorem with a small bound on \texorpdfstring{$p$}{p}}\label{sec:pqsmallp} First, we discuss why the argument of the previous section cannot simply be repeated. The main idea in Section~\ref{sec:pqlargep} was to consider a hypergraph representing our convex sets, and to study properties of this hypergraph. In the setting of Theorem~\ref{thm:pqsmallp}, however, there are \emph{two} hypergraphs: we obtain one if we consider ellipsoids of volume one in our convex sets, and another if ellipsoids of volume $v$ are considered (with $v=c^{d^2}d^{-5d^2/2+d}$). Thus, some additional care is required. \begin{definition}[Quantitative fractional $v$-transversal number] For a subset $S \subseteq \Re^d$, let $E_v(S)$ be the set of volume $v$ ellipsoids contained in $S$. Let $\mathcal{C}$ be a family of subsets of $\Re^d$ and let $\varphi: E_v(\Re^d) \rightarrow [0,1]$ be a function that attains only finitely many nonzero values. We say that $\varphi$ is a \emph{quantitative fractional $v$-transversal} for $\mathcal{C}$, if $\sum_{E \in E_v(C)} \varphi(E) \geq 1$ for all $C \in \mathcal{C}$. The \emph{quantitative fractional $v$-transversal number} of $\mathcal{C}$ is the infimum of $\sum_{E \in E_v(\Re^d)}\varphi(E)$ over all quantitative fractional $v$-transversals $\varphi$ of $\mathcal{C}$. \end{definition} \begin{lemma}[Boundedness of quantitative fractional $v$-transversal number]\label{lemma:boundedvtransversal} For every $d$ and $p \geq 3d+1$ there exists an $h = h(p,d)>0$ with the following property. Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$, each containing an ellipsoid of volume one, and assume that among any $p$ members of $\mathcal{C}$, there exist $3d+1$ whose intersection contains an ellipsoid of volume one. 
Then $\mathcal{C}$ has quantitative fractional $v$-transversal number less than $h$ with $v=c^{d^2}d^{-5d^2/2+d}$, where $c$ is the universal constant from Theorem~\ref{lem:qhellyell}. \end{lemma} \begin{proof} We are going to use linear programming duality as in \cite{alon1992piercing}, but first we need the definition of quantitative fractional $v$-matching numbers. \begin{definition} Let $\mathcal{C}$ be a finite family of convex sets in $\Re^d$ and let $m: \mathcal{C} \rightarrow [0,1]$ be a function. We say that $m$ is a \emph{quantitative fractional $v$-matching} for $\mathcal{C}$, if for every ellipsoid $E \subset \Re^d$ with volume $v$, the sum $\sum_{E \subset C \in \mathcal{C}}m(C)$ is at most $1$. The \emph{quantitative fractional $v$-matching number} of $\mathcal{C}$ is the supremum of $\sum_{C \in \mathcal{C}}m(C)$ over all quantitative fractional $v$-matchings of $\mathcal{C}$. \end{definition} Let $v = c^{d^2}d^{-5d^2/2+d}$. Note that if we consider each member $C$ of $\mathcal{C}$ as the collection of all volume $v$ ellipsoids that it contains, then we obtain a hypergraph with finitely many edges, but an infinite vertex set. However, we can replace this infinite vertex set by a finite one, just like at the beginning of Section~\ref{sec:pqlargep}, and obtain another hypergraph with the same (fractional) transversal and matching numbers as those of the original hypergraph. Now, we may apply the duality of linear programming to see that the quantitative fractional $v$-matching number and the quantitative fractional $v$-transversal number are equal for any family of convex sets $\mathcal{C}$. We denote this common value by $\nu_v^*(\mathcal{C})$. We also know that there is an optimal quantitative fractional $v$-matching taking only rational values. Let $m$ be such a quantitative fractional $v$-matching and suppose that $m(C) = \frac{\tilde{m}(C)}{D}$ for every $C$, where $\tilde{m}(C)$ and $D$ are integers. 
Let $\tilde{\mathcal{C}} = \{C_1, \ldots, C_N\}$ be the multiset that contains $\tilde{m}(C)$ copies of each $C \in \mathcal{C}$. Taking some sets with multiplicities does not change the quantitative fractional matching number, so $\nu_v^*(\tilde{\mathcal{C}}) = \nu_v^*(\mathcal{C})$. We know that among any $p$ members of $\mathcal{C}$, there are $3d+1$ whose intersection contains an ellipsoid of volume $1$. So among any $\tilde{p} = 3d(p-1) + 1$ members of $\tilde{\mathcal{C}}$ there are $3d+1$ whose intersection contains an ellipsoid of volume $1$, because a $\tilde{p}$-element multiset from $\tilde{\mathcal{C}}$ either contains $3d+1$ copies of the same set, or $p$ different sets from $\mathcal{C}$. For every $I \in \binom{[N]}{\tilde{p}}$ there is a subset $J \subset I$ with $|J| = 3d+1$ and $\cap_{j \in J} C_j$ containing an ellipsoid of volume one. So there are at least \[ \frac{\binom{N}{\tilde{p}}}{\binom{N-3d-1}{\tilde{p}-3d-1}} \geq \alpha \binom{N}{3d+1} \] $(3d+1)$-tuples from $\tilde{\mathcal{C}}$ whose intersection contains an ellipsoid of volume one, for a suitable $\alpha = \alpha(p,d)>0$. We can apply Theorem~\ref{thm:QFHfew} and conclude that there are $\beta N$ sets from $\tilde{\mathcal{C}}$ whose intersection contains an ellipsoid of volume $v$. On the other hand, no ellipsoid of volume $v$ can be in more than $\frac{N}{\nu_v^*(\mathcal{C})}$ of the sets from $\tilde{\mathcal{C}}$, hence $\nu_v^*(\mathcal{C}) \leq \frac{1}{\beta}$, completing the proof of Lemma~\ref{lemma:boundedvtransversal}. \end{proof} Now we are ready to prove Theorem~\ref{thm:pqsmallp}. \begin{proof}[Proof of Theorem~\ref{thm:pqsmallp}.] If among any $p$ members of $\mathcal{C}$ there are $3d+1$ whose intersection contains an ellipsoid of volume $1$, then it follows from Lemma~\ref{lemma:boundedvtransversal} that $\mathcal{C}$ has quantitative fractional $v$-transversal number at most $h(p,d)$ with $v = c^{d^2}d^{-5d^2/2+d}$. 
Let $\varphi$ be a quantitative fractional $v$-transversal of size $h(p,d)$, and for every ellipsoid $E$ with volume $v$ let $w(E) = \frac{\varphi(E)}{h(p,d)}$. We can use Theorem~\ref{thm:weakepsilonnet} with $w$ and $\mathcal{C}$ and conclude that there is a weak $\frac{1}{h}$-net $S$ consisting of volume $v$ ellipsoids for $\mathcal{C}$, of size at most $f\left(\frac{1}{h}\right)$. Since for every $C \in \mathcal{C}$ the inequality $\sum_{E \subset C}w(E) \geq \frac{1}{h}$ holds, every $C \in \mathcal{C}$ contains at least one ellipsoid from $S$, and Theorem~\ref{thm:pqsmallp} follows with $H(p,d) = |S| = f\left(\frac{1}{h(p,d)}\right)$. \end{proof} \section{Concluding remarks} The following questions are left open. First, can the Helly number $3d+1$ be replaced by $2d$ in Theorems~\ref{thm:QFHvolFew}, \ref{thm:QFHfew} and~\ref{thm:pqsmallp}? It would be interesting to see such a result, even if the volume bound on the ellipsoid is worse than the $d^{-cd^2}$ that we obtained. Second, can the volume bound in the above mentioned theorems be replaced by $d^{-cd}$? Finally, as mentioned in Section~\ref{sec:QFHmany}, according to \cite{holmsen19} by Holmsen, a colorful Helly type theorem yields a fractional Helly type theorem in the abstract setting, when one hypergraph is considered (see Section~\ref{sec:pqsmallp} for the explanation of one hypergraph versus two). Can one give a quantitative analogue of this result? More precisely, if we make some assumptions on a pair of hypergraphs, then does a colorful Helly type theorem that holds for the pair of hypergraphs imply the corresponding fractional result for the same pair? That would mean that Theorem~\ref{thm:QFHvolFew} follows from \cite{DFN2019} by an abstract argument. \invisiblesection{Acknowledgement} \subsection*{Acknowledgement} We thank Pablo Sober\'on for helpful discussions. Both authors were supported by the grant EFOP-3.6.3-VEKOP-16-2017-00002. 
MN was also supported by the J\'anos Bolyai Scholarship of the Hungarian Academy of Sciences, by the National Research, Development and Innovation Fund (NRDI) grant K119670, and by the \'UNKP-20-5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the NRDI. \bibliographystyle{alpha}
\section{Introduction} It is an understatement to say that, for a century now, Noether's two theorems \cite{Noether,Bessel-Hagen} have played a central role in physics \cite{YKS}. Since both symmetry and variational principles have become so fundamental to the point that they structure modern theories, these theorems are nowadays an indispensable part of any syllabus in pure physics. At an advanced level: in gauge theories, of which they constitute a cornerstone, but also, beforehand, in analytical mechanics, where Noether's name is generally mentioned for the first time through her first theorem. Indeed, the second theorem objectively plays a marginal role in this framework, though it is not without interest concerning parameter-free variational principles \cite{Logan} (such as Jacobi's formulation of the principle of least action). The first theorem, for its part, is simply called `the theorem of Noether', and is mostly stated with the principal aim of rederiving the usual conservation laws from invariances of the action under transformations. It has the advantage of gathering all the existing conservation laws under a universal and elegant symmetry principle. A semester-length course on classical mechanics --- which must cover many topics such as calculus of variations, Lagrangian and Hamiltonian mechanics, canonical transformations, the Hamilton-Jacobi equation, action-angle coordinates, and so on, not to mention all the important applications --- cannot spend much time on each individual chapter. This is the reason why the first theorem of Noether is rarely treated in much detail. Pedagogical articles on the subject aim to fill this gap. One of the most interesting topics is certainly the reverse question of searching for unknown conservation laws in some practical situation, by analysing the symmetries (if any) admitted by the action functional. 
The method, as described for example in Neuenschwander's book \cite{Neuenschwander}, consists in seeking transformations satisfying the so-called Rund-Trautman identity \cite{Rund,Trautman}. That task can generally be achieved in an algorithmic way if we restrict ourselves to point transformations \cite{Lutzky,Prince,Gourieux}. It is interesting to interpret the Rund-Trautman identity as the identical vanishing of a quantity $R$ that will be called the \emph{Rund-Trautman function} in this article. As will be shown, $R$ turns out to be the rate of change along the motion of both (i) a Noether asymmetricity and (ii) the non conservation of the quantity that would be conserved in case of symmetry. These properties demonstrate the role that $R$ can play in the problem of perturbatively finding `almost' conservation laws when a system depends on small parameters. This is the main purpose of the present article regarding the adiabatic hypothesis. Our approach will be elementary enough to avoid complicated mathematical techniques \cite{Lochak}. The paper is organized as follows. We first draw in section \ref{sec:events} a basic portrait of the space of events associated with a mechanical system, and we outline how point transformations act in that space. In section \ref{sec:Noether}, we review the important features of Noether's first theorem regarding point transformations, including the Rund-Trautman function as well as coordinate and gauge issues. In section \ref{sec:applications}, we apply the theory to the determination of all the Noether point symmetries of natural one-dimensional problems \cite{Lewis-Leach}, with an emphasis on the time-dependent harmonic oscillator. Section \ref{sec:adiabatic} is then devoted to the notions of almost symmetries and almost conservation laws within the adiabatic hypothesis. 
In order to comply with the amusing saying according to which `classical mechanics is the art of solving the harmonic oscillator in many ways', we treat the case of the paradigmatic time-dependent harmonic oscillator \cite{Ehrenfest,Kulsrud}. After obtaining a formal expansion of an adiabatic invariant to arbitrary orders, we derive explicit expressions for some frequency profiles and we end the article with a numerical test. In order to remain as pedagogical as possible without obscuring the main content, all the calculations will be explained and detailed in appendices. Alternatively, they can be considered as exercises for the reader. \section{Continuous point transformations in the space of events}\label{sec:events} \subsection{The space of events and its coordinatizations}\label{subsecsec:events} Let us take as a starting point a classical mechanical problem whose configuration space $\mathcal Q$ is an $n$-dimensional smooth manifold. Since our framework is Newtonian, there exists independently a timeline $\mathcal T$, diffeomorphic to the real line, whose points are the positions in time. The space of events (or extended configuration space) is the Cartesian product $\mathcal E=\mathcal T\times\mathcal Q$ usually coordinatized by $(1+n)$-tuples whose first element is the absolute time along $\mathcal T$, the $n$ others being coordinates in $\mathcal Q$. However, it might be interesting to consider arbitrary coordinate systems in $\mathcal E$ possibly `mixing' $\mathcal T$ and $\mathcal Q$. Such a system will be generically denoted by $(t,q)$ with $q=(q^i)$ and $i=1,\dots,n$. We must only be sure that $t$ can play the role of a time along the actual evolution of the configuration point in $\mathcal Q$, i.e.\ that it increases strictly with the absolute time (it is certainly the case if $t$ is simply a future-oriented coordinate in $\mathcal T$, e.g.\ the absolute time itself). 
Geometrically, it amounts to saying that the curve drawn by the evolution in $\mathcal E$ is, in coordinates, the graph of a mapping $\mathcal C\colon t\mapsto q(t)$. It is often interesting to reduce at least formally the specificity of the time coordinate by setting $t\equiv q^0$ and $(t,q)=(q^\mu)$ with $\mu=0,\dots,n$. \subsection{Continuous point transformations} A continuous point transformation $\Phi$ of the space of events is essentially a mechanism which unambiguously maps any event $(t,q)$ into a parameter-dependent one, $(t_\varepsilon,q_\varepsilon)$ say, where $\varepsilon$ is the parameter. Locally, it admits a form \begin{equation} (t,q)\longrightarrow(t_\varepsilon,q_\varepsilon)=\Phi(t,q;\varepsilon)=\big(t+\varepsilon\tau(t,q),q+\varepsilon\xi(t,q)\big)+\mathrm O(\varepsilon^2),\label{transfo} \end{equation} in some vicinity of $\varepsilon=0$, the quantities $\tau$ and $\xi=(\xi^i)$ being smooth functions of their arguments. The transformation $\Phi$ is entirely characterized and generated by the vector field \cite{Olver} \begin{equation*} \mathsf X =\tau\,\frac{\partial}{\partial t}+\xi^i\frac{\partial}{\partial q^i}=\xi^\mu\partial_\mu\qquad\Bigg(\xi^0\equiv\tau\;,\;\partial_\mu\equiv\frac{\partial}{\partial q^\mu}\Bigg) \end{equation*} on $\mathcal E$, the Einstein summation convention being assumed in this paper (Latin and Greek indices cover the ranges $1,\dots,n$ and $0,\dots,n$ respectively). It is the vector field which has for components the $(n+1)$-tuple $(\xi^\mu)$ with respect to the coordinate system $(q^\mu)$. Actually, $\Phi$ drags the events along the integral curves of $\mathsf X$. From now on, $\varepsilon$ will be considered as an infinitesimal and any term of order higher than the first in $\varepsilon$ will be neglected. This way, $\varepsilon\mathsf X$ is the infinitesimal translation bringing the original event $(t,q)$ to the transformed one $(t_\varepsilon,q_\varepsilon)$. 
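As a simple illustration, the time translation $(t,q)\longrightarrow(t+\varepsilon,q)$ and the Galilean boost $(t,q)\longrightarrow(t,q+\varepsilon t)$ of a single degree of freedom are generated respectively by
\begin{equation*}
\mathsf X=\frac{\partial}{\partial t}\qquad\textrm{and}\qquad\mathsf X=t\,\frac{\partial}{\partial q}\,,
\end{equation*}
i.e.\ they correspond to $(\tau,\xi)=(1,0)$ and $(\tau,\xi)=(0,t)$ in \eref{transfo}.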
\subsection{Induced transformations and symmetries of point functions} Let $F_0$ be a smooth point function\footnote{The French mathematician Gabriel Lam\'e called \textit{`fonction-de-point'} a real-valued function defined on the `absolute space' under consideration (originally the three-dimensional physical space) \cite{Lame}. It unambiguously associates a real value to any point of that space and can secondarily acquire an analytical expression through a coordinate system. It is a basic example of scalar.} defined on $\mathcal E$. Applying a first-order Taylor expansion based on \eref{transfo}, while $(t,q)$ is mapped into $(t_\varepsilon,q_\varepsilon)$, the value of $F_0$ undergoes the transformation \begin{equation} F_0(t,q)\longrightarrow F_0(t_\varepsilon,q_\varepsilon)=F_0(t,q)+\varepsilon\mathsf X (F_0)(t,q).\label{transpoint} \end{equation} One says that $\Phi$ is a symmetry of $F_0$ if it leaves its values invariant in the flow of $\Phi$, i.e.\ if the variation of $F_0$ is everywhere zero in the direction of $\mathsf X$. This is the case if and only if (iff) $\mathsf X (F_0)$ vanishes identically. Geometrically, it means that the field $\mathsf X$ is tangent to the level surfaces of $F_0$ (or equivalently that the integral curves of $\mathsf X$ are contained in these surfaces). \begin{figure} \begin{center} \psset{xunit=1.8cm,yunit=1.4cm,algebraic=true,dimen=middle,dotstyle=o,dotsize=3pt 0,linewidth=0.5pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-2.1,-0.7)(2.2,2.4) \begin{small} \psaxes[labelFontSize=\scriptstyle,xAxis=true,yAxis=true,labels=none,ticks=none]{->}(-0.0,-.3)(-2.1,-0.5)(2.,2.) 
\psplot[plotpoints=200,linestyle=dashed,dash=2pt 2pt]{-2}{1.55}{2.718281828459045^(-x^(2.0))+0.1} \psplot[plotpoints=200]{-1.8}{1.8}{1.2*2.718281828459045^(-(x-0.4)^(2.0)+0.2)+0.2} \psdots[dotstyle=*](0.55,0.83) \psline[linestyle=dashed,dash= 1.5pt 1.5pt](0.55,0.83)(.55,-.3) \uput{0.15cm}[-90](.55,-.3){$t$} \psline[linestyle=dashed,dash= 1.5pt 1.5pt](0.55,0.83)(0,0.83) \uput{0.05cm}[180](0,0.83){{$q(t)$}} \psline[linewidth=.8pt]{->}(.55,.83)(1.05,.43) \uput{0.4cm}[105](1.05,.43){{$\dot q(t)$}} \psdots[dotstyle=*](0.7,1.54) \psline[linewidth=1pt,linecolor=blue]{->}(0.55,0.83)(0.65,1.52) \psline[linestyle=dashed,dash= 1.5pt 1.5pt](0.7,1.54)(-.0,1.54) \uput{.05cm}[180](-.0,1.54){$q_\varepsilon(t_\varepsilon)$} \psline[linestyle=dashed,dash= 1.5pt 1.5pt](0.7,1.54)(0.7,-.3) \uput{.15cm}[-90](0.75,-.3){$t_{\varepsilon}$} \psline[linewidth=.8pt]{->}(0.7,1.54)(1.3,1.0) \rput[bl](.95,1.37){$\dot q_\varepsilon(t_\varepsilon)$} \rput[bl](1.1,0){{$\mathcal C$}} \rput[bl](1.6,.7){{$\mathcal C_\varepsilon$}} \rput[bl](.32,1.1){\color{blue}$\varepsilon\mathsf X$} \uput[0](2,-.3){$t$} \uput{.1cm}[90](0.0,2){$q$} \psdots[dotstyle=*](-2,0.1183) \psline[linestyle=dashed,dash= 1pt 1pt](-2,0.1183)(-2,-.3) \uput{.15cm}[-90](-2.0,-.3){$t_1$} \psdots[dotstyle=*](-1.8,0.212) \psline[linestyle=dashed,dash= 1pt 1pt](-1.8,0.212)(-1.8,-.3) \uput{.15cm}[-90](-1.75,-.3){$t_{1\varepsilon}$} \psdots[dotstyle=*](1.8,0.406) \psline[linestyle=dashed,dash= 1pt 1pt](1.8,0.406)(1.8,-.3) \uput{.15cm}[-90](1.85,-.3){$t_{2\varepsilon}$} \psdots[dotstyle=*](1.55,0.19) \psline[linestyle=dashed,dash= 1pt 1pt](1.55,0.19)(1.55,-.3) \uput{.15cm}[-90](1.55,-.3){$t_{2}$} \pscurve[linecolor=red](.5,.3)(0.55,0.83)(0.7,1.54)(.9,2) \psline[linewidth=.8pt,linecolor=red]{->}(0.840,1.87)(.841,1.8722) \rput(.7,1.95){\color{red}$\Phi$} \end{small} \end{pspicture*} \end{center} \caption{Under the action of $\Phi$, and for sufficiently small values of $|\varepsilon|$, the original evolution $\mathcal C$ (dashed line) is 
mapped into a neighbouring evolution $\mathcal C_\varepsilon$ (solid line). The vector $\varepsilon\mathsf X$ is the first-order approximation in $\varepsilon$ of the transformation ($\varepsilon$ is taken positive in the figure).}\label{fig:transformation_evolution} \end{figure} \subsection{Induced transformations of evolutions and their kinematic properties} Now, let $t\mapsto q(t)$ be a generic smooth evolution of the configuration between the two instants $t=t_1$ and $t=t_2$. As illustrated in figure \ref{fig:transformation_evolution}, for sufficiently small values of $|\varepsilon|$, its graph $\mathcal C$ drawn in $\mathcal E$ is transformed by $\Phi$ into the graph of another evolution $\mathcal C_\varepsilon$ between $t=t_{1\varepsilon}$ and $t=t_{2\varepsilon}$, according to (see \ref{app:graph}) \begin{equation} (t,q(t))\longrightarrow (t_\varepsilon,q_\varepsilon(t_\varepsilon))=(t+\varepsilon\tau(t,q(t)),q(t)+\varepsilon\xi(t,q(t))).\label{transgraph} \end{equation} Then, the velocity $\dot q(t)$ of the original evolution at time $t$ becomes the velocity $\dot q_\varepsilon(t_\varepsilon)$ of the transformed evolution at $t=t_\varepsilon$. 
Viewing $t_\varepsilon$ and $q_\varepsilon(t_\varepsilon)$ as functions of the original value $t$ of the time through the equality \eref{transgraph}, one has \begin{equation} \dot q^i_\varepsilon(t_\varepsilon)=\frac{{\mathrm d} q^i_\varepsilon(t_\varepsilon)}{{\mathrm d} t_\varepsilon}=\frac{{\mathrm d} q^i_\varepsilon(t_\varepsilon)}{{\mathrm d} t}\Bigg\slash\frac{{\mathrm d} t_\varepsilon}{{\mathrm d} t}\,,\label{transvel} \end{equation} that is, to the first order in $\varepsilon$ (see \ref{app:vel}): \begin{equation} \dot q^i_\varepsilon(t_\varepsilon)=\dot q^i(t)+\varepsilon\Bigg[\frac{{\mathrm d}\xi^i(t,q(t))}{{\mathrm d} t}-\dot q^i(t)\frac{{\mathrm d}\tau(t,q(t))}{{\mathrm d} t}\Bigg].\label{velocities} \end{equation} In brief, the formal transformation rules of the time, position and velocity are thus \begin{equation*} t\longrightarrow t_\varepsilon=t+\varepsilon \tau\,,\qquad q\longrightarrow q_\varepsilon=q+\varepsilon\xi\,,\qquad \dot q\longrightarrow \dot q_\varepsilon=\dot q+\varepsilon(\dot\xi-\dot q\dot\tau). \end{equation*} One could also determine the transformation rules of higher total $t$-derivatives of $q$ in a recursive way, if needed. 
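The first-order rule \eref{velocities} can be checked numerically. The Python sketch below is ours, with an arbitrarily chosen transformation $\tau=q$, $\xi=t$ acting on the evolution $q(t)=\sin t$ (a choice made purely for illustration); it compares the exact transformed velocity with the first-order formula.

```python
import math

# Illustrative choice (not from the paper): tau(t,q) = q, xi(t,q) = t,
# acting on the evolution q(t) = sin t.
q, qd = math.sin, math.cos  # evolution and its exact derivative

def transformed_velocity_exact(t, eps, h=1.0e-6):
    # dq_eps/dt_eps computed as a ratio of symmetric finite differences,
    # with t_eps = t + eps*q(t) and q_eps(t_eps) = q(t) + eps*t.
    t_eps = lambda s: s + eps*q(s)
    q_eps = lambda s: q(s) + eps*s
    return (q_eps(t + h) - q_eps(t - h)) / (t_eps(t + h) - t_eps(t - h))

def transformed_velocity_first_order(t, eps):
    # Rule (velocities): qdot_eps = qdot + eps*(xidot - qdot*taudot),
    # here with xidot = 1 and taudot = qdot.
    return qd(t) + eps*(1 - qd(t)**2)

t0, eps = 0.7, 1.0e-4
err = abs(transformed_velocity_exact(t0, eps)
          - transformed_velocity_first_order(t0, eps))
assert err < 10*eps**2  # the two expressions agree to first order in eps
```

The residual is of order $\varepsilon^2$, as expected from a first-order expansion.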
\subsection{Induced transformations and symmetries of kinematic functions} According to the above transformation rules, while the triple $(t,q,\dot q)$ is transformed into $(t_\varepsilon,q_\varepsilon,\dot q_\varepsilon)$ by $\Phi$ along an evolution, the value of a smooth kinematic function $F_1$ of the time, position, and velocity undergoes the transformation \begin{equation*} F_1(t,q,\dot q)\longrightarrow F_1(t_\varepsilon,q_\varepsilon,\dot q_\varepsilon)=F_1(t,q,\dot q)+\varepsilon\mathsf X^{[1]}(F_1)(t,q,\dot q), \end{equation*} where \begin{equation} \mathsf X^{[1]}=\tau\,\frac{\partial}{\partial t}+\xi^i\frac{\partial}{\partial q^i}+(\dot\xi^i-\dot q^i\dot\tau)\frac{\partial}{\partial\dot q^i}\label{firstprolongation} \end{equation} is the so-called first prolongation \cite{Olver} of $\mathsf X$, especially built to act upon such functions. Here again, the transformation is a symmetry of $F_1$ if it leaves invariant the values of that function, i.e.\ if $\mathsf X^{[1]}(F_1)$ vanishes identically. One could also prolong $\mathsf X$ to deal with kinematic functions depending on $\ddot q$ and possibly higher total $t$-derivatives of $q$. However, it will not be necessary for our purpose. 
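As an elementary example with $n=1$, consider the free particle Lagrangian $L=\frac{1}{2}m\dot q^2$ and the Galilean boost generated by $\tau=0$, $\xi=t$. Then \eref{firstprolongation} gives
\begin{equation*}
\mathsf X^{[1]}=t\,\frac{\partial}{\partial q}+\frac{\partial}{\partial\dot q}\,,\qquad \mathsf X^{[1]}(L)=m\dot q=\frac{{\mathrm d}}{{\mathrm d} t}(mq)\ne0,
\end{equation*}
so the boost is not a symmetry of $L$ in the strict sense above, although the variation is a total derivative; such `gauge' terms are precisely the subject of section \ref{subsec:gauge}.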
\subsection{Adapted coordinate systems} \begin{figure} \centering \psset{xunit=.6cm,yunit=.6cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.5pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(0.05,0.)(15.3,8.95) \parametricplot[linewidth=.3pt,linecolor=red]{1.7093114184455844}{2.696464557035488}{1*4.71*cos(t)+0*4.71*sin(t)+4.94|0*4.71*cos(t)+1*4.71*sin(t)+-0.38} \parametricplot[linewidth=.3pt,linecolor=red]{1.651358282808984}{2.6743618432795206}{1*3.5*cos(t)+0*3.5*sin(t)+4.63|0*3.5*cos(t)+1*3.5*sin(t)+-0.94} \psline{->}(0.08,0.4)(5.03,0.4) \psline{->}(9.4,0.4)(14.35,0.4) \uput[0](14.35,0.4){$q^{\bar\alpha}$} \rput{18.19}(7.52,6.81){\psellipse(0,0)(3.17,1.91)} \psline[linewidth=.3pt,linecolor=red](9.4,3.4)(14.35,3.4) \psline[linewidth=.3pt,linecolor=red](9.4,1.9)(14.35,1.9) \psline{->}(0.4,0.1)(0.4,4.77) \psline{->}(9.7,0.1)(9.7,4.77) \psline[linewidth=1pt,linecolor=blue]{->}(1.11,2.36)(2.14,3.8) \psline[linewidth=1pt,linecolor=blue]{->}(2.73,3.77)(4.06,4.48) \psline[linewidth=1pt,linecolor=blue]{->}(1.79,1.11)(2.52,2.12) \psline[linewidth=1pt,linecolor=blue]{->}(2.92,2.11)(4.12,2.78) \parametricplot[linewidth=.3pt,linecolor=red]{1.7235674377538832}{2.718741414308379}{1*5.72*cos(t)+0*5.72*sin(t)+9.88|0*5.72*cos(t)+1*5.72*sin(t)+3.18} \parametricplot[linewidth=.3pt,linecolor=red]{1.472907948870405}{2.949113549971398}{1*3.37*cos(t)+0*3.37*sin(t)+10.26|0*3.37*cos(t)+1*3.37*sin(t)+4.11} \psline[linewidth=1pt,linecolor=blue]{->}(4.95,6.09)(5.8,7.53) \psline[linewidth=1pt,linecolor=blue]{->}(6.29,7.63)(7.52,8.62) \psline[linewidth=1pt,linecolor=blue]{->}(7.11,5.31)(7.62,6.65) \psline[linewidth=1pt,linecolor=blue]{->}(8.34,6.88)(9.53,7.71) \psline[linewidth=1pt,linecolor=blue]{->}(9.85,1.9)(11.34,1.9) \psline[linewidth=1pt,linecolor=blue]{->}(12.35,1.9)(13.84,1.9) \psline[linewidth=1pt,linecolor=blue]{->}(10.04,3.4)(11.53,3.4) \psline[linewidth=1pt,linecolor=blue]{->}(12.42,3.4)(13.91,3.4) 
\parametricplot{1.7448508960176878}{2.9743639964300783}{1*3.26*cos(t)+0*3.26*sin(t)+5.02|0*3.26*cos(t)+1*3.26*sin(t)+4.47} \psline{->}(1.81,5.01)(1.79,4.91) \parametricplot{-0.36142931821872804}{1.293687305190943}{1*2.19*cos(t)+0*2.19*sin(t)+10.49|0*2.19*cos(t)+1*2.19*sin(t)+5.42} \psline{->}(12.54,4.64)(12.5,4.53) \begin{small} \rput[bl](1.1,6.5){$(q^\mu)$} \rput[bl](12.6,6.5){$(q^{\bar\mu})$} \rput[bl](7.5,7.2){\color{blue}$\mathsf X$} \rput[bl](7.,4.1){$\mathcal E$} \rput[bl](1.9,2.4){\color{blue}$\xi^\mu\partial_\mu$} \rput[bl](11.7,2.4){\color{blue}$\partial_{\bar\alpha}$} \end{small} \end{pspicture*} \caption{Schematic illustration of the vector field $\mathsf X$ over $\mathcal E$ and its representatives with respect to an arbitrary coordinate system $(q^\mu)$ and an adapted one $(q^{\bar\mu})$. In the latter, the integral curves of $\mathsf X$ coincide with the coordinate lines of $q^{\bar\alpha}$.}\label{fig:coordinates} \end{figure} For obvious practical reasons, once we have chosen a coordinate system $(q^\mu)$, each individual coordinate $q^\mu$ is tacitly identified with the $\mu$-th coordinate function which maps events to their value of $q^\mu$. From the transformation \eref{transfo} regarding the coordinates (which formally reads $q^\mu\to q^\mu+\varepsilon\xi^\mu$), and from the general transformation rule \eref{transpoint} of the point functions, one readily deduces that applying $\mathsf X$ to the coordinate functions $q^\mu$ yields the equalities $\mathsf X(q^\mu)=\xi^\mu$. This observation allows us to write the components of $\mathsf X$ in an arbitrary system $(q^\mu)$ as $(\xi^\mu)=(\mathsf X(q^\mu))$. In another system $(q^{\mu'})$ they will be $(\xi^{\mu'})=(\mathsf X(q^{\mu'}))$, that is, $(\xi^{\mu'})=(\xi^\nu\partial_\nu q^{\mu'})$. This last equality encodes, as expected, the contravariant transformation rule of the components of vector fields. 
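As an illustration, take $n=1$ and the scaling field $\mathsf X=t\,\partial_t+q\,\partial_q$, generating the dilations $(t,q)\to(t+\varepsilon t,q+\varepsilon q)$ to first order. In the system $(\bar t,\bar q)=(\ln t,q/t)$, defined for $t>0$, one computes
\begin{equation*}
\xi^{\bar 0}=\mathsf X(\ln t)=1\,,\qquad \xi^{\bar 1}=\mathsf X(q/t)=-\frac{q}{t}+\frac{q}{t}=0\,,
\end{equation*}
so that $\mathsf X=\partial_{\bar t}$: the system $(\bar t,\bar q)$ is adapted to the dilations, in the sense of the present subsection.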
According to the elementary theory of differential geometry \cite{Spivak}, it is always possible to define an extended coordinate system $(q^{\bar\mu})$ reducing the transformation $\Phi$ to a mere rigid translation of magnitude $\varepsilon$ along one of the coordinates, $q^{\bar\alpha}$ say (see figure \ref{fig:coordinates}). In such a system, which is said to be \emph{adapted} to $\Phi$, the total derivatives of the coordinates $q^{\bar i}$ with respect to $\bar t=q^{\bar 0}$ remain invariant at all orders, thus $\mathsf X $ and its prolongations simply coincide with the partial derivative $\partial_{\bar\alpha}$. Hence, saying that $\Phi$ is a symmetry of a point or kinematic function $F$ amounts to saying that $F$ does not depend on the coordinate $q^{\bar\alpha}$ in the adapted system (although it may depend on its total derivatives with respect to $\bar t$ if $q^{\bar\alpha}\ne\bar t$). Alternatively stated, that kind of symmetry allows one to reduce by one the number of variables necessary to describe $F$. \section{Noether's theory and the Rund-Trautman function}\label{sec:Noether} \subsection{Introduction of the Rund-Trautman function} Suppose that the dynamics of the system derives from Hamilton's principle applied to an action functional \cite{Goldstein} \begin{equation*} S(\mathcal C)=\int_{t_1}^{t_2} L[q(t)]\,{\mathrm d} t \end{equation*} where $L$ is a smooth first-order Lagrangian and $[q(t)]$ a shorthand notation for the triple of arguments $(t,q(t),\dot q(t))$. It amounts to saying that the motions are the evolutions satisfying the Euler-Lagrange equations \begin{equation*} \mathsf E_i(L)=0\qquad(i=1,\dots,n) \end{equation*} where \begin{equation} \mathsf E_i=\frac{\partial}{\partial q^i}-\frac{{\mathrm d}}{{\mathrm d} t}\frac{\partial}{\partial\dot q^i}\label{ELop} \end{equation} is the $i$-th Euler-Lagrange operator with respect to the coordinate system used. 
Under $\Phi$, the value of the action transforms as \begin{eqnarray} S(\mathcal C)\longrightarrow S(\mathcal C_\varepsilon)&=&\int_{t_{1\varepsilon}}^{t_{2\varepsilon}}L[q_\varepsilon(t_\varepsilon)]\,{\mathrm d} t_\varepsilon\,\label{transformedS} \end{eqnarray} where $[q_\varepsilon(t_\varepsilon)]$ stands for the triple of arguments $(t_\varepsilon,q_\varepsilon(t_\varepsilon),\dot q_\varepsilon(t_\varepsilon))$. To the first order in $\varepsilon$, the induced variation of the action is (see \ref{app:var}) \begin{equation} \delta S(\mathcal C)=S(\mathcal C_\varepsilon)-S(\mathcal C)=\varepsilon\int_{t_1}^{t_2}\bigg(\mathsf X^{[1]}(L)+\dot\tau L\bigg)[q(t)]\,{\mathrm d} t.\label{transS} \end{equation} Now, let us introduce some smooth point function $B$ (given up to a meaningless additive constant) and contemplate the difference \begin{equation} \mathcal D(\mathcal C)=\Big[\varepsilon B(t,q(t))\Big]_{t_1}^{t_2}-\delta S(\mathcal C)=\varepsilon\int_{t_1}^{t_2}R[q(t)]\,{\mathrm d} t,\label{difference} \end{equation} where we have introduced the kinematic function \begin{equation} R(t,q,\dot q)=\dot B-\mathsf X^{[1]}(L)-\dot\tau L\label{RT} \end{equation} that will be called the \emph{Rund-Trautman function} defined by the Lagrangian $L$, the transformation $\Phi$ and the \emph{boundary term} $B$. Rearranging its right-hand side (see \ref{app:I}), the last equation can be rewritten as \begin{equation} R=\dot I-(\xi^i-\dot q^i\tau)\mathsf E_i(L),\label{relation} \end{equation} where we have introduced the quantity \begin{equation} I(t,q,\dot q)= B+H\tau-p_i\xi^i=B-p_\mu\xi^\mu,\label{FI} \end{equation} with $p_i=\partial L/\partial{\dot q^i}$ and $p_0=-H=L-p_i\dot q^i$ the components of the extended momentum. From \eref{relation}, one sees that $R$ is the rate of change of $I$ along the motions.
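The identity \eref{relation} can be verified symbolically for one degree of freedom and a generic first-order Lagrangian (a SymPy sketch of ours; $v$ and $w$ stand for $\dot q$ and $\ddot q$ treated as independent symbols):

```python
import sympy as sp

t, q, v, w = sp.symbols('t q v w')   # time, position, velocity, acceleration
tau, xi, B = (sp.Function(f)(t, q) for f in ('tau', 'xi', 'B'))
L = sp.Function('L')(t, q, v)        # generic first-order Lagrangian

def Dt(F):
    """Total time derivative of a kinematic function F(t, q, v)."""
    return sp.diff(F, t) + v*sp.diff(F, q) + w*sp.diff(F, v)

# first prolongation of X = tau d_t + xi d_q, acting on L
X1L = tau*sp.diff(L, t) + xi*sp.diff(L, q) + (Dt(xi) - v*Dt(tau))*sp.diff(L, v)

R = Dt(B) - X1L - Dt(tau)*L          # Rund-Trautman function, cf. (RT)

p = sp.diff(L, v)                    # momentum
H = p*v - L                          # energy function
Inv = B + H*tau - p*xi               # the quantity I of (FI)
EulerL = sp.diff(L, q) - Dt(p)       # Euler-Lagrange expression E(L)

# equation (relation): R = dI/dt - (xi - v tau) E(L)
assert sp.expand(R - (Dt(Inv) - (xi - v*tau)*EulerL)) == 0
```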
The fact that the left-hand side of \eref{difference} has a coordinate-free meaning suffices to say that $R{\mathrm d} t$ in the right-hand side is invariant under a change of extended coordinates. The same is true of $I$, since $B$ and the contraction $p_\mu\xi^\mu$ are scalars. \subsection{Harmonization with the Lagrangian gauge freedom}\label{subsec:gauge} One knows that the dynamics is invariant under the addition of a total differential ${\mathrm d} G$ to the form $L{\mathrm d} t$, where $G$ is a point function. This addition corresponds to the gauge transformation $L\to \widetilde L= L+\dot G$ of the Lagrangian. Accordingly, the action gauge-transforms as \begin{equation*} S(\mathcal C)\longrightarrow \widetilde S(\mathcal C)=\int_{t_1}^{t_2}\widetilde L[q(t)]\,{\mathrm d} t=S(\mathcal C)+\Big[G(t,q(t))\Big]_{t_{1}}^{t_{2}} \end{equation*} and its variation under $\Phi$ as \begin{equation*} \delta S(\mathcal C)\longrightarrow\delta \widetilde S(\mathcal C)=\widetilde S(\mathcal C_\varepsilon)-\widetilde S(\mathcal C). \end{equation*} As is shown in \ref{app:gauge}, one obtains, to the first order in $\varepsilon$: \begin{equation} \delta \widetilde S(\mathcal C)=\delta S(\mathcal C)+\varepsilon\,\Big[\mathsf X (G)(t,q(t))\Big]_{t_1}^{t_2}\,.\label{vardeltaS} \end{equation} The difference $\mathcal D(\mathcal C)$ is rendered gauge-invariant if one endows $B$ with the compensating gauge transformation law \begin{equation*} B\longrightarrow\widetilde B=B+\mathsf X(G). \end{equation*} Indeed, it is easily verified that \begin{equation*} \Big[\varepsilon \widetilde B(t,q(t))\Big]_{t_1}^{t_2}-\delta\widetilde S(\mathcal C)=\Big[\varepsilon B(t,q(t))\Big]_{t_1}^{t_2}-\delta S(\mathcal C).
\end{equation*} Consequently, the Rund-Trautman function is gauge invariant ($\widetilde R=R$), and so is $I$, as follows from the gauge transformation rule of the extended momenta: \begin{equation*} \widetilde I=\widetilde B-\tilde p_\mu\xi^\mu=\widetilde B-(p_\mu+\partial_\mu G)\xi^\mu=\widetilde B-p_\mu\xi^\mu-\mathsf X(G)=B-p_\mu\xi^\mu=I. \end{equation*} \subsection{Trivialization of the formalism through adapted gauge and coordinates}\label{subsec:triv} Locally, one can always choose a point function $G$ verifying the gauge condition $B+\mathsf X (G)=0$ in order to cancel the boundary term. Such a gauge will be said to be adapted to the pair formed by the transformation and the boundary term. This is most easily done by using an adapted coordinate system $(q^{\bar\mu})=(\bar t,q^{\bar i})$ such that $\mathsf X=\partial_{\bar\alpha}$. Indeed, the gauge condition becomes $B+\partial_{\bar\alpha}G=0$ and an integration of this expression with respect to $q^{\bar\alpha}$ suffices to obtain a suitable point function $G$. Once this is done, the Lagrangian $\overline L$ expressed in the adapted system and defined by $\overline L{\mathrm d}\bar t=L{\mathrm d} t+{\mathrm d} G$ is given in an adapted gauge as well. The Rund-Trautman function is now simply $\overline R=-\partial_{\bar\alpha}\overline L$ and $I$ reduces to \begin{equation*} I=-\overline p_{\bar\alpha}=-\frac{\partial \overline L}{\partial\mathring q^{\bar\alpha}}\,, \end{equation*} where, in order to avoid any confusion, the empty bullet denotes differentiation with respect to the adapted time $\bar t$. Hence, equation \eref{relation} becomes, along the motions, tantamount to the Euler-Lagrange equation $\mathsf E_{\bar\alpha}(\overline L)=0$. Actually, $\overline R$ measures the dependence of the adapted Lagrangian $\overline L$ on $q^{\bar\alpha}$ as well as the rate of change of the momentum $\overline p_{\bar\alpha}$ along the motions (up to a sign).
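The gauge invariance $\widetilde R=R$ established above can also be confirmed symbolically for one degree of freedom (a SymPy sketch; $v$ and $w$ again stand for $\dot q$ and $\ddot q$ treated as independent symbols):

```python
import sympy as sp

t, q, v, w = sp.symbols('t q v w')
tau, xi, B, G = (sp.Function(f)(t, q) for f in ('tau', 'xi', 'B', 'G'))
L = sp.Function('L')(t, q, v)

def Dt(F):
    """Total time derivative of a kinematic function F(t, q, v)."""
    return sp.diff(F, t) + v*sp.diff(F, q) + w*sp.diff(F, v)

def rund_trautman(Lag, Bnd):
    """R = Bdot - X^[1](Lag) - taudot Lag, cf. (RT)."""
    X1L = (tau*sp.diff(Lag, t) + xi*sp.diff(Lag, q)
           + (Dt(xi) - v*Dt(tau))*sp.diff(Lag, v))
    return Dt(Bnd) - X1L - Dt(tau)*Lag

XG = tau*sp.diff(G, t) + xi*sp.diff(G, q)          # X(G)

# gauge transformation: L -> L + Gdot together with B -> B + X(G)
assert sp.expand(rund_trautman(L + Dt(G), B + XG) - rund_trautman(L, B)) == 0
```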
\subsection{Symmetry or not symmetry} One says that the transformation $\Phi$ is a Noether point symmetry (NPS) of the problem if there exists a boundary term $B$ such that $\mathcal D(\mathcal C)$ vanishes for any evolution $\mathcal C$. In this case, $R$ identically vanishes and thus the quantity $I$ is conserved along the motions. According to Paragraph \ref{subsec:gauge}, an NPS is, as expected, a gauge-invariant property generating a gauge-invariant constant of motion. In particular, an NPS leaves the action integral invariant in an adapted gauge, and the symmetry is said to be strict in this case. If $\Phi$ is an NPS then its meaning becomes transparent as seen through the lens of the adapted Lagrangian $\overline L$ introduced in Paragraph \ref{subsec:triv}. Indeed, it says that $q^{\bar\alpha}$ is a cyclic coordinate\footnote{In addition, $p_{\bar\alpha}$ being a partial derivative of $\overline L$, it does not depend on $q^{\bar\alpha}$ either. This point demonstrates that an NPS $\Phi$ is a symmetry of the constant of motion $I$.}, or equivalently that its conjugate momentum is conserved. In general, however, $R$ is interpretable as the rates of change of both the `asymmetricity' and $I$. Therefore, seeking `almost' vanishing Rund-Trautman functions can be a good approach to finding `almost' conserved quantities. In the adiabatic context, the almostness in question will become clearer in Section \ref{sec:adiabatic}. But first, we apply the theory discussed in this section to general one-dimensional Lagrangian problems. \section{Application to one-dimensional problems}\label{sec:applications} \subsection{General aspects} We consider in this section the standard Lagrangian of a unit-mass particle experiencing a potential $V(t,q)$ along a straight line: \begin{equation*} L=\frac12\,\dot q^2-V(t,q). \end{equation*} Let us introduce the generator $\mathsf X=\tau\partial_t+\xi\partial_q$ of a point transformation $\Phi$ as well as a boundary term $B$.
The corresponding Rund-Trautman function \eref{RT} takes the form \begin{equation*} R=R_3(t,q)\dot q^3+R_2(t,q)\dot q^2+R_1(t,q)\dot q+R_0(t,q), \end{equation*} where \begin{equation} \begin{array}{ll} R_3=\displaystyle\frac12\,\partial_q\tau,\phantom{\bigg|} & R_2=\displaystyle\frac12\,\partial_t\tau-\partial_q\xi, \\ R_1=\partial_qB+V\partial_q\tau-\partial_t\xi,\phantom{\bigg|}\qquad & R_0=\partial_t(B+\tau V)+\xi\partial_qV. \end{array}\label{Rs} \end{equation} The three quantities $R_3$, $R_2$ and $R_1$, taken in this order, are found to identically vanish iff $\tau$, $\xi$ and $B$ have the form \begin{equation} \begin{array}{ll} \tau=\tau(t), & \xi=\displaystyle\frac12\,\dot\tau(t) q+\psi(t), \\ B=\displaystyle\frac14\,\ddot\tau(t) q^2+\dot\psi(t) q+\chi(t), \end{array}\label{taupsichi} \end{equation} whatever $V$ may be. From now on, $\tau$, $\xi$ and $B$ have these forms in which the three functions $\tau(t)$, $\psi(t)$ and $\chi(t)$ are arbitrary but fixed. Therefore, the Rund-Trautman function $R$ reduces to the point function $R_0$. We will restrict ourselves to the case $\tau\ne 0$, i.e.\ to asynchronous transformations. Reversing $\Phi$ if necessary, one can suppose the transformation future-oriented ($\tau>0$) without loss of generality. It is shown in \ref{app:changevar} that the change of extended coordinates $(t,q)\to(T,Q)$, with \begin{equation} T=\int^t\frac{{\mathrm d} t}{\tau}\qquad\text{and}\qquad Q=\frac{q}{\sqrt{\tau}}-\int^t\frac{\psi}{\tau^{3/2}}\,{\mathrm d} t,\label{TQ} \end{equation} reduces $\Phi$ to the translation $(T,Q)\to(T+\varepsilon,Q)$, and its generator $\mathsf X$ to $\partial_T$. 
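That the change of coordinates \eref{TQ} rectifies the generator can be checked on a concrete example (a SymPy sketch; the particular choices $\tau=1+t^2$ and $\psi=t$ are ours, made so that the integrals in \eref{TQ} have closed forms):

```python
import sympy as sp

t, q = sp.symbols('t q', positive=True)
tau = 1 + t**2                                    # an asynchronous tau(t) > 0
psi = t
xi = sp.Rational(1, 2)*sp.diff(tau, t)*q + psi    # xi from (taupsichi)

T = sp.integrate(1/tau, t)                        # adapted time of (TQ)
Q = q/sp.sqrt(tau) - sp.integrate(psi/tau**sp.Rational(3, 2), t)

def X(F):
    """Generator X = tau d_t + xi d_q acting on a point function."""
    return tau*sp.diff(F, t) + xi*sp.diff(F, q)

# in the adapted coordinates, X reduces to d/dT
assert sp.simplify(X(T) - 1) == 0
assert sp.simplify(X(Q)) == 0
```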
Using the adapted coordinates, one obtains, after some straightforward but lengthy calculations (see \ref{app:V}), that the expression of $R_0$ in \eref{tauR0} is equivalent to the existence of a function $W(Q)$ such that \begin{equation} V=\frac{1}{\rho^2}\,W(Q)-\frac{\ddot\rho}{2\rho}\, q^2-\frac{1}{\rho}\frac{{\mathrm d}(\rho^2\dot\alpha)}{{\mathrm d} t}\,q+\beta+\frac{1}{\rho^2}\int^T\hspace{-1mm}\rho^2R_0\,{\mathrm d} T',\label{Valternatif} \end{equation} where \begin{equation} \rho=\sqrt{\tau}\,,\qquad\alpha=\int^t\frac{\psi}{\tau^{3/2}}\,{\mathrm d} t\qquad\text{and}\qquad \beta=\frac{\psi^2}{2\tau^2}-\frac{\chi}{\tau}\label{rho_beta} \end{equation} are functions of the time only. The function $W(Q)$ is defined up to a meaningless additive constant, since an alteration $W(Q)\to W(Q)+\text{cst.}$ can be compensated by a redefinition of $\chi$ through $\chi\to \chi+\text{cst.}$ Actually, $\chi$ has no physical meaning; its role is only to ensure the gauge symmetry according to which adding to the potential an explicit function of the time only does not affect the dynamics. Now, it is shown in \ref{app:Lambda} that the gauge condition $B+\mathsf X(G)=0$ is fulfilled if one chooses \begin{equation} G=-\frac{\dot\rho}{2\rho}\,q^2-\rho\dot\alpha q+\int^t\Bigg(\frac12\,\rho^2\dot\alpha^2+\beta\Bigg){\mathrm d} t.\label{Lambda} \end{equation} Then, following the method described in Paragraph \ref{subsec:triv} (see \ref{app:L}), one obtains the adapted Lagrangian \begin{equation} \overline L(T,Q,\mathring Q)=\frac 12\, \mathring Q^2-W(Q)-\int^T\hspace{-1mm}\rho^2 R_0\,{\mathrm d} T,\label{newlag} \end{equation} where the empty bullet symbolizes the total derivation with respect to the new time $T$. The integrand of the indefinite integral in \eref{newlag} is actually the new Rund-Trautman function and is, as expected, the opposite of the partial derivative of the new Lagrangian with respect to $T$.
Moreover, the new energy function coincides with $I$: \begin{equation*} I=\overline H=\frac 12\, \mathring Q^2+W(Q)+\int^T\hspace{-1mm}\rho^2 R_0\,{\mathrm d} T. \end{equation*} From the above discussion, we deduce that the variational problem admits an asynchronous NPS (ANPS) iff the potential can be written in the form \begin{equation} V=\frac{1}{\rho^2}\,W\Bigg(\frac{q}{\rho}-\alpha\Bigg)-\frac{\ddot\rho}{2\rho}\, q^2-\frac{1}{\rho}\frac{{\mathrm d}(\rho^2\dot\alpha)}{{\mathrm d} t}\,q+\beta,\label{sym_V} \end{equation} with $\rho>0$, $\alpha$ and $\beta$ three functions of time. In this case, by inverting the equalities \eref{rho_beta} one constructs the functions $\tau$, $\psi$, $\chi$ which define, through \eref{taupsichi}, the vector field of the ANPS along with the boundary term. In the adapted coordinates \eref{TQ}, the problem is transformed into the conservative problem of a particle experiencing a time-independent potential $W$. From this new viewpoint, everything becomes transparent: the symmetry ($T$-invariance) and the first integral (the new energy). \subsection{Noether point symmetries of quadratic potentials}\label{subsecquadra} Let us apply the above considerations to the frequently encountered quadratic potentials \begin{equation} V=\frac12\,a(t)q^2+b(t)q,\label{quadratic} \end{equation} where $a(t)$ and $b(t)$ are some functions of the time. Since the adapted coordinate $Q$ is necessarily linear in $q$, the only candidates for the function $W(Q)$ in \eref{sym_V} clearly have the form \begin{equation*} W(Q)=\frac 12\,C_2Q^2+C_1Q+C_0\,, \end{equation*} where $C_2$, $C_1$ and $C_0$ are constants.
Now, substituting $W(Q)$ in \eref{sym_V} by the above expression and identifying the result with \eref{quadratic}, one obtains that $\rho>0$, $\alpha$ and $\beta$ must satisfy \numparts\begin{eqnarray}\label{system} \rho^3(\ddot\rho+a\rho)&=&C_2\,,\label{systemA}\\ \rho^3\bigg[\frac{{\mathrm d}^2}{{\mathrm d} t^2}\big(\alpha\rho\big)+a\alpha\rho+b\bigg]&=&C_1\,,\\ \beta\rho^2+\frac12\,C_2\alpha^2-C_1\alpha &=&C_0\,. \end{eqnarray}\endnumparts Fixing the three constants in the right-hand sides at arbitrary values, any solution of the differential equations \cref{system} gives an ANPS with its boundary term. Actually, the system of equations \cref{system} says that the left-hand sides must be constant, i.e.\ that their derivatives are zero. Differentiating them, one obtains \numparts\begin{eqnarray}\label{eqtaupsichi} \frac14\,\dddot\tau+a\dot\tau+\frac12\,\dot a\tau&=&0,\label{eqtau}\\ \ddot\psi+a\psi+\frac32\,b\dot\tau+\dot b\tau&=&0,\label{eqpsi}\\ \dot\chi+b\psi&=&0.\label{eqchi} \end{eqnarray}\endnumparts But these equations are precisely the necessary and sufficient conditions for $\Phi$ to be an NPS of $S$ with boundary term $B$. Indeed, inserting the expression \eref{quadratic} of the potential in the expression of $R_0$ in \eref{Rs} and taking into account the relations \eref{taupsichi}, one has \begin{equation*} R_0=\bigg[\frac14\,\dddot\tau+a\dot\tau+\frac12\,\dot a\tau\bigg]q^2+\bigg[\ddot\psi+a\psi+\frac32\,b\dot\tau+\dot b\tau\bigg]q+\big[\dot\chi+b\psi\big]. \end{equation*} The general solution of \eref{eqtau}, seen as a differential equation in $\tau$, depends linearly on three parameters, following which the general solution of \eref{eqpsi}, seen as a differential equation in $\psi$, depends linearly on two additional parameters.
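The displayed decomposition of $R_0$ can be confirmed symbolically (a SymPy sketch reproducing the three brackets whose vanishing gives \eref{eqtau}--\eref{eqchi}):

```python
import sympy as sp

t, q = sp.symbols('t q')
tau, psi, chi, a, b = (sp.Function(f)(t) for f in ('tau', 'psi', 'chi', 'a', 'b'))

V = a*q**2/2 + b*q                                    # quadratic potential
xi = sp.diff(tau, t)*q/2 + psi                        # forms (taupsichi)
B = sp.diff(tau, t, 2)*q**2/4 + sp.diff(psi, t)*q + chi

# R0 = d_t(B + tau V) + xi d_q V, cf. (Rs)
R0 = sp.diff(B + tau*V, t) + xi*sp.diff(V, q)

target = ((sp.diff(tau, t, 3)/4 + a*sp.diff(tau, t) + sp.diff(a, t)*tau/2)*q**2
          + (sp.diff(psi, t, 2) + a*psi + sp.Rational(3, 2)*b*sp.diff(tau, t)
             + sp.diff(b, t)*tau)*q
          + sp.diff(chi, t) + b*psi)

assert sp.expand(R0 - target) == 0
```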
Alternatively stated, the Noether point symmetry group of $S$ for quadratic potentials is five-dimensional \cite{Prince}, generated by three asynchronous transformations $(\tau\ne 0)$ and two synchronous ones $(\tau=0)$. It has been shown in Reference \cite{Leone2017} that the latter two actually manifest the linearity of the equation of motion. Conversely, it is also clear that the quadratic potentials are the only ones which allow for synchronous NPS (SNPS)\footnote{If $\tau=0$ then, taking into account \eref{taupsichi}, the expression of $R_0$ in \eref{Rs} becomes $-\psi\partial_qV-\ddot\psi q-\dot\chi$ and it is clear that it can identically vanish only if $V$ is a quadratic potential.}. While ANPS lead to first integrals quadratic in the velocities, which are `energy-like', SNPS lead to first integrals linear in the velocities, which are `momentum-like'. \subsection{The particular case of the time-dependent harmonic oscillator}\label{HO} The harmonic oscillator with time-dependent frequency $\omega(t)$ deserves special attention. Here, the potential $V$ has the form \eref{quadratic} in which $a(t)=\omega^2(t)$ and $b(t)=0$. For $\Phi$ to be an ANPS, it suffices to take $\psi=\chi=0$, and $\tau$ a solution of \eref{eqtau} or, equivalently, $\rho$ a solution of Ermakov's equation \cite{Ermakov} \eref{systemA} for some value of $C_2$. The conserved quantity $I$ is now the so-called Ermakov-Lewis invariant \cite{Ermakov,Lewis1967} \begin{equation} I=\overline H=\frac12\,\mathring Q^2+\frac12\,C_2Q^2=\frac12(\rho\dot q-\dot\rho q)^2+\frac{C_2}{2\rho^2}\,q^2.\label{ELinv} \end{equation} The new problem is then easily solved for $Q(T)$, and $q(t)$ follows immediately.
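The conservation of \eref{ELinv} is easily probed numerically (a self-contained sketch: the modulated frequency, the initial data, $C_2=1$ and the fixed-step RK4 integrator are our own choices; $\rho$ is obtained by integrating Ermakov's equation $\ddot\rho=-\omega^2\rho+C_2/\rho^3$ alongside the motion):

```python
import math

C2 = 1.0

def omega2(t):
    """Square of a modulated frequency (an arbitrary illustrative choice)."""
    return (1.0 + 0.1*math.sin(t))**2

def deriv(t, y):
    q, vq, rho, vr = y
    # q follows the equation of motion, rho follows Ermakov's equation
    return (vq, -omega2(t)*q, vr, -omega2(t)*rho + C2/rho**3)

def rk4_step(t, y, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h/2, [a + h/2*b for a, b in zip(y, k1)])
    k3 = deriv(t + h/2, [a + h/2*b for a, b in zip(y, k2)])
    k4 = deriv(t + h, [a + h*b for a, b in zip(y, k3)])
    return [a + h/6*(b1 + 2*b2 + 2*b3 + b4)
            for a, b1, b2, b3, b4 in zip(y, k1, k2, k3, k4)]

def ermakov_lewis(y):
    q, vq, rho, vr = y
    return 0.5*(rho*vq - vr*q)**2 + 0.5*C2*(q/rho)**2

t, h = 0.0, 1e-3
y = [1.0, 0.0, 1.0, 0.0]          # q, qdot, rho, rhodot at t = 0
I0 = ermakov_lewis(y)
for _ in range(10000):            # integrate up to t = 10
    y = rk4_step(t, y, h)
    t += h
assert abs(ermakov_lewis(y) - I0) < 1e-6*abs(I0)
```

Note that the invariance holds exactly for any $\omega(t)$, however fast it varies; only the integration error is responsible for the residual drift.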
In particular, if $C_2>0$, the general solution reads \begin{equation} q(t)=\sqrt{2I\tau}\,\cos\Bigg(\sqrt{C_2}\int_0^t\frac{{\mathrm d} t}{\tau}+\varphi_0\Bigg).\label{general_solution} \end{equation} \subsubsection*{An example.} An interesting case occurs when the period of the oscillator is a quadratic function of the time, that is, when the frequency has the form $\omega(t)=(1+2\gamma t+\delta t^2)^{-1}$ in a suitable unit of time, $\gamma$ and $\delta$ being two constants. Indeed, since Equation \eref{eqtau} can be rewritten \begin{equation} \frac14\,\dddot\tau+\omega\,\frac{{\mathrm d}}{{\mathrm d} t}\big(\omega\tau\big)=0,\label{eqtauHO} \end{equation} it is clear that setting $\tau=\omega^{-1}$ cancels both terms of the left-hand side separately. Hence, $\tau=\omega^{-1}$ is a solution of \eref{eqtauHO}, with the integration constant $C_2=1-\gamma^2+\delta$. Historically, this frequency profile was considered, for example, by Fock in a quantum mechanical context \cite{Fock}. \section{Adiabaticity and almost Noether point symmetries}\label{sec:adiabatic} \subsection{General considerations} As we saw above, the Rund-Trautman function \eref{RT} measures the rate of change of the quantity $I$ in \eref{FI} along the motions. If $\Phi$ is not an NPS of $S$ with boundary term $B$, it can nevertheless be expected to be `almost' such a symmetry, the almostness in question being quantifiable with respect to some quantities that we have every reason to regard as small. In the usual perturbation theory, they measure the weakness of the couplings between a system and its environment. In the adiabatic theory, they are the slow rates of change of the time-dependent parameters on which the dynamics depends. For simplicity, suppose that the system depends on a single tunable parameter $\lambda$ and that the observer wants to make it pass from an initial value $\lambda_0$ to a final value $\lambda_1$.
To this end, he chooses which evolution pattern $\lambda(s)$ he will make the parameter follow to reach $\lambda_1=\lambda(1)$ from $\lambda_0=\lambda(0)$. At this stage, $s\in[0,1]$ is only an abstract evolution parameter whose variations $\Delta s$ will be proportional to $\Delta t$. Then, he still needs to decide how long the process will take, or, putting it differently, to fix the proportionality factor $\eta=\Delta s/\Delta t>0$ at a certain value. Hence, if the beginning of the evolution is taken at $t=0$ then one has $s=\eta t$, the final time is $t=\eta^{-1}$, and at any intermediate instant $t$ the value of the parameter is $\lambda(\eta t)$. The smaller $\eta$ is, the more slowly $\lambda$ evolves, and the adiabatic regime is reached in the limit $\eta\to0$. Obviously, it is only an unattainable horizon, and one speaks of adiabaticity as soon as $\eta$ can be considered very small compared to the typical frequencies of the dynamics \cite{Goldstein,Arnold}. For simplicity again, we will remain in the situation where the dynamics is entirely embodied in a Lagrangian whose explicit time dependence is, by assumption, only realized via $\lambda$ (and not via its derivatives). Hereafter, a function $F(s,q,\dot q;\eta)$ will be said to be formally of the order $\nu$ ($\nu\geqslant 0$) if, when $\eta$ approaches 0, the ratio $F/\eta^\nu$ converges to a finite function of $s$, $q$, $\dot q$. The reason why $s$ is privileged over $t$ is clear, since $s$ is bounded unlike $t$ which goes to infinity when $\eta$ approaches 0. For the same reason, along the motion, the $t$-derivative of $q$ is expected to remain bounded whereas its $s$-derivative would reach infinite values in the same limit $\eta\to 0$. Now, if a couple $(\Phi,B)$ is such that the Rund-Trautman function \eref{RT} is formally of some order $\nu+1$, then the rate of change of $I$ along the motion is formally of the order $\nu+1$ with respect to $t$, and $\nu$ with respect to $s$.
Hence, it is expected that the discrepancy between the initial and final values of $I$ along a motion taking place between $t=0$ and $t=\eta^{-1}$ is an $\mathrm O(\eta^{\nu})$, i.e.\ that $I$ is an adiabatic invariant of the $\nu$-th order. For the sake of illustration, we will develop this idea in the paradigmatic problem of a harmonic oscillator with a slowly varying frequency. \subsection{The time-dependent harmonic oscillator} We consider a harmonic oscillator with a slowly varying frequency $\omega(\eta t)$, where $\eta$ is very small as compared to the values taken by $\omega$ between $t=0$ and $t=\eta^{-1}$. Treating this example seems at first glance surprising, since it is nothing but an application of Paragraph \ref{HO}, from which one formally knows constants of motion which are in some sense adiabatic invariants to all orders. But, in general, the form of $\omega(\eta t)$ does not allow for a solution of \eref{eqtauHO} having a simple analytic expression. This is the reason why it is more useful to seek approximate solutions. An integration constant $C_2=1$ would transform the problem into the one of a harmonic oscillator with unit frequency. So, let us search for an approximate solution $\rho=\sqrt\tau$ of \begin{equation*} \rho^3(\ddot\rho+\omega^2(\eta t)\rho)=1.
\end{equation*} Denoting the total derivative with respect to $s=\eta t$ by a prime, the left-hand side can be transformed to yield \begin{equation} \frac{\eta^2}{2}\Bigg(\tau\tau''-\frac12\,\tau'^2\Bigg)+\omega^2(s)\tau^2=1.\label{etatau} \end{equation} Then, if we insert a perturbative expansion \begin{equation*} \tau(s)=\tau_0(s)+\eta^2\tau_1(s)+\dots+\eta^{2k}\tau_{k}(s)+\dots \end{equation*} in \eref{etatau}, one obtains, after an identification of its two sides, the formal expressions \begin{eqnarray} \tau_0&=\omega^{-1}\nonumber\\ \tau_1&=\frac{1}{4\omega}\Bigg(\frac12\,{\tau'_0}^2-\tau_0\tau''_0\Bigg)\label{composantes}\\ \tau_{k}&=\frac{1}{4\omega}\sum_{i=1}^k\Bigg(\frac12\,\tau'_{i-1}\tau'_{k-i}-\tau_{i-1}\tau''_{k-i}\Bigg)-\frac{\omega}{2}\sum_{i=1}^{k-1}\tau_{i}\tau_{k-i}\qquad(k\geqslant 2)\nonumber \end{eqnarray} The $\tau_k$s are thereby deduced step by step, provided that $\omega$ is sufficiently differentiable (each $\tau_k$ is expressible in terms of $\omega^{-1}$ and its $k$ first derivatives). The quantity $I$ thus admits a formal expansion \begin{equation} I=I_0+\eta I_1+\eta^2I_2+\dots+\eta^kI_k+\dots\label{I_expansion} \end{equation} in which the components of even orders are \begin{equation*} I_0=\frac{H}{\omega}\qquad,\qquad I_{2k}=H\tau_k+\frac14\,q^2\tau''_{k-1}\qquad (k\geqslant 1) \end{equation*} while the components of odd orders are \begin{equation*} I_{2k+1}=-\frac12\,q\dot q\tau'_k\qquad(k\geqslant 0) \end{equation*} A truncation \begin{equation*} I_{(k)}=I_0+\eta I_1+\eta^2I_2+\dots+\eta^kI_k\qquad(k\geqslant 0) \end{equation*} of $I$ is formally of the order $k$ since its $s$-derivative, depending on its parity, is given by \begin{equation*} I'_{(2k)}=\frac12\,\eta^{2k}\big(\dot q^2-\omega^2q^2\big)\tau'_k\qquad,\qquad I'_{(2k+1)}=-\frac12\,\eta^{2k+1}q\dot q\tau''_k \end{equation*} along the motion. Consider an increment $\Delta s$ such that $\eta/\omega\ll\Delta s\ll 1$ on $[s,s+\Delta s]$.
By the assumption $\Delta s\ll 1$, the slow quantities $\tau'_k$ and $\tau''_k$ are supposed to be almost constant on this interval. The derivatives of the truncations $I_{(k)}$ along the motion are thus, at $s$, approximately \begin{equation*} I'_{(2k)}\approx\frac12\,\eta^{2k}\big\langle\dot q^2-\omega^2q^2\big\rangle\tau'_k\qquad,\qquad I'_{(2k+1)}\approx-\frac12\,\eta^{2k+1}\big\langle q\dot q\big\rangle\tau''_k, \end{equation*} where the averages are taken over $[s,s+\Delta s]$. Moreover, according to the general solution \eref{general_solution}, since $\Delta t=\eta^{-1}\Delta s\gg\omega^{-1}$, the phases of $\omega q$ and $\dot q$ oscillate rapidly between $s$ and $s+\Delta s$ whereas their amplitudes are, to the lowest order, $\sqrt{2H}+\mathrm O(\eta)$. Consequently, the averages $\langle \dot q^2- \omega^2q^2\rangle$ and $\langle q\dot q\rangle$ are of the order $\mathrm O(\eta)$ and the truncations $I_{(k)}$ are adiabatic invariants of an order close to $k+1$. \subsection{A first explicit example: the frequency following a power law} Suppose that the frequency profile has the form $\omega(s)=(1+\gamma s)^{r-1}$, with $\gamma$ and $r$ two constants. Using the recursive scheme \eref{composantes}, it can be easily verified that the $\tau_k$s have the form \begin{equation*} \tau_k=a_k\gamma^{2k}(1+\gamma s)^{1-(2k+1)r} \end{equation*} in which the $a_k$s are coefficients ($a_0=1$). We can express $a_k$ in terms of $a_{k-1}$ and $k$ only by exploiting the equality \eref{eqtauHO}, which yields the relations \begin{equation} \tau'''_{k-1}+4\omega(\omega\tau_k)'=0.\label{recursion_derivee} \end{equation} We find \begin{equation*} a_k=\frac{2k-1}{8k}\Big[1-(2k-1)^2r^2\Big]a_{k-1}\,. \end{equation*} The $a_k$s form a divergent hypergeometric sequence admitting the closed form \begin{equation*} a_k=\frac{(2k-1)!!}{k!\,8^k}\prod_{i=1}^k\Big[1-(2i-1)^2r^2\Big].
\end{equation*} Finally, the components of the adiabatic expansion \eref{I_expansion} are \begin{eqnarray*} I_{2k}&=\frac{\tau_k}{2}\bigg[\dot q^2-\frac{(2k+1)r-1}{(2k-1)r+1}\,\omega^2q^2\bigg],\\ I_{2k+1}&=\frac{\tau_k}{2}\frac{(2k+1)r-1}{1+\gamma s}\,\gamma q\dot q. \end{eqnarray*} \subsection{A second explicit example: the frequency following an exponential law} \begin{figure} \centering \includegraphics[scale=.75]{Plot.eps} \caption{Numerical analysis (color online). Relative changes of the first six adiabatic invariants --- and of an Ermakov-Lewis invariant $I_{\text{EL}}$ --- between the initial and final states, as functions of the parameter $\eta$, for the harmonic oscillator with a slowly varying frequency $\omega(\eta t)=2^{\eta t}$. The conjugate pairs $(q,\dot q)$ and $(\rho,\dot\rho)$ are integrated using the fourth-order symplectic Runge-Kutta-Nystr\"om algorithm developed by Calvo and Sanz-Serna \cite{Calvo}, with $q(0)=\rho(0)=1$ and $\dot q(0)=\dot\rho(0)=0$ as initial conditions ($t$-time step: $0.001$). Before becoming numerically noisy when too small, $\delta_k$ is approximately proportional to a power $\eta^{\nu_k}$ with $\nu_0=1.06\pm 0.03$, $\nu_1=1.87\pm0.07$, $\nu_2=2.96\pm0.01$, $\nu_3=3.92\pm0.04$, $\nu_4=4.83\pm0.03$ and $\nu_5=6.57\pm0.15$.}\label{courbes} \end{figure} We now consider a frequency profile $\omega(s)=\mathrm{e}^{\gamma s}$, where $\gamma$ is a constant. The recursion scheme \eref{composantes} reveals that the $\tau_k$s have the form \begin{equation*} \tau_k=\frac{b_k\gamma^{2k}}{\omega^{2k+1}} \end{equation*} in which the $b_k$s are coefficients ($b_0=1$). Applying the relations \eref{recursion_derivee}, the $b_k$s are again the elements of a divergent hypergeometric sequence, here given by \begin{equation*} b_k=\frac{(-1)^k}{8^k}\frac{\big[(2k-1)!!\big]^3}{k!}\,.
\end{equation*} The components of the adiabatic expansion \eref{I_expansion} are thus \begin{eqnarray*} I_{2k}&=\frac{\tau_k}{2}\bigg[\dot q^2-\frac{2k+1}{2k-1}\,\omega^2q^2\bigg],\\ I_{2k+1}&=\frac{\tau_k}{2}(2k+1)\gamma q\dot q. \end{eqnarray*} Let us end this example with a numerical experiment in the case of a doubling of the frequency realized exponentially, i.e.\ for $\omega(s)=2^s=\mathrm{e}^{\gamma s}$ with $\gamma=\log(2)$. For a set of values of $\eta$, the equation of motion is integrated with $q(0)=1$ and $\dot q(0)=0$ as initial conditions. Then, the final values of the first six adiabatic invariants $I_{(0)},\dots,I_{(5)}$ are compared with their initial values through the relative differences \begin{equation*} \delta_k=\left|\frac{ I_{(k)}(s=1)-I_{(k)}(s=0)}{I_{(k)}(s=0)}\right|. \end{equation*} We find that $\delta_{k}$ follows, to a good approximation, a power law $\delta_k\propto\eta^{\nu_k}$ with $\nu_k$ close enough to $k+1$ (see figure \ref{courbes}). This illustrates the fact that $I_{(k)}$ is an adiabatic invariant of an order close to $k+1$. \section{Final remarks} In our approach to adiabatic invariance via Noether's theory, we did not apply an averaging procedure to the Rund-Trautman identity over a well-chosen slow variable \cite{Boccaletti}. Also, unlike the works of Neuenschwander \textit{et al} on the subject, we did not assume a certain type of potential admitting an exact Noether symmetry and a convenient Rund-Trautman function to work on \cite{Neuenschwander,NeuenschwanderAJP}. Let us add that the frequency of the harmonic oscillator was assumed to be differentiable a certain number of times. However, in general, a time-dependent frequency is used as a transition between two regimes, and discontinuities in the derivatives must be taken into account at the junctions. The great interest of the historic adiabatic invariant $I_0$ lies in its insensitivity to them.
\section{Introduction} \label{sect:Introduction} Data is considered a key production factor, comparable in importance to labour, capital, and infrastructure. Companies are often in need of data they do not possess, or cannot collect directly. Therefore, general purpose\footnote{See for example DAWEX, Azure Data Catalog, or AWS Data Exchange} and domain specific\footnote{See for example, Openprise, Lotame PDX (marketing), Qlik (business intelligence), or Battlefin (investment information)} data marketplaces (DMs) have appeared with the purpose of building a business of mediating between data-selling and data-buying \emph{companies}. Leading data management platforms and innovative startups\footnote{See for example Snowflake, Cognite, Carto, and Openprise} are also introducing marketplace functionalities into their products. Finally, personal information management systems (PIMS) have answered the call of recent legislative developments in personal data protection by offering data control, portability, and monetization services for \emph{individuals}. Designing and building a successful DM calls for solving a plethora of technology, business, and economics challenges in the context of a complex two-sided market (see \cite{Armstrong06, Rochet06, Rysman09} for an exposition). According to our survey of more than 75 real-world data marketplaces, the most common bootstrapping strategy is for a DM to spend effort and money to attract a sufficient set of data sellers, and then try to convince as many buyers as possible to start purchasing these datasets. Therein lie two fundamental problems: (1) the \emph{dataset pricing problem} for data sellers, and (2) the \emph{dataset purchasing problem} for data buyers. Recent theoretical work at the intersection of computer science and economics has looked into those problems, and has proposed solution concepts and algorithms for them \cite{Agarwal19, Chen19, Chawla19, Koutris15}.
For data sellers, selecting efficient prices requires knowing the level of competition with other data sellers, the willingness to pay of buyers, potential customer lock-ins, and other information that affects prices in digital and non-digital markets. For buyers, the problem of selecting which datasets to buy (problem (2)), given the prices set by sellers (problem (1)), can be further broken down into two interrelated subproblems: (2.a) compute how useful these datasets will be to their AI/ML algorithms, something that can be captured by various accuracy metrics, and (2.b) compute how such accuracy can be converted into monetary gains (via e.g., improved sales, acquisition of new customers, retention of existing ones, etc.). Subproblem (2.b) is probably the easier of the two challenges faced by buyers, since most companies are able to gather historical data about the impact of things like recommendation quality on actual sales \cite{Brovman16}. Subproblem (2.a), on the other hand, is inherently more challenging, since buyers need to have access to the data before they can compute its value for their AI/ML task, but such access is only granted \emph{after} a dataset purchase has taken place -- essentially a chicken-and-egg problem. (2.a) is further exacerbated when the buyer can, or has to, buy more than one dataset in order to improve the accuracy of its AI/ML algorithm. With $N$ available datasets, a buyer has $O(2^N)$ data purchase options, each one with a cost equal to the sum of individual dataset prices and a value defined by the maximum accuracy of the AI/ML algorithm operating over the aggregate data. In theoretical works, the value of any subset of datasets for a data buyer is considered known \textit{a priori} \cite{Shen16}. In reality, however, things are completely different \cite{Hubert18}.
In almost all of the 75 DMs that we have surveyed, data sellers provide only a description of their datasets, a price, and sometimes an outdated sample, and buyers have to make purchase decisions with that information alone. A few of these DMs (e.g., Dawex, Airbloc, Wibson or Databroker) also allow buyers to make offers (bids) for data when sellers do not indicate a fixed price, or when buyers are only willing to pay below the asking price. This case suffers as well from the fundamental problem (2.a) of not knowing the value of a dataset before purchasing it. \vspace{2pt} \noindent \textbf{Our contribution:} In this paper we show how to solve the \emph{dataset purchasing problem} for data buyers in a way that approximates the efficiency of an optimal full-information solution while remaining implementable in practice in real-world DMs. Our main contribution is a family of dataset purchase algorithms that we call ``Try Before You Buy'' (or TBYB) that allow data buyers to identify the best datasets to buy with only $O(N)$ information about the accuracy of AI/ML algorithms on individual datasets, instead of the $O(2^N)$ information used by an optimal strategy with full information. Effectively, TBYB needs to know only the accuracy of an AI/ML algorithm on \emph{individual} datasets, and with this information it can approximate the optimal \emph{combination} of datasets that maximizes the profit of the buyer, i.e., the difference between the value extracted from the datasets and the cost of purchasing them. The accuracy of individual datasets can either be precomputed by the DM or the data sellers, and be made available as part of the dataset description (e.g., for some common AI/ML algorithms).
Another alternative is for the DM to use recently developed ``sandboxed'' environments that allow data buyers to experiment with versions of the data without being able to copy or extract them (hence the ``Try'' part in the algorithm's name; Otonomo, Advaneo, Caruso or Battlefin are examples of marketplaces that implement such functionality). Overall, with TBYB our objective is to increase the efficiency of buying datasets online from DMs. We believe that this is key to the growth of both DMs and the data supply side. \vspace{2pt} \noindent \textbf{Our findings:} We compare the performance of TBYB against several heuristics that do not use information about the value of datasets for the particular AI/ML task at hand, as well as against an optimal solution that uses full information. We start with a synthetic evaluation and then validate our conclusions using real-world spatio-temporal data and a use case in predicting demand for taxi rides in metropolitan areas \cite{Andres20}. Our findings are as follows: \begin{itemize} \item TBYB remains close to the optimal for a wide range of parameters, whereas its performance gap against the heuristics increases with the catalog size. \item TBYB is almost optimal when buying more data yields progressively diminishing returns in value for the buyer (i.e., when the value function of the buyer is concave). With convex value functions, it becomes increasingly difficult for TBYB to match the optimal performance. Its performance gap with the heuristics, however, is maintained. \item When the asking price of datasets does not correlate with their actual value for the buyer, the performance advantage of TBYB over the heuristics becomes maximal. When the pricing of data follows their value for buyers, the performance of TBYB is still superior but the gap with the heuristics becomes smaller.
\end{itemize} Overall, our work demonstrates that near-optimal dataset purchasing is realistic in practice and that it could be implemented relatively easily by real-world data marketplaces. \section{Marketplace Model \& Definitions} \label{sect:Definitions} Existing DMs typically list the datasets that they make available and provide for each one a description and a price. In our case we will assume that the DM also provides for each dataset its \emph{accuracy} over a range of common AI/ML tasks. This list cannot and need not be exhaustive, nor does it need to capture all the specificities of the particular AI/ML algorithm that the buyer intends to use. The intention is merely to provide the buyer with a hint, even an \emph{approximate} one, regarding the accuracy that she should expect if she buys a particular dataset. If, on the other hand, a buyer would like to know before buying a dataset the \emph{exact} performance of her algorithm on the data, then the following two options exist. The buyer could submit a description of the task for which she needs data, so that the DM returns a list of candidate sellers, or, alternatively, she could just go over the data catalog and select the best candidates manually. In both cases, the DM can provide a sandboxed environment in which the buyer can submit her \emph{exact} algorithm and get an \emph{exact} answer in terms of the achieved accuracy over each candidate dataset, without being able to see or copy the raw data. To model any of the above cases (see Figure~\ref{Fig:model}), we will denote by $\mathcal{S}$ the set of suitable sellers for the AI/ML task of a particular buyer. We will denote by $d(s)$ the dataset offered by seller $s \in \mathcal{S}$, by $p(s)$ its \emph{price}, and by $a(d(s))$ the \emph{accuracy} that the buyer's AI/ML task can achieve if trained on $d(s)$.
Similarly, for a subset of the sellers $S \subseteq \mathcal{S}$, we will denote by $d(S)$ their aggregated dataset, and by $a(d(S))$ the maximum accuracy that can be achieved using all or a subset of the data in $d(S)$. We will also introduce the \emph{value function} $v(a)$ of the buyer, which indicates the (monetary) value that the buyer can achieve when her AI/ML algorithm reaches an accuracy of $a$. In Sect.~\ref{subsec:TheorSensitivityMUP} we will look at both concave and convex $v(\cdot)$ functions. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Model.jpg} \caption{Reference DM model} \label{Fig:model} \end{figure} When datasets are bought sequentially, we will append to the notation already defined a subscript identifying the round. Hence, $S_n \subseteq \mathcal{S}$ will refer to the set of eligible datasets in round $n$. We will denote by $P_n$ the set of data already under the buyer's control in round $n$. As a result, she is able to achieve an accuracy equal to $a_n=\max_{S'\in 2^{P_n}} a(S')$ and a value $v_n=v(a_n)$. Buyers always seek to maximize their profit; hence a greedy buyer will decide to purchase a dataset $d(s)$ if its marginal value exceeds its price, i.e., if $v(a(P_n \cup d(s))) - v_n \geq p(s)$. In the event that such a value is unknown, the buyer must estimate it, and assume the risk that the purchase may not fulfil expectations. \section{Data Purchase Strategies} \label{sect:Purchase algorithms} In this section we will present a series of data purchase strategies that cover the spectrum from full information, i.e., knowing the accuracy over any subset of the available data, to having no information about accuracy, as is currently the case in most DMs. In between the two extremes lies our proposed algorithm, called TBYB, which relies only on accuracy information about individual datasets.
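As a point of reference, the full-information end of this spectrum can be sketched as a brute-force search over all $2^{|\mathcal{S}|}$ bundles of sellers. The toy price table and value function below are illustrative assumptions, not part of our model:

```python
from itertools import combinations

def optimal_purchase(sellers, price, value_of):
    """Exhaustively search all bundles of sellers for the profit-maximizing
    one (value extracted minus total price paid), keeping the empty bundle
    (zero profit) as the fallback."""
    best_set, best_profit = frozenset(), 0.0
    for r in range(1, len(sellers) + 1):
        for subset in combinations(sellers, r):
            cost = sum(price[s] for s in subset)
            profit = value_of(frozenset(subset)) - cost
            if profit > best_profit:
                best_set, best_profit = frozenset(subset), profit
    return best_set, best_profit

# Toy example (assumed numbers): each extra seller adds 0.4 of value, capped at 1.
price = {"s1": 0.2, "s2": 0.3, "s3": 0.6}
value = lambda S: min(1.0, 0.4 * len(S))
best, profit = optimal_purchase(list(price), price, value)
```

The exponential number of bundles visited by this search is exactly what makes the full-information baseline impractical for real catalogs.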
\subsection{Optimal purchase under full information} \label{subsec:fullinformation} In this case, the buyer knows $a(d(S))$ for any subset $S \subseteq \mathcal{S}$. This allows for an optimal purchase $\mathcal{S}^\star$ that maximizes the profit, i.e., the difference between the value that the buyer extracts from the data, and the cost paid to purchase them: \begin{equation} \label{eq:optimalfull} \mathcal{S}^\star = \arg\max_{S\in 2^\mathcal{S}} \left(v(a(d(S)))-\sum_{s \in S} p(s) \right), \end{equation} subject to $v(a(d(\mathcal{S^\star}))) \geq \sum_{s\in \mathcal{S}^\star} p(s)$. Such a full-information scenario is optimal from a buyer's perspective, but neither scalable nor practical: a DM would need to compute the accuracy of each AI/ML algorithm over $2^{|\mathcal{S}|}$ combinations of eligible datasets. \subsection{Try Before You Buy (TBYB)} \label{subsec:TBYB} For our proposal, we assume that the DM provides the buyer with the accuracy of her algorithm on individual datasets, but not on combinations of them. The algorithm is sequential and greedy in nature, and can run for up to $|\mathcal{S}|$ iterations. We will consider two versions. \subsubsection{Stand-alone version - S-TBYB} \label{subsec:S-TBYB} The marketplace provides $a(d(s))$ for all $s\in\mathcal{S}$. Then the algorithm starts buying datasets in descending order of \emph{expected profit} until a stopping condition is reached. For the first dataset, the profit is not expected but exact, so the best dataset is bought provided that $v(a(d(s)))- p(s) \geq -\lambda \cdot v(a^\star)$, where: \begin{enumerate} \item $a^\star \leq 1$ is the best accuracy that can be delivered by the marketplace. Either the data marketplace provides data buyers with this information, or the buyer makes her best guess, and \item the risk parameter $\lambda$ models the maximum relative admissible loss the buyer is willing to assume in each operation.
The risk assumed in every round will be bounded by $\lambda$ times the potential value of the sourcing operation which, in round $n$, is equal to $v(a^\star) - v_n$. For example, $\lambda = 0.1$ means that the buyer will buy a new dataset $s$ if its price is lower than the marginal value she expects to get plus $10\%$ of the maximum value that she could add by buying new data. \end{enumerate} In some sourcing problems, the marginal value of new data increases as more information is bought. In such a setting, buyers may be required to assume some temporary losses when acquiring the first datasets, in the hope that they provide additional accuracy, and become profitable when fused together with other data. \paragraph{$n$-th iteration} The buyer will proceed as follows: \begin{itemize} \item Identify the best possible dataset $s^\star \in S_n$ such that: \begin{equation} s^\star = \arg\max_{s\in S_n} \left( v(a(d(s)))- p(s) \right) \end{equation} \item Purchase $s^\star$ if its estimated marginal value exceeds its price, and a risk threshold that depends on the remaining value she expects to get out of the operation, i.e., if $v(E\{a(s^\star \cup P_n)\}) - v_n - p(s^\star) \geq -\lambda \cdot (v(a^\star) - v_n)$ \item If the buy condition is met, then $s^\star$ is added to the set of controlled datasets, $P_{n+1} = P_n \cup d(s^\star)$, and the next round starts \item Else, if no dataset in $S_n$ meets this requirement, then the process stops \end{itemize} To estimate $E\{a(s^\star \cup P_n)\}$ the buyer could use the following information: \begin{enumerate} \item The price and accuracy pairs $\langle p(s), a(d(s))\rangle$ for all individual datasets $s\in \mathcal{S}$ \item The accuracy of every possible combination of already purchased datasets, i.e., $a(S')$ for all $S' \in 2^{P_n}$ \end{enumerate} This estimation must be tailored to each specific problem, and it turns out to be non-trivial.
We estimate the relative added accuracy of $s^\star$ by multiplying its individual accuracy $a(s^\star)$, the ratio between the marginal contribution and the individual accuracy of the last purchased dataset, and the remaining accuracy gap $a^\star - a_n$: \begin{equation} \label{eq:ExpectedAccuracyEstimation} E\{a(s^\star \cup P_n)\} = \frac{a_n - a_{n-1}}{a(P_n \setminus P_{n-1})} \cdot a(s^\star) \cdot (a^\star - a_n). \end{equation} \subsubsection{Assisted version - A-TBYB} \label{subsec:A-TBYB} In this case, we will assume the buyer is allowed to ask the marketplace \textit{every round} for the marginal accuracy of any eligible dataset given the data she already owns. \paragraph{$n$-th iteration} The purchase process will be the following: \begin{enumerate} \item Ask the marketplace for complementary datasets $S_n \subseteq \mathcal{S}$, and $a(d(s)\mid P_n),$ $\forall s \in S_n$, given the task $(\mathcal{M}, a)$ (the model $\mathcal{M}$ and its accuracy metric $a$) and $P_n$ \item If $S_n \neq \emptyset$: \begin{itemize} \item Identify the best possible dataset $s^\star \in S_n$ such that: \begin{equation} s^\star = \arg\max_{s\in S_n} \left( v(a(d(s) \cup P_n))- p(s) \right) \end{equation} \item Buy provided that $v(a(P_n \cup d(s^\star))) - v_n - p(s^\star) \geq -\lambda \cdot (v(a^\star) - v_n)$ \item If the buy condition is met, then $s^\star$ is added to the set of controlled datasets, $P_{n+1} = P_n \cup d(s^\star)$, and the next round starts \item Else, if the buy condition is not met, then the process stops \end{itemize} \end{enumerate} As a result, if the marketplace is asked to compute the marginal accuracy for every remaining dataset every round, the model will be processed a maximum of $\sum^{r-1}_{i=0}{(|\mathcal{S}|-i)}$ times over $r$ rounds. To prevent abuse by buyers, a marketplace implementing this solution could set up a maximum limit of trials for a certain task. Such a limit may be updated as the buyer purchases data.
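A minimal sketch of the stand-alone TBYB loop follows. For readability it uses a deliberately simplified stand-in for the estimator of Eq.~\eqref{eq:ExpectedAccuracyEstimation} (the expected accuracy after adding a dataset closes a fraction of the remaining gap proportional to its individual accuracy); the names and the toy \texttt{observe} oracle are illustrative assumptions:

```python
def s_tbyb(sellers, price, acc, v, a_star, lam, observe):
    """Sketch of stand-alone TBYB: greedily buy the dataset with the best
    expected profit; stop when the risk-adjusted buy condition fails.
    acc[s] is the published individual accuracy; observe(owned) reveals the
    accuracy actually achieved on the owned data, only after buying."""
    owned, a_n = set(), 0.0
    remaining = set(sellers)
    while remaining:
        # Simplified estimator (assumption, standing in for Eq. (3)):
        # adding s is expected to close a fraction acc[s] of the remaining gap.
        expected = lambda s: a_n + acc[s] * (a_star - a_n)
        s_best = max(remaining, key=lambda s: v(expected(s)) - price[s])
        # Risk-adjusted buy condition with admissible-loss parameter lam.
        if v(expected(s_best)) - v(a_n) - price[s_best] < -lam * (v(a_star) - v(a_n)):
            break
        owned.add(s_best)
        remaining.discard(s_best)
        a_n = observe(owned)  # accuracy revealed only after the purchase
    return owned, v(a_n) - sum(price[s] for s in owned)
```

A-TBYB follows the same loop, except that \texttt{expected} is replaced by a query to the marketplace for the exact marginal accuracy.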
\subsection{Buying without trying} \label{subsec:BuyingWithoutTrying} \subsubsection{Volume-based purchasing} \label{subsec:volumePurchasing} Most commercial marketplaces provide buyers with a description of datasets: their metadata, source, the procedure used to collect them, etc. Oftentimes, the volume of data in a particular dataset (e.g., the number of observations or samples) is used as the deciding figure of merit for choosing among different offers. Let $vol(s)$ denote the volume of dataset $s$, used as the figure of merit by the following volume-based purchasing heuristic: \paragraph{$n$-th iteration} We will assume that a greedy buyer would select the dataset $s^\star \in S_n$ with the highest $vol(s) / p(s)$ ratio every round. However, it is not possible to know the accuracy it will yield for the specific problem. We assume a conservative condition for the algorithm to decide to purchase $s^\star$, specifically: \begin{equation} p(s^\star) \leq \lambda \cdot (v(a^\star) - v_n), \end{equation} which ensures that even in the worst case, where the purchase does not improve the accuracy of $(\mathcal{M},a)$ at all, the maximum relative admissible loss is not exceeded in the operation. \subsubsection{Price-based purchasing} \label{subsec:PriceBasedPurchasing} It may happen that the marketplace just publishes the list of suitable datasets $\mathcal{S}$, and their prices. This setting resembles real situations where the information about the data on offer is insufficient or misleading for the buyer's purposes. We assume such a buyer would select randomly among the datasets whose price is lower than the maximum relative admissible loss. \paragraph{$n$-th iteration} The buyer will randomly select one of the datasets $S_n \subseteq \mathcal{S}$ such that, $\forall s \in S_n$, $p(s) \leq \lambda \cdot (v(a^\star) - v_n)$. If $S_n = \emptyset$ then the process stops.
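The price-based heuristic can be sketched as follows; the function and oracle names are illustrative assumptions, and the buy condition is the admissible-loss bound $p(s) \leq \lambda\,(v(a^\star) - v_n)$:

```python
import random

def price_based_purchase(sellers, price, v, a_star, lam, observe, seed=0):
    """Sketch of the value-agnostic, price-based heuristic: each round, pick
    uniformly at random among the datasets whose price stays within the
    maximum admissible loss; stop as soon as none qualifies."""
    rng = random.Random(seed)
    owned, a_n = set(), 0.0
    remaining = set(sellers)
    while True:
        affordable = [s for s in sorted(remaining)
                      if price[s] <= lam * (v(a_star) - v(a_n))]
        if not affordable:
            break
        s = rng.choice(affordable)
        owned.add(s)
        remaining.discard(s)
        a_n = observe(owned)  # accuracy revealed only after buying
    return owned, v(a_n) - sum(price[s] for s in owned)
```

The volume-based variant differs only in replacing the random choice with the highest $vol(s)/p(s)$ ratio.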
\section{Performance evaluation with synthetic data} \label{sect:TheoreticalEvaluation} We will use synthetic data to evaluate the performance of the different purchase strategies of Section~\ref{sect:Purchase algorithms} across a wide range of parameters. Our synthetic model is easy to reproduce, captures a wide range of parameters, and allows us to extract useful insights about the relative performance of different data purchase strategies. As we will show later in Sect.~\ref{sect:ValidationWithData}, our conclusions from this section are also validated by results with real data. \subsection{Synthetic model description} \label{subsec:TheoreticalModel} To simplify the evaluation, we will assume that the value for the data buyer is equal to the accuracy $a$. Hence, the maximum value that a buyer can extract from data is equal to 1, which occurs when the accuracy of her AI/ML algorithm, trained on the purchased data, becomes 1 (i.e., 100\%). We will denote by Total Cost of Data (TCOD) the cost of buying all the available datasets in $\mathcal{S}$, i.e., $TCOD=\sum_{s\in \mathcal{S}} p(s)$. Therefore, when $TCOD<1$, the buyer is guaranteed to make a profit, independently of the data purchase strategy used. In the more interesting case of $TCOD\geq 1$, the buyer needs to select carefully which datasets to buy, to avoid ending up with a loss. Having equated value with accuracy, we also need to connect the datasets bought with the achieved accuracy. For a data buyer that buys datasets $S\subseteq \mathcal{S}$, the value will be given by the following expression: \begin{equation} v(a(d(S))) = a(d(S)) = \left( \frac{\sum_{s_i \in S}{DI^{i}}} {\sum_{s_i \in \mathcal{S}}{DI^{i}}} \right) ^{MUP}, \end{equation} where: \begin{itemize} \item \textbf{MUP is the Marginal Utility Profile parameter} that controls the concavity/convexity of $v$ as it grows from 0 to 1 with the amount of data bought.
When $MUP<1$, buying additional datasets will have a decreasing marginal utility in terms of accuracy, and hence value for the buyer, both of which will be concave with respect to the amount of data bought. On the other hand, with $MUP>1$, the marginal contribution of new data sources will increase as more datasets are bought, making $v(\cdot)$ and $a(\cdot)$ convex. Finally, $MUP=1$ means that all datasets yield the same accuracy if they are bought first, the same incremental change if they are bought second, and so forth. \item \textbf{DI is the Data Interchangeability parameter} that controls the relative importance of different datasets in $\mathcal{S}$. Setting DI equal to 1 amounts to making all datasets fully interchangeable. Therefore, in this case, it only matters how many datasets are bought, but not which ones. For $DI>1$ and $MUP=1$, dataset $s_i$ becomes DI times more important than dataset $s_{i-1}$, $1 < i\leq |\mathcal{S}|$. Effectively, for $DI\neq 1$ what matters is not only how many datasets are bought, but also which ones. \end{itemize} The last element of our synthetic model has to do with how we set the prices of individual datasets. We will consider the following pricing schemes: \begin{itemize} \item \textbf{All datasets having the same price}, i.e., $p(s)=$TCOD$/|\mathcal{S}|$, $\forall s\in \mathcal{S}$. \item \textbf{Datasets having random prices} drawn from a uniform distribution in $[0,1]$ and scaled to add up to TCOD. \item \textbf{Datasets having a price that reflects their importance}, captured by their Shapley value within a coalition of the $\mathcal{S}$ datasets that achieves a total value equal to $a(d(\mathcal{S}))$ (see works such as \cite{Ghorbani19, Paraschiv19} for a justification and an explanation of how to use the Shapley value with aggregate datasets).
\end{itemize} \subsection{Results for different marginal utility profiles} \label{subsec:TheorSensitivityMUP} \begin{figure} \centering \includegraphics[width=\textwidth]{MUPSensitivityRandomPrice.jpg} \caption{Profit (\% of the optimal) vs. TCOD for different MUP and value-unrelated prices} \label{Fig:MUPSensitivity Random Price} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{SamplePurchasingSequencesTheoretical.jpg} \caption{Purchase sequences for MUP = 1 and TCOD = 1.5 (left), and 3 (right)} \label{Fig:PurchasingSequencesTheoretical} \end{figure} Next we will compare optimal purchasing under complete information with TBYB and the value-agnostic heuristics, across a range of parameter values capturing the total cost of datasets (TCOD), the marginal utility of buying more datasets (MUP), their relative value (DI), and their relative price. Our main evaluation metric will be the profit for a data buyer, i.e., the value extracted from the data minus the cost paid to obtain them. As stated in the introduction, guaranteeing that buyers obtain a healthy profit from data is vital for bootstrapping the nascent data marketplace sector. In the future we will also examine seller-side profits and social welfare. Of course, doing the latter makes sense for already bootstrapped markets. It also requires modeling complex market dynamics, such as competition and dynamic pricing, that go beyond the scope of the current work. Whenever randomization is used, e.g., in pricing, or in some of the value-agnostic heuristics, we report average values over 50 executions. The first parameters that we examine are TCOD and MUP, assuming some datasets are more important than others ($DI = 2$). Obviously, as data become more expensive (higher TCOD), all the strategies, including the optimal one, yield a smaller profit for buyers.
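The value function driving these simulations is a direct transcription of the synthetic model of Sect.~\ref{subsec:TheoreticalModel}, with datasets indexed $1,\ldots,|\mathcal{S}|$ and dataset $i$ weighted $DI^i$; a minimal implementation sketch:

```python
def synthetic_value(bought, catalog_size, DI, MUP):
    """Synthetic accuracy/value model: dataset i (1-indexed) carries weight
    DI**i; MUP < 1 gives concave (diminishing) returns, MUP > 1 convex
    returns, and DI = 1 makes all datasets fully interchangeable."""
    total = sum(DI ** i for i in range(1, catalog_size + 1))
    share = sum(DI ** i for i in bought) / total
    return share ** MUP

# With DI = 1 only the number of datasets bought matters;
# with DI > 1 the identity of the datasets matters as well.
```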
What we are interested in studying, therefore, is the relative performance of the different strategies under different TCODs and MUPs. Figure \ref{Fig:MUPSensitivity Random Price} shows that A-TBYB matches optimal purchasing for both concave (MUP=0.5, left subplot) and linear (MUP=1, middle subplot) value profiles, across the entire range of TCOD values. For convex value profiles (MUP=3, right subplot), A-TBYB ceases to be optimal, but remains the best performing strategy. Even more interesting is the performance of the much simpler to implement S-TBYB, which stays above 90\% of the optimal for concave and linear MUPs, whereas the value-agnostic heuristics drop below 50\% for $TCOD>1$ and even lead to losses (see the MUP=1 results). Under convex value profiles (MUP=3), all strategies yield a lower performance, since reaching a higher accuracy (and therefore buyer value) requires buying more datasets, which, in turn, eats away the profit margins for buyers. Even in these cases, S-TBYB yields a profit and avoids losses. To explain why TBYB outperforms the value-unaware heuristics, we plot in Fig.~\ref{Fig:PurchasingSequencesTheoretical} a series of ``\textit{purchase sequences}'', demonstrating the evolution of profit with the number of datasets purchased (by different algorithms). As shown in the plot, TBYB algorithms buy both the most valuable datasets (they achieve higher profits from the first round), and the right number of them (they stop buying before profits decrease). On the other hand, the value-unaware heuristics overbuy, randomly selecting datasets they can afford according to their risk appetite, which generally leads to lower profits, or even losses, especially for risk-prone buyers. \subsection{The effect of data interchangeability} \label{subsec:TheorSensitivityDI} To find out how TBYB is affected by the interchangeability of datasets, we have run a set of simulations for different values of the parameter DI.
Figure~\ref{Fig:DISensitivity} shows three different plots of the relative profit of the different purchase algorithms for different DI values under MUP=1. The subplot on the left depicts results for perfectly interchangeable datasets (DI = 1), whereas the next two show cases of datasets that are increasingly less interchangeable (DI=2 and DI=3). \begin{figure} \centering \includegraphics[width=\textwidth]{DISensitivity.jpg} \caption{Profit for purchase algorithms (MUP = 1, linear) with different DI values} \label{Fig:DISensitivity} \end{figure} These plots show that the performance benefits of TBYB over the heuristics increase when different datasets have a different value in terms of the accuracy they can achieve, both alone and combined with other datasets. This happens, of course, because the advantage of knowing the value of data before buying them diminishes when datasets are almost interchangeable. In reality, as we will show in the next section, real-world datasets are not interchangeable, which means that value-unaware heuristics will not be able to match the performance of TBYB. \subsection{The effect of data pricing} \label{subsec:TheorSensitivityPricing} In this section, we look at the role of dataset pricing in the performance of TBYB. Our main interest is to see what happens when the price of a dataset is proportional to its value for an AI/ML algorithm, and when it is not. The former we create via the Shapley value method discussed in Sect.~\ref{subsec:TheoreticalModel}. The latter we model in two ways: with datasets that have the same price but yield different accuracy, and with datasets that have randomly distributed prices and different accuracy. Figure \ref{Fig:PricingSensitivity} shows the results of our purchase algorithms for the different pricing models. In every case, A-TBYB matches the optimal. Pricing data based on their real value for AI/ML algorithms reduces the gap of S-TBYB vs.
price-based purchasing, although S-TBYB still outperforms price-based purchasing for the same level of risk. Notice, however, that pricing data in accordance with their actual value for buyers requires knowing the value function of each buyer, something that buyers, of course, have no incentive to disclose to sellers. Even if they did, different buyers may have different value functions, so, in general, the price of a dataset cannot be expected to follow its value for different buyers that may be using it with different AI/ML algorithms and different value functions.\footnote{Notice that to simplify our synthetic evaluation we have assumed that buyer value follows the accuracy achieved by each dataset. This, of course, need not apply in the real world, since different buyers may have radically different value functions that translate accuracy into monetary worth.} \begin{figure} \centering \includegraphics[width=\textwidth]{PricingSensivity.jpg} \caption{Profit for different pricing methodologies (MUP = 1, DI = 2)} \label{Fig:PricingSensitivity} \end{figure} \subsection{Summary} \label{subsec:TheorConclusions} Table \ref{tab:ParametersImpact} summarizes the impact of the parameters on the performance gap between TBYB and price-based purchasing. In summary, TBYB, even in its simplest, stand-alone version, always outperforms the value-unaware heuristics, especially in the most realistic scenarios involving high TCOD, concave value functions, non-interchangeable datasets, and pricing that does not follow value. In a good part of the parameter space, TBYB comes very close to the performance of optimal purchasing with full information.
\begin{table} \caption{Impact of parameters on the gap between TBYB and price-based purchasing} \label{tab:ParametersImpact} \scriptsize \begin{tabular}{p{1.2cm}p{5cm}p{6cm}} \hline Parameter&Impact&Explanation\\ \hline \texttt{TCOD}&The higher the TCOD, the more valuable TBYB&More difficult for other strategies to find the right datasets to buy in terms of price vs. value \\ \texttt{MUP}&The higher the MUP, the more difficult to find the optimal. TBYB loses effectiveness but still outperforms the other algorithms&TBYB buys more valuable datasets, minimizes temporary losses and limits risk for buyers, since it allows for a better estimation of the expected marginal value of datasets\\ \texttt{DI}&The less interchangeable datasets are, the greater the advantage of using TBYB&With perfectly interchangeable datasets, TBYB only improves the estimation of marginal utility as information increases \\ \texttt{Pricing}&TBYB's gap with price-based purchasing narrows when prices reflect value&Price-based purchasing works better if value is embedded in the price\\ \hline \end{tabular} \end{table} \section{Validation with real data} \label{sect:ValidationWithData} The synthetic model of the previous section limits the ways in which two or more datasets may mix, and impact the accuracy of an AI/ML algorithm. It allows only for concave/convex mixing with equal (interchangeable, DI=1) or unequal (non-interchangeable, DI>1) contributions to accuracy from the different datasets. In reality, however, different datasets may mix in much more complex ways that cannot be represented by any parameter setting of the above model. For example, a certain dataset $d_i$ can be very useful if combined with another dataset $d_j$, but not so useful if combined with others that individually yield the same accuracy as $d_j$ does.
To verify our conclusions from the previous section, we tested the performance of the different data purchase strategies using real spatio-temporal data, in a use case that involves forecasting the demand for taxi rides in a city. Furthermore, in this section we expand our performance evaluation by introducing a new data pricing scheme, and a new data purchase strategy: \begin{enumerate} \item \textbf{Volume-based pricing}. In this case the price of a dataset becomes proportional to its volume. In our use case, volume will correspond to the number of drivers in the company. \item \textbf{Volume-based purchasing}. We will test the performance of a new heuristic that seeks to purchase the largest possible dataset in terms of volume for a given price. \end{enumerate} According to an internal survey covering more than 75 companies in the data economy, pricing and purchasing data by volume is a common practice in data trading; therefore, we compare TBYB against these practices as well. \subsection{Use case description} \label{subsec:ValidationUseCaseDescription} We will assume that a data buyer is looking to purchase datasets for training a multiseasonal SARIMA forecasting model, with the purpose of forecasting, at an hourly timescale, the demand for taxi rides in different districts of the city of Chicago for the weeks to come. To achieve this objective, a number of taxi companies will be assumed to be the data sellers, who release historical data on the past trips that they have provided during the \textit{observation period} $T_o$. Such datasets are publicly available, thanks to the reporting obligations that such companies have to fulfil towards the local authorities \cite{TaxiTrips19}. The accuracy of the forecasting algorithm is quantified by how accurately the algorithm can predict the real demand observed in a \textit{control period} $T_c$. Our model is able to accommodate any sequence similarity metric in order to compare predicted vs. real demand in $T_c$.
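As one concrete instance of such a sequence similarity metric (an assumption for illustration, not necessarily the exact metric used in our experiments), accuracy could be defined as one minus the mean absolute error normalized by the mean observed demand, clipped to $[0,1]$:

```python
def forecast_accuracy(predicted, actual):
    """One possible sequence-similarity metric (an assumed example):
    accuracy = 1 - MAE / mean observed demand, clipped to [0, 1]."""
    assert len(predicted) == len(actual) and len(actual) > 0
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
    mean_demand = sum(actual) / len(actual)
    return max(0.0, min(1.0, 1.0 - mae / mean_demand))
```

Any other metric mapping a pair of hourly demand sequences over $T_c$ to $[0,1]$ can be plugged in without changing the purchase algorithms.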
\subsection{Dataset description} \label{subsec:ValidationDatasetDescription} From the above-mentioned repository \cite{TaxiTrips19}, we have obtained 11.1 million rides corresponding to the first 8 months of 2019. These rides are included in 15 datasets that correspond to the 15 largest taxi companies in the city (servicing 94\% of the total demand), plus a hypothetical 16th company that aggregates all the rides reported by the remaining smaller companies. These will be our 16 data sellers according to our problem formulation. We computed the exact Shapley value of the contribution of each company's data to the forecasting accuracy achieved by the multiseasonal SARIMA model in predicting the demand in the second half of April, using taxi rides from the previous six weeks for training ($T_o = $ Mar.\ 4th -- Apr.\ 14th and $T_c = $ Apr.\ 15th -- 28th). We deliberately chose to predict the taxi demand of a medium-size district of the city (community area 11, Jefferson Park), where data from several companies is needed in order to achieve a good prediction accuracy. As a result, the Shapley values are very different for each source (standard deviation = 76\% of the average). Moreover, these were found to be only weakly correlated with the number of licenses of each company ($R^2 = 0.54397$), because big companies usually concentrate on other areas of the city. Unlike in the synthetic case, the maximum accuracy the marketplace can deliver using all the information is $a^\star = 0.896294$. As in the synthetic case, we will assume that the economic value of a prediction is equal to its accuracy. \subsection{Empirical results} \label{subsec:ValidationResults} We have simulated all purchase algorithms for different TCOD values, pricing models, and $\lambda$ parameters. Figure~\ref{Fig:ResultsChicagoD11} (a) shows that both A-TBYB and S-TBYB achieve above 90\% of the optimal buyer's profit under value-unrelated dataset pricing.
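With 16 sellers the Shapley values can still be computed exactly; for larger catalogs, a standard Monte Carlo permutation-sampling estimator (in the spirit of the approximation algorithms of \cite{Castro09, Ghorbani19}) can be sketched as follows, with \texttt{accuracy\_of} an assumed oracle over coalitions:

```python
import random

def shapley_estimate(sellers, accuracy_of, n_perms=200, seed=0):
    """Monte Carlo permutation-sampling estimate of each seller's Shapley
    value: average, over random orderings, of the marginal accuracy each
    seller adds when it joins the coalition of its predecessors."""
    rng = random.Random(seed)
    order = list(sellers)
    phi = {s: 0.0 for s in order}
    for _ in range(n_perms):
        rng.shuffle(order)
        coalition, prev = set(), 0.0
        for s in order:
            coalition.add(s)
            cur = accuracy_of(frozenset(coalition))
            phi[s] += cur - prev  # marginal contribution of s in this order
            prev = cur
    return {s: phi[s] / n_perms for s in order}
```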
The results are in line with the ones we obtained using synthetic data. Volume-based purchasing proved to outperform price-based purchasing, but only when TCOD is low ($TCOD < 5$). This is because value and volume are not tightly correlated in this case; hence buying by volume does not necessarily lead to higher accuracy. Looking at Fig.~\ref{Fig:ResultsChicagoD11} (b) we see the corresponding results under volume-based prices. In this case, profit decreases faster with growing TCOD than in the case of value-unrelated prices, since valuable datasets are assigned higher prices. Still, TBYB outperforms the buying-without-trying algorithms, since it selects cheaper and more valuable datasets. \begin{figure} \centering \includegraphics[width=\textwidth]{D11Results.jpg} \caption{Profit for different purchase strategies when prices are (a) unrelated to value, and (b) related to volume} \label{Fig:ResultsChicagoD11} \end{figure} Figure \ref{Fig:PurchasingSequencesD11} shows an average purchase sequence that illustrates why TBYB works under volume-related prices. TBYB improves on price-based purchasing both by selecting the best datasets and by stopping the purchase process before profit diminishes. This feature is especially relevant when the offer is wide in comparison to the value it provides (TCOD $\gg 1$). Picking datasets based on volume did not improve on price-based purchasing in this specific example. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{D11Sequence2.jpg} \caption{Sample purchase sequence for volume-based pricing and TCOD = 3.3} \label{Fig:PurchasingSequencesD11} \end{figure} \section{Related work} \label{sect:RelatedWork} The research community has made different efforts to identify the challenges of existing data marketplaces and to define new business models \cite{Fernandez20}.
In particular, some ML/AI-oriented data marketplace proposals try to mechanize, within a marketplace, the process that some niche data and AI service providers are already bringing to the market. Most AI/ML-oriented theoretical data marketplace platforms leverage a data valuation framework similar to the one we propose in this paper \cite{Agarwal19,Chen19}. In general, such marketplaces train buyers' models in a neutral platform by feeding them with their data, and charge a price depending on the accuracy (and thus the value) they provide. They even suggest that marketplaces should return a trained algorithm instead of bulk data. In any case, the more you pay, the higher the accuracy you get. To the best of our knowledge, there is no practical or commercial implementation of those designs yet, although some digital service providers (SigOpt, Comet.ml) provide algorithm optimization services. Furthermore, some researchers have studied the dynamics of a data marketplace \cite{Moor19}, and proposed mechanisms that prevent sellers and buyers from postponing their arrival to the marketplace or misreporting their costs or values. However, many challenges and research issues around data trading and pricing remain open \cite{Pei20, Yang19, Shen16}. As the data provided to buyers usually benefits from combining different sources, the problem of how to fairly split the payment for a transaction among all the sources that contributed to the traded data becomes very relevant. Existing marketplaces usually solve this through simple heuristics, such as the data volume or the number of sources involved. However, simple heuristics are not necessarily tied to the utility of data, and could consequently be considered unfair by sellers. To address this challenge, researchers resort to well-known concepts from game theory to split the revenue.
Most papers propose using the Shapley value \cite{Shapley52} for this task \cite{Agarwal19, Ghorbani19, Paraschiv19}, whereas others propose using the core \cite{Yan20}. We have used the Shapley value and its approximation algorithms (see \cite{Jia19_2, Castro09, Ghorbani19}) to price datasets according to their value. \section{Conclusion and future work} \label{Conclusion} TBYB was shown to provide near-optimal data buyers' profits under a wide range of parameters and data. Used with off-the-shelf AI/ML algorithms, or with more complex ones in a sandboxed ``try before you buy'' infrastructure, TBYB can become a practical, high-performance alternative to value-unaware purchasing, which is currently the norm in real-world DMs. Helping buyers achieve higher profits would thus help bootstrap and grow the currently nascent data marketplace economy. We are currently working on developing a fully functional prototype of TBYB to be used by real users. This will enable us to test arbitrary user-provided algorithms and models. Of course, building a functional DM goes beyond the scope of this paper. It involves additional aspects not covered here, such as dynamic data pricing, protection against arbitrage, and price discrimination, as well as many engineering, scalability, and security challenges that will be the focus of forthcoming work.
\section{Motivations} Fundamental advances in cryptography were made in secret during the 20th century. One exception was Claude E. Shannon's paper ``Communication Theory of Secrecy Systems'' \cite{shannonsecrecy}. Until 1967, the literature on security was not extensive, but a book \cite{histreview} with a historical review of cryptography changed this trend \cite{crypto}. Since then, the amount of sensitive data to be protected against attackers has increased significantly. Continuous improvements in security are needed, and every improvement creates new possibilities for attacks \cite{pufintheory}. Recent hardware-intrinsic security systems, biometric secrecy systems, the 5th generation of cellular mobile communication networks (5G) and beyond, as well as the emerging internet of things (IoT) networks, have several salient characteristics that differentiate them from existing architectures. These include the deployment of large numbers of possibly low-complexity terminals with light or no infrastructure, stringent constraints on latency, and primary applications of data gathering, inference, and control. These unique characteristics call for a rethinking of some of the fundamentals of data communications and storage. For instance, these characteristics make it very challenging to provide adequate secrecy and privacy primitives. In particular, traditional cryptographic protocols, which require key distribution or certificate management, might not be suitable for the diverse applications supported by these technologies and might not be able to assure the privacy of personal information intrinsic in the data collected by such applications. Furthermore, low-complexity terminals may not have the processing power to implement such protocols, and even if they do, latency tolerances may not permit the processing time needed for cryptographic operations.
Similarly, traditional methods of storing a secret key in a secure non-volatile memory (NVM) can be shown to be insecure due to possible invasive attacks on the hardware. Thus, secrecy and privacy for information systems are issues that need to be rethought in the context of recent networks, digital circuits, and database storage. Information-theoretic security is an emerging approach to provide secrecy and privacy for, e.g., wireless communication systems and networks by exploiting the unique characteristics of the wireless communication channel. Information-theoretic security methods such as physical layer security (PLS) use signal processing, advanced coding, and communication techniques to secure wireless communications at the physical layer. There are two key advantages of PLS. Firstly, it enables the use of resources available at the physical layer, such as multiple measurements, channel training mechanisms, and power and rate control, which cannot be utilized by the upper layers of the protocol stack. Secondly, it is based on an information-theoretic foundation for secrecy and privacy that does not make assumptions on the computational capabilities of adversaries, unlike cryptographic primitives. Considering the security and privacy requirements of recent digital systems and the potential benefits from information-theoretic security and privacy methods, it can be seen that information-theoretic methods can complement or even replace conventional cryptographic protocols for wireless networks, databases, and user authentication and identification. Since information-theoretic methods do not generally require pre-shared secret keys, they might considerably simplify the key management in complicated networks. Thus, these methods might be able to fulfill the stringent hardware area constraints of digital devices and the latency constraints in certain 5G/6G applications, or to save computations and, hence, battery life for low-power devices such as IoT devices.
Furthermore, information-theoretic methods offer ``built-in'' secrecy and privacy, which are generally agnostic to the network infrastructure and provide better scalability as the size of a network or database increases. \begin{figure} \centering \includegraphics[scale=.45]{./RingOscillatornew.eps} \caption{RO logic circuit.} \label{fig:ROlogiccircuit} \end{figure} A promising local solution to information-theoretic security and privacy problems is a \textit{physical unclonable function (PUF)} \cite{PUFFirst}. PUFs generate ``fingerprints'' for physical devices by using their intrinsic and unclonable properties. For instance, consider ring oscillators (ROs), logic circuits of multiple serially connected inverters in which the output of the last inverter is fed back into the input of the first inverter, as depicted in Figure~\ref{fig:ROlogiccircuit}. RO outputs are oscillation frequencies $1/\widehat{x}$, where $\widehat{x}$ is the oscillation period. These frequencies are unique and uncontrollable, since the differences between the outputs of different ROs are caused by submicron random manufacturing variations that cannot be controlled. One can use RO outputs as a source of randomness, called a \textit{PUF circuit}, to extract secret keys that are unique to the digital device that embodies these ROs. The complete method that puts out a unique secret key by using RO outputs is called an \textit{RO PUF}. Similarly, binary static random access memory (SRAM) outputs can be used as a source of randomness to implement SRAM PUFs in almost all digital devices, because most digital devices have embedded SRAMs used for data storage. The logic circuit of an SRAM is depicted in Figure~\ref{fig:SRAMlogiccircuit}, and the logically stable states of an SRAM cell are $(\overline{Q},Q)=(1,0)$ and $(0,1)$. During power-up, the state is undefined if the manufacturer did not fix it.
The undefined power-up state of an SRAM cell converges to one of the stable states due to the random and uncontrollable mismatch of the inverter parameters, which is fixed when the SRAM cell is manufactured \cite{SoftHelper}. There is also random noise in the cell that affects it at every power-up. Since the physical mismatch of the cross-coupled inverters is a manufacturing variation, the power-up state of an SRAM cell is considered a PUF response with one challenge, which is the address of the SRAM cell \cite{SoftHelper}. \begin{figure} \centering \includegraphics[width=0.32\textwidth, height=0.73\textheight, keepaspectratio=true]{./SRAMLogicCircuit.eps} \caption{SRAM logic circuit.} \label{fig:SRAMlogiccircuit} \end{figure} PUFs resemble biometric features of human beings. In this review, we list state-of-the-art methods that bridge the gap between practical secrecy systems that use PUFs and the information-theoretic security limits by \begin{itemize} \item Modeling real PUF outputs to solve security problems with valid assumptions; \item Analyzing methods that make information-theoretic analysis tractable, e.g., by transforming PUF symbols so that the transform-domain outputs are almost independent and identically distributed (i.i.d.), and that result in a smaller hardware area than benchmark designs in the literature; \item Stating the information-theoretic limits for realistic PUF output models and providing optimal and practical (i.e., low-complexity and finite-length) code constructions that achieve these limits; \item Illustrating best-in-class nested codes for realistic PUF output models. \end{itemize} In short, we start with real PUF outputs to obtain mathematically tractable models of their behaviour and then list optimal code constructions for these models. Since we discuss methods developed from the fundamentals of signal processing and information theory, any further improvements in this topic are likely to follow the steps listed in this review.
\subsection{Organization and Main Insights} This paper is organized as follows. In Section~\ref{sec:pufbasics}, we define a PUF, list its existing and potential applications, and analyze the most promising PUF types. The PUF output models and the design challenges faced when manufacturing reliable, low-complexity, and secure PUFs are listed in Section~\ref{sec:corrbiasnoisePUF}. The main security challenge in designing PUFs, i.e., output correlations, is tackled in Section~\ref{sec:transformcoding} mainly by using a transform coding method, which can provably protect PUFs against various machine learning attacks. The reliability and secrecy performance metrics (e.g., the number of authenticated users) used for PUF designs are defined and jointly optimized in Section~\ref{sec:quantandcodedesign}. PUF security and complexity performance evaluations for the defined transform coding method are given in Section~\ref{sec:comparisons}. Performance results for error-correction codes used in combination with previous code constructions for key extraction with PUFs are shown in Section~\ref{sec:correction} in order to illustrate that previous key extraction methods are strictly suboptimal. In Section~\ref{sec:WZchapter}, we define the information-theoretic metrics and the ultimate key-leakage-storage rate regions for the problem of key agreement with PUFs, and compare available code constructions for this problem. Optimal code constructions for key extraction with PUFs are implemented in Section~\ref{sec:codeproposal} by using nested polar codes, which are used in the control channel of 5G networks, to illustrate the significant gains from using optimal code constructions. In Section~\ref{sec:DiscussionsandOpenProblems}, we provide a list of open PUF problems that might be interesting for information theorists, coding theorists, and signal processing researchers in addition to the PUF community.
\section{PUF Basics}\label{sec:pufbasics} We give a brief review of the literature on PUFs and discuss the problems with previous PUF designs that can be tackled by using signal processing and coding-theoretic methods. A PUF is a function that is embodied in a physical device and is unclonable. In the literature, there are alternative expansions of the term PUF, such as ``physically unclonable function'', suggesting that it is a function that is only physically unclonable. Such PUFs may provide a weaker security guarantee since they allow their functions to be digitally cloned. For any practical application of a PUF, we need the property of unclonability both physically and digitally. We therefore consider a function as a PUF only if it is a physical function, embodied in a physical device, that is unclonable both digitally and physically. Physical identifiers such as PUFs are heuristically defined to be complex challenge-response mappings that depend on the random variations in a physical object. Secret sequences derived from this complex mapping can be used as secret keys. One important feature of PUFs is that the generated secret sequence is not required to be stored and can be regenerated on demand. This property makes PUFs cheaper (no memory is required for secret storage) and safer (the secret sequence is regenerated only on demand) alternatives to other secret generation and storage techniques, such as storing the secret in an NVM \cite{PUFFirst}. There is an immense number of PUF types, which makes it practically impossible to give a single definition of PUFs that covers all types. We provide the following definition of PUFs, which includes all PUF types of interest for this review.
\begin{definition}[\hspace{1sp}\cite{PUFFirst}]\label{def:PUFdefinition} A PUF is a challenge-response mapping embodied by a physical device such that it is fast and easy for the physical device to put out the PUF response and hard for an attacker, who does not have access to the PUF circuits, to determine the PUF response to a randomly chosen challenge, even if he has access to a set of challenge-response pairs. \end{definition} The terms used in Definition~\ref{def:PUFdefinition}, i.e., fast, easy, and hard, are relative terms that should be quantified for each PUF application separately. There are physical functions in the literature, called physical one-way functions (POWFs), that are closely related to PUFs. Such functions are obtained by applying the cryptographic concept of ``one-way functions'', i.e., functions that are easy to evaluate but (on average) difficult to invert \cite{onewayfunction}, to physical systems. As the first example of POWFs, the speckle pattern obtained from coherent waves propagating through a disordered medium is a one-way function of both the physical randomness in the medium and the angle of the beam used to generate the optical waves \cite{PappuThesis}. Similar to POWFs, biometric identifiers such as the iris, retina, and fingerprints are closely related to PUFs. Most of the assumptions made for biometric identifiers are satisfied also by PUFs, so we can apply almost all of the results in the literature for biometric identifiers to PUFs. However, it is common practice to assume that PUFs can resist invasive (physical) attacks, which are considered to be the most powerful attacks used to obtain information about a secret in a system, unlike biometric identifiers that are constantly available for attacks. The reason for this assumption is that invasive attacks permanently destroy the fragile PUF outputs \cite{PUFFirst}. This assumption will be the basis for the PUF system models used throughout this review.
We therefore assume that, unlike for biometric identifiers, the attacker does not observe a sequence that is correlated with the PUF outputs, since physical attacks applied to obtain such a sequence permanently change the PUF outputs. \subsection{Applications of PUFs} \label{sec:application} A PUF can be seen as a source of random sequences hidden from an attacker who does not have access to the PUF outputs. Therefore, any application that takes a secret sequence as input can theoretically use PUFs. We list some scenarios where PUFs fit well in practice: \begin{itemize} \item Security of information in wireless networks with an eavesdropper, i.e., a passive attacker, is a PLS problem. Consider Wyner's wiretap channel model introduced in \cite{WynerWTC}, where a transmitter sends a message through a broadcast channel so that a legitimate receiver can reliably reconstruct the message, while the message should be kept secret from an eavesdropper. This model is the most common PLS model; it is a channel coding problem, unlike the secret key agreement problem we consider below, which is a source coding problem. A randomized encoder helps the transmitter keep the message secret by confusing the eavesdropper. Therefore, one can use PUFs at the transmitter as the source of local randomness when a message should be sent securely through the wiretap channel. \item Consider a 5G/6G mobile device that uses a set of SRAM outputs, which are available in mobile devices, as PUF circuits to extract secret keys so that the messages to be sent are encrypted with these secret keys before sending the data over the wireless channel. Thus, the receiver (e.g., a base station) that previously obtained the secret keys (sent by mobile devices, e.g., via public key cryptography) can decrypt the data, while an eavesdropper who only overhears the data broadcast over the wireless channel cannot easily learn the message sent.
\item The controller area network (CAN) bus standard used in modern vehicles is shown in \cite{autonomouscars} to be susceptible to denial-of-service attacks, which demonstrates that safety-critical inputs of the internal vehicle network, such as the brakes and throttle, can be controlled by an attacker. One countermeasure is to encrypt the transmitted CAN frames by using block ciphers with secret keys generated from PUF outputs. \item IoT devices such as wearable or e-health devices may carry sensitive data and can use a PUF to store secret keys so that only a mobile device with access to the secret keys can control the IoT devices. One common example of such applications is when PUFs are used to authenticate wireless body sensor network devices \cite{WBSNPUFs}. \item Cloud storage requires security to protect users' sensitive data. However, securing the cloud is expensive, and users do not necessarily trust the cloud service providers. A PUF in a universal serial bus (USB) token, i.e., Saturnus$^{\tiny{\textregistered}}$, has been trademarked to encrypt user data before uploading the data to the cloud; the data is decrypted locally by reconstructing the same secret from the same PUF. \item System developers want to mutually authenticate a field programmable gate array (FPGA) chip and the intellectual property (IP) components in the chip, and IP developers want to protect the IP. In \cite{mutualauthenticatePUF}, a protocol is described that achieves these goals with a small hardware area by using one symmetric cipher and one PUF. \end{itemize} Other applications of PUFs include providing non-repudiation (i.e., undeniable transmission or reception of data), proof of execution on a specific processor, and remote integrated circuit (IC) enabling. Every application of PUFs makes different assumptions about the PUF properties, computational complexity, and the specific system models. Therefore, there are different constraints and system parameters for each application.
We focus mainly on the application where a secret key is generated from a PUF for user or device authentication with privacy and secrecy guarantees and low complexity. \subsection{Main PUF Types} We review four PUF types, i.e., silicon, arbiter, RO, and SRAM PUFs. We consider mainly the last two PUF types for algorithm and code designs due to their common use in practice and because signal processing techniques can tackle the problems arising in designing these PUFs. For a review of other PUF types that are mostly considered in the hardware design and computer science literatures, and for various classifications of PUFs, see, e.g., \cite{DevadasPUFReview,TimPUFReview, pufintheory}. The four PUF types considered below can be shown to satisfy the assumption that invasive attacks permanently change PUF outputs, since the digital circuit outputs used as the source of randomness in these PUF types change permanently under invasive attacks due to their dependence on nano-scale alterations in the hardware. \subsection{Silicon and Arbiter PUFs} Common complementary metal-oxide-semiconductor (CMOS) manufacturing processes are used to build silicon PUFs, where the response of the PUF depends on the circuit delays, which vary across integrated circuits (ICs) \cite{PUFFirst}. Due to the high sensitivity of the circuit delays to environmental changes (e.g., ambient temperature and power supply voltage), arbiter PUFs are proposed in \cite{ExtractingSecretKeys}, for which an arbiter (i.e., a simple transparent data latch) is added to the silicon PUFs so that the delay comparison result is a single bit. The difference of the path delays is mapped to, e.g., the bit 0 if the first path is faster, and the bit 1 otherwise. The difference between the delays can be small, causing meta-stable outputs.
Since the output of the mapper is generally preset to 0, the incoming signals must satisfy the setup time ($t_{\text{setup}}$) of the latch to switch the output to 1, resulting in a bias in the arbiter PUF outputs. Symmetrically implementable latches (e.g., set-reset latches) should be used to overcome this problem, which is difficult because FPGA routing does not allow the user to enforce symmetry in the hardware implementation. We discuss below that PUFs without symmetry requirements, e.g., RO PUFs, provide better results. \subsection{Ring Oscillator PUFs} \label{subsec:ROPUFs} An RO is a logic circuit of an odd number of serially connected inverters, where the output of the last inverter is given as input to the first inverter, as depicted in Figure~\ref{fig:ROlogiccircuit}. The first logic gate in Figure~\ref{fig:ROlogiccircuit} is a NAND gate, which gives the same logic output as an inverter gate when the ENABLE signal is 1 (ON), to enable/disable the RO circuit. The manufacturing-dependent and uncontrollable component in an RO is the total propagation delay of an input signal flowing through the RO, which determines the oscillation frequency $1/\hat{x}$ of the RO that is used as the source of randomness. A self-sustained oscillation is possible when the ring provides a $2\pi$ phase shift and has unit voltage gain at the oscillation frequency $1/\hat{x}$. Consider an RO with $m\geq 3$ inverters. Each inverter should provide a phase shift of $\frac{\pi}{m}$, with an additional phase shift of $\pi$ due to the feedback. Therefore, the signal should flow through the RO twice to provide the necessary phase shift \cite{RingOscillators}. Assuming a propagation delay of $\tau_{d}$ for each inverter, the oscillation frequency of an RO is $\displaystyle \frac{1}{\hat{x}} = \frac{1}{2 m \tau_d}$.
We remark that since RO outputs are generally measured by using 32-bit counters, it is realistic to assume that a measured RO output $\displaystyle \frac{1}{\hat{x}}$ is a realization of a continuous distribution that can be modeled by using the histogram of a family of RO outputs with the same circuit design, as assumed below. The propagation delay $\tau_{d}$ is affected by nonlinearities in the digital circuit. Furthermore, there are deterministic noise sources, such as the cross-talk between adjacent signal traces, and additional random noise sources, such as thermal noise and flicker noise \cite{RingOscillators}. Such effects should be eliminated to obtain a reliable RO output. Rather than improving the standard RO designs, which would require manufacturers to change their RO designs, the first proposal to fix the reliability problem was to make hard bit decisions by comparing RO pairs \cite{ROFirst}, as illustrated in Figure~\ref{fig:ROPUFfirst}. \begin{figure} \centering \includegraphics[width=1.0\textwidth, height=0.73\textheight, keepaspectratio=true]{./ROpuffirstone} \caption{The first and most common RO PUF design \cite{ROFirst}.} \label{fig:ROPUFfirst} \end{figure} In Figure~\ref{fig:ROPUFfirst}, the multiplexers are challenged by a bit sequence of length at most $\lceil\log _2 N \rceil$ so that an RO pair is selected among $N$ ROs. The counters put out the number of rising edges from each RO for a fixed time duration. A logic bit decision is made by comparing the counter values, which can be bijectively mapped to the oscillation frequencies. For instance, when the upper RO has a greater counter value, the bit $0$ is generated; otherwise, the bit $1$ is generated. Given that ROs are identically laid out in the hardware, the differences in the oscillation frequencies are determined mainly by uncontrollable manufacturing variations.
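The comparison-based extraction just described can be sketched in a minimal simulation. This is an illustration only, under the assumption of Gaussian manufacturing offsets that dominate a small measurement noise; all frequency values are made up and do not come from measured hardware.

```python
import random

def ro_pair_bits(counts, pairs):
    """Hard-decision extraction: bit 0 if the first RO of a pair has
    the larger counter value (oscillates faster), else bit 1."""
    return [0 if counts[i] > counts[j] else 1 for (i, j) in pairs]

random.seed(1)
N = 8
# Hypothetical chip: nominal 200 MHz ROs with Gaussian manufacturing offsets.
f_chip = [200.0 + random.gauss(0.0, 5.0) for _ in range(N)]

def measure(freqs, sigma=0.2):
    """One noisy counter read per RO (counter values are proportional to
    the oscillation frequencies, so comparing them is equivalent)."""
    return [f + random.gauss(0.0, sigma) for f in freqs]

# Non-overlapping pairs avoid the correlations that arise from
# comparing all N-choose-2 overlapping RO pairs.
pairs = [(2 * i, 2 * i + 1) for i in range(N // 2)]

enrolled = ro_pair_bits(measure(f_chip), pairs)
remeasured = ro_pair_bits(measure(f_chip), pairs)
# Since the manufacturing offsets dominate the measurement noise, the
# re-measured bits usually agree with the enrolled ones; pairs whose
# frequencies are close remain error-prone, which is what the masking
# and error-correction schemes address.
```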
Furthermore, it is not necessary to have a symmetric layout when hard-macro hardware designs are used for different ROs, unlike for arbiter PUFs. The key extraction method illustrated in Figure~\ref{fig:ROPUFfirst} gives an output of ${N\choose 2}$ bits, which are correlated due to overlapping RO comparisons. This causes a security threat and makes the RO PUF vulnerable to various attacks, including machine learning attacks. Thus, non-overlapping pairs of ROs are used in \cite{ROFirst} to extract each bit. However, there are systematic variations in the neighboring ROs due to the surrounding logic, which also should be eliminated to extract sequences with full entropy. Furthermore, ambient temperature and supply voltage variations are the most important effects that reduce the reliability of RO PUF outputs. A scheme called \textit{1-out-of-k masking} is proposed as a countermeasure to these effects, which compares the RO pairs that have the maximum oscillation frequency differences over a range of voltages and temperatures to extract bits \cite{ROFirst}. The bits extracted by such a comparison are more reliable than the bits extracted by using previous methods. The main disadvantages of this scheme are that it is inefficient due to unused RO pairs, and that only a single bit is extracted from the (semi-)continuous RO outputs. We review transform-coding based RO PUF methods below that significantly improve on these methods without changing the standard RO hardware designs. \subsection{SRAM PUFs} There are multiple memory-based PUFs, such as SRAM, Flip-flop, DRAM, and Butterfly PUFs. Their common feature is that they possess a small number of challenge-response pairs relative to their sizes. As the most promising memory-based PUF type, one that is already used in industry, we consider SRAM PUFs, which use the uncontrollable settling state of bi-stable circuits \cite{SRAM-PUF}.
In the standard SRAM design, there are four transistors used to form the logic of two cross-coupled inverters, as depicted in Figure~\ref{fig:SRAMlogiccircuit}, and two other transistors to access the inverters. The power-up state, i.e., $(\overline{Q},Q)=(1,0)$ or $(0,1)$, of an SRAM cell provides one secret bit. Concatenating many such bits makes it possible to generate a secret key from SRAM PUFs on demand. We provide an open problem about SRAM PUFs in Section~\ref{sec:DiscussionsandOpenProblems}. \section{Correlated, Biased, and Noisy PUF Outputs}\label{sec:corrbiasnoisePUF} PUF circuit outputs are biased (nonuniform), correlated (dependent), and noisy (erroneous). We review a transform-coding algorithm that extracts an almost i.i.d. uniform bit sequence from each PUF, so that a helper-data generation algorithm can correct the bit errors in the sequence generated from noisy PUF outputs. Using this transform-coding algorithm, we also obtain memoryless PUF measurement-channel models, so that standard information-theoretic tools, which cannot be easily applied to correlated sequences, can be used. \begin{remark} The bias in the PUF circuit outputs is considered in the PUF literature to be a major threat to the security of the key generated from PUFs, since the bias enables, e.g., machine learning attacks. However, it is illustrated in \cite[Figure 6]{OnurProblem} that the output bias does not change the information-theoretic rate regions significantly, which shows that there exist code constructions that do not require PUF outputs to be uniformly distributed. \end{remark} There are multiple \textit{key-generation}, i.e., \textit{generated-secret (GS)}, and \textit{key-binding}, i.e., \textit{chosen-secret (CS)}, methods to reconstruct secret keys from noisy PUF outputs, where the key is generated from the PUF outputs or bound to them, respectively.
A code-offset fuzzy extractor (COFE) \cite{Dodis2008fuzzy} is an example of key-generation methods, and the fuzzy-commitment scheme (FCS) \cite{FuzzyCommitment} is a key-binding method. Since a secret key should be stored in a secure database for both models, it might be practical to allow a trusted entity to choose the secret key that is bound to a PUF output. Thus, we first analyse a method that significantly improves the reliability, privacy, secrecy, and hardware cost performance by using a transform-coding algorithm that is applied to PUF outputs in combination with the FCS. We remark that the information-theoretic analysis of the CS model follows directly from the analysis of the GS model \cite{IgnaTrans}, so one can use either model for comparisons. Correlation in PUF outputs might leak information about the secret key, called \textit{secrecy leakage}, and about the PUF output, called \textit{privacy leakage} \cite{benimdissertation,IgnaTrans,ourMMMM}. Moreover, noise reduces the reliability of PUF outputs, and error-correction codes are needed to satisfy the stringent reliability constraints. The transform-coding approach proposed in \cite{bizimpaper}, in combination with a set of scalar quantizers, has made its way into secret-key binding with continuous-output identifiers. This approach makes it possible to significantly reduce the output correlation and to adjust the effective noise at the PUF output with reliability guarantees; see \cite{DPwithEfe} for a similar decorrelation approach applied to provide differential privacy to temporally-correlated eye movement measurements. \subsection{PUF Output Model}\label{sec:systemmodel} Consider a (semi-)continuous output physical function such as an RO output as a source that puts out a {real-valued} symbol $\hat{x}$.
Systematic variations in the RO outputs of a two-dimensional array are smaller than the systematic variations of one-dimensional RO arrangements, since in a two-dimensional array the maximum distance between RO hardware logic circuits is smaller, which decreases the variations in the RO outputs caused by surrounding hardware logic circuits \cite{maiti2011improved}. Thus, consider a two-dimensional RO array of size $\displaystyle l\!= r\!\times\! c$ and represent the array as a vector random variable $\widehat{X}^l$. Suppose there is a single two-dimensional RO array in each device with the same circuit design and the RO array emits an output $\displaystyle \widehat{X}^l$ according to a probability density $f_{\widehat{X}^l}$. Each RO output is disturbed by mutually-independent additive Gaussian noise, and the vector noise is denoted as $\widehat{Z}^l$. Define the noisy RO outputs as $\widehat{Y}^l\! =\! \widehat{X}^l \!+\! \widehat{Z}^l$. Observe~that $\widehat{X}^l$ and $\widehat{Y}^l$ are correlated. A secret key can thus be agreed upon by using these outputs \cite{AhlswedeCsiz,Maurer}. \begin{remark} PUF outputs are noisy, as discussed above in this section. However, the first PUF outputs are used by, e.g., a manufacturer to generate or embed a secret key, in a procedure called \emph{enrollment}. Since a manufacturer can measure multiple noisy outputs of the same RO to estimate the noiseless RO output, we can consider the PUF outputs measured during enrollment to be noiseless. During the reconstruction step, however, e.g., an IoT device observes a noisy RO output, since the IoT device cannot measure the RO outputs multiple times due to delay and complexity constraints. Therefore, we consider a key-agreement model where the first measurement sequence (during enrollment) is noiseless and the second measurement sequence (during reconstruction) is noisy; see also Figure~\ref{fig:problemsetup} below.
Extensions to key agreement models with two noisy sequences, where the noise components can be correlated, are discussed in \cite{ourMMMM,ourISITPoor,ourBCITW}. \end{remark} One needs to extract random sequences with i.i.d. symbols from $\widehat{X}^l$ and $\widehat{Y}^l$ to employ available information-theoretic results in \cite{IgnaFuzzy} for secret-key binding with identifiers by using the FCS. An algorithm is proposed in \cite{bizimpaper} that extracts almost i.i.d. binary and uniformly distributed random vectors $X^n$ and $Y^n$ from $\widehat{X}^l$ and $\widehat{Y}^l$, respectively. For such $\displaystyle X^n$ and $\displaystyle Y^n$, we can define a binary error vector as $E^n\! =\! X^n\! \xor\! Y^n$, where $\xor$ denotes the modulo-2 sum. The random sequence $\displaystyle E^n$ corresponds to a sequence of i.i.d. Bernoulli random variables with parameter $p$, {i.e.,} $E^n\sim\text{Bern}^n(p)$. The channel $P_{Y|X}$ is thus a binary symmetric channel (BSC) with crossover probability $p$, i.e., BSC($p$). We discuss a transform-coding method below, which further provides reliability guarantees for each generated bit. The FCS can reconstruct a secret key by using correlated random variables without leaking any information about the secret key \cite{FuzzyCommitment}. The FCS is depicted in Figure~\ref{fig:fuzzycommitment}, where an encoder $\Enc(\cdot)$ maps a secret key $S\in\mathcal{S}$, which is uniformly distributed over the set $\{1,2,\ldots,|\mathcal{S}|\}$, into a binary codeword $\displaystyle C^n$ that is added modulo-2 to the binary PUF-output sequence $\displaystyle X^n$ during enrollment. The resulting sequence, called the helper data $\displaystyle W$, is sent through a public and noiseless communication link to a database.
The modulo-2 sum of the helper data $W$ and $Y^n$ gives the result $R^n=W\xor Y^n=C^n\!\xor E^n$, which can be later mapped to an estimate $\displaystyle \hat{S}$ of the secret key $S$ by the decoder $\displaystyle \Dec(\cdot)$ during reconstruction. We next give information-theoretic rate regions for the FCS; see \cite{CoverandThomas} for information-theoretic notation and basics. \begin{figure} \centering \resizebox{0.55\linewidth}{!}{ \begin{tikzpicture} \node (a) at (0,-1.5) [XOR,scale=1.0] {}; \node (b) at (6,-1.5) [XOR,scale=1.0] {}; \node (f) at (0,-0.7) [draw,rounded corners = 6pt, minimum width=2.8cm,minimum height=3cm, align=left] {}; \node (g) at (6,-0.7) [draw,rounded corners = 6pt, minimum width=2.75cm,minimum height=3cm, align=left] {}; \node (d) at (0,-0.1) [draw,rounded corners = 6pt, minimum width=2.6cm,minimum height=0.7cm, align=left] {$ C^n = \Enc\left(S\right)$}; \node (c) at (3,-2.7) [draw,rounded corners = 5pt, minimum width=1.3cm,minimum height=0.65cm, align=left] {$P_{Y|X}$}; \node (e) at (6,-0.1) [draw,rounded corners = 6pt, minimum width=2.6cm,minimum height=0.7cm, align=left] {$\hat{S} = \Dec\left(R^n\right)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=0.6pt] (a.east) -- (b.west) node [midway, above] {$W$}; \node (a1) [below of = a, node distance = 1.2cm] {$X^n$}; \node (b1) [below of = b, node distance = 1.2cm] {$Y^n$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a1.north) -- (a.south); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a1.east) -- (c.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (c.east) -- (b1.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, 
postaction={decorate}, thick, shorten >=1.4pt] (b1.north) -- (b.south); \node (a2) [above of = d, node distance = 1.5cm] {$S$}; \node (f2) [below of = f, node distance = 2.5cm] {Enrollment}; \node (g2) [below of = g, node distance = 2.5cm] {Reconstruction}; \node (b2) [above of = e, node distance = 1.5cm] {$\hat{S}$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (e.north) -- (b2.south); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a2.south) -- (d.north); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (b.north) -- (e.south) node [midway, right] {$R^n$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (d.south) -- (a.north) node [midway, left] {$C^n$};; \end{tikzpicture} } \caption{The fuzzy commitment scheme (FCS).}\label{fig:fuzzycommitment} \end{figure} \begin{definition}\label{def:ratepair} A secret-key vs. privacy-leakage rate pair $\displaystyle \left(R_\text{s}\!\;\text{,}\;\!R_\ell\right)$ is achievable by the FCS with perfect secrecy, {i.e.,} zero secrecy leakage, if, given any $\delta\!>\!0$, there is some $n\!\geq\!1$, and an encoder and decoder for which $\displaystyle R_\text{s}=\frac{\log|\mathcal{S}|}{n}$ and \begin{alignat}{2} &P_B=\Pr[S\ne\hat{S}] \leq \delta && (\text{reliability})\\ &H(S)\geq n(R_\text{s}-\delta)&&(\text{key uniformity})\\ &I\left(S;W\right)\!=\!0 && (\text{perfect secrecy})\label{eq:secrecyconst}\\ &I\left(X^n;W\right) \leq n(R_\ell+\delta) \quad\quad\quad&&(\text{privacy})\label{eq:privacyconstFCS} \end{alignat} where (\ref{eq:secrecyconst}) suggests that $S$ and $W$ are independent and (\ref{eq:privacyconstFCS}) suggests that the rate of dependency between $X^n$ and $W$ is bounded. The achievable secret-key vs. 
privacy-leakage rate, or key-leakage, region $\mathcal{R}_{\text{FCS}}$ for the FCS is the union of all achievable pairs. \end{definition} \begin{theorem}[\hspace{1sp}\cite{IgnaFuzzy}] The key-leakage region $\mathcal{R}_{\text{FCS}}$ for the FCS with a channel $P_{Y|X}$ that is a BSC$(p)$, uniformly distributed $X$ and $Y$, and zero secrecy leakage is \begin{align} \mathcal{R}_{\text{FCS}} = \{&\left(R_\text{s},R_\ell\right)\colon\;\;\; 0\leq R_\text{s}\leq 1-H_b(p),\qquad R_\ell\geq 1-R_\text{s}\}\label{eq:ls0} \end{align} where $H_b(p)=-p\log p - (1-p)\log(1-p)$ is the binary entropy function. \end{theorem} The region $\mathcal{R}_{\text{FCS}}$ suggests that any (secret-key, privacy-leakage) rate pair that sums to 1~bit/source-bit is achievable with the constraint that the secret-key rate is at most the channel capacity of the BSC($p$), i.e., $\underset{p_X}{\max}\;I(X;Y)=1-H_b(p)$. Furthermore, smaller secret-key rates and greater privacy-leakage rates than these rates are also achievable. The FCS is a particular realization of the CS model. The region $\mathcal{R}$ of all achievable (secret-key, privacy-leakage) rate pairs for the CS model with a negligible secrecy-leakage rate, where a generic encoder is used to confidentially transmit an embedded secret key to a decoder that observes $Y^n$ and the helper data $W$, is given in \cite{IgnaTrans} as \begin{align} \mathcal{R}\! =\! &\bigcup_{P_{U|X}}\!\Bigg\{\left(R_\text{s},R_\ell\right)\!\colon\!\quad 0\leq R_\text{s}\leq I(U;Y),\qquad R_\ell\geq I(U;X)-I(U;Y)\Bigg\}\label{eq:chosensecret} \end{align} where $U-X-Y$ forms a Markov chain and the alphabet $\mathcal{U}$ of the auxiliary random variable $U$ can be limited to have the size $\displaystyle |\mathcal{U}|\!\leq\!|\mathcal{X}|+1$. The auxiliary random variable $U$ represents a distorted version of $X$ through a channel $P_{U|X}$. 
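As a toy illustration of the FCS operations in Figure~\ref{fig:fuzzycommitment}, the following sketch binds a one-bit key with a length-5 repetition code and majority decoding; both are illustrative assumptions, far weaker than the BCH codes discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FCS: a 1-bit secret key bound with a length-5 repetition code.
# (Illustrative assumption only; the text later uses a BCH code with a BMDD.)
n = 5

def enc(s):                       # C^n = Enc(S)
    return np.full(n, s, dtype=np.uint8)

def dec(r):                       # majority-vote decoder for R^n
    return int(r.sum() > n // 2)

x = rng.integers(0, 2, n, dtype=np.uint8)      # enrollment PUF bits X^n
e = np.array([0, 0, 1, 0, 0], dtype=np.uint8)  # one BSC bit flip, E^n
y = x ^ e                                      # reconstruction bits Y^n

s = 1
w = enc(s) ^ x                    # helper data W = C^n (+) X^n (stored)
r = w ^ y                         # R^n = W (+) Y^n = C^n (+) E^n
print(dec(r) == s)                # -> True: key recovered despite the error
```

For uniformly distributed $X^n$, the helper data $W$ is itself uniform and independent of $C^n$, which is the intuition behind the perfect-secrecy constraint in (\ref{eq:secrecyconst}).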
The FCS is optimal, {i.e.,}~it~achieves a boundary point of $\mathcal{R}$, for a BSC $P_{Y|X}$ with crossover probability $p$ only at the point $\displaystyle (R_{\text{s}}^*,R_\ell^*)\!=\!(1\!-\!H_b(p),H_b(p))$ \cite{IgnaFuzzy}. This point corresponds to the highest achievable secret-key rate; see Figure~\ref{fig:ratecomparison} below. We remark that the region $\mathcal{R}$ gives an outer bound for the perfect-secrecy case considered to obtain $\mathcal{R}_{\text{FCS}}$. \section{Transform Coding Steps}\label{sec:transformcoding} The main aim of transform coding is to reduce correlations between outputs of the ROs in the same two-dimensional array by using, e.g., a linear transformation. We discuss a transform-coding algorithm proposed in \cite{bizimMDPI} as an extension of \cite{bizimpaper} to provide reliability guarantees for each generated bit. Its main steps are the joint optimization of the quantizer and error-correction code parameters to maximize the security and reliability performance, and a simple method to decrease the hardware storage. The output of these post-processing steps is a bit sequence $X^n$ (or its noisy version $Y^n$) used in the FCS. One applies the same post-processing steps for the enrollment and reconstruction. The~difference is that during enrollment the design parameters are chosen as a function of the PUF-circuit output statistics by the device manufacturer. It thus suffices to discuss only the enrollment steps. Figure~\ref{fig:postprocessing} shows the post-processing steps that include transformation, histogram equalization, quantization, bit allocation, and bit-sequence concatenation. RO outputs $\widehat{X}^l$ are correlated due to, {e.g.,} the surrounding logic in the hardware. A~transform $\emph{T}_{r\!\times\!c}(\cdot)$ of size $\displaystyle r\!\times\! c$ is applied to an array of RO outputs to reduce correlations. Decorrelation~performance of a transform depends on the source statistics.
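A minimal sketch of the transform step, using a Sylvester-type Hadamard matrix for the 2D DWHT; the mock RO counts below are an assumption, not the dataset of \cite{ROLarge}.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

r = c = 8
H = hadamard(r) / np.sqrt(r)       # orthonormal transform matrix

# Mock RO frequency counts (placeholder statistics, not the real dataset).
x = np.random.default_rng(1).normal(200.0, 1.0, size=(r, c))

T = H @ x @ H.T                    # 2D DWHT coefficients
x_rec = H.T @ T @ H                # orthogonality gives perfect inversion
print(np.allclose(x_rec, x))       # -> True
```

Since the rows of $H$ are orthonormal, the transform is invertible and energy-preserving; the decorrelation it achieves on real RO outputs depends on the source statistics, as noted above.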
We model each {real-valued} output $T$ in the transform domain, called \emph{transform coefficient}, obtained from an RO-output dataset in \cite{ROLarge} by using the corrected Akaike's information criterion (AICc) \cite{CorrAIC} and the Bayesian information criterion (BIC) \cite{BIC2}. These criteria suggest that a Gaussian distribution can be fitted to each transform coefficient $T$ for the DCT, discrete Walsh-Hadamard transform (DWHT), discrete Haar transform (DHT), and Karhunen-Lo\`{e}ve transform (KLT), which are common orthogonal transforms considered in the literature for image processing, digital watermarking, etc. Use maximum-likelihood estimation to derive unbiased estimates for the parameters of Gaussian distributions. \begin{figure} \centering \includegraphics[width=0.8\textwidth, height=0.8\textheight, keepaspectratio]{./Transformcodingmodel2} \caption{{Transform-coding steps \cite{bizimpaper}.}} \label{fig:postprocessing} \end{figure} The histogram equalization step in Figure~\ref{fig:postprocessing} converts the probability density of the $i$-th coefficient $T_i$ into a standard normal distribution such that $ \widehat{T}_i = \frac{T_i - \mu_i}{\sigma_i}$, where $\displaystyle \mu_i$ is the mean and $\displaystyle \sigma_i$ is the standard deviation of the $i$-th transform coefficient for all $\displaystyle i\!=\!1,2,\ldots,l$. Quantization steps for all transform coefficients are thus the same. Without histogram equalization, we need a different quantizer for each transform coefficient. Therefore, the histogram equalization step reduces the storage for the quantization steps. Transformed and equalized coefficients $\displaystyle \widehat{T}_i$ are independent if the transform $\emph{T}_{r\!\times\!c}(\cdot)$ decorrelates the RO outputs perfectly and the transform coefficients $\displaystyle T_i$ are jointly Gaussian. One can thus use a scalar quantizer for all coefficients without a performance loss for this case. 
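The equalization step can be sketched as follows; the per-coefficient means and standard deviations below are synthetic assumptions, standing in for the statistics estimated from the dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic transform coefficients: rows are devices, columns are the
# l coefficients, each with its own (assumed) mean and standard deviation.
l, devices = 64, 1000
mu = rng.uniform(-5.0, 5.0, l)
sigma = rng.uniform(0.5, 3.0, l)
T = rng.normal(mu, sigma, size=(devices, l))

# Histogram equalization: T_hat_i = (T_i - mu_i) / sigma_i, so that a
# single scalar quantizer can serve every coefficient.
T_eq = (T - T.mean(axis=0)) / T.std(axis=0, ddof=1)

print(np.allclose(T_eq.mean(axis=0), 0.0, atol=1e-9),
      np.allclose(T_eq.std(axis=0, ddof=1), 1.0, atol=1e-9))
```

After equalization every coefficient is (approximately) standard normal, which is exactly what removes the need to store a separate quantizer per coefficient.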
Scalar quantizers and bit extraction methods are given below to satisfy the security and reliability requirements of the FCS with the independence assumption, which can be combined with a correlation-thresholding approach in practice. \section{Joint Quantizer and Error-Correction Code Design}\label{sec:quantandcodedesign} The aim of the post-processing steps in Figure~\ref{fig:postprocessing} is to extract a uniformly-random bit sequence $\displaystyle X^n$. Use a quantizer $\displaystyle \Delta(\cdot)$ with quantization-interval values $\displaystyle k=1,2,\ldots,2^{K_i}$, where $\displaystyle K_i$ is the number of bits we extract from the $i$-th coefficient $\widehat{T}_i$ for $i\!=\!1,2,\ldots,l$. Let \begin{align} \Delta(\hat{t}_i) = k\quad \text{if}\quad b_{k-1}\!<\!\hat{t}_i\!\leq\!b_k \label{eq:quantizer} \end{align} and choose $\displaystyle b_k = \Phi^{-1}\left(\frac{k}{2^{K_i}}\right)$, where $\displaystyle \Phi^{-1}(\cdot)$ is the quantile function of the standard normal distribution. The output $k$ is assigned to a bit sequence of length $K_i$. The most likely error event when we quantize $\widehat{T}_i$ is a jump to a neighboring quantization interval due to the zero-mean noise model assumed. Thus, apply a Gray mapping when assigning bit sequences of length $K_i$ to the integers $k=1,2,\ldots,2^{K_i}$, so that neighboring bit sequences differ in only one bit. We next discuss a reliability metric proposed for a joint quantizer and code design by fixing the maximum number of erroneous transform coefficients and considering an error-correction code that can correct all error patterns with up to a fixed number of errors. \subsection{Quantizer Design with Fixed Maximum Number of Errors} We discuss a conservative approach that assumes that either all bits extracted from a transform coefficient are correct or they all flip.
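A minimal sketch of the quantizer in (\ref{eq:quantizer}) with the equiprobable boundaries $b_k = \Phi^{-1}(k/2^{K_i})$ and a Gray labeling of the intervals:

```python
import numpy as np
from statistics import NormalDist

def boundaries(K):
    # b_k = Phi^{-1}(k / 2^K); b_0 = -inf, b_{2^K} = +inf.
    inv = NormalDist().inv_cdf
    return [-np.inf] + [inv(k / 2 ** K) for k in range(1, 2 ** K)] + [np.inf]

def quantize(t_hat, K):
    # Interval index k with b_{k-1} < t_hat <= b_k, then a Gray label so
    # that neighbouring intervals differ in exactly one bit.
    k = int(np.searchsorted(boundaries(K), t_hat, side="left"))  # 1..2^K
    g = (k - 1) ^ ((k - 1) >> 1)
    return format(g, f"0{K}b")

# K = 2: boundaries at about -0.674, 0, 0.674.
print([quantize(t, 2) for t in (-1.0, -0.3, 0.3, 1.0)])
# -> ['00', '01', '11', '10']
```

Each interval carries probability $2^{-K}$ under the standard normal density, so the extracted bits are uniform, and the Gray labels make the most likely error event (a jump to a neighboring interval) flip a single bit.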
The \textit{correctness probability} $P_c$ of a transform coefficient is defined to be the probability that all bits associated with this coefficient are correct. This metric is used to determine the number of bits extracted from each coefficient such that there is a channel encoder and a bounded minimum distance decoder (BMDD) that satisfy the block-error probability constraint $P_B\leq10^{-9}$, which is a common block-error probability considered for PUFs that consist of CMOS circuits \cite{ROFirst}. This approach results in reliability guarantees for the random-output RO arrays. Let $Q(\cdot)$ be the Q-function, $\displaystyle \sigma^2_{\hat{n}}$ the noise variance, and $\displaystyle f_{\widehat{T}}$ the probability density of the standard Gaussian distribution. For a $K$-bit quantizer and the quantization boundaries $b_k$ as in (\ref{eq:quantizer}) for an equalized Gaussian transform coefficient $\widehat{T}$, the correctness probability is \begin{align} P_c&(K) \!=\!\sum_{k=0}^{2^K-1}\!\int\displaylimits_{b_{k}}^{b_{k+1}}\Bigg[\!Q\Big(\frac{b_{k}\!-\!\hat{t}}{\sigma_{\hat{n}}}\Big)\!-\!Q\Big(\frac{b_{k+1}\!-\!\hat{t}}{\sigma_{\hat{n}}}\Big)\!\Bigg]f_{\widehat{T}}(\hat{t})d{\hat{t}}\label{eq:correctness} \end{align} which is the probability that the additive noise does not change the quantization interval assigned to the transform coefficient, i.e., all bits associated with the transform coefficient stay the same after adding noise. Suppose the channel decoder can correct all errors in up to $\displaystyle C_{\text{max}}$ transform coefficients. Suppose~further that coefficient errors occur independently, i.e., the noise components on different transform coefficients are mutually independent, and that the correctness probability $\displaystyle P_{c,i}(K)$ of the $i$-th coefficient $\widehat{T}_i$ for $i\!=\!1,2,\ldots,l$ is at least $\displaystyle \thickbar{P}_c(C_{\text{max}})$.
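The integral in (\ref{eq:correctness}) can be evaluated numerically; the noise standard deviation below is an assumed value for illustration, not one estimated from the dataset.

```python
import numpy as np
from math import erfc, sqrt, inf
from statistics import NormalDist

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))   # Gaussian tail (Q-) function

def correctness(K, sigma_n, grid=8000):
    # Numerical evaluation of P_c(K): the probability that additive
    # N(0, sigma_n^2) noise keeps the equalized coefficient in its interval.
    inv = NormalDist().inv_cdf
    b = [-inf] + [inv(k / 2 ** K) for k in range(1, 2 ** K)] + [inf]
    t = np.linspace(-8.0, 8.0, grid)   # effective support of f_T
    dt = t[1] - t[0]
    f = np.exp(-t ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    pc = 0.0
    for k in range(2 ** K):
        for ti, fi in zip(t, f):
            if b[k] < ti <= b[k + 1]:
                pc += (Q((b[k] - ti) / sigma_n)
                       - Q((b[k + 1] - ti) / sigma_n)) * fi * dt
    return pc

print(correctness(1, 0.1) > correctness(2, 0.1))  # -> True
```

As expected, extracting more bits (larger $K$) moves the boundaries closer together and lowers the correctness probability, which is the trade-off the bit-allocation step balances.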
A sufficient condition for satisfying the block-error probability constraint $P_B\!\leq\!10^{-9}$ is that $\displaystyle \thickbar{P}_c(C_{\text{max}})$ satisfies the inequality \begin{align} \sum_{c=C_{\text{max}}+1}^{l}{l \choose c}{(1\!-\!\thickbar{P}_c(C_{\text{max}}))}^{c}{\thickbar{P}_c(C_{\text{max}})}^{l-c}\!\leq\! 10^{-9}\label{eq:threshold}. \end{align} Determine the number $K_i$ of bits extracted from the $i$-th transform coefficient as the maximum value $K$ such that $\displaystyle P_{c,i}(K)\geq \thickbar{P}_{c}(C_{\text{max}})$. The first coefficient, i.e., the DC coefficient, $\widehat{T}_1$ is not used since its value is a scaled version of the mean of the RO outputs in the same array, which is generally known by an attacker. Ambient-temperature and supply-voltage variations have a highly-linear effect on the RO outputs, so the DC coefficient is the most affected coefficient, which is another reason not to use it \cite{bizimtemperature}. Therefore, choose $K_1\!=\!0$ so that the total number $\displaystyle n(C_{\text{max}})$ of extracted bits is \begin{align} n(C_{\text{max}})\!=\!\sum_{i=2}^l K_i \label{eq:totalbits}. \end{align} In the worst case, the coefficients in error are the coefficients from which the largest numbers of bits are extracted. Sort the numbers $K_i$ of bits extracted from all coefficients in descending order such that $\displaystyle K^{\prime}_{i}\!\geq\!K^{\prime}_{i+1}$ for all $i\!=\!1,2,\ldots,l-2$. The channel decoder thus must be able to correct up to \begin{align} e( C_{\text{max}}) = \sum_{i=1}^{C_{\text{max}}} K^{\prime}_{i}\label{eq:minimume} \end{align} bit errors, which can be satisfied by using a block code with minimum distance $\displaystyle d_{\text{min}}\!\geq\!2e(C_{\text{max}})\!+\!1$ since a BMDD can correct all error patterns with up to $e=\lfloor\frac{d_{\text{min}}-1}{2}\rfloor$ errors \cite{ECC}.
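The smallest threshold $\thickbar{P}_c(C_{\text{max}})$ satisfying (\ref{eq:threshold}) can be found by bisection, since the left-hand side is decreasing in the correctness probability; taking $l=256$ for a $16\times 16$ array is an assumption made here for illustration.

```python
from math import comb

def tail(pc, l, c_max):
    # Left-hand side of the sufficient condition: the probability that
    # more than c_max of the l coefficients are in error (independent errors).
    return sum(comb(l, c) * (1.0 - pc) ** c * pc ** (l - c)
               for c in range(c_max + 1, l + 1))

def threshold(l, c_max, eps=1e-9):
    # Smallest correctness threshold meeting the block-error constraint,
    # found by bisection (tail(pc) is decreasing in pc).
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tail(mid, l, c_max) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

# 16 x 16 array (l = 256 coefficients assumed), C_max = 20:
print(round(threshold(256, 20), 4))
```

The resulting thresholds can be compared with the values reported in Table~\ref{tab:myresults}.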
Suppose a key bound to physical identifiers in a device is used in the advanced encryption standard (AES) with a uniformly-distributed secret key of length 128 bits. The block code used in the FCS should thus have a code length of at most $\displaystyle n(C_{\text{max}})$ bits, a code dimension of at least $128$ bits, and a minimum distance of $\displaystyle d_{\text{min}} \geq 2e(C_{\text{max}})+1$ for a fixed $\displaystyle C_{\text{max}}$. The code rate should be as high as possible to operate close to the optimal (secret-key, privacy-leakage) rate point of the FCS. This optimization problem is hard to solve. One can illustrate by an exhaustive search over a set of $\displaystyle C_{\text{max}}$ values and over a selection of algebraic codes that there is a channel code that satisfies these constraints with a reliability guarantee for each extracted bit. Restricting the search to codes that admit low-complexity encoders and decoders is desirable for, e.g., IoT applications, for which complexity is the bottleneck. The conditions listed above are conservative. First, for a given transform coefficient, the correctness probability can be significantly greater than the correctness threshold $\displaystyle \thickbar{P}_c(C_{\text{max}})$. Second, due to the Gray mapping, it is more likely that fewer than $K_i$ bits are in error when the $i$-th coefficient is erroneous. Third, it is unlikely that the bit errors always occur in the transform coefficients from which the largest numbers of bits are extracted. Thus, even if a channel code cannot correct all error patterns with up to $e( C_{\text{max}})$ errors, it can still be the case that the block-error probability constraint is satisfied. We next illustrate such a case. \section{PUF Performance Evaluations}\label{sec:comparisons} Suppose RO outputs $\widehat{X}^l$ are represented as a vector random variable with an autocovariance matrix $\mathbf{C_{\widehat{X}\widehat{X}}}$.
Consider RO arrays of sizes $8\!\times\!8$ and $16\!\times\!16$. The autocovariance matrix of such RO-array outputs and the noise parameters can be estimated from the RO output dataset in \cite{ROLarge}. Using the dataset, we compare the performance of the DCT, DWHT, DHT, and KLT in terms of their decorrelation efficiency, complexity, uniqueness, and~security. \subsection{Decorrelation Performance} One should eliminate correlations between the RO outputs and make them independent to extract uniform bit sequences by quantizing each transform coefficient separately. Use the \textit{decorrelation} \textit{efficiency} $\displaystyle \eta_c$ \cite{decorrelation} as a decorrelation performance metric. Consider the autocovariance matrix $\mathbf{C_{TT}}$ of the transform coefficients, so that $\displaystyle \eta_c$ of a transform is \begin{align} \eta_c = 1-\frac{\sum\limits_{a=1}^{l}\sum\limits_{b=1}^{l}|\mathbf{C_{TT}}(a,b)|\mathds{1}\{a\!\ne\!b\}}{\sum\limits_{a=1}^{l}\sum\limits_{b=1}^{l}|\mathbf{C_{\widehat{X}\widehat{X}}}(a,b)|\mathds{1}\{a\!\ne\!b\}} \end{align} where the indicator function $\displaystyle \mathds{1}\{a\!\ne\!b\}$ takes on the value 1 if $\displaystyle a\!\ne\!b$ and 0 otherwise. The decorrelation efficiency of the KLT is 1, which is optimal \cite{decorrelation}. We list the average decorrelation-efficiency results of the other transforms in Table~\ref{tab:decorrelationeff}. All transforms have similar and good decorrelation-efficiency performance for the RO outputs in the dataset \cite{ROLarge}. The DCT and DHT have the highest efficiency for $\displaystyle 8\!\times \!8$ RO arrays, whereas for $\displaystyle 16 \!\times \! 16$ RO arrays, the best transform is the DWHT. Table~\ref{tab:decorrelationeff} indicates that increasing the array size improves $\displaystyle \eta_c$.
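A sketch of this metric: for the KLT computed from the exact autocovariance, $\eta_c$ equals 1 up to floating-point error. The AR(1)-style autocovariance below is an assumption, not the dataset statistics.

```python
import numpy as np

def decorrelation_efficiency(C_xx, C_tt):
    # eta_c = 1 - (sum of off-diagonal |C_TT|) / (sum of off-diagonal |C_XX|)
    off = ~np.eye(C_xx.shape[0], dtype=bool)
    return 1.0 - np.abs(C_tt)[off].sum() / np.abs(C_xx)[off].sum()

l = 16
i = np.arange(l)
C_xx = 0.9 ** np.abs(i[:, None] - i[None, :])   # assumed AR(1) covariance

# KLT: the eigenvectors of C_XX diagonalize it exactly, so eta_c = 1.
_, U = np.linalg.eigh(C_xx)
C_tt = U.T @ C_xx @ U

print(round(decorrelation_efficiency(C_xx, C_tt), 6))  # -> 1.0
```

For the fixed (source-independent) transforms in Table~\ref{tab:decorrelationeff}, $C_{TT}$ is not exactly diagonal and $\eta_c$ falls slightly below 1.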
\begin{table}[t] \caption{The average RO output decorrelation-efficiency results.} \centering \begin{tabular}{ c|C{1.7cm}|C{1.7cm}|C{1.7cm}| } \cline{2-4} & DCT & DWHT & DHT\\ \cline{1-4} \multicolumn{1}{|c|}{$\displaystyle \eta_c$ for $8\times 8$} & 0.9978 & 0.9977 & 0.9978\\ \cline{1-4} \multicolumn{1}{|c|}{$\displaystyle \eta_c$ for $16\times 16$} & 0.9987 & 0.9988 & 0.9986\\ \cline{1-4} \end{tabular}\label{tab:decorrelationeff} \end{table} \subsection{Transform Complexity} We measure the complexity of a transform in terms of the number of operations required to compute the transform. We are interested in a computational-complexity comparison for RO arrays of sizes $ r\!=\!c\!=\!8$ and $r\!=\!c\!=\!16$, which are powers of 2, so that fast algorithms are available for the DCT, DWHT, and DHT. The computational complexity of the KLT for $r\!=\!c\!=\!n$ is $\displaystyle O(n^3)$, while it is $\displaystyle O(n^2\log_2 n)$ for the DCT and DWHT, and $\displaystyle O(n^2)$ for the DHT \cite{mySPbook}. There are efficient implementations of the DWHT without multiplications \cite{bizimMDPI}, which can also be applied to the transforms proposed in \cite{bizimICASSP2020}. The DWHT is thus a good candidate for RO PUF designs for IoT applications. For instance, a hardware implementation of the two-dimensional (2D) DWHT on a Xilinx ZC706 evaluation board with a Zynq-7000 XC7Z045 system-on-chip is shown in \cite{bizimMDPI} to require approximately $11\%$ smaller hardware area and $64\%$ less processing time than the benchmark RO PUF hardware implementation in \cite{Pufky}. \subsection{Uniqueness and Security}\label{subsec:uniqueness} The bit sequence extracted from a physical identifier should be uniformly distributed to make the rate region $\mathcal{R}_{\text{FCS}}$ in (\ref{eq:ls0}) valid.
A common measure of the randomness of bit sequences, called \textit{uniqueness}, is the average fractional Hamming distance between the bit sequences extracted from different RO PUFs. Similar uniqueness results are obtained for all transforms, where the mean Hamming distance is $0.500$ and the Hamming-distance variance is approximately $\displaystyle 7\!\times \!10^{-4}$. All~transforms thus provide close-to-optimal uniqueness results due to their high decorrelation efficiencies and equipartitioned quantization intervals, which are better than previous RO PUF results with mean values of $0.462$ \cite{ROFirst} and $0.473$ \cite{ROLarge}. The National Institute of Standards and Technology (NIST) provides a set of randomness tests that check whether a bit sequence can be differentiated from a uniformly random bit sequence \cite{NIST}. Apply these tests to measure the randomness of the generated sequences. The bit sequences generated from ROs in the dataset \cite{ROLarge} with the DWHT pass most of the applicable tests for short lengths, which is considered to be an acceptable result \cite{NIST}. The KLT performs the best due to its optimal decorrelation performance. One can apply a thresholding approach such that the reliable transform coefficients from which the bits are extracted do not have high correlations, which further improves the security performance. \section{Error-Correction Codes for PUFs with Transform Coding}\label{sec:correction} Suppose that bit sequences extracted by using the transform-coding method are i.i.d. and uniformly distributed so that the secrecy leakage is zero. We assume that the signal processing steps mentioned above perform well, so we can conduct standard information- and coding-theoretic analysis. We list different codes designed for the transform-coding algorithm according to the reliability metric considered above.
\begin{figure} \centering \newlength\figureheight \newlength\figurewidth \setlength\figureheight{5.2cm} \setlength\figurewidth{14.3cm} \input{./Selectedcorrectness2.tikz} \caption{The correctness probabilities for transform coefficients.} \label{fig:correctnessprob} \end{figure} Select a channel code for the quantizer designed above for a fixed maximum number of errors to store a secret key of length 128 bits. The correctness probabilities defined in (\ref{eq:correctness}) for the transform coefficients with the three highest and three smallest probabilities are plotted in Figure~\ref{fig:correctnessprob}. The indices of the $16\times 16$ transform coefficients follow the order in the dataset \cite{ROLarge}, where the coefficient index at the first row and first column is $1$, and it increases columnwise up to $16$ so that the second row starts with the index $17$, the third row with the index $33$, etc. The most reliable transform coefficients are the low-frequency coefficients, which are in our case at the upper-left corner of the 2D transform-coefficient array with indices such as $1,2,3,17,18,19,33,34,$ and $35$. The low-frequency transform coefficients therefore have the highest signal-to-noise ratios (SNRs) for the source and noise statistics obtained from the RO dataset \cite{ROLarge}. The least reliable coefficients are observed to be spatially away from the transform coefficients at the upper-left or lower-right corners of the 2D transform-coefficient array. These results indicate that the \emph{SNR-packing efficiency}, which can be defined similarly as the energy-packing efficiency, of a transform follows a more complicated scan order than the classic zig-zag scan order used for the energy-packing efficiency. 
Observe~from Figure~\ref{fig:correctnessprob} that increasing the number of extracted bits decreases the correctness probability for all coefficients since the quantization boundaries get closer so that errors due to noise become more likely, {i.e.,} the probability $P_{c}(K)$ defined in (\ref{eq:correctness}) decreases with increasing $K$. Fix the maximum number $\displaystyle C_{\text{max}}$ of transform coefficients allowed to be in error and calculate the correctness threshold $\displaystyle \thickbar{P}_c(C_{\text{max}})$ using (\ref{eq:threshold}), the total number $\displaystyle n(C_{\text{max}})$ of extracted bits using (\ref{eq:totalbits}), and the number $\displaystyle e(C_{\text{max}})$ of errors the block code should be able to correct using (\ref{eq:minimume}). Observe that if $\displaystyle C_{\text{max}}\!\leq\!10$, $\thickbar{P}_c(C_{\text{max}})$ is so large that $\displaystyle P_{c,i}(K\!=\!1)\!\leq\!\thickbar{P}_c(C_{\text{max}})$ for all $i=2,\ldots,l$. If $\displaystyle 11\!\leq\!C_{\text{max}}\!\leq\!15$, $\displaystyle n(C_{\text{max}})$ is less than the required code dimension of 128 bits. Furthermore, increasing $\displaystyle C_{\text{max}}$ results in a smaller correctness threshold $\displaystyle \thickbar{P}_c(C_{\text{max}})$ so that the maximum of the number $\displaystyle K_{\text{max}}(C_{\text{max}})\!=\!K^{\prime}_1(C_{\text{max}})$ of bits extracted among the $l-1$ used coefficients increases. This result can increase hardware complexity. Therefore, consider only the cases where $\displaystyle C_{\text{max}}\!\leq\!20$. Table~\ref{tab:myresults} shows $\displaystyle \thickbar{P}_c(C_{\text{max}})$, $\displaystyle n(C_{\text{max}})$, and $\displaystyle e(C_{\text{max}})$ for a range of $\displaystyle C_{\text{max}}$ values used for channel-code selection. 
\begin{table}[t] \centering \caption{Code-parameter constraints.} \begin{tabular}{ |c|c|c|c|c|c| } \hline \rowcolor{blue!25} $\displaystyle \mathbf{C_{\text{\textbf{max}}}}$ & $\mathbf{16}$ & $\mathbf{17}$& $\mathbf{18}$ & $\mathbf{19}$ & $\mathbf{20}$\\ \hline $\displaystyle \thickbar{P}_c$ & $0.9902$ & $0.9889$ & $0.9875$ & $0.9860$ & $0.9844$\\ \hline $\displaystyle K_{\text{max}}$ & $3$ & $3$ & $3$ & $3$ & $3$\\ \hline $\displaystyle n$ & $144$ & $224$ & $250$ & $255$ & $259$\\ \hline $\displaystyle e$ & $18$ & $20$& $21$ & $23$ & $25$\\ \hline \end{tabular}\label{tab:myresults} \end{table} Consider binary (extended) Bose–Chaudhuri–Hocquenghem (BCH) and Reed-Solomon (RS) codes, which have good minimum-distance $d_{\text{min}}$ properties. An exhaustive search does not provide a code with dimension of at least 128 bits and with parameters satisfying any of the $\displaystyle (n(C_{\text{max}}),\,e(C_{\text{max}}))$ pairs in Table~\ref{tab:myresults}. However, the correctness threshold analysis leading to Table~\ref{tab:myresults} is conservative. Thus, choose a BCH code with parameters as close as possible to an $\displaystyle (n(C_{\text{max}}),\,e(C_{\text{max}}))$ pair, for which one can prove that even if the number $\displaystyle e_{\text{BCH}}$ of errors the chosen BCH code can correct is less than $\displaystyle e(C_{\text{max}})$, the block-error probability constraint is satisfied. Consider the binary BCH code with the block length $255$, code dimension $131$, and a capability of correcting all error patterns with up to $\displaystyle e_{\text{BCH}}=18$ errors. First, impose the condition that exactly one bit is extracted from each coefficient, {i.e.,} $K_{i}\!=\!1$ for all $i\!=\!2,3,\ldots,l$, so that in total $n\!=\!l-1\!=\!255$ bits are obtained, which results in mutually independent bit errors $E_i$. 
Therefore, the chosen block code should be able to correct all error patterns with up to $e\!=\!20$ bit errors rather than $e(20)\!=\!25$ bit errors, which is still greater than the error-correction capability $\displaystyle e_{\text{BCH}}=18$ of the considered BCH code. The block error probability $P_B$ for the BCH code $\mathcal{C}(255,131,37)$ with a BMDD corresponds to the probability of having more than $18$ errors in the codeword, i.e., we obtain \begin{align} P_B = \sum_{j=19}^{255}\Bigg[\sum_{A\in\mathcal{F}_j}\prod_{i\in A}(1-P_{c,i})\,\bigcdot\prod_{i\in A^{c}}P_{c,i} \Bigg] \label{eq:blockerrorforbch} \end{align} where $P_{c,i}$ is the correctness probability of the $i$-th transform coefficient $\widehat{T}_i$ defined in (\ref{eq:correctness}) for $i\!=\!2,3,\ldots,256$, $\displaystyle \mathcal{F}_j$ is the set of all size-$j$ subsets of the set $\displaystyle\{2,3,\ldots,256\}$, and $A^{c}$ denotes the complement of the set $A$. The correctness probabilities $P_{c,i}$ are different and they represent probabilities of independent events due to the independence assumption for the transform coefficients. We use the discrete Fourier transform characteristic function method \cite{DFTCF} to calculate the block-error probability and obtain the result $P_B\!\approx\!1.26\!\times\!10^{-11}\!<\!10^{-9}$. The block-error probability constraint is thus satisfied by using the BCH code $\mathcal{C}(255,131,37)$ with a BMDD although the conservative analysis suggests that it would not be satisfied. We next compare the BCH code $\mathcal{C}(255,131,37)$ with previous codes proposed for binding keys to physical identifiers with the FCS and a secret-key length of $128$ bits such that $P_B\!\leq\!10^{-9}$ is satisfied. The (secret-key, privacy-leakage) rate pair for this code is $(R_\text{s},R_\ell)=(\frac{131}{255},1\!-\!\frac{131}{255})\approx(0.514,\,0.486)$ bits/source-bit. This pair is significantly better than previous results. 
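A sketch of the DFT characteristic-function method used to evaluate the Poisson-binomial tail in (\ref{eq:blockerrorforbch}); the per-coefficient error probabilities below are synthetic placeholders, not the values estimated from the RO dataset.

```python
import numpy as np

def poisson_binomial_pmf(p):
    # DFT characteristic-function method: the PMF of a sum of independent,
    # non-identical Bernoulli(p_j) variables is recovered from its
    # characteristic function sampled at the (n+1)-th roots of unity.
    n = len(p)
    w = np.exp(2j * np.pi / (n + 1))
    chi = np.array([np.prod(1.0 + (w ** m - 1.0) * p) for m in range(n + 1)])
    return np.clip(np.real(np.fft.fft(chi)) / (n + 1), 0.0, 1.0)

# Synthetic per-coefficient bit-error probabilities 1 - P_{c,i} near 0.01
# (placeholder assumption, not the dataset-derived values).
rng = np.random.default_rng(3)
p_err = rng.uniform(0.005, 0.015, 255)

pmf = poisson_binomial_pmf(p_err)
block_error = pmf[19:].sum()       # P(more than 18 coefficient errors)
print(block_error < 1e-6)          # -> True
```

This avoids the combinatorial sum over all subsets $\mathcal{F}_j$ in (\ref{eq:blockerrorforbch}), which is intractable to enumerate directly for $n=255$.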
The main reason for obtaining a better (secret-key, privacy-leakage) rate pair is that the quantizer defined above allows one to obtain a higher identifier-output reliability by decreasing the number of bits extracted from a transform coefficient. Compare the secret-key $R_\text{s}$ and privacy-leakage $R_\ell$ rates of the BCH code $\mathcal{C}(255,131,37)$ with the region of all achievable rate pairs for the CS model and the FCS for a BSC $P_{Y|X}$ with crossover probability $p_b\!=\! 1-\frac{1}{l-1}\sum_{i=2}^lP_{c,i}(K_i\!=\!1)\!\approx\!0.0097$, {i.e.,} the probability of being in error averaged over all used transform coefficients with the quantizer defined above. Compute the boundary points of the region $\mathcal{R}$ by finding the optimal auxiliary random variable $U$ in (\ref{eq:chosensecret}) when $P_{Y|X}$ is a BSC. The regions of all rate pairs achievable by the FCS and CS model, the maximum secret-key rate point, the (secret-key, privacy-leakage) rate pair of the BCH code, and a finite-length (non-asymptotic) bound \cite{Polyanskiy} for the block length of $n=255$ bits and $P_B\!=\!10^{-9}$ are plotted in Figure~\ref{fig:ratecomparison}. \begin{figure} \centering \setlength\figureheight{5.75cm} \setlength\figurewidth{14.4cm} \input{./Comparison.tikz} \caption{The operation point of the considered BCH code $\mathcal{C}(255,131,37)$, regions of achievable rate pairs according to (\ref{eq:ls0}) and (\ref{eq:chosensecret}), the maximum secret-key rate point $(R_\ell^*,R_{\text{s}}^*)$, and a finite-length bound for $n=255$ bits, $P_B=10^{-9}$, and BSC $(0.0097)$.} \label{fig:ratecomparison} \end{figure} The maximum secret-key rate is $R_\text{s}^*\!\approx\!0.922$ bits/source-bit with a corresponding minimum privacy-leakage rate of $R_\ell^*\!\approx\!0.079$ bits/source-bit. There is a gap between the rate tuple achieved by the BCH code and the only operation point where the FCS is optimal, i.e., $(R_\ell^*,R_\text{s}^*)$.
Part~of this rate loss can be explained by the short block length of the code and the small block-error probability constraint. The finite-length bound given in \cite[Theorem 52]{Polyanskiy} establishes that the rate pair $(R_\text{s},R_\ell)\!=\!(0.691,0.309)$ bits/source-bit is achievable by using the FCS, as depicted in Figure~\ref{fig:ratecomparison}. One can therefore further improve the rate pairs by using better channel codes and decoders with higher hardware complexity, but this may not be possible for IoT applications. Figure~\ref{fig:ratecomparison} also illustrates that there exist other code constructions (other than standard channel codes) that~reduce the privacy-leakage rate as well as the storage rate for each fixed secret-key rate, which we consider below. \section{Code Constructions for PUFs}\label{sec:WZchapter} Consider the two-terminal key agreement problem, where the identifier outputs during enrollment are noiseless. We mention two optimal linear code constructions from \cite{bizimWZ} that are based on Wyner-Ziv (WZ) coding \cite{WZCard}. The first construction uses random linear codes and achieves all points of the key-leakage-storage regions of the GS and CS models. The second construction uses nested polar codes for vector quantization during enrollment and for error correction during reconstruction. Simulations show that nested polar codes achieve privacy-leakage and storage rates that improve on existing code designs, and one designed code achieves a rate tuple that cannot be achieved by existing methods. Several practical code constructions for key agreement with identifiers have been proposed in the literature. For instance, the COFE and the FCS both require a standard error-correction code to satisfy the constraints of, respectively, the key generation (GS model) and key embedding (CS model) problems, as discussed above. Similarly, a polar code construction is proposed in \cite{IgnaPolar} for the GS model. 
These constructions are shown to be suboptimal in terms of the privacy-leakage and storage rates. The binary Golay code is used in \cite{IgnaTrans} as a vector quantizer (VQ) in combination with Slepian-Wolf (SW) codes \cite{SW} to illustrate that the key vs. storage (or key vs. leakage) rate ratio can be increased via quantization. This observation motivates the use of a VQ to improve the performance of previous constructions. We next consider VQ by using WZ coding to decrease storage rates. The WZ-coding construction turns out to be optimal, which is not coincidental. For instance, the bounds on the storage rate of the GS model and on the WZ rate (storage rate) have the same mutual information terms optimized over the same conditional probability distribution. This similarity suggests an \emph{equivalence} that is closely related to the concept of \emph{formula duality}. In fact, the optimal random code construction, encoding, and decoding operations are identical for both problems. One therefore can call the GS model and WZ problem \emph{functionally equivalent}. Such a strong connection suggests that there might exist constructive methods that are optimal for both problems for all channels, which is closely related to the \emph{operational duality} concept. 
\begin{figure} \centering \resizebox{0.61\linewidth}{!}{ \begin{tikzpicture} \node (so) at (-1.5,-2.2) [draw,rounded corners = 5pt, minimum width=1.0cm,minimum height=0.8cm, align=left] {$P_X$}; \node (a) at (0,0) [draw,rounded corners = 6pt, minimum width=3.2cm,minimum height=1.2cm, align=left] {$ (S,W) \overset{(a)}{=} \Enc(X^n)$\\ $W\overset{(b)}{=}\Enc(X^n,S)$}; \node (c) at (3,-2.2) [draw,rounded corners = 5pt, minimum width=1.6cm,minimum height=0.8cm, align=left] {$P_{Y|X}$}; \node (b) at (6,0) [draw,rounded corners = 6pt, minimum width=3.2cm,minimum height=1.2cm, align=left] {$\widehat{S} = \Dec\left(Y^n,W\right)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a.east) -- (b.west) node [midway, above] {$W$}; \node (a1) [below of = a, node distance = 2.2cm] {$X^n$}; \node (b1) [below of = b, node distance = 2.2cm] {$Y^n$}; \node (k9) [below of = a1, node distance = 0.6cm] {Enrollment}; \node (k19) [below of = b1, node distance = 0.6cm] {Reconstruction}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (so.east) -- (a1.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a1.north) -- (a.south); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a1.east) -- (c.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (c.east) -- (b1.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (b1.north) -- (b.south); \node (a2) [above of = a, node distance = 2.2cm] {$\;S$}; \node (b2) [above of = b, node distance = 2.2cm] {$\;\widehat{S}$}; \draw[decoration={markings,mark=at position 1 with 
{\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(b.north)-(0.3,0)$) -- ($(b2.south)-(0.3,0)$) node [midway, left] {$(a)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(b.north)+(0.3,0)$)-- ($(b2.south)+(0.3,0)$) node [midway, right] {$(b)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a.north)-(0.3,0)$)-- ($(a2.south)-(0.3,0)$) node [midway, left] {$(a)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a2.south)+(0.3,0)$)-- ($(a.north)+(0.3,0)$) node [midway, right] {$(b)$}; \end{tikzpicture} } \caption{The $(a)$ GS and $(b)$ CS models.}\label{fig:problemsetup} \end{figure} Consider the GS model in Figure~\ref{fig:problemsetup}(a), where a secret key is generated from a biometric or physical source. During enrollment, the encoder observes an i.i.d. noiseless sequence $X^n$, generated by the source according to some $P_X$, and computes a secret key $S$ and public helper data $W$ as $\displaystyle (S,W)\,{=}\,{\Enc}(X^n)$. During reconstruction, the decoder observes a noisy source measurement $Y^n$ of the source $X^n$ through a memoryless channel $P_{Y|X}$ together with the helper data $W$. The decoder estimates the secret key as $\displaystyle \widehat{S}\,{=}\,{\Dec}(Y^n\!,W)$. Similarly, Figure~\ref{fig:problemsetup}(b) shows the CS model, where a secret key $S$ that is independent of $(X^n,Y^n)$ is embedded into the helper data as $W = \Enc(X^n,S)$. The decoder for the CS model estimates the secret key as $\widehat{S}=\Dec(Y^n,W)$. The source, measurement, secret key, and storage alphabets $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{S}$, and $\mathcal{W}$ are finite sets, which can be achieved if, e.g., the transform-coding algorithm discussed above is applied. 
\begin{definition}\label{def:achievabilityGSCS} A key-leakage-storage tuple $(R_\text{s},R_\ell,R_\text{w})$ is \emph{achievable} for GS and CS models if, given any $\delta>0$, there is some $n\!\geq\!1$, an encoder, and a decoder such that $\displaystyle R_\text{s}=\frac{\log|\mathcal{S}|}{n}$ and \begin{alignat}{2} &P_B=\Pr[S\ne\hat{S}] \leq \delta &&\qquad\quad \text{(reliability)} \label{eq:reliabilityconstMMMM}\\ &I\left(S;W\right) \leq n\delta &&\qquad\quad\text{(weak secrecy)} \label{eq:secrecyconstMMMM}\\ &I\left(X^n;W\right) \leq n(R_\ell+\delta) \quad\quad\quad&&\qquad\quad \text{(privacy)} \label{eq:privacyconstMMMM}\\ &H(S) \geq n(R_\text{s}-\delta) \quad&&\qquad\quad \text{(uniformity)}\label{eq:uniformityconstMMMM}\\ &\log|\mathcal{W}| \le n(R_\text{w}+\delta) &&\qquad\quad \text{(storage)}\label{eq:storageconstMMMM} \end{alignat} are satisfied. The \emph{key-leakage-storage} regions $\mathcal{R}_{\text{gs}}$ and $\mathcal{R}_{\text{cs}}$ for the GS and CS models, respectively, are the closures of the sets of achievable tuples for the corresponding models. \end{definition} \begin{theorem}[\hspace{1sp}\cite{IgnaTrans}]\label{theo:secrecyregions} The key-leakage-storage regions for the GS and CS models, respectively, are \begin{align} \begin{split} \mathcal{R}_{\text{gs}}\! =\! &\bigcup_{P_{U|X}}\!\Big\{\left(R_\text{s},R_\ell,R_\text{w}\right)\!\colon\! \nonumber\\ &0\leq R_\text{s}\leq I(U;Y),\nonumber\\ &R_\ell\geq I(U;X)-I(U;Y),\nonumber\\ &R_\text{w}\geq I(U;X)-I(U;Y)\Big\} \end{split} \qquad\quad\text{ and }\qquad \begin{split} \mathcal{R}_{\text{cs}}\! =\! &\bigcup_{P_{U|X}}\!\Big\{\left(R_\text{s},R_\ell,R_\text{w}\right)\!\colon\! \nonumber\\ &0\leq R_\text{s}\leq I(U;Y),\nonumber\\ &R_\ell\geq I(U;X)-I(U;Y),\nonumber\\ &R_\text{w}\geq I(U;X)\Big\} \end{split} \end{align} where $U-X-Y$ form a Markov chain. These regions are convex sets.
The alphabet $\mathcal{U}$ of the auxiliary random variable $U$ can be limited to have size $\displaystyle |\mathcal{U}|\!\leq\!|\mathcal{X}|+1$ for both regions. \end{theorem} \begin{remark} One can improve the weak secrecy to strong secrecy, i.e., we can replace (\ref{eq:secrecyconstMMMM}) with $I(S;W)\leq \delta$ by applying information reconciliation and privacy amplification steps to multiple blocks of identifier outputs as described in \cite{MaurerSecrecyFree}, e.g., by using multiple PUFs in a device for key agreement. \end{remark} Assume, as above, that $X^n\sim \text{Bern}^n(\frac{1}{2})$ and the channel $P_{Y|X}$ is a BSC$(p_A)$, where $p_A\in[0, 0.5]$. Define the star-operation as $q*p_A = q(1-p_A)+(1-q)p_A$. The key-leakage-storage region of the GS model is \begin{align} \mathcal{R}_{\text{gs},\text{bin}}\! =\! &\bigcup_{q\in[0,0.5]}\!\Big\{\left(R_\text{s},R_\ell,R_\text{w}\right)\!\colon\!\nonumber\\ &0\leq R_\text{s}\leq 1- H_b(q*p_A),\nonumber\\ &R_\ell\geq H_b(q*p_A)- H_b(q),\nonumber\\ &R_\text{w}\geq H_b(q*p_A)- H_b(q)\Big\}\label{eq:BSCRegionGS}. \end{align} \subsection{Comparisons Between Code Constructions for PUFs}\label{subsec:CompareMethods} There are several existing code constructions proposed for the GS and CS models. Consider the three best methods: FCS for the CS model, and COFE and the polar code construction in \cite{IgnaPolar} for the GS model, to compare them with the WZ-coding constructions. Similar steps to the FCS are applied in the COFE, except that the secret key is a hashed version of $X^n$. The FCS achieves the single optimal point in the key-leakage region with the maximum secret-key rate $R_\text{s}^*=I(X;Y)$; the privacy-leakage rate is $R_\ell^* =H(X|Y)$. Similarly, the COFE achieves the same boundary point in the key-leakage region. This is, however, the only boundary point of the key-leakage regions that these methods can achieve. 
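For the comparisons in this subsection, the boundary of the binary region $\mathcal{R}_{\text{gs},\text{bin}}$ in (\ref{eq:BSCRegionGS}) can be evaluated numerically. A minimal sketch, with helper names of our own choosing:

```python
import numpy as np

def Hb(p):
    """Binary entropy in bits; clipping avoids log(0) at p = 0 or 1."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def star(q, p):
    """Binary convolution q * p = q(1-p) + (1-q)p."""
    return q * (1.0 - p) + (1.0 - q) * p

def gs_boundary(p_A, qs):
    """For each q: maximum key rate and minimum leakage (= storage)
    rate on the boundary of R_gs,bin."""
    Rs = 1.0 - Hb(star(qs, p_A))
    Rl = Hb(star(qs, p_A)) - Hb(qs)
    return Rs, Rl

Rs, Rl = gs_boundary(p_A=0.15, qs=np.linspace(0.0, 0.5, 501))
```

At $q=0$ this recovers the maximum secret-key rate point, with minimum leakage rate $H_b(p_A)\approx 0.610$ bits/source-bit for $p_A=0.15$.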
One can improve both methods by adding a VQ step: instead of $X^n$ we use its quantized version $X_q^n$ during enrollment. This asymptotically corresponds to summing the original helper data and an independent random variable $J^n\sim\text{Bern}^n(q)$ such that $W=X^n\xor C^n\xor J^n$ is the new helper data so that we create a virtual channel $P_{Y|X\xor J}$ and apply the FCS or COFE to this virtual channel. The modified FCS and COFE can achieve all points of the key-leakage region if we take a union of all rate pairs achieved over all $q\in[0, 0.5]$. However, the helper data have $n$ bits for both methods, and the resulting storage rate of $1$ bit/source-bit is not necessarily optimal. The polar code construction in \cite{IgnaPolar} requires less storage rate than the FCS and COFE. However, this approach improves only the storage rate and cannot achieve all points of the key-leakage-storage region. Furthermore, in \cite{IgnaPolar} some code designs assume that there is a ``private'' key shared only between the encoder and decoder, which is not realistic since a private key requires hardware protection against invasive attacks. If such a protection is possible, then there is no need to use an on-demand key reconstruction method like a PUF. The existing methods cannot, therefore, achieve all points of the key-leakage-storage region for a BSC, unlike the WZ-coding constructions described in \cite{bizimWZ} and illustrated with nested polar code designs below. In previous works, only the secret-key rates of the proposed codes are compared because the sum of the secret-key and privacy-leakage rates is one. This constraint means that increasing the key vs. leakage (or key vs. storage) rate ratio is equivalent to increasing the key rate. Instead, WZ-coding constructions are more flexible than the existing methods in terms of achievable rate tuples. Therefore, use the key vs. storage rate ratio as the metric to control the storage and privacy leakage. 
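The virtual-channel argument above can be checked empirically: masking the fuzzy-commitment helper data with $J^n\sim\text{Bern}^n(q)$ makes the channel seen by the decoder behave like a BSC($q*p_A$). A small simulation sketch (all names are ours; a uniformly random sequence stands in for the codeword $C^n$):

```python
import numpy as np

def virtual_channel_error_rate(n=200_000, q=0.1, p_A=0.15, seed=0):
    """Empirical crossover probability of the effective channel seen by
    the decoder of the VQ-modified FCS, i.e., of Y^n XOR W XOR C^n."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n, dtype=np.uint8)          # enrollment output X^n
    c = rng.integers(0, 2, n, dtype=np.uint8)          # stand-in codeword C^n
    j = (rng.random(n) < q).astype(np.uint8)           # masking noise J^n
    w = x ^ c ^ j                                      # stored helper data W
    y = x ^ (rng.random(n) < p_A).astype(np.uint8)     # noisy measurement Y^n
    return float(np.mean(y ^ w ^ c))                   # approaches q * p_A

rate = virtual_channel_error_rate()   # close to 0.1*(1-0.15) + 0.9*0.15 = 0.22
```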
\section{Optimal Code Constructions with Polar Codes}\label{sec:codeproposal} Polar codes \cite{Arikan} have a low encoding/decoding complexity, asymptotic optimality for various information-theoretic problems, and good finite-length performance if a list decoder is used in combination with an outer code. Furthermore, they have a structure that allows a simple nested code design and they can be used for WZ coding \cite{RudigerPolarExtended}. Polar codes rely on the \textit{channel polarization} phenomenon, where a channel is converted into polarized bit channels by a polar transform. This transform converts an input sequence $U^n$ with frozen and unfrozen bits to a codeword of the same length $n$. A polar decoder processes a noisy observation of the codeword together with the frozen bits to estimate ${U}^n$. Let $\mathcal{C}(n,\mathcal{F},G^{|\mathcal{F}|})$ denote a polar code of block length $n$, where $\mathcal{F}$ is the set of indices of the frozen bits and $G^{|\mathcal{F}|}$ is the sequence of frozen bits. In the following, we use the nested polar code construction proposed in \cite{RudigerPolarExtended} for WZ coding. \subsection{Polar Code Construction for the GS Model}\label{subsec:polarcons} Consider two polar codes $\mathcal{C}_1(n,\mathcal{F}_1, V)$ and $\mathcal{C}(n,\mathcal{F}, \xoverline{V})$ with $\mathcal{F}=\mathcal{F}_1 \cup \mathcal{F}_w$ and $\xoverline{V}=[V, W]$, where $V$ has length $m_1$ and $W$ has length $m_2$ such that $m_1$ and $m_2$ satisfy \begin{align} \begin{split} &\frac{m_1}{n} = H_b(q)-\delta \end{split} \qquad\qquad\quad\text{ and }\quad \begin{split} &\frac{m_1+m_2}{n} = H_b(q*p_A)+\delta \end{split} \end{align} for some distortion $q\in[0,0.5]$ and $\delta>0$. The indices in $\mathcal{F}_1$ represent frozen channels with assigned values $V$ for both codes, and $\mathcal{C}$ has additional frozen channels with assigned values $W$ denoted by the set of indices $\mathcal{F}_w$, so the codes are nested.
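The two rate conditions above determine the frozen-set sizes. A minimal sketch computing $m_1$ and $m_2$ for given $n$, $q$, $p_A$, and $\delta$ (the function name and the rounding conventions are our assumptions):

```python
import math

def Hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def nested_code_sizes(n, q, p_A, delta):
    """Frozen-set sizes of the nested pair: C_1 (quantizer) has rate
    just above the rate-distortion limit 1 - Hb(q); C (channel code)
    has rate just below the capacity 1 - Hb(q*p_A) of the BSC(q*p_A)."""
    q_star = q * (1.0 - p_A) + (1.0 - q) * p_A      # q * p_A
    m1 = math.ceil(n * (Hb(q) - delta))             # frozen bits V of C_1
    m12 = math.floor(n * (Hb(q_star) + delta))      # all frozen bits of C
    return m1, m12 - m1                             # (m_1, m_2)

m1, m2 = nested_code_sizes(n=1024, q=0.1, p_A=0.15, delta=0.01)
key_len = 1024 - m1 - m2                            # secret-key length
```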
The code $\mathcal{C}_1$ serves as a VQ with a desired distortion $q$ since its rate is greater than the lossy source coding capacity with average distortion $q$. The code $\mathcal{C}$ serves as the error-correction code for a BSC($q*p_A$) since its rate is less than channel capacity of this channel. The idea is to obtain $W$ during enrollment and store it as public helper data. For reconstruction, $(W,V,Y^n)$ are used by the decoder to estimate the secret key $S$ of length $n-m_1-m_2$. Figure~\ref{fig:blockdig} shows the block diagram of this construction. Suppose $V$ is the all-zero vector so that no additional storage is necessary. This choice has no effect on the average distortion $E[q]$ between $X^n$ and $X_q^n$ defined below; see \cite[Lemma 10]{RudigerPolarExtended}. \begin{figure} \centering \input{./blockdiagram2222222222222} \caption{Second WZ-coding construction for the GS model.} \label{fig:blockdig} \end{figure} \textit{Enrollment}: The uniform binary sequence $X^n$ generated by a PUF during enrollment is treated as the noisy observation of a BSC$(q)$. $X^n$ is quantized by a polar decoder of $\mathcal{C}_1$. Extract from the decoder output $U^n$ the bits at indices $\mathcal{F}_w$ and store them as the helper data $W$. The bits at the indices $j\in \{1,2,\ldots,n\}\setminus \mathcal{F}$ are used as the secret key. Note that applying the polar transform to $U^n$ generates $X_q^n$, which is a distorted version of $X^n$. The distortion between $X^n$ and $X_q^n$ is modeled as a BSC($q$) because the error sequence $E_{q}^n=X^n\xor X_q^n$ resembles an i.i.d. sequence $\sim\text{Bern}^n(q)$ when $n\rightarrow \infty$ \cite[Lemma 11]{RudigerPolarExtended}. \textit{Reconstruction}: During reconstruction, the polar decoder of $\mathcal{C}$ observes the binary sequence $Y^n$, which is a noisy measurement of $X^n$ through a BSC$(p_A)$. The frozen bits $\xoverline{V}=[V, W]$ at indices $\mathcal{F}$ are given to the polar decoder. 
The output $\widehat{U}^n$ of the polar decoder is the estimate of $U^n$ and contains the estimate $\widehat{S}$ of the secret key at the unfrozen indices of $\mathcal{C}$, i.e., $j\in \{1,2,\ldots,n\}\setminus \mathcal{F}$. We next summarise a method to design practical nested polar codes for the GS model. \textit{Construction of $\mathcal{C}$ and $\mathcal{C}_1$}: Since $\mathcal{C}\subseteq\mathcal{C}_1$ are nested codes, they must be constructed jointly. $\mathcal{F}$ and $\mathcal{F}_{1}$ should be chosen such that the reliability and security constraints are satisfied. For a given secret key size $n-m_1-m_2$, block length $n$, crossover probability $p_A$, and target block-error probability $P_B=\Pr[S\ne\widehat{S}]$, consider the following nested polar code design procedure \cite{bizimWZ}. \begin{enumerate} \item Construct a polar code of rate $(n\!-\!m_1\!-\!m_2)/n$ and use it as the code $\mathcal{C}$, i.e., define the set of frozen indices $\mathcal{F}$. \item Evaluate the error correction performance of $\mathcal{C}$ with a decoder for a BSC over a range of crossover probabilities to obtain the crossover probability $p_c$, resulting in a target block-error probability of $P_B$. Using $p_c=E[q]*p_A$, we obtain the target distortion $E[q]=(p_c-p_A)/(1-2p_A)$ averaged over a large number of realizations of $X^n$. \item Find an $\mathcal{F}_1\subset \mathcal{F}$ that results in an average distortion of $E[q]$ with a minimum possible amount of helper data. Use $\mathcal{F}_1$ as the frozen set of $\mathcal{C}_1$. \end{enumerate} Step 1 is a conventional channel code design task, similar to the codes designed for the transform-coding algorithm above, and step 2 is applied by Monte-Carlo simulations. For step 3, we start with $\mathcal{F}_1^{'} = \mathcal{F}$ and compute the resulting average distortion $E[q']$ via Monte-Carlo simulations. 
If $E[q']$ is not less than $E[q]$, remove elements from $\mathcal{F}_1^{'}$ according to the reliabilities of the polarized bit channels and repeat the procedure until we obtain the desired average distortion $E[q]$. The distortion level introduced by the VQ is an additional degree of freedom in choosing the code design parameters. For instance, different values of $P_B$ can be targeted with the same code by changing the distortion level. Alternatively, devices with different $p_A$ values can be supported by using the same code. This additional degree of freedom makes the mentioned code design suitable for a wide range of applications. \subsection{Designed Polar Codes for the GS Model} Consider, e.g., the GS model where $S$ is used in the AES and $\log|\mathcal{S}|\!=\!n\!-\!m_1\!-\!m_2\!=\!128$ bits, as considered above. If we use PUFs in an FPGA as the randomness source, we must satisfy a block-error probability $P_B$ of at most $10^{-6}$. Consider a BSC $P_{Y|X}$ with crossover probability $p_A=0.15$, which is a common value for SRAM PUFs under ideal environmental conditions \cite{SoftHelper} and for RO PUFs under varying environmental conditions \cite{bizimtemperature}. Consider nested polar codes for these parameters to illustrate that one can achieve better key-leakage-storage rate tuples than previously proposed codes. \emph{Code 1}: Consider $n=1024$ bits and recall that $n-m_1-m_2=128$ bits, $P_B=10^{-6}$, and $p_A=0.15$. Polar successive cancellation list (SCL) decoders with list size $8$ are used as the VQ and channel decoder. First, design the code $\mathcal{C}$ of rate $128/1024$ and evaluate its performance with the SCL decoder for a BSC with a range of crossover probabilities. A block-error probability of $10^{-6}$ is observed at a crossover probability of $p_c=0.1819$. Since $p_A=0.15$, this corresponds to an average distortion of $E[q]=0.0456$ and the target average distortion is obtained at $n-m_1=778$ bits. 
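Step 2's inversion of the binary convolution can be written out explicitly; with Code 1's measured $p_c=0.1819$ and $p_A=0.15$ it reproduces $E[q]\approx 0.0456$ (the function name is ours):

```python
def target_distortion(p_c, p_A):
    """Invert p_c = E[q] * p_A, i.e. p_c = p_A + E[q](1 - 2 p_A),
    to obtain the target average VQ distortion E[q] of step 2."""
    assert 0.0 <= p_A < 0.5, "crossover probability must be below 1/2"
    return (p_c - p_A) / (1.0 - 2.0 * p_A)

Eq = target_distortion(p_c=0.1819, p_A=0.15)   # ~0.0456 for Code 1
```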
Thus, $m_2=650$ bits of helper data suffice to obtain a block-error probability of $P_B=10^{-6}$. \emph{Code 2}: Consider the same parameters as in Code 1, except $n=2048$ bits. Apply the same steps as above. A crossover probability of $p_c=0.2682$ is required to obtain a block-error probability of $10^{-6}$, which gives an average distortion of $E[q]=0.1689$. The target average distortion is achieved with helper data of length $611$ bits. The error probability $P_B$ is calculated as an average over a large number of PUF realizations, i.e., over a large number of PUF devices with the same circuit design. To satisfy the block-error probability requirement for each PUF realization, one could consider using the maximum distortion instead of $E[q]$ in step 3 of the design procedure given above. This would increase the amount of helper data. One can guarantee for the considered parameters a block-error probability of at most $10^{-6}$ for $99.99\%$ of all realizations $x^n$ of $X^n$ by adding $32$ bits to the helper data for Code 1 and $33$ bits for Code 2. The numbers of extra helper data bits required are small since the variance of the distortion $q$ over all PUF realizations is small for the blocklengths considered. For comparisons, we consider the helper data sizes required to guarantee $P_B=10^{-6}$ for $99.99\%$ of all PUF realizations. \begin{figure*} \centering \setlength\figureheight{5.75cm} \setlength\figurewidth{16.4cm} \input{./RateComparisons.tikz} \caption{Storage-key rates for the GS model with $p_A=0.15$. The $(R_\text{w}^*,R_\text{s}^*)$ point is the best possible point achieved by SW-coding constructions, which lies on the dashed line representing $R_\text{w}+R_\text{s} = H(X)$. 
The block error probability satisfies $P_B \leq 10^{-6}$ and the key length is 128 bits for all code points.} \label{fig:codecomparisons} \end{figure*} \subsection{Code Comparisons} Figure~\ref{fig:codecomparisons} depicts the storage-key $(R_\text{w},R_\text{s})$ projection of the boundary points of the region $\mathcal{R}_{\text{gs},\text{bin}}$ for $p_A=0.15$. Furthermore, we show the point with the maximum secret-key rate $R_\text{s}^*$ and the minimum storage rate $R_\text{w}^*$ to achieve $R_\text{s}^*$. For the FCS and COFE, use the random coding union bound \cite[Thm. 16]{Polyanskiy} to confirm that the plotted rate pairs are achievable for a secret-key length of $128$ bits, an error probability of $P_B=10^{-6}$, and blocklengths of $n=1024$ and $n=2048$. These rate pairs are shown in Figure~\ref{fig:codecomparisons} to the right of the dashed line representing $R_\text{w}+R_\text{s}=1$ bit/source-bit. Similarly, the rate pairs achieved by the previous polar code design in \cite{IgnaPolar}, and Codes 1 and 2 are shown in Figure~\ref{fig:codecomparisons}. Storage rates of the FCS and COFE are 1 bit/source-bit, which is suboptimal. The previous polar code construction in \cite{IgnaPolar} achieves a rate point with $R_\text{s}+R_\text{w} =1$ bit/source-bit, which is expected since this is a SW-coding construction. The previous polar code construction improves on the rate pairs achieved by the FCS and COFE in terms of the key vs. storage ratio. Nested polar codes achieve the key-leakage-storage rates of approximately $(0.125, 0.666, 0.666)$ bits/source-bit by Code 1 and $(0.063, 0.315, 0.315)$ bits/source-bit by Code 2, projections of which are depicted in Figure~\ref{fig:codecomparisons}. These rates are significantly better than all previous code constructions for the same parameters and without any private key assumption. 
Designed nested polar codes increase the ratio $R_\text{s}/R_\text{w}$ from approximately $0.188$ for Code 1 to $0.199$ for Code 2, suggesting that larger blocklengths yield better ratios. Code 2 achieves privacy-leakage and storage rates of $0.315$ bits/source-bit, which are significantly less than the minimum privacy-leakage and storage rates $R^*_{\text{w}}=R^*_\ell=H_b(p_A)\approx 0.610$ bits/source-bit that previous methods can achieve asymptotically; these rates therefore cannot be achieved by previous methods without applying the method of \textit{time sharing}. Apply the sphere-packing bound \cite[Eq. (5.8.19)]{gallagerbook} to upper bound the key vs. storage rate ratio that SW-coding constructions can achieve at the maximum secret-key rate point. Consider $p_A=0.15$, $n=1024$, and $P_B=10^{-6}$, for which the sphere-packing bound requires that the rate of the code $\mathcal{C}$ satisfies $R_\mathcal{C} \leq 0.273$. Assuming the key rate takes its maximal value $R_\text{s} = R_\mathcal{C}$ and the storage rate its minimal value $R_\text{w} = 1 - R_\mathcal{C}$, we arrive at $R_\text{s}/R_\text{w} \le 0.375$. A similar calculation for $n=2048$ yields $R_\text{s}/R_\text{w} \le 0.437$. These results indicate that there are still gaps between the maximum key vs. storage rate ratios achievable by WZ-coding constructions and the ratios achieved by Codes 1 and 2. The gaps can be reduced by using different nested polar codes that improve the minimum-distance properties, as in \cite{ourPeterSubcode}, or by using nested algebraic codes for which design methods are available in the literature, as in \cite{bizimThomas}. We remark again that such optimality-seeking approaches, e.g., based on information-theoretic security, provide the right insights into the best solutions for the digital era's security and privacy problems.
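The ratio bound above is elementary arithmetic once the sphere-packing constraint $R_\mathcal{C}\leq 0.273$ is in hand; a one-line check (the function name is ours):

```python
def max_key_storage_ratio(R_C):
    """Upper bound on R_s / R_w for SW-coding constructions: the key
    rate is at most R_C and the storage rate is at least 1 - R_C."""
    return R_C / (1.0 - R_C)

ratio_1024 = max_key_storage_ratio(0.273)   # ~0.375 for n = 1024
```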
\section{Discussions and Open Problems}\label{sec:DiscussionsandOpenProblems} \begin{itemize} \item We want to use low-complexity scalar quantizers after transformation without extra secrecy leakage; however, the decorrelation efficiency metric does not fully represent the dependency between transform coefficients. What is the right metric to use for choosing the transform used in combination with scalar quantizers? Is mutual information between transform coefficients an appropriate metric for this purpose? The choice of the transform should also depend on a reliability metric such as SNR-packing efficiency so that the transform, quantizers, and the error-correction codes can be designed jointly. What is the right reliability metric for this purpose? \item It is shown in \cite{bizimtemperature} that the ambient temperature and supply voltage affect the RO outputs deterministically rather than adding extra random noise, which was assumed in the RO PUF literature. What are the right output models for common PUF types, i.e., what are the deterministic and random components, and how are they related? \item SRAM PUFs are already used in products. In the literature there is no extensive analysis of the output correlations between different SRAMs in the same device possibly because SRAM outputs are binary and it is difficult to model the correlation between binary symbols. However, SRAM outputs are modeled in \cite{SoftHelper} as binary-quantized sums of independent Gaussian random variables. Is it possible to determine or approximate the correlations between the Gaussian random variables of different SRAMs? If yes, this might be useful for an attacker to obtain information about the secret sequence generated from the SRAM PUF output, which causes extra secrecy leakage. \item The transform-coding approach discussed above results in reliability guarantees for the random-output RO arrays, which considers an average over all ROs manufactured. 
The worst-case scenario is when the transform-coefficient value lies on a quantization boundary, for which the secret-key capacity is $0$ bit. If one replaces the average reliability metric used above by a lower bound on the reliability of each RO, i.e., a worst-case scenario metric, how would this change the rate of the error-correction code used? For a fixed code, what should be the optimal bound on the reliability of each RO to maximize the yield, i.e., the percentage of ROs among all manufactured ROs for which the worst-case reliability guarantee is satisfied? \item Are the WZ problem and the GS model \textit{operationally equivalent} (cf. operational duality)? \item Linear block-code constructions discussed above are for uniformly-distributed PUF outputs. Can one construct other (random) linear block codes that are asymptotically optimal for non-uniform PUF outputs? Is it necessary to use an extension of the COFE for this purpose? \item Consider the nested polar code design procedure given above. It is not possible to construct a code with this procedure for $n\leq 512$ since $q*p_A$ is an increasing function of $q$ for any $q\in [0, 0.5]$. Is a nested polar code construction possible for $n=512$ if one improves the code design procedure and the decoder? \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction} \renewcommand{\labelitemi}{$\bullet$} \label{intro} Numerous practical applications include two or more sub-problems, many of which can be summarized as combinatorial optimization problems, among the most challenging classes of problems. A combinatorial optimization problem usually involves traversing a search space in order to find an optimal, or approximately optimal, solution from a bounded solution set while maximizing (or minimizing) an objective function. Many interdependent components make such problems difficult to solve: solving each component optimally does not ensure obtaining an optimal solution to the overall problem. This type of problem is prevalent in supply chain management (e.g., distribution, scheduling, loading, and transportation)~\cite{RN31,RN61}, vehicle routing, logistics, and related areas. One stated reason why some optimization problems are difficult to tackle is that the interdependency among components in operational/dynamic problems plays a key role in the complexity of the problems~\cite{michalewicz2012quo}.\par In order to demonstrate the complexity that arises from interdependency in multi-component problems, a benchmark problem called the Traveling Thief Problem (TTP) was introduced by Bonyadi et al.~\cite{RN23} in 2013. The TTP combines two combinatorial problems, namely the Travelling Salesman Problem (TSP) and the 0--1 Knapsack Problem (KP). This problem can be illustrated in the following way.\par A thief makes a cyclic journey through $n$ cities and, using a picking plan, picks $m$ items into a rented knapsack with restrained capacity. As items are picked up at each subsequent city to fill the knapsack, the total item profit and the weight of the knapsack increase. The heavier the knapsack gets, the slower the thief becomes, which increases the entire travelling time and hence the renting cost.
The overall goal of the TTP is to simultaneously maximize the total profit of the picked items and minimize the renting cost. The TTP can be considered representative of many real-world logistics problems\cite{RN20}.\par From the above statement, it is clear that the two components of the TTP interact with each other. When the weight of the knapsack increases, it reduces the speed of the thief, thereby increasing the rental time of the knapsack. When the tour is changed, the order of the items in the corresponding cities also changes. This interdependent relationship between the two components makes the problem complicated.\par Since the TTP was introduced by Bonyadi et al.\cite{RN23} in 2013 as a benchmark for multi-component, interdependent problems, many scholars have successively proposed algorithms to solve it. Polyakovskiy et al.\cite{RN17} were the first to create many benchmark instances and to propose several heuristic algorithms for the TTP. In their paper, an initial cyclic tour is generated for the TSP component using the Lin-Kernighan heuristic\cite{RN75}, and items are then selected under this fixed route until the best solution found is obtained. In their first method for solving the TTP, named Simple Heuristic (SH), items are selected based on a score value. They also proposed iterative heuristics called Random Local Search (RLS) and (1+1) EA, based on flipping the picked items with a specific probability.\par Bonyadi et al.\cite{RN22} proposed a heuristic method, called CoSolver, in which the TTP is decomposed into its sub-problems (TSP and KP); the two sub-problems are processed while maintaining communication between them, and the partial solutions are then composed into the final solution.
They also proposed an approach called density-based heuristics (DH), in which a tour is generated for the TSP and a solution for the KP is then generated on that fixed tour.\par Mei et al.\cite{RN48} introduced two evolutionary heuristic approaches for solving the TTP. The first is Cooperative Co-evolution (CC), which solves each sub-problem independently without considering the dependencies. The second is a Memetic Algorithm (MA) that solves the problem as a whole and considers the dependencies between the sub-problems. An efficient Memetic Algorithm with a two-stage local search, named MATLS, was proposed by Mei et al.\cite{RN20} to solve large-scale TTP instances with several complexity reduction methods.\par For the KP component of the TTP, an optimized picking-plan procedure called PackIterative was proposed by Faulkner et al.\cite{RN19} To avoid bias toward the KP component, they proposed an insertion operator that iteratively optimizes the tour for a fixed picking plan generated by the Lin-Kernighan heuristic\cite{RN75}. Several simple iterative heuristics (S1-S5) and some complex heuristics were proposed; according to their performance analysis, the simple iterative heuristic named S5 performs best on average among all the approaches. \par Yafrani and Ahiod \cite{RN21} introduced two heuristic algorithms, comparing two traditional types of search heuristics: population-based and single-solution. The first method is a Memetic Algorithm, called MA2B, that uses the 2-OPT operator and the bit-flip operator and relies on a population-based genetic algorithm. The other is a single-solution heuristic, named CS2SA, which applies a 2-OPT steepest-ascent hill-climbing heuristic and an adapted Simulated Annealing (SA) for efficient item picking.
These two algorithms are more competitive than MATLS and S5 on many TTP instances.\par Wagner \cite{wagner2016stealing} studied a swarm intelligence approach based on ACO (Ant Colony Optimization), with the idea of focusing on short TSP tours and good TTP tours for solving the TSP component of the TTP. This method is effective and computationally efficient on small TTP instances; however, its performance deteriorates significantly on many large instances. Neumann et al.\cite{neumann2018fully} investigated the underlying non-linear Packing While Traveling Problem (PWTP) of the TTP, where items are selected for a fixed route. They gave an exact dynamic programming approach and a fully polynomial time approximation scheme (FPTAS) for maximizing the benefit.\par Yafrani and Ahiod \cite{RN10} proposed two simple iterative neighborhood algorithms based on local search. The first, named JNB (Joint N1-BF), is a neighborhood-based heuristic that combines the N1 neighborhood (swapping two adjacent cities) of the TSP with the one-bit-flip of the KP. The second, named J2B (Joint 2OPT-BF), combines the 2-OPT heuristic with the one-bit-flip heuristic. Martins et al.\cite{martins2017hseda} introduced an approach named Heuristic Selection based on Estimation of Distribution Algorithm (HSEDA). This method applies the EDA probabilistic model with an approximation function to find better heuristics for solving the TTP. Martins et al. confirmed that this approach outperforms the other algorithms on most of the medium-sized TTP instances.\par Wu et al.\cite{wu2017exact} proposed three exact algorithms and a hybrid approach for solving the TTP: Dynamic Programming, Branch and Bound search, and Constraint Programming. El Yafrani et al.\cite{RN21} proposed an approach based on CoSolver with 2-OPT and Simulated Annealing (CS2SA); CS2SA* and CS2SA-R were later introduced based on CS2SA.
CS2SA* is an implementation of CS2SA with instance-based parameter tuning, while CS2SA-R uses random restarts when no improvement on the best solution found so far is obtained. \par Alharbi et al.\cite{alharbi2018design} introduced a modified Artificial Bee Colony (ABC) algorithm based on swarm intelligence for solving the TTP in an interdependent manner; it is efficient on mid-sized TTP instances compared to the state-of-the-art approaches. Namazi et al.\cite{RN28,RN15} proposed an extended and modified form of the reversing heuristic that considers both the TSP and KP components concurrently: less profitable items selected in cities at the beginning of the reversed segment are substituted by equally or more profitable items not yet selected in the later cities. Maity et al.\cite{maity2020efficient} introduced a scoring value, calculated by their proposed formulation, to pick items for a fixed tour generated by the Chained Lin-Kernighan heuristic.\par In this paper, we mainly focus on the KP component of the TTP, because we believe the KP component is more critical for optimization than the TSP component. A near-optimal tour (the TSP component of the TTP) is generated by the Lin–Kernighan Heuristic (LKH). We chiefly discuss whether the weight of an item is a greater determinant of the final profit than its other attributes (location in the tour and item value). Accordingly, a formula for calculating the value of an item, measuring its impact on the final profit, is proposed. We believe that the final profit is related not only to the value, weight, and location of the items, but also to the order in which the items are picked up; the effect of this selection order on the final profit is also discussed in this article.\par The rest of this paper is organized as follows.
In Section 2, background on the TTP and some common heuristics is introduced. In Section 3, the proposed approach is described. The proposed algorithms are applied to a number of TTP instances, and the experimental results are reported and discussed in Section 4. Finally, Section 5 concludes the paper and outlines some future directions. \section{Background} In this section, we present a brief background introduction to and formulation of the TTP. Some common heuristics for the TSP and KP are also briefly revisited.\par \subsection{The Travelling Thief Problem} The Travelling Thief Problem is a combination of two well-known benchmark problems, namely the Travelling Salesman Problem (TSP) and the Knapsack Problem (KP). In the TTP, we consider $n$ cities and the associated symmetric distance matrix $\{d_{i,i'}\}$, where $d_{i,i'} = d_{i',i}$ denotes the distance between cities $i$ and $i'$ ($i,i' \in \{1,\cdots,n\}$, $i\neq i'$). There are $m$ items scattered over these cities. Each item $j$ ($j \in \{1,\cdots,m\}$) is located at city $l_j$ and has a profit $p_j >0$ and a weight $w_j >0$. A thief starts from the first city, visits each city exactly once, and picks up a subset of the items available in the cities. We suppose each item is available in only one city, and we write $A_j \in \{1,\cdots,n\}$ for the entry of the availability vector giving the city where item $j$ is located. The cyclic tour is described by a permutation of the $n$ cities. Given a tour $c$, we write $c_k = i$ to mean that $i$ is the $k$-th city of the tour $c$, and $c(i) = k$ to mean that city $i$ occupies position $k$ in the tour $c$. A knapsack with maximum weight capacity $W$ and renting rate $R$ per time unit is rented by the thief to carry the picked items. $v_{min}$ (attained when the knapsack is full) and $v_{max}$ (attained when the knapsack is empty) are the minimum and maximum possible velocities, respectively. The total weight of the items in the knapsack must not exceed the capacity $W$.
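For concreteness, the notation above can be collected in a small container; this is an illustrative sketch (the class and function names are ours, not part of the original formulation):

```python
from dataclasses import dataclass

@dataclass
class TTPInstance:
    """Container for the TTP notation above (field names are illustrative)."""
    dist: list        # dist[i][i']: symmetric distance between cities i and i'
    profit: list      # profit[j] = p_j of item j
    weight: list      # weight[j] = w_j of item j
    city_of: list     # city_of[j] = A_j, the city where item j is located
    W: float          # knapsack capacity
    R: float          # renting rate per time unit
    v_min: float      # velocity with a full knapsack
    v_max: float      # velocity with an empty knapsack

def position_in_tour(c, i):
    """c(i): the (1-based) position k of city i in the tour c."""
    return c.index(i) + 1
```

For instance, for the tour $c = (1, 3, 2)$ we have $c(2) = 3$, i.e. `position_in_tour([1, 3, 2], 2)` returns `3`.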
The speed of the thief varies with the weight of the knapsack: the heavier the knapsack gets, the slower the thief becomes along the tour. \par A solution of the TTP is represented as follows: \begin{itemize} \item The tour $c = (c_1,c_2,\cdots,c_n)$ is a vector containing a permutation of the cities. \item The picking plan $z = (z_1,z_2,\cdots,z_m)$ is a binary vector in which item $j$ is picked if $z_j = 1$ and not picked if $z_j = 0$. \end{itemize} The interdependence of the two sub-problems of the TTP is reflected in the dependence of the speed of the thief on the total weight of the knapsack. The total weight of the items picked in city $i$ is given in equation \ref{eq:weightInCity}, and the total weight of the items picked from the starting city up to the $k$-th city of the cyclic tour $c$ is given in equation \ref{eq:weightAll}. The velocity of the thief decreases linearly as the total weight of the knapsack increases. We write $v_{c,z}(k)$ for the velocity at city $c_k$, defined in equation \ref{eq:speed}, with $C = \frac{v_{max} - v_{min}}{W}$. \begin{equation} W_z(i) = \sum_{j \,:\, l_j=i} w_j z_j \label{eq:weightInCity} \end{equation} \begin{equation} W_{c,z}(k) = \sum_{k'=1}^{k} W_z(c_{k'}) \label{eq:weightAll} \end{equation} \begin{equation} v_{c,z}(k) = v_{max} - W_{c,z}(k) \times C\label{eq:speed} \end{equation} The goal of the TTP is to find a tour $c$ and a picking plan $z$ that maximise the total gain $G(c,z)$ defined in equation \ref{eq:Gain}; in other words, the goal is to maximise the total profit while minimising the total renting cost of the knapsack. The total weight of the picked items must not exceed the capacity of the knapsack.
\begin{equation} \max \quad G(c,z) = \sum_{j=1}^{m} p_j z_j - R \times T(c,z) \label{eq:Gain} \end{equation} \begin{equation} T(c,z) = T_{c,z}(n+1) = T_{c,z}(n) + \frac{d(c_n,c_1)}{v_{c,z}(n)} \end{equation} \begin{equation} T_{c,z}(k) = \sum_{k'=1}^{k-1} \frac{d(c_{k'},c_{k'+1})}{v_{c,z}(k')} \end{equation} \begin{equation} \text{s.t.} \quad \sum_{j=1}^{m} w_j z_j \le W \end{equation} \subsection{The TSP component} The Lin–Kernighan Heuristic (LKH) introduced by Lin and Kernighan \cite{RN75} is a generalization of the 2-OPT search algorithm for solving the TSP. This algorithm and the Chained Lin–Kernighan heuristic (CLKH) are often used to optimize TSP instances and to initialize the TSP component of the TTP. To solve the TSP component of the TTP, 2-OPT (a segment-reversing heuristic) is often used to modify the tour $c$. Given two positions $i$ and $j$ $(1 < i < j \le n)$ on a tour, the order of the visited cities between these two positions is reversed to obtain a new tour. The 2-OPT function is defined as follows. \begin{gather} c'(j-k,i+k) = 2OPT(c(i+k,j-k)) \label{eq:2opt} \\ s.t. : 0 < i <j \le n ; \quad 0 \le k \le j-i \notag \end{gather} The Delaunay triangulation method \cite{delaunay1934sphere} is used as a candidate generator for the 2-OPT heuristic. The candidates generated by the Delaunay triangulation reduce the time complexity without significantly reducing the quality of the solution. Besides, tracking time and weight information at each city of a given tour in the TTP also reduces the total time budget. \subsection{The KP component} For solving the KP component, the bit-flip operator introduced by Faulkner et al. \cite{RN19} is often used to optimize the packing plan $z$. The bit-flip operator works iteratively by flipping bits in the picking plan $z$.
Given a picking plan $z$ and a selected item $j$, the picking state $z_j$ is flipped from 0 to 1 or vice versa to obtain a new picking plan $z'$. If the performance improves after the bit-flip operation, the new state is kept; otherwise the bit-flip operation continues until the termination condition is reached. \section{Proposed approach} This section describes our idea for optimizing the TTP, illustrates it with examples, and proposes an algorithm for solving the TTP.\par In the definition of the TTP given in Section 2, a tour and an item picking plan are required. First, a TSP search heuristic provides a TTP solution whose picking plan is still empty. Then, items are inserted into this solution to increase the profit. It is common to employ a suitable measure over the elements of a problem to guide such decisions, so a scoring function is introduced to decide which items should be picked. This function is commonly based on the profit and weight of the item and on the distance from the city where the item is picked to the end city; it generally takes the form $ScoreValue_{i,k}( c ) = \frac{p_k}{w_k \times \sum d_{i,1}}$ (or a similar one), where $c$ is the tour, $p_k$ is the profit and $w_k$ the weight of item $k$ in city $i$, and $\sum d_{i,1}$ is the distance from city $i$, where item $k$ is picked, to the end of the tour, with $i \in (2 ,\cdots,n)$ and $k \in (1, \cdots, m)$. The higher the score of an item, the more likely it is to be picked up. However, consider an item that has a high profit but is very heavy and lies close to the start city of the tour: its score value may still be relatively high, so under the rule that higher-scoring items are picked first, this item will be picked up. Carrying it may then prevent other items located closer to the end city, possibly with high score values, from being picked up.
In this case, picking such an item may slow down the thief, cost more time, and reduce the total profit.\par Without considering the other picked items, the change of profit caused by inserting an item $k$ in city $i$ is $\Delta p'_{i,k} = p_k - R \times \frac{\sum d_{i,1}}{v_{max} - w_k \times C} $. However, the change of the overall profit caused by a single item is closely related to the items selected before it: the actual change of profit caused by inserting an item $k$ in city $i$ is $\Delta p_{i,k} = p_k - R \times \frac{\sum d_{i,1}}{v_{max} - W_{c,z}(k) \times C} $. Even when $\Delta p'_{i,k}$ is positive ($\Delta p'_{i,k} > 0 $), the accumulated weight of the items picked up before may make $\Delta p_{i,k}$ negative ($\Delta p_{i,k} < 0$). This means that the impact of a single item on the overall profit needs to take into account the cumulative effect of the previously picked items.\par Moreover, owing to this cumulative weight effect, the order in which items are picked up must also be taken into account. In the following, we use a function on sequences of numbers or sets called \emph{Reverse-search}. For a given sequence of $n$ elements $S = (s_1,s_2,\cdots,s_n)$, the \emph{Reverse-search} function is defined as follows. \begin{gather} S'(s_{n}, \cdots, s_{1}) = Rev(S(s_1,\cdots,s_n)) \label{eq:Rev} \end{gather} The function defined in equation \ref{eq:Rev} is similar to the 2-OPT function; the difference is that the elements in the 2-OPT function are numbers, whereas the elements here may be numbers or sets. For the TTP, the elements of this function are the sets of item attributes (weight, value, score value, etc.) of each city.
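This sign flip can be sketched numerically; the function name and sample numbers below are our own, chosen to match the setting $v_{max}=1$, $v_{min}=0.1$, $W=10$, so that $C = 0.09$:

```python
def marginal_profit(p_k, dist_to_end, R, v_max, C, W_acc):
    """Change of profit from carrying item k over the remaining distance,
    given the total weight W_acc in the knapsack once the item is picked."""
    return p_k - R * dist_to_end / (v_max - W_acc * C)

C = (1.0 - 0.1) / 10          # C = (v_max - v_min) / W
# In isolation (only the item's own weight of 2 is carried) the item pays off:
isolated = marginal_profit(20, 10, 1.0, 1.0, C, W_acc=2)   # positive
# With 6 more weight units already accumulated, the same item loses money:
loaded = marginal_profit(20, 10, 1.0, 1.0, C, W_acc=8)     # negative
```

The item itself is unchanged; only the accumulated weight $W_{c,z}(k)$ differs, which is exactly why the picking order matters.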
\par As an illustration, consider the simple TTP instance shown in Figure \ref{fig:TTP_example}, with $n = 5$ cities and $m = 4$ items. Each city except the first is assigned a set of items (there are no items in the starting city), and the nodes represent the cities. For example, node 2 is associated with the item of profit $p_2 = 101$ and weight $w_2 = 10$. Suppose that the capacity of the knapsack is $W = 10$, the renting rate $R = 1$, the maximum speed $v_{max} = 1$ and the minimum speed $v_{min} = 0.1$. Furthermore, assume that an interim solution has the tour $c = [1, 2, 3, 4, 5]$ and the picking plan $z = [1, 0, 0, 0]$. (If the item picking plan is based on the score function mentioned above, $ScoreValue_{i,k}( c ) = \frac{p_k}{w_k \times \sum d_{i,1}}$, then the score values of the items are $s_1 = 1.01$, $s_2 = 0.8$, $s_3 = 1$, and $s_4 = 1$; items are picked following the principle that high-scoring items are picked first while the current total weight $W_c$ of the picked items must not exceed the maximum capacity $W$.) Since no item is picked up on the tour segment from city 1 to city 2, the thief travels at the maximum velocity $v_{max} = 1$. At city 2, item 2 is picked, which makes the current speed $v_c = 0.1$ and fills the knapsack. The objective value is then $G(c,z) = 101 - 1 \times (1 + \frac{10}{0.1}) = 0$. Now assume that the tour is fixed and the picking plan is $z' = (0, 1, 1, 1)$. The travel time from city 1 to city 3 is $1+5 = 6$. At city 3, item 2 is picked, which makes $v_c = 0.82$ and $W_c = 1$. Thus, the travel time from city 3 to city 4 is $\frac{3}{0.82} = 3.66$. At city 4, item 3 is picked, which makes $v_c = 0.46$ and $W_c = 6$. The travel time from city 4 to city 5 is $\frac{1}{0.46} = 2.17$. At city 5, item 4 is picked, which makes $v_c = 0.28$ and $W_c = 8$. The travel time from city 5 to city 1 is $\frac{1}{0.28} = 3.57$.
Therefore, $T (c, z') = 6 + 3.66 + 2.17 + 3.57 = 15.40$, and the objective value is $G(c, z') = 18 - 1 \times 15.40 = 2.60$.\par \begin{figure}[h] \centering \includegraphics[scale=0.75]{TTP_example.eps} \caption{An example TTP instance} \label{fig:TTP_example} \end{figure} \subsection{EW formula} Based on the above observations, we propose a new item-selection approach that chooses potential items in reverse order according to a specific formula. Our motivation for this approach is twofold: (i) due to the cumulative effect, the weight of an item has a greater impact on the final profit than its other attributes (value, location, etc.); (ii) prioritizing the selection of high-value items near the end of the travel route can maximize the final profit.\par We propose a formula that expands the effect of the item weight (the EW formula). A new scoring function based on the location, weight, and profit of the item is introduced to generate a score for the item $k$ placed in city $i$ as follows: \begin{equation} Score_{i,k}( c ) = \frac{p_k}{w_k^{\alpha} \times \sum d_{i,1}} \label{eq:myScore} \end{equation} where $\sum d_{i,1}$ is the distance from city $i$ to the end of the given tour $c$, $p_k$ is the profit and $w_k$ the weight of item $k$. The exponent applied to the weight of an item controls its impact on the score value. Our preliminary study shows that keeping this exponent on the weight of an item yields better objective values on large-scale instances. To find the best-performing value of $\alpha$, we performed dozens of experimental runs and compared the resulting objective values.
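A minimal sketch of the EW score of equation \ref{eq:myScore} follows; the function name and the two sample items are illustrative, not taken from the benchmark instances:

```python
def ew_score(p_k, w_k, dist_to_end, alpha=1.5):
    """EW score: p_k / (w_k**alpha * d_{i,1}); alpha > 1 amplifies
    the penalty on heavy items relative to the plain p/(w*d) score."""
    return p_k / (w_k ** alpha * dist_to_end)

# Two items with the same profit/weight ratio (p/w = 5) and the same
# distance to the end of the tour:
light = ew_score(10, 2, 5)    # weight 2
heavy = ew_score(50, 10, 5)   # weight 10
# With alpha = 1 both would score identically; with alpha = 1.5 the
# lighter item scores strictly higher, which is the intended effect.
```

This illustrates why $\alpha > 1$ matters: the exponent breaks ties between items of equal profit density in favour of the lighter one.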
The best objective value is found at $\alpha = 1.5$ (in particular, $\alpha > 1$).\par \begin{figure}[h] \centering \includegraphics[scale=0.55]{reversePic.eps} \caption{Illustrative diagram of the proposed method} \label{fig:reversePic} \end{figure} \subsection{Reverse order searching approach} The construction process starts by calculating $Score_{i,k}$ for each item according to the formula in equation \ref{eq:myScore}, and the items are sorted in non-increasing order of $Score_{i,k}$. We discard all items with $Score_{i,k} < 0$ and keep the items with $Score_{i,k} > 0$; suppose there are $l$ such items ($l \le m$). We denote the ratio of the number of kept items $l$ to the total number of items $m$ by $r$ ($r = \frac{l}{m}$), and the average score of all items with $Score_{i,k} > 0$ by $AVG_{score}$: \begin{equation} AVG_{score}( c ) = \frac{\sum_{i=1}^{l}Score_{i,k}}{l} \label{eq:avgScore} \end{equation} An item $k$ with $Score_{i,k} > AVG_{score}$ is called a potential item. In city $i$, the potential items are denoted by the set $s_i$, which may contain zero, one or more items. The high-value items are picked up in reverse order along the given tour: if inserting item $k$ does not decrease the objective value and the item still fits into the knapsack, then item $k$ is picked; otherwise the next item is processed, and so on. Over the entire route, we mark the picked items as $Above\_AVG(s_1, \cdots, s_n)$. Items are picked up in reverse order until the knapsack is filled to a fraction $r$ of its capacity. We denote the proposed approach RWS (Based on Item Selection Weight and Reverse Order Allocation). To help readers follow our algorithm, an illustrative diagram of the proposed method is shown in Figure \ref{fig:reversePic}.
In the entire travel tour, high-value and low-weight items in cities near the end (green item in the picture) are more likely to be selected than those in cities near the beginning (red item in the picture). \par \begin{algorithm}[h] \caption{ Algorithm framework} \label{alg:frame} \begin{algorithmic}[1] \State $(c^*, z^*) \gets \emptyset$ \{best\ solution\} \State Set the current picking plan $ z = \emptyset$ and current weight $W_c =0$ \State Set current tour $ c = \emptyset $, and calculate $G(c,z)$ \While{not global timeout} \State $c \gets$ $LKTour()$ \State $z \gets InitPickingPlan(c)$ \State $(c,z) \gets $ TSPSolver$(c, z)$ \State $(c,z) \gets $ KPSolver$(c, z)$ \If{$G(c,z) > G(c^*,z^*)$} \State $ (c^*,z^*) \gets (c,z)$ \EndIf \EndWhile \State \Return{$(c^*,z^*)$} \label{code:recentEnd} \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \caption{ Initial Picking Plan} \label{alg:init} \begin{algorithmic}[1] \State Compute the score of each item $I_k \in m $ by the proposed formulation for the given tour $c$ \State Sort the items of $m$ in descending order according to their score value \State Keep the items with score greater than 0 and calculate their average $AVG_{score}$ and maximum value $MAX_{score}$ \State Set current packing plan $z = \emptyset$ and current weight of knapsack $W_c = 0$ \State Set $\beta \in [0, 1]$, which is chosen according to the size of the instance \While{$W_c < W $} \For{$i\gets n$ To $2$ } \If {$ Score_{i,k} > AVG_{score} + (MAX_{score} - AVG_{score}) \times \beta $} \State add item $I_k$ to the picking plan $z = z \cup \{I_k\}$ \State set $W_c = W_c + w_k$ \EndIf \If {$ W \times r \le W_c$} \State Break \EndIf \EndFor \For{$j\gets 1$ To $m$ } \If {$ W \geq W_c$} \State Insertion heuristic \Else \State Break \EndIf \EndFor \If{$G(c,z) > G(c,z^*)$} \State $ z^* \gets z$ \EndIf \EndWhile \State \Return{$z^*$} \end{algorithmic} \end{algorithm} Algorithm \ref{alg:frame} describes the basic framework
for solving the TTP. Based on the above ideas, the initial picking plan is introduced in Algorithm \ref{alg:init}, whose idea can be explained as follows. First, a new initial cyclic tour is generated by the Lin-Kernighan heuristic, and the priorities of the items are determined by the formula in equation \ref{eq:myScore}: the higher the score of an item, the higher its priority of being picked up. Then, the items with positive scores are picked out (items with negative scores do not contribute to the total profit) and their average score $AVG_{score}$ and maximum score $MAX_{score}$ are calculated. Afterwards, items with a score greater than the average are selected in the reverse order of the cities in the travel tour. The capacity constraint is imposed as a global constraint; that is, any insertion that results in a violation of the capacity constraint is prohibited. The insertion heuristic used in Algorithm \ref{alg:init} is based on Mei et al.\cite{RN20}. Finally, the best picking plan found is stored as $z^{*}$. The $TSPSolver$ adopts 2-OPT heuristic search to optimize the TSP component, with the Delaunay triangulation as a candidate generator for the 2-OPT heuristic. In the $KPSolver$, both the bit-flip operator and the simulated annealing metaheuristic are commonly used for the KP component; in this article, we use simulated annealing to solve the KP component. \section{Experimental Study} In this section, the experimental setup for the TTP is described and the comparative results are investigated against other state-of-the-art approaches. \subsection{Benchmark instances and experimental setup} We use the comprehensive set of TTP instances from Polyakovskiy et al.\cite{RN17} for our investigations.
The two components of the TTP are balanced in these instances in such a way that a near-optimal solution of one sub-problem does not guarantee an optimal solution of the other.\par The TTP dataset introduces several diversification parameters, resulting in 9720 TTP instances; these instances are based on the TSP instances from TSPLIB by Reinelt \cite{reinelt1991tsplib} and the knapsack types introduced by Martello et al.\cite{martello1999dynamic}. We consider a subset of the TTP library to perform our tests: $\ eil76$,\ $kroA100$,\ $ch130$,\ $u159$,\ $a280$,\ $u574$,\ $ u724$, \ $dsj1000,$\ $rl1304$,\ $fl1577$,\ $d2103$,\ $pcb3038$,\ $fnl4461$,\ $pla7397$,\ $rl11849$,\ $usa13509$,\ $brd14051$,\ $d15112$,\ $d18512$,\ $pla33810$.\par These instances cover small, medium, and large sizes with different characteristics. We denote the 4 categories as A, B, C, and D (categories C and D share the same KP type and item factor): \begin{itemize} \item Category A: 1 item in each city, item values and weights bounded and strongly correlated, small knapsack capacity. \item Category B: 5 items in each city, uncorrelated KP but similar item weights, average knapsack capacity. \item Category C: 10 items in each city, uncorrelated KP, high knapsack capacity. \item Category D: 9 items in each city, uncorrelated KP, high knapsack capacity. \end{itemize} According to the data types A, B, C, and D, the parameter $\beta$ mentioned in Algorithm \ref{alg:init} is set to 1, 0.65, 0.5, and 0.5, respectively. The same setup is adopted in all experiments. All solvers are run on each TTP instance 10 times independently, and all algorithms have a maximum runtime limit of 600 seconds.
All experiments are performed on a computer with an Intel(R) Core(TM) i5-8500 CPU (3.00GHz).\par \subsection{Comparison of algorithms} In order to gain further insight into the performance of each solver, we performed a statistical analysis using Friedman's test \cite{zimmerman1993relative} for all methods. Friedman's test is a non-parametric alternative to the repeated-measures one-way analysis of variance \cite{cuevas2004anova}; it is used to find differences between groups when the dependent variable is ordinal, and can also be applied to continuous data. To measure the consistency of the results, the relative standard deviation (RSD) is introduced, defined as $ RSD = \frac{S}{\overline{x}} \times 100\%$, where $S$ is the standard deviation over the runs and $\overline{x}$ is the arithmetic mean. As quality measures for all methods, we adopt the average ranking and the Friedman's-test ranking. For the average ranking, the rank of each method is computed from its objective value on each TTP instance, and these ranks are then averaged for each method.\par In the Friedman's-test ranking, the test statistic is defined as $F = \frac{12}{nk(k+1)} \sum r^2-3n(k+1)$, where $n$ denotes the number of instances and $k$ the number of methods. First, the rank of each method is calculated for each instance. Next, the sum of the ranks ($r$) of each method is computed. Then, the probability value ($p$) and the degrees of freedom ($d$) are used to calculate the critical chi-square value; the null hypothesis is rejected if the $F$ value is greater than the critical chi-square value. Finally, the average rank of each method is calculated.
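The two quantities above can be sketched in a few lines; the function names are ours, and the statistic follows the formula just given, with $r_j$ the rank sum of method $j$:

```python
from statistics import mean, stdev

def rsd(values):
    """Relative standard deviation in percent: RSD = S / xbar * 100."""
    return stdev(values) / mean(values) * 100

def friedman_statistic(ranks):
    """Friedman test statistic F = 12/(n k (k+1)) * sum(r_j^2) - 3 n (k+1),
    where ranks is an n x k table of per-instance ranks of the k methods."""
    n, k = len(ranks), len(ranks[0])
    rank_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)

# If 3 instances rank 3 methods identically, F reaches its maximum n(k-1) = 6:
F = friedman_statistic([[1, 2, 3], [1, 2, 3], [1, 2, 3]])   # 6.0
```

In practice the critical value would be taken from a chi-square table with $k-1$ degrees of freedom, as described above.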
The Friedman's test value can thus be used to compare the quality of the algorithms listed.\par \begin{table}[htbp] \centering \caption{Results for category A} \label{Tab01} \begin{tabular}{ccccccccc} \toprule[0.3mm] \multirow{2}{*}{\textbf{Instance}} & \multicolumn{2}{c}{$MATLS$} & \multicolumn{2}{c}{$S5$} & \multicolumn{2}{c}{$CS2SA^*$} & \multicolumn{2}{c}{$RWS$} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9} & $Mean$ & $RSD$ & $Mean$ & $RSD$ & $Mean$ & $RSD$ & $Mean$ & $RSD$ \\ \toprule[0.1mm] $eil76 $ &3705(3) & 1.35 & 3742(2) & 0 & 3425(4) & 0.31 & \textbf{3765}(1) & 2.52 \\ $kroA100 $ &\textbf{4660 }(1) & 1.36 & 4283(4) & 0 & 4435(3) & 1.07 & 4445(2) & 1.9 \\ $ch130$ &8876(4) & 0.79 & \textbf{9250}(1) & 0 & 8964(3) & 0.63 & 9013(2) & 0.03 \\ $u159$ &8403(4) & 1.40 & \textbf{8634}(1) & 0 & 8452(3) & 0 & 8627(2) & 0.33 \\ $a280$ &17678(4) & 0.54 & \textbf{18406}(1) &0.01 & 17728(3) & 0.22 & 17743(2) & 0.53 \\ $u574$ &26121(3) & 2.30 & \textbf{26957 }(1) & 0.10 & 26100(4) & 0.03 & 26366(2) & 0.6 \\ $u724$ &48980(3) & 1.25 & \textbf{50313 }(1) & 0.12 & 49623(2) & 0.03 & 48794(4) & 1.0 \\ $dsj1000$ &143699(2) & 0 & 137653(4) & 0.16 & \textbf{144219}(1) & 0 & 140656(3) & 0.7 \\ $rl1304$ &75800(3) & 1.26 & \textbf{80067 }(1) & 0.86 & 75825(2) & 0.01 & 75206(4) & 0.7 \\ $fl1577$ &88375(3) & 0.41 & \textbf{92328}(1) & 1.25 & 88259(4) & 0.16 & 88923(2) & 0.7 \\ $d2103$ &113005(4) & 0.45 & \textbf{120482 }(1) & 0.2 & 118844(2) & 0 & 118338(3) & 0.05 \\ $pcb3038$ &148265(3) & 1.18 & \textbf{160006}(1) & 0.15 & 145837(4) & 0.6 & 148973(2) & 0.6 \\ $fnl4461$ &247553(2) & 0.40 &\textbf{ 262237}(1) & 0.11 & 239287(4) & 0.42 & 241291(3) & 0.3 \\ $pla7397$ &365613(2) & 1.32 & \textbf{395156 }(1) & 0.56 & 315153(4) & 0 & 315386(3) & 0.15 \\ $rl11849$ &661392(2) & 0.29 & \textbf{707183 }(1) & 0.24 & 658519(3) & 0.05 & 653857(4) & 0.33 \\ $usa13509$ &747885(2) & 0.53 & \textbf{809623}(1) & 0.35 &683123(3) & 0.2 & 677983(4) & 0.66 \\ 
$brd14051$ &815602(2) & 0.36 & \textbf{875008}(1) & 0.25 & 800495(3) & 0.06 & 798787(4) & 0.12 \\ $d15112$ &871153(2) & 0.52 & \textbf{939726 }(1) & 0.48 & 870253(3) & 0.12 & 868019(4) & 0.27 \\ $d18512$ &996582(2) & 0.84 & \textbf{1072308}(1) & 0.21 & 964625(3) & 0.23 & 962781(4) & 0.32 \\ $pla33810$ &1730352(4) & 0.92 &\textbf{1870306}(1) & 0.62 & 1778256(3) & 0.31 & 1781984(2) & 0.36 \\ \midrule \multirow{1}{*}{\textbf{Average ranking}} & \multicolumn{2}{c}{$2.75$} & \multicolumn{2}{c}{$1.35$} & \multicolumn{2}{c}{$3.05$} & \multicolumn{2}{c}{$2.85$} \\ \bottomrule \end{tabular} \end{table} The results of the comparison study between the proposed method and three other state-of-the-art algorithms are shown in Tables 1--3. For each instance, 10 independent runs are performed. The best mean objective values are highlighted in bold; the mean objective value is regarded as the quality of the solution when comparing the performance of the algorithms, and the results of the Friedman's-test-based ranking for each method are presented in the last row of each table.\par \begin{table}[htbp] \centering \caption{Results for category B} \label{Tab02} \begin{tabular}{ccccccccc} \toprule[0.3mm] \multirow{2}{*}{\textbf{Instance}} & \multicolumn{2}{c}{$MATLS$} & \multicolumn{2}{c}{$S5$} & \multicolumn{2}{c}{$CS2SA^*$} & \multicolumn{2}{c}{$RWS$} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9} & $Mean$ & $RSD$ & $Mean$ & $RSD$ & $Mean$ & $RSD$ & $Mean$ & $RSD$ \\ \toprule[0.1mm] $eil76 $ &\textbf{22185}(1) & 0.75 & 20097(3) & 0 & 18753(4) & 0 & 21620(2) & 0.02 \\ $kroA100 $ &\textbf{42535}(1) & 1.45 & 39440(3) & 0 & 39271(4) & 0 & 41258(2) & 0.3 \\ $ch130$ &\textbf{61028}(1) & 0.12 & 58685(2) & 1.21 & 50695(4) & 0 & 57964(3) & 4.6 \\ $u159$ &58289(2) & 1.06 & 57618(4) & 0 & 58090(3) & 0 & \textbf{58946}(1) & 1 \\ $a280$ &\textbf{110132}(1) & 2.16 & 109921(3) & 0 & 107696(4) & 0 & 107874(2) & 1.4 \\ $u574$ &\textbf{254770}(1) &0.76 & 251775(2) & 0.02 & 
248584(3) & 0 & 247992(4) & 1.5 \\ $u724$ &303435(4) & 1.17 & 305977(2) & 0.32 & \textbf{309636}(1) & 0 & 304420(3) & 2.3 \\ $dsj1000$ &340317(2) & 1.55 & \textbf{342189}(1) & 0.59 & 332883(4) & 0 & 339557(3) & 0.68 \\ $rl1304$ &572766(4) & 1.2 & 575102(3) & 0.85 & 585600(2) & 0 & \textbf{590103}(1) & 0.02 \\ $fl1577$ &609288(3) & 1.77 & 607247(4) & 1.62 & \textbf{636422}(1) & 0 & 635112(2) & 0.1 \\ $d2103$ &849625(2) & 1.35 & \textbf{853587}(1) & 1.2 & 842520(4) & 0 & 842596(3) & 0.02 \\ $pcb3038$ &1168108(4) & 0.52 & 1179510(2) & 0.16 & \textbf{1193738}(1) & 0 & 1176520(3) & 0.55 \\ $fnl4461$ &1617401(4) & 0.3 & 1625856(2) & 0.16 & \textbf{1628414}(1) & 0 & 1624685(3) & 0.25 \\ $pla7397$ &4178551(2) & 3.25 & \textbf{4371433}(1) & 0.82 & 3713312(4) & 0 & 3751665(3) & 0.52 \\ $rl11849$ &4587812(4) & 0.48 & 4630753(3) & 0.29 & 4710135(2) & 0 & \textbf{4729374}(1) & 0.1 \\ $usa13509$ &7767305(4) & 2.1 & 7818115(3) & 0.86 & \textbf{8115168}(1) & 0 & 8022398(2) & 1.3 \\ $brd14051$ &6492925(4) & 1.25 & 6552658(3) & 0.58 & 6654162(2) & 0 & \textbf{6778329}(1) & 0.64 \\ $d15112$ &6828152(4) & 2.3 & 6991416(3) & 1.21 & \textbf{7606856}(1) & 0 & 7596136(2) & 0.41 \\ $d18512$ &7164397(4) & 1.25 & 7257669(3) & 0.81 & \textbf{7579996}(1) & 0 & 7507146 (2) & 0 \\ $pla33810$ &15532942(4) & 1.5 & 15574550(3) & 0.74 & 15756385(2) & 0.8 & \textbf{15821323}(1) & 0.3 \\ \midrule \multirow{1}{*}{\textbf{Average ranking}} & \multicolumn{2}{c}{$2.8$} & \multicolumn{2}{c}{$2.5$} & \multicolumn{2}{c}{$2.45$} & \multicolumn{2}{c}{$2.25$} \\ \bottomrule \end{tabular} \end{table} As described in Section 3, We argue that the weight of the item has a greater impact on the final profit than other item attributes (value, location, etc.). In order to verify our speculation, the variant in equation \ref{eq:myScore} is configured as follows: \begin{itemize} \item Solver1: The value of exponent $\alpha$ is set to 1.5. \item Solver2: The value of exponent $\alpha$ is set to 1. 
\end{itemize} The results shown in Table \ref{Tab04} indicate that Solver 2 performs better on many TTP instances, which supports our reasoning. \begin{figure} \centering \begin{minipage}[c]{0.30\textwidth} \centering \includegraphics[height=3.5cm,width=4.2cm]{friedmanA.eps} \end{minipage}% \begin{minipage}[c]{0.30\textwidth} \centering \includegraphics[height=3.5cm,width=4.2cm]{friedmanB.eps} \end{minipage} \begin{minipage}[c]{0.30\textwidth} \centering \includegraphics[height=3.5cm,width=4.2cm]{friedmanC.eps} \end{minipage} \caption{Friedman test of the four approaches on Categories A, B, and C} \label{fig:friedmanTest} \end{figure} \begin{figure} \centering \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Bar_A.eps} \end{minipage}% \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Box_A.eps} \end{minipage} \caption{Rescaled performances of our approaches with parameter $\alpha$ on Category A instances} \label{fig:barBoxA} \end{figure} \begin{figure} \centering \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Bar_B.eps} \end{minipage}% \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Box_B.eps} \end{minipage} \caption{Rescaled performances of our approaches with parameter $\alpha$ on Category B instances} \label{fig:barBoxB} \end{figure} \begin{figure} \centering \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Bar_C.eps} \end{minipage}% \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Box_C.eps} \end{minipage} \caption{Rescaled performances of our approaches with parameter $\alpha$ on Category C instances} \label{fig:barBoxC} \end{figure} \begin{figure} \centering \begin{minipage}[c]{0.50\textwidth} \centering
\includegraphics[height=4.5cm,width=6.2cm]{Bar_D.eps} \end{minipage}% \begin{minipage}[c]{0.50\textwidth} \centering \includegraphics[height=4.5cm,width=6.2cm]{Box_D.eps} \end{minipage} \caption{Shown are the rescaled performances of our approaches with parameter $\alpha$ on Category D instances} \label{fig:barBoxD} \end{figure} \begin{table}[htbp] \centering \caption{Results for category C} \label{Tab03} \begin{tabular}{ccccccccc} \toprule[0.3mm] \multirow{2}{*}{\textbf{Insance}} & \multicolumn{2}{c}{$MATLS$} & \multicolumn{2}{c}{$S5$} & \multicolumn{2}{c}{$CS2SA^*$} & \multicolumn{2}{c}{$RWS$} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9} & $Mean$ & $RSD$ & $Mean$ & $RSD$ & $Mean$ & $RSD$ & $Mean$ & $RSD$ \\ \toprule[0.1mm] $eil76 $ &\textbf{88115}(1) & 0.32 & 85664(4) & 0 & 87577(3) & 0 & 87664(2) & 0.27 \\ $kroA100 $ &155492(4) & 0.01 & 155540(3) & 0 & 155585(2) & 0 & \textbf{155947}(1) & 0.48 \\ $ch130$ &\textbf{203468}(1) & 2.13 & 201085(3) & 0.82 & 197555(4) & 0 & 202348(2) & 1.9 \\ $u159$ &242558(2) & 0.45 & 242485(3) & 0.31 & 242201(4) & 0.52 & \textbf{244770}(1) & 0.05 \\ $a280$ &426259(3) & 0.2 & \textbf{429000}(1) & 0 & 421713(4) & 0 & 426736(2) & 0.14 \\ $u574$ &966207(2) & 0.24 & \textbf{966344}(1) & 0.11 & 953997(4) & 0 & 955745(3) & 0.16 \\ $u724$ &1188761(3) & 0.45 & 1188364(4) & 0.08 & 1191819(2) & 0 & \textbf{1193604}(1) & 0.32 \\ $dsj1000$ &1472612(2) & 1.2 & \textbf{1479605}(1) & 0.24 & 1468858(4) & 0 & 1469206(3) & 0.05 \\ $rl1304$ &2178475(4) & 0.21 & 2184853(3) & 0.33 & 2198943(2) & 0 & \textbf{2198947}(1) & 0 \\ $fl1577$ &2466353(4) & 0.26 & 2470917(3) & 0.21 & 2505291(2) & 0 & \textbf{2505295}(1) & 0 \\ $d2103$ &3392866(2) & 0.32 & 3392172(3) & 0.26 & 3373781(4) & 0 & \textbf{3410978}(1) & 0.93 \\ $pcb3038$ &4564228(4) & 0.22 & 4573748(3) & 0.15 & 4612956(2) & 0 & \textbf{4612966}(1) & 0 \\ $fnl4461$ &6534422(4) & 0.17 & \textbf{6554497}(1) & 0.26 & 6545335(3) & 0 & 6545355(2) & 0 \\ $pla7397$ &13865791(2) & 
1.55 & \textbf{14239606}(1) & 1.2 & 13197756(4) & 0 & 13440471(3) & 1.83 \\ $rl11849$ &18275210(4) & 0.23 & 18314650(3) & 0.12 & \textbf{18505203}(1) & 0 & 18422410(2) & 0.09 \\ $usa13509$ &25878184(4) & 0.44 & 25918971(3) & 0.55 & 26437361(2) & 0 & \textbf{26552971}(1) & 0 \\ $brd14051$ &23672405(4) & 0.62 & 23826398(2) & 0.51 & \textbf{23908540}(1) & 0 & 23809751(3) & 0.01 \\ $d15112$ &25942410(4) & 1.52 & 26211252(3) & 1.04 & 27182609(2) & 0.15 & \textbf{27184251}(1) & 0 \\ $d18512$ &27164388(4) & 1.25 & 27427144(3) & 0.32 & 27849746(2) & 0.21 & \textbf{27980876}(1) & 0 \\ $pla33810$ &58003895(3) & 0.5 & 57967586(4) & 0.42 & 58107703(2) & 0.01 & \textbf{58818293}(1) & 0.21 \\ \midrule \multirow{1}{*}{\textbf{Average ranking}} & \multicolumn{2}{c}{$3.05$} & \multicolumn{2}{c}{$2.6$} & \multicolumn{2}{c}{$2.7$} & \multicolumn{2}{c}{$1.7$} \\ \bottomrule \end{tabular} \end{table} \begin{table}[htbp] \centering \caption{Performance comparision of two solvers on 3 categories of TTP instances } \label{Tab04} \begin{tabular}{cccccccc} \toprule[0.3mm] \multirow{2}{*}{\textbf{Insance}} & \multicolumn{2}{c}{$Category\ A$} & \multicolumn{2}{c}{$Category\ B$} & \multicolumn{2}{c}{$Category\ C$} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} & $Solver 1$ & $Solver 2$ & $Solver 1$ & $Solver 2$ & $Solver 1$ & $Solver 2$ \\ \toprule[0.1mm] $eil76 $ &\textbf{3765} & 3670 & \textbf{21620} & 20192 & \textbf{87664} & 87599 \\ $kroA100 $ &\textbf{4445} & 4424 & 41258 & \textbf{41353} & \textbf{155947} & 155669 \\ $ch130$ &\textbf{9013} & 8963 & 57964 &\textbf{ 58792} & \textbf{202348} & 202182 \\ $u159$ &\textbf{8627} & 8566 & \textbf{58966} & 58955 & \textbf{244770} & 244228 \\ $a280$ &17723 & 17723 & 107874 & \textbf{108378} & \textbf{426736} & 424358 \\ $u574$ &\textbf{26366} & 26265 & 247992 & \textbf{249368} &\textbf{955745} & 953998 \\ $u724$ &48794 & \textbf{49588} & 304420 & \textbf{309750} & \textbf{1193604} & 1191819 \\ $dsj1000$ &\textbf{141117} & 140620 & 
\textbf{339557} & 338661 & \textbf{1469206}  & 1468859 \\ $rl1304$ &75206 & \textbf{76435} & 585103 & \textbf{585600} & \textbf{2198947} & 2198942 \\ $fl1577$ &\textbf{88923} & 88248 & 635112 & \textbf{636424} & \textbf{2505295} & 2505294 \\ $d2103$ &118338 & \textbf{118652} & \textbf{842596} & 842522 & \textbf{3410978} & 3393849 \\ $pcb3038$ &\textbf{149337} &146115 & 1176520 & \textbf{1193737} & \textbf{4612966} & 4612957 \\ $fnl4461$ &\textbf{241291} & 240822 & 1624685 & \textbf{1628417} & \textbf{6545355} & 6545346 \\ $pla7397$ &\textbf{315386} &314073 & \textbf{3751665} & 3713312 & \textbf{13440471} & 13197751 \\ $rl11849$ &653857 &\textbf{659283} & \textbf{4729274} & 4710149 & 18422410 & \textbf{18504597} \\ $usa13509$ &677983 & \textbf{682238} & 8022398 & \textbf{8115207} & \textbf{26552971} & 26422272 \\ $brd14051$ &798787 &\textbf{802188} & \textbf{6778329} & 6654177 & 23809751 & \textbf{23907953} \\ $d15112$ &868019 &\textbf{868998} & 7606136 & \textbf{7606876} & \textbf{27184251} & 27182054 \\ $d18512$ &962781 &\textbf{964518} & 7507146 & \textbf{7580272} & \textbf{27980877} & 27861162 \\ $pla33810$ &\textbf{1781384} & 1777592 & \textbf{15821323} & 15745060 & \textbf{58818293} & 58545275 \\ \bottomrule \end{tabular} \end{table} \subsection{Results analysis and discussion} According to the presented results, the proposed algorithm surpasses the other state-of-the-art algorithms (MATLS\cite{RN20}, S5\cite{RN19}, and CS2SA*\cite{RN9}) on most instances of the TTP. This is mainly because the solution space is explored more thoroughly. The simple Bit-Flip heuristic and the simulated annealing algorithm are commonly used for the KP component of the TTP. We ran a number of instances, and the results show that the simulated annealing algorithm performs better on the large-scale instances.
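The combination of Bit-Flip moves with simulated annealing for the KP component can be sketched as follows. This is a minimal illustration, not the implementation compared above; the cooling schedule and all parameters are placeholders.

```python
import math
import random

def knapsack_sa(values, weights, capacity, iters=20000, t0=10.0, seed=0):
    """Simulated annealing over 0-1 vectors for a knapsack instance,
    using a single random bit-flip as the neighbourhood move."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n  # start with the empty knapsack

    def profit(sol):
        w = sum(wi for wi, b in zip(weights, sol) if b)
        return -1 if w > capacity else sum(v for v, b in zip(values, sol) if b)

    cur_p = profit(x)
    best, best_p = x[:], cur_p
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9      # linear cooling
        i = rng.randrange(n)
        x[i] ^= 1                              # bit-flip move
        p = profit(x)
        if p >= cur_p or rng.random() < math.exp((p - cur_p) / t):
            cur_p = p                          # accept the move
            if p > best_p:
                best, best_p = x[:], p
        else:
            x[i] ^= 1                          # reject: revert the flip
    return best, best_p
```

The infeasible solutions are simply given a negative profit, so the annealer drifts back into the feasible region.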
The proposed algorithm adopts a reverse-order picking plan: the items are sorted according to their profits, weights, and locations in the given tour, and an item is picked if its score is greater than the average. In order to avoid getting trapped in a local optimum, the algorithm uses different travel tours instead of a fixed one within a given time budget. As shown in Table \ref{Tab01}, the algorithm remains somewhat competitive in category A even though the profit and the weight of the items are strongly correlated. The presented results show that S5 surpasses the other algorithms on most instances. Category A has the smallest knapsack capacity and only one item in each city; we argue that the greedy approach adopted by S5 is beneficial for solving this type of KP component of the TTP.\par For the instances with a higher knapsack capacity in category B (5 items in each city, KP uncorrelated with similar weights), the comparative results suggest that CS2SA* and S5 remain competitive in this category. However, Table \ref{Tab02} shows that RWS clearly outperforms the other heuristics on the majority of the instances, such as u159, rl1304, rl11849, brd14051, and pla33810. MATLS and CS2SA* also perform better on some instances, as shown in the table. \par Table \ref{Tab03} shows the comparative results for Category C (10 items per city, uncorrelated). This category has the largest knapsack capacity. CS2SA* performs better on many instances compared to the other algorithms shown in the table. Note that RWS outperforms the other heuristics on most instances with high knapsack capacities, such as kroA100, u159, u724, rl1304, fl1577, d2103, pcb3038, usa13509, d15112, d18512, and pla33810. However, MATLS performs poorly, mainly because population-based heuristics for the TTP are not efficient at handling large-scale instances.
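The reverse-order picking plan just described can be sketched as follows. The actual score of Eq.~\ref{eq:myScore} is not reproduced here; the generic value/weight$^{\alpha}$ score below is only an illustrative stand-in, and the item data in the usage example are invented.

```python
def pick_items_reverse(items, capacity, alpha=1.0):
    """Sketch of the reverse-order picking plan: score each item, keep those
    scoring above the average, then pick from the *end* of the tour backwards
    while the knapsack capacity allows.
    `items` is a list of (value, weight, tour_position) triples; the score
    value / weight**alpha is a stand-in, not the paper's Eq. (myScore)."""
    scores = [(v / (w ** alpha), v, w, pos) for (v, w, pos) in items]
    avg = sum(s for s, *_ in scores) / len(scores)
    candidates = [t for t in scores if t[0] > avg]
    # traverse the tour from back to front
    candidates.sort(key=lambda t: t[3], reverse=True)
    plan, load = [], 0
    for s, v, w, pos in candidates:
        if load + w <= capacity:
            plan.append(pos)
            load += w
    return plan, load
```

Raising `alpha` penalizes heavy items more strongly, which is exactly the knob the two solver variants above turn.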
To obtain a better performance analysis of all the algorithms, the Friedman test is applied to detect differences between the groups when the dependent variable is ordinal, followed by the Nemenyi post-hoc test. The resulting rankings of all the algorithms are shown in Figure \ref{fig:friedmanTest}, which presents the Friedman-test rankings of the four approaches on Categories A, B, and C. As can be seen from the figure, on large-scale instances (Category C) our proposed algorithm ranks better than the other algorithms. \par In addition, we wish to verify the hypothesis stated in Section 3: due to the cumulative effect of the weight of the picked items, the weight of an item has a greater impact on the final profit than its other attributes (value, location, etc.). We conducted some experiments, whose results are shown in Table \ref{Tab04}. The value of the exponent $\alpha$ is used to manage the impact of the weight of the items on the final profit. It can be clearly observed that Solver1 outperforms Solver2 on many instances (especially in Category C). A representative excerpt of the results is shown in Figures \ref{fig:barBoxA}, \ref{fig:barBoxB}, and \ref{fig:barBoxC}. Note that we rescale the achieved objective values into the range $\left[ 0, 1 \right]$ by normalization. The box plot on the right side of each figure shows the performance of the algorithm with different values of the parameter $\alpha$. The results show that on small-scale (Category A) and medium-scale (Category B) instances the Solver1 algorithm has no obvious superiority, whereas on large-scale instances (Category C) it has a clear advantage. For further verification, we compared two large-scale instances from Category C (item factor: 10) and Category D (item factor: 9) in Figure \ref{fig:barBoxD}. This experimental result also confirms the superior performance of Solver1.
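The Friedman ranking underlying Figure \ref{fig:friedmanTest} can be reproduced in a few lines. This is a plain sketch (ties are not midrank-adjusted, and the Nemenyi post-hoc step is omitted); the data in the usage example are invented, not the paper's results.

```python
def friedman_ranks(results):
    """results[i][j] = objective of algorithm j on instance i (higher is better).
    Returns the average rank of each algorithm (rank 1 = best) and the
    Friedman chi-square statistic  12N/(k(k+1)) * (sum R_j^2 - k(k+1)^2/4)."""
    n, k = len(results), len(results[0])
    ranks = [[0.0] * k for _ in range(n)]
    for i, row in enumerate(results):
        order = sorted(range(k), key=lambda j: row[j], reverse=True)
        for r, j in enumerate(order, start=1):
            ranks[i][j] = float(r)             # ties are ignored in this sketch
    avg = [sum(ranks[i][j] for i in range(n)) / n for j in range(k)]
    chi2 = 12.0 * n / (k * (k + 1)) * (sum(r * r for r in avg) - k * (k + 1) ** 2 / 4.0)
    return avg, chi2
```

Feeding in one row per TTP instance and one column per algorithm reproduces the "Average ranking" rows of Tables \ref{Tab01}--\ref{Tab03}.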
In other words, the result of this investigation verifies our hypothesis: the weight of an item has a greater impact on the final profit in large-scale instances.\par Therefore, from these results we can conclude that our proposed algorithm performs better than the other state-of-the-art algorithms on most of the instances, especially in categories B and C. The proposed algorithm adopts a reverse-order picking plan, based on sorting the items according to the proposed formula. It expands the search space and is more likely to find promising solutions to the TTP. \section{Conclusion} In real-world optimization problems, combinatorial optimization problems with two or more interdependent components play a major role. Due to this interdependency, an optimal solution to one of the components does not guarantee an optimal solution to the whole problem. The TTP can be thought of as a combination of two interdependent well-known problems, namely the Travelling Salesman Problem (TSP) and the 0-1 Knapsack Problem (KP), and was introduced to represent such real-world applications. The interaction and dependence between the sub-problems reflect the complexity of the whole problem. Several approaches have been introduced to solve this problem, such as heuristic and cooperative methods. \par In this paper, motivated by the cumulative effect of the weight of the picked items, we hypothesize that the weight of an item has a greater impact on the final profit than its other attributes (value, location, etc.). To address this issue, we proposed a new heuristic for the TTP based on managing the impact of the weight of the items on the final profit.
Besides, we believe that high-value and low-weight items near the end of the travel route should be picked up. Under the condition that the total weight of the picked items does not exceed the knapsack capacity, the items are picked up from back to front along the route, so we proposed a method of picking items in reverse order. The obtained results show that our approach is competitive on many instances of different sizes and types compared with other heuristics. In particular, our algorithm performs better on large-scale instances. \par Most real-world combinatorial optimization problems have more than two components. In the future, further research will investigate other problems with more than two components in order to capture their internal dependencies. Furthermore, our proposed method can be further improved with respect to search-space exploration and adopted in problems with many interacting components, which have great potential in real-world applications. \section*{Acknowledgments} This work was partially supported by the Natural Science Foundation of Guangdong Province of China (Grant No.~2020A1515010691), the Science and Technology Project of Guangdong Province of China (Grant No.~2018A0124), and the National Natural Science Foundation of China (Grant Nos.~61573157 and 61703170). The authors also gratefully acknowledge the reviewers for their helpful comments and suggestions, which helped to improve the presentation. \bibliographystyle{spmpsci}
\section*{Introduction} Throughout this paper, $R$ denotes a discrete valuation ring with field of fractions $K$ and residue field $k$ of characteristic $p >0$. Let $f: \mathcal{X} \to \Spec(R)$ be a faithfully flat morphism of finite type, and let $f_K: X \to \Spec(K)$ be its generic fiber. Assume that we are given a finite flat $K$-group scheme $G$ and an fppf $G$-torsor $Y \to X$. The problem of extending the $G$-torsor $Y \to X$ consists in finding a finite and flat $R$-group scheme $\mathcal{G}$ whose generic fiber is isomorphic to $G$ and an fppf $\mathcal{G}$-torsor $\mathcal{Y} \to \mathcal{X}$ whose generic fiber is isomorphic to $Y \to X$ as a $G$-torsor. \\ A general solution to this problem does not exist, but the question has been investigated in various settings, by Grothendieck, Dajano Tossici \cite{Tossici} and Marco Antei \cite{Ant}, amongst others. For example, the case when $G$ is a constant group scheme of order coprime to $p$ and $\mathcal{X}\to \Spec(R)$ is smooth with geometrically connected fibers is known to have a solution; see \cite[X, \S 3.1 and \S 3.6]{Groth2}.\\ A natural strategy consists of first looking for a model of $G$ that is finite and flat (if any) and then focusing on the extension of the $G$-torsor. For instance, this has been done by Tossici in the case where $p$ divides $|G|$ \cite{Tossici}. He studied the extension of torsors under finite and flat commutative group schemes over local schemes under some extra assumptions. Moreover, he also studied, using the so-called effective models, the extension of $\mathbb Z/p\mathbb Z$-torsors and $\mathbb Z/p^2\mathbb Z$-torsors imposing the normality of $\mathcal{Y}$. \\ Antei and Emsalem approached the issue from a different point of view in \cite{Antei}.
Since an $R$-model of $G$ that is finite and flat does not always exist, they choose to work with a more general model of $G$ that is flat but only quasi-finite, and then extend the torsor over some scheme $\mathcal{X}'$, which is obtained by modifying the special fiber of $\mathcal{X}$ (where $\mathcal{X}$ is given as in the beginning of the Introduction). By allowing such models of $G$, they solved the problem of extending any $G$-torsor up to a modification of $\mathcal{X}$, without any assumptions on the residue characteristic. When $\mathcal{X}$ is a relative curve, this modification is obtained by performing a finite sequence of N\'eron blow-ups of $\mathcal{X}$ along closed subschemes of the special fiber. \\ Let us now assume that $G$ is commutative and admits a finite flat model $\mathcal{G}\to \Spec(R)$. In an earlier work \cite{Ant}, Antei took advantage of the bijective correspondence between fppf $\mathcal{G}$-torsors over $\mathcal{X}$ and the set $\Hom(\mathcal{G}^D, \Pic_{\mathcal{X}/R})$ of morphisms of group functors, where $\mathcal{G}^D$ denotes the Cartier dual of $\mathcal{G}$, and $\Pic_{\mathcal{X}/R}$ the relative Picard functor of $\mathcal{X} \to \Spec(R)$. Thus, the extension of torsors is reduced to the extension of certain group schemes and of morphisms between them. Under some rather strong assumptions on the Picard functor $\Pic_{\mathcal{X}/R}$, Antei treated essentially the case where $\mathcal{X}$ is smooth. He proved that $G$-torsors always extend in this context \cite[Theorem 3.10]{Ant}. \\ In this paper, we shall consider the problem of extending fppf $G$-torsors over a smooth projective $K$-curve $C$, endowed with a $K$-rational point $Q_0$, and seek an extension over some regular $R$-model $\mathcal{C}$ of $C$.
We first emphasize that the existence of an extension of a given $G$-torsor is, in general, a strong requirement: if we assume that we have a finite flat model $\mathcal{G}$ of $G$ that is in addition \'etale, our extended torsor---if it exists---should be \'etale, \emph{i.e.} unramified over any codimension one point $x$ of $\mathcal{C}_k$. But this is quite a strong condition; in order to relax it, we shall work inside a larger category, namely, the category of \textit{logarithmic} torsors. More precisely, we endow $\mathcal{C}$ with the logarithmic structure induced by its special fiber $\mathcal{C}_k$, seen as a divisor. Then logarithmic torsors over $\mathcal{C}$ are, roughly speaking, tamely ramified along $\mathcal{C}_k$, which dramatically extends the range of this kind of technique. \\ The first natural question that comes to mind is to check whether or not the previously stated one-to-one correspondence between torsors and morphisms of groups generalizes to the logarithmic world. Indeed, we prove the following theorem:\\ \textbf{Theorem \ref{raynaudlog}.} \textsl{Let $C$ be a smooth projective and geometrically connected curve endowed with a rational point. Let $f : \mathcal{C} \to \Spec(R)$ be a regular model of $C$. Let $\mathcal{G}$ be a finite flat and commutative group scheme over $R$ and let $\mathcal{G}^D$ denote its Cartier dual. We have a canonical isomorphism: $$R^1_{klf}f_*(\mathcal{G}_{\mathcal{C}}^D) \xrightarrow{\simeq} \underline{\Hom}(\mathcal{G}, \Pic^{log}_{\mathcal{C}/R}), $$ where $\Pic^{log}_{\mathcal{C}/R}$ is the logarithmic Picard functor (cf. Definition \ref{Piclog})}\hop \hop The global sections of the sheaf on the left can be interpreted as $\mathcal{G}^D_{\mathcal{C}}$-pointed log torsors over $\mathcal{C}$ (cf. Corollary \ref{pointedlog}). Hence, this shows that extending torsors into log torsors over $\mathcal{C}$ also reduces to the extension of certain group functors and of morphisms between them.
However, this criterion is not easy to handle in practice (for example, we do not have any general result about the representability of the log Picard functor by a scheme or by an algebraic space). This is why we deduce from the previous statement another criterion that is easier to use in practice, namely: \textbf{Corollary \ref{G^D -> J}.} \textsl{Let $G$ be a finite and commutative $K$-group scheme and let $Y \to C$ be an fppf pointed $G$-torsor. Let $J$ be the Jacobian of the curve $C$, and let $G^D \to \Pic^0_{C/K}=J$ be the associated morphism from the isomorphism (\ref{pointed-GJ}). If the morphism $G^D \to J$ extends into a morphism $\mathcal{G}^D \to \mathcal{J}$, where $\mathcal{J}$ is the N\'eron model of $J$ and $\mathcal{G}$ a finite flat $R$-group scheme with generic fiber $G$, then the $G$-torsor $Y \to C$ extends uniquely into a logarithmic pointed $\mathcal{G}$-torsor over $\mathcal{C}$.} In particular, if the extended morphism $\mathcal{G}^D \to \mathcal{J}$ factors through $\mathcal{J}^0$, we obtain an fppf extension of the initial torsor, which can in fact be deduced from the correspondence mentioned above, since $\mathcal{J}^0\simeq \Pic^0_{\mathcal{C}/R}$ in our case.\\ This criterion will be useful to us for three reasons. First of all, if the associated morphism $G^D \to J$ extends into some morphism $\mathcal{G}^D \to \mathcal{J}$, we compute the obstruction for the torsor over $C$, which extends into a log torsor over $\mathcal{C}$, to extend further into an fppf one. More precisely, we will see that this obstruction can be written in terms of the obstruction for the Poincar\'e extension of $J_C$ by $\mathbb{G}_{m,C}$ to extend into an fppf extension of $\mathcal{J}_{\mathcal{C}}$ by $\mathbb{G}_{m,\mathcal{C}}$. \\ Secondly, we generalize a result by Chiodo \cite[Propositions 7.4.1 and 7.5.1]{Chiodo}, which provides a criterion for the existence of a finite \'etale $R$-model of $J[r]$ when $r$ is prime to $p$.
In fact, his result is a finiteness criterion for $\mathcal{J}[r]$, because when $r$ is prime to $p$, $\mathcal{J}[r]$ is \'etale, hence is the natural candidate for being the N\'eron model of $J[r]$. This no longer holds when $p$ divides $r$; nevertheless, we show that the same finiteness criterion for $\mathcal{J}[r]$ holds. This yields interesting examples of commutative group schemes admitting a finite flat model that maps into $\mathcal{J}$. Applying our previous results to this setting, we obtain the following:\\ \noindent\textbf{Corollary \ref{coro}.} \textsl{Let $C$ be a smooth projective geometrically connected curve over $K$ of genus $g \geq 2$ with a rational point, and let $\mathcal{C}$ be a regular model of $C$ over $R$. Let $G$ be a finite flat $K$-group scheme killed by $r$, and let $Y \rightarrow C$ be a pointed fppf $G$-torsor such that $Y$ is geometrically connected. If $C$ is semistable and if Chiodo's criterion is satisfied (Proposition~\ref{prop:CC}), then $G$ has a finite flat model $\mathcal{G}$ and $Y \rightarrow C$ extends uniquely into a pointed logarithmic $\mathcal{G}$-torsor over $\mathcal{C}$.}\\ Finally, the last part of the paper is devoted to the study of examples of extensions of torsors. We consider a hyperelliptic curve over $\mathbb Q$, depending on a prime number $p$, whose Jacobian contains a subgroup isomorphic to $(\mathbb Z/p\mathbb Z)^2$. By what is explained above, this gives a $\mu_{p}^2$-torsor over the curve. To begin with, we construct a regular model of the curve over $\mathbb Z_l$, for a prime number $l$. Then, we ask whether the previous torsor extends over this model.
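The passage from torsion in the Jacobian to torsors over the curve invoked here is the usual Kummer-theory identification; as a reminder (this is standard and not specific to the present paper), the fppf Kummer sequence $1 \to \mu_n \to \mathbb{G}_m \to \mathbb{G}_m \to 1$ over $C$ yields

```latex
\[
0 \longrightarrow \Gamma(C,\mathcal{O}_C^{\times})/\Gamma(C,\mathcal{O}_C^{\times})^{n}
\longrightarrow H^1_{fppf}(C,\mu_n)
\longrightarrow \operatorname{Pic}(C)[n]
\longrightarrow 0,
\]
```

so an $n$-torsion line bundle on $C$, in particular a point of $J(K)[n]$, gives rise to a $\mu_n$-torsor over $C$; taking $n=p$ and a subgroup $(\mathbb Z/p\mathbb Z)^2 \subset J$ produces the $\mu_p^2$-torsor mentioned above.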
For different values of $l$, we treat different examples, which in some cases yield an fppf extension of the initial torsor and in others a logarithmic extension not coming from an fppf torsor; this is achieved by studying the (unique) extension of the morphism $(\mathbb Z/p\mathbb Z)^2 \to J$ into a morphism $(\mathbb Z/p\mathbb Z)^2 \to \mathcal{J}$. \\ This paper is divided into four sections. In section \ref{section1}, we recall some basic and well-known facts about log schemes and log torsors. Then, we prove Theorem \ref{raynaudlog} stated above, as well as Corollary \ref{G^D -> J}. In section \ref{section2}, we compute the obstruction for an fppf torsor that extends into a log torsor (under the assumptions of Corollary \ref{G^D -> J}) to extend into an fppf torsor. Section \ref{section3} is devoted to the generalization of the result of Chiodo stated before, and to applying our main theorems to obtain Corollary \ref{coro}. Finally, in section \ref{section4} we study examples of extensions of torsors over a given hyperelliptic curve. Throughout this article, all schemes and log schemes are assumed to be locally noetherian.\\ \textbf{Acknowledgements}. The author would like to warmly thank her thesis advisors Jean Gillibert and Dajano Tossici for their support and encouragement, and for the long hours of discussion they devoted to her, without which this article would not have been completed.\\ The author would also like to thank her colleague William Dallaporta, with whom working on the construction of regular models was very enriching.\\ Finally, the author would also like to warmly thank the referee for their interesting comments and suggestions, which helped to improve the paper.
\section{Extension of torsors into log and fppf torsors}\label{section1} \subsection{Logarithmic schemes and logarithmic torsors} We start this section by giving some basic definitions about logarithmic schemes and the way they are used throughout this paper (we often write \textit{log} instead of logarithmic for simplicity). For a detailed introduction to log schemes, we refer the reader to \cite{illusie}. Monoids are assumed here to be commutative with a unit element. The group of fractions of a monoid $P$ is denoted $P^{gp}$. We call a monoid $P$ integral if the canonical morphism $P \to P^{gp}$ is injective. We say that a monoid is fine if it is integral and of finite type. We say that a monoid $P$ is saturated if it is integral and satisfies the following condition: for any $a \in P^{gp}$, if there exists $n \geq 1$ such that $a^n \in P$, then $a \in P$. \\ A \textbf{pre-logarithmic} structure on a scheme $X$ is a sheaf of monoids $M$ on the Zariski site $X_{Zar}$ endowed with a homomorphism of sheaves of monoids $$ \alpha: M \to \mathcal{O}_X$$ where $\mathcal{O}_X$ is regarded as a monoid for the multiplicative law. A pre-log structure $M$ is called a \textbf{log} structure if $\alpha$ induces an isomorphism $$ \alpha^{-1}(\mathcal{O}_X^{\times}) \xrightarrow{\ \simeq\ } \mathcal{O}_X^{\times}. $$ A log scheme is a scheme endowed with a log structure. For a log scheme $X$, we shall denote the log structure of $X$ by $M_X$. A morphism of log schemes is defined in a natural way. \\ If $X$ is a scheme, the inclusion $\mathcal{O}_X^{*} \subset \mathcal{O}_X$ defines a log structure on $X$ that is called the trivial log structure. Therefore, the category of schemes can be identified with a full subcategory of the category of log schemes. More precisely, the inclusion functor $X \to (X, \mathcal{O}_X^{*} \subset \mathcal{O}_X)$ is the right adjoint of the forgetful functor $(Y, M_Y \to \mathcal{O}_Y) \to Y$ from the category of log schemes to that of schemes.
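To illustrate the difference between fine and saturated, here is a standard example (written additively, for the reader's convenience):

```latex
% The submonoid of (N,+) generated by 2 and 3 is fine but not saturated:
\[
P=\langle 2,3\rangle=\{0,2,3,4,5,\dots\}\subset(\mathbb N,+),\qquad
P^{gp}=\mathbb Z,\qquad 1+1\in P\ \text{but}\ 1\notin P,
\]
```

whereas $\mathbb N$ itself (and more generally any $\mathbb N^r$) is fine and saturated; the saturation of $P$ above is all of $\mathbb N$.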
If $X$ is a log scheme, the largest Zariski open subset of $X$ (possibly empty) on which the log structure is trivial is called the \textbf{open of triviality} of $X$. \\ We say a log scheme $X$ is fine (resp. fine and saturated) if the following condition is satisfied: Zariski locally on $X$, there exists a fine (resp. fine and saturated) monoid $P$ and a homomorphism $\alpha: P \to \mathcal{O}_X$ such that $M_X$ is isomorphic to the log structure associated to the constant sheaf $P$ on $X$ regarded as a pre-log structure with respect to $\alpha$. \noindent Now, we recall the definition of the log structure defined by a divisor. This is the main example of log structure that we will consider in this article: \begin{exam} \label{logdiv} Let $X$ be a noetherian and regular scheme and let $j: U \to X$ be an open subscheme whose complement is a divisor $D$ on $X$. Then the inclusion $$ \mathcal{O}_X \cap j_{*}\mathcal{O}_U^{*} \to \mathcal{O}_X$$ defines a fine and saturated log structure on $X$, which we call the log structure defined by $D$. It is clear from the definition that $U$ is the open of triviality of this log structure.\\ For example, $\Spec(R)$ can be seen as a log scheme with the log structure induced by $\Spec(k)$, seen as a divisor. It is called the canonical log structure on $\Spec(R)$. \end{exam} In this paper, we shall endow the category of fine and saturated log schemes with the \textbf{Kummer log flat} topology (sometimes abbreviated klf). We refer to \cite{kato} or \cite[\S 2.2]{Gill2} for the definition of this Grothendieck topology. A torsor defined with respect to this topology will be called a \textbf{logarithmic torsor} (or a log torsor). If $X$ is a log scheme and $G$ a group scheme over $X$, we denote by $H_{klf}^1(X,G)$ the first cohomology group, which classifies $G$-logarithmic torsors over $X$. Log torsors in this paper are defined with respect to this topology.
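For the canonical log structure on $\Spec(R)$ of Example \ref{logdiv}, everything can be made explicit: with $j:\Spec(K)\hookrightarrow \Spec(R)$ the open immersion, $\pi$ a uniformizer and $v$ the valuation,

```latex
\[
M(\Spec R)=\bigl(\mathcal{O}\cap j_{*}\mathcal{O}_{\Spec K}^{\times}\bigr)(\Spec R)
=R\setminus\{0\},
\qquad
M/\mathcal{O}^{\times}\;\simeq\;\mathbb N,\quad u\pi^{n}\longmapsto n=v(u\pi^{n}),
\]
```

so the characteristic monoid is $\mathbb N$, generated by the class of $\pi$.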
Moreover, a Kummer log flat cover of a scheme endowed with the trivial log structure is just a cover for the fppf topology. So, in this paper, the category of schemes is endowed with the fppf topology. \begin{exam}($\mathbb{G}_m$-log torsors) \label{exam:logdiv} \\ If $X$ is a regular scheme, we write $\Div(X)$ for the group of Cartier divisors on $X$ and $\DivPrinc(X)$ for the subgroup of principal divisors. We recall that $H_{fppf}^1(X,\mathbb{G}_m)$ is isomorphic to $\Div(X)/ \DivPrinc(X)$. Now, if we assume that $X$ is endowed with the logarithmic structure induced by a divisor $D$ on $X$, then one has a similar description for $H_{klf}^1(X,\mathbb{G}_m)$. Indeed, let us call the group of divisors with rational coefficients along $D$, denoted $\DivRat(X,D)$, the subgroup of $\Div(X) \otimes_{\mathbb Z} \mathbb Q$ formed by the divisors on $X$ whose restriction to $U$ has integral coefficients. Then we have a canonical isomorphism $$ \DivRat(X, D)/ \DivPrinc(X) \xrightarrow{\ \simeq\ } \ H^1_{klf}(X, \mathbb{G}_m) $$ (see \cite[Theorem 3.1.3]{Gill1}). \end{exam} \subsection{Characterization of torsors via the Picard functor} \subsubsection{The classical case} If $\mathcal{G}$ is any finite flat and commutative group scheme over some base $S$, then $\mathcal{G}^D$ will denote its Cartier dual, namely $\mathcal{G}^D:= \Hom_S(\mathcal{G},\mathbb{G}_m)$.\\ The following isomorphism is due to Raynaud: \begin{thm}\cite[Proposition 6.2.1]{Raynaud}\label{raynaud} Let $f:\mathcal{X} \to \Spec(R)$ be a proper flat morphism of finite type, and let $\mathcal{G}$ be an $R$-group scheme which is finite, flat and commutative. Assume that $f_*\mathcal{O}_{\mathcal{X}}=\mathcal{O}_{R}$. Then we have a canonical isomorphism $$R^1_{fppf}f_*(\mathcal{G}_{\mathcal{X}}^D) \xrightarrow{\simeq} \underline{\Hom}(\mathcal{G}, \Pic_{\mathcal{X}/R}), $$ where $\Pic_{\mathcal{X}/R}$ is the relative Picard functor of $\mathcal{X}$ over $R$.
\end{thm} \begin{prop}\label{R^1f_*} We keep the assumptions of Theorem \ref{raynaud} and we assume in addition that $f$ has a section. Then we have an isomorphism $$H^0(R,R_{fppf}^1f_*\mathcal{G}_{\mathcal{X}}^D) \simeq H_{fppf}^1(\mathcal{X},\mathcal{G}_{\mathcal{X}}^D)/H_{fppf}^1(R,\mathcal{G}^D) $$ \end{prop} \begin{proof} Let us write the Leray sequence associated to $f$ and $\mathcal{G}_{\mathcal{X}}^D$: \begin{small} $0 \to H^1_{fppf}(R, f_*\mathcal{G}^D_{\mathcal{X}}) \to H^1_{fppf}(\mathcal{X},\mathcal{G}^D_{\mathcal{X}}) \to H^0(R,R^1_{fppf}f_*\mathcal{G}^D_{\mathcal{X}}) \to H^2_{fppf}(R, f_*\mathcal{G}^D_{\mathcal{X}}) \xrightarrow{\delta} H^2_{fppf}(\mathcal{X},\mathcal{G}^D_{\mathcal{X}})$ \end{small} \newline We have \begin{align*} f_* \underline{\Hom}(\mathcal{G}_{\mathcal{X}},\mathbb{G}_{m,\mathcal{X}}) &= f_*\underline{\Hom}(f^*\mathcal{G},\mathbb{G}_{m,\mathcal{X}})\\ &=\underline{\Hom}(\mathcal{G},f_*\mathbb{G}_{m,\mathcal{X}})\\ &=\underline{\Hom}(\mathcal{G},\mathbb{G}_{m,R}) \end{align*} where the last equality follows from the assumption $f_*\mathcal{O}_{\mathcal{X}}=\mathcal{O}_{R}$. Hence, $f_*\mathcal{G}_{\mathcal{X}}^D= \mathcal{G}^D$.\\ In addition, $\delta$ is injective since $f$ has a section, and the exact sequence above becomes \begin{equation*} 0 \to H^1_{fppf}(R, \mathcal{G}^D) \to H^1_{fppf}(\mathcal{X},\mathcal{G}^D_{\mathcal{X}}) \to H^0(R,R^1_{fppf}f_*\mathcal{G}^D_{\mathcal{X}}) \to 0 \end{equation*} \end{proof} \begin{cor}\label{Pointed} Let $f:\mathcal{X} \to \Spec(R)$ be a proper flat morphism of finite presentation with a section, and let $\mathcal{G}$ be a finite flat and commutative $R$-group scheme. Assume that $f_*\mathcal{O}_{\mathcal{X}}=\mathcal{O}_R$.
Then we have a canonical isomorphism $$H_{fppf}^1(\mathcal{X},\mathcal{G}_{\mathcal{X}}^D)/H_{fppf}^1(R,\mathcal{G}^D) \xrightarrow{\simeq} {\Hom}(\mathcal{G}, \Pic_{\mathcal{X}/R}) $$ Moreover, if $\mathcal{X}$ is a relative curve (i.e. its fibers are of dimension $1$), then any morphism $\mathcal{G} \to \Pic_{\mathcal{X}/R}$ obtained in this way factors through $\Pic^0_{\mathcal{X}/R}$, the identity component of the relative Picard functor. \end{cor} \begin{proof} The first part is deduced from Theorem \ref{raynaud} and Proposition \ref{R^1f_*}.\\ As for the second part, since $\mathcal{G}$ is finite, it is torsion, hence any morphism $\mathcal{G} \to \Pic_{\mathcal{X}/R}$ factors through the subfunctor $\Pic^{\tau}_{\mathcal{X}/R}:= \bigcup_{n \geq 1} n^{-1} (\Pic^0_{\mathcal{X}/R})$, where $n$ denotes multiplication by $n$ in $\Pic_{\mathcal{X}/R}$. Finally, as noted in \cite[\S 8.0]{Raynaud}, in the case of relative curves one has $\Pic^{\tau}_{\mathcal{X}/R}=\Pic^0_{\mathcal{X}/R}$. \end{proof} \subsubsection{The logarithmic case.} Since Raynaud's proof of Theorem \ref{raynaud} is quite formal, it is natural to ask whether it generalizes to the logarithmic case. The log Picard functor was first introduced by Kajiwara in \cite{Kajiwara} for log curves without self-intersection over a field. A relative version of this functor, for semistable families with fibers of any dimension, has been studied by Olsson in \cite{Olsson}. For the log structure, he used what he calls the special log structure, which is related to the semistability. Bellardini, in his preprint \cite{Alb}, provided a comparison map between the relative log Picard functor defined using the special log structure and the one defined using the canonical log structure on $R$ (cf. Example \ref{logdiv}).
When $\mathcal{X} \to \Spec(R)$ is a semistable curve endowed with the inverse image of the log structure on $\Spec(R)$, he proved that the log Picard functor of $\mathcal{X}$ coincides with the Néron model of $\Pic_{\mathcal{X}_K/K}$. In this paper, we define the log Picard functor using the canonical log structure on $R$, in the same way as it is defined in \cite{Alb}.\\ In this section, $C$ denotes a smooth, projective and geometrically connected curve over $K$, with a rational point $Q_0$. Let $f: \mathcal{C} \to \Spec(R)$ denote a regular model of $C$, i.e. an integral, projective, flat and regular $R$-scheme with generic fiber $C$. The existence of such a regular model for a smooth curve is well known and proved in \cite[\S 8.3.4, Corollary 3.51]{Liu}. We view $\mathcal{C}$ as a log scheme via the log structure defined by its special fiber seen as a divisor (cf. Example \ref{logdiv}); it is also the inverse image of the log structure on $\Spec(R)$.\\ Finally, given the properness of $\mathcal{C}$, the $K$-rational point $Q_0$ extends uniquely into an $R$-section $\mathcal{Q}_0$ of $\mathcal{C}$, which we may view as a log section. \begin{lem}\label{f_*} $$f_*M^{gp}_{\mathcal{C}}=M^{gp}_{R} $$ \end{lem} \begin{proof} Let us denote by $j: C \to \mathcal{C}$ and by $j_0: \Spec(K) \to \Spec(R)$ the open immersions. \\ The log structure on $\mathcal{C}$ is the direct image of the trivial log structure on $C$, hence it follows from the universal property of $M_{\mathcal{C}}^{gp}$ that $M_{\mathcal{C}}^{gp}=j_*M_C^{gp}=j_*\mathcal{O}_C^*$.\\ In the same way, we have $M_{R}^{gp}=j_{0,*}M_{K}^{gp}=j_{0,*}\mathcal{O}_{K}^*$.\\ From the commutative diagram \[ \begin{tikzcd} C \arrow{r}{f_K} \arrow[swap]{d}{j} & \Spec(K) \arrow{d}{j_0} \\% \mathcal{C} \arrow{r}{f}& \Spec(R) \end{tikzcd} \] we have $f \circ j = j_0 \circ f_K$. We deduce that \begin{align*} (f \circ j)_*(\mathcal{O}_C^*) & = (j_0 \circ f_{K})_*(\mathcal{O}_C^*).
\end{align*} But \begin{align*} (j_0 \circ f_{K})_*(\mathcal{O}_C^*) &= j_{0,*}(f_{K*}\mathcal{O}_C^*)\\ &= j_{0,*}\mathcal{O}_{K}^*\\ &= M_{R}^{gp} \end{align*} where the second equality comes from the fact that $f_{K*}(\mathcal{O}_C)=\mathcal{O}_{\Spec(K)}$, which is a consequence of the fact that $C$ is proper, geometrically connected and has a section over $K$. On the other hand, \begin{align*} (f \circ j)_*(\mathcal{O}_C^*) &=f_*(j_*\mathcal{O}_C^*)\\ &=f_*M_{\mathcal{C}}^{gp}. \end{align*} This proves the lemma. \end{proof} \begin{defi}\label{Piclog} We recall that $\Spec(R)$ is seen as a log scheme via the log structure induced by $\Spec(k)$. Let $(Sch/R)$ denote the category of schemes over $R$. We consider it as a full subcategory of the category $(fs/R)$ of fine and saturated log schemes over $R$ as follows: given a morphism of schemes $T \to \Spec(R)$, we endow $T$ with the inverse image of the log structure on $\Spec(R)$. \\ \begin{enumerate} \item Recall the definition of the following functor: \begin{align*} \mathbb{G}_{m,log,R} : (fs/R) & \to (Ab)\\ T & \mapsto \Gamma(T, M_{T}^{gp}) \end{align*} This is a sheaf for the Kummer log flat topology \cite[Theorem 3.2]{kato}. \item Consider the following functor \begin{align*} (Sch/R) &\to (Sets)\\ T & \mapsto \{M_{\mathcal{C}_{T}}^{gp}\text{-log torsors on } \mathcal{C}_{T} \}\\ & = \{\mathbb{G}_{m,log,\mathcal{C}_T}\text{-log torsors on } \mathcal{C}_{T} \}. \end{align*} The \textbf{log Picard functor}, denoted by $\Pic^{log}_{\mathcal{C}/R}$, is defined to be the fppf sheafification on $(Sch/R)$ of the previous functor. It can also be defined by the formula $$\Pic_{\mathcal{C}/R}^{log}(T) = H^0(T, R^1f_*\mathbb{G}_{m,log,\mathcal{C}_T}) $$ where $R^1f_*\mathbb{G}_{m,log,\mathcal{C}_T}$ is computed using the Kummer log flat topology. \end{enumerate} \end{defi} \begin{thm}\label{raynaudlog} Let $\mathcal{G}$ be a finite flat and commutative group scheme over $R$.
We have a canonical isomorphism: $$R^1_{klf}f_*(\mathcal{G}_{\mathcal{C}}^D) \xrightarrow{\simeq} \underline{\Hom}(\mathcal{G}, \Pic^{log}_{\mathcal{C}/R}). $$ \end{thm} \begin{proof} We shall check that the arguments of the proof of Theorem \ref{raynaud} can be transported to the log setting.\\ We have an exact sequence in the Kummer log flat site (cf. \cite[exact sequence 2.3.2]{Gill1}): $$0 \to \mathbb{G}_{m,\mathcal{C}} \to \mathbb{G}_{m,log,\mathcal{C}} \to (\mathbb{G}_{m,log,\mathcal{C}}/\mathbb{G}_{m,\mathcal{C}})^{klf} \to 0,$$ where the quotient is computed in the Kummer log flat site. We deduce a long exact sequence \begin{equation}\label{Homseq} 0 \to \underline{\Hom}(\mathcal{G}_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}}) \to \underline{\Hom}(\mathcal{G}_{\mathcal{C}},\mathbb{G}_{m,log,\mathcal{C}}) \to \underline{\Hom}(\mathcal{G}_{\mathcal{C}},(\mathbb{G}_{m,log,\mathcal{C}}/\mathbb{G}_{m,\mathcal{C}})^{klf}) \end{equation} It is proved in \cite[Lemme 2.3.1]{Gill1} that the multiplication by any nonzero integer $n$ is an automorphism of the quotient $(\mathbb{G}_{m,log,\mathcal{C}}/\mathbb{G}_{m,\mathcal{C}})^{klf}$; hence this quotient has no torsion points. Since $\mathcal{G}_{\mathcal{C}}$ is finite, the last term of the exact sequence (\ref{Homseq}) is then trivial, and hence we have \begin{equation}\label{Hom} \underline{\Hom}(\mathcal{G}_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}})=\underline{\Hom}(\mathcal{G}_{\mathcal{C}},\mathbb{G}_{m,log,\mathcal{C}}). \end{equation} The same arguments show that $$\underline{\Ext}_{klf}^1(\mathcal{G},\mathbb{G}_{m,log,R})=\underline{\Ext}_{klf}^1(\mathcal{G},\mathbb{G}_{m,R}).$$ In particular, \begin{equation}\label{Ext} \underline{\Ext}_{klf}^1(\mathcal{G},\mathbb{G}_{m,log,R})=\underline{\Ext}_{klf}^1(\mathcal{G},\mathbb{G}_{m,R})=\{0\} \end{equation} by \cite[Theorem 4.1]{Gill2}.\\ Finally, using Lemma \ref{f_*}, we have $f_*\mathbb{G}_{m,log,\mathcal{C}}=\mathbb{G}_{m,log,R}$.
We can now write the log version of the exact sequence appearing in the proof of Theorem \ref{raynaud}.\\ \noindent Consider the functor $$F \mapsto H(F):=f_*\underline{\Hom}(f^*\mathcal{G},F)= \underline{\Hom}(\mathcal{G},f_* F),$$ for a sheaf $F$ on the Kummer log flat site of $\mathcal{C}$. The derived functors $R^iH$ fit into two spectral sequences: \begin{align*} R^qf_*\underline{\Ext}^p(f^*\mathcal{G},F) & \Rightarrow R^{p+q}H(F)\\ \underline{\Ext}^q(\mathcal{G},R^pf_*F) & \Rightarrow R^{p+q}H(F) \end{align*} We take $F$ to be the sheaf $\mathbb{G}_{m,log,\mathcal{C}}$. We obtain a commutative diagram with exact rows: \begin{center} \small \xymatrix@C=1em{ 0 \ar[r] & R^1f_*(\underline{\Hom}(\mathcal{G}_{\mathcal{C}},\mathbb{G}_{m,log,\mathcal{C}})) \ar[d]_-{} \ar[r]^-{} & R^1H(\mathbb{G}_{m,log}) \ar[d]_-{} \ar[r]^-{} & f_*\underline{\Ext}_{klf}^1(\mathcal{G}_{\mathcal{C}},\mathbb{G}_{m,log,\mathcal{C}}) \ar[d]^-{} \\ 0 \ar[r] & \underline{\Ext}^1_{klf}(\mathcal{G},f_*\mathbb{G}_{m,log,\mathcal{C}}) \ar[r]_-{} & R^1H(\mathbb{G}_{m,log}) \ar[r]_-{} & \underline{\Hom}(\mathcal{G},R^1f_*\mathbb{G}_{m,log,\mathcal{C}}) \ar[r] & \underline{\Ext}^2_{klf}(\mathcal{G},f_*\mathbb{G}_{m,log,\mathcal{C}}) \ar[r] & R^2H(\mathbb{G}_{m,log}) } \end{center} Now, using (\ref{Hom}), (\ref{Ext}) and Lemma \ref{f_*}, we deduce the following exact sequence $$0 \to R^1_{klf}f_{*}(\mathcal{G}_{\mathcal{C}}^D) \to \underline{\Hom}(\mathcal{G},R^1f_{*}\mathbb{G}_{m,log,\mathcal{C}}) \to \underline{\Ext}^2_{klf}(\mathcal{G},\mathbb{G}_{m,log,R}) \xrightarrow{\gamma} R^2H(\mathbb{G}_{m,log}) $$ where $\gamma$ is injective by the same arguments as in \cite[Proposition 6.2.1]{Raynaud} (after noticing that $\mathcal{C}$ has an $R$-section), which ends the proof. \end{proof} \begin{cor}\label{pointedlog} Let $\mathcal{G}$ be a finite flat and commutative group scheme over $R$ and let $\mathcal{G}^D$ denote its Cartier dual.
We have a canonical isomorphism: $$H^1_{klf}(\mathcal{C},\mathcal{G}_{\mathcal{C}}^D)/H^1_{klf}(R,\mathcal{G}^D) \xrightarrow{\simeq} \Hom(\mathcal{G}, \Pic^{log}_{\mathcal{C}/R}). $$ \end{cor} \begin{proof} The proof is similar to that of Proposition \ref{R^1f_*}. \end{proof} \begin{rema} View $\mathcal{Q}_0$ as a log point. We call a \textbf{pointed} $\mathcal{G}$-log torsor over $\mathcal{C}$ (relative to $\mathcal{Q}_0$) a $\mathcal{G}$-log torsor $h: \mathcal{Y} \to \mathcal{C}$ such that there exists a log point in $\mathcal{Y}(R)$ whose image by $h$ is $\mathcal{Q}_0$. Equivalently, a $\mathcal{G}$-log torsor $h: \mathcal{Y} \to \mathcal{C}$ is pointed (relative to $\mathcal{Q}_0$) if its restriction to $\mathcal{Q}_0$ is the trivial $\mathcal{G}$-log torsor. We denote by $H^1_{klf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G})$ the group that classifies isomorphism classes of pointed $\mathcal{G}$-log torsors over $\mathcal{C}$ (relative to $\mathcal{Q}_0$). We have an exact sequence \[ \begin{tikzcd} 0 \to H^1_{klf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G}) \arrow{r} & H^1_{klf}(\mathcal{C},\mathcal{G}) \arrow{r}{\mathcal{Q}_0^*} & H^1_{klf}(R,\mathcal{G}) \to 0 \end{tikzcd} \] where the exactness on the left is by definition of $H^1_{klf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G})$, and the exactness on the right is because $\mathcal{Q}_0^*\circ b=\mathrm{id}$, where $b:H^1_{klf}(R,\mathcal{G}) \to H^1_{klf}(\mathcal{C},\mathcal{G})$ is the base-change map. For the same reason, $b$ is injective and the exact sequence above splits, yielding an isomorphism \begin{equation}\label{eq:0} H^1_{klf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G}) \simeq H^1_{klf}(\mathcal{C},\mathcal{G})/H^1_{klf}(R,\mathcal{G}) \end{equation} Hence, we have an isomorphism \begin{equation}\label{PointedlogGPic} H^1_{klf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G}^D)\simeq \Hom(\mathcal{G}, \Pic^{log}_{\mathcal{C}/R}).
\end{equation} We define pointed fppf torsors over $\mathcal{C}$ (resp. $C$) relative to $\mathcal{Q}_0$ (resp. $Q_0$) in the same way, viewing this time $\mathcal{Q}_0$ (resp. $Q_0$) as an $R$-point (resp. a $K$-point) of a scheme. In particular, it follows from Corollary \ref{Pointed} that: \begin{equation}\label{pointed GPic} H^1_{fppf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G}^D)\simeq \Hom(\mathcal{G}, \Pic_{\mathcal{C}/R})\simeq \Hom(\mathcal{G}, \Pic^0_{\mathcal{C}/R}). \end{equation} \begin{equation}\label{pointed-GJ} H^1_{fppf}(C,Q_0,\mathcal{G}_K^D)\simeq \Hom(\mathcal{G}_K, \Pic_{C/K})\simeq \Hom(\mathcal{G}_K, \Pic^0_{C/K}). \end{equation} with $\mathcal{G}_K^D:= \mathcal{G}^D \times_{\Spec(R)} \Spec(K)$. \end{rema} \subsection{Extension of torsors} \begin{defi} If $A$ is a smooth and separated $K$-group scheme, the \textbf{N{\'e}ron model} of $A$ over $R$ is the unique smooth separated $R$-algebraic space $\mathcal{A}$, with generic fiber $A$, such that $\mathcal{A}(T)=A(T\times_{\Spec(R)} \Spec(K))$ for any smooth morphism $T\to \Spec(R)$. \end{defi} In this definition, we do not require the Néron model to be of finite type. In addition, if $A$ is an abelian variety, then its Néron model exists and is a scheme of finite type (cf. \cite[\S 1.3, Corollary 2]{BLR}).\\ The notations here are the same as in the previous section. Since $C$ has a rational point, it follows from \cite[\S 8.1, Proposition 4]{BLR} that we have an isomorphism: \begin{equation*}\label{secPic} \Pic_{C/K}(T) \simeq \Pic(C \times_K T)/ \Pic(T) \end{equation*} \noindent Given that the curve $C$ is a proper scheme over a field, $\Pic_{C/K}$ is representable by a $K$-group scheme, called the Picard scheme, which we denote in the same way \cite[\S 8.2, Theorem 3]{BLR}. \begin{prop} The log Picard functor $\Pic^{log}_{\mathcal{C}/R}$ and the Néron model of the Picard scheme $\Pic_{C/K}$ have the same points on the smooth site over $R$.
\end{prop} \begin{proof} Let $T$ be an $R$-scheme endowed with the inverse image of the log structure on $\Spec(R)$. The Leray spectral sequence associated to $f : \mathcal{C} \to \Spec(R)$ and $\mathbb{G}_{m,log,\mathcal{C}_T}$ gives an exact sequence: \\ \begin{center} $0 \to H^1_{klf}(T,f_*\mathbb{G}_{m,log,\mathcal{C}_T}) \to H^1_{klf}(\mathcal{C}_T,\mathbb{G}_{m,log,\mathcal{C}_T}) \to H^0(T,R^1f_*\mathbb{G}_{m,log,\mathcal{C}_T}) \to H^2_{klf}(T,f_*\mathbb{G}_{m,log,\mathcal{C}_T}) \rightarrow H^2_{klf}(\mathcal{C}_T,\mathbb{G}_{m,log,\mathcal{C}_T})$ \end{center} All the log structures coming into play here are inverse images of the log structure on $\Spec(R)$, hence it follows from Lemma \ref{f_*} that $f_*\mathbb{G}_{m,log,\mathcal{C}_T} = \mathbb{G}_{m,log,T}$. Therefore, the exact sequence becomes: \begin{center} $0 \to H^1_{klf}(T,\mathbb{G}_{m,log,T}) \to H^1_{klf}(\mathcal{C}_T,\mathbb{G}_{m,log,\mathcal{C}_T}) \to H^0(T,R^1f_*\mathbb{G}_{m,log,\mathcal{C}_T}) \to H^2_{klf}(T,\mathbb{G}_{m,log,T}) \xrightarrow{\delta} H^2_{klf}(\mathcal{C}_T,\mathbb{G}_{m,log,\mathcal{C}_T})$ \end{center} where $\delta$ is injective since $\mathcal{C}$ has a section. Hence we obtain a short exact sequence $$ 0 \to H^1_{klf}(T,\mathbb{G}_{m,log,T}) \to H^1_{klf}(\mathcal{C}_T,\mathbb{G}_{m,log,\mathcal{C}_T}) \to H^0(T,R^1f_*\mathbb{G}_{m,log,\mathcal{C}_T}) \to 0 $$ Now, assume that $T$ is smooth over $R$. Then $\mathcal{C}_{T}$ is smooth over $\mathcal{C}$, hence regular. Therefore, using \cite[Proposition 2.2.6]{Gill1}, we have $H^1_{klf}(\mathcal{C}_T,\mathbb{G}_{m,log,\mathcal{C}_T})=\Pic(C \times T_K)$ and $H^1_{klf}(T,\mathbb{G}_{m,log,T})=\Pic(T_K)$.
Hence, for any $R$-smooth scheme $T$, we have $$\Pic_{C/K}(T_K)=\Pic(C \times T_K)/\Pic(T_K) = H^0(T,R^1f_*\mathbb{G}_{m,log,\mathcal{C}_T})=\Pic^{log}_{\mathcal{C}/R}(T)$$ Hence, if $\mathcal{N}$ denotes the Néron model of $\Pic_{C/K}$ over $R$, we have, for every $R$-smooth scheme $T$: \begin{align*} \mathcal{N}(T) &= \Pic_{C/K}(T_K)\\ &=\Pic^{log}_{\mathcal{C}/R}(T) \end{align*} \end{proof} \begin{cor}\label{N->Pic} If $\mathcal{N}$ denotes the Néron model of $\Pic_{C/K}$, then we have a canonical morphism $\mathcal{N} \to \Pic_{\mathcal{C}/R}^{log}$, which is an isomorphism on the generic fibers. \end{cor} \begin{proof} Given that $\mathcal{N}$ and $\Pic_{\mathcal{C}/R}^{log}$ coincide on the smooth site over $R$, and since $\mathcal{N}$ is itself smooth, we have $\Pic_{\mathcal{C}/R}^{log}(\mathcal{N})=\mathcal{N}(\mathcal{N})$. Hence, the identity map $\mathcal{N} \to \mathcal{N}$ gives rise to a canonical map $\mathcal{N} \to \Pic_{\mathcal{C}/R}^{log}$, which is generically an isomorphism. \end{proof} As explained before, the isomorphism (\ref{PointedlogGPic}) shows that extending torsors over $C$ into log torsors over $\mathcal{C}$ reduces to extending certain group functors and morphisms between them; however, this criterion is not easy to handle directly. From it we deduce the following criterion, which is more useful in practice.\\ If $G$ is a finite flat and commutative group scheme over $K$, we call a \emph{model} of $G$ over $R$ a finite flat $R$-group scheme $\mathcal{G}$ whose generic fiber is isomorphic to $G$. We denote by $\mathcal{J}$ the Néron model of the Jacobian $J$ of $C$. \begin{cor}\label{G^D -> J} Let $G$ be a finite and commutative $K$-group scheme and let $Y \to C$ be a pointed fppf $G$-torsor. Let $G^D \to \Pic^0_{C/K}=J$ be the associated morphism from the isomorphism (\ref{pointed-GJ}).
If the morphism $G^D \to J$ extends into a morphism $\mathcal{G}^D \to \mathcal{J}$, for some model $\mathcal{G}$ of $G$, then the $G$-torsor $Y \to C$ extends uniquely into a pointed logarithmic $\mathcal{G}$-torsor over $\mathcal{C}$. \end{cor} \begin{proof} If $\mathcal{N}$ denotes the Néron model of $\Pic_{C/K}$, the morphism $J \to \Pic_{C/K}$ extends uniquely into a morphism $\mathcal{J} \to \mathcal{N}$, since $\mathcal{J}$ is smooth. Therefore, by Corollary \ref{N->Pic}, the morphism $\mathcal{G}^D \to \mathcal{J}$ gives a morphism $\mathcal{G}^D \to \Pic^{log}_{\mathcal{C}/R}$, and hence the initial torsor extends by the isomorphism (\ref{PointedlogGPic}). \end{proof} \begin{prop}\label{G^D->J^0} Let $G$ be a finite and commutative $K$-group scheme and let $Y \to C$ be a pointed fppf $G$-torsor. Let $G^D \to J$ be the associated morphism from the isomorphism (\ref{pointed-GJ}). Assume that the morphism $G^D \to J$ extends into a morphism $\mathcal{G}^D \to \mathcal{J}$, for some model $\mathcal{G}$ of $G$, so that we have a logarithmic extension of the torsor (cf. Corollary \ref{G^D -> J}). Then this extension comes from an fppf one if and only if the morphism $\mathcal{G}^D \to \mathcal{J}$ factors through $\mathcal{J}^0$, the identity component of $\mathcal{J}$. \end{prop} \begin{proof} Using the isomorphism (\ref{pointed GPic}), it suffices to justify why $\Pic^0_{\mathcal{C}/R}=\mathcal{J}^0$. Indeed, since $f: \mathcal{C} \to \Spec(R)$ has a section, the $\gcd$ of the geometric multiplicities of the irreducible components of $\mathcal{C}_k$ (the special fiber of $\mathcal{C}$) is equal to $1$, and the result follows from \cite[Proposition 4.2.1 (1) and Theorem 8.2 (i)]{Raynaud}.
\end{proof} \section{Obstruction of extension into fppf torsors}\label{section2} As in the previous section, $C$ denotes a smooth, projective and geometrically connected $K$-curve with a rational point $Q_0$, which extends uniquely into an $R$-section $\mathcal{Q}_0$ of some fixed regular model $\mathcal{C}$ of $C$ over $R$. We also denote by $J$ the Jacobian of $C$ and by $\mathcal{J}$ its Néron model.\\ The goal of this section is to compute the obstruction to the extension of a torsor over $C$ into an fppf one over $\mathcal{C}$. Under the hypotheses of Corollary \ref{G^D -> J}, we will compute the obstruction for the extended log torsor to come from an fppf one. In particular, we will see that this obstruction can be expressed using the obstruction for the Poincaré extension (of $J_C$ by $\mathbb{G}_{m,C}$) to extend into an fppf one (cf. \cite[Exposé VIII, Remark 7.2]{Groth1}). \\ To do this, we first need a more constructive proof of Theorem \ref{raynaud} in the case where the base is $\Spec(K)$. Indeed, we will associate to any pointed fppf $G$-torsor over $C$ a morphism $G^D \to J$, using the Poincaré universal extension of $J_C$ by $\mathbb{G}_{m,C}$. \subsection{Preliminaries} \noindent For each $R$-scheme $T$, we denote by $\mathcal{Q}_{0,T}: T \to \mathcal{C} \times T$ the section of $\mathcal{C}\times T$ induced by the section $\mathcal{Q}_{0}\in \mathcal{C}(R)$. A \textbf{rigidified line bundle} over $\mathcal{C} \times T$ is a pair $(\mathcal{L},\alpha)$, where $\mathcal{L}$ is a line bundle over $\mathcal{C} \times T$ and $\alpha$ is a trivialization $\mathcal{O}_T \simeq \mathcal{Q}_{0,T}^{*}\mathcal{L}$.\\ The functor $\Pic^{rig}(\mathcal{C}, .) : (Sch/R) \to (Sets)$, which associates to each $R$-scheme $T$ the set $\Pic^{rig}(\mathcal{C}, T)$ of isomorphism classes of line bundles on $\mathcal{C} \times T$ rigidified along the section $\mathcal{Q}_{0,T}$, is in fact a sheaf with respect to the fppf topology.
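The reason why no sheafification is needed can be sketched as follows. This is the standard rigidity argument (cf. the discussion in \cite[\S 8.1]{BLR}); the twisting formula below is written in the notation of this section, and we use as an assumption that the formation of $f_*\mathcal{O}_{\mathcal{C}}$ commutes with the base changes involved.

```latex
% Any line bundle L on C x T can be rigidified after twisting by a bundle
% pulled back from T:
\[
  \mathcal{L}' \;:=\; \mathcal{L} \otimes
  \mathrm{pr}_T^{*}\bigl(\mathcal{Q}_{0,T}^{*}\mathcal{L}\bigr)^{-1},
  \qquad
  \mathcal{Q}_{0,T}^{*}\mathcal{L}' \simeq \mathcal{O}_T .
\]
% Moreover, a rigidified line bundle (L', alpha) has no nontrivial
% automorphisms: such an automorphism is a global unit on C x T, hence
% (assuming f_* O_C = O_R after base change to T) a unit coming from T,
% and compatibility with the rigidification alpha forces it to equal 1.
% This rigidity is what makes Pic^rig(C, .) an fppf sheaf as it stands.
```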
Furthermore, given that $\mathcal{C}$ has a section, it is shown in \cite[\S 8.1]{BLR}, in the discussion following Proposition 4, that the sheaf $\Pic^{rig}(\mathcal{C},.)$ is canonically isomorphic to the relative Picard functor $\Pic_{\mathcal{C}/R}$. Therefore, for any $R$-scheme $T$, we have an isomorphism of groups \begin{equation*} \Pic^{rig}(\mathcal{C},T) \simeq \Pic_{\mathcal{C}/R}(T) \end{equation*} which sends the class of $(\mathcal{L},\alpha)$ to the class of $\mathcal{L}$. In fact, if one replaces $G$ by $\mathbb{G}_m$ and $T$ by $\mathrm{Spec}(K)$, then this isomorphism is none other than the isomorphism \eqref{eq:0}. One checks that, under this isomorphism, elements of $\Pic^0_{\mathcal{C}/R}(T)$ correspond to rigidified line bundles of relative degree zero, i.e. line bundles whose restriction to each geometric fiber of $\mathcal{C}\times T \to T$ has degree zero. It follows that we have \begin{equation}\label{rig0} \Pic^{0,rig}(\mathcal{C},T) \simeq \Pic^0_{\mathcal{C}/R}(T) = \Hom_{R-\mathrm{schemes}}(T,\Pic^0_{\mathcal{C}/R}) \end{equation} for every $R$-scheme $T$.\\ At the level of the generic fibers, if we let $T= \Pic^0_{C/K}$, then the identity map $1: \Pic^0_{C/K} \to \Pic^0_{C/K}$ corresponds to the universal line bundle $\mathcal{P}_K$ over $C \times J$, which is a rigidified line bundle of relative degree $0$. It is called the \textbf{Poincar\'e line bundle} over $C \times J$.\hop \noindent The Poincar\'e line bundle satisfies the following universal property: to any rigidified line bundle $\mathcal{L}$ of degree $0$ over $C \times T$ corresponds, by definition, a unique element of $\Pic^{0,rig}(C,T)$, thus a unique morphism of schemes $f: T \to \Pic^0_{C/K}=J$, and we have $$ (id_{C} \times f)^{*} \mathcal{P}_K \simeq \mathcal{L} $$ \hop The following lemma will be used in the sequel in order to handle group extensions via torsors.
We state it in a general framework in order to cover schemes and log schemes at the same time. \begin{lem} \label{lem:SGA7} Let $G$ and $H$ be two group objects in some topos. Let $pr_1,pr_2$ and $m$ be the two projections and the multiplication map $H\times H\to H$. Then the data of an extension of $H$ by $G$ is equivalent to the data of a $G$-torsor $E\to H$ together with an isomorphism of $G$-torsors \begin{equation} \label{eq:square} m^*E\simeq pr_1^{*}E \times^G pr_2^{*}E, \end{equation} where $\times^G $ denotes the contracted product of $G$-torsors. \end{lem} \begin{proof} This is \cite[VII, \S 1.1.6]{Groth1}. \end{proof} In the setting of Lemma~\ref{lem:SGA7}, we say that a $G$-torsor $E\to H$ satisfies the theorem of the square if there exists an isomorphism as in \eqref{eq:square}. \noindent It is well known that any rigidified line bundle of degree zero over an abelian scheme satisfies the theorem of the square. Therefore, if we view $J \times C$ as the abelian scheme $J_{C}$ over $C$, and if we denote by $m$ (resp. $pr_1$, $pr_2$) the multiplication (resp. the first projection, the second projection) in $J_{C}$ induced by that of $J$, we have an isomorphism $$ m^{*}\mathcal{P}_K \simeq pr_1^{*}\mathcal{P}_K \otimes pr_2^{*}\mathcal{P}_K $$ We deduce: \begin{lem} \label{PoinExt1} $\mathcal{P}_K$ has a unique underlying structure of extension of $J_{C}$ by $ \mathbb{G}_{m,C}$. \end{lem} \begin{proof} The existence of an extension structure follows from Lemma~\ref{lem:SGA7}. The uniqueness follows from the fact that $m^{*}\mathcal{P}_K $ is a rigidified line bundle on the projective variety $J\times J\times C$, hence has no nontrivial automorphisms. \end{proof} \noindent By abuse of notation, we denote this extension again by $\mathcal{P}_K$.
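For readers who prefer the classical language, the condition \eqref{eq:square} can be spelled out for line bundles; this is only a reformulation of the isomorphism displayed above, using the standard fact that the contracted product of $\mathbb{G}_m$-torsors corresponds to the tensor product of the associated line bundles.

```latex
% For E the G_m-torsor of a line bundle L on the abelian C-scheme J_C,
% the isomorphism (eq:square) reads
\[
  m^{*}\mathcal{L} \;\simeq\; \mathrm{pr}_1^{*}\mathcal{L}
  \otimes \mathrm{pr}_2^{*}\mathcal{L}
  \qquad \text{on } J_{C}\times_{C} J_{C},
\]
% i.e. L is primitive, a condition satisfied by the rigidified line
% bundles of relative degree 0 considered here. Pulling this isomorphism
% back along translations yields the classical theorem of the square:
\[
  t_{a+b}^{*}\mathcal{L} \otimes \mathcal{L}
  \;\simeq\; t_{a}^{*}\mathcal{L} \otimes t_{b}^{*}\mathcal{L}.
\]
```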
\hop \noindent We can now state the following lemma. Far from being redundant, it is especially interesting for its proof, which associates to any pointed torsor, in a bijective manner, a rigidified line bundle which, at the level of the generic fibers, can be described using the associated morphism from Theorem \ref{raynaud} and the Poincaré universal extension. \begin{lem} \label{Mon lemme} Let $\mathcal{G}$ be a finite flat commutative $R$-group scheme. We have isomorphisms $$ H^1_{fppf}(\mathcal{C},\mathcal{Q}_0,\mathcal{G}) \simeq \Hom_{R}(\mathcal{G}^D,\Pic^0_{\mathcal{C}/R})\simeq \Pic^{0,rig}(\mathcal{C}, \mathcal{G}^D)$$ where the second isomorphism is given at the level of the generic fibers by $$f \mapsto \mathcal{L}:=(id_C \times f)^*\mathcal{P}_K $$ \end{lem} \begin{proof} The first isomorphism is (\ref{pointed GPic}) applied to $\mathcal{G}^D$, using the biduality $(\mathcal{G}^D)^D \simeq \mathcal{G}$. The second isomorphism follows from (\ref{rig0}).\\ As for the description of the isomorphism at the level of the generic fibers, it has already been discussed above. \end{proof} \subsection{Poincaré universal extension} We shall now prove that, over a regular scheme endowed with the log structure defined by a divisor, extensions of smooth group schemes by $\mathbb{G}_m$ over the open of triviality can be extended in the logarithmic world. \begin{prop} \label{prop ext} Let $X$ be a regular and integral scheme endowed with the logarithmic structure defined by a divisor with complement $U$. Let $\mathcal{H}$ be a commutative $X$-group scheme, smooth and of finite type over $X$. Then the restriction morphism $$ \Ext_{klf}^1(\mathcal{H}, \mathbb{G}_{m,X}) \to \Ext^1_{fppf}(\mathcal{H}_U,\mathbb{G}_{m,U}) $$ is an isomorphism, where $\mathcal{H}_U :=\mathcal{H} \times_X U$. \end{prop} \noindent In order to prove this proposition, we shall use the following lemma.
\begin{lem} \label{extension droites log} Let $X$ be a regular and integral scheme endowed with the logarithmic structure defined by a divisor, and let $V$ be an open subset of $X$ such that $\mathrm{codim}(X\backslash V) \geq 2$. Then the restriction map $$ H^1_{klf}(X,\mathbb{G}_m) \to H^1_{klf}(V, \mathbb{G}_m) $$ is an isomorphism. \end{lem} A brief comment on the terminology: $\mathrm{codim}(X\backslash V) \geq 2$ means that every point of $X\backslash V$ has codimension at least $2$ in $X$. \begin{proof}[Proof of Lemma~\ref{extension droites log}] Let $D$ be the divisor defining the log structure of $X$. One can write $D = \sum_{m=1}^r D_m$, where the $D_m$'s are irreducible and reduced divisors. Consider the following diagram, in which the rows are exact according to \cite[Corollary 3.1.4]{Gill1}, and where the vertical arrows denote the restrictions to $V$: $$ \xymatrix{ 0 \ar[r] & H^1_{fppf}(X,\mathbb{G}_m) \ar[d]_-{} \ar[r]^-{} & H^1_{klf}(X,\mathbb{G}_m) \ar[d]_-{} \ar[r]^-{} & \bigoplus\limits_{m=1}^r(\mathbb{Q}/ \mathbb{Z}).D_m \ar[d]^-{} \ar[r] & 0 \\ 0 \ar[r] & H^1_{fppf}(V,\mathbb{G}_m) \ar[r]_-{} & H^1_{klf}(V, \mathbb{G}_m) \ar[r]_-{} & \bigoplus\limits_{m=1}^r(\mathbb{Q}/ \mathbb{Z}).(D_m)_{V} \ar[r] & 0 } $$ \noindent Since $X$ is regular, Weil and Cartier divisors agree on $X$. In addition, given that $X$ is normal and integral and that $\mathrm{codim}(X\backslash V)\geq 2$, any Weil divisor on $V$ extends uniquely into a Weil divisor on $X$. Therefore, the two outer vertical arrows in the diagram above are bijective, and it follows that the middle vertical arrow is bijective as well. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop ext}] Consider the restriction morphism $$ \Ext_{klf}^1(\mathcal{H}, \mathbb{G}_{m,X}) \to \Ext^1_{fppf}(\mathcal{H}_U,\mathbb{G}_{m,U}) $$ We first prove that it is surjective. Fix an extension $E$ of $\mathcal{H}_U$ by $\mathbb{G}_{m,U}$.
Let $V$ be the largest open subset of $X$ over which $E$ admits a logarithmic extension. Then $V$ contains $U$, and it also contains all points of $X$ of codimension $1$, because the extension problem has a solution over the spectrum of a discrete valuation ring by \cite[Theorem 4.1.1]{Gill1}. In other words, we have $\codim(X \setminus V) \geq 2$. Now, according to Lemma~\ref{lem:SGA7}, this extension over $V$ can be viewed as a $\mathbb{G}_m$-log torsor over $\mathcal{H}_V$ that satisfies the theorem of the square. Let us apply Lemma~\ref{extension droites log} to this $\mathbb{G}_m$-log torsor on $\mathcal{H}_V$, observing on the one hand that $\mathcal{H}_V$ is an open subset of $\mathcal{H}$ whose complement has codimension at least $2$, and on the other hand that, $\mathcal{H}$ being smooth over $X$, it is regular, and the log structure induced by $X$ on $\mathcal{H}$ is again the log structure defined by a divisor on $\mathcal{H}$. Therefore, our $\mathbb{G}_m$-log torsor on $\mathcal{H}_V$ extends into a $\mathbb{G}_m$-log torsor on $\mathcal{H}$. The latter also satisfies the theorem of the square, by uniqueness of the extension over $\mathcal{H} \times_X \mathcal{H}$. According to Lemma~\ref{lem:SGA7} again, we obtain a logarithmic extension of $\mathcal{H}$ by $\mathbb{G}_{m,X}$ that extends $E$, hence the result. As for the injectivity of the restriction map, assume that we have two different extensions over $X$ with the same restriction to $U$. Then there must exist some point $x \in X$ of codimension $1$ over which these two extensions do not agree (indeed, if they agreed over all points of $X$ of codimension $1$, they would agree over some open $V$ of $X$ with $\codim (X \setminus V) \geq 2$, and thus they would agree everywhere). But this contradicts \cite[Theorem 4.1.1]{Gill1} recalled above. Hence, the morphism is injective.
\end{proof} \begin{cor} \label{PoinExt} The Poincar\'e extension $\mathcal{P}_K$ from Lemma~\ref{PoinExt1} can be uniquely extended into a logarithmic extension of $\mathcal{J}_{\mathcal{C}}$ by $\mathbb{G}_{m,\mathcal{C}}$, which we denote by $\mathcal{P}^{log}$. Moreover, it is rigidified with respect to the section induced by $\mathcal{Q}_0$. \end{cor} \begin{proof} Since $\mathcal{J}$ is the N\'eron model of $J$ over $\Spec(R)$, it is smooth and of finite type over $\Spec(R)$. Therefore, $\mathcal{J}_{\mathcal{C}}$ is smooth and of finite type over $\mathcal{C}$, and it follows from Proposition~\ref{prop ext} that $\mathcal{P}_K$ extends uniquely into a logarithmic extension of $\mathcal{J}_{\mathcal{C}}$ by $\mathbb{G}_{m,\mathcal{C}}$, which we denote by $\mathcal{P}^{log}$. In order to check that $\mathcal{P}^{log}$ is rigidified, consider the commutative diagram: \[ \begin{tikzcd} \Ext_{klf}^1(\mathcal{J} \times \mathcal{C}, \mathbb{G}_{m,\mathcal{C}}) \arrow{r}{\mathcal{Q}_{0,\mathcal{J}}^{*}} \arrow{d}{\simeq} & \Ext_{klf}^1(\mathcal{J}, \mathbb{G}_m) \arrow{d}{\simeq} \\% \Ext_{fppf}^1(J \times C, \mathbb{G}_{m,C}) \arrow{r}{Q_{0,J}^{*}}& \Ext_{fppf}^1(J, \mathbb{G}_m) \end{tikzcd} \] in which $\mathcal{Q}_{0,\mathcal{J}}: \mathcal{J} \to \mathcal{J} \times \mathcal{C}$ denotes the section induced by $\mathcal{Q}_0$, and the vertical maps are the restrictions, which are isomorphisms by Proposition~\ref{prop ext}. Since $\mathcal{P}_K$ is rigidified along the section induced by $Q_0$, the extension $Q_{0,J}^{*}\mathcal{P}_K$ is trivial, and it then follows from the diagram above that $\mathcal{Q}_{0,\mathcal{J}}^{*} \mathcal{P}^{log}$ is trivial as well.
\end{proof} It is shown in \cite[VIII, Remark 7.2]{Groth1} that the morphism which associates to an extension its generic fiber: $$ \Ext^1_{fppf}(\mathcal{J}^0_{\mathcal{C}}, \mathbb{G}_{m,\mathcal{C}}) \to \Ext^1_{fppf}(J_{C}, \mathbb{G}_{m,C}) $$ is an isomorphism. Therefore, we can extend $\mathcal{P}_K$ (see Lemma~\ref{PoinExt1}) into an fppf extension of $\mathcal{J}^0$ by $\mathbb{G}_m$ over $\mathcal{C}$, which we denote by $\mathcal{P}^0$. In fact, given the uniqueness of the extension, and since any fppf extension can be seen as a logarithmic one, the pull-back of $\mathcal{P}^{log}$ by the inclusion map $\mathcal{J}^0\subseteq \mathcal{J}$ is equal to $\mathcal{P}^0$, which implies for the associated line bundles that: $$ \mathcal{P}^{log}|_{\mathcal{J}^0\times\mathcal{C}}= \mathcal{P}^0. $$ In particular, $\mathcal{P}^0$ is rigidified. \subsection{Obstruction to extension into fppf torsors} \begin{thm} \label{thm prolong log} Let us consider a pointed fppf $G$-torsor $Y \to C$ (relative to $Q_0$). Assume that the corresponding $K$-morphism $G^D \to J$ defined in (\ref{pointed-GJ}) can be extended into an $R$-morphism $i: \mathcal{G}^D \to \mathcal{J}$, for some model $\mathcal{G}$ of $G$ over $R$. Then $Y$ extends uniquely into a logarithmic $\mathcal{G}$-torsor over $\mathcal{C}$, pointed relatively to $\mathcal{Q}_0$, which can be constructed using the extension $\mathcal{P}^{log}$. \\ Furthermore, if the morphism $i$ factors through $\mathcal{J}^0$, then $Y$ extends into an fppf $\mathcal{G}$-torsor over $\mathcal{C}$, which can be constructed using $\mathcal{P}^{0}$. \end{thm} \begin{proof} The existence and uniqueness of the extension are already proved in Corollary \ref{G^D -> J}; here we only prove that this extension can be constructed using the Poincaré universal extension.\\ Let $\mathcal{P}^{log}$ be the extension from Corollary~\ref{PoinExt}.
Using the morphism $(i \times id_{\mathcal{C}}): \mathcal{G}^D \times \mathcal{C} \to \mathcal{J} \times \mathcal{C}$, we pull back $\mathcal{P}^{log}$ and obtain a logarithmic extension $(i \times id_{\mathcal{C}})^{*}(\mathcal{P}^{log})$ of $\mathcal{G}^D_{\mathcal{C}}$ by $\mathbb{G}_{m,\mathcal{C}}$. Finally, according to \cite[Corollary 4.2]{Gill2}, we have a canonical isomorphism \begin{equation}\label{eq:2} H^1_{klf}(\mathcal{C},\mathcal{G}) \simeq \Ext^1_{klf}(\mathcal{G}^D_{\mathcal{C}}, \mathbb{G}_{m,\mathcal{C}}) \end{equation} hence we can associate to the extension $(i \times id_{\mathcal{C}})^{*}(\mathcal{P}^{log})$ a logarithmic $\mathcal{G}$-torsor over $\mathcal{C}$. The isomorphism \eqref{eq:2} is compatible with the isomorphism of Lemma~\ref{Mon lemme} at the level of the generic fibers, so the generic fiber of the extended torsor obtained in this way is $Y$. By the uniqueness of the extension (cf. \cite[Proposition 3.6]{Gill2}), we deduce that the extended torsor is pointed. \\ As for the second part, we use the same proof and the isomorphism of Lemma \ref{Mon lemme}. \end{proof} Let $E_Y$ be the unique element of $\Ext_{fppf}^1(G_C^D,\mathbb{G}_{m,C})$ that corresponds to the torsor $Y$ according to \cite[Theorem 2]{Wat}. According to Theorem~\ref{thm prolong log}, since we have a morphism $i: \mathcal{G}^D \to \mathcal{J}$, we know that $E_Y$ extends uniquely into an element $E_Y^{log}$ of $\Ext^1_{klf}(\mathcal{G}^D_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}})$, which is explicitly given by $$ E_Y^{log}:=(i \times id)^{*} \mathcal{P}^{log}.
$$ Let us consider the following diagram, where the exact rows are extracted from the spectral sequence comparing fppf and log flat cohomology \cite[exact sequence 4.1.4]{Gill1}, and where the vertical arrows are induced by the map $i:\mathcal{G}^D \to \mathcal{J}$ $$ \xymatrix{ 0 \ar[r] & \Ext^1_{fppf}(\mathcal{J}_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}}) \ar[d]_-{} \ar[r]^-{\omega} & \Ext^1_{klf}(\mathcal{J}_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}}) \ar[d]_-{} \ar[r]^-{\gamma} & \Hom_{\mathcal{C}}(\mathcal{J}_{\mathcal{C}},R^1\epsilon_{*}\mathbb{G}_m) \ar[d]^-{h\mapsto h\circ i} \\ 0 \ar[r] & \Ext^1_{fppf}(\mathcal{G}^D_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}}) \ar[r]_-{\alpha} & \Ext^1_{klf}(\mathcal{G}^D_{\mathcal{C}},\mathbb{G}_{m,\mathcal{C}}) \ar[r]_-{\beta} & \Hom_{\mathcal{C}}(\mathcal{G}^D_{\mathcal{C}},R^1\epsilon_{*}\mathbb{G}_m) } $$ The map $\beta$ associates to $E_Y^{log}$ a morphism $\mathcal{G}^D_{\mathcal{C}} \to R^1\epsilon_{*}\mathbb{G}_m$. The right-hand square of the diagram being commutative, we have: \begin{equation}\label{crit2} \gamma (\mathcal{P}^{log}) \circ i = (\beta \circ (i \times id)^{*})(\mathcal{P}^{log}): \mathcal{G}^D_{\mathcal{C}} \to R^1\epsilon_{*}\mathbb{G}_m \end{equation} \begin{thm} \label{THM_v2} Let $Y \to C$ be a pointed fppf $G$-torsor. Assume that the associated $K$-morphism $G^D \to J$ from (\ref{pointed-GJ}) can be extended into an $R$-morphism $i: \mathcal{G}^D \to \mathcal{J}$, where $\mathcal{G}$ is a model of $G$. Let $\gamma (\mathcal{P}^{log}) \circ i$ be the morphism associated to the torsor $Y$, as defined in (\ref{crit2}). Then $Y$ extends uniquely into a pointed fppf $\mathcal{G}$-torsor over $\mathcal{C}$ if and only if $\gamma (\mathcal{P}^{log}) \circ i=0$: the obstruction to the extension into an fppf torsor is given by the morphism $\gamma (\mathcal{P}^{log}) \circ i$.
\\ In addition, if $\Phi:=\mathcal{J}_k/\mathcal{J}_k^0$, we can identify $\gamma(\mathcal{P}^{log})$ with a morphism $$ \gamma_{\Phi}(\mathcal{P}^{log}) : \Phi \to \bigoplus_{i=1}^r (\mathbb Q/\mathbb Z).E_i $$ which is exactly the obstruction for $\mathcal{P}_K$ to extend into an fppf extension of $\mathcal{J}_{\mathcal{C}}$ by $\mathbb{G}_{m,\mathcal{C}}$ (cf. \cite[VIII, Remark 7.2]{Groth1}).\\ \end{thm} \begin{proof} If we assume that the torsor admits an fppf extension then, according to \cite[Theorem 2]{Wat}, this extended fppf torsor corresponds to a unique element of $\Ext^1_{fppf}(\mathcal{G}_{\mathcal{C}}^D,\mathbb{G}_{m,\mathcal{C}})$, whose generic fiber is $E_Y$. Therefore, $E_Y^{log}$ is contained in $\mathrm{Im}\,\alpha=\mathrm{Ker}\,\beta$. Whence $(\beta \circ (i \times id)^{*})(\mathcal{P}^{log})=\beta(E_Y^{log})=0$, and thus $\gamma ( \mathcal{P}^{log}) \circ i =0$ in $\Hom_{\mathcal{C}}(\mathcal{G}_{\mathcal{C}}^D,R^1\epsilon_*\mathbb{G}_m)$.\\ The converse is proved by reading the same arguments in the reverse direction.\\ Let us now prove the second part of the theorem. \hop Let $E_1,\dots, E_r$ be the irreducible components of $\mathcal{C}_k$, so that $$ \mathcal{C}_k=\sum_{i=1}^r n_iE_i $$ where the $n_i$ are positive integers. The map $\gamma(\mathcal{P}^{\rm log})$ is a morphism of sheaves for the fppf topology on $\mathcal{C}$ $$ \gamma(\mathcal{P}^{\rm log}): \mathcal{J}_{\mathcal{C}} \rightarrow R^1\epsilon_{*}\mathbb{G}_m $$ \noindent It follows from \cite[Theorem 2.2.2]{Gill1} that the restriction of the sheaf $R^1\epsilon_{*}\mathbb{G}_m$ to the smooth site of $\mathcal{C}$ can be described as follows $$ (R^1\epsilon_{*}\mathbb{G}_m)|_{\mathrm{smooth}/\mathcal{C}} \simeq \bigoplus_{i=1}^r (\mathbb Q/\mathbb Z).E_i $$ where $(\mathbb Q/\mathbb Z).E_i$ denotes the skyscraper sheaf supported on $E_i\subset {\mathcal{C}}$, with value $\mathbb Q/\mathbb Z$.
Because $\mathcal{J}_{\mathcal{C}}$ is a smooth group scheme over $\mathcal{C}$, we have $$ \Hom_{\mathcal{C}}(\mathcal{J}_{\mathcal{C}}, R^1\epsilon_{*}\mathbb{G}_m) = \Hom_{\mathcal{C}}(\mathcal{J}_{\mathcal{C}}, \bigoplus_{i=1}^r (\mathbb Q/\mathbb Z).E_i) = \bigoplus_{i=1}^r \Hom_{E_i}(\mathcal{J}_{E_i}, \mathbb Q/\mathbb Z) $$ in which the second equality follows from the adjunction formula. \\ \noindent We now observe that, $E_i$ being an irreducible vertical component of $\mathcal{C}$, we have $$ \Hom_{E_i}(\mathcal{J}_{E_i}, \mathbb Q/\mathbb Z) = \Hom_k(\mathcal{J}_k, \mathbb Q/\mathbb Z)= \Hom_k(\Phi, \mathbb Q/\mathbb Z) $$ where $\Phi=\mathcal{J}_k/\mathcal{J}_k^0$ denotes the group of connected components of $\mathcal{J}_k$, and the last equality follows from the fact that $\Hom(\mathcal{J}_k^0,\mathbb Q/\mathbb Z)=0$.\\ \noindent Going through these isomorphisms, $\gamma(\mathcal{P}^{\rm log})$ can be identified with a morphism $$ \gamma_\Phi(\mathcal{P}^{\rm log}): \Phi \to \bigoplus_{i=1}^r (\mathbb Q/\mathbb Z).E_i $$ The fact that this morphism is the obstruction for the Poincar\'e extension to extend into an fppf extension is proved in \cite[Theorem 4.1.5(i)]{Gill1}. \\ \end{proof} \section{A finiteness criterion for $\mathcal{J}[p]$ and application of Corollary \ref{G^D -> J}} \label{section3} Let $C$ be a smooth and geometrically connected $K$-curve with Jacobian $J$. We assume here that $k$ is algebraically closed. This section is devoted to the generalization of a result of Chiodo \cite[Propositions 7.4.1 and 7.5.1]{Chiodo}, which provides a necessary and sufficient condition for $J[r]$ to admit a finite \'etale model over $R$ when $r$ is prime to $p$. Such a model is the N\'eron model of $J[r]$, hence is equal to $\mathcal{J}[r]$. Here, we treat the case where $p$ divides $r$. In this case, a finite model of $J[r]$ is not necessarily \'etale, hence it is not necessarily the N\'eron model of $J[r]$.
But one may still ask for a condition for $\mathcal{J}[r]$ to be finite and flat over $R$. We will see that the result of Chiodo generalizes to this case. Once this is done, given a finite flat $K$-group scheme $G$ killed by $r$, we combine this finiteness criterion for $\mathcal{J}[r]$ with Theorem~\ref{thm prolong log} in order to prove the existence of models of $G$ and the extension of torsors over the minimal regular model of the curve $C$.\\ We start this section with some preliminaries about semistability. For the terminology of graph theory, we refer to \cite[\S 2.1 and \S 2.2]{Chiodo}. A proper curve of genus $\geq 2$ over an algebraically closed field $k$ is \emph{semistable} if it is reduced, connected, has only ordinary double points (i.e. its singularities are nodal), and its irreducible components that are isomorphic to $\mathbb{P}^1_k$ meet the other components in at least two points \cite[Definitions 1.1 and 1.2]{DSCH}. \\ A proper flat morphism of schemes $\mathcal{C} \to \Spec(R)$ is \emph{semistable} if its geometric fibers are semistable curves. In particular, given a smooth curve $C$ over $K$, a semistable scheme $\mathcal{C} \to \Spec(R)$ with a specified isomorphism $\mathcal{C} \times_{\Spec(R)} \Spec(K) \simeq C$ is called a semistable model of $C$ over $R$ and, by abuse of notation, one says that $\mathcal{C}$ is a semistable curve over $R$. An abelian variety $A$ over $K$ has semistable reduction over $R$ if the identity component $\mathcal{A}_k^0$ of the special fiber of its N\'eron model has unipotent rank $0$, which means that $\mathcal{A}_k^0$ is an extension of an abelian variety by a torus.
This terminology is justified by the following fact: given a smooth, geometrically connected curve $C$ over $K$ of genus $g \geq 2$, the following conditions are equivalent: \begin{itemize} \item $C$ has semistable reduction over $R$; \item the minimal regular model of $C$ over $R$ is semistable; \item the Jacobian variety of $C$ has semistable reduction over $R$. \end{itemize} See \cite[Expos\'e~1, Proposition 2.2]{DSCH} for the equivalence of the first two conditions, and \cite[Expos\'e~1, Proposition 5.7]{DSCH} for the equivalence with the last one. From now on, $C$ denotes a smooth, geometrically connected $K$-curve of genus $g \geq 2$ which has semistable reduction over $R$. Let $\mathcal{C}$ be the minimal regular model of $C$ over $R$. We denote by $\Gamma$ the dual graph of the special fiber $\mathcal{C}_k$: it is the graph whose set of vertices is the set of irreducible components of $\mathcal{C}_k$ and whose set of edges is the set of nodes of $\mathcal{C}_k$; we denote by $b_1(\Gamma)$ the first Betti number of $\Gamma$. As previously, $J$ denotes the Jacobian of $C$ and $\mathcal{J}$ denotes its N\'eron model over $R$. We denote by $\Phi_k$ the group of components of the special fiber $\mathcal{J}_k$ of $\mathcal{J}$. \begin{defi} In a graph, a \emph{circuit} is a path that begins and ends at the same vertex. A circuit that does not repeat vertices is called a \emph{cycle}. \noindent If $\Omega$ is a graph, we let $\Cyc(\Omega)$ denote the set of cycles in $\Omega$, and define $$ c_2(\Omega):=\gcd\{\text{number of edges common to $C$ and $C'$}~|~ C,C'\in \Cyc(\Omega)\} $$ \noindent (the pair $C=C'$ is allowed, so that each cycle contributes its own length). If $\Cyc(\Omega)= \varnothing $, we put $c_2(\Omega)=0$. \end{defi} \begin{exam} If $\Omega$ is a polygon with $d$ edges, then $c_2(\Omega)=d$. If $\Omega$ consists of two polygons with $d$ edges each, sharing $d'$ edges, then $c_2(\Omega)=\gcd(d,d')$. Other examples are given in \cite[\S{}5.6]{Chiodo}.
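These two computations can be checked mechanically. The following sketch (in Python; the helper \texttt{c2} and the edge names are ours, not notation from the text) represents a cycle by the set of its edge labels and computes $c_2$ as the gcd of the numbers of common edges over all pairs of cycles, a cycle paired with itself contributing its own length:

```python
from math import gcd
from itertools import combinations_with_replacement

def c2(cycles):
    """gcd of the number of edges common to C and C', over all pairs
    of cycles (including C = C', which contributes the length of C)."""
    if not cycles:
        return 0
    g = 0
    for c, d in combinations_with_replacement(cycles, 2):
        g = gcd(g, len(c & d))
    return g

# A polygon with d = 6 edges: a single cycle, so c_2 = 6.
hexagon = [frozenset(range(6))]
print(c2(hexagon))  # 6

# Two polygons with d = 6 edges each, sharing d' = 2 edges: the cycles
# are the two polygons and their symmetric difference, and
# c_2 = gcd(6, 2) = 2.
shared = frozenset({"s0", "s1"})
c_left = frozenset({"l0", "l1", "l2", "l3"}) | shared
c_right = frozenset({"r0", "r1", "r2", "r3"}) | shared
c_outer = (c_left | c_right) - shared
print(c2([c_left, c_right, c_outer]))  # 2
```

For the glued polygons, the pairs of cycles contribute the values $d$, $d'$, $d-d'$ and $2(d-d')$, whose gcd is indeed $\gcd(d,d')$.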
\end{exam} With the hypotheses stated above, the following statement was proved by Chiodo \cite[Proposition 7.5.1]{Chiodo}, using a slightly different terminology: \begin{prop}[Chiodo's criterion]\label{Chiodo's criterion} \label{prop:CC} We have $$ \Phi_{k}[r] \simeq (\mathbb Z/r\mathbb Z)^{b_1(\Gamma)} $$ if and only if $r$ divides $c_2(\Gamma)$. \end{prop} The main step in generalizing Chiodo's result is the following: \begin{prop} \label{prop:phib1} Let $r>1$ be an integer. The group scheme $\mathcal{J}[r]$ is finite and flat if and only if the curve $C$ has semistable reduction and the dual graph $\Gamma$ of its special fiber satisfies the condition $$ \Phi_{k}[r] \simeq (\mathbb Z/r\mathbb Z)^{b_1(\Gamma)} $$ \end{prop} \begin{proof} The case when $r$ is prime to $p$ has been proved by Chiodo. Therefore, we may (and do) assume that $p$ divides $r$. First, we observe that if $C$ does not have semistable reduction, then its Jacobian does not have semistable reduction either. Hence the unipotent rank of $\mathcal{J}_k$ is not $0$, which implies that $\mathcal{J}_{k}$ contains the additive group $\mathbb{G}_a$ as a subgroup. But $\mathbb{G}_{a,k}[p] = \mathbb{G}_{a,k}$ since $p$ is the characteristic of $k$ and, $p$ dividing $r$, it follows that $\mathcal{J}_{k}[r]$ is not a finite group scheme, since it contains a one-dimensional subgroup. Therefore, semistable reduction of $C$ is a necessary condition for $\mathcal{J}[r]$ to be finite. We now assume that the minimal regular model $\mathcal{C}$ of $C$ is semistable. Then, since $J$ has semistable reduction, $\mathcal{J}[r]$ is quasi-finite and flat (see \cite[\S 7.3, Lemma~2]{BLR}). According to Lemma~\ref{lem:finitude} below, $\mathcal{J}[r]$ is finite if and only if its special fiber $\mathcal{J}_k[r]$ has rank $r^{2g}$ as a finite $k$-scheme.
On the other hand, by the snake lemma, we have a long exact sequence: $$ 0 \to \mathcal{J}_k^0[r] \to \mathcal{J}_k[r] \to \Phi[r] \to \mathcal{J}_k^0/r \to \cdots $$ Since $J$ is semistable, the multiplication map $\mathcal{J}_k^0 \xrightarrow{~r~} \mathcal{J}_k^0$ is surjective \cite[\S 7.3, Lemma~1]{BLR}, hence $\mathcal{J}_k^0/r=0$. Therefore, we have \begin{equation} \label{eq:rank} \rank \mathcal{J}_k[r] = \rank \mathcal{J}_k^0[r] + \rank \Phi[r]. \end{equation} The semistability of $C$ implies that $\Pic^0_{\mathcal{C}} \simeq \mathcal{J}^0$ \cite[\S 9.5, Theorem~4]{BLR}, and that $\Pic^0_{\mathcal{C}_{k}}$ is an extension of an abelian variety of dimension $a$ by a torus of dimension $b_1({\Gamma})$ \cite[\S9.2, Example~8]{BLR}. We have $$ a+b_1({\Gamma}) = \dim_k H^1(\mathcal{C}_k, \mathcal{O}_{\mathcal{C}_k})= \dim_K H^1(C,\mathcal{O}_C)=g $$ where the first equality follows from \cite[\S 7.5, Definition 5.21]{Liu}, and the second one, given the flatness and projectivity of $\mathcal{C}$ over $R$, follows from \cite[\S 8.3, Corollary~8.3.6]{Liu}. Finally, it is well known that the $r$-torsion subgroup of an abelian variety (resp. a torus) of dimension $d$ is a finite group scheme of rank $r^{2d}$ (resp. $r^d$). Putting all this together, we deduce that $$ \rank\Pic^0_{\mathcal{C}_{k}}[r] = r^{2a}\times r^{b_1({\Gamma})}=r^{2g-b_{1}(\Gamma)} $$ as a $k$-group scheme. Hence, it follows from \eqref{eq:rank} that $\mathcal{J}_k[r]$ has rank $r^{2g}$ if and only if $\Phi_k[r]$ has rank $r^{b_1(\Gamma)}$ as a $k$-group scheme. Since $\Phi_k[r]$ is \'etale, this means that $\Phi_k[r]$ has exactly $r^{b_1(\Gamma)}$ points over $k$. By \cite[Lemma 7.3.4]{Chiodo}, this is equivalent to $\Phi_{k}[r] \simeq (\mathbb Z/r\mathbb Z)^{b_1(\Gamma)}$. \end{proof} \begin{lem} \label{lem:finitude} Let $X$ be a quasi-finite, flat and separated $R$-scheme. If the generic and the special fiber of $X$ have the same rank, then $X$ is finite over $R$.
\end{lem} \begin{proof} By faithfully flat descent \cite[Proposition 2.7.1 (xv)]{Groth4}, we may assume that $R$ is a complete DVR. Then, according to \cite[Corollary 6.2.6]{Groth3}, $X$ is isomorphic as an $R$-scheme to the disjoint union $X' \sqcup X''$, where $X'$ is a finite $R$-scheme and $X''$ is a quasi-finite $R$-scheme such that $X'' \cap X_k = \varnothing$. The flatness of $X$ over $R$ implies that of $X'$ and $X''$ over $R$. So we have: $$ \mathrm{rank}~X'_K= \mathrm{rank}~X'_k= \mathrm{rank}~X_k $$ The first equality holds because $X'$ is finite flat, and the second because $X'' \cap X_k = \varnothing$. But, by assumption, $ \mathrm{rank}~X_{K}= \mathrm{rank}~X_k$; hence it follows from the equalities above that $ \mathrm{rank}~X'_K= \mathrm{rank}~X_K$, and thus $X'_K=X_K$. Therefore $X'' = \varnothing$, which means that $X$ is finite over $R$, hence the result. \end{proof} \begin{cor} \label{cor Chiodo} Let $r>1$ be an integer. The group scheme $\mathcal{J}[r]$ is finite flat if and only if the curve $C$ has semistable reduction and the dual graph $\Gamma$ of the special fiber of $\mathcal{C}$ satisfies Chiodo's criterion. \end{cor} \begin{proof} This follows by combining Proposition~\ref{prop:phib1} with Proposition \ref{prop:CC}. \end{proof} \begin{cor} \label{coro} Let $C$ be a smooth projective geometrically connected curve over $K$ of genus $g \geq 2$ with a $K$-rational point. Let $G$ be a finite flat commutative $K$-group scheme killed by $r$, and let $Y \rightarrow C$ be a pointed fppf $G$-torsor such that $Y$ is geometrically connected. If $\mathcal{C}$ is semistable and if the criterion of Proposition \ref{Chiodo's criterion} is satisfied, then $G$ has an $R$-model $\mathcal{G}$ and $Y \rightarrow C$ extends uniquely into a pointed logarithmic $\mathcal{G}$-torsor over any regular model of $C$. \end{cor} \begin{proof} Let $G^D \to J$ be the morphism from (\ref{pointed-GJ}) associated to the $G$-torsor $Y \to C$.
Since $G$ is killed by $r$, so is $G^D$, hence this morphism factors through $G^D \to J[r]$. In addition, given that $Y$ is geometrically connected, this morphism is injective; hence, $G^D$ can be seen as a subscheme of $J[r]$, so that we can consider $\overline{G^D}$, the schematic closure of $G^D$ inside $\mathcal{J}[r]$. According to \cite[Proposition 2.8.5]{Groth4}, it is the unique closed subscheme of $\mathcal{J}[r]$ that is flat over $\Spec(R)$ and whose generic fiber is $G^D$. Furthermore, since taking the schematic closure commutes with fibered products over $\Spec(R)$ (\cite[Corollary 2.8.6]{Groth4}), the group law of $G^D$ extends naturally to $\overline{G^D}$. So $\overline{G^D}$ is a closed and flat subgroup scheme of $\mathcal{J}[r]$.\\ Denote by $\mathcal{C}$ the minimal regular model of $C$ over $R$. Since $C$ is semistable and given the hypothesis on the dual graph of $\mathcal{C}_k$, it follows from Corollary \ref{cor Chiodo} that $\mathcal{J}[r]$ is finite over $R$, hence $\overline{G^D}$ is finite over $\mathrm{Spec}(R)$, since closed immersions are finite and compositions of finite morphisms are finite. We conclude that $\overline{G^D}$ is a model of $G^D$. If we let $\mathcal{G}$ be the Cartier dual of $\overline{G^D}$, then $\mathcal{G}$ is a model of $G$ which comes with an inclusion map $\mathcal{G}^D \hookrightarrow \mathcal{J}[r]$ extending the inclusion $G^D \hookrightarrow J[r]$. It remains to apply Theorem~\ref{thm prolong log} to conclude. \end{proof} \section{An example of extension of torsors over a hyperelliptic curve}\label{section4} In this section, we study two examples of extension of torsors over a given curve, to illustrate the results above. For some rational number $c \neq \pm 1$ and a prime $p \neq 2$, we consider the following hyperelliptic curve defined over $\mathbb{Q}$ by: $$ y^2=f(x)=x^{2p}-(1+c^2)x^p+c^2 $$ \noindent (the example is taken from \cite{GL}).
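The basic algebraic facts about this curve that are used below can be sanity-checked numerically; here is a minimal sketch for the illustrative values $p=3$, $c=2$ (the specific values are our choice, any odd prime $p$ and integer $c \neq \pm 1$ behave the same way):

```python
# Sanity checks on the hyperelliptic curve y^2 = f(x) = x^{2p} - (1+c^2) x^p + c^2,
# for the illustrative values p = 3 and c = 2.
p, c = 3, 2

def f(x):
    return x**(2*p) - (1 + c**2) * x**p + c**2

# f factors as (x^p - 1)(x^p - c^2), as used below to locate the
# singular points of the model over the prime p.
for x in range(-10, 11):
    assert f(x) == (x**p - 1) * (x**p - c**2)

# Q_0 = (1, 0) is a rational point of the curve, and so are (0, ±c).
assert f(1) == 0
assert f(0) == c**2  # y^2 = c^2, giving the points (0, c) and (0, -c)
print("all checks passed")
```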
Note that $Q_0=(1,0)$ is a rational point of this curve, so that we can consider pointed torsors over $C$ relative to $Q_0$. \\ We first construct $\mathcal{C}_l$, a regular model of the curve over $\mathbb{Z}_l$, for some prime $l$. Then, knowing that the Jacobian of the curve contains a subgroup isomorphic to $(\mathbb Z/p\mathbb Z)^2$ \cite[Lemma 3.3]{GL}, this gives, by Lemma \ref{Mon lemme}, a pointed $\mu_p^2$-torsor over $C$ (relative to $Q_0$). The goal of this section is to study the extension of this torsor over the model $\mathcal{C}_l$. \subsection{A hyperelliptic curve whose Jacobian contains a subgroup isomorphic to $(\mathbb Z/p\mathbb Z)^2$} \label{jacobian contains a subgroup} What follows comes from \cite[Lemma 3.3]{GL}. Consider the hyperelliptic curve $C$ whose affine equation over $\mathbb Q$ is given by: $$ y^2=f(x)=x^{2p}-(1+c^2)x^p+c^2 $$ \noindent where $c$ is an integer different from $\pm 1$ and $p \neq 2$ is a prime. Since, in the projective plane, this curve has a singularity at one of the two points at infinity, we work instead with the smooth projective model of the curve over $\mathbb Q$, which is covered by the following two affine charts: \begin{itemize} \item $y^2=f(x)=x^{2p}-(1+c^2)x^p+c^2$ \hop \item $t^2=s^{2p}f(\frac{1}{s})$ \end{itemize} \hop \noindent that we glue together using $(x,y)=(\frac{1}{s},\frac{t}{s^p})$. We again denote this model by $C$. \noindent Using the Jacobian criterion, we can check that the curve has bad reduction at the primes $2$, $p$ and the primes that divide $c(c^2-1)$.\\ The $p$-torsion in the Jacobian of $C$ comes from the relations: $$ (y-x^p-c)(y+x^p+c)=-(1+c)^2x^p $$ $$(y-x^p+c)(y+x^p-c)=-(1-c)^2x^p $$ We compute that $$ \mathrm{div}(y-x^p-c)=p(0,c)-p\infty$$ $$ \mathrm{div}(y-x^p+c)=p(0,-c)-p\infty$$ where $\infty$ denotes one of the two points at infinity on $C$.
Hence, the two divisors \begin{center} $(0,c)-\infty$ and $(0,-c) -\infty$ \end{center} \noindent define two classes of order $p$ in the Jacobian of $C$, which are independent divisor classes by the Riemann-Roch theorem. This means that the Jacobian of the curve $C$ contains a subgroup isomorphic to $(\mathbb Z/p\mathbb Z)^2$. Therefore, we have a pointed $\mu_p^2$-torsor over $C$.\\ \\ \noindent Let us consider $\mathcal{C}$, the arithmetic surface over $\mathbb Z$ given by the equations: \begin{itemize} \item $y^2=f(x)=x^{2p}-(1+c^2)x^p+c^2$ \hop \item $t^2=s^{2p}f(\frac{1}{s})$ \end{itemize} \hop It is clear that its generic fiber is the curve $C$. \subsection{Construction of a regular model of $C$ and study of extension of pointed torsors over it} In this section, we study two examples of extension of pointed torsors. In the first one, for a suitably chosen prime $p$, after constructing from $\mathcal{C}$ a regular model $\mathcal{C}_p$ of $C$ over $\mathbb Z_p$, we check, using the previous results and the appendix, that the pointed $\mu_p^2$-torsor over $C$ extends into an fppf torsor over $\mathcal{C}_p$. As for the second example, we fix $p=3$, take $c=l$ a prime different from $p$, and construct a regular model $\mathcal{C}_l$ of $C$ over $\mathbb Z_l$. In this case, we will see that the pointed $\mu_p^2$-torsor does not extend into an fppf torsor over $\mathcal{C}_l$. \begin{enumerate} \item \textbf{First example}. We recall the equations of the surface $\mathcal{C}$ over $\mathbb Z$ in the affine charts: \begin{itemize} \item $y^2=f(x)=x^{2p}-(1+c^2)x^p+c^2$ \hop \item $t^2=s^{2p}f(\frac{1}{s})$ \end{itemize} \hop \noindent with $c$ an integer different from $\pm 1$ and $p$ an odd prime that does not divide $c(c^2-1)$ (in particular, $p \neq 3$). \begin{itemize} \item \textbf{Construction of a regular model $\mathcal{C}_p$ over $\mathbb Z_p$}.
Using the Jacobian criterion, we check that in each affine chart, there are two singular points in the fiber over the prime $p$, and in fact, they belong to the intersection of the two charts. So, it suffices to desingularize in only one of the two charts, for example the first one. \\ In the first chart, the Jacobian matrix at a point $(x,y)$ of $\mathcal{C}$ is given by: \begin{center} $J(x,y)=(-2px^{2p-1}+p(1+c^2)x^{p-1} ~~~~2y)\equiv (0~~~~2y)\mod~p$ \end{center} Therefore, the possible singularities are the solutions of the system: $$ \left\{ \begin{aligned} & 2y \equiv 0 \mod~p\\ & y^2=(x^p-1)(x^p-c^2) \equiv (x-1)^p(x-c^2)^p \mod~p \end{aligned} \right. $$ Thus, on the surface $\mathbb Z[X,Y]/(Y^2-f(X))$, this gives two possibly singular points: $\mathfrak{M}:=\langle x-1,y,p\rangle$ and $\mathfrak{M}':=\langle x-c^2,y,p\rangle$. While the Krull dimension of the local ring $(\mathbb Z[X,Y]/(Y^2-f(X)))_{\mathfrak{M}}$ is $2$, we compute that $\mathrm{dim}_{\mathbb{F}_p} \mathfrak{M}/ \mathfrak{M}^2=3$. Thus, $\mathfrak{M}$ is not a regular point, and the same is true for $\mathfrak{M}'$. Next, we blow up $\mathcal{C}$ separately at each of the closed points $\mathfrak{M}$ and $\mathfrak{M}'$.\hop We view $\mathcal{C}$ as an arithmetic surface inside $\mathbb{A}^2_{\mathbb Z}$. Then, the blow-up of $\mathcal{C}$ at $\mathfrak{M}$ is formed by taking the following three schemes and gluing them together as explained below: \textbf{Chart 1: $\mathcal{C}_{p,1}^1$}. Define new variables $y=(x-1)v$ and $p=(x-1)w$. Let us write $f(x)= \sum_{i=0}^{2p} a_i(x-1)^i=\sum_{i=2}^{2p} a_i(x-1)^i+ p(1-c^2)(x-1)$, since $a_0=0$ and $a_1=p(1-c^2)$. After substituting the new variables into the equation of the first chart of $\mathcal{C}$ and dividing by $(x-1)^2$, we get that $\mathcal{C}_{p,1}^1$ is given by the system: $$ \mathcal{C}_{p,1}^1 \left\{ \begin{aligned} & v^2= \sum_{i=0}^{2p-2} a_{i+2}(x-1)^i+w(1-c^2) \\ & (x-1)w=p \\ \end{aligned} \right.
$$ \textbf{Chart 2: $\mathcal{C}_{p,1}^2$}. The second chart is formed using the new variables $uy=x-1$ and $wy=p$. Replacing in the equation of $\mathcal{C}$ gives: $$ \mathcal{C}_{p,1}^2 \left\{ \begin{aligned} & 1=w(1-c^2)u+\sum_{i=0}^{2p-2}a_{i+2}y^iu^{i+2} \\ & yw=p \\ \end{aligned} \right. $$ \textbf{Chart 3: $\mathcal{C}_{p,1}^3$}. The third chart is formed using the new variables $up=x-1$ and $vp=y$. We get: $$ \mathcal{C}_{p,1}^3: v^2=u(1-c^2)+\sum_{i=0}^{2p-2} a_{i+2}p^iu^{i+2} $$ Then, we glue the four charts together (the three charts defined above and the chart given by $t^2=s^{2p}f(\frac{1}{s})$) using the changes of variables defined above, and this gives us a model of the curve $C$ over $\mathbb Z$. We claim that the model obtained is no longer singular at $\mathfrak{M}$. Indeed, we just have to check the regularity at $\mathfrak{M}$ in each one of the three affine charts defined above. For instance, in the first chart $\mathcal{C}_{p,1}^1$, the Jacobian matrix at a point $(x,v,w)$ is given by: \begin{center} $J(x,v,w) = \begin{pmatrix}-\sum_{i=1}^{2p-2}a_{i+2}i(x-1)^{i-1} & 2v & c^2-1 \\ w & 0 & x-1 \end{pmatrix}$ \end{center} Since we look for singularities above $\mathfrak{M}$, we focus on points satisfying $x=1$, $y=0$ and $p=0$. We get: \begin{center} $J(x,v,w) = \begin{pmatrix}-a_3 & 2v & c^2-1 \\ w & 0 & 0 \end{pmatrix}$ \end{center} We get the system (everything $\mod~p$): $$ \left\{ \begin{aligned} & v^2 \equiv a_2+w(1-c^2) \\ & (x-1)w \equiv 0\\ & 2vw \equiv 0\\ & (c^2-1)w \equiv 0 \\ \end{aligned} \right.$$ Thus, $w \equiv 0$. The first equation then gives $v^2-a_2 \equiv 0$; an easy computation of $a_2$ gives that $a_2 \equiv 0 \mod~p$, and thus $v \equiv 0$. Therefore, $\mathfrak{M}'':=\langle x-1,v,w,p\rangle$ is the only possible singularity, and we can check that this point is in fact regular. Indeed, it suffices to see that $p$ and $w$ lie in $\mathfrak{M}''^2$, so that $\mathfrak{M}''/\mathfrak{M}''^2$ is generated by the classes of $x-1$ and $v$, hence has dimension at most $2$, which is the Krull dimension of the local ring. For $p$, it is clear since $p=(x-1)w$.
As for $w$: \begin{center} $0 \equiv v^2- \sum_{i=0}^{2p-2}a_{i+2}(x-1)^i-w(1-c^2) \equiv a_2+ a_3(x-1)-w(1-c^2) \mod \mathfrak{M}''^2$ \end{center} Since $a_2$ and $a_3$ are both multiples of $p$ (this is easy to check), and given the assumption that $p$ does not divide $1-c^2$, we deduce that $w \in \mathfrak{M}''^2$. Therefore, $\mathfrak{M}''$ is regular. As for the two other charts, we check in the same way that they are not singular above $\mathfrak{M}$. Finally, we conclude that the model we have constructed is no longer singular at $\mathfrak{M}$. We tensor this model with $\mathbb Z_p$ and get a model of our curve over $\mathbb Z_p$, which we denote by $\mathcal{C}_{p,1}$. \hop \hop We do exactly the same work at the other singular point of $\mathcal{C}$: the blow-up of $\mathcal{C}$ at $\mathfrak{M}'$ is formed by taking the following three schemes and gluing them together using the suitable changes of variables. \\ \textbf{Chart 1: $\mathcal{C}_{p,2}^1$}. We define new variables $y=(x-c^2)\beta$ and $p=(x-c^2)\gamma$. By writing $f(x)= \sum_{i=0}^{2p}b_i(x-c^2)^i$, where $b_0=0$ and $b_1=f'(c^2)=p\,c^{2p-2}(2c^{2p}-(1+c^2))$, we get $$ \mathcal{C}_{p,2}^1 \left\{ \begin{aligned} & \beta^2= \sum_{i=0}^{2p-2}b_{i+2}(x-c^2)^i+ \gamma c^{2p-2}(2c^{2p}-(1+c^2)) \\ & (x-c^2)\gamma=p \\ \end{aligned} \right.$$ \textbf{Chart 2: $\mathcal{C}_{p,2}^2$}. We define new variables $x-c^2=y\alpha$ and $p=y\gamma$: $$ \mathcal{C}_{p,2}^2 \left\{ \begin{aligned} & 1=\alpha \gamma c^{2p-2}(2c^{2p}-(1+c^2))+\sum_{i=0}^{2p-2}b_{i+2}y^i\alpha^{i+2} \\ & y\gamma=p \\ \end{aligned} \right.$$ \textbf{Chart 3: $\mathcal{C}_{p,2}^3$}. We define new variables $x-c^2=p\alpha$ and $y=p\beta$: $$ \mathcal{C}_{p,2}^3: \beta^2= \alpha c^{2p-2}(2c^{2p}-(1+c^2))+ \sum_{i=0}^{2p-2} b_{i+2}p^i\alpha^{i+2}$$ We glue the four charts together using the changes of variables, and this gives a model of $C$ over $\mathbb Z$. Once again, we can check that this model is no longer singular at $\mathfrak{M}'$.
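Before moving on, the claim above that $a_2$ and $a_3$ are multiples of $p$ can be checked directly, using the binomial expansion of $f$ at $x=1$; a small sketch (the sample values of $p$ and $c$ are our choice):

```python
from math import comb

# Coefficients a_i of f(x) = sum_i a_i (x-1)^i, obtained by expanding
# f(x) = x^{2p} - (1+c^2) x^p + c^2 binomially at x = 1; note a_0 = f(1) = 0,
# and comb(p, i) vanishes automatically for i > p.
def a(i, p, c):
    if i == 0:
        return 0
    return comb(2*p, i) - (1 + c**2) * comb(p, i)

# The regularity argument needs a_2 ≡ a_3 ≡ 0 (mod p); check it for several
# odd primes p > 3 and several values of c.
for p in (5, 7, 11, 13):
    for c in (2, 3, 4, 6):
        assert a(2, p, c) % p == 0
        assert a(3, p, c) % p == 0
        # consistency: the a_i really are the coefficients of f expanded at 1
        x = 9  # arbitrary test value
        assert sum(a(i, p, c) * (x - 1)**i for i in range(2*p + 1)) \
            == x**(2*p) - (1 + c**2) * x**p + c**2
print("a_2 and a_3 are multiples of p")
```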
We denote by $\mathcal{C}_{p,2}$ the model over $\mathbb Z_p$ obtained by tensoring the gluing of the four charts with $\mathbb Z_p$. Finally, we denote by $\mathcal{C}_{p}$ the gluing of $\mathcal{C}_{p,1}$ and $\mathcal{C}_{p,2}$. By the work done before, $\mathcal{C}_{p}$ is a \textit{regular} model of our curve over $\mathbb Z_p$. \item \textbf{Computation of the special fiber of $\mathcal{C}_p$ and the group of components $\Phi$ of the N\'eron model of the Jacobian of the curve $C$ over $\mathbb Z_p$}.\hop In this section, we compute the special fiber of the regular model $\mathcal{C}_p$ constructed before; we then use a result from \cite{BLR} to compute, from this special fiber, the group of components of the N\'eron model of the Jacobian of the curve. This group gives information about the extension of torsors over the curve. Indeed, we have the following lemma: \begin{lem} \label{premier} With the hypotheses of Proposition \ref{G^D->J^0}, given a pointed $G$-torsor over $C$ (relative to $Q_0=(1,0)$), if the morphism $G^D \to J$ extends to a morphism $\mathcal{G}^D \to \mathcal{J}$, and if $\gcd(\# \mathcal{G}, \# \Phi) =1$, where $\Phi$ denotes the group of components of $\mathcal{J}_k$, then the given pointed torsor extends into an fppf $\mathcal{G}$-torsor over $\mathcal{C}$. \end{lem} \begin{proof} By definition, $\Phi=\mathcal{J}_k/\mathcal{J}_k^0$. So, we have an exact sequence of fppf-sheaves of groups: $$ 0 \to \mathcal{J}_k^0 \to \mathcal{J}_k \to \Phi \to 0$$ from which, if $\mathcal{G}_k$ denotes the special fiber of $\mathcal{G}$, we deduce the exact sequence of groups $$ 0\to \Hom_k(\mathcal{G}_k^D,\mathcal{J}_k^0) \to \Hom_k(\mathcal{G}_k^D,\mathcal{J}_k) \to \Hom_k(\mathcal{G}_k^D, \Phi) \to...$$ Since $\gcd(\# \mathcal{G}, \# \Phi) =1$, we have $\Hom_k(\mathcal{G}_k^D, \Phi) = \{0\}$, and the result follows from Proposition \ref{G^D->J^0}. \end{proof} \hop\hop Let us now describe the special fiber of $\mathcal{C}_p$.
One can show that after gluing together all the charts that cover $\mathcal{C}_p$, the special fiber of $\mathcal{C}_p$, which we denote by $\tilde{\mathcal{C}}_p$, lies over the charts $\mathcal{C}_{p,1}^1$ and $\mathcal{C}_{p,2}^1$, except for some points at infinity. \hop The chart $\mathcal{C}_{p,1}$ contains two components of the special fiber that lie inside $\mathcal{C}_{p,1}^1$; they intersect in one point ($x \equiv 1,v \equiv 0,w \equiv 0 \mod~p$): $$ G_{p,1}^1 \left\{ \begin{aligned} & x \equiv 1 \\ & v^2 \equiv w(1-c^2) \\ \end{aligned}\right. ~~~~~ \nolinebreak G_{p,1}^2 \left\{ \begin{aligned} & w \equiv 0 \\ & v^2 \equiv \sum_{i=0}^{2p-2} a_{i+2}(x-1)^i \\ \end{aligned}\right. $$ In fact, one can compute the $a_i$'s and we find that: $$ a_i= \left\{ \begin{aligned} & 0 && \text{if } i=0 \\ & {{2p}\choose{i}}-(1+c^2) {{p}\choose{i}} && \text{if } 0<i \leq p \\ & {{2p}\choose{i}} && \text{if } p < i \leq 2p \\ \end{aligned}\right. $$ It follows, using ${{2p}\choose{p}} \equiv 2 \mod p$, that: $$ G_{p,1}^2 \left\{ \begin{aligned} & w \equiv 0 \\ & v^2 \equiv (1-c^2)(x-1)^{p-2}+(x-1)^{2p-2} \\ \end{aligned}\right. $$ For the same reasons, $\mathcal{C}_{p,2}$ contains two components of the special fiber that intersect in one point ($x \equiv c, \beta \equiv 0, \gamma \equiv 0 \mod~p$): $$ G_{p,2}^1 \left\{ \begin{aligned} & x \equiv c^2 \\ & \beta^2 \equiv \gamma c^{2p-2}(2c^{2p}-(1+c^2)) \\ \end{aligned}\right. ~~~~~ \nolinebreak G_{p,2}^2 \left\{ \begin{aligned} & \gamma \equiv 0 \\ & \beta^2 \equiv \sum_{i=0}^{2p-2} b_{i+2}(x-c)^i \\ \end{aligned}\right. $$ And one also checks that: $$ G_{p,2}^2 \left\{ \begin{aligned} & \gamma \equiv 0 \\ & \beta^2 \equiv (c^2-1)(x-c^2)^{p-2}+(x-c^2)^{2p-2} \\ \end{aligned}\right. $$ Notice that $G_{p,1}^1$ (resp. $G_{p,2}^1$) is contained in the exceptional divisor produced after the first (resp. second) blow-up, while $G_{p,1}^2$ (resp. $G_{p,2}^2$) meets the exceptional divisor in only one point.
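The reduction modulo $p$ of the coefficients $a_i$ above can be verified numerically. The following sketch (with illustrative values of $p$ and $c$; any odd prime $p$ with $p \nmid 1-c^2$ would do) checks that all $a_i$ vanish modulo $p$ except $a_p \equiv 1-c^2$ and $a_{2p} \equiv 1$:

```python
from math import comb

# Illustrative values (assumptions): an odd prime p and an integer c
# with p not dividing 1 - c^2.
p, c = 7, 3

# Coefficients a_i of f(x) = x^(2p) - (1+c^2)*x^p + c^2 expanded at x = 1,
# following the case formula in the text.
def a(i):
    if i == 0:
        return 0
    if i <= p:
        return comb(2*p, i) - (1 + c**2) * comb(p, i)
    return comb(2*p, i)

# Mod p, every a_i vanishes except a_p = C(2p,p) - (1+c^2) = 1 - c^2
# (since C(2p,p) = 2 mod p by Lucas' theorem) and a_{2p} = 1.
residues = {i: a(i) % p for i in range(2*p + 1)}
assert all(r == 0 for i, r in residues.items() if i not in (p, 2*p))
assert residues[p] == (1 - c**2) % p
assert residues[2*p] == 1
```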
Thus, after gluing together $\mathcal{C}_{p,1}$ and $\mathcal{C}_{p,2}$, the components $G_{p,1}^2$ and $G_{p,2}^2$ can be identified, since blowing up does not modify the scheme away from the centers of the blow-ups. Therefore, $\tilde{\mathcal{C}}_{p}$ is composed of three components (see figure \ref{fig:fsp} below). \begin{figure}[h!] \labellist \small\hair 2pt \pinlabel $G_{p,1}^2$ at 180 550 \pinlabel $G_{p,1}^1$ at 30 600 \pinlabel $G_{p,2}^2$ at 460 550 \pinlabel $G_{p,2}^1$ at 600 600 \endlabellist \centering \includegraphics[scale=0.3]{FSp} \caption{The special fiber $\tilde{\mathcal{C}}_p$.} \label{fig:fsp} \end{figure} \hop \hop We now compute the group of components $\Phi$. The incidence matrix of the special fiber $\tilde{\mathcal{C}}_p$ is given by: \begin{center} \[ A:= \begin{bmatrix} (G_{p,2}^1)^2 & G_{p,2}^1G_{p,1}^1 & G_{p,2}^1G_{p,1}^2 \\ G_{p,1}^1G_{p,2}^1 & (G_{p,1}^1)^2 & G_{p,1}^1G_{p,1}^2 \\ G_{p,1}^2G_{p,2}^1 & G_{p,1}^2G_{p,1}^1 & (G_{p,1}^2)^2 \end{bmatrix}= \begin{bmatrix} -4 & 2 & 2 \\ 2 & -2 & 0 \\ 2 & 0 & -2 \end{bmatrix}\] \end{center} Indeed, for example (in the second line, we translate $x-1$ into $x$): \begin{center} \begin{align*} (G_{p,1}^1G_{p,1}^2)_{(x=1,v=0,w=0)} &=\mathrm{dim}_{\mathbb{F}_p} \frac{\mathbb{F}_p[x,v,w]_{<x-1,v,w>}}{(x-1,w,v^2-w(1-c^2),v^2- \sum_{i=0}^{2p-2}a_{i+2}(x-1)^i)} \\ &=\mathrm{dim}_{\mathbb{F}_p} \frac{\mathbb{F}_p[[x,v,w]]}{(x,w,v^2)}\\ &=2. \end{align*} \end{center} By \cite[\S 9.6, Theorem 1]{BLR}, using this matrix, we compute that $\Phi \simeq \mathbb Z/2\mathbb Z \bigoplus \mathbb Z/2\mathbb Z$. \hop \hop \item \textbf{Study of the extension of the pointed $\mu_p^2$-torsor}. As shown in subsection \ref{jacobian contains a subgroup}, the Jacobian of the curve $C$ contains a subgroup isomorphic to $(\mathbb Z/p \mathbb Z)^2$; thus, we have a pointed $\mu_p^2$-torsor over $C$.
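The computation of $\Phi$ from the matrix $A$ above can be checked via the Smith normal form. As a sketch (using the gcd-of-minors characterization of the invariant factors and plain integer arithmetic, rather than any computer-algebra routine), the invariant factors of $A$ are $(2,2,0)$, so the torsion of the cokernel is $\mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$, in agreement with the value of $\Phi$ obtained above:

```python
from math import gcd
from itertools import combinations

A = [[-4, 2, 2],
     [2, -2, 0],
     [2, 0, -2]]

def det(m):
    # Laplace expansion along the first row; fine for the tiny minors here.
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def invariant_factors(m):
    # k-th determinantal divisor d_k = gcd of all k x k minors;
    # invariant factors are the successive quotients d_k / d_{k-1}.
    n = len(m)
    d, factors = 1, []
    for k in range(1, n + 1):
        minors = [det([[m[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(n), k)
                  for cols in combinations(range(n), k)]
        dk = 0
        for x in minors:
            dk = gcd(dk, x)
        factors.append(dk // d if dk else 0)
        d = dk if dk else d
    return factors

# Invariant factors (2, 2, 0): the torsion of the cokernel is Z/2 + Z/2,
# matching Phi computed via [BLR, 9.6 Thm 1].
assert invariant_factors(A) == [2, 2, 0]
```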
Let $\mathcal{J} \to \mathbb Z$ be the N\'eron model of $J$ over $\mathbb Z$; then, the inclusion $(\mathbb Z/p\mathbb Z)^2 \subset J$ extends to a morphism $\mathcal{H}:= (\mathbb Z/p\mathbb Z)^2 \to \mathcal{J}$. Let's now consider $\mathcal{H} \times_{\mathbb Z} \mathbb Z_p \simeq \mathcal{H} \to \mathcal{J}_p$ (where $\mathcal{J}_p:= \mathcal{J} \times_{\mathbb Z} \mathbb Z_p$). By endowing $\mathcal{C}_p$ with the logarithmic structure induced by its special fiber, and by the work done in section \ref{section1}, we know that the $\mu_p^2$-pointed torsor over $C$ considered before extends uniquely to a logarithmic $\mu_p^2$-torsor over $\mathcal{C}_p$. The question is, then, whether or not it extends to a pointed fppf-torsor over $\mathcal{C}_p$. To check this, according to Proposition \ref{G^D->J^0}, it suffices to verify whether the image of $\mathcal{H}$ lands in $\mathcal{J}^0_p$. This amounts to asking whether the two divisors \begin{center} $(0,c) - \infty$ and $(0,-c) - \infty$ \end{center} extend into sections of $\mathcal{J}^0_p$ over $\mathbb Z_p$. Another way to say the same thing is the following: the curve $C$ and the regular model $\mathcal{C}_p \to \mathbb Z_p$ have the same function field, so we can consider two natural divisors over $\mathcal{C}_p$ \begin{center} $\frac{1}{p} \mathrm{div}(y-x^p-c)= \overline{(0,c)}-\overline{\infty}+\frac{1}{p}V_p$ \\ $\frac{1}{p} \mathrm{div}(y-x^p+c)= \overline{(0,-c)}-\overline{\infty}+\frac{1}{p}V_p'$ \end{center} where $\overline{(0,c)}$, $\overline{(0,-c)}$ and $\overline{\infty}$ are horizontal divisors corresponding to the sections of $\mathcal{C}_p$ that extend the points of $C$, and $V_p$ and $V_p'$ are two vertical divisors over $\mathcal{C}_p$. These two divisors with rational coefficients define the logarithmic $\mathbb{G}_m$-torsors that extend the divisors $(0,c)-\infty$ and $(0,-c)-\infty$.
We can say that the image of $\mathcal{H}$ lands in $\mathcal{J}^0_p$ if and only if the divisors above have integer coefficients, i.e. $\frac{1}{p}V_p$ and $\frac{1}{p}V_p'$ have integer coefficients (cf. Appendix).\\ Let's now compute those divisors over $\mathcal{C}_p$. Consider the ideals $\mathfrak{N}:=<x,y-c>$ and $\mathfrak{N}':=<x,y+c>$. These prime ideals are not contained in any of the maximal ideals $\mathfrak{M}$ and $\mathfrak{M}'$ where we blew-up $\mathcal{C}$, which means that the geometric points corresponding to $\mathfrak{M}$ and $\mathfrak{M}'$ are not contained in the varieties corresponding to $\mathfrak{N}$ and $\mathfrak{N}'$. Since blowing up does not modify the scheme away from its center, the points $(0,c)$, $(0,-c)$ and $\infty$ are not affected by the blow-up (in particular, the multiplicities remain unchanged) and therefore, the extended divisors in the regular model $\mathcal{C}_p$ are given by \begin{center} $\frac{1}{p} \mathrm{div}(y-x^p-c)= \overline{(0,c)}-\overline{\infty}$ \\ $\frac{1}{p} \mathrm{div}(y-x^p+c)= \overline{(0,-c)}-\overline{\infty}$ \end{center} \textbf{Conclusion}. We conclude, using what has been explained previously, that the pointed $\mu_p^2$-torsor extends into a pointed fppf-torsor over $\mathcal{C}_p$. This is coherent with what we found about the group of components of $\mathcal{J}_p$. Indeed, we assumed that $p$ is odd (we used this assumption in the construction of the regular model) and we also found that $\# \Phi =4$. Therefore, $p^2 \bigwedge \# \Phi = 1$, which leads again, by Lemma \ref{premier}, to the conclusion that we have an fppf-extension. \end{itemize} \item \textbf{Second example}. For the second example, we are going to treat the case where $p=3$ and we will construct a regular model over $\mathbb Z_l$ of the curve $C$, with $l$ a prime different from $p$, therefore different from $3$.
We consider here the arithmetic surface over $\mathbb Z$ (denoted by $\mathcal{C}$ and whose generic fiber is isomorphic to $C$), given by the equations: \begin{itemize} \item $y^2=f_3(x)=x^6-(1+l^2)x^3+l^2$ \hop \item $t^2=s^6f_3(\frac{1}{s})$ \end{itemize} \begin{itemize} \item \textbf{Construction of a regular model $\mathcal{C}_l$ over $\mathbb Z_l$ with $l \neq 3$}. In this case, one easily checks that the second chart contains no singular points in the fiber over $l$, so that we can focus the study on the first chart only. \\ Using the Jacobian criterion, we find one singular point in the first chart: $\tilde{\mathfrak{M}}:=<x,y,l>$ and by a computation of dimensions, we check that this point is, in fact, not regular. We view $\mathcal{C}$ as a surface inside $\mathbb{A}_{\mathbb Z}^2$ and its blow-up at $\tilde{\mathfrak{M}}$ is formed by taking the following schemes, and gluing them together, using the change of variables defined below: \hop \textbf{Chart 1: $\mathcal{C}_{l}^1$}. Define new variables $y=xv$ and $l=xw$. After replacing in the equation of the first chart of $\mathcal{C}$, we find that $\mathcal{C}_l^1$ is given by the system: $$ \mathcal{C}_{l}^1 \left\{ \begin{aligned} & v^2= x^4-(1+l^2)x+w^2 \\ & xw=l \end{aligned}\right.$$ \textbf{Chart 2: $\mathcal{C}_{l}^2$}. The second chart is formed using the new variables $x=yu$ and $l=yw$. Replacing in the equation of the first chart of $\mathcal{C}$ gives: $$ \mathcal{C}_{l}^2 \left\{ \begin{aligned} & 1=y^4u^6-(1+l^2)yu^3+w^2\\ & yw=l \\ \end{aligned}\right.$$ \textbf{Chart 3: $\mathcal{C}_{l}^3$}. The third chart is formed using the new variables $x=lu$ and $y=lv$. We get: $$\mathcal{C}_l^3: v^2=l^4u^6-(1+l^2)lu^3+1$$ Then, we glue the four charts together (the three charts above and the one given by $t^2=s^6f_3(\frac{1}{s})$) using the change of variables defined above.
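The substitution defining $\mathcal{C}_l^1$ can be checked directly: $y^2-f_3(x)$ with $y=xv$ and $l=xw$ equals $x^2$ times the first equation of $\mathcal{C}_l^1$, so away from the exceptional locus the two define the same surface. The following sketch verifies this polynomial identity by evaluating both sides at random integer points, in exact integer arithmetic:

```python
import random

def lhs(xv, vv, wv):
    # y^2 - f3(x) after the blow-up substitutions y = x*v, l = x*w.
    l = xv * wv
    y = xv * vv
    f3 = xv**6 - (1 + l**2) * xv**3 + l**2
    return y**2 - f3

def rhs(xv, vv, wv):
    # x^2 times the first equation of the chart C_l^1.
    l = xv * wv
    chart1 = vv**2 - (xv**4 - (1 + l**2) * xv + wv**2)
    return xv**2 * chart1

# Agreement at many integer points (the identity is exact, so every
# point agrees).
for _ in range(100):
    pt = [random.randint(-10, 10) for _ in range(3)]
    assert lhs(*pt) == rhs(*pt)
```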
This gives a model for the curve $C$ over $\mathbb Z$ and we claim that it is no longer singular at $\tilde{\mathfrak{M}}$. We need to check this in each one of the three affine charts above. For instance, in $\mathcal{C}_l^1$, the Jacobian matrix at a point $(x,v,w)$ is given by: \begin{center} $J(x,v,w) = \begin{pmatrix}4x^3-(1+l^2) & -2v & 2w \\ w & 0 & x \end{pmatrix}$ \end{center} Since we look for singularities after the blow-up at $\tilde{\mathfrak{M}}$, we focus on the points that verify $x=0$, $y=0$ and $l=0$. We get: \begin{center} $J(x,v,w) = \begin{pmatrix}-1 & -2v & 2w \\ w & 0 & 0 \end{pmatrix}$ \end{center} Combining the vanishing of the $2\times 2$ minors with the equations of $\mathcal{C}_l^1$ modulo $l$, we get the system: $$ \left\{ \begin{aligned} & v^2 \equiv w^2 \\ & 2vw \equiv 0 \\ & 2w^2 \equiv 0 \end{aligned}\right.$$ Thus, $w \equiv v \equiv x \equiv 0 \mod~l$. Thus, $\tilde{\mathfrak{M}}':=<x,v,w,l>$ is the only possible singular point in this chart. But, $l=xw \in \tilde{\mathfrak{M}}'^2$ and $x=x^4-l^2x+w^2-v^2 \in \tilde{\mathfrak{M}}'^2$, hence $\tilde{\mathfrak{M}}'$ is regular. We check in the same way that the two other affine charts are not singular at $\tilde{\mathfrak{M}}$. Finally, we conclude that the constructed model is not singular at $\tilde{\mathfrak{M}}$. We tensor that model with $\mathbb Z_l$ and we denote the result by $\mathcal{C}_l$: it is a \textit{regular} model of the curve $C$ over $\mathbb Z_l$. \item \textbf{Computation of the special fiber of $\mathcal{C}_l$ and the group of components $\Phi_l$ of the N\'eron model of the Jacobian of the curve $C$ over $\mathbb Z_l$}. We check that except for some points at infinity, the special fiber $\tilde{\mathcal{C}_l}$ of $\mathcal{C}_l$ lies over the first chart $\mathcal{C}_l^1$. It has three components that meet in one point $(x=0,v=0,w=0)$ and they are given by: $$ G_{l}^1 \left\{ \begin{aligned} & x \equiv 0 \\ & v \equiv w \\ \end{aligned}\right.
~~~~~ \nolinebreak G_{l}^2 \left\{ \begin{aligned} & x \equiv 0 \\ & v \equiv -w \\ \end{aligned}\right.~~~~~ \nolinebreak G_{l}^3 \left\{ \begin{aligned} & w \equiv 0 \\ & v^2 \equiv x^4-x \\ \end{aligned}\right. $$ \begin{figure}[h!] \labellist \small\hair 2pt \pinlabel $G_{l}^3$ at 21 86 \pinlabel $G_{l}^2$ at 100 80 \pinlabel $G_{l}^1$ at 5 60 \endlabellist \centering \includegraphics[scale=1.3]{FP3} \caption{The special fiber $\tilde{\mathcal{C}}_l$.} \label{fig:fs3} \end{figure} We now compute the group of components $\Phi_l$. The incidence matrix of the special fiber is given by: \begin{center} \[B:= \begin{bmatrix} (G_{l}^1)^2 & G_{l}^1G_{l}^2 & G_{l}^1G_{l}^3 \\ G_{l}^2G_{l}^1 & (G_{l}^2)^2 & G_{l}^2G_{l}^3 \\ G_{l}^3G_{l}^1 & G_{l}^3G_{l}^2 & (G_{l}^3)^2 \end{bmatrix} = \begin{bmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{bmatrix}\] \end{center} Using this matrix and \cite[\S 9.6, Theorem 1]{BLR}, we compute that $\Phi_l \simeq \mathbb Z/3\mathbb Z$. \hop \hop \item \textbf{Study of the extension of the pointed $\mu_3^2$-torsor}. As shown previously, the Jacobian of the curve $C$ contains a subgroup isomorphic to $(\mathbb Z/3\mathbb Z)^2$ and this gives a pointed $\mu_3^2$-torsor over $C$ (relative to $Q_0=(1,0)$). If $\mathcal{J} \to \mathbb Z$ denotes the N\'eron model of $J$ over $\mathbb Z$, the inclusion $(\mathbb Z/3\mathbb Z)^2 \hookrightarrow J$ extends to a morphism $\mathcal{H}:=(\mathbb Z/3\mathbb Z)^2 \to \mathcal{J}$, where $\mathcal{H}$ is a finite flat group scheme over $\mathbb Z$. We know by section \ref{section1} that the $\mu_3^2$-pointed torsor extends uniquely into a logarithmic $\mu_3^2$-torsor over $\mathcal{C}_l$. 
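As for the first example, the value of $\Phi_l$ can be checked from the determinantal divisors of the matrix $B$: the gcd of the entries is $1$, the gcd of the $2\times 2$ minors is $3$, and $\det B = 0$, giving invariant factors $(1,3,0)$ and hence torsion $\mathbb Z/3\mathbb Z$. A small sketch of this computation:

```python
from math import gcd
from itertools import combinations

B = [[-2, 1, 1],
     [1, -2, 1],
     [1, 1, -2]]

def minor(rows, cols):
    # 2x2 minor of B on the given row and column indices.
    (r0, r1), (c0, c1) = rows, cols
    return B[r0][c0] * B[r1][c1] - B[r0][c1] * B[r1][c0]

# d1: gcd of all entries.
d1 = 0
for row in B:
    for entry in row:
        d1 = gcd(d1, entry)

# d2: gcd of all 2x2 minors.
d2 = 0
for rows in combinations(range(3), 2):
    for cols in combinations(range(3), 2):
        d2 = gcd(d2, minor(rows, cols))

# d3: det(B), by Laplace expansion along the first row.
d3 = sum((-1)**j * B[0][j] * minor((1, 2), tuple(k for k in range(3) if k != j))
         for j in range(3))

# Determinantal divisors (1, 3, 0) give invariant factors 1, 3, 0:
# the torsion of the cokernel is Z/3, matching Phi_l.
assert (d1, d2, d3) == (1, 3, 0)
```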
In this example, we will check that this logarithmic model does not lift to an fppf one, i.e., our torsor does not extend to a pointed fppf-torsor over $\mathcal{C}_l$.\\ Let $\mathfrak{N}=<x,y-l>$ and $\mathfrak{N}'=<x,y+l>$; these prime ideals are contained in the maximal ideal $\tilde{\mathfrak{M}}$ where we blew-up $\mathcal{C}$; this means that the varieties corresponding to these prime ideals are blown-up in one point (corresponding to $\tilde{\mathfrak{M}}$). After the blow-up, all the other points except $\tilde{\mathfrak{M}}$ remain unchanged while $\tilde{\mathfrak{M}}$ gives two vertical divisors $G_l^1$ (denoted by $V_l$) and $G_l^2$ (denoted by $V_l'$) corresponding to the exceptional divisor in the affine space. We've already seen that these components appear with multiplicity one, therefore we have: \begin{center} $ \frac{1}{3} \mathrm{div}(y-x^p-l)= \overline{(0,l)}-\overline{\infty} + \frac{1}{3} (V_l+V_l')$ \\ $\frac{1}{3} \mathrm{div}(y-x^p+l)= \overline{(0,-l)}-\overline{\infty} + \frac{1}{3} (V_l+V_l')$ \end{center} \textbf{Conclusion}: We deduce, using what has been explained before, that the pointed $\mu_3^2$-torsor doesn't extend to an fppf one over $\mathcal{C}_l$. By Lemma \ref{premier}, this implies in particular that $\# \Phi_l$ and $\#(\mathbb Z/3\mathbb Z)^2=9$ have a non-trivial divisor in common. Since we computed that $\Phi_l \simeq \mathbb Z/3\mathbb Z$, this is coherent. \end{itemize} \end{enumerate} \section{Appendix} \label{integ} Here, we describe the morphism $\gamma_{\Phi}(\mathcal{P}^{log})$ of Theorem \ref{THM_v2} as it is needed in section \ref{section4} of this paper. Assume that $k$ is perfect and let's describe the morphism on the $k$-points. One can assume that $R$ is strictly Henselian. \\ Let $t_k\in\Phi(k)$, then $t_k$ comes from a $k$-point of $\mathcal{J}_k$, because the map $\mathcal{J}_k(k)\to \Phi(k)$ is surjective (here, one uses the fact that $k$ is algebraically closed).
One can further lift $t_k$ into a point $t\in \mathcal{J}(R)$ because $R$ is Henselian and $\mathcal{J}$ is smooth \cite[\S 2.3, Proposition 5]{BLR}. \noindent Let $L_t:=(id_{C}\times t_K)^{*}\mathcal{P}_K$ be the degree zero line bundle on $C$ corresponding to $t_K\in J(K)$. Then $\mathcal{L}_t:=(id_{\mathcal{C}}\times t)^{*}\mathcal{P}^{\rm log}$ is a $\mathbb{G}_m$-log torsor on $\mathcal{C}$ with generic fiber $L_t$. \noindent Let us recall from Example~\ref{exam:logdiv} that the group $H_{klf}^1(\mathcal{C},\mathbb{G}_m)$ can be described as the group of divisors with rational coefficients above $\mathcal{C}_k$, modulo principal divisors. We shall now describe $\mathcal{L}_t$ as such a divisor class. \noindent According to Raynaud \cite[\S 9.5, Theorem 4]{BLR}, one has a canonical isomorphism $\mathcal{J}^0 \simeq \Pic^0_{\mathcal{C}/R}$, hence, if $n$ is an integer such that $n\Phi=0$, then $L_{nt}\simeq L_t^{\otimes n}$ can be uniquely extended into an element of $\Pic^0_{\mathcal{C}/R}(R)$ i.e. a rigidified line bundle on $\mathcal{C}$ which is algebraically equivalent to zero relative to $\Spec(R)$. So, if $\Delta_K(t)$ is a divisor representing $L_t$, then there exists a divisor $\Delta(nt)$ on $\mathcal{C}$ which is algebraically equivalent to zero relative to $\Spec(R)$, and whose generic fiber is $n\Delta_K(t)$. Moreover, $\Delta(nt)$ is unique up to a multiple of the special fiber $\mathcal{C}_k$ \cite[Theorem 6.4.1, (3)]{Raynaud}, hence we may (and do) choose it in such a way that its coefficient with respect to the component in which $Q_0$ reduces is zero. It follows that \begin{equation} \label{Dt} D_t:= \frac{1}{n} \Delta(nt) \end{equation} is a divisor with rational coefficients extending $\Delta_K(t)$, which is algebraically equivalent to zero relative to $\Spec(R)$ (in the sense that one of its multiples is), and which intersects trivially the section $Q_0$. According to the previous discussion, this is the unique such divisor. 
It follows from the underlying extension structure of $\mathcal{P}^{\rm log}$ that $\mathcal{L}_{nt}=\mathcal{L}_t^{\otimes n}$, hence, by the uniqueness argument above, $\mathcal{L}_t$ is the class of the divisor $D_t$. \noindent Now, $\gamma_{\Phi}(\mathcal{P}^{\rm log})(t)$ can be described as follows: if we write $$ D_t = D_t^{\rm hor} + \sum_{i=1}^r q_i E_i $$ where $D_t^{\rm hor}$ is a horizontal divisor with integral coefficients, then we have: \begin{equation} \label{gammaPlog} \gamma_{\Phi}(\mathcal{P}^{\rm log})(t) = (q_1,\dots, q_r) \mod{\mathbb Z}. \end{equation} We can compute the kernel of this morphism. To do so, we need to introduce some tools. Given a degree zero divisor $c_K$ on $C$, we denote by $\overline{c_K}$ its schematic closure in $\mathcal{C}$, which is a Cartier divisor by regularity of $\mathcal{C}$, and by $c_k$ its image in $\Phi(k)$. \noindent Denote by $[~,~]$ the N\'eron symbol as defined in \cite[Definition 2.1.1]{Pepin}. It follows from \eqref{Dt} that \begin{center} \begin{equation}\label{eqNS} [ c_K~,~(D_t)_K ] = \overline{c_K}\cdot D_t \end{equation} \end{center} \noindent where $\overline{c_K}\cdot D_t$ denotes the intersection number, defined for divisors with rational coefficients by extending the classical local intersection numbers by $\mathbb Q$-linearity.\\ \noindent On the other hand, according to \cite[Lemma 4.2.1]{Pepin}, we have \begin{center} \begin{equation}\label{eqNSGP} [ c_K~,~(D_t)_K ] = \langle c_k~,~t_k \rangle \mod{\mathbb Z} \end{equation} \end{center} where $\langle,\rangle$ denotes Grothendieck's monodromy pairing $\Phi\times \Phi\to \mathbb Q/\mathbb Z$.\\ Let us assume that $t$ belongs to the kernel of $\gamma_{\Phi}(\mathcal{P}^{\rm log})$. Then according to \eqref{gammaPlog}, all the $q_i$ are integers. Hence, for any degree zero divisor $c_K$ on $C$, it follows from (\ref{eqNS}) and (\ref{eqNSGP}) that $\langle c_k~,~t_k \rangle =0$.
Hence, $\langle c_k~,~t_k \rangle =0$ for all possible choices of $c_k$. The field $k$ being algebraically closed, the monodromy pairing is perfect \cite[Theorem 1.3]{BL}, hence it follows that $t_k=0$. Therefore, $\gamma_{\Phi}(\mathcal{P}^{log})$ is injective, which implies that the kernel of the morphism $\gamma(\mathcal{P}^{log})$ is exactly $\mathcal{J}^0$. This allows us to recover Proposition \ref{G^D->J^0}. \bibliographystyle{plain}
\section{Introduction} Bayesian inference is a popular method for estimating unknown parameters from data, largely due to its ability to quantify uncertainty in the estimation results~\cite{gelman2014bayesian}. In the current work we consider a special class of Bayesian inference problems where data have to be collected in a sequential manner. A typical example of this type of problem is the estimation of parameters, such as the initial states or the equation coefficients, in a dynamical system from observations related to the state vector at discrete times. Such problems arise from many real-world applications, ranging from weather prediction~\cite{annan2004efficient} to biochemical networks~\cite{golightly2011bayesian}. It should be emphasized that, unlike many data assimilation problems that seek to estimate the time-dependent states in dynamical systems, the parameters that we want to estimate here are assumed not to vary in time. To distinguish the two types of problems, we refer to the former as \emph{state estimation} problems and the latter as \emph{parameter estimation}. We should also note that in this work we focus on methods which use samples to represent the posterior distribution, and that approximation based methods, such as the Variational Bayes~\cite{beal2003variational} and the Expectation Propagation~\cite{minka2001appBI}, will not be discussed here. Conventional sampling methods, such as Markov Chain Monte Carlo (MCMC) simulations~\cite{gilks1995markov}, use all the data in a single batch and are unable to take advantage of the sequential structure of this type of problem. On the other hand, sequential methods utilize the sequential structure of the problem and update the posterior whenever a new collection of data becomes available, which makes them particularly convenient and efficient for sequential inference problems.
A popular sequential method for parameter estimation is the Ensemble Kalman filtering (EnKF) algorithm, which was initially developed to address dynamical state estimation problems~\cite{evensen2009data}. The EnKF method was extended to estimate parameters in many practical problems, e.g., \cite{annan2004efficient,annan2005parameter}, and more recently it was generically formulated as a derivative-free optimization based parameter estimation method in \cite{iglesias2013ensemble}. The EnKF method for parameter estimation was further developed and analyzed in \cite{arnold2014parameter,iglesias2016regularizing,schillings2017analysis}, etc. The basic idea of the EnKF method for parameter estimation is to construct an artificial dynamical system, turning the parameters of interest into the states of the constructed dynamical system, before applying the standard EnKF procedure to estimate the states of the system. A major limitation of the EnKF method is that, just like the original version for dynamical state estimation, it can only compute a Gaussian approximation of the posterior distribution, and the approximation may result in substantial approximation error, unless the actual posterior is highly close to Gaussian. Moreover the approximation error, unlike random sampling error, cannot be reduced by increasing the sample size. On the other hand, the Sequential Monte Carlo sampler (SMCS) method~\cite{del2006sequential} does not have such a limitation. The SMCS algorithm is a generalisation of the particle filter~\cite{arulampalam2002tutorial,doucet2009tutorial} for dynamic state estimation, generating weighted samples from the posterior distribution.
Since the SMCS algorithm was proposed in \cite{del2006sequential}, considerable improvements and extensions of the method have been proposed, such as \cite{fearnhead2013adaptive,beskos2017multilevel,heng2020controlled,everitt2020sequential}, and more information on the developments of the SMCS methods can be found in the recent reviews~\cite{dai2020invitation,chopin2020introduction}. On the other hand, we need to note that there are other parameter estimation schemes also based on particle filtering, e.g., \cite{gilks2001following,chopin2002sequential}, and the differences and connections between SMCS and these schemes are discussed in \cite{del2006sequential}. The SMCS method makes no assumption or approximation of the posterior distribution, and can directly draw (weighted) samples from any posterior. As will be discussed later, a key issue in the implementation of SMCS is the choice of suitable forward and backward kernels, as the performance of SMCS depends critically on the choices of these kernels. As has been shown in \cite{del2006sequential}, the optimal forward and backward kernels exist in principle, but designing effective kernels for specific problems is nevertheless a highly challenging task. In dynamic state estimation problems, the EnKF approximation is often used as the proposal distribution in the particle filtering algorithm~\cite{papadakis2010data,wen2018defensive}, especially for problems in which the posteriors are modestly non-Gaussian. Building upon similar ideas, we propose in this work to construct the kernels in SMCS by using an EnKF framework. Specifically, the forward kernel is obtained directly from an EnKF approximation, and the backward kernel is derived by making a Gaussian approximation of the optimal backward kernel. With several numerical examples we illustrate that the proposed method performs competitively relative to the EnKF approach.
The numerical results also demonstrate that the EnKF-SMCS algorithm performs well for highly non-Gaussian posteriors. The remaining work is organized as follows. In Section~\ref{sec:setup} we present the generic setup of the sequential inference problems that we consider in this work. In Sections~\ref{sec:smcs} and \ref{sec:enkf} we respectively review the SMCS and the EnKF methods for solving sequential inference problems. In Section~\ref{sec:enkf-smcs} we present the proposed EnKF-SMCS method and in Section~\ref{sec:examples} we provide several numerical examples to illustrate the performance of the proposed method. Finally, Section~\ref{sec:conclusions} offers some concluding remarks. \section{Problem setup} \label{sec:setup} We consider a sequential inference problem formulated as follows. Suppose that we want to estimate the parameter $x\in {\mathbb R}^{n_x}$ from data $y_1, ..., y_t, ..., y_T$ which become available sequentially in time. In particular the data $y_t\in {\mathbb R}^{n_y}$ is related to the parameter of interest $x$ via the following model, \[ y_t = G_t(x) + \eta_t, \quad t=1...T,\] where each $G_t(\cdot)$ is a mapping from ${\mathbb R}^{n_x}$ to ${\mathbb R}^{n_y}$, and the observation noise $\eta_t \sim \@N(0, R_t)$. It follows that the likelihood function can be written as, \begin{equation} \pi(y_t|x) = \@N(G_t(x),R_t),\quad t=1...T.\label{e:lh}\end{equation} It is important to note here that the restriction that the error model has to be additive Gaussian, as in Eq.~\eqref{e:lh}, is due to the use of EnKF. While noting that relaxing such a restriction is possible, we emphasize here that the additive Gaussian noise assumption is reasonable for a wide range of practical problems.
We can now write the posterior distribution in a sequential form: \begin{equation} \pi_t(x)=\pi(x|y_1,...y_t) \propto \pi_0(x) \prod_{i=1}^t\pi(y_i|x), \label{e:pt} \end{equation} where $\pi_0(x)$ is the prior distribution of $x$, and our goal is to draw samples from $\pi_t$ for any $0<t\leq T$. The posterior in Eq.~\eqref{e:pt} is essentially a data tempering formulation, and as is pointed out in \cite{fearnhead2013adaptive,zhou2016toward}, such problems pose challenges for usual MCMC methods especially when the amount of data is large, as they cannot conveniently exploit the sequential structure of the problem. In what follows, we first discuss two sequential methods for this type of problem: the EnKF and the SMCS algorithms, and we then propose a scheme to combine these two methods. \section{Sequential Monte Carlo Sampler}\label{sec:smcs} We first give a brief introduction to the SMCS method for sampling the posterior distribution $\pi_t(x)$, following \cite{del2006sequential}. The key idea of SMCS is to construct a joint distribution $\pi(x_1,...,x_t)$, the marginal of which is equal to the target distribution $\pi_t(\cdot)$. Note here that $\pi(x_1,...,x_t)$ needs only to be known up to a normalization constant. One then applies the sequential importance sampling algorithm~\cite{arulampalam2002tutorial,doucet2009tutorial} to draw weighted samples from $\pi(x_1,...,x_t)$, which after being marginalized over $x_1,...,x_{t-1}$, yields samples from $\pi_t(\cdot)$. Next we describe SMCS in a recursive formulation where, given an arbitrary conditional distribution $L_{t-1}(x_{t-1}|x_t)$, we can construct a joint distribution of $x_{t-1}$ and $x_{t}$ in the form of \begin{equation} p_t(x_{t-1},x_t)=\pi_t(x_t) L_{t-1}(x_{t-1}|x_t), \end{equation} such that the marginal distribution of $p_t(x_{t-1},x_t)$ over $x_{t-1}$ is $\pi_t(x_t)$.
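As a sanity check on the sequential form~\eqref{e:pt}, note that updating the posterior one observation at a time must reproduce the batch posterior conditioned on all the data at once. The following sketch verifies this in an illustrative linear-Gaussian special case ($G_t(x)=x$, scalar $x$, conjugate updates); the specific numerical values are assumptions for the illustration only:

```python
# Sequential conjugate updates for y_t = x + eta_t (illustrative case).
m0, s0 = 0.0, 1.0           # prior mean / variance (assumed values)
r = 0.5                     # observation noise variance R_t
ys = [1.2, 0.7, 1.5, 0.9]   # assumed data

m, s = m0, s0
for y in ys:
    k = s / (s + r)         # scalar Kalman gain
    m, s = m + k * (y - m), (1 - k) * s

# Batch posterior (precision-weighted average over all data at once).
prec = 1.0 / s0 + len(ys) / r
m_batch = (m0 / s0 + sum(ys) / r) / prec
s_batch = 1.0 / prec

# Sequential conditioning and batch conditioning agree.
assert abs(m - m_batch) < 1e-12 and abs(s - s_batch) < 1e-12
```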
Now, given a marginal distribution $q_{t-1}(x_{t-1})$ and a conditional distribution $K_{t}(x_t|x_{t-1})$, we can construct an importance sampling (IS) distribution for $p_t(x_{t-1},x_t)$ in the form of \begin{equation} q_t(x_{t-1},x_t) = q_{t-1}(x_{t-1})K_{t}(x_t|x_{t-1}). \label{e:isq} \end{equation} It is important to note here that a key requirement of the IS distribution $q_t(x_{t-1},x_t)$ is that we can directly draw samples from it. We let $\{x^m_{t-1:t}\}_{m=1}^M$ be an ensemble drawn from $q_t(x_{t-1},x_t)$, and note that the weighted ensemble $\{(x^m_{t-1:t},w_{t}^m)\}_{m=1}^M$ follows the distribution $p_t(x_{t-1:t})$, where the weights are computed according to \begin{subequations}\label{e:isw} \begin{eqnarray} w_t(x_{t-1:t}) &=& \frac{p_t(x_{t-1},x_t)}{q_t(x_{t-1},x_t)} = \frac{\pi_t(x_t) L_{t-1}(x_{t-1}|x_t)}{q_{t-1}(x_{t-1})K_{t}(x_t|x_{t-1})} \notag\\ &=& w_{t-1}(x_{t-1}) \alpha_t(x_{t-1},x_t), \end{eqnarray} where \begin{equation} w_{t-1}(x_{t-1}) = \frac{\pi_{t-1}(x_{t-1})}{q_{t-1}(x_{t-1})},\quad \alpha_t(x_{t-1},x_t)=\frac{\pi_t(x_t) L_{t-1}(x_{t-1}|x_t)}{\pi_{t-1}(x_{t-1})K_{t}(x_t|x_{t-1})}. \end{equation} \end{subequations} As can be seen here, once the two conditional distributions $K_t$ and $L_{t-1}$ (respectively referred to as the forward and backward kernels in the rest of the paper) are chosen, we can draw samples from Eq.~\eqref{e:isq} and compute the associated weights from Eq.~\eqref{e:isw}, obtaining weighted samples from $p_t(x_{t-1},x_t)$ as well as its marginal $\pi_t(x_t)$. The SMCS essentially conducts this procedure in the following sequential manner: \begin{enumerate} \item let $t=0$, draw an ensemble $\{x^m_{0}\}_{m=1}^M$ from $q_0(x_0)$, and compute $w^m_0=\pi_0(x^m_0)/q_0(x_0^m)$ for $m=1...M$; \item let $t=t+1$; \label{st:t+1} \item draw $x^m_{t}$ from $K_t(\cdot|x^m_{t-1})$ for each $m=1...M$; \item compute $w^{m}_{t}$ using Eq.~\eqref{e:isw}; \item return to step~\ref{st:t+1} if $t<T$.
\end{enumerate} Note here that a resampling step is often used in the SMCS algorithm to alleviate the ``sample degeneracy'' issue \cite{del2006sequential}. The resampling techniques are well documented in the PF literature, e.g.~\cite{arulampalam2002tutorial,doucet2009tutorial}, and so are not discussed here. \\ As can be seen from the discussion above, to use SMCS one must choose the two kernels. In principle, optimal choices of these kernels are available. For example, it is known that once $K_t(x_{t}|x_{t-1})$ is provided, one can derive that the optimal choice of $L_{t-1}(x_{t-1}|x_t)$ is~\cite{del2006sequential}: \begin{eqnarray} L^\mathrm{opt}_{t-1}(x_{t-1}|x_t) &=& \frac{q_{t-1}(x_{t-1}) K_t(x_t|x_{t-1})}{q_t(x_t)} \notag\\ &=& { \frac{q_{t-1}(x_{t-1}) K_t(x_t|x_{t-1})}{{\int q_{t-1}(x_{t-1}) K_t(x_t|x_{t-1}) dx_{t-1}}}, } \label{e:optL} \end{eqnarray} where the optimality is in the sense of yielding the minimal estimator variance. We also note that use of the optimal L-kernel allows the weights to be written as \begin{equation} w_t(x_{t-1:t}) = \frac{\pi_t(x_t)}{q_t(x_t)}. \label{e:optw} \end{equation} Moreover, we can see here that if we can choose $K_t$ such that $q_t= \pi_t$, then the weight function is always unity, which means that we now sample directly from the target distribution (the ideal case). While obtaining such an ideal $K_t$ is usually not possible in practice, it nevertheless provides a useful guideline regarding the choice of the forward kernel $K_t$: it should be chosen such that the resulting $q_t$ is close to $\pi_t$. For example, it is proposed in \cite{del2006sequential} to use MCMC moves as the forward kernel. A main limitation of the MCMC kernel is that it typically requires a number of MCMC moves to propose a ``good'' particle, and since each MCMC move involves an evaluation of the underlying mathematical model $G_t$, the total computational cost can be high when $G_t$ is computationally intensive.
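To make the preceding discussion concrete, the following sketch implements a minimal one-dimensional SMCS with a Metropolis (MCMC) forward kernel invariant for $\pi_t$ and its time reversal as the backward kernel, in which case the incremental weight reduces to $\alpha_t = \pi(y_t|x_{t-1})$. The model, prior, and data values are illustrative assumptions, and resampling is omitted for simplicity:

```python
import math, random
random.seed(0)

# Illustrative model (assumption): y_t = x + eta_t, eta_t ~ N(0, 0.5),
# prior x ~ N(0, 10).
def loglik(x, y, r=0.5):
    return -0.5 * (y - x) ** 2 / r

def logpost(x, data):
    return -0.5 * x * x / 10.0 + sum(loglik(x, y) for y in data)

ys = [1.1, 0.9, 1.3, 0.8, 1.0]   # assumed data
M = 2000
xs = [random.gauss(0.0, math.sqrt(10.0)) for _ in range(M)]
logw = [0.0] * M                 # q_0 = pi_0, so initial weights are 1

for t, y in enumerate(ys, start=1):
    data = ys[:t]
    for m in range(M):
        logw[m] += loglik(xs[m], y)           # alpha_t = pi(y_t | x_{t-1})
        prop = xs[m] + random.gauss(0.0, 0.5)  # one Metropolis move (K_t)
        if math.log(random.random()) < logpost(prop, data) - logpost(xs[m], data):
            xs[m] = prop

wmax = max(logw)
ws = [math.exp(lw - wmax) for lw in logw]
mean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)

# Compare with the exact conjugate posterior mean for this linear model.
prec = 1.0 / 10.0 + len(ys) / 0.5
exact = (sum(ys) / 0.5) / prec
assert abs(mean - exact) < 0.15
```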
In this work we consider an alternative to the use of MCMC kernels. Specifically we propose to choose $K_t$ of the form \begin{equation} K_t(\cdot|x_{t-1}) = \@N(\cdot|T_{t}(x_{t-1}),\Sigma^K_t), \end{equation} i.e., a Gaussian distribution with mean $T_t(x_{t-1})$ and covariance $\Sigma^K_t$, where $T_t(\cdot)$ is an ${\mathbb R}^{n_x}\rightarrow {\mathbb R}^{n_x}$ transformation. {We shall compute $T_t$ and $\Sigma^K_t$ (or equivalently the forward kernel $K_t$) using the EnKF method.} \section{Ensemble Kalman Filter}\label{sec:enkf} In this section we give a brief overview of the EnKF parameter estimation method proposed in \cite{iglesias2013ensemble}, which essentially aims to compute a Gaussian approximation of $\pi_t(x_t)$ at each time step $t$. To formulate the problem in an EnKF framework, we first construct an \emph{artificial dynamical system} denoted by $F_t$: at any time $t$, we have the states $u_t=[x_t,z_t]^T$ where $z_t=G_t(x_t)$, and the dynamical model \begin{equation} u_t= F_t(u_{t-1}),\quad x_t= x_{t-1},\quad {z}_t = G_t(x_{t}).\label{e:prop} \end{equation} The data are related to the states through $y_t = z_t+\eta_t$, or equivalently \[y_t = H u_t +\eta_t = [\it0_{n_y\times n_x}, I_{n_y\times n_y}] u_t+\eta_t,\] where $I_{n_y\times n_y}$ is the $n_y\times n_y$ identity matrix and $\it0_{n_y\times n_x}$ is the $n_y\times n_x$ zero matrix. We emphasize here that once we have the posterior distribution $\pi(u_t|y_{1:t})$, we can obtain the posterior $\pi_t(x_t)=\pi(x_t|y_{1:t})$ by marginalizing $\pi(u_t|y_{1:t})$ over $z_t$. Now let us see how the EnKF proceeds to compute a Gaussian approximation of the posterior distribution $\pi(u_t|y_{1:t})$. At time $t$, suppose that the prior $\pi(u_{t}|y_{1:t-1})$ can be approximated by a Gaussian distribution with mean $\tilde{\mu}_{t}$ and covariance $\tilde{C}_{t}$.
It follows that the posterior distribution $\pi(u_{t}|y_{1:t})$ is also Gaussian, and its mean and covariance can be obtained analytically: \begin{equation} {\mu}_t = \tilde{\mu}_t +Q_t(y_t-H\tilde{\mu}_t), \quad {C}_t = (I-Q_tH)\tilde{C}_t , \label{e:postparams} \end{equation} where $I$ is the identity matrix and \begin{equation} Q_t =\tilde{C}_t H^T(H\tilde{C}_t H^T+R_t)^{-1}\label{e:gain} \end{equation} is the so-called Kalman gain matrix. In the EnKF method, one avoids computing the mean and the covariance directly at each step. Instead, both the prior and the posterior distributions are represented by a set of samples. Suppose that at time $t-1$ we have an ensemble of particles $\{u_{t-1}^m\}_{m=1}^M$ drawn according to the posterior distribution $\pi(u_{t-1}|y_{1:t-1})$. We can then propagate the particles via the dynamical model~\eqref{e:prop}: \begin{equation} \tilde{u}_t^m = F_t(u_{t-1}^m), \end{equation} for $m=1...M$, obtaining an ensemble $\{\tilde{u}_t^m\}_{m=1}^M$ following the prior $\pi(u_{t}|y_{1:t-1})$. We can compute a Gaussian approximation, $\@N(u_t | \tilde{\mu}_t, \tilde{C}_t)$, of $\pi(u_{t}|y_{1:t-1})$, where the mean and the covariance are estimated from the samples: \begin{equation} \tilde{\mu}_t = \frac1M\sum_{m=1}^M \tilde{u}_{t}^m, \quad \tilde{C}_t=\frac1{M-1}\sum_{m=1}^M(\tilde{u}_t^m-\tilde{\mu}_t )(\tilde{u}_t^m-\tilde{\mu}_t )^T. \label{e:priorparams} \end{equation} Once $\tilde{\mu}_t$ and $\tilde{C}_t$ are obtained, we can then compute $\mu_t$ and $C_t$ directly from Eq.~\eqref{e:postparams}, and by design, the posterior distribution $\pi(u_t|y_{1:t})$ is approximated by $\@N({\mu}_t, {C}_t)$. Moreover it can be verified that the samples \begin{equation} {u}_{t}^m =\tilde{u}_t^m +Q_t(y_t-(H\tilde{u}_{t}^m-\eta^m_t)), \quad\eta_t^m\sim \@N(0,R_t), \quad m=1...M, \label{e:update} \end{equation} with $Q_t$ computed by Eq.~\eqref{e:gain}, follow the distribution $\@N({\mu}_t,{C}_t)$.
That is, $\{u_{t}^m\}_{m=1}^M$ is an approximate ensemble of $\pi(u_t|y_{1:t})$, and consequently the associated $\{x_{t}^m\}_{m=1}^M$ approximately follows the distribution $\pi_t(x_t)= \pi(x_t|y_{1:t})$. \section{EnKF-SMCS} \label{sec:enkf-smcs} Now we shall discuss how to use the EnKF scheme to construct the forward kernel $K_t$ for SMCS. Recalling that $u_t=[x_t,z_t]^T$, $H =[\it0_{n_y\times n_x}, I_{n_y\times n_y}]$ and the propagation model is $x_t=x_{t-1}$, we can derive from Eq.~\eqref{e:update} that \begin{subequations} \begin{align} \label{e:updatex} x_t= x_{t-1}+Q_t^x (y_t-G_t(x_{t-1})) + Q^x_t\eta_t+\eta'_t,\\ \eta_t\sim \@N(0,R_t),\, \eta'_t\sim \@N(0,\delta^2\Sigma^q_{t-1}), \end{align} \end{subequations} where $Q^x_t = Q_t[1:n_x,1:n_y]$, $\delta$ is a small constant and $\Sigma^q_{t-1}$ is the covariance of $q_{t-1}$ (the evaluation of $\Sigma^q_{t-1}$ is provided in Eq.~\eqref{e:estqt-1}). Eq.~\eqref{e:updatex} can also be written as a conditional distribution: \begin{subequations}\label{e:Kt} \begin{equation} K_t(\cdot|x_{t-1}) = \@N(\cdot | T_t(x_{t-1}), \Sigma^K_t), \end{equation} where \begin{equation}T_t(x_{t-1}) = x_{t-1}+Q^x_t (y_t-G_t(x_{t-1}))\quad \mbox{and}\quad\Sigma^K_t=Q^x_tR_t(Q^x_t)^T+\delta^2\Sigma^q_{t-1}.\end{equation} \end{subequations} Note that the purpose of introducing the small noise term $\eta'_{t}$ in Eq.~\eqref{e:updatex} is to ensure that $\Sigma^K_t$ is strictly positive definite and so $K_t$ is a valid Gaussian conditional distribution. In all the numerical implementations performed in this work, $\delta$ is set to be $10^{-4}$. According to the discussion in Section~\ref{sec:enkf}, if $q_{t-1}$ is a good approximation to $\pi_{t-1}$, we have \begin{equation} q_t(x_{t}) = \int K_t(x_t|x_{t-1}) q_{t-1}(x_{t-1}) d x_{t-1} \approx \pi_t(x_t). \label{e:qt} \end{equation} That is, Eq.~\eqref{e:Kt} provides a good forward kernel for the SMC sampler.
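The construction of this EnKF-based forward kernel (the gain of Eq.~\eqref{e:gain}, the mean map $T_t$ and the covariance $\Sigma^K_t$ of Eq.~\eqref{e:Kt}) can be sketched as follows. This is a minimal illustration on a toy scalar problem with the identity forward model $G(x)=x$; the function names and the test setup are our own assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_forward_kernel(x_prev, G, y, R, delta=1e-4):
    """Build the EnKF-based forward kernel: mean T_t(x) = x + Q^x (y_t - G_t(x))
    and covariance Sigma_K = Q^x R_t (Q^x)^T + delta^2 Sigma^q_{t-1}."""
    M, nx = x_prev.shape
    z = np.apply_along_axis(G, 1, x_prev)            # z^m = G_t(x^m_{t-1})
    ny = z.shape[1]
    u = np.hstack([x_prev, z])                       # augmented states u = [x, z]
    C_tilde = np.cov(u, rowvar=False)                # sample covariance of u
    H = np.hstack([np.zeros((ny, nx)), np.eye(ny)])  # observation operator [0, I]
    Q = C_tilde @ H.T @ np.linalg.inv(H @ C_tilde @ H.T + R)   # Kalman gain
    Qx = Q[:nx, :]                                   # x-block Q^x_t of the gain
    Sigma_q = np.cov(x_prev, rowvar=False).reshape(nx, nx)
    Sigma_K = Qx @ R @ Qx.T + delta**2 * Sigma_q     # regularized kernel covariance
    def sample(x):
        mean = x + Qx @ (y - G(x))                   # T_t(x_{t-1})
        return rng.multivariate_normal(mean, Sigma_K)
    return sample, Qx, Sigma_K

# Toy setup (illustration only): scalar state, identity model, one observation.
G = lambda x: x
x_prev = rng.normal(0.0, 1.0, size=(500, 1))         # ensemble from q_{t-1} = N(0, 1)
y = np.array([2.0])
R = np.array([[0.5]])
sample_K, Qx, Sigma_K = enkf_forward_kernel(x_prev, G, y, R)
x_new = np.array([sample_K(x) for x in x_prev])      # one draw per particle
```

In this linear-Gaussian toy case the propagated ensemble mean should be close to the exact posterior mean $y\,\mathrm{Var}(x)/(\mathrm{Var}(x)+R)\approx 1.33$, which gives a quick sanity check on the gain computation.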
It should be noted here that since $T_t$ is a nonlinear transform, in general we cannot derive an analytical expression for $q_t$ and, as a result, we cannot use the optimal backward kernel given in Eq.~\eqref{e:optL}. Nonetheless, we can use a sub-optimal backward kernel: \begin{equation} \hat{L}_{t-1}(x_{t-1}|x_t) = \frac{\hat{q}_{t-1}(x_{t-1}) \hat{K}_t(x_t|x_{t-1})}{\int \hat{q}_{t-1}(x_{t-1}) \hat{K}_t(x_t|x_{t-1}) dx_{t-1}}, \label{e:suboptL} \end{equation} where $\hat{q}_{t-1}$ is a Gaussian approximation of $q_{t-1}$, and $\hat{K}_t$ is an approximation of ${K}_t$. Next we need to determine $\hat{q}_{t-1}$ and $\hat{K}_t$. Here $\hat{q}_{t-1}$ can be estimated from the ensemble $\{x^m_{t-1}\}_{m=1}^M$: \begin{subequations} \label{e:qhat} \begin{align} \hat{q}_{t-1}(\cdot) &=\@N(\cdot | \xi_{t-1},{\Sigma}^q_{t-1}),\\ \xi_{t-1} &= \frac{1}{M}\sum_{m=1}^M x_{t-1}^m, \quad {\Sigma}_{t-1}^q =\frac1{M-1}\sum_{m=1}^M({x}_{t-1}^m-\xi_{t-1} )(x_{t-1}^m-\xi_{t-1} )^T. \label{e:estqt-1} \end{align} \end{subequations} Now recall that the issue with the optimal backward kernel $L^\mathrm{opt}_{t-1}$ is that the transform $T_t$ inside the forward kernel $K_t$ is nonlinear, and as a result $q_t$ cannot be computed analytically. Here, to obtain $\hat{L}_{t-1}$ in Eq.~\eqref{e:suboptL} explicitly, we take \begin{equation} \hat{K}_t(\cdot|x_{t-1}) = \@N(\cdot | x_{t-1}+Q^x_t (y_t-\bar{y}_t), \Sigma_t^K),\quad\mathrm{with} \quad \bar{y}_t={\mathbb E}[G_t(x_{t-1})],\label{e:hatKt} \end{equation} and in practice $\bar{y}_t$ is evaluated from the particles, i.e., \begin{equation} \bar{y}_t = \frac1M\sum_{m=1}^M G_t(x_{t-1}^m).
\end{equation} It follows that the backward kernel $\hat{L}_{t-1}$, given by Eq.~\eqref{e:suboptL}, is also Gaussian and is given by \begin{subequations}\label{e:suboptLnormal} \begin{equation} \hat{L}_{t-1}(\cdot|x_t) = \@N(\cdot | T^L_{t-1}(x_t),\Sigma^L_{t-1}), \end{equation} where \begin{multline} T_{t-1}^L(x_{t}) =(I-\Sigma_{t}^K(\Sigma^K_t+\Sigma^q_{t-1})^{-1})(x_t-Q_t^x(y_t-\bar{y}_t)) \\+(I-\Sigma_{t-1}^q(\Sigma^q_{t-1}+\Sigma^K_{t})^{-1})\xi_{t-1}, \end{multline} and \begin{equation} \Sigma^L_{t-1}= \Sigma^q_{t-1}-\Sigma^q_{t-1}(\Sigma_{t-1}^q+\Sigma^K_t)^{-1}\Sigma_{t-1}^q. \label{e:SigmaL} \end{equation} \end{subequations} It follows that the resulting incremental weight function is \begin{equation} \alpha_t(x_{t-1},x_t)=\frac{\pi_t(x_t) \hat{L}_{t-1}(x_{t-1}|x_t)}{\pi_{t-1}(x_{t-1})K_{t}(x_t|x_{t-1})}.\label{e:alpha1} \end{equation} Now using the ingredients presented above, we summarize the EnKF-SMCS scheme in Algorithm~\ref{alg:enkf-smcs}. \begin{algorithm} Initialization: draw samples $\{x_0^m\}_{m=1}^M$ from the distribution $q_0(x_0)$; compute the weights $w^m_0=\pi_0(x_0^m)/q_0(x_0^m)$ for $m=1...M$ and renormalize $\{w_0^m\}_{m=1}^M$ so that $\sum_{m=1}^M w^m_0=1$;\;\\ \For{$t=1$ \KwTo $T$}{ estimate $\xi_{t-1}$ and ${\Sigma}^q_{t-1}$ from the ensemble $\{(x_{t-1}^m,w^m_{t-1})\}_{m=1}^M$ using Eq.~\eqref{e:estqt-1};\; \\ let $\tilde{u}_t^m = [x_{t-1}^m, G_t(x_{t-1}^m)]^T$ for $m=1...M$;\;\\ evaluate $\tilde{\mu}_t$ and $\tilde{C}_t$ with Eq.~\eqref{e:priorparams}, and compute $Q_t$ with Eq.~\eqref{e:gain};\;\\ draw $x_{t}^m \sim K_t(x_t|x_{t-1}^m)$ for $m=1...M$ with $K_t$ given by Eq.~\eqref{e:Kt};\;\\ compute $\hat{L}_{t-1}$ from Eq.~\eqref{e:suboptLnormal};\;\\ update the weights: \[w_{t}^m = w_{t-1}^m\frac{\pi_t(x^m_t) \hat{L}_{t-1}(x^m_{t-1}|x^m_t)}{\pi_{t-1}(x^m_{t-1})K_{t}(x^m_t|x^m_{t-1})} \] and renormalize $\{w_t^m\}_{m=1}^M$ so that $\sum_{m=1}^M w^m_t=1$;\;\\ resample if needed.
} \caption{The EnKF-SMCS algorithm}\label{alg:enkf-smcs} \end{algorithm} It is important to note that a key challenge in Algorithm~\ref{alg:enkf-smcs} is yet to be addressed. Namely, we can see from Eq.~\eqref{e:alpha1} that, when updating the particle weights, we need to compute $\pi_t(x_t)$, which involves the evaluation of the forward models $G_1$ through $G_t$. This operation is required at each time step, and therefore the total computational cost can be prohibitive if the total number of steps, $T$, is large. We propose a method to tackle this issue, which is based on the following two observations. First, in sequential inference problems, one is often only interested in the posterior distribution at the final step, where all data are incorporated; second, in many practical problems the posteriors may not vary substantially over several consecutive steps. It therefore may not be necessary to \emph{exactly} compute the posterior distribution at each time step; rather, we only need to sample the posterior distribution at a relatively small number of selected steps. Based on this idea, we propose the following scheme to reduce the computational cost: at each time step we first compute an approximate weight for each particle, and then check whether some prescribed conditions (based on the approximate weights) are satisfied. If such conditions are satisfied, we evaluate the actual weights of the particles. To implement this scheme, we have to address the following issues: \begin{itemize} \item First we need a method to compute the approximate weights, which should be much easier to compute than the exact weights. Recall that in Eq.~\eqref{e:alpha1} one has to evaluate $\pi_t(\-x_t)/\pi_{t-1}(\-x_{t-1})$, which involves computing the forward models from $G_1(\-x_t)$ all the way to $G_t(\-x_t)$, and so the computational cost is high. To reduce this cost, we propose the following approximate method to evaluate Eq.~\eqref{e:alpha1}.
Namely, we first write $\pi_t(\-x_t)/\pi_{t-1}(\-x_{t-1})$ as \[ \frac{\pi_t(\-x_t)}{\pi_{t-1}(\-x_{t-1})} = \frac{\pi_{t-1}(\-x_t)}{\pi_{t-1}(\-x_{t-1})} \pi(y_{t}|\-x_t), \] and naturally we can approximate $\pi_{t-1}$ with $q_{t-1}$, yielding \[\frac{\pi_t(\-x_t)}{\pi_{t-1}(\-x_{t-1})} \approx \frac{{q}_{t-1}(\-x_t)}{{q}_{t-1}(\-x_{t-1})} \pi(y_{t}|\-x_t).\] In principle $q_{t-1}$ can be evaluated via Eq.~\eqref{e:qt}; however, this is still computationally expensive. Thus we make another approximation, replacing $q_{t-1}$ with $\hat{q}_{t-1}$, the Gaussian approximation of $q_{t-1}$ given by Eqs.~\eqref{e:qhat}, and as a result we obtain \begin{equation} \alpha_t(x_{t-1},x_t)\approx\frac{\hat{q}_{t-1}(x_t)\pi(y_{t}|\-x_t) \hat{L}_{t-1}(x_{t-1}|x_t)}{\hat{q}_{t-1}(x_{t-1})K_{t}(x_t|x_{t-1})},\label{e:alpha12} \end{equation} which is used to compute the approximate weights. \item Second we need to prescribe the conditions for triggering the computation of the actual weights. Following \cite{green2017estimating}, we use the Effective Sample Size (ESS) \cite{doucet2009tutorial}, calculated with the approximate weights, as the main indicator: if this ESS is smaller than a threshold value, the actual weights are computed. Moreover, we have two additional conditions that can also trigger the computation of the actual weights: 1) if the actual weights have not been computed for a given number of steps; 2) if the inference reaches the final step, i.e., $t=T$. \item Finally we shall discuss how to compute the actual weight $w_t$. It should be noted here that the recursive formula~\eqref{e:isw} cannot be used here since the actual value of $w_{t-1}$ is not available.
However, letting $t_0$ be the most recent step at which the actual weights were computed, it can be shown that \begin{equation} w_t = w_{t_0} \frac{\pi_t(x_t)}{\pi_{t_0}(x_{t_0})}\prod_{i=t_0}^{t-1} \frac{\hat{L}_i(x_i|x_{i+1})}{K_{i+1}(x_{i+1}|x_{i})}, \end{equation} which is used to calculate the actual weights of the particles. \end{itemize} We refer to this modified scheme as EnKF-SMCS with weight refinement (EnKF-SMCS-WR); the complete procedure is described in Algorithm~\ref{alg:enkf-smcs2}. Finally, note that in both EnKF-SMCS algorithms a resampling step is needed. \begin{algorithm} Initialization: draw samples $\{x_0^m\}_{m=1}^M$ from the distribution $q_0(x_0)$; compute the weights $w^m_0=\pi_0(x_0^m)/q_0(x_0^m)$ for $m=1...M$ and renormalize $\{w_0^m\}_{m=1}^M$ so that $\sum_{m=1}^M w^m_0=1$;\;\\ let $t_0=0$;\\ \For{$t=1$ \KwTo $T$}{ estimate $\xi_{t-1}$ and ${\Sigma}^q_{t-1}$ from the ensemble $\{(x_{t-1}^m,w^m_{t-1})\}_{m=1}^M$ using Eq.~\eqref{e:estqt-1};\; \\ let $\tilde{u}_t^m = [x_{t-1}^m, G_t(x_{t-1}^m)]^T$ for $m=1...M$;\;\\ evaluate $\tilde{\mu}_t$ and $\tilde{C}_t$ with Eq.~\eqref{e:priorparams}, and compute $Q_t$ with Eq.~\eqref{e:gain};\;\\ draw $x_{t}^m \sim K_t(x_t|x_{t-1}^m)$ for $m=1...M$ with $K_t$ given by Eq.~\eqref{e:Kt};\;\\ calculate the approximate weights for $m=1...M$: \[w_{t}^m = w_{t-1}^m\alpha^m_t,\quad \alpha_t^m=\frac{\hat{q}_{t-1}(x^m_t)\pi(y_{t}|\-x^m_t) \hat{L}_{t-1}(x^m_{t-1}|x^m_t)}{\hat{q}_{t-1}(x^m_{t-1})K_{t}(x^m_t|x^m_{t-1})}, \] and renormalize $\{w_t^m\}_{m=1}^M$ so that $\sum_{m=1}^M w^m_t=1$;\;\\ calculate the ESS of the approximate weights $\{w_t^m\}_{m=1}^M$; \;\\ \If{ESS$<\mathrm{ESS}_{\min}$ $\lor$ $t-t_0>\Delta T_{\max}$ $\lor$ $t=T$} { compute $\hat{L}_{t-1}$ from Eq.~\eqref{e:suboptLnormal};\;\\ calculate the weights for $m=1...M$: \[ w^m_t = w^m_{t_0} \frac{\pi_t(x^m_t)}{\pi_{t_0}(x^m_{t_0})}\prod_{i=t_0}^{t-1} \frac{\hat{L}_i(x^m_i|x^m_{i+1})}{K_{i+1}(x^m_{i+1}|x^m_{i})}, \] and renormalize
$\{w_t^m\}_{m=1}^M$ so that $\sum_{m=1}^M w^m_t=1$;\;\\ calculate the ESS of the weights $\{w_t^m\}_{m=1}^M$; \;\\ \If{ESS$<\mathrm{ESS}_\mathrm{resamp}$ } {resample;} let $t_0=t$; } } \caption{The EnKF-SMCS-WR algorithm}\label{alg:enkf-smcs2} \end{algorithm} \section{Numerical examples}\label{sec:examples} We shall provide three examples to demonstrate the performance of the proposed EnKF-SMCS algorithms. The first example is used to illustrate that the proposed method can perform well when the posterior is strongly non-Gaussian and the EnKF becomes highly inaccurate. The second and the third examples show that, even for problems where the EnKF performs reasonably well, the EnKF-SMCS can further improve the performance. {We emphasize here that in all these examples we assume that the forward model $G_t$ is computationally intensive and thus the main computational cost arises from the simulation of $G_t$. As a result, the main computational cost of each method is measured by the number of forward model evaluations, which for all the methods used in this section equals the product of the number of time steps and the number of particles. } \subsection{The Bernoulli model} \begin{figure} \centering \centerline{\includegraphics[width=.5\linewidth]{figs/figs_new/bern/obs_bern_04} \includegraphics[width=.5\linewidth]{figs/figs_new/bern/obs_bern_08}} \caption{The simulated data for $\sigma=0.4$ (left) and $\sigma =0.8$ (right). The lines show the simulated states in continuous time and the dots are the noisy observations. } \label{f:berndata} \end{figure} \begin{figure} \centering \centerline{\includegraphics[width=.5\linewidth]{figs/figs_new/bern/bern_04} \includegraphics[width=.5\linewidth]{figs/figs_new/bern/bern_08}} \caption{The estimation bias error (the difference between the sample mean and the ground truth) plotted at each time step. The left plot shows the bias for $\sigma=0.4$ and the right plot shows that for $\sigma=0.8$.
} \label{f:bern} \end{figure} Our first example is the Bernoulli equation, \begin{subequations} \label{e:bern} \begin{equation} \frac{d v}{d\tau} -v=-v^3,\ \ \ v(0)=x, \end{equation} which has an analytical solution, \begin{equation} v(\tau)={G}(x,\tau) = x (x^2+(1-x^2)e^{-2\tau})^{-1/2}. \label{e:solution} \end{equation} \end{subequations} This model is an often-used benchmark problem for data assimilation methods as it exhibits strong non-Gaussian behavior~\cite{apte2007sampling}. Here we pose it as a sequential inference problem. Namely, suppose that we can observe the solution of the equation, $v(\tau)$, at the times $\tau = t\cdot \Delta_t$ for $t=1,...,T$, and we aim to estimate the initial condition $x$ from the sequentially observed data. The observation noise is assumed to follow a zero-mean Gaussian distribution with standard deviation $\sigma$. In this example, we take $T=50$ and $\Delta_t=0.3$, and, moreover, we consider two different noise levels: $\sigma=0.4$ and $\sigma=0.8$. In the numerical experiments, we set the ground truth to be $x=10^{-4}$, and the data simulated from the model~\eqref{e:bern} for the two noise levels are shown in Fig.~\ref{f:berndata}. In the experiments the prior distribution for $x$ is taken to be uniform: $U[-1,10]$. We sample the posterior distribution with three methods: the EnKF method in \cite{iglesias2013ensemble}, EnKF-SMCS (Algorithm~1), and EnKF-SMCS-WR (Algorithm~2). In each method, we use $5\times 10^4$ particles, and the bias error, i.e., the difference between the sample mean (a commonly used estimator) and the ground truth, is then computed at each time step. The results are shown in Fig.~\ref{f:bern}, where the left figure shows the results for the small noise case ($\sigma=0.4$) and the right figure shows those for the large noise case ($\sigma=0.8$).
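For concreteness, the analytical solution~\eqref{e:solution} and the data-generation step can be transcribed directly. The settings below ($T=50$, $\Delta_t=0.3$, truth $x=10^{-4}$, $\sigma=0.4$) are those stated above; the random seed is arbitrary and the function names are our own.

```python
import numpy as np

def bernoulli_solution(x, tau):
    """Analytical solution v(tau) = G(x, tau) of the Bernoulli equation
    dv/dtau - v = -v^3 with v(0) = x."""
    return x / np.sqrt(x**2 + (1.0 - x**2) * np.exp(-2.0 * tau))

# Simulate the observations: T = 50 observation times with spacing
# Delta_t = 0.3, truth x = 1e-4, and noise level sigma = 0.4.
rng = np.random.default_rng(2)
x_true, T, dt, sigma = 1e-4, 50, 0.3, 0.4
taus = dt * np.arange(1, T + 1)
y_obs = bernoulli_solution(x_true, taus) + sigma * rng.standard_normal(T)
```

Note that for any small positive $x$ the solution relaxes to the stable equilibrium $v=1$, which is why the inference on the initial condition is strongly non-Gaussian.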
It is important to note here that, for the EnKF-SMCS-WR method, only the time steps where the actual weights are computed are shown (marked by asterisks). In other words, only the asterisks show the correct estimation errors and the line is merely for visual guidance. First, one can see from the figures that all the methods perform better in the small noise case, which is sensible as intuitively the inference should be more accurate when the observation noise is small. More importantly, we can also see that in both cases the EnKF results in significantly higher errors than the two SMCS methods, suggesting that the EnKF performs poorly for this highly non-Gaussian example. On the other hand, we observe that the two SMCS algorithms produce largely the same results in both cases (recall that only the asterisks show the results), while EnKF-SMCS-WR only calculates the actual sample weights at 10 time steps in the small noise case and 8 in the large noise case, compared to 50 in EnKF-SMCS. This suggests that the proposed EnKF-SMCS-WR algorithm can significantly reduce the computational cost associated with the weight computation. \subsection{Lorenz 63 model} \begin{figure} \centerline{\includegraphics[width=1.25\linewidth]{figs/figs_new/l63/obs_l63}} \caption{The simulated data for the Lorenz 63 example. The lines show the simulated states in continuous time and the dots are the noisy observations.} \label{f:lorenz63data01} \end{figure} \begin{figure} \centerline{\includegraphics[width=1\linewidth]{figs/figs_new/l63/l63_x1}} \caption{The estimation bias error of each parameter when $x$ is observed.}\label{f:x1} \end{figure} \begin{figure} \centerline{\includegraphics[width=1\linewidth]{figs/figs_new/l63/l63_x2}} \caption{The estimation bias error of each parameter when $y$ is observed.
} \label{f:x2} \end{figure} Our second example is the Lorenz 63 model, a popular example used in several works on parameter estimation, such as \cite{annan2004efficient,mehrkanoon2012parameter}. Specifically, the model consists of three variables $x$, $y$ and $z$, evolving according to the differential equations \begin{subequations} \label{e:lorenz63} \begin{eqnarray} \frac{dx}{d\tau}&=& \alpha(y-x),\\ \frac{dy}{d\tau}&=& x(\rho -z)-y,\\ \frac{dz}{d\tau}&=& xy-\beta z, \end{eqnarray} \end{subequations} where $\alpha$, $\rho$ and $\beta$ are three constant parameters. In this example we take the true values of the three parameters to be $\alpha=10$, $\beta=8/3$ and $\rho=28$, which are assumed to be unknown in the inference. Now suppose that observations of $(x,y,z)$ are made at a sequence of discrete time points, $\tau = t\cdot \Delta_t$ for $\Delta_t=0.1$ and $t=1,...,100$, and we want to estimate the three parameters $(\alpha,\beta,\rho)$ from these observed data. The measurement noise here is taken to be zero-mean Gaussian with variance $3^2$, and the priors of the three parameters are also taken to be Gaussian with means {$[6, 0, 24]$} and variances $[1, 1, 1]$. The data used in our numerical experiments are shown in Fig.~\ref{f:lorenz63data01}. In the numerical experiments, we conduct inference for two different cases: in one the variable $x$ is observed and in the other $y$ is observed. In each case we draw samples from the posterior distributions with EnKF, EnKF-SMCS and EnKF-SMCS-WR, where $10^3$ samples are drawn with each method. We plot the estimation bias errors for the case that $x$ is observed in Fig.~\ref{f:x1} and those for the case that $y$ is observed in Fig.~\ref{f:x2}. As before, the time steps where the actual weights are computed are marked by asterisks. One can see that, in both cases, the errors in the EnKF are larger than those in the two SMCS methods, especially for parameter $\alpha$.
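For reference, the Lorenz 63 dynamics~\eqref{e:lorenz63} and the generation of states at the observation times can be sketched as follows. The fixed-step RK4 integrator and the initial state $(1,1,1)$ are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def lorenz63_rhs(state, alpha, rho, beta):
    """Right-hand side of the Lorenz 63 system."""
    x, y, z = state
    return np.array([alpha * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate(state0, params, dt=0.1, n_steps=100, n_sub=100):
    """Record the state at the observation times tau = t * dt using a
    fixed-step RK4 integrator (an illustrative choice of integrator)."""
    h = dt / n_sub
    s = np.asarray(state0, dtype=float)
    traj = []
    for _ in range(n_steps):
        for _ in range(n_sub):
            k1 = lorenz63_rhs(s, *params)
            k2 = lorenz63_rhs(s + 0.5 * h * k1, *params)
            k3 = lorenz63_rhs(s + 0.5 * h * k2, *params)
            k4 = lorenz63_rhs(s + h * k3, *params)
            s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s.copy())
    return np.array(traj)

# True parameter values (alpha, rho, beta) from the example.
traj = simulate([1.0, 1.0, 1.0], (10.0, 28.0, 8.0 / 3.0))
```

Noisy observations of a single component (e.g. $x$ or $y$) can then be formed by adding zero-mean Gaussian noise with variance $3^2$ to the corresponding column of `traj`.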
Once again, the two SMCS methods yield largely the same bias errors, while EnKF-SMCS-WR requires far fewer computations of the actual weights: 15 time steps in the first case and 14 in the second (indicated by asterisks in Figs.~\ref{f:x1} and \ref{f:x2}, respectively); the bias errors at several selected time steps are also listed in Tables~\ref{tab:x} and~\ref{tab:y}. The example shows that even for problems where the posterior distributions are rather close to Gaussian, the SMCS can further improve the estimation accuracy. \begin{table}[htbp] \caption{ The estimation bias of each parameter when $x$ is observed} \label{tab:x} \centering \begin{tabular}{cc|ccccc} \hline &\multirow{2}*{$t$}&\multirow{2}*{54}&\multirow{2}*{65}&\multirow{2}*{74}&\multirow{2}*{90}&\multirow{2}*{100} \\ & & & & & & \\ \midrule \multirow{3}*{$\alpha$} &EnKF&1.608& 1.54& 1.552& 1.496& 1.492 \\ &EnKF-SMCS&0.8389& 0.7805& 0.9274& 0.9383& 1.046 \\ &EnKF-SMCS-WR&1.179& 1.095& 0.978& 1.03& 1.044 \\ \hline \multirow{3}*{$\beta$} &EnKF&0.4264& 0.4262& 0.468& 0.4262& 0.4394 \\ &EnKF-SMCS&0.2601& 0.2123& 0.2496& 0.2535& 0.2751 \\ &EnKF-SMCS-WR&0.3963& 0.3195& 0.2699& 0.2914& 0.2936 \\ \hline \multirow{3}*{$\rho$} &EnKF&0.275& 0.2808& 0.2699& 0.3043& 0.265 \\ &EnKF-SMCS&0.0981& 0.1176& 0.1373& 0.1362& 0.1492 \\ &EnKF-SMCS-WR&0.0421& 0.1167& 0.1396& 0.1313& 0.1334 \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \caption{ The estimation bias of each parameter when $y$ is observed} \label{tab:y} \centering \begin{tabular}{cc|ccccc} \hline &\multirow{2}*{$t$}&\multirow{2}*{79}&\multirow{2}*{90}&\multirow{2}*{91}&\multirow{2}*{94}&\multirow{2}*{100} \\ & & & & & & \\ \midrule \multirow{3}*{$\alpha$} &EnKF&0.8221& 0.7711& 0.7686& 0.7733& 0.795 \\ &EnKF-SMCS&0.4839& 0.5297& 0.5091& 0.5101& 0.5177 \\ &EnKF-SMCS-WR&0.6305& 0.6239& 0.6154& 0.6083& 0.6063 \\ \hline \multirow{3}*{$\beta$} &EnKF&0.2232& 0.1924& 0.1909& 0.1864& 0.1956 \\ &EnKF-SMCS&0.1228& 0.1331& 0.1259& 0.1239& 0.1243 \\ &EnKF-SMCS-WR&0.1591& 0.1462& 0.142& 0.1439& 0.1436 \\ \hline \multirow{3}*{$\rho$} &EnKF&0.3362& 0.2767& 0.2836& 0.2706& 0.261 \\ &EnKF-SMCS&0.0939&
0.1041& 0.1032& 0.1055& 0.1084 \\ &EnKF-SMCS-WR&0.1188& 0.1293& 0.1266& 0.1261& 0.1255 \\ \hline \end{tabular} \end{table} \subsection{A kinetic model of the ERK pathway} In the last example we consider the parameter estimation problem in kinetic models of biochemical networks. Estimating the kinetic parameters is an essential task in kinetic modeling of biochemical reaction networks, including genetic regulatory networks and signal transduction pathways~\cite{quach2007estimating}. In particular we consider the kinetic model of the Extracellular signal Regulated Kinase (ERK) pathway suppressed by Raf-1 kinase inhibitor protein (RKIP)~\cite{kwang2003mathematical,sun2008extended}. Here we shall omit further details of the biological background of the problem and proceed directly to its mathematical formulation; readers who are interested in more application-related information may consult \cite{kwang2003mathematical,sun2008extended}. In this problem the mathematical model is derived based on enzyme kinetics and is represented by a dynamical system: \begin{equation} \frac{dx}{d\tau} = SV(x), \label{e:kinetic} \end{equation} where $\tau$ is the time, $x$ is a vector of state variables which are concentrations of metabolites, enzymes and proteins or gene expression levels, $S$ is a stoichiometric matrix that describes the biochemical transformations in the network, and $V(x)$ is the vector of reaction rates, usually a vector of nonlinear functions of the state and input variables. Specifically, in this ERK pathway model we have $$ x=[x_1, x_2,...,x_{11}]^T,\quad V(x)=[v_1, v_2,...,v_7]^T,$$ which forms a system of 11 ordinary differential equations.
Moreover the rates of reactions $V(x)$ are~\cite{kwang2003mathematical,sun2008extended}: \begin{subequations} \label{e:kinetic2} \begin{align*} & v_1 = k_1x_1x_2-k_2x_3, \\ & v_2 = k_3x_3x_9-k_4x_4, \\ & v_3 = k_5x_4, \\ & v_4 = k_6x_5x_7-k_7x_8, \\ & v_5 = k_8x_8, \\ & v_6 = k_9x_6x_{10}-k_{10}x_{11}, \\ & v_7 = k_{11}x_{11}, \end{align*} \end{subequations} where $k_1,...,k_{11}$ are the kinetic parameters, and the stoichiometric matrix $S$ is given by~\cite{kwang2003mathematical,sun2008extended}: \begin{align*} S =& \begin{bmatrix} -1& 0& 1& 0& 0& 0& 0\\ -1& 0& 0& 0& 0& 0& 1\\ 1& -1& 0& 0& 0& 0& 0\\ 0& 1& -1& 0& 0& 0& 0\\ 0& 0& 1& -1& 0& 0& 0\\ 0& 0& 1& 0& 0& -1& 0\\ 0& 0& 0& -1& 1& 0& 0\\ 0& 0& 0& 1& -1& 0& 0\\ 0& -1& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& -1& 1\\ 0& 0& 0& 0& 0& 1& -1 \end{bmatrix}. \end{align*} In this problem, we can make observations of some of the concentrations $x_1,...,x_{11}$ at different times, from which we estimate the 11 kinetic parameters $k_1,...,k_{11}$. The specific setup of our numerical experiments is the following. First, in many practical problems not all of the species' concentrations can be conveniently observed \cite{kwang2003mathematical,sun2008extended}; to mimic this situation we assume that observations can only be made on 4 of the states, $\{x_1, x_4, x_7, x_{10}\}$, while $\{x_2, x_3, x_5, x_6, x_8, x_9, x_{11}\}$ are not observed. Second, the observations are made 50 times, with time spacing $\Delta_t=0.001$, and the measurement noise is taken to be zero-mean Gaussian with the standard deviations (STD) shown in Table~\ref{tab:states}. The initial values of the concentrations are also given in Table~\ref{tab:states}. We use simulated data in this example, where the true values of the eleven parameters are shown in Table~\ref{tab:paras}. The priors of the eleven parameters are taken to be Gaussian, with means and standard deviations both shown in Table~\ref{tab:paras}.
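The model~\eqref{e:kinetic}, with the rate vector and stoichiometric matrix given above, can be transcribed directly; the function names below are our own, and only the right-hand side is shown (any standard ODE integrator can then propagate the states).

```python
import numpy as np

# Stoichiometric matrix S (11 states x 7 reactions) as given in the text.
S = np.array([
    [-1,  0,  1,  0,  0,  0,  0],
    [-1,  0,  0,  0,  0,  0,  1],
    [ 1, -1,  0,  0,  0,  0,  0],
    [ 0,  1, -1,  0,  0,  0,  0],
    [ 0,  0,  1, -1,  0,  0,  0],
    [ 0,  0,  1,  0,  0, -1,  0],
    [ 0,  0,  0, -1,  1,  0,  0],
    [ 0,  0,  0,  1, -1,  0,  0],
    [ 0, -1,  0,  0,  1,  0,  0],
    [ 0,  0,  0,  0,  0, -1,  1],
    [ 0,  0,  0,  0,  0,  1, -1],
], dtype=float)

def reaction_rates(x, k):
    """Rate vector V(x) (0-based indexing: k[0] is k_1, x[0] is x_1)."""
    return np.array([
        k[0] * x[0] * x[1] - k[1] * x[2],
        k[2] * x[2] * x[8] - k[3] * x[3],
        k[4] * x[3],
        k[5] * x[4] * x[6] - k[6] * x[7],
        k[7] * x[7],
        k[8] * x[5] * x[9] - k[9] * x[10],
        k[10] * x[10],
    ])

def erk_rhs(x, k):
    """Right-hand side dx/dtau = S V(x) of the kinetic model."""
    return S @ reaction_rates(x, k)
```

A quick structural check: the rows of $S$ for $(x_7,x_8)$ and for $(x_{10},x_{11})$ cancel pairwise, so the sums $x_7+x_8$ and $x_{10}+x_{11}$ are conserved by the dynamics regardless of the parameter values.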
Next we compare the simulation results of the EnKF-SMCS-WR and EnKF. With each method we generate 10000 samples, and for the EnKF-SMCS-WR method the actual weights are computed at steps 2, 4, 8, 13, 17, 25, 27, 28, 30, 36, 39, 41, 50. We then select 4 representative steps (note that at all these steps the actual weights are computed), $t=30,\,36,\,41,$ and $50$, and at each of these time steps we compute the sample mean, standard deviation, and bias error; the results are shown in Table~\ref{tab:11d2}. We restate here that the sample mean is usually used as an estimator of the parameters and the bias error can be used to measure the performance of the estimator. For the purpose of sequential inference, we should devote the majority of our attention to the estimator accuracy at the final step. Therefore, to compare the performance of the two methods, we mark the smaller bias error at $t=50$ in bold in Table~\ref{tab:11d2}, where one can see that the EnKF-SMCS-WR method yields a smaller bias error in all but two dimensions ($k_4$ and $k_5$). It should be noted that there is another dimension, $k_2$, where the difference is rather small and may not be statistically significant. That said, we can see that EnKF-SMCS-WR has a better performance overall, as it is either more accurate than or close to the EnKF in terms of the bias error in all dimensions but $k_4$. We should also mention that we have conducted the simulation with EnKF-SMCS as well, whose results are very close to those of EnKF-SMCS-WR and are omitted here.
\begin{table}[htbp] \caption{The initial values and observation noise of the concentrations (states $x_i$)} \label{tab:states} \centering \begin{tabular}{c|cccccc} \hline & $x_1$&$x_2$&$x_3$&$x_4$&$x_5$&$x_6$\\ \hline initial\ values& 66& 0.054& 0.019& 59& 0.09& 0.012 \\ noise STD &0.005& $5\times 10^{-5}$& $2\times 10^{-5}$& 0.035& 0.0005& $5\times 10^{-6}$ \\ \midrule &$x_7$&$x_8$&$x_9$&$x_{10}$&$x_{11}$& \\ \hline initial\ values& 65& 26& 175& 161& 2.18& \\ noise STD & 0.05& 0.02& 0.03& 0.003& 0.002& \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \caption{The true values and priors of the kinetic parameters} \label{tab:paras} \centering \begin{tabular}{c|cccccc} \hline & $k_1$&$k_2$&$k_3$&$k_4$&$k_5$&$k_6$ \\ \hline truth &0.5242& 0.0075& 0.6108& 0.0025& 0.0371& 0.8101 \\ \hline prior mean&0.5& 0.1& 0.62& 0.04& -0.5& 0.8 \\ prior STD&0.05& 0.03& 0.01& 0.04& 0.5& 0.02 \\ \midrule &$k_7$&$k_8$&$k_9$&$k_{10}$&$k_{11}$& \\ \hline truth & 0.0713& 0.0687& 0.96& 0.0012& 0.872& \\ \hline prior mean& 0& 0.4& 0.9& 0& 0.9& \\ prior STD& 0.05& 0.3& 0.1& 0.005& 0.05& \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \caption{ Comparison of the results of the kinetic model.} \label{tab:11d2} \centering \begin{tabular}{cc|cccc|cccc} \hline \multicolumn{2}{c|}{ \multirow{2}*{} }&\multicolumn{4}{|c|}{ \multirow{2}*{ $\bf{EnKF-SMCS-WR}$ } }& \multicolumn{4}{|c}{ \multirow{2}*{$\bf{EnKF}$ } } \\ \multicolumn{2}{c|}{}&\multicolumn{4}{|c|}{}&\multicolumn{4}{|c}{} \\ \hline &\multirow{2}*{$t$}&\multirow{2}*{30}&\multirow{2}*{36}&\multirow{2}*{41}&\multirow{2}*{50} &\multirow{2}*{30}&\multirow{2}*{36}&\multirow{2}*{41}&\multirow{2}*{50} \\ & & & & & & & & & \\ \hline \multirow{3}*{$k_1$} &Mean& 0.4829& 0.4969& 0.5129& 0.5178 & 0.4804& 0.4946& 0.5124& 0.5165 \\ &STD& 0.0329& 0.0302& 0.0291& 0.0268 & 0.0356& 0.0329& 0.0314& 0.0295 \\ &Bias& 0.0413& 0.0273& 0.0113& \textbf{0.0064} & 0.0438& 0.0296& 0.0118& 0.0077 \\ \hline \multirow{3}*{$k_2$} &Mean& 0.0988& 0.0992& 
0.0997& 0.0997 & 0.1019& 0.1020& 0.1011& 0.1015 \\ &STD& 0.0308& 0.0308& 0.0312& 0.0309 & 0.0297& 0.0297& 0.0297& 0.0297 \\ &Bias& 0.0913& 0.0917& 0.0922& \textbf{0.0922} & 0.0944& 0.0945& 0.0936& 0.0940\\ \hline \multirow{3}*{$k_3$} &Mean& 0.6186& 0.6187& 0.6185& 0.6185 & 0.6204& 0.6203& 0.6202& 0.6201 \\ &STD& 0.0094& 0.0094& 0.0094& 0.0097 & 0.0100& 0.0100& 0.0100& 0.0100 \\ &Bias& 0.0078& 0.0079& 0.0077& {\bf0.0077} & 0.0096& 0.0095& 0.0094& 0.0093 \\ \hline \multirow{3}*{$k_4$} &Mean& 0.0053& -0.0049& -0.0081& -0.0059 & 0.0061& -0.0037& -0.0073& -0.0054 \\ &STD& 0.0147& 0.0129& 0.0118& 0.0104 & 0.0146& 0.0129& 0.0119& 0.0105 \\ &Bias& 0.0028& 0.0074& 0.0106& 0.0084 & 0.0036& 0.0062& 0.0098& {\bf0.0079} \\ \hline \multirow{3}*{$k_5$} &Mean& 0.0365& 0.0367& 0.0376& 0.0373 & 0.0364& 0.0365& 0.0376& 0.0373 \\ &STD& 0.0017& 0.0016& 0.0015& 0.0014 & 0.0019& 0.0018& 0.0017& 0.0016 \\ &Bias& 0.0006& 0.0004& 0.0005& 0.0002 & 0.0007& 0.0006& 0.0005& 0.0002 \\ \hline \multirow{3}*{$k_6$} &Mean& 0.8013& 0.8005& 0.8009& 0.8004 & 0.7996& 0.7999& 0.7999& 0.7998 \\ &STD& 0.0213& 0.0217& 0.0221& 0.0222 & 0.0201& 0.0201& 0.0200& 0.0200 \\ &Bias& 0.0088& 0.0096& 0.0092& {\bf0.0097} & 0.0105& 0.0102& 0.0102& 0.0103 \\ \hline \multirow{3}*{$k_7$} &Mean& 0.0154& 0.0131& 0.0136& 0.0106 & 0.0101& 0.0057& 0.0063& 0.0031 \\ &STD& 0.0464& 0.0447& 0.0432& 0.0417 & 0.0487& 0.0477& 0.0467& 0.0446 \\ &Bias& 0.0559& 0.0582& 0.0577& {\bf0.0607} & 0.0612& 0.0656& 0.0650& 0.0682 \\ \hline \multirow{3}*{$k_8$} &Mean& 0.0787& 0.0826& 0.0827& 0.0822 & 0.0819& 0.0869& 0.0864& 0.0895 \\ &STD& 0.0342& 0.0288& 0.0259& 0.0210 & 0.0349& 0.0298& 0.0265& 0.0220 \\ &Bias& 0.0100& 0.0139& 0.0140& {\bf0.0135 } & 0.0132& 0.0182& 0.0177& 0.0208 \\ \hline \multirow{3}*{$k_9$} &Mean& 0.9966& 0.9915& 0.9534& 0.9586 & 1.0060& 0.9947& 0.9585& 0.9638 \\ &STD& 0.0763& 0.0737& 0.0693& 0.0646 & 0.0735& 0.0701& 0.0683& 0.0654 \\ &Bias& 0.0366& 0.0315& 0.0066& {\bf0.0014} & 0.0460& 0.0347& 0.0015& 0.0038 \\ \hline 
\multirow{3}*{$k_{10}$} &Mean& 0.0002& 0.0001& 0.0005& 0.0005 & -0.0006& -0.0005& -0.0003& -0.0003 \\ &STD& 0.0052& 0.0053& 0.0053& 0.0053 & 0.0050& 0.0050& 0.0050& 0.0050 \\ &Bias& 0.0010& 0.0011& 0.0007& {\bf0.0007} & 0.0018& 0.0017& 0.0015& 0.0015 \\ \hline \multirow{3}*{$k_{11}$} &Mean& 0.8859& 0.8819& 0.8802& 0.8751 & 0.8872& 0.8803& 0.8811& 0.8762 \\ &STD& 0.0367& 0.0363& 0.0357& 0.0360 & 0.0406& 0.0404& 0.0403& 0.0400 \\ &Bias& 0.0139& 0.0099& 0.0082& {\bf0.0041} & 0.0152& 0.0083& 0.0091& 0.0052 \\ \hline \end{tabular} \end{table} \section{Conclusions}\label{sec:conclusions} In this work we propose a sampling method to compute the posterior distribution that arises in sequential Bayesian inference problems. The method is based on SMCS, which seeks to generate weighted samples from the posterior in a sequential manner; specifically, we propose to construct the forward kernel in SMCS using an EnKF framework and also derive a backward kernel associated with it. With numerical examples, we demonstrate that the EnKF-SMCS method can often yield more accurate estimates than the direct use of either SMCS or EnKF for a class of problems. We believe that the method can be useful in a wide range of real-world parameter estimation problems where data become available sequentially in time. Some extensions and improvements of the EnKF-SMCS algorithm are possible. First, in this work we focus on problems with a sequential structure, but we expect that the method can be applied to batch inference problems (where all the data are available and used for inference at once) as well. In fact, many batch inference problems can be artificially ``sequentialized'' by data tempering treatments~\cite{geyer2011importance} and, consequently, the EnKF-SMCS algorithm can be applied in these scenarios. In this respect, combining data tempering methods and the EnKF-SMCS method to address batch inference problems is a highly interesting research direction.
Second, as has been discussed previously, the proposed method relies on the assumption that the posterior distributions do not deviate strongly from being Gaussian. For problems with highly nonlinear models, the posterior distributions may be far from Gaussian, and as a result the kernels obtained with the EnKF method may not be effective for SMCS. In this case, the performance of the EnKF-SMCS method may be improved by approximating the posterior with a mixture distribution (e.g.~\cite{hoteit2008new,stordal2011bridging}). Finally, as the method is based on an EnKF scheme, it requires the observation noise to be additive and Gaussian. We believe that this requirement can be relaxed by borrowing ideas from methods developed for dynamic state estimation problems in the data assimilation community, e.g.,~\cite{pajonk2012deterministic}. We plan to investigate these issues in the future. \bibliographystyle{plain}
\section{Introduction} While the method of forcing has ``impressive success in proving independence results for set theory'' \cite[p.81]{bottom}, mathematical logic lacks general methods to prove independence of arithmetical $\Pi_1$-sentences. This lack has been pointed out by Pudl\'ak in \cite{bottom} and repeatedly in his latest book \cite{pudlakbuch}. There he asks for a ``method that would be as powerful as forcing and work also for finite problems. [\ldots] To develop such methods is one of the principal goals in proof complexity.''\cite[p.342]{pudlakbuch} A suggestion~\cite{am} is that forcing itself could be developed to become such a method. Indeed, two landmark results of proof complexity, namely the theorems of Riis~\cite{riis,riisbrics} and Ajtai~\cite{ajtai}, were originally proved by forcing-type arguments. In contrast to set theory, however, a general theory of forcing in bounded arithmetic has not been developed.\footnote{An exception is Kraj\'i\v{c}ek's book \cite{kraforce} that follows a conceptually different set-up going back to Scott~\cite{scott}.} Instead, later developments ``eliminate the non-standard model theory'' \cite[p.367]{bpu} and forcing. Forcing arguments in bounded arithmetic remain largely informal and confined to the simplest kind of forcings, akin to Cohen forcing in set theory. The leading idea in Pudl\'ak's book~\cite{pudlakbuch} or the survey~\cite{pudlakbulletin} is that the computational complexity of problems associated with sentences could cause independence. A particularly appealing instance of this idea is given by true sentences of the form $\forall x\exists y\varphi(x,y)$ where $\varphi(x,y)$ defines some polynomial time decidable and polynomially bounded relation.\footnote{All relevant technical concepts are going to be defined precisely later.} In particular, $\exists y$ is implicitly bounded, so the sentence is $\Pi_1$ in an appropriate language.
The associated computational problem is the (total) NP search problem to compute, given an input $x$, some $y$ such that $\varphi(x,y)$ is true. On the computational complexity side, NP search problems are compared using (polynomial time) many-one or Turing reductions and organized into various classes \cite{pls,papa}. An elegant definitional set-up~\cite{beame} uses {\em type~2}~NP search problems where $\varphi(x,y)$ is allowed to mention (a predicate for) an oracle $\alpha$. By a {\em finitary combinatorial principle} we mean an existential first-order sentence $\varphi$ which is valid in the finite. Such a sentence $\varphi$ might or might not have built-in symbols, for example, an order symbol $<$ whose interpretation over universe $[n]:=\{0,\ldots, n-1\}$ is required to be the natural order. The {\em associated type 2 NP search problem} $Q_\varphi$ asks, given $n$ (in binary) and access to an oracle~$\alpha$ that codes an (exponentially large) structure on $[n]$, to find witnesses to the existential quantifiers in $\varphi$. For example $Q_\mathit{WPHP}$, for the weak pigeonhole principle $\mathit{WPHP}$, asks, given $n$ and an oracle $\alpha$ coding a function $f:[n]^2\to [n]$, to find a collision of $f$. This problem underlies collision resistant hash functions and is thus important for cryptography (cf.~\cite{krawphp,krapud,krapolywphp,komar}). Further, Papadimitriou's seminal work~\cite{papa} identified a couple of principles $\varphi$ such that many natural NP search problems reduce to $Q_\varphi$. On the logical side, there is a substantial amount of work aimed at characterizing the~NP search problems which are provably total in bounded arithmetics (\cite{approx} contains a recent survey). For example~\cite{bk}, those provably total in $\mathsf{T}^1_2$ are in the class PLS from~\cite{pls}, i.e., many-one reducible to $Q_\textit{ITER}$, where $\textit{ITER}$ is the so-called {\em iteration principle} with built-in order~$<$.
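To make the search-problem view above concrete, here is a minimal sketch of $Q_\mathit{WPHP}$ with the oracle modeled as a Python function (the function names are mine, purely illustrative). The brute-force scan is of course exponential in the bit-length of $n$, which is exactly the point: $n$ is given in binary, and no polynomial-time procedure is known relative to an arbitrary oracle.

```python
def wphp_collision(n, f):
    """Q_WPHP, brute force: f encodes a map [n]^2 -> [n], so for n >= 2
    the domain is strictly larger than the range and, by pigeonhole, two
    distinct points must share a value.  Returns such a pair of points."""
    seen = {}                      # value -> first point attaining it
    for x in range(n):
        for y in range(n):
            v = f(x, y)
            if not (0 <= v < n):
                raise ValueError("oracle does not map into [n]")
            if v in seen:
                return seen[v], (x, y)
            seen[v] = (x, y)
    raise AssertionError("no collision: f was not total on [n]^2, n >= 2")

# usage: any Python function [n]^2 -> [n] plays the role of the oracle
p, q = wphp_collision(3, lambda x, y: (x + y) % 3)
```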
It is not known whether there are NP search problems outside PLS (this would imply\footnote{In fact, $\mathrm{P}\neq\mathrm{TFNP}$ seems to be much stronger than $\mathrm{P}\neq\mathrm{NP}$; see \cite{hubacek} for a recent discussion.} $\mathrm{P}\neq\mathrm{NP}$) but there are many such {\em type 2} problems~\cite{morioka}:\footnote{The proof given in \cite{morioka} treats only many-one reductions. Corollary~\ref{cor:morioka} gives a stronger result.} \begin{theorem}[Buresh-Oppenheim, Morioka 2004] \label{thm:morioka} If $\varphi$ is a finitary combinatorial principle without built-in symbols that fails in some infinite model, then $Q_\varphi$ is not Turing reducible to~$Q_\textit{ITER}$. \end{theorem} Papadimitriou's~\cite{papa} principles exemplify $\varphi$ as above (see Remark~\ref{rem:papa}). Beame et al.~\cite{beame} showed that their associated search problems are not equivalent under Turing reductions. Equivalently~\cite{ciy}, the associated complexity classes are distinct relative to a Cohen-generic oracle. Such oracles are produced by forcings of the type first considered by Feferman~\cite{feferman}. We refer to~\cite{kurtz} and the references therein for more information about generic oracles. These oracle separations use proof techniques underlying results stating that bounded depth Frege proofs of the propositional translation of one principle from substitution instances of another require exponential size. This translation is a straightforwardly defined sequence of tautologies, one for each natural $n>0$, expressing totality, i.e., that $\exists y\varphi(n,y)$ is true for all oracles~$\alpha$. The similarity of techniques raises the suspicion that the oracle separations might follow from the proof length lower bounds. It took a while for this to be confirmed. Improving~\cite{morioka}, Buss and Johnson~\cite{bj} showed: \begin{theorem}[Buss, Johnson 2012]\label{thm:bj} Let $\varphi,\psi$ be finitary combinatorial principles. 
If $Q_\varphi$~is Turing reducible to $Q_\psi$, then there are quasipolynomial size bounded depth Frege proofs of~the propositional translation of $\varphi$ from substitution instances of the propositional translation~of~$\psi$. \end{theorem} In fact, Buss and Johnson got {\em shallow} Frege proofs and were able to prove a partial converse (see~\cite{bj}). Theorem~\ref{thm:bj} confirms the abovementioned suspicion. Intuitively, however, the proof length lower bounds seem to be much stronger, and it is one of the goals of the present paper to clearly confirm this intuition. Despite these separations on the (relativized) computational complexity side, it is still open whether full relativized bounded arithmetic $\mathsf T_2(\alpha)$ has more provably total type 2 NP search problems than its second level $\mathsf T^2_2(\alpha)$. This is one of the central open problems in bounded arithmetic (e.g.\ \cite{bkt} or~\cite{approx} survey what is known). It is here that a general theory of forcing, as Pudl\'ak asks for, would be desirable. One of the most beautiful results is\footnote{The statement includes a later improvement due to Kraj\'i\v{c}ek: see \cite[Section 11.5]{krabuch}.} \begin{theorem}[Riis 1993]\label{thm:Briis} If $\varphi$ is a finitary combinatorial principle without built-in symbols that fails in some infinite model, then $Q_\varphi$ is not provably total in~$\mathsf T^1_2(\alpha)$. \end{theorem} This holds for~$\mathsf S^2_2(\alpha)$ by known conservativity, but fails for~$\mathsf T^2_2(\alpha)$ and~$\mathit{WPHP}$~\cite{mpw}. Riis' original proof~\cite{riis,riisbrics} used a variant of ``the first forcing argument in the context of weak arithmetic'' \cite[p.278]{krabuch} due to Paris and Wilkie~\cite{pw}. These forcings are essentially different from Feferman's forcing mentioned above: the latter expands the standard model by an unbounded set while the former expand a nonstandard model by a bounded~set.
\paragraph{Results} We consider {\em universal variants} of bounded arithmetics and especially the theories $\forall\mathsf{S}^1_2(\PV(\alpha)),\forall\mathsf{T}^1_2(\PV(\alpha)),\forall\mathsf{T}_2(\PV(\alpha))$ in the language $\ensuremath{{\sf PV}}\xspace(\alpha)$ that contains a symbol for every polynomial time algorithm with oracle $\alpha$. They are defined using the same (induction or) minimization schemes as the usual bounded arithmetics $\mathsf S^1_2(\ensuremath{{\sf PV}}\xspace(\alpha)), \mathsf T^1_2(\ensuremath{{\sf PV}}\xspace(\alpha)),\mathsf T_2(\ensuremath{{\sf PV}}\xspace(\alpha))$ but have as base theory $\forall\PV(\alpha)$, the theory of all universal sentences true in the standard model for all oracles $\alpha$. Adding $\forall\PV(\alpha)$ harmonizes the computational and logical approach to type 2 NP search problems in that the logical notion of consequentiality over various theories coincides with natural notions of reductions. In particular, a type~2 NP search problem $\varphi(x,y)$ is Turing reducible to another $\psi(u,v)$ if and only if $\varphi(x,y)$ is a {\em consequence} of $\psi(u,v)$ over $\forall\mathsf{S}^1_2(\PV(\alpha))$. This means, roughly, that the totality of $\varphi(x,y)$ is provable in~$\forall\mathsf{S}^1_2(\PV(\alpha))$ plus the totality of $\psi(u,v)$ for all oracles that are polynomial time computable relative to $\alpha$. This follows from known witnessing theorems, the contribution here consists mainly in spelling out the right definitions. Indeed, we give a quite simple proof of \begin{theorem}\label{thm:bjstrong} Let $\varphi,\psi$ be finitary combinatorial principles. If $Q_\varphi$ is a consequence of~$Q_\psi$ over $\forall\mathsf{T}_2(\PV(\alpha))$, then there are quasipolynomial size bounded depth Frege proofs of the propositional translation of $\varphi$ from substitution instances of the propositional translation~of~$\psi$. 
\end{theorem} By the equivalence of Turing reducibility and consequentiality over $\forall\mathsf{S}^1_2(\PV(\alpha))$, this result strengthens Theorem~\ref{thm:bj} by replacing $\forall\mathsf{S}^1_2(\PV(\alpha))$ by $\forall\mathsf{T}_2(\PV(\alpha))$. Thereby, it confirms the abovementioned intuition that the oracle separations of~\cite{beame} seem to be much weaker than the corresponding proof length lower bounds. As already mentioned, progress to understand the relative complexity of type 2 NP search problems is hindered by our lack of general methods to prove independence from relativized bounded arithmetics. Here we describe a general forcing method to prove independence from $\forall\mathsf{T}^1_2(\PV(\alpha))$, situated in the framework of \cite{am}. Many, mostly informal, forcing type arguments in bounded arithmetic use what we call {\em typical} forcings with {\em typical graded} forcing frames. We prove a general theorem stating that under a series of simple technical conditions such forcings produce models of $\forall\mathsf{T}^1_2(\PV(\alpha))$. We stress that this result refers to arbitrary forcings not necessarily of the Cohen type. We refrain from reproducing this rather technical statement here and refer to Theorem~\ref{thm:T12}. It is meant as a contribution to Pudl\'ak's question in the relativized setting. Our main result, described next, is obtained as an application. We first reexamine Riis' Theorem~\ref{thm:Briis} in the light of Theorem~\ref{thm:T12}. We give a new proof using a natural forcing whose conditions are partial oracles that code {\em partial} structures on~$[n]$ that embed into an infinite model where $\varphi$ fails and hence do not verify the principle. The generic $\alpha^N$ then codes a {\em total} structure on $[n]$ that falsifies $\varphi$. 
It is straightforward to verify the conditions of Theorem~\ref{thm:T12} for this forcing, so we get a slight strengthening of Theorem~\ref{thm:Briis} with $\forall\mathsf{T}^1_2(\PV(\alpha))$ replacing $\mathsf T^1_2(\alpha)$. This yields the \begin{corollary}\label{cor:morioka} If $\varphi$ is a finitary combinatorial principle without built-in symbols that fails in some infinite model, then $Q_\varphi$ is independent from $Q_\textit{ITER}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$. \end{corollary} Being {\em independent} just negates being a consequence. Recalling the relation of Turing reducibility and $\forall\mathsf{S}^1_2(\PV(\alpha))$, we see that the corollary strengthens Theorem~\ref{thm:morioka} in that it replaces $\forall\mathsf{S}^1_2(\PV(\alpha))$ by $\forall\mathsf{T}^1_2(\PV(\alpha))$. Our main interest is in finitary combinatorial principles without built-in symbols. Theorem~\ref{thm:Briis} suggests studying their relative strength over $\forall\mathsf{T}^1_2(\PV(\alpha))$. We aim at a model-theoretic property implying independence of $Q_\varphi$ from~$Q_{\tilde\varphi}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$. Note that this is stronger than refuting Turing reducibility. For example, Buss et al.~\cite[Theorem~10]{bkt} proved this for~$\tilde\varphi=\mathit{WPHP}$ and $\varphi=\mathit{HOP}$. The {\em Herbrandized ordering principle} $\mathit{HOP}$ states, roughly, that partial orders have minimal elements. The proof uses quite involved combinatorics specifically tailored for $\mathit{HOP}$. Nevertheless, the authors point out that the proof ``relies on the fact that the injective $\mathit{WPHP}$ is very over-determined, in the sense that even relatively small subsets of the $a^2$ pigeons must already contain a collision.''~\cite{bkt} This hints at the possibility that there is a more general theorem, one concerning independence from ``very over-determined'' principles.
We formalize and quantify the determinacy of a principle and then prove such a general result. This is done again by forcing with partial structures. The cited comment means that the $\mathit{WPHP}$ is verified in `small' partial structures, i.e., with only a small fraction of function values defined. We shall call such principles {\em weak} in distinction from {\em strong} ones and prove our main result: \begin{theorem}\label{thm:main} If $\varphi$ is a strong finitary combinatorial principle without built-in symbols and~$\tilde\varphi$ is a weak finitary combinatorial principle, then $Q_\varphi$ is independent from $Q_{\tilde\varphi}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$. \end{theorem} We view Theorem~\ref{thm:main} as an extension of Riis' Theorem~\ref{thm:Briis} because its proof extends our proof of Theorem~\ref{thm:Briis} which we consider natural and intuitive. Taking $\mathit{WPHP}$ for $\tilde\varphi$, it gives a simple model-theoretic criterion, namely being strong, for independence from $\forall\mathsf{T}^1_2(\PV(\alpha))$ ``plus $\mathit{WPHP}$''. We check it applies to many of the commonly studied principles (Section~\ref{sec:disc}), and, in particular, to $\mathit{HOP}$. Compared to~\cite{bkt} our proof is different. First, it does not rely on the already mentioned witnessing theorem for~$\mathsf T^1_2(\alpha)$ by PLS \cite{bk}. Second, it has to sidestep the amplification of failure of $\mathit{WPHP}$ (cf.~\cite[Section~2]{thapen1}) since this is not available for general weak~$\tilde \varphi$. However, the combinatorial core of the argument is `the same' and isolated as the Core Lemma~\ref{lem:dense2}. Our forcing set-up interprets it as a density argument. \medskip We would like to emphasize the comparative simplicity of our proofs of the mentioned results. The proof of Theorem~\ref{thm:bjstrong} proceeds by an intuitive model-theoretic argument followed by an application of the standard propositional simulation. 
This is technically much simpler than the more direct and quite elaborate construction of propositional proofs in \cite{bj}. The proof of Theorem \ref{thm:main} is a straightforward application of our general forcing Theorem~\ref{thm:T12}. Intuitively speaking, the combinatorics needed to fuel the forcing argument are akin to those one would aim at when trying to refute Turing reducibility. The surplus value added by the forcing machinery then consists in strengthening the independence from $\forall\mathsf{S}^1_2(\PV(\alpha))$ to $\forall\mathsf{T}^1_2(\PV(\alpha))$. We hope this can make a point in favor of further developing the general theory of forcing in bounded arithmetic. \section{Universal variants of bounded arithmetics}\label{sec:pv} Usually the bounded arithmetic $\mathsf{S}^1_2$ is written in Buss' language and shown to have a conservative extension that proves Cook's theory $\ensuremath{{\sf PV}}\xspace$~\cite{cookpv}, a theory having symbols for all polynomial time functions. One can add a predicate $\alpha$ and show $\mathsf{S}^1_2(\alpha)$ has a conservative extension $\mathsf{S}^1_2(\ensuremath{{\sf PV}}\xspace(\alpha))$ containing $\ensuremath{{\sf PV}}\xspace(\alpha)$, Cook's theory for functions computed in polynomial time with oracle $\alpha$. The universal variants $\forall\mathsf{S}^1_2$ and $\forall\mathsf{S}^1_2(\PV(\alpha))$ use instead~$\forall\ensuremath{{\sf PV}}\xspace$ and~$\forall\ensuremath{{\sf PV}}\xspace(\alpha)$, respectively, the true universal theories of polynomial time (with oracle $\alpha$). Basic lemmas concerning bounded arithmetics carry over to the universal variants without surprises, and we sketch the deve\-lop\-ment only insofar as we shall need it or insofar as it allows for a smooth introduction of notations and concepts used later on. This section has a preliminary character.
Section~\ref{sec:univar} defines universal variants of bounded arithmetics in the languages $\ensuremath{{\sf PV}}\xspace$ and~$\ensuremath{{\sf PV}}\xspace(\alpha)$. Section~\ref{sec:aux} discusses auxiliary theories in the language $\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$, leading to a useful technical lemma (Lemma~\ref{lem:PVpa}). We prove it via a detour in propositional logic in Section~\ref{sec:prop}, thereby recalling the Paris-Wilkie translation. Section~\ref{sec:subst} treats substitutions of formulas for oracles, and Section~\ref{sec:deforacle} spells out how to define oracle computations and prove conservativity of $\forall\mathsf{S}^1_2(\PV(\alpha))$ over $\forall\mathsf{S}^1_2(\alpha)$. \subsection{Definitions and notations}\label{sec:univar} A {\em language $L$} is a set of function and relation symbols $S$ each having an arity $\mathit{ar}(S)\in \mathbb{N}$. We view constants as nullary function symbols. Writing a formula $\varphi$ or a term $t$ as~$\varphi(\bar x)$ or $t(\bar x)$ means that the free variables of $\varphi$ or $t$ are among those in the tuple $\bar x$. The interpretation of $S\in L$ in an $L$-structure~$\str A$ with universe $A$ is denoted by superscript~$S^A$. The interpretation of a term $t(x_0,\ldots,x_{r-1})$ is denoted $t^A$, a function from $A^r$ into $A$. Often we do not notationally distinguish between $\str A$ and $A$, or omit the superscript when $\str A$ is clear from context. An $L$-formula {\em with parameters from} $\str A$ is a formula in the language obtained from $L$ by adding every $a\in A$ as a constant. Such formulas are interpreted in $\str A$ understanding that the new constants are interpreted by themselves. \medskip The language $\ensuremath{{\sf PV}}\xspace$ contains the binary relation symbol~$<$ and a function symbol for every polynomial time Turing machine. 
We consider every such machine to take as inputs $\bar n\in\mathbb{N}^r$ for some fixed $r\in\mathbb{N}$ which is the arity of its symbol in $\ensuremath{{\sf PV}}\xspace$. The {\em standard $\ensuremath{{\sf PV}}\xspace$-model} has universe $\mathbb{N}$ and interprets these symbols by the function computed by the machine, and $<$ by the natural order. We denote the standard model also by $\mathbb{N}$ and do not distinguish notationally between a symbol in $\ensuremath{{\sf PV}}\xspace$ and its interpretation in~$\mathbb{N}$. We let $$ \forall\ensuremath{{\sf PV}}\xspace $$ denote the set of universal $\ensuremath{{\sf PV}}\xspace$-sentences which are true in the standard $\ensuremath{{\sf PV}}\xspace$-model $\mathbb{N}$. This theory goes back to DeMillo and Lipton \cite{millolipton}. \medskip To fix some notation we list some functions in $\ensuremath{{\sf PV}}\xspace$. It contains the {\em smash} $n\#m:=2^{|n|\cdot|m|}$ where the {\em length} $|n|:=\ceil{\log (n+1)}$ is the length of the binary expansion of $n$ (except that $|0|=0$). We have a binary $\mathit{bit}(i,n)$ with $n=\sum_{i<|n|}\mathit{bit}(i,n)\cdot 2^i$ and $\mathit{bit}(i,n)=0$ for $i\ge |n|$. We think of a number with binary expansion $100110$ as coding the string $01100$. We have a binary function in $\ensuremath{{\sf PV}}\xspace$ that maps a string to an initial segment of a given length. More precisely, $\ensuremath{{\sf PV}}\xspace$ contains a function mapping $(n,j)$ with $j<|n|$ to $$\textstyle n_{<j}:=2^j+\sum_{i<j}\mathit{bit}(i,n)\cdot 2^i. $$ Every $n$ {\em codes} the set $\{i\in\mathbb{N} \mid \mathit{bit}(i,n)=1\}$ of cardinality $\mathit{card}(n)$. We also write~$x\in y$ for $\mathit{bit}(x,y){=}1$. For every finite sequence $(n_0,\ldots, n_{k-1})\in\mathbb{N}^k$ there is a unique $n\in\mathbb{N}$ such that $\mathit{lh}(n)=k$ and $(n)_i=n_i$ for $i<\mathit{lh}(n)$ and $(n)_i=0$ for $i\ge \mathit{lh}(n)$. 
Here, $\mathit{lh}(n)$ is a unary function in $\ensuremath{{\sf PV}}\xspace$ and $(n)_i$ is a binary function in $\ensuremath{{\sf PV}}\xspace$ applied to $(n,i)$. There is a $k$-ary $t_k\in\ensuremath{{\sf PV}}\xspace$ such that $t_k(n_0,\ldots, n_{k-1})=n$; we write $\langle n_0,\ldots, n_{k-1}\rangle$ instead of $t_k(n_0,\ldots, n_{k-1})$. Further, $$ (n)_{<j} $$ is the code of $(n_0,\ldots, n_{\min\{k,j\}-1})$. We assume that for some constant $c>0$ \begin{equation}\label{eq:seqbound} \textstyle k<|n| < c\cdot (1+\sum_{i<k}|(n)_i|). \end{equation} \medskip Let $\alpha$ be a unary relation symbol. For a structure $M$ not interpreting $\alpha$ we let $(M,\alpha^M)$ denote its expansion interpreting $\alpha$ by $\alpha^M\subseteq M$. In particular, $(\mathbb{N},\alpha^\mathbb{N})$ with $\alpha^\mathbb{N}\subseteq\mathbb{N}$ is the expansion of the standard $\ensuremath{{\sf PV}}\xspace$-model $\mathbb{N}$ which interprets $\alpha$ by $\alpha^\mathbb{N}$. This structure has an expansion~$\langle\mathbb{N}, \alpha^\mathbb{N}\rangle$ interpreting the language $\ensuremath{{\sf PV}}\xspace(\alpha)$ which extends $\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$ by adding a symbol for every polynomial time oracle Turing machine. Such a symbol is interpreted in~$\langle\mathbb{N}, \alpha^\mathbb{N}\rangle$ by the function the machine computes when given oracle~$\alpha^\mathbb{N}$. \begin{definition} The theory $\forall\PV(\alpha)$ is the set of universal $\ensuremath{{\sf PV}}\xspace(\alpha)$-sentences which are true in $\langle\mathbb{N}, \alpha^\mathbb{N}\rangle$ for every $\alpha^\mathbb{N}\subseteq\mathbb{N}$. \end{definition} We use standard notations for formula classes. The {\em existential} or {\em universal closure} of a formula is the sentence obtained by existentially or universally quantifying its free variables.
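The bit- and sequence-coding conventions fixed above can be illustrated in a short sketch. The bit-level functions follow the defining equations in the text; the sequence coding, however, is one hypothetical choice of mine, since the text only fixes the interface ($\mathit{lh}$, $(n)_i$, $t_k$) and the length bound~\eqref{eq:seqbound}.

```python
def length(n):
    """|n|: length of the binary expansion of n, with |0| = 0."""
    return n.bit_length()

def bit(i, n):
    """bit(i, n): coefficient of 2^i in n; equals 0 for i >= |n|."""
    return (n >> i) & 1

def smash(n, m):
    """n # m := 2^(|n| * |m|)."""
    return 1 << (length(n) * length(m))

def initial_segment(n, j):
    """n_{<j} := 2^j + sum_{i<j} bit(i, n) * 2^i, intended for j < |n|."""
    return (1 << j) + (n & ((1 << j) - 1))

def encode_seq(seq):
    """One hypothetical coding <n_0, ..., n_{k-1}>: double each entry's
    bits ('00'/'11'), separate entries by '01', prepend a marker bit 1.
    The code has length O(sum_i (1 + |n_i|)), as the length bound asks."""
    if not seq:
        return 0
    parts = [''.join(b * 2 for b in format(n, 'b')) if n else '' for n in seq]
    return int('1' + '01'.join(parts), 2)

def decode_seq(code):
    """Inverse of encode_seq: len(decode_seq(n)) plays the role of lh(n)
    and decode_seq(n)[i] the role of (n)_i."""
    if code == 0:
        return []
    bits = format(code, 'b')[1:]        # drop the marker bit
    entries, cur = [], ''
    for i in range(0, len(bits), 2):    # pairs stay aligned by construction
        pair = bits[i:i + 2]
        if pair == '01':                # separator: close current entry
            entries.append(int(cur, 2) if cur else 0)
            cur = ''
        else:                           # data pair '00' or '11'
            cur += pair[0]
    entries.append(int(cur, 2) if cur else 0)
    return entries
```

Prefix-freeness of the doubled-bit encoding is what makes decoding unambiguous; any coding with this property and the stated length bound would serve equally well.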
For a set of formulas~$\Phi$ we let $\exists\Phi$ ($\forall\Phi$) be the closure of $\Phi$ under existential (universal) quantification $\exists x$ ($\forall x$). A formula in a language containing $\ensuremath{{\sf PV}}\xspace$ is {\em bounded} if it is obtained from atomic formulas by Boolean combinations and {\em bounded quantifiers} $\exists x{<}t, \forall x{<}t$ where $t$ is a $\ensuremath{{\sf PV}}\xspace$-term not containing~$x$. The {\em sharply bounded} formulas are similarly defined but allow only {\em sharply bounded quantifiers} $\exists x{<}|t|, \forall x{<}|t|$. We shall always indicate the language in the notation: the set of sharply bounded formulas in one of the lan\-guages~$\ensuremath{{\sf PV}}\xspace,\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$ or~$\ensuremath{{\sf PV}}\xspace(\alpha)$ is denoted by $\Delta_0^b,\Delta_0^b(\alpha)$ and $\Delta_0^b(\ensuremath{{\sf PV}}\xspace(\alpha))$ respectively. Closing under positive Boolean combinations, sharply bounded quantification and existential (non-sharply) bounded quantifica\-tion~$\exists x{<}t$ defines the sets $\Sigma^b_1,\Sigma^b_1(\alpha)$ and~$\Sigma^b_1(\ensuremath{{\sf PV}}\xspace(\alpha))$, respectively. The sets of bounded formulas are denoted $\Sigma_\infty^b,\Sigma_\infty^b(\alpha)$ and $\Sigma_\infty^b(\ensuremath{{\sf PV}}\xspace(\alpha))$. \medskip The following is easy to see. \begin{lemma}\label{lem:QE} Every $\Delta_0^b(\ensuremath{{\sf PV}}\xspace(\alpha))$-formula is $\forall\PV(\alpha)$-provably equivalent to some quantifier free $\ensuremath{{\sf PV}}\xspace(\alpha)$-formula; hence, $\forall\PV(\alpha)$ proves every $\forall\Delta_0^b(\ensuremath{{\sf PV}}\xspace(\alpha))$-sentence which is true in~$\langle\mathbb{N},\alpha^\mathbb{N}\rangle$ for every $\alpha^\mathbb{N}\subseteq\mathbb{N}$. Analogous statements hold for $\forall\ensuremath{{\sf PV}}\xspace$. \end{lemma} Let $\Phi$ be a set of formulas. 
The {\em minimization scheme}~$\mathsf{MIN}(\Phi)$ and {\em length minimization scheme} $\mathsf{LMIN}(\Phi)$ contain, respectively, for every~$\varphi(y,\bar x)\in\Phi$ the universal closure of \begin{eqnarray*} &&\varphi(x,\bar x)\to \exists y{\le} x\ \big(\varphi(y,\bar x)\wedge \forall z{<}y \ \neg\varphi(z,\bar x)\big),\\ &&\varphi(x,\bar x)\to \exists y{\le} x \ \big(\varphi(y,\bar x)\wedge \forall z{<}y\big(|z|{<}|y|\to \neg\varphi(z,\bar x)\big)\big). \end{eqnarray*} We introduce notation for {\em universal} variants of some relativized bounded arithmetics, namely those that are going to play a role later on: \begin{equation}\label{eq:thys} \begin{array}{lcl} \forall\mathsf{T}_2(\PV(\alpha))&:=&\forall\PV(\alpha)\cup\mathsf{MIN}(\Sigma^b_\infty(\ensuremath{{\sf PV}}\xspace(\alpha))),\\ \forall\mathsf{T}^1_2(\PV(\alpha))&:=&\forall\PV(\alpha)\cup \mathsf{MIN}(\Sigma^b_1(\ensuremath{{\sf PV}}\xspace(\alpha))),\\ \forall\mathsf{S}^1_2(\PV(\alpha))&:=&\forall\PV(\alpha)\cup \mathsf{LMIN}(\Sigma^b_1(\ensuremath{{\sf PV}}\xspace(\alpha))). \end{array} \end{equation} The theories $\forall\mathsf{T}_2,\forall\mathsf T^1_2,\forall\mathsf S^1_2$ are similarly defined in the language $\ensuremath{{\sf PV}}\xspace$ using $\forall\ensuremath{{\sf PV}}\xspace$ in place of~$\forall\PV(\alpha)$. Buss' original theories $\mathsf T_2, \mathsf T^1_2,\mathsf S^1_2$ (in the language $\ensuremath{{\sf PV}}\xspace$) are similarly defined but using a subset of $\forall\ensuremath{{\sf PV}}\xspace$ based on Cook's theory~\cite{cookpv} (cf.~\cite{krabuch}). \subsection{An auxiliary theory}\label{sec:aux} Let the theory $$ \mathit{Th}_{\Delta^b_0(\alpha)}(\N) $$ consist of all $\forall\Delta_0^b(\alpha)$-sentences which are true in $(\mathbb{N},\alpha^\mathbb{N})$ for every $\alpha^\mathbb{N}\subseteq\mathbb{N}$.
Then define $$\forall\mathsf{T}_2(\alpha),\forall\mathsf{T}^1_2(\alpha),\forall\mathsf{S}^1_2(\alpha)$$ as in \eqref{eq:thys} but in the language $\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$ and using $\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ instead of $\forall\PV(\alpha)$. These definitions look less natural than their analogues in the language $\ensuremath{{\sf PV}}\xspace(\alpha)$ but, in fact, the theories are not really different: \begin{proposition}\label{prop:cons} $\forall\mathsf{S}^1_2(\PV(\alpha))$ is conservative over $\forall\mathsf{S}^1_2(\alpha)$; in fact, every model $(M,\alpha^M)$ of~$\forall\mathsf{S}^1_2(\alpha)$ has a unique expansion to a model $\langle M,\alpha^M\rangle$ of $\forall\mathsf{S}^1_2(\PV(\alpha))$; conversely, every model of $\forall\mathsf{S}^1_2(\PV(\alpha))$ has this form. The same holds for~$\forall\mathsf{T}^1_2(\alpha)$ and~$\forall\mathsf{T}^1_2(\PV(\alpha))$, as well as for $\forall\mathsf{T}_2(\alpha)$ and $\forall\mathsf{T}_2(\PV(\alpha))$. \end{proposition} We give a proof in Section~\ref{sec:deforacle}. We feel $\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ is the right analogue of $\forall\PV(\alpha)$ or $\forall\ensuremath{{\sf PV}}\xspace$ in the language $\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$ because the analogue of Lemma~\ref{lem:QE} fails. Indeed: \begin{proposition}\label{prop:ugly} $ \mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ is not equivalent to a universal theory. \end{proposition} \begin{proof} Let $M$ be a proper elementary extension of the standard $\ensuremath{{\sf PV}}\xspace$-model $\mathbb{N}$. We claim that for every $\alpha^M\subseteq M$ all universal sentences of $\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ are true in $(M,\alpha^M)$. Equivalently, every quantifier free $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-formula~$\varphi(\bar x)$ which is satisfiable in $(M,\alpha^M)$ is also satisfiable in~$(\mathbb N,X)$ for some $X\subseteq\mathbb{N}$.
\medskip To see this, let $\bar a$ be a tuple from $M$ such that $(M,\alpha^M)\models\varphi(\bar a)$. The formula $\varphi(\bar x)$ is a Boolean combination of its atomic subformulas of the form $$ t(\bar x){<}s(\bar x),\ t(\bar x){=}s(\bar x), \ \alpha(t(\bar x)) $$ for certain $\ensuremath{{\sf PV}}\xspace$-terms~$t(\bar x),s(\bar x)$. The formula $\varphi(\bar a)$ is truth functionally satisfied when its atomic subformulas are assigned their truth values in~$(M,\alpha^M)$. Since $M$ is an elementary extension of $\mathbb{N}$, there exists a tuple~$\bar n$ from~$\mathbb{N}$ satisfying in~$\mathbb{N}$ the same inequalities (and equalities) of terms appearing in $\varphi(\bar x)$ as $\bar a$ does in~$M$. We can thus choose $X\subseteq\mathbb{N}$ that contains~$t^\mathbb{N}(\bar n)$ if and only if $\alpha^M$ contains~$t^M(\bar a)$; here,~$t(\bar x)$ ranges over the terms appearing in~$\varphi(\bar x)$. Then~$(\mathbb{N},X)$ and $\bar n$ give the same truth assignment to the atomic subformulas in~$\varphi(\bar x)$ as $(M,\alpha^M)$ and $\bar a$. Hence, $(\mathbb{N},X)\models\varphi(\bar n)$, and our claim is proved. \medskip It now suffices to show $(M,\alpha^M)\not\models\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ for $\alpha^M:=\mathbb{N}\subseteq M$. Indeed, any nonstandard $a\in M\setminus\mathbb{N}$ falsifies (plugging $a$ for $x$) $$ \alpha(0)\wedge\forall y{<}|x|(\alpha(y)\to\alpha(y{+}1))\to \alpha(|x|) $$ in $(M,\alpha^M)$. But the universal closure of this formula is in $\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$. \end{proof} The definitions of $\forall\mathsf{S}^1_2(\alpha),\forall\mathsf{T}^1_2(\alpha),\forall\mathsf{T}_2(\alpha)$ are robust with respect to these issues: \begin{lemma}\label{lem:PVpa} $\forall\ensuremath{{\sf PV}}\xspace\cup\mathsf{LMIN}(\Sigma^b_1(\alpha))$ proves $ \mathit{Th}_{\Delta^b_0(\alpha)}(\N).$ \end{lemma} We give a proof in the following section via a detour in propositional logic.
\subsection{Propositional logic and simulation}\label{sec:prop} Propositional formulas (in negation normal form) are built from literals and constants $0,1$ using $\vee,\wedge$. A literal is a constant, a variable $X$ or a negated variable $\neg X$. For a formula $F$ we let $\neg F$ be obtained by swapping $\wedge/\vee$ and $0/1$ and $X/\neg X$. {\em Depth 0} formulas are literals; {\em depth $d+1$} formulas are depth $d$ formulas or disjunctions or conjunctions thereof. We fix a Frege system: a set of finitely many sound inference rules such that any formula~$F$ that is a logical consequence of a set of formulas $\Gamma$ has a Frege proof from $\Gamma$. This is a sequence of formulas ending with $F$ such that all formulas are either from $\Gamma$ or follow from earlier formulas by an inference rule. See \cite[Section~4.4]{krabuch} for precise definitions. A depth~$d$ Frege proof is one that contains only depth~$d$ formulas. Formulas $F$, finite sets of formulas $\Gamma$, Frege proofs $\pi$, etc., are coded by (binary strings coded by) natural numbers. The {\em size} of these objects is the length of the coding number. The {\em Paris-Wilkie translation} maps $\Sigma^b_\infty(\alpha)$-sentences $\varphi$ with parameters from $\mathbb{N}$ to propositional formulas $\langle\varphi\rangle$ in variables written $\langle \alpha(m)\rangle,m\in\mathbb{N}$. Atomic sentences without~$\alpha$ are mapped to $0$ or $1$ according to their truth value (in the standard $\ensuremath{{\sf PV}}\xspace$-model); atoms~$\alpha(t)$ for a closed $\ensuremath{{\sf PV}}\xspace$-term $t$ (without variables and with parameters from $\mathbb{N}$) are mapped to~$\langle\alpha(m)\rangle$ for $m:=t^\mathbb{N}$ the value of $t$ in the standard $\ensuremath{{\sf PV}}\xspace$-model.
Recursively, define $\langle\neg\varphi\rangle:=\neg\langle\varphi\rangle$, $\langle\varphi\wedge\psi\rangle:=\langle\varphi\rangle\wedge\langle\psi\rangle$, and $ \textstyle\langle\forall y{<}t\ \psi(y)\rangle:=\textstyle\bigwedge_{m<t^{\mathbb{N}}} \langle\psi(m)\rangle $. A (partial) assignment $A$ {\em agrees with} $\alpha^\mathbb{N}\subseteq\mathbb{N}$ if for all $m\in\mathbb{N}$, $A$ is either undefined on the variable $\langle\alpha(m)\rangle$ or maps it to the truth value of $\alpha(m)$ in $(\mathbb{N},\alpha^\mathbb{N})$. If such $A$ is defined on all variables of $\langle\varphi(\bar n)\rangle$, then \begin{equation}\label{eq:Aalpha} A\models\langle\varphi(\bar n)\rangle\Longleftrightarrow(\mathbb{N},\alpha^\mathbb{N})\models\varphi(\bar n). \end{equation} For every fixed $\varphi(x_0,\ldots, x_{k-1})\in \Sigma^b_\infty(\alpha)$ the formulas $\langle\varphi(\bar n)\rangle$ have constant depth and quasipolynomial size. More precisely, there is $d\in\mathbb{N}$ such that for all $\bar n=(n_0,\ldots,n_{k-1})\in\mathbb{N}^k$ the formula $\langle\varphi(\bar n)\rangle$ has depth $d$ and size at most $ 2^{(1+\sum_{i<k}|n_i|)^{d}}$. \begin{proof}[Proof of Lemma~\ref{lem:PVpa}.] We formalize \eqref{eq:Aalpha} for fixed $\varphi(\bar x)\in \Delta_0^b(\alpha)$. There is a $\ensuremath{{\sf PV}}\xspace$-function mapping $\bar n$ to (the code of)~$\langle\varphi(\bar n)\rangle$. We write $\langle\varphi(\bar x)\rangle$ for this function. We code assignments~$A$ by sequences with entries $(A)_i$ of the form $\langle m_i,b_i\rangle$ meaning $A(\langle\alpha(m_i)\rangle)=b_i$. Choose a $\Delta_0^{b}(\alpha)$-formula $ \q{z \textit{ agrees with }\alpha}$ with the obvious meaning. Choose a quantifier free $\ensuremath{{\sf PV}}\xspace$-formula $\q{z\textit{ is defined on }y}$ defining (in the standard $\ensuremath{{\sf PV}}\xspace$-model) the pairs $(A,F)$ of assignments $A$ and formulas $F$ such that $A$ is defined on every variable appearing in $F$.
Choose a quantifier free $\ensuremath{{\sf PV}}\xspace$-formula $\textit{Sat}(x,y)$ defining the pairs $(A,F)$ of assignments $A$ and formulas $F$ such that $A$ is defined on $F$ and satisfies~$F$. Thus \eqref{eq:Aalpha} means that $\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ proves \begin{equation}\label{eq:fAalpha} \q{z \textit{ agrees with }\alpha}\wedge\q{z \textit{ is defined on }\langle\varphi(\bar x)\rangle}\to\big( \textit{Sat}(z,\langle\varphi(\bar x)\rangle) \leftrightarrow \varphi(\bar x) \big). \end{equation} \noindent{\em Claim 1.} $\forall\ensuremath{{\sf PV}}\xspace$ proves \eqref{eq:fAalpha}. \medskip \noindent{\em Claim 2.} $\forall\ensuremath{{\sf PV}}\xspace\cup\mathsf{LMIN}(\Sigma_1^b(\alpha))$ proves $\exists z \big(\q{z \textit{ agrees with }\alpha}\wedge\q{z \textit{ is defined on }\langle\varphi(\bar x)\rangle}\big). $ \medskip We omit the straightforward proofs. Let $\forall\bar x\varphi(\bar x)\in\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$. By \eqref{eq:Aalpha}, $\langle\varphi(\bar n)\rangle$ is a tautology for every tuple $\bar n$ from $\mathbb{N}$. In other words, $\forall\ensuremath{{\sf PV}}\xspace$ contains the universal closure of $$ \q{z \textit{ is defined on }\langle\varphi(\bar x)\rangle}\to\textit{Sat}(z,\langle\varphi(\bar x)\rangle). $$ This and the two claims imply that $\forall\ensuremath{{\sf PV}}\xspace\cup\mathsf{LMIN}(\Sigma_1^b(\alpha))$ proves $\varphi(\bar x)$. \end{proof} The propositional simulation of $\mathsf{T}_2(\alpha)$ extends to its universal variant $\forall\mathsf{T}_2(\alpha)$: \begin{proposition}\label{prop:simulation} Let $\varphi(x_0,\ldots,x_{k-1})\in\Sigma^b_\infty(\alpha)$. If $\forall\mathsf{T}_2(\alpha)$ proves $\varphi(\bar x)$, then there is $d\in\mathbb{N}$ such that for every $\bar n=(n_0,\ldots,n_{k-1})\in\mathbb{N}^k$ there is a size $2^{(1+\sum_{i<k}|n_i|)^{d}}$ depth $d$ Frege proof of~$\langle\varphi(\bar n)\rangle$. 
\end{proposition} \begin{proof} If $\forall\mathsf{T}_2(\alpha)\vdash\varphi(\bar x)$, then $\mathsf{T}_2(\alpha)\vdash(\psi(\bar x,\bar y)\to\varphi(\bar x))$ for some $\forall\bar x\bar y\psi(\bar x,\bar y)\in\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$. By the usual propositional simulation (see~\cite[Corollary 9.1.4]{krabuch}), for all $\bar n,\bar m$ there is a constant depth quasipolynomial (in $\bar n,\bar m$) Frege proof of $\neg\langle\psi(\bar n,\bar m)\rangle \vee\langle \varphi(\bar n)\rangle$. Choose the all 0 tuple $\bar 0$ for $\bar m$ and note, as in the previous proof, that $\langle\psi(\bar n,\bar 0)\rangle$ has size polylogarithmic in $\bar n$. By \eqref{eq:Aalpha}, $\langle\psi(\bar n,\bar 0)\rangle$ is a tautology, so has a constant depth proof of size exponential in $|\langle\psi(\bar n,\bar 0)\rangle|$, so quasipolynomial in~$\bar n$. Modus ponens gives a proof of $\langle \varphi(\bar n)\rangle$. \end{proof} \subsection{Oracle substitutions}\label{sec:subst} We need some notation for substitutions of oracles by formulas: given a $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-formula~$\chi$ and a $\ensuremath{{\sf PV}}\xspace(\alpha)$-formula $\psi(u,\bar y)$ let the $\ensuremath{{\sf PV}}\xspace(\alpha)$-formula \begin{equation}\label{eq:subpsi} \chi[\alpha/\psi(\cdot,\bar y)] \end{equation} be obtained from $\chi$ by replacing each atomic subformula of the form $\alpha(t)$ for some $\ensuremath{{\sf PV}}\xspace$-term~$t$ by $\psi(t,\bar y)$. As usual we silently assume that bounded variables in $\chi $ are suitably renamed to become distinct from those in $\bar y$. Of particular interest is the substitution of the oracle by a set polynomial time computable in it. We use special, suggestive notation in this case: for $f(u,\bar y)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ we write \begin{equation}\label{eq:subfct} \chi[\alpha/f_{\bar y}^{-1}(0)]:=\chi[\alpha/f(\cdot , \bar y){=}0]. 
\end{equation} Let $(N,\alpha^N)$ be a $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-structure. A set $A\subseteq N$ is {\em $\Delta_1^b(\alpha)$-definable in $(N,\alpha^N)$} if there are $\Sigma^b_1(\alpha)$-formulas with parameters from $N$ defining $A$ and its complement $N\setminus A$. \begin{lemma} \label{lem:subst} Let $\mathsf T$ be $\forall\mathsf{S}^1_2(\alpha),\forall\mathsf{T}^1_2(\alpha)$ or $\forall\mathsf{T}_2(\alpha)$ and $(N,\alpha^N) $ be a model of $\mathsf T$. If $A\subseteq N$ is $\Delta_1^b(\alpha)$-definable in $(N,\alpha^N)$, then $(N,A)\models\mathsf T$. \end{lemma} \begin{proof} Consider the case $\mathsf T=\forall\mathsf{S}^1_2(\alpha)$; the others are similar. It is easy to check that $(N,A)$ satisfies $\mathsf{LMIN}(\Sigma_1^b(\alpha))$. That it also satisfies $\mathit{Th}_{\Delta^b_0(\alpha)}(\N)$ then follows from Lemma~\ref{lem:PVpa}. \end{proof} \subsection{Defining oracle computations}\label{sec:deforacle} We think of an oracle computation as a binary decision tree whose inner nodes are labeled by queries to the oracle and whose leaves are labeled by the output. The tree is potentially huge but implicitly feasible in the sense that there is a polynomial time function $t(\bar x,z)$ computing the output or the next query from the input $\bar x$ and the answers $z$ obtained so far. The answers are coded by the bits of the number $z$, the most significant one not being used. As a convention, we shall code queries by odd numbers and outputs by even numbers. \begin{definition}\label{df:tree} Let $M$ be a model of $\forall\ensuremath{{\sf PV}}\xspace$ and let $t(\bar x,z),h(\bar x)$ be definable functions in~$M$.
Then $t(\bar x,z)$ is a {\em decision tree (of height at most $h(\bar x)$) in $M$} if $M$ satisfies the universal closure of~\eqref{eq:oraclefct} (and~\eqref{eq:qucomp}): \begin{eqnarray}\label{eq:oraclefct} i{<}|z|{-}1\wedge t(\bar x,z_{<i})\text{ is even}\ \to\ t(\bar x,z){=}t(\bar x, z_{<i}),\\\label{eq:qucomp} h(\bar x){\le}|z|\ \to\ t(\bar x, z)\text{ is even}. \end{eqnarray} For $\ensuremath{{\sf PV}}\xspace$-terms $t(\bar x,z),h(\bar x)$ we say $t(\bar x,z)$ is a {\em decision tree (of height $h(\bar x)$)} if this holds in all $\ensuremath{{\sf PV}}\xspace$-models, that is, if $\forall\ensuremath{{\sf PV}}\xspace$ proves~\eqref{eq:oraclefct} (and \eqref{eq:qucomp}). Let $\alpha^M\subseteq M$ and $t(\bar x,z)$ be a decision tree in $M$. Then $c\in M$ is a {\em sequence of $\alpha^M$-answers to $t$ on $\bar a$} if $(M,\alpha^M)\models \mathit{Answer}^\alpha_t(\bar a,c)$ where \begin{equation}\label{df:answeralpha} \mathit{Answer}^\alpha_t(\bar x,z):=\forall i{<}|z|{-}1\ \big( \mathit{bit}(i,z){=}0\ \leftrightarrow\ \neg\alpha(\lfloor t(\bar x,z_{<i})/2\rfloor)\wedge t(\bar x,z_{<i})\text{ is odd} \big). \end{equation} If, additionally, $t(\bar a,c)$ is even (in $M$), we call $c$ {\em complete}. \end{definition} Note that $\mathit{Answer}^\alpha_t$ is $\Delta^b_0(\alpha)$ if $t$ is a $\ensuremath{{\sf PV}}\xspace$-term. \begin{lemma}\label{lem:falphaf} For every $f(\bar x)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ there are $t(\bar x,z),h(\bar x)\in\ensuremath{{\sf PV}}\xspace$ such that $t(\bar x,z)$ is a decision tree of height $|h(\bar x)|$ and $\forall\PV(\alpha)$ proves \begin{equation}\label{eq:falphaf} f(\bar x) {=}y\ \leftrightarrow\ \exists z{<}h(\bar x) \big(\mathit{Answer}^\alpha_t(\bar x,z)\wedge t(\bar x,z){=}2y\big). \end{equation} \end{lemma} \begin{proof} Let $f$ correspond to the oracle machine $\mathbb{A}$.
Choose $t(\bar x,z)\in\ensuremath{{\sf PV}}\xspace$ representing the following algorithm: on input $(\bar x,z)$ run $\mathbb{A}$ on $\bar x$ answering queries by $\mathit{bit}(0,z),\mathit{bit}(1,z),\ldots$ until either $\mathbb{A}$ halts with result $y$ or asks the $|z|$-th query $y$ (hence $\mathit{bit}(|z|-1,z)=1$ is not used to answer queries); in the first case output $2y$ and in the second output $2y+1$. Choose~$h(\bar x)\in\ensuremath{{\sf PV}}\xspace$ such that~$|h(\bar x)|$ is bigger than the number of steps taken by $\mathbb{A}$ on $\bar x$. Let $g(\bar x)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ correspond to the oracle machine that on $\bar x$ runs $\mathbb{A}$ and outputs the string of oracle answers. Then $\forall\PV(\alpha)$ contains the universal closure of $$ \mathit{Answer}^\alpha_t(\bar x,g(\bar x))\wedge t(\bar x,g(\bar x)){=}2f(\bar x), $$ so $f(\bar x) {=}y$ implies the r.h.s.\ of \eqref{eq:falphaf}. The converse is clear by Lemma~\ref{lem:QE}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:cons}.] We only prove the first statement. Let $(M,\alpha^M)\models \forall\mathsf{S}^1_2(\alpha)$. The theory $\forall\mathsf{S}^1_2(\alpha)$ proves that the r.h.s.\ of \eqref{eq:falphaf} defines (the graph of) a function. We may hence define the expansion $\langle M,\alpha^M\rangle$ according to the equivalences~\eqref{eq:falphaf}. Uniqueness is clear by Lem\-ma~\ref{lem:falphaf}. By standard means (see e.g.\ \cite[Theorem~1.3.3.3]{busshand}) the same lemma implies that~$\Sigma^b_1(\ensuremath{{\sf PV}}\xspace(\alpha))$-formulas are equivalent to $\Sigma^b_1(\alpha)$-formulas, provably in the theory $\mathsf T$ consisting of $\forall\mathsf{S}^1_2(\alpha)$ plus the universal closures of the equivalences~\eqref{eq:falphaf}. Since $\mathsf T$ holds in $\langle M,\alpha^M\rangle$, we can infer $\mathsf{LMIN}(\Sigma^b_1(\ensuremath{{\sf PV}}\xspace(\alpha)))$ from $\mathsf{LMIN}(\Sigma^b_1(\alpha))$.
Further, $\Sigma^b_1(\alpha)$-formulas are $\forall\mathsf{S}^1_2(\alpha)$-provably equivalent to {\em strict} $\Sigma^b_1(\alpha)$-formulas (see e.g.~\cite[Lemma~5.2.14]{krabuch}), i.e., formulas obtained from $\Delta_0^b(\alpha)$-formulas by bounded existential quantification. It follows that quantifier free $\ensuremath{{\sf PV}}\xspace(\alpha)$-formulas are $\mathsf T$-provably equivalent to $\forall\Delta_0^b(\alpha)$-formulas. Thus $\langle M,\alpha^M\rangle\models\forall\PV(\alpha)$, and we conclude $\langle M,\alpha^M\rangle\models\forall\mathsf{S}^1_2(\PV(\alpha))$. Conversely, if $N\models\forall\mathsf{S}^1_2(\PV(\alpha))$, then its $\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$-reduct models $\forall\mathsf{S}^1_2(\alpha)$ (Lemma~\ref{lem:QE}). Thus $N$ equals the unique expansion of this reduct to an $\forall\mathsf{S}^1_2(\PV(\alpha))$-model. \end{proof} \section{NP search problems and propositional proofs} A (type 2) NP search problem is given by a polynomial time decidable (with oracle $\alpha$) and polynomially bounded relation $R(x,y)$ such that for every $x$ there exists $y$ with $R(x,y)$. The computational task is, given $x$ (and oracle $\alpha$), to compute some $y$ with $R(x,y)$. The set of such problems (without oracle) was introduced to complexity theory in \cite{megiddopapa}. Many natural such problems ask to find a certain configuration in an exponentially large first-order structure given by an oracle $\alpha$. For example, the {\em ($n^2$ to $n$) weak pigeonhole principle $\mathit{WPHP}$} asks, given $n$ (in binary) and an oracle $\alpha$ coding a function $f$ from $[n]^2$ into $[n]$, to find an assignment to $x,y,x',y'$ in $[n]$ satisfying \begin{equation}\label{eq:wphpformula} (f(x,y){=}f(x',y')\wedge \neg x{=}x')\ \vee\ (f(x,y){=}f(x',y')\wedge \neg y{=}y').
\end{equation} Similarly, there is a type 2 NP search problem $Q_\varphi$ associated to every existential sentence~$\varphi$ which is valid in the finite: given~$n$ (in binary) and an oracle $\alpha$, witness the existential quantifiers in the structure coded by~$\alpha$ on $[n]$. Section~\ref{sec:searchprbl} formally defines type 2 NP search problems, Turing and many-one reductions, and characterizes Turing reducibility by $\forall\mathsf{S}^1_2(\PV(\alpha))$-provability (Proposition~\ref{prop:Tred}). This characterization is essentially known. It is one of our main motivations to study universal variants of bounded arithmetics. Section~\ref{sec:coding} discusses two ways to encode finite structures by finite sets, the {\em unary} and the {\em binary} encoding. Section~\ref{sec:FOsearch} then formally defines the search problems $Q_\varphi$ above. Finally, we derive Theorem~\ref{thm:bjstrong} in Section~\ref{sec:bj}. \subsection{NP search problems}\label{sec:searchprbl} Formally, we identify a {\em type 2 NP search problem} with a $\Delta_0^b(\alpha)$-for\-mu\-la~$\varphi(x, y)$ such that for some $\ensuremath{{\sf PV}}\xspace$-term $t( x)$ the following are true in $(\mathbb{N},\alpha^{\mathbb{N}})$ for all $\alpha^{\mathbb{N}}\subseteq\mathbb{N}$: \begin{eqnarray}\label{eq:total} &&\forall x\exists y\varphi( x, y),\\\label{eq:tbd} &&\forall x\forall y(\varphi(x,y)\to y{<}t(x)) . \end{eqnarray} If $\varphi( x, y)$ is a $\Delta_0^b$-formula we speak of a {\em type 1 NP search problem}. We refer to \eqref{eq:total} as the {\em totality} and to \eqref{eq:tbd} as the {\em boundedness} of $\varphi(x,y)$. We shall discuss examples in Section~\ref{sec:comb}. Being {\em solvable in polynomial time} means that there is $f(x)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ such that $\forall x \varphi(x,f(x))$ is true in $\langle\mathbb{N},\alpha^\mathbb{N}\rangle$ for all $\alpha^\mathbb{N}\subseteq\mathbb{N}$.
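The computational content of the $\mathit{WPHP}$ example can be made concrete. The following Python sketch (ours, purely illustrative) finds the guaranteed collision by brute force; a genuine solver would access $f$ only through the oracle $\alpha$ and could not afford to scan all of $[n]^2$.

```python
def wphp_search(n, f):
    """Brute-force solution of the (n^2 to n) WPHP search problem: f maps
    [n] x [n] into [n], so for n >= 2 two distinct pairs must collide by
    the pigeonhole principle.  Returns (x1, y1, x2, y2) with
    f(x1, y1) == f(x2, y2) and (x1, y1) != (x2, y2).  Running time is
    poly(n), i.e. exponential in the input length |n|; this illustrates
    totality only, not the intended query complexity."""
    seen = {}
    for x in range(n):
        for y in range(n):
            value = f(x, y)
            if value in seen:
                x1, y1 = seen[value]
                return x1, y1, x, y
            seen[value] = (x, y)

# For n = 3 and f(x, y) = (x + y) mod 3 the scan collides at (0, 1), (1, 0):
assert wphp_search(3, lambda x, y: (x + y) % 3) == (0, 1, 1, 0)
```

Note that such a brute-force search is very different from solvability in polynomial time as just defined, which asks for a single $\ensuremath{{\sf PV}}\xspace(\alpha)$-function $f(x)$ with $\forall x\,\varphi(x,f(x))$ true in $\langle\mathbb{N},\alpha^\mathbb{N}\rangle$ for all $\alpha^\mathbb{N}\subseteq\mathbb{N}$.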
By Lemma~\ref{lem:QE} this means that $\forall\PV(\alpha)$ proves $\varphi(x,f(x))$. This terminology follows \cite{thakra,thask} except that we allow only a unary predicate $\alpha$ instead of an arbitrary finite first-order language. Section~\ref{sec:coding} spells out how $\alpha$ can code such languages. Our choice allows technically simple definitions of reductions: \begin{definition}\label{df:Tred} Let $\varphi(x, y)$ and $\psi( u, v)$ be type 2 NP search problems. We say $\varphi( x, y)$ is {\em (polynomial time) Turing reducible} to $\psi( u, v)$ if there are $f(z,x,v),g(x,v),h(x,v)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ and $\ensuremath{{\sf PV}}\xspace$-terms $q(x),s(x)$ such that the universal closure of \begin{equation}\label{eq:Tred} \begin{split} &g(x,w){<}s(x)\ \wedge \Big(\forall i{<}|q(x)| \ \psi\big(g(x,(v)_{<i}),(v)_{i}\big)\big[\alpha/f(\cdot,x,(v)_{<i}){=}0\big] \to\ \varphi(x,h(x,v))\Big) \end{split} \end{equation} is true in $\langle\mathbb{N},\alpha^{\mathbb{N}}\rangle$ for all $\alpha^{\mathbb{N}}\subseteq\mathbb{N}$. If $q(x)$ is the constant $\ell\in\mathbb{N}$, we speak of Turing reducibility {\em with $\ell$ queries}; and for $\ell=1$ we speak of {\em many-one} reducibility. \end{definition} Intuitively, \eqref{eq:Tred} interprets $v$ as a sequence $(v)_0,(v)_1,\ldots$ of answers to oracle queries to~$\psi( u, v)$. The $i$-th query is given by some instance $u_i$ and oracle $\beta_i$ computed by $g$ and~$f$ from the input $x$ and answers $(v)_0,\ldots, (v)_{i-1}$ obtained so far, namely $u_i:=g(x,(v)_{<i})$ and $\beta_i:=f(\cdot,x,(v)_{<i})^{-1}(0)$. Finally,~$h$ returns some solution to $\varphi(x,y)$. For simplicity, the formalization \eqref{eq:Tred} assumes that Turing reductions always make the same number of queries, namely $|q(x)|$, independently of the answers obtained. The first conjunct ensures that the size $|u_i|$ of the queries is bounded by $|s(x)|$, hence the whole computation runs in time polynomial in $|x|$.
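Operationally, Definition~\ref{df:Tred} describes a simple driver loop. The following Python sketch is ours; \texttt{solve\_psi}, \texttt{f}, \texttt{g}, \texttt{h} are hypothetical stand-ins for a $\psi$-solver and the $\ensuremath{{\sf PV}}\xspace(\alpha)$-functions of the definition.

```python
def run_turing_reduction(x, solve_psi, f, g, h, num_queries):
    """Driver loop behind the reduction format: the i-th instance is
    u_i = g(x, answers so far), solved relative to the substituted
    oracle beta_i = {u : f(u, x, answers so far) == 0}; after
    num_queries rounds (standing in for |q(x)|), h extracts a solution
    of phi from x and the full answer sequence."""
    answers = []
    for _ in range(num_queries):
        prefix = tuple(answers)
        u_i = g(x, prefix)
        beta_i = lambda u, p=prefix: f(u, x, p) == 0  # the oracle f^{-1}(0)
        answers.append(solve_psi(u_i, beta_i))
    return h(x, tuple(answers))

# Toy instance: psi asks for v = u + 1 and phi asks for y = x + 1.
result = run_turing_reduction(
    x=5,
    solve_psi=lambda u, beta: u + 1,   # solver for psi (ignores the oracle)
    f=lambda u, x, p: 0,               # substituted oracle: all of N
    g=lambda x, p: x,                  # every query instance is x itself
    h=lambda x, v: v[-1],              # solution: the last answer
    num_queries=1,
)
assert result == 6
```

The sketch omits the size check $g(x,w)<s(x)$ expressed by the first conjunct of \eqref{eq:Tred}.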
This first conjunct can be omitted in case $q(x)$ is constant. In particular, $\varphi( x, y)$ is many-one reducible to $\psi( u, v)$ if and only if there are $f,g,h\in \ensuremath{{\sf PV}}\xspace(\alpha)$ such that for all $\alpha^\mathbb{N}\subseteq \mathbb{N}$, $\langle\mathbb{N},\alpha^\mathbb{N}\rangle$ satisfies the universal closure of \begin{equation}\label{eq:manyone} \psi(g(x),v) \big[\alpha/f_x^{-1}(0)\big] \to\ \varphi(x,h(x,v)). \end{equation} The following notation mimics notations like $\mathit{WPHP}[\ensuremath{{\sf PV}}\xspace(\alpha)]$ familiar from the literature. For a type 2 NP search problem $\varphi(x,y)$, let \begin{equation}\label{eq:subPV} \varphi[\ensuremath{{\sf PV}}\xspace(\alpha)]:=\Big\{ \forall x\forall \bar z\exists y\ \varphi(x,y)[\alpha/f^{-1}_{\bar z}(0)]\mid f(u,\bar z)\in\ensuremath{{\sf PV}}\xspace(\alpha) \Big\}. \end{equation} \begin{definition}\label{df:conseq} Let $\varphi(x, y)$ and $\psi( u, v)$ be type 2 NP search problems and $\mathsf T$ a theory. We say $\varphi( x, y)$ is a {\em consequence of $\psi( u, v)$ over $\mathsf T$} if $ \mathsf T\cup \psi[\ensuremath{{\sf PV}}\xspace(\alpha)]$ proves $ \exists y\varphi(x,y).$ Otherwise we say $\varphi( x, y)$ is {\em independent over $\mathsf T$ from $\psi( u, v)$}. \end{definition} We are not aware of a reference for this notion of consequence. It is a natural logical analogue of the complexity theoretic notion of reducibility. The mode of speech follows Hanika \cite[Definition~4.4]{hanikathesis} whose notion is weaker in that $\mathsf T$ is given only one sentence from~$\psi[\ensuremath{{\sf PV}}\xspace(\alpha)]$ when asked to prove $ \exists y\varphi(x,y)$. We state the following only for the universal variants of bounded arithmetics that we explicitly defined but it is clear from the proof that it holds for other universal variants as well. 
\begin{proposition}\label{prop:trans} Let $\mathsf T$ be $\forall\mathsf{S}^1_2(\PV(\alpha)),\forall\mathsf{T}^1_2(\PV(\alpha))$ or $\forall\mathsf{T}_2(\PV(\alpha))$. Consequentiality over~$\mathsf T$ is transitive as a relation over type 2 NP search problems. \end{proposition} \begin{proof} Let $\mathsf T'$ be $\forall\mathsf{S}^1_2(\alpha),\forall\mathsf{T}^1_2(\alpha)$ or $\forall\mathsf{T}_2(\alpha)$ if $\mathsf T$ is $\forall\mathsf{S}^1_2(\PV(\alpha)),\forall\mathsf{T}^1_2(\PV(\alpha))$ or $\forall\mathsf{T}_2(\PV(\alpha))$, respectively. Suppose $\varphi( x, y)$, $\psi(u,v)$, $\chi( z, w)$ are type 2 NP search problems and $\psi(u,v)$ is a consequence of $\chi( z, w)$ over $\mathsf T$ and $\varphi( x, y)$ is a consequence of $\psi( u, v)$ over $\mathsf T$. We have to show that $\varphi( x, y)$ is a consequence of $\chi( z, w)$ over $\mathsf T$. Let a model of $\mathsf T\cup\chi[\ensuremath{{\sf PV}}\xspace(\alpha)]$ be given. By Proposition~\ref{prop:cons} it has the form $\langle N,\alpha^N\rangle$ for~$(N,\alpha^N)\models\mathsf T'$. For contradiction, assume $\langle N,\alpha^N\rangle\models\forall y\neg\varphi(n,y)$ for some $n\in N$. Then $\langle N,\alpha^N\rangle\models\forall v\neg\psi(m,v)[\alpha/f_{\bar a}^{-1}(0)]$ for certain $f(z,\bar z)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ and $\bar a,m$ from~$N$. Let~$A$ be the set defined by $f(z,\bar a){=}0$ in $\langle N,\alpha^N\rangle$ and note it is $\Delta^b_1(\alpha)$-definable in $(N,\alpha^N)$ by Lemma~\ref{lem:falphaf}. Then $(N,A)\models \forall v\neg\psi(m,v)$ and $(N,A)\models\mathsf T'$ by Lemma~\ref{lem:subst}, so $\langle N, A\rangle\models\mathsf T$ by Proposition~\ref{prop:cons}. We are left to show $\langle N,A\rangle\models\chi[\ensuremath{{\sf PV}}\xspace(\alpha)]$. For contradiction, assume there are $g(u,\bar u)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ and $\bar b,k$ from $N$ such that $\langle N,A\rangle\models\forall w\neg\chi(k,w)[\alpha/g_{\bar b}^{-1}(0)]$. 
If $\theta(u,\bar u)$ is a $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-formula equivalent to $g(u,\bar u){=}0$ in $\langle N,A\rangle$, then $(N,A)$ satisfies $\forall w\neg\chi(k,w)[\alpha/\theta(\cdot,\bar b)]$ and thus \begin{equation}\label{eq:theta} \langle N,\alpha^N\rangle\models\forall w\neg\chi(k,w)\big[\alpha/\theta(\cdot,\bar b)[\alpha/f_{\bar a}^{-1}(0)]\big]. \end{equation} To get the desired contradiction it suffices to choose $\theta$ such that $\theta(u,\bar u)[\alpha/f_{\bar z}^{-1}(0)]$ is $\forall\PV(\alpha)$-provably equivalent to $\ell(u,\bar u,\bar z){=}0$ for some $\ell\in\ensuremath{{\sf PV}}\xspace(\alpha)$. Indeed, then \eqref{eq:theta} gives $\langle N,\alpha^N\rangle\models\forall w\neg\chi(k,w)\big[\alpha/\ell_{\bar b,\bar a}^{-1}(0)\big]$, contradicting $\langle N,\alpha^N\rangle\models\chi[\ensuremath{{\sf PV}}\xspace(\alpha)]$. We choose $\ensuremath{{\sf PV}}\xspace$-terms $t(u,\bar u,z),h(u,\bar u)\in\ensuremath{{\sf PV}}\xspace$ according to Lemma~\ref{lem:falphaf} for $g(u,\bar u)$ and take the r.h.s. of \eqref{eq:falphaf} for $\theta(u,\bar u)$. Then $\theta(u,\bar u)[\alpha/f_{\bar z}^{-1}(0)]$ is $\Sigma^b_1(\ensuremath{{\sf PV}}\xspace(\alpha))$. But the leading $\exists z{<}h(u,\bar u)$ can be eliminated by replacing $z$ by $s(u,\bar u,\bar z)$ for a suitable $s\in \ensuremath{{\sf PV}}\xspace(\alpha)$. The resulting $\Delta^b_0(\ensuremath{{\sf PV}}\xspace(\alpha))$-formula is $\forall\PV(\alpha)$-provably equivalent to $\ell(u,\bar u,\bar z){=}0$ for a suitable $\ell\in\ensuremath{{\sf PV}}\xspace(\alpha)$. \end{proof} The following proposition characterizes natural reducibilities by consequentiality over the universal variants $\forall\PV(\alpha)$ and $\forall\mathsf{S}^1_2(\PV(\alpha))$. The interesting directions from right to left follow from known witnessing theorems.
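The collapsing step at the end of the proof of Proposition~\ref{prop:trans} — a function computed with the substituted oracle $f^{-1}_{\bar z}(0)$ is itself a single $\ensuremath{{\sf PV}}\xspace(\alpha)$-function $\ell$ — is, computationally, just composition of oracle machines. A Python sketch (ours; $g$ is modelled with an explicit oracle argument):

```python
def compose_oracle(g, f):
    """Collapse the substitution alpha / f^{-1}(0): given g(u, oracle)
    querying an abstract oracle and f(w, zbar) defining the substituted
    set {w : f(w, zbar) == 0}, return ell(u, zbar) computing g with
    every query answered by evaluating f."""
    def ell(u, zbar):
        return g(u, oracle=lambda w: f(w, zbar) == 0)
    return ell

# g adds 1 exactly when the oracle accepts u; the substituted oracle is
# the set of even numbers, coded by f(w, zbar) = w % 2.
ell = compose_oracle(lambda u, oracle: u + 1 if oracle(u) else u,
                     lambda w, zbar: w % 2)
assert ell(4, ()) == 5 and ell(3, ()) == 3
```

Each query of $g$ is answered by evaluating $f$, so $\ell$ queries only the oracle hidden inside $f$; the formal counterpart of this composition is exactly what Lemma~\ref{lem:falphaf} and the quantifier elimination above provide, and it is also what the witnessing arguments below exploit.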
In \cite[Fact~4.6]{hanikathesis} and \cite[Proposition~7.1]{pudlakbulletin} proofs appear for $\mathsf{S}^1_2(\alpha)$ and only one member of $\psi[\ensuremath{{\sf PV}}\xspace(\alpha)]$. The converse directions from left to right are easy given the definition of~$\forall\PV(\alpha)$. \begin{proposition} \label{prop:Tred} Let $\varphi(x, y)$ and $\psi( u, v)$ be type 2 NP search problems. \begin{enumerate}\itemsep=0pt \item[(a)] $\varphi( x, y)$ is Turing reducible to $\psi( u, v)$ if and only if $\varphi(x,y)$ is a consequence of $\psi( u, v)$ over $\forall\mathsf{S}^1_2(\PV(\alpha))$. \item[(b)] There is $\ell\in\mathbb{N}$ such that $\varphi( x, y)$ is Turing reducible to $\psi( u, v)$ with $\ell$ queries if and only if $\varphi(x,y)$ is a consequence of $\psi( u, v)$ over $\forall\PV(\alpha)$. \item[(c)] $\varphi( x, y)$ is solvable in polynomial time if and only if $\forall\PV(\alpha)$ proves $\exists y\varphi(x,y)$, if and only if $\forall\mathsf{S}^1_2(\PV(\alpha))$ proves $\exists y\varphi(x,y)$. \end{enumerate} \end{proposition} \begin{proof} We first prove (a). For the direction from left to right, assume $\varphi( x, y)$ is Turing reducible to $\psi( u, v)$. By Lemma~\ref{lem:QE}, $\forall\PV(\alpha)$ proves the $\Delta_0^b(\ensuremath{{\sf PV}}\xspace(\alpha))$-formula \eqref{eq:Tred}. We argue in $\forall\mathsf{S}^1_2(\PV(\alpha))\cup\psi[\ensuremath{{\sf PV}}\xspace(\alpha)]$ that there exists $v$ satisfying $$ \forall i{<}|q(x)| \ \psi\big(g(x,(v)_{<i}),(v)_{i}\big)\big[\alpha/f(\cdot,x,(v)_{<i}){=}0\big]. $$ Let $\chi_0(x,v)$ be obtained by replacing $\forall i{<}|q(x)|$ by $\forall i{<}\mathit{lh}(v)$, and consider the formula $$ \chi_1(x,w):=\exists v\big( \mathit{lh}(v){=}|q(x)|{-}|w|\wedge \chi_0(x,v) \big). $$ Let $t(x)$ witness boundedness \eqref{eq:tbd} of $\psi(u,v)$. We can assume~$t(x)$ is non-decreasing, i.e., $\forall\ensuremath{{\sf PV}}\xspace$ proves $(x{\le} x'\to t(x){\le} t(x'))$. 
By \eqref{eq:seqbound} the quanti\-fier~$\forall i{<}\mathit{lh}(v)$ can be sharply bounded in $v$ and $\exists v$ can be bounded by~$c\#(q(x)\# t(s(x)))$ for a suitable $c\in\mathbb{N}$. Hence, $\chi_1(x,w)$ is $\forall\ensuremath{{\sf PV}}\xspace$-provably equivalent to a $\Sigma_1^b(\ensuremath{{\sf PV}}\xspace(\alpha))$-formula. Since trivially $\chi_1(x,q(x))$, $\mathsf{LMIN}(\Sigma_1^b(\ensuremath{{\sf PV}}\xspace(\alpha)))$ gives a minimal length $w$ with $\chi_1(x,w)$. Then $|w|=0$ because each answer sequence $v$ can be prolonged by any $v'$ with $\psi\big(g(x,v),v'\big)\big[\alpha/f(\cdot,x,v){=}0\big]$; and such~$v'$ exists by $\psi[\ensuremath{{\sf PV}}\xspace(\alpha)]$. \medskip For the direction from right to left, assume $\forall\mathsf{S}^1_2(\PV(\alpha))\cup\psi[\ensuremath{{\sf PV}}\xspace(\alpha)]$ proves $ \exists y\varphi(x,y)$. By compactness there are $\ell\in\mathbb{N}$ and $f_0(z,\bar z_0),\ldots, f_{\ell-1}(z,\bar z_{\ell-1})\in\ensuremath{{\sf PV}}\xspace(\alpha)$ and a quantifier free $\ensuremath{{\sf PV}}\xspace(\alpha)$-formula $\chi(\bar x)$ such that $\forall\bar x\chi(\bar x)\in\forall\PV(\alpha)$ and such that $\mathsf{S}^1_2(\ensuremath{{\sf PV}}\xspace(\alpha))$ proves \begin{eqnarray*} &&\exists y\bar w\forall\bar v\ \theta(x,y,\bar w,\bar v),\textup{ where}\\ &&\quad\bar w:= \bar x \ u_0\bar z_0\cdots u_{\ell-1}\bar z_{\ell-1}, \\ &&\quad\bar v:=v_0\cdots v_{\ell-1},\\ &&\quad\theta:=\textstyle \neg \chi(\bar x)\vee\bigvee_{i<\ell} \neg\psi(u_i,v_i)\big[\alpha/f_i(\cdot,\bar z_{i}){=}0 \big]\vee\varphi(x,y).
\end{eqnarray*} By a well-known witnessing argument (see \cite[Theorem~7.3.3]{krabuch}) a witness tuple $y\bar w$ is computable from $x$ by a polynomial time counterexample computation~\cite{counter}: a polynomial time Student computes a candidate $y^0\bar w^0$ and sends it to a computationally unbounded Teacher; Teacher answers with a counterexample $\bar v^0$, i.e., such that $\neg\theta(x,y^0,\bar w^0,\bar v^0)$; then Student computes another candidate~$y^1\bar w^1$ and Teacher answers with a counterexample $\bar v^1$, and so on, until Student finally computes~$y^t\bar w^t$ such that no counterexample exists; then the computation stops with output $y^t\bar w^t$. We can assume that $t$ equals $|q(x)|$ for some $\ensuremath{{\sf PV}}\xspace$-term~$q(x)$, independent of Teacher's answers. The whole computation runs in time polynomial in $|x|$, so there is a $\ensuremath{{\sf PV}}\xspace$-term $s(x)$ bounding all components of all $y^j\bar w^j$'s, and in particular the $u^j_i$'s. Now just note that each answer from Teacher can be simulated by $\ell$ oracle calls to~$\psi$, namely to get $\bar v^j$ such that $\psi(u^j_i,v^j_i)\big[\alpha/f_i(\cdot,\bar z^j_{i}){=}0 \big]$ for all $i<\ell$, where we write $\bar w^j$ as $\bar x^j \ u^j_0\bar z^j_0\cdots u^j_{\ell-1}\bar z^j_{\ell-1}$ and $\bar v^j$ as $ v^j_0\cdots v^j_{\ell-1}$. More specifically, we look for $f,g,h\in\ensuremath{{\sf PV}}\xspace(\alpha)$ such that the universal closure of \eqref{eq:Tred} is true. The function $f$, given $x$ and previous answers, simulates the functions $ f_i(\cdot,\bar z^j_{i})$ for the $\bar z^j_i$ computed by Student; $g$ computes the $u^j_i$'s by simulating Student; $h$ outputs the component $y^t$ of Student's final candidate $y^t\bar w^t$. \medskip The proof of (b) is similar but simpler.
For the forward direction, $\ell$ many $v_0,\ldots,v_{\ell-1}$ can be collected in the tuple $\langle v_0,\ldots, v_{\ell-1}\rangle$ without the need to rely on $\mathsf{LMIN}(\Sigma_1^b(\ensuremath{{\sf PV}}\xspace(\alpha)))$. For the converse, $\ensuremath{{\sf PV}}\xspace(\alpha)$-provability yields a counterexample computation with constantly many rounds. This follows from the KPT-Theorem~\cite{kpt}; in fact, it follows from a basic version of it, proved in \cite[Theorem~2.2]{cooktha} by a simple argument that also works for $\forall\PV(\alpha)$. \medskip The two forward directions of (c) are clear (recall Lemma~\ref{lem:QE}). The last statement implies the first by applying (a) with $v{=}v$ for $\psi(u,v)$. \end{proof} The type 2 NP search problems provably total in universal variants of bounded arithmetics form a meaningful complexity class in that they are closed under Turing reductions. Again, we state this only for $\forall\mathsf{S}^1_2(\PV(\alpha)),\forall\mathsf{T}^1_2(\PV(\alpha))$ and $\forall\mathsf{T}_2(\PV(\alpha))$. Note that by the previous proposition consequentiality over these theories is implied by Turing reducibility. \begin{corollary}\label{cor:Tredclosed} Let $\mathsf T$ be $\forall\mathsf{S}^1_2(\PV(\alpha)),\forall\mathsf{T}^1_2(\PV(\alpha))$ or $\forall\mathsf{T}_2(\PV(\alpha))$, and let $\varphi(x,y)$ and $\psi(u,v)$ be type 2 NP search problems. If $\varphi(x,y)$ is a consequence of $\psi(u,v)$ over $\mathsf T$ and $\mathsf T$ proves $\exists v\psi(u,v)$, then $\mathsf T$ proves $\exists y\varphi(x,y)$. \end{corollary} \begin{proof} Assume that $\varphi(x,y)$ is a consequence of $\psi(u,v)$ over $\mathsf T$ and $\mathsf T$ proves $\exists v\psi(u,v)$. The latter is equivalent to $\psi(u,v)$ being a consequence of $w{=}w$ over $\mathsf T$. By Proposition~\ref{prop:trans}, $\varphi(x,y)$ is a consequence of $w{=}w$ over $\mathsf T$. Hence $\mathsf T$ proves $\exists y\varphi(x,y)$.
\end{proof} It might be worthwhile to look for complexity theoretic reductions equivalent to consequentiality over higher levels of the bounded arithmetic hierarchy (cf.~\cite[Section~7]{pudlakbulletin}). Such a notion of reduction is implicit in \cite[Proof of Theorem~8]{bkt} for the special case of $\mathsf{T}^1_2(\alpha)$ and~$\psi$ the search problem associated to the weak pigeonhole principle~\eqref{eq:wphpformula}. To define this and similar problems we need to agree on how to code finite structures by oracles. \subsection{Unary and binary codes of structures}\label{sec:coding} There are at least two common ways to code structures by oracles, namely, the unary and the binary encoding. The unary encoding codes functions by their graphs while the binary encoding uses their bit graphs. Both codings work not only over $\mathbb{N}$ but over certain non-standard models too. Let $(N,\alpha^N)$ be a model of $\forall\mathsf{S}^1_2(\alpha)$, and let $L$ be a finite language. For notational simplicity we assume $L\subseteq \mathbb{N}$ and $\mathbb{N}$ is an initial segment of $N$. Recall that $\mathit{ar}(S)$ denotes the arity of the symbol $S\in L$. For $n\in N$ we write $$ [n]:=\{a\in N\mid a<n\}. $$ Here and below we omit superscripts as in $<^N$ for interpretations of $\ensuremath{{\sf PV}}\xspace$-symbols in $N$. \begin{definition}\label{df:Aalpha} Let $n\in N\setminus\{0\}$. We say $\alpha^N$ is the {\em unary code (in $N$)} of the $L$-structure $\str A(L,n,\alpha^N) $ with universe $[n]$ if $\alpha^N$ contains exactly the tuples $\langle S,\bar a\rangle$ where $S\in L$ is a relation symbol and $\bar a\in S^{[n]}$ (the interpretation of $S $ in $\str A(L,n,\alpha^N)$), or $\langle S,\bar a,b\rangle$ where $S\in L$ is a function symbol and $ S^{[n]}(\bar a)=b$. If such a structure exists, we say $\str A(L,n,\alpha^N)$ {\em is defined (in~$(N,\alpha^N)$)}; otherwise, the notation $\str A(L,n,\alpha^N)$ is undefined.
\end{definition} A disadvantage of the unary code is that not every set $\alpha^\mathbb{N}\subseteq \mathbb{N}$ is the unary code of an $L$-structure on~$[n]$ because the relations determined for function symbols have to be graphs of functions on~$[n]$. Another disadvantage is that function symbols cannot be evaluated in polynomial time given oracle access to the code. This is avoided by the binary code: \begin{definition}\label{df:Balpha} Let $n\in N\setminus\{0\}$. Call an element of $N$ {\em relevant (wrt $L,n$)} if it equals either \begin{enumerate}\itemsep=0pt \item[--] $\langle S,\bar a\rangle$ for some $\bar a\in [n]^{\mathit{ar}(S)}$ and $S\in L$ a relation symbol, or \item[--] $\langle S,\bar a,i\rangle$ for some $i<|n|$ and $\bar a\in [n]^{\mathit{ar}(S)}$ and $S\in L$ a function symbol. \end{enumerate} A set $\alpha^N\subseteq N$ is a {\em binary code (in $N$)} of the $L$-structure $\str B(L,n,\alpha^N) $ with universe $[n]$ if \begin{enumerate}\itemsep=0pt \item[--] every relation symbol $S\in L$ is interpreted in $\str B(L,n,\alpha^N)$ by the set $S^{[n]}$ of those $\bar a\in [n]^{\mathit{ar}(S)}$ with $ \langle S,\bar a\rangle\in\alpha^N$; \item[--] every function symbol $S\in L$ is interpreted in $\str B(L,n,\alpha^N)$ by the function $S^{[n]}$ mapping $\bar a\in [n]^{\mathit{ar}(S)}$ to $\min\{a,n-1\}$ for the unique $a<2^{|n|}$ such that for all $i<|n|$ we have $\mathit{bit}(i,a)$ equal to 1 or 0 depending on whether~$\langle S,\bar a,i \rangle$ is in~$\alpha^N$ or not. \end{enumerate} \end{definition} \begin{remark} Some comments are in order: \begin{itemize}\itemsep=0pt \item[--] Since $(N,\alpha^N)\models\forall\mathsf{S}^1_2(\alpha)$, there exists a unique $a$ as required. Hence, by Lemma~\ref{lem:subst}, $\str B(L,n,A)$ is well-defined for every $n\in N\setminus\{0\}$ and every $\Delta_1^b( \alpha)$-definable $A\subseteq N$. \item[--] The minimum above is an almost arbitrary convention to ensure the right range.
It can be avoided when restricting $n$ to powers of 2, as is frequently done in the context of NP search problems (e.g.~\cite{papa,bj}). \item[--] Every set $\alpha^N\subseteq N$ such that $(N,\alpha^N)\models\forall\mathsf{S}^1_2(\alpha)$ is the binary code (in $N$) of a unique $L$-structure on~$[n]$. In particular, every set $\alpha^\mathbb{N}\subseteq \mathbb{N}$ is the binary code (in~$\mathbb{N}$) of a unique $L$-structure on~$[n]$. \item[--] Functions can be evaluated in polynomial time with oracle access to the binary code: for a, say, unary function symbol $S\in L$ there is $\tilde S(x,y)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ such that $\tilde S(a,n)$ is the value of $S$ on $a<n$ in $\str B(L,n,\alpha^\mathbb{N})$. \end{itemize} \end{remark} The following lemma states, for all models of $\forall\mathsf{S}^1_2(\alpha)$, that the unary code is in P relative to the binary code, and that the binary code is in $\textup{NP}\cap\textup{coNP}$ relative to the unary code (see~\cite[Section~7.6]{krabuch} for complexity classes in $N$). \begin{lemma}\label{lem:AB} There are a $\Delta_0^b( \alpha)$-formula $\psi_0(u,x)$ and $\Sigma_1^b( \alpha)$-formulas $\psi_1(u,x),\psi_2(u,x)$ independent of $(N,\alpha^N)$ such that for all $n\in N\setminus\{0\}$: \begin{enumerate}\itemsep=0pt \item[(a)] If $A\subseteq N$ denotes the set defined by $\psi_0(u,n)$ in $(N,\alpha^N)$, then $\str A(L,n,A)$ is defined and equals $\str B(L,n,\alpha^N)$. \item[(b)] If $\str A(L,n,\alpha^N)$ is defined and $A\subseteq N$ denotes the set defined by $\psi_1(u,n)$ in $(N,\alpha^N)$, then $\psi_2(u,n)$ defines $N\setminus A$ in $(N,\alpha^N)$ and $\str A(L,n,\alpha^N)=\str B(L,n,A)$. \end{enumerate} \end{lemma} \begin{proof}[Sketch of proof.] We only sketch the definition of $\psi_1(u,x)$.
It implements the following procedure: given $(u,n)$, reject if $u$ is not relevant wrt $L,n$; else, say $u=\langle S,\bar a,i\rangle$ for $\bar a\in[n]^{\mathit{ar}(S)}$, $i<|n|$ and $S\in L$ a function symbol; guess $b\in [n]$; if $\alpha(\langle S,\bar a,b\rangle)\wedge\mathit{bit}(i,b){=}1$, accept; else reject. \end{proof} \subsection{NP search problems from finitary combinatorial principles}\label{sec:FOsearch} Let $L$ be a finite language disjoint from $\ensuremath{{\sf PV}}\xspace$. Following \cite{bj} we use existential first-order $L$-sentences of a syntactically simple form to define type 2 NP search problems. It is important to allow not only symbols from $L$ but additionally ``built-in'' symbols. For notational simplicity we only consider built-in symbols from $\ensuremath{{\sf PV}}\xspace$: \begin{definition}\label{df:builtin} An {\em $L$-formula with built-in $\ensuremath{{\sf PV}}\xspace$} is a $(\ensuremath{{\sf PV}}\xspace\cup L)$-formula. \end{definition} The difference is in the semantics: $L$-formulas with built-in $\ensuremath{{\sf PV}}\xspace$ are evaluated in $L$-structures with universe $\mathbb{N}$ or $[n]$ for $n\in\mathbb{N}\setminus\{0\}$ (up to isomorphism). On universe $\mathbb{N}$ the evaluation is as usual by considering the expansion interpreting the symbols from $\ensuremath{{\sf PV}}\xspace$ as in the standard model. For an $L$-structure on $[n]$ it is usual in finite model theory to consider the expansion by the graphs of $\ensuremath{{\sf PV}}\xspace$-function symbols restricted to $[n]$. We proceed equivalently but avoid the extra symbols for the graphs. Instead we require that every atomic formula in which some $\ensuremath{{\sf PV}}\xspace$-function symbol $f$ occurs has the form $f(\bar t){=}s$ where $\bar t,s$ are $L$-terms. Such an atom expresses that $(\bar t,s)$ is in the graph of $f$. 
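To make the graph semantics concrete, here is a minimal Python sketch (our own illustration; the function \texttt{holds} and its interface are not part of the formal development): an atom $f(\bar u){=}v$ over universe $[n]$ holds exactly when all arguments and the value lie below $n$ and the standard-model computation of $f$ agrees, so the atom can fail simply because the value of $f$ escapes the universe.

```python
# Graph semantics of a built-in function symbol over a finite universe [n]:
# the atom f(args) = value holds iff (args, value) lies in the graph of f
# restricted to [n].
def holds(f, args, value, n):
    return all(a < n for a in args) and value < n and f(*args) == value

add = lambda x, y: x + y   # a stand-in built-in function

print(holds(add, (2, 2), 4, 5))                          # True: 2 + 2 = 4 < 5
print(any(holds(add, (3, 4), c, 5) for c in range(5)))   # False: 3 + 4 escapes [5]
```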
We omit further details because, in fact, we are only interested in {\em basic} sentences, following Buss and Johnson's \cite{bj} mode of speech: \begin{definition} \label{df:basic} An $L$-formula with built-in $\ensuremath{{\sf PV}}\xspace$ is {\em basic} if it equals \begin{equation}\label{eq:basic} \textstyle \exists \bar y\ \bigvee_{i\in I} \bigwedge_{j\in J}\lambda_{ij}, \end{equation} where $I,J$ are nonempty index sets and each $\lambda_{ij}$ is a literal of the form $$R(\bar u),\ \neg R(\bar u),\ f(\bar u){=}v,\ \neg u{=}v,\textup{ or } u{=}v, $$ where $R$ is a relation symbol and $f$ a function symbol from $L\cup\ensuremath{{\sf PV}}\xspace$, and $\bar u,u,v$ are variables. \end{definition} This slightly deviates from \cite[Definition~2.9]{bj} in that relation symbols are forbidden there, while constant symbols from $\ensuremath{{\sf PV}}\xspace$ are allowed within $\bar u,u,v$ above. To make precise how such sentences are evaluated in $L$-structures with universe $U=[n]$ or $U=\mathbb{N}$ we stipulate that $u{<}v$ (which is of the form $R(\bar u)$ above) defines the natural order on~$U$, and $f(\bar u){=}v$ for $r$-ary $f(\bar u)\in\ensuremath{{\sf PV}}\xspace$ defines $\{(\bar a,b)\in U^{r+1}\mid f^\mathbb{N}(\bar a)=b\}$. \begin{definition}\label{df:fcp} A {\em finitary combinatorial principle (in the language $L$)} is a basic $L$-sentence with built-in $\ensuremath{{\sf PV}}\xspace$ that is {\em valid in the finite}, i.e., true in all finite $L$-structures with universe~$[n]$ for some $n\in\mathbb{N}\setminus\{0\}$. Being {\em without built-in symbols} means that $\ensuremath{{\sf PV}}\xspace$-symbols do not occur. \end{definition} \begin{remark}\label{rem:herbrand} Standard Herbrandization allows one to compute from any $L$-formula $\varphi$ with built-in $\ensuremath{{\sf PV}}\xspace$ an equivalid basic $L'$-formula $\varphi'$ with built-in $\ensuremath{{\sf PV}}\xspace$ where $L'$ is $L$ plus certain function symbols.
Note that a negative literal $\neg f(\bar u){=}v$ can be eliminated using $\exists y(f(\bar u){=}y\wedge \neg y{=}v)$. In fact, $\varphi$ is true in all $L$-structures on a given universe ($\mathbb{N}$ or $[n]$) if and only if $\varphi'$ is true in all $L'$-structures on that universe. \end{remark} Let $\exists\bar y\psi(\bar y)$ be a basic $L$-sentence with built-in $\ensuremath{{\sf PV}}\xspace$, and $\bar y=(y_0,\ldots,y_{k-1})$. Define \begin{equation}\label{eq:Amodels} \q{\str A(L,x,\alpha)\models\psi(y)} \end{equation} to be the quantifier free $\ensuremath{{\sf PV}}\xspace(\alpha)$-formula obtained from $\psi(\bar y)$ as follows: first, replace $L$-atoms of the form $R(\bar u)$ by $\alpha(\langle R,\bar u\rangle)$ and $ f(\bar u){=}v$ by $\alpha(\langle f, \bar u,v\rangle)$ (note $\ensuremath{{\sf PV}}\xspace$-atoms are left untouched); second, letting $\psi'(y_0,\ldots,y_{k-1})$ denote the resulting formula, define \eqref{eq:Amodels} to be $$ 0{<}x\to\textstyle\bigwedge_{j{<}k} (y)_j{<}x\wedge\psi'((y)_0,\ldots,(y)_{k-1}); $$ note $(y)_j$ is a $\ensuremath{{\sf PV}}\xspace$-term with variable $y$ and a constant $j\in\ensuremath{{\sf PV}}\xspace$. Even if $\exists \bar y\psi(\bar y)$ is valid in the finite, $\q{\str A(L,x,\alpha)\models\psi(y)}$ might not be a type 2 NP search problem. It can fail to be total (cf.~\eqref{eq:total}) since $\alpha$ can fail to be the unary code of some $L$-structure on~$[x]$. One can define a different total search problem: find $y$ such that $\q{\str A(L,x,\alpha)\models\psi(y)}$ if $\str A(L,x,\alpha)$ is defined, and otherwise $y$ witnesses that~$\alpha$ is not such a code. But this property of $y$ is not verifiable in polynomial time with oracle~$\alpha$, so the search problem is not NP. These problems disappear when using the binary code.
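The contrast between the two codes can be run on a toy example. The following Python sketch is our own modelling (oracle elements are Python tuples rather than numerically paired codes, and the language has a single unary function symbol $f$): an arbitrary oracle may fail to be a unary code, while binary decoding always yields a total function, with the $\min$ convention capping values at $n-1$.

```python
# Decode an oracle (a set of coded tuples) over universe [n] for a single
# unary function symbol "f".

def unary_decode(oracle, n):
    """Unary code: the relevant tuples must form the graph of a total
    function on [n]; return that graph, or None if they do not."""
    graph = {}
    for entry in oracle:
        if len(entry) == 3 and entry[0] == "f":
            _, a, b = entry
            if a < n and b < n:
                if a in graph:              # two values for one argument
                    return None
                graph[a] = b
    return graph if set(graph) == set(range(n)) else None

def binary_decode(oracle, n):
    """Binary code: bit i of f(a) is 1 iff ("f", a, i) is in the oracle,
    for i < |n|; values are capped at n - 1 as in the definition."""
    nbits = n.bit_length()                  # the length |n|
    return {a: min(sum(1 << i for i in range(nbits)
                       if ("f", a, i) in oracle), n - 1)
            for a in range(n)}

# The same arbitrary oracle: no unary structure (f(0) gets two values,
# f(1) gets none), but a unique binary one.
oracle = {("f", 0, 0), ("f", 0, 1), ("f", 2, 1)}
print(unary_decode(oracle, 3))   # None
print(binary_decode(oracle, 3))  # {0: 2, 1: 0, 2: 2}
```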
The formula \begin{equation}\label{eq:Bmodels} \q{\str B(L,x,\alpha)\models\psi(y)} \end{equation} is similarly defined but replacing $ f(\bar u){=}v$ (not by $\alpha(\langle f, \bar u,v\rangle)$ but instead) by a $\Delta^b_0(\alpha)$-formula defining the graph of the interpretation of $f$ in $\str B(L,x,\alpha)$. The choice of this formula shall not play any further role; for example, one might take \begin{equation*}\label{eq:binf} \begin{array}{rcl} &&\Big( v{<}x\wedge \forall i{<}|x|\big(\alpha(\langle f,\bar u,i\rangle)\leftrightarrow \mathit{bit}(v,i){=}1\big)\Big)\\ &&\vee\; \Big( v{=}x{-}1\wedge\exists i{<}|x|\big( \alpha(\langle f,\bar u,i\rangle) \wedge \mathit{bit}(x{-}1,i){=}0 \\ &&\qquad \wedge \ \forall j{<}|x| ( i{<}j\to (\alpha(\langle f,\bar u,j\rangle)\leftrightarrow \mathit{bit}(x{-}1,j){=}1 ) ) \big) \Big). \end{array} \end{equation*} All formulas have the free variables shown. We employ suggestive notation for substitutions. E.g. $\q{\str B(L,n,f_{\bar z}^{-1}(0))\models\psi(a)}$ is obtained by substituting $n,a$ for $x,y$ and $f_{\bar z}^{-1}(0)$ for~$\alpha$ (see~\eqref{eq:subfct}). The following is clear: \begin{lemma} \label{lem:formAB} Let $(N,\alpha^N)$ be a model of $\forall\mathsf{S}^1_2(\alpha)$ and $\exists y_0\cdots y_{k-1} \psi(y_0,\ldots,y_{k-1})$ be a basic $L$-formula with built-in $\ensuremath{{\sf PV}}\xspace$. Then for all $(n,a)\in N^2$ with $n\neq 0$: $$ N\models \q{\str B(L,n,\alpha)\models\psi(a)}\ \Longleftrightarrow\ \str B(L,n,\alpha^N)\models\psi((a)_0,\ldots, (a)_{k-1}). $$ If furthermore $\str A(L,n,\alpha^N)$ is defined, then $$ N\models \q{\str A(L,n,\alpha)\models\psi(a)}\ \Longleftrightarrow\ \str A(L,n,\alpha^N)\models\psi((a)_0,\ldots, (a)_{k-1}). $$ \end{lemma} If $\exists \bar y\psi(\bar y)$ is valid in the finite, then $\q{\str B(L,x,\alpha)\models\psi(y)}$ is a type 2 NP search problem in the sense of Section~\ref{sec:searchprbl}. 
Indeed, the above lemma (for $N=\mathbb{N}$) implies totality \eqref{eq:total}, and boundedness is witnessed by $t(x):=c\#( x\#\cdots\# x)$ with~$k$ iterations of $\#$ and suitable $c\in\mathbb{N}$ (by \eqref{eq:seqbound}). It is the problem, given a natural $n>0$ and access to an oracle~$\alpha^\mathbb{N}\subseteq\mathbb{N}$, to find a satisfying assignment of $\psi(y_0,\ldots,y_{k-1})$ in $\str B(L,n,\alpha^\mathbb{N})$. \begin{definition}\label{df:assNPSP} Let $\varphi=\exists\bar y\psi(\bar y)$ be a finitary combinatorial principle in the language $L$. The type 2 NP search problem $Q_\varphi$ {\em associated to} $\varphi$ is $\q{\str B(L,x,\alpha)\models\psi(y)}$. \end{definition} Here, and in similar contexts below, we silently assume that the language $L$ is finite and disjoint from $\ensuremath{{\sf PV}}\xspace$, and that $\psi(\bar y)$ is quantifier free. \subsection{Proof of Theorem~\ref{thm:bjstrong}}\label{sec:bj} Let $\exists\bar y\varphi(\bar y),\exists\bar w\tilde \varphi(\bar w)$ be finitary combinatorial principles in the languages $L,\tilde L$, respectively. Hence we have type 2 NP search problems $\q{\str B(L, x,\alpha)\models\varphi(y)}$ and $\q{\str B(\tilde L, \tilde x,\alpha)\models\tilde\varphi(w)}$. Let $t(x)$ and $\tilde t(\tilde x)$ be terms witnessing their boundedness \eqref{eq:tbd}. Using the propositional translation $\langle\cdot\rangle$ of Section~\ref{sec:prop}, the totality of these search problems is naturally expressed by a sequence of propositional tautologies, one for each universe $[n]$ where $n>0$. We get two such sequences, one for the unary and one for the binary code of structures. There is some recent work~\cite{sergithesis,barny} comparing the two translations in propositional proof complexity. \begin{definition} Let $n\in\mathbb{N}\setminus\{0\}$. The {\em binary translation of $\varphi$ on $[n]$} is $$ \Big\langle \exists y{<}t(n)\q{\str B(L, n,\alpha)\models\varphi(y)}\Big\rangle. 
$$ \end{definition} The formula $\str A(L,x,\alpha)\textit{ is defined}$ is the conjunction of $$ \forall \bar u{<}x\exists v{<}x\ \alpha(\langle S, \bar u,v\rangle)\wedge \forall \bar u,v,v'{<}x \big(v{=}v'\vee \neg\alpha(\langle S, \bar u,v\rangle)\vee\neg\alpha(\langle S, \bar u,v'\rangle)\big) $$ for every function symbol $S\in L$. This is a $\Delta_0^b(\alpha)$-formula with free variable $x$. It is satisfied by $n\neq 0$ in a model $(N,\alpha^N)$ of $\forall\mathsf{S}^1_2(\alpha)$ if and only if $ \str A( L, n,\alpha^N)$ is defined in $(N,\alpha^N)$. \begin{definition}\label{df:untransl} Let $n\in\mathbb{N}\setminus\{0\}$. The {\em unary translation of $\varphi$ on $[n]$} is $$ \Big\langle \str A(L,n,\alpha)\textit{ is defined}\to \exists y{<}t(n)\q{\str A(L, n,\alpha)\models\varphi(y)}\Big\rangle. $$ \end{definition} ``Propositional translation'' in Theorems~\ref{thm:bj} and \ref{thm:bjstrong} refers to the unary translation. \begin{example}\label{ex:wphptransl} A basic sentence expressing the $n^2$ to $n$ weak pigeonhole principle~\eqref{eq:wphpformula} is the existential closure of \begin{equation*} \big(f(x,y){=}z\wedge f(x',y'){=}z\wedge \neg x{=}x'\big)\ \vee\ \big(f(x,y){=}z\wedge f(x',y'){=}z\wedge \neg y{=}y'\big). \end{equation*} Write $i\in[n^2]$ as $i=i_0\cdot n+i_1$ for $i_0,i_1\in[n]$. Further write $p_{ij}$ for the propositional variable $\langle\alpha(\langle f,i_0,i_1,j\rangle)\rangle$ where $i\in[n^2],j\in[n]$. The unary translation on $[n]$ has many occurrences of the Boolean constants 0,1. If one eliminates these occurrences by repeatedly replacing subformulas $0\vee F, 1\wedge F$ by $F$ etc., then one gets the familiar disjunction of $$ \begin{array}{lcl} \textstyle\bigwedge_{j<n}\neg p_{ij}&&i\in[n^2],\\ p_{ij}\wedge p_{ij'}&&i\in[n^2],\ j,j'\in[n],j\neq j',\\ p_{ij}\wedge p_{i'j}&&j\in[n],i,i'\in[n^2], i\neq i', \end{array} $$ with multiple occurrences of the last disjuncts.
\end{example} \begin{remark}The unary translation is very similar to the propositional translation used by Buss and Johnson~\cite{bj}. More precisely, the translation in \cite[Definition~3.2]{bj} produces a sequent $F\Rightarrow G$; if one eliminates Boolean constants as indicated in the example above both in $(\neg F\vee G)$ and in our unary translation, then one obtains the same formula. \end{remark} A {\em substitution instance} of a propositional formula is obtained by simultaneously replacing some of its variables by propositional formulas. The first statement of the following is a slightly more detailed statement of Theorem~\ref{thm:bjstrong}. \begin{theorem} If $\q{\str B(L,x,\alpha)\models\varphi(y)}$ is a consequence of $\q{\str B(\tilde L, \tilde x,\alpha)\models\tilde \varphi(w)}$ over $\forall\mathsf{T}_2(\PV(\alpha))$, then there are $d,n_0\in\mathbb{N}$ such that for all $n>n_0$ there are size $2^{|n|^d}$ depth~$d$ Frege proofs of the unary translation of $\varphi$ on $[n]$ from substitution instances of the unary translations of $\tilde\varphi$ on~$[\tilde n]$ for all $\tilde n<2^{|n|^d}$. The same holds for the binary translations of $\varphi$ and $\tilde\varphi$. \end{theorem} \begin{proof} Assume $\q{\str B(L,x,\alpha)\models\varphi(y)}$ is a consequence of $\q{\str B(\tilde L, \tilde x,\alpha)\models\tilde \varphi(w)}$ over~$\forall\mathsf{T}_2(\PV(\alpha))$. Recall $t(x),\tilde t(\tilde x)$ are terms witnessing the boundedness of these search problems. By compactness there is a finite $\Delta\subseteq\ensuremath{{\sf PV}}\xspace(\alpha)$ such that $ \forall\mathsf{T}_2(\PV(\alpha))$ proves \begin{equation}\label{eq:conseq} \textstyle \bigwedge_{f(z,\bar z)\in\Delta}\forall \tilde x \bar z\exists w{<}\tilde t(\tilde x)\ \q{\str B(\tilde L,\tilde x,f^{-1}_{\bar z}(0))\models\tilde\varphi(w)}\ \to\ \exists y{<}t(x)\ \q{\str B(L,x,\alpha)\models\varphi(y)}. \end{equation} Let $\psi_0(u,x)$ be the formula from Lemma~\ref{lem:AB}. 
\medskip \noindent{\em Claim 1.} For every $f(z,\bar z)\in\ensuremath{{\sf PV}}\xspace(\alpha)$, $\forall\mathsf{T}_2(\PV(\alpha))$ proves \begin{equation}\label{eq:ant} \begin{split} \textstyle &\Big(\big( \str A(\tilde L,\tilde x,\alpha)\textit{ is defined}\to \exists w{<}\tilde t(\tilde x)\q{\str A(\tilde L,\tilde x,\alpha)\models\tilde\varphi(w)}\big)\big[\alpha/\psi_0(\cdot,\tilde x)\big]\Big)\big[\alpha/f^{-1}_{\bar z}(0)\big]\\ & \to \exists w{<}\tilde t(\tilde x)\q{\str B(\tilde L,\tilde x,f^{-1}_{\bar z}(0))\models\tilde\varphi(w)}. \end{split} \end{equation} \noindent{\em Proof of Claim 1:} By Proposition~\ref{prop:cons}, models of $\forall\mathsf{T}_2(\PV(\alpha))$ have the form $\langle M,\alpha^M\rangle$ where $(M,\alpha^M)\models\forall\mathsf{T}_2(\alpha)$. Suppose the assignment of $\tilde n,\bar c$ to $\tilde x,\bar z$ falsifies the succedent of~\eqref{eq:ant} in $\langle M,\alpha^M\rangle$, i.e., $\str B(\tilde L,\tilde n,f^{-1}_{\bar c}(0))\not\models\exists\bar w\tilde\varphi(\bar w)$ by Lemma~\ref{lem:formAB}. We have to show that~$\tilde n,\bar c$ falsify the antecedent of~\eqref{eq:ant} in $\langle M,\alpha^M\rangle$. Let $A:=\{a\in M\mid f^M(a,\bar c)=0\}$. Then $A$ is $\Delta^b_1(\alpha)$-definable in $(M,\alpha^M)$ by Lemma~\ref{lem:falphaf}, so $(M,A)\models\forall\mathsf{T}_2(\alpha)$ by Lemma~\ref{lem:subst}. Writing~$B$ for the set defined by $\psi_0(u,\tilde n)$ in $(M,A)$, Lemma~\ref{lem:AB} gives that $\str A(\tilde L,\tilde n,B)$ is defined (in $(M,A)$) and equals $\str B(\tilde L,\tilde n,A)$. Thus $$ (M,A)\not\models\big( \str A(\tilde L,\tilde n,\alpha)\textit{ is defined}\to \exists w{<}\tilde t(\tilde n)\q{\str A(\tilde L,\tilde n,\alpha)\models\tilde\varphi(w)}\big)\big[\alpha/\psi_0(\cdot,\tilde n)\big], $$ by Lemma~\ref{lem:formAB}. Then $\tilde n,\bar c$ falsify the antecedent of \eqref{eq:ant} in $\langle M,\alpha^M\rangle$.
\hfill$\dashv$\medskip For every $f(z,\bar z)\in\Delta$, the antecedent of $\eqref{eq:ant}$ is $\forall\PV(\alpha)$-provably equivalent to a $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-formula. This formula is obtained by substituting atoms $f(t,\bar z){=}0$ by suitable $\Sigma^b_1(\alpha)$-formulas obtained from $\Sigma^b_1(\alpha)$-definitions of the graph of $f$ (see Lemma~\ref{lem:falphaf}). Let $$ \chi_0(\tilde x,\bar z_0),\ldots, \chi_{|\Delta|-1}(\tilde x,\bar z_{|\Delta|-1}) $$ enumerate the $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-formulas thus obtained. By conservativity (Proposition~\ref{prop:cons}) \begin{equation}\label{eq:alphaimpl} \textstyle \forall\mathsf{T}_2(\alpha)\ \vdash\ \bigwedge_{i< |\Delta|}\forall \tilde x \bar z_{i}\chi_i(\tilde x,\bar z_i)\ \to\ \exists y{<}t(x)\q{\str B(L,x,\alpha)\models\varphi(y)}. \end{equation} Let $\psi_1(u,x)$ be the formula from Lemma~\ref{lem:AB}. \medskip \noindent{\em Claim 2.} $\forall\mathsf{T}_2(\alpha)$ proves \begin{equation}\label{eq:prvA} \begin{split}& \textstyle \bigwedge_{i< |\Delta|}\forall \tilde x \bar z_{i} \chi_i(\tilde x,\bar z_i)\big[ \alpha/\psi_1(\cdot,x) \big] \\ & \to \big( \str A(L,x,\alpha)\textit{ is defined}\to\exists y{<}t(x)\q{\str A(L,x,\alpha)\models\varphi(y)}\big). \end{split} \end{equation} \noindent{\em Proof of Claim 2:} Suppose $(M,\alpha^M)\models\forall\mathsf{T}_2(\alpha)$ and $n\in M$ falsifies the succedent of~\eqref{eq:prvA} in~$(M,\alpha^M)$. Then $n\neq 0$, $\str A(L,n,\alpha^M)$ is defined and $\str A(L,n,\alpha^M)\not\models\exists\bar y\varphi(\bar y)$ by Lemma~\ref{lem:formAB}. Let $A\subseteq M$ be defined by $\psi_1(u,n)$ in $(M,\alpha^M)$. By Lemma~\ref{lem:subst}, $(M,A)\models\forall\mathsf{T}_2(\alpha)$. By Lemma~\ref{lem:AB}, $\str B(L,n,A)$ equals $\str A(L,n,\alpha^M)$, so $(M,A)\not\models\exists y{<}t(x)\q{\str B(L,n,\alpha)\models\varphi(y)}$ by Lemma~\ref{lem:formAB}. 
Thus $(M,A)$ falsifies the antecedent of \eqref{eq:alphaimpl}, so $n$ falsifies the antecedent of~\eqref{eq:prvA} in $(M,\alpha^M)$. \hfill$\dashv$\medskip Parikh's theorem (see e.g.~\cite[Theorem~1.4.3]{busshand}) allows one to bound $\forall \tilde x \bar z_{i}$ in \eqref{eq:prvA} by a $\ensuremath{{\sf PV}}\xspace$-term $s(x)$. Thereby we get a $\Sigma^b_\infty(\alpha)$-formula and can apply Proposition~\ref{prop:simulation}. This yields for every natural $n>0$ a quasipolynomial (in~$n$) size bounded depth Frege proof of the unary translation of $\varphi$ on $[n]$ from the formulas $\textstyle \left\langle\chi_i(\tilde n,\bar c_i)\big[ \alpha/\psi_1(\cdot,n) \big]\right\rangle $ where $ \tilde n,\bar c_i<s(n),i<|\Delta|$. These formulas are substitution instances of the unary translation of $ \tilde\varphi$ on $[\tilde n]$. \medskip The proof of the second statement is similar but simpler: from \eqref{eq:conseq} move to a $(\ensuremath{{\sf PV}}\xspace\cup\{\alpha\})$-formula by substituting definitions for the graphs of the functions in $\Delta$. Then bound the quantifiers $\forall\tilde x\bar z$ using Parikh's theorem and apply the simulation (Proposition~\ref{prop:simulation}). \end{proof} \section{Finitary combinatorial principles}\label{sec:comb} From a computational perspective it is natural to view a finitary combinatorial principle as a search problem as in Definition~\ref{df:assNPSP}. From a more logical perspective one might think of it as a reasoning rule that allows one to infer the existence of certain configurations in finite structures. The interesting case is when the principle fails in some infinite structure, so the rule is sound only in the finite. It is not obvious how to compare the logical strength of such principles in the finite since they all hold in all finite structures.
The crucial observation is that they might behave differently with respect to {\em partial} finite structures, allowing the distinction between {\em weak} and {\em strong} principles. Intuitively, a principle is weak if seeing only a small fraction of a given structure is already sufficient to verify its truth. We shall verify later that the thus distinguished logical strength of principles implies distinct computational complexities of the associated type 2 NP search problems. We define partial structures and their logic in Section~\ref{sec:partial}, and their codes by partial oracles in Section~\ref{sec:partialcodes}. Weak and strong principles are defined in Section~\ref{sec:weakstrong} and examples are discussed in Section~\ref{sec:exas}. Section~\ref{sec:dense} establishes the combinatorial lemmas for the forcing constructions to come. \subsection{Partial structures}\label{sec:partial} Let $L$ be a language. For the sake of exposition, let us agree that the interpretation $S^A$ of a symbol $S\in L$ in an $L$-structure $\str A$ with universe $A$ is a function from $A^{\mathit{ar}(S)}$ into $A$ or into~$\{0,1\}$ depending on whether~$S$ is a function or a relation symbol. For relation symbols we identify $S^{A}$ with $\{\bar a\in A^{\mathit{ar}(S) } \mid S^A(\bar a)=1\}$. A {\em partial $L$-structure} $\str A$ is similarly explained but allowing value $1/2$ which we read as ``undefined'' and assume to be outside $A$. That is, the interpretation $S^{A}$ for $S\in L$ is a function from $A^{\mathit{ar}(S)}$ into $A\ \dot\cup\ \{1/2\}$ or into $\{0,1,1/2\}$ depending on whether~$S$ is a function or a relation symbol. $\str A$ is {\em total} if $S^A(\bar a)\neq 1/2$ for all $S\in L$ and all $\bar a\in A^{\mathit{ar}(S)}$. Let $\str A,\str B$ be partial $L$-structures with universes~$A,B$ respectively. 
Then $\str B$ is a {\em partial substructure of} $\str A$ if $B\subseteq A$ and interpretations $S^{B}$ are obtained from $S^A$ by changing some values to~$1/2$; it is {\em induced} if for every $S\in L$ and all $\bar b\in B^{\mathit{ar}(S)}$ we have $S^B(\bar b)=S^{A}(\bar b)$ except for the case that $S$ is a function symbol and $S^A(\bar b)\not\in B$; in this case $S^B(\bar b)=1/2$. We say $\str A$ {\em extends} a partial substructure $\str B$ if $A=B$. An {\em isomorphism} from $\str A$ onto $\str B$ is a bijection $\pi$ from~$A\cup\{0,1,1/2\}$ onto $B\cup\{0,1,1/2\}$ which is the identity on $\{0,1,1/2\}$ and such that $\pi\circ S^A=S^B\circ\pi $ for all $S\in L$; here, we assume $\{0,1,1/2\}\cap(A\cup B)=\emptyset$. An {\em embedding} from $\str B$ into~$\str A$ is an isomorphism from $\str B$ onto a partial substructure of $\str A$. \begin{definition} \label{df:size} The {\em size} of $\str A$ is \begin{equation*}\textstyle \sum_{S\in L}|\{\bar a\in A^{\mathit{ar}(S)}\mid S^A(\bar a)\neq 1/2\}|. \end{equation*} We let $s_L(n)$ denote the size of a total $L$-structure with a universe of cardinality $n$, that is, \begin{equation*} \textstyle s_L(n):=\sum_{S\in L}n^{\mathit{ar}(S)}. \end{equation*} \end{definition} We explain how to evaluate formulas in a partial $L$-structure $\str A$. We silently extend all~$S^A$ to domain $(A\cup\{1/2\})^{\mathit{ar}(S)}$ giving value $1/2$ to all new argument tuples, i.e., $S^A(\bar a):=1/2$ if $\bar a\in(A\cup\{1/2\})^{\mathit{ar}(S)}\setminus A^{\mathit{ar}(S)}$. Then the interpretation $t^A$ of a closed $L$-term $t$ (i.e., $t$ has no variables) with parameters from $A$ is defined as usual by composition of the interpretation of its function symbols. That is, values of closed terms are computed bottom-up as usual but upon encountering the value $1/2$ the computation is aborted with output~$1/2$. 
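The bottom-up evaluation with abortion at $1/2$ can be sketched in a few lines of Python (our own modelling, not part of the formal development: terms are nested tuples, \texttt{UNDEF} plays the role of $1/2$, and a partial interpretation is a dictionary whose missing entries are read as undefined).

```python
UNDEF = "1/2"   # stand-in for the undefinedness value 1/2

def eval_term(term, interp):
    """Evaluate a closed term bottom-up in a partial structure; as soon as
    any subterm (or the outermost application) is undefined, return UNDEF."""
    if not isinstance(term, tuple):     # a parameter from the universe
        return term
    symbol, *subterms = term
    args = []
    for t in subterms:
        v = eval_term(t, interp)
        if v == UNDEF:                  # abort the computation with output 1/2
            return UNDEF
        args.append(v)
    return interp[symbol].get(tuple(args), UNDEF)

# A partial unary function f on [3] with f(0) = 1, f(1) and f(2) undefined:
interp = {"f": {(0,): 1}}
print(eval_term(("f", 0), interp))          # 1
print(eval_term(("f", ("f", 0)), interp))   # f(f(0)) = f(1) = UNDEF
```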
For an $L$-sentence $\varphi$ with parameters from $A$ we define the {\em truth value $v^{\str A}(\varphi)\in \{0,1,1/2\}$ of $\varphi$ in $\str A$} in a way familiar from 3-valued logic (see e.g.~\cite{partial}): \begin{itemize}\itemsep=0pt \item[--] If $\varphi$ has the form $t{=}s$ for closed $L$-terms $t,s$ with parameters from $A$, then $v^{\str A}(\varphi):=1/2$ if at least one of $t^A,s^A$ equals $1/2$; otherwise, $v^{\str A}(\varphi)$ is 1 or 0 depending on whether $t^A$ equals $s^A$ or not. \item[--] If $\varphi=S(t_0,\ldots,t_{\mathit{ar}(S)-1})$ for closed $L$-terms $t_0,\ldots, t_{\mathit{ar}(S)-1}$ with parameters from $A$ and $S\in L$ a relation symbol, then $v^{\str A}(\varphi):=S^A(t^A_0,\ldots, t^A_{\mathit{ar}(S)-1})$. \item[--] If $\varphi=\neg\psi$, then $v^{\str A}(\varphi):=1-v^{\str A}(\psi)$. \item[--] If $\varphi=(\psi\wedge\chi)$, then $v^{\str A}(\varphi):=\min\{v^{\str A}(\psi),v^{\str A}(\chi)\}$. \item[--] If $\varphi=\forall x\psi(x)$, then $v^{\str A}(\varphi):=\min\{v^{\str A}(\psi(a))\mid a\in A\}$. \end{itemize} We consider formulas as built from atomic formulas using~$\neg,\wedge,\forall x$ and view $(\varphi\vee\psi)$ and $\exists x\varphi$ as abbreviations of $\neg(\neg \varphi\wedge\neg\psi)$ and $\neg\forall x\neg\varphi$, respectively. Then \begin{eqnarray*} v^{\str A}(\varphi\vee\psi)&=&\max\big\{v^{\str A}(\varphi),v^{\str A}(\psi)\big\},\\ v^{\str A}(\exists x\varphi(x))&=&\max\big\{v^{\str A}(\varphi(a))\mid a\in A\big\}. \end{eqnarray*} \begin{definition} A partial structure $\str A$ {\em verifies $\varphi$} if $v^{\str A}(\varphi)=1$; it {\em falsifies} $\varphi$ if it verifies~$\neg\varphi$. \end{definition} Clearly, if a partial structure $\str A$ extends $\str B$, then it verifies every sentence which is verified by $\str B$. A total structure $\str A$ verifies $\varphi$ if and only if $\str A\models\varphi$.
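The truth-value clauses can be executed directly. In the following Python sketch (our own modelling; formulas are nested tuples, there is one unary relation symbol $R$, and $0.5$ plays the role of the value $1/2$) note how $1-v$, $\min$ and $\max$ implement negation, conjunction/universal quantification and disjunction/existential quantification, exactly as above.

```python
# Three-valued truth value v(phi) of a sentence in a partial structure
# whose language has one unary relation symbol R.
def v(phi, universe, rel):
    op = phi[0]
    if op == "R":                       # atomic: value of R at a point
        return rel[phi[1]]
    if op == "not":                     # v(not psi) = 1 - v(psi)
        return 1 - v(phi[1], universe, rel)
    if op == "and":                     # v(psi and chi) = min
        return min(v(phi[1], universe, rel), v(phi[2], universe, rel))
    if op == "forall":                  # phi[1] maps each element to a sentence
        return min(v(phi[1](a), universe, rel) for a in universe)

def exists(body, universe, rel):        # derived as not-forall-not, giving max
    return v(("not", ("forall", lambda a: ("not", body(a)))), universe, rel)

# R(0) = 1, R(1) undefined, R(2) = 0 on universe [3]:
rel = {0: 1, 1: 0.5, 2: 0}
universe = range(3)
print(exists(lambda a: ("R", a), universe, rel))         # 1: verified, witness a = 0
print(v(("forall", lambda a: ("R", a)), universe, rel))  # 0: falsified by a = 2
```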
\begin{lemma}\label{lem:pres} Let $\str A$ be a partial structure and $\str B$ a partial substructure of $\str A$. Then every existential sentence verified by $\str B$ is verified by $\str A$. \end{lemma} \begin{proof} Call a sentence $\varphi$ with parameters from $B$ {\em good} if $v^{\str B}(\varphi)=1/2$ or $v^{\str B}(\varphi)=v^{\str A}(\varphi)$. The set of good sentences contains all atomic sentences and is closed under $\wedge$ and $\neg$, so contains all quantifier free sentences with parameters from~$B$. If $\str B$ verifies $\exists \bar x\varphi(\bar x)$ for quantifier free $\varphi(\bar x)$, then it verifies $\varphi(\bar b)$ for some tuple $\bar b$ from~$B$. Since~$\varphi(\bar b)$ is good, also $\str A$ verifies $\varphi(\bar b)$ and hence $\exists \bar x\varphi(\bar x)$. \end{proof} \subsection{Partial codes}\label{sec:partialcodes} As structures are coded by oracles, partial structures are coded by ``partial oracles''. As in Section~\ref{sec:coding}, we fix finite languages $L,\tilde L\subseteq\mathbb{N}$. We further fix a model $(N,\alpha^N)$ of $\forall\mathsf{S}^1_2(\alpha)$, so $\langle N,\alpha^N\rangle\models\forall\mathsf{S}^1_2(\PV(\alpha))$ (Proposition~\ref{prop:cons}). We do not distinguish between symbols in~$\ensuremath{{\sf PV}}\xspace(\alpha)$ and their interpretations in $\langle N,\alpha^N\rangle$. We also blur the distinction between $p\in N$ and the set it codes, namely the set of $a\in N$ with $\mathit{bit}(p,a)=1$. Recall that {\em relevant} elements of $N$ are those used to code structures (see Definition~\ref{df:Balpha}). \begin{definition} Let $n\in N\setminus\{0\}$. Let $p\in N$ be such that $p=\langle p_0,p_1\rangle$ for certain $p_0,p_1\in N$.
Such a $p$ is a {\em partial $L$-oracle on $[n]$} if $p_0$ and $p_1$ code disjoint sets of relevant (wrt~$L,n$) elements such that for every function symbol~$S\in L$ and $\bar a\in[n]^{\mathit{ar}(S)}$ either all or none of $\langle S,\bar a,i\rangle,i<|n|,$ are elements of $p_0\cup p_1$. Then $p$ codes the following partial structure $$ \str B(p)=\str B(L,n,p) $$ with universe $[n]$: \begin{enumerate}\itemsep=0pt \item[--] for a function symbol $S\in L$ we have $S^{[n]}(\bar a)=1/2$ if $p_0\cup p_1$ does not contain $\langle S,\bar a,i\rangle$ for some (equivalently all) $i<|n|$; otherwise $S^{[n]}(\bar a)=\min\{a,n-1\}$ for the unique $a<2^{|n|}$ with $\langle S,\bar a,i\rangle\in p_{\mathit{bit}(i,a)}$ for all $i<|n|$; \item[--] for a relation symbol $S\in L$ we have $S^{[n]}(\bar a)$ equal to $0$ if $\langle S,\bar a\rangle\in p_0$, equal to 1 if $\langle S,\bar a\rangle\in p_1$, and equal to $1/2$ if $\langle S,\bar a\rangle\not\in p_0\cup p_1$. \end{enumerate} \end{definition} In the standard $\ensuremath{{\sf PV}}\xspace$-model $N=\mathbb{N}$, one might call an $\tilde L$-structure $\str C$ on $[m]$ ``implicitly feasible in'' $\str B(L,n,\alpha^\mathbb{N})$ if a binary code of $\str C$ is polynomial time Turing reducible to $\alpha^\mathbb{N}$. These are precisely the structures $\str C$ of the form $\str B(\tilde L,m,f^{-1}(0))$ for some $f\in\ensuremath{{\sf PV}}\xspace(\alpha)$. It shall be convenient to work instead with a presentation of such structures (see Lemma~\ref{lem:family} below) given by a family of decision trees computing the interpretations of the symbols in $\tilde L$. Recall that Definition~\ref{df:tree} defines sequences of $\alpha$-answers to decision trees $t$. The mode of speech for partial oracles is analogous: \begin{definition}\label{df:pansw} Let $n\in N\setminus\{0\}$, $p$ a partial $L$-oracle on $[n]$ and $t(\bar x,z)$ a decision tree in~$N$.
Then $c\in N\setminus\{0\}$ is a {\em sequence of $p$-answers to $t$ on $\bar a$} if for all $i<|c|-1$ we have $t(\bar a,c_{<i})$ is odd and: \begin{enumerate}\itemsep=0pt \item[--] $\mathit{bit}(i,c)=1$ and $ \lfloor t(\bar a,c_{<i})/2\rfloor\in p_1$, or \item[--] $\mathit{bit}(i,c)=0$ and $\lfloor t(\bar a,c_{<i})/2\rfloor\in p_0$, or \item[--] $\mathit{bit}(i,c)=0$ and $ \lfloor t(\bar a,c_{<i})/2\rfloor$ is not relevant (wrt $L,n$). \end{enumerate} It is {\em complete} if $t(\bar a,c)$ is even; it is {\em maximal} if it is either complete or~$t(\bar a,c)$ is odd and~$\lfloor t(\bar a, c)/2 \rfloor$ is relevant and outside $p_0\cup p_1$. \end{definition} \begin{definition}\label{df:family} For each $\tilde S(\bar x)\in\tilde L$ let $t_{\tilde S}(\bar x,z)$ be a decision tree of height $h_{\tilde S}(\bar x)$ in $N$. For $m,n\in N\setminus\{0\}$ and a partial $L$-oracle $p$ on $[n]$ we get a partial $\tilde L$-structure $$ \str C((t_{\tilde S})_{\tilde S\in\tilde L},m,p) $$ with universe $[m]$ as follows. For $\tilde S\in\tilde L$ and $\bar a\in[m]^{\mathit{ar}(\tilde S)}$ let $\tilde S^{[m]}(\bar a)\neq 1/2$ if and only if there is exactly one complete sequence $c$ of $p$-answers to~$t_{\tilde S}$ on $\bar a$; then $$ \tilde S^{[m]}(\bar a):=\left\{\begin{array}{ll} \min\{ t_{\tilde S}(\bar a,c)/2,m-1\}&\text{if $\tilde S$ is a function symbol,}\\ \min\{ t_{\tilde S}(\bar a,c)/2,1\}&\text{if $\tilde S$ is a relation symbol.}\ \end{array}\right. $$ For $\alpha^N \subseteq N$ we define $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,\alpha^N)$ analogously using sequences of $\alpha^N$-answers. \end{definition} The minima above are just a convention to ensure the right range. Of course, in the standard $\ensuremath{{\sf PV}}\xspace$-model there is at most one complete sequence of $p$-answers. In our possibly nonstandard model $N$, this holds if the decision trees have a sufficiently simple definition like the following.
\begin{definition}\label{def:givenbyterms} A family $(t_{\tilde S})_{\tilde S\in\tilde L}$ of decision trees in~$N$ is {\em given by terms} if every $t_{\tilde S}(\bar x),\tilde S\in\tilde L,$ is the interpretation (in $N$) of some $\ensuremath{{\sf PV}}\xspace$-term with parameters from $N$, and has height $|h_{\tilde S}(\bar x)|$ for some $\ensuremath{{\sf PV}}\xspace$-term $h_{\tilde S}(\bar x)$ with parameters from $N$. \end{definition} \begin{lemma} \label{lem:family} Let $m\in N\setminus\{ 0\}$, $f(z,\bar z)\in\ensuremath{{\sf PV}}\xspace(\alpha)$, and $\bar a$ a tuple from $N$. Then there is a family $(t_{\tilde S})_{\tilde S\in\tilde L}$ of decision trees in~$N$ given by terms such that $$\str B(\tilde L,m,f_{\bar a}^{-1}(0))=\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,\alpha^N).$$ \end{lemma} \begin{proof} It is easy to see, and also follows from Lemma~\ref{lem:subst}, that $\str B(\tilde L,m,f_{\bar a}^{-1}(0))$ is well defined in $\langle N,\alpha^N\rangle$. Let $\tilde S\in\tilde L$ be a function symbol (the case of a relation symbol is similar). Consider the following algorithm with oracle $\alpha$ and parameters $\bar z,m$ from~$\mathbb{N}$: on input $\bar x\in[m]^{\mathit{ar}(\tilde S)}$ compute the length $|m|$ binary string whose $i$-th bit is 1 or 0 depending on whether $f(\langle\tilde S,\bar x,i\rangle, \bar z)=0$ or not; finally output the number with this binary expansion if it is in~$[m]$, otherwise output $m-1$. Now choose $t_{\tilde S},h_{\tilde S}$ according to Lemma~\ref{lem:falphaf}. \end{proof} \subsection{Weak and strong principles}\label{sec:weakstrong} Let $L$ be a finite language. We define simple model-theoretic notions for an $L$-sentence $\varphi$ to be weak or strong. The case of interest is when $\varphi$ is basic (Definition~\ref{df:basic}), valid in the finite and fails in some infinite model. \begin{definition}\label{df:d} Let $\varphi$ be an $L$-sentence.
The {\em determinacy of}~$\varphi$ is the function $d:\mathbb{N}\setminus\{0\}\to \mathbb{N}$ such that $d(n)$ is the minimal $m\in\mathbb{N}$ such that every partial $L$-structure with universe of cardinality $n$ and size at least $ m$ verifies $\varphi$. If $s_L(n)\ge n^{\Omega(1)}\cdot d(n)$, then we say $\varphi$ is {\em weak}. \end{definition} Observe that $d(n)>0$ because there is no sentence verified by the completely undefined structure. We have $d(n)\le s_L(n)$ if and only if $\varphi$ is valid in the finite, and otherwise $d(n)=s_L(n)+1$. Intuitively, the smaller the determinacy the weaker the principle (i.e., the claim that it has no finite models). \begin{remark}\label{rem:weakPV} The same definitions apply to $L$-sentences with built-in $\ensuremath{{\sf PV}}\xspace$ understanding verification as follows: a partial $L$-structure $\str B$ with universe $[n]$ for some $n\in\mathbb{N}\setminus\{0\}$ {\em verifies} a $(\ensuremath{{\sf PV}}\xspace\cup L)$-sentence if and only if so does the partial $(\ensuremath{{\sf PV}}\xspace\cup L)$-structure that interprets the symbols from $L$ as $\str B$ and the symbols from $\ensuremath{{\sf PV}}\xspace$ as the partial substructure induced on~$[n]$ in the standard $\ensuremath{{\sf PV}}\xspace$-structure $\mathbb{N}$. It is easy to check that for basic sentences verification coincides with truth as explained in Section~\ref{sec:FOsearch} (after Definition~\ref{df:basic}). \end{remark} \begin{definition}\label{df:g} Let $g:\mathbb{N}\setminus\{0\}\to\mathbb{N}$, and $\str B$ be an infinite (total) $L$-structure with universe~$B$. An induced partial substructure $\str B_0$ of $\str B$ with finite universe $B_0$ is {\em $g$-large} if there exists a subset $V\subseteq B\setminus B_0$ of size at most $ g(|B_0|)$ such that for every function symbol $S\in L$ the interpretation~$S^B$ of~$S$ in $\str B$ maps $B_0^{\mathit{ar}(S)}$ into $B_0\cup V$. 
The structure $\str B$ is {\em $g$-large} if every finite partial substructure of $\str B$ embeds into a $g$-large partial substructure of $\str B$ with a universe of the same cardinality. An $L$-sentence is {\em strong} if its negation has an infinite $n^{o(1)}$-large model. \end{definition} Assume $\str B\not\models\varphi$ where $\varphi$ is basic and valid in the finite. Then no finite subset of $B$ is closed under the interpretations of the function symbols in $\str B$. Definition~\ref{df:g} quantifies how many function values are outside a given finite subuniverse. Intuitively, the smaller~$g$, the closer $\varphi$ is to being satisfiable in the finite; hence, the smaller $g$, the stronger the principle. \medskip Our aim is to verify these intuitions to some extent, namely in the sense of Theorem~\ref{thm:main}. The proof requires a fair amount of work, and before getting there we consider some examples. \subsection{Examples}\label{sec:exas} We start with common pigeonhole principles. \begin{example}\label{ex:php1} Let $L:=\{f,c\}$ for a unary function symbol $f$ and a constant $c$. The {\em ($n$ to $n{-}1$) pigeonhole principle $\mathit{PHP}$} is the existential closure of $$ (f(x){=}u\wedge f(y){=}u\wedge \neg x{=}y)\vee (f(x){=}u\wedge c{=}u). $$ This is a basic (Definition~\ref{df:basic}) variant of $(f(x){=}f(y)\wedge \neg x{=}y)\vee f(x){=}c$. It has maximal determinacy $d(n)=s_L(n)=n+1$. It is not weak and it is strong, indeed, its negation has a 1-large model. \end{example} \begin{proof} To prove the second statement, let $\str A$ have universe $A:=\mathbb{N}$ and interpret $c$ by $0$ and $f$ by the successor function. Let $\str A_0$ be a partial substructure of $\str A$ with universe~$A_0$ of cardinality $n$. Map the minimal element of $A_0$ to 0, the second smallest element of $A_0$ to 1 and so on. This embeds $\str A_0$ into the partial substructure induced on $[n]$ in $\str A$. This partial substructure is $1$-large witnessed by $V:=\{n\}$.
The first statement follows noting that this partial substructure has size $s_L(n)-1=n$ and, of course, does not verify the principle. \end{proof} For readability we write our principles from now on not in basic form as in Definition~\ref{df:basic} but allowing ourselves atoms with more than one symbol of the language. \begin{example}\label{ex:bphp} Let $L=\{f,g,c\}$ for unary function symbols $f,g$ and a constant $c$. Following~\cite{bj}, let the {\em onto pigeonhole principle} $\textit{OPHP}$ be the existential closure of $$ \neg g(c){=}c\ \vee\ f(x){=}c \ \vee\ \neg g(f(x)){=}x\ \vee\ \big(\neg c{=}x\wedge\neg f(g(x)){=}x\big), $$ and the {\em left pigeonhole principle} $\textit{LPHP}$ is the same with the last disjunct deleted. Both principles have maximal determinacy $d(n)=s_L(n)=2n+1$, are not weak and are strong, indeed, their negations have 1-large models. \end{example} \begin{proof} Expand the structure $\str A$ of the previous example by letting $g^A$ be the predecessor function (understanding $g^A(0)=0$). Then argue as there. \end{proof} \begin{example}\label{ex:php3} Let $L:=\{f\}$ for a binary function symbol $f$. The {\em $n^2$ to $n$ weak pigeonhole principle} $\mathit{WPHP}$ is defined in Example~\ref{ex:wphptransl}. It has determinacy $d(n)=\sqrt{s_L(n)}+1=n+1$. It is weak and not strong. \end{example} \begin{proof} Note $s_L(n)=n^2$. It is clear that once a structure on a universe with cardinality $n$ has $n+1$ values distinct from $1/2$, then there is a collision and the principle is verified. It is also clear that there are partial structures of size $n$ not verifying the principle. Hence $d(n)=n+1$, and since $s_L(n)=n^2\ge \sqrt{n}\cdot(n+1)=\sqrt{n}\cdot d(n)$ for $n\ge 3$, the principle is weak. To see $\mathit{WPHP}$ is not strong, let $\str A$ be a model of its negation. Restricted to a set $A_0$ of~$n$ points, $f^A$ takes at least $n^2-n$ many values outside $A_0$. Hence, a set $V$ from the definition of a large partial substructure must have cardinality at least $n^2-n$.
\end{proof} \begin{example}\label{ex:php2} Let $L:=\{f,g\}$ for unary function symbols $f,g$. The {\em $2n$ to $n$ weak pigeonhole principle} $\mathit{WPHP}'$ is the existential closure of $$ (f(x){=}f(y)\wedge \neg x{=}y)\ \vee\ (g(x){=}g(y)\wedge \neg x{=}y)\ \vee\ f(x){=}g(y). $$ It has determinacy $d(n)=s_L(n)/2+1=n+1$. It is neither weak nor strong. \end{example} \begin{proof} Note $s_L(n)=2n$. In any partial $L$-structure on a universe of cardinality $n$ in which $f$ and $g$ together have $n+1$ defined values (i.e., values $\neq 1/2$), two of these values are equal, so the principle is verified. Hence $d(n)\le n+1$. But $d(n)>n$ because there are size $n$ partial structures on $[n]$ that do not verify the principle, e.g., interpret $f$ by a permutation and let $g$ be completely undefined. That $\mathit{WPHP}'$ is not strong can be seen as in the previous example. \end{proof} \begin{example} \label{ex:rphp} The provably total (type 1) NP search problems of Je\v r\'abek's \cite{japx} theory of approximate counting $\mathsf{APC}_1$ are many-one reducible to the {\em $n$ to $n^2$ retraction pigeonhole principle} $\textit{rPHP}$ (see~\cite[Proposition~1.14]{jdual}) for $\ensuremath{{\sf PV}}\xspace$-functions. To express it by a first-order formula over universe $[n]$ we take $L=\{g,f_0,f_1\}$ for a binary function symbol $g$ and two unary function symbols $f_0,f_1$ and state that $g$ does not witness that $x\mapsto (f_0(x),f_1(x))$ is a surjection from~$[n]$ onto~$[n]^2$: the existential closure of \begin{equation*}\label{eq:rphp} \neg f_0(g(x,y)){=}x\ \vee\ \neg f_1(g(x,y)){=}y. \end{equation*} It is neither weak nor strong.\end{example} \begin{proof} A partial structure on $[n]$ that interprets $g$ by an arbitrary binary function and has~$f_0,f_1$ completely undefined does not verify $\textit{rPHP}$ and has size $n^2$. Since $s_L(n)=n^2+2n$, this shows that $\textit{rPHP}$ is not weak. That it is not strong is seen as in Example~\ref{ex:php3}. \end{proof} We turn to other principles.
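Before doing so, the determinacy computations for the pigeonhole principles may be summarized in one table; all entries merely collect the values established in the examples above (the bound for $\textit{rPHP}$ follows from the size $n^2$ structure exhibited in its proof):
\begin{center}
\begin{tabular}{lcccc}
principle & $s_L(n)$ & $d(n)$ & weak & strong\\
\hline
$\mathit{PHP}$ & $n+1$ & $n+1$ & no & yes\\
$\textit{OPHP},\ \textit{LPHP}$ & $2n+1$ & $2n+1$ & no & yes\\
$\mathit{WPHP}$ & $n^2$ & $n+1$ & yes & no\\
$\mathit{WPHP}'$ & $2n$ & $n+1$ & no & no\\
$\textit{rPHP}$ & $n^2+2n$ & $>n^2$ & no & no
\end{tabular}
\end{center}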
\begin{example}\label{ex:par} Let $L=\{f\}$ for a unary function symbol $f$. The {\em parity principle} $\mathit{PAR}$ states that involutions have fixed points and is valid in structures of odd finite size: the existential closure of \begin{equation*}\label{eq:par} \neg x{=}f(f(x))\ \vee \ x{=}f(x). \end{equation*} It has determinacy $$ d(n)=\begin{cases}n&\text{if $n$ is odd,}\\n+1&\text{else.}\end{cases} $$ It is not weak and it is strong, indeed, its negation has a 1-large model. \end{example} \begin{proof} Let $\str A$ have universe $A=\mathbb{N}$ and let $f^A$ map even $n$ to $n+1$, and odd $n$ to $n-1$. Then $\mathit{PAR}$ fails in $\str A$. It is easy to see that any partial substructure of $\str A$ with universe of cardinality~$n$ embeds into the partial substructure induced in $\str A$ on $[n]$. For even $n$ this substructure is total, and for odd $n$, only the last point $n-1$ is mapped to something outside. Our claims follow. \end{proof} \begin{example}\label{ex:hop} Let $L:=\{f,\prec\}$ for a unary function symbol $f$ and a binary relation symbol~$\prec$ (with infix notation). The {\em Herbrandized ordering principle}~$\mathit{HOP}$ negates the Skolemized infinity axiom stating ``$\prec$ is a partial order without a minimal element'': the existential closure of \begin{equation*}\label{eq:hop} x{\prec} x\ \vee\ (x{\prec} y\wedge y{\prec}z\wedge \neg x{\prec} z) \ \vee \ \neg f(x){\prec} x. \end{equation*} It has maximal determinacy $d(n)=s_L(n)=n^2+n$. It is not weak and it is strong, indeed, its negation has a 1-large model. \end{example} \begin{proof} To prove the second statement, let $\str A$ have universe $A:=\mathbb{N}$, interpret $\prec$ by the inverse natural order, i.e., $\prec^A:=\{(i,j)\mid j<i\}$, and $f$ by the successor function. Every partial substructure with universe of cardinality $n$ embeds into the partial substructure induced on~$[n]$ which is $1$-large witnessed by $V:=\{n\}$.
The claim about determinacy follows noting that the described 1-large partial substructure of $\str A$ has size $n^2+n-1$, namely, it has only one value $1/2$ (taken by $f^{A}$ on $n-1$). \end{proof} \begin{remark}\label{rem:hopvar} The same reasoning applies to weaker variants \cite{bkt,atstha} of $\mathit{HOP}$ adding disjuncts saying that $\prec$ is not linear and/or $f$ does not map all points to immediate predecessors: $(\neg x{\prec} y\wedge\neg y {\prec} x\wedge \neg x{=}y)$ and/or $(f(x){\prec} y\wedge y{\prec} x)$. \end{remark} \begin{example} Let $L:=\{P,s,\prec,\textit{min},\textit{max}\}$ where $P$ is a unary and $\prec$ a binary relation symbol, $s$ is a unary function symbol, and $\textit{min},\textit{max}$ are constants. The {\em Induction principle} \textit{IND}\ states induction for the predicate $P$ on a discrete linear order $\prec$ with minimum $\textit{min}$, maximum $\textit{max}$ and successor $s$: the existential closure of \begin{eqnarray*} &&x{\prec} x\ \vee\ (x{\prec} y\wedge y{\prec}z\wedge \neg x{\prec} z) \ \vee\ (\neg x{\prec}y\wedge\neg y{\prec}x\wedge \neg x{=}y) \\ && \vee \ x{\prec}\textit{min}\ \vee\ \textit{max}{\prec}x \ \vee\ ( x{\prec} y\wedge y{\prec} s(x))\ \vee\ (\neg \textit{max}{=}x\wedge\neg x{\prec} s(x)) \\ && \vee \ \neg P(\textit{min})\ \vee\ P(\textit{max})\ \vee\ (P(x)\wedge \neg P(s(x))). \end{eqnarray*} It has maximal determinacy $d(n)=n^2+2n+2$ for $n>1$. It is not weak and it is strong, indeed, its negation has a $2$-large model. \end{example} \begin{proof} For the first claim, consider the partial structure on $[n]$ that interprets $\prec$ by the natural order, $P$ by~$[n]$, $\textit{min}$ by $0$, $s$ by the successor function with $s(n-1):=n-1$, and leaves $\textit{max}$ undefined.
For the second claim consider the structure $\str A$ on $A:=\mathbb{N}\ \dot\cup\ \{\infty\}$ that interprets $\prec$ by the natural order extended by declaring $\infty$ larger than all natural numbers, $P$ by $\mathbb{N}$, $s$ by the natural successor extended by $s^{A}(\infty):=\infty$, and $\textit{min},\textit{max}$ by $0,\infty$. Clearly, $\str A$ falsifies $\textit{IND}$. To see it is $2$-large, let $\str A_0$ be a partial substructure on a universe $A_0$ of cardinality~$n$ and distinguish two cases. If $\infty\notin A_0$, then map $A_0$ order-preservingly onto $[n]$; the partial structure induced on $[n]$ has only $\textit{max}$ and $s(n-1)$ undefined, so is $2$-large witnessed by $V:=\{n,\infty\}$. If $\infty\in A_0$ then map $A_0\setminus\{\infty\}$ onto $[n-1]$ as above and note that the induced partial substructure on $[n-1]\cup\{\infty\}$ is 1-large. \end{proof} \begin{example}\label{ex:ba} Let $L:=\{\sqcup,\sqcap,\sim,f,0,1\}$ where $\sqcup,\sqcap$ are binary function symbols (in infix notation), $\sim,f$ are unary function symbols and $0,1$ are constants. Recall Boolean algebras are axiomatized by a finite set $E$ of equations in the language $L\setminus\{f\}$. The {\em Herbrandized atomicity principle} $\textit{HAP}$ negates the Skolemized infinity axiom stating ``there is an atomless Boolean algebra'': the existential closure of $$\textstyle \bigvee_{\zeta\in E} \neg \zeta\ \vee\ \neg f(0){=}0\ \vee\ \neg f(x){\sqcap} x{=}f(x)\ \vee\ (f(x){=}x\wedge \neg x{=}0). $$ For $n$ a power of 2, its determinacy is $> s_L(n)-\log n$. It is neither weak nor strong. \end{example} \begin{proof} On a universe of cardinality $n=2^k$, take a Boolean algebra with $k$ atoms and interpret~$f$ to map the interpretation of $0$ to itself, any other non-atom to an atom below it, and declare it undefined on all atoms. This shows the claim about the determinacy and that $\textit{HAP}$ is not weak. To see $\textit{HAP}$ is not strong, let $\str A$ falsify $\textit{HAP}$.
Any partial substructure of $\str A$ whose universe consists of $n$ pairwise disjoint non-zero elements (in the sense of $\str A$) has $\sqcup^A$ completely undefined. This gives ${n\choose 2}$ many pairwise distinct values of $\sqcup^A$ outside its universe. \end{proof} \begin{example}\label{ex:dlo} Take $L:=\{\prec,b,0,1\}$ for a binary relation symbol $\prec$, a binary function symbol $b$ (for ``between'') and constants $0,1$. The {\em Herbrandized discreteness principle} $\textit{HDP}$ negates the Skolemized infinity axiom stating ``there is a dense non-empty partial order'': the existential closure~of \begin{eqnarray*} && x{\prec} x\ \vee\ (x{\prec} y\wedge y{\prec} z\wedge \neg x{\prec} z) \ \vee\ (x{\prec} y\wedge \neg b(x,y){\prec} y)\ \vee\ (x{\prec} y\wedge \neg x{\prec}b(x,y))\ \vee\ \neg 0{\prec}1. \end{eqnarray*} The last disjunct ensures that the partial order is non-empty and thus $\textit{HDP}$ is valid in the finite. $\textit{HDP}$ has determinacy $d(n)>2n^2 -2n$ for $n>1$. It is neither weak nor strong. \end{example} \begin{proof} Note $s_L(n)=2n^2+2$. Consider a partial structure on $[n]$ for $n>1$ that interprets~$\prec$ by the natural order, $0,1$ by themselves, and $b$ by some function that maps $(i,j)$ with $|i-j|>1$ to some point between $i$ and $j$, and maps $(i,i)$ to 0, and is undefined on the $2(n-1)$ many pairs $(i,j)$ with $|i-j|=1$. This does not verify $\textit{HDP}$, and has size $s_L(n)-2(n-1)$. To see $\textit{HDP}$ is not strong, let $\str A$ falsify $\textit{HDP}$ and consider a linearly ordered subset $A_0$ of size $n$. Then $b^A$ takes a value outside~$A_0$ on each pair of $\prec^A$-consecutive points in $A_0$; this gives at least $n-1$ pairwise distinct values outside $A_0$. \end{proof} \begin{remark}\label{rem:papa} Following \cite{beame}, every basic $L$-sentence $\varphi$ valid in the finite defines a complexity class, namely the type 1 NP search problems many-one reducible to $Q_\varphi$ (see Definition~\ref{df:assNPSP}).
The classes associated to $\mathit{PHP},\textit{OPHP},\textit{LPHP}$ and $\mathit{PAR}$ are Papadimitriou's classes PPP, PPAD, PPADS and PPA~\cite{papa}. Papadimitriou~\cite{papa} showed that his classes contain many natural search problems of independent interest.\footnote{A minor difference is that usually the problems are defined only for structures with a universe of the form $[2^n]$ while we allow any $[n]$. The principle~$\mathit{PAR}$, then, has to be slightly changed so as to be valid in even instead of odd structures.} \end{remark} Finally, we mention an important example with built-in $\ensuremath{{\sf PV}}\xspace$: \begin{example}\label{ex:iter} Let $\textit{ITER}$ be $\exists y\textit{ITER}(y)$ (cf.~\cite{bk}) where $\textit{ITER}(y)$ is the following formula with a unary function symbol $f$ and built-in order $<$ and constant 0: $$ f(0){=}0\ \vee \ f(y){<} y \ \vee \ \big(y{<}f(y)\wedge f(y){=}f(f(y))\big). $$ It has maximal determinacy $d(n)=n$, so is not weak. \end{example} \begin{proof} Interpret $f$ on $[n]$ by the successor function, leaving it undefined on $n-1$. \end{proof} \begin{remark} \label{rem:iter} The complexity class associated to $\textit{ITER}$ is the complexity class PLS from~\cite{pls}. Built-in symbols are necessary to characterize PLS. More precisely, assume that not all PLS problems are solvable in polynomial time. Then there does not exist a finitary combinatorial principle without built-in symbols whose associated class would equal PLS. \end{remark} \begin{proof} Let $\varphi=\exists \bar y\psi(\bar y)$ be such a principle, say in language $L$. If $\varphi$ fails in some infinite $L$-structure, then by Theorem~\ref{thm:morioka}, $Q_\varphi$ is not Turing reducible to $Q_\textit{ITER}$. Otherwise $\varphi$ is valid. By Proposition~\ref{prop:Tred}~(3) it suffices to show that $\forall\mathsf{S}^1_2(\alpha)$ proves $\exists y \q{\str B(L,x,\alpha)\models\psi(y)}$.
But, if $(N,\alpha^N)\models\forall\mathsf{S}^1_2(\alpha)$ and $n\in N\setminus \{0\}$, then $\str B(L,n,\alpha^N)\models\varphi$ since $\varphi$ is valid, so $(N,\alpha^N)\models \exists y \q{\str B(L,n,\alpha^N)\models\psi(y)}$ by Lemma~\ref{lem:formAB}. \end{proof} \subsection{Density arguments} \label{sec:dense} We now establish the combinatorics needed for the forcing proofs of Theorems~\ref{thm:Briis} and~\ref{thm:main}. The point of the forcing set-up in \cite{am} is to reduce independence questions for bounded arithmetic to questions in finite combinatorics. Consequently, the combinatorics in this section are carried out in the standard model $\mathbb{N}$. For the rest of this section we let \begin{enumerate}\itemsep=0pt \item[--] $L$ and $\tilde L$ be finite languages; \item[--] $r_{L}:=1+\max_{S\in L}\mathit{ar}(S)$ and $r_{\tilde L}:=1+\max_{\tilde S\in \tilde L}\mathit{ar}(\tilde S)$; \item[--] $\tilde \varphi$ be a finitary combinatorial principle in the language $\tilde L$ as in Definition~\ref{df:fcp}, hence possibly with built-in $\ensuremath{{\sf PV}}\xspace$. \end{enumerate} \begin{definition} \label{def:extend} Let $n\in\mathbb{N}\setminus\{0\}$ and let $p,q$ be partial $L$-oracles on $[n]$. The {\em size} $\|p\|$ of~$p$ is the size of $\str B(p)$ (as a partial structure, see Definition~\ref{df:size}). We say $p$ {\em extends} $q$ if $\str B(p)$ extends~$\str B(q)$, in other words, if $q_0\subseteq p_0$ and $q_1\subseteq p_1$; if additionally $b\in \mathbb{N}$ and $\|p\|\le \|q\|+b$ we call~$p$ a {\em $b$-extension} of $q$. Call $a\in[n]$ {\em active in $\str B(p)$} if there are $S\in L$ and $\bar a\in[n]^{\mathit{ar}(S)}$ such that $S^{[n]}(\bar a)\neq 1/2$ in~$\str B(p)$ and $a$ appears in $\bar a$ or $S$ is a function symbol and $a=S^{[n]}(\bar a)$ in~$\str B(p)$.
\end{definition} \begin{lemma}\label{lem:dense1} Let $n\in\mathbb{N}\setminus\{0\}$, $\str B$ an $L$-structure, $p$ a partial $L$-oracle on $[n]$ such that $\str B(p)$ embeds into~$\str B$, $S\in L$ and $\bar a\in[n]^{\mathit{ar}(S)}$. If \begin{equation}\label{eq:psmall} n>\|p\|\cdot r_L, \end{equation} then there is a $1$-extension $q$ of $p$ such that $\str B(q)$ embeds into $\str B$ and $S^{[n]}(\bar a)\not=1/2$ in $\str B(q)$. \end{lemma} \begin{proof} Write $p=\langle p_0,p_1\rangle$ and let $W\subseteq [n]$ be the set of $a\in[n]$ that are active in $\str B(p)$. Note that $|W|\le \|p\|\cdot r_L$. Let $e$ be the embedding of $\str B(p)$ into $\str B$. If $S$ is a relation symbol, obtain $q$ from $p$ by adding $\langle S,\bar a\rangle$ to $p_{b}$ where $b=S^{B}(e(\bar a))$ in $\str B$. If $S$ is a function symbol and $v:=S^B(e(\bar a))$ is in the image of $e$, obtain $q$ from $p$ by adding $\langle S,\bar a,i\rangle$ to $p_{\mathit{bit}(i,e^{-1}(v))}$ for all $i<|n|$. If $v$ is not in the image of $e$, note that by \eqref{eq:psmall} there is $a\in[n]\setminus W$. Then change $e$ by mapping $a$ to $v$ and proceed as before. \end{proof} The following two lemmas show how to extend a partial oracle so as to ensure that a partial $\tilde L$-structure of the form $\str C((t_{\tilde s})_{\tilde s\in\tilde L},m,q)$ verifies $\tilde \varphi$. The first is simple and useful for small $m$; the second is useful for large~$m$ and is the combinatorial core of the proof of Theorem~\ref{thm:main}. \begin{lemma}\label{lem:dense3} Let $\str B$ be an $L$-structure, $n,m,b_0\in\mathbb{N}\setminus\{0\}$, $p$ a partial $L$-oracle on $[n]$ such that~$\str B(p)$ embeds into~$\str B$, and $(t_{\tilde S})_{\tilde S\in\tilde L}$ a family of decision trees of height at most $ b_0$.
If \begin{equation}\label{eq:msmall} n> r_L\cdot (\|p\|+b_0|\tilde L|m^{r_{\tilde L}-1}), \end{equation} then there exists a $b_0|\tilde L|m^{r_{\tilde L}-1}$-extension $q$ of $p$ such that $\str C((t_{\tilde s})_{\tilde s\in\tilde L},m,q)$ verifies $\tilde \varphi$ and $\str B(q)$ embeds into $\str B$. \end{lemma} \begin{proof} For $\tilde S\in \tilde L$ and $\bar a\in[m]^{\mathit{ar}(\tilde S)}$ let $z_{\tilde S,\bar a}$ be a maximal sequence of $p$-answers to~$t_{\tilde S}$ on~$\bar a$. Note there are at most $ |\tilde L|m^{r_{\tilde L}-1}$ many such sequences and each has length at most $b_0$. If all these sequences are complete, then $\str C((t_{\tilde s})_{\tilde s\in\tilde L},m,p)$ is total and thus verifies $\tilde\varphi$ (being valid in the finite), so we can take $q:=p$. Otherwise choose a 1-extension of $p$ that prolongs at least one of the answer sequences. This is possible by the previous lemma as long as the current oracle $p'$ satisfies $r_L\|p'\|< n$. By \eqref{eq:msmall} we can repeat this step until all sequences are complete. \end{proof} By the {\em size} $|\varphi|$ of a formula $\varphi$, we mean the size (number of nodes) of the formula tree, that is, the number of occurrences of atomic subformulas and logical symbols $\wedge,\vee,\neg,\exists,\forall$. \begin{lemma}[Core Lemma]\label{lem:dense2} Suppose the assumptions of the previous lemma hold and additionally \begin{enumerate}\itemsep=0pt \item[(i)] $\str B$ is $g$-large where $g:\mathbb{N}\setminus\{0\}\to\mathbb{N}$ is some function; \item[(ii)] $n\ge (2b_0^2r_L+1)\cdot g(n)+r_L\cdot\|p\|$; \item[(iii)] $s_{\tilde L}(m)\ge 2b_0\tilde d(m)$ where $\tilde d$ is the determinacy of $\tilde \varphi$. \end{enumerate} Then there exists a $b_0|\tilde\varphi|$-extension $q$ of $p$ such that $\str C((t_{\tilde s})_{\tilde s\in\tilde L},m,q)$ verifies $\tilde \varphi$ and $\str B(q)$ embeds into $\str B$.
\end{lemma} \begin{proof} We claim that it suffices to find $q$ as desired but neglecting the size bound, i.e., such that $q$ extends $p=\langle p_0,p_1\rangle$, $\str C((t_{\tilde s})_{\tilde s\in\tilde L},m,q)$ verifies $\tilde \varphi$, and $\str B(q)$ embeds into $\str B$. Given such $q=\langle q_0,q_1\rangle$, we have to find some $q'=\langle q'_0,q'_1\rangle$ with the same properties and of size $\|q'\|\le \|p\|+b_0|\tilde\varphi|$. Recall that $\tilde \varphi$ has the form \eqref{eq:basic} from Definition~\ref{df:basic}. That $\str C:=\str C((t_{\tilde s})_{\tilde s\in\tilde L},m,q)$ verifies~$\tilde \varphi$ means that there are a tuple~$\bar b$ from $[m]$ and $i\in I$ such that~$\str C$ verifies~$\lambda_{ij}(\bar b)$ for all $j\in J$. The literals $\lambda_{ij}(\bar b), j\in J,$ are verified in a partial substructure~$\str C'$ of $\str C$ of size at most $|J|<|\tilde\varphi|$. For every $\tilde S\in \tilde L$ and $\bar b\in[m]^{\mathit{ar}(\tilde S)}$ such that $\tilde S^{[m]}(\bar b)\neq 1/2$ in~$\str C'$ choose a complete sequence $z_{\tilde S,\bar b}$ of $q$-answers to~$t_{\tilde S}$ on $\bar b$. Consider the set $Q$ of relevant (wrt $L,n$) queries $q$ needs to answer in these sequences. More precisely, this is the set of all relevant $\floor{t_{\tilde S}(\bar b,(z_{\tilde S,\bar b})_{<i})/2}$ where $i<|z_{\tilde S,\bar b}|-1$. Then $|Q|<b_0|\tilde \varphi|$. By deleting certain elements from $q_0$ and $q_1$ we get a partial $L$-oracle $q'$ extending $p$ (and extended by $q$) of size at most $ \|p\|+b_0|\tilde\varphi|$ such that $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,q')$ extends~$\str C'$, so verifies $\tilde\varphi$. Namely, obtain $Q'$ from~$Q$ by adding $\langle S,\bar a,i\rangle$ whenever $\langle S,\bar a,i'\rangle\in Q$ for some $i'$ (here, $S\in L$, $\bar a\in[n]^{\mathit{ar}(S)}$ and $i,i'$ range over $[|n|]$), and define $q'_0:=q_0\cap(p_0\cup Q')$, and similarly $q'_1$. This proves the claim.
\medskip For the sake of contradiction, assume that $q$ as in the claim does not exist. \medskip Consider a pair $(X,q)$ where $X$ is a set of pairs $(\tilde S,\bar b)$ with $\tilde S\in \tilde L$ and $\bar b\in[m]^{\mathit{ar}(\tilde S)}$, and~$q$ is a partial $L$-oracle on $[n]$ that extends $p$ and such that $\str B(q)$ embeds into $\str B$. From $(X,q)$ we compute another such pair $(X',q')$ as follows. Choose an embedding $e$ of the partial structure $\str B(q)$ coded by $q$ into a $g(n)$-large partial substructure $\str B^*$ of $\str B$ with universe $B^*$ of cardinality $n$. Note $\str B(q)$ extends $\str B(p)$ and $e$ embeds $\str B(p)$ into $\str B^*$. Let $\str B^*_n$ be the partial structure on $[n]$ which is isomorphic under $e$ to $\str B^*$ and let $q^*$ be the partial oracle coding it. Then $q^*$ extends~$q$. Choose $V\subseteq B$ witnessing that $\str B^*$ is $g(n)$-large. Let $W_n$ be the set of $a\in [n]$ active in~$\str B(p)$. Let $R_n:=[n]\setminus W_n$ and let $R,W$ be the images of~$R_n,W_n$ under $e$. Note $$ |R_n|\ge n-\|p\|\cdot r_L. $$ For $(\tilde S,\bar b)\in X$ choose a maximal sequence $z_{\tilde S,\bar b}$ of $q^*$-answers to~$t_{\tilde S}$ on $\bar b$. Let~$Y$ be obtained from $X$ by deleting all $(\tilde S,\bar b)$ such that $z_{\tilde S,\bar b}$ is complete. Then $$ |Y|> |X|-\tilde d(m). $$ Indeed, if at least $ \tilde d(m) $ many $z_{\tilde S,\bar b}$ are complete, then $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,q^*)$ has size at least $ \tilde d(m)$, and thus verifies $\tilde \varphi$. But this contradicts our assumption. Say $(\tilde S,\bar b)$ {\em touches} $a\in[n]$ if there are $j\le|z_{\tilde S,\bar b}|$ and $S\in L$ and $\bar a\in[n]^{\mathit{ar}(S)}$ such that $\floor{t_{\tilde S}(\bar b,(z_{\tilde S,\bar b})_{<j})/2}$ equals $\langle S,\bar a\rangle$ or $\langle S,\bar a,i\rangle$ for some $i<|n|$, and such that $a$ appears in $\bar a$ or ($S$ is a function symbol and) $e(a)=S^{B^*}(e(\bar a))$ in $\str B^*$.
Note that any $(\tilde S,\bar b)\in Y$ touches at most $b_0\cdot r_L$ many $a\in[n]$. By averaging, there exists $r_0\in R_n$ which is touched by at most $ |Y|\cdot b_0\cdot r_L/|R_n|$ many pairs in $Y$. Similarly, there exists $r_1\in R_n\setminus\{r_0\}$ touched by at most $|Y|\cdot b_0\cdot r_L/(|R_n|-1)$ many pairs in $Y$. Continuing like this we find pairwise distinct $r_0,\ldots, r_{|V|-1}$ such that at most $$ |V|\cdot |Y|\cdot b_0\cdot r_L/(|R_n|-|V|)\le b_0\cdot \frac{g(n)\cdot s_{\tilde L}(m)\cdot r_L}{n-\|p\|\cdot r_L-g(n)} $$ many pairs in $Y$ touch any of them. Observe that (ii) implies that the denominators are positive. Define $X'$ by deleting all these pairs from $Y$ and note \begin{equation}\label{eq:shrink} |X'|> |X|-\tilde d(m)- b_0\cdot \frac{g(n)\cdot s_{\tilde L}(m)\cdot r_L}{n-\|p\|\cdot r_L-g(n)}. \end{equation} Let $e'$ map $r_0,\ldots, r_{|V|-1}\in[n]$ bijectively onto $V$ and otherwise agree with $e$. Let $\str B'$ be the induced partial substructure of $\str B$ whose universe $B'$ is the image of $e'$, and let $q'$ be the partial $L$-oracle on $[n]$ such that $e':\str B(q')\cong\str B'$. Then $q'$ extends $p$ since $e'$ equals $e$ on~$W_n$. For $( \tilde S,\bar b)\in X'$ let $z'_{\tilde S,\bar b}$ be a maximal sequence of $q'$-answers to~$t_{\tilde S}$ on~$\bar b$. Then, as strings of bits, $z_{\tilde S,\bar b} $ is an initial segment of $z'_{\tilde S,\bar b}$, i.e.\ $\mathit{bit}(i,z'_{\tilde S,\bar b})=\mathit{bit}(i,z_{\tilde S,\bar b})$ for all $i<|z_{\tilde S,\bar b}|-1$. We claim $$ |z'_{\tilde S,\bar b}|>|z_{\tilde S,\bar b}|. $$ Indeed, $t_{\tilde S}(\bar b,z_{\tilde S,\bar b})$ is odd and $\floor{t_{\tilde S}(\bar b,z_{\tilde S,\bar b})/2}$ equals $\langle S,\bar a,i\rangle$ for some function symbol $S\in L,\bar a\in[n]^{\mathit{ar}(S)}$ and $i<|n|$ such that $S^{B}(e(\bar a))\in V$ in~$\str B$ (if $\floor{t_{\tilde S}(\bar b,z_{\tilde S,\bar b})/2}$ did not have this form, then $z_{\tilde S,\bar b}$ could be prolonged).
Since all components of $\bar a$ are touched by $(\tilde S,\bar b)$ and $(\tilde S,\bar b)\in X'$, we know $e'(\bar a)=e(\bar a)$ and $S^{B'}(e'(\bar a))=e'(r_j)$ in $\str B'$ for some $j<|V|$. As $z'_{\tilde S,\bar b}$ is maximal, it has length $\ge |z_{\tilde S,\bar b}|+1$ with $\mathit{bit}(|z_{\tilde S,\bar b}|-1,z'_{\tilde S,\bar b})=\mathit{bit}(i,r_j)$. Consider $(X_0,p_0)$ for $p_0:=p$ and $X_0$ the set of all pairs $(\tilde S,\bar b)$ with $\tilde S\in \tilde L$ and $\bar b\in[m]^{\mathit{ar}(\tilde S)}$. Define a sequence $(X_0,p_0), (X_1,p_1),\ldots$ by iterating the function $(X,q)\mapsto (X',q')$. This gives a sequence $X_0\supseteq X_1\supseteq \cdots$ and a sequence of partial oracles $p=p_0,p_1,\ldots$ each extending~$p$. The maximal sequence of $p_i$-answers to $t_{\tilde S}$ on $\bar b$ for pairs $(\tilde S,\bar b)\in X_i$ is prolonged in each step, and the pair gets deleted once the sequence is completed (recall the definition of $Y$ above). As the decision trees have height at most $b_0$, we conclude that $X_{b_0}$ is empty. On the other hand, the sets $X_i$ shrink per step as estimated in~\eqref{eq:shrink}. At the start $|X_0|=s_{\tilde L}(m)$, so $$ 0=|X_{b_0}|> s_{\tilde L}(m)-b_0\cdot \tilde d(m)- b_0^2\cdot\frac{g(n)\cdot s_{\tilde L}(m)\cdot r_L}{n-\|p\|\cdot r_L-g(n)}, $$ hence (recall $\tilde d(m)>0$) $$ b_0> \frac{s_{\tilde L}(m)}{\tilde d(m)}\cdot \big( 1-b_0^2\cdot \frac{g(n)\cdot r_L}{n-\|p\|\cdot r_L-g(n)} \big). $$ By (ii), the r.h.s.\ is $\ge (s_{\tilde L}(m)/\tilde d(m))\cdot 1/2$, a contradiction to (iii). \end{proof} \section{Typical forcing} This section gives a general method to construct models of $\forall\mathsf{T}^1_2(\PV(\alpha))$ by forcing. We define {\em typical} forcings with {\em typical graded} forcing frames that encompass many forcing-type arguments in bounded arithmetic~\cite{pw,riisbrics,ajtai}.
Theorem~\ref{thm:T12} states that such forcings produce models of $\forall\mathsf{T}^1_2(\PV(\alpha))$ if they satisfy a series of simple technical conditions. We give an application in the next section but believe the general result is of independent interest. The proof follows the set-up from \cite{am}, a simplified form of which is recalled in Section~\ref{sec:basics}. Section~\ref{sec:definability} proves Theorem~\ref{thm:T12}. Throughout this section we fix \begin{enumerate}\itemsep=0pt \item[--] a countable language $L$ containing $\ensuremath{{\sf PV}}\xspace$; \item[--] a unary relation symbol $\alpha\notin L$; \item[--] an $L$-expansion $\mathbb{N}$ of the standard $\ensuremath{{\sf PV}}\xspace$-model; \item[--] a countable proper elementary extension $M$ of $\mathbb{N}$. \end{enumerate} \subsection{Forcing basics}\label{sec:basics} We recall some standard forcing terminology. New notions are highlighted as definitions. \medskip A (countable) {\em forcing frame} is a triple $(P,\preccurlyeq,\mathcal D)$ where $(P,\preccurlyeq)$ is a countable partial order with elements called {\em conditions} and $p\preccurlyeq q$ reads as $p$ {\em extends} $q$, and $\mathcal D$ is a countable family of dense subsets of $P$. A subset of $P$ is {\em dense (below $p$)} if every condition ($\preccurlyeq p$) has an extension in it. Conditions $p,q$ are {\em compatible}, written $p\|q$, if they have a common extension. \begin{definition} \label{def:graded} A {\em graded} forcing frame has additionally a non-increasing function $\| \cdot\|$ from~$P$ into $M$, that is, $\|q\|\le^M \|p\|$ for all $p,q\in P$ with $p\preccurlyeq q$. We say $p$ is a $b$-extension of $q$ if~$p\preccurlyeq q$ and $M\models \|p\|\le\|q\|+b$.
A graded forcing frame is {\em typical} if $P\subseteq M$ and there are formulas $\q{x\preccurlyeq y}$ and $\q{x\| y}$ and $\q{\|x\|=y}$ such that for all $p,q\in P$ and $b\in M$: \[ \begin{array}{llll} M\models& \q{p\preccurlyeq q}& \Longleftrightarrow & p\preccurlyeq q;\\ M\models& \q{p\| q}&\Longleftrightarrow& p\| q;\\ M\models& \q{\|p\|=b}&\Longleftrightarrow& \|p\| =b. \end{array} \] \end{definition} Since this mode of speech does not depend on $\mathcal D$ we shall also refer to $(P,\preccurlyeq,\|\cdot\|)$ as typical. The {\em forcing language} is $L\cup\{\alpha\}$ together with the elements of $M$ as constants. A {\em (universal) pre-forcing} is a binary relation $\Vdash$ between conditions and sentences of the forcing language satisfying the following: \begin{equation}\label{eq:recurrence} \begin{split} p\Vdash(\varphi\wedge\psi)&\ \Longleftrightarrow\ p\Vdash\varphi\text{ and }p\Vdash\psi;\\ p\Vdash\neg\varphi &\ \Longleftrightarrow\ q\not\Vdash\varphi \text{ for all } q\preccurlyeq p;\\ p\Vdash\forall x\varphi(x)&\ \Longleftrightarrow\ p\Vdash\varphi(a)\text{ for all } a\in M. \end{split} \end{equation} We write formulas with $\wedge,\neg,\forall$ and view $(\varphi\vee\psi)$ and $\exists x\varphi$ as abbreviations of the classical dualities $\neg(\neg \varphi\wedge\neg\psi)$ and $\neg\forall x\neg\varphi$. Then \begin{equation}\label{eq:recurrenceOR} \begin{split} p\Vdash(\varphi\vee\psi)&\ \Longleftrightarrow\ \{p\in P\mid p\Vdash\varphi\}\cup\{p\in P\mid p\Vdash\psi\} \text{ is dense below }p;\\ p\Vdash\exists x\varphi(x)&\ \Longleftrightarrow\ \textstyle \bigcup_{a\in M} \{p\in P\mid p\Vdash\varphi(a)\} \text{ is dense below }p. \end{split} \end{equation} Also note that $p\Vdash\neg\neg\varphi$ if and only if $\{p\in P\mid p\Vdash\varphi\}$ is dense below~$p$; for typical forcings, defined next, this is equivalent to $p\Vdash\varphi$ (see Lemma~\ref{lem:forcing}~(e) below).
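For the reader's convenience, the first clause of \eqref{eq:recurrenceOR} can be derived by unfolding the abbreviation $(\varphi\vee\psi)=\neg(\neg\varphi\wedge\neg\psi)$ via \eqref{eq:recurrence}:
\begin{align*}
p\Vdash(\varphi\vee\psi)&\ \Longleftrightarrow\ q\not\Vdash(\neg\varphi\wedge\neg\psi)\text{ for all }q\preccurlyeq p\\
&\ \Longleftrightarrow\ \text{for all }q\preccurlyeq p\text{ there is }q'\preccurlyeq q\text{ with }q'\Vdash\varphi\text{ or }q'\Vdash\psi,
\end{align*}
and the last condition says precisely that $\{r\in P\mid r\Vdash\varphi\}\cup\{r\in P\mid r\Vdash\psi\}$ is dense below~$p$. The clause for $\exists$ follows in the same way from the abbreviation $\neg\forall x\neg\varphi$.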
\begin{definition}\label{def:typical} A {\em typical forcing} is a pre-forcing that satisfies the following for $p,q\in P$ and all atomic sentences $\varphi$ and closed terms $s,t$ of the forcing language: \begin{eqnarray*} \text{(Extension)}&&\text{if $q\preccurlyeq p\Vdash\varphi$, then $q\Vdash\varphi$};\\ \text{(Stability)}&&\text{if the set of conditions forcing $\varphi$ is dense below $p$, then $p\Vdash\varphi$};\\ \text{(Conservativity)}&&\text{if $\varphi$ does not mention $\alpha$, then: } p\Vdash\varphi\Longleftrightarrow M\models\varphi;\\ \text{(Extensionality)}&&\text{if $M\models s{=}t$, then: } p\Vdash\alpha(t)\Longleftrightarrow p\Vdash\alpha(s). \end{eqnarray*} \end{definition} A {\em filter} $G$ is a set of conditions that contains a common extension of any two $p,q\in G$, and that contains any condition of which it contains an extension. A {\em generic} filter is one that intersects ``sufficiently many'' dense sets including those in $\mathcal D$. We refer to \cite[Definition~2.9]{am} for a definition, and just recall the standard lemma that every condition is contained in some generic filter (\cite[Lemma~2.12]{am}). For such a filter $G$ \cite[Definition 2.16]{am} defines a structure~$M[G]$ interpreting the forcing language. We skip the definition as we only need the following properties: \begin{lemma}[Forcing Lemma]\label{lem:forcing} Assume $(P,\preccurlyeq,\mathcal D)$ is a forcing frame and $\Vdash$ is a typical forcing. Then for every generic filter $G$, sentence $\varphi$ of the forcing language, and $p\in P$: \begin{enumerate}\itemsep=0pt \item[(a)] There is $\alpha^M_G\subseteq M$ such that $M[G]\cong (M,\alpha^M_G)$ as structures interpreting the forcing language ($(M,\alpha^M_G)$ interprets each constant $a\in M$ by $a$ itself). \item[(b)] {\em (Truth Lemma)} $M[G]\models \varphi$ if and only if $q\Vdash \varphi$ for some $q\in G$.
\item[(c)] {\em (Forcing Completeness)} $p\Vdash\varphi$ if and only if $M[H]\models\varphi$ for all generic filters $H$ containing $p$. \item[(d)] The set of sentences forced by $p$ is closed under logical consequence. \item[(e)] (Extension), (Stability) and (Conservativity) hold for all sentences $\varphi$ of the forcing language. \end{enumerate} \end{lemma} \begin{proof} This is proved in \cite{am}, we give precise references. First observe that, in the sense of \cite[Definition~2.16]{am}, $M[G]$ {\em is defined} for all generic filters $G$. Thus, (a)-(d) are \cite[Proposition~2.26]{am}, \cite[Theorem~2.19]{am}, \cite[Corollary~2.20~(2)]{am} and \cite[Corollary~2.20~(3)]{am}, respectively. In (e), (Extension) and (Stability) are \cite[Lemma~2.6~(1),(2)]{am}, and (Conservativity) is implied by (a) and (c). \end{proof} We remark that typical forcings behave nicely with bounded quantifiers, namely: \begin{equation}\label{eq:bdqu} p\Vdash\forall y{<}t\ \varphi(y)\ \Longleftrightarrow\ p\Vdash\varphi(a)\text{ for all $a \in M$ with }M\models a{<}t. \end{equation} \subsection{Partially definable forcing}\label{sec:definability} A condition $p$ is {\em compatible} with a sentence $\varphi$ of the forcing language, written $p\|\varphi$, if some extension of $p$ forces $\varphi$. Compatibility is dual to forcing in the sense that $p\|\varphi$ if and only if $p\not\Vdash\neg\varphi$, and, $p{\not\hspace*{-0.3ex}\|} \neg\varphi$ if and only if $p\Vdash\varphi$. \begin{theorem}\label{thm:principal} Let $\Phi$ be a set of formulas of the forcing language. Under the assumptions of the previous lemma, suppose $\Vdash$ is {\em definable for $\Phi$}, i.e., for all $p\in P$ and $\varphi(\bar x)\in\Phi$, the set of tuples $\bar a$ from $M$ such that $p\|\varphi(\bar a)$ is definable in $M$. Then~$M[G]\models\mathsf{MIN}(\exists \Phi)$. \end{theorem} \begin{proof} This follows from \cite[Theorem~3.5]{am} and \cite[Lemma~3.9~(1)]{am}. 
\end{proof} \begin{definition} Let $b_0\in N\subseteq M$. A {\em $\Delta^{b_0}_0(\alpha)$-formula with parameters from $N$} is a $\ensuremath{{\sf PV}}\xspace\cup\{\alpha\}$-formula with parameters from $N$ all of whose quantifiers are {\em $b_0$-bounded}, i.e., of the form $\forall x{<}b_0$ and $\exists x{<}b_0$. Closing these formulas under positive Boolean combinations, $b_0$-bounded quantifiers and bounded existential quantifiers $\exists x{<}t$ (where $t$ is a $\ensuremath{{\sf PV}}\xspace$-term without $x$ and possibly with parameters from $N$) yields the set of {\em $\Sigma^{b_0}_1(\alpha)$-formulas with parameters from~$N$}. \end{definition} Recall from Section~\ref{sec:dense} that the size $|\varphi|$ of a formula $\varphi$ is the size of its formula tree. \begin{lemma}[Definability Lemma] \label{lem:definability} Let $(P,\preccurlyeq,\mathcal D,\|\cdot\|)$ be a typical graded forcing frame,~$\Vdash$ a typical forcing, and $b_0\in M\setminus\{0,1\}$. Suppose \begin{enumerate}\itemsep=0pt \item[(a)] for every $r\in\mathbb{N}$ and $p\in P$ the set $\{ q\in P \mid q\text{ is a $b_0^r$-extension of } p\}$ is definable in $M$; \item[(b)] for every literal sentence $\varphi$ of the forcing language and all $p^*,p\in P$ with $p\succcurlyeq p^*\Vdash\varphi$ there exists a $b_0$-extension $q$ of $p$ that is compatible with $p^*$ and forces $\varphi$; \item[(c)] for every atomic formula $\varphi(\bar x)$ of the forcing language and $p\in P$ the set of tuples $\bar a$ from $M$ such that $p\Vdash\varphi(\bar a)$ is definable in $M$. \end{enumerate} Then $\Vdash$ is definable for $\Delta^{b_0}_0(\alpha)$-formulas with parameters from $M$. \end{lemma} Intuitively, conditions (a)-(c) are not much to ask for after a suitable choice for $b_0$, and this choice is mainly restricted by condition (a). Consider the usual case that $P$ has a minimum, is undefinable in $M$ and there is an upper bound $s\in M$ on $\|p\|,p\in P$.
Then (a) implies $b_0^r\le^M s$ for all $r\in\mathbb{N}$, equivalently, $b_0$ is bounded by an infinitesimal power of $s$. \begin{proof}[Proof of Lemma~\ref{lem:definability}] By the Forcing Lemma~\ref{lem:forcing}~(d) we can restrict attention to $\Delta_0^{b_0}(\alpha)$-formulas in negation normal form (NNF), i.e., formulas built from literals by $\wedge,\vee$ and $b_0$-bounded quantification $\exists x{<}b_0,\forall x{<}b_0$. For $\varphi$ in NNF let~$\varphi\neg$ be the formula in NNF obtained from $\neg\varphi$ by pushing the negation inside, that is, by swapping $\forall/\exists$ and $\wedge/\vee$ and literals with their complementary version. Let $k_\varphi$ denote the number of occurrences of $\forall,\exists,\wedge,\vee$ in $\varphi$. We show by induction on $k_\varphi$ that, if $\varphi$ has quantifier rank at most $r$, then: \begin{enumerate}\itemsep=0pt \item[(i)] for all tuples $\bar a$ from $M$ and all conditions $p,p^*\in P$ with $p\succcurlyeq p^*\Vdash \varphi(\bar a)$ there exists a $|\varphi|\cdot b_0^{r+1}$-extension $q$ of $p$ with $p^*\| q$ and $q\Vdash\varphi(\bar a)$; \item[(ii)] there is a formula $\hat\varphi(z,\bar x)$ such that for all $p\in P$ the formula $\hat\varphi(p,\bar x)$ defines the set $\{\bar a\mid p\|\varphi(\bar a)\}$ in $M$; \item[(iii)] there is a formula $\tilde\varphi(z,\bar x)$ such that for all $p\in P$ the formula $\tilde\varphi(p,\bar x)$ defines the set $\{\bar a\mid p\Vdash\varphi(\bar a)\}$ in $M$. \end{enumerate} For $k_\varphi=0$, $\varphi$ is a literal. If $\varphi$ does not mention $\alpha$, then (i)-(iii) are trivial. If $\varphi(\bar x)$ is $\alpha(t(\bar x))$ for some term $t(\bar x)$, then (i) and (iii) hold by (b) and (c), respectively. For (ii), note that by~(b) we have that $p\|\alpha(t(\bar a))$ if and only if there is a $b_0$-extension $q$ of $p$ that forces~$\alpha(t(\bar a))$; this is easy to express using (a) and (c). 
If $\varphi(\bar x)$ is $\neg\alpha(t(\bar x))$ for some term $t(\bar x)$, then (i) holds by (b). For (ii), using (Stability), set $\hat{\varphi}(z,\bar x):=\neg\widetilde{\alpha(t)}(z,\bar x)$. For (iii) set $\tilde{\varphi}(z,\bar x):=\neg\widehat{\alpha(t)}(z,\bar x)$. \medskip For the induction step we distinguish four cases according to whether $\varphi(\bar x)$ is obtained by $\wedge,\vee,\forall x{<}b_0$ or $\exists x{<}b_0$ from formulas $\psi$ with $k_\psi<k_\varphi$. \begin{enumerate}\itemsep=0pt \item Suppose $\varphi(\bar x)= (\varphi_0(\bar x)\wedge\varphi_1(\bar x))$. For (i) let $\bar a$ be a tuple from $M$ and suppose $$ p\succcurlyeq p^*\Vdash(\varphi_0(\bar a)\wedge\varphi_1(\bar a)). $$ Then $p\succcurlyeq p^*\Vdash\varphi_0(\bar a)$. By induction there is a $|\varphi_0|b_0^{r+1}$-exten\-sion~$q^0$ of~$p$ which is compatible with~$p^*$ and forces $\varphi_0(\bar a)$. Choose $q^*$ extending both $p^*$ and~$q^0$. Then $q^0\succcurlyeq q^*\Vdash\varphi_1(\bar a)$. By induction there is a $|\varphi_1|b_0^{r+1}$-extension $q$ of $q^0$ which is compatible with~$q^*$ and forces~$\varphi_1(\bar a)$. Then $q$ is a $|\varphi_0|b_0^{r+1}+|\varphi_1|b_0^{r+1}< |\varphi|b_0^{r+1}$-extension of $p$ and compatible with~$p^*$. It forces $\varphi_1(\bar a)$ by choice and $\varphi_0(\bar a)$ as it extends~$q^0$, so $q\Vdash(\varphi_0(\bar a)\wedge\varphi_1(\bar a))$. For (ii) observe that we just showed that $p\|\varphi(\bar a)$ if and only if there is a $|\varphi|b_0^{r+1}$-extension~$q$ of~$p$ that forces both $\varphi_0(\bar a)$ and $\varphi_1(\bar a)$. This can be expressed using (a) and (iii) for~$\varphi_0,\varphi_1$. For (iii) set $\tilde\varphi(z,\bar x):=\tilde\varphi_0(z,\bar x)\wedge\tilde\varphi_1(z,\bar x)$. \item Suppose $\varphi(\bar x)= (\varphi_0(\bar x)\vee\varphi_1(\bar x))$. For (i) let $\bar a$ be a tuple from $M$ and suppose $$ p\succcurlyeq p^*\Vdash(\varphi_0(\bar a)\vee\varphi_1(\bar a)).
$$ Then there are $b\in\{0,1\}$ and $\tilde p$ such that $p^*\succcurlyeq\tilde p\Vdash\varphi_b(\bar a)$ (recall~\eqref{eq:recurrenceOR}). Then $p\succcurlyeq \tilde p$ and induction gives a $|\varphi_b|b_0^{r+1}$-extension $q$ of $p$ which is compatible with~$\tilde p$, and hence also with~$p^*$, and forces $\varphi_b(\bar a)$, and hence also $\varphi(\bar a)$. For (ii) set $\hat\varphi(z,\bar x):=\hat\varphi_0(z,\bar x)\vee\hat\varphi_1(z,\bar x)$. For (iii) set $\tilde\varphi(z,\bar x):=\neg\widehat{\varphi\neg}(z,\bar x)$; note $\varphi\neg$ is a conjunction with $k_{\varphi\neg}=k_{\varphi}$, so $\widehat{\varphi\neg}$ has been defined in the previous case. \item Suppose $\varphi(\bar x)=\forall y{<} b_0 \psi(y,\bar x)$. Let $\bar a$ be a tuple from $M$ and suppose $$ p\succcurlyeq p^*\Vdash\forall y{<} b_0\ \psi(y,\bar a). $$ We claim that for every~$b\le^M b_0$ there is a $b\cdot |\psi|\cdot b_0^{r}$-extension~$q^b$ of $p$ such that $q^b\|p^* $ and $q^b\Vdash \psi(c,\bar a)$ for all~$c<^Mb$. This is an $M$-definable property of $b$. Indeed, using (a) and the definability of $\|\cdot\|$ (Definition~\ref{def:graded}), the set of $b\cdot |\psi|\cdot b_0^{r}$-extensions of $p$ is definable in $M$ (with parameter~$b$), forcing $\psi(c,\bar a)$ for all~$c<^Mb$ is expressed using $\hat\psi(z,y,\bar x)$, and compatibility with $p^*$ is expressed using $\q{x\| y}$. Since $M$ is an elementary extension of $\mathbb{N}$, it satisfies induction for all formulas in its language. We can thus prove our claim by induction on $b$ in $M$. Then (i) will follow, witnessed by $q^{b_0}$ (recall~\eqref{eq:bdqu}). For~$b=0$ take~$q^0:=p$. Assume that~$b<^Mb_0$ and we found $q^b$ as desired. Let $q^*$ be a common extension of $q^b$ and $p^*$. Then $q^b\succcurlyeq q^*\Vdash \psi(b,\bar a)$. Note $\psi$ has quantifier rank at most $r-1$. 
Applying (i) for $\psi$ gives a $|\psi|b_0^{r}$-extension $q^{b+1}$ of~$q^b$ that forces $\psi(b,\bar a)$ and is compatible with $q^*$ and hence with~$p^*$; since~$q^{b+1}$ extends $q^b$ it forces $\psi(c,\bar a)$ for all $c$ with $M\models c{<}b{+}1$. To see (ii), note we showed that $p\|\varphi(\bar a)$ if and only if there exists a $|\psi|\cdot b_0^{r+1}$-extension of~$p$ forcing $\psi(c,\bar a)$ for all $c$ with $M\models c{<}b_0$. This is easily expressed using (a) and the formula~$\tilde\psi(z,y,\bar x)$. For (iii) set $\tilde\varphi(z,\bar x):=\forall y{<}b_0\tilde{\psi}(z,y,\bar x)$ (recall \eqref{eq:bdqu}). \item Suppose $\varphi(\bar x)= \exists y{<} b_0\psi(y,\bar x)$. For~(i) let $\bar a$ be a tuple from $M$ and suppose $$p\succcurlyeq p^*\Vdash\exists y{<} b_0\ \psi(y,\bar a). $$ Then there are $b\in M$ and $\tilde p$ such that $p^*\succcurlyeq\tilde p\Vdash (b{<}b_0\wedge \psi(b,\bar a))$ (recall~\eqref{eq:recurrenceOR}). By (Conservativity), $b<^M b_0$ and $\tilde p\Vdash\psi(b,\bar a)$. As $\psi(y,\bar x)$ has quantifier rank $\le r-1$, induction gives a $|\psi|b_0^{r}$-extension $q$ of $p$ which is compatible with~$\tilde p$, and hence with $p^*$, and forces~$\psi(b,\bar a)$ and hence $\exists y{<} b_0\ \psi(y,\bar a)$. For (ii), note we just saw that $p\| \exists y{<} b_0\psi(y,\bar a)$ if and only if $p\| \psi(b,\bar a)$ for some $b<^Mb_0$. We thus set $\hat\varphi(z,\bar x):=\exists y{<}b_0\hat\psi(z,y,\bar x)$. For (iii), set $\tilde\varphi(z,\bar x):=\neg\widehat{\varphi\neg}(z,\bar x)$; note $\varphi\neg$ starts with $\forall y{<}b_0$ and has $k_{\varphi\neg}=k_{\varphi}$, so $\widehat{\varphi\neg}$ has been defined in the previous case. \end{enumerate} This finishes the proof of the Definability Lemma. \end{proof} We are ready to prove the main result in this section, a general method to produce models of $\forall\mathsf{T}^1_2(\PV(\alpha))$ by typical forcings. 
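Before doing so, let us make explicit the arithmetic behind the grade bound in case 3 of the preceding proof: each induction step in $M$ passes from $q^b$ to a $|\psi|\cdot b_0^{r}$-extension $q^{b+1}$, so the grades telescope and the final witness $q^{b_0}$ for (i) satisfies
$$
\|q^{b_0}\|\ \le\ \|p\|+b_0\cdot|\psi|\cdot b_0^{r}\ =\ \|p\|+|\psi|\cdot b_0^{r+1}\ \le\ \|p\|+|\varphi|\cdot b_0^{r+1}
$$
in $M$, as required.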
\begin{definition} A {\em $\ensuremath{{\sf PV}}\xspace$-cut} in $M$ is a substructure $N$ of the $\ensuremath{{\sf PV}}\xspace$-reduct of~$M$ such that $a <^Mb\in N$ implies $a\in N$ for all $a,b\in M$. \end{definition} Recall the notation $\alpha^M_G$ from the Forcing Lemma~\ref{lem:forcing}~(a). \begin{theorem}\label{thm:T12} Assume the forcing frame $(P,\preccurlyeq,\mathcal D,\|\cdot\|),$ the forcing $\Vdash$ and $b_0\in M$ satisfy the assumption of the previous lemma, and let $G$ be a generic filter. Assume further that $N$ is a $\ensuremath{{\sf PV}}\xspace$-cut of $M$ such that $b_0\in N$ and $b_0$ {\em bounds lengths in~$N$}, i.e., $N\models \forall x\ |x|{<}b_0$. Set $$ \alpha^N:=\alpha^M_G\cap N. $$ Then $(N,\alpha^N)$ has a unique expansion to a model of $\forall\mathsf{T}^1_2(\PV(\alpha))$. \end{theorem} \begin{proof} By Lemma~\ref{lem:definability} and Theorem~\ref{thm:principal} we have $(M,\alpha^M_G)\models\mathsf{MIN}(\exists\Delta_0^{b_0}(\alpha))$. We claim that \begin{equation*}\label{eq:s12} (N,\alpha^N)\models\mathsf{MIN}(\Sigma_1^{b}(\alpha)). \end{equation*} For contradiction, assume $\varphi(x)$ is a $\Sigma_1^{b}(\alpha)$-formula with parameters from~$N$ that defines in~$(N,\alpha^N)$ a non-empty set without minimum. Since~$b_0$ bounds lengths in $N$, $\varphi(x)$ is in $(N,\alpha^N)$ equivalent to a $\Sigma_1^{b_0}(\alpha)$-formula~$\varphi'(x)$ with parameters from $N$. Since $N$ is a $\ensuremath{{\sf PV}}\xspace$-cut in $M$, $\varphi'(x)$ also defines in~$(M,\alpha^M_G)$ a non-empty set without minimum. But a standard collection argument (see e.g.~\cite[Proof of Theorem~4.3]{am}) shows $\varphi'(x)$ is in $(M,\alpha^M_G)$ equivalent to an $\exists\Delta_0^{b_0}(\alpha)$-formula. We thus get a contradiction to $\mathsf{MIN}(\exists\Delta_0^{b_0}(\alpha))$. Clearly, $N\models\forall\ensuremath{{\sf PV}}\xspace$, so the theorem follows by Lemma~\ref{lem:PVpa} and Proposition~\ref{prop:cons}.
\end{proof} \section{Riis' theorem and extensions}\label{sec:riis} We define a forcing whose conditions are partial oracles on $[n]$ coding partial structures that do not verify a given $\varphi$. The oracle in the generic expansion then codes a total structure on~$[n]$ where $\varphi$ fails. It is a routine task to verify that our forcing has various desirable properties (typical, graded, etc.). We shall give the details in Section~\ref{sec:fwps}. Sections~\ref{sec:Briis} and \ref{sec:riisext} then prove certain stronger variants of Theorems~\ref{thm:Briis} and \ref{thm:main} as an application of Theorem~\ref{thm:T12}. We view Theorem~\ref{thm:main} as an extension of Theorem~\ref{thm:Briis} because the proof of the former is not much more than that of the latter plus an additional application of the Core Lemma~\ref{lem:dense2}. The proof exemplifies the role of forcing in bounded arithmetic, as viewed in~\cite{am}, to reduce independence to finite combinatorics, here, density arguments. \subsection{Forcing with partial structures}\label{sec:fwps} We define a notion of forcing in the following situation: \begin{enumerate}\itemsep=0pt \item[--] $L$ is a finite language and $\varphi$ is a basic $L$-sentence (Definition~\ref{df:basic}); \item[--] $\mathbb{N}$ is an expansion of the standard $\ensuremath{{\sf PV}}\xspace$-model interpreting a countable language including ($\ensuremath{{\sf PV}}\xspace$ and) $L$; \item[--] $\str B\not\models\varphi$ where $\str B$ is the $L$-reduct of $\mathbb{N}$; \item[--] $M$ is a countable proper elementary extension of $\mathbb{N}$; \item[--] $b_0,n\in M\setminus \mathbb{N}$ such that $b_0^k<^M n$ for all $k\in\mathbb{N}$. \end{enumerate} Hence the role of $L$ in Section~\ref{sec:basics} is played by the language of $M$ here.
\medskip There is a $(\ensuremath{{\sf PV}}\xspace\cup L)$-formula $\mathit{PaOr}(y,x)$ that defines in $\mathbb{N}$ the pairs $(m,p)$ such that $m>0$ and $p$ is a code of a partial $L$-oracle on $[m]$ and the partial $L$-structure $\str B(L,m,p)=\str B(p)$ is embeddable into $\str B$. The size $\|p\|$ of such~$p$ does not depend on $m$ and is definable in $\mathbb{N}$. We have a $(\ensuremath{{\sf PV}}\xspace\cup L)$-formula ``$x$~is relevant (wrt~$L,y$)'' defining in $\mathbb{N}$ the set of pairs $(a,m)$ such that $a$ is relevant (wrt $L,m$). A {\em partial $L$-oracle on $[n]$ in $M$} is an element satisfying $\mathit{PaOr}(n,x)$ in $M$, and $a\in M$ is {\em relevant (wrt $L,n$)} if it satisfies ``$x$ is relevant (wrt $L,n$)'' in $M$. We do not distinguish a partial $L$-oracle $p$ notationally from the pair of sets it codes. We write $p=\langle p_0,p_1\rangle$ (in $M$); formally, $p_0,p_1$ are $(p)_0,(p)_1$ calculated in $M$. Since $M$ is an elementary extension of $\mathbb{N}$, the function $\|\cdot\|$ extends to $M$. Let $P\subseteq M$ be the set of partial $L$-oracles $p$ (on $[n]$) in $M$ such that $$ M\models \|p\|\le b_0^k\text{ for some }k\in\mathbb{N}.$$ We let $p,q,\ldots$ range over~$P$. Note that $P$ is not definable in $M$. We set $p\preccurlyeq q$ if and only if~$p$ extends $q$ in the sense of Definition~\ref{def:extend} (applied in $M$). \begin{lemma} \label{lem:typ} $(P,\preccurlyeq,\|\cdot\|)$ is a typical graded forcing frame. \end{lemma} \begin{proof} Clearly, $\|\cdot\|$ is non-increasing. For typicality, we already noted the formula $\q{\|x\|{=}y}$ and set (recall $x\in y$ is $\mathit{bit}(y,x){=}1$) \begin{eqnarray*} \q{x{\preccurlyeq} y}&:=&\forall z(z\in (y)_0\to z\in (x)_0)\wedge \forall z(z\in (y)_1\to z\in (x)_1);\\ \q{x\| y}&:=&\exists z(\mathit{PaOr}(n,z)\wedge \q{z{\preccurlyeq} x}\wedge \q{z{\preccurlyeq} y}). 
\end{eqnarray*} A pair of conditions $(p,q)$ satisfies $\q{x\| y}$ in $M$ if and only if $p$ and $q$ have a common extension {\em in $P$}. Indeed, if there is a partial $L$-oracle extending both $p$ and $q$, then there is one of size at most $ \|p\|+\|q\|$ which hence is in $P$. \end{proof} This completes the definition of the forcing frame up to the choice of $\mathcal D$. This choice will be based on the following corollaries to Section~\ref{sec:dense}, explaining the title of that section. \begin{corollary}\label{cor:D1} For every relevant $a\in M$, the set $ D(a):=\{q\in P\mid a\in q_0\cup q_1\} $ is dense. \end{corollary} \begin{proof} Let $p\in P$, and $a\in M$ be relevant, i.e., in $M$ of the form $\langle S,\bar a,i\rangle$ or $\langle S,\bar a\rangle$ for $S\in L$ and~$\bar a\in [n]^{\mathit{ar}(S)}$ and $i<|n|$ in $M$. Lemma~\ref{lem:dense1} formalizes as a sentence which is true in $\mathbb{N}$, and hence in $M$. The assumption~\eqref{eq:psmall} of this lemma holds in $M$ for all $p\in P$. Hence its conclusion gives in $ M$ a 1-extension $q$ of $p$ in $D(a)$. Clearly, $q\in P$. \end{proof} The following corollary is proved by a case distinction as to whether $m$ is small or large and then applies Lemma~\ref{lem:dense3} or \ref{lem:dense2}. It is not needed in the proof of Riis' theorem given in the next section. \begin{corollary}\label{cor:D2} Assume $\str B$ is $n^{o(1)}$-large and $\tilde \varphi$ is a weak finitary combinatorial principle in the language $\tilde L$. Then for every $m\in M\setminus\{0\}$ and every family of decision trees $(t_{\tilde S})_{\tilde S\in\tilde L}$ in~$M$ of height at most $b_0$ the following set is dense: $$ D((t_{\tilde S})_{\tilde S\in\tilde L},m):=\big\{q\in P\mid \str C((t_{\tilde S})_{\tilde S\in\tilde L},m,q)\text{ verifies }\tilde \varphi\big\}. 
$$ \end{corollary} \begin{proof} There is a definable function in $\mathbb{N}$ that maps $n,p$ to the (natural numbers coding the) partial structure $\str B(L,n,p)=\str B(p)$. Similarly, $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,q)$ is the value of a definable (in~$\mathbb{N}$) function on $m,q$ and the parameters in the definitions of the decision trees $t_{\tilde S}, \tilde S\in\tilde L$. The size function $s_{\tilde L}$ and the determinacy $\tilde d$ of $\tilde\varphi$ are clearly definable in $\mathbb{N}$, and so is some function $g(n)\le n^{o(1)}$ witnessing that $\str B$ is $g$-large. Since $M$ is an elementary extension of~$\mathbb{N}$ these functions extend to $M$, and we denote the extensions again by $s_{\tilde L},\tilde d$ and $g$. We have Lemmas~\ref{lem:dense3} and \ref{lem:dense2} for $M$ instead of~$\mathbb{N}$. Let $p\in P$ be given. We distinguish two cases. Assume first that $m$ satisfies (iii) of Lemma~\ref{lem:dense2}. Assumptions~(i) and (ii) of this lemma hold as well. For (ii), observe that overspill gives $t\in M\setminus \mathbb{N}$ such that $M\models g(n) {<} n^{1/t}$; hence the r.h.s.\ of~(ii) is at most $n^{1/t'}$ for some~$t'\in M\setminus \mathbb{N}$. The conclusion of Lemma~\ref{lem:dense2} gives in~$M$ a $b_0|\tilde\varphi|$-extension $q$ of $p$ in $D((t_{\tilde S})_{\tilde S\in\tilde L},m)$. Note $q\in P$ because $\|q\|\le \|p\|+|\tilde\varphi|b_0\le b_0^k$ for suitable standard $k\in \mathbb{N}$. Now assume that $m$ violates (iii) of Lemma~\ref{lem:dense2}, i.e., $s_{\tilde L}(m)< 2b_0\tilde d(m)$ in $M$. As $\tilde \varphi$ is weak, $s_{\tilde L}(m)\ge m^{1/\ell}\cdot \tilde d(m)$ for some $\ell\in\mathbb{N}\setminus\{0\}$. As $\tilde\varphi$ is valid in the finite, $\tilde d(m)>0$ in~$M$. It follows that $m< (2b_0)^\ell$ in~$M$. But then the assumption \eqref{eq:msmall} of Lemma~\ref{lem:dense3} holds true in $M$: the r.h.s.\ is bounded by $b_0^{k}$ for some standard $k\in \mathbb{N}$ and $b_0^k<n$ in $M$.
The conclusion of this lemma gives in~$M$ some $q\in D((t_{\tilde S})_{\tilde S\in\tilde L},m)$ extending~$p$; indeed $q\in P$ because $\|q\|\le \|p\|+b_0|\tilde L|m^{r_{\tilde L}-1}< b_0^{k}$ in $M$. \end{proof} We next define a typical forcing $p\Vdash\varphi$ for $p\in P$ and $\varphi$ a sentence in the forcing language. One might be tempted to define $p\Vdash\alpha(t)$ if and only if $t^M\in p_1$; recall $t^M$ is the value of the closed term $t$ of the forcing language in $M$ (treating its constants from $M$ as parameters). This, however, does not work: assume $t^M=\langle S,\bar a\rangle\notin p_1$ with $S^{[n]}(\bar a)=1/2$ in~$\str B(p)$; it might be that every partial substructure of $\str B$ containing an isomorphic copy of $\str B(p)$ is such that the copy of $\bar a$ is mapped to 1 by $S^A$ in $\str B$. In this case, $t^M\in q_1$ for all extensions $q$ of $p$ with $S^{[n]}(\bar a)\neq 1/2$ in~$\str B(q)$. Then Forcing Completeness (Lemma~\ref{lem:forcing}~(c)) fails:~$\alpha(t)$ is not forced by $p$ but holds in all generic expansions built by filters containing $p$. The issue is sidestepped using a weaker and slightly more technical definition: \begin{lemma}\label{lem:defass} There is exactly one typical forcing $\Vdash$ satisfying for all closed terms $t$ of the forcing language and all $p\in P$: \begin{equation}\label{eq:forcedef} p\Vdash\alpha(t)\ \Longleftrightarrow \ t^{M}\textup{ is relevant and $t^{M}\notin q_0$ for every 1-extension $q$ of $p$}. \end{equation} Moreover, $(P,\preccurlyeq,\mathcal D,\|\cdot\|),\Vdash$ and $b_0$ satisfy the assumption of the Definability Lemma~\ref{lem:definability}. \end{lemma} \begin{proof} We define $p\Vdash\varphi$ for atomic formulas $\varphi$ without $\alpha$ according to (Conservativity) and use the recurrence~\eqref{eq:recurrence} to define it on more complex formulas. Uniqueness being clear, we check this defines a typical forcing. 
The rest being obvious, we have to check (Extension) and (Stability) for atoms of the form $\alpha(t)$ where $t$ is a closed term of the forcing language. For (Extension) assume $p\succcurlyeq q\not\Vdash\alpha(t)$. We show $p\not\Vdash \alpha(t)$. This is clear if~$t^{M}$ is not relevant. Otherwise there is a 1-extension $q'$ of $q$ with $t^{M}\in q'_0$. Deleting some elements from $q_0',q'_1$ gives a 1-extension $p'$ of $p$ with $t^{M}\in p'_0$, so $p\not\Vdash\alpha(t)$. For (Stability) assume $p\not\Vdash\alpha(t)$. We have to find some extension $q$ of $p$ that does not have an extension forcing $\alpha(t)$. If $t^{M}$ is not relevant, we take $q:=p$. Otherwise there is a 1-extension $q$ of $p$ with $t^{M}\in q_0$. Clearly, no extension of $q$ forces $\alpha(t)$. We now verify the assumptions of the Definability Lemma~\ref{lem:definability}. Assumptions (a) and (c) being clear, we prove (b). Let $\varphi$ be a literal sentence of the forcing language and suppose $p\succcurlyeq p^*\Vdash \varphi$. We can assume $\varphi$ mentions $\alpha$ (otherwise take $q:=p$), so it equals $\alpha(t)$ or $\neg\alpha(t)$ for some closed term $t$. Assume the former (the latter case is similar). Then $t^{M}$ is relevant, so Corollary~\ref{cor:D1} gives $r\preccurlyeq p^*$ with $r\in D(t^M)$. Then $t^{M}\in r_1$ because $r\Vdash\alpha(t)$. From $r$ we get a 1-extension $q$ of~$p$ with $t^{M}\in q_1$ by deleting some elements from $r_0,r_1$. Clearly,~$q$ is compatible with $p^*$ and forces $ \alpha(t)$. \end{proof} Finally, we observe that the generic $\alpha_G^M$ from the Forcing Lemma~\ref{lem:forcing}~(a) is as expected: \begin{lemma}\label{lem:alphagood} For every relevant $a\in M$: \begin{eqnarray*}\label{eq:p1} \textstyle a\in \alpha^{M}_G& \Longleftrightarrow & a\in p_1\text{ for some } p\in G\\\label{eq:p0} & \Longleftrightarrow & a\not\in p_0\text{ for all } p\in G.
\end{eqnarray*} \end{lemma} \begin{proof} If $a\in \alpha^{M}_G$, then there is $p\in G$ forcing~$\alpha(a)$ by Lemma~\ref{lem:forcing}~(a),(b). By genericity there is $q\in G\cap D(a)$. Then $a\notin q_0$ as otherwise $q\Vdash\neg \alpha(a)$ and then~$p,q\in G$ would not be compatible. Hence $a\in q_1$. Conversely, if $a\in p_1$ for some $p\in G$, then $p\Vdash\alpha(a)$. Then $a\in \alpha^{M}_G$ by Lemma~\ref{lem:forcing}~(a),(b). This shows the first equivalence. The second is similar. \end{proof} \subsection{Proof of Theorem~\ref{thm:Briis}}\label{sec:Briis} We prove the following stronger version of Theorem~\ref{thm:Briis}. A function $f:\mathbb{N}\to\mathbb{N}$ is {\em subexponential} if $f(n)\le 2^{n^{o(1)}}$. If $f$ is definable (in the standard $\ensuremath{{\sf PV}}\xspace$-structure $\mathbb{N}$), then it has an extension $f^M$ to any elementary extension $M$ of $\mathbb{N}$. Call a $\ensuremath{{\sf PV}}\xspace$-cut $N$ of $M$ {\em subexponential in~$n$} if $f^M(n)\in N$ for all definable subexponential functions $f:\mathbb{N}\to\mathbb{N}$. To be clear about the notation in the following statement, recall that by Proposition~\ref{prop:cons} every model of $\forall\mathsf{T}^1_2(\PV(\alpha))$ has the form $\langle N,\alpha^N\rangle$ for $N\models\forall\ensuremath{{\sf PV}}\xspace$ and $\alpha^N\subseteq N$. \begin{theorem}\label{thm:strongmain} Let $L$ be a finite language and $\varphi$ a basic $L$-sentence without built-in symbols that fails in some infinite model. Then there exists a model $\langle N,\alpha^N\rangle$ of $\forall\mathsf{T}^1_2(\PV(\alpha))$ and $n\in N\setminus\{0\}$ such that $$ \str B(L,n,\alpha^N)\not\models\varphi. $$ Moreover, if $\psi(x)$ is a $\ensuremath{{\sf PV}}\xspace$-formula that defines an unbounded set in~$\mathbb{N}$, then $N,n$ can be chosen such that $N$ is a $\ensuremath{{\sf PV}}\xspace$-cut in an elementary extension $M$ of $\mathbb{N}$ such that $N$ is subexponential in $n$ and $M\models\psi(n)$. 
\end{theorem} \begin{remark} If $\varphi$ is not valid in the finite, the first statement is trivial but the second is not. An interesting case is that the spectrum of $\neg\varphi$ is co-infinite and belongs to the polynomial hierarchy, or equivalently, the set of $n>0$ such that $\varphi$ is valid in structures of size $[n]$ is infinite and definable by a bounded $\ensuremath{{\sf PV}}\xspace$-formula $ \q{\varphi \textit{ is valid on } [x]}$. Then we get $\str B(L,n,\alpha^N)\not\models\varphi$ and $N\models\q{\varphi \textit{ is valid on } [n]}$ (since this is bounded and true in $M$). \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:strongmain}] The proof consists mainly in putting the pieces together. Let $\str B$ be an infinite model of~$\neg\varphi$. We can assume it has universe $B=\mathbb{N}$. We let $\mathbb{N}$ be the $\ensuremath{{\sf PV}}\xspace\cup L$-structure whose $\ensuremath{{\sf PV}}\xspace$-reduct is the standard model and whose $L$-reduct is $\str B$. Let $f_0,f_1,\ldots$ enumerate the definable subexponential functions. For every $k\in\mathbb{N}$ the formula \begin{equation}\label{eq:nb} \textstyle \psi(x)\wedge y^k{<}x\wedge\bigwedge_{i<k}|f_i(x)|{<}y \end{equation} is satisfiable in $\mathbb{N}$. Thus there exists a countable elementary extension $M$ of $\mathbb{N}$ and $n,b_0\in M$ such that assigning $n$ to $x$ and $b_0$ to $y$ satisfies \eqref{eq:nb} for all $k\in\mathbb{N}$. Clearly, $n,b_0\in M\setminus\mathbb{N}$. Let $\mathcal D$ be the family of dense sets $D(a), a\in M,$ from Corollary~\ref{cor:D1}. The previous section gives a typical graded forcing frame $(P,\preccurlyeq,\mathcal D,\|\cdot\|)$ and a typical forcing $\Vdash$ satisfying the assumptions of the Definability Lemma~\ref{lem:definability} (see Lemma~\ref{lem:defass}). Let $G$ be a generic filter and $$ \textstyle N:=\bigcup_{k\in\mathbb{N}}\big\{a\in M\mid a\le^M f^M_k(n) \big\}. 
$$ This is a $\ensuremath{{\sf PV}}\xspace$-cut in $M$ and $b_0$ bounds lengths in $N$. By Theorem~\ref{thm:T12}, $(N,\alpha^N)$ has an expansion $\langle N,\alpha^N\rangle\models\forall\mathsf{T}^1_2(\PV(\alpha))$ where $\alpha^N:=\alpha^M_G\cap N$. Note that $\alpha^N=\alpha^M_G$ since $\alpha^M_G$ contains only relevant elements and these are in $N$. The ``moreover'' part is obvious. To verify $\str B(L,n,\alpha^N)\not\models\varphi$, we first observe that $\str B(L,n,\alpha^N)$ is the union of the partial structures $\str B(p), p\in G$. More precisely and first, every $\str B(p), p\in G,$ is a partial substructure of $\str B(L,n,\alpha^N)$ because, by Lemma~\ref{lem:alphagood}, sequences of $p$-answers are sequences of $\alpha^N=\alpha^M_G$-answers. Second, assume $S^{[n]}(\bar a)=b$ in $\str B(L,n,\alpha^N)$ for some~$S\in L$ and $\bar a,b$ from $[n]$. We claim that $S^{[n]}(\bar a)=b$ in $\str B(p)$ for some $p\in G$. Say, $S$ is a function symbol (the case of a relation symbol is similar). Choose $p\in G\cap D(\langle S, \bar a,0\rangle)$. Then $\langle S, \bar a,i\rangle\in p_0\cup p_1$ for all $i<|n|$, so $S^{[n]}(\bar a)\neq 1/2$ in~$\str B(p)$. Since $\str B(p)$ is a partial substructure of $\str B(L,n,\alpha^N)$ we have $S^{[n]}(\bar a)=b$ in $\str B(p)$. We now verify $\str B(L,n,\alpha^N)\not\models\varphi$. Assume otherwise and recall $\varphi$ has the form~\eqref{eq:basic} (Definition~\ref{df:basic}). Choose~$i\in I$ and a tuple $\bar a$ from $[n]$ such that $\str B(L,n,\alpha^N)$ verifies $\lambda_{ij}(\bar a)$ for all $j\in J$. The literals $\lambda_{ij},j\in J,$ are verified in a partial substructure $\str C$ of $\str B(L,n,\alpha^N)$ of size at most $|J|$. Let $ (S_0,\bar a_0),\ldots, (S_{|J|-1},\bar a_{|J|-1}) $ list all pairs $(S,\bar a)$ with $S\in L,\bar a\in[n]^{\mathit{ar}(S)}$ and $S^{[n]}(\bar a)\neq 1/2$ in~$\str C$. 
As observed above, for every $j<|J|$ there is $p^j\in G$ such that the value $S_j^{[n]}(\bar a_j)$ in~$\str B(p^j)$ is equal to this value in~$\str C$. Since $G$ is a filter, there is $p\in G$ extending all~$p^j,j<|J|$. Then~$\str C$ is a partial substructure of $\str B(p)$, so $\str B(p)$ verifies $\varphi$. As $p\in P$ we have that $\str B(p)$ embeds into (the $L$-reduct of) $M$. Hence $M\models\varphi$ by Lemma~\ref{lem:pres}, so $\str B\models\varphi$ by elementarity -- a contradiction. \end{proof} The following is repeated from the Introduction and strengthens Buresh-Oppenheim and Morioka's Theorem~\ref{thm:morioka}. Recall Definitions~\ref{df:conseq} and~\ref{df:assNPSP} and Example~\ref{ex:iter}. \begin{corollary} If $\varphi$ is a finitary combinatorial principle without built-in symbols that fails in some infinite model, then $Q_\varphi$ is independent from $Q_\textit{ITER}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$. \end{corollary} \begin{proof} Assume $\varphi$ satisfies the hypothesis, write it as $\exists\bar y\psi(\bar y)$ for $\psi(\bar y)$ quantifier free, and say it has language $L$. By the previous theorem and Lemma~\ref{lem:formAB}, $\forall\mathsf{T}^1_2(\PV(\alpha))$ does not prove $\exists y\q{\str B(L,x,\alpha)\models\psi(y)}$. But it is not hard to see that $\forall\mathsf{T}^1_2(\PV(\alpha))$ proves $\exists y\ \q{\str B(L,x,\alpha)\models\textit{ITER}(y)}$ (cf.~\cite[Theorem 4.4]{bk}). Now apply Corollary~\ref{cor:Tredclosed}. \end{proof} \subsection{Proof of Theorem~\ref{thm:main}}\label{sec:riisext} We prove the following stronger version of Theorem~\ref{thm:main}. Its proof is an extension of the previous one. For readability, statement (b) blurs the distinction between the symbol $f\in\ensuremath{{\sf PV}}\xspace(\alpha)$ and its interpretation in $\langle N,\alpha^N\rangle$. \begin{theorem}\label{thm:1} Let $L$ be a finite language and $\varphi$ a strong basic $L$-sentence without built-in symbols.
Further, let $\tilde \varphi$ be a weak finitary combinatorial principle in the language $\tilde L$. Then there exists a model $\langle N,\alpha^N\rangle$ of $\forall\mathsf{T}^1_2(\PV(\alpha))$ such that \begin{enumerate}\itemsep=0pt \item[(a)] $ \str B(L,n,\alpha^N)\not\models\varphi $ for some $n\in N\setminus\{0\}$; \item[(b)] $ \str B(\tilde L,m,f_{\bar a}^{-1}(0))\models\tilde\varphi $ for all $m\in N\setminus\{0\}$, $f(x,\bar z)\in \ensuremath{{\sf PV}}\xspace(\alpha)$ and tuples $\bar a$ from $N$. \end{enumerate} Moreover, if $\psi(x)$ is a $\ensuremath{{\sf PV}}\xspace$-formula that defines an unbounded set in~$\mathbb{N}$, then $N,n$ can be chosen such that $N$ is a $\ensuremath{{\sf PV}}\xspace$-cut in an elementary extension $M$ of $\mathbb{N}$ such that $N$ is subexponential in $n$ and $M\models\psi(n)$. \end{theorem} \begin{proof} Proceed as in the previous proof with two changes. First, since $\varphi$ is strong, we can additionally assume that the structure $\str B$ chosen in the beginning is $n^{o(1)}$-large. This ensures the assumptions of Corollary~\ref{cor:D2}. Second, we let $\mathcal D$ additionally include the countably many sets $D((t_{\tilde S})_{\tilde S\in\tilde L},m)$ from this corollary, where $m$ runs over $M\setminus\{0\}$ and $(t_{\tilde S})_{\tilde S\in\tilde L}$ runs over families of decision trees of height at most $b_0$ in $M$. We are left to verify (b). Let $f(x,\bar z)\in \ensuremath{{\sf PV}}\xspace(\alpha)$, $m\in N\setminus\{0\}$ and a tuple $\bar a$ from $N$ be given. Choose $(t_{\tilde S})_{\tilde S\in\tilde L}$ according to Lemma~\ref{lem:family}. We show $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,\alpha^N)\models\tilde\varphi$. We can assume that every~$t_{\tilde S}$ outputs~$0$ on arguments outside $[m]$ (otherwise modify $t_{\tilde S}$ by adding $m$ to its parameters). Then every $t_{\tilde S}$ is a decision tree also in~$M$. As~$b_0$ bounds lengths in $N$, the trees~$t_{\tilde S}$ have height at most $b_0$.
By genericity, there is $p\in G\cap D((t_{\tilde S})_{\tilde S\in\tilde L},m)$, so $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,p)$ verifies~$\tilde\varphi$. By Lemma~\ref{lem:alphagood}, $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,\alpha^N)$ extends $\str C((t_{\tilde S})_{\tilde S\in\tilde L},m,p)$ and hence verifies $\tilde\varphi$ too. \end{proof} \section{Discussion}\label{sec:disc} We discuss the applicability of Theorem~\ref{thm:main} using the examples from Section~\ref{sec:exas}. There we saw many strong finitary combinatorial principles and also that $\mathit{WPHP}$ is weak. To these principles Theorem~\ref{thm:main} applies directly and thus, as stated in the Introduction, gives a simple and general criterion for independence from $Q_\mathit{WPHP}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$. The main limitation of the applicability of Theorem~\ref{thm:main} is that $\mathit{WPHP}$ is our only natural example of a weak principle. Despite its naturality, weakness seems to be a surprisingly restrictive condition. We are unable to offer any sort of explanation for this. However, one can get independence from principles that are not weak via Theorem~\ref{thm:main}: \begin{corollary}\label{cor:appl} $Q_\varphi$ is independent from $Q_{\tilde\varphi}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$ for \begin{eqnarray*} \tilde\varphi&\in&\big\{\mathit{WPHP},\mathit{WPHP}',\textit{rPHP}\big\},\\ \varphi&\in&\big\{\mathit{PHP},\textit{LPHP},\textit{OPHP},\mathit{PAR},\mathit{HOP},\textit{IND}\big\}. \end{eqnarray*} \end{corollary} \begin{proof} For $\tilde\varphi=\mathit{WPHP}$ this follows directly from Theorem~\ref{thm:main} because $\mathit{WPHP}$ is weak and all listed choices for $\varphi$ are strong. The principles $\mathit{WPHP}'$ and $\textit{rPHP}$ are not weak but both $Q_{\mathit{WPHP}'}$ and $Q_\textit{rPHP}$ are consequences of $Q_\mathit{WPHP}$ over $\forall\PV(\alpha)$, so our claim follows by Proposition~\ref{prop:trans}. 
For $\mathit{WPHP}'$ this is well known (see \cite{jwphp} for this and other comparisons of various pigeonhole principles over $\ensuremath{{\sf PV}}\xspace(\alpha)$ and $\forall\mathsf{S}^1_2(\PV(\alpha))$). For $\textit{rPHP}$ note that $Q_\textit{rPHP}$ is many-one reducible to $Q_\mathit{WPHP}$ and apply Proposition~\ref{prop:Tred}~(b). \end{proof} Some of these independence results are known to hold in a much stronger form following Ajtai's work: $Q_\textit{OPHP}$ is not provably total in $\forall\mathsf{T}_2(\PV(\alpha))$ \cite{ajtai,kpwbip}, while $Q_\mathit{WPHP}$ is provably total in $\mathsf T^2_2(\ensuremath{{\sf PV}}\xspace(\alpha))$~\cite{mpw}. Further,~$Q_\mathit{PAR}$ is independent from $Q_\mathit{PHP}$ over~$\forall\mathsf{T}_2(\PV(\alpha))$: this follows from Theorem~\ref{thm:bjstrong} and the exponential lower bound on bounded depth Frege proofs~\cite{bp}. We refer to \cite{beameriis} and the references therein for more on counting principles. As mentioned in Example~\ref{ex:rphp}, the choice $\tilde\varphi=\textit{rPHP}$ implies that $Q_\varphi$, for $\varphi$ as in Corollary~\ref{cor:appl}, is independent from $\forall\mathsf{S}^1_2(\PV(\alpha))$ plus the surjective weak pigeonhole principle for $\ensuremath{{\sf PV}}\xspace(\alpha)$-functions. For $\varphi=\mathit{HOP}$ this is known~\cite{atstha} even for $\forall\mathsf{T}^1_2(\PV(\alpha))$ instead of $\forall\mathsf{S}^1_2(\PV(\alpha))$. \medskip As in the proof of the corollary above, we see that $Q_\psi$ is independent from $Q_{\tilde\varphi}$ over $\forall\mathsf{T}^1_2(\PV(\alpha))$ if $Q_\varphi$ is and $Q_\varphi\le^m_pQ_\psi$; here, $\tilde\varphi,\varphi,\psi$ are arbitrary finitary combinatorial principles and $\le^m_p$ denotes (polynomial time) many-one reducibility.
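As a quick machine check of this closure step, the following sketch (our own illustration, not part of the paper; the principle names are plain string labels, and the edge set transcribes the reductions collected in this section) propagates independence backward along many-one reductions:

```python
# Our own illustration: independence propagates backward along many-one
# reductions -- if Q_phi is independent and Q_phi <=^m_p Q_psi, then Q_psi
# is independent. Edge (a, b) stands for Q_a <=^m_p Q_b.
REDUCTIONS = {
    ("HAP", "HDP"), ("HDP", "HOP"),
    ("IND", "HOP"), ("IND", "PHP"),
    ("OPHP", "LPHP"), ("OPHP", "PAR"), ("LPHP", "PHP"),
}

def independent_closure(seeds, reductions):
    """Return all principles inheriting independence from the seeds."""
    closed = set(seeds)
    changed = True
    while changed:
        changed = False
        for a, b in reductions:
            if a in closed and b not in closed:
                closed.add(b)
                changed = True
    return closed

# Starting from OPHP and IND we recover the whole phi-list of the corollary.
print(sorted(independent_closure({"OPHP", "IND"}, REDUCTIONS)))
# → ['HOP', 'IND', 'LPHP', 'OPHP', 'PAR', 'PHP']
```

Starting from $\textit{OPHP}$ and $\textit{IND}$, the closure covers exactly the $\varphi$-list of Corollary~\ref{cor:appl}.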
In this sense all independence results in Corollary~\ref{cor:appl} follow from the ones for $\textit{OPHP}$ and $\textit{IND}$: $$ \begin{array}{ccccccc} &&&&&&\\ &&\mathit{PHP}&&&&\mathit{HOP}\\[1ex] &&\uparrow&\nwarrow&&\nearrow&\uparrow\\[1ex] \mathit{PAR}&&\textit{LPHP}&&\textit{IND}&&\textit{HDP}\\[1ex] &\nwarrow&\uparrow&&&&\uparrow\\[1ex] &&\textit{OPHP}&&&&\textit{HAP}\\ &&&&&& \end{array} $$ In this figure, e.g.\ the arrow from $\textit{HAP}$ to $\textit{HDP}$ indicates $Q_\textit{HAP}\le^m_pQ_\textit{HDP}$. Recalling that $\mathit{PAR}$ is not total, by $Q_\textit{OPHP}\le^m_p Q_\mathit{PAR}$ we mean a many-one reduction $f,g,h$ as in \eqref{eq:manyone} of Section~\ref{sec:searchprbl} with the additional property that $g$ has only odd values. We give the reductions involving $\textit{IND},\textit{HDP}$ and $\textit{HAP}$ below; the others are well known. \begin{remark} The principles $\textit{HDP}$ and $\textit{HAP}$ are not well studied in proof complexity and Theorem~\ref{thm:main} does not seem to shed any light on their complexity. Their propositional proof complexity is low: the negations of their unary translations have polynomial size refutations in Res$(k)$ for some constant $k\in\mathbb{N}$. This follows from our proof of Proposition~\ref{prop:exareductions} below. There we give quantifier free definitions of $\textit{HAP}$ and $\textit{HDP}$ in $\mathit{HOP}$ in the sense of \cite[p.57f]{sergithesis}, and this allows \cite[Lemma~15]{sergithesis} to translate well known short Resolution refutations of the negation of the unary translation of $\mathit{HOP}$ \cite{stalmark} into short Res$(k)$ refutations as claimed. \end{remark} \begin{proposition} \label{prop:exareductions}\ \begin{enumerate}\itemsep=0pt \item[(a)] $Q_\textit{HAP}\le^m_pQ_\textit{HDP}$. \item[(b)] $Q_\textit{HDP}\le^m_pQ_\mathit{HOP}$. \item[(c)] $Q_\textit{IND}\le^m_pQ_\mathit{HOP}$. \item[(d)] $Q_\textit{IND}\le^m_pQ_\mathit{PHP}$.
\end{enumerate} \end{proposition} The proof is straightforward given the following ad hoc lemma: \begin{lemma}\label{lem:fored} Let $\tilde\varphi,\varphi$ be finitary combinatorial principles without built-in symbols in finite languages $\tilde L,L$ respectively. Assume there is a family $I:=(\delta_S)_{S\in L}$ of quantifier free $\tilde L$-formulas such that: \begin{enumerate}\itemsep=0pt \item[(i)] if $S\in L$ is a relation symbol, then $\delta_S$ has $\mathit{ar}(S)$ many free variables; \item[(ii)] if $S\in L$ is a function symbol, then $\delta_S$ has $\mathit{ar}(S)+1$ many free variables and defines in every $\tilde L$-structure the graph of some $\mathit{ar}(S)$-ary function; \item[(iii)] for every $\tilde L$-structure $\str B$ falsifying $\tilde\varphi$, the $L$-structure $I(\str B)$ falsifies $\varphi$; this structure has the same universe as $\str B$ and interprets $S\in L$ by the set defined by $\delta_S$ in $\str B$. \end{enumerate} Then $Q_{\tilde \varphi}\le^m_pQ_{\varphi}$. \end{lemma} \begin{proof} Let $\tilde \varphi=\exists \bar y\tilde\psi(\bar y)$ and $\varphi=\exists \bar w\psi(\bar w)$ for quantifier free $\tilde \psi,\psi$ and recall $Q_{\tilde\varphi}$ and $Q_\varphi$ are $\q{\str B(\tilde L,x,\alpha)\models\tilde \psi(y)}$ and $\q{\str B(L,x,\alpha)\models\psi(w)}$ respectively. \medskip \noindent\textit{Claim:} There exists $\hat{I}(u,v)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ such that in every model $\langle N,\alpha^N\rangle$ of $\forall\mathsf{S}^1_2(\PV(\alpha))$ and every $n\in N\setminus\{0\}$ we have \begin{equation}\label{eq:equI} \str B(L,n,\hat{ I}^{-1}_n(\alpha^N))= I\big(\str B(\tilde L,n,\alpha^N)\big).
\end{equation} \noindent\textit{Proof of the Claim:} We show that for every $S\in L$ there is $f_S(u,\bar x)\in\ensuremath{{\sf PV}}\xspace(\alpha)$ such that $f_S(n,\bar a)=S^{[n]}(\bar a)$ in $\langle\mathbb{N},\alpha^\mathbb{N}\rangle$ for every $\alpha^\mathbb{N}\subseteq\mathbb{N}$, every $\bar a\in[n]^{\mathit{ar}(S)}$ and every $n\in\mathbb{N}\setminus\{0\}$; here, $S^{[n]}$ denotes the interpretation of $S$ in $I\big(\str B(\tilde L,n,\alpha^\mathbb{N})\big)$. This is clear for relation symbols. For a function symbol $S\in L$ observe that the empty theory proves $\exists y\delta_S(\bar x,y)$ by (ii). Hence, Herbrand's theorem gives finitely many $\tilde L$-terms $ t_0(\bar x),\ldots, t_{\ell-1}(\bar x)$ such that $\bigvee_{i<\ell}\delta_S(\bar x,t_i(\bar x))$ is valid. Then $S^{[n]}(\bar a)$ can be computed in polynomial time with oracle $\alpha^\mathbb{N}$ by testing which of $ t_0(\bar a),\ldots, t_{\ell-1}(\bar a)$ satisfies $\delta_S(\bar a,y)$ in~$\str B(\tilde L,n,\alpha^\mathbb{N})$. The function $\hat I$ is easily constructed from the functions $f_S,S\in L$, so that \eqref{eq:equI} holds in $\langle \mathbb{N},\alpha^\mathbb{N}\rangle$ for all $\alpha^\mathbb{N}\subseteq\mathbb{N}$ and all $n\in\mathbb{N}\setminus\{0\}$. To see that \eqref{eq:equI} holds in $\langle N,\alpha^N\rangle$ let $S\in L$ be a unary function symbol; other symbols are treated similarly. We have to show that $$ \str B(L,n,\hat{ I}^{-1}_n(\alpha^N))\models S(a){=}b\; \Longleftrightarrow \; I\big(\str B(\tilde L,n,\alpha^N)\big)\models S(a){=}b. $$ The l.h.s.\ is equivalent to $f_S(a)=b$ (in $\langle N,\alpha^N\rangle$) because this equivalence is expressed by a $\Delta_0^b(\ensuremath{{\sf PV}}\xspace(\alpha))$ sentence, and hence proved by $\forall\PV(\alpha)$ by Lemma~\ref{lem:QE}. The r.h.s.\ too is equivalent to $f_S(a)=b$.
Indeed, let $\delta'_S(u,v,\bar u)$ be a simple $\tilde L$-formula such that $\exists \bar u\,\delta'_S(u,v,\bar u)$ is logically equivalent to $\delta_S(u,v)$; intuitively, the variables $\bar u$ collect values of (sub)terms appearing in~$\delta_S$. Then $\forall\mathsf{S}^1_2(\PV(\alpha))$ proves (recall Lemma~\ref{lem:formAB}) $$ u{<}x\wedge v{<}x \to \Big(\exists z \Big( \q{\str B(\tilde L,x,\alpha)\models \delta'_S(z) } \wedge (z)_0{=}u\wedge(z)_1{=}v\Big)\leftrightarrow f_S(u){=}v\Big). $$ This implies the claim.\hfill$\dashv$\medskip Recalling Lemma~\ref{lem:formAB}, the Claim and (iii) imply that $\forall\mathsf{S}^1_2(\PV(\alpha))$ proves $$ \q{\str B(L,x,\hat{ I}^{-1}_x(\alpha))\models\psi(w)}\to \exists y\q{\str B(\tilde L,x,\alpha)\models\tilde \psi(y)}. $$ By witnessing, there is $h(x,w)\in \ensuremath{{\sf PV}}\xspace(\alpha)$ witnessing $y$. This implies $Q_{\tilde \varphi}\le^m_pQ_{\varphi}$ (using the identity function and $\hat I$ for $g$ and $f$ in \eqref{eq:manyone} of Section~\ref{sec:searchprbl}).
\end{eqnarray*} For (b) we use a variant of \cite[Example~2, p.65]{sergithesis}: given $\str B$ violating $\textit{HDP}$ we find a $\{\prec,f\}$-structure falsifying $\mathit{HOP}$ by taking the $\prec^B$-interval $[0,1]$, with regressive function $b^B(0,\cdot)$ and declaring everything outside $[0,1]$ to be pairwise incomparable and bigger than~1. More precisely, writing $\q{x\in [0,1]}$ for $ x{=}0 \vee x{=}1 \vee (0{\prec}x\wedge x{\prec} 1) $ and $\q{x\notin[0,1]}$ for its negation, \begin{eqnarray*} \delta_\prec(x_0,x_1)&:=& (\q{x_0\in [0,1]}\wedge \q{x_1\in[0,1]}\wedge x_0{\prec}x_1)\ \vee \ (\q{x_1\notin[0,1]}\wedge \q{x_0\in [0,1]}), \\ \delta_f(x,y)&:=& (\q{x\in[0,1]}\wedge y{=}b(0,x))\vee (\q{x\notin[0,1]}\wedge y{=}1). \end{eqnarray*} For (c), given $\str B$ falsifying $\textit{IND}$ we get a structure falsifying $\mathit{HOP}$ by taking the inverse of the order of $\str B$ restricted to $P^B$, declaring everything outside $P^B$ to be pairwise incomparable and bigger than $\textit{min}^B$, and taking $s^B$ as regressive function. More precisely, \begin{eqnarray*} \delta_\prec(x_0,x_1)&:=& (P(x_0)\wedge P(x_1)\wedge x_1{\prec} x_0)\ \vee\ (\neg P(x_1)\wedge P(x_0)), \\ \delta_f(x,y)&:=& y{=}s(x). \end{eqnarray*} For (d), take $ y{=}\textit{min}$ for $\delta_c(y)$, and $ (P(x)\wedge y{=}s(x))\vee(\neg P(x)\wedge y{=}x) $ for $\delta_f(x,y)$. \end{proof} \subsubsection*{Acknowledgements} I thank the referee for detailed comments. I thank Neil Thapen and Emil Je\v r\'abek for their help in understanding the material in Section~\ref{sec:aux} during a visit to Prague supported by the ERC advanced grant 339691 (FEALORA).
\section{Summary} In this letter, we have investigated, for the first time, dilepton production in diffractive and exclusive processes at forward rapidities in ultraperipheral $PbPb$ collisions at the LHC. We have derived predictions for the $e^+ e^-$, $\mu^+ \mu^-$ and $\tau^+ \tau^-$ cross sections taking into account realistic cuts that can be implemented by the LHCb experiment in a future data analysis. Our results indicate that the background associated with diffractive production can be strongly suppressed and that the exclusive processes can be cleanly observed. We also demonstrated that, by selecting events with large acoplanarity, the diffractive events can be studied, which allows one to constrain the modeling of the soft survival factor. For $\tau^+ \tau^-$ production, the semi- and purely leptonic decay channels were considered, and the final results indicate that the study of exclusive production can be performed. Finally, the predictions presented in this letter indicate that a future experimental analysis of dilepton production at the LHCb is feasible and can be useful in the search for BSM physics. \section*{Acknowledgements} This work was partially financed by the Brazilian funding agencies CNPq, CAPES, FAPERGS, FAPERJ and INCT-FNA (processes number 464898/2014-5 and 88887.461636/2019-00).
\section{Introduction} \IEEEPARstart{T}{he} degrees-of-freedom (DoF) characterization of the multiple-input multiple-output (MIMO) X channel with delayed channel state information at the transmitter (CSIT) has attracted considerable interest \cite{71,72,107,64}. In \cite{71}, a non-trivial sum-DoF lower bound was achieved by a novel transmission scheme for the single-input single-output (SISO) X channel with delayed CSIT. Since each transmitter has messages for all receivers, the scheme in \cite{71} for the X channel is different from the schemes for the broadcast channel \cite{00} and the interference channel \cite{01}. This transmission scheme was shown to be linear sum-DoF optimal in \cite{72}. Thereafter, in \cite{107}, the transmission scheme in \cite{71} was generalized to the MIMO X channel with delayed CSIT. However, the study in \cite{64} showed that the general transmission scheme in \cite{107} is linear sum-DoF optimal except for one antenna configuration. For this case, a linear sum-DoF optimal transmission scheme was proposed in \cite{64} to fill the gap. Unlike the scheme in \cite{107}, which does not utilize delayed CSIT during the data transmission phase, for this antenna configuration the scheme in \cite{64} exploits the delayed CSIT at one transmitter while the other transmitter is sending data symbols. The secure degrees-of-freedom (SDoF) region of the MIMO interference channel with confidential messages (ICCM) was characterized in \cite{4}. The sum-SDoF of the SISO X channel with confidential messages (XCCM) was studied in \cite{301,302}. The study of SDoF with delayed CSIT stemmed from \cite{31}, where the SDoF region of the two-user MIMO broadcast channel with confidential messages (BCCM) was characterized. In \cite{31}, the key idea of the transmission scheme is to add an artificial noise (AN) transmission phase before the data transmission phase. For the MIMO ICCM with delayed CSIT, a sum-SDoF lower bound was proposed in \cite{35}.
For the MIMO XCCM with delayed CSIT and output feedback, the SDoF region was derived in \cite{32}. With alternating no, delayed, and current CSIT, the SDoF region of the two-user multiple-input single-output (MISO) BCCM was characterized in \cite{111}. Under no eavesdropper CSIT, the sum-SDoF of one-hop wireless networks was obtained in \cite{300}. However, no existing work explores the SDoF of the MIMO XCCM with delayed CSIT, which is the focus of this paper. The main contribution of this paper is a non-trivial sum-SDoF lower bound, obtained by designing a transmission scheme. Our transmission scheme cannot be covered by the schemes in \cite{31,35,32} and their extensions. Instead, the proposed transmission scheme can be regarded as a generalization of the scheme in \cite{64} for symmetric antenna configurations, since we take the security issue into account. To generalize the scheme in \cite{64}, we first add an AN transmission phase before the data transmission phase. Next, the transmitted data symbols are masked with the fed-back received AN signals, where the arrangement of the data transmission mimics that in \cite{64}. However, this raises a question: What is the optimal duration of the AN transmission phase? We answer this question by performing a security analysis. Similar to the security analyses in \cite{31,35,32}, we apply the data processing inequality and Lemma 2 in \cite{31} to transform the mutual information expression for the information leakage into matrix rank expressions. However, since the delayed CSIT setting for the XCCM is not considered in \cite{31,35,32}, the deduced matrix expressions and their rank analysis are different from those in \cite{31,35,32}. Thus, the derived optimal duration of the AN transmission phase is new. Interestingly, our lower bound indicates that if the number of receive antennas is fixed, there is a minimum number of transmit antennas achieving the maximum of the lower bound.
\textit{Notations}: The identity matrix of dimension $m$ is denoted by $\textbf{I}_m$. The determinant of matrix $\textbf{A}$ is denoted by $\det(\textbf{A})$. The block-diagonal matrix with blocks $\textbf{P}$ and $\textbf{Q}$ is denoted by $ \text{bd}\{\textbf{P}, \textbf{Q}\} = [\textbf{P}, \textbf{0}; \textbf{0},\textbf{Q}] $. The $\log$ function refers to $\log_2$. \section{System Model and Main Results} \subsection{$(M,M,N,N)$ MIMO XCCM with Delayed CSIT} We consider an $(M, M, N, N)$ MIMO XCCM, which has two transmitters with $M$ antennas each and two receivers with $N$ antennas each, i.e., transmitters 1 and 2, and receivers 1 and 2. Transmitter $i=1,2$ has a confidential message $W_{i,j}$ for receiver $j=1,2$. The complex input signal at transmitter $i = 1, 2$ and time slot (TS) $t$ is denoted by $\textbf{x}_i[t] \in \mathbb{C}^M$. The complex received signal at receiver $j = 1, 2$ and TS $t$ is denoted by $\textbf{y}_j[t] \in \mathbb{C}^N$. Mathematically, the input-output relationship is written as \begin{equation} \textbf{y}_j[t] = \textbf{H}_{1,j}[t]\textbf{x}_1[t] + \textbf{H}_{2,j}[t]\textbf{x}_2[t] + \textbf{z}_j[t], \quad j=1,2, \end{equation} where the CSI matrix from transmitter $i=1,2$ to receiver $j=1,2$ at TS $t$ is denoted by $\textbf{H}_{i,j}[t] \in \mathbb{C}^{N \times M}$, and the additive white Gaussian noise (AWGN) vector at receiver $j$ and TS $t$ is denoted by $\textbf{z}_j[t]$. We assume that the matrices $\textbf{H}_{i,j}[t],\forall t,$ are non-static (time-varying) and linearly independent. We denote the collection of CSI matrices from TS $1$ to TS $t-1$ by $\textbf{H}^{t-1} = [\textbf{H}_{i,j}[1],\cdots,\textbf{H}_{i,j}[t-1]],i,j=1,2$. At TS $t$, due to the feedback delay, $\textbf{H}^{t-1}$ is available at both transmitters.
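The input-output relationship above can be instantiated numerically. The following sketch is our own toy illustration; the antenna numbers and the i.i.d. complex Gaussian channels and noise are our assumptions for the example, not choices made by the paper (which only assumes generic time-varying channels):

```python
import numpy as np

# Toy numerical instantiation of the input-output relation
#   y_j[t] = H_{1,j}[t] x_1[t] + H_{2,j}[t] x_2[t] + z_j[t],  j = 1, 2,
# for a single time slot t. M, N and the Gaussian draws are assumptions.
rng = np.random.default_rng(seed=0)
M, N = 3, 2  # transmit / receive antennas

def cgauss(*shape):
    """Sample a complex Gaussian array (illustrative channel/signal model)."""
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

H = {(i, j): cgauss(N, M) for i in (1, 2) for j in (1, 2)}  # H_{i,j}[t]
x = {i: cgauss(M) for i in (1, 2)}                          # x_i[t]
z = {j: 0.01 * cgauss(N) for j in (1, 2)}                   # AWGN z_j[t]

y = {j: H[(1, j)] @ x[1] + H[(2, j)] @ x[2] + z[j] for j in (1, 2)}
```

Each receiver thus observes a noisy superposition of both transmitted signals through its own pair of $N \times M$ channel matrices.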
\subsection{Sum-SDoF} A $(2^{nR_{1,1}(\text{SNR})},2^{nR_{1,2}(\text{SNR})}$,$2^{nR_{2,1}(\text{SNR})}$,$2^{nR_{2,2}(\text{SNR})}$,$n)$ code with secure achievable rates $R_{i,j}(\text{SNR})$, $i,j=1,2$ is defined as follows: The communication process takes $n$ channel uses with confidential messages $W_{i,j}=[1,\cdots,2^{nR_{i,j}(\text{SNR})}],i,j=1,2$. A stochastic encoder $f_i(\cdot)$ at the transmitter $i=1,2$, encodes confidential message $W_{i,1}$, $W_{i,2}$, and $\textbf{H}^{t-1},$ to a codeword $\textbf{x}_i^n = [\textbf{x}_i[1],\cdots,\textbf{x}_i[n]]$. At the TS $t$, the input signal is encoded by $ \textbf{x}_i[t] = f_i(W_{i,1}, W_{i,2}, \textbf{H}^{t-1}), i = 1,2. $ A decoder $g_{i,j}(\cdot)$ at the receiver $j=1,2$ decodes the output signal $\textbf{y}_j^n \triangleq \{\textbf{y}_j[1],\cdots,\textbf{y}_j[n]\}$ to an estimated message $\widehat{W}_{i,j}$, which is given by $ \widehat{W}_{i,j} = g_{i,j}(\textbf{H}^n,\textbf{y}_j^n), j =1,2, $ where two receivers are assumed to have perfect CSI. In addition, the secure code should satisfy the reliability criterion, i.e., $\label{R1} \Pr[W_{i,j} \ne \widehat{W}_{i,j}] \le \epsilon_n, i,j = 1,2, $ and the secrecy criterion, \begin{subequations} \begin{eqnarray} && \frac{1}{n}I(W_{1,1},W_{2,1};\textbf{y}_2^n) \le \epsilon_n, \label{S1} \\ && \frac{1}{n}I(W_{1,2},W_{2,2};\textbf{y}_1^n) \le \epsilon_n, \label{S2} \end{eqnarray} \end{subequations} where $\epsilon_n \rightarrow 0$ as $n \rightarrow {\cal{1}}$. The secure sum-capacity is defined as the maximal achievable sum-rate, which is written as $ C = \max \,\, \sum_{i=1}^2\sum_{j=1}^2 R_{i,j}(\text{SNR}). $ The sum-SDoF is a first-order approximation of the secure sum-capacity in the high SNR regime and defined as follows: \begin{equation} \sum_{i=1}^2\sum_{j=1}^2 d_{ij} = \lim_{\text{SNR} \rightarrow {\cal{1}}} \frac{C}{\log \text{SNR}}. \end{equation} \subsection{Main Results} \textbf{Theorem 1}: Consider the $(M,M,N,N)$ MIMO XCCM with delayed CSIT. 
The sum-SDoF lower bound is given by \begin{equation} \label{LB} \sum_{i=1}^2\sum_{j=1}^2 d_{ij}\ge \begin{cases} 0, & M \le N, \\ \frac{3N(M-N)}{2M-N},& N < M \le \frac{7+\sqrt{33}}{8}N, \\ \frac{6MN}{8M-N}, & \frac{7+\sqrt{33}}{8}N < M \le 2N, \\ 4N/5, & 2N < M. \\ \end{cases} \end{equation} \begin{IEEEproof} Please refer to Section III. \end{IEEEproof} \textit{Remark}: Fig. \ref{F1} shows: 1) The derived sum-SDoF lower bound has a gain over the sum-SDoF lower bound of the MIMO ICCM with delayed CSIT \cite{35}, where the gain comes from the joint data transmission from the two transmitters; 2) The derived sum-SDoF lower bound is less than that of the scenarios with better CSIT conditions, i.e., the sum-SDoF of the MIMO ICCM with perfect CSIT \cite{4}, the sum-SDoF of the SISO XCCM with perfect CSIT \cite{302}, and the sum-SDoF of the MIMO XCCM with delayed CSIT and output feedback \cite{32}; 3) By applying the proposed scheme, the derived lower bound decreases for $(7+\sqrt{33})/8 < M/N \le 2$. This implies that we can switch off $M - (7+\sqrt{33})N/8$ antennas for $(7+\sqrt{33})/8 < M/N$, to make the sum-SDoF lower bound non-decreasing. \begin{figure} \centering \includegraphics[width=2.7in]{Fig} \caption{The derived sum-SDoF lower bound is compared with related results.} \label{F1} \end{figure} \section{Proof of Theorem 1} \subsection{$M \le N$: Keep Two Transmitters Silent} Intuitively, the transmitted AN symbols from one transmitter would be immediately decoded by the eavesdropper, which breaks the security of data transmission superposed with the fed-back AN signals. Hence, the sum-SDoF lower bound is $0$.
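As a numerical sanity check of the piecewise bound \eqref{LB} (our own transcription, not part of the proof), the snippet below verifies that the branches agree at the breakpoints $M = \frac{7+\sqrt{33}}{8}N$ and $M = 2N$, and that the bound decreases for $(7+\sqrt{33})/8 < M/N \le 2$, as noted in the remark:

```python
import math

C = (7 + math.sqrt(33)) / 8  # breakpoint ratio M/N from Theorem 1, about 1.593

def sdof_lb(M, N):
    """Transcription of the piecewise sum-SDoF lower bound for checking."""
    if M <= N:
        return 0.0
    if M <= C * N:
        return 3 * N * (M - N) / (2 * M - N)
    if M <= 2 * N:
        return 6 * M * N / (8 * M - N)
    return 4 * N / 5

N = 1.0
# The middle branches agree at M = C*N (C is a root of 4c^2 - 7c + 1 = 0),
# and the last two branches agree at M = 2N, so the bound is continuous.
assert abs(3 * N * (C - 1) / (2 * C - 1) - 6 * C * N / (8 * C - 1)) < 1e-9
assert abs(6 * 2 * N * N / (8 * 2 * N - N) - 4 * N / 5) < 1e-9
# The bound peaks at M = C*N and decreases on (C*N, 2N], matching the remark
# that antennas beyond C*N per transmitter can be switched off.
grid = [C + k * (2 - C) / 100 for k in range(101)]
assert all(sdof_lb(a, N) >= sdof_lb(b, N) for a, b in zip(grid, grid[1:]))
```

In particular, the maximum of the bound over $M$ for fixed $N$ is attained at $M = \frac{7+\sqrt{33}}{8}N$, consistent with the claim that there is a minimum number of transmit antennas achieving the maximum.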
\subsection{$N < M \le 2N$: The Proposed Transmission Scheme} The following pre-assigned matrices, $\phi[k] \in \mathbb{C}^{M \times \tau_1N}, k=1,\cdots,\tau_2$, $\omega[k] \in \mathbb{C}^{M \times \tau_1N}, k=1,\cdots,\tau_3$, $\gamma[k] \in \mathbb{C}^{M \times \tau_2N}, k=1,\cdots,\tau_3$, $\theta[k] \in \mathbb{C}^{N \times \tau_3N}, k=1,\cdots,\tau_4$, are linearly independent and of full rank. Holistically, we denote $\Phi = [\phi[1];\cdots;\phi[\tau_2]]$, $\Omega = [\omega[1];\cdots;\omega[\tau_3]]$, $\Gamma=[\gamma[1];\cdots;\gamma[\tau_3]]$, and $\Theta=[\theta[1];\cdots;\theta[\tau_4]]$. \underline{\textit{Phase-I}} \textit{(AN Symbol Transmission for Receiver 1)}: This phase spans $\tau_1$ TSs. At TS $t=1,\cdots,\tau_1$, $M$ AN symbols are sent from transmitter $1$, i.e., $\textbf{x}_1^\text{I}[t] = \textbf{u}_1[t]$. Meanwhile, transmitter $2$ keeps silent. The holistic transmitted signal for Phase-I is written as \begin{eqnarray} && \textbf{x}_1^\text{I} = \textbf{u}_1. \end{eqnarray} The holistic received signals for Phase-I are written as \begin{eqnarray} && \textbf{y}_j^\text{I} = \textbf{H}_{1,j}^\text{I}\textbf{u}_1 + \textbf{z}_j^\text{I},\quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{I}$, $\textbf{u}_1 = [\textbf{u}_1[1];\cdots;\textbf{u}_1[\tau_1]] \in \mathbb{C}^{\tau_1M}$, and $\textbf{H}_{1,j}^\text{I} = \text{bd}\{\textbf{H}_{1,j}[1],\cdots,\textbf{H}_{1,j}[\tau_1]\} \in \mathbb{C}^{\tau_1N \times \tau_1M}, j=1,2$. \underline{\textit{Phase-II}} \textit{(AN Symbol Transmission for Receiver 2)}: This phase is the same as Phase-I, except that the roles of transmitters 1 and 2 are swapped. Hence, this phase spans $\tau_1$ TSs as well. At TS $t=\tau_1+1,\cdots,2\tau_1$, $M$ AN symbols are sent from transmitter $2$, i.e., $\textbf{x}_2^\text{II}[t] = \textbf{u}_2[t-\tau_1]$. Meanwhile, transmitter $1$ keeps silent. 
The holistic transmitted signal for Phase-II is written as \begin{eqnarray} && \textbf{x}_2^\text{II} = \textbf{u}_2. \end{eqnarray} The holistic received signals for Phase-II are written as \begin{eqnarray} && \textbf{y}_j^\text{II} = \textbf{H}_{2,j}^\text{II}\textbf{u}_2 + \textbf{z}_j^\text{II}, \quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{II}$, $\textbf{u}_2 = [\textbf{u}_2[1];\cdots;\textbf{u}_2[\tau_1]] \in \mathbb{C}^{\tau_1M}$, and $\textbf{H}_{2,j}^\text{II} = \text{bd}\{\textbf{H}_{2,j}[\tau_1+1],\cdots,\textbf{H}_{2,j}[2\tau_1]\} \in \mathbb{C}^{\tau_1N \times \tau_1M},j=1,2$. \underline{\textit{Phase-III}} \textit{(Data Symbol Transmission for Receiver 1 from Two Transmitters)}: This phase spans $\tau_2$ TSs. With the CSI matrices of Phase-I and Phase-II, transmitters 1 and 2 re-construct $\textbf{y}_{1}^\text{I}$ and $\textbf{y}_{1}^\text{II}$, respectively, ignoring the AWGN. At TS $t=2\tau_1+1,\cdots,2\tau_1+\tau_2$, $M$ data symbols (for receiver 1) superposed on received AN signals are sent from transmitter $1$, i.e., $\textbf{x}_1^\text{III}[t] = \textbf{a}_1^{a}[t-2\tau_1] + \phi[t-2\tau_1] \textbf{y}_{1}^\text{I}$. Meanwhile, $M$ data symbols (for receiver 1) superposed on received AN signals are sent from transmitter $2$, i.e., $\textbf{x}_2^\text{III}[t] = \textbf{a}_2[t-2\tau_1] + \phi[t-2\tau_1] \textbf{y}_{1}^\text{II}$. The holistic transmitted signals for Phase-III are written as \begin{subequations} \begin{eqnarray} &&\textbf{x}_1^\text{III} = \textbf{a}_1^{a} + \Phi \textbf{y}_{1}^\text{I}, \\ && \textbf{x}_2^\text{III} = \textbf{a}_2 + \Phi \textbf{y}_{1}^\text{II}. 
\end{eqnarray} \end{subequations} The holistic received signals for Phase-III are written as \begin{eqnarray} && \textbf{y}_j^\text{III} = \textbf{H}_{1,j}^\text{III}\textbf{x}_1^\text{III} + \textbf{H}_{2,j}^\text{III}\textbf{x}_2^\text{III} + \textbf{z}_j^\text{III}, \quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{III}$, $\textbf{a}_1^a = [\textbf{a}_1^a[1];\cdots;\textbf{a}_1^a[\tau_2]] \in \mathbb{C}^{\tau_2M}$, $\textbf{a}_2 = [\textbf{a}_2[1];\cdots;\textbf{a}_2[\tau_2]] \in \mathbb{C}^{\tau_2M}$, and $\textbf{H}_{i,j}^\text{III} = \text{bd}\{\textbf{H}_{i,j}[2\tau_1+1],\cdots,\textbf{H}_{i,j}[2\tau_1+\tau_2]\} \in \mathbb{C}^{\tau_2N \times \tau_2M}, i,j=1,2$. \underline{\textit{Phase-IV}} \textit{(Data Symbol Transmission for Receiver 1 from Transmitter 1)}: This phase spans $\tau_3$ TSs. With the CSI matrices of Phase-III, transmitter 2 re-constructs $\textbf{H}_{2,2}^\text{III}\textbf{x}_2^\text{III}$. At TS $t=2\tau_1+\tau_2+1,\cdots,2\tau_1+\tau_2+\tau_3$, $M$ data symbols (for receiver 1) superposed on received AN signals are sent from transmitter $1$, i.e., $\textbf{x}_1^\text{IV}[t] = \textbf{a}_1^{b}[t-2\tau_1-\tau_2] + \omega[t-2\tau_1-\tau_2]\textbf{y}_{1}^\text{I}$. Meanwhile, transmitter 2 sends $\textbf{x}_2^\text{IV}[t] = \gamma[t-2\tau_1-\tau_2] \textbf{H}_{2,2}^\text{III}\textbf{x}_2^\text{III}$. 
The holistic transmitted signals for Phase-IV are written as \begin{subequations} \begin{eqnarray} && \textbf{x}_1^\text{IV} = \textbf{a}_1^{b} + \Omega \textbf{y}_{1}^\text{I},\\ && \textbf{x}_2^\text{IV} = \Gamma \textbf{H}_{2,2}^\text{III}\textbf{x}_2^\text{III}. \end{eqnarray} \end{subequations} The holistic received signals for Phase-IV are written as \begin{eqnarray} && \textbf{y}_j^\text{IV} = \textbf{H}_{1,j}^\text{IV}\textbf{x}_1^\text{IV} + \textbf{H}_{2,j}^\text{IV}\textbf{x}_2^\text{IV} + \textbf{z}_j^\text{IV}, \quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{IV}$, $\textbf{a}_1^b = [\textbf{a}_1^b[1];\cdots;\textbf{a}_1^b[\tau_3]] \in \mathbb{C}^{\tau_3M}$, and $\textbf{H}_{i,j}^\text{IV}=\text{bd}\{\textbf{H}_{i,j}[2\tau_1+\tau_2+1],\cdots,\textbf{H}_{i,j}[2\tau_1+\tau_2+\tau_3]\} \in \mathbb{C}^{\tau_3N \times \tau_3M}, i,j=1,2$. \underline{\textit{Phase-V}} \textit{(Data Symbol Transmission for Receiver 2 from Two Transmitters)}: This phase is the same as Phase-III, except that the roles of transmitters 1 and 2 are swapped. Thus, this phase spans $\tau_2$ TSs as well. With the CSI matrices of Phase-I and Phase-II, transmitters 1 and 2 re-construct $\textbf{y}_{2}^\text{I}$ and $\textbf{y}_{2}^\text{II}$, respectively, ignoring the AWGN. At TS $t=2\tau_1+\tau_2+\tau_3+1,\cdots,2\tau_1+2\tau_2+\tau_3$, $M$ data symbols (for receiver 2) superposed on received AN signals are sent from transmitter $1$, i.e., $\textbf{x}_1^\text{V}[t] = \textbf{b}_1[t-2\tau_1-\tau_2-\tau_3] + \phi[t-2\tau_1-\tau_2-\tau_3] \textbf{y}_{2}^\text{I}$. Meanwhile, $M$ data symbols (for receiver 2) superposed on received AN signals are sent from transmitter $2$, i.e., $\textbf{x}_2^\text{V}[t] = \textbf{b}_2^{a}[t-2\tau_1-\tau_2-\tau_3] + \phi[t-2\tau_1-\tau_2-\tau_3] \textbf{y}_{2}^\text{II}$. 
The holistic transmitted signals for Phase-V are written as \begin{subequations} \begin{eqnarray} &&\textbf{x}_1^\text{V} = \textbf{b}_1 + \Phi \textbf{y}_{2}^\text{I}, \\ && \textbf{x}_2^\text{V} = \textbf{b}_2^{a} + \Phi \textbf{y}_{2}^\text{II}. \end{eqnarray} \end{subequations} The holistic received signals for Phase-V are written as \begin{eqnarray} && \textbf{y}_j^\text{V} = \textbf{H}_{1,j}^\text{V}\textbf{x}_1^\text{V} + \textbf{H}_{2,j}^\text{V}\textbf{x}_2^\text{V} + \textbf{z}_j^\text{V}, \quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{V}$, $\textbf{b}_1 = [\textbf{b}_1[1];\cdots;\textbf{b}_1[\tau_2]] \in \mathbb{C}^{\tau_2M}$, $\textbf{b}_2^a = [\textbf{b}_2^a[1];\cdots;\textbf{b}_2^a[\tau_2]] \in \mathbb{C}^{\tau_2M}$, and $\textbf{H}_{i,j}^\text{V}=\text{bd}\{\textbf{H}_{i,j}[2\tau_1+\tau_2+\tau_3+1],\cdots,\textbf{H}_{i,j}[2\tau_1+2\tau_2+\tau_3]\} \in \mathbb{C}^{\tau_2N \times \tau_2M}, i,j=1,2$. \underline{\textit{Phase-VI}} \textit{(Data Symbol Transmission for Receiver 2 from Transmitter 2)}: This phase is the same as Phase-IV, except that the roles of transmitters 1 and 2 are swapped. Hence, this phase spans $\tau_3$ TSs as well. With the CSI matrices of Phase-V, transmitter 1 re-constructs $\textbf{H}_{1,1}^\text{V}\textbf{x}_1^\text{V}$. At TS $t=2\tau_1+2\tau_2+\tau_3+1,\cdots,2\tau_1+2\tau_2+2\tau_3$, $M$ data symbols (for receiver 2) superposed on received AN signals are sent from transmitter $2$, i.e., $\textbf{x}_2^\text{VI}[t] = \textbf{b}_2^{b}[t- 2\tau_1-2\tau_2-\tau_3] + \omega[t-2\tau_1-2\tau_2-\tau_3] \textbf{y}_{2}^\text{II}$. Meanwhile, transmitter 1 sends $\textbf{x}_1^\text{VI}[t] = \gamma[t-2\tau_1-2\tau_2-\tau_3] \textbf{H}_{1,1}^\text{V}\textbf{x}_1^\text{V}$. 
The holistic transmitted signals for Phase-VI are written as \begin{subequations} \begin{eqnarray} && \textbf{x}_1^\text{VI} = \Gamma \textbf{H}_{1,1}^\text{V}\textbf{x}_1^\text{V},\\ && \textbf{x}_2^\text{VI} = \textbf{b}_2^{b} + \Omega \textbf{y}_{2}^\text{II}. \end{eqnarray} \end{subequations} The holistic received signals for Phase-VI are written as \begin{eqnarray} && \textbf{y}_j^\text{VI} = \textbf{H}_{1,j}^\text{VI}\textbf{x}_1^\text{VI} + \textbf{H}_{2,j}^\text{VI}\textbf{x}_2^\text{VI}+ \textbf{z}_j^\text{VI},\quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{VI}$, $\textbf{b}_2^b = [\textbf{b}_2^b[1];\cdots;\textbf{b}_2^b[\tau_3]] \in \mathbb{C}^{\tau_3M}$, and $\textbf{H}_{i,j}^\text{VI}=\text{bd}\{\textbf{H}_{i,j}[2\tau_1+2\tau_2+\tau_3+1],\cdots,\textbf{H}_{i,j}[2\tau_1+2\tau_2+2\tau_3]\} \in \mathbb{C}^{\tau_3N \times \tau_3M}, i,j=1,2$. \underline{\textit{Phase-VII}} \textit{(Interference Recurrence)}: This phase spans $\tau_4$ TSs, which are used to re-transmit a combination of the previous interference signals. This re-transmission does not incur new interference, but creates useful equations for decoding. With the CSI matrices of Phase-III to Phase-VI, transmitter 1 re-constructs $\textbf{H}_{2,2}^\text{IV}\Gamma\textbf{H}_{1,2}^\text{III}\textbf{x}_1^\text{III} - \textbf{H}_{1,2}^\text{IV}\textbf{x}_1^\text{IV}$, and transmitter 2 re-constructs $\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{2,1}^\text{V}\textbf{x}_2^\text{V} - \textbf{H}_{2,1}^\text{VI}\textbf{x}_2^\text{VI}$. 
At TS $t=2\tau_1+2\tau_2+2\tau_3+1,\cdots,2\tau_1+2\tau_2+2\tau_3+\tau_4$, the transmitter 1 sends $\textbf{x}_1^\text{VII}[t] = \theta[t-2\tau_1-2\tau_2-2\tau_3](\textbf{H}_{2,2}^\text{IV}\Gamma \textbf{H}_{1,2}^\text{III}\textbf{x}_1^\text{III} - \textbf{H}_{1,2}^\text{IV}\textbf{x}_1^\text{IV})$, and the transmitter 2 sends $\textbf{x}_2^\text{VII}[t] = \theta[t-2\tau_1-2\tau_2-2\tau_3](\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{2,1}^\text{V}\textbf{x}_2^\text{V} - \textbf{H}_{2,1}^\text{VI}\textbf{x}_2^\text{VI})$, with $N$ antennas. The holistic transmitted signals for Phase-VII are written as \begin{subequations} \begin{eqnarray} && \textbf{x}_1^\text{VII} = \Theta (\textbf{H}_{2,2}^\text{IV}\Gamma\textbf{H}_{1,2}^\text{III}\textbf{x}_1^\text{III} - \textbf{H}_{1,2}^\text{IV}\textbf{x}_1^\text{IV}),\\ &&\textbf{x}_2^\text{VII} = \Theta (\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{2,1}^\text{V}\textbf{x}_2^\text{V} - \textbf{H}_{2,1}^\text{VI}\textbf{x}_2^\text{VI}). \end{eqnarray} \end{subequations} The holistic received signals for Phase-VII are written as \begin{eqnarray} && \textbf{y}_j^\text{VII} = \textbf{H}_{1,j}^\text{VII}\textbf{x}_1^\text{VII} + \textbf{H}_{2,j}^\text{VII}\textbf{x}_2^\text{VII} + \textbf{z}_j^\text{VII}, \quad j=1,2, \end{eqnarray} where the AWGN signal at receiver $j$ is denoted by $\textbf{z}_j^\text{VII}$, and $\textbf{H}_{i,j}^\text{VII}=\text{bd}\{\textbf{H}_{i,j}[2\tau_1+2\tau_2+2\tau_3+1],\cdots,\textbf{H}_{i,j}[2\tau_1+2\tau_2+2\tau_3+\tau_4]\} \in \mathbb{C}^{\tau_4N \times \tau_4N}, i,j=1,2$. 
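To keep track of the seven phases, the durations and fresh data symbols per phase can be tabulated; the sketch below (illustrative only, with hypothetical numeric $\tau$ values in the check) reproduces the totals used later in the proof, namely $2M(2\tau_2+\tau_3)$ data symbols over $2\tau_1+2\tau_2+2\tau_3+\tau_4$ TSs:

```python
def totals(M, t1, t2, t3, t4):
    """Duration (in TSs) and fresh data symbols injected, phase by phase."""
    phases = {
        "I":   (t1, 0),             # AN only, from transmitter 1
        "II":  (t1, 0),             # AN only, from transmitter 2
        "III": (t2, 2 * M * t2),    # a_1^a and a_2: M symbols each per TS
        "IV":  (t3, M * t3),        # a_1^b
        "V":   (t2, 2 * M * t2),    # b_1 and b_2^a
        "VI":  (t3, M * t3),        # b_2^b
        "VII": (t4, 0),             # interference recurrence only
    }
    T = sum(d for d, _ in phases.values())   # total TSs
    S = sum(s for _, s in phases.values())   # total fresh data symbols
    return T, S
```

The totals simplify to $T = 2\tau_1+2\tau_2+2\tau_3+\tau_4$ and $S = 2M(2\tau_2+\tau_3)$.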
\begin{figure*} \begin{eqnarray} \label{H1} && \begin{bmatrix} \textbf{y}_1^\text{III} \\ \textbf{y}_1^\text{IV} \\ \textbf{y}_1^\text{VII} - \textbf{H}_{2,1}^\text{VII}\Theta(\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{y}_1^\text{V} - \textbf{y}_1^\text{VI}) \end{bmatrix} = \underbrace{\begin{bmatrix} \textbf{H}_{1,1}^\text{III} & \textbf{0} & \textbf{H}_{2,1}^\text{III} \\ \textbf{0} & \textbf{H}_{1,1}^\text{IV} & \textbf{H}_{2,1}^\text{IV} \Gamma \textbf{H}_{2,2}^\text{III}\\ \textbf{H}_{1,1}^\text{VII} \Theta \textbf{H}_{2,2}^\text{IV} \Gamma \textbf{H}_{1,2}^\text{III} & -\textbf{H}_{1,1}^\text{VII}\Theta\textbf{H}_{1,2}^\text{IV} & \textbf{0} \end{bmatrix}}_{\textbf{H}_1} \begin{bmatrix} \textbf{a}_1^a \\ \textbf{a}_1^b \\ \textbf{a}_2 \end{bmatrix} \nonumber \\ && + \begin{bmatrix} \textbf{H}_{1,1}^\text{III}\Phi & \textbf{H}_{2,1}^\text{III}\Phi \\ \textbf{H}_{1,1}^\text{IV} \Omega & \textbf{H}_{2,1}^\text{IV} \Gamma \textbf{H}_{2,2}^\text{III}\Phi \\ \textbf{H}_{1,1}^\text{VII}\Theta(\textbf{H}_{2,2}^\text{IV}\Gamma\textbf{H}_{1,2}^\text{III}\Phi - \textbf{H}_{1,2}^\text{IV}\Omega) & \textbf{0} \end{bmatrix}\begin{bmatrix} \textbf{y}_1^\text{I} \\ \textbf{y}_1^\text{II} \end{bmatrix} + \underline{\textbf{z}}_1. 
\end{eqnarray} \hrule \begin{eqnarray} && I(\textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1;\textbf{y}_1|\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2) \overset{(a)}{\le} I(\textbf{H}^{\text{I}}_{1,1} \textbf{u}_1, \textbf{H}^{\text{II}}_{2,1} \textbf{u}_2, \textbf{H}_{1,1}^\text{V}(\textbf{b}_1 + \Phi \textbf{H}_{1,2}^\text{I}\textbf{u}_1) + \textbf{H}_{2,1}^\text{V} (\textbf{b}_2^{a}+ \Phi \textbf{H}_{2,2}^\text{II}\textbf{u}_2), \nonumber \\ && \textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{1,1}^\text{V}(\textbf{b}_1 + \Phi \textbf{H}_{1,2}^\text{I}\textbf{u}_1) + \textbf{H}_{2,1}^\text{VI} (\textbf{b}_2^{b} + \Omega \textbf{H}_{2,2}^\text{II}\textbf{u}_2);\textbf{y}_1|\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2) - I(\textbf{u};\textbf{y}_1| \textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1,\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2) \nonumber \\ && \underset{\text{SNR} \rightarrow \infty}{\overset{(b)}{=}} \text{rank} \left\{ \underbrace{\begin{bmatrix} \textbf{I}_{N\tau_1} & \textbf{0} & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{I}_{N\tau_1} & \textbf{0} & \textbf{0} \\ \textbf{H}_{1,1}^\text{III}\Phi & \textbf{H}_{2,1}^\text{III}\Phi & \textbf{0} & \textbf{0}\\ \textbf{H}_{1,1}^\text{IV}\Omega & \textbf{H}_{2,1}^\text{IV}\Gamma\textbf{H}_{2,2}^\text{III}\Phi & \textbf{0} & \textbf{0}\\ \textbf{0} & \textbf{0} & \textbf{I}_{N\tau_2} & \textbf{0} \\ \textbf{0} & \textbf{0} & \textbf{0} & \textbf{I}_{N\tau_3} \\ \textbf{H}_{1,1}^\text{VII}\Theta(\textbf{H}_{2,2}^\text{IV}\Gamma\textbf{H}_{1,2}^\text{III}\Phi - \textbf{H}_{1,2}^\text{IV}\Omega) & \textbf{0} & \textbf{H}_{2,1}^\text{VII} \Theta \textbf{H}_{1,1}^\text{VI} \Gamma & -\textbf{H}_{2,1}^\text{VII} \Theta \end{bmatrix}}_{\textbf{A}}\right\} \log \text{SNR} \nonumber \\ && - \text{rank} \left\{\underbrace{\begin{bmatrix} \textbf{H}_{1,1}^\text{I} & \textbf{0} \\ \textbf{0} & \textbf{H}_{2,1}^\text{II} \\ \textbf{H}_{1,1}^\text{III}\Phi\textbf{H}_{1,1}^\text{I} & 
\textbf{H}_{2,1}^\text{III}\Phi\textbf{H}_{2,1}^\text{II} \\ \textbf{H}_{1,1}^\text{IV}\Omega\textbf{H}_{1,1}^\text{I} & \textbf{H}_{2,1}^\text{IV}\Gamma\textbf{H}_{2,2}^\text{III}\Phi\textbf{H}_{2,1}^\text{II} \\ \textbf{H}_{1,1}^\text{V}\Phi\textbf{H}_{1,2}^\text{I} & \textbf{H}_{2,1}^\text{V}\Phi\textbf{H}_{2,2}^\text{II} \\ \textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{1,1}^\text{V}\Phi\textbf{H}_{1,2}^\text{I} & \textbf{H}_{2,1}^\text{VI}\Omega\textbf{H}_{2,2}^\text{II} \\ \textbf{H}_{1,1}^\text{VII}\Theta(\textbf{H}_{2,2}^\text{IV}\Gamma\textbf{H}_{1,2}^\text{III}\Phi-\textbf{H}_{1,2}^\text{IV}\Omega)\textbf{H}_{1,1}^\text{I} & \textbf{H}_{2,1}^\text{VII}\Theta(\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{2,1}^\text{V}\Phi-\textbf{H}_{2,1}^\text{VI}\Omega)\textbf{H}_{2,2}^\text{II} \end{bmatrix}}_{\textbf{B}}\right\} \log \text{SNR} \nonumber \\ && \overset{(c)}{=} N(2\tau_1 + \tau_2 + \tau_3)\log \text{SNR} - \min\{N(2\tau_1 + \min\{\tau_1,\tau_2\} + \min\{\tau_1,\tau_3\}),2M\tau_1\} \log \text{SNR}. \label{Q1} \end{eqnarray} \hrule \end{figure*} For decoding, due to the symmetry, we only need to perform the analysis at one receiver. The final decoding equation at receiver 1 is given in \eqref{H1}, where the AWGN signal is denoted by $\underline{\textbf{z}}_1$. The decoding of data symbols is only related to $\textbf{H}_1$, since the impact of $\textbf{y}_1^\text{I}$ and $\textbf{y}_1^\text{II}$ can be removed. The rank of $\textbf{H}_1$ in \eqref{H1} is $\min\{N(\tau_2+\tau_3+ \min\{\tau_3,\tau_4\}), M(2\tau_2+\tau_3)\}$, as shown in Appendix A. Since the impact of $\textbf{y}_1^\text{I}$ and $\textbf{y}_1^\text{II}$ is removed for decoding, the optimal $\tau_2^*,\tau_3^*,\tau_4^*$ can be found in \cite{64} and are given by $(\tau_2^*,\tau_3^*,\tau_4^*) = (2N-M,2M-N,2M-N)$. 
It can be verified that the rank of $\textbf{H}_1$ is equal to the number of data symbols for receiver 1, i.e., $\min\{N(\tau_2^*+\tau_3^*+ \min\{\tau_3^*,\tau_4^*\}), M(2\tau_2^*+\tau_3^*)\} = M(2\tau_2^*+\tau_3^*)$. For security, due to the symmetry, we only need to perform the analysis at one receiver. Given the notation $\textbf{y}_1 = [\textbf{y}_1^\text{I};\cdots;\textbf{y}_1^\text{VII}]$ and $\textbf{u} = [\textbf{u}_1;\textbf{u}_2]$, the information leakage $I(\textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1;\textbf{y}_1|\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2)$ is calculated in \eqref{Q1}, where the reasoning for each step is as follows: \begin{enumerate}[(a)] \item $I(\textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1,\textbf{u};\textbf{y}_1|\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2)$ = $I(\textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1;\textbf{y}_1|\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2)$ + $I(\textbf{u};\textbf{y}_1| \textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1,\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2)$, and applying the data processing inequality for the Markov chain $(\textbf{b}_2^{a}, \textbf{b}_2^{b}, \textbf{b}_1,\textbf{u}) \rightarrow (\textbf{H}^{\text{I}}_{1,1} \textbf{u}_1, \textbf{H}^{\text{II}}_{2,1} \textbf{u}_2, \textbf{H}_{1,1}^\text{V}(\textbf{b}_1 + \Phi \textbf{H}_{1,2}^\text{I}\textbf{u}_1) + \textbf{H}_{2,1}^\text{V} (\textbf{b}_2^{a}+ \Phi \textbf{H}_{2,2}^\text{II}\textbf{u}_2),\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{1,1}^\text{V}(\textbf{b}_1 + \Phi \textbf{H}_{1,2}^\text{I}\textbf{u}_1) + \textbf{H}_{2,1}^\text{VI} (\textbf{b}_2^{b} + \Omega \textbf{H}_{2,2}^\text{II}\textbf{u}_2)) \rightarrow \textbf{y}_1$. \item When the input is circularly symmetric complex Gaussian, according to \cite{100}, the difference of the two mutual information terms can be rewritten as $\log \det(\textbf{I} + \text{SNR}\,\textbf{A}\textbf{A}^H) - \log \det(\textbf{I} + \text{SNR}\,\textbf{B}\textbf{B}^H)$, and Lemma 2 in \cite{31} then applies. 
\item It can be verified by Gaussian elimination that the rank of matrix \textbf{A} is $N(2\tau_1 + \tau_2 + \tau_3)$. The rank of matrix \textbf{B} is $\min\{N(2\tau_1 + \min\{\tau_1,\tau_2\} + \min\{\tau_1,\tau_3\}),2M\tau_1\}$, as shown in Appendix B. \end{enumerate} Therefore, to ensure $I(\textbf{b}_2^a,\textbf{b}_2^b,\textbf{b}_1;\textbf{y}_1|\textbf{a}_1^a,\textbf{a}_1^b,\textbf{a}_2) = o(\log \text{SNR})$, according to \eqref{Q1}, $\tau_1$ should satisfy \begin{subequations} \begin{eqnarray} && \tau_2 \le \tau_1, \label{1S} \\ && \tau_3 \le \tau_1, \label{2S} \\ && N(2\tau_1 + \tau_2 + \tau_3) \le 2M\tau_1. \label{3S} \end{eqnarray} \end{subequations} Then, substituting $(\tau_2^*,\tau_3^*,\tau_4^*) = (2N-M,2M-N,2M-N)$ into \eqref{1S}-\eqref{3S} and simplifying the expression, we have \begin{eqnarray} \max\left \{\frac{N(M+N)}{2(M-N)},2M-N \right\} \le \tau_1. \end{eqnarray} To maximize the sum-SDoF lower bound achieved by our scheme, $\tau_1$ should be as small as possible. This is because Phase-I and Phase-II do not contain any fresh data symbols. Consequently, the optimal $\tau_1^*$ is given by \begin{equation} \label{O2} \tau_1^* = \begin{cases} \frac{N(M+N)}{2(M-N)}, & N < M \le \frac{7+\sqrt{33}}{8}N, \\ 2M-N, & \frac{7+\sqrt{33}}{8}N < M \le 2N. \end{cases} \end{equation} Our scheme delivers $2M(2\tau_2+\tau_3)$ data symbols over $2\tau_1+2\tau_2+2\tau_3+\tau_4$ TSs. With the above $(\tau_1^*,\tau_2^*,\tau_3^*,\tau_4^*)$, the sum-SDoF lower bound in \eqref{LB} for $N< M \le 2N$ is achieved. \subsection{$2N < M$: Adopt the Transmission Scheme in \cite{35}} Intuitively, since the number of useful equations at the two receivers is at most $2N$ per TS, the data symbols cannot be decoded via interference recurrence if we send more than $2N$ data symbols per TS. This motivates us to send $2N$ data symbols from one transmitter for one receiver at each TS, as the scheme in \cite{35} does, where the lower bound is $4N/5$. 
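As a numerical cross-check (illustrative only, not part of the proof), one can verify that with $(\tau_1^*,\tau_2^*,\tau_3^*,\tau_4^*)$ as above, the ratio of delivered data symbols to TSs reproduces the two middle branches of \eqref{LB}:

```python
import math

def scheme_sdof(M, N):
    """Sum-SDoF achieved by the scheme for N < M <= 2N with the optimal taus."""
    r0 = (7 + math.sqrt(33)) / 8
    t2, t3, t4 = 2 * N - M, 2 * M - N, 2 * M - N
    # tau_1^*: the smallest tau_1 meeting the secrecy constraints.
    t1 = N * (M + N) / (2 * (M - N)) if M <= r0 * N else 2 * M - N
    # 2M(2*tau_2 + tau_3) data symbols over 2*tau_1 + 2*tau_2 + 2*tau_3 + tau_4 TSs.
    return 2 * M * (2 * t2 + t3) / (2 * t1 + 2 * t2 + 2 * t3 + t4)

def closed_form(M, N):
    """The corresponding two middle branches of the bound in Theorem 1."""
    r0 = (7 + math.sqrt(33)) / 8
    return 3 * N * (M - N) / (2 * M - N) if M <= r0 * N else 6 * M * N / (8 * M - N)
```

For instance, $(M,N)=(10,8)$ gives $\tau_1^*=36$, $480$ data symbols over $120$ TSs, and sum-SDoF $4 = 3N(M-N)/(2M-N)$.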
\section{Conclusions} We have obtained a sum-SDoF lower bound for the MIMO XCCM with delayed CSIT by proposing a transmission scheme. This transmission scheme can be viewed as a generalized version of the scheme in \cite{64} for symmetric antenna configurations. We have derived the optimal phase durations for AN transmission based on the security analysis. Future research can be devoted to: 1) Finding a linear sum-SDoF upper bound; 2) Extending the proposed scheme to arbitrary antenna configurations without the symmetry assumption. \section*{Appendix} \subsection{Rank Analysis for Matrix $\textbf{H}_1$} The rank of $\textbf{H}_1$ is equal to the sum of the rank of \begin{equation} \textbf{L} = \begin{bmatrix} \textbf{H}_{1,1}^\text{III} & \textbf{0} & \textbf{H}_{2,1}^\text{III} \\ \textbf{0} & \textbf{H}_{1,1}^\text{IV} & \textbf{H}_{2,1}^\text{IV} \Gamma \textbf{H}_{2,2}^\text{III} \end{bmatrix},\nonumber \end{equation} and the rank of \begin{equation} \textbf{U} = \begin{bmatrix}\textbf{H}_{1,1}^\text{VII} \Theta \textbf{H}_{2,2}^\text{IV} \Gamma \textbf{H}_{1,2}^\text{III}& -\textbf{H}_{1,1}^\text{VII}\Theta\textbf{H}_{1,2}^\text{IV} & \textbf{0}\end{bmatrix}. \nonumber \end{equation} Due to the linear independence, the rank of $\textbf{L}$ is equal to the sum of the rank of the sub-matrix $[\textbf{H}_{1,1}^\text{III}, \textbf{0}, \textbf{H}_{2,1}^\text{III} ]$ and the rank of the sub-matrix $[\textbf{0}, \textbf{H}_{1,1}^\text{IV}, \textbf{H}_{2,1}^\text{IV} \Gamma \textbf{H}_{2,2}^\text{III}]$. The rank of the sub-matrix $[ \textbf{H}_{1,1}^\text{III}, \textbf{0}, \textbf{H}_{2,1}^\text{III} ]$ is $N\tau_2$. On the other hand, when $N < M$, the rank of $\textbf{H}_{2,1}^\text{IV} \Gamma \textbf{H}_{2,2}^\text{III}$ is $N\min\{ \tau_2,\tau_3\}$. Thus, the rank of $[\textbf{H}_{1,1}^\text{IV}, \textbf{H}_{2,1}^\text{IV} \Gamma \textbf{H}_{2,2}^\text{III}]$ is $N\tau_3$. Consequently, the rank of $\textbf{L}$ is $N(\tau_2+\tau_3)$. 
The rank of $\textbf{U}$ is determined by the sub-matrix $[\textbf{H}_{1,1}^\text{VII} \Theta \textbf{H}_{2,2}^\text{IV} \Gamma \textbf{H}_{1,2}^\text{III},-\textbf{H}_{1,1}^\text{VII}\Theta\textbf{H}_{1,2}^\text{IV}]$, which can be decomposed as the product of $\textbf{H}_{1,1}^\text{VII} \Theta$ and $[ \textbf{H}_{2,2}^\text{IV} \Gamma \textbf{H}_{1,2}^\text{III},-\textbf{H}_{1,2}^\text{IV}]$. When $N < M$, the rank of $\textbf{H}_{1,1}^\text{VII} \Theta$ is $N\min\{\tau_4,\tau_3\}$ and the rank of $[ \textbf{H}_{2,2}^\text{IV} \Gamma \textbf{H}_{1,2}^\text{III},-\textbf{H}_{1,2}^\text{IV}]$ is $N\tau_3$. Since the rank of a product of two matrices is at most the minimum of their individual ranks, with equality here for generic channel matrices, the rank of $\textbf{U}$ is $N\min\{ \tau_3,\tau_4\}$. Therefore, we conclude that the rank of $\textbf{H}_1$ is $N(\tau_2+\tau_3 + \min\{\tau_3,\tau_4\})$. \subsection{Rank Analysis for Matrix $\textbf{B}$} For matrix \textbf{B}, the blocks $\textbf{H}_{1,1}^\text{III}\Phi\textbf{H}_{1,1}^\text{I} $, $\textbf{H}_{1,1}^\text{IV}\Omega\textbf{H}_{1,1}^\text{I}$, and $ \textbf{H}_{1,1}^\text{VII}\Theta(\textbf{H}_{2,2}^\text{IV}\Gamma\textbf{H}_{1,2}^\text{III}\Phi-\textbf{H}_{1,2}^\text{IV}\Omega)\textbf{H}_{1,1}^\text{I}$ are generated from $\textbf{H}_{1,1}^\text{I}$, the blocks $\textbf{H}_{2,1}^\text{III}\Phi\textbf{H}_{2,1}^\text{II}$ and $\textbf{H}_{2,1}^\text{IV}\Gamma\textbf{H}_{2,2}^\text{III}\Phi\textbf{H}_{2,1}^\text{II}$ are generated from $\textbf{H}_{2,1}^\text{II}$, and the block $\textbf{H}_{2,1}^\text{VII}\Theta(\textbf{H}_{1,1}^\text{VI}\Gamma\textbf{H}_{2,1}^\text{V}\Phi-\textbf{H}_{2,1}^\text{VI}\Omega)\textbf{H}_{2,2}^\text{II}$ is generated from $\textbf{H}_{2,1}^\text{V}\Phi\textbf{H}_{2,2}^\text{II}$ and $\textbf{H}_{2,1}^\text{VI}\Omega\textbf{H}_{2,2}^\text{II}$. 
Therefore, the rank of $\textbf{B}$ is equivalent to the rank of the following matrix: \begin{equation} \begin{bmatrix} \textbf{H}_{1,1}^\text{I} & \textbf{0} \\ \textbf{0} & \textbf{H}_{2,1}^\text{II} \\ \textbf{H}_{1,1}^\text{V}\Phi\textbf{H}_{1,2}^\text{I} & \textbf{H}_{2,1}^\text{V}\Phi\textbf{H}_{2,2}^\text{II} \\ \textbf{0} & \textbf{H}_{2,1}^\text{VI}\Omega\textbf{H}_{2,2}^\text{II} \end{bmatrix}. \nonumber \end{equation} The rank of the above matrix is $\min\{N(2\tau_1 + \min\{\tau_1,\tau_2\} + \min\{\tau_1,\tau_3\}),2M\tau_1\}$, since the ranks of $\textbf{H}_{1,1}^\text{V}\Phi\textbf{H}_{1,2}^\text{I}$ and $\textbf{H}_{2,1}^\text{V}\Phi\textbf{H}_{2,2}^\text{II}$ are $N\min\{\tau_1,\tau_2\}$, the rank of $\textbf{H}_{2,1}^\text{VI}\Omega\textbf{H}_{2,2}^\text{II}$ is $N\min\{\tau_1,\tau_3\}$, and the ranks of $\textbf{H}_{1,1}^\text{I}$ and $\textbf{H}_{2,1}^\text{II}$ are $N\tau_1$. As a result, we conclude that the rank of $\textbf{B}$ is $\min\{N(2\tau_1 + \min\{\tau_1,\tau_2\} + \min\{\tau_1,\tau_3\}),2M\tau_1\}$. \bibliographystyle{IEEEtran}
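The two rank claims above can be probed numerically with generic (random Gaussian) channel blocks; the sketch below, with small hypothetical values of $(M,N,\tau_1,\tau_2,\tau_3)$, merely illustrates the genericity argument and is not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2, 3              # receive/transmit antennas, with N < M as assumed
t1, t2, t3 = 3, 2, 4     # sample phase durations (hypothetical values)

G = lambda r, c: rng.standard_normal((r, c))  # generic Gaussian block

# Appendix-A matrix L = [[H11_III, 0, H21_III], [0, H11_IV, H21_IV @ Gam @ H22_III]].
H11_III, H21_III, H22_III = G(t2*N, t2*M), G(t2*N, t2*M), G(t2*N, t2*M)
H11_IV, H21_IV = G(t3*N, t3*M), G(t3*N, t3*M)
Gam = G(t3*M, t2*N)      # Gamma in C^{tau3*M x tau2*N}
L = np.block([
    [H11_III, np.zeros((t2*N, t3*M)), H21_III],
    [np.zeros((t3*N, t2*M)), H11_IV, H21_IV @ Gam @ H22_III],
])
assert np.linalg.matrix_rank(L) == N * (t2 + t3)

# Appendix-B reduced matrix with the stated block structure.
H11_I, H12_I, H21_II, H22_II = (G(t1*N, t1*M) for _ in range(4))
Phi, Om = G(t2*M, t1*N), G(t3*M, t1*N)
H11_V, H21_V, H21_VI = G(t2*N, t2*M), G(t2*N, t2*M), G(t3*N, t3*M)
B = np.block([
    [H11_I, np.zeros((t1*N, t1*M))],
    [np.zeros((t1*N, t1*M)), H21_II],
    [H11_V @ Phi @ H12_I, H21_V @ Phi @ H22_II],
    [np.zeros((t3*N, t1*M)), H21_VI @ Om @ H22_II],
])
expected = min(N * (2*t1 + min(t1, t2) + min(t1, t3)), 2 * M * t1)
assert np.linalg.matrix_rank(B) == expected
```

Random real Gaussian blocks suffice here, since generic ranks over $\mathbb{R}$ and $\mathbb{C}$ coincide almost surely.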
\section{Introduction} For $f \in \mathcal{S}(\mathbb{R}),$ one defines the \emph{Carleson operator} to be \[ Cf(x) = \sup_{N \in \mathbb{R}} \left| \int_{\mathbb{R}} f(x-t) e^{iNt} \frac{\mathrm{d} t}{t}\right| . \] A celebrated result related to this operator is that it is bounded on $L^2(\mathbb{R}).$ This was first proved by Carleson \cite{Carleson}, although not directly, as a by-product of his proof of almost everywhere convergence of Fourier series of $L^2(\mathbb{T})-$functions. Subsequent works (see, for instance, \cite{Fefferman, LaceyThiele, Hunt}) have simplified Carleson's proof, connected it to different contexts and extended it to other $L^p-$spaces. In this note, however, we will concentrate on a question originally raised by E. Stein and other questions derived from it: if we define the \emph{polynomial} Carleson operator of degree $d$ to be \begin{equation}\label{eq poly carleson} C_d f(x) = \sup_{\text{deg} (P) \le d} \left| \int_{\mathbb{R}} f(x-t) e^{i P(t)} \, \frac{\mathrm{d} t}{t} \right|, \end{equation} is this bounded in $L^p(\mathbb{R})$ for any $d \ge 1$? For $d=1,$ this is Carleson's theorem. A first contribution in the $d \ge 2$ case was made by Stein and Wainger \cite{SteinWainger}, who proved that if one restricts the supremum in \eqref{eq poly carleson} to the class of polynomials of degree at most $d$ such that $P(0) = P'(0) = 0,$ then this new operator is bounded in $L^p(\mathbb{R}),$ for all $p >1.$ We additionally refer the reader to \cite{GPRY, GHLR, PierceYung, Guo2} for further developments in this direction. Nevertheless, it was not until the work of V. Lie \cite{Lie1} that the first major contribution towards the full question was made. 
In that work, Lie proved that the \emph{quadratic} Carleson operator $C_2$ in the definition above possesses a weak-type bound in $L^2.$ The methods rely on appropriate adaptations and decompositions of the quadratic Carleson operator in the same fashion as in the work of Fefferman \cite{Fefferman}. In fact, a similar idea has been employed by both Lie \cite{Lie2} and Zorin-Kranich \cite{pavel} to conclude that, in the general case of $d \ge 2,$ the polynomial Carleson operator $C_d$, together with truncated and generalized versions of it, is bounded in $L^p(\mathbb{R})$ for any $p>1.$ Another related problem is that of bounding other kinds of \emph{oscillatory} Carleson operators, as raised in \cite{GHLR}. There, the authors, driven by the study of certain Hilbert transforms along variable curves, arrive at bounding the \emph{oscillatory} Carleson operators \[ C_{\alpha}f(x) = \sup_{N,M \in \mathbb{R}} \left| \int_{\mathbb{R}} f(x-t) e^{iNt} e^{iM[t]^{\alpha}} \, \frac{\mathrm{d} t}{t} \right|. \] Here, we let $[t]^{\alpha}$ denote either $\text{sign}(t)|t|^{\alpha}$ or $|t|^{\alpha}.$ Although the authors prove in \cite{GHLR} that these operators are bounded in $L^p$ whenever $\alpha \in \mathbb{R}, \alpha \not\in\{0,1\},$ the proof is dramatically different in two distinct cases: for $\alpha\neq 2,$ one can compare the operator on a certain subset of $\mathbb{R}$ to a Carleson operator, and outside that region it is possible to use a $TT^*-$method, in the same fashion as Stein and Wainger \cite{SteinWainger}, to obtain $L^p-$bounds for such an operator. On the other hand, for $\alpha = 2,$ the only proof known so far is that of Lie \cite{Lie1}, which employs time-frequency analysis methods and a strategy resembling the original Carleson theorem proof. Due to the difference in such proofs, the bounds on the constant $\|C_{\alpha}\|_{L^p \to L^p}$ \emph{blow up} as $\alpha \to 2$ in \cite{GHLR}. 
In this note, we will mainly focus on proving \emph{uniform bounds} for some instances of such operators. We start by providing an alternative proof for \begin{theorem}\label{poly} Let $P(t)$ be a polynomial of degree $d.$ Then the operator \[ C_Pf(x) = \sup_{N \in \mathbb{R}} \left| \int_{\mathbb{R}} f(x-t) e^{iNt + iP(t)} \frac{\mathrm{d} t}{t} \right| \] is bounded in $L^p, \, 1< p < +\infty,$ with bounds depending \emph{only} on $p,d.$ \end{theorem} Notice that, by the works of Lie \cite{Lie2} and Zorin-Kranich \cite{pavel}, this result is not new as stated, being a consequence of the more general polynomial Carleson theorem. We stress, however, that the proof we provide here is conceptually different: we prove that boundedness of the original (linear) Carleson operator \emph{implies} bounds for this polynomial version. This is connected to recent results by this author, which use operators similar to $C_P$ above in order to study degenerate cases of maximal modulations of the Hilbert transform along the parabola. Indeed, in \cite{Ramos1}, we consider the multiplier $$m(\xi,\eta) = \text{p.v.}\int_{\mathbb{R}} e^{2 \pi i\xi t + 2\pi i\eta t^2} \, \frac{\mathrm{d} t}{t},$$ which is associated to the operator $\mathcal{H}_2f(x,y) = \int_{\mathbb{R}} f(x-t,y-t^2)\, \frac{\mathrm{d} t}{t}$, and possesses an anisotropic dilation invariance. We thus define the family \[ m_{a,b}(\eta) = m(a\eta + b, \eta), a,b \in \mathbb{R}. \] In order to prove that the operators $\mathcal{C}_{a,b}f(x) = \sup_{N \in \mathbb{R}} |T_{a,b}(\mathcal{M}_Nf)(x)|,$ where $T_{a,b}h = (m_{a,b}\widehat{h})^{\vee},$ are bounded in $L^p$ \emph{uniformly} in $a,b \in \mathbb{R},$ one needs to resort to Theorem \ref{poly}. 
Indeed, in \cite{Ramos1}, we use a stronger version of Theorem \ref{poly} in order to obtain an even stronger statement about uniformity of bounds for $\mathcal{C}_{a,b},$ but with the present methods we are already able to answer the question of how to obtain Theorem \ref{poly} directly from bounds for the Carleson operator. For the proof of Theorem \ref{poly}, we have two main steps: the first is to use a \emph{gap decomposition}, in the same spirit as Guo \cite{Guo1} (see also \cite{CarberyRicciWright} for the original idea behind this decomposition), to reduce matters to the quadratic case. The second step is to deal with the version of the quadratic Carleson operator arising from that. Although bounds for the operator \[ \tilde{C}_2f(x) = \sup_{N \in \mathbb{R}} \left| \int_{\mathbb{R}} f(x-t) e^{iNt} e^{it^2} \, \frac{\mathrm{d} t}{t}\right| \] follow directly from the boundedness of the Carleson operator by using the mapping $f(t) \mapsto e^{it^2}f(t),$ this also provides us with a natural connecting point with the next result. \begin{theorem}\label{known} Let $\alpha >0, \, \alpha \not\in \{0,1\}.$ Then the oscillatory Carleson operator \[ \mathcal{C}_{\alpha} f(x) := \sup_{N \in \mathbb{R}} \left|\int_{\mathbb{R}} f(x-t) e^{iNt} e^{i[t]^{\alpha}} \frac{\mathrm{d} t}{t} \right| \] is bounded in $L^p(\mathbb{R}), 1 < p < + \infty,$ with $\sup_{\alpha \in (2-\beta,2+\beta)} \|\mathcal{C}_{\alpha}\|_{p \to p} < +\infty$ whenever $0< \beta < 1$. \end{theorem} Of course, the bounds for each of the operators $\mathcal{C}_{\alpha}$ individually are known, but, as previously stated, the point here is to prove that the bounds hold uniformly in a vicinity of $\alpha = 2,$ with our proof being new and, to the best of our knowledge, not derivable in a straightforward manner from the results of Lie \cite{Lie1, Lie2}. The main new idea for the proof of Theorem \ref{known} is a deeper understanding of a wave packet decomposition guided by the oscillatory factor. 
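For the reader's convenience, the square-completion computation behind this remark can be recorded explicitly. Writing $g(u) = e^{iu^2} f(u),$ one has $g(x-t) = e^{i(x-t)^2} f(x-t) = e^{ix^2} e^{-2ixt} e^{it^2} f(x-t),$ and therefore

```latex
\begin{align*}
\int_{\mathbb{R}} f(x-t)\, e^{iNt} e^{it^2}\, \frac{\mathrm{d} t}{t}
  = e^{-ix^2} \int_{\mathbb{R}} g(x-t)\, e^{i(N+2x)t}\, \frac{\mathrm{d} t}{t},
\end{align*}
```

so that $\tilde{C}_2 f(x) \le Cg(x)$ pointwise, and since $|g| = |f|,$ the $L^p-$bounds for the Carleson operator $C$ transfer to $\tilde{C}_2.$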
Instead of redoing the original Carleson proof, we identify the regions in which the best strategy is to compare our operator with a (truncated) Carleson operator, and those where the oscillation coming from the $|t|^{\alpha}$ term takes over and induces decay. This paper is organized as follows: in Section \ref{simplerr}, we prove Theorem \ref{poly} by reducing to the quadratic case. We then focus, in Section \ref{uniform}, on proving the case $|\alpha - 2| < \frac{1}{2}$ of Theorem \ref{known} with bounds uniform in the parameter $\alpha$ (for the remaining cases, see, for instance, \cite[Corollary~1.7]{GHLR}). We reserve the last section for related remarks and comments. \subsection*{Acknowledgements} The author is thankful to Christoph Thiele, Pavel Zorin--Kranich and Shaoming Guo for discussions that led to the final form of this manuscript. \section{Theorem \ref{poly} and bounds for a simplified polynomial Carleson operator}\label{simplerr} In this section, we prove that the operator \[ C_Pf(x) := \sup_{N \in \mathbb{R}} \left|\int_{\mathbb{R}} f(x-t) e^{iNt + iP(t)} \,\frac{\mathrm{d} t}{t}\right|, \] where $P$ is a fixed polynomial, is bounded in $L^p(\mathbb{R}), p>1,$ with bounds depending only on the degree of $P.$ We thus focus on proving bounds for \[ C_P^Nf(x) := \int_{\mathbb{R}} f(x-t) e^{iN(x)t + iP(t)} \, \frac {\mathrm{d} t}t \] in $L^p$, independently of $N$ and depending only on $\text{deg }P,$ where $N : \mathbb{R} \to \mathbb{R}_{+}$ is a measurable function which we may choose as taking on only finitely many values. \label{redquad} We first assume bounds in the case $\text{deg }P = 2$ and deduce the general case from them. We follow closely the approach by Guo \cite{Guo1} in order to achieve uniform bounds on oscillatory singular integrals with respect to fewnomials.
Namely, we find a decomposition of the real line associated to the polynomial $P$, but with parameters depending only on $\text{deg }P.$ \\ For that purpose we let $N(x)t + P(t) = N'(x)t + at^2 + \tilde{P}(t),$ where $\tilde{P}(0) = \tilde{P}'(0) = \tilde{P}''(0) = 0$. We let $a_i$ be the coefficient of $t^i$ in the expansion of $\tilde{P}(t)$, and define $\lambda_k = 2^{1/k}.$ Finally, fix $C_d > 0$ large enough -- $C_d \ge 10^{d!}$ will work, for instance -- and define the first set of \emph{bad scales} associated to $j,k \in [3,d]$ to be \[ \mathcal{S}_{bad}^1(j,k) :=\{l \in \mathbb{Z} \colon 2^{-C_d} |a_k \lambda_d^{kl}| \le |a_j \lambda_d^{jl}| \le 2^{C_d} |a_k \lambda_d^{kl}|\}. \] This set is connected and has cardinality at most $4dC_d.$ In fact, if $l_1, l_2 \in \mathcal{S}_{bad}^1(j,k)$, then the defining inequalities imply that, for $\theta \in (0,1),$ \[ 2^{-C_d} |a_k \lambda_d^{k(l_1 \theta + (1-\theta)l_2)} | \le |a_j \lambda_d^{j(l_1 \theta + (1-\theta)l_2)}| \le 2^{C_d} |a_k \lambda_d^{k(l_1\theta + (1-\theta)l_2)}|. \] This plainly implies connectedness. Also, the minimal scale $l_{min} \in \mathcal{S}^1_{bad}(j,k)$ satisfies $2^{-C_d} |a_k \lambda_d^{kl_{min}}| \sim |a_j \lambda_d^{jl_{min}}|.$ This implies $l_{min} = \frac{1}{j-k} \left(d\log_2(|a_k|/|a_j|) - dC_d\right).$ The same analysis shows that $l_{max} = \frac{1}{j-k} \left(d \log_2 (|a_k|/|a_j|) + d C_d\right).$ The cardinality assertion then follows. \\ We define, therefore, \[ \mathcal{S}_{bad}^1 = \bigcup_{j \ne k} \mathcal{S}_{bad}^1(j,k), \,\,\mathcal{S}_{good}^1 = (\mathcal{S}_{bad}^1)^c. \] The set $\mathcal{S}_{good}^1$ has at most $d^2$ connected components, in each of which a single monomial ``dominates" $\tilde{P}$. Indeed, there are at most $ \text{deg}(P)^2$ nonempty sets $\mathcal{S}^1_{bad}(k,j),$ all of which are intervals. Their complement is then a union of at most $d^2$ intervals.
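For concreteness, let us record the endpoint computation behind the cardinality claim. Setting the left inequality in the definition of $\mathcal{S}_{bad}^1(j,k)$ to an equality and recalling that $\lambda_d = 2^{1/d}$, we solve \[ \log_2 |a_k| + \frac{kl}{d} - C_d = \log_2|a_j| + \frac{jl}{d} \quad \Longleftrightarrow \quad l = \frac{1}{j-k}\left(d \log_2(|a_k|/|a_j|) - d C_d\right), \] which is the stated value of $l_{min}$; replacing $-C_d$ by $+C_d$ gives $l_{max}$, so that $l_{max} - l_{min} = 2dC_d/|j-k| \le 2dC_d,$ consistent with the cardinality bound above.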
We let $\mathcal{S}^2_{bad}(k,j)$ be defined in complete analogy to $\mathcal{S}^1_{bad}(k,j)$, but with respect to the sequence $b_j = j(j-1)(j-2) a_j$ of coefficients of the third derivative of $\tilde{P}.$ As before, we take $$\mathcal{S}_{bad}^2 = \cup_{j \ne k} \mathcal{S}_{bad}^2(j,k), \, \mathcal{S}_{good} = \mathcal{S}_{good}^1 \backslash \mathcal{S}_{bad}^2.$$ The set $\mathcal{S}_{good}$ therefore has at most $2d^2$ connected components, in each of which both $\tilde{P}$ and $\tilde{P}'''$ are ``dominated" by a monomial. \\ We use this scale decomposition to write \[ C_P^Nf(x) = \sum_{l \in \mathbb{Z}} \int_{\mathbb{R}} f(x-t) e^{iN(x)t + iP(t)} \psi_l(t) \frac{\mathrm{d} t}{t}, \] where $\psi_0(t)$ is a smooth function supported in $[-\lambda_d^2,-\lambda_d^{-1}] \cup [\lambda_d^{-1},\lambda_d^2]$ such that \[ \sum_{l \in \mathbb{Z}} \psi_l(t) := \sum_{l \in \mathbb{Z}} \psi_0\left(\frac{t}{\lambda_d^l}\right) = 1, \, \forall t \ne 0. \] As we saw before, the set $\mathcal{S}_{bad} := \mathcal{S}^1_{bad} \cup \mathcal{S}^2_{bad}$ is finite, with cardinality bounded by a constant depending only on $d.$ On each of the scales $l \in \mathcal{S}_{bad},$ we have \[ \left|\int_{\mathbb{R}} f(x-t) e^{iN(x)t + iP(t)} \psi_l(t) \frac{\mathrm{d} t}{t}\right| \le 4 Mf(x), \] where $M$ denotes the Hardy--Littlewood maximal function. Therefore, \[ \left|\sum_{l \in \mathcal{S}_{bad}} \int_{\mathbb{R}} f(x-t) e^{iN(x)t +iP(t)} \psi_l(t) \frac{\mathrm{d} t}{t} \right| \le A_d \cdot Mf(x). \] Here, $A_d$ only depends on the number of bad scales, which is bounded by $(4dC_d)^2.$ This completes the treatment of $\mathcal{S}_{bad}$.
For the good scales, we consider each connected component $\mathcal{S}_{good}(k_1,k_2) \subset \mathcal{S}_{good}$ associated to a pair $k_1,k_2 \in [3,d],$ so that, for $l \in \mathcal{S}_{good}(k_1,k_2)$ and $t \in [\lambda_d^{l-2},\lambda_d^{l+1}],$ \[ \frac{1}{2} \cdot (1-d2^{-C_d})|a_{k_1} t^{k_1}| \le |\tilde{P}(t)| \le 2 (1+d2^{-C_d}) |a_{k_1} t^{k_1}|, \] \[ 2(1+d2^{-C_d})|a_{k_2}t^{k_2-3}| \ge |\tilde{P}^{(3)}(t)| \ge \frac{1}{2} \cdot (1-d2^{-C_d})|a_{k_2}t^{k_2-3}|. \] In particular, $|\tilde{P}(t)| \le 8|a_{k_1} t^{k_1}|, \, |\tilde{P}^{(3)}(t)| \ge \frac{1}{8} |a_{k_1} t^{k_1-3}|.$ Next, we gather the scales in a single connected component by setting \[ \Phi_{k_1,k_2} (t) = \sum_{l \in \mathcal{S}_{good}(k_1,k_2)} \psi_l(t). \] Now let $\phi^k_{0}$ be a smooth positive function supported in $[-\lambda_k^2,-\lambda_k^{-1}] \cup [\lambda_k^{-1},\lambda_k^2]$ such that \[ \sum_{i \in \mathbb{Z}} \phi^k_i(t) := \sum_{i \in \mathbb{Z}} \phi^k_0\left(\frac{t}{\lambda_k^i}\right) = 1, \, \forall t \ne 0. \] Finally, if $B_k \in \mathbb{Z}$ is such that $\lambda_k^{-B_k} \le |a_k| \le \lambda_k^{-B_k + 1},$ we let $\theta_k = B_k/k$. \\ From the previous considerations, we are left to bound \[ \sum_{l \in \mathcal{S}_{good}} \int_{\mathbb{R}} f(x-t) e^{iN(x)t +iP(t)} \psi_l(t) \frac{\mathrm{d} t}{t}= \sum_{(k_1,k_2)} \int_{\mathbb{R}} f(x-t) e^{iN(x)t + iP(t)} \Phi_{k_1,k_2}(t) \frac{\mathrm{d} t}{t}. \] As the number of connected components of $\mathcal{S}_{good}$ is $\le 2d^2,$ we only need to bound each summand individually, which we write as \[ \sum_{i \in \mathbb{Z}} T^{k_1,k_2}_if(x) := \sum_{i \in \mathbb{Z}} \int_{\mathbb{R}} f(x-t) e^{iN(x)t + iP(t)} \Phi_{k_1,k_2}(t) \phi^{k_1}_i(t) \frac{\mathrm{d} t}{t}. \] We split the sum above further as \[ \sum_{i \le \theta_{k_1}} T^{k_1,k_2}_if(x) + \sum_{i > \theta_{k_1}} T^{k_1,k_2}_if(x).
\] By comparing each term in the first summand, we arrive at \begin{align*} \left|\sum_{i \le \theta_{k_1}} T^{k_1,k_2}_if(x) \right| \le & \left| \sum_{i \le \theta_{k_1}} \int_{\mathbb{R}} f(x-t) e^{iN(x)t + iat^2} \Phi_{k_1,k_2}(t) \phi^{k_1}_i(t) \frac{\mathrm{d} t}{t}\right| \cr & + \sum_{i \le \theta_{k_1}} \int_{\mathbb{R}}|f(x-t)| |\tilde{P}(t)| \Phi_{k_1,k_2}(t) \phi^{k_1}_i(t) \frac{\mathrm{d} t}{|t|}. \end{align*} In the first integral, the main observation is that $\Phi_{k_1,k_2} \cdot \left(\sum_{i \le \theta_{k_1}} \phi_i^{k_1}\right)$ approximates the characteristic function of an interval. The first term on the right-hand side is then comparable to truncations of the quadratic Carleson operator, where the supremum is taken over phases of the form $Nt + at^2,\,$ $a$ fixed. The remaining error terms amount to an absolute constant times a maximal function. \\ It is not difficult to show that the integrand in the second term on the right-hand side is bounded by $8|f(x-t)|\lambda_{k_1}^{(i-\theta_{k_1}) \cdot k_1} \cdot \lambda_{k_1}^{-i}$ on a set of measure $\sim \lambda_{k_1}^i.$ This is due to the growth estimate $|\tilde{P}(t)| \le 4 |a_{k_1} t^{k_1}|$ on the support of $\phi_i^{k_1}$. Summing over $i \le \theta_{k_1}$ shows that this term is bounded by $10^2 \cdot Mf(x).$ \\ For the remaining part, we use the $TT^*$ method. We wish to bound \[ \sum_{i = 0}^{\infty} \|T^{k_1,k_2}_{i + \theta_{k_1}} f\|_p. \] To that end, we first notice the pointwise bound $|T^{k_1,k_2}_{i+\theta_{k_1}}f| \le 2 Mf(x).$ We then need to prove exponential decay in $i$ for the $L^2$ bounds of $T^{k_1,k_2}_{i+ \theta_{k_1}}.$ This follows from proving the same exponential decay in bounds for $T^{k_1,k_2}_{i+\theta_{k_1}}(T^{k_1,k_2}_{i+\theta_{k_1}})^*$.
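The oscillatory integral estimates below rely on the standard van der Corput second-derivative test (see, e.g., \cite[Chapter~VIII]{Stein1}): if $|\phi''(s)| \ge \lambda > 0$ on the support of a $C^1$ cutoff $\psi$, then \[ \left| \int_{\mathbb{R}} e^{i \phi(s)} \psi(s) \, \mathrm{d} s \right| \lesssim \lambda^{-1/2} \left( \|\psi\|_{\infty} + \|\psi'\|_{L^1} \right). \] We will apply it with $\lambda$ an explicit lower bound on the curvature of the phase of the $TT^*$ kernel.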
After a change of variables, the convolution kernel of this last expression is given by \begin{align}\label{eq oscillatory_0} (\lambda_{k_1})^{-(i+\theta_{k_1})} \int_{\mathbb{R}} & e^{i(N(y)-N(y-\xi))\cdot (\lambda_{k_1})^{i+\theta_{k_1}} \cdot s'} \cr & \times e^{i [\tilde{P}(\lambda_{k_1}^{i + \theta_{k_1}}s') - \tilde{P}(\lambda_{k_1}^{i + \theta_{k_1}}(s'-\xi'))] } \cdot \frac{\phi^{k_1}_0(s')}{s'} \cdot \frac{\phi^{k_1}_0(s'-\xi')}{s'-\xi'} \, \mathrm{d} s', \end{align} where $\xi = (\lambda_{k_1})^{i+ \theta_{k_1}} \xi'.$ As is usual in such contexts, we use stationary phase. Our phase function this time is \[ v\cdot (\lambda_{k_1})^{i+\theta_{k_1}} \cdot s' + \tilde{P}(\lambda_{k_1}^{i + \theta_{k_1}}s') - \tilde{P}(\lambda_{k_1}^{i + \theta_{k_1}}(s'-\xi')), \] with $v = N(y)-N(y-\xi).$ Differentiating twice, we obtain that the second derivative of the phase in $s'$ has absolute value \begin{align*} \lambda_{k_1}^{2(i+\theta_{k_1})} |\tilde{P}''(\lambda_{k_1}^{i + \theta_{k_1}}s') - \tilde{P}''(\lambda_{k_1}^{i + \theta_{k_1}}(s'-\xi'))| \cr \ge \frac{1}{8} \cdot \lambda_{k_1}^{3(i+\theta_{k_1})} |\xi'| |a_{k_1}| |(\lambda_{k_1})^{i+\theta_{k_1}}|^{k_1 - 3} \ge 2^{i-4} |\xi'|, \end{align*} where we used the mean value theorem and the lower bound on $\tilde{P}^{(3)}$ at the scale $\lambda_{k_1}^{i+\theta_{k_1}}.$ The proof is finished by splitting between the cases $|\xi'| \ge 2^{-i/10}$ and $|\xi'| \le 2^{-i/10}.$ For the case $|\xi'| \le 2^{-i/10},$ we bound the integral simply by pulling the absolute value inside, which yields that \eqref{eq oscillatory_0} is controlled by $ C (\lambda_{k_1})^{-(i+\theta_{k_1})} 1_{\left\{|\xi| \le 2^{-i/10} (\lambda_{k_1})^{i+\theta_{k_1}}\right\}}.$ For the case $|\xi'| \ge 2^{-i/10},$ the usual van der Corput estimate yields that \eqref{eq oscillatory_0} is bounded by $C (\lambda_{k_1})^{-(i+\theta_{k_1})} \cdot 2^{-\left(\frac{9i}{30} - 4\right)} 1_{\left\{|\xi| \le 4 \cdot (\lambda_{k_1})^{i+\theta_{k_1}}\right\}}.$ Using this in the
definition of $T^{k_1,k_2}_{i+\theta_{k_1}}(T^{k_1,k_2}_{i+\theta_{k_1}})^*$ yields the pointwise bound \begin{align*} |T^{k_1,k_2}_{i+\theta_{k_1}}(T^{k_1,k_2}_{i+\theta_{k_1}})^*f(x)| \lesssim (2^{-i/10} + 2^{-\frac{9i}{30}})Mf(x), \end{align*} which then implies the desired exponential decay in $i$ of the $L^2$ bounds, concluding the proof. For more details on the method and estimates used, see, for instance, the proof of Corollary 1.7 in \cite{GHLR} or the proof of Theorem 2 in \cite{Ramos1}. This finishes the reduction to the quadratic case, and, by either Theorem \ref{known} or the argument sketched in the introduction, the general case follows. \section{Theorem \ref{known} and uniform bounds for maximally modulated oscillatory singular integrals}\label{uniform} We focus on bounding \[ C_{\alpha}^Nf(x) := \int_{\mathbb{R}} f(x-t) e^{iN(x)t} e^{i|t|^{\alpha}} \,\frac{\mathrm{d} t}{t} \] in $L^p$ independently of the measurable function $N:\mathbb{R} \to \mathbb{R}_{+}.$ First we employ a dyadic decomposition: for $\varphi$ a smooth, nonnegative function such that $\sum_{n \in \mathbb{Z}} \varphi(2^n t) = 1, \, \forall t > 0,$ we write \[ C_{\alpha}^Nf(x) = \int_{\mathbb{R}} f(x-t) e^{iN(x)t} e^{i|t|^{\alpha}} \varphi_0(t) \, \frac{\mathrm{d} t}{t} + \sum_{n > 0} \int_{\mathbb{R}} f(x-t) e^{iN(x)t} e^{i|t|^{\alpha}} \varphi(2^{-n}t) \, \frac{\mathrm{d} t}{t}, \] where $\varphi_0(t) = \sum_{n \le 0} \varphi(2^{-n} t).$ The first summand is, modulo a maximal function error, a truncated Carleson operator. We analyze the remaining sum. \\ In order to capture the oscillation provided by the $|t|^{\alpha}$ factor, we perform a further decomposition in each of the summands.
Let $$\gamma_n = \frac{1}{(\alpha (2^{\alpha-1} -1))^{1/2}} 2^{(1- \frac{\alpha}{2})n},$$ and let also $\beta_n = 2^{ n} - \frac{3}{2} \gamma_n.$ Modulo maximal function errors again, we see that $\int_{\mathbb{R}} f(x-t) e^{iN(x)t} e^{i|t|^{\alpha}} \varphi(2^{-n}t) \, \frac{\mathrm{d} t}{t}$ may be written as a sum of terms \begin{equation}\label{decalpha} \int_{\mathbb{R}} f(x-t) e^{iN(x)t} e^{i|t|^{\alpha}} \chi_0\left(\frac{t- \beta_n}{\gamma_n} - j\right) \, \frac{\mathrm{d} t}{t}, \end{equation} where $j$ ranges from $0$ to $\sqrt{\alpha(2^{\alpha-1}-1)} 2^{\frac{\alpha}{2} \cdot n}.$ Here we choose $\chi_0$ to be a positive smooth function supported on $[-3/4,3/4]$ such that $$\sum_{j \in \mathbb{Z}} \chi_j(y) := \sum_{j \in \mathbb{Z}} \chi_0(y-j) = 1,\, \forall y \in \mathbb{R}.$$ In particular, the difference between \eqref{decalpha} and \begin{equation}\label{model1} \frac{1}{(j+1)\gamma_n + \beta_n} T_{n,j,\alpha}f(x) = \frac{1}{(j+1)\gamma_n + \beta_n}\int_{\mathbb{R}} f(x-t) e^{iN(x)t} e^{i|t|^{\alpha}} \chi_0\left(\frac{t- \beta_n}{\gamma_n} - j\right) \, \mathrm{d} t \end{equation} is bounded by a universal constant times $\frac{\gamma_n}{((j+1/2)\gamma_n + \beta_n)^2}\int_{(j+1/2)\gamma_n + \beta_n}^{(j+4)\gamma_n + \beta_n} |f(x-t)| \, \mathrm{d} t.$ As the function $$t \mapsto \frac{\gamma_n}{((j+1)\gamma_n + \beta_n)^2} \chi_{((j+1/2)\gamma_n + \beta_n),((j+4)\gamma_n + \beta_n)}(t)$$ is in $L^1$ with norm bounded \emph{uniformly} in $\alpha \ge \frac{3}{2},$ we thus focus on \eqref{model1}. The sum \[ \sum_{0 \le j \le \sqrt{\alpha(2^{\alpha-1}-1)} 2^{\frac{\alpha}{2} \cdot n}} \frac{1}{(j+1)\gamma_n + \beta_n} T_{n,j,\alpha}f \] is pointwise bounded by $10 \cdot Mf,$ so we seek decay in the $L^2$ bounds.
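Let us briefly motivate the choice of $\gamma_n$ (an observation we will not need explicitly): it normalizes the curvature of the phase at scale $2^n$. Indeed, for $t \sim 2^n$, \[ \gamma_n^2 \, \frac{\mathrm{d}^2}{\mathrm{d} t^2} |t|^{\alpha} \sim \gamma_n^2\, \alpha(\alpha-1)\, 2^{(\alpha-2)n} = \frac{\alpha-1}{2^{\alpha-1}-1} \asymp 1 \quad \text{for } |\alpha-2| \le \frac{1}{2}, \] so the windows of length $\gamma_n$ used above are precisely those on which $|t|^{\alpha}$ deviates from an affine function by a bounded amount, uniformly in $\alpha$.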
We have \begin{align*} & \left|\left\langle \sum_{0 \le j \le \sqrt{\alpha(2^{\alpha-1}-1)} 2^{\frac{\alpha}{2} \cdot n}} \frac{1}{(j+1)\gamma_n + \beta_n} T_{n,j,\alpha}f, g \right\rangle\right| \le 4 \cdot 2^{-n} \sum_{0 \le j \le 2^n \cdot \gamma_n^{-1} } |\langle T_{n,j,\alpha}f, g \rangle|.\cr \end{align*} We further decompose our already localized operators in a wave packet-like manner: \[ T_{n,j, \alpha}f(x) = \sum_{k,l \in \mathbb{Z}} 1_{E^{\alpha}_{k,l}}(x)T_{n,j,\alpha}f(x) := \sum_{k,l \in \mathbb{Z}} T^{k,l}_{n,j,\alpha}f(x), \] where $E^{\alpha}_{k,l} = \{ y \in \mathbb{R} \colon y \in (k \cdot \gamma_n, (k+1)\cdot \gamma_n], N(y) \in (l \cdot \gamma_n^{-1}, (l+1) \cdot \gamma_n^{-1}]\}.$ This decomposition takes into account the spatial localization of the point $x \in \mathbb{R}$ as well as where the measurable function $N$ lands. Assuming the normalization $\|f\|_2 = 1,$ we thus bound: \begin{align*} \sum_{0 \le j \le 2^n\cdot \gamma_n^{-1}} |\langle T_{n,j,\alpha} f,g \rangle| & \le \sum_{0 \le j \le 2^n \cdot \gamma_n^{-1}} \left\| \sum_{k,l \in \mathbb{Z}} (T^{k,l}_{n,j,\alpha})^{*} g\right\|_{2} \cr & \le \left(\frac{2^{n}}{\gamma_n}\right)^{1/2} \left( \sum_{0 \le j \le 2^n \cdot \gamma_n^{-1}} \left\| \sum_{k,l \in \mathbb{Z}} (T^{k,l}_{n,j,\alpha})^{*} g\right\|_{2}^2 \right)^{1/2}. \end{align*} The next step is to analyse the operators $T_{n,j,\alpha}^{k,l}$ and their mutual interactions, in order to capture the oscillatory effect of the phase. \begin{lemma}\label{wavepack2}Let $T_{n,j,\alpha}^{k,l}$ be defined as above. Its adjoint is then given by \begin{equation}\label{adjointt2} (T_{n,j,\alpha}^{k,l})^*g(y) = \int_{\mathbb{R}} 1_{E^{\alpha}_{k,l}}(x) \chi_0\left(\frac{(x-y)- \beta_n}{\gamma_n} - j\right) e^{iN(x)(x-y) + i|x-y|^{\alpha}} g(x)\, \mathrm{d} x.
\end{equation} Then we have $\|T_{n,j,\alpha}^{k,l}\|_{2 \to 2} = \|(T_{n,j,\alpha}^{k,l})^*\|_{2 \to 2} \le \gamma_n,$ and \begin{equation}\label{orthh2} |\langle (T_{n,j,\alpha}^{k_1,l_1})^* g_1 , (T_{n,j,\alpha}^{k_2,l_2})^* g_2 \rangle| \le C \cdot 1_{|k_1-k_2| \le 10} \gamma_n \left(1+\frac{|l_1 - l_2|}{\gamma_n}\right)^{-20} \left(\int_{E^{\alpha}_{k_1,l_1}} |g_1|\right)\cdot\left(\int_{E^{\alpha}_{k_2,l_2}} |g_2| \right), \end{equation} whenever $k_1,k_2,l_1,l_2 \in \mathbb{Z}, j \le 2^n \cdot \gamma_n^{-1}$, where the constant $C$ is universal for $\alpha$ close to $2$. \end{lemma} \begin{proof}[Proof of Lemma \ref{wavepack2}] The estimate on the $L^2$ norm is again a trivial application of H\"older's inequality. The inner product estimate can be done by simply writing \begin{equation}\label{bound2} |\langle (T_{n,j,\alpha}^{k_1,l_1})^* g_1 , (T_{n,j,\alpha}^{k_2,l_2})^* g_2 \rangle| \le \int \int |(g_1 1_{E^{\alpha}_{k_1,l_1}})(x_1)| |(g_21_{E^{\alpha}_{k_2,l_2}})(x_2)| | G^{\alpha}_{n,j}(x_1,x_2) |\, \mathrm{d} x_1 \, \mathrm{d} x_2, \end{equation} where we define $G^{\alpha}_{n,j}(x_1,x_2)$ to be $$ \int_{\mathbb{R}} \chi_j\left( \frac{x_1 - y - \beta_n}{\gamma_n} \right) \chi_j \left( \frac{x_2- y - \beta_n}{\gamma_n} \right) e^{i(N(x_1)-N(x_2))y} e^{i(|x_1 - y|^\alpha - |x_2 - y|^{\alpha})} \, \mathrm{d} y.$$ The first observation to make is that, in order for the inner product not to be zero, we must have $|k_1 - k_2| \le 10.$ Moreover, the integral defining $G^{\alpha}_{n,j}(x_1,x_2)$ can be handled via basic oscillatory integral estimates: it has support on an interval of length $\le 20 \cdot \gamma_n,$ and the phase function is given by \begin{align*} & \phi(y) = (N(x_1) - N(x_2))y + (|x_1-y|^{\alpha} - |x_2 - y|^{\alpha}). 
\end{align*} We analyze its derivative, which, as $x_1 - y, x_2 - y >0,$ is given by $$\phi'(y) = (N(x_1) - N(x_2)) + \alpha (|x_1-y|^{\alpha-1} - |x_2-y|^{\alpha-1}).$$ A calculation shows that, for $x_1,x_2 \in ((k_1-10) \cdot \gamma_n, (k_1 + 10) \cdot \gamma_n],$ $$\alpha (|x_1-y|^{\alpha-1} - |x_2-y|^{\alpha-1}) \le 10^2 \alpha \cdot (\alpha -1) 2^{(\frac{\alpha}{2} - 1) n} \le 10^3 2^{(\frac{\alpha}{2}-1)n} \le 10^5 \gamma_n^{-1},$$ as $\alpha \cdot (2^{\alpha-1}-1), \, \alpha$ and $\alpha -1$ all remain bounded for $|\alpha - 2| \le \frac{1}{2}.$ Note also that, for $N(x_i) \in (l_i \cdot \gamma_n^{-1}, (l_i + 1) \cdot \gamma_n^{-1}], \, i=1,2,$ it holds that $$|N(x_1) - N(x_2)| \ge 10^{-5} |l_1 - l_2| \gamma_n^{-1}. $$ Therefore, the derivative of the phase is bounded from below by $10^{-6} |l_1 - l_2| \cdot \gamma_n^{-1}$ for large values of $|l_1 - l_2|,$ which, together with Proposition 1 in \cite[Chapter~VIII]{Stein1}, implies that $$ |G^{\alpha}_{n,j}(x_1,x_2)| \le C \cdot \gamma_n \left(1+\frac{|l_1 - l_2|}{\gamma_n}\right)^{-20}.$$ Inserting this into \eqref{bound2} gives the claim, noting that $C$ did \emph{not} depend on $\alpha$ throughout the proof.
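For the reader's convenience, the calculation alluded to above is simply the mean value theorem: since $|x_1 - x_2| \le 20\gamma_n$ and $|x_i - y| \sim 2^n$ on the support of the cutoffs, \[ \alpha \left| |x_1-y|^{\alpha-1} - |x_2-y|^{\alpha-1} \right| \lesssim \alpha(\alpha-1)\, |x_1-x_2|\, 2^{(\alpha-2)n} \lesssim \alpha(\alpha-1)\, \gamma_n\, 2^{(\alpha-2)n}, \] and, by the definition of $\gamma_n$, $\gamma_n 2^{(\alpha-2)n} = (\alpha(2^{\alpha-1}-1))^{-1/2}\, 2^{(\frac{\alpha}{2}-1)n},$ which yields the displayed chain of inequalities.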
\end{proof} Let $\|g\|_2 = 1.$ Lemma \ref{wavepack2} gives: \begin{align*} \sum_{0 \le j \le 2^n \cdot \gamma_n^{-1}} \left\| \sum_{k,l \in \mathbb{Z}} (T^{k,l}_{n,j,\alpha})^{*} g\right\|_{2}^2 \end{align*} \begin{equation*} \le \sum_{\substack{{0 \le j \le 2^n \gamma_n^{-1}}\\{|k_1-k_2| \le 10}\\{ |l_1 - l_2| \le \gamma_n \cdot 2^{n/10}}}} |\langle (T_{n,j,\alpha}^{k_1,l_1})^*g, (T_{n,j,\alpha}^{k_2,l_2})^*g \rangle| + \sum_{\substack{{0 \le j \le 2^n \gamma_n^{-1}}\\{|k_1-k_2| \le 10}\\{ |l_1 - l_2| > \gamma_n \cdot 2^{n/10}}}} |\langle (T_{n,j,\alpha}^{k_1,l_1})^*g, (T_{n,j,\alpha}^{k_2,l_2})^*g \rangle| \end{equation*} \[ \le 10^2 \gamma_n \cdot 2^{n/10} \left(\sum_{\substack{{0 \le j \le 2^n \gamma_n^{-1}}\\{l,k \in \mathbb{Z}}}} \|(T^{k,l}_{n,j,\alpha})^*g\|_2^2 \right) + C \cdot 10^2 \cdot 2^{-n} \cdot \gamma_n. \] The second summand on the right-hand side contributes in the end a universal constant times $\sum_{n \ge 0} 2^{-n} \cdot \left(\frac{2^n}{\gamma_n}\right)^{1/2} \cdot 2^{-n/2} \gamma_n^{1/2} \le 2.$ On the other hand, for the first summand we notice again that $(T_{n,j,\alpha}^{k,l})^*g = (T_{n,j,\alpha}^{k,l})^*(1_{N \in [l \cdot \gamma_n^{-1},(l+1) \cdot \gamma_n^{-1})}g)$ and bound: \begin{align*} & \sum_{\substack{{0 \le j \le 2^n \gamma_n^{-1}}\\{l,k \in \mathbb{Z}}}} \|(T^{k,l}_{n,j,\alpha})^*g\|_2^2 \le \sum_{l \in \mathbb{Z}} \|g 1_{N \in (l \cdot \gamma_n^{-1}, (l+1)\cdot \gamma_n^{-1}]} \|_2 \left\|\sum_{\substack{{k \in \mathbb{Z}}\\{0 \le j \le 2^n \cdot \gamma_n^{-1} }}} T_{n,j,\alpha}^{k,l} ( T_{n,j,\alpha}^{k,l})^* g \right\|_2 \cr & \le \|g\|_2 \left(\sum_{l \in \mathbb{Z}} \left\|\sum_{\substack{{k \in \mathbb{Z}}\\{0 \le j \le 2^n \cdot \gamma_n^{-1} }}} T_{n,j,\alpha}^{k,l} ( T_{n,j,\alpha}^{k,l})^* g \right\|_2^2 \right)^{1/2} \cr \end{align*} \begin{align*} \le 10^3 \|g\|_2 \left(\sum_{\substack{{k,l \in \mathbb{Z}}\\{0 \le j \le 2^n \cdot \gamma_n^{-1} }}} \| T_{n,j,\alpha}^{k,l} ( T_{n,j,\alpha}^{k,l})^* g\|_2^2
\right)^{1/2} \le 10^3 & \gamma_n \left(\sum_{\substack{{k \in \mathbb{Z}}\\{0 \le j \le 2^n \cdot \gamma_n^{-1} }}} \|( T_{n,j,\alpha}^{k,l})^* g\|_2^2 \right)^{1/2}. \cr \end{align*} Here, we have used the fact that $\langle T^{k_1,l}_{n,j_1,\alpha} h_1 , T^{k_2,l}_{n,j_2,\alpha} h_2 \rangle = 0$ if $k_1 \ne k_2, |j_1 - j_2| > 8,$ the $L^2$ bound for each of the $T^{k,l}_{n,j,\alpha}$ and the fact that $\sum_{l \in \mathbb{Z}} \|g 1_{N \in (l \cdot \gamma_n^{-1}, (l+1)\cdot \gamma_n^{-1}]} \|_2^2 = \|g\|_2^2 =1.$ In the end, we obtain \[ \left( \sum_{\substack{{0 \le j \le 2^n \gamma_n^{-1}}\\{l,k \in \mathbb{Z}}}} \|(T^{k,l}_{n,j,\alpha})^*g\|_2^2 \right)^{1/2} \le 10^3 \gamma_n. \] This implies that each summand in $n$ amounts to a contribution of at most an absolute constant independent of $\alpha$ times $2^{-\frac{9n}{20}} \gamma_n.$ The sum in $n$ of these last bounds converges for $\alpha \ge \frac{3}{2}$ to a constant bounded uniformly in $\alpha$, concluding the proof. \section{Comments and remarks} \subsection{Uniform bounds for maximally modulated oscillatory singular integrals.} Although Section \ref{uniform} deals with uniformity of $L^p(\mathbb{R})$ bounds for the $\alpha \sim 2$ case, there are two other natural cases to investigate. Namely, it is conjectured that the $L^p$ constants for $C_{\alpha}$ remain bounded as $\alpha \to 0,$ where we recover the case of the Carleson operator. If $\alpha \to 1,$ Guo \cite{Guo2} observes that the operator without the supremum in the modulation already fails to be bounded in $L^p.$ \\ Analogously, if we consider the operators \[ C^{odd}_{\alpha} f(x) := \sup_N \left| \int_{\mathbb{R}} f(x-t) e^{iNt} e^{i \text{sign}(t) |t|^{\alpha}} \, \frac{\mathrm{d} t}{t} \right|, \] then the same proof as in Section \ref{uniform} applies to give uniform bounds near $\alpha = 2.$ For this operator, there is also an additional possible uniform bound.
If $\alpha \to 1,$ then it \emph{formally} holds that $C^{odd}_{\alpha} f \to Cf$, the Carleson operator as defined in the introduction. The proofs in \cite{GHLR} do not provide uniform bounds in either of the cases above. We expect these bounds to hold, but cannot present a proof at the moment. \subsection{Bounds for Stein--Wainger-type operators} Besides the operator $C^{odd}_{\alpha}$ considered before, a natural question for those who analyse the techniques above carefully is that of \emph{uniform bounds} for Stein--Wainger-type operators. Indeed, Guo \cite{Guo2} was the first to consider such a question, proving that the operators \[ \mathfrak{C}_{\alpha}f(x) = \sup_{N \in \mathbb{R}} \left| \int_{\mathbb{R}} f(x-t) e^{iN|t|^{\alpha} \text{sign}(t)} \, \frac{\mathrm{d} t}{t} \right| \] are bounded in $L^p(\mathbb{R})$ for any $\alpha > 0.$ Interestingly, there is a dichotomy in the techniques used to bound such operators: for $\alpha = 1,$ one actually resorts to the celebrated Carleson--Hunt theorem to derive the conclusion, but for any $\alpha \neq 1,$ the approach by Guo uses a much more direct strategy: one compares the operator in a small neighbourhood of the origin to a maximally truncated Hilbert transform, and estimates the difference by the usual Hardy--Littlewood maximal function. Away from that neighbourhood, the strategy is to use a $TT^*$ method to obtain decay from the oscillatory nature of the phase. As Guo's bounds on $\|\mathfrak{C}_{\alpha}\|_{L^p \to L^p}$ blow up as $\alpha \to 1,$ one interesting question is whether elements of both proofs can be combined in order to make the bounds on $\|\mathfrak{C}_{\alpha}\|_{L^p \to L^p}$ uniform as $\alpha \to 1.$ We believe that our techniques in this manuscript can help shed some light on this question. In particular, we believe that a decomposition similar to that employed in the proof of Theorem \ref{known} might be useful to obtain such uniform bounds, at least in a Walsh version.
We leave a more detailed discussion on this matter to a future manuscript.
\section{Introduction} A pioneering study based on the monitoring of H\&K fluxes for 91 main-sequence stars showed that activity variations, including long-term cyclic behaviour similar to the 11-yr cycle of the Sun, were also observed in other stars \citep{1978ApJ...226..379W}. Then, by using the Mount Wilson index ($\mathrm{S}_\mathrm{{MW}}$), defined as the ratio between the flux in the optical \ion{Ca}{ii} H\&K lines and the nearby continuum \citep{1978PASP...90..267V}, \citet{Baliunas98} analyzed a sample of 2200 stars, finding different types of long-term activity behaviour. Those stars with a cyclic behaviour (with periods between 2.5 and 25 yr) showed intermediate activity levels, while those with erratic behaviour had higher activity levels. A third group presented flat activity levels, in general corresponding to inactive stars. These objects are particularly interesting because they could be in a state similar to the solar Maunder Minimum (hereafter MM). The MM was a phase between 1645 and 1715 during which the Sun deviated from its usual 11-yr activity cycle \citep{1976Sci...192.1189E}: the number of sunspots was extremely reduced, although they did not disappear \citep{1993A&A...276..549R}. In addition, some evidence, including other solar proxies, has suggested that the solar cycle was still in progress during the MM, although with reduced amplitude \citep[e.g.][]{1989AnGeo...7..321R,1993A&A...276..549R,1998SoPh..181..237B,2001JGR...10616039U,2003mmvs.book.....S,2007AstL...33..340N,2015A&A...577A..71V,Zolotova_2015}. In particular, the study of MM analogue stars could be very useful to better understand the Sun's magnetic field, especially its evolution in the past and future. It is also relevant to improve our knowledge of current dynamo models \citep[e.g.][]{1992ASPC...27..150S,2006A&A...457L..25U,2010LRSP....7....3C,Shah_2018}.
However, the detection of a MM analogue state is a challenging task due to the long-term monitoring that is required, as well as to the lack of a clear criterion to identify MM candidates \citep[e.g.][]{2004AJ....128.1273W,2007ApJ...663..643J}. Initial efforts to establish a criterion were carried out by \citet{1990Natur.348..520B} and \citet{1995ApJ...438..269B}, based on the analysis of the relative variation of the $\mathrm{S}_\mathrm{{MW}}$ index around its mean ($\sigma_{\mathrm{S}}$/$\overline{\mathrm{S}}$). In this sense, the authors initially considered as MM candidates those stars with $\sigma_{\mathrm{S}}$/$\overline{\mathrm{S}}$ $<$ 1.5\%\footnote{While those stars with $\sigma_{\mathrm{S}}$/$\overline{\mathrm{S}}$ $\geq$ 2\% are considered variable or erratic.}. They were called ``flat'' stars and were characterized by relatively constant and low activity levels. Then, \citet[][]{1996AJ....111..439H} studied a sample of stars belonging to the Project Phoenix Survey. As a result, the authors defined a new class of inactive stars by employing the chromospheric activity index log ${R}'_\mathrm{HK}$. According to their definition, stars with log ${R}'_\mathrm{HK}< -5.1$ dex (corresponding to $\mathrm{S}_\mathrm{{MW}}$ $<$ 0.15 for solar-type stars) could be considered as MM candidates. However, \citet{2004AJ....128.1273W} showed that most of these stars were in fact evolved stars, with activity levels significantly lower than those of main-sequence objects, thereby concluding that low activity alone is not a sufficient discriminant of a MM state. Moreover, it has been suggested that the activity threshold adopted by \citet[][]{1996AJ....111..439H} should be higher than $-5.1$ dex and that the identification of MM candidates should not be constrained only to the visual part of the spectrum. In this way, UV and X-ray data can also be used to identify MM candidates \citep[e.g.][]{2004AJ....128.1273W,2007ApJ...663..643J}.
Recently, \citet{2017MNRAS.470..276S} reported an average S-index of 0.154 for the unusually deep and long minimum in 2008--2009 of solar cycle 24. The solar far-UV data also reveal a low activity behaviour during this period. Then, according to the analysis of these authors, the Sun could be entering into a new grand minimum phase. To date, there are only a few firm MM candidates reported in the literature. For instance, \citet{2009A&A...508.1417P} analyzed the exoplanet host star 51 Pegasi by using both X-ray and \ion{Ca}{ii} H\&K data. A constant and low coronal flux, in addition to a flat chromospheric activity, suggests that this star could be in a MM state. Another example is the star HD\,4915, which has recently been reported as a MM candidate by \citet{Shah_2018}. In that work, the authors studied the activity behaviour of the star by using a long-term database of the \ion{Ca}{ii} H\&K optical lines (acquired between 2006 and 2018). They found a decrease of the magnetic activity over two cycles, revealed by the core flux variation of the \ion{Ca}{ii} H\&K lines. This fact could be a strong indication of a possible MM state in HD\,4915. An alternative way to identify MM candidates was proposed by \citet[][]{1998ASPC..154.1235D} (hereafter DO98) and \citet{2004AJ....128.1273W}. These authors pointed out that a remarkable difference in the activity behaviour between main-sequence binary components could be used as a detector of MM stars. In such systems, a state similar to the MM could be associated with the component of lower activity. Following this interpretation, \citet{2018MNRAS.476.2751F} suggested that the activity difference observed between the components of the $\zeta$ Ret binary system could be attributed to an atypical activity of $\zeta^{2}$ Ret. The F$_X$ values estimated from the XMM-Newton database for $\zeta^{1}$ Ret and $\zeta^{2}$ Ret are (5.11 $\pm$ 0.08) $\times 10^{-13}$ and (0.25 $\pm$ 0.32) $\times 10^{-13}$ erg s$^{-1}$ cm$^{-2}$, respectively.
This shows that $\zeta^{1}$ Ret is more active than $\zeta^{2}$ Ret in X-rays \citep[see][for more details]{2018MNRAS.476.2751F}. In this way, as a feasible scenario, the star $\zeta^{2}$ Ret is possibly emerging from (or entering) a state similar to the MM. In that work, we stressed the need for additional spectroscopic data in order to verify or rule out this possible scenario. Fortunately, more spectroscopic ESO data were acquired for this remarkable binary system. Moreover, we have additional spectra taken with the REOSC spectrograph at the CASLEO observatory. This binary system is composed of two physically bound solar analogue stars \citep{2011ApJS..192....2S}, whose spectral types are G2 V and G1 V according to the Hipparcos database \citep[see][for details]{2016A&A...588A..81S}. Both stars have very similar stellar parameters ($T_{eff}$, log $g$, and [Fe/H]), which are also similar to those of the Sun \citep[see][for more details]{2016A&A...588A..81S}. Their empirical rotational periods, obtained from the \citet{2008ApJ...687.1264M} calibration, are 13.2 $\pm$ 2.8 d and 16.5 $\pm$ 1.8 d for $\zeta^{1}$ Ret and $\zeta^{2}$ Ret, respectively. This strong physical similarity could help to diminish or remove a possible dependence of the minimum \ion{Ca}{ii} H\&K activity levels on gravity and metallicity \citep[e.g.][]{1989ApJ...341.1035S,2004AJ....128.1273W,2012IAUS..286..257G}, which is an additional advantage for the mutual comparison in this system. Besides, the large available data set ($\sim$ 19 yr of observations) makes this system a unique laboratory that allows us to carry out a detailed long-term activity study in order to explore the possible MM state of $\zeta^{2}$ Ret, following the suggestion of DO98. The paper is organized as follows: in \S 2, the observations and data reduction are described. In \S 3, our stellar activity analysis is presented. Finally, our discussion and main conclusions are given in \S 4.
\section{Observations and data reduction} Most of the stellar spectra of $\zeta^{1}$ ($=$HD\,20766) and $\zeta^{2}$ Ret ($=$HD\,20807) were downloaded from the European Southern Observatory (ESO) archive\footnote{\url{http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form?phase3_collection=HARPS}}. These observations were acquired with the \textrm{HARPS} spectrograph (resolving power R $\sim$ 115\,000), attached to the La Silla 3.6-m (ESO) telescope, between 2003 and 2019. We also included some spectra taken with the UVES (between 2002 and 2009) and FEROS (between 2010 and 2014) spectrographs (R $\sim$ 80\,000 and R $\sim$ 48\,000, respectively), which are coupled to the 8.2-m Unit Telescope 2 (UT2) of the Very Large Telescope (VLT) and to the 2.2-m telescope located at La Silla, respectively. All ESO spectra have been automatically processed by the corresponding pipelines\footnote{\url{http://www.eso.org/sci/facilities/lasilla/instruments/harps/overview.html}}$^{,\thinspace}$\footnote{\url{http://www.eso.org/sci/facilities/paranal/instruments/uves.html}}$^{,\thinspace}$\footnote{\url{http://www.eso.org/sci/facilities/lasilla/instruments/feros.html}}. Additionally, our analysis was complemented with observations performed with the REOSC\footnote{\url{https://casleo.conicet.gov.ar/reosc-ds-dc/}} spectrograph (R $\sim$ 13\,000), working at the 2.15-m Jorge Sahade telescope at CASLEO in San Juan, Argentina. These data, taken between 2000 and 2015 under the HK$\alpha$ project\footnote{The main aim of the HK$\alpha$ project is the systematic observation of main-sequence stars to carry out long-term activity studies \citep[see][for more details]{2004A&A...414..699C}}, were reduced following the standard procedures with IRAF\footnote{IRAF is distributed by the National Optical Astronomical Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. 
(AURA), under a cooperative agreement with the National Science Foundation.} tasks, i.e. performing bias subtraction, flat fielding, sky subtraction, order extraction, and wavelength calibration. See Table \ref{tabone} for details of the observation logs. Before calculating the standard $\mathrm{S}_\mathrm{{MW}}$ index defined by \citet{1978PASP...90..267V} at the Mount Wilson Observatory (MWO), we first discarded those spectra with a low signal-to-noise ratio (S/N $\leq$ 100). As a result, we obtained 79 spectra for $\zeta^{1}$ Ret and 352 for $\zeta^{2}$ Ret. These spectra, with a mean S/N $\sim$ 175 at 6070 {\AA}, were corrected for radial velocity using standard IRAF tasks. Then, we integrated the flux in two windows centred at the cores of the \ion{Ca}{ii} H\&K lines (3968.47 {\AA} and 3933.66 {\AA}, respectively), weighted with triangular profiles of 1.09 {\AA} full width at half-maximum (FWHM), and computed the ratio of these fluxes to the mean continuum flux, integrated in two passbands of $\sim$ 20 {\AA} width centred at 3891 and 4001 {\AA}. As a result, we obtained the S-index corresponding to each of the instruments used in this work, which was then converted to $\mathrm{S}_\mathrm{{MW}}$ following the calibration procedures of \citet[][]{2011arXiv1107.5325L}, \citet[][]{2007A&A...469..309C}, and \citet[][]{2008A&A...485..571J} for the HARPS, REOSC, and FEROS data, respectively. For the UVES spectra, no calibration is available; these data were intercalibrated with the rest of the time-series. \section{Stellar Activity Analysis} In order to search for clear signatures of a possible MM state in the binary system $\zeta$ Ret, as suggested in \citet{2018MNRAS.476.2751F}, in Fig. \ref{plot.0} we show the time-series of the $\mathrm{S}_\mathrm{{MW}}$ indexes for both components ($\zeta^{1}$ Ret and $\zeta^{2}$ Ret are plotted in the upper and lower panels, respectively). We have included all spectroscopic data from HARPS, REOSC, FEROS, and UVES. 
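The S-index construction described in \S 2 (triangularly weighted fluxes at the H and K line cores, normalized by two $\sim$20 {\AA} pseudo-continuum bands) can be sketched in a few lines. The toy spectrum below and the unit scale factor are illustrative assumptions, not actual pipeline values; real instrumental indices require the per-instrument calibrations cited above.

```python
# Hedged sketch of an instrumental S-index: triangular 1.09 A FWHM windows
# at the Ca II H & K line cores, normalized by two ~20 A continuum bands.
# The spectrum below is synthetic and the overall scale is illustrative.

H_CORE, K_CORE = 3968.47, 3933.66   # line-core centres [Angstrom]
FWHM = 1.09                          # triangular bandpass FWHM [Angstrom]
R_BAND, V_BAND = 4001.0, 3891.0      # continuum band centres [Angstrom]
CONT_WIDTH = 20.0                    # continuum band width [Angstrom]

def triangular_weight(wl, centre, fwhm):
    """Triangular profile: 1 at the centre, falling to 0 at +/- fwhm."""
    return max(0.0, 1.0 - abs(wl - centre) / fwhm)

def s_index(wavelengths, fluxes, scale=1.0):
    """Ratio of weighted H+K core fluxes to the summed continuum flux."""
    h = k = r = v = 0.0
    for wl, fx in zip(wavelengths, fluxes):
        h += fx * triangular_weight(wl, H_CORE, FWHM)
        k += fx * triangular_weight(wl, K_CORE, FWHM)
        if abs(wl - R_BAND) <= CONT_WIDTH / 2:
            r += fx
        if abs(wl - V_BAND) <= CONT_WIDTH / 2:
            v += fx
    return scale * (h + k) / (r + v)

# toy spectrum: flat continuum with shallow absorption at the line cores
wl_grid = [3880 + 0.05 * i for i in range(2600)]
fx_grid = [1.0 - 0.7 * (triangular_weight(wl, H_CORE, 3.0)
                        + triangular_weight(wl, K_CORE, 3.0))
           for wl in wl_grid]
print(f"toy S-index: {s_index(wl_grid, fx_grid):.4f}")
```

The triangular bandpass down-weights the line wings, so the index tracks the chromospheric core emission rather than the photospheric contribution of the wings.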
As a result, we have an extensive database of approximately 19 years for each component. Vertical dashed lines were plotted to distinguish the time coverage of our current series from that of the series published in \citet{2018MNRAS.476.2751F}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Z1_Z1_all_data_nov.jpeg} \caption{Upper panel: $\mathrm{S}_\mathrm{{MW}}$ index variation of $\zeta^{1}$ Ret. HARPS data are indicated with blue circles (in both panels), while REOSC and FEROS data are indicated with green triangles and orange squares, respectively. Lower panel: activity variation of $\zeta^{2}$ Ret. Here, UVES data are indicated with black diamonds. The two vertical dashed lines in each panel delimit the time coverage of the series reported initially, while the new data are indicated with red crosses. Red and black dashed lines show the fitted activity maxima $f(t)$ for both peaks.} \label{plot.0} \end{figure} A direct comparison of these new time-series shows a clear decrease in the amplitude of the chromospheric activity of the $\zeta^{2}$ Ret component (from the first peak to the last one). To quantify this decrease, in Fig. \ref{plot.0} we fitted the corresponding time-series assuming a typical solar activity shape $f(t)$ for each peak \citep[see Eq. 8 in][for more details]{2017ApJ...835...25E}. The first cycle fit (red dashed line) is given by the following parameters: $A= 0.0104$, $B=1.32$ yr, $\alpha=-0.24$ yr$^{-2}$, $t_{m}=2008.67$ yr (xJD=4711.72 days), and $f_{min}=0.17422$, while the corresponding parameters for the second cycle (black dashed line) are $A=0.0053$, $B=1.59$ yr, $\alpha\sim 2\times 10^{-26}$ yr$^{-2}$, $t_{m}=2016.42$ yr (xJD=7542.22 days), and $f_{min}=0.1746$. As a result, the two fits show amplitudes of $\Delta S_{MW}=0.0084$ and $\Delta S_{MW}=0.0045$, i.e. a decrease of $\sim$47\% in the activity cycle amplitude. 
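As a quick arithmetic check, the amplitude decrease follows directly from the two fitted amplitudes; with the rounded values quoted above the ratio comes out near 46\%, consistent with the quoted $\sim$47\%:

```python
# Relative decrease of the S_MW activity-cycle amplitude between the
# two fitted peaks of zeta^2 Ret, using the (rounded) amplitudes above.
amp_first, amp_second = 0.0084, 0.0045

decrease = (amp_first - amp_second) / amp_first
print(f"cycle amplitude decrease: {decrease:.1%}")  # -> 46.4%
```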
To classify each component according to its variability type, we considered the criteria adopted by \citet{1995ApJ...438..269B}. Then, $\zeta^{1}$ Ret can be classified as a variable star (with $\sigma_{\mathrm{S}}$/$\overline{\mathrm{S}}$ $\sim$ 4.3\%), while $\zeta^{2}$ Ret would be classified as a ``flat star'' ($\sigma_{\mathrm{S}}$/$\overline{\mathrm{S}}$ $\sim$ 1.4\%), although its stellar activity shows a clear variation. For comparative purposes, we also computed the $\log {R}'_\mathrm{HK}$ index by subtracting the photospheric contribution following the prescription given in \citet{1984ApJ...279..763N}, resulting in a mean activity difference of $\sim$0.24 dex, which is slightly higher than the previous value of 0.22 dex reported in \citet{2018MNRAS.476.2751F}. In order to explore the components of the \ion{Ca}{ii} H\&K line-core fluxes responsible for the low $\mathrm{S}_\mathrm{{MW}}$ index in $\zeta^2$ Ret, we computed its basal level of activity. \cite{1989ApJ...341.1035S} concluded that the line-core emission in the \ion{Ca}{ii} lines is composed of a photospheric component, a basal flux probably related to acoustic heating, and a third component associated with purely magnetic activity. In this sense, we estimated the \ion{Ca}{ii} photospheric component using a synthetic spectrum calculated with SYNTHE and ATLAS9 model atmospheres \citep{1993KurCD..13.....K}, corresponding to a photospheric Mount Wilson index $S_{Phot}=0.149$. Following \cite{2013A&A...549A.117M}, we converted this index to a \ion{Ca}{ii} line-core photospheric flux of (2.539$\pm$ 0.017)$\times 10^6$ erg cm$^{-2}$ s$^{-1}$, higher than the photospheric flux derived in \cite{1984ApJ...279..763N} for a star of $B-V=0.60$. \citet{2013A&A...549A.117M} revised this historical work and found that the photospheric flux had been underestimated. They also computed a non-photospheric excess basal flux (in erg cm$^{-2}$ s$^{-1}$) given by $\log(F'_{HK})=6.42-1.03(B-V)$. 
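The \citet{2013A&A...549A.117M} calibration quoted above can be evaluated directly; the sketch below merely restates the formula, with the colour index left as an input (the $B-V$ values in the loop are illustrative):

```python
def basal_excess_flux(b_minus_v):
    """Non-photospheric excess basal Ca II H&K flux [erg cm^-2 s^-1],
    from the Mittag et al. (2013) calibration quoted in the text:
    log10(F'_HK) = 6.42 - 1.03 (B - V)."""
    return 10.0 ** (6.42 - 1.03 * b_minus_v)

# the expected basal level decreases steeply toward redder (cooler) stars
for bv in (0.55, 0.60, 0.65, 0.70):
    print(f"B-V = {bv:.2f}:  F'_HK = {basal_excess_flux(bv):.2e} erg cm^-2 s^-1")
```

Because of the $-1.03(B-V)$ slope, the expected basal excess flux drops by roughly a factor of $\sim$1.3 per 0.1 mag in $B-V$.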
Considering this contribution, $F'_{HK}=(5.65\pm 0.51)\times 10^5$ erg cm$^{-2}$ s$^{-1}$, together with the photospheric flux derived from $S_{phot}$ for $\zeta^2$ Ret, the total basal contribution $F'_{HK}+F^{phot}_{HK}=(3.10\pm 0.07)\times 10^6$ erg cm$^{-2}$ s$^{-1}$ corresponds to a Mount Wilson index of $S_{MW}\sim 0.180$. The mean activity level of $\zeta^2$ Ret between xJD 6658.5 and 8849.5 days lies slightly below this value (by less than 1.5$\sigma$), and is thus mainly related to basal chromospheric heating, although a residual magnetic contribution is still evident in the activity cycle. To search for long-term activity cycles in this binary system, we first calculated the monthly means of all data. This procedure, which has been applied in previous works \citep[e.g.][]{1995ApJ...438..269B,2010ApJ...723L.213M,2011A&A...534A..30G,2018A&A...620A..34F}, enables us to reduce the rotational scatter produced by individual active regions. Following \citet{2009A&A...496..577Z}, we computed the Generalized Lomb-Scargle periodogram (hereafter GLS) and the false-alarm probability (hereafter FAP, see their equation 24) of each significant peak present in the periodograms. In the upper and lower panels of Fig. \ref{plot.2} we show the GLS (blue dashed line) for $\zeta^{1}$ Ret and $\zeta^{2}$ Ret, respectively. Both stars appear to be periodic. In the case of $\zeta^{1}$ Ret, we found two prominent peaks that can be associated with activity cycles: one with a period of 1548 $\pm$ 62 d and a FAP of 1 $\times 10^{-14}$, and a second one with a period of 431 $\pm$ 6 d and a FAP of 2 $\times 10^{-8}$. For the $\zeta^{2}$ Ret component, the large data set collected in this work allowed us to recalculate its previously reported period (3670 $\pm$ 170 d): a prominent peak of 3047 $\pm$ 134 d with a FAP of 4 $\times 10^{-12}$ was detected in the GLS periodogram (see the lower panel of Fig. \ref{plot.2}). 
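The period search above relies on the GLS of \citet{2009A&A...496..577Z}; a minimal, self-contained sketch of the classical Lomb-Scargle power (without the GLS floating mean and weights), applied to a toy $\sim$3000 d activity cycle with irregular sampling, is:

```python
import math

def lomb_scargle(t, y, periods):
    """Classical Lomb-Scargle power for irregularly sampled data.
    (The analysis in the text uses the generalized, floating-mean
    version of Zechmeister & Kurster 2009; this is only a sketch.)"""
    n = len(y)
    mean = sum(y) / n
    dy = [v - mean for v in y]
    powers = []
    for p in periods:
        w = 2.0 * math.pi / p
        # time offset tau that makes the sine/cosine terms orthogonal
        s2 = sum(math.sin(2 * w * ti) for ti in t)
        c2 = sum(math.cos(2 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        sc = sum(d * c for d, c in zip(dy, cs))
        ss = sum(d * s for d, s in zip(dy, sn))
        powers.append(0.5 * (sc ** 2 / sum(c * c for c in cs)
                             + ss ** 2 / sum(s * s for s in sn)))
    return powers

# toy activity series: a ~3000 d cycle sampled at irregular epochs
t = [37.0 * i + 11.0 * math.sin(i) for i in range(180)]   # ~18 yr baseline
y = [0.17 + 0.005 * math.sin(2 * math.pi * ti / 3000.0) for ti in t]

periods = [300 + 20 * k for k in range(240)]              # 300-5080 d grid
power = lomb_scargle(t, y, periods)
best = periods[power.index(max(power))]
print(f"strongest periodogram peak near {best} d")
```

In practice one would use the full GLS with measurement weights and evaluate the FAP of each peak; the toy sampling and period grid here are illustrative only.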
Therefore, $\zeta^{2}$ Ret also satisfies the \citet{1995ApJ...438..269B} criterion for a cycling star (i.e. FAP $\leq 10^{-2}$). We also executed the CLEAN deconvolution algorithm \citep{Roberts87} to explore whether the 431-day period appearing in the GLS periodogram of $\zeta^{1}$ Ret arises from the sampling. A comparison between the GLS (blue dashed line) and CLEAN (red continuous line) periodograms is shown in Fig. \ref{plot.2}. For $\zeta^{1}$ Ret, we found that the only predominant peak is around 1527 $\pm$ 43 d, while for $\zeta^{2}$ Ret we obtained a single significant period of 2899 $\pm$ 139 d. The errors of the periods detected with the CLEAN algorithm depend on the finite frequency resolution of the periodograms $\delta\nu$, as given by Eq. (2) in \cite{Lamm04}: $\delta P=\frac{\delta\nu P^2}{2}$. Therefore, the periods found for the binary system with the CLEAN algorithm are consistent with those obtained with the GLS method. \begin{figure} \centering \includegraphics[width=\columnwidth]{Z1_Z2_period_GLS_CLEAN.jpeg} \caption{GLS (blue dashed line) and CLEAN (red continuous line) periodograms for the means of the Mount Wilson indexes plotted in Fig. \ref{plot.0}. Upper and lower panels correspond to $\zeta^{1}$ Ret and $\zeta^{2}$ Ret, respectively. The most significant CLEAN periods are indicated in each panel.} \label{plot.2} \end{figure} Finally, in Fig. \ref{plot.3} we show the monthly average values of the $\mathrm{S}_\mathrm{{MW}}$ index for $\zeta^{2}$ Ret phased with the period derived with the CLEAN algorithm. The errors of these data have been calculated as their standard deviations, while for bins with a single measurement we adopted the typical dispersion of the other bins. \begin{figure} \centering \includegraphics[width=\columnwidth]{Z2_fase.jpeg} \caption{Monthly means of the Mount Wilson indexes for $\zeta^{2}$ Ret phased with a period of $\sim$ 7.9 yr. The observing seasons are represented by coloured circles. 
The error bars of HARPS data and the corresponding mean activity level (dashed horizontal line) have been included.} \label{plot.3} \end{figure} \section{Discussion and conclusions} Following the aim of this study, we carried out a long-term activity study of the $\zeta$ Ret binary system employing several spectroscopic data sets obtained over a span of 19 years. We detected long-term activity cycles of 1527 $\pm$ 43 d ($\sim$ 4.2 yr, not previously reported in the literature) and 2899 $\pm$ 139 d ($\sim$ 7.9 yr) for $\zeta^{1}$ and $\zeta^{2}$ Ret, respectively. In particular, the new data included in this work allowed us to improve the estimation of the period obtained in \citet{2018MNRAS.476.2751F} for $\zeta^{2}$ Ret. \citet{2018MNRAS.476.2751F} proposed two possible scenarios to explain the large difference in activity between $\zeta^{1}$ and $\zeta^{2}$ Ret. In the first scenario, the stars possibly present different rotational periods\footnote{The analysis of these new spectroscopic data does not reveal any reliable rotational modulation for either star.}, which could result in different average activity levels. The second scenario suggests that $\zeta^{2}$ Ret is possibly in a MM state. In the present work we collected new evidence, including more than 430 spectra acquired with the HARPS, REOSC, FEROS, and UVES spectrographs, supporting the idea that $\zeta^{2}$ Ret is possibly in a MM state, for the following reasons: \begin{itemize} \item[--] A large difference in the average activity levels of the two stars (hereafter $\Delta$). DO98 and \citet{2004AJ....128.1273W} suggested that binary systems (presumably coeval stars) with significant $\Delta$ levels could point to a MM state of the less active star. In particular, DO98 suggest that an age difference (estimated using an activity-age calibration) greater than $\sim$1 Gyr could indicate a MM state. 
Using the DO98 calibration, we estimate ages of 1.5 and 3.3 Gyr for $\zeta^{1}$ Ret and $\zeta^{2}$ Ret, i.e. a notable difference of $\sim$1.8 Gyr. An age difference greater than 1.0 Gyr is also obtained through the \citet{2008ApJ...687.1264M} and \citet{2016A&A...594L...3L} calibrations.\\ \item[--] The value of $\Delta=0.24$ dex found in the present work for the most recent years has increased from the value $\Delta=0.22$ dex reported in \citet{2018MNRAS.476.2751F}.\\ \item[--] The cycle amplitude of $\zeta^{2}$ Ret decreased notably, from $\Delta S_{MW}=0.0084$ to $\Delta S_{MW}=0.0045$ in the last cycle, i.e. a decrease of $\sim$47\%. We caution that, until now, there is no clear agreement about the solar cycle behaviour during the MM. Initially, sunspot records suggested that the cycle was interrupted \citep[e.g.][]{1890MNRAS..50..251S,1976Sci...192.1189E,1994CAS....24.....W}. However, different works using sunspot counts and cosmogenic isotopes indicate a weaker but persistent cycle \citep[e.g.][]{1993A&A...276..549R,1998SoPh..181..237B,2004SoPh..224..317M,2014SoPh..289.4701P,2015A&A...577A..71V}. In addition to the Sun, \citet{Shah_2018} showed that the G5V star HD\,4915, a MM candidate, also exhibits a cycle still in progress with decreasing amplitude, similar to $\zeta^{2}$ Ret.\\ {\item[--] The current activity level of $\zeta^{2}$ Ret is very low ($\langle F_{HK}\rangle\sim 3\times 10^6$ erg cm$^{-2}$ s$^{-1}$). This value is, within the statistical error, even lower than the theoretical basal level for this object ($F_{HK}= (3.10 \pm 0.07)\times 10^6$ erg cm$^{-2}$ s$^{-1}$). The basal value was determined by adding the non-photospheric basal flux $F'_{HK}$ for B-V $=$ 0.60 \citep{2013A&A...549A.117M} to the photospheric contribution estimated using a synthetic spectrum calculated with SYNTHE and ATLAS9 model atmospheres \citep{1993KurCD..13.....K}. 
The stellar parameters of $\zeta^{2}$ Ret were taken from the high-precision analysis of \citet{2016A&A...588A..81S}.} \end{itemize} Finding an unambiguous MM candidate is a very difficult task, partly because the characteristics of a MM state are in general not fully established, and partly because a long observational time span is required. Up to now, only very few MM candidates have been reported in the literature, one of them being HD\,4915 \citep{Shah_2018}. In the case of $\zeta^{2}$ Ret, we benefit from the fact that this star belongs to a binary system, which makes it a very valuable candidate. In fact, to our knowledge $\zeta^{2}$ Ret is the first MM candidate detected through the activity difference in a binary system, as suggested by DO98. Finally, we strongly recommend the search for valuable MM candidates in binary systems with high activity differences. They can be used as good laboratories to address many questions related to the solar/stellar MM, which are still subject to intense scrutiny \citep[see][for more details]{Zolotova_2015,2015A&A...577A..71V,2019ApJ...886...18C}. \begin{acknowledgements} We warmly thank the anonymous referee for constructive comments that improved the paper. MF, MJA, RIB, NN, and PM acknowledge the financial support of PROJOVI/UNSJ, through the project 80020190300048SJ. Also, RIB, JA, and PM acknowledge the financial support from CONICET in the form of doctoral and post-doctoral fellowships. JYG acknowledges the support from CNPq. \end{acknowledgements} \bibliographystyle{aa} \small
\section{Introduction}\label{Introduction} Blood vessels, spread throughout the human body, constitute a significant part of the circulatory system. All body tissues rely on the normal functioning of different vessels such as cerebral arteries, retinal vessels, carotid arteries, pulmonary arteries, and coronaries. Any abnormal change in or damage to the vessels manifests as disease at different levels (e.g., stroke, arteriosclerosis, cardiovascular diseases, and hypertension). Medical imaging and image analysis enable novel technologies and applications for better diagnosis and treatment of blood-vessel diseases. Tracking the target vessels in the wide field of view of medical images is a prerequisite for the localization and identification of abnormal vessels or regions of interest. However, manual annotation, which usually demands expertise, is particularly time-consuming and tedious. Vessel tracking aims to solve the problems encountered in vessel image analysis, including key-point (seed-point) detection, centerline extraction, and vascular segmentation. These problems differ considerably because of the wide array of vessel anatomies and image characteristics. Accordingly, the problematic factors can be categorized into two groups: those related to vessel morphology (e.g., small size, branching patterns, tortuosity, and severe stenosis of vessels) and those related to image characteristics (e.g., low contrast, noise, artifacts, dislocation, and adjacent hyper-intense structures). Localizing the key points and recognizing the key patterns of vascular structures are fundamental to vessel tracking; building models based on various assumptions about vascular appearance (i.e., prior knowledge and intrinsic features) is also important. Exploring such problems facilitates the development of new algorithms, particularly learning-based tracking methods. Several survey articles on vessel-tracking methods exist in the literature. 
To the best of our knowledge, \citet{Suri2002} published the first survey on this topic. They concentrated on skeleton-based and indirect techniques for vascular segmentation. In addition to vessel-tracking methods, \citet{Kirbas2004a} also reviewed methods for detecting structures with similar characteristics, such as neurovascular and tubular structures. The review of \citet{Lesage2009} focused further on lumen segmentation. They categorized the methodologies according to three aspects (i.e., models, features, and extraction schemes) and provided general considerations for each aspect. More recently, survey papers on tracking the vessels of specific organs in specific imaging modalities have been published, e.g., cerebral vessel segmentation from magnetic resonance (MR) images \citep{Klepaczko2016}, coronary reconstruction from X-ray angiography \citep{Cimen2016d}, and lung vessel detection from computed tomography angiography (CTA) \citep{Rudyanto2014}. Reviews on retinal vessel segmentation were presented in \citep{Abramoff2010, Fraz2012a, Singh2015, Pohankar2016, Mansour2017, LSrinidhi2017, Vostatek2017, Soomro2019b, Khan2019}. In particular, \citet{Moreno2015a} were interested in formulating general methods for enhancement technologies, and \citet{Kerrien2017} focused on modeling approaches. In view of the potential of learning-based methods for tracking retinal vessels, \citet{Moccia2018} and \citet{Zhao2019b} reviewed their principles and applications. \citet{Soomro2019b} focused particularly on deep-learning-based works on retinal blood vessel segmentation. 
\begin{figure*} \center\setlength{\abovecaptionskip}{-0.cm} \setlength{\belowcaptionskip}{-0.cm} \includegraphics[scale=.8]{processingdiagram.pdf} \caption{Recapitulative diagram of vessel tracking using learning-based methods.} \label{fig: processingdiagram}\end{figure*} \vspace{-1em} \subsection{Aims of this paper} This work aims to provide an up-to-date review of vessel tracking based on machine-learning methods, also referred to as learning-based methods. We focus on the learning-based methods for tracking vessels of various organs using different imaging modalities. The recapitulative diagram of the learning-based methods for vessel tracking is shown in \Zxhreffig{fig: processingdiagram}. To cover the articles to a feasible extent, a search for the terms vessel/vascular segmentation/extraction was performed using engines such as PubMed \footnote{http://www.ncbi.nlm.nih.gov/pubmed}, IEEE Xplore \footnote{http://ieeexplore.ieee.org}, and Google Scholar \footnote{http://scholar.google.com}. Among over 300 collated articles, the focus was on papers published during the last 10 years. Note that this article does not cover all the details of the databases and evaluation standards that can be found in the literature \citep{Schaap2009, Hameeteman2011, Kirisli2013, Rudyanto2014, Vostatek2017, Moccia2018, Yan2018e}. The rest of the paper is organized as follows. Section \ref{Vessel tracking using conventional machine learning} reviews vessel-tracking methods using conventional machine learning. Section \ref{Vessel tracking based on deep learning} reviews vessel-tracking approaches using deep learning. Based on the reviewed methods, Section \ref{Evaluation issues} introduces the evaluation issues. Finally, Section \ref{Conclusion and discussion} concludes the review and explores potential directions for future work on learning-based methods for vessel tracking. 
\section{Vessel tracking using conventional machine learning}\label{Vessel tracking using conventional machine learning} This section reviews the vessel-tracking works that employ conventional learning-based algorithms including the methodologies of hand-crafted features, classifications and statistical models. \Zxhreftbs{tab:traditional learning:retinal vessel}- \ref{tab:traditional learning:other vessels} summarize the decomposition of a selection of representative works in this field according to the applications. \Zxhreftbs{tab:traditional learning techniques} summarizes the existing conventional machine-learning-based works by grouping them into different subcategories. \begin{figure*}[!t] \centering \setlength{\abovecaptionskip}{-0.cm} \setlength{\belowcaptionskip}{-0.cm} \includegraphics[scale=.5]{machinelearningdiagram.pdf} \caption{Diagram of the retinal vessel segmentation using the conventional machine-learning method with supervised training.}\label{fig: Traditional supervised learning} \end{figure*} \begin{sidewaystable*} \begin{minipage}[]{1.2\textwidth} \caption{Overview of conventional machine-learning-based methods for tracking \textbf{retinal vessels}: the evaluation metrics and datasets are presented in Section \ref{Evaluation issues}; see list of abbreviations at the bottom.}\vspace{-0.2cm} \label{tab:traditional learning:retinal vessel} \centering \scriptsize \begin{threeparttable} \begin{tabular}{p{2.8cm}p{3.8cm}p{4cm}p{5.5cm}p{5.2cm}} \hline\hline \multirow{1}{*}{Authors} &\multirow{1}{*}{Methods}&\multirow{1}{*}{Data}&\multirow{1}{*}{Experiment}&\multirow{1}{*}{Results}\\ \cmidrule(r){1-5} \citet{Becker2013}&Boosting-based, learning kernels&Retinal colored image, DRIVE &20 for training, 20 for testing, one-off train + test&Precision-recall curves\\ \cmidrule(r){1-5} \citet{Sironi2015}&Learning separable filters &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&AUC=0.962\\ \cmidrule(r){1-5} 
\multirow{2}{*}{\citet{Annunziata2016}} &{Convolutional sparse coding-} &Retinal colored image, DRIVE&20 for training, 20 for testing , one-off train + test&AUPRC=0.87\\ {}&{filter learning}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&AUPRC=0.86 \\ \cmidrule(r){1-5} \multirow{4}{*}{\citet{Gu2017}} &\multirow{4}{*}{Boosting-based , structured features} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Pr=0.7931, Re=0.7595, Sp=0.9711\\ {}&{}&Retinal colored image, STARE&20 images, five-fold cross-validation&Pr=0.7761, Re=0.7791, Sp=0.9741 \\ {}&{}&Retinal colored image, CHASE-DB1 &14 for training, 14 for testing, one-off train + test&Pr=0.6660, Re=0.6850, Sp=0.9664\\ {}&{}&Retinal colored image, HRF & half for training, half for testing, one-off train + test&Pr=0.7775, Re=0.7602, Sp=0.9795\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Javidi2017}} &{Dictionary learning, } &\multirow{2}{*}{Retinal colored image, DRIVE}&\multirow{2}{*}{20 for training, 20 for testing , one-off train + test}&\multirow{2}{*}{Acc=0.9446}\\ {}&{vessel and non-vessel features}&{}&{}&{}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Kalaie2017b}} &{Hierarchical probabilistic framework, } &Retinal colored image, REVIEW&16 images, leave-one-out&Acc=0.9446\\ {}&{intensity features of the cross sections}&{Retinal colored image, DRIVE}&{40 images, leave-one-out}&{Acc=0.970}\\ \cmidrule(r){1-5} \multirow{4}{*}{\citet{Orlando2017}} &\multirow{4}{*}{SVM, features in CRF} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Pr=0.7854, Se=0.7897, Sp=0.9684\\ {}&{}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&Pr=0.7740, Se=0.7680, Sp=0.9738\\ {}&{}&Retinal colored image, CHASE-DB1 &8 for training, 20 for testing, one-off train + test&Pr=0.7438, Se=0.7277, Sp=0.9712\\ {}&{}&Retinal colored image, HRF & 5 for training, 40 for testing, one-off train + test&Pr=0.6950, Se=0.7794, 
Sp=0.9650\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Zhang2017d}} &\multirow{3}{*}{\tabincell{l}{Random forest, gaussian-based \\ filters wavelet transform}} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9466, AUC=0.9703, Se=0.7861, Sp=0.9712\\ {}&{}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&Acc=0.9547, AUC=0.9740, Se=0.7882, Sp=0.9729\\ {}&{}&Retinal colored image, CHASE-DB1 &27 for training, 1 for testing, leave-one-out&Acc=0.9502, AUC=0.9706, Se=0.7644, Sp=0.9716\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: Acc=accuracy; AUC=area under the ROC curve; AUPRC=area under the precision-recall curve; CRF=conditional random field; Pr=precision; Re=recall; REVIEW \citep{2008REVIEW}; Se=sensitivity; Sp=specificity; SVM= support vector machine. \end{tablenotes} \end{threeparttable} \end{minipage}\vspace{0.1cm} \\ \begin{minipage}[]{1.2\textwidth} \caption{Overview of conventional machine-learning-based methods for tracking \textbf{coronary vessels}: the evaluation metrics and datasets are presented in Section \ref{Evaluation issues}; see list of abbreviations at the bottom.} \label{tab:traditional learning:coronary} \vspace{-0.2cm} \centering \scriptsize \begin{threeparttable} \begin{tabular}{p{2.8cm}p{3.8cm}p{3.8cm}p{5.5cm}p{5.2cm}} \hline\hline \multirow{1}{*}{Authors} &\multirow{1}{*}{Methods}&\multirow{1}{*}{Data}&\multirow{1}{*}{Experiment}&\multirow{1}{*}{Results}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Schaap2011}} &Nonlinear regression, point- &Coronary CTA, local data&82 for training, 1 for testing, leave-one-out&Distance=0.15mm\\ {}&{distribution and intensity model}&Coronary CTA, CAT08&8 for training, 24 for testing, one-off train + test&AI=0.23mm, OF=0.725, OT=0.971, OV=0.969\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Lesage2016}}&Bayesian vessel model and particle- &\multirow{2}{*}{Coronary CTA, local data}&\multirow{2}{*}{10 for 
training, 51 for testing}&\multirow{2}{*}{AI=0.25mm, OT=0.925, OV=0.862}\\ {}&{filtering, flux-based image feature}&{}&{}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Mehmet2016}}&Boosting-based, image features, &\multirow{2}{*}{Coronary CTA, local data}&\multirow{2}{*}{90 for training, 20 for testing}&\multirow{2}{*}{Se \textgreater 0.9, Sp \textgreater 0.9}\\ {}&{orientation and scale}&{}&{}\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: AI=average inside; CTA = computed tomography angiography; OF=overlap until first error; OT=overlap with the clinically relevant part of the vessel; OV=overlap; Se=sensitivity; Sp=specificity. \end{tablenotes} \end{threeparttable} \end{minipage}\vspace{0.1cm} \\ \begin{minipage}[]{1.2\textwidth} \caption{Overview of conventional machine-learning-based methods for tracking \textbf{other vessels}: the evaluation metrics and datasets are presented in Section \ref{Evaluation issues}; see list of abbreviations at the bottom.} \label{tab:traditional learning:other vessels}\vspace{-0.2cm} \centering \scriptsize \begin{threeparttable} \begin{tabular}{p{2.8cm}p{3.8cm}p{3.8cm}p{5.5cm}p{5.4cm}} \hline\hline \multirow{1}{*}{Authors} &\multirow{1}{*}{Methods}&\multirow{1}{*}{Data}&\multirow{1}{*}{Experiment}&\multirow{1}{*}{Results}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Bogunovic2012a}}&\multirow{2}{*}{SVM, bifurcation features} &Internal carotid artery 3DRA, &\multirow{2}{*}{96 images, cross-validation}&\multirow{2}{*}{Cross-validation success rate =0.99}\\ {}&{}&{local data}&{}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Cheng2012a}}&\multirow{2}{*}{Seed searching}&\multirow{2}{*}{Mammography, local data}&1800 samples, 1200 for training, 400 for testing, &\multirow{2}{*}{Se=0.93, Sp=0.851}\\ {}&{}&{}&{four-fold cross-validation}&{}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Zheng2012a}}&Non-rigid deformation, position,&\multirow{2}{*}{Aorta C-arm CT, local data}&\multirow{2}{*}{319 volumes, 
four-fold cross-validation}&\multirow{2}{*}{Mean error=1.08 mm}\\ {}&{orientation and scale}&{}&{}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Cherry2015}}&Random forest, intensity, vesselness, &\multirow{2}{*}{Pelvis CT angiograms, local data}&\multirow{2}{*}{10 for training, 30 for testing, one-off train + test}&\multirow{2}{*}{Pr=0.752, Re=0.677}\\ {}&{ray-casting, MIP, and spanning tree}&{}&{}\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Rempfler2015b}}&Probabilistic model, &\multirow{2}{*}{Mouse brain MR images, local data}&\multirow{2}{*}{4 for training, 1 for testing, leave-one-out}&\multirow{2}{*}{DSC=0.516}\\ {}&{physiological-geometric properties}&{}&{}\\ \cmidrule(r){1-5} \citet{Schneider2015a}&Random forest, oriented features &Synthetic vascular data, local data&4 for testing, leave-one-out&DSC=0.95\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Zhang2017g}} &Random forest, steerable- &\multirow{2}{*}{Perivascular 7T MR, local data}&\multirow{2}{*}{19 image sets, two-fold cross-validation}&\multirow{2}{*}{DSC=0.661, Se=0.651}\\ {}&{frangi-filters and OOF}&{}&{}\\ \cmidrule(r){1-5} \citet{Lorza2018}&SVM, a radial basis function kernel &Carotid bifurcation MRI, local data&49 arteries for testing&DSC wall overlap= 0.741\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: 3DRA=3D rotational angiography; CT=computed tomography; DSC=dice similarity coefficient; MIP=maximum intensity projection; MR=magnetic resonance; MRI=magnetic resonance imaging; OOF=optimally oriented flux; Pr=precision; Re=recall; Se=sensitivity; Sp=specificity. \end{tablenotes} \end{threeparttable} \end{minipage} \end{sidewaystable*} \begin{sidewaystable*} \vspace{0.8cm} \begin{minipage}[t]{1.1\textwidth}\vspace{-0.8cm} \centering \caption{Overview of the techniques in traditional machine-learning-based methods: see list of abbreviations at the bottom. 
} \label{tab:traditional learning techniques}\vspace{-0.2cm} \begin{threeparttable} \begin{tabular}{p{5cm}p{4cm}p{13cm}} \hline\hline \multirow{1}{*}{Conventional learning-based methods} &\multirow{1}{*}{Techniques}&\multirow{1}{*}{}\\ \cmidrule(r){1-3} \multirow{9}{*}{Hand-crafted features} &Intensity features:& \citet{Vukadinovic2010, Cherry2015, Mehmet2016}\\ \cdashline{2-3}[0.8pt/2pt] {}&Intensity gradient features: &\citet{Mehmet2016}\\ \cdashline{2-3}[0.8pt/2pt] {}&Bifurcation feature vectors: &\citet{Bogunovic2012a}\\ \cdashline{2-3}[0.8pt/2pt] {}&Spatial location: &\citet{Vukadinovic2010, Mehmet2016}\\ \cdashline{2-3}[0.8pt/2pt] {}&Angles: &\citet{Bogunovic2012a, Mehmet2016}\\ \cmidrule(r){2-3} {}&Learning-based kernels: &\citet{Poletti2014c, Liu2014c, Lesage2016, Asl2017}\\ \cdashline{2-3}[0.8pt/2pt] \multirow{2}{*}{}&\multirow{2}{*}{Learning-based filters:} &\citet{Lin2012b, Azzopardi2013c, Annunziata2015a, Sironi2015}\\ {}&{}&\citet{Annunziata2016, Zhang2017g, Deng2018, Javidi2017}\\ \cmidrule(r){1-3} \multirow{12}{*}{Classifications}&\multirow{2}{*}{K-means:}&\citet{Coates2012, Saffarzadeh2014, Zhang2014d, Jodas2017}\\ {}&{}&\citet{Goceri2017a, Lu2017a, Xia2018}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Fuzzy C-means clustering:} &\citet{Mapayi2015, Mapayi2016, Khan2016}\\ {}&{}&\citet{Haddad2018, Zeng2018b}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Support Vector Machine:} &\citet{A.Osareh2009, You2011, Hanaoka2015, Chen2015c}\\ {}&{}&\citet{Kang2015a, Jawaid2017b, Orlando2017, Lorza2018}\\ \cmidrule(r){2-3} {}&\multirow{3}{*}{Boosting-based methods:} &\citet{Turetken2016, Lupascu2010, Gu2017}\\ {}&{}&\citet{Memari2017, Lupascu2013e}\\ {}&{}&\citet{Fraz2012b, Lupascu2013e, Hashemzadeh2019}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Random forest:} &\citet{Annunziata2015, Melki2014, Zhang2017d}\\ {}&{}&\citet{Schneider2015a, Sankaran2016, Cherry2015}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Hybrid classifiers:} &\citet{Rani2016, Lugauer2014a, Chapman2015,
Hu2018e}\\ \cmidrule(r){1-3} \multirow{5}{*}{Statistical models}&\multirow{1}{*}{Threshold: }&\citet{Vukadinovic2010, Cheng2012a}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Intensity/appearance model:} &\citet{Schaap2011, Zheng2012a, Rempfler2014a}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Topological model:} &\citet{Rempfler2015b, Asl2017, Zhao2017d}\\ {}&{}&\citet{Chai2013, Kalaie2017b}\\ \hline \hline \end{tabular} \end{threeparttable} \end{minipage} \end{sidewaystable*} \subsection{Hand-crafted features}\label{hand-crafted features} A broad definition of hand-crafted features is provided in \citep{Lesage2009}. Conventional machine-learning-based methods train models with numerous hand-crafted features, which should be well-designed according to the applications. These features (i.e., global and local features) can be obtained by a series of filters such as those given in \citep{Frangi1998, Agam2005a, Manniesing2006}. \citet{Vukadinovic2010} used a set of features for classifying the calcium candidate object of blood vessels. These features include smoothed intensity, Gaussian derivative features, and a set of shape features including spatial locations (distance to the lumen). \citet{Bogunovic2012a} extracted a set of labeled bifurcation feature vectors of vessels to train the classifier. \citet{Mehmet2016} extracted local features based on image intensity, intensity gradient, sample positions, and angles. They claimed that the Hessian-matrix-based features can aid in distinguishing between tubular and non-tubular structures. One application of hand-crafted features is in the development of learning-based kernels. \citet{Poletti2014c} learned a set of optimal discriminative convolution kernels to be used in AdaBoost classifications. The multi-kernel learning method proposed in \citep{Liu2014c} utilizes the features from the Hessian-matrix-based vesselness measures, multi-scale Gabor filter responses, and multi-scale line strengths. 
\citet{Lesage2016} learned the non-parametric kernel with likelihood terms of direction and radius transition priors. To estimate the vessel direction and diameter, \citet{Asl2017} formulated a kernelized covariance matrix from the training data. Another application is learning-based vessel filters. Assuming that vessel properties change continuously, \citet{Lin2012b} learned the continuity pattern of the current segment using the extended Kalman filter. \citet{Azzopardi2013c} learned the appropriate prototype features in the filter-configuration process. In addition to the appearance features of learning filters, \citet{Annunziata2015a} introduced context information (i.e., relationships among objects) into the filter-learning process. The authors argued that the learned context filters have two clear advantages: incorporating high-level information and attaining high efficiency and adaptability. To accelerate the learning process, \citet{Sironi2015} approximated the filters by linear combinations of a smaller number of separable filters. This operation considerably reduces the computational complexity at no extra cost in performance. \citet{Annunziata2016} proposed a warm-start strategy to address the same problem. This strategy is based on carefully designing hand-crafted filters and modeling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Using vascular filters (e.g., the Frangi filter and optimally oriented flux), \citet{Zhang2017g} extracted the corresponding types of vascular features and integrated these feature responses into a structured random forest to classify voxels into positive and negative classes. \citet{Deng2018} referred to all the features used by the random forest (RF) as discriminative integrated features. These features are classified as low-level features, vascular features, context features, and local self-similarity descriptors.
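As a concrete illustration of the Hessian-based vesselness idea behind hand-crafted filters such as Frangi's, the following is a minimal single-scale 2D sketch; it is not the exact formulation of any cited work, and the synthetic image and the parameters `beta` and `c` are purely illustrative:

```python
import numpy as np

def frangi_like_2d(img, beta=0.5, c=15.0):
    """Toy single-scale Frangi-style vesselness from Hessian eigenvalues."""
    # Finite differences approximate the Hessian of the image.
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian, closed form.
    tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mu = (hxx + hyy) / 2.0
    l1, l2 = mu + tmp, mu - tmp
    # Sort by magnitude: |lam_small| <= |lam_large|.
    lam_small = np.where(np.abs(l1) < np.abs(l2), l1, l2)
    lam_large = np.where(np.abs(l1) < np.abs(l2), l2, l1)
    rb = lam_small / (lam_large + 1e-12)          # blob-vs-tube ratio
    s = np.sqrt(lam_small ** 2 + lam_large ** 2)  # "structureness"
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    # Keep only bright tubes on a dark background (lam_large < 0).
    return np.where(lam_large < 0, v, 0.0)

# Synthetic image: one bright horizontal "vessel" on a dark background.
img = np.zeros((32, 32))
img[15:17, :] = 100.0
v = frangi_like_2d(img)
# The response concentrates on the tube, not in the flat background.
assert v[15, 16] > v[5, 16]
```

Responses like `v` are exactly the kind of per-pixel filter output that the cited methods feed into a classifier as one feature among many.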
In addition to the kernels and filters, \citet{Javidi2017} constructed two separate dictionaries to learn the vessel and non-vessel representations. These learned dictionaries yield a strong representation containing the semantic concepts of the image. \subsection{Classifications}\label{Classification} The conventional machine-learning-based methods obtain vessels using classifiers. For vessel-tracking tasks, the methodologies of classification can be broadly categorized into the unsupervised and supervised learning strategies. Unsupervised learning-based methods train the classifier without using labeled vessel data or explicitly using any supervised classification techniques. To separate related regions, seeds and patches, the main schemes reported in the literature are clustering techniques (e.g., k-means and fuzzy C-means). Instead of explicitly obtaining sparse representations, k-means clustering tends to discover sparse projections of the data \citep{Coates2012}. As a pre-processing step, the k-means algorithm can be used to partition the pixels into several clusters; e.g., three clusters of related regions \citep{Saffarzadeh2014} or five groups of images \citep{Zhang2014d}. In the process, \citet{Jodas2017} employed the k-means algorithm with subtractive clustering to separate the vessel regions in the image according to the gray-scale intensity. The k-means algorithm is also used for the final refinement of vessel segmentation \citep{Goceri2017a}. In addition to vessel regions, \citet{Lu2017a} used the manually annotated seeds to represent the vascular features and utilized k-means clustering to exclude the wrong seeds. To find the representative patches from numerous candidate patches, \citet{Xia2018} used k-means clustering to group the patches under the Euclidean distance metric. Fuzzy C-means clustering is another unsupervised learning method for pattern recognition that employs various image properties for separation. 
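The k-means pixel partitioning described above can be sketched as follows; this is a toy one-dimensional Lloyd's-algorithm example with invented intensity clusters, not the pipeline of any cited work:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain Lloyd k-means on scalar pixel intensities."""
    # Spread the initial centers across the intensity range (deterministic).
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign each pixel to its nearest center (Euclidean distance).
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# Illustrative "pixels": dark background, mid-gray tissue, bright vessels.
rng = np.random.default_rng(1)
pixels = np.concatenate([np.full(100, 10.0), np.full(80, 90.0),
                         np.full(40, 200.0)]) + rng.normal(0, 3, 220)
labels, centers = kmeans_1d(pixels, k=3)
print(np.round(centers))  # cluster centers near 10, 90, and 200
```

In the cited pipelines the same grouping step is applied to richer feature vectors (or patches) rather than raw scalar intensities, but the assignment/update loop is identical.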
Image pixel intensities are not mutually independent; hence, \citet{Kande2010} used a thresholding technique based on the spatially weighted fuzzy C-means algorithm, which preserves the spatial structures in a binarized/thresholded image well. In \citep{Mapayi2015}, phase congruency \citep{Kovesi1999} has been used to preserve the features with in-phase frequency components, and fuzzy C-means clustering is performed for accurate retinal vessel segmentation. \citet{Mapayi2016} further investigated the difference image with fuzzy C-means for the detection of vessels in the retinal image. An improved fuzzy C-means clustering in \citep{Khan2016} was also used for pixel classification based on texture features. Using the contrast-time curve of each pixel as input, \citet{Haddad2018} separated the major vessels from the capillary blush and background noise through fuzzy C-means clustering. \citet{Zeng2018b} constructed an intensity model based on kernel fuzzy C-means to extract the intensity feature of thick vessels. Because ground truth is absent, the performance of unsupervised methods relies on particular features based on the statistical distribution of the overall input data. In contrast, supervised learning methods require a manually annotated set of training images for classification. The extracted features and ground truth of every sample are collected to train the classifier. Most of these methods in the supervised category use various classifiers (\Zxhreffig{fig: Traditional supervised learning})---support vector machine (SVM), boosting-based methods, and random forests---to distinguish the vascular patterns in the images. The SVM classifier performs vessel or non-vessel classification by constructing an N-dimensional hyperplane that optimally separates the vessel samples into different categories. The classification ability of the SVM is based on the feature vectors obtained by different operators and vascular-dedicated filters.
The operators can be line operators \citep{Ricci2007}, Gabor filters \citep{A.Osareh2009}, and wavelets \citep{You2011}. The vascular-dedicated filters can be the Frangi filter \citep{Frangi1998} and optimally oriented flux \citep{Law2010}. To distinguish between the vessels, the feature vectors can also be formulated via general measures; e.g., distance between adjacent nodes \citep{Hanaoka2015}, geometric shapes \citep{Kang2015a}, and normal cross-sections \citep{Jawaid2017b}. To deal with more complex cases in curve detection, \citet{Chen2015c} utilized joint feature representations---e.g., smoothness of points and position relationships---for classification. Instead of classifying the pixels, in the framework of the fully connected conditional random field (CRF), the SVM methods are employed to adjust the weight parameters in the energy function \citep{Orlando2017}. \citet{Lorza2018} used the SVM with a radial basis function kernel to obtain the probability map of the vessel region. Boosting-based methods build strong classification models from a linear combination of weak classifiers, making the training easier and faster. Simple functions, such as regression stumps, can be employed in boosting-based methods to detect curvilinear objects \citep{Turetken2016}. In vessel-segmentation tasks, a regression stump is a decision tree with two terminal nodes. For a given vascular feature, the tree selects a branch according to the threshold based on a binary decision function. As a special case of boosting-based methods, the AdaBoost learning model is trained to automatically detect the bifurcation points of vessels with elaborately selected features. To improve the accuracy of classification, numerous filters are necessary to obtain the vascular features \citep{Zhou2007b}.
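To make the stump-plus-boosting idea concrete, the following minimal sketch fits decision stumps (two terminal nodes, one threshold) inside an AdaBoost loop on a toy scalar "vesselness" feature; the data and helper names are illustrative, not the setup of any cited work:

```python
import numpy as np

def fit_stump(x, y, w):
    """Decision stump with two terminal nodes: sign(pol * (x - thr))."""
    best = (np.inf, None, None)
    for thr in np.unique(x):
        for pol in (1, -1):
            pred = np.where(pol * (x - thr) >= 0, 1, -1)
            err = w[pred != y].sum()         # weighted misclassification
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(x, y, rounds=5):
    """Strong classifier = linear combination of weak stumps."""
    w = np.full(len(x), 1.0 / len(x))
    stumps = []
    for _ in range(rounds):
        err, thr, pol = fit_stump(x, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (x - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)       # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, thr, pol))
    return stumps

def predict(stumps, x):
    score = sum(a * np.where(p * (x - t) >= 0, 1, -1) for a, t, p in stumps)
    return np.where(score >= 0, 1, -1)

# Toy feature: vessel samples (+1) tend to have higher filter responses.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
model = adaboost(x, y)
assert (predict(model, x) == y).all()
```

In practice each sample carries a whole vector of filter responses and the stump search runs over every feature dimension, but the weighting-and-reweighting loop is the same.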
\citet{Lupascu2010} used feature vectors, which are composed of eight elements including the output of filters, measures, and other transformation results, to encode vascular information on the local intensity structure, spatial properties, and geometry at multiple scales. \citet{Gu2017} constructed boosting-tree methods based on features such as variable sizes and locations. These features represent the encoded global contextual information (e.g., context distance and local spatial label patterns). \citet{Memari2017} completed the segmentation based on feature extraction and selection steps, along with the AdaBoost classifier. If a classification tree has an AdaBoost classifier at each node, it is called a probabilistic boosting-tree (PBT) classifier. \citet{Zheng2011c} exploited the PBT to identify pixels inside the vessel using 24 feature vectors from the Hessian matrix. To improve the performance of retinal segmentation methods in the presence of lesions, an ensemble classifier of boosted \citep{Freund1995} and bagged decision trees \citep{Breiman1996} has been proposed to manage the healthy and pathological retinal images via several encoded features \citep{Fraz2012b}. The ensembles of bagged decision trees have also been employed to learn the mapping between vessel widths and corresponding points \citep{Lupascu2013e}. In \citep{Hashemzadeh2019}, a root-guided decision tree was used to distinguish between the vessel and non-vessel regions. A collection of tree-structured classifiers can be assembled as an RF classifier. Unlike the SVM and single decision trees, the RF \citep{Cutler2012, Zhang2016g} tends to deliver high performance because of the embedded feature selection in the model-generation process.
In the vessel-tracking process, the selected features are invariably related to the intensity profile or vascular shapes (e.g., tubular structures \citep{Annunziata2015, Melki2014, Zhang2017d}, vessel center \citep{Schneider2015a}, and tree-like structures \citep{Sankaran2016}). To cover more features for the RF classifier, researchers used multiple techniques to generate representations in various spaces. \citet{Cherry2015} used two sets of features to distinguish the abnormal vessels: vessel cues and local information. To improve the performance and avoid the over-fitting problems, hybrid classifiers are used for vessel-classification problems. \citet{Rani2016} combined the SVM and tree-bagger techniques to distinguish between the vessel and non-vessel structures. The works of \citet{Lugauer2014a} and \citet{Chapman2015} used the RF, PBT, and logistic regression classifier to identify the lumen contours and edges of vessels. \citet{Hu2018e} intricately applied the cascade-AdaBoost-SVM classifiers to delineate the vessel boundaries. \subsection{Statistical models}\label{Statistic models} The profiles of intensity and geometries of vessels can be learned using statistical models. \citet{Vukadinovic2010} determined the vessel calcium object threshold by simply observing the calcium objects disappear in the image dataset; this threshold is used to segment the calcium regions of the vessels. In \citep{Cheng2012a}, the vessel center threshold, which is referred to as the strong peak near the center of the profile, was learned from a set of manually-labeled samples of parallel linear structures. For more detailed information, \citet{Schaap2011} learned the local point distribution models and a nonlinear boundary intensity model by statistically analyzing the set of annotated training data. \citet{Zheng2012a} divided the vessel model into four structures, each of which is recognized via learned detectors. 
Instead of designing individual classifiers for each geometrical constraint, \citet{Rempfler2014a} simply learned the global statistics of the desired geometrical properties of the network from the dataset. Based on the probabilistic model, the topological structures of vessels, e.g., branches and connections, can also be learned for vessel tracking. \citet{Rempfler2015b} learned the physiological geometric properties of vessels such as the relative frequencies of radii and deviation angles of vessel segments. \citet{Asl2017} learned these relationships using kernels. \citet{Zhao2017d} learned the topological tree and geometrical statistics of parameters including tree hierarchy, branch angle, and length statistics. In contrast to the pixel-based and object-based methods, \citet{Chai2013} modeled the vessel connections using graph theory. To describe the shape of the graph, they constructed three sets of parameters: graph connectivity, edge orientation, and line width. These parameters can be learned from annotated image samples through maximum likelihood estimation. Considering the intensity distributions, \citet{Kalaie2017b} developed a directed probabilistic graphical model whose hyperparameters are estimated using a maximum likelihood solution based on the Laplace approximation.
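As a minimal illustration of learning such geometric statistics by maximum likelihood, the following fits a univariate Gaussian to hypothetical branch-angle samples using the closed-form MLE; the angle values are invented for illustration and do not come from any cited dataset:

```python
import math

def gaussian_mle(samples):
    """Closed-form maximum likelihood estimates for a univariate Gaussian."""
    n = len(samples)
    mu = sum(samples) / n
    # The MLE variance uses the 1/n normalizer, not the unbiased 1/(n-1).
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, var

def log_likelihood(samples, mu, var):
    return sum(-0.5 * math.log(2 * math.pi * var) - (s - mu) ** 2 / (2 * var)
               for s in samples)

# Hypothetical branch angles (degrees) measured from annotated samples.
angles = [62.0, 70.0, 66.0, 74.0, 68.0]
mu, var = gaussian_mle(angles)
print(mu, var)  # mu = 68.0, var = 16.0
# The MLE maximizes the likelihood: nudging mu away can only lower it.
assert log_likelihood(angles, mu, var) >= log_likelihood(angles, mu + 1.0, var)
```

The cited works fit richer models (relative-frequency histograms of radii, graphical models over connectivity), but the principle is the same: choose the parameters that maximize the likelihood of the annotated samples.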
\vspace{-1em} \section{Vessel tracking based on deep learning}\label{Vessel tracking based on deep learning} \begin{sidewaystable*} \begin{minipage}[]{1.1\textwidth} \caption{Overview of deep-learning-based methods for tracking \textbf{retinal vessels}: the evaluation metrics and datasets are presented in Section \ref{Evaluation issues}; see list of abbreviations at the bottom.} \label{tab:deep learning:retinal vessel} \vspace{-0.2cm} \centering \scriptsize \begin{threeparttable} \begin{tabular}{p{2.8cm}p{2.6cm}p{3.8cm}p{5.4cm}p{6cm}} \hline\hline \multirow{1}{*}{Authors} &\multirow{1}{*}{Methods}&\multirow{1}{*}{Data}&\multirow{1}{*}{Experiment}&\multirow{1}{*}{Results}\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Li2016b}} &\multirow{3}{*}{Encoder-decoder, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9527, AUC=0.9738, Se=0.7569, Sp=0.9816\\ {}&{}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&Acc=0.9628, AUC=0.9879, Se=0.7726, Sp=0.9844\\ {}&{}&Retinal colored image, CHASE-DB1 &20 for training, 8 for testing, one-off train + test&Acc=0.9581, AUC=0.9716, Se=0.7507, Sp=0.9793\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Liskowski2016}} &\multirow{2}{*}{CNN without pooling, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9495, AUC=0.9720\\ {}&{}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&Acc=0.9566, AUC=0.9785\\ \cmidrule(r){1-5} \citet{Lahiri2017} &GAN, 2D &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&AUC=0.962\\ \cmidrule(r){1-5} \multirow{1}{*}{\citet{Costa2018}} &\multirow{1}{*}{GANs, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&AUC=0.841\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Guo2018b}} &\multirow{2}{*}{Multiple CNNs, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9597, AUC=0.9726\\ {}&{}&Retinal
colored image, STARE&20 for testing&Acc=0.9613, AUC=0.9737\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Yan2018e}} &\multirow{3}{*}{Encoder-decoder, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9542, AUC=0.9752, Se=0.7653, Sp=0.9818\\ {}&{}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&Acc=0.9612, AUC=0.9801, Se=0.7581, Sp=0.9846\\ {}&{}&Retinal colored image, CHASE-DB1&20 for training, 8 for testing, one-off train + test&Acc=0.9610, AUC=0.9781, Se=0.7633, Sp=0.9809\\ {}&{}&Retinal colored image, HRF&5 for training, 40 for testing, one-off train + test&Acc=0.9437, Pr=0.6647, Se=0.7881, Sp=0.9592\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Wu2018b}} &\multirow{2}{*}{Multiple CNNs, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9567, AUC=0.9807, Se=0.7844, Sp=0.9819\\ {}&{}&Retinal colored image, CHASE-DB1&20 for training, 8 for testing, one-off train + test&Acc=0.9637, AUC=0.9825, Se=0.7538, Sp=0.9847\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Zhao2018f}} &\multirow{3}{*}{GANs, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Se=0.8038, Sp=0.9815\\ {}&{}&Retinal colored image, STARE&10 for training, 10 for testing, one-off train + test&Se=0.7896, Sp=0.9841\\ {}&{}&Retinal colored image, HRF&22 for training, 23 for testing, one-off train + test&Se=0.8001, Sp=0.9823\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Zhang2018g}} &\multirow{3}{*}{U-net, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9504, AUC=0.9799, Se=0.8723, Sp=0.9618\\ {}&{}&Retinal colored image, STARE&15 for training, 5 for testing, four-fold cross-validation&Acc=0.9712, AUC=0.9882, Se=0.7673, Sp=0.9901\\ {}&{}&Retinal colored image, CHASE-DB1&21 for training, 7 for testing, four-fold cross-validation&Acc=0.9770, AUC=0.9900, Se=0.7670, Sp=0.9909\\ \cmidrule(r){1-5} \multirow{1}{*}{\citet{Gu2019a}} 
&\multirow{1}{*}{Context encoder network} &\multirow{1}{*}{Retinal colored image, DRIVE}&\multirow{1}{*}{20 for training, 20 for testing, one-off train + test}&\multirow{1}{*}{Acc=0.955, AUC=0.978}\\ \cmidrule(r){1-5} \multirow{4}{*}{\citet{Jin2019}} &\multirow{4}{*}{Deformable U-net, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9566, AUC=0.9802, TNR=0.9800, TPR=0.7963\\ {}&{}&Retinal colored image, STARE&19 for training, 1 for testing, leave-one-out&Acc=0.9641, AUC=0.9832, TPR=0.7595, TNR=0.9878\\ {}&{}&Retinal colored image, CHASE-DB1 &14 for training, 14 for testing, one-off train + test&Acc=0.9610, AUC=0.9804, TNR=0.9752, TPR=0.8155\\ {}&{}&Retinal colored image, HRF & 15 for training, 30 for testing, one-off train + test&Acc=0.9651, AUC=0.9831, TNR=0.9874, TPR=0.7464\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Lian2019}} &\multirow{1}{*}{U-net, Res-net and } &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9692, Pr=0.8637, Se=0.8278, Sp=0.9861\\ {}&{attention scheme, 2D}&Retinal colored image, STARE&10 for training, 10 for testing, one-off train + test&Acc=0.9740, Pr=0.8823, Se=0.8342, Sp=0.9916\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Mou2019}} &\multirow{3}{*}{Dense dilate network, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, one-off train + test&Acc=0.9594, AUC=0.9796, Se=0.8126, Sp=0.9788\\ {}&{}&Retinal colored image, STARE&15 for training, 5 for testing, four-fold cross-validation&Acc=0.9685, AUC=0.9858, Se=0.8391, Sp=0.9769\\ {}&{}&Retinal colored image, CHASE-DB1&21 for training, 7 for testing, four-fold cross-validation&Acc=0.9637, AUC=0.9812, Se=0.8268, Sp=0.9773\\ \cmidrule(r){1-5} \multirow{4}{*}{\citet{Shin2019}} &\multirow{4}{*}{CNN + GNN, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing , one-off train + test&Acc=0.9271, AUC=0.9802, Se=0.9382, Sp=0.9255\\ {}&{}&Retinal colored image, STARE&10 for training, 10 for 
testing, one-off train + test&Acc=0.9378, AUC=0.9877, Se=0.9598, Sp=0.9352\\ {}&{}&Retinal colored image, CHASE-DB1 &20 for training, 8 for testing, one-off train + test&Acc=0.9373, AUC=0.9830, Se=0.9463, Sp=0.9364\\ {}&{}&Retinal colored image, HRF & 15 for training, 30 for testing, one-off train + test&Acc=0.9349, AUC=0.9838, Se=0.9546, Sp=0.9329\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Cherukuri2020}} &\multirow{3}{*}{CNN + geometric prior, 2D} &Retinal colored image, DRIVE&20 for training, 20 for testing, four-fold cross-validation&Acc=0.9563, AUC=0.9814\\ {}&{}&Retinal colored image, STARE&10 for training, 10 for testing, one-off train + test&Acc=0.9687, AUC=0.9903\\ {}&{}&Retinal colored image, CHASE-DB1 &14 for training, 14 for testing, one-off train + test&Acc=0.9672, AUC=0.9833\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Ding2020}} &\multirow{1}{*}{Pre-trained model, vessel} &\multirow{2}{*}{Retinal UWF FP, PRIME-FP20}&\multirow{2}{*}{15 images, four-fold cross-validation}&\multirow{2}{*}{AUCPR=0.842, Max DSC=0.772}\\ {}&{maps, and noise label, 2D}&{}& {}&{}\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: Acc=accuracy; AUC=area under the ROC curve; CNN=convolutional neural networks; GAN=generative adversarial network; GNN=graph neural network; Pr=precision; Re=recall; Se=sensitivity; Sp=specificity; TNR=true negative rate; TPR=true positive rate; UWF FP=ultra-widefield fundus photography . 
\end{tablenotes} \end{threeparttable} \end{minipage} \end{sidewaystable*} \begin{sidewaystable*} \begin{minipage}[]{1.1\textwidth} \caption{Overview of deep-learning-based methods for tracking \textbf{coronary vessels}: the evaluation metrics and datasets are presented in Section \ref{Evaluation issues}; see list of abbreviations at the bottom.} \label{tab:deep learning:coronary}\vspace{-0.2cm} \centering \scriptsize \begin{threeparttable} \begin{tabular}{p{2.2cm}p{2.5cm}p{4.5cm}p{5.5cm}p{5.2cm}} \hline\hline \multirow{1}{*}{Authors} &\multirow{1}{*}{Methods}&\multirow{1}{*}{Data}&\multirow{1}{*}{Experiment}&\multirow{1}{*}{Results}\\ \cmidrule(r){1-5} \citet{Lee2019b} &CNN + Shape prior, 3D &Coronary CTA, local data&274 for training, 136 for testing, one-off train + test&DSC=0.768, HD=3.55mm\\ \cmidrule(r){1-5} \multirow{1}{*}{\citet{Shin2019}} &\multirow{1}{*}{CNN + GNN, 2D} &Coronary x-ray, CA-XRA (local data) & 2958 for training, 179 for testing, one-off train + test&Acc=0.9517, AUC=0.9914, Se=0.9700, Sp=0.9507\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Wolterink2019}} &\multirow{3}{*}{\tabincell{l}{CNN with dilated-\\convolution, 3D}} &Coronary CTA, CAT08&7 for training, 1 for testing, leave-one-out&AI=0.21mm, OF=0.815, OT=0.970, OV=0.937\\ {}&{}&Coronary CTA, UMCU dataset (local data)&8 for training, 50 for testing, one-off train + test&Median of radius=0.81mm\\ {}&{}&Coronary CTA, orCaScore&8 for training, 36 for testing, one-off train + test&Visual check\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: Acc=accuracy; AI=average inside; AUC=area under the ROC curve; CNN=convolutional neural networks; CTA = computed tomography angiography; GNN=graph neural network; DSC=dice similarity coefficient; HD=Hausdorff distance; Se=sensitivity; Sp=specificity; OF=overlap until first error; OT=overlap with the clinically relevant part of the vessel; OV=overlap.
\end{tablenotes} \end{threeparttable} \end{minipage}\vspace{0.1cm} \begin{minipage}[]{1.1\textwidth} \caption{Overview of deep-learning-based methods for tracking \textbf{other vessels}: the evaluation metrics and datasets are presented in Section \ref{Evaluation issues}; see list of abbreviations at the bottom.} \label{tab:deep learning:other vessels}\vspace{-0.2cm} \centering \scriptsize \begin{threeparttable} \begin{tabular}{p{2.8cm}p{2.6cm}p{4.3cm}p{5.2cm}p{3.8cm}} \hline\hline \multirow{1}{*}{Authors} &\multirow{1}{*}{Methods}&\multirow{1}{*}{Data}&\multirow{1}{*}{Experiment}&\multirow{1}{*}{Results}\\ \cmidrule(r){1-5} \citet{Marques2016} &U-net, 3D &Multiple organs, CT, MR, local data&67 for training, 19 for testing, one-off train + test&Pr=0.362\\ \cmidrule(r){1-5} \multirow{3}{*}{\citet{Huang2018}} &\multirow{3}{*}{U-net, 3D} &Liver contrast-enhanced CT, 3D-IRCADb&10 for training, 10 for testing, one-off train + test&DSC=0.675, Se=0.743\\ {}&{}&Liver CT, SLIVER07&20 for testing&Visual check\\ {}&{}&Liver CT, local data&10 for testing&Visual check\\ \cmidrule(r){1-5} \multirow{1}{*}{\citet{Lian2018a}} &\multirow{1}{*}{Modified U-net, 2D}&Perivascular spaces, 7T MR, local data&6 for training, 14 for testing, one-off train + test&DSC=0.77, PPV=0.83, Se=0.74\\ \cmidrule(r){1-5} \citet{Nardelli2018} &CNN + graph cut, 3D &Pulmonary CT, local data&4 for training, 16 for validation&Acc=0.87\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{Kitrungrotsakul2019}} &\multirow{2}{*}{DenseNet, 2.5D} &Hepatic MR, IRCAD&19 for training, 1 for testing, leave-one-out&DSC=0.903, Se=0.929\\ {}&{}&Hepatic MR, VASCUSYNTH&9 for training, 1 for testing, leave-one-out&DSC=0.901\\ \cmidrule(r){1-5} \multirow{2}{*}{\citet{He2020a}} &Auto-encoder + &\multirow{2}{*}{Renal artery, CT, local data}&\multirow{2}{*}{52 for training, 104 for testing, one-off train + test}&\multirow{2}{*}{DSC=0.884, HD=25.439mm}\\ {}&{dense bias connection, 3D}&{}&{}&{}\\ \cmidrule(r){1-5}
\multirow{2}{*}{\citet{Nazir2020}} &CNN + dilated-&\multirow{2}{*}{Intracranial vessel, CTA, local data}&\multirow{2}{*}{50 for training, 20 for testing, one-off train + test}&\multirow{2}{*}{DSC=0.8946, HD=5.04mm}\\ {}&{convolution, 3D}&{}&{}&{}\\ \cmidrule(r){1-5} \multirow{1}{*}{\citet{Ni2020}} &CNN + channel attention&\multirow{1}{*}{Intracranial vessel, CTA, local data}&\multirow{1}{*}{9488 slices for training, 480 images for testing}&\multirow{1}{*}{DSC=0.965}\\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: Acc=accuracy; CNN=convolutional neural networks; CT=computed tomography; CTA = computed tomography angiography; DSC=dice similarity coefficient; HD=Hausdorff distance; IRCAD: \url{http://www.ircad.fr}; MR=magnetic resonance; Pr=precision; PPV=positive predictive value; Se=sensitivity; VASCUSYNTH \citep{2010VascuSynth}. \end{tablenotes} \end{threeparttable} \end{minipage} \end{sidewaystable*} \begin{sidewaystable*} \vspace{0.8cm} \begin{minipage}[t]{0.6\textwidth}\vspace{-0.8cm} \centering \caption{Overview of techniques in deep-learning-based methods: see list of abbreviations at the bottom.} \label{tab:deep learning techniques}\vspace{-0.2cm} \begin{threeparttable} \begin{tabular}{p{5cm}p{4.5cm}p{13cm}} \hline\hline \multirow{1}{*}{Deep-learning-based methods} &\multirow{1}{*}{Techniques}&\multirow{1}{*}{}\\ \cmidrule(r){1-3} \multirow{24}{*}{Network architectures}&\multirow{1}{*}{CNN + Dilated convolution:}&\citet{Wolterink2019, Mou2019, Nazir2020}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{CNN + Deformable convolution:} &\citet{Jin2019}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{CNN + No pooling:} &\citet{Liskowski2016, Tetteh2017a}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{CNN + Probability maps:} &\citet{Ganin2015, Fu2016b, Mo2017}\\ {}&{}&\citet{Hu2018, Uslu2019, Lin2019, Ding2020}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{CNN + Attention mechanism:} &\citet{Shen2019, Li2019, Lian2019, Ni2020}\\ \cmidrule(r){2-3}
{}&\multirow{1}{*}{CNN + Skipping/short connection:} &\citet{Feng2018, Guo2019}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{CNN + CRF:} &\citet{Fu2016a, Luo2017}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{CNN + Prior:} &\citet{Lee2019b, Cherukuri2020}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Multi-task learning:} &\citet{Maninis2016, Tan2017}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Encoder-decoder:} &\citet{Li2016b, Fan2017, Dasgupta2017}\\ {}&{}&\citet{Gu2019a, He2020a}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{U-net:} &\citet{Fan2018, Huang2018}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Modified U-net:} &\citet{Chen2018j, Kandil2018a, Wang2019c, Zhang2019}\\ {}&{}&\citet{Girard2019, Zhang2018g, Dharmawan2019, Zhang2019}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{GNN:} &\citet{Shin2019}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{GANs:} &\citet{Costa2018, Yu2019}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Multiple CNNs:} &\citet{Wu2018b, Guo2018b}\\ \cmidrule(r){1-3} \multirow{5}{*}{Pre-processing}&\multirow{1}{*}{Contrast/brightness normalization:}&\citet{Vega2015b, Liskowski2016}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Whitening:} &\citet{Liskowski2016, Marques2016}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Augmentation:} &\citet{Huang2018, Guo2019, Fan2018}\\ {}&{}&\citet{Livne2019, Zreik2018a, Lin2019}\\ \cmidrule(r){1-3} \multirow{5}{*}{Sampling strategies}&\multirow{1}{*}{Patch as samples:}&\citet{Nardelli2018a, Wolterink2019}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Image as samples:} &\citet{Hu2018, Mo2017}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Processed image as samples:} &\citet{Nardelli2017, Nardelli2018, Hajabdollahi2018}\\ {}&{}&\citet{Zhao2018g, Zreik2018a}\\ \cmidrule(r){1-3} \multirow{7}{*}{Loss functions}&\multirow{2}{*}{Loss based on cross-entropy:}&\citet{Dasgupta2017, Nardelli2018a, Wu2018b, Jin2019}\\ {}&{}&\citet{Dharmawan2019, Mo2017, Guo2019, Lin2019}\\ \cmidrule(r){2-3} {}&\multirow{2}{*}{Loss to tackle data imbalance:} &\citet{Li2018b, Zhang2018g, Hu2018, Livne2019}\\
{}&{}&\citet{Soomro2019b, Kitrungrotsakul2019, Huang2018, Lian2018a}\\ \cmidrule(r){2-3} {}&\multirow{1}{*}{Loss based on squared error:} &\citet{Li2016b, Fan2017}\\\cmidrule(r){2-3} {}&\multirow{1}{*}{More complex formulations:} &\citet{Yan2018e, Jiang2019}\\ \hline \hline \end{tabular} \end{threeparttable} \begin{tablenotes} \footnotesize \item[*] List of abbreviations: CNN=convolutional neural networks; CRF=conditional random field; GAN=generative adversarial network; GNN=graph neural network. \end{tablenotes} \end{minipage} \end{sidewaystable*} Using deep-learning-based methods, deep neural networks can be developed to map the input data into vascular patterns such as center points and vascular regions. These patterns can be used to obtain the vessels directly or indirectly. To this end, various deep-learning techniques have been proposed. This section reviews the deep-learning-based methods from three aspects: frameworks of vessel tracking (Section \ref{Frameworks of vessel tracking}), architecture of deep neural networks (Section \ref{Architecture of networks}), and model training (Section \ref{Training of CNN models}). \Zxhreftbs{tab:deep learning:retinal vessel} - \ref{tab:deep learning:other vessels} summarize a selection of works representative of the main trends in the field, organized by application. \Zxhreftb{tab:deep learning techniques} summarizes the existing deep-learning-based works by grouping them into different subcategories. \subsection{Frameworks of vessel tracking}\label{Frameworks of vessel tracking} Vessel tracking can be achieved using hierarchical features via a unified framework or a two-step processing scheme. The unified vessel-tracking methods integrate feature extraction and pixel classification into one network.
In contrast, the two-step scheme generally employs a conventional method to track the vessel based on vessel features first extracted using a deep convolutional neural network (CNN). Within the unified framework, vessel tracking can be cast as a classification or regression problem solved via the fully connected layers of a CNN. The output neurons of the CNN, generally connected to the fully connected layers of the network, determine the pixel labels. To separate vascular regions from the background using a CNN, two output neurons are typically placed after the fully connected layers \citep{Liskowski2016, Marques2016, Dasgupta2017, Hu2017a, Oliveira2017}. More output neurons can be used to segment vessels and other structures simultaneously \citep{Maninis2016, Tan2017}; this can be regarded as multi-task learning. The idea of multiple tasks has been extended in \citep{Lahiri2017}, where a discriminator-classifier network differentiates between fake and real vessel samples and assigns the correct class labels. The neurons of fully connected layers in conventional CNNs have large receptive fields over the input; hence, the results of pixel-level vessel segmentation are extremely coarse. To resolve this problem, \citet{Li2016b} improved the coarse results of conventional CNNs by outputting label maps of the same size. A pixel in the label maps can be affected by multiple image patches in its neighborhood. This idea is similar to the fully connected CRF, which considers the relationships among neighboring pixels. To achieve vessel segmentation, CRF layers can also be appended after the convolutional layers \citep{Fu2016a, Luo2017}. The vessel-tracking process can be divided into two steps: feature learning and vessel tracking.
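Before turning to the two-step scheme, the two-neuron classification head described above can be sketched minimally in Python; this is an illustrative example, not code from any of the cited works:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify_pixel(logits):
    """Two output neurons -> vessel/background label for one pixel,
    as in the classification heads described above."""
    probs = softmax(logits)
    return "vessel" if probs[1] > probs[0] else "background"

print(classify_pixel([0.2, 1.5]))   # "vessel"
```

In practice the two logits would come from the fully connected layers of a trained CNN; here they are hard-coded for illustration.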
In feature learning, the CNN maps the input image into intermediate representations located between the input and the tracking results, e.g., probability maps \citep{Khowaja2016, Wu2016a, Nasr-Esfahani2016, Wolterink2019, Mou2019}, geometric priors \citep{Cherukuri2020}, and other feature maps \citep{Wang2015}. In vessel tracking, a conventional tracking method can be applied to these intermediate representations. The simplest approach to complete the tracking is thresholding the probability map \citep{Nasr-Esfahani2016, Mo2017, Nasr-Esfahani2018}. \citet{Wang2015} employed ensembles of random forests (RFs) to classify the vascular pixels based on the feature maps output from selected layers of the CNN. \citet{Guo2018b} used a voting scheme to determine the results obtained by the CNN. \citet{Mou2019} performed vessel tracking by integrating the predicted probability maps and local vessel directions into the regularized walk algorithm. To refine the results of CNNs, \citet{Hu2018} added CRF modules at the end of the network, and \citet{Chu2013} used the rank-1 tensor-approximation approach to complete the tracking. Inspired by the label-propagation steps of registration methods, \citet{Lee2019b} employed a CNN to learn the deformations between the source and the target vessels. The authors assumed that this template transformer network can provide guarantees on the resulting shapes of vessels. \subsection{Network architectures}\label{Architecture of networks} In vessel-tracking tasks, CNNs are widely adopted for identifying hierarchical vascular features. To design an effective CNN for the recognition of vascular patterns, two aspects require thorough investigation: network components and the integration of multiple networks.
\subsubsection{Network components}\label{Components of the networks} \begin{figure*}[!t] \setlength{\abovecaptionskip}{-0.cm} \setlength{\belowcaptionskip}{-0.cm} \centering \includegraphics[scale=.7]{cnnframeworks.pdf} \caption{Illustration of three selected CNN frameworks for coronary segmentation: pixel-wise CNN (top), encoder-decoder (middle), and U-net (bottom).}\label{CNN frameworks} \end{figure*} A CNN is composed of a series of layers (\Zxhreffig{CNN frameworks}), typically including convolutional layers, pooling layers, and fully connected layers. The convolutional and pooling layers are used to build the CNNs in the early applications of vascular feature extraction \citep{Chu2013}, whereas fully connected layers are usually added at the end of a CNN for pixel-classification tasks (Section \ref{Classification}). The convolutional layers activate the localized vascular features of the image and feature map by using a set of convolutional units \citep{Nardelli2018a, Zreik2018a}. A stack of dilated convolutions is used in convolutional layers \citep{Wolterink2019, Mou2019} to aggregate the features over multiple scales. In addition to dilated convolution modules, \citet{Nazir2020} adopted an inception module fused with residual connections, enabling the network to capture advanced visual information under a controlled computational complexity. To capture the various shapes and scales of vessels, \citet{Jin2019} integrated a deformable convolution into the network. After the convolution layers, the pooling layers in the CNNs nonlinearly down-sample the input representation and preserve the feature information in each sub-region. The pooling layer aids in reducing the number of parameters irrelevant to the problem \citep{Nardelli2018a, Lian2018a, Zreik2018a}.
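The non-overlapping max pooling just described can be sketched as follows; this is an illustrative Python example on plain lists, not code from the cited works:

```python
def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling: down-sample a 2D feature map while
    preserving the strongest activation in each size x size sub-region."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, w - size + 1, size)]
            for i in range(0, h - size + 1, size)]

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 7, 8]]
print(max_pool2d(fmap))  # [[4, 2], [2, 8]]
```

Each output element keeps only the maximum of its 2x2 sub-region, which halves the spatial resolution while retaining the strongest response.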
However, the scaling operation of the pooling layer is considered to rapidly reduce the already extremely limited information contained in a potentially small patch, making the classification more difficult. Therefore, \citet{Liskowski2016} constructed a NO-POOL architecture, which performs well on the datasets. \citet{Tetteh2017a} also found that pooling operations can lead to the loss of fine local details, which are extremely crucial in pixel-wise tasks. To solve this problem, they removed all the pooling layers of the CNN, making the feature-extraction layers robust to objects of interest of any size. By employing pooling operations of various sizes, \citet{Gu2019a} used a residual multi-kernel pooling layer that encodes the multi-scale context features without extra learnable weights. Feature maps are organized sets of units obtained through convolution operations. In vessel segmentation, different spatial forms of feature maps can be used in the CNNs. Using a three-dimensional (3D) CNN, \citet{Jin2017} generated 3D feature maps to learn the structure variations in 3D space. They assumed that 3D spatial information (especially the 3D branch-level continuity) and junction-bifurcating patterns are important for segmenting vascular structures. Owing to the high computational demands, they selected a relatively small region of interest (ROI) and trimmed the network. \citet{Yun2019} used a 2.5D CNN, which simultaneously takes three adjacent slices in each of the orthogonal directions (axial, sagittal, and coronal) to improve the segmentation accuracy. Nevertheless, they assumed that a 3D CNN could entirely capture the 3D vascular information in its 3D convolutional layers. The feature maps created in the network can be applied to the final vessel-tracking tasks.
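The 2.5D input scheme described above (three adjacent slices in each orthogonal direction) can be sketched as follows; the indexing convention and the absence of boundary handling are illustrative assumptions, not details from the cited work:

```python
import numpy as np

def extract_2p5d_slices(volume, z, y, x):
    """Return three adjacent slices centred on voxel (z, y, x) in each of
    the axial, coronal, and sagittal directions of a 3D volume.
    Assumes the centre voxel is at least one voxel away from the border."""
    axial    = volume[z - 1:z + 2, :, :]                    # three axial slices
    coronal  = volume[:, y - 1:y + 2, :].transpose(1, 0, 2)
    sagittal = volume[:, :, x - 1:x + 2].transpose(2, 0, 1)
    return axial, coronal, sagittal

vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
a, c, s = extract_2p5d_slices(vol, 1, 1, 1)
print(a.shape, c.shape, s.shape)  # each (3, 4, 4)
```

Each of the three stacks can then be fed to a 2D CNN branch, giving partial 3D context at a fraction of the cost of a full 3D network.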
\citet{Ganin2015} found that CNNs are insufficient for learning the mapping from an image patch to its vessel annotation, leading to severe under-fitting during training and suboptimal performance during testing. To resolve this issue, the input image or patches are mapped into intermediate representations using the CNN. In vessel-segmentation tasks, these mapping results may be the probability maps of vessels. By applying the sigmoid activation function to the final convolution layer, the CNN output is converted to probability values for the foreground and background regions. The final predicted vessel segmentation can be obtained by fusing these probability maps, which describe the probability distributions of vessels and non-vessels. To predict the vessel boundary, \citet{Fu2016b} utilized a fully convolutional architecture to output the vessel probability map, followed by a CRF for binary segmentation. \citet{Mo2017} generated a weighted fusion layer by fusing the multi-scale feature maps from each branch output. In their framework, the probability map is computed using sigmoid functions on the fusion of feature maps. \citet{Hu2018} obtained the probability map using a multi-scale CNN. By fusing the middle-layer feature maps, this CNN model aggregates richer multi-scale information for a comprehensive description of the retinal vessels. \citet{Uslu2019} produced the probability maps of vessel interior, centerline, and edge locations. The authors assumed that the probability map can better explain the uncertainty and subjectivity of detecting vessels, especially those at the edge locations appearing in the ground-truth data. To formulate vessel segmentation as a style-transfer problem, \citet{Ding2020} used the binary probability maps as the tentative training data and the style targets.
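The sigmoid-and-fuse step described above can be sketched as follows; the fusion weights here are fixed for illustration, whereas in the cited works they are learned:

```python
import math

def sigmoid(z):
    """Logistic function mapping a logit to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fuse_probability_maps(logit_maps, weights):
    """Convert per-branch logits to probabilities with a sigmoid and fuse
    them pixel-wise with a weighted sum (weights are illustrative)."""
    fused = []
    for pixel_logits in zip(*logit_maps):   # same pixel across branches
        fused.append(sum(w * sigmoid(v) for w, v in zip(weights, pixel_logits)))
    return fused

branch_logits = [[2.0, -2.0],   # branch 1, two pixels
                 [0.0, 0.0]]    # branch 2, two pixels
print(fuse_probability_maps(branch_logits, [0.5, 0.5]))
```

The first pixel (positive logit in branch 1) ends up above 0.5 and the second below, so thresholding the fused map at 0.5 would label them vessel and background, respectively.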
Considering that shallower side-outputs capture rich detailed information, whereas deeper side-outputs carry high-level but fuzzy knowledge, \citet{Lin2019} output the feature maps of each intermediate layer using VGGNet \citep{Simonyan2014}. \subsubsection{Integration of multiple networks}\label{Integration of multiple networks} Because the designed forms of feature maps influence the performance of CNNs, feature maps of different layers are fused to further describe the vessels \citep{Wang2015, Fu2016b}. These feature-map forms can be derived from various CNN architectures. Depending on the number of CNNs used in the vessel-segmentation task, the architecture can be designed as a single CNN or multiple CNNs. A single CNN for vessel tracking can extract meaningful representations of vascular structures; this is regarded as a problem of dimension reduction or sparse coding of feature spaces. For this problem, encoder-decoder architectures (\Zxhreffig{CNN frameworks}) are introduced to encode the hierarchical features. Instead of transforming the input into another space, \citet{Li2016b} developed an auto-encoder network to learn features that could recover the input. The auto-encoder network can be embedded into CNNs to extract features that can manage large inter-anatomy variations and thin structures such as renal arteries \citep{He2020a}. Inspired by the auto-encoder network, \citet{Fan2017} developed an encoder-decoder style network to learn the mapping functions from the images to the vessels. By formulating the vessel-tracking problem as a multi-label inference problem, \citet{Dasgupta2017} used the encoder-decoder framework to learn the class-label dependencies of neighboring pixels. By employing skip connections between the encoder and decoder layers, the modified auto-encoder network facilitates the proper memorization of global and local features and alleviates the vanishing-gradient problem of deep CNNs.
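Two common feature-fusion forms in these architectures, channel-wise concatenation (as in skip connections) and element-wise addition (as in residual connections), can be sketched minimally; this illustrative Python uses nested lists in place of tensors:

```python
def skip_concat(decoder_maps, encoder_maps):
    """Skip connection: concatenate encoder feature maps to the decoder
    feature maps along the channel axis (each channel is a 2D list)."""
    return decoder_maps + encoder_maps

def residual_add(fmap_a, fmap_b):
    """Residual connection: element-wise addition of two feature maps."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(fmap_a, fmap_b)]

dec = [[[1, 1], [1, 1]]]          # one 2x2 channel from the decoder
enc = [[[2, 2], [2, 2]]]          # one 2x2 channel from the encoder
print(len(skip_concat(dec, enc)))     # 2 channels after concatenation
print(residual_add(dec[0], enc[0]))   # [[3, 3], [3, 3]]
```

Concatenation preserves both inputs for subsequent convolutions to weigh, whereas addition keeps the channel count fixed and eases gradient flow.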
\citet{Feng2018} concatenated different feature maps through a skipping connection. To learn more inherent features from different scales, \citet{Guo2019} further exploited short connections to fuse multiple outputs of the side output layers. To fuse multi-scale features, \citet{He2020a} employed a dense biased connection that compresses and transmits the feature maps in each layer to every forward layer. The authors assumed that this connection can reduce feature redundancy and maintain the integrity of the information and gradient flows. \citet{Shin2019} incorporated a graph neural network into a unified framework to learn the vessel structures. Similar to an encoder-decoder architecture, the U-net \citep{ronneberger2015u} can extract vascular features using skipping connections (\Zxhreffig{CNN frameworks}). A feature map generated from a lower layer is concatenated to the corresponding higher layer. The U-net has been used to segment the coronary arteries in X-ray angiograms \citep{Fan2018} and liver vessels in CT images \citep{Huang2018}. The global contextual information from the low-level features and the spatial details from the previous convolution guide the precise segmentation. Several methods attempt to efficiently extract or fuse vascular features by improving the structure of the U-net \citep{Chen2018j, Kandil2018a, Wang2019c}. \citet{Yan2018e} added two separate branches at the end of the U-net to simultaneously train the model with the segment-level and pixel-wise losses. To improve robustness and facilitate convergence, \citet{Zhang2018g} applied a residual connection inside each resample block, which adds the feature maps before the convolution layers. To reduce over-fitting, elements of the U-net framework have been modified, e.g., by adding a dropout layer \citep{Dharmawan2019} or reducing the number of channels \citep{Livne2019}.
\citet{Zhang2019} modified the original U-net by applying an additional convolutional layer before implementing concatenation with the corresponding decoder layer. This configuration also aids in transferring low-dimensional features to a higher-dimensional space. However, the general U-net may fail to extract some minuscule vessels because this feature accumulation is limited by the depth of the U-net \citep{Jin2019}; accordingly, modified U-nets have been developed to focus on vascular structures. \citet{Jin2019} developed a deformable CNN to capture various vessel shapes and scales via deformable receptive fields, which are adaptive to the input features. To highlight vessel-like structures, the attention gate (AG) mechanism has been introduced into the CNN \citep{Shen2019, Li2019}. The AG mechanism can highlight salient features and gradually suppress feature responses in unrelated background regions without passing multi-level information \citep{Shen2019, Li2019}. \citet{Lian2019} incorporated a weighted attention mechanism into the U-net framework. Using this mechanism, the network focuses only on the target ROI and eliminates the irrelevant background noise. To better learn the global feature information, \citet{Ni2020} introduced a channel attention mechanism when aggregating high-level and shallow features. In contrast to the single-CNN framework, multiple CNNs can be jointly adopted in one framework for vessel tracking. These CNNs can be designed according to different views of the image, e.g., three views (sagittal, coronal, and transverse) of patches \citep{Kitrungrotsakul2017a}. \citet{Guo2018b} employed multiple CNNs as a voting classifier to improve the performance. \citet{Zhao2018g} accomplished voxel-level vessel segmentation via the hierarchical update of CNNs. The authors assumed that this network absorbs the learning experience of the previous iteration, which gradually transforms a semi-supervised task into a supervised one.
\citet{Zhang2019} proposed a more complex cascaded U-net (i.e., three sub-networks designed for different detection tasks). \subsection{Training strategies}\label{Training of CNN models} The successful training of a useful CNN model relies on a series of strategies. It is essential to design a suitable training strategy to ensure that the network can focus on vascular regions. Considering the vascular profiles and features, the training strategies should be carefully designed with respect to the following aspects: pre-processing, sampling strategies, and the formulation of loss functions. \subsubsection{Pre-processing}\label{Pre-processing} Trained networks tend to perform better on appropriately pre-processed images. The data pre-processing techniques include contrast/brightness normalization, whitening, and augmentation. The image brightness may vary across the fields of view, affecting the network performance. To resolve this problem, contrast or brightness normalization removes these fluctuations so that the network can focus on vessel regions. A Gaussian kernel is used to homogenize the background \citep{Vega2015b}. \citet{Liskowski2016} normalized the patches by subtracting the mean and dividing by the standard deviation of their elements. Similar to principal component analysis (PCA), whitening removes the universal correlations among neighboring pixels of the image. These correlations are redundant for training the network. \citet{Liskowski2016} used zero-phase component analysis to process the image data using rotations, resulting in whitened data that are as close as possible to the original data. Whitening pre-processing has also been used in \citep{Marques2016}. Training a CNN for a computer-vision task requires tens of thousands of natural images. However, for vessel tracking, the number of training images is relatively small, which may cause an over-fitting problem.
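The per-patch normalization and ZCA whitening steps described above can be sketched as follows; this illustrative Python uses random data, and the epsilon constants are assumptions for numerical stability:

```python
import numpy as np

def normalize_patches(patches):
    """Per-patch normalization: subtract the mean and divide by the
    standard deviation of each patch's elements (rows = patches)."""
    mean = patches.mean(axis=1, keepdims=True)
    std = patches.std(axis=1, keepdims=True) + 1e-8
    return (patches - mean) / std

def zca_whiten(patches, eps=1e-5):
    """Zero-phase component analysis: decorrelate pixels while keeping
    the result as close as possible to the original data."""
    x = patches - patches.mean(axis=0)
    cov_in = x.T @ x / x.shape[0]
    u, s, _ = np.linalg.svd(cov_in)
    w = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T   # symmetric ZCA matrix
    return x @ w

rng = np.random.default_rng(0)
p = rng.normal(size=(100, 4))                 # 100 patches of 4 pixels each
white = zca_whiten(p)
cov = white.T @ white / white.shape[0]
print(np.round(cov, 2))  # approximately the identity matrix
```

After whitening, the empirical pixel covariance is close to the identity, i.e., the redundant inter-pixel correlations have been removed.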
Data augmentation alleviates this shortage of training images by increasing the number of training samples using image-transformation approaches (e.g., rotation, scaling, flipping, and mirroring). These transformations yield the desired invariance and robustness properties of the resulting network. The augmentation methods vary according to the task. \citet{Charbonnier2017} generated patches at four orientation angles and obtained additional examples using horizontal flipping. In addition to the rotation and flipping used in \citep{Zhao2018f}, scaling and mirror operations were used in \citep{Huang2018} and \citep{Guo2019}, respectively. Moreover, random elastic deformation was used to deform the training set \citep{Fan2018, Livne2019}. \citet{Zreik2018a} observed signs of over-fitting when training was implemented without data augmentation. The results in \citep{Lin2019} show that data augmentation is essential to achieve excellent performance. The application of generative adversarial networks (GANs) is a highly promising augmentation approach \citep{Costa2018, Yu2019}. By simply sampling a multi-dimensional normal distribution, \citet{Costa2018} employed an encoder-decoder network to generate realistic vessel networks and extend the training samples. \citet{Yu2019} used a shape-consistent GAN to generate synthetic images that maintain the background of coronary angiography and preserve the vascular structures of retinal vessels. This model can transfer the knowledge of vessel segmentation from a public dataset to an unlabeled dataset. \subsubsection{Sampling strategies}\label{Strategies of sampling} CNNs fall into two categories according to their input: patch-based and image-based networks. The former extracts numerous patches from the image data as training samples of the network, whereas the latter considers the entire image as a training sample. For patch-based networks, an efficient extraction strategy should be adopted.
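A patch-based sampling step of the kind just described can be sketched minimally; square patches around interior points are an illustrative assumption, and boundary handling is omitted for brevity:

```python
import numpy as np

def extract_patch(image, center, size=3):
    """Extract a square patch centred on a point of interest (e.g. a
    centerline point). Assumes the centre lies far enough from the
    image border for a full patch to fit."""
    r = size // 2
    y, x = center
    return image[y - r:y + r + 1, x - r:x + r + 1]

img = np.arange(25).reshape(5, 5)
patch = extract_patch(img, (2, 2))
print(patch.shape)   # (3, 3)
```

In a real pipeline the centres would be sampled around vessels of interest (positive samples) and the background (negative samples), as in the strategies cited below.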
\citet{Nardelli2018a} extracted the patches from the CT image around the vessel of interest. \citet{Wolterink2019} directly selected the positive and negative samples focused on the vessel centerlines. Instead of extracting patches, images can be directly input into the network to optimize the model; examples can be found in \citep{Hu2018, Mo2017}. More flexibly, \citet{Girard2019} used a scalable encoding-decoding CNN model that can take either the entire image or patches of any size as input. In addition to the samples directly extracted from the image, several works selected the training samples from enhanced images to focus on problems originating from vessel tracking. For example, \citet{Nardelli2017, Nardelli2018} extracted patches from the bronchus image enhanced by the scale-space particle approach. \citet{Hajabdollahi2018} trained the CNN on enhanced gray-scale images. \citet{Zhao2018g} selected the patches from both the original and tube-level label images. To directly reflect the stenosis of vessels, \citet{Zreik2018a} collected the patches from multi-planar reformatted images. \subsubsection{Formulation of loss functions}\label{Formulation of loss functions} CNNs obtain the optimal network weights by optimizing a loss function. Cross-entropy-based loss functions are generally used in vessel-tracking tasks \citep{Dasgupta2017, Nardelli2018a, Wu2018b, Jin2019, Dharmawan2019, Mo2017, Guo2019, Lin2019}. However, the vascular regions in the images are considerably smaller than the non-vascular regions, thereby inducing an imbalance problem for loss-function optimization. Data-imbalance problems occur in image segmentation, where the number of foreground pixels is usually much smaller than that of background pixels. To resolve the imbalance problem, some researchers formulated weighted schemes for the categorical cross-entropy loss functions \citep{Li2018b, Zhang2018g, Hu2018}.
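Two imbalance-aware losses used in this literature, class-weighted cross-entropy and a soft Dice loss, can be sketched as follows; the class weights and the smoothing constant are illustrative assumptions, not values from the cited works:

```python
import math

def weighted_cross_entropy(y_true, y_prob, w_pos=10.0, w_neg=1.0):
    """Class-weighted binary cross-entropy: up-weighting the rare vessel
    class counters the foreground/background imbalance."""
    eps = 1e-12
    total = 0.0
    for t, p in zip(y_true, y_prob):
        total += -(w_pos * t * math.log(p + eps)
                   + w_neg * (1 - t) * math.log(1 - p + eps))
    return total / len(y_true)

def dice_loss(y_true, y_prob, smooth=1.0):
    """Soft Dice loss: 1 - DSC computed on predicted probabilities."""
    inter = sum(t * p for t, p in zip(y_true, y_prob))
    total = sum(y_true) + sum(y_prob)
    return 1.0 - (2.0 * inter + smooth) / (total + smooth)

y    = [1, 0, 0, 0]              # one vessel pixel among four
good = [0.9, 0.1, 0.1, 0.1]      # prediction close to the ground truth
bad  = [0.1, 0.9, 0.1, 0.1]      # prediction missing the vessel pixel
print(dice_loss(y, good) < dice_loss(y, bad))  # True
```

Both losses penalize the prediction that misses the single vessel pixel far more heavily, which is the behavior the weighting schemes above aim for.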
These loss functions incorporate coefficients to reduce the importance of well-classified examples and focus on problematic samples. Another solution that has been employed is the use of Dice-coefficient-based loss functions \citep{Livne2019, Soomro2019b, Kitrungrotsakul2019}. To balance the classes of voxels, weighted schemes of the Dice coefficient are employed to formulate the loss functions. \citet{Huang2018} adjusted the penalty weights of misclassified voxels to obtain higher correct-classification scores and a lower number of misclassified voxels. \citet{Lian2018a} employed a tuning parameter to determine whether precision (i.e., positive predictive value) contributes more than recall (i.e., true positive rate or sensitivity), or vice versa, during the training procedure. Alternative methods employ the squared error \citep{Li2016b, Fan2017} or more complex formulations as loss functions to train CNNs. Based on the L1 norm, \citet{Yan2018e} generated a joint loss to simultaneously train the model with the vessel-segment-level and pixel-wise losses. \citet{Jiang2019} formulated the loss function by adjusting the weights of two parts: a cross-entropy term and an L2-norm term. \vspace{-1em} \section{Evaluation issues}\label{Evaluation issues} \subsection{Metrics for performance evaluation}\label{Assessment parameters} The results of vessel tracking can be presented as key points, vessel centerlines, or label images, depending on the requirements of the clinical applications. The performance of a method is evaluated by comparing its results with the ground truth. The key points can be checked visually. For the remaining two types of results, two groups of metrics are generally used: overlap metrics and classification metrics. Overlap metrics are used to evaluate the similarity between the extracted vessels and the ground truth.
For label images, the true positive (TP), true negative (TN), false negative (FN), and false positive (FP) counts, computed with respect to the ground truth, are typically used to evaluate the vessel and non-vessel patterns; these four counts can be further formulated as accuracy (Acc), sensitivity (Se), specificity (Sp), precision (Pr), recall (Re), positive predictive value (PPV), negative predictive value (NPV), and the Dice similarity coefficient (DSC) \citep{1945Measures}. The DSC and the Hausdorff distance (HD) are widely adopted overlap metrics for assessing the similarity between label images. For centerlines, the four counts can be computed according to the point-to-point correspondence between the ground-truth and computed centerlines. They can be further formulated as the overlap (OV), overlap until the first error (OF), and overlap with the clinically relevant part of the vessel (OT). The average inside (AI) distance can also be used to describe the average distance of connections between two centerlines. The details of these metrics for assessing vessel centerlines are found in \citep{Schaap2009}. Classification metrics are derived from curves (i.e., the receiver operating characteristic (ROC) curve and the precision-recall curve) to assess a binary classifier system. The area under the ROC curve (AUC) indicates the probability that a classifier will rank a randomly chosen vessel instance higher than a randomly chosen non-vessel instance. The AUPRC metric, i.e., the area under the precision-recall curve, can also be exploited to evaluate the results of vessel tracking. \subsection{Public datasets and validation strategies}\label{Public datasets and validation strategy} Standard datasets are required for an objective evaluation of vessel-tracking methods. Here, we summarize the challenges and public datasets related to vessel tracking.
\begin{enumerate} \small \item[(1)] Retinal vessel segmentation: DRIVE\\ (\url{http://www.isi.uu.nl/Research/Databases/DRIVE/}) \item[(2)] Retinal vessel segmentation: STARE \\(\url{http://cecas.clemson.edu/~ahoover/stare/}) \item[(3)] Retinal vessel segmentation: CHASE-DB1 \\(\url{http://blogs.kingston.ac.uk/retinal/chasedb1}) \item[(4)] Retinal vessel segmentation: HRF \citep{Odstrcilik2013} \item[(5)] Retinal vessel segmentation: PRIME-FP20 \citep{Ding2020a} \item[(6)] Coronary artery stenosis detection: CASDQEF \\(\url{http://coronary.bigr.nl/stenoses}) \item[(7)] Coronary centerline extraction: CAT08\\ (\url{http://coronary.bigr.nl/centerlines}) \item[(8)] Coronary calcification identification: orCaScore \\(\url{https://orcascore.grand-challenge.org/}) \item[(9)] Coronary segmentation: ASOCA\\ (\url{https://asoca.grand-challenge.org/}) \item[(10)] Lung vessel segmentation: VESSEL12 \citep{Rudyanto2014} \item[(11)] Liver segmentation: SLIVER07\\ (\url{https://sliver07.grand-challenge.org/}) \item[(12)] Liver vessel segmentation: 3D-IRCADb \\(\url{https://www.ircad.fr/research/3d-ircadb-01/}) \end{enumerate} To validate the vessel-tracking methods, the dataset is divided into training and test groups according to different strategies, e.g., one-off train + test, leave-one-out, and k-fold cross-validation. The experiment columns in \Zxhreftbs{tab:traditional learning:retinal vessel} - \ref{tab:deep learning:other vessels} present the validation strategies of the selected methods. Moreover, \Zxhreftb{tab:typical results} presents the public datasets and selected state-of-the-art results. \begin{table*} \caption{The public datasets and selected results.
Please refer to Section \ref{Evaluation issues} for the abbreviations.} \label{tab:typical results}\vspace{-0.2cm} \centering \scriptsize \begin{tabular}{p{1.2cm}p{4cm}p{5.5cm}} \hline\hline \multirow{1}{*}{Public dataset}&\multirow{1}{*}{Selected Results}&\multirow{1}{*}{Validation strategy}\\ \hline \multirow{4}{*}{DRIVE}&Acc=0.9692 \citep{Lian2019} &20 for training, 20 for testing, one-off train + test\\ {}&AUC=0.9814 \citep{Cherukuri2020}&20 for training, 20 for testing, five-fold cross-validation\\ {}&Se=0.9382 \citep{Shin2019} &20 for training, 20 for testing, one-off train + test\\ {}&Sp=0.9861 \citep{Lian2019} &20 for training, 20 for testing, one-off train + test\\ \hline \multirow{4}{*}{STARE}&Acc=0.9740 \citep{Lian2019} &10 for training, 10 for testing, one-off train + test\\ {}&AUC=0.9882 \citep{Zhang2018g}&15 for training, 5 for testing, four-fold cross-validation\\ {}&Se=0.9598 \citep{Shin2019} &10 for training, 10 for testing, one-off train + test\\ {}&Sp=0.9916 \citep{Lian2019} &10 for training, 10 for testing, one-off train + test\\ \hline \multirow{4}{*}{CHASE-DB1}&Acc=0.9770 \citep{Zhang2018g} &21 for training, 7 for testing, four-fold cross-validation\\ {}&AUC=0.9900 \citep{Zhang2018g}&21 for training, 7 for testing, four-fold cross-validation\\ {}&Se=0.9463 \citep{Shin2019} &20 for training, 8 for testing, one-off train + test\\ {}&Sp=0.9909 \citep{Lian2019} &21 for training, 7 for testing, four-fold cross-validation\\ \hline \multirow{4}{*}{HRF}&Acc=0.9651 \citep{Jin2019} &15 for training, 30 for testing, one-off train + test\\ {}&AUC=0.9838 \citep{Shin2019}&15 for training, 30 for testing, one-off train + test\\ {}&Se=0.9546 \citep{Shin2019} &15 for training, 30 for testing, one-off train + test\\ {}&Sp=0.9823 \citep{Zhao2018f} &22 for training, 23 for testing, one-off train + test\\ \hline \multirow{4}{*}{CAT08}&AI=0.21mm \citep{Wolterink2019} &7 for training, 1 for testing, leave-one-out\\ {}&OF=0.815 \citep{Wolterink2019}&7 for training, 1 
for testing, leave-one-out\\ {}&OT=0.971 \citep{Schaap2011} &8 for training, 24 for testing, one-off train + test\\ {}&OV=0.969 \citep{Schaap2011} &8 for training, 24 for testing, one-off train + test\\ \hline\hline \end{tabular} \end{table*} \section{Conclusion and discussion}\label{Conclusion and discussion} We have reviewed the recent literature on vessel tracking, particularly methodologies that apply machine learning, including conventional machine-learning and deep-learning algorithms. Instead of reviewing the methods for a single application (e.g., retinal vessel segmentation or coronary centerline extraction) or a specific imaging modality (e.g., color images or CTA), this paper reviews the learning-based methods for tracking the vessels of various organs in different imaging modalities. Learning-based methods offer the advantage of mapping the input data into representative and discriminative vascular features. In particular, conventional learning-based methods learn vessel-dedicated information from numerous hand-crafted features. They can employ different classifiers to distinguish the vessels from an analogous background according to the learnt features. Moreover, these techniques can describe the vessels with learnt vessel-dedicated parameters using statistical models. In contrast, based on various CNN architectures, deep-learning-based methods leverage hierarchical features that can encode global and local vascular structures. Owing to the complex morphologies of objects and image characteristics, vessel tracking is a challenging task. Thin vessels are often missed in vessel-segmentation tasks because of their complex structures and small sizes; high-quality local textures are required to distinguish them from artifacts and noise. Moreover, vessels with uncertain branches and tortuosity are difficult to track because of their complex branch connections.
More auxiliary information (e.g., key points and orientations) should be obtained to reconstruct these vessels. In addition, surrounding tissues and image noise may interfere with vessel tracking because of their positions and image intensities. To reduce this interference, a series of pre-processing techniques should be considered. The recent literature on vessel tracking mainly reports advanced machine-learning methodologies in view of their considerable modeling capacity and potential for extracting effective features. Nevertheless, the following two problems should be considered when a new algorithm for this task is developed. First, learning-based methods may deliver limited performance in tracking a complete vessel because of the lack of high-level vascular features (e.g., branches and connections). The models can be trained based on hand-crafted features (Section \ref{Vessel tracking using conventional machine learning}) or hierarchical features (Section \ref{Vessel tracking based on deep learning}). However, in clinical practice, developing vessel-tracking methods may be challenging because of the problems involved in detecting abnormal vessels or vessels with pathologies. To describe these vessels in detail, the extraction of high-level features is required for future learning-based tracking methods. The second problem is related to the strategies employed to deal with limited training data, because this insufficiency generally leads to a poor generalization capacity of the models. Deep learning has achieved considerable success in many applications where public datasets with annotations are available. However, in the field of medical image analysis, overcoming the limitation of training data is still a major challenge. Currently, data augmentation is a common strategy employed to alleviate this issue. Moreover, weakly supervised and self-supervised learning are potential approaches to resolve the problem of lacking annotated data.
Hence, more public databases, for example via open challenges, are expected to become available in the future to promote learning-based vessel-tracking algorithms. \section*{Acknowledgment} This work was funded by the National Natural Science Foundation of China (grant nos. 61971142 and 62011540404) and the development fund for Shanghai talents (no. 2020015). \bibliographystyle{cas-model2-names}\biboptions{authoryear}
\section{Introduction} In the Internet era, large global e-commerce portals such as Amazon and AliExpress serve customers all over the world and contain billions of items. It is particularly challenging for their recommender systems to meet the enormous needs of users with different preferences. Personalization techniques are critical for these systems, since modeling users' interests more precisely can improve the user experience and generate more business value. In practice, the log data collected from e-commerce portals can be naturally divided into different scenarios (e.g., by country, city, or culture). These scenarios are heterogeneous, and there may be complex correlations between them, such as large differences in users' interests and preferences. Conversely, in some cases users may also share similar interests (e.g., in countries with similar geographic locations). There is a large body of research on recommender systems~\cite{zhang2017joint, lian2018towards}. Most of it is based on deep neural networks (DNNs) and recurrent neural networks (RNNs). More recently, attention mechanisms~\cite{zhou2018deep, zhou2019deep, feng2019deep, chen2019behavior} have been introduced for better performance. Many of these techniques have been successfully deployed in real-world applications~\cite{he2014practical, covington2016deep, borisyuk2017lijar}. However, existing recommendation methods largely ignore the complex correlations between multiple scenarios and simply apply one general model to all scenarios, which may be sub-optimal because valuable information is not explicitly captured across scenarios. Regarding the above issues, an intuitive approach is to build a separate model for each scenario using only its own data. However, this may cause insufficient training for small-traffic scenarios and also ignores the correlations between scenarios. 
In addition, Multi-Task Learning (MTL) may be a feasible solution. By treating each scenario as a separate task, an MTL scheme can be used to model the correlations between multiple scenarios; for example, MMoE~\cite{ma2018modeling} implicitly integrates information between relevant scenarios through its MoE structure and gate units. Different from existing research, this work focuses on modeling scenario awareness in an explicit manner. We propose \textbf{S}cenario-\textbf{a}ware \textbf{M}utual \textbf{L}earning (SAML), which aims to learn both global and scenario-specific representations across multiple scenarios simultaneously. The global representation extracts shared knowledge from various scenarios, while the scenario-specific representation learns a representation for each scenario individually. In practice, we first build both a global and a scenario-specific subspace for the embedding and attention modules, and then combine the features from each subspace to construct two types of features, named scenario-independent and scenario-dependent features. Second, an auxiliary network and a multi-branch network are built upon these features to learn shared knowledge across scenarios and scenario-specific representations, respectively. Finally, a novel mutual unit is designed and incorporated into the multi-branch network to capture correlations among multiple scenarios; it not only maintains the dominance of the current scenario but also leverages knowledge from similar scenarios adaptively. Extensive experiments have been conducted to verify the effectiveness of the scenario-aware mutual learning (SAML) method. Evaluations of Click-Through Rate (CTR) prediction on public and industrial datasets show that the proposed SAML achieves better performance for multi-scenario recommendation than state-of-the-art recommendation methods (which lack an explicit perception of scenario). 
Furthermore, ablation studies and visualization analysis on real-world industrial datasets demonstrate that the proposed SAML generalizes effectively. The main contributions of this paper are summarized as follows: \begin{itemize} \item Considering the feature diversity across multiple scenarios, we transform the embedding and attention modules to map the features into both a global and a scenario-specific subspace, and then combine the feature vectors in the corresponding subspaces to construct scenario-independent and scenario-dependent features, respectively. \item We propose to learn scenario-independent and scenario-dependent features separately; thus an auxiliary network and a multi-branch network are built to learn deep representations from the corresponding feature spaces. \item We propose to model the complex correlations between multiple scenarios in an explicit manner, introducing a mutual unit into the multi-branch network to simultaneously model the differences and similarities between scenarios; it maintains the dominance of the current scenario while adaptively leveraging knowledge from similar scenarios. \item We conduct extensive experiments on both public and industrial datasets. Experimental results show that our proposed SAML produces more accurate results in the multi-scenario recommendation task and also generalizes effectively across scenarios. \end{itemize} The remaining parts of this paper are organized as follows: Section 2 introduces related work. Section 3 describes and analyzes the design of the proposed SAML in detail. Experimental results and the corresponding analysis are presented in Section 4, and the conclusion in Section 5. \section{Related Work} In this section, we introduce existing studies of feature representation and multi-task learning in recommender systems, as well as deep mutual learning. 
\subsection{Feature Representation in Recommendation} In recommender systems, feature representation plays an important role in estimating the probability of user events, e.g., clicks and purchases. Enormous effort has been put into modeling feature interactions and varied sequential behaviors. Wide\&Deep~\cite{cheng2016wide} combines the benefits of linear and deep representations, serving as a good solution for this task. DeepFM~\cite{guo2017deepfm} replaces the wide component of Wide\&Deep with factorization machines (FM) to model second-order feature interactions. DCN~\cite{wang2017deep} further introduces a multi-layer residual~\cite{he2016deep} structure to learn high-order feature representations. Besides, users' sequential behavior reflects their dynamic and evolving interests and has been proven effective for user interest estimation. DIN~\cite{zhou2018deep} applies an attention mechanism to learn the representation of users' historical behaviors with respect to the target item. DIEN~\cite{zhou2019deep} further introduces an auxiliary loss and AUGRU to capture the evolution of users' interests. DSIN~\cite{feng2019deep} divides users' behaviors into different sessions and uses self-attention to extract users' interests in each session. Most recently, BST~\cite{chen2019behavior} deploys the Transformer~\cite{vaswani2017attention} in e-commerce recommendation and verifies its effectiveness. However, these existing methods are mainly designed without consideration of multiple scenarios; the learned feature representation is thus taken from a global perspective and lies in a single homogeneous representation space, e.g., a unified feature embedding and attention vector expressing feature attributes and user interests across all situations. This may become a bottleneck for distinguishing the interests of different users in multiple (heterogeneous) scenarios. 
\subsection{MTL in Recommendation} MTL~\cite{caruana1997multitask, argyriou2008spectral} has been actively researched in recommender systems, and numerous deep learning applications benefit from multi-objective optimization. DUPN~\cite{ni2018perceive} proposes a robust and practical representation learning framework that learns shared user representations end-to-end across multiple e-commerce tasks. Considering the sequential pattern of user actions, ESMM~\cite{ma2018entire} introduces two auxiliary networks for the CTR and CTCVR tasks, tackling the \textit{sample selection bias} and \textit{data sparsity} problems. ESM$^2$~\cite{wen2019conversion} further decomposes post-click behavior for modeling the CVR task in an e-commerce recommender system. MMoE~\cite{ma2018modeling} uses a computationally efficient Mixture-of-Experts (MoE)~\cite{jacobs1991adaptive} structure as the shared bottom, together with light-weight gating networks, to model task relationships, and has been shown to better handle settings where tasks are less related. In the context of our problem, one could build individual networks for each scenario on top of a shared-bottom structure and perform multi-objective optimization following the classical MTL methodology, thereby modeling the complex correlations between multiple scenarios. However, when scenarios in a recommender system share the same item candidates and label space, the consistency and discrepancy of the scenarios are tightly coupled, so the sophisticated relationships between different scenarios are hard to capture. \subsection{Deep Mutual Learning} Deep mutual learning~\cite{zhang2018deep} was proposed for knowledge distillation~\cite{hinton2015distilling}; it builds an ensemble of student networks that teach each other with distillation losses based on Kullback-Leibler divergence. 
Inspired by this learning strategy, we explore a different idea to solve the multi-scenario problem, with a novel mutual unit that collaboratively learns scenario correlations in recommender systems. The essential difference is that our dataset shares a consistent label domain, and the mutual unit uses a tailored similarity and gate mechanism to control the learning process, instead of an indirect approximation of the data distribution by distillation. \section{Methods} In this section, we elaborate on the design of the Scenario-aware Mutual Learning (SAML) model. First, we recapitulate the basic structure of deep-learning-based recommendation models from two aspects: feature representation and the multi-layer perceptron. We then introduce the overall structure of SAML corresponding to these two aspects. \subsection{Feature Representation} \label{sec 3.1} There are four categories of features in our recommender system: \textit{User Profile}, \textit{Item Profile}, \textit{User Behavior} and \textit{Context}. Each category has several fields: \textit{User Profile} contains \textit{user\_id}, \textit{gender}, \textit{age}, etc.; \textit{Item Profile} contains \textit{item\_id}, \textit{shop\_id}, \textit{price}, etc.; \textit{User Behavior} is the sequential list of the user's interacted items with corresponding features such as \textit{item\_id}, \textit{shop\_id}, etc.; \textit{Context} contains \textit{time}, \textit{matchtype}, \textit{scenario} and so on. \subsubsection{Embedding Module} Features in each field are numerical values or categorical ids. For numerical features, we use normalization to transform them to the same scale. For categorical features, which are typically represented by one-hot vectors, we use embedding techniques to transform them into low-dimensional dense vectors. 
For example, the embedding matrix of \textit{item\_id} can be represented by $E_{item} = [e_1;e_2;...;e_N] \in R^{N \times K}$, where $N$ is the total number of different items, $K$ is the embedding dimension, and $e_{i} \in R^{K}$ is the embedding vector of the $i$-th item. \subsubsection{Attention Module} Most advanced recommender systems use attention mechanisms to capture user interests, especially in the representation learning of user sequential behavior. We follow BST~\cite{chen2019behavior} and use multi-head self-attention~\cite{vaswani2017attention} to learn a deep representation of user interests based on the \textit{User Behavior} features, which can be formulated as follows: \begin{equation} \label{for:multi-head attention} \text{MultiHead}(Q,K,V) = \text{Concat}(head_1,...,head_H) W^O \\ \end{equation} \begin{equation} \begin{split} head_i &= \text{Attention}(QW_i^Q,KW_i^K,VW_i^V) \\ &= \text{Softmax}(\frac{QW_i^{Q} (KW_i^K)^T}{\sqrt{d_k}})VW_i^V \\ \end{split} \end{equation} where $Q$, $K$, $V$ are embedding matrices of the \textit{User Behavior} features, converted through linear projections, $H$ denotes the number of attention heads, $d_k$ is the dimension of the key vectors, and $W_i^Q, W_i^K, W_i^V, W^O$ are all linear projection matrices. \subsection{Multi-layer Perceptron (MLP)} As in most recent deep models for recommender systems, the outputs of the \textit{Embedding Module} and \textit{Attention Module} are concatenated and then fed into an MLP with fully connected layers for the final prediction. The widely used loss function in recommendation is the negative log-likelihood, defined as follows: \begin{equation} \begin{aligned} \mathcal{L} &= -\frac{1}{N}\sum_{(x,y)\in D}(y \log p(x) + (1-y) \log (1-p(x))) \label{formula:loss} \end{aligned} \end{equation} where $x$ is a training sample and $y \in \{0,1\}$ is the corresponding label, indicating whether the user clicks the target item. 
$p( \cdot )$ is the predicted output of the model. \begin{figure*}[ht] \centering \includegraphics[scale=1,width=\textwidth]{images/overall_HD.png} \caption{The overview of our proposed SAML model. The left part shows the model structure; the right part gives a detailed description of the modules involved. From the bottom up, SAML has two main components: scenario-aware feature representation and the scenario-mutual network. In the first component, the raw features first go through the embedding module to obtain global and scenario-specific embedding vectors. Part of the features then enter the attention module to compute attention weights, which are multiplied with the embeddings of the user behaviors to obtain the attention vectors. Finally, the feature vectors are concatenated and fed into the two sub-networks of the scenario-mutual network, respectively. A detailed description of the scenario-mutual network is given in Section \ref{sec:Scenario-mutual Network}. } \label{fig:overall-HD} \end{figure*} \subsection{Scenario-aware Feature Representation} Most existing recommendation methods ignore the complex correlations between multiple scenarios. We first build both a global and a scenario-specific subspace for the embedding and attention modules, and then combine the features from each subspace respectively, constructing two types of features that we name scenario-independent and scenario-dependent features. \subsubsection{Embedding Module} As depicted in Figure \ref{fig:overall-HD}, the embedding module explicitly embeds each feature into both the global and a scenario-specific subspace in parallel, and then combines the feature vectors from each subspace to construct two types of embedding vectors. The motivation is to make the features aware of both the global and the specific scenarios. 
This is more effective than directly increasing the embedding size: no matter how large the dimension, information from the various scenarios remains mixed together without distinction, whereas our method draws an explicit distinction between scenarios. The ablation study in Section \ref{sec:Q4} also confirms this. \subsubsection{Attention Module} On top of the embedding module, the attention module is subsequently enriched to capture user interests both globally and in specific scenarios. Following formula \ref{for:multi-head attention}, we define $\text{MultiHead}(Q_{g},K_{g},V_{g})$ and $\text{MultiHead}(Q_{l},K_{l},V_{g})$, where the subscripts $g$ and $l$ indicate the source of the embedding vectors (i.e., global and scenario-specific). Note that we share the same value embedding $V_{g}$ in both, but the attention weights are computed for each scenario separately, which bridges universal and specific knowledge while leveraging the rich diversity of user behaviors globally. \subsection{Scenario-mutual Network} \label{sec:Scenario-mutual Network} The scenario-mutual network contains two subnetworks: an auxiliary network and a multi-branch network, which learn from the scenario-independent and scenario-dependent features in parallel. In addition, a novel mutual unit is incorporated into the multi-branch network to model the complex correlations (e.g., differences and similarities) between multiple scenarios in an explicit manner. We now introduce these three components in detail. \subsubsection{Auxiliary Network} In MMoE, all experts learn the shared knowledge together; if definite domain knowledge exists for each task, integrating it into an expert is not convenient in practice. We therefore build the auxiliary network on top of the scenario-independent features and use it to learn shared knowledge from a global perspective. 
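The dual-subspace lookup described above can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the vocabulary size, the dimensions (12 global, 4 scenario-specific, matching the public-dataset settings reported later), and the function name \texttt{scenario\_aware\_embed} are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS, N_SCENARIOS = 1000, 5   # assumed vocabulary and scenario counts
D_GLOBAL, D_LOCAL = 12, 4        # global / scenario-specific embedding sizes

# One global table shared by all scenarios, plus one table per scenario.
emb_global = rng.normal(size=(N_ITEMS, D_GLOBAL))
emb_local = rng.normal(size=(N_SCENARIOS, N_ITEMS, D_LOCAL))

def scenario_aware_embed(item_ids, scenario_id):
    """Embed each id into both subspaces: the global vectors feed the
    scenario-independent features, while the scenario-specific vectors
    feed the scenario-dependent features."""
    v_global = emb_global[item_ids]               # (batch, D_GLOBAL)
    v_local = emb_local[scenario_id][item_ids]    # (batch, D_LOCAL)
    return v_global, v_local

v_g, v_l = scenario_aware_embed(np.array([3, 42]), scenario_id=2)
print(v_g.shape, v_l.shape)  # (2, 12) (2, 4)
```

Keeping the two subspaces separate (rather than concatenating one larger table) is what lets the downstream auxiliary and multi-branch networks consume scenario-independent and scenario-dependent features independently.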
Specifically, we not only obtain its final output (for supervised learning), but also extract its hidden layer representation and use it as an additional input to the multi-branch network. As shown in Figure~\ref{fig:overall-HD}, knowledge propagates from the auxiliary network to the multi-branch network unidirectionally. The advantage is that we can extract universal, scenario-independent knowledge to enhance the global perception of each specific scenario. An auxiliary negative log-likelihood loss is used to supervise the learning process. \subsubsection{Multi-branch Network} As shown in Figure~\ref{fig:overall-HD}, the input of each layer in the multi-branch network contains two parts: the output of the previous layer and the hidden layer representation transferred from the auxiliary network. Taking the $i$-th branch and $l$-th layer as an example, the process can be formulated as \begin{equation} \begin{split} {V_a^l} &= \delta (V_a^{l-1}W_{a}^l + b_a^l) \\ {V_{m_{i}}^l} &= \delta (\lbrack V_{m_{i}}^{l-1}, V_{a}^l \rbrack W_{m_{i}}^{l} + b_{m_{i}}^l) \\ \end{split} \end{equation} where $ V_a^l $ is the $l$-th layer output of the auxiliary network, $ V_{m_{i}}^l $ is the $l$-th layer output of the $i$-th branch in the multi-branch network, $W_{a}^l$, $b_a^l$, $W_{m_{i}}^{l}$, $b_{m_{i}}^l$ are the corresponding weights and biases, and $\delta$ is the activation function. To make each branch clearly correspond to a specific scenario, and to ensure it is optimized only by the data of that scenario, we add a mask on the connections between the networks to stop gradient back-propagation. Thus the gradient of an instance $t$ belonging to scenario $S_i$ will only back-propagate to update the parameters of branch $i$. 
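The layer update above (the auxiliary hidden state concatenated into every branch) can be sketched in NumPy as follows. The layer widths, batch size, and the choice of ReLU as $\delta$ are assumptions for illustration; training and gradient masking are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0.0)  # assumed activation delta

D_IN, D_HID, N_BRANCH = 32, 16, 3    # illustrative sizes

# Auxiliary-network weights, plus one weight set per branch.  Each branch
# consumes its own previous output concatenated with the auxiliary V_a^l.
W_a, b_a = rng.normal(size=(D_IN, D_HID)), np.zeros(D_HID)
W_m = rng.normal(size=(N_BRANCH, D_IN + D_HID, D_HID))
b_m = np.zeros((N_BRANCH, D_HID))

def layer(V_a_prev, V_m_prev):
    """One layer: V_a^l = relu(V_a^{l-1} W_a + b_a) and
    V_{m_i}^l = relu([V_{m_i}^{l-1}, V_a^l] W_{m_i} + b_{m_i})."""
    V_a = relu(V_a_prev @ W_a + b_a)
    V_m = [relu(np.concatenate([V_m_prev[i], V_a], axis=-1) @ W_m[i] + b_m[i])
           for i in range(N_BRANCH)]
    return V_a, V_m

V_a, V_m = layer(rng.normal(size=(4, D_IN)),
                 [rng.normal(size=(4, D_IN)) for _ in range(N_BRANCH)])
print(V_a.shape, V_m[0].shape)  # (4, 16) (4, 16)
```

Note how the flow is one-directional: the branches read the auxiliary activations, but nothing flows back, mirroring the stop-gradient mask described in the text.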
Finally, the total loss can be calculated as: \begin{equation} \begin{split} \mathcal{L}_{total} &= \mathcal{L}_{target} + \mathcal{L}_{aux} \\ &= \sum_{i}^{N} \mathcal{L}_{i}^{t} \cdot {I}_{i}^{t} + \mathcal{L}_{aux} \\ \textstyle{where} & \quad {I}_{i}^{t} = \left\{\begin{array}{lr} 1 \qquad \scriptstyle{if\ t\ \in S_{i}} \\ 0 \qquad \scriptstyle{otherwise} \\ \end{array} \right. \end{split} \end{equation} where $\mathcal{L}_{target}$ and $\mathcal{L}_{aux}$ are the losses of the multi-branch network and the auxiliary network respectively, $N$ is the number of branches, $t$ is a training sample, $S_{i}$ is the data set of the $i$-th scenario, $\mathcal{L}_{i}^{t}$ is the loss of the $i$-th branch on sample $t$, and ${I}_{i}^{t}$ is an indicator used to constrain the gradient. \subsubsection{Mutual Unit} By combining the learned representations of the auxiliary and multi-branch networks, we can take advantage of both global and scenario-specific features. However, the branches are independent of each other, which means we in fact isolate the scenarios from one another and ignore the fact that some scenarios are similar to some extent. To simultaneously model the differences and similarities between scenarios, we introduce a novel mutual unit that enhances representation learning by exploiting the similarity among multiple scenarios, and also alleviates insufficient training on some scenarios. As shown in Figure~\ref{fig:overall-HD}, the mutual unit uses the hidden layer output $V_i$ of the $i$-th branch to compute the cosine similarity with the other $V_{j,j \ne i}$, capturing the similarities between different scenarios. A light-weight gate network is designed to control the degree of learning from other similar scenarios. 
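The similarity-and-gate mechanism just described can be sketched as follows. This is a hedged NumPy sketch under assumed shapes: the similarity coefficients here are a plain softmax over cosine similarities, a simplification of the paper's exact normalization, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def mutual_unit(V, W, b):
    """V: list of per-branch hidden vectors V_i.  Each branch i is
    enhanced by a gated, similarity-weighted sum of the other branches:
        M_i = V_i + g_i * sum_{j != i} alpha_ij * V_j."""
    N, M = len(V), []
    for i in range(N):
        g_i = sigmoid(W[i] @ V[i] + b[i])          # scalar gate for branch i
        cos = np.array([cosine(V[i], V[j]) for j in range(N) if j != i])
        alpha = softmax(cos)                       # simplified normalization
        others = [V[j] for j in range(N) if j != i]
        M.append(V[i] + g_i * sum(a * v for a, v in zip(alpha, others)))
    return M

D, N = 8, 3
V = [rng.normal(size=D) for _ in range(N)]
W, b = rng.normal(size=(N, D)), np.zeros(N)
M = mutual_unit(V, W, b)
print(len(M), M[0].shape)  # 3 (8,)
```

When the gate $g_i$ saturates at 0, each $M_i$ collapses back to $V_i$, which is exactly the degenerate independent-branch case noted after the formulas.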
The learning procedure can be defined as follows: \begin{equation} {M_i} = V_i + g_i *\sum_{j=1,j \ne i}^{N}(\alpha_{ij}*V_j) \end{equation} \begin{equation} g_i = \text{Sigmoid}(W_i V_i + b_i) \end{equation} \begin{equation} \alpha_{ij} = \text{Softmax}(\frac{\cos{<V_i, V_j>}}{\sum_{j=1,j\ne i}^{N} {\cos{<V_i, V_j>}}}) \end{equation} where $V_i \in \mathbb{R}^{D}$, $D$ is the dimension of the hidden layer output of each branch, and $g_i \in \mathbb{R}$ is the gate coefficient learned from $V_i$. $\alpha_{ij} \in \mathbb{R}$ is the normalized similarity coefficient between scenarios $i$ and $j$, and $W_i \in \mathbb{R}^{D}$, $b_i \in \mathbb{R}$ are the linear matrix and bias of the gate network. Finally, the $M_{i}$ of each branch is sent to the next layer, benefiting from the assistance of similar scenarios. This has two advantages: (1) it maintains the dominance of the current branch to the greatest extent, so it can accurately model the differences between scenarios; (2) it can adaptively leverage knowledge from similar scenarios to enhance itself, and vice versa. Note that when the gate coefficient $g$ equals 0, the similarity between scenarios is not considered, and the network degenerates into a multi-branch network with independent branches. \section{Experiments and analysis} In this section, we present our experiments in detail, including datasets, competitors, experimental setup, evaluation metrics, and the corresponding analysis. The experiments are intended to answer the following questions: \begin{itemize} \item \textbf{Q1}: How does our proposed model SAML compare with state-of-the-art methods on the recommendation task? \item \textbf{Q2}: Does SAML really help to improve the recommendation results in each scenario? \item \textbf{Q3}: How effective are the critical technical designs in SAML? \item \textbf{Q4}: How do different experimental settings (e.g., embedding size, number of attention heads, etc.) 
influence the performance of SAML? \item \textbf{Q5}: How does SAML provide effective recommendation results intuitively? \end{itemize} \subsection{Datasets} We conduct experiments on a public and an industrial dataset, both collected from real-world e-commerce platforms. Table \ref{tab:dataset table} summarizes the statistics of the datasets used in this paper. \textbf{Public Dataset\footnote{\url{https://tianchi.aliyun.com/dataset/dataDetail?dataId=56}}.} This is a public dataset released by Alimama, an online advertising platform in China. The dataset consists of 8 days of ad display/click logs from 2017-05-06 to 2017-05-12. We use the first 7 days for training and the last day for testing, and set the behavior sequence length to 15. We filter out samples with missing user profiles, and then divide the scenarios according to \textit{City\_level}. \textbf{Industrial Dataset.} This is an industrial dataset collected from the online recommender system of AliExpress, a cross-border e-commerce platform. Logs from 2019-08-24 to 2019-08-30 are used for training and 2019-08-31 for testing; users' 30 most recent behaviors are also recorded in the logs. This dataset covers more than two hundred countries, whose performance varies greatly, so we divide the scenarios according to \textit{Country\_id}. \subsection{Competitors} To evaluate the performance of the proposed method, we compare SAML with the following models: \begin{itemize} \item \textbf{Wide\&Deep}\cite{cheng2016wide} contains a wide part for memorization and a deep part for generalization; we implement the wide part by linear regression (LR) and the deep part by an MLP. \item \textbf{DeepFM}\cite{guo2017deepfm} replaces LR with factorization machines (FM) and combines them with an MLP to model low- and high-order feature interactions. The two components share the embedding space, and their outputs are summed for the final prediction. 
\item \textbf{DCN}\cite{wang2017deep} introduces a novel cross network that is more efficient in learning certain bounded-degree feature interactions. Our implementation follows the same structure as the original paper and stacks only three cross layers. \item \textbf{DIN}\cite{zhou2018deep} represents user interest with regard to the target item by adaptively learning attention weights. \item \textbf{MMoE}\cite{ma2018modeling} is a multi-task learning approach that uses Multi-gate Mixture-of-Experts to model task relationships from data. We implement each expert with a two-layer MLP to learn representations from multiple scenarios. \item \textbf{BST}\cite{chen2019behavior} utilizes the powerful Transformer model to capture the sequential signals underlying users' behavior sequences for recommendation. We regard it as the base model for comparison in this paper. \end{itemize} \subsection{Experimental Setup} We implement our experiments on a distributed TensorFlow framework\footnote{\url{https://www.aliyun.com/product/bigdata/product/learn}}. All competitors in the experiments use the ReLU activation function and the Adam~\cite{kingma2014adam} optimizer. For each dataset, the settings are as follows: \textbf{Public settings.} The learning rate is tuned and set to 5e-4, and the mini-batch size is set to 128. The hidden layer sizes of the MLPs involved are set to 128 $\times$ 64. The sizes of the global embedding and the scenario-specific embedding for each attribute are set to 12 and 4, respectively. The number of attention heads is set to 4. Corresponding to the \textit{City\_level} attribute, the scenarios are divided into 5 parts. \textbf{Industrial settings.} The learning rate is tuned and set to 1e-4, and the mini-batch size is set to 1024. The hidden layer sizes of the MLPs involved are set to 256 $\times$ 128. The sizes of the global embedding and the scenario-specific embedding for each attribute are set to 8 and 2, respectively. The number of attention heads is set to 4. 
Corresponding to the \textit{Country\_id} attribute, the top 9 countries account for more than sixty percent of the traffic; we therefore select the 9 countries with the highest traffic volume and group the rest into one. \subsection{Evaluation Metrics} \textbf{AUC.} We use AUC (Area under the ROC Curve) as the metric for measuring model performance. It is defined as: \begin{equation} {AUC} = \frac{1}{|D^+||D^-|}\sum_{x^+ \in D^+} \sum_{x^- \in D^-}{I}(f(x^+)>f(x^-)) \end{equation} where $D^+$ and $D^-$ denote the collections of positive and negative samples respectively, $|D^+|$ and $|D^-|$ denote the numbers of samples in $D^+$ and $D^-$, $f(\cdot)$ is the output of the model, and $I(\cdot)$ is the indicator function. \textbf{RelaImpr.} We follow~\cite{yan2014coupled} and introduce the RelaImpr metric to measure relative improvement over models. For a random guesser, the value of AUC is 0.5. Hence RelaImpr is defined as: \begin{equation} {RelaImpr} = \left(\frac{AUC(measured~model)-0.5}{AUC(base~model)-0.5}-1 \right) \times 100\% \end{equation} \begin{table} \centering \caption{STATISTICS OF PUBLIC AND INDUSTRIAL DATASETS} \label{tab:dataset table} \begin{tabular}{c r r r r r} \toprule Dataset & User & Item & Click & Conversion & Samples \\ \midrule Public & 1.32M & 1.08M & 1.23M & - & 24.6M \\ Industrial & 35.9M & 111M & 307M & 1.90M & 2.85B \\ \bottomrule \end{tabular} \end{table} \subsection{Overall Performance (Q1)} We conduct experiments on multi-scenario recommendation on both the public and the industrial dataset. The corresponding results are presented in Table \ref{tab:CTRCVR on AEAM}; the comparison models are trained on data from all scenarios, and the overall performance is then tested. 
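Both metrics follow directly from their definitions. The sketch below uses made-up toy scores for the AUC and checks RelaImpr against one entry of the results table (SAML vs. the BST base on the industrial dataset).

```python
def auc(scores_pos, scores_neg):
    """Pairwise AUC: the fraction of (positive, negative) score pairs
    that the model ranks correctly."""
    wins = sum(1 for p in scores_pos for n in scores_neg if p > n)
    return wins / (len(scores_pos) * len(scores_neg))

def rela_impr(auc_measured, auc_base):
    """Relative improvement over a base model, anchored at the 0.5 AUC
    of a random guesser, in percent."""
    return ((auc_measured - 0.5) / (auc_base - 0.5) - 1.0) * 100.0

pos, neg = [0.9, 0.8, 0.4], [0.7, 0.3, 0.2]   # toy model scores
print(auc(pos, neg))                          # 8 of 9 pairs correct: 0.888...
print(round(rela_impr(0.7526, 0.7430), 2))    # 3.95, matching the table
```

Anchoring RelaImpr at 0.5 rather than 0 is what makes small absolute AUC gains (e.g., +0.0096) translate into the multi-percent relative improvements reported below.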
From the above results, we have several important observations: (1) DIN performs better than the earlier models, mainly owing to the attention mechanism that captures user interest with regard to the target item; (2) MMoE performs better than DIN and the earlier models, especially on the industrial dataset, where its expert networks can implicitly model correlations between scenarios, indicating that the discrepancy between scenarios cannot be neglected; (3) BST is a strong competitor and is only slightly worse than MMoE, indicating the effectiveness of incorporating the Transformer into the recommendation model; (4) SAML achieves the best performance on both datasets, demonstrating its effectiveness and generalization capability across multiple scenarios. Note that on the industrial dataset, SAML achieves a 0.0096 absolute AUC gain over BST, which is a significant improvement for the business. \begin{table} \centering \caption {MODEL COMPARISON ON PUBLIC AND INDUSTRIAL DATASETS} \label{tab:CTRCVR on AEAM} \begin{tabular}{c c c c c} \toprule \multirow{2}{*}{Model} & \multicolumn{2}{c}{Public} & \multicolumn{2}{c}{Industrial} \\ & AUC & RelaImpr$^{\mathrm{a}}$ & AUC & RelaImpr$^{\mathrm{a}}$ \\ \midrule W\&D & 0.6320 & -2.07\% & 0.7245 & -7.61\% \\ DeepFM & 0.6329 & -1.40\% & 0.7308 & -5.02\% \\ DCN & 0.6333 & -1.11\% & 0.7308 & -5.02\% \\ DIN & 0.6343 & -0.37\% & 0.7409 & -0.86\% \\ MMoE & 0.6357 & 0.66\% & 0.7449 & 0.78\% \\ BST & 0.6348 & 0.00\% & 0.7430 & 0.00\% \\ SAML & \textbf{0.6392} & \textbf{3.26\%} & \textbf{0.7526} & \textbf{3.95\%} \\ \bottomrule \multicolumn{5}{l}{$^{\mathrm{a}}$RelaImpr is based on BST.} \end{tabular} \end{table} \subsection{Single Scenario Performance (Q2)} To verify whether our proposed SAML really helps to improve the recommendation results in each scenario, we compare BST and SAML on the industrial dataset according to the following procedure: (1) BST-Individual trains on the data of each scenario individually; (2) BST 
trains on the data combined from all scenarios; (3) SAML trains on the data combined from all scenarios and incorporates the critical technical designs proposed above. All models are then tested on each scenario separately. Table \ref{tab:CTR on each country} shows the comparison results of these models in each scenario. According to the results, we have several observations: (1) by learning from multiple scenarios, BST is better than BST-Individual, indicating that information from different scenarios can be used mutually so that the scenarios promote each other; (2) SAML consistently outperforms BST in each scenario, demonstrating the effectiveness of our scenario-aware mutual learning approach, which explicitly considers the differences and similarities between scenarios. \begin{table} \centering \caption {SINGLE SCENARIO COMPARISON (AUC) ON INDUSTRIAL DATASET} \label{tab:CTR on each country} \begin{tabular}{c c c c} \toprule Scenario & BST-Individual & BST & SAML \\ \midrule RU & 0.7391 & 0.7475 & 0.7574\\ BR & 0.7399 & 0.7572 & 0.7683\\ ES & 0.7261 & 0.7499 & 0.7600\\ US & 0.7128 & 0.7407 & 0.7517\\ FR & 0.7219 & 0.7502 & 0.7592\\ PL & 0.7199 & 0.7496 & 0.7608\\ NL & 0.7083 & 0.7403 & 0.7520\\ CL & 0.7107 & 0.7466 & 0.7550\\ UA & 0.7051 & 0.7429 & 0.7500\\ Others & 0.7356 & 0.7421 & 0.7523\\ \bottomrule \end{tabular} \end{table} \begin{figure*}[htbp] \centering \label{tab:ablation study experiment setting} \subfigure[embedding size]{ \begin{minipage}[bt]{0.33\linewidth} \centering \includegraphics[width=2.3in]{images/emb.png} \label{fig:embedding size} \end{minipage}% }% \subfigure[attention heads]{ \begin{minipage}[bt]{0.33\linewidth} \centering \includegraphics[width=2.3in]{images/att.png} \label{fig:number of attention heads} \end{minipage}% }% \subfigure[depth of network]{ \begin{minipage}[bt]{0.33\linewidth} \centering \includegraphics[width=2.2in]{images/net.png} \label{fig:depth of network} \end{minipage} }% \centering \caption{The results of 
different experimental settings in BST and SAML.} \label{fig:ablation study} \end{figure*} \begin{table} \centering \caption {ABLATION RESULTS OF VARIANT MODELS ON INDUSTRIAL DATASET} \label{tab:ablation study} \begin{tabular}{c c c} \toprule Model & AUC & RelaImpr$^{\mathrm{a}}$ \\ \midrule SAML & \textbf{0.7526} & \textbf{3.95\%} \\ SAML w/o gate & 0.7489 & 2.42\% \\ SAML w/o aux & 0.7472 & 1.72\% \\ SAML w/o gate\&mut & 0.7457 & 1.11\% \\ BST & 0.7430 & 0.00\% \\ \bottomrule \multicolumn{3}{l}{$^{\mathrm{a}}$RelaImpr is based on BST.} \end{tabular} \end{table} \subsection{Ablation Study of Critical Technical Designs (Q3)} To further verify the effectiveness of the critical technical designs in SAML, such as learning scenario-aware feature representations and modeling the differences and similarities between scenarios, we conduct ablation experiments comparing SAML with the following variant models: \begin{itemize} \item {SAML w/o gate}: removes the mutual unit between scenarios, which is equivalent to fixing the gate coefficient $g$ at 0, so the similarity between scenarios is not considered. \item {SAML w/o aux}: removes the auxiliary network on top of the scenario-independent features; the two feature sets are combined and fed into the mutual network, so the differences between scenarios are not fully considered. \item {SAML w/o gate\&mut}: removes the mutual network on top of the scenario-dependent features; the features are combined and fed into the auxiliary network (a unified MLP), so neither the similarities nor the differences between scenarios are considered. \end{itemize} Table \ref{tab:ablation study} shows the performance of SAML and the different variants; the state-of-the-art recommendation model BST is also included as a base model.
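The RelaImpr values reported in the tables can be reproduced directly from the AUC columns; the following is a minimal sketch, assuming the RelaImpr convention common in CTR-prediction work, which measures improvement over the base model after subtracting the 0.5 AUC contributed by random guessing:

```python
def rela_impr(auc, auc_base):
    """RelaImpr: relative improvement over a base model, after removing
    the 0.5 AUC contributed by random guessing."""
    return (auc - 0.5) / (auc_base - 0.5) - 1.0

# Reproducing table entries (BST is the base model):
print(f"{rela_impr(0.6392, 0.6348):+.2%}")  # +3.26%  (SAML, public)
print(f"{rela_impr(0.7526, 0.7430):+.2%}")  # +3.95%  (SAML, industrial)
print(f"{rela_impr(0.7409, 0.7430):+.2%}")  # -0.86%  (DIN, industrial)
```

By construction the base model always scores 0.00\%, matching the BST rows.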
Based on the experimental results, we have the following observations: \begin{itemize} \item {SAML w/o gate} performs worse than SAML by about 0.0037 absolute AUC, indicating the effectiveness of the mutual unit, which can automatically select similar representations from other scenarios for enhancement. \item {SAML w/o aux} declines by about 0.0054 absolute AUC relative to SAML, demonstrating the effectiveness of separating scenario-specific representation learning from global representation learning. \item From the comparison between SAML w/o gate\&mut and BST, we can get a rough view of the contribution of the scenario-aware feature representation, which achieves a 0.0027 absolute AUC improvement over the base model. \end{itemize} \subsection{Ablation Study of Experimental Settings (Q4)} \label{sec:Q4} In addition to the ablation study of the critical technical designs in SAML, we also study the sensitivity of the model to different experimental settings, including the embedding size, the number of attention heads, and the depth of the network layers. The embedding and attention modules in the scenario-aware feature representation differ from directly increasing the embedding size or the number of attention heads in that the feature diversity between scenarios is explicitly considered. However, one may wonder whether the improvement we obtained is purely due to the increase in embedding size or attention heads rather than the awareness of scenarios. Therefore, we compare BST and SAML: we first fix all other settings, and then vary the embedding size over 8, 12, 16, 24, 32 and the number of attention heads over 4, 8, 12, 16, respectively. Similarly, we also compare different depths of the network, namely 2, 3, 4, and 5 layers.
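The one-at-a-time protocol above (vary one setting while the others stay fixed) can be sketched as follows; the base values are illustrative placeholders, since the fixed defaults are not spelled out in the text:

```python
# One-at-a-time sweep over the settings studied in this section.
# Base values are hypothetical stand-ins, not the paper's defaults.
base_config = {"embedding_size": 16, "attention_heads": 8, "depth": 3}

sweeps = {
    "embedding_size": [8, 12, 16, 24, 32],
    "attention_heads": [4, 8, 12, 16],
    "depth": [2, 3, 4, 5],
}

def one_at_a_time(base, sweeps):
    """Yield configs in which at most one setting deviates from the base."""
    for name, values in sweeps.items():
        for value in values:
            yield {**base, name: value}

configs = list(one_at_a_time(base_config, sweeps))
print(len(configs))  # 5 + 4 + 4 = 13 runs per compared model
```

Each config would then be trained and evaluated for both BST and SAML, producing the curves in Figure~\ref{fig:ablation study}.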
From the results in Figure~\ref{fig:embedding size} and Figure~\ref{fig:number of attention heads}, we can see that SAML, which incorporates scenario awareness, consistently outperforms the comparison model across the various settings of embedding size and attention heads, while the improvement from purely increasing the embedding size or attention heads is very limited. We conjecture that although increasing the embedding size or attention heads does help to some extent, the lack of scenario awareness limits the expressive capability of each attribute, resulting in only limited improvement. Increasing the depth of the network layers can enhance the model capacity but also potentially leads to over-fitting. As can be seen from Figure~\ref{fig:depth of network}, at the beginning, i.e., from two layers to four layers, increasing the number of hidden layers consistently improves the model's performance. However, performance saturates at five layers, where adding more layers even marginally decreases the AUC scores, as the model may overfit the training set. Therefore, we use three hidden layers for SAML on the industrial dataset and two on the public dataset. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{images/ESUAgate.png} \caption{Visualization of $\alpha$ and $g$ in mutual unit.} \label{fig:Visualalpha} \end{figure} \subsection{Visualization Analysis (Q5)} The key to understanding how SAML provides effective recommendation results is to understand how the mutual units help optimize scenario correlation and enhance representation learning for each scenario. Thus we conduct experiments on the industrial dataset to visualize the accumulated probability of $\alpha$ and $g$ in the mutual units. The similarity coefficient $\alpha$ is used to learn the similarity between scenarios, and the gate coefficient $g$ is used to control the degree of learning from other similar scenarios, based on the learning situation of each scenario itself.
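To make the roles of $\alpha$ and $g$ concrete, here is a minimal numpy sketch of a mutual unit; the softmax similarity, sigmoid gate, and convex combination below are illustrative assumptions, not SAML's exact parameterization:

```python
import numpy as np

def mutual_unit(h_self, h_others, sim_logits, gate_logit):
    """Enhance one scenario's representation with similar scenarios'.

    alpha weighs the other scenarios by similarity; the gate g controls
    how much is borrowed from them (g = 0 keeps only the scenario's own
    representation, as in the "SAML w/o gate" ablation).
    """
    e = np.exp(sim_logits - sim_logits.max())
    alpha = e / e.sum()                      # similarity coefficients
    g = 1.0 / (1.0 + np.exp(-gate_logit))    # gate coefficient in (0, 1)
    borrowed = alpha @ h_others              # similarity-weighted mix
    return (1.0 - g) * h_self + g * borrowed

rng = np.random.default_rng(0)
h_self = rng.normal(size=8)                  # this scenario's representation
h_others = rng.normal(size=(4, 8))           # four other scenarios

# A strongly negative gate logit drives g -> 0: the output falls back
# to the scenario's own representation.
out = mutual_unit(h_self, h_others, rng.normal(size=4), gate_logit=-50.0)
print(np.allclose(out, h_self))  # True
```

In this sketch, a high-traffic scenario that learns well on its own would suppress $g$, while a low-traffic scenario would increase $g$ to borrow more from similar scenarios, mirroring the behaviour discussed above.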
Taking Spain (ES) and Ukraine (UA) as examples, the distributions of $\alpha$ are dramatically different, as shown in Figure~\ref{fig:Visualalpha}. The relevance of ES to Brazil (BR) is much higher than that of UA because of the sport-related categories. In contrast, the similarity of UA with Poland (PL) is higher than that of ES because of the geographical proximity. Moreover, both of them are similar to Russia (RU), the United States (US) and Others, because these three scenarios have relatively larger volumes of traffic that affect model learning. The gate coefficient $g$ and the similarity coefficient $\alpha$ show a certain correlation, which further validates our assumptions: (1) scenarios with more traffic, such as RU and BR, can learn better representations by themselves and thus tend to suppress $g$ to reduce the effect from other scenarios; (2) scenarios with low traffic, such as FR and NL, suffer from insufficient learning of their own representations and thus tend to increase $g$ to rely more on the representations of other similar scenarios. \section{Conclusion} In this paper, a novel recommendation model named Scenario-aware Mutual Learning (SAML) is proposed for e-commerce recommendation to capture the complex correlations (e.g., differences and similarities) between multiple scenarios. First, we introduce a scenario-aware feature representation to learn feature representations at both the global and scenario-specific levels. Then we introduce an auxiliary network to model the knowledge shared across all scenarios, and use a multi-branch network to model the differences among specific scenarios. Finally, we employ a mutual unit to adaptively learn the similarity of user interests between various scenarios. An extensive set of experiments is provided to show the competitive performance of SAML and the transferability of the learning framework.
Detailed discussions of the ablation studies and visualization analysis are also provided to give insight into how SAML works on real-world datasets. \bibliographystyle{./bibliography/IEEEtran}
\section{Introduction} Axion-like particles (ALPs), $s$, are pseudo-scalar singlets of the Standard Model (SM) gauge group, whose interactions are typically assumed to respect an approximate shift symmetry $s\to s+\sigma$ with constant $\sigma$. They arise naturally in theories with a spontaneously broken global symmetry such as the Peccei-Quinn solution to the strong CP problem~\cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}, composite Higgs models~\cite{Gripaios:2009pe,Gripaios:2016mmi,Chala:2017sjk} and others~\cite{Wilczek:1982rv,Chikashige:1980ui}, as well as in different explanations for dark matter~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah} and for the flavour~\cite{Davidson:1981zd,Wilczek:1982rv,Ema:2016ops,Calibbi:2016hwq} and hierarchy~\cite{Graham:2015cka} problems. The shift symmetry must be broken explicitly by at least the ALP mass, and potentially also by its marginal couplings to the Higgs boson. The phenomenology of ALPs is mostly triggered by effective operators, the first of which arise at dimension five. Their impact has been studied at photon regeneration experiments~\cite{Povey:2010hs,Wagner:2010mi,Essig:2013lka,Betz:2013dza}, beam dumps~\cite{Bjorken:2009mm,Andreas:2012mt} and high-energy colliders including LEP~\cite{Jaeckel:2015jla,Bauer:2017ris,Craig:2018kne} and more recently the LHC~\cite{Jaeckel:2015jla,Knapen:2016moh,Brivio:2017ije,Bauer:2017ris,Bauer:2017nlg,Alonso-Alvarez:2018irt,Craig:2018kne,Ebadi:2019gij,Gavela:2019cmq,Coelho:2020saz,Haghighat:2020nuh,Goncalves:2020bqi} and future facilities~\cite{Bauer:2018uxu,Yue:2019gbh,Inan:2020aal}. Searches for ALPs produced from the blackbody photons in the solar core~\cite{Arik:2011rx} and in other astrophysical events~\cite{Raffelt:2006cw,Lee:2018lcj,Chang:2018rso,Jaeckel:2019xpa,Ertas:2020xcc} have been performed too.
ALP searches in flavour experiments have been studied recently in Refs.~\cite{Gavela:2019wzg,Calibbi:2020jvd,MartinCamalich:2020dfe}, while CP violation signatures of ALPs have been considered in Ref.~\cite{DiLuzio:2020oah}. (Recent reviews on the physics of ALPs can be found for example in Refs.~\cite{DiLuzio:2020wdo,Choi:2020rgn}.) These experiments span a huge range of energies, across which the Wilson coefficients of the ALP effective field theory (EFT) run, and mix, following the corresponding renormalization group equations (RGEs). Different computations of parts of the RGEs are spread across the literature~\cite{Bauer:2016lbe,Choi:2017gpf,Bauer:2017ris}. However, to the best of our knowledge, there is no systematic study of the entire ALP anomalous dimension matrix in any concrete basis of operators. We fill this gap in this work, extending also previous computations in different ways. In particular we compute the gauge dependence of the running of the ALP-fermion-fermion operators, as well as the RGE dependence on the ALP-Higgs marginal coupling. Moreover, we work in a basis closer in spirit to the Warsaw basis~\cite{Grzadkowski:2010es} of the SMEFT; \textit{i.e.} involving operators with fewer derivatives, and not limited to purely shift-invariant interactions. However, we cross-check (and update where necessary) previous partial results performed with different sets of operators. Most importantly, we match the ALP EFT at tree level at the electroweak (EW) scale onto the low-energy version in which the heavy top quark, the Higgs and the $Z$ and $W$ gauge bosons are integrated out, and we compute the running within this ALP low-energy EFT (ALP LEFT) too, including the mixing of higher-dimensional operators into renormalizable ones, as well as the mixing between purely SM EFT operators and others that do involve the ALP.
For the sake of generality, we compute this running for an arbitrary ALP LEFT, \textit{i.e.} independently of whether the EFT above the EW scale is the ALP EFT or a more generic theory. As far as we are aware, essentially all results in this latter EFT are completely new. The article is organised as follows. In Section~\ref{sec:eft} we introduce the ALP Lagrangian, including a Green basis of effective operators and their on-shell relations. We compute the one-loop counterterms for effective operators in Section~\ref{sec:divergences}. In Section \ref{sec:rges} we obtain the complete anomalous dimension matrix for dimension-five operators at one loop. The different layers of the EFT valid at energies below the EW scale as well as their connection through renormalization and matching are discussed in Section~\ref{sec:lalp}. In Section~\ref{sec:pheno} we present some phenomenological implications of the previous results, most importantly the possibility of probing ALP interactions to the $Z$ boson or to the top quark through their mixing into ALP-lepton operators. We conclude in Section~\ref{sec:conclusions}. In Appendix~\ref{app:HEdiags} we provide the different Feynman diagrams computed for the renormalization of the ALP EFT. In Appendix~\ref{app:bases} we report our results in a different basis commonly used in phenomenological studies, while in Appendix~\ref{app:4dim} we collect the renormalization group running of renormalizable parameters within this EFT. Finally, in Appendix~\ref{diagrams:left} we report the Feynman diagrams necessary for the computation of the RGEs in the ALP LEFT.
\section{Effective field theory for ALPs} \label{sec:eft} The renormalizable Lagrangian of the SM extended with a real pseudo-scalar singlet, $s$, reads, \begin{align}\nonumber \mathcal{L}_{SM+s} =& -\frac{1}{4}G_{\mu\nu}^{A}G_{A}^{\mu\nu} -\frac{1}{4}W_{\mu\nu}^{a}W_{a}^{\mu\nu} -\frac{1}{4}B_{\mu\nu}B^{\mu\nu}\\\nonumber & +\overline{q_{L}^{\alpha}}\ensuremath{\mathrm{i}}\slashed{D}q_{L}^{\alpha} +\overline{l_{L}^{\alpha}}\ensuremath{\mathrm{i}}\slashed{D}l_{L}^{\alpha} +\overline{u_{R}^{\alpha}}\ensuremath{\mathrm{i}}\slashed{D}u_{R}^{\alpha} +\overline{d_{R}^{\alpha}}\ensuremath{\mathrm{i}}\slashed{D}d_{R}^{\alpha} +\overline{e_{R}^{\alpha}}\ensuremath{\mathrm{i}}\slashed{D}e_{R}^{\alpha} \\\nonumber % & +\left(D_{\mu}\phi\right)^{\dagger}\left(D^{\mu}\phi\right) -\mu^{2}|\phi|^{2}-\lambda|\phi|^{4} -\left( y_{\alpha\beta}^{u}\overline{q_{L}^{\alpha}}\widetilde{\phi}u_{R}^{\beta} +y_{\alpha\beta}^{d}\overline{q_{L}^{\alpha}}\phi d_{R}^{\beta} +y_{\alpha\beta}^{e}\overline{l_{L}^{\alpha}}\phi e_{R}^{\beta} +\text{h.c.}\right)\\%\nonumber % & +\frac{1}{2}\left(\partial_{\mu}s\right)\left(\partial^{\mu}s\right)-\frac{1}{2}m^{2}s^{2} -\frac{\kappa_s}{3!}s^{3}-\frac{\lambda_s}{4!}s^{4} -\kappa_{s\phi}s|\phi|^{2}-\frac{\lambda_{s\phi}}{2}s^{2}|\phi|^{2}\,, \end{align} where $\alpha$ and $\beta$ are flavour indices, $q_L$ and $l_L$ denote the left-handed (LH) quark and lepton doublets, respectively, and $u_R$, $d_R$ and $e_R$ the right-handed (RH) up-type, down-type quark and charged lepton singlets, respectively. The gluon and the EW gauge bosons are represented, as usual, by $G$ and by $W$ and $B$, respectively. The Higgs doublet is called $\phi$, while its conjugate is given by $\widetilde{\phi} =\epsilon \phi^\ast= \mathrm{i}\sigma_2 \phi^\ast $, with $\sigma_2$ being the second Pauli matrix. A possible tadpole term has been eliminated via a field redefinition of $s$ and we use the minus-sign convention for the covariant derivative. 
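For concreteness, the minus-sign convention for the covariant derivative corresponds, schematically, to the following form (the hypercharge normalisation shown is an assumption on our part):

```latex
\begin{equation*}
D_\mu \psi = \left(\partial_\mu
  - \ensuremath{\mathrm{i}}\, g_3\, G_\mu^A T^A
  - \ensuremath{\mathrm{i}}\, g_2\, W_\mu^a \frac{\sigma^a}{2}
  - \ensuremath{\mathrm{i}}\, g_1\, Y_\psi\, B_\mu\right)\psi\,,
\end{equation*}
```

with $T^A$ the $SU(3)$ generators acting only on coloured fields and $Y_\psi$ the hypercharge of the field $\psi$.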
In the renormalizable Lagrangian above, all coefficients are real except for the Yukawa couplings. Complex phases in these Yukawa couplings, as well as the couplings $\kappa_s$ and $\kappa_{s\phi}$, induce CP violation. \begin{table}[t] \centering{} \begin{tabular}{|l|l|l|l|} \hline \multicolumn{1}{|c|}{Scalar} & \multicolumn{1}{|c|}{Yukawa} & \multicolumn{1}{|c|}{Derivative} & \multicolumn{1}{|c|}{Gauge} \\ \hline \rule{0pt}{16pt} & $\mathcal{O}_{\ensuremath{su\phi}}^{\alpha\beta}=\ensuremath{\mathrm{i}} s( \overline{q_{L}^\alpha} \widetilde{\phi}u_{R}^\beta- \overline{u_{R}^\beta} \widetilde{\phi}^\dagger q_L^\alpha)$ & $\mathcal{R}_{\ensuremath{s\phi\square}}=\ensuremath{\mathrm{i}} s(\phi^{\dagger}D^{2}\phi- (D^2 \phi)^\dagger \phi)$ & $\mathcal{O}_{s\widetilde{G}}=sG_{\mu\nu}^{A}\widetilde{G}_{A}^{\mu\nu}$ \\ & $\mathcal{O}_{\ensuremath{sd\phi}}^{\alpha\beta}=\ensuremath{\mathrm{i}} s (\overline{q_{L}^\alpha}\phi d_{R}^\beta - \overline{d_R^\beta} \phi^\dagger q_L^\alpha)$ & $\mathcal{R}_{sq}^{\alpha\beta}= s(\overline{q_{L}^\alpha}\slashed{D}q_{L}^\beta + \overline{q_L^\beta} \overleftarrow{\slashed{D}}q_L^\alpha)$ & $\mathcal{O}_{s\widetilde{W}}=sW_{\mu\nu}^{a}\widetilde{W}_{a}^{\mu\nu}$ \\ & $\mathcal{O}_{\ensuremath{se\phi}}^{\alpha\beta}=\ensuremath{\mathrm{i}} s (\overline{l_{L}^\alpha}\phi e_{R}^\beta - \overline{e_R^\beta} \phi^\dagger l_L^\alpha)$ & $\mathcal{R}_{sl}^{\alpha\beta}= s(\overline{l_{L}^\alpha}\slashed{D}l_{L}^\beta + \overline{l_L^\beta} \overleftarrow{\slashed{D}}l_L^\alpha)$ & $\mathcal{O}_{s\widetilde{B}}=sB_{\mu\nu}\widetilde{B}^{\mu\nu}$ \\ & & $\mathcal{R}_{su}^{\alpha\beta}= s(\overline{u_{R}^\alpha}\slashed{D}u_{R}^\beta + \overline{u_R^\beta} \overleftarrow{\slashed{D}}u_R^\alpha)$ &\\ & & $\mathcal{R}_{sd}^{\alpha\beta}= s(\overline{d_{R}^\alpha}\slashed{D}d_{R}^\beta + \overline{d_R^\beta} \overleftarrow{\slashed{D}}d_R^\alpha)$ & \\ & & $\mathcal{R}_{se}^{\alpha\beta}= s(\overline{e_{R}^\alpha}\slashed{D}e_{R}^\beta + 
\overline{e_R^\beta} \overleftarrow{\slashed{D}}e_R^\alpha)$ & \rule[-6pt]{0pt}{17pt} \\ \hline \rule{0pt}{16pt} $\mathcal{O}_{s^5}=s^{5}$ & $\mathcal{O}_{\widetilde{\ensuremath{su\phi}}}^{\alpha\beta}=s( \overline{q_{L}^\alpha} \widetilde{\phi}u_{R}^\beta+ \overline{u_{R}^\beta} \widetilde{\phi}^\dagger q_L^\alpha)$ & $\mathcal{R}_{s\square}=s^{2}\partial_{\mu}\partial^{\mu}s$ & $\mathcal{O}_{sG}=sG_{\mu\nu}^{A}G_{A}^{\mu\nu}$ \\[0.2cm] $\mathcal{O}_{s^3}=s^{3}|\phi|^{2}$ & $\mathcal{O}_{\widetilde{\ensuremath{sd\phi}}}^{\alpha\beta}=s (\overline{q_{L}^\alpha}\phi d_{R}^\beta + \overline{d_R^\beta} \phi^\dagger q_L^\alpha)$ & $\mathcal{R}_{\phi s \square}=|\phi|^{2}\partial^{2}s$ & $\mathcal{O}_{sW}=sW_{\mu\nu}^{a}W_{a}^{\mu\nu}$\\[0.2cm] $\mathcal{O}_{s}=s|\phi|^{4}$ & $\mathcal{O}_{\widetilde{\ensuremath{se\phi}}}^{\alpha\beta}=s (\overline{l_{L}^\alpha}\phi e_{R}^\beta + \overline{e_R^\beta} \phi^\dagger l_L^\alpha)$ & $\mathcal{R}_{\widetilde{\ensuremath{s\phi\square}}}=s(\phi^{\dagger}D^{2}\phi+ (D^2 \phi)^\dagger \phi)$ & $\mathcal{O}_{sB}=sB_{\mu\nu}B^{\mu\nu}$ \\[0.2cm] & & $\mathcal{R}_{\widetilde{sq}}^{\alpha\beta}= s(\overline{q_{L}^\alpha}\ensuremath{\mathrm{i}}\slashed{D}q_{L}^\beta - \overline{q_L^\beta} \ensuremath{\mathrm{i}} \overleftarrow{\slashed{D}}q_L^\alpha) $ & \\[0.2cm] % & & $\mathcal{R}_{\widetilde{sl}}^{\alpha\beta}= s(\overline{l_{L}^\alpha}\ensuremath{\mathrm{i}}\slashed{D}l_{L}^\beta - \overline{l_L^\beta} \ensuremath{\mathrm{i}} \overleftarrow{\slashed{D}}l_L^\alpha) $ & \\[0.2cm] % & & $\mathcal{R}_{\widetilde{su}}^{\alpha\beta}= s(\overline{u_{R}^\alpha}\ensuremath{\mathrm{i}}\slashed{D}u_{R}^\beta - \overline{u_R^\beta} \ensuremath{\mathrm{i}} \overleftarrow{\slashed{D}}u_R^\alpha) $ & \\[0.2cm] & & $\mathcal{R}_{\widetilde{sd}}^{\alpha\beta}= s(\overline{d_{R}^\alpha}\ensuremath{\mathrm{i}}\slashed{D}d_{R}^\beta - \overline{d_R^\beta} \ensuremath{\mathrm{i}} \overleftarrow{\slashed{D}}d_R^\alpha) $ & \\[0.2cm] % & & 
$\mathcal{R}_{\widetilde{se}}^{\alpha\beta}= s(\overline{e_{R}^\alpha}\ensuremath{\mathrm{i}}\slashed{D}e_{R}^\beta - \overline{e_R^\beta} \ensuremath{\mathrm{i}} \overleftarrow{\slashed{D}}e_R^\alpha) $ & \\[0.2cm] % \hline % \end{tabular} \caption{\it Green basis of effective operators of dimension five. All operators are hermitian (operators with flavour indices are hermitian for each fixed value of $\alpha$ and $\beta$, $(\ensuremath{\mathcal{O}}_{\alpha \beta})^\dagger = \ensuremath{\mathcal{O}}_{\alpha\beta}$). The ones in the top (bottom) panel are CP conserving (violating). The dual field strength tensor is defined by $\widetilde{B}^{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma} B^{\rho \sigma}$ and likewise for $W$ and $G$.}\label{tab:eft} \end{table} The first tower of effective interactions arises at dimension five. We provide a Green basis (checked with \texttt{BasisGen}~\cite{Criado:2019ugp}) of this set of operators in Table~\ref{tab:eft}. Any other dimension-five operator can be written in terms of these via algebraic or integration-by-parts identities. We have collected the operators in the table according to their CP properties, with CP-even and CP-odd operators in the top and bottom panels of the table, respectively. All the operators in the table are hermitian (for fixed values of flavour indices if present) and the corresponding Wilson coefficients are therefore real parameters (real matrices for operators involving flavour). A minimal basis of non-redundant operators, enough to describe all physical processes, is given by the operators named with $\ensuremath{\mathcal{O}}$. The ones denoted by $\ensuremath{\mathcal{R}}$ can be written in terms of the ones in the minimal basis by performing field redefinitions and are therefore equivalent to them in all physical observables.
To $\mathcal{O}(1/\Lambda)$ accuracy, these field redefinitions can be enforced through the equations of motion of the dimension-four Lagrangian, namely, \begin{align} \partial^{2}s&=-m^{2}s-\frac{\kappa_s}{2}s^{2}-\frac{\lambda_s s^{3}}{3!}-\kappa_{s\phi}|\phi|^{2}-\lambda_{s\phi}s|\phi|^{2}\,, \\ D^{2}\phi_{k}&= -\mu^{2}\phi_{k}-2\lambda|\phi|^{2}\phi_{k} -\kappa_{s\phi}s\phi_{k}-\frac{\lambda_{s\phi}}{2}s^{2}\phi_{k} -y^{u}_{\alpha\beta}\overline{q_{Lj}^\alpha}\epsilon_{jk}u_{R}^\beta -y^{d\,\ast}_{\alpha\beta} \overline{d_{R}^\beta}q_{Lk}^\alpha -y^{e\,\ast}_{\alpha \beta} \overline{e_{R}^\beta}l_{Lk}^\alpha \,, \\ i\slashed{D}q_{Lk}^\alpha&= y^{d}_{\alpha\beta}\phi_{k}d_{R}^\beta +y^{u}_{\alpha\beta}\widetilde{\phi}_{k}u_{R}^\beta\,, \qquad i\slashed{D}l_{Lk}^\alpha= y^{e}_{\alpha\beta}\phi_{k}e_{R}^\beta\,, \\[0.2cm] i\slashed{D}u_{R}^\alpha&= y^{u\,\ast}_{\beta\alpha}\widetilde{\phi}_{k}^{\dagger}q_{Lk}^\beta\,, \qquad \qquad \qquad \,\,\, \, i\slashed{D}d_{R}^\alpha= y^{d\,\ast}_{\beta \alpha}\phi_{k}^{\dagger}q_{Lk}^{\beta}\,, \qquad i\slashed{D}e_{R}^\alpha= y^{e\,\ast}_{\beta\alpha}\phi_{k}^{\dagger}l_{Lk}^{\beta}\,; \end{align} where we use latin indices for $SU(2)$.
Using these equations we can arrive at the following identities, valid for physical observables: \begin{align} r_{\ensuremath{s\phi\square}} \ensuremath{\mathcal{R}}_{\ensuremath{s\phi\square}} =& -r_{\ensuremath{s\phi\square}}\mathrm{Re}(y^u) \ensuremath{\mathcal{O}}_{\ensuremath{su\phi}} +r_{\ensuremath{s\phi\square}}\mathrm{Re}(y^d) \ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}} +r_{\ensuremath{s\phi\square}}\mathrm{Re}(y^e) \ensuremath{\mathcal{O}}_{\ensuremath{se\phi}} \nonumber \\ & +r_{\ensuremath{s\phi\square}}\mathrm{Im}(y^u) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{su\phi}}} -r_{\ensuremath{s\phi\square}}\mathrm{Im}(y^d) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{sd\phi}}} -r_{\ensuremath{s\phi\square}}\mathrm{Im}(y^e) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{se\phi}}} \,, \\ r_{sq}\ensuremath{\mathcal{R}}_{sq} =& -r_{sq}\mathrm{Re}(y^u) \ensuremath{\mathcal{O}}_{\ensuremath{su\phi}} -r_{sq}\mathrm{Re}(y^d) \ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}} +r_{sq}\mathrm{Im}(y^u) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{su\phi}}} +r_{sq}\mathrm{Im}(y^d) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{sd\phi}}} \,, \\ r_{sl}\ensuremath{\mathcal{R}}_{sl} =& -r_{sl}\mathrm{Re}(y^e) \ensuremath{\mathcal{O}}_{\ensuremath{se\phi}} +r_{sl}\mathrm{Im}(y^e) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{se\phi}}} \,, \\ r_{su}\ensuremath{\mathcal{R}}_{su} =&\, \mathrm{Re}(y^u)r^{\mkern-1.5mu\mathsf{T}}_{su} \ensuremath{\mathcal{O}}_{\ensuremath{su\phi}} -\mathrm{Im}(y^u) r^{\mkern-1.5mu\mathsf{T}}_{su}\ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{su\phi}}} \,, \\ r_{sd} \ensuremath{\mathcal{R}}_{sd} =&\, \mathrm{Re}(y^d)r^{\mkern-1.5mu\mathsf{T}}_{sd}\ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}} -\mathrm{Im}(y^d)r^{\mkern-1.5mu\mathsf{T}}_{sd} \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{sd\phi}}} \,, \\ r_{se} \ensuremath{\mathcal{R}}_{se} =&\, \mathrm{Re}(y^e)r^{\mkern-1.5mu\mathsf{T}}_{se} 
\ensuremath{\mathcal{O}}_{\ensuremath{se\phi}} -\mathrm{Im}(y^e) r^{\mkern-1.5mu\mathsf{T}}_{se}\ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{se\phi}}} \,; \label{redundancies:cpeven} \end{align} where flavour indices are left implicit and we have always assumed that each Wilson coefficient and its corresponding operator have the flavour indices in the same order so that, for instance, \begin{equation} \mathrm{Re}(y^e)r^{\mkern-1.5mu\mathsf{T}}_{se} \ensuremath{\mathcal{O}}_{\ensuremath{se\phi}} \equiv \mathrm{Re}(y^e)_{\alpha\gamma} (r^{\mkern-1.5mu\mathsf{T}}_{se})_{\gamma \beta} \ensuremath{\mathcal{O}}_{\ensuremath{se\phi}}^{\alpha \beta}\,, \end{equation} with repeated indices summed over. For the CP-odd ones we have \begin{align} r_{\ensuremath{s\square}} \mathcal{R}_{\ensuremath{s\square}} =& -r_{\ensuremath{s\square}}m^2 s^3 -r_{\ensuremath{s\square}}\frac{\kappa_s}{2}s^4-r_{\ensuremath{s\square}}\frac{\lambda_s}{3!}\mathcal{O}_{s^5} -r_{\ensuremath{s\square}}\kappa_{s\phi} s^2 |\phi|^2 -r_{\ensuremath{s\square}}\lambda_{s\phi}\mathcal{O}_{s^3}\,, \\ r_{\widetilde{\ensuremath{s\phi\square}}} \mathcal{R}_{\widetilde{\ensuremath{s\phi\square}}} =& -2r_{\widetilde{\ensuremath{s\phi\square}}}\mu^2s|\phi|^2 - 4r_{\widetilde{\ensuremath{s\phi\square}}}\lambda\mathcal{O}_s -2 r_{\widetilde{\ensuremath{s\phi\square}}}\kappa_{s\phi} s^2|\phi|^2 -r_{\widetilde{\ensuremath{s\phi\square}}}\lambda_{s\phi}\mathcal{O}_{s^3} \nonumber \\ & -r_{\widetilde{\ensuremath{s\phi\square}}}\mathrm{Re}(y^u) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{su\phi}}} -r_{\widetilde{\ensuremath{s\phi\square}}}\mathrm{Re}(y^d) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{sd\phi}}} -r_{\widetilde{\ensuremath{s\phi\square}}}\mathrm{Re}(y^e) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{se\phi}}} \nonumber \\ & -r_{\widetilde{\ensuremath{s\phi\square}}}\mathrm{Im}(y^u) \ensuremath{\mathcal{O}}_{\ensuremath{su\phi}} -r_{\widetilde{\ensuremath{s\phi\square}}}\mathrm{Im}(y^d) 
\ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}} -r_{\widetilde{\ensuremath{s\phi\square}}}\mathrm{Im}(y^e) \ensuremath{\mathcal{O}}_{\ensuremath{se\phi}}\,, \\ r_{\phi s \square} \mathcal{R}_{\phi s \square} &= -r_{\phi s \square}m^2 s |\phi|^2 -r_{\phi s \square}\frac{\kappa_s}{2} s^2 |\phi|^2 -r_{\phi s \square}\frac{\lambda_s}{3!}\mathcal{O}_{s^3} -r_{\phi s \square}\kappa_{s\phi}|\phi|^4 -r_{\phi s \square}\lambda_{s\phi} \mathcal{O}_{s} \,, \\ r_{\widetilde{sq}} \ensuremath{\mathcal{R}}_{\widetilde{sq}} =&\, r_{\widetilde{sq}}\mathrm{Re}(y^u) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{su\phi}}} +r_{\widetilde{sq}}\mathrm{Re}(y^d) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{sd\phi}}} +r_{\widetilde{sq}}\mathrm{Im}(y^u) \ensuremath{\mathcal{O}}_{\ensuremath{su\phi}} +r_{\widetilde{sq}}\mathrm{Im}(y^d) \ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}} \,, \\ r_{\widetilde{sl}} \ensuremath{\mathcal{R}}_{\widetilde{sl}} =&\, r_{\widetilde{sl}}\mathrm{Re}(y^e) \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{se\phi}}} +r_{\widetilde{sl}}\mathrm{Im}(y^e) \ensuremath{\mathcal{O}}_{\ensuremath{se\phi}} \,, \\ r_{\widetilde{su}} \ensuremath{\mathcal{R}}_{\widetilde{su}} =&\, \mathrm{Re}(y^u) r_{\widetilde{su}}^{\mkern-1.5mu\mathsf{T}} \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{su\phi}}} +\mathrm{Im}(y^u)r_{\widetilde{su}}^{\mkern-1.5mu\mathsf{T}} \ensuremath{\mathcal{O}}_{\ensuremath{su\phi}} \,, \\ r_{\widetilde{sd}}\ensuremath{\mathcal{R}}_{\widetilde{sd}} =&\, \mathrm{Re}(y^d)r_{\widetilde{sd}}^{\mkern-1.5mu\mathsf{T}} \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{sd\phi}}} +\mathrm{Im}(y^d)r_{\widetilde{sd}}^{\mkern-1.5mu\mathsf{T}} \ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}} \,, \\ r_{\widetilde{se}}\ensuremath{\mathcal{R}}_{\widetilde{se}} =&\, \mathrm{Re}(y^e) r_{\widetilde{se}}^{\mkern-1.5mu\mathsf{T}} \ensuremath{\mathcal{O}}_{\widetilde{\ensuremath{se\phi}}} +\mathrm{Im}(y^e)r_{\widetilde{se}}^{\mkern-1.5mu\mathsf{T}}
\ensuremath{\mathcal{O}}_{\ensuremath{se\phi}} \,. \label{redundancies:cpodd} \end{align} For the sake of generality we have not made use of the freedom to make $y^e$ and one of $y^u$ or $y^d$ diagonal, with real and positive entries, in the equations above. Note that, despite their field content, the operators $\ensuremath{\mathcal{R}}_{\ensuremath{s\phi\square}}$ and $\ensuremath{\mathcal{R}}_{\widetilde{\ensuremath{s\phi\square}}}$ do not induce the process $s\phi\to \phi Z$. Indeed, by using the field redefinitions above we note that these operators are equivalent to Yukawa-like operators (or to operators with no gauge bosons in the CP-odd case), which clearly do not trigger the aforementioned process. An explicit calculation with these operators indeed shows that the corresponding amplitude is proportional to $p_Z^2$, which vanishes on-shell as the $Z$ boson is massless before EW symmetry breaking (EWSB). In the following we will assume that CP is a good symmetry of the EFT. This amounts to setting the coefficients of the CP-odd operators to zero, including $\kappa_s=\kappa_{s\phi}=0$ in the renormalizable Lagrangian and the coefficients of the operators in the bottom panel of Table~\ref{tab:eft}. This is a radiatively stable choice up to the complex phase in the SM Yukawa couplings. The main goal of the present paper is to obtain the RGEs for the CP-even sector in isolation. However, we will provide our results for arbitrary Yukawa couplings, so that the mixing of the CP-even operators into the CP-odd sector via the imaginary part of the SM Yukawa couplings can be easily obtained.
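The flavour-index convention used in the identities above, e.g. $\mathrm{Re}(y^e)_{\alpha\gamma}(r^{\mathsf{T}}_{se})_{\gamma\beta}\,\ensuremath{\mathcal{O}}_{\ensuremath{se\phi}}^{\alpha\beta}$, can be checked mechanically; the following is a small numerical sketch (with random stand-in matrices) verifying that the matrix-product shorthand and the explicit contraction agree:

```python
import numpy as np

rng = np.random.default_rng(2)
re_ye = rng.normal(size=(3, 3))   # stand-in for Re(y^e), a real 3x3 flavour matrix
r_se = rng.normal(size=(3, 3))    # stand-in for the Wilson-coefficient matrix r_{se}

# Matrix-product shorthand: Re(y^e) r^T_{se}
coeff = re_ye @ r_se.T

# Explicit contraction over the repeated index gamma:
# Re(y^e)_{alpha gamma} (r^T_{se})_{gamma beta} = Re(y^e)_{ag} (r_{se})_{bg}
explicit = np.einsum("ag,bg->ab", re_ye, r_se)

print(np.allclose(coeff, explicit))  # True
```

The same check applies to any of the contractions above, since each Wilson coefficient carries its flavour indices in the same order as its operator.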
Under these conditions, the relevant Lagrangian reads: % \begin{align} \mathcal{L}_{\mathrm{CP-even}}&= \sum_{\psi=u,d,e} a_{s\psi \phi} \ensuremath{\mathcal{O}}_{s\psi \phi} +\sum_{X=G,W,B} a_{s\widetilde{X}} \ensuremath{\mathcal{O}}_{s\widetilde{X}} +r_{\ensuremath{s\phi\square}} \ensuremath{\mathcal{R}}_{\ensuremath{s\phi\square}} +\sum_{\Psi=q,l,u,d,e} r_{s\Psi} \ensuremath{\mathcal{R}}_{s\Psi} \,,\label{LCP-even} \end{align} where all the Wilson coefficients are real or real matrices in flavour space. For the sake of generality we are not enforcing shift symmetry. However, in Appendix~\ref{app:bases} we provide conditions on the Wilson coefficients that guarantee that this symmetry is preserved. We also explain why, although customary, trading the Yukawa-like operators $\mathcal{O}_{s\psi \phi}$, with $\psi=u,d,e$, for the explicitly shift-invariant terms $\partial_\mu s (\overline{\Psi}\gamma^\mu \Psi)$, with $\Psi=q_L,l_L,u_R,d_R,e_R$~\cite{Georgi:1986df,Brivio:2017ije,Choi:2017gpf,Bauer:2017ris,Alonso-Alvarez:2018irt}, is not necessarily an optimal choice, as the set of operators thus constructed is overcomplete. Still, in the same appendix we will also provide the RGEs of these operators under some simplifying assumptions. \section{Divergences at one loop} \label{sec:divergences} In order to obtain the RGEs of the ALP EFT, we have computed the divergences generated by one-particle-irreducible (1PI) diagrams at one loop with off-shell momenta and to order $\mathcal{O}(1/\Lambda)$. In doing so, we have employed the background field method in the Feynman gauge in dimensional regularisation with space-time dimension $d = 4-2\epsilon$. The $1/\epsilon$ poles obtained this way are gauge invariant. We have subsequently matched these onto the Green basis of operators of Table~\ref{tab:eft}.
We have implemented the model in \texttt{FeynRules}~\cite{Alloul:2013bka} and used \texttt{FeynArts}~\cite{Hahn:2000kx} and \texttt{FormCalc}~\cite{Hahn:1998yk} for the calculations. In a completely independent cross-check, we have evaluated by hand the Yukawa and $\lambda_{s\phi}$ pieces of each of the Feynman diagrams as obtained with \texttt{Qgraf}~\cite{Nogueira:1991ex}. In the remainder of this section we will go through the different amplitudes that we need for matching the divergences in the ALP EFT. For each amplitude we will provide the ultraviolet (UV) divergence, matched onto our Green basis. We will denote the corresponding Wilson coefficients with a prime in order to distinguish them from the ones of the operator insertion in the one-loop calculation (which appear, without a prime, on the right-hand side of our equations). Recall that we assume the EFT to preserve CP and we are interested in the RGEs of the CP-even operators among themselves. In particular we will consider only insertions of CP-even operators in the one-loop calculation. Within our assumptions the corresponding divergences can be again parameterized in terms of the CP-even operators up to the imaginary part of the SM Yukawa couplings. In this section we will provide the matching in the full basis, including the contribution via complex Yukawa couplings to the CP-odd operators so that the interested reader can obtain the corresponding mixing. In all these equations we leave flavour indices implicit. 
\begin{itemize} \item{$s(p_1)\phi_i^\dagger(p_2)\rightarrow q_{Lj}^\alpha(p_3) \overline{u_R^\beta}(p_4)$} The relevant diagrams, given in Fig.~\ref{fig:shqu}, produce the following UV divergence, \begin{align} % a_{\ensuremath{su\phi}}'-\ensuremath{\mathrm{i}} a_{\widetilde{\ensuremath{su\phi}}}' = -\frac{1}{(4\pi)^2 \epsilon} & \bigg\{ \left[\lambda_{s\phi} - \left(\frac{25g_1^2}{36} + \frac{3 g_2^2}{4} + \frac{16g_3^2}{3} \right)\right]a_{\ensuremath{su\phi}} \nonumber \\& - y^d y^{d\dagger} a_{\ensuremath{su\phi}} - a_{\ensuremath{sd\phi}} y^{d\dagger} y^u + y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} y^u \bigg\} ~. % \end{align} \newpage \item{$s(p_1)\phi_i(p_2)\rightarrow q_{Lj}^\alpha(p_3) \overline{d_R^\beta}(p_4)$} The relevant diagrams, given in Fig.~\ref{fig:shqd}, produce the following UV divergence, \begin{align} % a_{\ensuremath{sd\phi}}' -\ensuremath{\mathrm{i}} a_{\widetilde{\ensuremath{sd\phi}}}'= -\frac{1}{(4\pi)^2 \epsilon} &\bigg\{ \bigg[ \lambda_{s\phi} - \bigg( \frac{g_1^2}{36} + \frac{3 g_2^2}{4} + \frac{16 g_3^2 }{3}\bigg) \bigg]a_{\ensuremath{sd\phi}} \nonumber \\ & - y^u y^{u\dagger} a_{\ensuremath{sd\phi}} - a_{\ensuremath{su\phi}} y^{u\dagger} y^d + y^u a_{\ensuremath{su\phi}}^{\mkern-1.5mu\mathsf{T}} y^d \bigg\} ~. % \end{align} \item{$s(p_1)\phi_i(p_2)\rightarrow l_{Lj}^\alpha (p_3) \overline{e_R^\beta}(p_4)$} The diagrams in Fig.~\ref{fig:shle} give, \begin{align} % a_{\ensuremath{se\phi}}'= -\frac{1}{(4\pi)^{2}\epsilon} \left[\lambda_{s\phi} - \frac{9g_1^2}{4} - \frac{3g_2^2}{4} \right] a_{\ensuremath{se\phi}}\,. 
% \end{align} \item{$s(p_1)\rightarrow \phi_i (p_2) \phi_j^\dagger (p_3)$} The diagrams are shown in Fig.~\ref{fig:shh}, and give \begin{align} % r^\prime_{\ensuremath{s\phi\square}} -\ensuremath{\mathrm{i}} r^\prime_{\widetilde{\ensuremath{s\phi\square}}} &= - \frac{1}{16\pi^2 \epsilon} \left\{ {\rm Tr}\left[ y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}}\right] + 3\, {\rm Tr} \left[ y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - a_{\ensuremath{su\phi}} y^{u\dagger}\right] \right\},\nonumber \\ r_{\phi s \square}'&=0. % \end{align} \item{$s(p_1)\rightarrow \Psi^\alpha(p_2) \overline{\Psi^\beta}(p_3)$} For the process $s(p_1)\rightarrow \Psi^\alpha(p_2) \overline{\Psi^\beta}(p_3)$, with $\Psi=q_L,l_L,u_R,d_R$ and $e_R$, we collect the one-loop diagrams in Figs.~\ref{fig:sqq}, \ref{fig:sll}, \ref{fig:suu}, \ref{fig:sdd}, \ref{fig:see}, respectively. The resulting divergences read:~\footnote{Note that only the terms proportional to the imaginary part of the Yukawa couplings contribute to the corresponding CP-odd operators.} \begin{align} r_{sq}^\prime +\ensuremath{\mathrm{i}} r_{\widetilde{sq}}^\prime & = \frac{1}{32 \pi^2 \epsilon} \bigg[ a_{\ensuremath{su\phi}} y^{u\,\dagger } + a_{\ensuremath{sd\phi}} y^{d\,\dagger} - \frac{g_1^2}{3} a_{s\widetilde{B}} - 9 g_2^2 a_{s\widetilde{W}} - 16 g_3^2 a_{s\widetilde{G}} \bigg]\,,\\ r_{sl}^\prime +\ensuremath{\mathrm{i}} r_{\widetilde{sl}}^\prime &= \frac{1}{32\pi^2\epsilon} \bigg[ a_{\ensuremath{se\phi}} y^{e\,\dagger} - 3 g_1^2 a_{s\widetilde{B}} - 9 g_2^2a_{s\widetilde{W}} \bigg]\,,\\ r_{su}^\prime +\ensuremath{\mathrm{i}} r_{\widetilde{su}}^\prime &= -\frac{1}{16\pi^2\epsilon} \bigg[ a_{\ensuremath{su\phi}}^{\mkern-1.5mu\mathsf{T}} y^{u} - \frac{8}{3} g_1^2 a_{s\widetilde{B}} - 8 g_3^2 a_{s\widetilde{G}} \bigg]\,,\\ r_{sd}^\prime +\ensuremath{\mathrm{i}} r_{\widetilde{sd}}^\prime &=- \frac{1}{16\pi^2\epsilon} \bigg[ a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} y^{d} - \frac{2}{3} g_1^2 
a_{s\widetilde{B}} - 8 g_3^2 a_{s\widetilde{G}} \bigg]\,,\\ r_{se}^\prime +\ensuremath{\mathrm{i}} r_{\widetilde{se}}^\prime &=- \frac{1}{16\pi^2\epsilon} \bigg[a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} y^{e}- 6 g_1^2 a_{s\widetilde{B}}\bigg]\,. \end{align} We have partially cross-checked the results above by computing some amplitudes related to them by gauge invariance. As an example, the process $s B\to\overline{\Psi}\Psi$ allows us to cross-check the anti-symmetric combination $(r_{s\Psi})_{\alpha \beta}-(r_{s\Psi})_{\beta\alpha}$. \item{$s(p_1) \to V(p_2) V(p_3)$} No diagrams can be written at the order we are considering for the process $s\to BB$. The Feynman diagrams for the amplitudes $s\to W^3 W^3$ and $s\to GG$ are shown in Figs.~\ref{fig:sWW} and \ref{fig:sGG}, respectively. The corresponding amplitudes are all non-divergent. For $W$ bosons, the second diagram is zero, while the divergences of all the others together vanish. In the case of gluons, the second and third diagrams vanish, while the divergences of the rest of the diagrams cancel each other. It is evident from the diagrams that Yukawa-like operators do not renormalize ALP-vector-vector ones (redundant operators do not contribute to ALP-vector-vector couplings either). This is in agreement with the non-renormalization results in Refs.~\cite{Cheung:2015aba,Bern:2019wie}. \end{itemize} CP-even renormalizable couplings do not receive any contribution from dimension-five operators at one loop. This is easy to see from the fact that CP-even renormalizable operators are even under $s\to -s$ whereas all dimension-five operators are odd under such a replacement and therefore cannot induce one-loop corrections to the renormalizable ones. \subsection{Eliminating redundancy} Once we have matched all the possible one-loop divergences onto our Green basis, we can use the relations in Eq.~\eqref{redundancies:cpeven} to obtain the divergences in the minimal basis.
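For orientation, in a one-generation limit with all couplings taken as real numbers, the passage to the minimal basis amounts to simple linear shifts of the primed Yukawa-type coefficients by the primed redundant ones. A toy transcription of the replacement patterns quoted next (function and variable names are ours, purely illustrative):

```python
def reduce_su(a, r_box, r_q, r_u, y_u):
    """a'_{su phi} -> a' - r'_{s phi box} y^u - r'_{sq} y^u + y^u r'_{su}."""
    return a - r_box * y_u - r_q * y_u + y_u * r_u

def reduce_sd(a, r_box, r_q, r_d, y_d):
    """a'_{sd phi} -> a' + r'_{s phi box} y^d - r'_{sq} y^d + y^d r'_{sd}."""
    return a + r_box * y_d - r_q * y_d + y_d * r_d

def reduce_se(a, r_box, r_l, r_e, y_e):
    """a'_{se phi} -> a' + r'_{s phi box} y^e - r'_{sl} y^e + y^e r'_{se}."""
    return a + r_box * y_e - r_l * y_e + y_e * r_e
```

Note the relative sign of the $r'_{\ensuremath{s\phi\square}}$ shift between the up-type and the down-type/lepton cases, which is the only asymmetry among the three patterns.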
From this point on, even though we will continue writing Yukawa couplings in matrix form, we will neglect their complex phases. This amounts to the following replacements: \begin{align} % a'_{\ensuremath{su\phi}} &\to a^\prime_{\ensuremath{su\phi}} - r_{\ensuremath{s\phi\square}}' y^u - r_{sq}' y^u + y^{u} r_{su}^{\prime\,{\mkern-1.5mu\mathsf{T}}} \nonumber\\ &=\frac{-1}{(4\pi)^2 \epsilon} \bigg[ \left(\lambda_{s\phi} -\frac{25g_1^2}{36} -\frac{3 g_2^2}{4} - \frac{16g_3^2}{3} \right) a_{\ensuremath{su\phi}}- y^d y^{d\dagger} a_{\ensuremath{su\phi}} - a_{\ensuremath{sd\phi}} y^{d\dagger} y^u \bigg.\nonumber \\ % & \phantom{=\frac{1}{(4\pi)^2}}\bigg. + y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} y^u + \frac{1}{2} a_{\ensuremath{su\phi}} y^{u\,\dagger} y^u + \frac{1}{2} a_{\ensuremath{sd\phi}} y^{d\,\dagger} y^u + y^u y^{u\,\dagger} a_{\ensuremath{su\phi}} \bigg. \nonumber\\ % & \phantom{=\frac{1}{(4\pi)^2}}\bigg. - {\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] y^u - \left(\frac{17}{6} g_1^2 a_{s\widetilde{B}} +\frac{9}{2} g_2^2 a_{s\widetilde{W}} + 16 g_3^2 a_{s\widetilde{G}} \right) y^u \bigg]\,, \end{align} \begin{align} a'_{\ensuremath{sd\phi}} &\to a'_{\ensuremath{sd\phi}} + r_{\ensuremath{s\phi\square}}' y^d - r_{sq}' y^{d} + y^d r_{sd}^{\prime\,{\mkern-1.5mu\mathsf{T}}} \nonumber \\ &= \frac{-1}{(4\pi)^2 \epsilon} \left[ \left(\lambda_{s\phi} - \frac{g_1^2}{36} - \frac{3 g_2^2}{4} - \frac{16 g_3^2 }{3}\right) a_{\ensuremath{sd\phi}} - y^u y^{u\dagger} a_{\ensuremath{sd\phi}} - a_{\ensuremath{su\phi}} y^{u\dagger} y^d \right. \nonumber\\ & \phantom{=\frac{1}{(4\pi)^2}}\left. + y^u a_{\ensuremath{su\phi}}^{\mkern-1.5mu\mathsf{T}} y^d + \frac{1}{2} a_{\ensuremath{su\phi}} y^{u\dagger} y^d + \frac{1}{2} a_{\ensuremath{sd\phi}} y^{d\dagger} y^d + y^d y^{d\,\dagger} a_{\ensuremath{sd\phi}} \right. 
\nonumber\\ % & \phantom{=\frac{1}{(4\pi)^2}}\left. +{\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] y^d - \left(\frac{5}{6}g_1^2 a_{s\widetilde{B}} +\frac{9}{2} g_2^2 a_{s\widetilde{W}} + 16 g_3^2 a_{s\widetilde{G}} \right) y^d\right]\,, \end{align} \begin{align} a'_{\ensuremath{se\phi}} &\to a'_{\ensuremath{se\phi}} + r_{\ensuremath{s\phi\square}}' y^e - r_{sl}' y^{e} + y^e r_{se}^{\prime\,{\mkern-1.5mu\mathsf{T}}} \nonumber \\ &=\frac{-1}{(4\pi)^2 \epsilon} \left[ \left( \lambda_{s\phi} - \frac{9g_1^2}{4} - \frac{3g_2^2}{4} \right) a_{\ensuremath{se\phi}} + \frac{1}{2} a_{\ensuremath{se\phi}} y^{e\dagger} y^e + y^e y^{e\,\dagger} a_{\ensuremath{se\phi}} \right. \nonumber\\ & \phantom{=\frac{1}{(4\pi)^2}}\left. + {\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] y^e - \left(\frac{15}{2} g_1^2 a_{s\widetilde{B}} +\frac{9}{2}g_2^2 a_{s\widetilde{W}} \right) y^e \right]\,. \end{align} \section{Anomalous dimensions and comparison with the literature} \label{sec:rges} In the previous section we have determined completely the divergent Lagrangian in the physical basis as \begin{equation} % \mathcal{L}_{div} =\mathcal{O}_na_n' \equiv \mathcal{O}_n \frac{\mathcal{C}_{nm}}{32\pi^2 \epsilon}a_m\,, % \end{equation} where $n,m$ run over all operators (including flavour indices when present) and the coefficients $\mathcal{C}_{nm}$ involve only dimension-four couplings. The $\beta$-function governing the RGEs is given by \begin{equation} % \beta_{a_n} = 16\pi^2\mu \frac{d a_n}{d\mu} = \gamma_{nm} a_m\,, % \end{equation} where $\gamma$ is the anomalous dimension matrix. 
It is completely determined by the divergence matrix $\mathcal{C}$ up to the wave function renormalization factors of the different operators: \begin{equation} % \gamma_{nm} = -(\mathcal{C}_{nm} + K^F_n \delta_{nm})\,, % \end{equation} where $K^F$ parametrises the divergences in the wave function renormalization factors of each operator: \begin{equation} Z^F_n = 1 + \frac{K^F_n}{32\pi^2 \epsilon}, \end{equation} with \begin{align} &Z^F_{\ensuremath{\mathcal{O}}_{\ensuremath{su\phi}}}=\sqrt{Z_{q_L} Z_\phi Z_{u_R}}, &Z^F_{\ensuremath{\mathcal{O}}_{s\widetilde{G}}}=Z_G, \\ &Z^F_{\ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}}}=\sqrt{Z_{q_L} Z_\phi Z_{d_R}}, &Z^F_{\ensuremath{\mathcal{O}}_{s\widetilde{W}}}=Z_W, \\ &Z^F_{\ensuremath{\mathcal{O}}_{\ensuremath{se\phi}}}=\sqrt{Z_{l_L} Z_\phi Z_{e_R}}, &Z^F_{\ensuremath{\mathcal{O}}_{s\widetilde{B}}}=Z_B, \end{align} % where each field in an operator contributes the square root of its wave function factor, so that the $\ensuremath{\mathcal{O}}_{s\widetilde{X}}$, containing two field strengths, carry the full $Z_X$; and the following wave function renormalization factors, in agreement with Refs.~\cite{Buchalla:2019wsc,Chala:2020pbn}, \footnote{Note however that in Ref.~\cite{Buchalla:2019wsc} the Higgs is also split into background and quantum fields; therefore, comparing $Z_{\phi}$ in this case is not straightforward.
It can also be trivially seen that the divergent part of $Z_s$ vanishes.} \begin{align} \label{Z:qL} Z_{q_L} &= 1 - \frac{1}{96 \pi^2 \epsilon} \bigg[ \frac{1}{6} g_1^2 + \frac{9}{2} g_2^2 + 8 g_3^2 +3y^uy^{u\,\dagger} +3 y^d y^{d\,\dagger}\bigg]\,, \\ \label{Z:lL} Z_{l_L} &= 1 -\frac{1}{64 \pi^2 \epsilon} \bigg[g_1^2 + 3 g_2^2 + 2 y^e y^{e\,\dagger}\bigg]\,, \end{align}\begin{align} Z_{u_R} &= 1 -\frac{1}{48\pi^2 \epsilon} \bigg[ \frac{4}{3} g_1^2 + 4 g_3^2 + 3 y^{u\,\dagger} y^u\bigg]\,, \\ Z_{d_R} &= 1 -\frac{1}{48 \pi^2 \epsilon}\bigg[ \frac{1}{3} g_1^2 + 4 g_3^2 + 3 y^{d\,\dagger}y^d \bigg]\,, \\ Z_{e_R} &= 1 -\frac{1}{16 \pi^2 \epsilon} \bigg[ g_1^2 + y^{e\,\dagger}y^e\bigg]\,, \\% Z_{\phi} &= 1 + \frac{1}{32\pi^2 \epsilon} \bigg[ g_1^2 + 3 g_2^2 -2 \gamma_\phi^{(Y)} \bigg]\,, \\ Z_{B} &= 1 - \frac{41 g_1^2}{96 \pi^2 \epsilon}\,, \\ Z_{W} &= 1 + \frac{19 g_2^2}{96 \pi^2 \epsilon}\,, \\ Z_{G} &= 1 + \frac{14 g_3^2}{32 \pi^2 \epsilon}\,, \end{align} where we have defined \begin{equation} \gamma_\phi^{(Y)} \equiv {\rm Tr}\Big[y^{e\,\dagger} y^e + 3 y^{u\,\dagger} y^u + 3 y^{d\,\dagger} y^d\Big]\,.
\end{equation} The final result for the $\beta$-functions, written as usual in matrix form with flavour indices implicit, reads: \begin{align}\label{eq:beta10} \beta_{a_{\ensuremath{su\phi}}} = \, &2 \bigg[ \bigg( \lambda_{s\phi} - \frac{17g_1^2}{24} - \frac{9 g_2^2}{8} -4 g_3^2 + \frac{1}{2} \gamma_\phi^{(Y)} \bigg) a_{\ensuremath{su\phi}} \nonumber\\ & -\frac{3}{4} y^d y^{d\dagger} a_{\ensuremath{su\phi}} % + \frac{5}{4}y^u y^{u\dagger} a_{\ensuremath{su\phi}} + a_{\ensuremath{su\phi}} y^{u\dagger} y^u +y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} y^u - \frac{1}{2} a_{\ensuremath{sd\phi}} y^{d\dagger} y^u \nonumber\\ & - \bigg(\frac{17 g_1^2}{6}a_{s\widetilde{B}} +\frac{9g_2^2}{2} a_{s\widetilde{W}} + 16 g_3^2 a_{s\widetilde{G}} + {\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] \bigg) y^u \bigg] \,, \end{align} \begin{align}\label{eq:beta11} \beta_{a_{\ensuremath{sd\phi}}} = \, & 2 \bigg[ \bigg(\lambda_{s\phi} - \frac{5 g_1^2}{24} - \frac{9 g_2^2}{8} - 4 g_3^2 + \frac{1}{2} \gamma_\phi^{(Y)} \bigg) a_{\ensuremath{sd\phi}} \nonumber\\& - \frac{3}{4}y^u y^{u\dagger} a_{\ensuremath{sd\phi}} % % + \frac{5}{4} y^d y^{d\dagger} a_{\ensuremath{sd\phi}} +a_{\ensuremath{sd\phi}} y^{d\dagger} y^d + y^u a_{\ensuremath{su\phi}}^{\mkern-1.5mu\mathsf{T}} y^d - \frac{1}{2} a_{\ensuremath{su\phi}} y^{u\dagger} y^d \nonumber\\& - \bigg(\frac{5 g_1^2}{6}a_{s\widetilde{B}} +\frac{9 g_2^2}{2} a_{s\widetilde{W}} + 16 g_3^2 a_{s\widetilde{G}} - {\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] \bigg) y^d\bigg] \,, \end{align} \begin{align}\label{eq:beta12} \beta_{a_{\ensuremath{se\phi}}} = \,& 2 \bigg[ a_{\ensuremath{se\phi}} \bigg(\lambda_{s\phi} - \frac{15g_1^2}{8} - \frac{9g_2^2}{8} + \frac{1}{2} \gamma_\phi^{(Y)} \bigg) % +
\frac{5}{4} y^e y^{e\dagger} a_{\ensuremath{se\phi}} + a_{\ensuremath{se\phi}} y^{e\dagger} y^e \nonumber\\ & - \bigg(\frac{15 g_1^2}{2}a_{s\widetilde{B}} +\frac{9g_2^2}{2} a_{s\widetilde{W}} - {\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] \bigg) y^e \bigg] \,, \end{align} \begin{align}\label{eq:beta18} \beta_{a_{s\widetilde{B}}} &= \frac{41}{3} g_1^2 a_{s\widetilde{B}}\,, \\ \label{eq:beta16} \beta_{a_{s\widetilde{W}}} &= -\frac{19}{3} g_2^2 a_{s\widetilde{W}}\,, \\ \label{eq:beta14} \beta_{a_{s\widetilde{G}}} &= -14g_3^2 a_{s\widetilde{G}}\,. \end{align} A more graphical picture of the operator mixing can be obtained for the case in which different fermion families factorize, so that all dimension-five Wilson coefficients are flavour diagonal, $a_{\alpha \beta} = \delta_{\alpha \beta} a_{\alpha}$ (also neglecting off-diagonal Yukawa couplings). Writing $\gamma_{nm}$, where $n$ runs over $\ensuremath{\mathcal{O}}_{\ensuremath{su\phi}}^\alpha$, $\ensuremath{\mathcal{O}}_{\ensuremath{sd\phi}}^\alpha$, $\ensuremath{\mathcal{O}}_{\ensuremath{se\phi}}^\alpha$, $\ensuremath{\mathcal{O}}_{s\widetilde{G}}$, $\ensuremath{\mathcal{O}}_{s\widetilde{W}}$ and $\ensuremath{\mathcal{O}}_{s\widetilde{B}}$, and $m$ over the same operators but with flavour index $\rho$, we can express the anomalous dimensions in the following form: \begin{equation}\label{eq:result1} \gamma = \begin{pmatrix} \gamma_{11} + 6 y_u^\alpha y_u^\rho \quad& y_d^\alpha y_u^\alpha - 6y_u^\alpha y_d^\rho \quad& -2 y_u^\alpha y_e^\rho \quad& -32 g_3^2 y_u^\alpha \quad& -9 g_2^2 y_u^\alpha \quad& -\frac{17}{3} g_1^2 y_u^\alpha \\[0.5cm] y_u^\alpha y_d^\alpha - 6 y_d^\alpha y_u^\rho \quad& \gamma_{22} + 6 y_d^\alpha y_d^\rho & 2 y_d^\alpha y_e^\rho & -32 g_3^2 y_d^\alpha & -9 g_2^2 y_d^\alpha& -\frac{5}{3}g_1^2 y_d^\alpha \\[0.5cm] -6 y_e^\alpha y_u^\rho & 6 y_e^\alpha y_d^\rho &
\gamma_{33} + 2y_e^\alpha y_e^\rho & 0 & -9 g_2^2 y_e^\alpha & -15g_1^2 y_e^\alpha \\[0.5cm] 0 & 0 & 0 & -14 g_3^2 & 0 & 0 \\[0.5cm] 0 & 0 & 0 & 0 & -\frac{19}{3} g_2^2 & 0 \\[0.5cm] 0 & 0 & 0 & 0 & 0 & \frac{41}{3} g_1^2 \\ \end{pmatrix}\,, \end{equation} where a $\delta_{\alpha \rho}$ should be understood in every entry in which the $\rho$-index does not explicitly appear, and we have defined \begin{align} \gamma_{11} &= 2 \lambda_{s\phi} - \frac{3}{2} \left(y_d^\alpha\right)^2 + \frac{9}{2} \left(y_u^\alpha\right)^2 -\frac{17}{12} g_1^2 - \frac{9}{4} g_2^2 -8 g_3^2 + \gamma_\phi^{(Y)}\,,\\ \gamma_{22} &= 2 \lambda_{s\phi} - \frac{3}{2} \left(y_u^\alpha\right)^2 + \frac{9}{2} \left(y_d^\alpha\right)^2 -\frac{5}{12} g_1^2 - \frac{9}{4} g_2^2 -8 g_3^2 + \gamma_\phi^{(Y)}\,,\\ \gamma_{33} &= 2 \lambda_{s\phi} + \frac{9}{2} \left(y_e^\alpha\right)^2 -\frac{15}{4} g_1^2 - \frac{9}{4} g_2^2+ \gamma_\phi^{(Y)}\,. \end{align} Note that, due to the contribution to $r_{\ensuremath{s\phi\square}}$, there is inter-generational mixing even in the flavour-diagonal case, but the choice of diagonal Wilson coefficients is radiatively stable (up to the small non-diagonal terms in the SM Yukawa couplings). Different pieces of the anomalous dimension matrix have been previously computed in the literature. In particular, the mixing of the operators $\mathcal{O}_{\ensuremath{su\phi}}$, $\mathcal{O}_{\ensuremath{sd\phi}}$ and $\mathcal{O}_{\ensuremath{se\phi}}$ driven by Yukawa interactions has been obtained in Ref.~\cite{Choi:2017gpf}. Such work relies, however, on a different basis of effective interactions, where the fermionic operators take the form $(\partial_\mu s) \overline{\Psi} \gamma^\mu \Psi$.
Using the relation between their Wilson coefficients and $a_{\ensuremath{su\phi}}$, $a_{\ensuremath{sd\phi}}$ and $a_{\ensuremath{se\phi}}$, we have obtained the beta functions for the latter from the results in Ref.~\cite{Choi:2017gpf} and compared them with our direct calculation reported in Eqs.~\eqref{eq:beta10}--\eqref{eq:beta12}. The results completely agree up to a sign difference in the terms proportional to ${\rm Tr}\left[y^e a_{\ensuremath{se\phi}}^{\mkern-1.5mu\mathsf{T}} + 3 y^d a_{\ensuremath{sd\phi}}^{\mkern-1.5mu\mathsf{T}} - 3 a_{\ensuremath{su\phi}} y^{u\dagger} \right] y^\psi$, with $\psi=u,d,e$. Unfortunately, we do not find enough information to track down the origin of this discrepancy. It is also worth emphasizing that the set of effective operators used in Ref.~\cite{Choi:2017gpf} is overcomplete. We provide in Appendix~\ref{app:bases} the RGEs for a new basis in which the redundant operators have been removed, under some simplifying assumptions. In Ref.~\cite{Bauer:2016lbe} it was shown that the Wilson coefficients of the operators $g_3^2 \mathcal{O}_{s\widetilde{G}}$, $g_2^2 \mathcal{O}_{s\widetilde{W}}$ and $g_1^2 \mathcal{O}_{s\widetilde{B}}$ are scale invariant, \textit{i.e.} they do not depend on $\mu$. This is consistent with our results, for which all the running of $a_{s\widetilde{G},s\widetilde{W},s\widetilde{B}}$ can be accounted for by the running of the corresponding gauge couplings, which are determined by the wave function renormalization of the gauge fields in the background field method. Similarly, the one-loop ALP couplings to fermions induced by the $\mathcal{O}_{s\widetilde{G},s\widetilde{W},s\widetilde{B}}$ operators in the effective Lagrangian were computed in Ref.~\cite{Bauer:2017ris}, again in a different basis from ours. We have compared the results, using the RGEs derived in Appendix~\ref{app:bases}, and found exact agreement.
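The scale invariance of these combinations is easy to verify numerically. A minimal sketch (not part of the paper's calculation; boundary values purely illustrative): integrating the one-loop QCD running, $16\pi^2\,\mu\,\mathrm{d}g_3/\mathrm{d}\mu = -7\,g_3^3$ for six active flavours, together with $\beta_{a_{s\widetilde{G}}} = -14 g_3^2\, a_{s\widetilde{G}}$ from Eq.~\eqref{eq:beta14}, shows that $a_{s\widetilde{G}}/g_3^2$ stays constant:

```python
import math

B0 = 7.0  # one-loop QCD beta coefficient, SM with six active flavours

def rhs(g3, a):
    """One-loop RGEs in t = ln(mu):
    16 pi^2 dg3/dt = -7 g3^3 and 16 pi^2 da/dt = -14 g3^2 a."""
    k = 1.0 / (16.0 * math.pi ** 2)
    return -B0 * g3 ** 3 * k, -14.0 * g3 ** 2 * a * k

def run(g3, a, t0, t1, steps=10000):
    """Fourth-order Runge-Kutta integration of the coupled system."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1g, k1a = rhs(g3, a)
        k2g, k2a = rhs(g3 + 0.5 * h * k1g, a + 0.5 * h * k1a)
        k3g, k3a = rhs(g3 + 0.5 * h * k2g, a + 0.5 * h * k2a)
        k4g, k4a = rhs(g3 + h * k3g, a + h * k3a)
        g3 += h * (k1g + 2 * k2g + 2 * k3g + k4g) / 6.0
        a += h * (k1a + 2 * k2a + 2 * k3a + k4a) / 6.0
    return g3, a

# Run down from a high scale to the TeV scale (boundary values illustrative).
g3_0, a_0 = 0.5, 1.0e-3
g3_1, a_1 = run(g3_0, a_0, math.log(1e16), math.log(1e3))
ratio_0, ratio_1 = a_0 / g3_0 ** 2, a_1 / g3_1 ** 2
print(g3_1, ratio_1 / ratio_0)  # coupling grows towards the IR; ratio stays ~1
```

The constancy of `ratio` reflects precisely the statement that the coefficient of $g_3^2\,\mathcal{O}_{s\widetilde{G}}$ does not depend on $\mu$.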
Finally, the RGEs for the dimension-four operators in the ALP EFT Lagrangian must be provided in order to fully determine the theory. We have obtained these with the help of \texttt{Pyr@te}~\cite{Lyonnet:2016xiz}; they are reported in Appendix~\ref{app:4dim}. Let us stress once more that the RGEs of renormalizable interactions are not perturbed by higher-dimensional operators at order $\mathcal{O}(1/\Lambda)$. \section{Matching and running below the electroweak scale} \label{sec:lalp} At energies smaller than the EW scale, set by the Higgs vacuum expectation value (VEV), $v\sim 246$ GeV, the ALP phenomenology must be described by a different EFT, which we call the ALP LEFT, organised in inverse powers of $v$, in which the heavy top quark and the Higgs, $Z$ and $W$ bosons are no longer present. Still assuming CP conservation, the corresponding ALP LEFT Lagrangian, to dimension five, takes the following form: \begin{align} % \mathcal{L}_{\text{LEFT}} &= \frac{1}{2}(\partial_\mu s)(\partial^\mu s) -\frac{1}{2}\tilde{m}^2s^2 - \frac{\tilde{\lambda}_s}{4!} s^4 -\frac{1}{4} A_{\mu\nu} A^{\mu\nu} -\frac{1}{4} G^A_{\mu\nu} G^{A\,\mu\nu} \nonumber \\ % &+\sum_{\psi=u,d,e}\bigg\{ \overline{\psi^\alpha}\ensuremath{\mathrm{i}}\slashed{D}\psi^\alpha - \bigg[(\tilde{m}_\psi)_{\alpha\beta}\overline{\psi^\alpha_L}\psi_R^\beta - s \,\ensuremath{\mathrm{i}} (\tilde{c}_\psi)_{\alpha\beta} \overline{\psi^\alpha_L}\psi_R^\beta + \text{h.c.}\bigg]\bigg\} \nonumber \\[0.2cm] % &+ \tilde{a}_{s\widetilde{G}}s\,G_{\mu\nu}^A \widetilde{G}^{A\,\mu\nu} + \tilde{a}_{s\widetilde{A}} s A_{\mu\nu} \widetilde{A}^{\mu\nu} \nonumber \\[0.2cm] % &+\sum_{\psi=u,d,e}\bigg\{(\tilde{a}_{\psi A})_{\alpha\beta}\overline{\psi_L^\alpha}\sigma^{\mu\nu} \psi_R^\beta A_{\mu\nu} + (\tilde{a}_{\psi G})_{\alpha\beta} \overline{\psi_L^\alpha}\sigma^{\mu\nu} T_A \psi_R^\beta G_{\mu\nu}^A + s^2 (\tilde{a}_\psi)_{\alpha\beta}\overline{\psi_L^\alpha}\psi_R^\beta+\text{h.c.}\bigg\}\,, \label{lag:left} \end{align} where
$\alpha,\beta$ are flavour indices that run over the three families for $d$ and $e$ and over the lighter two for the case of $u$. The assumed CP invariance forces all coefficients to be real (or real matrices when flavour is involved). When no chiralities for the fermions are explicitly written we assume $\psi=\psi_L+\psi_R$ and the covariant derivative in this regime reads $D_\mu = \partial_\mu -\ensuremath{\mathrm{i}}\tilde{e}QA_\mu -\ensuremath{\mathrm{i}}\tilde{g}_3 T^A G_\mu^A$, with $Q$ being the electric charge. We emphasise that, contrary to the ALP EFT above the EW scale, in this case there are (lepton-number-conserving) operators of the same dimension both with and without the ALP. Note that, as explicitly written above, we work in a flavour basis in which mass matrices are not necessarily diagonal. At the level of computation, this is equivalent to promoting the masses to Yukawa couplings of a spurion scalar field which is later set to its VEV. Technically, every time we have to integrate out one of the SM fermions (the top in this case, and lighter fermions as we go to lower energies; see below) we go to the physical basis in which the mass matrix is diagonalised, with off-diagonal mass terms being regenerated by the running to lower energies. The following redundant operators arise at dimension five: \begin{equation} \mathcal{L}_R = \sum_{\psi=u,d,e} \bigg[ \left(\tilde{r}_{\psi \Box }\right)_{\alpha\beta} \overline{\psi_L^\alpha} D^2 \psi_R^\beta + \ensuremath{\mathrm{i}} \left(\tilde{r}_{s \psi_L}\right)_{\alpha\beta} s \overline{\psi_L^\alpha} \ensuremath{\mathrm{i}} \slashed{D} \psi_L^\beta + \ensuremath{\mathrm{i}} \left(\tilde{r}_{s \psi_R}\right)_{\alpha\beta} s \overline{\psi_R^\alpha} \ensuremath{\mathrm{i}} \slashed{D} \psi_R^\beta + \text{h.c.}\bigg]\,, \label{eq:LEFTred} \end{equation} where again the operators are CP-even for real Wilson coefficients.
The redundant operator without the ALP can be removed by making use of the relation \begin{equation} D^2 = \slashed{D}^2 + \frac{\sigma_{\mu\nu}}{2} \left( \tilde{e} Q A^{\mu\nu} + \tilde{g}_3 G^{\mu\nu}_A T_A\right)~. \label{eq:Dslashed} \end{equation} The first term in the equation above can then be further reduced by applying the equations of motion for fermions in the ALP LEFT, \begin{equation} \ensuremath{\mathrm{i}}\slashed{D} \psi_\alpha = (\tilde{m}_\psi)_{\alpha\beta} \psi_R^\beta + (\tilde{m}_\psi^\dagger)_{\alpha\beta} \psi_L^\beta - \ensuremath{\mathrm{i}} \left( \tilde{c}_\psi \right)_{\alpha\beta} s \psi_R^\beta + \ensuremath{\mathrm{i}} ( \tilde{c}_\psi^{\dagger} )_{\alpha\beta} s \psi_L^\beta~. \end{equation} There is an apparent ambiguity in this process for the $\slashed{D}^2$ term due to the possibility of performing integration by parts before applying the equations of motion. However, this ambiguity simply corresponds to a chiral rotation and therefore has no physical consequences (see Ref.~\cite{Jenkins:2017dyc} for a related discussion).\footnote{As an example of the mentioned apparent ambiguity, we could choose to apply the equations of motion without the splitting in Eq.~\eqref{eq:split}. In that case we obtain the same contribution to the dimension-five operators but a different contribution to the renormalizable ones. This difference is however removed, after canonical normalization, by the following chiral unitary rotation: \begin{equation} \psi_L \to \left(1+\frac{ \tilde{r}_{\psi\Box} \tilde{m}_\psi^\dagger -\tilde{m}_\psi \tilde{r}_{\psi\Box}^\dagger }{4}\right)\psi_L\,, \qquad \psi_R \to \left(1+\frac{ \tilde{m}_\psi^\dagger \tilde{r}_{\psi\Box} -\tilde{r}_{\psi\Box}^\dagger \tilde{m}_\psi }{4}\right)\psi_R\,. \end{equation} } We choose to split the covariant derivative symmetrically: \begin{equation} \slashed{D}^2=\frac{1}{2}(\slashed{D}^2+\overleftarrow{\slashed{D}}^2)\,.
\label{eq:split} \end{equation} In this case we obtain the following on-shell equivalence relations (as usual we write our equations in matrix form in flavour space): \begin{align} \overline{\psi_L} \tilde{r}_{\psi \Box} D^2 \psi_R + \text{h.c.}=& - \overline{\psi_L} \frac{\tilde{r}_{\psi \Box} \tilde{m}_\psi^\dagger +\tilde{m}_\psi\tilde{r}_{\psi \Box}^\dagger}{2} \ensuremath{\mathrm{i}} \slashed{D} \psi_L - \overline{\psi_R} \frac{\tilde{r}_{\psi \Box}^\dagger \tilde{m}_\psi +\tilde{m}_\psi^\dagger\tilde{r}_{\psi \Box}}{2} \ensuremath{\mathrm{i}} \slashed{D} \psi_R \nonumber \\ +&\bigg[ \ensuremath{\mathrm{i}} s \overline{\psi_L} \frac{\tilde{m}_\psi \tilde{r}_{\psi \Box}^\dagger \tilde{c}_\psi +\tilde{c}_\psi \tilde{r}_{\psi \Box}^\dagger \tilde{m}_\psi }{2} \psi_R +s^2 \overline{\psi_L} \tilde{c}_\psi\tilde{r}_{\psi\Box}^\dagger\tilde{c}_\psi \psi_R \nonumber \\ &+ \overline{\psi_L} \frac{\tilde{e} Q_\psi \tilde{r}_{\psi \Box}}{2} \sigma_{\mu\nu} \psi_R A^{\mu\nu} + \overline{\psi_L} \frac{\tilde{g}_3 \tilde{r}_{\psi \Box}}{2} T_A \sigma_{\mu\nu} \psi_R G_A^{\mu\nu} +\text{h.c.}\bigg] \,,\label{red:psiBox} \\ \ensuremath{\mathrm{i}} s \overline{\psi_L} \tilde{r}_{s\psi_L} \ensuremath{\mathrm{i}} \slashed{D} \psi_L + \text{h.c.} = &\, \ensuremath{\mathrm{i}} s \overline{\psi_L} \tilde{r}_{s\psi_L} \tilde{m}_\psi \psi_R + s^2 \overline{\psi_L} \tilde{r}_{s\psi_L} \tilde{c}_\psi \psi_R +\text{h.c.} \,,\label{red:RspsiL} \\ \ensuremath{\mathrm{i}} s \overline{\psi_R} \tilde{r}_{s\psi_R} \ensuremath{\mathrm{i}} \slashed{D} \psi_R + \text{h.c.} = & -\ensuremath{\mathrm{i}} s \overline{\psi_L} \tilde{m}_\psi \tilde{r}_{s\psi_R}^\dagger \psi_R - s^2 \overline{\psi_L} \tilde{c}_\psi \tilde{r}_{s\psi_R}^\dagger \psi_R +\text{h.c.} \,.\label{red:RspsiR} % \end{align} The parameters of the ALP LEFT can be fully fixed at the scale $\mu =v$ by requiring that it describes exactly the same physics as the EFT before EWSB at that scale.
Proceeding this way at tree level, we obtain the following matching conditions for the interactions in Eq.~\eqref{lag:left}: \begin{align}\label{eq:matching} &\tilde{e} = g_2 s_w=g_1 c_w\,,& &\tilde{m}^2 = m^2 + \frac{\lambda_{s\phi}}{2} v^2\,,& \\ & \tilde{g}_3 =g_3 \,,& & \tilde{\lambda}_s = \lambda_s - 3\frac{v^2}{m_h^2} \lambda_{s\phi}^2\,,& \\ & (\tilde{m}_u)_{\alpha\beta} =\frac{v}{\sqrt{2}} (y^u)_{\alpha\beta} \,,& & (\tilde{c}_u)_{\alpha\beta} = \frac{v}{\sqrt{2}} (a_{\ensuremath{su\phi}})_{\alpha\beta} \,,&\\ & (\tilde{m}_d)_{\alpha\beta} = \frac{v}{\sqrt{2}} (y^d)_{\alpha\beta} \,,& & (\tilde{c}_d)_{\alpha\beta} = \frac{v}{\sqrt{2}} (a_{\ensuremath{sd\phi}})_{\alpha\beta}\,,&\\ & (\tilde{m}_e)_{\alpha\beta} = \frac{v}{\sqrt{2}} (y^e)_{\alpha\beta} \,,& & (\tilde{c}_e)_{\alpha\beta} = \frac{v}{\sqrt{2}} (a_{\ensuremath{se\phi}})_{\alpha\beta}\,,&\\ & \tilde{a}_{s\widetilde{G}} = a_{s\widetilde{G}}\,,& & \tilde{a}_{s\widetilde{A}} = a_{s\widetilde{W}} s_w^2 + a_{s\widetilde{B}} c_w^2\,;&\label{eq:matching2} \end{align} where, as before, $\alpha$ and $\beta$ are flavour indices that run over the three families for $d$ and $e$ and over the first two for $u$; $c_w$ and $s_w$ are the cosine and sine of the Weinberg angle, respectively. All the other Wilson coefficients vanish at the order we are computing. The fact that the three coefficients $\tilde{a}_{u,d,e}$ vanish might be surprising at first glance, as the Higgs couples both to $s^2$ and to fermionic currents with overall strength $\sim \lambda_{s\phi} y^\psi/v$. However, precisely because the Higgs boson sets the scale of light masses~\cite{Jenkins:2017jig}, \textit{i.e.} because $y^\psi\sim m_\psi/v$, the product $\lambda_{s\phi} y^\psi/v \sim \lambda_{s\phi} m_\psi/v^2$ is of higher order in the low-energy power counting and therefore negligible.
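The tree-level matching conditions above are straightforward to transcribe into code. A toy sketch (flavour matrices collapsed to single numbers; function and key names are ours, not from any public package):

```python
import math

def match_at_v(g1, g2, g3, v, m2, lam_s, lam_sphi, mh2,
               yu, yd, ye, a_su, a_sd, a_se, a_sB, a_sW, a_sG):
    """Tree-level ALP EFT -> ALP LEFT matching at mu = v (illustrative
    transcription of the matching conditions quoted in the text)."""
    e = g1 * g2 / math.sqrt(g1 ** 2 + g2 ** 2)  # e = g2 s_w = g1 c_w
    sw2 = g1 ** 2 / (g1 ** 2 + g2 ** 2)         # sin^2 of the Weinberg angle
    cw2 = 1.0 - sw2
    r2 = v / math.sqrt(2.0)
    return {
        "e": e, "g3": g3,
        "m2": m2 + 0.5 * lam_sphi * v ** 2,
        "lam_s": lam_s - 3.0 * v ** 2 / mh2 * lam_sphi ** 2,
        "m_u": r2 * yu, "c_u": r2 * a_su,
        "m_d": r2 * yd, "c_d": r2 * a_sd,
        "m_e": r2 * ye, "c_e": r2 * a_se,
        "a_sG": a_sG,
        "a_sA": a_sW * sw2 + a_sB * cw2,   # photon coupling from W3/B mixing
    }

out = match_at_v(0.36, 0.65, 1.2, 246.0, 1.0e4, 0.1, 0.01, 125.0 ** 2,
                 0.9, 0.02, 0.01, 1e-3, 1e-3, 1e-3, 1e-3, 2e-3, 3e-3)
```

All remaining LEFT coefficients are set to zero at this order, as stated above.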
This is no longer true at dimension six (it remains true in the pure SMEFT even at dimension six, because an $s$-channel Higgs always involves two powers of Yukawa couplings). Similarly, a possible contribution proportional to two powers of $v a_{\ensuremath{su\phi}}$ is also higher order in the $1/\Lambda$ expansion. At energies below the bottom quark mass $m_b$, the effective Lagrangian takes exactly the same form as in Eq.~\eqref{lag:left} except that the flavour indices now run only over the remaining fermions and the Wilson coefficients in the new EFT have to be matched accordingly. The same logic applies as we cross new fermionic thresholds. At the order we are considering, however, the matching is straightforward and the only thing we have to do is remove the Wilson coefficients involving the particle being integrated out. The only exception arises if $\tilde{c}_\psi$ is unsuppressed, in which case integrating out a massive fermion would result in the following matching condition: \begin{equation} (\tilde{a}_\psi)_{\alpha \beta} = -\frac{(\tilde{c}_\psi)_{\alpha \gamma} (\tilde{c}_\psi)_{\gamma \beta}}{(\tilde{m}_\psi)_\gamma} \mbox{ (no sum over $\gamma$)}\,, \end{equation} where $\gamma$ corresponds to the flavour that is being integrated out while $\alpha$ and $\beta$ run over lighter flavours of the same type of fermion. This term is higher order if the ALP LEFT is obtained from the ALP EFT. However, we prefer to keep this section completely general, independently of which theory completes the ALP LEFT in the UV. The running of the Wilson coefficients between different thresholds is very different from the running above the EW scale (that of the operators in Table~\ref{tab:eft}). In particular, operators of different energy dimensions, as well as operators with and without the ALP field, will now mix under renormalization.
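The threshold matching condition above can be sketched as follows (a toy transcription under the stated assumptions: the $\tilde{c}_\psi$ matrix is given as nested lists, the masses are diagonal, and the function name is ours):

```python
def integrate_out_flavour(c, m, g):
    """Tree-level threshold matching for an unsuppressed c_psi: removing
    flavour index g from the n x n coupling matrix c generates
        a[i][j] = - c[i][g] * c[g][j] / m[g]   (no sum over g)
    for the surviving flavours; c is returned restricted to them as well."""
    keep = [i for i in range(len(c)) if i != g]
    c_light = [[c[i][j] for j in keep] for i in keep]
    a_light = [[-c[i][g] * c[g][j] / m[g] for j in keep] for i in keep]
    return c_light, a_light

# Two flavours, integrate out the heavier one (index 1):
c_light, a_light = integrate_out_flavour(
    [[0.1, 0.2], [0.3, 0.4]], [1.0, 2.0], 1)
```

As noted in the text, this contribution is higher order when the ALP LEFT descends from the ALP EFT, but it is kept here for generality.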
\subsection{Divergences at one loop} Similarly to how we proceeded in Section~\ref{sec:divergences}, we fix the ALP LEFT divergences by computing a reduced set of 1PI amplitudes, the $1/v$ terms of which we reproduce below. Since the dimension-five operators mix into renormalizable ones, we start with the divergences that can be absorbed into the renormalizable operators. In particular, the divergences associated with the kinetic terms can be parametrised, at the one-loop order, in terms of the wave function renormalization factors as follows: \begin{align} \mathcal{L}_{\text{kin}}&= \overline{\psi_L} (1-\delta Z_L) \ensuremath{\mathrm{i}} \slashed{D} \psi_L + \overline{\psi_R} (1-\delta Z_R) \ensuremath{\mathrm{i}} \slashed{D} \psi_R +\frac{1}{2} (1-\delta Z_s) (\partial_\mu s)(\partial^\mu s) \nonumber \\ &-\frac{1}{4} (1-\delta Z_A) A_{\mu\nu} A^{\mu\nu} -\frac{1}{4} (1-\delta Z_G) G^A_{\mu\nu} G^{A\,\mu\nu}\,, \end{align} where the wave function renormalization factors are defined in general by \begin{equation} Z= 1+\delta Z\,, \end{equation} and the relative minus sign is due to the fact that the $Z$ factors are conventionally defined to absorb, rather than parametrise, the corresponding divergences. As discussed above, the wave function renormalization factors have contributions proportional only to renormalizable couplings (that contribute to the running of the non-renormalizable ones; see Figs.~\ref{fig:A_A}--\ref{fig:d_d}) and to dimension-five couplings (that contribute to the mixing into renormalizable ones).
We obtain the following result: \begin{align} & Z_{e_L} = 1 - \frac{\alpha}{4 \pi \epsilon} - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_e \tilde{c}_e^{\dagger} \right) - \frac{3 \tilde{e} }{16\pi^2\epsilon} \left( m_e \tilde{a}_{eA}^\dagger + \tilde{a}_{eA} m_e^\dagger \right)\,, \label{ZeL:offshell} \\ & Z_{e_R} = 1 - \frac{\alpha}{4 \pi \epsilon} - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_e^{\dagger} \tilde{c}_e \right) - \frac{3 \tilde{e} }{16\pi^2\epsilon} \left( \tilde{a}_{eA}^\dagger m_e + m_e^\dagger \tilde{a}_{eA} \right)\,, \label{ZeR:offshell}\\ & Z_{d_{L}} = 1 - \frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{12} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_d \tilde{c}_d^{\dagger} \right) \nonumber \\ & ~~~~~~~ -\frac{ \tilde{e} }{16\pi^2\epsilon} \left( m_d \tilde{a}_{dA}^\dagger + \tilde{a}_{dA} m_d^\dagger \right)+\frac{\tilde{g}_3}{4\pi^2\epsilon} \left( m_d \tilde{a}_{dG}^\dagger + \tilde{a}_{dG}m_d^\dagger \right) \,, \label{ZdL:offshell}\\ % & Z_{d_{R}} = 1 - \frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{12} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left( \tilde{c}_d^{\dagger} \tilde{c}_d \right) \nonumber\\ & ~~~~~~~ -\frac{\tilde{e} }{16\pi^2\epsilon} \left(\tilde{a}_{dA}^\dagger m_d + m_d^\dagger \tilde{a}_{dA} \right)+\frac{\tilde{g}_3}{4\pi^2\epsilon} \left( \tilde{a}_{dG}^\dagger m_d + m_d^\dagger \tilde{a}_{dG} \right) \,, \label{ZdR:offshell}\\ & Z_{u_{L}} = 1 -\frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{3} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_u \tilde{c}_u^{\dagger} \right) \nonumber \\ & ~~~~~~~+ \frac{2 \tilde{e} }{16\pi^2\epsilon} \left( m_u \tilde{a}_{uA}^\dagger + \tilde{a}_{uA} m_u^\dagger \right)+\frac{\tilde{g}_3}{4\pi^2\epsilon} \left( m_u \tilde{a}_{uG}^\dagger + \tilde{a}_{uG}m_u^\dagger \right)\,, \label{ZuL:offshell} % \end{align} % \begin{align} & Z_{u_{R}} = 1 -\frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{3} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left( 
\tilde{c}_u^{\dagger}\tilde{c}_u \right) \nonumber \\ & ~~~~~~~+ \frac{2 \tilde{e} }{16\pi^2\epsilon} \left( \tilde{a}_{uA}^\dagger m_u + m_u^\dagger \tilde{a}_{uA} \right)+\frac{\tilde{g}_3}{4\pi^2\epsilon} \left( \tilde{a}_{uG}^\dagger m_u + m_u^\dagger \tilde{a}_{uG} \right)\,, \label{ZuR:offshell}\\ & Z_{A} = 1 - \frac{\alpha}{3 \pi \epsilon} \left[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u \right]\nonumber \\ & ~~~~~~~ +\frac{\tilde{e}}{2\pi^2\epsilon} {\rm Tr} \left[ (\tilde{a}_{eA}^\dagger m_e + m_e^\dagger \tilde{a}_{eA}) - 2 (\tilde{a}_{uA}^\dagger m_u + m_u^\dagger \tilde{a}_{uA}) + (\tilde{a}_{dA}^\dagger m_d + m_d^\dagger \tilde{a}_{dA})\right]\,, \\ & Z_{G} = 1 + \frac{\alpha_s}{4\pi \epsilon} \left[11 - \frac{2}{3} (n_u+n_d) \right] - \frac{\tilde{g}_3}{4\pi^2\epsilon} {\rm Tr} \left[ \tilde{a}_{dG}^\dagger m_d + m_d^\dagger \tilde{a}_{dG} + \tilde{a}_{uG}^\dagger m_u + m_u^\dagger \tilde{a}_{uG} \right]\,, \\ & Z_s = 1 - \frac{1}{8\pi^2 \epsilon} {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right]\,. \label{Zs:offshell} \end{align} For the remaining couplings we only need to consider diagrams with a single insertion of a dimension-five operator. The result is given below, organised according to the amplitudes we have used to compute the corresponding divergences. We provide all the relevant diagrams in Appendix~\ref{diagrams:left}. 
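A quick consistency check on the $Z$ factors above can be automated: every dimension-five combination entering them has the form $\tilde{a}^\dagger m + m^\dagger \tilde{a}$ (or its trace), which is hermitian by construction, so the gauge-field wave function renormalizations remain real. A minimal numpy sketch with random illustrative flavour matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def flavour_matrix(n=3):
    """Random complex n x n matrix standing in for m_psi or a_psiA."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

m, a = flavour_matrix(), flavour_matrix()

# Combination appearing in Z_A and Z_G above:
X = a.conj().T @ m + m.conj().T @ a
assert np.allclose(X, X.conj().T)     # hermitian by construction
assert abs(np.trace(X).imag) < 1e-10  # its trace, hence Z_A and Z_G, is real
```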
\begin{itemize} \item $s(p_1)\to s(p_2)$ The amplitude given by the diagrams in Fig.~\ref{fig:s_s} fixes the divergence of the mass term, % \begin{align} % \tilde{m}'^2 & = -\frac{3}{4\pi^2\epsilon} \left( {\rm Tr} \left[m_d^\dagger \tilde{a}_d m_d^\dagger m_d + m_d^\dagger m_d \tilde{a}_d^{\dagger} m_d \right] + {\rm Tr} \left[m_u^\dagger \tilde{a}_u m_u^\dagger m_u + m_u^\dagger m_u \tilde{a}_u^{\dagger} m_u \right] \right)\nonumber \\ & -\frac{1}{4\pi^2\epsilon} {\rm Tr} \left[m_e^\dagger \tilde{a}_e m_e^\dagger m_e + m_e^\dagger m_e \tilde{a}_e^{\dagger} m_e \right]\,. % \end{align} \item $\psi (p_1)\to\psi (p_2)$ The diagrams for $\psi=e,u,d$ are shown in Figs.~\ref{fig:e_e}, \ref{fig:u_u}, \ref{fig:d_d}. They contribute to the fermion mass divergences, % \begin{align} % \tilde{m}_e' & = -\frac{3}{8\pi^2 \epsilon}e \left(m_e m_e ^\dagger \tilde{a}_{eA} + \tilde{a}_{eA} m_e^\dagger m_e\right) +\frac{1}{16\pi^2 \epsilon} \tilde{m}^2 \tilde{a}_{e}\,,\\ % \tilde{m}_u' & = \frac{2}{8\pi^2 \epsilon}e \left(m_u m_u ^\dagger \tilde{a}_{uA} + \tilde{a}_{uA} m_u^\dagger m_u\right) + \frac{1}{2\pi^2 \epsilon} \tilde{g}_3 \left( m_u m_u^\dagger \tilde{a}_{uG} + \tilde{a}_{uG} m_u^\dagger m_u \right) +\frac{1}{16\pi^2 \epsilon} \tilde{m}^2 \tilde{a}_u\,,\\ % \tilde{m}_d' & = -\frac{1}{8\pi^2 \epsilon}e \left(m_d m_d ^\dagger \tilde{a}_{dA} + \tilde{a}_{dA} m_d^\dagger m_d\right) + \frac{1}{2\pi^2 \epsilon} \tilde{g}_3 \left( m_d m_d^\dagger \tilde{a}_{dG} + \tilde{a}_{dG} m_d^\dagger m_d \right) +\frac{1}{16\pi^2 \epsilon} \tilde{m}^2 \tilde{a}_d\,, % \end{align} % as well as to the dimension-five contribution to the kinetic terms reported above. 
\item $s(p_1) s(p_2)\to s (p_3) s(p_4)$ The corresponding amplitude, represented by the diagrams in Fig.~\ref{fig:ss_ss}, fixes the ALP quartic coupling, % \begin{align} % \tilde{\lambda}_s' = -\frac{3}{ \pi^2 \epsilon}\bigg[{\rm Tr}^{\tilde{\lambda}}_e + 3\left({\rm Tr}^{\tilde{\lambda}}_u+{\rm Tr}^{\tilde{\lambda}}_d\right)\bigg]\,, % \end{align} where we have defined \begin{equation} {\rm Tr}^{{\tilde{\lambda}}}_\psi \equiv {\rm Tr}\bigg[ \tilde{a}_\psi \tilde{c}_\psi^{\dagger} \tilde{c}_\psi m_\psi^\dagger + \tilde{a}_\psi m_\psi^\dagger \tilde{c}_\psi \tilde{c}_\psi^{\dagger} - \tilde{a}_\psi^{\dagger} \tilde{c}_\psi m_\psi^\dagger \tilde{c}_\psi - \tilde{a}_\psi \tilde{c}_\psi^{\dagger} m_\psi \tilde{c}_\psi^{\dagger} + \tilde{a}_\psi^{\dagger} \tilde{c}_\psi \tilde{c}_\psi^{\dagger} m_\psi + \tilde{a}_\psi^{\dagger} m_\psi \tilde{c}_\psi^{\dagger} \tilde{c}_\psi\bigg]~.\nonumber \end{equation} \item $s(p_1)\to \overline{\psi^\alpha}(p_2)\psi^\beta (p_3)$ The corresponding diagrams are shown in Figs.~\ref{fig:s_ee}, \ref{fig:s_uu}, \ref{fig:s_dd} for $\psi = e,u,d$. 
We obtain the following divergences for the renormalizable operators: % \begin{align} % \tilde{c}_e' & = \frac{1}{8\pi^2 \epsilon} \left(\tilde{a}_e m_e^\dagger\tilde{c}_e + \tilde{c}_e m_e^\dagger \tilde{a}_e\right) + \frac{3 e}{8\pi^2\epsilon} \left(m_e \tilde{c}_e^{\dagger} \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{ \dagger} m_e- \tilde{c}_e m_e^\dagger \tilde{a}_{eA} - \tilde{a}_{eA} m_e^\dagger \tilde{c}_e\right) \,,\\ % \tilde{c}_u' &= \frac{1}{8\pi^2 \epsilon} \left(\tilde{a}_u m_u^\dagger\tilde{c}_u + \tilde{c}_u m_u^\dagger \tilde{a}_u\right) - \frac{2 e}{8\pi^2\epsilon} \left(m_u \tilde{c}_u^{\dagger} \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{ \dagger} m_u - \tilde{c}_u m_u^\dagger \tilde{a}_{uA} - \tilde{a}_{uA} m_u^\dagger \tilde{c}_u\right) \nonumber \\ & - \frac{\tilde{g}_3}{2\pi^2\epsilon} \left(m_u \tilde{c}_u^{\dagger} \tilde{a}_{uG} + \tilde{a}_{uG} \tilde{c}_u^{\dagger} m_u - \tilde{c}_u m_u^\dagger \tilde{a}_{uG} - \tilde{a}_{uG} m_u^\dagger \tilde{c}_u\right) \,,\\ % \tilde{c}_d' & = \frac{1}{8\pi^2 \epsilon} \left(\tilde{a}_d m_d^\dagger\tilde{c}_d + \tilde{c}_d m_d^\dagger \tilde{a}_d\right) + \frac{ e}{8\pi^2\epsilon} \left(m_d \tilde{c}_d^{\dagger} \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{ \dagger} m_d - \tilde{c}_d m_d^\dagger \tilde{a}_{dA} - \tilde{a}_{dA} m_d^\dagger \tilde{c}_d\right) \nonumber \\ & - \frac{\tilde{g}_3}{2\pi^2\epsilon} \left(m_d \tilde{c}_d^{\dagger} \tilde{a}_{dG} + \tilde{a}_{dG} \tilde{c}_d^{\dagger} m_d - \tilde{c}_d m_d^\dagger \tilde{a}_{dG} - \tilde{a}_{dG} m_d^\dagger \tilde{c}_d\right) \,, \end{align} and for the non-renormalizable ones: \begin{align} % \tilde{r}_{se_L}' & = -\frac{1}{16\pi^2 \epsilon} \tilde{a}_e \tilde{c}_e^{\dagger} + \frac{3}{2\pi \epsilon} \alpha \tilde{a}_{s\widetilde{A}} -\frac{3 e}{16 \pi^2 \epsilon} \tilde{c}_e \tilde{a}_{eA}^{\dagger} \,,\\ % % \tilde{r}_{se_R}' & = +\frac{1}{16\pi^2 \epsilon} \tilde{a}_e^\dagger \tilde{c}_e - \frac{3}{2\pi \epsilon} \alpha 
\tilde{a}_{s\widetilde{A}} +\frac{3 e}{16 \pi^2 \epsilon} \tilde{c}_e^\dagger \tilde{a}_{eA} \,,\\ % \tilde{r}_{su_L}' & = -\frac{1}{16\pi^2 \epsilon} \tilde{a}_u \tilde{c}_u^{\dagger} +\frac{2}{\pi \epsilon} \left(\frac{1}{3} \alpha \tilde{a}_{s\widetilde{A}} + \alpha_s \tilde{a}_{s\widetilde{G}}\right) +\frac{2 e}{16 \pi^2 \epsilon} \tilde{c}_u \tilde{a}_{uA}^{\dagger} + \frac{1}{4\pi^2 \epsilon } \tilde{g}_3 \tilde{c}_u\tilde{a}_{uG}^{\dagger}\,,\\ % % \tilde{r}_{su_R}' & = +\frac{1}{16\pi^2 \epsilon} \tilde{a}_u^\dagger \tilde{c}_u - \frac{2}{\pi \epsilon} \left(\frac{1 }{3} \alpha \tilde{a}_{s\widetilde{A}} + \alpha_s \tilde{a}_{s\widetilde{G}}\right) -\frac{2 e}{16 \pi^2 \epsilon} \tilde{c}_u^\dagger \tilde{a}_{uA} + \frac{1}{4\pi^2 \epsilon } \tilde{g}_3 \tilde{c}_u^\dagger \tilde{a}_{uG}\,,\\ % \tilde{r}_{sd_L}' & = -\frac{1}{16\pi^2 \epsilon} \tilde{a}_d \tilde{c}_d^{\dagger}+ \frac{2}{\pi \epsilon} \left(\frac{1 }{12} \alpha \tilde{a}_{s\widetilde{A}} + \alpha_s \tilde{a}_{s\widetilde{G}}\right) -\frac{ e}{16 \pi^2 \epsilon} \tilde{c}_d \tilde{a}_{dA}^{\dagger} + \frac{1}{4\pi^2 \epsilon } \tilde{g}_3 \tilde{c}_d\tilde{a}_{dG}^{\dagger}\,,\\ % % \tilde{r}_{sd_R}' & = +\frac{1}{16\pi^2 \epsilon} \tilde{a}_d^\dagger \tilde{c}_d - \frac{2}{\pi \epsilon} \left(\frac{1}{12} \alpha \tilde{a}_{s\widetilde{A}} + \alpha_s \tilde{a}_{s\widetilde{G}}\right) +\frac{ e}{16 \pi^2 \epsilon} \tilde{c}_d^\dagger \tilde{a}_{dA} + \frac{1}{4\pi^2 \epsilon } \tilde{g}_3 \tilde{c}_d^\dagger \tilde{a}_{dG}\,. % \end{align} % % \item $A(p_1)\to \overline{\psi^\alpha}(p_2) \psi^\beta (p_3)$ The corresponding diagrams are shown in Figs.~\ref{fig:a_ee}, \ref{fig:a_uu} and \ref{fig:a_dd} for $\psi = e,u,d$, respectively. 
This process fixes the divergences \begin{align} % \tilde{a}_{eA}' &= -\frac{e}{16\pi^2\epsilon}\left(e \tilde{a}_{eA} + 2\tilde{c}_e \tilde{a}_{s\widetilde{A}}\right)\,,\\ % \tilde{a}_{uA}' &= -\frac{ e}{12\pi^2\epsilon}\left(\frac{1}{3} e \tilde{a}_{uA} - \tilde{c}_u \tilde{a}_{s\widetilde{A}}\right) - \frac{ e}{18\pi^2 \epsilon} \tilde{g}_3 \tilde{a}_{u G}\,,\\ % \tilde{a}_{dA}' &= -\frac{e}{24\pi^2\epsilon}\left(\frac{1}{6} e \tilde{a}_{dA} + \tilde{c}_d \tilde{a}_{s\widetilde{A}}\right) + \frac{e}{36\pi^2 \epsilon} \tilde{g}_3 \tilde{a}_{d G}\,,\\ % \tilde{r}_{e\square}' &= \frac{3}{8\pi^2\epsilon} e \tilde{a}_{eA}\,, \label{resquare}\\ % \tilde{r}_{u\square}' &= -\frac{1}{4\pi^2\epsilon} e \tilde{a}_{uA} - \frac{1}{2\pi^2 \epsilon} \tilde{g}_3 \tilde{a}_{u G} \,,\label{rusquare}\\ % \tilde{r}_{d\square}' &= \frac{1}{8\pi^2\epsilon} e \tilde{a}_{dA} - \frac{1}{2\pi^2 \epsilon} \tilde{g}_3 \tilde{a}_{d G} \,, \label{rdsquare} % \end{align} % as well as a contribution to the fermion kinetic term, which we have provided fully (\textit{i.e.} including all contributions up to order $1/v$) above. % \item $G(p_1)\to \overline{\psi^\alpha}(p_2)\psi^\beta (p_3)$ The diagrams for $\psi = u,d$ are respectively shown in Figs.~\ref{fig:g_uu} and \ref{fig:g_dd}. As in the previous case, we obtain: % \begin{align} % \tilde{a}_{uG}' &= \frac{1}{8\pi^2\epsilon} \tilde{g}_3 \tilde{c}_u \tilde{a}_{s\widetilde{G}} +\frac{7}{6\pi \epsilon} \alpha_s \tilde{a}_{uG}\,,\\ % \tilde{a}_{dG}' &= \frac{1}{8\pi^2\epsilon} \tilde{g}_3 \tilde{c}_d \tilde{a}_{s\widetilde{G}} +\frac{7}{6\pi \epsilon} \alpha_s \tilde{a}_{dG} \,, % \end{align} % as well as a cross-check of the redundant operators above and of the contributions to the quark kinetic terms. \item $s(p_1) s(p_2) \to \overline{\psi^\alpha}(p_3)\psi^\beta (p_4)$ The diagrams for $\psi=e,u,d$ are given in Figs.~\ref{fig:ss_ee}, \ref{fig:ss_uu}, \ref{fig:ss_dd}.
We get: % \begin{align} % \tilde{a}_e' &= \bigg[ \frac{1 }{ \pi \epsilon} \alpha - \frac{\tilde{\lambda}}{32\pi^2\epsilon}\bigg] \tilde{a}_e + \frac{1}{16 \pi^2 \epsilon} \left(\tilde{c}_e \tilde{a}_e^{\dagger} \tilde{c}_e - 2 \tilde{a}_e \tilde{c}_e^{\dagger} \tilde{c}_e - 2 \tilde{c}_e \tilde{c}_e^{\dagger} \tilde{a}_e\right) \nonumber \\ & + \frac{3 e}{8\pi^2\epsilon} \left(\tilde{c}_e \tilde{c}_e^{\dagger} \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{\dagger} \tilde{c}_e\right) \,,\\ % \tilde{a}_u' &= \bigg[ \frac{4}{3 \pi \epsilon} \left( \frac{1}{3} \alpha + \alpha_s\right) - \frac{\tilde{\lambda}}{32\pi^2\epsilon}\bigg] \tilde{a}_u + \frac{1}{16 \pi^2 \epsilon} \left(\tilde{c}_u \tilde{a}_u^{\dagger} \tilde{c}_u - 2 \tilde{a}_u \tilde{c}_u^{\dagger} \tilde{c}_u - 2 \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_u\right) \nonumber \\ & - \frac{ e}{4\pi^2\epsilon} \left(\tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{\dagger} \tilde{c}_u\right) - \frac{\tilde{g}_3}{2\pi^2\epsilon} \left( \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_{uG}+ \tilde{a}_{uG} \tilde{c}_u^{\dagger} \tilde{c}_u \right) \,,\\ % \tilde{a}_d' &=\bigg[ \frac{1}{3 \pi \epsilon} \left(\frac{1}{3} \alpha + 4\alpha_s\right) - \frac{\tilde{\lambda}}{32\pi^2\epsilon}\bigg] \tilde{a}_d + \frac{1}{16 \pi^2 \epsilon} \left(\tilde{c}_d \tilde{a}_d^{\dagger} \tilde{c}_d - 2 \tilde{a}_d \tilde{c}_d^{\dagger} \tilde{c}_d - 2 \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_d\right) \nonumber \\ & + \frac{e}{8\pi^2\epsilon} \left(\tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{\dagger} \tilde{c}_d\right) - \frac{\tilde{g}_3}{2\pi^2\epsilon} \left( \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_{dG}+ \tilde{a}_{dG} \tilde{c}_d^{\dagger} \tilde{c}_d \right) \,. % \end{align} \item $s(p_1)\to V(p_2)V(p_3)$ The diagrams for $V=A, G$ are given in Figs.~\ref{fig:s_aa}, \ref{fig:s_gg}.
The corresponding divergences read: % \begin{align} % \tilde{a}_{s\tilde{A}}' &= \frac{e}{8\pi^2\epsilon} {\rm Tr}\bigg[- \left(\tilde{c}_e \tilde{a}_{eA}^\dagger + \tilde{c}_e^{\dagger} \tilde{a}_{eA} \right) - \left( \tilde{c}_d \tilde{a}_{dA}^\dagger + \tilde{c}_d^{\dagger} \tilde{a}_{dA} \right) + 2 \left(\tilde{c}_u \tilde{a}_{uA}^\dagger + \tilde{c}_u^{\dagger} \tilde{a}_{uA} \right) \bigg] \,,\\ % \tilde{a}_{s\tilde{G}}' & = \frac{\tilde{g}_3} {16\pi^2\epsilon} {\rm Tr} \left[ \tilde{c}_d \tilde{a}_{dG}^\dagger + \tilde{c}_d^{\dagger }\tilde{a}_{dG} + \tilde{c}_u \tilde{a}_{uG}^\dagger + \tilde{c}_u^{\dagger }\tilde{a}_{uG} \right] \,. % \end{align} % \end{itemize} \subsection{Eliminating redundancy} We can now go to the on-shell basis by using the redundancy relations in Eqs.~\eqref{red:psiBox}--\eqref{red:RspsiR}. The kinetic terms for fermions receive an extra contribution from the coefficients of the redundant operators, \begin{align} -\delta Z_{\psi_L} \to& -\delta Z_{\psi_L} -\frac{ \tilde{r}_{\psi \Box} \tilde{m}_\psi^\dagger +\tilde{m}_\psi \tilde{r}_{\psi \Box}^\dagger }{2}\,, \\ -\delta Z_{\psi_R} \to& -\delta Z_{\psi_R} -\frac{ \tilde{r}_{\psi \Box}^\dagger \tilde{m}_\psi +\tilde{m}_\psi^\dagger \tilde{r}_{\psi \Box} }{2}\,. \end{align} Upon substituting the values in Eqs.~\eqref{ZeL:offshell}--\eqref{ZuR:offshell} and Eqs.~\eqref{resquare}--\eqref{rdsquare}, we find that the contributions of dimension-five operators to the fermion kinetic terms precisely cancel in the on-shell basis.
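This cancellation can be verified numerically, e.g. for the electron: the $1/\epsilon$ coefficient of the dimension-five piece of $Z_{e_L}$ combines with the shift induced by $\tilde{r}_{e\Box}'$ to give zero. A sketch with random illustrative matrices (we set $\tilde{e}=e$, which holds at this order; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, e = 3, 0.31  # number of flavours and an illustrative gauge coupling
m_e  = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
a_eA = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# 1/eps coefficient of the dimension-five piece of delta Z_{e_L}:
dZ_L5 = -3 * e / (16 * np.pi**2) * (m_e @ a_eA.conj().T + a_eA @ m_e.conj().T)
# 1/eps coefficient of the redundant-operator coefficient r_{e box}':
r_box = 3 * e / (8 * np.pi**2) * a_eA
# shift of -delta Z_{e_L} induced by eliminating r_{e box}:
shift = -(r_box @ m_e.conj().T + m_e @ r_box.conj().T) / 2
# the dimension-five contributions to the kinetic term cancel:
assert np.allclose(-dZ_L5 + shift, 0)
```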
The resulting wave function renormalization factors then read in the on-shell basis: \begin{align} & Z_{e_L} = 1 - \frac{\alpha}{4 \pi \epsilon} - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_e \tilde{c}_e^{\dagger} \right) \,, \label{ZeL:onshell} \\ & Z_{e_R} = 1 - \frac{\alpha}{4 \pi \epsilon} - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_e^{\dagger} \tilde{c}_e \right) \,, \label{ZeR:onshell}\\ & Z_{d_{L}} = 1 - \frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{12} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_d \tilde{c}_d^{\dagger} \right) \,, \label{ZdL:onshell}\\ & Z_{d_{R}} = 1 - \frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{12} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left( \tilde{c}_d^{\dagger} \tilde{c}_d \right) \,, \label{ZdR:onshell}\\ & Z_{u_{L}} = 1 -\frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{3} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left(\tilde{c}_u \tilde{c}_u^{\dagger} \right) \,, \label{ZuL:onshell}\\ & Z_{u_{R}} = 1 -\frac{1}{3 \pi \epsilon}\bigg[ \frac{1}{3} \alpha + \alpha_s \bigg] - \frac{1}{32\pi^2\epsilon} \left( \tilde{c}_u^{\dagger}\tilde{c}_u \right) \,, \label{ZuR:onshell}\\ & Z_{A} = 1 - \frac{\alpha}{3 \pi \epsilon} \left[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u \right]\nonumber \\ & ~~~~~~~-\frac{e}{2\pi^2\epsilon} {\rm Tr} \left[ - (\tilde{a}_{eA}^\dagger m_e + m_e^\dagger \tilde{a}_{eA}) + 2 (\tilde{a}_{uA}^\dagger m_u + m_u^\dagger \tilde{a}_{uA}) - (\tilde{a}_{dA}^\dagger m_d + m_d^\dagger \tilde{a}_{dA})\right]\,, \\ & Z_{G} = 1 + \frac{\alpha_s}{4\pi \epsilon} \left[11 - \frac{2}{3} (n_u+n_d) \right] - \frac{\tilde{g}_3}{4\pi^2\epsilon} {\rm Tr} \left[ \tilde{a}_{dG}^\dagger m_d + m_d^\dagger \tilde{a}_{dG} + \tilde{a}_{uG}^\dagger m_u + m_u^\dagger \tilde{a}_{uG} \right]\,, \\ & Z_s = 1 - \frac{1}{8\pi^2 \epsilon} {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right]\,. 
\label{Zs:onshell} \end{align} We also have a contribution to the renormalizable coefficients from redundant ones: \begin{equation} \tilde{c}_\psi \to \tilde{c}_\psi + \frac{ \tilde{m}_\psi \tilde{r}_{\psi \Box}^\dagger \tilde{c}_\psi +\tilde{c}_\psi \tilde{r}_{\psi \Box}^\dagger \tilde{m}_\psi }{2} +\tilde{r}_{s\psi_L} \tilde{m}_\psi -\tilde{m}_{\psi} \tilde{r}_{s\psi_R}^\dagger \,, \end{equation} which results in the following values: \begin{align} \tilde{c}_e' &= \frac{\tilde{c}_e}{4\pi^2 \epsilon} \tilde{e}^2 - \frac{1}{16\pi^2 \epsilon} \tilde{c}_e \tilde{c}_e^{\dagger} \tilde{c}_e + \frac{3}{4\pi^2\epsilon}\tilde{e}^2 \tilde{a}_{s\tilde{A}} \tilde{m}_e \nonumber \\ &- \frac{1}{16 \pi^2 \epsilon} \bigg[ \tilde{a}_e \left( \tilde{c}_e^\dagger \tilde{m}_e - 2 \tilde{m}_e^\dagger \tilde{c}_e\right) + \left( \tilde{m}_e \tilde{c}_e^\dagger - 2 \tilde{c}_e \tilde{m}_e^\dagger\right) \tilde{a}_e\bigg] \nonumber \\ &+ \frac{3 \tilde{e}}{8\pi^2 \epsilon} \bigg[ \tilde{m}_e\tilde{c}_e^{\dagger} \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{\dagger} \tilde{m}_e - \tilde{c}_e \tilde{m}_e^\dagger \tilde{a}_{eA} - \tilde{a}_{eA} \tilde{m}_e^\dagger \tilde{c}_e \bigg] \,, \\ \tilde{c}_u' &=\frac{\tilde{c}_u}{3\pi^2 \epsilon} \left(\frac{1}{3} \tilde{e}^2 + \tilde{g}_3^2\right) - \frac{1}{16\pi^2 \epsilon} \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{c}_u + \frac{1}{\pi^2\epsilon}\bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{m}_u \nonumber \\ &- \frac{1}{16 \pi^2 \epsilon} \bigg[ \tilde{a}_u \left( \tilde{c}_u^\dagger \tilde{m}_u - 2 \tilde{m}_u^\dagger \tilde{c}_u\right) + \left( \tilde{m}_u \tilde{c}_u^\dagger - 2 \tilde{c}_u \tilde{m}_u^\dagger\right) \tilde{a}_u\bigg] \nonumber \\ &- \frac{\tilde{e}}{4 \pi^2 \epsilon} \bigg[\tilde{m}_u\tilde{c}_u^{\dagger} \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{\dagger} \tilde{m}_u - \tilde{c}_u \tilde{m}_u^\dagger \tilde{a}_{uA} - \tilde{a}_{uA} \tilde{m}_u^\dagger \tilde{c}_u \bigg]
\nonumber \\ &-\frac{\tilde{g}_3}{2 \pi^2 \epsilon} \bigg[ \tilde{m}_u\tilde{c}_u^{\dagger} \tilde{a}_{uG} + \tilde{a}_{uG} \tilde{c}_u^{\dagger} \tilde{m}_u - \tilde{c}_u \tilde{m}_u^\dagger \tilde{a}_{uG} - \tilde{a}_{uG} \tilde{m}_u^\dagger \tilde{c}_u \bigg] \,, \\ \tilde{c}_d' &=\frac{\tilde{c}_d}{12\pi^2 \epsilon} \left(\frac{1}{3} \tilde{e}^2 + 4 \tilde{g}_3^2\right) - \frac{1}{16\pi^2 \epsilon} \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{c}_d + \frac{1}{4\pi^2\epsilon}\bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + 4 \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{m}_d \nonumber \\ &- \frac{1}{16 \pi^2 \epsilon} \bigg[ \tilde{a}_d \left( \tilde{c}_d^\dagger \tilde{m}_d - 2 \tilde{m}_d^\dagger \tilde{c}_d\right) + \left( \tilde{m}_d \tilde{c}_d^\dagger - 2 \tilde{c}_d \tilde{m}_d^\dagger\right) \tilde{a}_d\bigg] \nonumber \\ &+ \frac{ \tilde{e}}{8 \pi^2 \epsilon} \bigg[\tilde{m}_d\tilde{c}_d^{\dagger} \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{\dagger} \tilde{m}_d - \tilde{c}_d \tilde{m}_d^\dagger \tilde{a}_{dA} - \tilde{a}_{dA} \tilde{m}_d^\dagger \tilde{c}_d \bigg] \nonumber \\ &-\frac{\tilde{g}_3}{2 \pi^2 \epsilon} \bigg[ \tilde{m}_d\tilde{c}_d^{\dagger} \tilde{a}_{dG} + \tilde{a}_{dG} \tilde{c}_d^{\dagger} \tilde{m}_d - \tilde{c}_d \tilde{m}_d^\dagger \tilde{a}_{dG} - \tilde{a}_{dG} \tilde{m}_d^\dagger \tilde{c}_d \bigg] \,.
\end{align} Finally, the redundancies imply the following relations for the coefficients of non-renormalizable operators: \begin{align} \tilde{a}_\psi \to&\, \tilde{a}_\psi + \tilde{c}_\psi \tilde{r}_{\psi \Box}^\dagger \tilde{c}_\psi +\tilde{r}_{s\psi_L} \tilde{c}_\psi -\tilde{c}_\psi \tilde{r}_{s\psi_R}^\dagger\,, \\ \tilde{a}_{\psi A} \to&\, \tilde{a}_{\psi A} +\frac{\tilde{e} Q_\psi \tilde{r}_{\psi \Box}}{2}\,, \\ \tilde{a}_{\psi G} \to&\, \tilde{a}_{\psi G} +\frac{\tilde{g}_3 \tilde{r}_{\psi \Box}}{2}\,, \end{align} resulting in the following on-shell non-renormalizable divergences: \begin{align} \tilde{a}_u' &= \bigg[ \frac{1}{9\pi^2 \epsilon} \tilde{e}^2 + \frac{1}{3\pi^2 \epsilon}\tilde{g}_3^2 - \frac{\tilde{\lambda}}{32\pi^2\epsilon}\bigg] \tilde{a}_u +\frac{1}{\pi^2\epsilon}\bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{c}_u \nonumber \\ &- \frac{1}{16 \pi^2 \epsilon} \bigg[ - \tilde{c}_u \tilde{a}_u^{\dagger} \tilde{c}_u + 3 \tilde{a}_u \tilde{c}_u^{\dagger} \tilde{c}_u + 3 \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_u\bigg] - \frac{1}{4\pi^2\epsilon}\tilde{e} \bigg[ \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{\dagger} \tilde{c}_u\bigg] \nonumber \\ & - \frac{\tilde{g}_3}{2\pi^2 \epsilon }\bigg[ \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_{uG} + \tilde{a}_{uG} \tilde{c}_u^{\dagger} \tilde{c}_u\bigg] \,, \\ \tilde{a}_d' &= \bigg[ \frac{1}{36\pi^2 \epsilon} \tilde{e}^2 + \frac{1}{3\pi^2 \epsilon}\tilde{g}_3^2 - \frac{\tilde{\lambda}}{32\pi^2\epsilon}\bigg] \tilde{a}_d +\frac{1}{4\pi^2\epsilon}\bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + 4 \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{c}_d \nonumber \\ &- \frac{1}{16 \pi^2 \epsilon} \bigg[ - \tilde{c}_d \tilde{a}_d^{\dagger} \tilde{c}_d + 3 \tilde{a}_d \tilde{c}_d^{\dagger} \tilde{c}_d + 3 \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_d\bigg] + \frac{1}{8\pi^2\epsilon} \tilde{e} \bigg[ \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{\dagger} \tilde{c}_d\bigg] \nonumber \\ & - \frac{\tilde{g}_3}{2\pi^2 \epsilon }\bigg[ \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_{dG} + \tilde{a}_{dG} \tilde{c}_d^{\dagger} \tilde{c}_d\bigg] \,, \\ \tilde{a}_e' &= \bigg[ \frac{1}{4\pi^2 \epsilon} \tilde{e}^2 - \frac{\tilde{\lambda}}{32\pi^2\epsilon}\bigg] \tilde{a}_e +\frac{3}{4\pi^2\epsilon}\tilde{e}^2 \tilde{a}_{s\tilde{A}} \tilde{c}_e\nonumber \\ &- \frac{1}{16 \pi^2 \epsilon} \bigg[ - \tilde{c}_e \tilde{a}_e^{\dagger} \tilde{c}_e + 3 \tilde{a}_e \tilde{c}_e^{\dagger} \tilde{c}_e + 3 \tilde{c}_e \tilde{c}_e^{\dagger} \tilde{a}_e\bigg] + \frac{3}{8\pi^2\epsilon} \tilde{e} \bigg[ \tilde{c}_e \tilde{c}_e^{\dagger} \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{\dagger} \tilde{c}_e\bigg] \,, \\% \tilde{a}_{eA}' &= -\frac{ \tilde{e}}{8\pi^2\epsilon}\left[ 2 \tilde{e} \tilde{a}_{eA} + \tilde{c}_e \tilde{a}_{s\tilde{A}}\right] \,, \\ \tilde{a}_{uA}' &= -\frac{ \tilde{e}}{12\pi^2\epsilon}\left[\frac{4}{3} \tilde{e} \tilde{a}_{uA} - \tilde{c}_u \tilde{a}_{s\tilde{A}}\right] - \frac{2 \tilde{e} }{9\pi^2 \epsilon} \tilde{g}_3 \tilde{a}_{uG} \,, \\ \tilde{a}_{dA}' &= -\frac{ \tilde{e}}{24\pi^2\epsilon}\left[\frac{2}{3} \tilde{e} \tilde{a}_{dA} + \tilde{c}_d \tilde{a}_{s\tilde{A}}\right] + \frac{ \tilde{e} }{9\pi^2 \epsilon} \tilde{g}_3 \tilde{a}_{dG} \,, \\ \tilde{a}_{uG}' &= -\frac{\tilde{g}_3}{8\pi^2\epsilon}\left[ \frac{4}{3}\tilde{e} \tilde{a}_{uA} -\tilde{c}_u \tilde{a}_{s\tilde{G}} \right] + \frac{1}{24\pi^2\epsilon} \tilde{g}_3^2 \tilde{a}_{uG} \,, \\ \tilde{a}_{dG}' &= \frac{\tilde{g}_3}{8\pi^2\epsilon}\left[ \frac{2}{3}\tilde{e} \tilde{a}_{dA} +\tilde{c}_d \tilde{a}_{s\tilde{G}} \right] + \frac{1}{24\pi^2\epsilon} \tilde{g}_3^2 \tilde{a}_{dG} \,.
\end{align} \subsection{Anomalous dimensions} Once we have parametrised all the relevant divergences in the on-shell basis, we can obtain the beta functions of the different parameters following the standard procedure outlined in Section~\ref{sec:rges}. We start by reporting the beta functions for the renormalizable couplings, which read: \begin{align}\label{eq:dim4runLEFT} % \beta_{\tilde{e}} &= \frac{4}{3}\left[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u \right] \tilde{e}^3 \\ & \hd{ +8 \tilde{e}^2 {\rm Tr} \bigg[- (\tilde{a}_{eA}^\dagger \tilde{m}_e + \tilde{m}_e^\dagger \tilde{a}_{eA}) + 2 (\tilde{a}_{uA}^\dagger \tilde{m}_u + \tilde{m}_u^\dagger \tilde{a}_{uA}) - (\tilde{a}_{dA}^\dagger \tilde{m}_d + \tilde{m}_d^\dagger \tilde{a}_{dA})\bigg]}\nonumber\,, \label{eq:erunLEFT} % \end{align} \begin{align} \beta_{\tilde{g}_3} &= \left[-11+\frac{2}{3}(n_u+n_d)\right]\tilde{g}_3^3 \hd{+ 4 \tilde{g}_3^2 {\rm Tr}\left[\tilde{a}_{dG}^\dagger \tilde{m}_d + \tilde{m}_d^\dagger \tilde{a}_{dG} + \tilde{a}_{uG}^\dagger \tilde{m}_u + \tilde{m}_u^\dagger \tilde{a}_{uG}\right]}\,, % \end{align} \begin{align}\nonumber \beta_{\tilde{m}_e} = & -6\tilde{e}^2 \tilde{m}_e + \frac{1}{2}(\tilde{m}_e \tilde{c}_e^{\dagger}\tilde{c}_e + \tilde{c}_e \tilde{c}_e^{\dagger} \tilde{m}_e + 4 \tilde{c}_e\tilde{m}_e^\dagger\tilde{c}_e)\\ &+ \text{Tr}(\tilde{c}_e\tilde{m}_e^\dagger + \tilde{c}_e^{\dagger} \tilde{m}_e + 3 \tilde{c}_u \tilde{m}_u^\dagger + 3\tilde{c}_u^{\dagger} \tilde{m}_u + 3\tilde{m}_d\tilde{c}_d^{\dagger} + 3 \tilde{m}_d^\dagger \tilde{c}_d)\tilde{c}_e \nonumber \\ & \hd{+12 \tilde{e} \left(\tilde{m}_e \tilde{m}_e ^\dagger \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{m}_e^\dagger \tilde{m}_e\right) - 2 \tilde{m}^2 \tilde{a}_e} \,, % \end{align} % \begin{align}\nonumber \beta_{\tilde{m}_u} = & -8 \tilde{g}_3^2 \tilde{m}_u - \frac{8}{3}\tilde{e}^2\tilde{m}_u + \frac{1}{2}(\tilde{m}_u\tilde{c}_u^{\dagger}\tilde{c}_u +\tilde{c}_u\tilde{c}_u^{\dagger}\tilde{m}_u +
4\tilde{c}_u\tilde{m}_u^\dagger\tilde{c}_u)\\ % &+ \text{Tr}(\tilde{c}_e\tilde{m}_e^\dagger + \tilde{c}_e^{\dagger} \tilde{m}_e + 3 \tilde{c}_u \tilde{m}_u^\dagger + 3\tilde{c}_u^{\dagger} \tilde{m}_u + 3\tilde{m}_d\tilde{c}_d^{\dagger} + 3 \tilde{m}_d^\dagger \tilde{c}_d)\tilde{c}_u \nonumber \\ &\hd{ -8 \tilde{e} \left(\tilde{m}_u \tilde{m}_u ^\dagger \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{m}_u^\dagger \tilde{m}_u\right) -16 \tilde{g}_3 \left( \tilde{m}_u \tilde{m}_u^\dagger \tilde{a}_{uG} + \tilde{a}_{uG} \tilde{m}_u^\dagger \tilde{m}_u \right) - 2 \tilde{m}^2 \tilde{a}_u } \,, % \end{align} % \begin{align}\nonumber \beta_{\tilde{m}_d} = & -8\tilde{g}_3^2\tilde{m}_d-\frac{2}{3}\tilde{e}^2\tilde{m}_d + \frac{1}{2}(\tilde{m}_d\tilde{c}_d^{\dagger}\tilde{c}_d +\tilde{c}_d\tilde{c}_d^{\dagger}\tilde{m}_d + 4\tilde{c}_d\tilde{m}_d^\dagger\tilde{c}_d)\\ % &+ \text{Tr}(\tilde{c}_e\tilde{m}_e^\dagger + \tilde{c}_e^{\dagger} \tilde{m}_e + 3 \tilde{c}_u \tilde{m}_u^\dagger + 3\tilde{c}_u^{\dagger} \tilde{m}_u + 3\tilde{m}_d\tilde{c}_d^{\dagger} + 3 \tilde{m}_d^\dagger \tilde{c}_d)\tilde{c}_d \nonumber \\ &\hd{ +4 \tilde{e} \left(\tilde{m}_d \tilde{m}_d ^\dagger \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{m}_d^\dagger \tilde{m}_d\right) -16 \tilde{g}_3 \left( \tilde{m}_d \tilde{m}_d^\dagger \tilde{a}_{dG} + \tilde{a}_{dG} \tilde{m}_d^\dagger \tilde{m}_d \right) - 2 \tilde{m}^2 \tilde{a}_d }\,, % \end{align} \begin{align}\nonumber % \beta_{\tilde{m}^2} &= \tilde{\lambda}_s \tilde{m}^2 +4 \tilde{m}^2 \text{Tr}(\tilde{c}_e \tilde{c}_e^{\dagger}) + 12\tilde{m}^2\text{Tr}(\tilde{c}_d\tilde{c}_d^{\dagger}) + 12\tilde{m}^2\text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger})\\\nonumber % &-24\text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger}\tilde{m}_u\tilde{m}_u^\dagger +\tilde{c}_d\tilde{c}_d^{\dagger}\tilde{m}_d\tilde{m}_d^\dagger) - 18\text{Tr}(\tilde{c}_u^{\dagger}\tilde{c}_u\tilde{m}_u^\dagger\tilde{m}_u +\tilde{c}_d^{\dagger}\tilde{c}_d\tilde{m}_d^\dagger\tilde{m}_d)\\\nonumber %
&-12\text{Tr}(\tilde{c}_u\tilde{m}_u^\dagger\tilde{c}_u\tilde{m}_u^\dagger + \tilde{c}_u^{\dagger}\tilde{m}_u\tilde{c}_u^{\dagger}\tilde{m}_u + \tilde{c}_d\tilde{m}_d^\dagger\tilde{c}_d\tilde{m}_d^\dagger + \tilde{c}_d^{\dagger}\tilde{m}_d\tilde{c}_d^{\dagger}\tilde{m}_d)\\\nonumber % &-6\text{Tr}(\tilde{c}_u\tilde{m}_u^\dagger\tilde{m}_u\tilde{c}_u^{\dagger} + \tilde{m}_d\tilde{c}_d^{\dagger}\tilde{c}_d\tilde{m}_d^\dagger + \tilde{c}_e^{\dagger}\tilde{c}_e\tilde{m}_e^\dagger\tilde{m}_e)\\ % &-2\text{Tr}(4\tilde{c}_e\tilde{c}_e^{\dagger}\tilde{m}_e\tilde{m}_e^\dagger +2\tilde{c}_e\tilde{m}_e^\dagger\tilde{c}_e\tilde{m}_e^\dagger +2\tilde{c}_e^\dagger\tilde{m}_e\tilde{c}_e^\dagger\tilde{m}_e+\tilde{c}_e\tilde{m}_e^\dagger\tilde{m}_e\tilde{c}_e^\dagger) \nonumber \\ &+ \hd{ 8 \bigg[3 {\rm Tr} \left(\tilde{m}_d^\dagger \tilde{a}_d \tilde{m}_d^\dagger \tilde{m}_d + \tilde{m}_d^\dagger \tilde{m}_d \tilde{a}_d^{\dagger} \tilde{m}_d \right) + 3 {\rm Tr} \left(\tilde{m}_u^\dagger \tilde{a}_u \tilde{m}_u^\dagger \tilde{m}_u + \tilde{m}_u^\dagger \tilde{m}_u \tilde{a}_u^{\dagger} \tilde{m}_u \right) } \nonumber \\ & \hd{ + {\rm Tr} \left(\tilde{m}_e^\dagger \tilde{a}_e \tilde{m}_e^\dagger \tilde{m}_e + \tilde{m}_e^\dagger \tilde{m}_e \tilde{a}_e^{\dagger} \tilde{m}_e \right)\bigg]} \,, % \end{align} % \begin{align}\nonumber % \beta_{\tilde{\lambda}_s} &= 3\tilde{\lambda}_s^2 -144\text{Tr}(\tilde{c}_d\tilde{c}_d^{\dagger} \tilde{c}_d\tilde{c}_d^{\dagger}) -144\text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger} \tilde{c}_u\tilde{c}_u^{\dagger}) -48\text{Tr}(\tilde{c}_e\tilde{c}_e^{\dagger} \tilde{c}_e\tilde{c}_e^{\dagger})\\ % &+24\tilde{\lambda}_s \text{Tr}(\tilde{c}_d\tilde{c}_d^{\dagger}) + 24\tilde{\lambda}_s \text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger}) +8\tilde{\lambda}_s \text{Tr}(\tilde{c}_e\tilde{c}_e^{\dagger}) + \hd{96 \bigg[{\rm Tr}^{\tilde{\lambda}}_e + 3\left({\rm Tr}^{\tilde{\lambda}}_u+{\rm Tr}^{\tilde{\lambda}}_d\right)\bigg] } \,, % \end{align} \begin{align} \beta_{\tilde{c}_u}
&= -\frac{8}{3}(\tilde{e}^2 + 3\tilde{g}_3^2) \tilde{c}_u +3\tilde{c}_u\tilde{c}_u^{\dagger}\tilde{c}_u + 2\bigg[\text{Tr}(\tilde{c}_e\tilde{c}_e^{\dagger}) + 3\text{Tr}(\tilde{c}_d\tilde{c}_d^{\dagger}) +3\text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger})\bigg]\tilde{c}_u\nonumber\\ &\hd{- 32\bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{m}_u + 2\bigg[ \tilde{a}_u \left( \tilde{c}_u^\dagger \tilde{m}_u - 2 \tilde{m}_u^\dagger \tilde{c}_u\right) + \left( \tilde{m}_u \tilde{c}_u^\dagger - 2 \tilde{c}_u \tilde{m}_u^\dagger\right) \tilde{a}_u\bigg]} \nonumber\\ & \hd{+ 8 \tilde{e} \bigg[\tilde{m}_u\tilde{c}_u^{\dagger} \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{\dagger} \tilde{m}_u - \tilde{c}_u \tilde{m}_u^\dagger \tilde{a}_{uA} - \tilde{a}_{uA} \tilde{m}_u^\dagger \tilde{c}_u \bigg] }\nonumber \\ & \hd{+16 \tilde{g}_3\bigg[ \tilde{m}_u\tilde{c}_u^{\dagger} \tilde{a}_{uG} + \tilde{a}_{uG} \tilde{c}_u^{\dagger} \tilde{m}_u - \tilde{c}_u \tilde{m}_u^\dagger \tilde{a}_{uG} - \tilde{a}_{uG} \tilde{m}_u^\dagger \tilde{c}_u \bigg] } \,, % \end{align} \begin{align} \beta_{\tilde{c}_d} &= -\frac{2}{3}(\tilde{e}^2 + 12\tilde{g}_3^2) \tilde{c}_d +3\tilde{c}_d\tilde{c}_d^{\dagger}\tilde{c}_d + 2\bigg[\text{Tr}(\tilde{c}_e\tilde{c}_e^{\dagger}) + 3\text{Tr}(\tilde{c}_d\tilde{c}_d^{\dagger}) +3\text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger})\bigg]\tilde{c}_d\nonumber\\ &\hd{- 8\bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + 4 \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{m}_d + 2\bigg[ \tilde{a}_d \left( \tilde{c}_d^\dagger \tilde{m}_d - 2 \tilde{m}_d^\dagger \tilde{c}_d\right) + \left( \tilde{m}_d \tilde{c}_d^\dagger - 2 \tilde{c}_d \tilde{m}_d^\dagger\right) \tilde{a}_d\bigg]} \nonumber\\ & \hd{-4 \tilde{e} \bigg[\tilde{m}_d\tilde{c}_d^{\dagger} \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{\dagger} \tilde{m}_d - \tilde{c}_d \tilde{m}_d^\dagger \tilde{a}_{dA} - \tilde{a}_{dA} \tilde{m}_d^\dagger \tilde{c}_d \bigg]
}\nonumber \\ & \hd{+16 \tilde{g}_3\bigg[ \tilde{m}_d\tilde{c}_d^{\dagger} \tilde{a}_{dG} + \tilde{a}_{dG} \tilde{c}_d^{\dagger} \tilde{m}_d - \tilde{c}_d \tilde{m}_d^\dagger \tilde{a}_{dG} - \tilde{a}_{dG} \tilde{m}_d^\dagger \tilde{c}_d \bigg] } \,, % \end{align} \begin{align} \beta_{\tilde{c}_e} &= -6\tilde{e}^2 \tilde{c}_e +3\tilde{c}_e\tilde{c}_e^{\dagger}\tilde{c}_e + 2\bigg[\text{Tr}(\tilde{c}_e\tilde{c}_e^{\dagger}) + 6\text{Tr}(\tilde{c}_d\tilde{c}_d^{\dagger}) +6\text{Tr}(\tilde{c}_u\tilde{c}_u^{\dagger})\bigg]\tilde{c}_e\nonumber\\ &\hd{- 8\bigg[3\tilde{e}^2 \tilde{a}_{s\tilde{A}} \bigg] \tilde{m}_e + 2\bigg[ \tilde{a}_e \left( \tilde{c}_e^\dagger \tilde{m}_e - 2 \tilde{m}_e^\dagger \tilde{c}_e\right) + \left( \tilde{m}_e \tilde{c}_e^\dagger - 2 \tilde{c}_e \tilde{m}_e^\dagger\right) \tilde{a}_e\bigg]} \nonumber\\ & \hd{-12 \tilde{e} \bigg[\tilde{m}_e\tilde{c}_e^{\dagger} \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{\dagger} \tilde{m}_e - \tilde{c}_e \tilde{m}_e^\dagger \tilde{a}_{eA} - \tilde{a}_{eA} \tilde{m}_e^\dagger \tilde{c}_e \bigg] } \,; \label{eq:cerunLEFT} % \end{align} where the contributions from the effective operators are apparent from the presence of the corresponding Wilson coefficients, and we have used \texttt{Pyr@te}~\cite{Lyonnet:2016xiz} with manual cross-checks to compute the part of the beta functions that depend only on renormalizable couplings. As a byproduct of this work, we have reproduced the anomalous dimensions of purely SM operators to dimension five given in Ref.~\cite{Jenkins:2017dyc}. One can also trivially reproduce the $\log{(m_W/m_\psi)}$ piece of the ALP-fermion-fermion couplings induced by ALP-vector-vector ones in Eqs. 3.15 and 3.20 of Ref.~\cite{Bauer:2017ris}. 
In the case of the non-renormalizable Wilson coefficients, the beta functions read: \begin{align} % \beta_{\tilde{a}_u} &=\bigg[-\frac{8}{3} \tilde{e}^2 - 8 \tilde{g}_3^2 + \tilde{\lambda}\bigg] \tilde{a}_u -32 \bigg[\frac{1}{3}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{c}_u \nonumber \\ & + 2 \bigg[ - \tilde{c}_u \tilde{a}_u^{\dagger} \tilde{c}_u + \frac{13}{4} \tilde{a}_u \tilde{c}_u^{\dagger} \tilde{c}_u + \frac{13}{4} \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_u\bigg] + 4 {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right] \tilde{a}_u\nonumber \\ & + 8\tilde{e}\bigg[ \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{\dagger} \tilde{c}_u\bigg] + 16 \tilde{g}_3\bigg[ \tilde{c}_u \tilde{c}_u^{\dagger} \tilde{a}_{uG} + \tilde{a}_{uG} \tilde{c}_u^{\dagger} \tilde{c}_u\bigg] ~,\\ % \beta_{\tilde{a}_d} &=\bigg[-\frac{2}{3} \tilde{e}^2 - 8 \tilde{g}_3^2 + \tilde{\lambda}\bigg] \tilde{a}_d -32 \bigg[\frac{1}{12}\tilde{e}^2 \tilde{a}_{s\tilde{A}} + \tilde{g}_3^2 \tilde{a}_{s\tilde{G}}\bigg] \tilde{c}_d \nonumber \\ & + 2 \bigg[ - \tilde{c}_d \tilde{a}_d^{\dagger} \tilde{c}_d + \frac{13}{4} \tilde{a}_d \tilde{c}_d^{\dagger} \tilde{c}_d + \frac{13}{4} \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_d\bigg] + 4 {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right] \tilde{a}_d \nonumber \\ & -4 \tilde{e}\bigg[ \tilde{c}_d \tilde{c}_d^{\dagger} \tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{\dagger} \tilde{c}_d\bigg] + 16 \tilde{g}_3\bigg[ \tilde{c}_d\tilde{c}_d^{\dagger} \tilde{a}_{dG} + \tilde{a}_{dG} \tilde{c}_d^{\dagger} \tilde{c}_d\bigg] ~,\\ % \beta_{\tilde{a}_e} &=\bigg[-6 \tilde{e}^2 + \tilde{\lambda}\bigg] \tilde{a}_e -24\tilde{e}^2 \tilde{a}_{s\tilde{A}} \tilde{c}_e - 12 \tilde{e} \bigg[ \tilde{c}_e
\tilde{c}_e^{\dagger} \tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{\dagger} \tilde{c}_e\bigg] \nonumber \\ & + 2 \bigg[ - \tilde{c}_e \tilde{a}_e^{\dagger} \tilde{c}_e + \frac{13}{4} \tilde{a}_e \tilde{c}_e^{\dagger} \tilde{c}_e + \frac{13}{4} \tilde{c}_e \tilde{c}_e^{\dagger} \tilde{a}_e\bigg] + 4 {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right] \tilde{a}_e \,,\\ % \end{align} % \begin{align} \beta_{\tilde{a}_{s\widetilde{A}}} &=4\tilde{e} {\rm Tr}\bigg[ \left(\tilde{c}_e \tilde{a}_{eA}^\dagger + \tilde{c}_e^{\dagger} \tilde{a}_{eA} \right) + \left( \tilde{c}_d \tilde{a}_{dA}^\dagger + \tilde{c}_d^{\dagger} \tilde{a}_{dA} \right) - 2 \left(\tilde{c}_u \tilde{a}_{uA}^\dagger + \tilde{c}_u^{\dagger} \tilde{a}_{uA} \right) \bigg] \nonumber \\ & + 2 {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right] \tilde{a}_{s\tilde{A}} + \frac{8}{3}\tilde{e}^2 \bigg[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u \bigg]\tilde{a}_{s\tilde{A}} \,,\\ % \beta_{\tilde{a}_{s\widetilde{G}}} &=- 2\tilde{g}_3{\rm Tr} \left[ \tilde{c}_d \tilde{a}_{dG}^\dagger + \tilde{c}_d^{\dagger }\tilde{a}_{dG} + \tilde{c}_u \tilde{a}_{uG}^\dagger + \tilde{c}_u^{\dagger }\tilde{a}_{uG} \right] + 2 {\rm Tr} \left[ \tilde{c}_e \tilde{c}_e^{\dagger} + 3 \left( \tilde{c}_d \tilde{c}_d^{\dagger} + \tilde{c}_u \tilde{c}_u^{\dagger} \right)\right] \tilde{a}_{s\tilde{G}} \nonumber \\ & + 2 \tilde{g}_3^2 \left[\frac{2}{3} (n_u+n_d) -11\right] \tilde{a}_{s\tilde{G}} \,,\\ % \beta_{\tilde{a}_{eA}} &= 10 \tilde{e}^2 \tilde{a}_{eA} + 4 \tilde{e} \tilde{c}_e \tilde{a}_{s\tilde{A}}+ \frac{1}{2} \left( \tilde{c}_e \tilde{c}_e^{\dagger}\tilde{a}_{eA} + \tilde{a}_{eA} \tilde{c}_e^{\dagger} \tilde{c}_e \right) \nonumber \\ &+ \frac{4}{3}\tilde{e}^2 \bigg[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u\bigg] \tilde{a}_{eA}\,, \\ % 
\beta_{\tilde{a}_{uA}} &= \frac{40}{9} \tilde{e}^2 \tilde{a}_{uA} - \frac{8}{3}\tilde{e} \tilde{c}_u \tilde{a}_{s\tilde{A}} + \frac{64}{9} \tilde{e} \tilde{g}_3 \tilde{a}_{uG} + \frac{8}{3} \tilde{g}_3^2 \tilde{a}_{uA} + \frac{1}{2} \left( \tilde{c}_u \tilde{c}_u^{\dagger}\tilde{a}_{uA} + \tilde{a}_{uA} \tilde{c}_u^{\dagger} \tilde{c}_u \right) \nonumber \\ &+\frac{4}{3}\tilde{e}^2 \bigg[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u\bigg] \tilde{a}_{uA}\,, \\ % \beta_{\tilde{a}_{dA}} &= \frac{10}{9}\tilde{e}^2 \tilde{a}_{dA} + \frac{4}{3} \tilde{e} \tilde{c}_d \tilde{a}_{s\tilde{A}} - \frac{32}{9} \tilde{e} \tilde{g}_3 \tilde{a}_{dG} + \frac{8}{3} \tilde{g}_3^2 \tilde{a}_{dA} + \frac{1}{2} \left( \tilde{c}_d \tilde{c}_d^{\dagger}\tilde{a}_{dA} + \tilde{a}_{dA} \tilde{c}_d^{\dagger} \tilde{c}_d \right) \nonumber \\ & +\frac{4}{3}\tilde{e}^2 \bigg[ n_\ell + \frac{1}{3} n_d + \frac{4}{3} n_u\bigg]\tilde{a}_{dA}\,, \\ % \beta_{\tilde{a}_{uG}} &= \frac{16}{3} \tilde{g}_3 \tilde{e} \tilde{a}_{uA} - 4\tilde{g}_3\tilde{c}_u \tilde{a}_{s\tilde{G}} + \frac{8}{9} \tilde{e}^2 \tilde{a}_{uG} + \frac{1}{2} \left( \tilde{c}_u \tilde{c}_u^{\dagger}\tilde{a}_{uG} + \tilde{a}_{uG} \tilde{c}_u^{\dagger} \tilde{c}_u \right) \nonumber \\ &+\frac{1}{3}\bigg[ 2 \left(n_u + n_d\right) - 29 \bigg] \tilde{g}_3^2\tilde{a}_{uG}\,, \\ % \beta_{\tilde{a}_{dG}} &= -\frac{8}{3} \tilde{g}_3 \tilde{e} \tilde{a}_{dA} - 4\tilde{g}_3\tilde{c}_d \tilde{a}_{s\tilde{G}} + \frac{2}{9} \tilde{e}^2 \tilde{a}_{dG} + \frac{1}{2} \left( \tilde{c}_d \tilde{c}_d^{\dagger}\tilde{a}_{dG} + \tilde{a}_{dG} \tilde{c}_d^{\dagger} \tilde{c}_d \right) \nonumber \\ & +\frac{1}{3}\bigg[ 2 \left(n_u + n_d\right) - 29 \bigg] \tilde{g}_3^2\tilde{a}_{dG}\,; \end{align} where $n_u$, $n_d$ and $n_\ell$ denote the number of dynamical up-type quarks, down-type quarks and charged leptons, respectively, in the EFT under consideration.
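To illustrate how these flavour-number factors are used in practice, here is a minimal sketch that counts the dynamical fermions entering the beta functions at a given renormalization scale. The threshold masses are rounded PDG values and the function name is ours; the exact matching scales are a scheme choice, so this is purely illustrative:

```python
# Illustrative counting of the dynamical fermions (n_u, n_d, n_l) that
# enter the beta functions at a renormalization scale mu (in GeV).
# Thresholds are rounded PDG masses; the top quark is absent since it is
# integrated out in the LEFT below the EW scale.
MASSES_U = [0.002, 1.27]             # u, c
MASSES_D = [0.005, 0.095, 4.18]      # d, s, b
MASSES_L = [0.000511, 0.106, 1.777]  # e, mu, tau

def n_dynamical(mu):
    """Return (n_u, n_d, n_l) for the EFT at scale mu (GeV)."""
    count = lambda masses: sum(m < mu for m in masses)
    return count(MASSES_U), count(MASSES_D), count(MASSES_L)

print(n_dynamical(100.0))  # just below the EW scale -> (2, 3, 3)
print(n_dynamical(1.0))    # around 1 GeV -> (1, 2, 2)
```

Between thresholds the counts are constant, so a leading-log integration of the RGEs can simply be performed piecewise with these values.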
The equations above are fully generic, meaning that they hold irrespective of whether the EFT in the UV is the one we have assumed in Section~\ref{sec:eft} and that leads to the matching conditions in Eqs.~\eqref{eq:matching}--\eqref{eq:matching2}, or rather any other EFT containing different degrees of freedom, such as a second scalar doublet with arbitrary Yukawa couplings, which would lead to non-vanishing $\tilde{a}_{u,d,e}$. In the former case, though, one should take into account that $\tilde{c}_{u,d,e}$ are already $1/\Lambda$ suppressed and therefore terms with more than one appearance of these Wilson coefficients should be neglected for consistency. Within our EFT(s), the parameters $\tilde{a}_u$, $\tilde{a}_d$ and $\tilde{a}_e$ vanish at all scales, and the renormalizable fermion and ALP masses do not get contributions from dimension-five operators. In general, this is not the case and we have, for example, \begin{equation} % \delta\tilde{m}^2\sim \tilde{m}_e^3\tilde{a}_e\,, \quad \delta\tilde{m}_e\sim \tilde{m}^2\tilde{a}_e\,, % \end{equation} and likewise for quarks. The running of the operators involving coloured particles should not be taken at face value at energies close to or below $\Lambda_{\mathrm{QCD}}\sim 200$ MeV, where QCD becomes strongly coupled. Otherwise, the equations above, together with Eqs.~\eqref{eq:beta10}--\eqref{eq:beta18}, can be used to make predictions within the ALP EFT to leading-log accuracy across all energy scales. \section{Some phenomenological applications} \label{sec:pheno} The mixing of different operators can have a significant impact on the understanding of extensions of the SM with ALPs.
To exemplify this point, we consider in this section the following simple Lagrangian, defined at the scale $\Lambda=10$ TeV: \begin{equation} % \mathcal{L} = \mathcal{L}_{\mathrm{SM}}+ \frac{1}{2}\partial_\mu s\partial^\mu s + \frac{1}{2}\tilde{m}^2 s^2 + \frac{a_{s\widetilde{Z}}}{c_\omega^2 - s_\omega^2} s \left( c_\omega^2 W_{\mu\nu} \widetilde{W}^{\mu\nu} - s_\omega^2 B_{\mu\nu} \widetilde{B}^{\mu\nu} \right) \,, \end{equation} where $\mathcal{L}_{\mathrm{SM}}$ stands for the SM Lagrangian. In this Lagrangian, the ALP couples to pairs of $Z$ bosons but not to pairs of photons. Despite its simplicity, this structure arises for example in the next-to-minimal composite Higgs model based on $SO(6)/SO(5)$ as a result of quantum anomalies~\cite{Gripaios:2016mmi}. Within this framework, the photophobic condition is stable, namely $a_{s\tilde{A}}$ remains vanishing at all scales. (The ALP can still couple to photons proportionally to $\tilde{m}$ via loops of heavy gauge bosons~\cite{Craig:2018kne}, for instance.) However, below the EW scale, the ALP coupling to photons could be induced by (purely SM) dipole operators even at dimension five if the high-energy theory is not just the ALP EFT, but rather involves other states near the EW scale. Let us assume that the physical ALP mass is $\mathcal{O}(\mathrm{keV})$. While $a_{s\widetilde{Z}}$ can be directly constrained at colliders, \textit{e.g.} in $pp\to Zs$, the corresponding bounds are very weak. For example, values of $a_{s\widetilde{Z}}$ larger than $0.2\,\mathrm{TeV}^{-1}$ can be constrained, depending on the value of $\tilde{m}$, from LHC Run II data~\cite{Brivio:2017ije}. This reach can be extended down to values of $0.04\, \mathrm{TeV}^{-1}$ at the High-Luminosity phase of the LHC~\cite{Brivio:2017ije}. However, $a_{s\widetilde{Z}}$ does generate, through mixing, other operators with non-vanishing Wilson coefficients, in particular $a_{se\phi}$, which is tightly constrained experimentally.
Indeed, the most stringent bound for ALPs with masses $\sim \mathrm{keV}$ comes from the modification of red-giant cooling due to ALP radiation. This sets a bound on the ALP coupling to electrons $\tilde{c}_{e} \lesssim 3\times10^{-13}$, for a typical core temperature of $T \approx 10^{8}$ K~\cite{Raffelt:2006cw}. Since we are assuming that there are no degrees of freedom other than the ALP and the SM ones below $\Lambda$, $\tilde{c}_e$ runs proportionally to itself. Resumming Eqs.~\eqref{eq:erunLEFT} and \eqref{eq:cerunLEFT}, including $\mathcal{O}(1/\Lambda)$ effects, we obtain at the EW scale: \begin{equation} \tilde{c}_e(v)\lesssim 2.8\times 10^{-13}, \label{cev} \end{equation} which translates into \begin{equation} % a_{se\phi} (v) \lesssim 1.6 \times 10^{-12} \; \mathrm{TeV}^{-1}\,. % \end{equation} Numerically solving Eqs.~\eqref{eq:beta10}-\eqref{eq:beta14},~\eqref{eq:betag1}-\eqref{eq:betag3} and \eqref{eq:betayu}-\eqref{eq:betaye} for $\lambda_{s\phi} = 0$, we can compute the maximum allowed value for $a_{s\widetilde{Z}}(10\,\text{TeV})$: \begin{equation} % a_{s\widetilde{Z}}(10\,\text{TeV}) \lesssim 4.8 \times 10^{-6} \; \mathrm{TeV}^{-1}\,. % \end{equation} Despite the electron Yukawa suppression, this is four orders of magnitude stronger than prospects from direct searches. Such an analysis of the photophobic ALP was previously performed in Ref.~\cite{Craig:2018kne} by considering solely the running of the gauge operators. As shown in Eq.~\eqref{cev}, the running from the EW scale down to the keV scale amounts to a $\sim 6\%$ effect, which can be taken as a systematic theory error when using only the EFT before EWSB. See also Ref.~\cite{Terol-Calvo:2019vck} for an analysis of the RGE effects on bounds on neutrino interactions, resulting in similar numbers.
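The size of this running effect can be checked with a back-of-the-envelope leading-log estimate. The sketch below keeps only the QED term $-6\tilde{e}^2\tilde{c}_e$ of Eq.~\eqref{eq:cerunLEFT}, freezes $\tilde{e}^2=4\pi\alpha$ at $\alpha\simeq 1/137$, neglects the running of $\tilde{e}$ itself, and assumes the normalization $\beta\equiv 16\pi^2\,\mathrm{d}\tilde{c}_e/\mathrm{d}\ln\mu$; it is not a substitute for the full resummation used in the text:

```python
import math

# Leading-log estimate of the self-renormalization of the ALP-electron
# coupling c_e between mu ~ 1 keV and the EW scale v ~ 246 GeV.
alpha = 1.0 / 137.036
e2 = 4.0 * math.pi * alpha           # e^2 frozen at its low-energy value

t = math.log(246e9 / 1e3)            # ln(v / keV), both scales in eV

# beta_{c_e} ~ -6 e^2 c_e, with beta = 16 pi^2 d c_e / d ln(mu)
gamma = -6.0 * e2 / (16.0 * math.pi**2)
ratio = math.exp(gamma * t)          # c_e(v) / c_e(keV)

c_e_v = ratio * 3.0e-13              # red-giant bound imposed at ~keV
print(f"c_e(v)/c_e(keV) = {ratio:.3f}, c_e(v) < {c_e_v:.2e}")
```

With these inputs the suppression factor comes out close to $0.93$, reproducing both the quoted $\sim 6\%$ effect and the order of magnitude of the bound in Eq.~\eqref{cev}.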
As another example, we consider the case of a top-philic ALP at $\Lambda=10$ TeV, with Lagrangian \begin{equation} % \mathcal{L} = \mathcal{L}_{\mathrm{SM}}+\frac{1}{2}\partial_\mu s\partial^\mu s + \frac{1}{2}\tilde{m}^2 s^2 + a_t s[i \overline{q}_L\tilde{\phi}t_R + \text{h.c.}] \,, % \end{equation} where $q_L$ stands for the third-generation quark doublet and, in our notation, $a_t=(a_{su\phi})^{33}$. As before, $a_t$ generates, via renormalization mixing, a non-vanishing $a_{se\phi}$. Proceeding in the same way as above, we obtain $a_t(10\,\text{TeV}) \lesssim 4.3 \times 10^{-6}\; \mathrm{TeV}^{-1}$ from the bound $\tilde{c}_e(\mu\sim\mathrm{keV}) \lesssim 3\times 10^{-13}$. Direct bounds on this coupling could in principle be obtained from $pp\to t\overline{t} s$, but they are likely to be very weak due to the challenging final state. Other indirect constraints on $a_t$ have been studied in Ref.~\cite{Ebadi:2019gij}, but they are again much weaker than the one we have obtained. Other interesting phenomenological implications, like the possible impact of non-SM degrees of freedom using the generic RGEs, and in particular the mixing between operators of different dimensions and different ALP content, are left for future work. \section{Conclusions} \label{sec:conclusions} In this article we have investigated the EFT for ALPs up to order $\mathcal{O}(1/\Lambda)$ in the cutoff scale $\Lambda$. We have worked in a complete basis of EFT operators to dimension five, including both shift-preserving and shift-breaking interactions. Assuming that CP is conserved in the UV, we have computed, at one loop, the evolution of the CP-even effective operators under renormalization group running from arbitrarily high energies down to the EW scale. In the ALP LEFT, relevant at lower energies, in which the heavy top quark and the Higgs, $Z$ and $W$ bosons are no longer dynamical, we have also computed the renormalization of all relevant and marginal parameters.
We have found that, in general, effective interactions can renormalize dimension-four ones, and operators involving the ALP mix with pure SM operators, although the latter effect vanishes if the theory above the EW scale involves only the ALP and SM degrees of freedom. For this latter case we have also provided the matching conditions between the EFTs above and below the EW threshold. Interestingly, we have shown that in the presence of SM dimension-five interactions below the EW scale, the ALP coupling to photons no longer renormalizes proportionally to itself. To make our work more useful, we have not only given the full list of beta functions in a minimal basis but we have also explicitly written the corresponding counterterms of all independent (off-shell) Green functions. This is important for two reasons: first, because without this information, the RGEs of extensions of our EFT, \textit{e.g.} by adding right-handed neutrinos, cannot be built on our results, as redundancies are different in different EFTs; second, because while in analytical one-loop computations within $\overline{\text{MS}}$ the counterterms might in principle be ignored by just dropping the $1/\epsilon$ poles, their precise value is of fundamental importance in numerical Monte Carlo simulations. For the ALP EFT in the unbroken phase, several RGEs have been obtained previously in the literature. These, however, assumed shift invariance, and were therefore presented in a different set of operators, in which the ALP always appears through derivatives. We have discussed some redundancies that appear in this set of operators, and obtained conditions on the corresponding coefficients under which they actually form a basis of independent interactions. Upon relating this basis to ours, we have obtained the RGEs in the former.
In this way, we have not only completely solved the full dimension-five ALP EFT to leading-log order, but also cross-checked several of the partial results which were somewhat scattered across different references~\cite{Bauer:2016lbe,Choi:2017gpf,Bauer:2017ris}. Finally, as an example of the use of our results, we have explored the possibility of indirectly probing ALP-$Z$ as well as ALP-top interactions through their contribution to the ALP-electron coupling, which is bounded by very low-energy measurements. (We leave the interesting possibility of testing ALP interactions through their mixing into pure SM ones, and vice versa, to future work.) This shows the potential of our results to study the ALP EFT phenomenology to leading-log accuracy across all energy scales. \section*{Acknowledgments} We would like to thank Adri\'an Carmona for useful feedback and for suggesting the title. MC also thanks Sara Khatibi for useful discussions. MC and JS are partially supported by the Ministry of Science and Innovation under grant number FPA2016-78220-C3-1/3-P (FEDER), SRA under grant PID2019-106087GB-C21/C22 (10.13039/501100011033), and by the Junta de Andaluc\'ia grants FQM 101, and A-FQM-211-UGR18 and P18-FR-4314 (FEDER). MC is also supported by the Spanish MINECO under the Ram\'on y Cajal programme. GG and MR acknowledge support by LIP (FCT, COMPETE2020-Portugal2020, FEDER, POCI-01-0145-FEDER-007334) as well as by FCT under project CERN/FIS-PAR/0024/2019. GG is also supported by FCT under the grant SFRH/BD/144244/2019. MR is also supported by Funda\c{c}\~ao para a Ci\^encia e Tecnologia (FCT) under the grant PD/BD/142773/2018.
\section{Introduction} The purpose of this work is to study high-frequency asymptotics of eigenfunctions to the Schr\"odinger operator on the $2$-sphere \begin{equation}\label{e:sphere}\mathbb{S}^2:=\left\{(x_1,x_2,x_3)\in\mathbb{R}^3:x_1^2+x_2^2+x_3^2=1\right\}.\end{equation} We endow $\IS^2$ with the Riemannian metric $g_0$ induced by the Euclidean metric on $\mathbb{R}^3$. In that geometric context and given an element $V\in\mathcal{C}^{\infty}(\mathbb{S}^2,\mathbb{R})$, there exists an orthonormal basis~\cite[Th.~14.7]{Zw12} of $L^2(\mathbb{S}^2,d\upsilon_{g_0})$ made of solutions to \begin{equation}\label{e:schrodinger}-\Delta_{g_0}\psi_\lambda +V\psi_\lambda =\lambda^2\psi_\lambda,\quad\lambda^2\in\mathbb{R},\end{equation} where $\Delta_{g_0}$ is the Laplace-Beltrami operator and $d\upsilon_{g_0}$ is the Riemannian volume, both induced by $g_0$. By elliptic regularity, solutions to~\eqref{e:schrodinger} are smooth~\cite[\S~14.3]{Zw12} and a classical Theorem of Sogge~\cite{So88} states that, for every $2\leq p\leq +\infty$, there exists $C_p>0$ such that, for any solution $(\psi_\lambda,\lambda)$ to~\eqref{e:schrodinger}, \begin{equation}\label{e:Lp-sogge}\|\psi_\lambda\|_{L^p(\IS^2)}\leq C_p(1+|\lambda|)^{\sigma_0(p)}\|\psi_\lambda\|_{L^2(\IS^2)},\end{equation} where\footnote{The case $p=\infty$ is a consequence of the local Weyl law~\cite{Ho68}.} $$\sigma_0(p):=\max\left\{\frac{1}{4}-\frac{1}{2p},\frac{1}{2}-\frac{2}{p}\right\}.$$ The critical exponent for which both quantities in the maximum coincide is given by $p_c=6$. Recall that Sogge's result extends to $\ml{O}(\lambda)$-quasimodes of $-\Delta_{g_0}$, of which solutions to~\eqref{e:schrodinger} are a particular instance -- see also~\cite{KTZ}. In the case where $V\equiv 0$, these upper bounds are optimal, as can be seen using appropriate sequences of spherical harmonics~\cite{So15}.
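For the reader's convenience, the value of the critical exponent is recovered by equating the two expressions inside the maximum defining $\sigma_0$: $$\frac{1}{4}-\frac{1}{2p}=\frac{1}{2}-\frac{2}{p}\ \Longleftrightarrow\ \frac{3}{2p}=\frac{1}{4}\ \Longleftrightarrow\ p=6,\qquad \sigma_0(6)=\frac{1}{4}-\frac{1}{12}=\frac{1}{2}-\frac{1}{3}=\frac{1}{6}.$$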
However, for generic sequences~\cite{VdK97, Ze08, BuLe13} or for families satisfying certain extra invariance properties~\cite{BrLM20}, these bounds can be drastically improved when $V\equiv 0$. Our aim is to show that the presence of a potential allows one to improve~\eqref{e:Lp-sogge} away from certain critical geodesics and for \emph{any} sequence of eigenfunctions. In order to state our results, we introduce the space of oriented closed geodesics $G(\mathbb{S}^2)$ of the sphere. By identifying each oriented closed geodesic with an oriented plane of $\mathbb{R}^3$, $G(\mathbb{S}^2)$ is diffeomorphic to $\mathbb{S}^2$. Through this identification, $G(\mathbb{S}^2)\simeq\mathbb{S}^2$ is endowed with the symplectic structure induced by the one on the cotangent bundle $T^*\mathbb{S}^2$~\cite[p.~58]{Bes78}. We also define the Radon transform of the potential $V$: $$\mathcal{R}(V):\gamma\in G(\mathbb{S}^2)\mapsto \frac{1}{2\pi}\int_0^{2\pi} V(\gamma(s))ds \in\mathbb{R},$$ which belongs to $\mathcal{C}^{\infty}(G(\mathbb{S}^2))$. Thanks to the symplectic structure on $G(\IS^2)$, one can define the Hamiltonian vector field $X_{\langle V\rangle}$ of $\mathcal{R}(V)$. We denote the set of critical points of $\mathcal{R}(V)$ by $$\text{Crit}(\mathcal{R}(V)):=\left\{\gamma\in G(\mathbb{S}^2): D_\gamma\mathcal{R}(V)=0\right\}=\left\{\gamma\in G(\mathbb{S}^2): X_{\langle V\rangle}(\gamma)=0\right\}.$$ Observe that $\ml{R}(V)$ is always an even function on $G(\IS^2)$. In particular, it can be identified with a function on $\mathbb{R}P^2$ and it thus has at least $6$ critical points on $G(\IS^2)$ by Morse inequalities. In fact, Guillemin showed~\cite{Gu76} that $$\mathcal{R}:\mathcal{C}^{\infty}(\mathbb{S}^2)\rightarrow \mathcal{C}^{\infty}_{\text{even}}(G(\mathbb{S}^2))\simeq\mathcal{C}^{\infty}(\IR P^2)$$ is a surjective map whose kernel is equal to $\mathcal{C}^{\infty}_{\text{odd}}(\mathbb{S}^2)$.
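As an elementary illustration of this description of the kernel, parametrize a closed geodesic as $\gamma(s)=\cos(s)\,u+\sin(s)\,v$ with $u,v$ orthonormal in $\mathbb{R}^3$; then any linear (hence odd) function $V(x)=x_j$ satisfies $$\mathcal{R}(x_j)(\gamma)=\frac{1}{2\pi}\int_0^{2\pi}\big(\cos(s)\,u_j+\sin(s)\,v_j\big)\,ds=0,$$ so linear functions indeed belong to the kernel of $\mathcal{R}$.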
As a corollary, for a generic choice of $V$ in the $\mathcal{C}^{\infty}$-topology, $\text{Crit}(\mathcal{R}(V))$ is a finite set. Finally, given $x_0\in\IS^2$, we set $$\Gamma_{x_0}:=\left\{\gamma\in G(\IS^2):\ x_0\in\gamma\right\}.$$ Our main result reads as follows \begin{theo}\label{t:maintheo} Let $x_0\in \IS^2$ such that \begin{equation}\label{e:hyp-crit}\operatorname{Crit}(\mathcal{R}(V))\cap \Gamma_{x_0}=\emptyset,\end{equation} and \begin{equation}\label{e:hyp-trans} \ml{R}(V)|_{\Gamma_{x_0}}\ \text{is a Morse function.}\end{equation} Then, there exists $r_0>0$ such that, for every $2\leq p\leq +\infty$, one can find $C_{x_0,p}>0$ so that, for any solution $(\psi_\lambda,\lambda)$ to~\eqref{e:schrodinger}, $$\|\psi_\lambda\|_{L^p(B_{r_0}(x_0))}\leq C_{x_0,p}(\log(2+|\lambda|))^{\varepsilon(p)}(1+|\lambda|)^{\sigma_0(p)-\delta(p)}\|\psi_\lambda\|_{L^2(\mathbb{S}^2)},$$ where $B_{r_0}(x_0)$ is the closed (geodesic) ball of radius $r_0$ centered at $x_0$ and where, for $4< p\leq \infty,$ $$\delta(p):=\frac{1}{18}\left|1-\frac{6}{p}\right|,\quad \varepsilon(p)=0$$ and, for $2\leq p\leq 4$ $$\delta(p):=\frac{1}{18}\left(1-\frac{2}{p}\right),\quad\varepsilon(p):=2\left(1-\frac{2}{p}\right).$$ \end{theo} \begin{rema} Given a point $x_0$, we note that~\eqref{e:hyp-crit} and~\eqref{e:hyp-trans} are satisfied for an open and dense subset $\mathcal{U}_{x_0}$ of potentials in $\mathcal{C}^{\infty}(\IS^2,\IR)$ (endowed with its natural Fr\'echet topology). Assumption~\eqref{e:hyp-trans} implies that the Hamiltonian vector field is transverse to $\Gamma_{x_0}$ except at finitely many points. Combined with~\eqref{e:hyp-crit}, one has that, at the points where $X_{\langle V\rangle}(\gamma)$ is tangent to $\Gamma_{x_0}$, the tangency is of order $1$. See Remark~\ref{r:caustic} for an interpretation of these assumptions in terms of Lagrangian tori. 
\end{rema} \begin{rema} A direct Corollary of Theorem~\ref{t:maintheo} is that, if $K$ is a compact subset of $\IS^2$ such that, for every $x_0\in K$,~\eqref{e:hyp-crit} and~\eqref{e:hyp-trans} hold, then, for any solution $(\psi_\lambda,\lambda)$ to~\eqref{e:schrodinger}, \begin{equation}\label{e:Lp-compact}\|\psi_\lambda\|_{L^p(K)}\leq C_{K,p}(\log(2+|\lambda|))^{\varepsilon(p)}(1+|\lambda|)^{\sigma_0(p)-\delta(p)}\|\psi_\lambda\|_{L^2(\mathbb{S}^2)}.\end{equation} Yet, our main result does not allow us to take $K=\IS^2$, as $\operatorname{Crit}(\mathcal{R}(V))$ cannot be empty. \end{rema} This Theorem yields a \emph{local} improvement for $p\neq 6$ over Sogge's upper bounds near certain points of $\IS^2$ which are \emph{independent} of the sequence $(\psi_{\lambda})_\lambda$ under consideration. The conditions on these points are of a purely dynamical nature and depend on the subprincipal symbol of our operator. It may happen that Sogge's upper bounds are saturated for these operators, but this can only occur away from points $x_0$ verifying~\eqref{e:hyp-crit} and~\eqref{e:hyp-trans}. The critical case $p_c=6$ could perhaps be treated using similar ideas and the methods of Blair and Sogge to handle this exponent on nonpositively curved surfaces~\cite{So17, BlSo19}. Yet, this would probably require a much more delicate analysis than the one presented in this article. Our hypotheses~\eqref{e:hyp-crit} and~\eqref{e:hyp-trans} are reminiscent of assumptions that appear when studying joint eigenfunctions of quantum completely integrable systems -- see~\cite[\S 1]{ToZe02} for a definition. For instance, the critical points involved in hypothesis~\eqref{e:hyp-crit} were used to obtain lower bounds by Toth in~\cite{To96} and by Toth-Zelditch in~\cite{ToZe02, ToZe03}. Similarly, assumption~\eqref{e:hyp-trans} was recently used by Galkowski--Toth~\cite{GaTo20} and by Tacy~\cite{Ta19b} to study the growth of $L^{\infty}$-norms of joint eigenfunctions.
The main differences with these last references are that we handle every $p\neq 6$ and that we consider here eigenfunctions of the \emph{single} operator $-\Delta_{g_0}+V$. In fact recall from~\cite[Lemma~1]{Gu81} (see also~\cite{W77}) that there exists a unitary pseudodifferential operator $\mathcal{U}$ of order $0$ such that \begin{equation}\label{e:guillemin-weinstein}\mathcal{U}^{-1}\left(-\Delta_{g_0}+V\right)\mathcal{U}=-\Delta_{g_0}+V^\sharp,\end{equation} where $[\Delta_{g_0},V^\sharp]=0$ and where the principal symbol of $V^\sharp$ is $\mathcal{R}(V)$. In other words, $-\Delta_{g_0}+V$ is the sum of two commuting pseudodifferential operators $\widehat{H}_1:=\mathcal{U}\Delta_{g_0}\mathcal{U}^{-1}$ and $\widehat{H}_2:=\mathcal{U}V^\sharp\mathcal{U}^{-1}$. In particular, it is a quantum completely integrable operator in the sense of~\cite[\S 1]{ToZe02} whenever $X_{\langle V\rangle}$ does not vanish on a dense and open subset of finite complexity (say outside finitely many points). Hence, upper bounds on $L^p$ norms of solutions to~\eqref{e:schrodinger} which are joint eigenfunctions of $(\widehat{H}_1,\widehat{H}_2)$ would follow from the results in~\cite{GaTo20, Ta19b} in the range $p>6$. However, in Theorem~\ref{t:maintheo}, we only suppose $p\neq 6$ and we do not make any assumption on the fact that $\psi_\lambda$ is a joint eigenfunction\footnote{We are not aware of a geometric criterion ensuring that all eigenfunctions are joint eigenfunctions. Yet, this is for instance achieved when the spectrum of $-\Delta_{g_0}+V$ is simple, which is the case for a residual set of potentials~\cite[Th.~7]{Uh76}.} of $(\widehat{H}_1,\widehat{H}_2)$ which makes the analysis slightly more delicate. Despite that, Theorem~\ref{t:maintheo} shows that there is room for (weaker) polynomial improvements on~\eqref{e:Lp-sogge} even for such eigenfunctions and even for $p<6$. 
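As a quick consistency check on the exponents in Theorem~\ref{t:maintheo}, the two formulas for $\delta$ agree at $p=4$ and the gain degenerates precisely at the critical exponent: $$\frac{1}{18}\left|1-\frac{6}{4}\right|=\frac{1}{36}=\frac{1}{18}\left(1-\frac{2}{4}\right),\qquad \frac{1}{18}\left|1-\frac{6}{6}\right|=0,$$ in accordance with the exclusion of $p_c=6$ from the statement.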
In~\cite{Ta19}, Tacy obtained better estimates up to $p=2$, but she made stronger assumptions than ours on the sequence of eigenfunctions. Indeed, when restricted to our framework, the main result from this reference applies to sequences of joint eigenfunctions that concentrate away from the critical points of $\mathcal{R}(V)|_{\Gamma_{x_0}}$. \subsection{Earlier and related results} The upper bounds~\eqref{e:Lp-sogge} are in fact valid in the general framework of compact Riemannian surfaces and, up to modifying the exponent $\sigma_0(p)$, they remain true in higher dimensions~\cite{So88}. Trying to improve them using the geometry of the manifold has been a classical topic in global harmonic analysis over the last thirty years. \begin{itemize} \item \textbf{Flat tori.} In the case of flat tori and where $V\equiv 0$, this was achieved by Cooke~\cite{Co71} and Zygmund~\cite{Zy74} in dimension $2$, while the higher-dimensional case was pursued by Bourgain~\cite{Bo93} and by Bourgain-Demeter~\cite{BoDe15}. In that case, one can use the arithmetic structure of the torus to get polynomial improvements over~\eqref{e:Lp-sogge}. See also~\cite{Wa11} for the case of Schr\"odinger operators on $2$-dimensional tori. To the best of the author's knowledge, flat tori are almost the only geometric framework where one can get global polynomial improvements without any further assumptions on the sequence of eigenfunctions (see below for the case of joint eigenfunctions). We can also mention~\cite{Zh20, Zh21} for recent improvements on compact Lie groups. \item \textbf{Negatively curved manifolds.} Another important class of examples where one expects improvements is that of negatively curved manifolds. For $p=\infty$, B\'erard showed how to get logarithmic improvements~\cite{Be77}. This logarithmic gain was extended to the range $p>p_c$ by Hassell and Tacy~\cite{HaTa15} and to manifolds without conjugate points by Bonthonneau~\cite{Bon17}.
Still on negatively curved manifolds and for $p\leq p_c$, we obtained together with Hezari a logarithmic gain along generic sequences of eigenfunctions~\cite{HR16}. In a series of works related to Kakeya-Nikodym norms~\cite{So17, BlSo18, BlSo19}, Blair and Sogge proved logarithmic gains (with a slightly worse exponent) in this geometric context without any restriction on the sequence of eigenfunctions. \item \textbf{Arithmetic eigenfunctions.} A natural way to look for improvements over~\eqref{e:Lp-sogge} is to consider families of eigenfunctions that satisfy extra symmetries, for instance joint eigenfunctions of the Laplacian and of a family of commuting operators. In the case of a compact arithmetic surface, Iwaniec and Sarnak considered joint eigenfunctions of the Laplacian and of Hecke operators. For such sequences of eigenfunctions, they proved a polynomial improvement in the case of the $L^{\infty}$-norm~\cite{IwSar95}. In the case of the sphere, Brooks and Le Masson considered the related problem of joint eigenfunctions of $\Delta_{g_0}$ and the averaging operator for a finitely generated free algebraic subgroup of $SO(3)$~\cite{BrLM20}. For such eigenfunctions, they obtained the same logarithmic improvement as Hassell and Tacy in the negatively curved case. On a rank $r$ symmetric space of dimension $n$, Sarnak improved the bound on the $L^{\infty}$-norm by a polynomial factor for eigenfunctions of the full ring of differential operators~\cite{Sar04}. This was generalized to the case of $L^p$-norms by Marshall~\cite{Ma16}. \item \textbf{Completely integrable systems.} Another context (closely related to ours) is the case of \emph{joint} eigenfunctions of a quantum completely integrable system. Toth and Zelditch proved that such eigenfunctions cannot have their $L^p$ norms uniformly bounded except in the case of flat tori~\cite{ToZe02, ToZe03}. See~\cite[Ch.~11]{Ze17} for a detailed discussion on joint eigenfunctions of quantum completely integrable systems.
More recently, Galkowski and Toth obtained polynomial improvements on the $L^{\infty}$-bound for joint eigenfunctions of a quantum completely integrable system~\cite{GaTo20} and Tacy proved improvements of Sogge's bounds for joint eigenfunctions of general families of semiclassical pseudodifferential operators~\cite{Ta19, Ta19b}. \item \textbf{Local improvements.} Sogge and Zelditch considered the problem from a more local perspective, as we are doing here. They proved that, if, for a given point $x_0$ on a Riemannian manifold $(M,g)$, the set of covectors $\xi\in S_{x_0}^*M$ that come back to $x_0$ in finite time has zero measure, then one can improve locally near $x_0$ the upper bound on the $L^{\infty}$-norm by a $o(1)$ term~\cite{SoZe02}. This was based on improvements on the remainder in the local Weyl law. See also~\cite{Sa88} for earlier related results of Safarov. This result was later extended by Sogge, Toth and Zelditch under the weaker assumption that the set of recurrent covectors at $x_0$ has zero measure\footnote{We emphasize that Theorem~\ref{t:maintheo} considers the somewhat opposite case where the set of recurrent vectors has full measure. Despite that, we are able to get local polynomial improvements using the periodicity of the geodesic flow and the presence of a subprincipal symbol.}~\cite{SoToZe11}. We also refer to~\cite{SoZe16} for further developments of this approach when the metric is analytic and to~\cite[Ch.~10]{Ze17} for a detailed review. Related to these works, Galkowski and Toth showed how to relate precisely the growth of the $L^{\infty}$-norm near a point $x_0$ to the semiclassical measure restricted to the (geodesic) flow-out of the fiber $S_{x_0}^*M$~\cite{GaTo18} -- see also~\cite{Ga19}. More precisely, they proved that, if the $n$-dimensional Hausdorff measure of the support of this restriction is $0$, then one can get a $o(1)$-improvement on the growth of the $L^{\infty}$-norm near $x_0$.
\item \textbf{Using Gaussian beams.} This local approach was further improved by Canzani-Galkowski in a series of works using Gaussian beams~\cite{CaGa19, CaGa20}. In~\cite[Th.~1]{CaGa20}, they showed how to use this notion in order to give quantitative and at most logarithmic improvements on the growth of $L^p$-norms near a point $x_0$ when the conjugate points to $x_0$ do not pass too close to $x_0$. Among other things, they recover in that manner the results of B\'erard, Hassell-Tacy and Bonthonneau on manifolds without conjugate points. Besides that, they manage to deduce from their main results local improvements near $x_0$ on the growth of $L^p$ norms (for $p>p_c$) under quantitative assumptions on the geodesics passing through the point $x_0$ as in the works of Sogge, Toth and Zelditch. Finally, they also applied their main results to certain integrable (non-periodic) geometries on $\IS^2$ and obtained logarithmic improvements away from certain critical points when $p=\infty$~\cite[Th.~5]{CaGa19}. As in our framework, their result holds for the eigenfunctions of a single operator. \end{itemize} \subsection{Strategy of proof} In the range $p>6$, the proof is based on an argument to study the growth of $L^p$ norms that was used by Hezari and the author in~\cite{HR16} and further improved by Sogge in~\cite{So16}. It consists in relating the growth of $L^p$-norms to the growth of \begin{equation}\label{e:ave-ball}\int_{B_r(x)}|\psi_\lambda(y)|^2d\upsilon_{g_0}(y)\end{equation} as $\lambda\rightarrow +\infty$ and $r\rightarrow 0^+$ (in a way that depends on $\lambda$). For $2\leq p<6$, we rather make use of results due to Blair and Sogge~\cite{So11, BlSo15, BlSo17} to control $L^p$-norms in terms of Kakeya-Nikodym averages around closed geodesics. See also~\cite{Bo09} for earlier related results of Bourgain. Then, we obtain rough bounds on these averages in terms of~\eqref{e:ave-ball}.
The results from these references are briefly recalled (and adapted to Schr\"odinger eigenfunctions) in Sections~\ref{s:Lp} and~\ref{s:kakeya}. Up to smoothing the characteristic function of the balls, these local quantities can be interpreted in terms of Wigner distributions (or microlocal lifts). In particular, as was for instance observed by Shnirelman in his seminal work on quantum ergodicity~\cite{Sh74, Sh74b}, these distributions satisfy an almost invariance property by the geodesic flow. See for instance~\cite[Lemma 2, Eq.~(10)]{Sh74b}. This yields an upper bound of order $\mathcal{O}(r)$ on~\eqref{e:ave-ball}, at least if $r$ does not go to $0$ too fast (say $r\gg\lambda^{-\frac{1}{2}}$). This is valid in a quite general framework. Yet, this is not sufficient to get an improvement over Sogge's upper bound. In order to implement this approach, one needs to have upper bounds of order $\mathcal{O}(r^{1+\alpha})$ for some $\alpha>0$, or at least $\mathcal{O}(\delta(r)r)$ with $\delta(r)\rightarrow 0$ as $r\rightarrow 0^+$. As pointed out by Sarnak in~\cite{Sar04}, a natural manner to look for improvements over Sogge's upper bounds is to consider operators commuting with the Laplacian and to study the $L^p$ norm of joint eigenfunctions. These joint eigenfunctions enjoy more symmetries which may lead to improvements. This was for instance the strategy followed in~\cite{IwSar95, BrLM20, Ma16, GaTo18, Ta19, Ta19b}. Here, we are not a priori in this situation as we consider eigenfunctions of the \emph{single} operator $-\Delta_{g_0}+V$ -- see the discussion following Theorem~\ref{t:maintheo}. However, the periodicity of the geodesic flow and the presence of the potential imply the existence of an extra invariance property besides the one by the geodesic flow.
More precisely, in~\cite{MR16, MR19}, together with Maci\`a, we showed that Schr\"odinger eigenfunctions satisfy an extra invariance property by the Hamiltonian flow of $\mathcal{R}(V)$ which is reminiscent of the properties of joint eigenfunctions. This was achieved using Weinstein's averaging method~\cite{W77}. Using this extra property, we will be able to get an upper bound of order $\mathcal{O}(r^{\frac{3}{2}})$ on~\eqref{e:ave-ball} up to scales $r\approx\lambda^{-\frac{2}{9}}$ near points verifying~\eqref{e:hyp-crit} and~\eqref{e:hyp-trans}. This will be the content of Section~\ref{s:invariance}. This additional invariance will be the reason for the polynomial improvement of Theorem~\ref{t:maintheo}. As we shall see in our proof\footnote{See for instance~\eqref{e:range-p}.}, the reason for being limited to $p\neq 6$ comes from this exponent $3/2$ and, in dimension $2$, any bound on~\eqref{e:ave-ball} of order $\mathcal{O}(r^{1+\alpha})$ with $\alpha>1/2$ would give a local improvement over Sogge's upper bound~\eqref{e:Lp-sogge} even for $p=6$ (using the arguments of \S~\ref{s:Lp}). \section*{Acknowledgements} I would like to address my warmest thanks to Hamid Hezari and Fabricio Maci\`a for my joint works with them~\cite{HR16, MR16, MR19} and for their many insights on these topics. I also thank Xiaolong Han for pointing out to me reference~\cite{Uh76} regarding the generic simplicity of the spectrum of Schr\"odinger operators and the anonymous referee for helpful suggestions. This work was supported by the Institut Universitaire de France and by the Agence Nationale de la Recherche through the PRC projects ODA (ANR-18-CE40-0020) and ADYCT (ANR-20-CE40-0017).
\section{Reduction to $L^2$ localized estimates for $p>6$} \label{s:Lp} In this section, we revisit an argument due to Sogge\footnote{See also~\cite{HR16} for earlier related arguments of Hezari and the author using semiclassical methods~\cite[\S 10]{Zw12}.} in order to relate $L^p$ estimates to localized $L^2$-estimates in small balls. This argument will allow us to get our upper bounds in the range $6<p\leq\infty$. The proof given in~\cite{So16} was for Laplace eigenfunctions and we verify that it can be adapted to Schr\"odinger eigenfunctions. \begin{rema} Due to our $L^2$-localized estimates in Section~\ref{s:invariance}, we could as well work only with $p=\infty$ and conclude by interpolation with the case $p=6$ in~\eqref{e:Lp-sogge}. Yet, we write things down for general $p$ in order to identify the quantitative improvements one would need to reach the case $p=6$. See Equation~\eqref{e:range-p} below. \end{rema} Let $\psi_\lambda$ be a solution to~\eqref{e:schrodinger} that we suppose to be $L^2$-normalized. In the following, we suppose that $\lambda^2$ is large enough so that we can pick $\lambda>0$. Following~\cite[\S 2]{So16} and for $j\in\mathbb{Z}_+$, we denote by $E_j$ the spectral projector onto the eigenspace of $\sqrt{-\Delta_{g_0}}$ with eigenvalue $\lambda_j:=\sqrt{j(j+1)}$. 
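Let us first record where the identity below comes from. The eigenvalue equation~\eqref{e:schrodinger} can be rewritten as $(-\Delta_{g_0}-\lambda^2)\psi_\lambda=-V\psi_\lambda$, that is,
$$\left(\sqrt{-\Delta_{g_0}}-\lambda\right)\left(\sqrt{-\Delta_{g_0}}+\lambda\right)\psi_\lambda=-V\psi_\lambda.$$
As $\sqrt{-\Delta_{g_0}}\geq 0$ and $\lambda>0$, the operator $\sqrt{-\Delta_{g_0}}+\lambda$ is invertible on $L^2(\IS^2)$ with $\left\|(\sqrt{-\Delta_{g_0}}+\lambda)^{-1}\right\|_{L^2\rightarrow L^2}\leq\lambda^{-1}$.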
We write \begin{equation}\label{e:quasimode-explicit}(\sqrt{-\Delta_{g_0}}-\lambda)\psi_\lambda=-(\sqrt{-\Delta_{g_0}}+\lambda)^{-1}V\psi_\lambda.\end{equation} In particular, one has \begin{equation}\label{e:quasimode}\left\|(\sqrt{-\Delta_{g_0}}-\lambda)\psi_\lambda\right\|_{L^2}=\left(\sum_{j\in\IZ^+}\frac{1}{(\lambda+\lambda_j)^2}\|E_j(V\psi_\lambda)\|^2\right)^{\frac{1}{2}}\leq \frac{1}{\lambda}\|V\psi_{\lambda}\|_{L^2}\leq \frac{\|V\|_{L^\infty}}{\lambda}.\end{equation} We also fix a nonnegative $\rho\in\mathcal{S}(\IR)$ satisfying \begin{equation}\label{e:rho} \rho(0)=1\quad\text{and}\quad\text{supp}(\hat{\rho})\subset[-1,1], \end{equation} where $\hat{\rho}$ is the Fourier transform of $\rho$. For $\lambda>0$ and $0<r\leq 1$, setting $$T_{\lambda,r}:=\frac{1}{\pi}\int_{-\infty}^{+\infty}r^{-1}\hat{\rho}(r^{-1}t)e^{it\lambda}\cos(t\sqrt{-\Delta_{g_0}})dt,$$ one finds $$T_{\lambda,r}=\rho\left(r\left(\lambda-\sqrt{-\Delta_{g_0}}\right)\right)+\rho\left(r\left(\lambda+\sqrt{-\Delta_{g_0}}\right)\right).$$ The main result of~\cite[Eq.~(3.1)]{So16} is that, for every $p>2$ and for every $f\in L^2(\mathbb{S}^2)$, \begin{equation}\label{e:sogge} \|T_{\lambda,r}f\|_{L^p(\IS^2)}\leq C_p r^{-\frac{1}{2}}\lambda^{\sigma_0(p)}\|f\|_{L^2(\IS^2)},\quad\lambda\geq 1,\quad\lambda^{-1}\leq r\leq\frac{\pi}{2}, \end{equation} where the constant $C_p$ is uniform for $(\lambda,r)$ in the above range. Recall now from Huygens' principle that the Schwartz kernel $\cos(t\sqrt{-\Delta_{g_0}})(x,y)$ vanishes if the geodesic distance between $x$ and $y$ is $>|t|$. In particular, the Schwartz kernel $T_{\lambda,r}(x,y)$ of $T_{\lambda,r}$ vanishes if $d_{g_0}(x,y)>r$ thanks to our assumptions~\eqref{e:rho} on the support of $\rho$.
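Let us also record why the two expressions for $T_{\lambda,r}$ coincide. With the convention $\rho(x)=\frac{1}{2\pi}\int_{\IR}\hat{\rho}(t)e^{itx}dt$ (which fixes the normalization of $\hat{\rho}$), writing $\cos(t\mu)=\frac{1}{2}(e^{it\mu}+e^{-it\mu})$ and changing variables $t=rs$, one finds, for every $\mu\geq 0$,
$$\frac{1}{\pi}\int_{-\infty}^{+\infty}r^{-1}\hat{\rho}(r^{-1}t)e^{it\lambda}\cos(t\mu)dt=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{\rho}(s)\left(e^{irs(\lambda+\mu)}+e^{irs(\lambda-\mu)}\right)ds=\rho(r(\lambda+\mu))+\rho(r(\lambda-\mu)),$$
and the formula for $T_{\lambda,r}$ follows from the spectral theorem applied to $\mu=\sqrt{-\Delta_{g_0}}$.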
Gathering this information, Sogge observed that, for every $p>2$ and for every $f\in L^2(\mathbb{S}^2)$, \begin{equation}\label{e:sogge2} \|T_{\lambda,r}f\|_{L^p(B_r(x_0))}\leq C_p r^{-\frac{1}{2}}\lambda^{\sigma_0(p)}\|f\|_{L^2(B_{2r}(x_0))},\quad\lambda\geq 1,\quad\lambda^{-1}\leq r\leq\frac{\pi}{2}, \end{equation} where the constant $C_p$ is uniform for $(\lambda,r)$ in the above range and for $x_0\in\IS^2$. This will be referred to as Sogge's local $L^p$-estimate. Fix now some compact subset $K$ of $\mathbb{S}^2$. We can cover $K$ by finitely many balls $(B_r(x_l))_{l=1,\ldots, N(r)}$ of radius $r$ centered at points inside $K$. We require that the number $N(r)$ is of order $\sim r^{-2}$ and that each point of $K$ is contained in at most $C_0$ balls of the covering $(B_{2r}(x_l))_{l=1,\ldots, N(r)}$. Here $C_0>0$ is independent of $r$ -- see for instance~\cite[Lemma~2]{CM11}. Recall that we have in mind to apply this result when $K=B_{r_0}(x_0)$ is a fixed ball. Hence, one has, for $2<p<\infty$ and for $f$ in $L^2(\IS^2)$, \begin{eqnarray*} \|f\|_{L^p(K)}^p &\leq & 2^{p-1}\left(\sum_{l=1}^{N(r)}\|T_{\lambda,r}f\|_{L^p(B_r(x_l))}^p+\|(T_{\lambda,r}-\text{Id})f\|_{L^p(\IS^2)}^p\right)\\ &\leq & C_p r^{-\frac{p}{2}}\lambda^{\sigma_0(p)p}\sum_{l=1}^{N(r)}\|f\|_{L^2(B_{2r}(x_l))}^p+C_p\|(T_{\lambda,r}-\text{Id})f\|_{L^p(\IS^2)}^p\\ &\leq & C_pC_0 r^{-\frac{p}{2}}\lambda^{\sigma_0(p)p}\left(\max_{1\leq l\leq N(r)}\left\{\|f\|_{L^2(B_{2r}(x_l))}^{p-2}\right\}\right)\|f\|_{L^2(\IS^2)}^2 +C_p\|(T_{\lambda,r}-\text{Id})f\|_{L^p(\IS^2)}^p. \end{eqnarray*} Hence, one finds \begin{lemm}\label{l:step-lemma} Let $K$ be a compact subset of $\IS^2$ and let $(B_r(x_l))_{l=1,\ldots, N(r)}$ be a cover of $K$ with the above properties.
Then, one has \begin{equation}\label{e:lpbound} \|f\|_{L^p(K)}\leq C_p'\left(r^{-\frac{1}{2}}\lambda^{\sigma_0(p)}\left(\max_{1\leq l\leq N(r)}\left\{\|f\|_{L^2(B_{2r}(x_l))}^{1-\frac{2}{p}}\right\}\right)\|f\|_{L^2(\IS^2)}^{\frac{2}{p}} +\|(T_{\lambda,r}-\text{Id})f\|_{L^p(\IS^2)}\right). \end{equation}\end{lemm} This upper bound is valid uniformly in the range $\lambda\geq 1$ and $ \lambda^{-1}\leq r\leq\frac{\pi}{2}.$ Similarly, in the case of the $L^{\infty}$ norm, we would get \begin{equation}\label{e:linftybound}\|f\|_{L^\infty(K)}\leq Cr^{-\frac{1}{2}}\lambda^{\frac{1}{2}}\left(\max_{1\leq l\leq N(r)}\left\{\|f\|_{L^2(B_{2r}(x_l))}\right\}\right)+\|(T_{\lambda,r}-\text{Id})f\|_{L^\infty(\IS^2)}. \end{equation} Note that, so far, we did not use the eigenvalue equation~\eqref{e:quasimode-explicit}, and these bounds are valid for any $f$ in $L^2(\IS^2)$. We will now specify these results in the case where $f=\psi_\lambda$. We begin with the remainder term: \begin{prop}\label{p:remainder} Let $2<p\leq\infty$ and let $0< \beta<1$. Then, there exists a constant $C>0$ such that, for any solution $\psi_\lambda$ to~\eqref{e:schrodinger} with $\lambda\geq 1$ and for any $\lambda^{-\beta}\leq r\leq\frac{\pi}{2}$, one has $$\left\|\left(T_{\lambda,r}-\text{Id}\right)\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}\leq C(r\lambda)^{\sigma_0(p)} \|\psi_\lambda\|_{L^2(\mathbb{S}^2)}.$$ \end{prop} Compared with Lemma~\ref{l:step-lemma}, which already used Sogge's local $L^p$-estimate, this proposition is the additional ingredient we need in order to take into account the terms coming from the potential $V$.
Combining this proposition with our estimates~\eqref{e:lpbound} and~\eqref{e:linftybound} on $\|f\|_{L^p(K)}$, we find that, for $\lambda\geq 1$, $\lambda^{-\beta}\leq r\leq \frac{\pi}{2}$ (with $\beta<1$), for any $2< p\leq +\infty$ and for any $L^2$-normalized solution $\psi_\lambda$ to~\eqref{e:schrodinger}, \begin{equation}\label{e:localized-Lp} \|\psi_{\lambda}\|_{L^p(K)}\leq C_p \left(r^{-\frac{1}{2}}\lambda^{\sigma_0(p)}\max_{1\leq l\leq N(r)}\left\{\|\psi_\lambda\|_{L^2(B_{2r}(x_l))}^{1-\frac{2}{p}}\right\}+(r\lambda)^{\sigma_0(p)}\right). \end{equation} The involved constants $C_p>0$ depend only on $V$, $K$, $\rho$, $\beta$ and $p$. Hence, as in~\cite{HR16, So16}, we have reduced the problem of estimating the $L^p$ norm of Schr\"odinger eigenfunctions to determining bounds on $L^2$-localized norms, \begin{equation}\label{e:smallball}\int_{B_{2r}(x_l)}|\psi_\lambda(x)|^2d\upsilon_{g_0}(x),\end{equation} as $\lambda\rightarrow +\infty$ with $r$ verifying $\lambda^{-\beta}\leq r\leq \frac{\pi}{2}$. In particular, if, for some $0<\alpha\leq 1$, we were able to bound~\eqref{e:smallball} uniformly (in terms of $\lambda$) by $Cr^{1+\alpha}$, then we would be able to get an improved upper bound inside $K$ of the form $$\|\psi_{\lambda}\|_{L^p(K)}\leq C_{p,K}\left(r^{\frac{\alpha}{2}-\frac{1+\alpha}{p}}\lambda^{\sigma_0(p)}+ (r\lambda)^{\sigma_0(p)}\right),$$ in the range \begin{equation}\label{e:range-p}\frac{\alpha}{2}-\frac{1+\alpha}{p}>0\quad\Longleftrightarrow\quad p>2\left(1+\frac{1}{\alpha}\right).\end{equation} For $\alpha=\frac{1}{2}$, this corresponds exactly to the range $p>6$. However, as explained in~\cite[\S 4]{So16}, one cannot expect such improved bounds on the sphere when $V\equiv 0$ because of the example of the spherical harmonics. In Section~\ref{s:invariance}, we shall see how to get \emph{locally} improved bounds on~\eqref{e:smallball} when $V$ does not identically vanish. Before turning to this question, we give the proof of Proposition~\ref{p:remainder}.
\begin{proof}The ideas of the proof are standard (see e.g.~\cite{So16}) and we detail them for the sake of completeness. Considering a solution to~\eqref{e:quasimode-explicit} and letting $2\leq p\leq +\infty$, one has \begin{eqnarray*}\left\|\left(T_{\lambda,r}-\text{Id}\right)\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}&\leq&\sum_{j\in\mathbb{Z}_+}\left\|E_j\left(T_{\lambda,r}-\text{Id}\right)E_j\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}\\ &\leq &\sum_{j\in\mathbb{Z}_+}\left(\left|\rho(r(\lambda-\lambda_j))-1\right|+\left|\rho(r(\lambda+\lambda_j))\right|\right)\left\|E_j(\psi_\lambda)\right\|_{L^p(\mathbb{S}^2)}. \end{eqnarray*} As $\rho$ belongs to the Schwartz class, we find using Sogge's estimate~\eqref{e:Lp-sogge} that, for every $N\geq 1$, there exists $C_N>0$ such that, for $\lambda\geq 1$ and $r\geq\lambda^{-\beta}$, $$\sum_{j\in\mathbb{Z}_+}\left|\rho(r(\lambda+\lambda_j))\right|\left\|E_j(\psi_\lambda)\right\|_{L^p(\mathbb{S}^2)}\leq C_N(1+r\lambda)^{-N}\|\psi_\lambda\|_{L^2(\mathbb{S}^2)}.$$ Using Sogge's estimate one more time, we deduce that \begin{equation}\label{e:remainder}\left\|\left(T_{\lambda,r}-\text{Id}\right)\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}\leq\sum_{j\in\mathbb{Z}_+}\left|\rho(r(\lambda-\lambda_j))-1\right|\lambda_j^{\sigma_0(p)}\left\|E_j(\psi_\lambda)\right\|_{L^2(\mathbb{S}^2)}+C_N(1+r\lambda)^{-N}\|\psi_\lambda\|_{L^2(\mathbb{S}^2)}.\end{equation} We now fix some $\delta$ satisfying $r\leq\delta\leq r\lambda$ and we split the sum over $j\in\mathbb{Z}_+$ into two parts. On the one hand, we consider the $j$ such that $|\lambda-\lambda_j|\leq \delta/r$ and on the other hand, the integers such that $|\lambda-\lambda_j|> \delta/r$. Recall that $\lambda_j^2=j(j+1)$.
Hence, the number of terms in the first sum is $\mathcal{O}(\delta/r)$ and, since $\rho(0)=1$ and $\rho'$ is bounded, each of these terms is at most $C\delta\lambda^{\sigma_0(p)}\|\psi_\lambda\|_{L^2(\mathbb{S}^2)}$ (using also that $\lambda_j\leq 2\lambda$ when $|\lambda-\lambda_j|\leq\delta/r\leq\lambda$), so that one is left with \begin{eqnarray*} \left\|\left(T_{\lambda,r}-\text{Id}\right)\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}&\leq&\sum_{j\in\mathbb{Z}_+: |\lambda-\lambda_j|> \delta/r}\left|\rho(r(\lambda-\lambda_j))-1\right|\lambda_j^{\sigma_0(p)}\left\|E_j(\psi_\lambda)\right\|_{L^2(\mathbb{S}^2)}\\ &+& \left(C\frac{\delta^2}{r}\lambda^{\sigma_0(p)}+C_N(1+r\lambda)^{-N}\right)\|\psi_\lambda\|_{L^2(\mathbb{S}^2)}. \end{eqnarray*} For the remaining sum, we can finally make use of the eigenvalue equation~\eqref{e:quasimode-explicit}. It implies the existence of some constant $C_{\rho,V}>0$ depending only on $\rho$ and $V$ such that $$\sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\left|\rho(r(\lambda-\lambda_j))-1\right|\lambda_j^{\sigma_0(p)}\left\|E_j(\psi_\lambda)\right\|_{L^2(\mathbb{S}^2)}\leq C_{\rho,V}\sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\frac{\lambda_j^{\sigma_0(p)}}{|\lambda^2-\lambda_j^2|}\|\psi_{\lambda}\|_{L^2(\IS^2)}.$$ As $\sigma_0(p)$ varies between $0$ (for $p=2$) and $1/2$ (for $p=\infty$), this last quantity is finite and it remains to evaluate \begin{equation}\label{e:bad-remainder}\sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\frac{\lambda_j^{\sigma_0(p)}}{|\lambda^2-\lambda_j^2|}\end{equation} in terms of $\delta$, $r$, $\lambda$ and $p$. We now recall that, for $X>0$, one has $(1+X)^{\sigma_0(p)}\leq 1+X^{\sigma_0(p)}$ (as $\sigma_0(p)\leq1/2$).
Hence, one has \begin{eqnarray*}\sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\frac{\lambda_j^{\sigma_0(p)}}{|\lambda^2-\lambda_j^2|}&\leq& \sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\frac{|\lambda-\lambda_j|^{\sigma_0(p)}}{|\lambda^2-\lambda_j^2|}+\sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\frac{\lambda^{\sigma_0(p)}}{|\lambda^2-\lambda_j^2|}\\ &\leq& 2\sum_{j\in\mathbb{Z}_+:|\lambda-\lambda_j|> \delta/r}\frac{\lambda^{-1+\frac{3}{2}\sigma_0(p)}}{|\lambda-\lambda_j|^{1+\frac{\sigma_0(p)}{2}}}\\ &\leq & 2\lambda^{-\frac{1}{4}}\sum_{j\in\mathbb{Z}_+:|\lambda-\sqrt{j(j+1)}|> \delta/r}\frac{1}{|\lambda-\sqrt{j(j+1)}|^{1+\frac{\sigma_0(p)}{2}}}\\ &\leq&C\lambda^{-\frac{1}{4}}\sum_{j\in\mathbb{Z}_+^*}j^{-1-\frac{\sigma_0(p)}{2}}. \end{eqnarray*} In summary, if we suppose that $r\geq\lambda^{-\beta}$ (for some $\beta<1$), we obtain the following upper bound $$\left\|\left(T_{\lambda,r}-\text{Id}\right)\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}\leq C\left(\frac{\delta^2}{r}\lambda^{\sigma_0(p)}+\lambda^{-\frac{1}{4}}\right) \|\psi_\lambda\|_{L^2(\mathbb{S}^2)},$$ where $C>0$ depends on $\rho$, $V$, $\beta$ and $p$. Recall that we supposed $r\leq \delta\leq r\lambda$. Hence, as $0\leq\sigma_0(p)\leq \frac{1}{2}$, we can set $\delta=r^{\frac{1+\sigma_0(p)}{2}}$ provided $r\geq \lambda^{-\frac{2}{1-\sigma_0(p)}}$, which is ensured by our assumption $r\geq\lambda^{-\beta}$. Implementing this, we obtain the existence of a constant $C_{\rho,V,\beta,p}>0$ such that $$\left\|\left(T_{\lambda,r}-\text{Id}\right)\psi_\lambda\right\|_{L^p(\mathbb{S}^2)}\leq C_{\rho,V,\beta,p}(r\lambda)^{\sigma_0(p)} \|\psi_\lambda\|_{L^2(\mathbb{S}^2)},$$ as long as $r\geq \lambda^{-\beta}$.
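Indeed, with this choice of $\delta$, one has
$$\frac{\delta^2}{r}\lambda^{\sigma_0(p)}=r^{\sigma_0(p)}\lambda^{\sigma_0(p)}=(r\lambda)^{\sigma_0(p)}\quad\text{and}\quad\lambda^{-\frac{1}{4}}\leq 1\leq(r\lambda)^{\sigma_0(p)},$$
the last inequality following from $r\lambda\geq\lambda^{1-\beta}\geq 1$ and $\sigma_0(p)\geq 0$.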
\end{proof} \begin{rema}\label{r:Lp-semiclassical} In view of applications of our method to semiclassical problems, it is worth noting that the above arguments work as well for solutions to \begin{equation}\label{e:schrodinger-pert-lambda}-\Delta_{g_0}\psi_\lambda+\beta_\lambda V\psi_\lambda =\lambda^2\psi_\lambda,\quad\|\psi_{\lambda}\|_{L^2(\IS^2)}=1,\end{equation} where $(\beta_\lambda)_{\lambda}$ is a given nonnegative sequence that may tend to $+\infty$. In that case, the upper bound~\eqref{e:localized-Lp} becomes, for every $\epsilon>0$, \begin{equation}\label{e:localized-Lp-bis} \|\psi_{\lambda}\|_{L^p(K)}\leq C_{p,\epsilon} \left(r^{-\frac{1}{2}}\lambda^{\sigma_0(p)}\max_{1\leq l\leq N(r)}\left\{\|\psi_\lambda\|_{L^2(B_{2r}(x_l))}^{1-\frac{2}{p}}\right\}+(r\lambda)^{\sigma_0(p)}+\beta_\lambda \lambda^{-1+\epsilon}\lambda^{\sigma_0(p)}\right). \end{equation} The calculation is indeed exactly the same except for the upper bound on the size of the remainder in~\eqref{e:bad-remainder}, which needs to be adapted. Hence, we potentially have improvements as long as\footnote{This could probably be slightly improved to replace the $\lambda^{\epsilon}$ by some logarithmic factor but we did not try to optimize that.} $\beta_\lambda \lambda^{-1+\epsilon}\rightarrow 0$. \end{rema} \section{Reduction to $L^2$ localized estimates for $p<6$ via Kakeya-Nikodym bounds} \label{s:kakeya} We now deal with the range $2<p<6$, which can also be reduced to estimating similar quantities. For such $p$, we can make use of the results of Blair and Sogge relating the growth of $L^p$ norms for small $p$ to Kakeya-Nikodym averages. We let $0\leq \chi\leq 1$ be a smooth cutoff function which is equal to $1$ on $[-1,1]$ and to $0$ outside $[-2,2]$.
Given $x\in \IS^2$, we denote by $\exp_{x}$ the exponential map induced by the metric $g_0$ and we set $$\chi_{x,r}(y):=\chi\left(\frac{\|\exp_{x}^{-1}(y)\|}{r}\right)\in\mathcal{C}^{\infty}(\IS^2).$$ This function is equal to $1$ on $B_r(x)$ and to $0$ outside $B_{2r}(x)$. We fix some $r_0>0$ and some $x_0\in\IS^2$. For any \emph{normalized} solution to~\eqref{e:schrodinger}, one has $$-\lambda^{-2}\Delta_{g_0}\psi_\lambda-\psi_{\lambda}=-\lambda^{-2}V\psi_{\lambda}.$$ In particular, one can verify, using commutation rules for semiclassical pseudodifferential operators~\cite[\S~4 and~14]{Zw12}, that \begin{equation}\label{e:KN-quasimode}(-\lambda^{-2}\Delta_{g_0}-1)^k\left(\chi_{x_0,r_0}\psi_\lambda\right)=\mathcal{O}(\lambda^{-k}),\quad k=1,2.\end{equation} These two properties are exactly the assumptions needed to apply~\cite[Th.~1.1]{BlSo17} in dimension $2$. In order to formulate this result, we denote by $\tilde{G}(\IS^2)$ the set of unit length geodesic segments in $\IS^2$ and, for every $r>0$ and for every $\gamma\in \tilde{G}(\IS^2)$, $$\mathcal{T}_r(\gamma):=\left\{x\in\IS^2:\ d_{g_0}(x,\gamma)\leq r\right\}.$$ With these conventions, the main result from~\cite{BlSo17} applied to $\chi_{x_0,r_0}\psi_\lambda$ tells us that, for $4<p<6$, \begin{equation}\label{e:KN-no-log}\left\|\psi_{\lambda}\right\|_{L^p(B_{r_0}(x_0))}\leq C_p\lambda^{\sigma_0(p)}\left(\sup_{\gamma\in \tilde{G}(\IS^2)}\int_{B_{2r_0}(x_0)\cap \mathcal{T}_{\lambda^{-\frac{1}{2}}}(\gamma)}|\psi_\lambda(x)|^2d\upsilon_{g_0}(x)\right)^{\frac{1}{2}\left(\frac{6}{p}-1\right)},\end{equation} and \begin{equation}\label{e:KN-log}\left\|\psi_{\lambda}\right\|_{L^4(B_{r_0}(x_0))}\leq C_p(\log\lambda)\lambda^{\frac{1}{8}}\left(\sup_{\gamma\in \tilde{G}(\IS^2)}\int_{B_{2r_0}(x_0)\cap \mathcal{T}_{\lambda^{-\frac{1}{2}}}(\gamma)}|\psi_\lambda(x)|^2d\upsilon_{g_0}(x)\right)^{\frac{1}{4}},\end{equation} where the constants $C_p>0$ depend only on $p$.
These kinds of upper bounds are referred to as Kakeya-Nikodym bounds. They were initially introduced by Bourgain~\cite{Bo09} and further developed by Sogge~\cite{So11, So17} and Blair-Sogge~\cite{BlSo15, BlSo17, BlSo18, BlSo19}. One of the main objectives is to reduce (at least for small $p$) improvements on $L^p$-estimates to $L^2$-estimates on tubular neighborhoods of geodesics. This strategy culminated in~\cite{BlSo19} where logarithmic improvements on Sogge's $L^p$ estimates were obtained on nonpositively curved manifolds for every $p\leq p_c$. As we shall see below, this strategy remains efficient for integrable geometries where we can also analyze the $L^2$-mass near geodesics via averaging methods. Thanks to these results, it is sufficient to derive nontrivial upper bounds on the Kakeya-Nikodym averages $$\int_{B_{2r_0}(x_0)\cap \mathcal{T}_{\lambda^{-\frac{1}{2}}}(\gamma)}|\psi_\lambda(x)|^2d\upsilon_{g_0}(x)$$ in order to locally improve Sogge's upper bounds~\eqref{e:sogge} in the range $4<p<6$. By interpolation, this automatically yields an improvement for $2<p<4$. Finally, we can relate these quantities to the ones appearing in~\eqref{e:smallball}. Indeed, we can pick $0<\beta<1/2$ and we can cover $B_{2r_0}(x_0)\cap \mathcal{T}_{\lambda^{-\frac{1}{2}}}(\gamma)$ by a family of $2r_0r^{-1}$ balls of radius $r\geq \lambda^{-\beta}$ centered at points of $\gamma\cap B_{2r_0}(x_0)$. Hence, one has \begin{equation}\label{e:KN-smallball} \int_{B_{2r_0}(x_0)\cap \mathcal{T}_{\lambda^{-\frac{1}{2}}}(\gamma)}|\psi_\lambda(x)|^2d\upsilon_{g_0}(x)\leq 4r_0r^{-1}\sup_{x\in\gamma\cap B_{2r_0}(x_0)}\left\{\int_{B_{r}(x)}|\psi_\lambda(y)|^2d\upsilon_{g_0}(y)\right\}, \end{equation} which involves exactly the quantities that appeared in Section~\ref{s:Lp}. Hence, in both cases, we are reduced to estimating these localized $L^2$-norms. \begin{rema}\label{r:KN-Lp} As in Remark~\ref{r:Lp-semiclassical}, we can consider solutions to~\eqref{e:schrodinger-pert-lambda}.
One can verify that the assumption~\eqref{e:KN-quasimode} still holds as long as $0\leq \beta_\lambda\leq\lambda$. Hence,~\eqref{e:KN-no-log} and~\eqref{e:KN-log} remain true in that generalized framework. \end{rema} \begin{rema} As we will only consider balls of radius $r\gg\lambda^{-\frac{1}{2}}$, the logarithmic factor appearing in~\eqref{e:KN-log} could probably be removed following~\cite{BlSo15}. \end{rema} \section{$L^2$-localized estimates using invariance by the classical flows}\label{s:invariance} Thanks to~\eqref{e:localized-Lp},~\eqref{e:KN-no-log},~\eqref{e:KN-log} and~\eqref{e:KN-smallball}, we know that proving Theorem~\ref{t:maintheo} amounts to controlling uniformly the following quantity $$M_{B_{r_0}(x_0), \alpha,r}(\psi_\lambda):=\sup\left\{\frac{1}{r^{1+\alpha}}\int_{B_{r}(x)}|\psi_\lambda(y)|^2d\upsilon_{g_0}(y):x\in B_{r_0}(x_0)\right\},$$ with $0<\alpha\leq 1$ and with $\lambda^{-\beta}\leq r$ going to $0$ as $\lambda\rightarrow +\infty$. The following proposition solves this problem and is the main new technical result of the article: \begin{prop}\label{p:main-prop} Let $x_0$ be a point in $\IS^2$ verifying the assumption of Theorem~\ref{t:maintheo}. Then, there exist $r_0>0$ and $C_0>0$ such that, for any $(\psi_\lambda,\lambda)$ solution to~\eqref{e:schrodinger}, $$\lambda^{-\frac{2}{9}}\leq r\leq \frac{\pi}{2}\quad\Longrightarrow\quad M_{B_{r_0}(x_0),\frac{1}{2},r}\left(\psi_\lambda\right)\leq C_0 \|\psi_\lambda\|_{L^2(\IS^2)}^2.$$ \end{prop} \begin{rema} The exponent in $\lambda^{-2/9}$ arises as follows in the argument. On the one hand, we use semiclassical arguments for exotic classes of symbols (with losses of $\lambda^{\beta}$ on the derivatives) and this yields remainder terms of size $\ml{O}(\lambda^{-1+3\beta})$. This semiclassical part of the argument is based on Egorov and composition theorems and the remainders cannot be drastically improved. See for instance~\eqref{e:final-bound-ball}.
On the other hand, we need to estimate classical averages by some Hamiltonian flow and this is where we use in an essential way our assumptions on the potential $V$. Without these assumptions, we would get a crude upper bound $\ml{O}(r)$ and these hypotheses allow us to upgrade this bound to $\ml{O}(r^{1+\frac{1}{2}})$. See for instance~\eqref{e:bound-tangent-case}. Here the fact that $\ml{R}(V)|_{\Gamma_{x_0}}$ is a Morse function implies that the tangencies have order at most $1$. For higher order tangencies, we would probably have obtained a slightly worse bound $\ml{O}(r^{1+\frac{1}{k}})$ (for some large enough $k$) at the expense of some extra tedious work. In the end, we take $r$ such that $\ml{O}(r^{1+\frac{1}{2}})$ and $\ml{O}(\lambda^{-1+3\beta})$ are of the same order, which yields the exponent $2/9$. \end{rema} Implementing this bound in~\eqref{e:localized-Lp} and in~\eqref{e:KN-no-log}, we find that, for $4<p\leq\infty$ and for $\lambda>0$, $$\left\|\psi_{\lambda}\right\|_{L^p(B_{r_0}(x_0))}\leq C_{p,x_0}\lambda^{\sigma_0(p)-\frac{1}{18}\left|1-\frac{6}{p}\right|}\|\psi_\lambda\|_{L^2(\IS^2)}.$$ Finally, for $p=4$, we derive from~\eqref{e:KN-log} that, for $\lambda>1$, $$\left\|\psi_{\lambda}\right\|_{L^4(B_{r_0}(x_0))}\leq C_{4,x_0}(\log\lambda) \lambda^{\frac{1}{8}-\frac{1}{36}}\|\psi_\lambda\|_{L^2(\IS^2)},$$ which also yields the result for $2<p\leq 4$ by interpolation. Hence, in order to prove Theorem~\ref{t:maintheo}, we are left with the proof of Proposition~\ref{p:main-prop} which will be the object of the rest of the article. Coming back to Proposition~\ref{p:main-prop}, it is in fact sufficient to get a uniform upper bound on $$\tilde{M}_{B_{r_0}(x_0),\alpha,r}(\psi_\lambda):=\sup\left\{\frac{1}{r^{1+\alpha}}\int_{\IS^2}\chi_{x,r}(y)|\psi_\lambda(y)|^2d\upsilon_{g_0}(y):x\in B_{r_0}(x_0)\right\},$$ where we used the conventions of \S\ref{s:kakeya} for the function $\chi_{x,r}$.
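For the reader's convenience, let us record how the exponent $\sigma_0(p)-\frac{1}{18}\left|1-\frac{6}{p}\right|$ above follows from Proposition~\ref{p:main-prop} when $6<p\leq\infty$. Taking $r=\lambda^{-\frac{2}{9}}$ in~\eqref{e:localized-Lp} and bounding $\|\psi_\lambda\|_{L^2(B_{2r}(x_l))}^2$ by $Cr^{\frac{3}{2}}$, the first term in~\eqref{e:localized-Lp} becomes
$$r^{-\frac{1}{2}}\lambda^{\sigma_0(p)}r^{\frac{3}{4}\left(1-\frac{2}{p}\right)}=\lambda^{\sigma_0(p)}r^{\frac{1}{4}-\frac{3}{2p}}=\lambda^{\sigma_0(p)-\frac{1}{18}\left(1-\frac{6}{p}\right)},$$
while the second term equals $(r\lambda)^{\sigma_0(p)}=\lambda^{\frac{7}{9}\sigma_0(p)}$. As $\sigma_0(p)=\frac{1}{2}-\frac{2}{p}$ in this range, one checks that $\frac{7}{9}\sigma_0(p)\leq\sigma_0(p)-\frac{1}{18}\left(1-\frac{6}{p}\right)$, so the first term dominates.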
In order to get this uniform control, we will make use of the invariance properties of semiclassical Wigner distributions that we recently obtained with Maci\`a~\cite{MR16, MR19}. In order to make use of semiclassical methods~\cite{Zw12}, we set $h=\lambda^{-1}$ and $u_h=\psi_{\lambda}$. Hence, one has \begin{equation}\label{e:semiclassical-schrodinger} -h^2\Delta_{g_0}u_h+h^2 Vu_h=u_h,\quad\|u_h\|_{L^2(\IS^2)}=1. \end{equation} Let now $x$ be a point in $B_{r_0}(x_0)$ and $h^{\beta}\leq r\leq \frac{\pi}{4}$. In terms of pseudodifferential operators on $\IS^2$~\cite[\S14.2]{Zw12}, the quantity we are interested in can be rewritten as $$\int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)=\left\langle \operatorname{Op}_h\left(\chi_{x,r}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)},$$ where $\operatorname{Op}_h$ is a semiclassical quantization~\cite[\S14.2.3]{Zw12}. Note that, in order to have $\chi_{x,r}$ amenable to semiclassical pseudodifferential calculus~\cite[\S4.4.1]{Zw12} (see also~\cite[\S2.2, App.A]{DJN19} for the case of manifolds), we need to impose that \begin{equation}\label{e:admissible}r\geq h^{\beta}\quad\text{and}\quad 0\leq\beta<\frac{1}{2}.\end{equation} We will now revisit the arguments of~\cite{MR16, MR19} in that specific framework and show how they yield the expected result. \subsection{Spectral cutoff}\label{ss:cutoff} We fix some smooth cutoff function $0\leq\chi_0\leq 1$ which is equal to $1$ on the interval $[1/2,2]$ and to $0$ outside $[1/4,4]$. 
Thanks to~\eqref{e:semiclassical-schrodinger}, one has $$\left\langle \operatorname{Op}_h\left(\chi_{x,r}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\left\langle \operatorname{Op}_h\left(\chi_{x,r}\right)\chi_0(-h^2\Delta_{g_0}+h^2V)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}.$$ According to~\cite[Th.~14.9]{Zw12}, $\chi_0(-h^2\Delta_{g_0}+h^2V)$ is a semiclassical pseudodifferential operator in the class $\Psi^{-\infty}(\IS^2)$ with principal symbol equal to $\chi_0(\|\eta\|_{g_0^*(y)}^2).$ Hence, the composition rule for pseudodifferential operators~\cite[Th.~4.18 and~14.1]{Zw12} implies that $$\left\langle \operatorname{Op}_h\left(\chi_{x,r}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\left\langle \operatorname{Op}_h\left(\chi_{x,r}(y)\chi_0(\|\eta\|^2)\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}+\mathcal{O}(h^{1-2\beta}),$$ where the constant in the remainder is uniform for $x\in \IS^2$ and $r\geq h^\beta$. In the following, we set $$a_{x,r}(y,\eta):=\chi_{x,r}(y)\chi_0(\|\eta\|_{g_0^*(y)}^2).$$ \subsection{Applying the evolution by the free Schr\"odinger flow}\label{ss:egorov} We write \begin{equation}\label{e:decompose-Laplace} -\Delta_{g_0}= A^2-\frac{1}{4}, \end{equation} where $A$ is a selfadjoint pseudodifferential operator of order $1$ with principal symbol $\|\eta\|_{g_0^*(y)}$ and satisfying \begin{equation}\label{e:period-quantum} e^{2i\pi A}=-\text{Id}. 
\end{equation} Equivalently, one has $A=\sqrt{\frac{1}{4}-\Delta_{g_0}}.$ The eigenvalue equation~\eqref{e:semiclassical-schrodinger} can be rewritten as $$\left(A^2-\frac{1}{h^2}\right)u_h=\left(\frac{1}{4}-V\right)u_h\quad\Longrightarrow\quad \left(A-\frac{1}{h}\right)u_h=\mathcal{O}_{L^2}(h).$$ In particular, one has \begin{equation}\label{e:small-quasimode}e^{is \left(A-\frac{1}{h}\right)}u_h=u_h+\int_0^se^{i\tau \left(A-\frac{1}{h}\right)}\left(A-\frac{1}{h}\right)u_hd\tau=u_h+\mathcal{O}_{L^2}(|s|h).\end{equation} This leads to \begin{equation}\label{e:av-free-schr} \int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)=\left\langle \left(\frac{1}{2\pi}\int_0^{2\pi}e^{isA}\operatorname{Op}_h\left(a_{x,r}\right)e^{-isA} ds\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}+\mathcal{O}(h^{1-2\beta}). \end{equation} In the following, given $a$ in $\ml{C}^{\infty}_c(T^*\IS^2\setminus \underline{0})$, we set, by analogy with the Radon transform, $$\ml{R}_{\text{qu}}(\operatorname{Op}_h(a)):=\frac{1}{2\pi}\int_0^{2\pi}e^{is A}\operatorname{Op}_h(a)e^{-isA}ds.$$ According to Remark~\ref{r:egorov} below, the Egorov theorem allows us to relate the operator $\ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r}))$ to the classical average by the geodesic flow: \begin{equation}\label{e:egorov} \ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r}))= \operatorname{Op}_h\left(\frac{1}{2\pi}\int_0^{2\pi} a_{x,r}\circ \varphi_0^{t}dt\right)+\mathcal{O}_{L^2\rightarrow L^2}(h^{1-2\beta}), \end{equation} where the constant in the remainder is uniform for $x\in\IS^2$ and $r\geq h^{\beta}$ and where $\varphi_0^t$ is the Hamiltonian flow associated with the Hamiltonian function\footnote{This is just a reparametrization of the standard geodesic flow.} $H_0(y,\eta):=\|\eta\|_{g_0^*(y)}.$ Given $a$ in $\ml{C}^{\infty}_c(T^*\IS^2\setminus \underline{0})$, we set $$\ml{R}_{\text{cl}}(a):=\frac{1}{2\pi}\int_0^{2\pi} a\circ \varphi_0^{t}dt.$$ \begin{rema}\label{r:egorov} Let us briefly recall how to 
prove~\eqref{e:egorov}. This is standard~\cite[App.~A.3]{DJN19} and we just need to pay attention to our class of symbols. First, we write, for every $s,t\in[0,2\pi]$, $$\frac{d}{ds}\left(e^{is A}\operatorname{Op}_h(a_{x,r}\circ\varphi_0^{t-s})e^{-isA}\right)=e^{is A}\left(\frac{i}{h}\left[hA,\operatorname{Op}_h(a_{x,r}\circ\varphi_0^{t-s})\right]-\operatorname{Op}_h\left(\{H_0,a_{x,r}\circ\varphi_0^{t-s}\}\right)\right)e^{-isA}.$$ We now let $\chi_1$ be a smooth function which is equal to $1$ in a neighborhood of $[1/4,4]$ and to $0$ outside $[1/8,8]$. In particular, $\chi_1(H_0^2)$ is equal to $1$ on the support of $a_{x,r}$. Combining this with the composition rules for pseudodifferential operators with exotic symbols on manifolds~\cite[Lemma~A.6]{DJN19}, we know that, for every $\tau\in [0,2\pi]$, \begin{eqnarray*}\operatorname{Op}_h(a_{x,r}\circ\varphi_0^\tau)&=&\operatorname{Op}_h(a_{x,r}\circ\varphi_0^\tau)\operatorname{Op}_{h}(\chi_1(H_0^2))+\mathcal{O}_{L^2\rightarrow L^2}(h^2)\\ &=&\operatorname{Op}_{h}(\chi_1(H_0^2))\operatorname{Op}_h(a_{x,r}\circ\varphi_0^{\tau})+\mathcal{O}_{L^2\rightarrow L^2}(h^2). \end{eqnarray*} Using the composition rules for pseudodifferential operators, we can also remark that $$hA\operatorname{Op}_{h}(\chi_1(H_0^2))=\operatorname{Op}_h(\chi_1(H_0^2))hA+h\operatorname{Op}_h(\rho)+\mathcal{O}_{L^2\rightarrow L^2}(h^2),$$ where $\rho$ is a smooth compactly supported function that depends multilinearly on the derivatives of order $\geq 1$ of the function $\chi_1(H_0^2)$. Thus its support does not intersect the support of $a_{x,r}$. In particular, using the composition rule~\cite[Lemma~A.6]{DJN19} one more time and the support properties of $a_{x,r}$, one has $\operatorname{Op}_h(a_{x,r})\operatorname{Op}_h(\rho)=\mathcal{O}_{L^2\rightarrow L^2}(h^2)$. 
Hence, after integration over the interval $[0,2\pi]$ and applying the Calder\'on-Vaillancourt Theorem, one finds \begin{eqnarray*} \ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r})) &= &\operatorname{Op}_h\left(\frac{1}{2\pi}\int_0^{2\pi} a_{x,r}\circ \varphi_0^{t}dt\right) +\mathcal{O}_{L^2\rightarrow L^2}(h)\\ &+&\frac{1}{2\pi }\int_0^{2\pi}\int_0^t\left(\frac{i}{h}\left[hA\operatorname{Op}_{h}(\chi_1(H_0^2)),\operatorname{Op}_h(a_{x,r}\circ\varphi_0^{t-s})\right]\right) dsdt\\ &-&\frac{1}{2\pi }\int_0^{2\pi}\int_0^t\operatorname{Op}_h\left(\{H_0,a_{x,r}\circ\varphi_0^{t-s}\}\right) dsdt. \end{eqnarray*} As all our pseudodifferential operators are microlocally supported in a compact\footnote{This was the main reason for inserting the pseudodifferential cutoff $\operatorname{Op}_{h}(\chi_1(H_0^2))$.} subset of $T^*\IS^2$, we can again apply the composition rule for exotic symbols on a compact manifold as stated in~\cite[Lemma~A.6]{DJN19}. Thus, we can conclude that~\eqref{e:egorov} holds. Inspecting the argument carefully, we can in fact conclude the following: \begin{lemm} With the above conventions, one can find $\tilde{a}_{x,r}\in S^{\text{comp}}_{\beta}(T^*\IS^2)$ (as defined in~\cite[\S 2.2]{DJN19}) such that \begin{equation}\label{e:egorov2}\ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r}))=\operatorname{Op}_h(\tilde{a}_{x,r})+\mathcal{O}_{L^2\rightarrow L^2}(h^2),\end{equation} where the constant in the remainder is uniform for $x\in\IS^2$ and $r\geq h^{\beta}$. Moreover, $\tilde{a}_{x,r}$ is equal to $\mathcal{R}_{\text{cl}}(a_{x,r})$ modulo $h^{1-2\beta}S^{\text{comp}}_{\beta}(T^*\IS^2)$ and its support is contained in the support of $\mathcal{R}_{\text{cl}}(a_{x,r})$. \end{lemm} \end{rema} \begin{rema}\label{r:semiclassical-pert} The arguments used from the beginning of this Section would work equally well for the following semiclassical problem: $$-h^2\Delta_{g_0}u_h+\varepsilon_h Vu_h=u_h,\quad\|u_h\|_{L^2(\IS^2)}=1,$$ where $\varepsilon_h\rightarrow 0$ fast enough. 
More precisely, the above proofs only require $h^{-1}\varepsilon_h\rightarrow 0$ in order to have a small remainder in~\eqref{e:small-quasimode}. In this case, this would yield the bound $$\int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)=\left\langle \left(\frac{1}{2\pi}\int_0^{2\pi}e^{isA}\operatorname{Op}_h\left(a_{x,r}\right)e^{-isA} ds\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}+\mathcal{O}(h^{1-2\beta})+\ml{O}(h^{-1}\varepsilon_h).$$ The argument from~\cite{MR16} would allow us to remove this extra remainder $\ml{O}(h^{-1}\varepsilon_h)$ and to handle the case $\varepsilon_h\rightarrow 0^+$. Yet, as this kind of condition on the size of the potential already appeared in Remarks~\ref{r:Lp-semiclassical} and~\ref{r:KN-Lp}, we do not pursue this here. \end{rema} \subsection{Weinstein averaging method} Following Weinstein~\cite{W77}, one can use~\eqref{e:period-quantum} to obtain the following exact commutation relation: $$\left[\ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r})),A\right]=0.$$ In particular, thanks to~\eqref{e:decompose-Laplace}, one has \begin{equation}\label{e:commute} \left[\ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r})),\Delta_{g_0}\right]=0. 
\end{equation} Using~\eqref{e:semiclassical-schrodinger}, this implies that $$\left\langle \left[V,\ml{R}_{\text{qu}}(\operatorname{Op}_h(a_{x,r}))\right]u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=0.$$ Thanks to~\eqref{e:egorov2}, this can be rewritten as $$\left\langle \left[V,\operatorname{Op}_h(\tilde{a}_{x,r})\right]u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\mathcal{O}(h^2).$$ As in Remark~\ref{r:egorov}, we can insert pseudodifferential cutoffs and we find $$\left\langle \left[V\operatorname{Op}_h(\chi_1(H_0^2)),\operatorname{Op}_h(\tilde{a}_{x,r})\right]u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\mathcal{O}(h^2).$$ Hence, thanks to the composition rule for pseudodifferential operators~\cite[Lemma~A.6]{DJN19} with exotic symbols, we get $$\left\langle \operatorname{Op}_h\left(\left\{V,\mathcal{R}_{\text{cl}}(a_{x,r})\right\}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\ml{O}(h^{1-3\beta}),$$ where the constant in the remainder is uniform for $x\in \IS^2$ and $r\geq h^\beta$. Observe that the extra loss in $\ml{O}(h^{1-3\beta})$ (compared with $\ml{O}(h^{1-2\beta})$) comes from the subprincipal term in $\tilde{a}_{x,r}$. Applying the argument of paragraph~\ref{ss:egorov} one more time, we find that $$\left\langle \operatorname{Op}_h\left(\frac{1}{2\pi}\int_0^{2\pi}\left\{V,\mathcal{R}_{\text{cl}}(a_{x,r})\right\}\circ\varphi_0^tdt\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\ml{O}(h^{1-3\beta}),$$ from which we infer $$\left\langle \operatorname{Op}_h\left(\left\{\ml{R}_{\text{cl}}(V),\mathcal{R}_{\text{cl}}(a_{x,r})\right\}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\ml{O}(h^{1-3\beta}),$$ with the constant in the remainder enjoying the same uniformity property as before. Here $V$ is identified with its pullback on $T^*\IS^2\setminus \underline{0}$ via the canonical projection $\Pi(y,\eta)=y$. Let us now denote by $\varphi^t_{\langle V\rangle}$ the Hamiltonian flow induced by $\ml{R}_{\text{cl}}(V)$. 
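Let us also record why $\ml{R}_{\text{cl}}(V)$ and $H_0$ Poisson commute; this is a one-line consequence of the $2\pi$-periodicity of the flow $\varphi_0^t$:
$$\{H_0,\ml{R}_{\text{cl}}(V)\}=\frac{1}{2\pi}\int_0^{2\pi}\{H_0,V\circ\varphi_0^t\}dt=\frac{1}{2\pi}\int_0^{2\pi}\frac{d}{dt}\left(V\circ\varphi_0^t\right)dt=\frac{V\circ\varphi_0^{2\pi}-V}{2\pi}=0.$$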
As $\ml{R}_{\text{cl}}(V)$ and $H_0$ Poisson commute, one has $\varphi_0^t\circ\varphi_{\langle V\rangle}^s=\varphi^s_{\langle V\rangle}\circ\varphi_0^t$ for every $t$ and $s$ in $\mathbb{R}$. We note that all of the above arguments would work equally well if we replaced $a_{x,r}$ by $a_{x,r}\circ\varphi_{\langle V\rangle}^\tau$, and the remainder would remain uniform in $\tau$ (and in $(x,r)$) provided that $\tau$ stays in a bounded interval. Hence, one has, uniformly for $\tau\in[-\tau_0,\tau_0]$, $x\in \IS^2$ and $r\geq h^{\beta}$, \begin{equation}\label{e:inv-V}\left\langle \operatorname{Op}_h\left(\left\{\ml{R}_{\text{cl}}(V),\mathcal{R}_{\text{cl}}(a_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}\right\}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\ml{O}(h^{1-3\beta}). \end{equation} We integrate this expression between $0$ and $\tau$: $$\left\langle \operatorname{Op}_h\left(\mathcal{R}_{\text{cl}}(a_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}=\left\langle \operatorname{Op}_h\left(\mathcal{R}_{\text{cl}}(a_{x,r})\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}+\ml{O}(h^{1-3\beta}).$$ Combining this with~\eqref{e:av-free-schr}, we find \begin{equation}\label{e:final-average}\int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)=\left\langle \operatorname{Op}_h\left(\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}_{\text{cl}}(a_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}d\tau\right)u_h,u_h\right\rangle_{L^2(\mathbb{S}^2)}+\mathcal{O}(h^{1-3\beta}),\end{equation} where the constant in the remainder is uniform for $x\in\IS^2$ and $r\geq h^{\beta}$. \begin{rema} Weinstein's argument was initially developed not to study eigenfunctions, but rather the distribution of the eigenvalues of $-\Delta_{g_0}+V$ inside each cluster near $\lambda_j^2=j(j+1)$~\cite{W77}. 
This was achieved by showing via this kind of averaging argument that $-\Delta_{g_0}+V$ is conjugated to $-\Delta_{g_0}+\ml{R}_{\text{qu}}(V)$ modulo small error terms. See~\cite{CdV79, Gu78, Gu81, Ze96, Ze97} for further developments on these eigenvalue problems. \end{rema} \subsection{Applying Calder\'on-Vaillancourt Theorem} We are now in a position to apply the Calder\'on-Vaillancourt Theorem~\cite[Th.~5.1]{Zw12} which tells us that $$\left\|\operatorname{Op}_h\left(\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}_{\text{cl}}(a_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}d\tau\right)\right\|_{L^2\rightarrow L^2}\leq C\left\|\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}_{\text{cl}}(a_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}d\tau\right\|_{L^{\infty}(T^*\mathbb{S}^2)}+\mathcal{O}(h^{1-3\beta}),$$ where $C$ is some universal constant and where the constant in the remainder is once more uniform for $x$ in $\IS^2$ and $r\geq h^{\beta}$. Together with~\eqref{e:final-average}, we finally get $$\int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)\leq C\left\|\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}_{\text{cl}}(a_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}d\tau\right\|_{L^{\infty}(T^*\mathbb{S}^2)}+\mathcal{O}(h^{1-3\beta}).$$ From the construction of $a_{x,r}$, one can in fact reduce to the unit cotangent bundle and conclude that the following key lemma holds: \begin{lemm} With the above conventions, one has \begin{equation}\label{e:final-bound-ball}\int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)\leq C\left\|\frac{1}{4\pi\tau_0}\int_{-\tau_0}^{\tau_0}\int_0^{2\pi}\chi_{x,r}\circ\varphi^t_0\circ\varphi_{\langle V\rangle}^{\tau}dtd\tau\right\|_{L^{\infty}(S^*\mathbb{S}^2)}+\mathcal{O}(h^{1-3\beta}),\end{equation} where we identify $\chi_{x,r}$ with its pullback on $S^*\IS^2$ and where the constant in the remainder is uniform for $x$ in $\IS^2$ and $r\geq h^{\beta}$. 
\end{lemm} In order to facilitate the discussion, we shall work on the space of geodesics $G(\mathbb{S}^2)\simeq\IS^2$. With the induced symplectic form on $\IS^2$, $\varphi^{\tau}_{\langle V\rangle}$ can be viewed as the Hamiltonian flow of $\ml{R}(V)$, i.e.\ of $\ml{R}_{\text{cl}}(V)$ viewed as a function on $G(\IS^2)\simeq\IS^2$. Hence, what we are aiming at is an upper bound on $$0\leq \frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\chi_{x,r})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau,$$ when $\gamma\in G(\IS^2)\simeq\IS^2$ and when $r\ll\tau_0$. It is in fact sufficient to find an upper bound on $$\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau,$$ where $\mathbf{1}_{B_{2r}(x)}$ is the characteristic function of the geodesic ball of radius $2r$ centered at $x$. The function $\mathcal{R}(\mathbf{1}_{B_{2r}(x)})$ is supported in a neighborhood of width $4r$ of $\Gamma_x\subset G(\IS^2)$ and it is bounded from above by $4r$. Hence, \begin{equation}\label{e:trivialbound}\forall\gamma\in G(\IS^2),\quad0\leq \frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau\leq 4r.\end{equation} \begin{rema}\label{r:semiclassical-pert-2} In the case of semiclassical Schr\"odinger operators as in Remark~\ref{r:semiclassical-pert}, the argument would work similarly and we would also obtain the bound~\eqref{e:final-bound-ball} for this semiclassical problem (up to the extra remainder $\mathcal{O}(h^{-1}\varepsilon_h)$ that already appeared in that Remark). \end{rema} \subsection{Flow lines of $\varphi_{\langle V\rangle}^t$ near $\Gamma_{x_0}$} So far, we have not used our assumptions on $V$ or on the point $x_0$. They will now be used to get an improvement of order $r^{1/2}$ on the upper bound~\eqref{e:trivialbound} when $x\in B_{r_0}(x_0)$. 
To that aim, we now fix $x_0$ satisfying the assumptions of the Theorem and we will analyze the flow lines of $\varphi_{\langle V\rangle}^t$ near a given point $\gamma_0$ of $\Gamma_{x_0}$. Without loss of generality, we may suppose that $x_0$ is the north pole, i.e. with coordinates $(0,0,1)$ in the representation~\eqref{e:sphere}. Then, for every $x\in B_{\epsilon_0}(x_0)$, $\Gamma_x$ is a great circle of the sphere lying in the annulus $$\mathcal{A}_{\epsilon_0}:=\left\{(x_1,x_2,x_3)\in\mathbb{R}^3:x_1^2+x_2^2+x_3^2=1,\ |x_3|\leq\sin\epsilon_0\right\}.$$ Similarly, the function $\mathcal{R}(\mathbf{1}_{B_{2r}(x)})$ is supported on an annulus of width $2|\sin(2r)|$ around $\Gamma_x$ and it takes the value $4r$ on this annulus. In particular, if $\tau_0>0$ and $r_1>0$ are chosen small enough, then, for every $x\in B_{\epsilon_0}(x_0)$ and for every $0<r<r_1$, the support of \begin{equation}\label{e:av-radon-potential}\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}d\tau\end{equation} is contained in the annulus $\mathcal{A}_{2\epsilon_0}$. Hence, once we have fixed $x\in B_{\epsilon_0}(x_0)$, we just need to study the value of this function inside such an annulus. More precisely, we want to show that this is of order $\mathcal{O}(r^{3/2})$ uniformly for $\gamma$ in this annulus. Let $\gamma_0\in \Gamma_{x_0}$ and let us prove this upper bound in a neighborhood of a fixed $\gamma_0$. Without loss of generality, we can suppose that, in spherical coordinates $(\phi,\theta)$, one has $\gamma_0=(\pi/2,0)$. 
The vector field $X_{\langle V\rangle}$ can be written in this system of coordinates: $$X_{\langle V\rangle}(\phi,\theta)=-\frac{1}{\sin\phi}\frac{\partial\mathcal{R}(V)}{\partial\theta}\partial_\phi+\frac{\partial\mathcal{R}(V)}{\partial\phi}\partial_\theta.$$ We need to distinguish two situations: \begin{enumerate} \item $X_{\langle V\rangle}(\gamma_0)\notin T_{\gamma_0}\Gamma_{x_0}$, which means that $\frac{\partial\mathcal{R}(V)}{\partial\theta}(\pi/2,0)\neq 0$; \item $X_{\langle V\rangle}(\gamma_0)\in T_{\gamma_0}\Gamma_{x_0}$, which means that $\frac{\partial\mathcal{R}(V)}{\partial\theta}(\pi/2,0)= 0$. In that case, the hypothesis of Theorem~\ref{t:maintheo} implies that $\frac{\partial\mathcal{R}(V)}{\partial\phi}(\pi/2,0)\neq 0$ and $\frac{\partial^2\mathcal{R}(V)}{\partial\theta^2}(\pi/2,0)\neq 0$. \end{enumerate} Hamilton's equations can be written as \begin{equation}\label{e:HJ} \phi'(\tau)=-\frac{1}{\sin\phi(\tau)}\frac{\partial\mathcal{R}(V)}{\partial\theta}(\phi(\tau),\theta(\tau)),\quad\text{and}\quad\theta'(\tau)=\frac{\partial\mathcal{R}(V)}{\partial\phi}(\phi(\tau),\theta(\tau)). \end{equation} \subsubsection{The transverse case} Let us begin with the first situation, which is slightly easier to handle. Without loss of generality, we can suppose that $\frac{\partial\mathcal{R}(V)}{\partial\theta}(\pi/2,0)>0$ (the negative case is handled similarly). First, using spherical coordinates, we fix an open neighborhood $\mathcal{U}_{2\epsilon_0}:=(\pi/2-4\epsilon_0,\pi/2+4\epsilon_0)\times(-2\epsilon_0,2\epsilon_0)$ so that \begin{equation}\label{e:transverse-local} \forall \gamma=(\phi,\theta)\in\mathcal{U}_{2\epsilon_0},\quad \frac{\partial\mathcal{R}(V)}{\partial\theta}(\phi,\theta)>\frac{1}{2}\frac{\partial\mathcal{R}(V)}{\partial\theta}(\pi/2,0)=:a_0>0. 
\end{equation} Up to decreasing the value of $\tau_0$, we can suppose without loss of generality that $\varphi^\tau_{\langle V\rangle}(\gamma)$ belongs to $\mathcal{U}_{2\epsilon_0}$ for every $|\tau|\leq\tau_0$ and for every $\gamma\in\mathcal{U}_{\epsilon_0}$. As already explained, the support of~\eqref{e:av-radon-potential} is contained in $\mathcal{A}_{2\epsilon_0}$. For the moment, we will study its value locally inside $\mathcal{U}_{\epsilon_0}\subset\mathcal{A}_{2\epsilon_0}$. We now fix some $\gamma$ in $\mathcal{U}_{\epsilon_0}$. In particular, $$\forall |\tau|\leq \tau_0,\quad \frac{\partial\mathcal{R}(V)}{\partial\theta}\left(\varphi^\tau_{\langle V\rangle}(\gamma)\right)\geq a_0,$$ which implies thanks to~\eqref{e:HJ} that $\phi'(\tau)<0$ along this piece of trajectory. Since $\sin\phi\leq 1$, this yields the following upper bound along the orbit $\left(\varphi^\tau_{\langle V\rangle}(\gamma)\right)_{-\tau_0\leq\tau\leq\tau_0}$: \begin{equation}\label{e:angle-interval}\phi(\tau_2)-\phi(\tau_1)\leq -a_0(\tau_2-\tau_1)\ \Longleftrightarrow\ \tau_2-\tau_1\leq\frac{1}{a_0}(\phi(\tau_1)-\phi(\tau_2)),\end{equation} for every $-\tau_0\leq\tau_1\leq\tau_2\leq\tau_0$. Recall now that the function in~\eqref{e:av-radon-potential} is defined by averaging $\mathcal{R}(\mathbf{1}_{B_{2r}(x)})$ for some $x\in B_{\epsilon_0}(x_0)$ and some $0<r<r_1$. In spherical coordinates, $x$ can be written $(\phi_x,\theta_x)$ where $0\leq\phi_x\leq \epsilon_0$ and $0\leq\theta_x\leq 2\pi$. 
Hence, using our identification $G(\IS^2)\simeq\IS^2$, $\mathcal{R}(\mathbf{1}_{B_{2r}(x)})$ is $4r$ times the characteristic function of the annulus of width $4r$ centered at $\Gamma_x$, $$\mathcal{A}_{2r}(x)=\left\{(\phi,\theta):\ \phi-\text{arccos}\left(-\cos(\theta-\theta_x)\sin(\phi_x)\right)\in[-2r,2r],\ 0\leq \theta\leq 2\pi\right\}.$$ The boundary of this annulus is given by $$\partial \mathcal{A}_{2r}(x)=\left\{\left(\text{arccos}\left(-\cos(\theta-\theta_x)\sin(\phi_x)\right)\pm 2r,\theta\right):\ 0\leq\theta\leq 2\pi\right\}$$ and it is oriented thanks to the natural orientation on $\IS^2$. Using now that $\mathcal{R}(V)$ is of class $\mathcal{C}^1$ and~\eqref{e:transverse-local}, we know that, up to decreasing the value of $\epsilon_0$ (and thus of $\tau_0$ and $r_1$), the vector field $X_{\langle V\rangle}$ is uniformly (negatively) transverse to $\partial \mathcal{A}_{2r}(x)\cap\mathcal{U}_{2\epsilon_0}$ for every $x\in B_{\epsilon_0}(x_0)$ and for every $0<r<r_1$. In particular, given $\gamma\in\mathcal{U}_{\epsilon_0}$, the set $$\left\{\tau\in[-\tau_0,\tau_0]:\varphi^{\tau}_{\langle V\rangle}(\gamma)\in \mathcal{A}_{2r}(x)\right\}$$ is an interval that we denote by $I_{x,r}(\gamma)$. Hence, $$0\leq\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau\leq \frac{2r|I_{x,r}(\gamma)|}{\tau_0},$$ and it remains to determine an upper bound on the size of this interval in terms of $r$. Thanks to the upper bound~\eqref{e:angle-interval}, the length of the interval is bounded by the maximal variation of $\phi$ along the orbit of $\gamma$ inside $\mathcal{A}_{2r}(x)$. 
If we denote the interval $I_{x,r}(\gamma)$ by $[\tau_1,\tau_2]$, then $$\phi(\tau_1)-\phi(\tau_2)\leq 4r+\left|\text{arccos}\left(-\cos(\theta(\tau_1)-\theta_x)\sin(\phi_x)\right)-\text{arccos}\left(-\cos(\theta(\tau_2)-\theta_x)\sin(\phi_x)\right)\right|.$$ As $\phi_x\in[0,\epsilon_0]$ (with $\epsilon_0>0$ small), this yields an upper bound of the form $$\phi(\tau_1)-\phi(\tau_2)\leq 4r+C \sin(\epsilon_0)|\tau_2-\tau_1|,$$ where $C>0$ is some uniform constant. Combined with~\eqref{e:angle-interval}, it gives us $$0\leq|I_{x,r}(\gamma)|=\tau_2-\tau_1\leq\frac{4r}{a_0-C\sin(\epsilon_0)},$$ and then, for every $x\in B_{\epsilon_0}(x_0)$ and every $0\leq r\leq r_1$, \begin{equation}\label{e:bound-transverse-case}\forall\gamma\in\mathcal{U}_{\epsilon_0},\ 0\leq\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau\leq \frac{8r^2}{\tau_0(a_0-C\sin(\epsilon_0))}.\end{equation} This shows the expected upper bound in the neighborhood $\mathcal{U}_{\epsilon_0}(\gamma_0):=\mathcal{U}_{\epsilon_0}$ of $\gamma_0$ when $X_{\langle V\rangle}(\gamma_0)$ is transverse to $\Gamma_{x_0}$. \subsubsection{The tangent case} We now deal with the slightly more delicate case where $X_{\langle V\rangle}(\gamma_0)$ is tangent to $\Gamma_{x_0}$, i.e.\ where $\frac{\partial\mathcal{R}(V)}{\partial\theta}(\pi/2,0)=0$. Thanks to~\eqref{e:hyp-crit}, we can again without loss of generality assume that $\frac{\partial\mathcal{R}(V)}{\partial\phi}(\pi/2,0)>0$, and suppose that \begin{equation}\label{e:tangent-local} \forall \gamma=(\phi,\theta)\in\mathcal{U}_{2\epsilon_0},\quad \frac{\partial\mathcal{R}(V)}{\partial\phi}(\phi,\theta)>\frac{1}{2}\frac{\partial\mathcal{R}(V)}{\partial\phi}(\pi/2,0)=:a_0>0. \end{equation} Moreover, thanks to hypothesis~\eqref{e:hyp-trans}, the critical point at $0$ of the map $\theta\mapsto \mathcal{R}(V)(\pi/2,\theta)$ is nondegenerate. 
In particular, without loss of generality and up to decreasing the value of $\epsilon_0$, one has \begin{equation}\label{e:tangent-local-second} \forall \gamma=(\phi,\theta)\in\mathcal{U}_{2\epsilon_0},\quad \frac{\partial^2\mathcal{R}(V)}{\partial\theta^2}(\phi,\theta)>\frac{1}{2}\frac{\partial^2\mathcal{R}(V)}{\partial\theta^2}(\pi/2,0)=:b_0>0. \end{equation} We now fix $\gamma\in\mathcal{U}_{\epsilon_0}\subset\mathcal{A}_{2\epsilon_0}$ and, as before, we can suppose that, for every $|\tau|\leq \tau_0$, $$ \frac{\partial\mathcal{R}(V)}{\partial\phi}\left(\varphi^\tau_{\langle V\rangle}(\gamma)\right)\geq a_0\quad \text{and}\quad \frac{\partial^2\mathcal{R}(V)}{\partial\theta^2}\left(\varphi^\tau_{\langle V\rangle}(\gamma)\right)\geq b_0.$$ As in the transverse case, one has $$0\leq\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau\leq \frac{2r|I_{x,r}(\gamma)|}{\tau_0},$$ where $$I_{x,r}(\gamma):=\left\{\tau\in[-\tau_0,\tau_0]:\varphi^{\tau}_{\langle V\rangle}(\gamma)\in \mathcal{A}_{2r}(x)\right\}.$$ The main difference from the transverse case is that this set is not an interval in general. Yet, we can note that, along the trajectory of $\gamma$, the vector $(\phi'(\tau),\theta'(\tau))$ is nonvanishing thanks to~\eqref{e:tangent-local}. 
Moreover, it is tangent to $\partial\mathcal{A}_{r'}(x)$ (for some $r'<2\epsilon_0$) if and only if $$F(\tau):=\phi'(\tau)-\theta'(\tau)\frac{\sin(\theta(\tau)-\theta_x)\sin(\phi_x)}{\sqrt{1-\cos^2(\theta(\tau)-\theta_x)\sin^2\phi_x}}=0.$$ We can observe that \begin{eqnarray*} F'(\tau)&=&-\frac{1}{\sin\phi(\tau)}\frac{\partial\mathcal{R}(V)}{\partial\theta}(\phi(\tau),\theta(\tau))\frac{\partial^2\mathcal{R}(V)}{\partial\theta\partial\phi}(\phi(\tau),\theta(\tau))\\ &-&\frac{1}{\sin\phi(\tau)}\frac{\partial\mathcal{R}(V)}{\partial\phi}(\phi(\tau),\theta(\tau))\frac{\partial^2\mathcal{R}(V)}{\partial\theta^2}(\phi(\tau),\theta(\tau))+\mathcal{O}(\epsilon_0), \end{eqnarray*} where the constant in the remainder is uniformly bounded for $\tau\in[-\tau_0,\tau_0]$ and $\gamma\in\mathcal{U}_{\epsilon_0}$. Thus, as $\partial_\theta\mathcal{R}(V)(\pi/2,0)=0$, we can suppose that, up to decreasing the value of $\epsilon_0>0$, $|F'(\tau)|\geq a_0b_0/2$. In particular, $F$ is monotone and it vanishes at most at one point inside $[-\tau_0,\tau_0]$. As a consequence, the set $I_{x,r}(\gamma)$ is the union of at most two disjoint intervals inside $[-\tau_0,\tau_0]$ that we denote by $[\tau_1,\tau_2]$ and $[\tau_3,\tau_4]$. Moreover, $X_{\langle V\rangle}(\varphi_{\langle V\rangle}^\tau(\gamma))$ is tangent to $\partial \mathcal{A}_{2r}(x)$ at most at one point inside $[\tau_1,\tau_2]\cup[\tau_3,\tau_4]$. It now remains to bound the length of these two intervals in terms of $r$. To that aim, we observe that, for $\tau\in[\tau_1,\tau_2]\cup[\tau_3,\tau_4]$, one can find $r(\tau)\in[-2r,2r]$ such that $$\phi(\tau)=r(\tau)+\text{arccos}\left(-\cos(\theta(\tau)-\theta_x)\sin\phi_x\right).$$ Given now $\tau,\tau'\in[\tau_1,\tau_2]\cup[\tau_3,\tau_4]$, one finds $$r(\tau)-r(\tau')=F(\tau')(\tau-\tau')+\frac{F'(\tau')}{2}(\tau-\tau')^2+\mathcal{O}((\tau-\tau')^3),$$ where the constant in the remainder can be made uniform in terms of $r$, $\gamma$, $\tau$ and $\tau'$. 
We use this equality to find an upper bound on the length of $[\tau_1,\tau_2]$. The other interval (if nonempty) is handled similarly. Recall from the above calculation that $F'(\tau)\leq -a_0b_0/2$ for every $\tau\in[-\tau_0,\tau_0]$. We have to distinguish three cases: \begin{itemize} \item $F(\tau_1)\leq 0$. In that case, we take $\tau'=\tau_1$ and $\tau=\tau_2$ and we find $$r(\tau_2)-r(\tau_1)\leq-\frac{a_0b_0}{4}(\tau_2-\tau_1)^2+\mathcal{O}((\tau_2-\tau_1)^3).$$ From this, we can deduce that $|\tau_2-\tau_1|\leq \frac{32r^{1/2}}{a_0b_0}.$ \item $F(\tau_1)\geq 0$ and $F(\tau_2)\geq 0$. In that case, we take $\tau'=\tau_2$ and $\tau=\tau_1$ and we find $$r(\tau_1)-r(\tau_2)\leq-\frac{a_0b_0}{4}(\tau_2-\tau_1)^2+\mathcal{O}((\tau_2-\tau_1)^3).$$ Again, we deduce an upper bound of order $\mathcal{O}(r^{\frac{1}{2}}).$ \item $F(\tau_1)>0$ and $F(\tau_2)<0$. In that case, one can find some $\tau_*\in[\tau_1,\tau_2]$ such that $F(\tau_*)=0$. Then, we apply the above inequality twice to get $$r(\tau_2)-r(\tau_*)=\frac{F'(\tau_*)}{2}(\tau_2-\tau_*)^2+\mathcal{O}((\tau_2-\tau_*)^3)\ \text{and}\ r(\tau_1)-r(\tau_*)=\frac{F'(\tau_*)}{2}(\tau_1-\tau_*)^2+\mathcal{O}((\tau_1-\tau_*)^3).$$ Combining the two equalities, we find that $|\tau_2-\tau_1|=\mathcal{O}(r^{\frac{1}{2}}).$ \end{itemize} Gathering these bounds, we find that, for every $x\in B_{\epsilon_0}(x_0)$ and for every $r\leq r_1$, \begin{equation}\label{e:bound-tangent-case}\forall\gamma\in\mathcal{U}_{\epsilon_0},\ 0\leq\frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau\leq Cr^{\frac{3}{2}}.\end{equation} \subsubsection{The conclusion} By compactness, one can find $\gamma_1,\ldots,\gamma_N$ in $\Gamma_{x_0}$ and $\epsilon_1,\ldots,\epsilon_N>0$ such that $\cup_{j=1}^N\mathcal{U}_{\epsilon_j}(\gamma_j)$ covers $\Gamma_{x_0}$. 
We take $\epsilon_0:=\min\{\epsilon_j:1\leq j\leq N\}$ so that $\mathcal{A}_{2\epsilon_0}\subset\cup_{j=1}^N\mathcal{U}_{\epsilon_j}(\gamma_j).$ In particular, given any $x\in B_{\epsilon_0}(x_0)$ and any $r<r_1$ (with $r_1$ chosen small enough to handle each neighborhood $\mathcal{U}_{\epsilon_j}(\gamma_j)$), the support of the map $$\gamma\mapsto \frac{1}{2\tau_0}\int_{-\tau_0}^{\tau_0}\mathcal{R}(\mathbf{1}_{B_{2r}(x)})\circ\varphi_{\langle V\rangle}^{\tau}(\gamma)d\tau$$ is contained in $\cup_{j=1}^N\mathcal{U}_{\epsilon_j}(\gamma_j)$. Thus, applying~\eqref{e:bound-transverse-case} and~\eqref{e:bound-tangent-case} to~\eqref{e:final-bound-ball}, we obtain, for any normalized solution $u_h$ to~\eqref{e:semiclassical-schrodinger}, $$\int_{\IS^2}\chi_{x,r}(y)|u_h(y)|^2d\upsilon_{g_0}(y)=\mathcal{O}(r^{\frac{3}{2}})+\mathcal{O}(h^{1-3\beta}),$$ where the constant can be made uniform for $x\in B_{\epsilon_0}(x_0)$ and $r\geq h^{\beta}$. Taking $\beta= \frac{2}{9}$ yields Proposition~\ref{p:main-prop}. \begin{rema}\label{r:caustic} The analysis of the vector field performed here is related to the analysis in~\cite[\S~4]{MR16}. In that reference, we showed with Maci\`a that the semiclassical measures of $-\Delta_{g_0}+V$ can be decomposed as a convex combination of the Haar measures carried by the Lagrangian tori of the completely integrable system $(H_0,\mathcal{R}_{\text{cl}}(V))$. For $2$-dimensional tori, the projection of the Haar measure on $\IS^2$ is absolutely continuous~\cite[Th.~4.3]{MR16}, with possible blow-up of the density at certain points, which are often called caustics~\cite[Lemma~4.6]{MR16}. This regularity of the projection is exactly the property we have been using here in a somewhat refined way to get our bounds $\mathcal{O}(r^{1+\alpha})$. 
The bound~\eqref{e:bound-transverse-case} ($\alpha=1$) corresponds to points of these $2$-dimensional Lagrangian tori where the projection is regular, while~\eqref{e:bound-tangent-case} ($\alpha=1/2$) corresponds to these caustics. \end{rema} \section{Final comments} \subsection{Relaxing assumption~\eqref{e:hyp-trans}} Up to some extra work, assumption~\eqref{e:hyp-trans} could certainly be relaxed. For instance, one could require instead that the critical points are of finite order, i.e.\ that some derivative of order possibly larger than $2$ does not vanish. We would then end up with some upper bound of order $\mathcal{O}(r^{1+\alpha})$ for some $0<\alpha\leq 1/2$ related to the order of vanishing at the critical points of $\mathcal{R}(V)|_{\Gamma_{x_0}}.$ This would give slightly worse upper bounds on the growth of $L^p$-norms, but it would allow us to take larger compact subsets $K$ in~\eqref{e:Lp-compact}. \subsection{Relaxing assumption~\eqref{e:hyp-crit}} A priori, it does not seem possible to remove assumption~\eqref{e:hyp-crit} from the hypotheses of Proposition~\ref{p:main-prop}. Indeed, if there exists $\gamma_0\in\Gamma_{x_0}$ such that $X_{\langle V\rangle}(\gamma_0)=0$, then the value of~\eqref{e:av-radon-potential} at $\gamma_0$ will be equal to $4r$, which prevents us from drawing the same conclusion using our argument. \subsection{Sharpness of the exponents} Although we tried to optimize our arguments, it is not clear whether the bounds we obtain on $L^p$-norms are sharp. The argument works as well for elements of $L^2(\IS^2)$ which satisfy~\eqref{e:schrodinger} modulo some small remainder (say $\ml{O}(\lambda^{-2})$), and it would be interesting (but probably subtle) to construct quasimodes saturating these local $L^p$-estimates. \subsection{The range $p>6$} In this range, it is plausible that the methods from~\cite{GaTo18, Ga19, CaGa19, CaGa20} allow one to handle these critical geodesics. 
Indeed, suppose that there exists a point $x_0\in\mathbb{S}^2$ and a sequence $(\psi_{\lambda_k})_{k\geq 1}$ of normalized solutions to~\eqref{e:schrodinger} verifying $\lambda_k\rightarrow+\infty$ and \begin{equation}\label{e:blowup}\lim_{k\rightarrow+\infty}|\psi_{\lambda_k}(x_0)|\lambda_k^{-\frac{1}{2}}\neq0.\end{equation} Up to extracting a subsequence, we can suppose that $(\psi_{\lambda_k})_{k\geq 1}$ has a single semiclassical measure $\mu$~\cite[Ch.~5]{Zw12}. Recall that this is a probability measure carried by $S^*\IS^2$ and invariant under the geodesic flow $\varphi_0^t$. In particular, it induces a measure $\tilde{\mu}$ on $G(\IS^2)$. Then, we can consider $\tilde{\mu}_{x_0}=\tilde{\mu}|_{\Gamma_{x_0}}$. This measure can be decomposed into three parts: the absolutely continuous component, the singular continuous one, and the pure point one. According to the results of Galkowski and Toth in~\cite{GaTo18}, property~\eqref{e:blowup} implies that the absolutely continuous part is not identically $0$. Combined with~\cite[Prop.~2.3]{MR16}, this implies that $\mathcal{R}(V)|_{\Gamma_{x_0}}$ has infinitely many critical points. In other words, if $\mathcal{R}(V)|_{\Gamma_{x_0}}$ has finitely many critical points, then, for any sequence $(\psi_{\lambda_k})_{k\geq 1}$ of normalized solutions to~\eqref{e:schrodinger}, one has $$|\psi_{\lambda_k}(x_0)|=o\left(\lambda_k^{\frac{1}{2}}\right),$$ which improves the remainder from the local Weyl law at $x_0$ without imposing~\eqref{e:hyp-crit}. Compared with Theorem~\ref{t:maintheo}, this is of course not quantitative. If one is able to combine the quantitative arguments of Canzani and Galkowski~\cite{CaGa19, CaGa20} with the extra invariance by the flow of $X_{\langle V\rangle}$~\cite{MR16}, then this may give rise to improvements on Sogge's upper bounds~\eqref{e:Lp-sogge} in the range $p>6$ under weaker geometric assumptions than the ones appearing in Theorem~\ref{t:maintheo}.
Recall from the introduction that, thanks to the conjugation formula~\eqref{e:guillemin-weinstein}, eigenfunctions of $-\Delta_{g_0}+V$ which are the image under $\mathcal{U}$ of joint eigenfunctions for $(-\Delta_{g_0},V^\sharp)$ enjoy improved $L^p$ estimates near $x_0$ (for $p>6$) under appropriate assumptions on the critical points of $\mathcal{R}(V)|_{\Gamma_{x_0}}$~\cite{GaTo20, Ta19b}. In particular, if the spectrum of $-\Delta_{g_0}+V$ is simple~\cite[Th.~7]{Uh76}, then all eigenfunctions of $-\Delta_{g_0}+V$ will be the image of joint eigenfunctions. \subsection{The case of odd potentials} In~\cite{MR19}, it was shown that one can uncover extra invariance properties of semiclassical measures even if $\mathcal{R}(V)$ vanishes identically (which happens when $V$ is an odd function, e.g. $V(x_1,x_2,x_3)=x_3$). In principle, the above arguments could be adapted following the lines of this reference, up to some extra technical work. In that case, the role of $\mathcal{R}(V)$ would be played by the function $$\mathcal{R}^{(2)}(V)=\mathcal{R}(V^2)-\frac{1}{2\pi}\int_0^{2\pi}\int_0^t\{V\circ\varphi_0^t,V\circ\varphi_0^s\}dsdt.$$ See also~\cite{Gu78, Ur85} for earlier related results on spectral asymptotics of Schr\"odinger operators. \subsection{Semiclassical operators} In Remarks~\ref{r:Lp-semiclassical} and~\ref{r:KN-Lp}, we observed that our bounds on $L^p$ norms remain valid more generally for solutions to $$-h^2\Delta_{g_0}u_h+\varepsilon_h Vu_h=u_h,\quad\|u_h\|_{L^2(\IS^2)}=1.$$ Though perhaps not optimal, for $p>6$ we needed to impose $\varepsilon_h\leq h^{1+\epsilon}$ for some positive $\epsilon$, while for $4\leq p<6$ we only required $\varepsilon_h\leq h$. Thanks to Remarks~\ref{r:semiclassical-pert} and~\ref{r:semiclassical-pert-2}, this yields the following bounds on $L^p$ norms.
For $p=\infty$, one has $$\|u_h\|_{L^{\infty}(B_{r_0}(x_0))}\leq C_{\infty,x_0} h^{-\frac{1}{2}}\left(h^{\frac{1}{18}}+h^{\frac{\epsilon}{4}}\right),$$ which yields a polynomial improvement over the usual bound. In the range $4<p<6$, we get similarly, for any $r\geq h^{\frac{2}{9}}$, $$\|u_h\|_{L^{p}(B_{r_0}(x_0))}\leq C_{p,x_0} h^{-\sigma_0(p)}\left(h^{\frac{1}{9}}+(rh)^{-1}\varepsilon_h\right)^{\frac{1}{2}\left(\frac{6}{p}-1\right)},$$ while for $p=4$, we end up with $$\|u_h\|_{L^{4}(B_{r_0}(x_0))}\leq C_{4,x_0} |\log h|h^{-\frac{1}{8}}\left(h^{\frac{1}{9}}+(rh)^{-1}\varepsilon_h\right)^{\frac{1}{4}}.$$ In these last two cases, this yields improvements over Sogge's upper bound as soon as $h^{-1}\varepsilon_h\rightarrow 0$. Note that in every case, $\varepsilon_h$ may go to $0$ very fast. For instance, one may have $\varepsilon_h\ll h^2$. \subsection{The case of Zoll surfaces} Following the lines of~\cite{MR16}, we could adapt the results to Laplace eigenfunctions, $$-\Delta_g\psi_\lambda=\lambda^2\psi_\lambda,$$ where $g$ is a $C_{2\pi}$ (or Zoll) metric on $\IS^2$, i.e. a metric all of whose geodesics are closed, simple, and of length $2\pi$. See~\cite{Bes78} for a detailed review on this geometric assumption. In that case, it is known~\cite{CdV79} that $$\sqrt{-\Delta_g}=A+\frac{\alpha}{4}+Q,$$ where $Q$ is a pseudodifferential operator of order $-1$, $\alpha$ is the Maslov index of the closed trajectories, and $\text{Sp}(A)\subset\IZ_+$. Combining the above proof with the arguments from~\cite[\S 3.1]{MR16}, we would end up with the same quantities as in~\eqref{e:final-bound-ball}, except that $\mathcal{R}(V)$ would be replaced by some function $q_0(x,\xi)$ (related to the principal symbol of $Q$). An exact expression for $q_0$ was given by Zelditch in~\cite{Ze96, Ze97} and it involves curvature terms of the metric.
Under the geometric assumptions of Theorem~\ref{t:maintheo} on the point $x_0$, but with $q_0$ replacing $\mathcal{R}(V)$, we could obtain improved $L^p$-bounds near $x_0$. Yet, as the expression of $q_0$ is somewhat involved, this condition is harder to verify. \subsection{The higher dimensional case} For the sake of simplicity, we restricted ourselves to the $2$-dimensional case, but the extra invariance property by the flow of $X_{\langle V\rangle}$ remains true in higher dimensions $n\geq 3$~\cite[Prop.~2.3]{MR16}. Thus, modulo some extra work and some appropriate assumptions on $X_{\langle V\rangle}|_{\Gamma_{x_0}}$, one should be able to obtain localized $L^2$-estimates as in Proposition~\ref{p:main-prop}, though possibly for smaller values of $\alpha$. Then, in the range $p_c=\frac{2(n+1)}{n-1}<p\leq+\infty$, this can be transferred into $L^p$ bounds using the fact that~\eqref{e:localized-Lp} remains true for $p=\infty$ in dimension $n\geq 3$~\cite[Eq.~(3.3)]{So16}. Similarly, for $p<p_c$, the Kakeya-Nikodym bounds of Section~\ref{s:kakeya} remain true for $p>\frac{2(n+2)}{n}$ and can again be roughly bounded by the $L^2$-localized norms appearing in Proposition~\ref{p:main-prop}. Yet, we are not aware of an analogue of Guillemin's theorem~\cite{Gu76} showing that $\mathcal{R}$ is an isomorphism when restricted to the appropriate spaces of smooth functions on $\IS^n$ and $G(\IS^n)$, which would make the condition on $x_0$ easy to verify.
\section{\label{sec:Introduction}Introduction} MgO is one of the most extensively studied oxides; it is used as a substrate material and in various heterostructures with applications related to tunneling magnetoresistance \cite{Maruyama,Sofin,Yang}. As a wide band gap insulator with a measured optical band gap of 7.7 eV \cite{Roessler} from absolute-reflectance measurements with UV radiation and 7.83 eV \cite{Whited} from thermoreflectance spectroscopy, this material is employed e.g. in transient x-ray spectroscopy and time-dependent density-functional theory (DFT) calculations aiming to unravel the propagation of excitations across the interface in metal-insulator heterostructures \cite{Melnikov,Gruner,Rothenbach,Beyazit}. Understanding spectroscopic features from first principles requires accurate modeling beyond the ground state properties, including excitations of different origin and energy scale. The structural and electronic properties of MgO have been widely studied with first-principles calculations \cite{Schleife-2006,Fuchs-2008,Shishkin}. DFT calculations with semilocal functionals yield fundamental band gaps of 4.88, 4.50, and 4.76 eV~\cite{Wang,Schleife-2006,Shishkin}, respectively. Many-body perturbation theory (MBPT) calculations employing Hedin's \textit{GW} approximation \cite{Hedin} render increased fundamental gaps of 6.8 and 7.25 eV \cite{Fuchs-2008,Shishkin}, respectively, which are still lower than the experimental one. \noindent The optical spectrum calculated by Wang \textit{et al.} \cite{Wang}, using the local density approximation (LDA) as the exchange-correlation functional for the DFT calculation and subsequently including \textit{GW} and excitonic corrections, agrees with experiment \cite{Palik} w.r.t. peak positions up to 12 eV, whereas the amplitude of the peaks beyond the first one is overestimated due to the limited number of unoccupied bands employed in the BSE corrections.
Schleife \textit{et al.} \cite{Schleife-2006} studied the frequency-dependent dielectric function for different MgO polymorphs -- wurtzite, zinc blende, and rocksalt -- in the independent particle (IP) approximation using the generalized gradient approximation in the PW91 parametrization \cite{PW91}. Good agreement with experiment concerning the peak positions was obtained by including excitonic corrections with BSE, based on the Kohn-Sham (KS) eigenenergies and a scissors operator to describe the quasiparticle (QP) eigenenergies \cite{Schleife-2009,Schleife-2012}. While optical spectroscopy probes excitations from valence bands, x-ray absorption spectroscopy (XAS) probes those from the strongly localized core states. A common approach to model XAS is the final state rule (FSR) \cite{Barth} based on Fermi's golden rule, where the effects of screening of the core hole (the so-called \textit{final-state effects}) are calculated in a supercell. Alternatively, XAS can be described by considering quasiparticle and excitonic effects within MBPT by using $GW$ and solving the BSE. Rehr \textit{et al.} \cite{Rehr} showed that while both approaches lead to similar overall features in the O and Mg K-edge spectra of MgO, BSE calculations result in better agreement with experiment at high transition energy due to the non-local treatment of the exchange interaction. Recent implementations of BSE in all-electron codes \cite{Laskowski,Vorwerk-2017} with explicit treatment of core states have demonstrated very good agreement with experiment for the XAS spectra of TiO${_2}$ (rutile and anatase), PbI${_2}$, and CaO \cite{Vorwerk-2017}. The latter approach is adopted in this work. Here we describe both the optical and x-ray absorption spectra of bulk MgO including many-body effects. As a first step, we perform the $G{_0}W{_0}$ corrections starting from KS wavefunctions.
We show that careful consideration of the electron-hole interaction with BSE is essential to achieve agreement with experiment for both valence and core excitation spectra. In particular, the optical spectra calculated with two different DFT functionals (PBEsol and HSE06), including the $G{_0}W{_0}$ and BSE corrections, are consistent with experiment~\cite{Roessler,Bortz} and previous theoretical work \cite{Schleife-2009,Schleife-2012}, and yield an improved agreement regarding the intensity of the peaks at higher energies, highlighting the importance of quasiparticle and excitonic effects. Previous studies have shown that a dense \textbf{k}-mesh is required for the sampling of the Brillouin zone to sufficiently describe the localization of the excitonic wave function and the fine structure in the vicinity of the absorption edge \cite{Fuchs-2008}. Here, we use a model for the static screening with parameters fitted to the $G{_0}W{_0}$ calculation to solve the BSE (so-called model BSE \cite{Fuchs-2008,Liu}) starting directly from DFT wavefunctions on a denser \textbf{k}-mesh, which improves in particular the low energy range (7$-$11 eV). The results for the model BSE are presented in Appendix \ref{sec:mBSE}. Beyond previous work, we provide a thorough analysis of the interband transitions contributing to the peaks in the optical spectrum. Further insight into the nature of the first bound exciton is given by the real-space visualization of its wave function. Employing the \texttt{exciting} code, the O and Mg K-edge XAS spectra calculated with BSE show very good agreement with the experimental spectra \cite{Luches} and with previous theoretical results using the FSR \cite{Rehr}. Knowledge of the origin of peaks is essential for the interpretation of x-ray spectra.
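The static screening model underlying such model-BSE schemes can be sketched compactly. The analytic form below is the Gaussian-type interpolation between $1/\epsilon_\infty$ at small momentum transfer and $1$ at large momentum transfer commonly used for model BSE; both the value of $\epsilon_\infty$ and the range parameter $\lambda$ here are illustrative placeholders, not the parameters fitted in our calculations:

```python
import numpy as np

def model_inv_eps(q, eps_inf, lam):
    """Model inverse dielectric function eps^{-1}(q), interpolating between
    1/eps_inf as q -> 0 and 1 (no screening) as q -> infinity.

    eps_inf and the range parameter lam would be fitted to the G0W0
    screening; the values used below are placeholders."""
    q = np.asarray(q, dtype=float)
    return 1.0 - (1.0 - 1.0 / eps_inf) * np.exp(-(q**2) / (4.0 * lam**2))

q = np.linspace(0.0, 10.0, 101)           # momentum transfer grid (a.u.)
inv_eps = model_inv_eps(q, eps_inf=2.78, lam=1.2)
```

The single-parameter shape is what makes it cheap to evaluate on the denser \textbf{k}-mesh mentioned above.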
The main incentive of this study is to identify the nature of the transitions which contribute to the peaks and to analyze the character of the first exciton in the O K-edge both in real and reciprocal space. The paper is structured as follows: the details of the calculations are presented in Section \ref{sec:CompDetail}, followed by the discussion of the results in Section \ref{sec:Result}. We start with the electronic properties of MgO in \ref{subsec:properties} and then compare the optical spectra calculated with two different starting exchange-correlation functionals in \ref{subsec:Optprop}. Subsequently, we analyze the transitions in reciprocal space to derive the origin of contributions to the peaks in the spectrum. In subsection \ref{sec:XASprop}, we present the XAS spectra of the O and Mg K-edge and identify the underlying transitions in reciprocal space for the prominent peaks. Finally, subsection \ref{sec:RealspaceProj} is dedicated to the real-space visualization of the first exciton of the optical and the O K-edge x-ray absorption spectrum. The results are summarized in Section \ref{sec:Summary}, followed by two appendices showing a comparison of the optical spectra obtained with VASP and \texttt{exciting} and the optical spectrum with the model BSE. \begin{figure*}[!htp] \includegraphics[width=1.0\textwidth]{Figure1.pdf}% \caption{\label{fig:Bandstr-PDOS}(a) Kohn-Sham and $G{_0}W{_0}$ band structure and (b-d) total and projected density of states (PDOS) of MgO calculated with PBEsol within VASP.} \end{figure*} \section{\label{sec:CompDetail}Computational details} The DFT calculations are performed with the VASP code (version 5.4.4) \cite{Vasp-96,Vasp2-96}, using pseudopotentials in combination with the projector augmented wave (PAW) method \cite{Vasp3-99}, and the \texttt{exciting} code \cite{Gulans-2014} (version Nitrogen) employing the all-electron full-potential (linearized) augmented planewave $+$ local orbital [(L)APW$+$lo] method.
For the exchange-correlation functional we chose the generalized gradient approximation (GGA) in the implementation of Perdew, Burke, and Ernzerhof (PBE96) \cite{PBE-96}, PBEsol \cite{PBEsol-08,PBEsol-09}, and the hybrid functional HSE06 \cite{HSE-03,HSE-05}. The equilibrium lattice constant determined with the different functionals amounts to 4.24~\AA\ (PBE96), 4.21~\AA\ (PBEsol), and 4.20~\AA\ (HSE06), the experimental one being 4.212~\AA\ \cite{Landolt}. For the calculation of the optical spectrum with VASP, we have performed single-shot $G{_0}W{_0}$ on top of the KS wavefunctions obtained with two DFT functionals, PBEsol and HSE06, and subsequently included excitonic corrections by solving the BSE. For all the BSE calculations the Tamm-Dancoff approximation (TDA) \cite{Dancoff} is adopted. The calculations are performed for a two-atom unit cell with a $\Gamma$-centered 15$\times$15$\times$15 \textbf{k}-mesh (unless otherwise specified) with a plane-wave cut-off energy of 650 eV. $GW$ PAW pseudopotentials for excited properties were employed in all the calculations, with two valence electrons for Mg ($3s^2$) and six for O ($2s^2$, $2p^4$). 192 unoccupied bands are used for both the DFT and single-shot $G{_0}W{_0}$ calculations with 100 frequency-grid points. For the optical spectrum a Lorentzian broadening of 0.3 eV is used. Employing PBEsol~\cite{PBEsol-08,PBEsol-09} as the starting exchange-correlation functional for the ground-state calculation, single-shot $G{_0}W{_0}$ calculations are also performed with the \texttt{exciting} code \cite{Gulans-2014} together with BSE \cite{Sagmeister-2009} (within the TDA) for the optical and x-ray absorption spectra \cite{Vorwerk-2017}. A $\Gamma$-centered 11$\times$11$\times$11 mesh shifted by (0.09, 0.02, 0.04) is employed for the calculations.
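The Lorentzian broadening quoted above amounts to convolving the stick spectrum of transition energies and oscillator strengths with a Lorentzian line shape. A minimal sketch of this post-processing step, not the VASP or \texttt{exciting} implementation (the peak positions and strengths below are invented for illustration):

```python
import numpy as np

def lorentzian_broaden(energies, strengths, grid, gamma=0.3):
    """Broaden a stick spectrum into a smooth curve.

    energies, strengths: transition energies (eV) and oscillator
    strengths (arb. units); grid: output energy axis (eV);
    gamma: Lorentzian full width at half maximum (eV)."""
    e = np.asarray(energies, dtype=float)[:, None]
    f = np.asarray(strengths, dtype=float)[:, None]
    w = np.asarray(grid, dtype=float)[None, :]
    # area-normalized Lorentzian centred at each transition energy
    line = (gamma / (2.0 * np.pi)) / ((w - e) ** 2 + (gamma / 2.0) ** 2)
    return (f * line).sum(axis=0)

grid = np.linspace(5.0, 25.0, 2001)
spec = lorentzian_broaden([7.0, 10.0, 12.4, 16.0], [1.0, 0.9, 0.8, 0.4], grid)
```

Because each Lorentzian is area-normalized, the integrated broadened spectrum reproduces the total oscillator strength up to tail losses at the window edges.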
Muffin-tin radii of 1.058 and 0.767~\AA\ for Mg and O, respectively, are used with a basis set cut-off $R_{MT}|\mathbf{G}+\mathbf{k}|_{max}=7$, and the lattice constant is set to the PBEsol value of 4.21~\AA. The energy threshold to include the local field effects in the excited properties, $|\mathbf{G}+\mathbf{q}|_{max}$, is set to 4.5 a.u.$^{-1}$ for the optical and O K-edge, and 1.5 a.u.$^{-1}$ for the Mg K-edge absorption spectra. The exchange-correlation functional PBEsol is employed for the Kohn-Sham (KS) states and a total of 192 unoccupied bands are considered in the ground state and $G{_0}W{_0}$ calculations for the optical and the O and Mg K-edge x-ray absorption spectra. For the optical spectrum in the BSE calculation, four occupied and five unoccupied bands are considered, while eight unoccupied bands are taken into account for the XAS spectra. A Lorentzian broadening with a width of 0.55 eV is applied to the spectra to mimic the excitation lifetime. The atomic structures and isosurfaces are visualized with the VESTA software \cite{Vesta} and the band structure is calculated with the Wannier90 \cite{Wannier90} package in VASP. \section{\label{sec:Result}Results} \subsection{\label{subsec:properties}Electronic properties} We start our analysis by comparing the electronic properties obtained from DFT calculations with three different functionals, namely PBE96, PBEsol, and HSE06. Table \ref{tab:ExcGWbg} presents the band gap calculated with VASP. With PBE96 (4.49 eV) and PBEsol (4.58 eV), the band gaps are considerably underestimated, consistent with previous calculations \cite{Schleife-2006,Fuchs-2008,Shishkin}. On the other hand, HSE06 renders a band gap of 6.58 eV, closest to, yet still below, the experimental values of 7.7 and 7.83 eV \cite{Roessler,Whited}.
The $G{_0}W{_0}$ band gap obtained with PBEsol (7.52 eV) is closest to experiment, whereas a somewhat lower value (7.26 eV) is obtained with PBE96, in agreement with the value of 7.25 eV from Ref.~\cite{Shishkin}. The latter study~\cite{Shishkin} has also addressed the effect of the self-consistent quasiparticle correction cycle on the optical properties: self-consistency in $G$ while keeping $W_0$ constant ($GW_{0}$) increased the band gap to 7.72 eV, while fully self-consistent $GW$ led to an overestimated band gap of 8.47 eV, which was attributed to the missing vertex corrections in the self-consistency cycle. While the size of the band gap may be reproduced by considering (partial) self-consistency in $GW$, our results show that the inclusion of excitonic effects (see Sections \ref{subsec:Optprop} and \ref{sec:XASprop}) is essential in order to describe the relevant features and the shape of the spectrum. We note that the $G{_0}W{_0}$ band gap with the hybrid HSE06 functional is also overcorrected (8.53 eV). This is consistent with previous findings that the effects of the starting exchange-correlation functional are large at the independent-particle level, but the differences are reduced when quasiparticle~\cite{Jiang} and, further, excitonic effects~\cite{Begum} are considered. Since PBEsol and HSE06 provide better electronic properties than PBE96, we continue the analysis with these two functionals. In Fig.~\ref{fig:Bandstr-PDOS}a the Kohn-Sham and $G{_0}W{_0}$ band structure with the PBEsol functional is plotted along high-symmetry points, showing a direct ($\Gamma-\Gamma$) band gap. The inclusion of quasiparticle effects in the $G{_0}W{_0}$ calculation leads to a nearly rigid shift of the unoccupied Kohn-Sham bands to higher energies. The top of the valence band (VB) consists mainly of O $2p$ states (cf. the projected density of states in Figs.
\ref{fig:Bandstr-PDOS}b-d) with low dispersion along the $L-\Gamma-K$ direction, whereas the lower bands are more dispersive. Further insight into the orbital-resolved contributions of O and Mg to the band structure is provided in Fig.~\ref{fig:Vasp-Orbital}. The bottom of the conduction band (CB) comprises hybridized O $3s$, $3p$, and Mg $3s$ states that are highly dispersive along the $L-\Gamma-X$ and $K-\Gamma$ directions (cf. Figs. \ref{fig:Bandstr-PDOS}c, d and Figs. \ref{fig:Vasp-Orbital}a, b, and d). In the range of 4.5$-$11 eV beyond the CB minimum, O $3s$, $3p$, and Mg $3s$ states prevail, whereas above 11 eV O $3p$ states become predominant, followed by Mg $3p$ and $3d$ states above 15 eV (cf. Fig. \ref{fig:Bandstr-PDOS}d and Figs. \ref{fig:Vasp-Orbital}e, f). We will further analyze the ion- and orbital-resolved projections of the band structure in Section \ref{subsubsec:exciton-opt} and Section \ref{sec:XASprop} to correlate these contributions with the optical and XAS spectra. \begin{figure*}[!htp] \includegraphics[width=0.85\textwidth]{Figure2.pdf}% \caption{\label{fig:Vasp-Orbital}Oxygen (a-c) and Mg (d-f) orbital-resolved contributions projected on the ground state band structure within VASP.} \end{figure*} \begin{table}[!htp] \caption{\label{tab:ExcGWbg} Comparison of the fundamental band gap from the DFT and the $G_0W_0$ calculation with different starting functionals.} \begin{ruledtabular} \begin{tabular}{lcccc} \textrm{}& \textrm{E$_{xc}$}& \textrm{DFT}& \textrm{$G_{0}W_{0}$}& \textrm{Experiment}\\ \colrule \multirow{3}{4em}{$E_g$ (eV) ($\Gamma-\Gamma$)} & PBE96 & 4.49 & 7.26 & \multirow{3}{4em}{7.7\footnotemark[1], 7.83\footnotemark[2]}\\ & PBEsol & 4.58 & 7.52 &\\ & HSE06 & 6.58 & 8.53 &\\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Reference \citenum{Roessler}} \footnotetext[2]{Reference \citenum{Whited}} \end{table} \subsection{\label{subsec:Optprop}Optical properties} Starting from the electronic structure presented in the last section, we
determine the optical spectrum, now also including many-body effects. We discuss the effect of the approximations to the exchange-correlation functional, namely PBEsol and HSE06, on the spectra and the role of the inclusion of $G_{0}W_{0}$ and excitonic corrections with BSE. In addition, the interband transitions responsible for the spectral features are analyzed in reciprocal space. \subsubsection{Optical spectrum within the IP approximation and inclusion of $G_0W_0$ corrections} The calculated optical spectra are plotted in Fig.~\ref{fig:optics-vasp} together with the experimental ones~\cite{Roessler,Bortz}. The imaginary part of the experimental dielectric function shows four prominent peaks (marked in Fig. \ref{fig:optics-vasp}b): the first two, at $\sim$7.7 eV and 10.7 eV, are of nearly equal intensity; the third and fourth peaks are at 13.32 and 16.9 eV, respectively, the latter with smaller intensity. We start our analysis by considering the results from the independent particle (IP) approximation using the KS eigenvalues calculated with the functionals PBEsol and HSE06. The imaginary part of the dielectric function has its onset at 4.58 and 6.58 eV for PBEsol and HSE06, respectively, below the experimental onset, due to the underestimation of the band gap (cf. Table \ref{tab:ExcGWbg}). Moreover, prominent peaks in the imaginary part of the spectrum are observed at $\sim$8.5, 11, and 15.5 eV for PBEsol and at around 11, 13, and 17.5 eV for HSE06, corresponding to pronounced band transitions that coincide with points of inflection in the real part of the spectrum. Inclusion of many-body effects within the $G_0W_0$ approximation results in a blue shift by $\sim$3 eV (PBEsol) and 2 eV (HSE06) compared to the IP spectra, due to the increased band gaps within $G_0W_0$. This strong effect is attributed to the weak dielectric screening in MgO \cite{Wang}.
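The near-rigid quasiparticle shift noted above is what the scissors-operator treatments cited earlier exploit: the unoccupied KS eigenvalues are displaced by a constant so that the gap matches the quasiparticle one. A minimal sketch using the PBEsol KS and $G_0W_0$ gaps quoted above (the individual band energies are invented for illustration):

```python
import numpy as np

def scissors_shift(eigs, n_occ, gap_ks, gap_qp):
    """Rigidly shift the unoccupied eigenvalues so that the fundamental
    gap changes from gap_ks to gap_qp; occupied states are untouched."""
    delta = gap_qp - gap_ks
    out = np.array(eigs, dtype=float)
    out[n_occ:] += delta
    return out

# toy eigenvalues (eV): 4 occupied, 2 unoccupied bands at one k-point;
# KS gap 4.58 eV -> G0W0 gap 7.52 eV (PBEsol values from the text)
eigs = np.array([-4.0, -2.0, -1.0, 0.0, 4.58, 6.0])
qp = scissors_shift(eigs, n_occ=4, gap_ks=4.58, gap_qp=7.52)
```

Such a rigid shift reproduces the gap by construction but, unlike a full $G_0W_0$ calculation, cannot capture any band- or \textbf{k}-dependence of the correction.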
In Figs.~\ref{fig:optics-vasp}b, d sharper features emerge in $\epsilon_2$, with peaks at $\sim$11.8, 14, and 19 eV (PBEsol) that are shifted to higher energies, at $\sim$12.5, 15, and 20.5 eV, for HSE06. The real part of the spectrum in Figs.~\ref{fig:optics-vasp}a, c exhibits weaker, smoother features compared to experiment \cite{Roessler} (Fig. \ref{fig:optics-vasp}b, d). The macroscopic static electronic dielectric constant, $\epsilon_{\infty}=$Re $\epsilon(\omega=0)$, obtained with PBEsol and HSE06 is presented in Table \ref{tab:Exc-eps}. Within the IP approximation, $\epsilon_{\infty}$ is overestimated for PBEsol (3.29) compared to the experimental value of 2.94~\cite{Roessler}, similar to previous results with GGA-PW91 (3.16) \cite{Schleife-2009}. We also included the local field effects in the IP calculation and find that the dielectric constant decreases from 3.29 to 3.04 (PBEsol) and from 2.76 to 2.57 (HSE06). A similar trend was observed in the work of Gajdoš \textit{et al.}~\cite{Gajdos-2006} for semiconductors such as Si, SiC, AlP, and GaAs, and for insulators such as diamond (C). On the other hand, with the hybrid functional, $\epsilon_{\infty}$ is underestimated (2.76). Upon including quasiparticle effects ($G_0W_0$), the values are substantially reduced, to 2.78 (PBEsol) and 2.53 (HSE06).
\begin{table}[!htp] \caption{\label{tab:Exc-eps} Comparison of the macroscopic static electronic dielectric constant $\epsilon_{\infty}$ in the IP approximation and after $G_0W_0$ and BSE with different DFT functionals.} \begin{ruledtabular} \begin{tabular}{lcccc} \textrm{E$_{xc}$}& \textrm{IP}& \textrm{G$_{0}$W$_{0}$}& \textrm{BSE}& \textrm{Experiment}\\ \colrule PBEsol & 3.29 & 2.78 & 3.08 & \multirow{3}{4em}{2.94\cite{Roessler}}\\ HSE06 & 2.76 & 2.53 & 2.81 &\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure*}[!htp] \includegraphics[width=0.98\textwidth]{Figure3.pdf} \caption{\label{fig:optics-vasp} Optical spectrum of bulk MgO obtained with VASP: (a), (c) real part and (b), (d) imaginary part of the dielectric function for PBEsol and HSE06 as the starting functional, respectively. A Lorentzian broadening of 0.3 eV is employed for all the calculated spectra. The IP, IP$+G_{0}W_{0}$, and $G_{0}W_{0}+$BSE results are shown by brown solid, green dash-dotted, and red solid lines, respectively. Additionally, the experimental data from Exp. 1 \cite{Roessler} (black solid line) and Exp. 2 \cite{Bortz} (black dashed line) are displayed.} \end{figure*} \subsubsection{Optical spectrum with excitonic corrections\label{subsec:GW+BSW}} In addition to the quasiparticle corrections, we consider the effects arising from the electron-hole interaction by solving the Bethe-Salpeter equation. The calculations are performed with four occupied and five unoccupied bands, which are sufficient to evaluate the optical spectrum up to 30 eV. The inclusion of excitonic effects leads to a redistribution of the spectral weight to lower energies w.r.t. the $G_0W_0$ spectrum and to the emergence of a sharp peak at the absorption onset. With PBEsol as the starting point in Fig. \ref{fig:optics-vasp}b, the agreement with experiment w.r.t. the spectral shape is improved, but the onset of the imaginary part of the dielectric function is $\sim$0.7 eV lower than in experiment \cite{Roessler,Bortz}.
The prominent peaks are at $\sim$7.0 eV, 10 eV, 12.4 eV, and 16 eV. In the real part of the spectrum, the sign reversal at 12.8 eV indicates a plasmonic resonance. On the other hand, using HSE06 as the starting functional, the real and imaginary parts of the spectrum are in excellent agreement with experiment. The peak positions of the distinct features and the plasmonic resonance at 13.4 eV coincide with the experimental ones \cite{Bortz}. The four peaks of $\epsilon_2$ at $\sim$8, 10.5, 13, and 17 eV are largely aligned with experiment, as shown in Figs.~\ref{fig:optics-vasp}c and d. Further analysis of the origin of the peaks in reciprocal space and the real-space projection of the first exciton are provided in Section \ref{subsubsec:exciton-opt} and Section \ref{sec:RealspaceProj}, respectively. The improved description w.r.t. the energetic positions and, to a lesser extent, the intensity of the peaks can be attributed to the description of the ground state with the hybrid functional HSE06, to the larger number of unoccupied bands considered in the BSE calculation, and to performing BSE on top of $G_{0}W_{0}$. Furthermore, the agreement with experiment concerning the macroscopic static electronic dielectric constant $\epsilon_{\infty}$ is improved after BSE, to 3.08 (PBEsol) and 2.81 (HSE06), respectively (cf. Table \ref{tab:Exc-eps}), also consistent with a previous value of 3.12 \cite{Schleife-2009}, where excitonic corrections were included using the KS eigenenergies and a scissors-shift approach. We note that increasing the number of unoccupied bands to 9 leads to slightly higher values for $\epsilon_{\infty}$, namely 3.16 (PBEsol) and 2.90 (HSE06), the latter being in excellent agreement with experiment. In particular, more empty states are necessary for the calculation of the real part of the dielectric function from the Kramers-Kronig relation.
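The Kramers-Kronig construction of the real part from the imaginary part can be sketched numerically. This is a schematic principal-value quadrature, not the implementation used in VASP or \texttt{exciting}; the single-Lorentz-oscillator model for $\epsilon_2$ and its parameters are invented for illustration:

```python
import numpy as np

def kk_real_part(omega, eps2):
    """Re eps(w) = 1 + (2/pi) P int w' eps2(w') / (w'^2 - w^2) dw'.

    The principal value is approximated crudely by dropping the singular
    grid point; adequate for a smooth eps2 on a dense uniform grid."""
    dw = omega[1] - omega[0]
    eps1 = np.ones_like(omega)
    for i, w in enumerate(omega):
        with np.errstate(divide="ignore", invalid="ignore"):
            integrand = omega * eps2 / (omega**2 - w**2)
        integrand[i] = 0.0  # drop the singular point (PV approximation)
        eps1[i] += (2.0 / np.pi) * integrand.sum() * dw
    return eps1

# toy Im eps: a single Lorentz oscillator (w0, gamma, wp2 are invented)
omega = np.arange(0.01, 60.0, 0.01)
w0, gamma, wp2 = 10.0, 0.5, 20.0
eps2 = wp2 * gamma * omega / ((w0**2 - omega**2) ** 2 + (gamma * omega) ** 2)
eps1 = kk_real_part(omega, eps2)
```

The sketch reproduces the expected anomalous dispersion: $\epsilon_1>1$ below the resonance and $\epsilon_1<1$ above it. It also makes the point from the text concrete: truncating the frequency integral too early (too few empty states) distorts $\epsilon_1$ even where $\epsilon_2$ is converged.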
Due to the high computational cost and the enhanced memory demand with more unoccupied bands, we proceed with five unoccupied bands for the further analysis. \begin{figure*}[!htp] \includegraphics[width=0.98\textwidth]{Figure4.pdf} \caption{\label{fig:optics-exc} Optical spectrum with PBEsol including many-body corrections calculated with the \texttt{exciting} code: (a) real and (b) imaginary part of the dielectric function. A Lorentzian broadening of 0.3 eV is employed for the calculated spectrum with the $G_{0}W_{0}$+BSE corrections (red line) and the red vertical bars represent the oscillator strength (arb. units). The direct band gap at 7.26 eV is marked by a vertical green line. Experimental spectra from Roessler \textit{et al.}~\cite{Roessler} (black solid line) and Bortz \textit{et al.}~\cite{Bortz} (black dashed line) are shown for comparison. (c$-$f) Electron-hole coupling coefficients, represented as circles in reciprocal space, for the peaks at different energies marked in (b), where the size of the circle is proportional to the magnitude of the e-h contribution.} \end{figure*} Furthermore, for the binding energy of the first exciton we obtain 442 meV with PBEsol and 596 meV with HSE06. A similar value of 429 meV was obtained by Fuchs \textit{et al.} \cite{Fuchs-2008} employing the KS eigenenergies (GGA) with a scissors shift of 2.98 eV and subsequently including excitonic corrections. The overestimation of the binding energy w.r.t. experiment (80 meV \cite{Roessler}) may be attributed to the fact that the ionic contributions to the static screening are not considered \cite{Shindo,Zimmermann}. \subsubsection{\label{subsubsec:exciton-opt}Analysis of spectral features in reciprocal space} In order to identify the origin of the most prominent peaks, we have performed calculations with the all-electron \texttt{exciting} code.
The real and imaginary parts of the dielectric function for $G_0W_0+$BSE with PBEsol as the DFT functional and similar parameters (four occupied and five unoccupied bands) are plotted in Figs. \ref{fig:optics-exc}a, b, and show good agreement with experiment as well as with the VASP result w.r.t. the energetic positions of the peaks (a comparison of the spectra obtained with the two codes is provided in Appendix A and Fig. \ref{fig:Comparison-OptSpec}). The PBEsol band gap is 4.60 eV at the KS level and increases to 7.25 eV after quasiparticle corrections with $G_0W_0$ are included. The most prominent peaks are marked in Fig.~\ref{fig:optics-exc}b and the corresponding e-h contributions are studied in Figs.~\ref{fig:optics-exc}c-f. We recall that the Bethe-Salpeter equation represents an eigenvalue problem for an effective two-particle Hamiltonian~\cite{Rohlfing,Sagmeister-2009}: \begin{equation}\label{eqn:BSE_EVP} \sum_{v'c'\textbf{k}'}H_{vc\textbf{k},v'c'\textbf{k}'}A^{\lambda}_{v'c'\textbf{k}'}=E^{\lambda}A^{\lambda}_{vc\textbf{k}}, \end{equation} where $E^\lambda$ are the transition energies and $A^\lambda_{vc\textbf{k}}$ are the corresponding eigenvectors, expanded in terms of $v \mathbf{k} \rightarrow c \mathbf{k}$ transitions. The e-h coupling coefficients for a particular transition, displayed as circles in Figs. \ref{fig:optics-exc}c-f, are calculated from the BSE eigenvector $A^{\lambda}$ by summing over the complementary band index: \begin{equation} w^\lambda_{c\textbf{k}}=\sum_{v}\mid A^{\lambda}_{vc\textbf{k}} \mid^2, \, w^\lambda_{v\textbf{k}}=\sum_{c}\mid A^{\lambda}_{vc\textbf{k}} \mid^2.
\end{equation} \begin{figure*}[!htp] \includegraphics[width=1.0\textwidth]{Figure5}% \caption{\label{fig:xas-O}XAS spectrum of the O K-edge using $G_{0}W_{0}$+BSE calculated with the \texttt{exciting} code: (a) calculated absorption spectra with $G_{0}W_{0}$+BSE (red line) and within the independent quasiparticle approximation (IQPA, brown shaded area) are compared with experimental spectra from Luches \textit{et al.}~\cite{Luches} (black line). A shift of 34.4 eV was applied to the calculated spectra to align with the first peak of the experiment, and a Lorentzian broadening of 0.55 eV is adopted to mimic the excitation lifetime. The green line at 535.2 eV marks the direct band gap. The red vertical bars represent the oscillator strength (arb. units). (b-g) excitonic contributions to the final states in the CB of the peaks marked in (a).} \end{figure*} The first exciton at 6.82 eV has a binding energy of 435 meV, close to the value obtained with VASP, as discussed in the previous section. This bound exciton contributes to the shoulder at the onset of the spectrum. The interband transitions responsible for the exciton and its real-space distribution are discussed in detail in Section \ref{sec:RealspaceProj}. In Fig.~\ref{fig:optics-exc}a, the green line marks the fundamental band gap, below which the bound excitons lie. The first peak at 7.3 eV (cf. Fig.~\ref{fig:optics-exc}c) arises due to transitions from the top of the valence band (VB) to the bottom of the conduction band (CB) around the $\Gamma$-point in reciprocal space. A comparison with the site- and orbital-projected DOS (Fig.~\ref{fig:Bandstr-PDOS}) and band structure (Fig. \ref{fig:Vasp-Orbital}) reveals a mixed O $3s$, $3p$, and Mg $3s$ character. The second peak at 9.4 eV involves interband transitions from the topmost VB to the lowest CB along $L-\Gamma-X$ and $\Gamma-K$. The CB is more dispersive along $L-\Gamma$ and has mixed O $3s$ and $3p$ character with Mg $3p$ contributions near $L$.
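As a purely illustrative aside (not part of the original calculations), the structure of the BSE eigenvalue problem of Eq.~(\ref{eqn:BSE_EVP}) and the e-h weights defined above can be sketched numerically. The Hamiltonian below is a random Hermitian stand-in, not an actual BSE matrix, and all dimensions are made up:

```python
# Toy illustration of the BSE eigenvalue problem: diagonalize a small
# Hermitian "two-particle Hamiltonian" H and compute e-h coupling weights
# from the eigenvectors A. Random stand-in data, not real BSE matrix elements.
import numpy as np

rng = np.random.default_rng(0)
n_v, n_c, n_k = 2, 3, 4           # valence bands, conduction bands, k-points
dim = n_v * n_c * n_k             # size of the (v, c, k) transition space

# Hermitian stand-in for H_{vck, v'c'k'}
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2

E, A = np.linalg.eigh(H)          # E[l] = E^lambda, A[:, l] = A^lambda

# Reshape eigenvector lambda = 0 back to indices (v, c, k)
A0 = A[:, 0].reshape(n_v, n_c, n_k)

# Weights of conduction / valence states: sum over the complementary band index
w_ck = np.abs(A0) ** 2            # |A_{vck}|^2
w_c = w_ck.sum(axis=0)            # w^lambda_{ck} = sum_v |A_{vck}|^2
w_v = w_ck.sum(axis=1)            # w^lambda_{vk} = sum_c |A_{vck}|^2

# Each weight set exhausts the normalized eigenvector
assert np.isclose(w_c.sum(), 1.0) and np.isclose(w_v.sum(), 1.0)
```

Since the eigenvectors of a Hermitian matrix are orthonormal, the weights of every exciton $\lambda$ sum to one, which is what makes them interpretable as the relative contribution of each $(c,\mathbf{k})$ or $(v,\mathbf{k})$ state, visualized as circle sizes in Figs.~\ref{fig:optics-exc}c-f.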
The next peak at 10.4 eV stems from transitions to the CB from deeper-lying valence bands along $L-\Gamma-X$ and $\Gamma-K$. The final peak at 12.2 eV, plotted in Fig.~\ref{fig:optics-exc}f, results from transitions from the top of the VB to the higher-lying CB around $X$ as well as along $K-\Gamma$. In this energy range, the CB consists of O $3p$ and Mg $3s$ and $3d_{xy}$,$d_{xz}$ states along $\Gamma-X$ and $K-\Gamma$. The influence of lattice screening on the optical spectrum is a topic of current research, and so far there is no established framework to treat the exciton-phonon coupling that renormalizes the absorption spectra. Such effects should be assessed in future work. \subsection{\label{sec:XASprop}X-ray absorption spectra} We now turn to the x-ray absorption spectra of the O and Mg K-edge of bulk MgO calculated with the \texttt{exciting} code. The ground state calculations were performed with the PBEsol exchange-correlation functional and the quasiparticle corrections were included with the single-shot $G_0W_0$ approach. Finally, the excitonic corrections were accounted for by solving the BSE. \begin{figure*}[!htp] \includegraphics[width=1.0\textwidth]{Figure6}% \caption{\label{fig:xas-Mg} XAS spectrum of Mg K-edge including $G_{0}W_{0}$+BSE corrections calculated with the \texttt{exciting} code: (a) calculated absorption spectra with $G_{0}W_{0}$+BSE (red line) and within IQPA (brown shaded area) are compared with experimental spectra from Luches \textit{et al.} \cite{Luches} on a MgO film on Ag(001), grazing/normal incidence of the photon beam (black dashed-dotted/solid line); polycrystalline MgO (black dashed line). A shift of 58.8 eV was applied to the calculated spectra to align with the first peak of the experiment, and a Lorentzian broadening of 0.55 eV is adopted for the theoretical curve to mimic the excitation lifetime. The vertical green line at 1306.6 eV marks the direct band gap. The red vertical bars represent the oscillator strength (arb.
units). (b-k) excitonic contributions to the final states in the CB of the peaks marked in (a).} \end{figure*} \subsubsection{\label{subsubsec:XAS-O} O K-edge} The theoretical XAS spectrum of the O K-edge is plotted in Fig. \ref{fig:xas-O}a together with the experimental spectrum from Luches \textit{et al.} \cite{Luches}, who performed x-ray absorption measurements on MgO films of varying thickness grown epitaxially on Ag(001) as well as on polycrystalline bulk samples. The $G_0W_0$+BSE spectrum is characterized by six prominent peaks with high oscillator strength (cf. Fig. \ref{fig:xas-O}a). Their origin in terms of transitions to the conduction bands is visualized in Figs. \ref{fig:xas-O}b-g. We find that for the BSE calculation, a total of eight unoccupied bands are sufficient to obtain agreement with experiment and converge the oscillator strengths in the energy interval up to 30 eV. While the spectrum within the independent quasiparticle approximation (IQPA) obtained after the $G_0W_0$ correction captures the four-peak feature, very good agreement with experiment concerning the spectral shape and the relative positions of the three prominent peaks at $\sim$ 537, 546, and 557 eV is obtained only after the $G_0W_0$+BSE corrections. The spectrum is also consistent with earlier work of Rehr \textit{et al.} \cite{Rehr} using FSR and BSE. The reduced intensity of the third peak in the $G_0W_0$+BSE spectrum can be attributed to the limited number of unoccupied bands considered in the calculation. The $GW$ approximation has not been widely explored for core-level states; only recently have first promising reports appeared on its application to molecular $1s$ levels~\cite{Golze-2020}. In our calculations, we do not correct the core energies obtained from DFT. Thus, we shift the BSE spectrum to align with the experiment, as done in earlier works \cite{Vorwerk-2017,Vorwerk-2019}.
For the O K-edge a shift of 34.4 eV is applied to the $G_0W_0$+BSE spectrum to align the first peak with experiment. The same shift is also applied to the IQPA spectrum. The green line in Fig.~\ref{fig:xas-O}a marks the direct band gap and the states below it correspond to bound excitons. The first bound exciton at the O K-edge is found at 534.5 eV with a binding energy of 690 meV; its real-space distribution is discussed in Section \ref{sec:RealspaceProj}. This value is comparable to previous results for other oxides, e.g. 0.5 eV was reported for $\beta$-Ga$_{2}$O$_{3}$~\cite{Cocchi-2016}, and 285 meV, 345 meV, and 323 meV for the $\alpha$-, $\beta$-, and $\varepsilon$-phases of Ga$_{2}$O$_{3}$~\cite{Vorwerk-2021}, respectively. Six prominent features in the XAS spectrum with high oscillator strength are marked and their origin is analyzed further in Figs. \ref{fig:xas-O}b-g. Transitions at the onset of the spectrum at 535.3 eV are localized around $\Gamma$ at the bottom of the CB (Fig. \ref{fig:xas-O}b) and comprise predominantly O $3p$ character hybridized with O $3s$ (cf. Figs. \ref{fig:Vasp-Orbital}a-c), and Mg $3s$ character (cf. Fig. \ref{fig:Vasp-Orbital}d). The second peak at 536.9 eV also arises from transitions to the lowest CB, but is more dispersive along $L-\Gamma-X$ and $K-\Gamma$. The subsequent peak at 537.7 eV stems from transitions to the lowest CB, but is localized midway along $L-\Gamma$ with some contribution along $K-\Gamma$ and has hybridized O $3s$, $3p$, and Mg $3s$ character. Furthermore, the peak at 539.5 eV results from transitions to the second lowest unoccupied band localized at $X$ and dispersive along $K-\Gamma$ with Mg $3d_{xz}$ (cf. Fig. \ref{fig:Vasp-Orbital}f) as well as O $3p$ character. Transitions to higher unoccupied bands around $W$ and $\Gamma$ with mixed O $3p$ and $3d$ and Mg $3p$, $t_{2g}$ character result in a peak at 546.5 eV.
The final peak at 557.2 eV arises from transitions to the CB at energies above 25 eV with O $3p$ and Mg $3p$, $e_g$ contributions along $X-W-K-\Gamma$. \begin{figure*}[!htp] \includegraphics[width=0.7\textwidth]{Figure7.pdf}% \caption{\label{fig:Exciton_O}Analysis of the first exciton in the optical spectrum (a) and O K-edge XAS (b) in reciprocal space. The lower panels show the density associated with the electronic part of the excitonic wavefunctions for a selected cross section in real space: In (c) along ($56\bar{1}$) for the optical excitation shown in (a) and (d) along ($\bar{5}1\bar{6}$) for the O K-edge XAS in (b). The color code visualizes the spatial extension of the wave functions: Blue colors refer to vanishing or low densities, orange to red colors to elevated densities. The hole is fixed near the oxygen (fractional coordinate: 0.52, 0.52, 0.52) and is marked by a white cross.} \end{figure*} \subsubsection{\label{subsubsec:XAS-Mg} Mg K-edge} Fig. \ref{fig:xas-Mg}a displays the Mg K-edge from the $G_0W_0$+BSE calculation and the experimental spectrum from Luches \textit{et al.} \cite{Luches}. The experimental spectrum has four prominent peaks: three at 1308.3, 1314.4, and 1316.2 eV, followed by a broader one at 1326.6 eV, with a noticeable difference in peak intensities for normal and grazing incidence of the MgO film on Ag(001) and for the polycrystalline sample. While the IQPA spectrum for the O K-edge shows overall agreement with the $G_0W_0$+BSE result, for the Mg K-edge the IQPA spectrum fails to describe the general features of the experimental spectrum. On the other hand, including the core-hole--electron interaction leads to a large redistribution of the spectral weight, accompanied by the emergence of a high-intensity peak at the onset of the BSE spectrum.
Overall, the $G_0W_0$+BSE Mg K-edge is in very good agreement with the experimental spectrum of Luches \textit{et al.}~\cite{Luches} and with previous BSE and FSR calculations \cite{Rehr} concerning peak positions and relative intensity. Similar to the O K-edge, we applied a shift of 58.8 eV to the $G_0W_0$+BSE spectrum to align it to the first peak in the experimental spectrum, and the same shift is applied to the IQPA spectrum. The green line in Fig.~\ref{fig:xas-Mg}a marks the direct band gap; the bound excitons lie below this energy, the first one being at 1305.81 eV with a binding energy of 760 meV. Ten prominent peaks with high oscillator strength are marked in the spectrum and analyzed further in Figs. \ref{fig:xas-Mg}b-k. The first peak at 1307.4 eV arises from transitions to the bottom of the CB with Mg $3p$ character, hybridized with Mg $3s$ states (cf. Figs.~\ref{fig:Vasp-Orbital}a,b). The second peak at 1308.2 eV comprises transitions along $L-\Gamma-X$ with Mg $3p$ character and along $K-\Gamma$ with mixed Mg $3s$ and $3p$ character. The third peak at 1310.1 eV includes transitions to the lowest CB concentrated halfway along $\Gamma-X$ with hybridized Mg $3s$ and O $3s$ and $3p$ character (cf. Figs.~\ref{fig:Vasp-Orbital}a,b). Moreover, the peaks at 1313.8, 1315.6, and 1316.7 eV arise from transitions to higher-energy conduction bands ($>$10 eV) and are dispersive along the whole $\mathbf{k}$-path with hybridized O $3p$ and Mg $3s$ and $3p$ as well as Mg $3d_{xz}$ character (cf. Figs.~\ref{fig:Vasp-Orbital}d-f). The peaks at 1323.6, 1325.7, 1327.6, and 1329.5 eV stem from transitions to unoccupied bands with $E>$20 eV predominantly along $X-W-K$ with prevailing hybridized Mg $3p$ and $e_g$ character.
\subsection{\label{sec:RealspaceProj}Real-space projection of the first exciton} The real-space wavefunction of the excited electron for a given exciton can be obtained from the BSE eigenvectors $A^{\lambda}$ as: \begin{equation}\label{eqn:BSE_e-hWF} \Psi_{\lambda}(\textbf{r}_{h},\textbf{r}_{e})=\sum_{vc\textbf{k}}A^{\lambda}_{vc\textbf{k}}\psi^{\ast}_{v\textbf{k}}(\textbf{r}_{h})\psi_{c\textbf{k}}(\textbf{r}_{e}). \end{equation} For more details see Refs. \cite{Sagmeister-2009,Vorwerk-2019} and references therein. For the analysis we fixed the hole slightly off the oxygen position (0.52,0.52,0.52) and plotted the electronic part of the wavefunction in real space for the first exciton of the optical and the O K-edge XAS spectrum in Fig.~\ref{fig:Exciton_O}. The first bound exciton of the optical spectrum (Fig. \ref{fig:Exciton_O}a) consists of transitions from the valence band maximum (VBM) to the conduction band minimum (CBM) that are strongly localized around $\Gamma$. Since the excited electron is distributed solely over the lowest, highly dispersive conduction band, the bound exciton was previously described within the Wannier-Mott two-band model by Fuchs \textit{et al.}~\cite{Fuchs-2008}. In Fig.~\ref{fig:Exciton_O}c we display a cut along the ($56\bar{1}$) plane through the center of the spread of the wave function near the fixed position of the hole, which shows that the exciton is delocalized over several unit cells, supporting the Wannier-Mott character. Moreover, the intensity of the spread has a maximum at the oxygen sites and is weaker at the Mg sites. The reciprocal-space projection in conjunction with the orbitally projected band structure (cf. Fig.~\ref{fig:Vasp-Orbital}) shows a main contribution of hybridized O $3s$ and Mg $3s$ states at the CBM. For comparison, we have also analyzed the real-space projection of the first exciton in the O K-edge XAS spectrum. As shown in Fig.
\ref{fig:Exciton_O}b, this exciton involves transitions to the CBM, but is more dispersive in reciprocal space along $L-\Gamma-X$ and $K-\Gamma$. This goes hand in hand with a stronger localization in real space, visible from the real-space projection in Fig. \ref{fig:Exciton_O}d along the ($\bar{5}1\bar{6}$) plane, which exhibits a significant decrease in the spread of the wavefunction. Compared to the exciton of the optical spectrum, here the spread is confined to two to three unit cells only. The 2D cut through the center of the wavefunction spread also illustrates the orbital contributions with $s$ and $p$ character near the O sites, whereas the contributions around the Mg sites have $s$-like character. This can be attributed to the strong hybridization of the O $3s$, $3p$, and Mg $3s$ states around the CBM, discussed above. \section{\label{sec:Summary}Summary} We have provided a comprehensive study of the optical and x-ray absorption spectra of bulk MgO with the VASP and \texttt{exciting} codes. The results indicate that the quasiparticle and, in particular, excitonic effects are crucial to describe the spectra, concerning peak positions and, to a lesser extent, intensity. For the optical spectrum, the effect of two different functionals (GGA-PBEsol and the hybrid HSE06) is studied: excellent agreement with experiment is obtained with HSE06 w.r.t.\ the energetic positions of the peaks. Analysis of the electron-hole coupling coefficients in reciprocal space allows us to identify the valence-to-conduction-band transitions contributing to the peaks in the spectrum. In particular, the peak at 7.3 eV arises due to transitions localized around $\Gamma$ from the top of the VB to the bottom of the CB with mixed O $3s$, $3p$, and Mg $3s$ character, followed by a peak at 9.4 eV stemming from similar interband transitions but along $L-\Gamma-X$ and $\Gamma-K$ with mixed O $3s$, $3p$, and Mg $3p$ character near $L$.
The third peak at 10.4 eV stems from transitions to the bottom of the CB from deeper-lying valence bands, and the final peak at 12.2 eV results from transitions to higher-lying conduction bands with hybridized O $3p$ and Mg $3s$ and $3d_{xy}$,$d_{xz}$ character. The inclusion of the core-hole--electron interaction by solving the BSE is found to be essential also for the XAS Mg and O K-edge. By visualizing the transitions to the unoccupied bands in reciprocal space, we determine the origin of the relevant peaks in the spectra. In the O K-edge spectrum, the peak at $\sim$ 537 eV originates from transitions to unoccupied bands with hybridized O $3s$, $3p$, and Mg $3s$ character, the peak at 546 eV stems from O $3p$, $3d$ hybridized with Mg $3p$ and $t_{2g}$ states, and the peak at 557 eV emerges from transitions to the CB with hybridized O $3p$ and Mg $3p$ and $3d$ character. The real-space projection of the electronic part of the wavefunction of the first exciton in the optical spectrum shows that it has a delocalized Wannier-Mott character, consistent with previous studies in reciprocal space \cite{Fuchs-2008}. On the other hand, the wavefunction of the first exciton in the O K-edge spectrum is more strongly localized and spreads over only up to three unit cells. We believe that our detailed analysis of the optical and x-ray excitations in this paradigmatic oxide material, regarding their orbital character and extension in real and reciprocal space, based on state-of-the-art many-body approaches, serves as an important benchmark and provides useful background information for the interpretation of experimental data from both static and time-dependent investigations. \begin{acknowledgments} We thank Caterina Cocchi, Andr\'e Schleife, Heiko Wende, Andrea Eschenlohr, Katharina Ollefs, Nico Rothenbach, and Okan K\"oksal for fruitful discussions.
We wish to acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within collaborative research center CRC1242 (project number 278162697, subproject C02) and computational time at the Center for Computational Sciences and Simulation of the University of Duisburg-Essen on the supercomputer magnitUDE (DFG grants INST 20876/209-1 FUGG, INST 20876/243-1 FUGG). C. V. and C. D. appreciate funding from the Leibniz-ScienceCampus GraFOx. \end{acknowledgments}
\section{Introduction} Now we briefly present some preliminaries including definitions and notations. \vspace{0.5 cm} \textbf{Definition 1.} ([1]) A directed topological space, or a d-space $X = (X,PX)$ is a topological space equipped with a set $PX$ of continuous maps $\gamma : I \to X$ (where $I=[0,1]$ equipped with subspace topology of standard topology on $\mathbb{R}$), called directed paths or d-paths, satisfying these axioms: \vspace{.3 cm} 1. every constant map $I \to X$ is directed\\ 2. $PX$ is closed under composition with continuous non-decreasing maps from $I$ to $I$\\ 3. $PX$ is closed under concatenation \vspace{0.5 cm} \textbf{Notation} ([2]) $\stackrel{\rm ir}{\rm \mathbb{R}}$ is the real line equipped with the left order topology, and $\stackrel{\rm ir}{\rm I}$ is $[0,1]$ equipped with subspace topology of $\stackrel{\rm ir}{\rm \mathbb{R}}$. \vspace{.5 cm} \textbf{Definition 2.} Let $X$ be a topological space. A function $\gamma : \stackrel{\rm ir}{\rm I} \to X$ is called an ir-path in $X$, if it is continuous on $\stackrel{\rm ir}{\rm I}$. Also, $\gamma(0)$ is the initial point and $\gamma(1)$ is the terminal point of the ir-path $\gamma$. 
\vspace{.5 cm} \textbf{Definition 3.} ([3]) ${\stackrel{\rm \longrightarrow}{\rm \Gamma_X}} = \bigg \{ (x,y) \ \bigg | \ \exists \gamma \in PX, \ \gamma(0) = x , \ \gamma(1)=y \bigg \}$ \vspace{.5 cm} \textbf{Definition 4.} ${\stackrel{\rm ir}{\rm \Gamma_X}} = \bigg \{ (x,y) \ \bigg | \ \exists \gamma \in X^{\stackrel{\rm ir}{\rm I}}, \ \gamma(0) = x , \ \gamma(1)=y \bigg \} $ \section{Relation between d-paths and ir-paths} \textbf{Proposition 1.} ${\stackrel{\rm \longrightarrow}{\rm \Gamma_{\stackrel{\rm \longrightarrow}{\rm \mathbb{R}^n}}}} = {\stackrel{\rm ir}{\rm \Gamma_{\stackrel{\rm ir}{\rm \mathbb{R}^n}}}} $.\\ \textit{Proof.} From [4] we know that\\ \begin{center} ${\stackrel{\rm \longrightarrow}{\rm \Gamma_{\stackrel{\rm \longrightarrow}{\rm \mathbb{R}^n}}}} = \bigg \{(x_1 \ldots x_n, y_1 \ldots y_n) \ \bigg | \ x_i \le y_i \ \forall i \in \{1, \ldots , n \} \bigg \}$ \end{center} Also, we know from [2] that there exists an ir-path from $x=(x_1 \ldots x_n) \in \ \stackrel{\rm ir}{\rm \mathbb{R}^n} $ to $y=(y_1 \ldots y_n) \in \ \stackrel{\rm ir}{\rm \mathbb{R}^n}$ iff $y \in \overline{\{x\}}$. Therefore, $(x,y) \in \ {\stackrel{\rm ir}{\rm \Gamma_{\stackrel{\rm ir}{\rm \mathbb{R}^n}}}}$ iff $(y_1 \ldots y_n) \in \ \prod_{i=1}^n [x_i, \infty)$, i.e., iff $x_i \le y_i$ for all $i \in \{1, \ldots , n \}$, which proves the statement. \begin{flushright} $\square$ \end{flushright} \textbf{Theorem 1.} Every d-path in $\stackrel{\rm \longrightarrow}{\rm \mathbb{R}^n}$ is an ir-path in ${\stackrel{\rm ir}{\rm \mathbb{R}^n}}$.\\ \textit{Proof.} We know that d-paths in $\stackrel{\rm \longrightarrow}{\rm \mathbb{R}^n}$ are non-decreasing paths in $\mathbb{R}^n$.
Thus, it suffices to show that every non-decreasing path in $\mathbb{R}^n$ is an ir-path in ${\stackrel{\rm ir}{\rm \mathbb{R}^n}}$.\\ Suppose that $\gamma: I \to \mathbb{R}^n$ is a non-decreasing path from $x=(x_1 \ldots x_n)$ to $y=(y_1 \ldots y_n)$ and $\prod_{i=1}^n (- \infty, m_i)$ is a base element of ${\stackrel{\rm ir}{\rm \mathbb{R}^n}}$. It suffices to show that ${\gamma}^{-1} \bigg (\prod_{i=1}^n (- \infty, m_i) \bigg )$ is an open subset of $\stackrel{\rm ir}{\rm I }$. There are two cases for the $m_i$. If $m_i \le x_i$ for some $1 \le i \le n$, then, since $\gamma$ is non-decreasing, ${\gamma}^{-1} \bigg (\prod_{i=1}^n (- \infty, m_i) \bigg ) = \emptyset$, which is open in $\stackrel{\rm ir}{\rm I }$. If $m_i > x_i$ for all $1 \le i \le n$, then $0 \in {\gamma}^{-1} \bigg (\prod_{i=1}^n (- \infty, m_i) \bigg )$. Moreover, ${\gamma}^{-1} \bigg (\prod_{i=1}^n (- \infty, m_i) \bigg )$ is open in $I$. If it equals $I$ itself, it is open in $\stackrel{\rm ir}{\rm I }$ and we are done. Otherwise, let $[0, s)$ be the maximal initial segment contained in the preimage, so that $s \notin {\gamma}^{-1} \big (\prod_{i=1}^n (- \infty, m_i) \big )$. We claim that the preimage equals $[0, s)$, which is open in $\stackrel{\rm ir}{\rm I }$.\\ Indeed, suppose ${\gamma}^{-1} \big (\prod_{i=1}^n (- \infty, m_i) \big ) = [0, s) \cup U$ with $U \neq \emptyset$, and consider an element $u \in U$. Then $u \ge s$ and, since $\gamma$ is non-decreasing, $\gamma(s) \le \gamma(u) < (m_1 \ldots m_n)$ componentwise, so that $s \in \ {\gamma}^{-1} \big (\prod_{i=1}^n (- \infty, m_i) \big )$, a contradiction.
\begin{flushright} $\square$ \end{flushright} \textbf{Example.} This example demonstrates that there exist ir-paths in ${\stackrel{\rm ir}{\rm \mathbb{R}^n}}$ which are not d-paths in $\stackrel{\rm \longrightarrow}{\rm \mathbb{R}^n}$.\\ Let $\gamma : \stackrel{\rm ir}{\rm I} \to {\stackrel{\rm ir}{\rm \mathbb{R}^n}}$ be an ir-path from $x=(x_1 \ldots x_n)$ to $y=(y_1 \ldots y_n)$, where $x_i < y_i$ for all $i$, defined by\\ \begin{center} $ \gamma(t) = \bigg \{ \begin{array}{ll} x & \hspace{1cm} 0 \le t < 1 \\ y & \hspace{1cm} t = 1 \\ \end{array}$ \end{center} Note that $\gamma$ is indeed an ir-path: the preimage of any base element $\prod_{i=1}^n (-\infty, m_i)$ is $\emptyset$, $[0,1)$, or $[0,1]$, each of which is open in $\stackrel{\rm ir}{\rm I}$. On the other hand, for any $\epsilon > 0$, ${\gamma}^{-1} \bigg (\prod_{i=1}^n (x_i,y_i + \epsilon) \bigg ) = \{1\}$ is not open in $I$, so $\gamma: I \to \mathbb{R}^n$ is not continuous, and hence not a path. Therefore, $\gamma$ is an ir-path that is not a d-path.
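The key step of Theorem 1 -- that the preimage of a basic open set under a non-decreasing path is an initial segment $[0,s)$ (or empty, or all of $I$) -- can be checked numerically on a discretized path. The following is a sanity check under stated assumptions, not a proof:

```python
# Discrete sanity check of Theorem 1: for a non-decreasing path in R^n,
# the preimage of a basic open set prod_i (-inf, m_i) of the left-order
# topology is a down-set of [0,1], i.e. a True-prefix of the sample grid.
import numpy as np

rng = np.random.default_rng(1)
n, steps = 3, 1000

# A non-decreasing path: cumulative sums of non-negative increments
gamma = np.cumsum(rng.uniform(0, 0.01, size=(steps, n)), axis=0)

for _ in range(100):
    m = rng.uniform(-1, gamma[-1] + 1)       # random thresholds m_i
    inside = (gamma < m).all(axis=1)         # gamma(t) in prod_i (-inf, m_i)?
    # Down-set property: once the path leaves the set, it never re-enters
    first_out = np.argmax(~inside) if not inside.all() else steps
    assert inside[:first_out].all() and not inside[first_out:].any()
```

The check passes because each coordinate of the path is monotone, so the membership indicator can only switch from True to False, exactly as in the proof.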
\section{Introduction} Horizontal gene transfer (HGT) laterally introduces foreign genetic material into a genome. The phenomenon is particularly frequent in prokaryotes \cite{Soucy:15,NelsonSathi:15} but also contributed to shaping eukaryotic genomes \cite{Keeling:08,Husnik:18,Acuna:12,Li:14,Moran:10,Schonknecht:13}. HGT may be additive, in which case its effect is similar to gene duplications, or lead to the replacement of a vertically inherited homolog. From a phylogenetic perspective, HGT leads to an incongruence of gene trees and species trees, thus complicating the analysis of gene family histories. A broad spectrum of computational methods have been developed to identify horizontally transferred genes and/or HGT events, recently reviewed by \citet{Ravenhall:15}. Parametric methods use genomic signatures, i.e., sequence features specific to a (group of) species, to identify horizontally inserted material. Genomic signatures include e.g.\ GC content, $k$-mer distributions, sequence autocorrelation, or DNA deformability \cite{Dufraigne:05,Becq:10}. Direct (or ``explicit'') phylogenetic methods start from a given gene tree $T$ and species tree $S$ and compute a reconciliation, i.e., a mapping of the gene tree into the species tree. This problem first arose in the context of host/parasite assemblages \cite{Page:94,Charleston:98}, which considered the equivalent problem of mapping a parasite tree $T$ to a host phylogeny $S$ such that the number of events such as host-switches, i.e., horizontal transfers, is minimized. For a review of the early literature we refer to \cite{Charleston:06}. A major difficulty is to enforce time consistency in the presence of multiple horizontal transfer events, which renders the problem of finding optimal reconciliations NP-hard \cite{Hallett:01,Ovadia:11,Tofigh:11,Hasic:19}. Nevertheless, several practical approaches have become available, see e.g.\ \cite{Tofigh:11,Chen:12,Ma:18}.
Indirect (or ``implicit'') phylogenetic methods forego the reconstruction of trees and start from sequence similarity or evolutionary distances and use unexpectedly small or large distances between genes as indicators of HGT. While indirect methods have been used successfully in the past, reviewed by \citet{Ravenhall:15}, they have received very little attention from a more formal point of view. In this contribution, we focus on a particular type of implicit phylogenetic information, following the ideas of \citet{Novichkov:04}. The basic idea is that the evolutionary distance between orthologous genes is approximately proportional to the distances between their species. Xenologous gene pairs as well as duplicate genes thus appear as outliers \cite{Lawrence:92,Clarke:02,Novichkov:04,Dessimoz:08}. More precisely, consider a family of homologous genes in a set of species and plot the phylogenetic distance of pairs of most similar homologs as a function of the phylogenetic distances between the species in which they reside. Since distances between orthologous genes can be expected to be approximately proportional to the distances between the species, orthologous pairs fall onto a regression line that defines equal divergence time for the last common ancestor of corresponding gene and species pairs. The gene pairs with ``later divergence times'', i.e., those that are more closely related than expected from their species, fall below the regression line \cite{Novichkov:04}. \citet{Kanhere:09} complemented this idea with a statistical test based on the Cook distance to identify xenologous pairs in a statistically sound manner. For the mathematical analysis we assume that we can perfectly identify all pairs of genes $a$ and $b$ that are more closely related than expected from the phylogenetic distance of their respective genomes. Naturally, this defines a graph $(G,\sigma)$, whose vertices $x$ (the genes) are colored by the species $\sigma(x)$ in which they appear. 
Here, we are interested in two questions: \begin{enumerate} \item[(1)] What are the mathematical properties that characterize these ``\emph{later-divergence-time}'' (\emph{LDT}) graphs? \item[(2)] What kind of information about HGT events, the gene and species tree, and the reconciliation map between them is contained implicitly in an LDT graph? \end{enumerate} In Sec.~\ref{sect:edit} we will briefly consider the situation that later-divergence-time information is fraught with experimental errors. These questions are motivated by a series of recent publications that characterized the mathematical structure of orthology \cite{Hellmuth:13a,Lafond:14}, the xenology relation \textit{sensu} Fitch \cite{Geiss:18a,Hellmuth:18a,Hellmuth:2019a}, and the (reciprocal) best match relation \cite{Geiss:19a,Geiss:20a,Schaller:20x,Schaller:21g}. Each of these relations satisfies stringent mathematical conditions that -- at least in principle -- can be used to correct empirical estimates and thus serve as a potential means of noise reduction \cite{Hellmuth:15a,Stadler:20a}. This approach has also led to efficient algorithms to extract gene trees, species trees, and reconciliations from the relation data. Although the resulting representations of gene family histories are usually not fully resolved, they can provide important constraints for subsequent refinements. The primary advantage of the relation-based approach is its robustness. While the inference of phylogenetic trees relies on detailed probability models or the additivity of distance metrics, our approach starts from yes/no answers to simple, pairwise comparisons. These data can therefore be represented as edges in a graph, possibly augmented by a measure of confidence. Noise and inaccuracies in the initial estimates then translate into violations of the required mathematical properties of the graphs in question.
Graph editing approaches can therefore be harnessed as a means of noise reduction \cite{Hellmuth:15a,dondi2017approximating,Lafond:14,Lafond:16, Hellmuth:20a,Hellmuth:20b,Schaller:20y}. Previous work following this paradigm has largely been confined to duplication-loss (DL) scenarios, excluding horizontal transfer. As shown in \cite{Hellmuth:2017}, it is possible to partition a gene set into HGT-free classes separated by HGTs. Within each class, the reconstruction problems then simplify to the much easier DL scenarios. It is of utmost interest, therefore, to find robust methods to infer this partition directly from (dis)similarity data. Here, we explore the usefulness and limitations of LDT graphs for this purpose. This contribution is organized as follows. After introducing the necessary notation, we introduce \emph{relaxed scenarios}, a very general framework to describe evolutionary scenarios that emphasizes time consistency of reconciliation rather than particular types of evolutionary events. In Sec.~\ref{sect:LDT}, LDT graphs are defined formally and characterized as those properly colored cographs for which a set of accompanying rooted triples is consistent (Thm.~\ref{thm:characterization}). The proof is constructive and provides a method (Algorithm~\ref{alg:Ru-recognition}) to compute a relaxed scenario for a given LDT graph. Sec.~\ref{sect:HGT} defines HGT events, shows that every edge in an LDT graph corresponds to an HGT event, and characterizes those LDT graphs that already capture all HGT events. In addition, we provide a characterization of ``rs-Fitch graphs'' (general vertex-colored graphs that capture all HGT events) in terms of their coloring. These properties can be verified in polynomial time. Since LDT graphs do not usually capture all HGT events, we discuss in Sec.~\ref{sect:edit} several ways to obtain a plausible set of HGT candidates from LDT graphs.
In Sec.~\ref{sect:simul}, we address the question ``how much information about all HGT events is contained in LDT graphs'' with the help of simulations of evolutionary scenarios with a wide range of duplication, loss, and HGT events. We find that LDT graphs cover roughly a third of xenologous pairs, while a simple greedy graph editing scheme can more than double the recall at moderate false positive rates. This greedy approach already yields a median accuracy of $89 \%$, and in $99.8 \%$ of the cases produces biologically feasible solutions in the sense that the inferred graphs are rs-Fitch graphs. We close with a discussion of several open problems and directions for future research in Sec.~\ref{sect:concl}. The material of this contribution is extensive and contains several lengthy, very technical proofs. We therefore divided the presentation into a Narrative Part that contains only those mathematical results that contribute to our main conclusions, and a Technical Part providing additional results and all proofs. To facilitate cross-referencing between the two parts, the same numbering of Definitions, Lemmas, Theorems, etc., is used. Sections \ref{TP:sect:LDT}, \ref{TP:sect:HGT}, and \ref{app:edit} contain the technical material corresponding to Sections \ref{sect:LDT}, \ref{sect:HGT}, and \ref{sect:edit}, respectively. \section{Notation} \paragraph{Graphs.} We consider undirected graphs $G=(V,E)$ with vertex set $V(G)\coloneqq V$ and edge set $E(G)\coloneqq E$, and denote edges connecting vertices $x,y\in V$ by $xy$. The graphs $K_1$ and $K_2$ denote the complete graphs on one and two vertices, respectively. The graph $K_2+K_1$ is the disjoint union of a $K_2$ and a $K_1$. The join $G\join H$ of two graphs $G=(V,E)$ and $H=(W,F)$ is the graph with vertex set $V\charfusion[\mathbin]{\cup}{\cdot} W$ and edge set $E\charfusion[\mathbin]{\cup}{\cdot} F\charfusion[\mathbin]{\cup}{\cdot} \{xy\mid x\in V,y\in W\}$. 
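As a small illustration (a hypothetical helper, not part of the paper), the join operation defined above can be sketched with a plain dict-of-sets graph representation:

```python
# Sketch of the join G * H of two vertex-disjoint graphs: keep all edges of
# G and H and additionally connect every vertex of G with every vertex of H.
from itertools import product

def join(G, H):
    """G, H: dicts mapping each vertex to its set of neighbours
    (vertex sets assumed disjoint)."""
    J = {v: set(nbrs) for v, nbrs in G.items()}
    J.update({v: set(nbrs) for v, nbrs in H.items()})
    for x, y in product(G, H):      # add all cross edges xy
        J[x].add(y)
        J[y].add(x)
    return J

# Example: joining K2 with K1 yields the complete graph on three vertices
K2 = {"a": {"b"}, "b": {"a"}}
K1 = {"c": set()}
J = join(K2, K1)
assert J["c"] == {"a", "b"} and "c" in J["a"]
```

The example also illustrates why complete multipartite graphs arise as iterated joins of edgeless graphs: each independent set contributes no internal edges, while the join supplies all edges between different sets.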
We write $H\subseteq G$ if $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$, in which case $H$ is called a \emph{subgraph of $G$}. Given a graph $G=(V,E)$, we write $G[W]$ for the subgraph of $G$ induced by $W\subseteq V$. A \emph{connected component} $C$ of $G$ is an inclusion-maximal vertex set such that $G[C]$ is connected. A \emph{(maximal) clique} $C$ in an undirected graph $G$ is an (inclusion-maximal) vertex set such that, for all vertices $x,y\in C$, it holds that $xy\in E(G)$, i.e., $G[C]$ is \emph{complete}. A subset $W\subseteq V$ is a \emph{(maximal) independent set} if $G[W]$ is edgeless (and $W$ is maximal w.r.t.\ inclusion). A graph $G = (V,E)$ is \emph{complete multipartite} if $V$ consists of $k\ge 1$ pairwise disjoint independent sets $I_1,\dots, I_k$ and $xy\in E$ if and only if $x\in I_i$ and $y\in I_j$ with $i\neq j$. A graph $G$ together with a vertex coloring $\sigma$, denoted by $(G,\sigma)$, is \emph{properly colored} if $uv \in E(G)$ implies $\sigma(u)\neq \sigma(v)$. For a coloring $\sigma\colon V\to M$ and a subset $W\subseteq V$, we write $\sigma(W) \coloneqq \{\sigma(w)\mid w\in W\}$ for the set of colors that appear on the vertices in $W$. Throughout, we will need restrictions of the coloring map $\sigma$. \begin{definition} Let $\sigma\colon L\to M$ be a map, $L'\subseteq L$ and $\sigma(L') \subseteq M' \subseteq M$. Then, the map $\sigma_{|L',M'}\colon L'\to M'$ is defined by putting $\sigma_{|L',M'}(v) = \sigma(v)$ for all $v\in L'$. If we only restrict the domain of $\sigma$, we just write $\sigma_{|L'}$ instead of $\sigma_{|L',M}$. \label{def:sigma-restrictions} \end{definition} We assume neither that $\sigma$ nor its restriction $\sigma_{|L',M'}$ is surjective. \paragraph{Rooted Trees.} All trees appearing in this contribution are rooted in one of their vertices. We write $x \preceq_{T} y$ if $y$ lies on the unique path from the root to $x$, in which case $y$ is called an ancestor of $x$, and $x$ is called a descendant of $y$.
We may also write $y \succeq_{T} x$ instead of $x \preceq_{T} y$. We use $x \prec_T y$ for $x \preceq_{T} y$ and $x \neq y$. In the latter case, $y$ is a \emph{strict ancestor} of $x$. If $x \preceq_{T} y$ or $y \preceq_{T} x$, the vertices $x$ and $y$ are \emph{comparable} and, otherwise, \emph{incomparable}. We write $L(T)$ for the set of leaves of the tree $T$, i.e., the $\preceq_T$-minimal vertices and say that $T$ is a tree \emph{on $L(T)$}. We write $T(u)$ for the subtree of $T$ rooted in $u$. The \emph{last common ancestor} of a vertex set $W\subseteq V(T)$ is the $\preceq_T$-minimal vertex $u\coloneqq \lca_T(W)$ for which $w\preceq_T u$ for all $w\in W$. For brevity we write $\lca_T(x,y)=\lca_T(\{x,y\})$. We employ the convention that edges $(x,y)$ in a tree are always written such that $y \preceq_{T} x$ is satisfied. If $(x,y)$ is an edge in $T$, then $\parent(y)\coloneqq x$ is the \emph{parent} of $y$, and $y$ the \emph{child} of $x$. We denote with $\child_T(x)$ the set of all children of $x$ in $T$. It will be convenient for the discussion below to extend the ancestor relation $\preceq_T$ on $V$ to the union of the edge and vertex sets of $T$. More precisely, for a vertex $x\in V(T)$ and an edge $e=(u,v)\in E(T)$ we put $x \prec_T e$ if and only if $x\preceq_T v$; and $e \prec_T x$ if and only if $u\preceq_T x$. In addition, for edges $e=(u,v)$ and $f=(a,b)$ in $T$ we put $e\preceq_T f$ if and only if $v \preceq_T b$. A rooted tree is \emph{phylogenetic} if all vertices that are adjacent to at least two vertices have at least two children. A rooted tree $T$ is planted if its root has degree $1$. In this case, we denote the ``planted root'' by $0_T$. In planted phylogenetic trees there is a unique ``planted edge'' $(0_T,\rho_T)$ where $\rho_T\coloneqq \lca_T(L(T))$. Note that by definition $0_T\notin L(T)$. \emph{Throughout, we will assume that all trees are rooted and phylogenetic unless explicitly stated otherwise. 
Whenever there is no danger of confusion, we will refer also to planted phylogenetic trees simply as trees.} The set of \emph{inner vertices} is given by $V^0(T)\coloneqq V(T)\setminus (L(T)\cup \{0_T\})$. An edge $(u,v)$ is an \emph{inner} edge if both vertices $u$ and $v$ are inner vertices and, otherwise, an \emph{outer} edge. The restriction of $T$ to a subset $L'\subseteq L(T)$ of leaves, denoted by $T_{|L'}$, is obtained by taking the (unique) minimal subtree of $T$ that connects all leaves in $L'$, and suppressing all vertices with degree two except possibly the root $\rho_{T_{|L'}}=\lca_T(L')$. $T$ \emph{displays} a tree $T'$, in symbols $T'\le T$, if $T'$ can be obtained from a restriction $T_{|L'}$ of $T$ by a series of inner edge contractions \cite{Bryant:95}. If, in addition, $L(T)=L(T')$, then $T$ is a \emph{refinement} of $T'$. Throughout this contribution, we will consider leaf-colored trees $(T,\sigma)$ with $\sigma$ being defined for $L(T)$ only. \paragraph{Rooted Triples.} A rooted triple is a tree $T$ on three leaves and two internal vertices. We write $ab|c$ for the triple with $\lca_T(a,b)\prec \lca_T(a,c)=\lca_T(b,c)$. For a set $\mathscr{R}$ of triples we write $L(\mathscr{R})\coloneqq \bigcup_{\mathsf{t}\in\mathscr{R}}L(\mathsf{t})$. The set $\mathscr{R}$ is \emph{compatible} if there is a tree $T$ with $L(\mathscr{R}) \subseteq L(T)$ that displays every triple $\mathsf{t}\in\mathscr{R}$. The construction of such a tree $T$ from a triple set $\mathscr{R}$ on $L$ makes use of an auxiliary graph that will play a prominent role in this contribution. \begin{definition}{\cite{Aho:81}} Let $\mathscr{R}$ be a set of rooted triples on the vertex set $L$. The \emph{Aho graph} $[\mathscr{R},L]$ has vertex set $L$ and edge set $\{ xy \mid \exists z\in L:\, xy|z \in\mathscr{R}\}$.
\end{definition} The algorithm \texttt{BUILD} \cite{Aho:81} uses Aho graphs in a top-down recursion starting from a given set of triples $\mathscr{R}$ and returns for compatible triple sets $\mathscr{R}$ on $L$ an unambiguously defined tree $\Aho(\mathscr{R}, L)$ on $L$, which is known as the \emph{Aho tree}. \texttt{BUILD} runs in polynomial time. The key property of the Aho graph that ensures the correctness of \texttt{BUILD} can be stated as follows: \begin{proposition}{\cite{Aho:81,Bryant:95}} A set of triples $\mathscr{R}$ is compatible if and only if for each subset $L\subseteq L(\mathscr{R})$ with $|L|>1$ the graph $[\mathscr{R},L]$ is disconnected. \label{prop:ahograph} \end{proposition} \paragraph{Cographs} are recursively defined as undirected graphs that can be generated as joins or disjoint unions of cographs, starting from single-vertex graphs $K_1$. The recursive construction defines a rooted tree $(T,t)$, called \emph{cotree}, whose leaves are the vertices of the cograph $G$, i.e., the $K_1$s, while each inner vertex $u$ of $T$ represents a join or disjoint union operation, labeled as $t(u)=1$ and $t(u)=0$, respectively. Hence, for a given cograph $G$ and its cotree $(T,t)$, we have $xy\in E(G)$ if and only if $t(\lca_T(x,y))=1$. Contraction of all tree edges $(u,v)\in E(T)$ with $t(u)=t(v)$ results in the \emph{discriminating cotree} $(T_G,\hat t)$ of $G$ with cotree-labeling $\hat t$ such that $\hat t(u)\ne \hat t(v)$ for any two adjacent interior vertices of $T_G$. The discriminating cotree $(T_G,\hat t)$ is uniquely determined by $G$ \cite{Corneil:81}. Cographs have a large number of equivalent characterizations. In this contribution, we will need the following classical results: \begin{proposition}{\cite{Corneil:81}} Given an undirected graph $G$, the following statements are equivalent: \begin{enumerate} \item $G$ is a cograph. \item $G$ does not contain a $P_4$, i.e., a path on four vertices, as an induced subgraph.
\item $\mathrm{diam}(H) \leq 2$ for all connected induced subgraphs $H$ of $G$. \item Every induced subgraph $H$ of $G$ is a cograph. \end{enumerate} \label{prop:cograph} \end{proposition} \section{Relaxed Reconciliation Maps and Relaxed Scenarios} \label{sec:reconciliation} \citet{Tofigh:11} and \citet{Bansal:12} define ``Duplication-Transfer-Loss'' (DTL) scenarios in terms of a vertex-only map $\gamma:V(T)\to V(S)$. The H-trees introduced by \citet{Gorecki:2010,Gorecki:2012} formalize the same concept in a very different manner. A definition of a DTL-like class of scenarios in terms of a reconciliation map $\mu: V(T)\to V(S)\cup E(S)$ was analyzed by \citet{Nojgaard:18a}. For binary trees, the two definitions are equivalent; for non-binary trees, however, the DTL-scenarios are a proper subset, see \cite[Fig.~1]{Nojgaard:18a} for an example. Several other mathematical frameworks have been used in the literature to specify evolutionary scenarios. Examples include the DLS-trees of \citet{Gorecki:06}, which can be seen as event-labeled gene trees with leaves denoting both surviving genes and loss-events, maps $g:V(S')\to 2^{V(T)}$ from a suitable subdivision $S'$ of the species tree $S$ to the gene tree as used by \citet{Hallett:01}, and associations of edges, i.e., subsets of $E(T)\times E(S)$ \cite{Wieseke:13}. In the presence of HGT, the relationships of gene trees and species are not only constrained by local conditions corresponding to the admissible local evolutionary events (duplication, speciation, gene loss, and HGT) but also by the global condition that the HGT events within each lineage admit a temporal order \cite{Merkle:05,Gorbunov:09,Tofigh:11}. In order to capture time consistency from the outset and to establish the mathematical framework, we consider here trees with explicit timing information \cite{Merkle:05}. 
\begin{definition}[Time Map] \label{def:time-map} The map $\ensuremath{\tau_{T}} : V(T) \rightarrow \mathbb{R}$ is a time map for a tree $T$ if $x\prec_T y$ implies $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$ for all $x,y\in V(T)$. \end{definition} It is important to note that only \emph{qualitative}, relative timing information will be used in practice, i.e., we will never need the actual value of time maps but only information on whether an event pre-dates, post-dates, or is concurrent with another. Def.~\ref{def:time-map} ensures that the ancestor relation $\preceq_T$ and the timing of the vertices are not in conflict. For later reference, we provide the following simple result. \begin{lemma} Given a tree $T$, a time map $\ensuremath{\tau_{T}}$ for $T$ satisfying $\ensuremath{\tau_{T}}(x)=\tau_0(x)$ with arbitrary choices of $\tau_0(x)$ for all $x\in L(T)$ can be constructed in linear time. \label{lem:arbitrary-tT} \end{lemma} \begin{proof} We traverse $T$ in postorder. If $x$ is a leaf, we set $\ensuremath{\tau_{T}}(x)=\tau_0(x)$, and otherwise compute $t\coloneqq\max_{u\in\child(x)} \ensuremath{\tau_{T}}(u)$ and set $\ensuremath{\tau_{T}}(x)=t'$ with an arbitrary value $t'> t$. Clearly, the total effort is $O(|V(T)|+|E(T)|)$, and thus also linear in the number of leaves $|L(T)|$. \end{proof} Lemma~\ref{lem:arbitrary-tT} will be useful for the construction of time maps as it, in particular, allows us to put $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{T}}(y)$ for all $x,y\in L(T)$. \begin{definition}[Time Consistency] \label{def:tc-map} Let $T$ and $S$ be two trees. A map $\mu \colon V(T) \to V(S) \cup E(S)$ is called \emph{time-consistent} if there are time maps $\ensuremath{\tau_{T}}$ for $T$ and $\ensuremath{\tau_{S}}$ for $S$ satisfying the following conditions for all $u \in V(T)$: \begin{description} \item[(C1)] If $\mu(u) \in V(S)$, then $\ensuremath{\tau_{T}}(u) = \ensuremath{\tau_{S}}(\mu(u))$.
\item[(C2)] Else, if $\mu(u) = (x,y) \in E(S)$, then $\ensuremath{\tau_{S}}(y)<\ensuremath{\tau_{T}}(u)<\ensuremath{\tau_{S}}(x)$. \end{description} \end{definition} Conditions (C1) and (C2) ensure that the reconciliation map $\mu$ preserves time in the following sense: If vertex $u$ of the gene tree is mapped to a vertex $\mu(u)=v$ in the species tree, then $u$ and $v$ receive the same time stamp by Condition (C1). If $u$ is mapped to an edge $\mu(u) = (x,y)$, then the time stamp of $u$ falls strictly within the time range $(\ensuremath{\tau_{S}}(y),\ensuremath{\tau_{S}}(x))$ of the edge $xy$ in the species tree. The following definition of reconciliation is designed (1) to be general enough to encompass the notions of reconciliation that have been studied in the literature, and (2) to separate the mapping between gene tree and species tree from specific types of events. Event types such as duplication or horizontal transfer are therefore considered here as a matter of \emph{interpreting} scenarios, not as part of their definition. \begin{definition}[Relaxed Reconciliation Map] Let $T$ and $S$ be two planted trees with leaf sets $L(T)$ and $L(S)$, respectively, and let $\sigma:L(T)\to L(S)$ be a map. A map $\mu \colon V(T)\to V(S)\cup E(S)$ is a \emph{relaxed reconciliation map} for $(T,S,\sigma)$ if the following conditions are satisfied: \begin{description} \item[(G0)] \emph{Root Constraint.} $\mu(x) = 0_{S}$ if and only if $x = 0_{T}$. \item[(G1)] \emph{Leaf Constraint.} $\mu(x)=\sigma(x)$ if and only if $x\in L(T)$. \item[(G2)] \emph{Time Consistency Constraint.} The map $\mu$ is time-consistent for some time maps $\ensuremath{\tau_{T}}$ for $T$ and $\ensuremath{\tau_{S}}$ for $S$. \end{description} \label{def:relaxed-reconc} \end{definition} Condition (G0) is used to map the respective planted roots. (G1) ensures that genes are mapped to the species in which they reside. (G2) enforces time consistency.
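The postorder construction in the proof of Lemma~\ref{lem:arbitrary-tT} can be sketched directly. The dict-based tree encoding and the increment of $1$ (any value $t'>t$ works) are our own choices for illustration:

```python
def build_time_map(children, root, tau0):
    """Postorder construction of a time map (cf. Lemma arbitrary-tT):
    every leaf x gets its prescribed value tau0[x], and every inner vertex
    a value strictly larger than the maximum over its children, so that
    x strictly below y implies tau(x) < tau(y)."""
    tau = {}

    def postorder(v):
        if not children.get(v):                 # v is a leaf
            tau[v] = tau0[v]
        else:
            for c in children[v]:
                postorder(c)
            # any t' > max over the children works; we use max + 1
            tau[v] = max(tau[c] for c in children[v]) + 1.0

    postorder(root)
    return tau

# toy tree: root -> {u, c}, u -> {a, b}; all leaves at common time 0
children = {"root": ["u", "c"], "u": ["a", "b"]}
tau = build_time_map(children, "root", {"a": 0.0, "b": 0.0, "c": 0.0})
```

In particular, choosing $\tau_0\equiv 0$ realizes the common leaf time used later when constructing relaxed scenarios explicitly.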
The reconciliation maps most commonly used in the literature, see e.g.\ \cite{Tofigh:11,Bansal:12}, usually not only satisfy (G0)--(G2) but are also subject to additional conditions. We therefore call the map $\mu$ defined here ``relaxed''. \begin{definition}[Relaxed Scenario] The 6-tuple $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is a \emph{relaxed scenario} if $\mu$ is a relaxed reconciliation map for $(T,S,\sigma)$ that satisfies (G2) w.r.t.\ the time maps $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$. \label{def:relaxed-scenario} \end{definition} By definition, relaxed reconciliation maps are time-consistent. Moreover, $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(\sigma(x))$ for all $x \in L(T)$ by Def.~\ref{def:tc-map}(C1) and Def.~\ref{def:relaxed-reconc}(G1,G2). In the following we will refer to the map $\sigma:L(T)\to L(S)$ as the \emph{coloring of $\ensuremath{\mathscr{S}}$}. \section{Later-Divergence-Time Graphs} \label{sect:LDT} \subsection{LDT Graphs and $\mu$-free Scenarios} In the absence of horizontal gene transfer, the last common ancestor of two species $A$ and $B$ should mark the latest possible time point at which two genes $a$ and $b$ residing in $\sigma(a)=A$ and $\sigma(b)=B$, respectively, may have diverged. Situations in which this constraint is violated are therefore indicative of HGT. To address this issue in more detail, we next define ``$\mu$-free scenarios'' that eventually will lead us to the class of ``LDT graphs'', which contain all information about genes that diverged after the species in which they reside. \begin{cdefinition}{\ref{def:mu-free}}[{$\mathbf{\mu}$}-free scenario] Let $T$ and $S$ be planted trees, $\sigma\colon L(T)\to L(S)$ be a map, and $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$ be time maps of $T$ and $S$, respectively, such that $\ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{S}}(\sigma(x))$ for all $x\in L(T)$.
Then, $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is called a \emph{$\mu$-free scenario}. \end{cdefinition} This definition of a scenario without a reconciliation map $\mu$ is mainly a technical convenience that simplifies the arguments in various proofs by avoiding the construction of a reconciliation map. It is motivated by the observation that the ``later-divergence-time'' of two genes in comparison with their species is independent of any such $\mu$. Every relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ implies an underlying $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$. Statements proved for $\mu$-free scenarios therefore also hold for relaxed scenarios. Note that, by Lemma~\ref{lem:arbitrary-tT}, given the time map $\ensuremath{\tau_{S}}$, one can easily construct a time map $\ensuremath{\tau_{T}}$ such that $\ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{S}}(\sigma(x))$ for all $x\in L(T)$. In particular, when constructing relaxed scenarios explicitly, we may simply choose $\ensuremath{\tau_{T}}(u)=0$ and $\ensuremath{\tau_{S}}(x)=0$ as common time for all leaves $u\in L(T)$ and $x\in L(S)$. Although not every $\mu$-free scenario admits a reconciliation map and can thus be turned into a relaxed scenario, Lemma~\ref{lem:mfscen} below implies that for every $\mu$-free scenario $\ensuremath{\mathscr{T}}$ there is a relaxed scenario with possibly slightly distorted time maps that encodes the same LDT graph as $\ensuremath{\mathscr{T}}$.
\begin{cdefinition}{\ref{def:LDTgraph}}[LDT graph] For a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, we define $\Gu(\ensuremath{\mathscr{T}}) = \Gu(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}}) = (V,E)$ as the graph with vertex set $V\coloneqq L(T)$ and edge set \begin{equation*} E \coloneqq \{ab\mid a,b\in L(T),\, \ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(b)))\}. \end{equation*} A vertex-colored graph $(G,\sigma)$ is a \emph{later-divergence-time graph (LDT graph)}, if there is a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $G=\Gu(\ensuremath{\mathscr{T}})$. In this case, we say that $\ensuremath{\mathscr{T}}$ \emph{explains} $(G,\sigma)$. \end{cdefinition} It is easy to see that the edge set of $\Gu(\ensuremath{\mathscr{T}})$ defines an \emph{undirected} graph and that two genes $a$ and $b$ form an edge if and only if the divergence time of $a$ and $b$ is strictly less than the divergence time of the underlying species $\sigma(a)$ and $\sigma(b)$. Moreover, there are no edges of the form $aa$, since $\ensuremath{\tau_{T}}(\lca_T(a,a)) = \ensuremath{\tau_{T}}(a) = \ensuremath{\tau_{S}}(\sigma(a)) =\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(a)))$. Hence, $\Gu(\ensuremath{\mathscr{T}})$ is a simple graph. By definition, every relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ satisfies $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(\sigma(x))$ for all $x \in L(T)$. Therefore, removing $\mu$ from $\ensuremath{\mathscr{S}}$ yields a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$. Thus, we will use the following simplified notation.
\begin{cdefinition}{\ref{def:Gu-scen}} We put $\Gu(\ensuremath{\mathscr{S}}) \coloneqq \Gu(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ for a given relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ and the underlying $\mu$-free scenario $(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ and say, by slight abuse of notation, that $\ensuremath{\mathscr{S}}$ \emph{explains} $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$. \end{cdefinition} The next two results show that the existence of a reconciliation map $\mu$ does not impose additional constraints on LDT graphs. \begin{clemma}{\ref{lem:mfscen}} For every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\widetilde\ensuremath{\tau_{T}},\widetilde\ensuremath{\tau_{S}})$ for $T, S$ and $\sigma$ such that $(\Gu(\ensuremath{\mathscr{T}}),\sigma) = (\Gu(\ensuremath{\mathscr{S}}), \sigma)$. \end{clemma} \begin{ctheorem}{\ref{thm:LDT-scen}} $(G,\sigma)$ is an LDT graph if and only if there is a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $(G,\sigma) = (\Gu(\ensuremath{\mathscr{S}}),\sigma)$. \end{ctheorem} \begin{remark} From here on, we omit the explicit reference to Lemma~\ref{lem:mfscen} and Thm.~\ref{thm:LDT-scen} and assume that the reader is aware of the fact that every LDT graph is explained by some relaxed scenario $\ensuremath{\mathscr{S}}$ and that for every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, there is a relaxed scenario $\ensuremath{\mathscr{S}}$ for $T, S$ and $\sigma$ such that $(\Gu(\ensuremath{\mathscr{T}}),\sigma) = (\Gu(\ensuremath{\mathscr{S}}), \sigma)$. 
\end{remark} \begin{figure}[t] \begin{center} \includegraphics[width=0.7\textwidth]{./images-Rb/Gu-example2.pdf} \end{center} \caption{Top row: A relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ (left) with its LDT graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ (right). The reconciliation map $\mu$ is shown implicitly by the embedding of the gene tree $T$ into the species tree $S$. The times $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$ are indicated by the position on the vertical axis, i.e., if a vertex $x$ is drawn higher than a vertex $y$, this implies $\ensuremath{\tau_{T}}(y)<\ensuremath{\tau_{T}}(x)$. In subsequent figures we will not show the time maps explicitly. Bottom row: Another relaxed scenario $\ensuremath{\mathscr{S}}' =(T',S',\sigma',\mu',\ensuremath{\tau_{T}}',\ensuremath{\tau_{S}}')$ with a connected LDT graph $(\Gu(\ensuremath{\mathscr{S}}'),\sigma')$. As we shall see, connectedness of an LDT graph depends on the relative timing of the roots of the gene and species tree (cf.\ Lemma~\ref{lem:Gu-connected}). } \label{fig:Gu-example} \end{figure} \subsection{Properties of LDT Graphs} We continue by deriving several interesting characteristics of LDT graphs. \begin{cproposition}{\ref{prop:properCol}} Every LDT graph $(G,\sigma)$ is properly colored. \end{cproposition} As we shall see below, LDT graphs $(G,\sigma)$ contain detailed information about both the underlying gene trees $T$ and species trees $S$ for \emph{all} $\mu$-free scenarios that explain $(G,\sigma)$, and thus by Lemma~\ref{lem:mfscen} and Thm.~\ref{thm:LDT-scen} also about every relaxed scenario $\ensuremath{\mathscr{S}}$ satisfying $G=\Gu(\ensuremath{\mathscr{S}})$. This information is encoded in the form of certain rooted triples that can be retrieved directly from local features in the colored graphs $(G,\sigma)$.
\begin{cdefinition}{\ref{def:infoTriples}} For a graph $G=(L,E)$, we define the set of triples on $L$ as \begin{equation*} \ensuremath{\mathfrak{T}}(G) \coloneqq \{xy|z \; \colon x,y,z\in L \text{ are pairwise distinct, } xy\in E,\; xz,yz\notin E\} \,. \end{equation*} If $G$ is endowed with a coloring $\sigma\colon L\to M$ we also define a set of color triples \begin{align*} \ensuremath{\mathfrak{S}}(G,\sigma) \coloneqq \{\sigma(x)\sigma(y)|\sigma(z)\; \colon & x,y,z\in L,\, \sigma(x),\sigma(y),\sigma(z) \text{ are pairwise distinct},\\ &xz, yz\in E,\; xy\notin E\}. \end{align*} \end{cdefinition} \begin{clemma}{\ref{lem:Ru-SpeciesTriple}} If a graph $(G,\sigma)$ is an LDT graph, then $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible and $S$ displays $\ensuremath{\mathfrak{S}}(G,\sigma)$ for every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$. \end{clemma} The next lemma shows that induced $K_2+K_1$ subgraphs in LDT graphs imply triples that must be displayed by the gene tree $T$. \begin{clemma}{\ref{lem:Ru-GeneTriple}} If $(G,\sigma)$ is an LDT graph, then $\ensuremath{\mathfrak{T}}(G)$ is compatible and $T$ displays $\ensuremath{\mathfrak{T}}(G)$ for every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$. \end{clemma} The next result shows that LDT graphs cannot contain induced $P_4$s. \begin{clemma}{\ref{lem:propcolcograph}} Every LDT graph $(G,\sigma)$ is a properly colored cograph. \end{clemma} The converse of Lemma~\ref{lem:propcolcograph} is not true in general. To see this, consider the properly colored cograph $(G,\sigma)$ with vertex set $V(G)=\{a,a',b,b',c,c'\}$, edges $ab$, $bc$, $a'b'$, $a'c'$, and coloring $\sigma(a)=\sigma(a')=A$, $\sigma(b)=\sigma(b')=B$, and $\sigma(c)=\sigma(c')=C$ with $A,B,C$ being pairwise distinct.
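The triple sets of Def.~\ref{def:infoTriples} only depend on induced three-vertex subgraphs and can be extracted by brute force. The following sketch (our own encoding: a triple $AB|C$ is stored as the pair $(\{A,B\},C)$) computes $\ensuremath{\mathfrak{S}}(G,\sigma)$ for this example graph:

```python
from itertools import permutations

def species_triples(V, E, sigma):
    """S(G, sigma) of Def. infoTriples: all triples sigma(x)sigma(y)|sigma(z)
    with pairwise distinct colours, xz, yz in E and xy not in E.
    A triple AB|C is encoded as (frozenset({A, B}), C)."""
    S = set()
    for x, y, z in permutations(V, 3):
        if (len({sigma[x], sigma[y], sigma[z]}) == 3
                and frozenset((x, z)) in E and frozenset((y, z)) in E
                and frozenset((x, y)) not in E):
            S.add((frozenset((sigma[x], sigma[y])), sigma[z]))
    return S

# the counterexample graph; a2, b2, c2 stand for a', b', c'
V = ["a", "a2", "b", "b2", "c", "c2"]
E = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("a2", "b2"), ("a2", "c2")]}
sigma = {"a": "A", "a2": "A", "b": "B", "b2": "B", "c": "C", "c2": "C"}
# species_triples(V, E, sigma) yields exactly AC|B and BC|A
```
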
In this case, $\ensuremath{\mathfrak{S}}(G,\sigma)$ contains the triples $AC|B$ and $BC|A$. By Lemma~\ref{lem:Ru-SpeciesTriple}, the tree $S$ in every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ or relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ explaining $(G,\sigma)$ displays $AC|B$ and $BC|A$. Since no tree can display two distinct triples on the same three leaves, no such scenario exists and $(G,\sigma)$ is not an LDT graph. \subsection{Recognition and Characterization of LDT Graphs} In order to design an algorithm for the recognition of LDT graphs, we will consider partitions of the vertex set of a given input graph $(G=(L,E),\sigma)$. To construct suitable partitions, we start with the connected components of $G$. The coloring $\sigma\colon L\to M$ imposes additional constraints. We capture these with the help of binary relations that are defined in terms of partitions $\mathcal{C}$ of the color set $M$ and employ them to further refine the partition of $G$. \begin{cdefinition}{\ref{def:rel}} Let $(G=(L,E),\sigma)$ be a graph with coloring $\sigma\colon L\to M$. Let $\mathcal{C}$ be a partition of $M$, and $\mathcal{C}'$ be the set of connected components of $G$. We define the binary relation $\ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C})$ by setting \begin{align*} (x,y)\in \ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C}) \iff x,y\in L,\; \sigma(x), \sigma(y) & \in C \text{ for some } C\in\mathcal{C}, \text{ and } \\ x,y & \in C' \text{ for some } C'\in\mathcal{C}'. \end{align*} \end{cdefinition} By construction, two vertices $x,y\in L$ are in relation $\ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C})$ whenever they are in the same connected component of $G$ and their colors $\sigma(x), \sigma(y)$ are contained in the same set of the partition of $M$.
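Def.~\ref{def:rel} translates into a direct computation: intersect the partition of $L$ into connected components with the preimages under $\sigma$ of the blocks of $\mathcal{C}$. A sketch (the component labeling and all names are our own choices):

```python
def relation_classes(V, E, sigma, colour_partition):
    """Equivalence classes of R(G, sigma, C): x and y are related iff they
    lie in the same connected component of G and sigma(x), sigma(y) lie in
    the same block of the colour partition C."""
    adj = {v: set() for v in V}
    for e in E:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    comp, seen = {}, set()      # comp[v] = first-visited vertex of v's component
    for v in V:
        if v in seen:
            continue
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                comp[u] = v
                stack.extend(adj[u])
    block = {m: i for i, B in enumerate(colour_partition) for m in B}
    classes = {}
    for v in V:
        classes.setdefault((comp[v], block[sigma[v]]), set()).add(v)
    return list(classes.values())

V = ["a", "b", "c", "d"]
E = {frozenset(("a", "b")), frozenset(("b", "c"))}   # component {a,b,c}; d isolated
sigma = {"a": "A", "b": "B", "c": "C", "d": "A"}
classes = relation_classes(V, E, sigma, [{"A", "B"}, {"C"}])
# yields the classes {a, b}, {c}, {d}
```

Note that each class is contained in a single connected component, so every component is indeed a disjoint union of classes.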
As shown in Lemma~\ref{lem:KinCC} in the Technical Part, the relation $\ensuremath{\mathfrak{R}}\coloneqq\ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C})$ is an equivalence relation and every equivalence class of $\ensuremath{\mathfrak{R}}$ is contained in some connected component of $G$. In particular, each connected component of $G$ is the disjoint union of $\ensuremath{\mathfrak{R}}$-classes. The following partition of the leaf sets of subtrees of a tree $S$ rooted at some vertex $u\in V(S)$ will be useful: \begin{align*} &\text{If } u \textrm{ is not a leaf, then } &\mathcal{C}_{S}(u)& \coloneqq \{L(S(v)) \mid v\in\child_S(u)\} \\ & \textrm{and, otherwise, } &\mathcal{C}_{S}(u)&\coloneqq \{\{u\}\}. \end{align*} One easily verifies that, in both cases, $\mathcal{C}_{S}(u)$ yields a valid partition of the leaf set $L(S(u))$. Recall that $\sigma_{|L',M'}\colon L'\to M'$ was defined as the ``submap'' of $\sigma$ with $L'\subseteq L$ and $\sigma(L') \subseteq M' \subseteq M$. \begin{clemma}{\ref{lem:xy-iff-Ks-in-same-CC}} Let $(G=(L,E),\sigma)$ be a properly colored cograph. Suppose that the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible and let $S$ be a tree on $M$ that displays $\ensuremath{\mathfrak{S}}(G,\sigma)$. Moreover, let $L'\subseteq L$ and $u\in V(S)$ such that $\sigma(L') \subseteq L(S(u))$. \ Finally, set $\ensuremath{\mathfrak{R}}\coloneqq \ensuremath{\mathfrak{R}}(G[L'],\sigma_{|L',L(S(u))},\mathcal{C}_{S}(u))$.\\ Then, for all distinct $\ensuremath{\mathfrak{R}}$-classes $K$ and $K'$, either $xy\in E$ for all $x\in K$ and $y\in K'$, or $xy\notin E$ for all $x\in K$ and $y\in K'$. In particular, for $x\in K$ and $y\in K'$, it holds that \begin{equation*} xy\in E \iff K, K' \text{ are contained in the same connected component of } G[L']. 
\end{equation*} \end{clemma} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/algo-visu.pdf} \end{center} \caption{Visualization of Algorithm~\ref{alg:Ru-recognition}. (A) The case $u_S$ is a leaf (cf.\ Line~\ref{line:species-leaf}). (B)--(E) The case $u_S$ is an inner vertex (cf.\ Line~\ref{line:else}). (B) The subgraph of $(G,\sigma)$ induced by $L'$. (C) The local topology of the species tree $S$ yields $\mathcal{C}_{S}(u_S)=\{\{A,B,\dots\},\{C,D,\dots\}\}$. Note that $L(S(u_S))$ may contain colors that are not present in $\sigma(L')$ (not shown). (D) The equivalence classes of $\ensuremath{\mathfrak{R}}\coloneqq \ensuremath{\mathfrak{R}}(G[L'], \sigma_{|L',L(S(u_S))}, \mathcal{C}_{S}(u_S))$. (E) The vertex $u_T$ and the vertices $v_T$ are created in this recursion step. The vertices $w_K$ corresponding to the $\ensuremath{\mathfrak{R}}$-classes $K$ are created in the next-deeper steps. Note that some vertices have only a single child, and thus get suppressed in Line~\ref{line:Tphylo}.} \label{fig:algo-visu} \end{figure} \begin{algorithm}[t] \small \caption{Construction of a relaxed scenario $\ensuremath{\mathscr{S}}$ for a properly colored cograph $(G,\sigma)$ with compatible triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$.} \label{alg:Ru-recognition} \DontPrintSemicolon \SetKwFunction{FRecurs}{BuildGeneTree} \SetKwProg{Fn}{Function}{}{} \KwIn{A cograph $(G=(L,E),\sigma)$ with proper coloring $\sigma\colon L\to M$ and compatible triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$.
} \label{line:if-false} \KwOut{A relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ explaining $(G,\sigma)$.} \BlankLine $S\leftarrow$ tree on $M$ displaying $\ensuremath{\mathfrak{S}}(G,\sigma)$ with planted root $0_S$ \label{line:S}\; $\ensuremath{\tau_{S}}\leftarrow$ time map for $S$ satisfying $\ensuremath{\tau_{S}}(x)=0$ for all $x\in L(S)$ \label{line:tS}\; $\epsilon \leftarrow \frac{1}{3} \min\{\ensuremath{\tau_{S}}(y)-\ensuremath{\tau_{S}}(x) \mid (y,x)\in E(S) \}$ \label{line:epsilon}\; initialize empty maps $\mu, \ensuremath{\tau_{T}}$\; \BlankLine \Fn{\FRecurs{$L',u_{S}$}}{ create a vertex $u_T$ \label{line:create-uT}\; $\ensuremath{\tau_{T}}(u_T)\leftarrow \ensuremath{\tau_{S}}(u_{S}) + \epsilon$ and $\mu(u_T)\leftarrow (\parent_S(u_S), u_S)$ \label{line:mu-tT-inner1}\; \uIf{$u_S$ is a leaf\label{line:species-leaf}}{ \ForEach{$x\in L'$}{ connect $x$ as a child of $u_T$ \label{line:attach-leaf}\; $\ensuremath{\tau_{T}}(x)\leftarrow 0$ and $\mu(x)\leftarrow \sigma(x)$ \label{line:mu-tT-leaves}\; } } \Else{\label{line:else} $\ensuremath{\mathfrak{R}}\leftarrow \ensuremath{\mathfrak{R}}(G[L'], \sigma_{|L',L(S(u_S))}, \mathcal{C}_{S}(u_S))$ \label{line:rel}\; \ForEach{connected component $C$ of $G[L']$}{ create a vertex $v_T$ \label{line:create-vT}\; connect $v_T$ as a child of $u_T$\; choose $v^*_S\in \child_S(u_{S})$ such that $\sigma(C)\cap L(S(v^*_S))\ne\emptyset$ \label{line:choose-v-S}\; $\ensuremath{\tau_{T}}(v_T)\leftarrow \ensuremath{\tau_{S}}(u_{S}) - \epsilon$ and $\mu(v_T)\leftarrow (u_S, v^*_S)$ \label{line:mu-tT-inner2}\; \ForEach{$\ensuremath{\mathfrak{R}}$-class $K$ such that $K\subseteq C$}{ identify $v_S\in \child_S(u_{S})$ such that $\sigma(K)\subseteq L(S(v_S))$ \label{line:choose-v-S-for-class}\; $w_K\leftarrow$ \FRecurs{$K, v_S$} \label{line:recursive-call}\; connect $w_K$ as a child of $v_T$\; } } } \Return $u_T$\; } \BlankLine $T' \leftarrow$ tree with root \FRecurs{$L,\rho_S$}\; 
$T\leftarrow T'$ with (i) a planted root $0_T$ added, and (ii) all inner degree-2 vertices (except $0_T$) suppressed \label{line:Tphylo}\; $\ensuremath{\tau_{T}}(0_T)\leftarrow \ensuremath{\tau_{S}}(0_S)$ and $\mu(0_T)\leftarrow 0_S$ \label{line:mu-tT-planted-root}\; \textbf{return} $(T,S,\sigma,\mu_{|V(T)},\tau_{T|V(T)},\ensuremath{\tau_{S}})$\; \end{algorithm} Lemma~\ref{lem:xy-iff-Ks-in-same-CC} suggests a recursive strategy to construct a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ for a given properly-colored cograph $(G,\sigma)$, which is illustrated in Fig.~\ref{fig:algo-visu}. The starting point is a species tree $S$ displaying all the triples in $\ensuremath{\mathfrak{S}}(G,\sigma)$ that are required by Lemma~\ref{lem:Ru-SpeciesTriple}. We show below that there are no further constraints on $S$ and thus we may choose $S=\Aho(\ensuremath{\mathfrak{S}}(G,\sigma),L)$ and endow it with an arbitrary time map $\ensuremath{\tau_{S}}$. Given $(S,\ensuremath{\tau_{S}})$, we construct $(T,\ensuremath{\tau_{T}})$ in top-down order. In order to reduce the complexity of the presentation and to make the algorithm more compact and readable, we will not distinguish the cases in which $(G,\sigma)$ is connected or disconnected, nor whether a connected component is a superset of one or more $\ensuremath{\mathfrak{R}}$-classes. The tree $T$ therefore will not be phylogenetic in general. We shall see, however, that this issue can be alleviated by simply suppressing all inner vertices with a single child. The root $u_T$ is placed above $\rho_S$ to ensure that no two vertices from distinct connected components of $G$ will be connected by an edge in $\Gu(\ensuremath{\mathscr{S}})$. The vertices $v_T$ representing the connected components $C$ of $G$ are each placed within an edge of $S$ below $\rho_S$. 
W.l.o.g., the edges $(\rho_S,v_S)$ are chosen such that the colors of the corresponding connected component $C$ and the colors in $L(S(v_S))$ overlap. Next we compute the relation $\ensuremath{\mathfrak{R}}\coloneqq\ensuremath{\mathfrak{R}}(G,\sigma,\mathcal{C}_{S}(\rho_S))$ and determine, for each connected component $C$, the $\ensuremath{\mathfrak{R}}$-classes $K$ that are a subset of $C$. For each of them, a child $w_K$ is appended to the tree vertex $v_T$. The subtree $T(w_K)$ will have leaf set $L(T(w_K))=K$. Since $\ensuremath{\mathfrak{R}}$ is defined on $\mathcal{C}_{S}(\rho_S)$ in this first step, $\Gu(\ensuremath{\mathscr{S}})$ will have all edges between vertices that are in the same connected component $C$ but in distinct $\ensuremath{\mathfrak{R}}$-classes (cf.\ Lemma~\ref{lem:xy-iff-Ks-in-same-CC}). The definition of $\ensuremath{\mathfrak{R}}$ also implies that we always find a vertex $v_S\in\child_S(\rho_S)$ such that $\sigma(K)\subseteq L(S(v_S))$ (more detailed arguments for this are given in the proof of Claim~\ref{clm:color-subset} in the proof of Thm.~\ref{thm:algo-works} below). Thus we can place $w_K$ into this edge $(\rho_S,v_S)$, and proceed recursively on the $\ensuremath{\mathfrak{R}}$-classes $L'\coloneqq K$, the induced subgraphs $G[L']$ and their corresponding vertices $v_S\in V(S)$, which then serve as the roots of the species trees. More precisely, we identify $w_K$ with the root $u'_T$ created in the ``next-deeper'' recursion step. Since we alternate between vertices $u_T$ for which no edges between vertices of distinct subtrees exist, and vertices $v_T$ for which all such edges exist, we can label the vertices $u_T$ with ``0'' and the vertices $v_T$ with ``1'' and obtain a cotree for the cograph $G$. This recursive procedure is described more formally in Algorithm~\ref{alg:Ru-recognition}, which also describes the construction of an appropriate time map $\ensuremath{\tau_{T}}$ for $T$ and a reconciliation map $\mu$.
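The role of $\epsilon$ in Line~\ref{line:epsilon} is to place the newly created vertices of $T$ strictly inside edges of $S$: $u_T$ at time $\ensuremath{\tau_{S}}(u_S)+\epsilon$ inside the edge $(\parent_S(u_S),u_S)$, and $v_T$ at time $\ensuremath{\tau_{S}}(u_S)-\epsilon$ inside an edge below $u_S$. The following minimal Python sketch (our own illustration with a hypothetical toy species tree, not the authors' implementation) checks this invariant:

```python
def choose_eps(edges, tau_S):
    """Epsilon as in Line 'epsilon': one third of the shortest edge of S.
    edges: (parent, child) pairs of S; tau_S: vertex -> time."""
    return min(tau_S[y] - tau_S[x] for (y, x) in edges) / 3.0

# hypothetical planted species tree 0 -> r -> {A, B}
tau_S = {"0": 2.0, "r": 1.0, "A": 0.0, "B": 0.0}
edges = [("0", "r"), ("r", "A"), ("r", "B")]
eps = choose_eps(edges, tau_S)

# u_T is placed inside the edge (parent(u_S), u_S), v_T inside (u_S, v_S):
u_S, parent_uS = "r", "0"
assert tau_S[u_S] < tau_S[u_S] + eps < tau_S[parent_uS]   # placement of u_T
for v_S in ("A", "B"):
    assert tau_S[v_S] < tau_S[u_S] - eps < tau_S[u_S]     # placement of v_T
```

Note that any factor strictly below $1/2$ would also guarantee $\ensuremath{\tau_{S}}(v_S)+\epsilon < \ensuremath{\tau_{S}}(u_S)-\epsilon$, so that the time map $\ensuremath{\tau_{T}}$ decreases strictly along the edges of $T$.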
We note that, as the trivial case of the recursion, we find it convenient to use the situation in which the current root $u_S$ of the species tree is a leaf, rather than the condition $|L'|=1$. In this manner we avoid the distinction between the cases $u_S\in L(S)$ and $u_S\notin L(S)$ in the \textbf{else}-condition starting in Line~\ref{line:else}. This results in a shorter presentation at the expense of more inner vertices that need to be suppressed at the end in order to obtain the final tree $T$. We proceed by proving the correctness of Algorithm~\ref{alg:Ru-recognition}. \begin{ctheorem}{\ref{thm:algo-works}} Let $(G,\sigma)$ be a properly colored cograph, and assume that the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. Then Algorithm~\ref{alg:Ru-recognition} returns a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\ensuremath{\mathscr{S}})=G$ in polynomial time. \end{ctheorem} As a consequence of Lemmas~\ref{lem:Ru-SpeciesTriple} and~\ref{lem:propcolcograph}, and the fact that Algorithm~\ref{alg:Ru-recognition} returns a relaxed scenario $\ensuremath{\mathscr{S}}$ for a given properly colored cograph with compatible triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$, we obtain \begin{ctheorem}{\ref{thm:characterization}} A graph $(G,\sigma)$ is an LDT graph if and only if it is a properly colored cograph and $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. \end{ctheorem} Thm.~\ref{thm:characterization} has two consequences that are of immediate interest: \begin{ccorollary}{\ref{cor:LDTpoly}} LDT graphs can be recognized in polynomial time. \end{ccorollary} \begin{ccorollary}{\ref{cor:LDT-here}} The property of being an LDT graph is hereditary, that is, if $(G,\sigma)$ is an LDT graph then each of its vertex induced subgraphs is an LDT graph. \end{ccorollary} The relaxed scenarios $\ensuremath{\mathscr{S}}$ explaining an LDT graph $(G,\sigma)$ are far from being unique.
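The recognition procedure behind Cor.~\ref{cor:LDTpoly} checks the two conditions of Thm.~\ref{thm:characterization}. The sketch below (our own illustration with adjacency sets, not the authors' code) implements the proper-coloring test and the cograph test; the compatibility test for $\ensuremath{\mathfrak{S}}(G,\sigma)$ via Aho's \textsc{Build} algorithm is omitted:

```python
def is_properly_colored(adj, sigma):
    """No edge may join two vertices of the same color."""
    return all(sigma[u] != sigma[v] for u in adj for v in adj[u])

def components(adj, verts):
    """Connected components of the subgraph induced by verts."""
    seen, comps = set(), []
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in adj[v] if w in verts)
        seen |= comp
        comps.append(comp)
    return comps

def is_cograph(adj, verts=None):
    """G is a cograph iff every induced subgraph on >= 2 vertices is
    disconnected in G or in its complement."""
    if verts is None:
        verts = set(adj)
    if len(verts) <= 1:
        return True
    comps = components(adj, verts)
    if len(comps) > 1:
        return all(is_cograph(adj, c) for c in comps)
    # G[verts] is connected: its complement must be disconnected
    cadj = {v: verts - {v} - adj[v] for v in verts}
    ccomps = components(cadj, verts)
    if len(ccomps) == 1:
        return False
    return all(is_cograph(adj, c) for c in ccomps)
```

For instance, the path $P_4$ on four vertices (connected, with connected complement) is rejected, in agreement with the classical characterization of cographs as the $P_4$-free graphs.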
In fact, we can choose from a large set of trees $(S,\ensuremath{\tau_{S}})$ that is determined only by the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$: \begin{ccorollary}{\ref{cor:manyT}} If $(G=(L,E),\sigma)$ is an LDT graph with coloring $\sigma\colon L\to M$, then for all planted trees $S$ on $M$ that display $\ensuremath{\mathfrak{S}}(G,\sigma)$ there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that contains $\sigma$ and $S$ and that explains $(G,\sigma)$. \end{ccorollary} As shown in the Technical Part, for every LDT graph $(G,\sigma)$ there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ explaining $(G,\sigma)$ such that $T$ displays the discriminating cotree $T_{G}$ of $G$ (cf.\ Cor.\ \ref{cor:displayed-cotree} in the Technical Part). However, this property is not satisfied by all relaxed scenarios that explain $(G,\sigma)$. Nevertheless, the latter results enable us to relate connectedness of LDT graphs to properties of the relaxed scenarios by which they can be explained (cf.\ Lemma~\ref{lem:Gu-connected} in the Technical Part). \subsection{Least Resolved Trees for LDT graphs} As we have seen e.g.\ in Cor.~\ref{cor:manyT}, there are in general many trees $S$ and $T$ forming relaxed scenarios $\ensuremath{\mathscr{S}}$ that explain a given LDT graph $(G,\sigma)$. This raises the question of to what extent these trees are determined by ``representatives''. For $S$, we have seen that $S$ always displays $\ensuremath{\mathfrak{S}}(G,\sigma)$, suggesting that we consider the role of $S=\Aho(\ensuremath{\mathfrak{S}}(G,\sigma),M)$, where $M$ is the codomain of $\sigma$. This tree is least resolved in the sense that there is no relaxed scenario explaining the LDT graph $(G,\sigma)$ with a tree $S'$ that is obtained from $S$ by edge-contractions.
The latter is due to the fact that any edge contraction in $\Aho(\ensuremath{\mathfrak{S}}(G,\sigma),M)$ yields a tree $S'$ that does not display $\ensuremath{\mathfrak{S}}(G,\sigma)$ any more \cite{Jansson:12}. By Lemma~\ref{lem:Ru-SpeciesTriple}, none of the relaxed scenarios containing $S'$ explain the LDT graph $(G,\sigma)$. \begin{cdefinition}{\ref{def:LRT-LDT}} Let $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario explaining the LDT graph $(G,\sigma)$. The planted tree $T$ is \emph{least resolved} for $(G,\sigma)$ if no relaxed scenario $(T',S',\sigma',\mu',\ensuremath{\tau_{T}}',\ensuremath{\tau_{S}}')$ with $T'<T$ explains $(G,\sigma)$. \end{cdefinition} In other words, $T$ is least resolved for $(G,\sigma)$ if no relaxed scenario with a gene tree $T'$ obtained from $T$ by a series of edge contractions explains $(G,\sigma)$. \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/no-unique-lrt.pdf} \end{center} \caption{Examples of LDT graphs $(G,\sigma)$ with multiple least resolved trees. Top row: No unique least resolved gene tree. For both trees, contraction of the single inner edge leads to a loss of the gene triple $ab|c\in \ensuremath{\mathfrak{T}}(G)$ (cf.\ Lemma~\ref{lem:Ru-GeneTriple}). The species tree is also least resolved since contraction of its single inner edge leads to loss of the species triples $\sigma(a)\sigma(c)|\sigma(d), \sigma(b)\sigma(c)|\sigma(d)\in \ensuremath{\mathfrak{S}}(G,\sigma)$ (cf.\ Lemma~\ref{lem:Ru-SpeciesTriple}). Bottom row: No unique least resolved species tree. Both trees display the two necessary triples $AB|E,CD|E\in\ensuremath{\mathfrak{S}}(G,\sigma)$, and are again least resolved w.r.t.\ these triples. The gene trees are also least resolved since contraction of either of its two inner edges leads e.g.\ to loss of one of the triples $ae|c, ce'|a\in \ensuremath{\mathfrak{T}}(G)$.
} \label{fig:LRT-not-unique} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/cotree-not-resolved-enough.pdf} \end{center} \caption{Example of an LDT graph $(G,\sigma)$ in Panel B that is explained by the relaxed scenario shown in Panel~A. Here, $(G,\sigma)$ cannot be explained by a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu, \ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $T$ is the unique discriminating cotree (shown in Panel~C) for the cograph $G$, see Panel D and the text for further explanations.} \label{fig:cotree-not-resolved-enough} \end{figure} The examples in Fig.~\ref{fig:LRT-not-unique} show that LDT graphs are in general not accompanied by unique least resolved trees. In the top row, relaxed scenarios with different least resolved gene trees $T$ and the same least resolved species tree $S$ explain the LDT graph $(G,\sigma)$. In the bottom row, two distinct least resolved species trees exist for a given least resolved gene tree. The example in Fig.~\ref{fig:cotree-not-resolved-enough} shows, furthermore, that the unique discriminating cotree $T_G$ of an LDT graph $(G,\sigma)$ is not always ``sufficiently resolved''. To see this, assume that the graph $(G,\sigma)$ in the example can be explained by a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu, \ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $T=T_G$. First consider the connected component consisting of $a,b,c,d$. Since $\lca_T(a,b)\succ_T \lca_T(c,d)$, $ab\in E(G)$ and $cd\notin E(G)$, we have $\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(b))) > \ensuremath{\tau_{T}}(\lca_T(a,b))> \ensuremath{\tau_{T}}(\lca_T(c,d))\ge \ensuremath{\tau_{S}}(\lca_S(\sigma(c),\sigma(d)))$. By similar arguments, the second connected component implies $\ensuremath{\tau_{S}}(\lca_S(\sigma(c),\sigma(d))) > \ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(b)))$; a contradiction.
These examples emphasize that LDT graphs constrain the relaxed scenarios, but are far from determining them. \section{Horizontal Gene Transfer and Fitch Graphs} \label{sect:HGT} \subsection{HGT-Labeled Trees and rs-Fitch Graphs} As alluded to in the introduction, the LDT graphs are intimately related to horizontal gene transfer. To formalize this connection we first define transfer edges. These will then be used to encode Walter Fitch's concept of xenologous gene pairs \cite{Fitch:00,Darby:17} as a binary relation, and thus, the edge set of a graph. \begin{cdefinition}{\ref{def:HGT-label}} Let $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario. An edge $(u,v)$ in $T$ is a \emph{transfer edge} if $\mu(u)$ and $\mu(v)$ are incomparable in $S$. The \emph{HGT-labeling} of $T$ in $\ensuremath{\mathscr{S}}$ is the edge labeling $\lambda_{\ensuremath{\mathscr{S}}}\colon E(T)\to\{0,1\}$ with $\lambda_{\ensuremath{\mathscr{S}}}(e)=1$ if and only if $e$ is a transfer edge. \end{cdefinition} The vertex $u$ in $T$ thus corresponds to an HGT event, with $v$ denoting the subsequent event, which now takes place in the ``recipient'' branch of the species tree. Note that $\lambda_{\ensuremath{\mathscr{S}}}$ is completely determined by $\ensuremath{\mathscr{S}}$. In general, for a given gene tree $T$, HGT events correspond to a labeling or coloring of the edges of $T$. \begin{cdefinition}{\ref{def:FitchG}}[Fitch graph] Let $(T,\lambda)$ be a tree $T$ together with a map $\lambda\colon E(T)\to \{0,1\}$. The \emph{Fitch graph} $\digamma(T,\lambda) = (V,E)$ has vertex set $V\coloneqq L(T)$ and edge set \begin{align*} E \coloneqq \{xy \mid x,y\in L(T), &\text{ the unique path connecting } x \text{ and } y \text{ in } T \\ &\text{ contains an edge } e \text{ with } \lambda(e)=1\}. \end{align*} \end{cdefinition} By definition, Fitch graphs of 0/1-edge-labeled trees are loopless and undirected.
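Def.~\ref{def:FitchG} translates directly into a procedure that, for each pair of leaves, tests whether the connecting path passes through a 1-edge. The following Python sketch uses a hypothetical encoding (a parent map and an edge-label map) of our own choosing and is not taken from the paper:

```python
from itertools import combinations

def root_path(v, parent):
    """Vertices on the path from v up to the (planted) root, inclusive."""
    path = [v]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def fitch_graph(leaves, parent, lam):
    """Edge set of the Fitch graph of (T, lam), where lam[(u, v)] is the
    0/1 label of the tree edge from parent u to child v."""
    edges = set()
    for x, y in combinations(sorted(leaves), 2):
        common = set(root_path(x, parent)) & set(root_path(y, parent))

        def transfer_above(v):
            # walk from the leaf v up to lca(x, y), checking edge labels
            while v not in common:
                if lam[(parent[v], v)] == 1:
                    return True
                v = parent[v]
            return False

        if transfer_above(x) or transfer_above(y):
            edges.add(frozenset((x, y)))
    return edges

# toy tree: root r with inner child u (leaves a, b) and leaf c;
# the edge (r, u) is the only 1-edge
parent = {"a": "u", "b": "u", "u": "r", "c": "r"}
lam = {("u", "a"): 0, ("u", "b"): 0, ("r", "u"): 1, ("r", "c"): 0}
assert fitch_graph({"a", "b", "c"}, parent, lam) == {
    frozenset({"a", "c"}), frozenset({"b", "c"}),
}
```

On the toy tree the resulting graph has the edges $ac$ and $bc$ but not $ab$, i.e., it is complete multipartite with parts $\{a,b\}$ and $\{c\}$.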
Edges $e$ of $(T,\lambda)$ with label $\lambda(e)=1$ are called \emph{1-edges}; all other edges are \emph{0-edges}. \begin{remark} Fitch graphs as defined here have been termed \emph{undirected} Fitch graphs \cite{Hellmuth:18a}, in contrast to the notion of the \emph{directed} Fitch graphs of 0/1-edge-labeled trees studied e.g.\ in \cite{Geiss:18a,Hellmuth:2019a}. \end{remark} \begin{cproposition}{\ref{prop:fitch}}{\cite{Hellmuth:18a,Zverovich:99}} The following statements are equivalent. \begin{enumerate} \item $G$ is the Fitch graph of a 0/1-edge-labeled tree. \item $G$ is a complete multipartite graph. \item $G$ does not contain $K_2+K_1$ as an induced subgraph. \end{enumerate} \end{cproposition} \begin{cdefinition}{\ref{def:rsFitchG}}[rs-Fitch graph] Let $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario with HGT-labeling $\lambda_{\ensuremath{\mathscr{S}}}$. We call the vertex colored graph $(\digamma(\ensuremath{\mathscr{S}}),\sigma) \coloneqq (\digamma(T,\lambda_{\ensuremath{\mathscr{S}}}),\sigma)$ the \emph{Fitch graph of the scenario $\ensuremath{\mathscr{S}}$.}\\ A vertex colored graph $(G,\sigma)$ is a \emph{relaxed scenario Fitch graph} (\emph{rs-Fitch graph}) if there is a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $G = \digamma(\ensuremath{\mathscr{S}})$. \end{cdefinition} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/fitch-example.pdf} \end{center} \caption{(A) The relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ as already shown in Fig.~\ref{fig:Gu-example}. (B) A 0/1-edge-labeled tree $(T,\lambda)$ satisfying $\lambda=\lambda_{\ensuremath{\mathscr{S}}}$. (C) The corresponding Fitch graph $\digamma(T,\lambda)$ drawn in a layout that emphasizes the property that $\digamma(T,\lambda)$ is a complete multipartite graph.
Independent sets are circled. (D) An alternative layout as in Fig.~\ref{fig:Gu-example} (top row) that emphasizes the relationship $\Gu(\ensuremath{\mathscr{S}})\subseteq \digamma(\ensuremath{\mathscr{S}})=\digamma(T,\lambda)$ (cf.\ Thm.~\ref{thm:infer-fitch} below). Edges that are not present in $\Gu(\ensuremath{\mathscr{S}})$ are drawn as dashed lines.} \label{fig:fitch-example} \end{figure} Fig.~\ref{fig:fitch-example} shows that rs-Fitch graphs are not necessarily properly colored. A subtle difficulty arises from the fact that Fitch graphs of 0/1-edge-labeled trees are defined without a reference to the vertex coloring $\sigma$, while the rs-Fitch graph is vertex colored. This together with Prop.~\ref{prop:fitch} implies \begin{cfact}{\ref{obs:Fitch}} If $(G,\sigma)$ is an rs-Fitch graph then $G$ is a complete multipartite graph. \end{cfact} The ``converse'' of Obs.~\ref{obs:Fitch} is not true in general, as we shall see in Thm.~\ref{thm:char-rsFitch} below. If, however, the coloring $\sigma$ can be chosen arbitrarily, then every complete multipartite graph $G$ can be turned into an rs-Fitch graph $(G,\sigma)$ as shown in Prop.~\ref{prop:converse-obs-fitch}. \begin{cproposition}{\ref{prop:converse-obs-fitch}} If $G$ is a complete multipartite graph, then there exists a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $(G,\sigma)$ is an rs-Fitch graph. \end{cproposition} Although every complete multipartite graph can be colored in such a way that it becomes an rs-Fitch graph (cf.\ Prop.~\ref{prop:converse-obs-fitch}), there are colored complete multipartite graphs $(G,\sigma)$ that are not rs-Fitch graphs, i.e., that do not derive from a relaxed scenario (cf.\ Thm.~\ref{thm:char-rsFitch}).
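Obs.~\ref{obs:Fitch} yields an easily tested necessary condition: by Prop.~\ref{prop:fitch}, $G$ is complete multipartite precisely if its complement is a disjoint union of cliques. A minimal sketch (our own adjacency-set encoding, not the authors' code):

```python
def components(adj, verts):
    """Connected components of the graph restricted to verts."""
    seen, comps = set(), []
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in adj[v] if w in verts)
        seen |= comp
        comps.append(comp)
    return comps

def is_complete_multipartite(adj):
    verts = set(adj)
    cadj = {v: verts - {v} - adj[v] for v in verts}  # complement graph
    # every connected component of the complement must induce a clique
    return all(cadj[v] >= comp - {v}
               for comp in components(cadj, verts) for v in comp)

# K_2 + K_1 (edge ab plus isolated vertex c) is the forbidden subgraph
assert not is_complete_multipartite({"a": {"b"}, "b": {"a"}, "c": set()})
# C_4 = K_{2,2} is complete multipartite
assert is_complete_multipartite(
    {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}})
```

The forbidden induced subgraph $K_2+K_1$ is rejected because its complement is a connected path, not a clique.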
We summarize this discussion in the following \begin{cfact}{\ref{obs:01T-notScen}} There are (planted) 0/1-edge labeled trees $(T,\lambda)$ and colorings $\sigma\colon L(T)\to M$ such that there is no relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with $\lambda=\lambda_{\ensuremath{\mathscr{S}}}$. \end{cfact} A subtle -- but important -- observation is that trees $(T,\lambda)$ with coloring $\sigma$ for which Obs.~\ref{obs:01T-notScen} applies may still encode an rs-Fitch graph $(\digamma(T,\lambda),\sigma)$, see Example~\ref{ex:lst} and Fig.~\ref{fig:TreeClassesDistinct}. The latter is due to the fact that $\digamma(T,\lambda) = \digamma(T',\lambda')$ may be possible for a different tree $(T',\lambda')$ for which there is a relaxed scenario $\ensuremath{\mathscr{S}}' = (T',S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with $\lambda' = \lambda_{\ensuremath{\mathscr{S}}'}$. In this case, $(\digamma(T,\lambda),\sigma) = (\digamma(\ensuremath{\mathscr{S}}'),\sigma)$ is an rs-Fitch graph. We shall briefly return to these issues in the discussion in Section~\ref{sect:concl}. \begin{xmpl} \label{ex:lst} Consider the planted edge-labeled tree $(T,\lambda)$ shown in Fig.~\ref{fig:TreeClassesDistinct} with leaf set $L=\{a,b,b',c,d\}$, together with a coloring $\sigma$ where $\sigma(b)=\sigma(b')$ and $\sigma(a), \sigma(b), \sigma(c), \sigma(d)$ are pairwise distinct.\\ Assume, for contradiction, that there is a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with $(T,\lambda) = (T,\lambda_{\ensuremath{\mathscr{S}}})$. Hence, $\mu(v)$ and $\mu(b)=\sigma(b)$ as well as $\mu(u)$ and $\mu(b')=\sigma(b)$ must be comparable in $S$. Therefore, $\mu(u)$ and $\mu(v)$ must both be comparable to $\sigma(b)$ and thus, they are located on the path from $\rho_S$ to $\sigma(b)$.
But this implies that $\mu(u)$ and $\mu(v)$ are comparable in $S$; a contradiction, since then $\lambda_{\ensuremath{\mathscr{S}}}(u,v) = 0\neq \lambda(u,v) = 1$. \end{xmpl} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/no-reconc-01-tree.pdf} \end{center} \caption{0/1-edge-labeled tree $(T,\lambda)$ for which no relaxed scenario exists such that $(T,\lambda) = (T,\lambda_{\ensuremath{\mathscr{S}}})$ (see Example~\ref{ex:lst}). Red edges indicate 1-labeled edges. Nevertheless, for $\digamma\coloneqq\digamma(T,\lambda)$ there is an alternative tree $(T',\lambda')$ for which a relaxed scenario $\ensuremath{\mathscr{S}} = (T',S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ exists (right) such that $\digamma = \digamma(T',\lambda') = \digamma(\ensuremath{\mathscr{S}})$. } \label{fig:TreeClassesDistinct} \end{figure} \subsection{LDT Graphs and rs-Fitch Graphs} We proceed to investigate to what extent an LDT graph provides information about an rs-Fitch graph. As we shall see in Thm.~\ref{thm:FitchRu-scenario}, there is indeed a close connection between rs-Fitch graphs and LDT graphs. We start with a useful relation between the edges of rs-Fitch graphs and the reconciliation maps $\mu$ of their scenarios. \begin{clemma}{\ref{lem:independent-lca}} Let $\digamma(\ensuremath{\mathscr{S}})$ be an rs-Fitch graph for some relaxed scenario $\ensuremath{\mathscr{S}}$. Then, $ab\notin E(\digamma(\ensuremath{\mathscr{S}}))$ implies that $\lca_S(\sigma(a),\sigma(b)) \preceq_S \mu(\lca_T(a,b))$. \end{clemma} The next result shows that a subset of transfer edges can be inferred immediately from LDT graphs: \begin{ctheorem}{\ref{thm:infer-fitch}} If $(G,\sigma)$ is an LDT graph, then $G\subseteq \digamma(\ensuremath{\mathscr{S}})$ for all relaxed scenarios $\ensuremath{\mathscr{S}}$ that explain $(G,\sigma)$.
\end{ctheorem} Since $xy$ is an edge in $\digamma(\ensuremath{\mathscr{S}})$ only if the path connecting $x$ and $y$ in the tree $T$ of $\ensuremath{\mathscr{S}}$ contains a transfer edge, Thm.~\ref{thm:infer-fitch} immediately implies \begin{ccorollary}{\ref{cor:noHGT}} For every relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ without transfer edges, it holds that $E(\Gu(\ensuremath{\mathscr{S}})) = \emptyset$. \end{ccorollary} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/fitch-not-Gu.pdf} \end{center} \caption{Two relaxed scenarios $\ensuremath{\mathscr{S}}_1$ and $\ensuremath{\mathscr{S}}_2$ with the same rs-Fitch graph $\digamma = \digamma(\ensuremath{\mathscr{S}}_1)=\digamma(\ensuremath{\mathscr{S}}_2)$ (right) and different LDT graphs $\Gu(\ensuremath{\mathscr{S}}_1)\neq \digamma$ and $\Gu(\ensuremath{\mathscr{S}}_2)=\digamma$.} \label{fig:Fitch-not-RU} \end{figure} Thm.~\ref{thm:infer-fitch} provides the formal justification for indirect phylogenetic approaches to HGT inference that are based on the work of \citet{Lawrence:92}, \citet{Clarke:02}, and \citet{Novichkov:04} by showing that $xy\in E(\Gu(\ensuremath{\mathscr{S}}))$ can be explained only by HGT, irrespective of how complex the true biological scenario might have been. However, it does not cover all HGT events. Fig.\ \ref{fig:Fitch-not-RU} shows that there are relaxed scenarios $\ensuremath{\mathscr{S}}$ for which $\Gu(\ensuremath{\mathscr{S}}) \neq \digamma(\ensuremath{\mathscr{S}})$ even though $\digamma(\ensuremath{\mathscr{S}})$ is properly colored. Moreover, it is possible that an rs-Fitch graph $(G,\sigma)$ contains edges $xy\in E(G)$ with $\sigma(x)=\sigma(y)$. In particular, therefore, an rs-Fitch graph is not always an LDT graph.
It is natural, therefore, to ask whether for every properly colored Fitch graph there is a relaxed scenario $\ensuremath{\mathscr{S}}$ such that $\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. An affirmative answer is provided by \begin{ctheorem}{\ref{thm:FitchRu-scenario}} The following statements are equivalent. \begin{enumerate} \item $(G,\sigma)$ is a properly colored complete multipartite graph. \item There is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $G=\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. \item $(G,\sigma)$ is complete multipartite and an LDT graph. \item $(G,\sigma)$ is properly colored and an rs-Fitch graph. \end{enumerate} In particular, for every properly colored complete multipartite graph $(G,\sigma)$ the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. \end{ctheorem} Relaxed scenarios for which $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$ is properly colored do not admit two members of the same gene family that are separated by an HGT event. While restrictive, such models are not altogether unrealistic. In particular, $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$ is properly colored if every horizontal transfer is \emph{replacing}, i.e., if the original copy is effectively overwritten by homologous recombination \cite{Thomas:05}, see also \cite{Choi:12} for a detailed case study in \emph{Streptococcus}. As a consequence of Thm.~\ref{thm:FitchRu-scenario}, LDT graphs are sufficient to describe replacing HGT. However, the incidence rate of replacing HGT decreases exponentially with the phylogenetic distance between source and target \cite{Williams:12}, and additive HGT becomes the dominant mechanism between phylogenetically distant organisms. Still, replacing HGT may also be the result of additive HGT followed by a loss of the (functionally redundant) vertically inherited gene.
\subsection{rs-Fitch Graphs with General Colorings} In scenarios with additive HGT, the rs-Fitch graph is no longer properly colored and no longer coincides with the LDT graph. Since not every vertex-colored complete multipartite graph $(G,\sigma)$ is an rs-Fitch graph (cf.\ Thm.~\ref{thm:char-rsFitch}), we ask whether an LDT graph $(G,\sigma)$ that is not itself already an rs-Fitch graph imposes constraints on the rs-Fitch graphs $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$ that derive from relaxed scenarios $\ensuremath{\mathscr{S}}$ that explain $(G,\sigma)$. As a first step towards this goal, we aim to characterize rs-Fitch graphs, i.e., to understand the conditions imposed by the existence of an underlying scenario $\ensuremath{\mathscr{S}}$ on the compatibility of the collection of independent sets $\mathcal{I}$ of $G$ and the coloring $\sigma$. As we shall see, these conditions can be explained in terms of an auxiliary graph that we introduce in a very general setting: \begin{cdefinition}{\ref{def:auxfitch}} Let $L$ be a set, $\sigma\colon L\to M$ a map and $\mathcal{I}=\{I_1,\dots, I_k\}$ a set of subsets of $L$. Then the graph $\auxfitch(\sigma,\mathcal{I})$ has vertex set $M$ and edges $xy$ if and only if $x\ne y$ and $x,y\in \sigma(I')$ for some $I'\in\mathcal{I}$. \end{cdefinition} By construction, $\auxfitch(\sigma,\mathcal{I}')$ is a subgraph of $\auxfitch(\sigma,\mathcal{I})$ whenever $\mathcal{I}'\subseteq\mathcal{I}$. An extended version of Def.\ \ref{def:auxfitch} that also contains an edge-labeling of $\auxfitch(\sigma,\mathcal{I})$ can be found in the Technical Part -- this technical detail is not needed here.
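Def.~\ref{def:auxfitch}, together with condition (ii) of Thm.~\ref{thm:char-rsFitch} below, is straightforward to operationalize once the independent sets are known. The sketch below (our own toy encoding, not the authors' implementation) builds $\auxfitch(\sigma,\mathcal{I})$ on the full color set $M$ and tests whether removing some $I'$ disconnects it:

```python
def aux_graph(sigma, classes):
    """Auxiliary graph: vertex set M = colors, xy an edge iff x != y and
    both colors occur in some independent set I' in classes."""
    M = set(sigma.values())
    edges = set()
    for I in classes:
        cols = {sigma[v] for v in I}
        edges |= {frozenset((x, y)) for x in cols for y in cols if x != y}
    return M, edges

def is_connected(M, edges):
    adj = {m: set() for m in M}
    for e in edges:
        x, y = tuple(e)
        adj[x].add(y)
        adj[y].add(x)
    seen, stack = set(), [next(iter(M))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v])
    return seen == M

def condition_ii(sigma, classes):
    """True iff k <= 1 or some class I' can be removed such that the
    auxiliary graph of the remaining classes is disconnected."""
    if len(classes) <= 1:
        return True
    return any(not is_connected(*aux_graph(sigma, classes[:i] + classes[i + 1:]))
               for i in range(len(classes)))

# K_2 with two colors: removing either class isolates a color
assert condition_ii({"a": "A", "b": "B"}, [{"a"}, {"b"}])
# two independent sets that each reuse both colors: the auxiliary graph
# stays connected after every removal, so condition (ii) fails
assert not condition_ii({"a": "A", "b": "B", "c": "A", "d": "B"},
                        [{"a", "b"}, {"c", "d"}])
```

Together with a test for complete multipartiteness, this yields the polynomial-time recognition of rs-Fitch graphs stated in Cor.~\ref{cor:auxfitch1}.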
As it turns out, rs-Fitch graphs are characterized by the structure of their auxiliary graphs $\auxfitch$, as shown in the following theorem. \begin{ctheorem}{\ref{thm:char-rsFitch}} A graph $(G,\sigma)$ is an rs-Fitch graph if and only if (i) it is complete multipartite with independent sets $\mathcal{I}=\{I_1,\dots, I_k\}$, and (ii) if $k>1$, there is an independent set $I'\in \mathcal{I}$ such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. \end{ctheorem} As a consequence of Thm.~\ref{thm:char-rsFitch}, we obtain \begin{ccorollary}{\ref{cor:auxfitch1}} rs-Fitch graphs can be recognized in polynomial time. \end{ccorollary} As for LDT graphs, the property of being an rs-Fitch graph is hereditary. \begin{ccorollary}{\ref{cor:rsFitch-hereditary}} If $(G=(L,E),\sigma)$ is an rs-Fitch graph, then the colored vertex induced subgraph $(G[W],\sigma_{|W})$ is an rs-Fitch graph for all non-empty subsets $W\subseteq L$. \end{ccorollary} \begin{figure}[ht] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/hereditary-surjective.pdf} \end{center} \caption{Shown are three distinct relaxed scenarios $\ensuremath{\mathscr{S}}$, $\ensuremath{\mathscr{S}}'$ and $\ensuremath{\mathscr{S}}''$ with corresponding rs-Fitch graphs. Here $\sigma' = \sigma_{|\{a,a'\}}$ and $\sigma'' = \sigma_{|\{a,a'\},\{A\}}$ (cf.\ Def.~\ref{def:sigma-restrictions}). Putting $(G,\sigma) = (\digamma(\ensuremath{\mathscr{S}}),\sigma)$, one can observe that $(G[\{a,a'\}], \sigma') = (\digamma(\ensuremath{\mathscr{S}}'),\sigma')$ is an rs-Fitch graph.
In contrast, $\sigma''$ is restricted to the ``observable'' part of species (consisting of $A$ alone), and $(G[\{a,a'\}], \sigma'')$ is not an rs-Fitch graph, see text for further details.} \label{fig:hereditary-surjective} \end{figure} Note, however, that the statement of Cor.~\ref{cor:rsFitch-hereditary} no longer holds if we restrict the codomain of $\sigma$ to the observable part of colors, i.e., if we consider $\sigma_{|W,\sigma(W)}\colon W \to \sigma(W)$ instead of $\sigma_{|W}\colon W\to M$, even if $\sigma$ is surjective. To see this, consider the vertex colored graph $(G,\sigma)$ with $V(G)=\{a,a',b\}$, $E(G) = \{aa',ab,a'b\}$ and $\sigma \colon V(G)\to M = \{A,B\}$ where $\sigma(a) = \sigma(a')=A \neq \sigma(b)=B$. A possible relaxed scenario $\ensuremath{\mathscr{S}}$ for $(G,\sigma)$ is shown in Fig.~\ref{fig:hereditary-surjective}(A). The deletion of $b$ yields $W=V(G)\setminus \{b\} = \{a,a'\}$ and the graph $(G[W],\sigma_{|W})$ for which $\ensuremath{\mathscr{S}}'$ with HGT-labeling $\lambda_{\ensuremath{\mathscr{S}}'}$ as in Fig.~\ref{fig:hereditary-surjective}(B) is a relaxed scenario that satisfies $G[W] = \digamma(T,\lambda_{\ensuremath{\mathscr{S}}'})$. However, if we restrict the codomain of $\sigma$ to obtain $\sigma_{|W,\{A\}}\colon \{a,a'\} \to \sigma(W) =\{A\}$, then there is no relaxed scenario $\ensuremath{\mathscr{S}}$ for which $G[W] = \digamma(T,\lambda_{\ensuremath{\mathscr{S}}})$, since there is only a single species tree $S$ on $L(S)=\{A\}$ (Fig.~\ref{fig:hereditary-surjective}(C)) that consists of the single edge $(0_S,A)$ and thus, $\mu(v)$ and $\mu(a)$ as well as $\mu(v)$ and $\mu(a')$ must be comparable in this scenario. \subsection{Least Resolved Trees for Fitch graphs} It is important to note that the characterization of rs-Fitch graphs in Thm.~\ref{thm:char-rsFitch} does not provide us with a characterization of rs-Fitch graphs that share a common relaxed scenario with a given LDT graph.
As a potential avenue to address this problem we investigate the structure of least-resolved trees for Fitch graphs as a possible source of additional constraints. \begin{cdefinition}{\ref{def:FLRT}} The edge-labeled tree $(T,\lambda)$ is \emph{Fitch-least-resolved} w.r.t.\ $\digamma(T,\lambda)$ if for all trees $T'\neq T$ that are displayed by $T$ and every labeling $\lambda'$ of $T'$ it holds that $\digamma(T,\lambda)\neq \digamma(T',\lambda')$. \end{cdefinition} As shown in the Technical Part (Thm.~\ref{thm:LRT-rsFitch}), Fitch-least-resolved trees can be characterized in terms of their edge-labeling, a result that is very similar to the results for ``directed'' Fitch graphs of 0/1-edge-labeled trees in \cite{Geiss:18a}. As a consequence of this characterization, Fitch-least-resolved trees can be constructed in polynomial time. However, Fitch-least-resolved trees are far from being unique. In particular, Fitch-least-resolved trees are only of very limited use for the construction of relaxed scenarios $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ from an underlying Fitch graph. In fact, even when $(G,\sigma)$ is an rs-Fitch graph, Example~\ref{ex:FLRT-noScen} in the Technical Part shows that there may be no relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with HGT-labeling $\lambda_{\ensuremath{\mathscr{S}}}$ such that $(T,\lambda) = (T,\lambda_{\ensuremath{\mathscr{S}}})$ for \emph{any} of its Fitch-least-resolved trees $(T,\lambda)$. \section{Editing Problems} \label{sect:edit} \subsection{Editing Colored Graphs to LDT Graphs and Fitch Graphs} Empirical estimates of LDT graphs from sequence data are expected to suffer from noise and hence to violate the conditions of Thm.~\ref{thm:characterization}. It is of interest, therefore, to consider the problem of correcting an empirical estimate $(G,\sigma)$ to the closest LDT graph.
We therefore briefly investigate the usual three edge \emph{modification} problems for graphs: \emph{completion} only considers the insertion of edges, for \emph{deletion} edges may only be removed, while solutions to the \emph{editing} problem allow both insertions and deletions, see e.g.\ \cite{Burzyn:06}. \begin{problem}[\PROBLEM{LDT-Graph-Modification (LDT-M)}]\ \\ \begin{tabular}{ll} \emph{Input:} & A colored graph $(G =(V,E),\sigma)$ and an integer $k$.\\ \emph{Question:} & Is there a subset $F\subseteq \binom{V}{2}$ such that $|F|\leq k$ and $(G'=(V,E\star F),\sigma)$ \\ &is an LDT graph where $\star\in \{\setminus, \cup, \Delta\}$? \end{tabular} \end{problem} We write \PROBLEM{LDT-E}, \PROBLEM{LDT-C}, \PROBLEM{LDT-D} for the editing, completion, and deletion versions of \PROBLEM{LDT-M}. By virtue of Thm.~\ref{thm:characterization}, \PROBLEM{LDT-M} is closely related to the problem of finding a compatible subset $\mathscr{R}\subseteq \ensuremath{\mathfrak{S}}(G_\mathscr{R},\sigma)$ with maximum cardinality. The corresponding decision problem, \PROBLEM{MaxRTC}, is known to be NP-complete \cite[Thm.~1]{Jansson:01}. In the Technical Part we prove \begin{ctheorem}{\ref{thm:LDT-M-NP}} \PROBLEM{LDT-M} is NP-complete. \end{ctheorem} Even though at present it remains unclear whether rs-Fitch graphs can be estimated directly, the corresponding graph modification problems are at least of theoretical interest. \begin{problem}[\PROBLEM{rs-Fitch Graph-Modification (rsF-M)}]\ \\ \begin{tabular}{ll} \emph{Input:} & A colored graph $(G =(V,E),\sigma)$ and an integer $k$.\\ \emph{Question:} & Is there a subset $F\subseteq \binom{V}{2}$ such that $|F|\leq k$ and $(G'=(V,E\star F),\sigma)$ \\ &is an rs-Fitch graph where $\star\in \{\setminus, \cup, \Delta\}$? \end{tabular} \end{problem} As above, we write \PROBLEM{rsF-E}, \PROBLEM{rsF-C}, \PROBLEM{rsF-D} for the editing, completion, and deletion versions of \PROBLEM{rsF-M}.
Since rs-Fitch graphs are complete multipartite, their complements are disjoint unions of complete graphs. The problems \PROBLEM{rsF-M} are thus closely related to the cluster graph modification problems. Both \PROBLEM{Cluster Deletion} and \PROBLEM{Cluster Editing} are NP-complete, while \PROBLEM{Cluster Completion} is polynomial (by completing each connected component to a clique, i.e., computing the transitive closure) \cite{Shamir:04}. We obtain \begin{ctheorem}{\ref{thm:rsF-M-NP}} \PROBLEM{rsF-C} and \PROBLEM{rsF-E} are NP-complete. \end{ctheorem} \PROBLEM{rsF-D} remains open since the complement of the transitive closure of the complement of a colored graph $(G,\sigma)$ is not necessarily an rs-Fitch graph. This is in particular the case if $(G,\sigma)$ is complete multipartite but not an rs-Fitch graph. \subsection{Editing LDT Graphs to Fitch Graphs} Putative LDT graphs $(G,\sigma)$ can be estimated directly from sequence (dis)similarity data. The most direct approach was introduced by \citet{Novichkov:04}: for (reciprocally) most similar genes $x$ and $y$ from two distinct species $\sigma(x)=A$ and $\sigma(y)=B$, dissimilarities $\delta(x,y)$ between genes and dissimilarities $\Delta(A,B)$ of the underlying species are compared under the assumption of a (gene family specific) clock-rate $r$, i.e., the expectation that orthologous gene pairs satisfy $\delta(x,y)\approx r \Delta(A,B)$. In this setting, $xy\in E(G)$ if $\delta(x,y)< r \Delta(A,B)$ at some level of statistical significance. The rate assumption can be relaxed to consider rank-order statistics. For fixed $x$, differences in the orders of $\delta(x,y)$ and $\Delta(\sigma(x),\sigma(y))$, assessed by rank-order correlation measures, have been used to identify $x$ as an HGT candidate, see e.g.\ \cite{Lawrence:92,Clarke:02}. An interesting variation on the theme is described by \citet{Sevillya:20}, who use relative synteny rather than sequence similarity for the same purpose.
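The rate-based criterion underlying this estimation approach can be sketched as follows. The helper \texttt{ldt\_edge} and its tolerance parameter \texttt{alpha} are hypothetical simplifications; the actual method of \citet{Novichkov:04} applies a proper statistical significance test rather than a fixed tolerance:

```python
# Sketch of the rate-based LDT edge criterion: genes x, y from distinct
# species A, B are joined by an edge if their dissimilarity delta(x, y)
# falls clearly below the clock-rate prediction r * Delta(A, B).
# The tolerance 'alpha' is a stand-in for a real significance test.

def ldt_edge(delta_xy, Delta_AB, r, alpha=0.05):
    """True if delta(x,y) < r * Delta(A,B) beyond the tolerance alpha."""
    return delta_xy < (1.0 - alpha) * r * Delta_AB

# Genes that diverged clearly later than their species (smaller
# dissimilarity than predicted) are flagged as an LDT pair.
assert ldt_edge(0.30, 0.50, r=1.0)       # well below 0.475 -> edge
assert not ldt_edge(0.55, 0.50, r=1.0)   # above prediction -> no edge
```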
A more detailed account of estimating $(G,\sigma)$ will be given elsewhere. In contrast, it seems much more difficult to infer a Fitch graph $(\digamma,\sigma)$ directly from data. To our knowledge, no method for this purpose has been proposed in the literature. However, $(\digamma,\sigma)$ is of much more direct practical interest because the independent sets of $\digamma$ determine the maximal HGT-free subsets of genes, which could be analyzed separately by better-understood techniques. In this section, we therefore focus on the aspects of $(\digamma,\sigma)$ that are not captured by LDT graphs $(G,\sigma)$. In the light of the previous section, these are in particular non-replacing HGTs, i.e., HGTs that result in genes $x$ and $y$ residing in the same species $\sigma(x)=\sigma(y)$. In this case, $(\digamma,\sigma)$ is no longer properly colored and thus $G\ne\digamma$. To get a better intuition on this case, consider three genes $a$, $a'$, and $b$ with $\sigma(a)=\sigma(a')\ne\sigma(b)$ such that $ab\notin E(G)$ and $a'b\in E(G)$. By Lemma~\ref{lem:Ru-GeneTriple}, the gene tree $T$ of any explaining relaxed scenario displays the triple $a'b|a$. Fig.~\ref{fig:2plausibeScen} shows two relaxed scenarios with a single HGT that explain this situation: \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{./images-Rb/two-scen-same-triple.pdf} \end{center} \caption{Two relaxed scenarios with $T$ displaying the triple $a'b|a$ and explaining the same graph $(G,\sigma)$.} \label{fig:2plausibeScen} \end{figure} In the first, we have $aa'\in E(\digamma)$, while the other implies $aa'\notin E(\digamma)$. Neither scenario is \emph{a priori} less plausible than the other. Although the frequency of true homologous replacement via crossover decreases exponentially with the phylogenetic distance of donor and acceptor species \cite{Williams:12}, additive HGT with subsequent loss of one copy is an entirely plausible scenario.
A pragmatic approach to approximating $(\digamma,\sigma)$ is therefore to consider the step from an LDT graph $(G,\sigma)$ to $(\digamma,\sigma)$ as a graph modification problem. First we note that Algorithm~\ref{alg:Ru-recognition} explicitly produces a relaxed scenario $\ensuremath{\mathscr{S}}$ and thus implies a corresponding gene tree $T_{\ensuremath{\mathscr{S}}}$ with HGT-labeling $\lambda_{\ensuremath{\mathscr{S}}}$, and thus an rs-Fitch graph $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$. However, Algorithm~\ref{alg:Ru-recognition} was designed primarily as a proof device. It produces neither a unique relaxed scenario nor necessarily a most plausible or most parsimonious one. Furthermore, both the LDT graph $(G,\sigma)$ and the desired rs-Fitch graph $(\digamma,\sigma)$ are consistent with a potentially very large number of scenarios. It thus appears preferable to avoid the explicit construction of scenarios altogether at this stage. Since every LDT graph $(G,\sigma)$ is explained by some $\ensuremath{\mathscr{S}}$, it is also a spanning subgraph of the corresponding rs-Fitch graph $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$. The step from an LDT graph $(G,\sigma)$ to an rs-Fitch graph $(\digamma,\sigma)$ can therefore be viewed as an edge-completion problem. The simplest variation of the problem is \begin{problem}[Fitch graph completion] Given an LDT graph $(G,\sigma)$, find a minimum cardinality set $Q$ of possible edges such that $((V(G),E(G)\cup Q),\sigma)$ is a complete multipartite graph. \label{problem:Fcomp} \end{problem} A close inspection of Problem~\ref{problem:Fcomp} shows that the coloring is irrelevant in this version, and the actual problem to be solved is \textsc{Complete Multipartite Graph Completion} with a cograph as input. We next show that this task can be performed in linear time.
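The linear-time argument below rests on the fact that a graph is complete multipartite if and only if its complement is a disjoint union of complete graphs. A small self-contained check of this characterization (adjacency encoded as a dict of neighbor sets; purely illustrative):

```python
from itertools import combinations

def complement(adj):
    """Complement of a simple graph given as {vertex: set_of_neighbors}."""
    V = set(adj)
    return {v: V - adj[v] - {v} for v in V}

def components(adj):
    """Connected components via iterative DFS."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_complete_multipartite(adj):
    """G is complete multipartite iff every component of its complement is a clique."""
    co = complement(adj)
    return all(u in co[v]
               for comp in components(co)
               for u, v in combinations(comp, 2))

# C4 = K_{2,2} is complete multipartite; the path P4 is not.
C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert is_complete_multipartite(C4)
assert not is_complete_multipartite(P4)
```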
The key idea is to consider the complementary problem, i.e., the problem of deleting a minimum set of edges from the complementary cograph $\overline{G}$ such that the end result is a disjoint union of complete graphs. This is known as the \textsc{Cluster Deletion} problem \cite{Shamir:04}, which is known to have a greedy solution for cographs \cite{Gao:13}. \begin{clemma}{\ref{lem:editing}} There is a linear-time algorithm to solve Problem \ref{problem:Fcomp} for every cograph $G$. \end{clemma} All maximum clique partitions of a cograph $G$ have the same sequence of cluster sizes \cite[Thm.~1]{Gao:13}. However, they are not unique as partitions of the vertex set $V(G)$. Thus the minimum set $Q$ of edges that needs to be inserted into a cograph to reach a complete multipartite graph will not be unique in general. In the Technical Part, we briefly sketch a recursive algorithm operating on the cotree of $\overline{G}$. However, an optimal solution to Problem~\ref{problem:Fcomp} with input $(G,\sigma)$ does not necessarily yield an rs-Fitch graph, or an rs-Fitch graph $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$ such that $G=\Gu(\ensuremath{\mathscr{S}})$, see Fig.~\ref{fig:optimal-edit-no-rs-Fitch}. In particular, there are LDT graphs $(G,\sigma)$ for which more edges need to be added to obtain an rs-Fitch graph than the minimum required to obtain a complete multipartite graph, see Fig.~\ref{fig:optimal-rs-Fitch-no-min-compl}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.7\textwidth]{./images-Rb/optimal-edit-no-rs-Fitch.pdf} \end{center} \caption{Upper panel: a relaxed scenario $\ensuremath{\mathscr{S}}$ with LDT graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ and rs-Fitch graph $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$. There are two minimum edge completion sets that yield the complete multipartite graphs $(\digamma_1,\sigma)$ and $(\digamma_2,\sigma)$ (lower part). By Thm.~\ref{thm:char-rsFitch}, $(\digamma_2,\sigma)$ is not an rs-Fitch graph.
The graph $(\digamma_1,\sigma)$ is an rs-Fitch graph for the relaxed scenario $\ensuremath{\mathscr{S}}'$. However, $\Gu(\ensuremath{\mathscr{S}})\ne \Gu(\ensuremath{\mathscr{S}}')$ for all scenarios $\ensuremath{\mathscr{S}}'$ with $(\digamma(\ensuremath{\mathscr{S}}'),\sigma) = (\digamma_1,\sigma)$. To see this, note that the gene tree $T=((a,b),(a',b'))$ in $\ensuremath{\mathscr{S}}$ is uniquely determined by application of Lemmas~\ref{lem:2order} and~\ref{lem:Ru-GeneTriple}. Assume that there is an edge-labeling $\lambda$ such that $\digamma(T,\lambda) = \digamma_1$. The non-edges in $\digamma_1$ imply that along the two paths from $a$ to $a'$ and from $b$ to $b'$ there is no transfer edge, that is, there cannot be any transfer edge in $T$; a contradiction.} \label{fig:optimal-edit-no-rs-Fitch} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/optimal-rs-Fitch-no-min-compl.pdf} \end{center} \caption{The LDT graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ for the relaxed scenario $\ensuremath{\mathscr{S}}$ has a unique minimum edge completion set (as determined by full enumeration), resulting in the complete multipartite graph $(\digamma_1,\sigma)$. However, Thm.~\ref{thm:char-rsFitch} implies that $(\digamma_1,\sigma)$ is not an rs-Fitch graph. An edge completion set with more edges must be used to obtain an rs-Fitch graph, for instance $(\digamma_2,\sigma)$, which is explained by the scenario $\ensuremath{\mathscr{S}}'$.} \label{fig:optimal-rs-Fitch-no-min-compl} \end{figure} A more relevant problem for our purposes, therefore, is \begin{problem}[rs-Fitch graph completion] Given an LDT graph $(G,\sigma)$, find a minimum cardinality set $Q$ of possible edges such that $((V(G),E(G)\cup Q),\sigma)$ is an rs-Fitch graph.
\label{problem:rsFcomp} \end{problem} The following, stronger version is what we ideally would like to solve: \begin{problem}[strong rs-Fitch graph completion] Given an LDT graph $(G,\sigma)$, find a minimum cardinality set $Q$ of possible edges such that $\digamma = ((V(G),E(G)\cup Q),\sigma)$ is an rs-Fitch graph and there is a common relaxed scenario $\ensuremath{\mathscr{S}}$, that is, $\ensuremath{\mathscr{S}}$ satisfies $G = \Gu(\ensuremath{\mathscr{S}})$ and $\digamma = \digamma(\ensuremath{\mathscr{S}})$. \label{problem:strong-rsFcomp} \end{problem} The computational complexity of Problems \ref{problem:rsFcomp} and \ref{problem:strong-rsFcomp} is unknown. We conjecture, however, that both are NP-hard. In contrast to the application of graph modification problems to correct possible errors in the originally estimated data, the minimization of inserted edges into an LDT graph lacks a direct biological interpretation. Instead, most-parsimonious solutions in terms of evolutionary events are usually of interest in biology. In our framework, this translates to \begin{problem}[Min Transfer Completion] Let $(G,\sigma)$ be an LDT graph and $\mathbb{S}$ be the set of all relaxed scenarios $\ensuremath{\mathscr{S}}$ with $G=\Gu(\ensuremath{\mathscr{S}})$. Find a relaxed scenario $\ensuremath{\mathscr{S}}'\in\mathbb{S}$ that has a minimum number of transfer edges among all elements of $\mathbb{S}$, together with the corresponding rs-Fitch graph $\digamma(\ensuremath{\mathscr{S}}')$. \label{problem:strong-Tcomp} \end{problem} One way to address this problem might be as follows: find edge-completion sets for the given LDT graph $(G,\sigma)$ that minimize the number of independent sets in the resulting rs-Fitch graph $\digamma = ((V(G),E(G)\cup Q),\sigma)$.
The intuition behind this idea is that, in this case, the number of pairs within the individual independent sets is maximized and, thus, we obtain a maximum number of gene pairs without transfer along their connecting path in the gene tree. It remains an open question whether this idea always yields a solution for Problem~\ref{problem:strong-Tcomp}. \section{Simulation Results} \label{sect:simul} Evolutionary scenarios covering a wide range of HGT frequencies were generated with the simulation library \texttt{AsymmeTree} \cite{Stadler:20a}. The tool generates a planted species tree $S$ with time map $\ensuremath{\tau_{S}}$. A constant-rate birth-death process then generates a gene tree $(\widetilde T,\widetilde\ensuremath{\tau_{T}})$, with additional branching events at an inner vertex $u$ of $S$ producing copies that propagate to each descendant lineage of $u$. To model HGT events, a recipient branch of $S$ is selected at random. The simulation is event-based in the sense that each node of the ``true'' gene tree other than the planted root corresponds to one of the following: a speciation, a gene duplication, a horizontal gene transfer, a gene loss, or a surviving gene. Here, the lost as well as the surviving genes form the leaf set of $\widetilde T$. We used the following parameter settings for \texttt{AsymmeTree}: Planted species trees with a number of leaves between 10 and 50 (randomly drawn in each scenario) were generated using the Innovation Model \cite{Keller:12} and equipped with a time map as described in \cite{Stadler:20a}. To model the effects of limited phylogenetic resolution, multifurcations were introduced into the species tree by contraction of inner edges with a common probability $p=0.2$ per edge. Gene trees therefore are also not binary in general. Duplication and HGT events, however, always result in bifurcations in the gene tree $\widetilde T$.
We considered different combinations of duplication, loss, and HGT event rates (indicated on the horizontal axis in Figs.~\ref{fig:dataset-stats}--\ref{fig:fitch-approx-bp}). For each combination of event rates, we simulated 1000 scenarios. Fig.~\ref{fig:dataset-stats} summarizes basic statistics of the simulated data sets. \begin{figure}[ht] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/dataset-stats.pdf} \end{center} \caption{Top panel: Distribution of the numbers of species (i.e.\ species tree leaves), species thereof that contain at least one surviving gene, surviving genes in total (non-loss leaves in the gene trees), loss events (loss leaves), and horizontal transfer events (inner vertices that are HGT events). Bottom panel: Mean and standard deviation of these quantities. The numbers in the legend indicate the mean and standard deviation taken over all event rate combinations. The tuples on the horizontal axis give the rates for duplication, loss, and horizontal transfer.} \label{fig:dataset-stats} \end{figure} The simulation also determines the set of surviving genes $L\subseteq L(\widetilde{T})$, the reconciliation map $\widetilde\mu\colon V(\widetilde{T})\to V(S)\cup E(S)$, and the coloring $\sigma\colon L\to L(S)$ representing the species in which each surviving gene resides. From the true tree $\widetilde T$, the observable gene tree $T=\widetilde{T}_{|L}$ is obtained by recursively removing leaves that correspond to loss events, i.e.\ $L(\widetilde{T})\setminus L$, suppressing inner vertices with a single child, and setting $\ensuremath{\tau_{T}}(x)=\widetilde\ensuremath{\tau_{T}}(x)$ and $\mu(x)=\widetilde\mu(x)$ for all $x\in V(T)$. This defines a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$.
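The construction of the observable tree $T=\widetilde{T}_{|L}$ described above, i.e., removing loss leaves and suppressing single-child vertices, can be sketched with a simple nested-tuple representation of trees (an illustrative sketch, not the \texttt{AsymmeTree} implementation; the time maps and $\mu$ are omitted):

```python
# Restrict a rooted tree to the surviving leaves: recursively drop loss
# leaves and suppress inner vertices that retain only a single child.
# A tree is either a leaf label or a tuple of subtrees.

def restrict(tree, survivors):
    if not isinstance(tree, tuple):               # leaf
        return tree if tree in survivors else None
    children = [c for c in (restrict(t, survivors) for t in tree)
                if c is not None]
    if not children:                              # whole subtree lost
        return None
    if len(children) == 1:                        # suppress unary vertex
        return children[0]
    return tuple(children)

# Gene tree ((a, loss1), (b, (loss2, c))): after removing the loss
# leaves, the unary vertices above 'a' and 'c' are suppressed.
T = (('a', 'loss1'), ('b', ('loss2', 'c')))
assert restrict(T, {'a', 'b', 'c'}) == ('a', ('b', 'c'))
```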
From the scenario $\ensuremath{\mathscr{S}}$, we can immediately determine the associated HGT map $\lambda_{\ensuremath{\mathscr{S}}}$, the Fitch graph $\digamma(\ensuremath{\mathscr{S}})$, and the LDT graph $\Gu(\ensuremath{\mathscr{S}})$. We also consider $\widetilde\ensuremath{\mathscr{S}}=(\widetilde T, S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, which, from a formal point of view, is not a relaxed scenario because the gene-species association $\sigma \colon L \to L(S)$ is not a map for the entire leaf set $L(\widetilde T)$; see Fig.~\ref{fig:transfer-edges-plot}. Still, we can define the \emph{true LDT graph} $\Gu(\widetilde \ensuremath{\mathscr{S}})$ and the \emph{true Fitch graph} $\digamma(\widetilde\ensuremath{\mathscr{S}})$ of $\widetilde\ensuremath{\mathscr{S}}$ in the same way as for relaxed scenarios, using Defs.~\ref{def:LDTgraph}, \ref{def:Gu-scen}, and \ref{def:rsFitchG}, respectively. Note that this does not guarantee that every true Fitch graph is also an rs-Fitch graph. The example in Fig.~\ref{fig:transfer-edges-plot} shows, furthermore, that $\digamma(\widetilde\ensuremath{\mathscr{S}})[L] \neq \digamma(\ensuremath{\mathscr{S}})$ is possible. For the LDT graphs, on the other hand, we have $\Gu(\ensuremath{\mathscr{S}}) = \Gu(\widetilde \ensuremath{\mathscr{S}})$ because $\widetilde \ensuremath{\mathscr{S}}$ and $\ensuremath{\mathscr{S}}$ are based on the same time maps. \begin{figure}[ht] \begin{center} \includegraphics[width=0.5\textwidth]{./images-Rb/transfer-edges-plot.pdf} \includegraphics[width=0.35\textwidth]{./images-Rb/invisible-transfer.pdf} \end{center} \caption{Left: Fraction of ``visible'' transfer edges among the ``true'' transfer edges in $T$ in the simulated scenarios, i.e., the edges that correspond to a path in $\widetilde T$ containing at least one transfer edge w.r.t.\ $\widetilde{\ensuremath{\mathscr{S}}}$ (see also the explanation in the text).
The tuples on the horizontal axis give the rates for duplication, loss, and horizontal transfer. Since $E\coloneqq E(\digamma(\ensuremath{\mathscr{S}})) \subseteq \widetilde{E} \coloneqq E(\digamma(\widetilde\ensuremath{\mathscr{S}})[L(T)])$, we also show the ratio $|E|/|\widetilde E|$. Right: A relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with an ``invisible'' transfer edge $(u,a')$ (as determined by the knowledge of $\widetilde\ensuremath{\mathscr{S}}=(\widetilde T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$). In this example, we have $L(T)=\{a,a'\}$ and $\digamma(\widetilde\ensuremath{\mathscr{S}})[L(T)] \neq \digamma(\ensuremath{\mathscr{S}})$.} \label{fig:transfer-edges-plot} \end{figure} The distinction between the true graph $\digamma(\widetilde\ensuremath{\mathscr{S}})[L]$ and the rs-Fitch graph $\digamma(\ensuremath{\mathscr{S}})$ is closely related to the definition of transfer edges. So far, we have only taken into account transfer edges $(u,v)$ in the (observable) gene trees $T$ for which $u$ and $v$ are mapped to incomparable vertices or edges of the species tree $S$ (cf.\ Def.~\ref{def:HGT-label}). Thus, given the knowledge of the relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, these transfer edges are in that sense ``visible''. However, given $\widetilde\ensuremath{\mathscr{S}}=(\widetilde T, S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, which still contains all loss branches, it is possible that a non-transfer edge in $T$ corresponds to a path in $\widetilde T$ that contains a transfer edge w.r.t.\ $\widetilde\ensuremath{\mathscr{S}}$, i.e., some edge $(u,v)\in E(\widetilde{T})$ such that $\widetilde{\mu}(u)$ and $\widetilde{\mu}(v)$ are incomparable in $S$.
In particular, this is the case whenever a gene is transferred into some recipient branch followed by a back-transfer into the original branch and a loss in the recipient branch (see Fig.~\ref{fig:transfer-edges-plot}, right). Fig.~\ref{fig:transfer-edges-plot} shows that, in the majority of the simulated scenarios, the HGT information is preserved in the observable data. In fact, $\digamma(\ensuremath{\mathscr{S}})=\digamma(\widetilde\ensuremath{\mathscr{S}})[L]$ in $86.7\%$ of the simulated scenarios. Occasionally, however, we also encounter scenarios in which large fractions of the xenologous pairs are hidden from inference by the LDT-based approach. In the following, we will only be concerned with estimating a Fitch graph $\digamma(\ensuremath{\mathscr{S}})$, i.e., the graph resulting from the ``visible'' transfer edges. These were edgeless in about $17.7\%$ of the observable scenarios $\ensuremath{\mathscr{S}}$ (all parameter combinations taken into account). In these cases, the LDT and thus also the inferred Fitch graphs are edgeless. These scenarios were excluded from further analysis. \begin{figure}[tb] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/fitch-approx-bp.pdf} \end{center} \caption{Xenologs inferred from LDT graphs. Only observable scenarios $\ensuremath{\mathscr{S}}$ whose LDT graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ contains at least one edge are included (82.3\% of all scenarios). The tuples on the horizontal axis give the rates for duplication, loss, and horizontal transfer. Top panel: Recall. Fraction of edges in $\digamma(\ensuremath{\mathscr{S}})$ represented in $\Gu(\ensuremath{\mathscr{S}})$ (light blue). As an alternative, the fraction of edges in a ``minimum edge completion'' (m.e.c.) to the ``closest'' complete multipartite graph is shown in dark blue. We observe a substantial increase in the fraction of inferred edges.
The Fitch graph $\digamma(\ensuremath{\mathscr{S}}')$ obtained from the scenario $\ensuremath{\mathscr{S}}'$ produced by Alg.~\ref{alg:Ru-recognition} with input $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ yields an even better recall (light green). Second panel: Increase in the number of correctly inferred edges relative to the LDT graph $\Gu(\ensuremath{\mathscr{S}})$. Third panel: Precision. While LDT graphs by Thm.~\ref{thm:infer-fitch} cannot contain false positive edges, the estimated Fitch graphs obtained as m.e.c.\ and by Alg.~\ref{alg:Ru-recognition} can. Although false positive edges are typically rare, occasionally very poor estimates are observed. Bottom panel: Accuracy.} \label{fig:fitch-approx-bp} \end{figure} We first ask how well the LDT graph $\Gu(\ensuremath{\mathscr{S}})$ approximates the Fitch graph $\digamma(\ensuremath{\mathscr{S}})$. As shown in Fig.~\ref{fig:fitch-approx-bp}, the recall is limited: over a broad range of parameters, the LDT graph contains about a third of the xenologous pairs. This raises the question of whether the solution of the editing Problem~\ref{problem:Fcomp}, obtained using the exact recursive algorithm detailed in Sec.~\ref{app:edit} in the Technical Part, leads to a substantial improvement. We find that recall indeed increases substantially, at very moderate levels of false positives. With a median precision well above 90\% in most cases and a median recall of at least 60\%, the editing approach provides results that are at the very least encouraging. We find that minimal edge completion (Problem~\ref{problem:Fcomp}) already yields an rs-Fitch graph in the vast majority of cases ($99.8$\%, scenarios of all parameter combinations taken into account), even if we restrict the color set to $M'\coloneqq \sigma(L)$ (instead of $L(S)$) and thus force surjectivity of the coloring $\sigma$.
We note that the original LDT graph and the minimal edge completion may not always be explained by a common scenario. This suggests that it will be worthwhile to consider the more difficult editing problems for rs-Fitch graphs with a relaxed scenario $\ensuremath{\mathscr{S}}$ that at the same time explains the LDT graph. Alg.~\ref{alg:Ru-recognition} provides a means to obtain an rs-Fitch graph satisfying the latter constraint, but without any guarantee of optimality in terms of a minimal edge completion. An implementation is available in the current release of the \texttt{AsymmeTree} package. For the rs-Fitch graphs $\digamma(\ensuremath{\mathscr{S}}')$ of the scenarios $\ensuremath{\mathscr{S}}'$ constructed by Alg.~\ref{alg:Ru-recognition} with $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ as input, we observe another moderate increase in recall when compared with the minimal edge completion results. This comes, however, at the expense of a loss in precision. This is not surprising, since $\digamma(\ensuremath{\mathscr{S}}')$ by construction contains at least as many edges as any minimal edge completion of $\Gu(\ensuremath{\mathscr{S}})$. Therefore, the numbers of both true positive and false positive edges in $\digamma(\ensuremath{\mathscr{S}}')$ can be expected to be higher, resulting in a higher recall and a lower precision, respectively. The recall is given by $TP / (TP + FN)$, and $|E(\digamma(\ensuremath{\mathscr{S}}))|= TP + FN$ in terms of true positives $TP$ and false negatives $FN$. Moreover, $\Gu(\ensuremath{\mathscr{S}})$ is a subgraph of the Fitch graphs $\digamma_{\textrm{m.e.c.}}$ and $\digamma(\ensuremath{\mathscr{S}}')$ inferred with editing or with Alg.~\ref{alg:Ru-recognition}, respectively.
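In terms of edge sets, the performance measures used here reduce to elementary set arithmetic. A sketch (the function name and the convention of precision $1$ for an empty prediction are our own):

```python
from itertools import combinations

def edge_metrics(true_edges, pred_edges, vertices):
    """Recall, precision, and accuracy of a predicted edge set
    against the true Fitch graph's edge set on the same vertex set."""
    pairs = {frozenset(p) for p in combinations(vertices, 2)}
    TP = len(true_edges & pred_edges)
    FP = len(pred_edges - true_edges)
    FN = len(true_edges - pred_edges)
    TN = len(pairs) - TP - FP - FN
    recall = TP / (TP + FN)               # |E(F(S))| = TP + FN
    precision = TP / (TP + FP) if pred_edges else 1.0
    accuracy = (TP + TN) / len(pairs)
    return recall, precision, accuracy

e = lambda u, v: frozenset({u, v})
true_E = {e('a', 'b'), e('a', 'c'), e('b', 'c')}
pred_E = {e('a', 'b'), e('a', 'd')}
r, p, acc = edge_metrics(true_E, pred_E, 'abcd')
assert (r, p) == (1/3, 1/2)
```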
The ratio $|E(\digamma(\ensuremath{\mathscr{S}})) \cap E(\digamma^*)| / |E(\digamma(\ensuremath{\mathscr{S}})) \cap E(\Gu(\ensuremath{\mathscr{S}}))|$ with $\digamma^*\in \{\digamma_{\textrm{m.e.c.}}, \digamma(\ensuremath{\mathscr{S}}') \}$ therefore directly measures the increase in the number of correctly predicted xenologous pairs relative to the LDT graph. It is equivalent to the ratio of the respective recalls and, by construction, always $\ge1$. This is summarized in the second panel of Fig.~\ref{fig:fitch-approx-bp}. \section{Discussion and Future Directions} \label{sect:concl} In this contribution, we have introduced \emph{later-divergence-time (LDT) graphs} as a model capturing the subset of horizontal transfer events detectable through pairs of genes that have diverged later than their respective species. Within the setting of relaxed scenarios, LDT graphs $(G,\sigma)$ are exactly the properly colored cographs with a consistent triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$. We further showed that LDT graphs describe a sufficient set of HGT events if and only if they are complete multipartite graphs. This corresponds to scenarios in which all HGT events are replacing. Otherwise, additional HGT events exist that separate genes from the same species. To better understand these, we investigated scenario-derived rs-Fitch graphs and characterized them as those complete multipartite graphs that satisfy an additional constraint on the coloring (expressed in terms of an auxiliary graph). Although the information contained in LDT graphs is not sufficient to unambiguously determine the missing HGT edges, we arrive at an efficiently solvable graph editing problem from which a ``best guess'' can be obtained. To our knowledge, this is the first detailed mathematical investigation into the power and limitations of an implicit phylogenetic method for HGT inference.
From a data analysis point of view, LDT graphs appear to be an attractive avenue to infer HGT in practice. While existing methods to estimate them from (dis)similarity data certainly can be improved, it is possible to use their cograph structure to correct the initial estimate in the same way as for orthology data \cite{Hellmuth:15a}. Although the LDT modification problems are NP-complete (Thm.~\ref{thm:LDT-M-NP}), it does not appear too difficult to modify efficient cograph editing heuristics \cite{Crespelle:19x,Hellmuth:20b} to accommodate the additional coloring constraints. LDT graphs by themselves clearly do not contain sufficient information to completely determine a relaxed scenario. Additional information, e.g.\ a best match graph \cite{Geiss:19a,Geiss:20b}, will certainly be required. The most direct practical use of LDT information is to infer the Fitch graph, whose independent sets correspond to maximal HGT-free subsets of genes. These subsets can be analyzed separately \cite{Hellmuth:2017} using recent results to infer gene family histories, including orthology relations, from best match data \cite{Geiss:20b,Schaller:20x}. The main remaining unresolved question is whether the resulting HGT-free subtrees can be combined into a complete scenario using only relational information such as best match data. One way to attack this is to employ the techniques used by \citet{LH:20} to characterize the conditions under which a fully event-labeled gene tree can be reconciled with unknown species trees. These not only resulted in a polynomial-time algorithm but also establish additional constraints on the HGT-free subtrees. An alternative, albeit mathematically less appealing, approach is to adapt classical phylogenetic methods to accommodate the HGT-free subtrees as constraints. We suspect that best match data can supply further, stringent constraints for this task. We will pursue this avenue elsewhere.
Several alternative routes can be followed to obtain Fitch graphs from LDT graphs. The most straightforward approach is to elaborate on the editing problems briefly discussed in Sec.~\ref{sect:edit}. A natural question arising in this context is whether there are non-LDT edges that are shared by all minimal completion sets $Q$, and whether these ``obligatory Fitch-edges'' can be determined efficiently. A natural alternative is to modify Algorithm~\ref{alg:Ru-recognition} to incorporate some form of cost function that favors the construction of biologically plausible scenarios. In a very different approach, one might also consider using LDT graphs as constraints in probabilistic models to reconstruct scenarios, see e.g.\ \cite{Sjostrand:14,Khan:16}. Although we have obtained characterizations of both LDT graphs and rs-Fitch graphs, many open questions and avenues for future research remain. \paragraph{Reconciliation maps.} The notion of \emph{relaxed reconciliation maps} used here appears to be at least as general as alternatives that have been explored in the literature. It avoids the concurrent definition of event types and thus allows situations that may be excluded in a more restrictive setting. For example, relaxed scenarios may contain two or more vertically inherited genes $x$ and $y$ in the same species with $u\coloneqq \lca_T(x,y)$ mapping to a vertex of the species tree. In the usual interpretation, $u$ corresponds to a speciation event (by virtue of $\mu(u)\in V^0(S)$); on the other hand, the descendants $x$ and $y$ constitute paralogs in most interpretations. Such scenarios are explicitly excluded e.g.\ in \cite{Stadler:20a}.
Lemma~\ref{lem:NoR=} suggests that relaxed scenarios are sufficiently flexible to make it possible to replace a scenario $\ensuremath{\mathscr{S}}$ that is ``forbidden'' in response to such inconsistent interpretations of events by an ``allowed'' scenario $\ensuremath{\mathscr{S}}'$ with the same $\sigma$ such that $\Gu(\ensuremath{\mathscr{S}})=\Gu(\ensuremath{\mathscr{S}}')$. Whether this is indeed true, or whether a more restrictive definition of reconciliation imposes additional constraints on LDT graphs, will of course need to be checked in each case. The restriction of a $\mu$-free scenario to a subset $L'$ of leaves of $T$ and to a subset $M'$ of leaves of $S$ is well defined as long as $\sigma(L')\subseteq M'$. One can also define a corresponding restriction of the reconciliation map $\mu$. Most importantly, the deletion of some leaves of $T$ may leave inner vertices in $T$ with only a single child, which are then suppressed to recover a phylogenetic tree. This replaces paths in $T$ by single edges and thus affects the definition of the HGT map $\lambda_{\ensuremath{\mathscr{S}}}$, since a path in $T$ that contains two adjacent vertices $u_1$, $u_2$ with incomparable images $\mu(u_1)$ and $\mu(u_2)$ may be replaced by an edge with comparable end points in the restricted scenario $\ensuremath{\mathscr{S}}'$. This means that HGT events may become invisible, and thus $\digamma(\ensuremath{\mathscr{S}}')$ is not necessarily an \emph{induced} subgraph of $\digamma(\ensuremath{\mathscr{S}})$, but a subgraph that may lack additional edges. Note that this is in contrast to the \emph{assumptions} made in the analysis of (directed) Fitch graphs of 0/1-edge-labeled graphs \cite{Geiss:18a,Hellmuth:2019a}, where the information on horizontal transfers is inherited upon restriction of $(T,\lambda)$. \paragraph{Observability.} The latter issue is a special case of the more general problem of the \emph{observability} of events.
Conceptually, we assume that evolution followed a \emph{true scenario} comprising discrete events (speciations, duplications, horizontal transfers, gene losses, and possibly other events such as hybridization, which are not considered here). In computer simulations, of course, we know this true scenario, as well as all event types. Gene loss not only renders some leaves invisible but also erases the evidence of all subtrees without surviving leaves. Removal of these vertices in general results in a non-phylogenetic gene tree that contains inner vertices with a single child. In the absence of horizontal transfer, this causes few problems and the \emph{unobservable vertices} can be removed as described in the previous paragraph, see e.g.\ \cite{HernandezRosales:12a}. The situation is more complicated with HGT. In \cite{Nojgaard:18a}, an HGT-vertex is deemed observable if it has both a horizontally and a vertically inherited descendant. In our present setting, the scenario retains an HGT-edge by virtue of consecutive vertices in $T$ with incomparable $\mu$-images, irrespective of whether an HGT-vertex is retained. This type of ``vertex-centered'' notion of xenology is explored further in \cite{Hellmuth:17a}. We suspect that these different points of view can be unified only when gene losses are represented explicitly or when gene and species trees are not required to be phylogenetic (with single-child vertices indicating losses). Either extension of the theory, however, requires a more systematic understanding of which losses need to be represented and what evidence can be acquired to ``observe'' them. \paragraph{Impact of Orthology.} Pragmatically, one would define two genes $x$ and $y$ to be \emph{orthologs} if $\mu(\lca_T(x,y))\in V^0(S)$, i.e., if $x$ and $y$ are the product of a speciation event. Lemma~\ref{lem:NoR=} implies that there is always a scenario without any orthologs that explains a given LDT graph $(G,\sigma)$.
In particular, therefore, $(G,\sigma)$ carries no implications for orthology. Conversely, however, when orthology information is available, additional information on HGT might become available as well. In a situation akin to Fig.~\ref{fig:2plausibeScen} (with the ancestral duplication moved down to the speciation), knowing that $a$ and $b$ are orthologs in the more restrictive sense that $\mu(\lca_T(a,b))=\lca_S(\sigma(a),\sigma(b))$ excludes the r.h.s.\ scenario and implies that $a'$ is the horizontally inherited child, and therefore also that $a$ and $a'$ are xenologs. This connection of orthology and xenology will be explored elsewhere. \paragraph{Other types of implicit phylogenetic information.} LDT graphs are not the only conceivable type of accessible xenology information. A large class of methods is designed to assess whether a single gene is \emph{a xenolog}, i.e., whether there is evidence that it has been horizontally inserted into the genome of the recipient species. The main subclasses evaluate nucleotide composition patterns, the phyletic distribution of best-matching genes, or combinations thereof. A recent overview can be found e.g.\ in \cite{SanchezSoto:20}. It remains an open question how this information can be utilized in conjunction with other types of HGT information, such as LDT graphs. It seems reasonable to expect that it can provide not only additional constraints to infer rs-Fitch graphs but also directional information that may help to infer the directed Fitch graphs studied by \cite{Geiss:18a,Hellmuth:2019a}. Complementarily, we may ask whether it is possible to gain direct information on HGT edges between pairs of genes in the same genome, and if so, what needs to be measured to extract this information efficiently. We also have to leave open several mathematical questions.
Regarding 0/1-edge labeled trees $(T,\lambda)$, it would be of interest to know whether there is always a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $(T,\lambda) = (T,\lambda_{\ensuremath{\mathscr{S}}})$ for a suitable choice of $\sigma$. Elaborating on Thm.~\ref{thm:FitchRu-scenario}, it would be interesting to characterize the leaf colorings $\sigma$ for $(T,\lambda)$ such that there is a relaxed scenario $\ensuremath{\mathscr{S}}$ with $\digamma(T,\lambda) = \digamma(\ensuremath{\mathscr{S}})$. \subsection*{Acknowledgments} We thank the three anonymous referees for their valuable comments that helped to significantly improve the paper. This work was funded in part by the Deutsche Forschungsgemeinschaft (proj.\ CO1 within CRG 1423, no.\ 421152132 and proj. MI439/14-2), and by the Natural Sciences and Engineering Research Council of Canada (NSERC, grant RGPIN-2019-05817). \begin{appendix} \section*{Technical Part} \section{Later-Divergence-Time Graphs} \label{TP:sect:LDT} \subsection{LDT Graphs and Evolutionary Scenarios} In the absence of horizontal gene transfer, the last common ancestor of two species $A$ and $B$ should mark the latest possible time point at which two genes $a$ and $b$ residing in $\sigma(a)=A$ and $\sigma(b)=B$, respectively, may have diverged. Situations in which this constraint is violated are therefore indicative of HGT. \begin{definition}[{$\mathbf{\mu}$}-free scenario] Let $T$ and $S$ be planted trees, $\sigma\colon L(T)\to L(S)$ be a map and $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$ be time maps of $T$ and $S$, respectively, such that $\ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{S}}(\sigma(x))$ for all $x\in L(T)$. Then, $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is called a \emph{$\mu$-free scenario}. 
\label{def:mu-free} \end{definition} The condition that $\ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{S}}(\sigma(x))$ for all $x\in L(T)$ is mostly a technical convenience that makes $\mu$-free scenarios easier to interpret. Nevertheless, by Lemma~\ref{lem:arbitrary-tT}, given the time map $\ensuremath{\tau_{S}}$, one can easily construct a time map $\ensuremath{\tau_{T}}$ such that $\ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{S}}(\sigma(x))$ for all $x\in L(T)$. In particular, when constructing relaxed scenarios explicitly, we may simply choose $\ensuremath{\tau_{T}}(u)=0$ and $\ensuremath{\tau_{S}}(x)=0$ as the common time for all leaves $u\in L(T)$ and $x\in L(S)$. \begin{definition}[LDT graph] For a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, we define $\Gu(\ensuremath{\mathscr{T}}) = \Gu(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}}) = (V,E)$ as the graph with vertex set $V\coloneqq L(T)$ and edge set \begin{equation*} E \coloneqq \{ab\mid a,b\in L(T),\; \ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(b)))\}. \label{eq:Ru-def} \end{equation*} A vertex-colored graph $(G,\sigma)$ is a \emph{later-divergence-time graph} (\emph{LDT} graph), if there is a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $G=\Gu(\ensuremath{\mathscr{T}})$. In this case, we say that $\ensuremath{\mathscr{T}}$ \emph{explains} $(G,\sigma)$. \label{def:LDTgraph} \end{definition} It is easy to see that the edge set of $\Gu(\ensuremath{\mathscr{T}})$ defines an \emph{undirected} graph and that there are no edges of the form $aa$, since $\ensuremath{\tau_{T}}(\lca_T(a,a)) = \ensuremath{\tau_{T}}(a) = \ensuremath{\tau_{S}}(\sigma(a)) =\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(a)))$. Hence $\Gu(\ensuremath{\mathscr{T}})$ is a simple graph.
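The definition of $\Gu(\ensuremath{\mathscr{T}})$ is directly algorithmic: for every pair of leaves of $T$, one compares the time of their last common ancestor in $T$ with the time of the last common ancestor of their colors in $S$. The following Python sketch illustrates this; it is not part of the formal development, and the encoding of trees as child-to-parent dictionaries as well as all identifier names are our own assumptions.

```python
# Illustrative sketch (not from the formal development): computing the LDT
# graph of a mu-free scenario (T, S, sigma, tau_T, tau_S).  Trees are encoded
# as child -> parent dictionaries; the planted root has no parent entry.
from itertools import combinations

def ancestors(parent, v):
    """Return v followed by all of its ancestors up to the (planted) root."""
    path = [v]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def lca(parent, a, b):
    """Last common ancestor: first vertex on a's root path also above b."""
    anc_b = set(ancestors(parent, b))
    for v in ancestors(parent, a):
        if v in anc_b:
            return v
    raise ValueError("vertices lie in different trees")

def ldt_graph(parent_T, parent_S, sigma, tau_T, tau_S, leaves_T):
    """Edges ab with tau_T(lca_T(a,b)) < tau_S(lca_S(sigma(a), sigma(b)))."""
    edges = set()
    for a, b in combinations(leaves_T, 2):
        if tau_T[lca(parent_T, a, b)] < tau_S[lca(parent_S, sigma[a], sigma[b])]:
            # frozenset encodes an undirected edge ab
            edges.add(frozenset((a, b)))
    return edges
```

For a planted gene tree $0_T\rho_T$ with leaves $a,b$ below $\rho_T$ and a planted species tree $0_S\rho_S$ with leaves $A,B$ below $\rho_S$, the edge $ab$ appears exactly when $\ensuremath{\tau_{T}}(\rho_T)<\ensuremath{\tau_{S}}(\rho_S)$, in line with Def.~\ref{def:LDTgraph}.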
By definition, every relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ satisfies $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(\sigma(x))$ for all $x \in L(T)$. Therefore, removing $\mu$ from $\ensuremath{\mathscr{S}}$ yields a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$. Thus, we will use the following simplified notation. \begin{definition} We put $\Gu(\ensuremath{\mathscr{S}}) \coloneqq \Gu(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ for a given relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ and the underlying $\mu$-free scenario $(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ and say, by slight abuse of notation, that $\ensuremath{\mathscr{S}}$ \emph{explains} $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$. \label{def:Gu-scen} \end{definition} \begin{lemma} For every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\widetilde\ensuremath{\tau_{T}},\widetilde\ensuremath{\tau_{S}})$ for $T, S$ and $\sigma$ such that $(\Gu(\ensuremath{\mathscr{T}}),\sigma) = (\Gu(\ensuremath{\mathscr{S}}), \sigma)$. \label{lem:mfscen} \end{lemma} \begin{proof} Let $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a $\mu$-free scenario. In order to construct a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\widetilde\ensuremath{\tau_{T}},\widetilde\ensuremath{\tau_{S}})$ that satisfies $\Gu(\ensuremath{\mathscr{S}})=\Gu(\ensuremath{\mathscr{T}})$, we start with a time map $\widetilde\ensuremath{\tau_{T}}$ for $T$ satisfying $\widetilde\ensuremath{\tau_{T}}(0_T)=\max(\ensuremath{\tau_{T}}(0_T),\ensuremath{\tau_{S}}(0_S))$ and $\widetilde\ensuremath{\tau_{T}}(v)=\ensuremath{\tau_{T}}(v)$ for all $v\in V(T)\setminus\{0_T\}$.
Correspondingly, we introduce a time map $\widetilde\ensuremath{\tau_{S}}$ for $S$ such that $\widetilde\ensuremath{\tau_{S}}(0_S)=\max(\ensuremath{\tau_{T}}(0_T),\ensuremath{\tau_{S}}(0_S))$ and $\widetilde\ensuremath{\tau_{S}}(v)=\ensuremath{\tau_{S}}(v)$ for all $v\in V(S)\setminus\{0_S\}$. By construction, we have $t_{\max,T}\coloneqq \max\{\ensuremath{\tau_{T}}(v) \mid v\in V(T)\}=\ensuremath{\tau_{T}}(0_T)=\ensuremath{\tau_{S}}(0_S)$. Moreover, we have $t_{\min,S}\coloneqq\min\{\ensuremath{\tau_{S}}(v) \mid v\in V(S)\} \le \min\{\ensuremath{\tau_{T}}(v) \mid v\in V(T)\}\eqqcolon t_{\min,T}$. To see this, we can choose $x\in V(T)$ such that $\ensuremath{\tau_{T}}(x)=t_{\min,T}$. By the definition of time maps and minimality of $\ensuremath{\tau_{T}}(x)$, the vertex $x$ must be a leaf. Hence, since $\ensuremath{\mathscr{T}}$ is a $\mu$-free scenario, we have $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(\sigma(x))$ with $X\coloneqq\sigma(x)\in L(S)\subset V(S)$. Therefore, it must hold that $t_{\min,S}\le t_{\min,T}$. We now define $P\coloneqq\{p\in V(S)\cup E(S) \mid X\preceq_{S} p\}$, i.e., the set of all vertices and edges on the unique path in $S$ from $0_S$ to the leaf $X$. Since $\ensuremath{\tau_{S}}(X)= t_{\min,T} < t_{\max,T} = \ensuremath{\tau_{S}}(0_S)$, we find, for each $v\in V(T)$, \emph{either} a vertex $u\in P$ such that $\ensuremath{\tau_{T}}(v)=\ensuremath{\tau_{S}}(u)$ \emph{or} an edge $(u,w)\in P$ such that $\ensuremath{\tau_{S}}(w)<\ensuremath{\tau_{T}}(v)<\ensuremath{\tau_{S}}(u)$. Hence, we can specify the reconciliation map $\mu$ by defining, for every $v\in V(T)$, \begin{equation*} \mu(v) \coloneqq \begin{cases} 0_S &\text{if } v=0_T,\\ \sigma(v) &\text{if } v\in L(T),\\ u &\text{if there is some vertex } u\in P \textrm{ with } \ensuremath{\tau_{T}}(v)=\ensuremath{\tau_{S}}(u),\\ (u,w) &\text{if there is some edge } (u,w)\in P \textrm{ with } \ensuremath{\tau_{S}}(w)<\ensuremath{\tau_{T}}(v)<\ensuremath{\tau_{S}}(u).
\end{cases} \end{equation*} For each $v\in V^0(T)$, exactly one of the two alternatives for $P$ applies, hence $\mu$ is well-defined. It is now an easy task to verify that all conditions in Definitions~\ref{def:tc-map} and~\ref{def:relaxed-reconc} are satisfied for $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\widetilde\ensuremath{\tau_{T}},\widetilde\ensuremath{\tau_{S}})$ by construction. Hence, by Def.~\ref{def:relaxed-scenario}, $\ensuremath{\mathscr{S}}$ is a relaxed scenario. It remains to show that $\Gu(\ensuremath{\mathscr{T}})=\Gu(\ensuremath{\mathscr{S}})$. Let $a,b\in L(T)$ be arbitrary. Clearly, neither $\lca_T(a,b)$ nor $\lca_S(\sigma(a),\sigma(b))$ equals the planted root $0_T$ or $0_S$, respectively. Since we have only changed the timing of the roots $0_T$ or $0_S$, we obtain $ab\in E(\Gu(\ensuremath{\mathscr{S}}))$ if and only if $\widetilde\ensuremath{\tau_{T}}(\lca_T(a,b)) = \ensuremath{\tau_{T}}(\lca_T(a,b)) < \widetilde\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(b))) = \ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(b)))$ if and only if $ab\in E(\Gu(\ensuremath{\mathscr{T}}))$, which completes the proof. \end{proof} \begin{theorem} $(G,\sigma)$ is an LDT graph if and only if there is a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $(G,\sigma) = (\Gu(\ensuremath{\mathscr{S}}),\sigma)$. \label{thm:LDT-scen} \end{theorem} \begin{proof} By definition, $(G,\sigma)$ is an LDT graph for every relaxed scenario $\ensuremath{\mathscr{S}}$ with coloring $\sigma$ that satisfies $(G,\sigma) = (\Gu(\ensuremath{\mathscr{S}}),\sigma)$. Now suppose that $(G,\sigma)$ is an LDT graph. By definition, there is a $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $(G,\sigma)=(\Gu(\ensuremath{\mathscr{T}}),\sigma)$. 
By Lemma~\ref{lem:mfscen}, there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\widetilde\ensuremath{\tau_{T}},\widetilde\ensuremath{\tau_{S}})$ for $T, S$ and $\sigma$ such that $(G,\sigma) = (\Gu(\ensuremath{\mathscr{S}}), \sigma)$. \end{proof} \begin{remark} From here on, we omit the explicit reference to Lemma~\ref{lem:mfscen} and Thm.~\ref{thm:LDT-scen} and assume that the reader is aware of the fact that every LDT graph is explained by some relaxed scenario $\ensuremath{\mathscr{S}}$ and that for every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$, there is a relaxed scenario $\ensuremath{\mathscr{S}}$ for $T, S$ and $\sigma$ such that $(\Gu(\ensuremath{\mathscr{T}}),\sigma) = (\Gu(\ensuremath{\mathscr{S}}), \sigma)$. \end{remark} We now derive some simple properties of $\mu$-free and relaxed scenarios. It may be surprising at first glance that ``the speciation nodes'', i.e., vertices $u\in V^0(T)$ with $\mu(u)\in V(S)$, do not play a special role in determining LDT graphs. \begin{lemma} For every relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ there exists a relaxed scenario $\widetilde{\ensuremath{\mathscr{S}}} = (T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\widetilde{\ensuremath{\mathscr{S}}})=\Gu(\ensuremath{\mathscr{S}})$ and, for all distinct $x,y\in L(T)$ with $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$, it holds that $\widetilde \ensuremath{\tau_{T}}(\lca_T(x,y))>\ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))$.
\label{lem:NoR=} \end{lemma} \begin{proof} For the relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ we write $V^0(S)\coloneqq V(S)\setminus (L(S)\cup \{0_S\})$ and define \begin{align*} D_S &\coloneqq \{|\ensuremath{\tau_{S}}(y)-\ensuremath{\tau_{S}}(x)| \colon x,y\in V(S),\ensuremath{\tau_{S}}(x)\neq\ensuremath{\tau_{S}}(y)\} \textrm{,}\\ D_T &\coloneqq \{|\ensuremath{\tau_{T}}(y)-\ensuremath{\tau_{T}}(x)| \colon x,y\in V(T),\ensuremath{\tau_{T}}(x)\neq\ensuremath{\tau_{T}}(y)\} \textrm{, and} \\ D_{TS} &\coloneqq \{|\ensuremath{\tau_{T}}(x)-\ensuremath{\tau_{S}}(y)| \colon x\in V(T),\, y\in V(S), \ensuremath{\tau_{T}}(x)\neq \ensuremath{\tau_{S}}(y)\}. \end{align*} We have $D_S\ne\emptyset$ and $D_T\ne\emptyset$ since we do not consider empty trees, and thus, at least the ``planted'' edges $0_S\rho_S$ and $0_T\rho_T$ always exist. By construction, all values in $D_T$, $D_S$, and $D_{TS}$ are strictly positive. Now define \begin{equation*} \epsilon \coloneqq \frac{1}{2}\min (D_{TS}\cup D_S\cup D_T). \end{equation*} Since $D_S$ and $D_T$ are not empty, $\epsilon$ is well-defined and, by construction, $\epsilon>0$. Next we set, for all $v\in V(T)$, \begin{equation*} \begin{split} \widetilde\ensuremath{\tau_{T}}(v) &\coloneqq \begin{cases} \ensuremath{\tau_{T}}(v)+\epsilon, \text{ if } v\in V^0(T)\\ \ensuremath{\tau_{T}}(v), \text{ otherwise,} \end{cases}\\ \widetilde\mu(v) &\coloneqq \begin{cases} (\parent(x),x), \text{ if } \mu(v) = x\in V^0(S)\\ \mu(v), \text{ otherwise.} \end{cases} \\ \end{split} \end{equation*} \begin{Xclaim} $\widetilde{\ensuremath{\mathscr{S}}} \coloneqq (T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is a relaxed scenario. \end{Xclaim} \begin{claim-proof} By construction, $\mu(v)$ and $\widetilde\mu(v)$ coincide whenever $\mu(v)\notin V^0(S)$, and thus, in particular, whenever $\mu(v)\in L(S)\cup \{0_S\}$. Therefore, (G0) and (G1) are trivially satisfied for $\widetilde\mu$.
In order to show (G2), we first note that $\widetilde \ensuremath{\tau_{T}}(v)= \ensuremath{\tau_{T}}(v) = \ensuremath{\tau_{S}}(\sigma(v))$ holds for all $v \in L(T)$ by Def.\ \ref{def:tc-map}. We next argue that $\widetilde\ensuremath{\tau_{T}}$ is a time map. To this end, let $x,y\in V(T)$ with $x\prec_T y$. Hence, $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$ and, in particular, $\ensuremath{\tau_{T}}(y)-\ensuremath{\tau_{T}}(x)\geq 2\epsilon$. Assume for contradiction that $\widetilde \ensuremath{\tau_{T}}(x) \geq \widetilde \ensuremath{\tau_{T}}(y)$. This implies $\widetilde \ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{T}}(x)+\epsilon$ and $\widetilde \ensuremath{\tau_{T}}(y) =\ensuremath{\tau_{T}}(y)$, since $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$ and $\epsilon>0$ always imply $\ensuremath{\tau_{T}}(x)+\epsilon <\ensuremath{\tau_{T}}(y) +\epsilon$ and $\ensuremath{\tau_{T}}(x) <\ensuremath{\tau_{T}}(y) +\epsilon$. Therefore, $\widetilde \ensuremath{\tau_{T}}(y) - \widetilde \ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{T}}(y)-(\ensuremath{\tau_{T}}(x) + \epsilon) \geq \epsilon>0$ and thus, $\widetilde \ensuremath{\tau_{T}}(y) > \widetilde \ensuremath{\tau_{T}}(x)$; a contradiction. We continue by showing that the two time maps $\widetilde\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$ are time-consistent w.r.t.\ $\widetilde{\ensuremath{\mathscr{S}}}$. To see that Condition (C1) is satisfied, observe that, by construction, $\widetilde\mu(v)\in V(S)$ holds only if $\mu(v)\notin E(S)\cup V^0(S)$ and thus, $\mu(v)\in L(S) \cup \{0_S\}$. In this case, $\widetilde\mu(v) = \mu(v)$ and since $\mu(v)$ satisfies (G1) we have $v\in L(T)\cup \{0_T\}$. Thus, $v\notin V^0(T)$ and, therefore, $\widetilde \ensuremath{\tau_{T}}(v) =\ensuremath{\tau_{T}}(v) = \ensuremath{\tau_{S}}(\mu(v))$. Therefore, Condition (C1) is satisfied. Now consider Condition (C2).
As argued above, $\widetilde \mu(v)\in E(S)$ holds for all $v\in V^0(T) = V(T)\setminus (L(T)\cup \{0_T\})$. By construction, $\widetilde \ensuremath{\tau_{T}}(v) = \ensuremath{\tau_{T}}(v)+\epsilon$. There are two cases: $\mu(v)=x\in V^0(S)$, or $\mu(v)=(y,x)\in E(S)$ with $y = \parent(x)$. The following arguments hold for both cases: We have $\widetilde \mu(v) = (y,x)\in E(S)$. Moreover, $\ensuremath{\tau_{S}}(x) \leq \ensuremath{\tau_{T}}(v)< \widetilde \ensuremath{\tau_{T}}(v)$ since $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$ satisfy (C1) and (C2). Furthermore, $\ensuremath{\tau_{T}}(v)<\ensuremath{\tau_{S}}(y)$ and, by construction, $\ensuremath{\tau_{S}}(y)-\ensuremath{\tau_{T}}(v)\geq 2\epsilon$. This immediately implies that $\ensuremath{\tau_{S}}(y) \geq \ensuremath{\tau_{T}}(v) + 2\epsilon = \widetilde \ensuremath{\tau_{T}}(v) + \epsilon > \widetilde \ensuremath{\tau_{T}}(v)$. In summary, $\ensuremath{\tau_{S}}(x) < \widetilde{\ensuremath{\tau_{T}}}(v) < \ensuremath{\tau_{S}}(y)$ whenever $\widetilde \mu(v) = (y,x)\in E(S)$. Therefore, Condition (C2) is satisfied for $\widetilde{\ensuremath{\mathscr{S}}}$. \end{claim-proof} \begin{Xclaim}\label{claim:subset-edges} $E(\Gu(\ensuremath{\mathscr{S}})) \subseteq E(\Gu(\widetilde{\ensuremath{\mathscr{S}}}))$. \end{Xclaim} \begin{claim-proof} Let $xy$ be an edge in $\Gu(\ensuremath{\mathscr{S}})$ and thus $x\ne y$, and set $v_T\coloneqq \lca_T(x,y)$ and $v_S\coloneqq \lca_S(\sigma(x),\sigma(y))$. By definition, we have $\ensuremath{\tau_{T}}(v_T)<\ensuremath{\tau_{S}}(v_S)$. Therefore, we have $\ensuremath{\tau_{S}}(v_S)-\ensuremath{\tau_{T}}(v_T)\in D_{TS}$ and, hence, $\ensuremath{\tau_{S}}(v_S)-\ensuremath{\tau_{T}}(v_T)\ge 2\epsilon$. Since $x\ne y$, $v_T=\lca_T(x,y)$ is an inner vertex of $T$. By construction, therefore, $\widetilde{\ensuremath{\tau_{T}}}(v_T)=\ensuremath{\tau_{T}}(v_T)+\epsilon$. 
The latter arguments together with the fact that $\ensuremath{\tau_{S}}$ remains unchanged imply that $\ensuremath{\tau_{S}}(v_S)-\widetilde{\ensuremath{\tau_{T}}}(v_T)\ge \epsilon>0$, and thus, $\widetilde{\ensuremath{\tau_{T}}}(v_T)<\ensuremath{\tau_{S}}(v_S)$. Therefore, we conclude that $xy$ is an edge in $\Gu(\widetilde{\ensuremath{\mathscr{S}}})$. \end{claim-proof} It remains to show \begin{Xclaim} For all distinct $x,y\in L(T)$ with $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$, we have $\widetilde \ensuremath{\tau_{T}}(\lca_T(x,y))>\ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))$. \end{Xclaim} \begin{claim-proof} Suppose $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$ for two distinct $x,y\in L(T)$, and set $v_T\coloneqq \lca_T(x,y)$ and $v_S\coloneqq \lca_S(\sigma(x),\sigma(y))$. By definition, this implies $\ensuremath{\tau_{T}}(v_T)\ge \ensuremath{\tau_{S}}(v_S)$. Since $x\ne y$, we clearly have that $v_T=\lca_T(x,y)$ is an inner vertex of $T$, and hence, $\widetilde{\ensuremath{\tau_{T}}}(v_T)=\ensuremath{\tau_{T}}(v_T)+\epsilon$. The latter two arguments together with $\epsilon>0$ and the fact that $\ensuremath{\tau_{S}}$ remains unchanged imply that $\widetilde{\ensuremath{\tau_{T}}}(v_T)>\ensuremath{\tau_{S}}(v_S)$. \end{claim-proof} In particular, therefore, $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$ implies that $xy\notin E(\Gu(\widetilde\ensuremath{\mathscr{S}}))$ and therefore, $E(\Gu(\widetilde{\ensuremath{\mathscr{S}}}))\subseteq E(\Gu(\ensuremath{\mathscr{S}}))$. Together with Claim \ref{claim:subset-edges} and the fact that both $\Gu(\ensuremath{\mathscr{S}})$ and $\Gu(\widetilde\ensuremath{\mathscr{S}})$ have vertex set $L(T)$, we conclude that $\Gu(\ensuremath{\mathscr{S}}) = \Gu(\widetilde{\ensuremath{\mathscr{S}}})$, which completes the proof.
\end{proof} Since the relaxed scenario $\widetilde{\ensuremath{\mathscr{S}}} = (T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ as constructed in the proof of Lemma~\ref{lem:NoR=} satisfies $\widetilde\mu(v)\notin V^0(S)$ for all $v\in V(T)$, we obtain \begin{corollary} For every relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ there exists a relaxed scenario $\widetilde{\ensuremath{\mathscr{S}}} = (T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\widetilde{\ensuremath{\mathscr{S}}})=\Gu(\ensuremath{\mathscr{S}})$ and $\widetilde\mu(v)\notin V^0(S)$ for all $v\in V(T)$. \end{corollary} Lemma~\ref{lem:NoR=}, however, does not imply that one can always find a relaxed scenario with a reconciliation map $\widetilde\mu$ for given trees $T$ and $S$ satisfying $\widetilde\mu(\lca_T(x,y))\succ_S\lca_S(\sigma(x),\sigma(y))$ for all distinct $x,y \in L(T)$ with $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$, as shown in Example~\ref{ex:widetildeMu}. \begin{xmpl}\label{ex:widetildeMu} Consider the LDT graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ with corresponding relaxed scenario $\ensuremath{\mathscr{S}}$ as shown in Fig.~\ref{fig:counterexample-comparable-mu}. Note first that $v=\lca_T(a,b)=\lca_{T}(c,d)$ and $ab,cd\notin E(\Gu)$. To satisfy both $\widetilde\mu(v)\succ_S \lca_S(\sigma(a),\sigma(b))$ and $\widetilde\mu(v)\succ_S \lca_S(\sigma(c),\sigma(d))$, we clearly need that $\widetilde{\mu}(v)\succeq_S \rho_S$, and thus $\widetilde\ensuremath{\tau_{T}}(v)\ge \ensuremath{\tau_{S}}(\rho_S)$. However, $ad'\in E(\Gu)$ and $\lca_{T}(a,d')=u$ imply that $\widetilde\ensuremath{\tau_{T}}(u)<\ensuremath{\tau_{S}}(\lca_S(\sigma(a),\sigma(d')))=\ensuremath{\tau_{S}}(\rho_S)$.
Hence, we obtain $\widetilde\ensuremath{\tau_{T}}(u)<\ensuremath{\tau_{S}}(\rho_S)\le\widetilde\ensuremath{\tau_{T}}(v)$; a contradiction to $(u,v)\in E(T)$ and $\widetilde{\ensuremath{\tau_{T}}}$ being a time map for $T$. Therefore, there is no relaxed scenario $\widetilde{\ensuremath{\mathscr{S}}} = (T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\widetilde{\ensuremath{\mathscr{S}}})=\Gu(\ensuremath{\mathscr{S}})$ and such that $\widetilde \mu(\lca_T(x,y))\succ_S\lca_S(\sigma(x),\sigma(y))$ for all distinct $x,y\in L(T)$ with $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$. \end{xmpl} \begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth]{./images-Rb/counterexample-comparable-mu.pdf} \end{center} \caption{Left: a relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$. Right: the corresponding graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$. For $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ there is no relaxed scenario $\widetilde{\ensuremath{\mathscr{S}}} = (T,S,\sigma,\widetilde\mu,\widetilde\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\widetilde{\ensuremath{\mathscr{S}}})=\Gu(\ensuremath{\mathscr{S}})$ and such that, for all distinct $x,y\in L(T)$ with $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$, it holds that $\widetilde \mu(\lca_T(x,y))\succ_S\lca_S(\sigma(x),\sigma(y))$, see Example \ref{ex:widetildeMu}.} \label{fig:counterexample-comparable-mu} \end{figure} For the special case that the graph under consideration has no edges we have \begin{lemma} For an edgeless graph $G$ and for any choice of $T$ and $S$ with $L(T)=V(G)$ and $\sigma(L(T))=L(S)$ there is a relaxed scenario $\ensuremath{\mathscr{S}} =(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that satisfies $G = \Gu(\ensuremath{\mathscr{S}})$. \label{lem:Rempty} \end{lemma} \begin{proof} Given $T$ and $S$ we construct a relaxed scenario as follows.
Let $\ensuremath{\tau_{S}}$ be an arbitrary time map on $S$. Then we can choose $\ensuremath{\tau_{T}}$ such that $\ensuremath{\tau_{S}}(\rho_S)<\ensuremath{\tau_{T}}(u)<\ensuremath{\tau_{S}}(0_S)$ for all $u\in V^0(T)$, and define $\mu$ by $\mu(0_T)=0_S$, $\mu(u)=\sigma(u)$ for all $u\in L(T)$, and $\mu(u)=(0_S,\rho_S)$ for all $u\in V^0(T)$. In particular, the last common ancestor of any two distinct leaves $x,y\in L(T)$ is then located above the last common ancestor $\rho_S$ of all species, i.e., $\ensuremath{\tau_{T}}(\lca_T(x,y))>\ensuremath{\tau_{S}}(\rho_S)\ge \ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))$, in which case $\Gu(\ensuremath{\mathscr{S}})$ is edgeless. \end{proof} Lemma~\ref{lem:Rempty} is reminiscent of the fact that for DL-only scenarios any given gene tree $T$ can be reconciled with an arbitrary species tree as long as $\sigma(L(T))=L(S)$ \cite{Guigo:96,Geiss:20b}. \subsection{Properties of LDT Graphs} \begin{proposition} Every LDT graph $(G,\sigma)$ is properly colored. \label{prop:properCol} \end{proposition} \begin{proof} Let $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a $\mu$-free scenario such that $(G,\sigma) = (\Gu(\ensuremath{\mathscr{T}}),\sigma)$ and recall that every $\mu$-free scenario satisfies $\ensuremath{\tau_{T}}(x) = \ensuremath{\tau_{S}}(\sigma(x))$ for all $x\in L(T)$ with $\sigma(x)\in L(S)$. Let $a,b\in L(T)$ be distinct and suppose that $\sigma(a)=\sigma(b)=A$. Since $a$ and $b$ are distinct we have $a,b\prec_T \lca_T(a,b)$ and hence, by Def.~\ref{def:time-map}, $\ensuremath{\tau_{T}}(a) < \ensuremath{\tau_{T}}(\lca_T(a,b))$. This implies that $\ensuremath{\tau_{T}}(a) = \ensuremath{\tau_{S}}(A) = \ensuremath{\tau_{S}}(\lca_S(A,A)) <\ensuremath{\tau_{T}}(\lca_T(a,b))$. Therefore, $ab\notin E(G)$. Consequently, $ab\in E(G)$ implies $\sigma(a)\neq \sigma(b)$, which completes the proof. \end{proof} Extending earlier work of Dekker (\citeyear{Dekker:86ma}), Bryant and Steel (\citeyear{Bryant:95}) derived conditions under which two triples $r_1,r_2$ imply a third triple $r_3$ that must be displayed by any tree that displays $r_1,r_2$.
In particular, we make frequent use of the following \begin{lemma} If a tree $T$ displays $xy|z$ and $zw|y$ then $T$ displays $xy|w$ and $zw|x$. In particular $T_{|\{x,y,z,w\}} = ((x,y),(z,w))$ (in \emph{Newick} format). \label{lem:2order} \end{lemma} \begin{definition} For every graph $G=(L,E)$, we define the set of triples on $L$ \begin{equation*} \ensuremath{\mathfrak{T}}(G) \coloneqq \{xy|z \; \colon x,y,z\in L \text{ are pairwise distinct, } xy\in E,\; xz,yz\notin E\} \,. \end{equation*} If $G$ is endowed with a coloring $\sigma\colon L\to M$ we also define a set of color triples \begin{align*} \ensuremath{\mathfrak{S}}(G,\sigma) \coloneqq \{\sigma(x)\sigma(y)|\sigma(z)\; \colon & x,y,z\in L,\, \sigma(x),\sigma(y),\sigma(z) \text{ are pairwise distinct},\\ &xz, yz\in E,\; xy\notin E\}. \end{align*} \label{def:infoTriples} \end{definition} \begin{lemma} If a graph $(G,\sigma)$ is an LDT graph then $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible and $S$ displays $\ensuremath{\mathfrak{S}}(G,\sigma)$ for every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$. \label{lem:Ru-SpeciesTriple} \end{lemma} \begin{proof} Suppose that $(G=(L,E),\sigma)$ is an LDT graph and let $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a $\mu$-free scenario that explains $(G,\sigma)$. In order to show that $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible it suffices to show that $S$ displays every triple in $\ensuremath{\mathfrak{S}}(G,\sigma)$. Let $AB|C\in \ensuremath{\mathfrak{S}}(G,\sigma)$. By definition, $A,B,C$ are pairwise distinct and there must be vertices $a,b,c\in L$ with $\sigma(a)=A$, $\sigma(b)=B$, and $\sigma(c)=C$ such that $ab \notin E$ and $bc,ac \in E$. 
First, $ab \notin E$ and $bc,ac \in E$ imply $\ensuremath{\tau_{T}}(\lca_T(a,b))\geq \ensuremath{\tau_{S}}(\lca_S(A,B))$, $\ensuremath{\tau_{T}}(\lca_T(b,c))<\ensuremath{\tau_{S}}(\lca_S(B,C))$, and $\ensuremath{\tau_{T}}(\lca_T(a,c))<\ensuremath{\tau_{S}}(\lca_S(A,C))$. Moreover, for any three vertices $a,b,c$ in $T$ it holds that $1 \leq |\{\lca_T(a,b),\lca_T(a,c),\lca_T(b,c)\}| \leq 2$. Therefore we have to consider the following four cases: (1) $u\coloneqq \lca_T(a,b)=\lca_T(b,c)=\lca_T(a,c)$, (2) $u\coloneqq \lca_T(a,b)=\lca_T(a,c)\neq\lca_T(b,c)$, (3) $u\coloneqq\lca_T(a,b)=\lca_T(b,c)\neq\lca_T(a,c)$, and (4) $\lca_T(a,b)\neq u\coloneqq \lca_T(b,c)=\lca_T(a,c)$. Note, for any three vertices $x,y,z$ in $T$, $\lca_T(x,y)\neq \lca_T(x,z)=\lca_T(y,z)$ implies that $\lca_T(x,y)\prec_T \lca_T(x,z)=\lca_T(y,z)$. In Cases (1) and (2), we find $\ensuremath{\tau_{S}}(\lca_S(A,C)) > \ensuremath{\tau_{T}}(u) \geq \ensuremath{\tau_{S}}(\lca_S(A,B))$. Together with the fact that $\lca_S(A,C)$ and $\lca_S(A,B)$ are comparable in $S$, this implies that $AB|C$ is displayed by $S$. In Case (3), we obtain $\ensuremath{\tau_{S}}(\lca_S(B,C)) > \ensuremath{\tau_{T}}(u) \geq \ensuremath{\tau_{S}}(\lca_S(A,B))$ and, by analogous arguments, $AB|C$ is displayed by $S$. Finally, in Case (4), the tree $T$ displays the triple $ab|c$. Thus, $\ensuremath{\tau_{S}}(\lca_S(A,B))\leq \ensuremath{\tau_{T}}(\lca_T(a,b)) < \ensuremath{\tau_{T}}(u) < \ensuremath{\tau_{S}}(\lca_S(A,C))$. Again, $AB|C$ is displayed by $S$. \end{proof} The next lemma shows that induced $K_2+K_1$ subgraphs in LDT graphs imply triples that must be displayed by $T$. \begin{lemma} If $(G,\sigma)$ is an LDT graph, then $\ensuremath{\mathfrak{T}}(G)$ is compatible and $T$ displays $\ensuremath{\mathfrak{T}}(G)$ for every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$.
\label{lem:Ru-GeneTriple} \end{lemma} \begin{proof} Suppose that $(G=(L,E),\sigma)$ is an LDT graph and let $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a $\mu$-free scenario that explains $(G,\sigma)$. In order to show that $\ensuremath{\mathfrak{T}}(G)$ is compatible it suffices to show that $T$ displays every triple in $\ensuremath{\mathfrak{T}}(G)$. Let $ab|c \in \ensuremath{\mathfrak{T}}(G)$. By definition, $a,b,c\in L(T)$ are distinct, and $ab\in E$ and $ac,bc\not\in E$. Since $ab \in E$, we have $A\coloneqq\sigma(a)\neq \sigma(b)\eqqcolon B$ by Prop.~\ref{prop:properCol}. There are two cases, either $\sigma(c)\in \{A,B\}$ or not. Suppose first that w.l.o.g.\ $\sigma(c)=A$. In this case, $ab \in E$ and $bc \notin E$ together imply $\ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(\lca_S(A,B))\leq \ensuremath{\tau_{T}}(\lca_T(b,c))$. This and the fact that $\lca_T(a,b)$ and $\lca_T(b,c)$ are comparable in $T$ implies that $T$ displays $ab|c$. Suppose now that $\sigma(c)=C\notin \{A,B\}$. We now consider the four possible topologies of $S'=S_{|ABC}$: (1) $S'$ is a star, (2) $S'=AB|C$, (3) $S'=AC|B$, and (4) $S'=BC|A$. In Cases~(1), (2) and (4), we have $\ensuremath{\tau_{S}}(\lca_S(A,B)) \leq \ensuremath{\tau_{S}}(\lca_S(A,C))$, where equality holds only in Cases~(1) and (4). This together with $ab \in E$ and $ac \notin E$ implies $\ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(\lca_S(A,B)) \leq \ensuremath{\tau_{S}}(\lca_S(A,C)) \leq \ensuremath{\tau_{T}}(\lca_T(a,c))$. This and the fact that $\lca_T(a,b)$ and $\lca_T(a,c)$ are comparable in $T$ implies that $T$ displays $ab|c$. In Case (3), $ab \in E$ and $bc \notin E$ imply $\ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(\lca_S(A,B)) = \ensuremath{\tau_{S}}(\lca_S(B,C)) \leq \ensuremath{\tau_{T}}(\lca_T(b,c))$. By analogous arguments as before, $T$ displays $ab|c$.
\end{proof} We note, finally, that the Aho graph $[\ensuremath{\mathfrak{T}}(G),L]$ of the triple set $\ensuremath{\mathfrak{T}}(G)$ in a sense recapitulates $G$. More precisely, we have: \begin{proposition} Let $(G=(L,E),\sigma)$ be a vertex-colored graph. If for all edges $xy\in E$ there is a vertex $z$ such that $xz,yz\notin E$ (and thus, in particular, in case that $G$ is disconnected), then $[\ensuremath{\mathfrak{T}}(G),L]=G$. \label{prop:Aho-G} \end{proposition} \begin{proof} Clearly, the vertex sets of $[\ensuremath{\mathfrak{T}}(G),L]$ and $G$ are the same, namely $L$. Now let $x,y\in L$ be distinct. By definition, $xy$ is an edge in $[\ensuremath{\mathfrak{T}}(G),L]$ if and only if $xy|z\in \ensuremath{\mathfrak{T}}(G)$ for some vertex $z$, which holds if and only if $xy\in E$ and there is a vertex $z\neq x,y$ with $xz,yz\notin E$. By assumption, such a vertex $z$ exists for every edge $xy\in E$. Hence, $[\ensuremath{\mathfrak{T}}(G),L]$ and $G$ have the same edge set, and thus $[\ensuremath{\mathfrak{T}}(G),L]=G$. \end{proof} \begin{definition} For a vertex-colored graph $(G,\sigma)$, we will use the shorter notation $x_1-x_2-\dots-x_n$ and $X_1-X_2-\dots-X_n$ for a path $P_n$ that is induced by the vertices $\{x_i\mid 1\leq i\leq n\}$ with colors $\sigma(x_i)=X_i$, $1\leq i\leq n$, and edges $x_ix_{i+1}$, $1\leq i\leq n-1$. \end{definition} \begin{lemma} Every LDT graph $(G,\sigma)$ is a properly colored cograph. \label{lem:propcolcograph} \end{lemma} \begin{proof} Let $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a $\mu$-free scenario that explains $(G,\sigma)$. By Prop.~\ref{prop:properCol}, $(G,\sigma)$ is properly colored. To show that $G=(L,E)$ is a cograph it suffices to show that $G$ does not contain an induced path on four vertices (cf.\ Prop.~\ref{prop:cograph}). Hence, assume for contradiction that $G$ contains an induced $P_4$. First we observe that for each edge $ab$ in this $P_4$ it holds that $\sigma(a)\neq \sigma(b)$ since, otherwise, by Prop.~\ref{prop:properCol}, $ab\notin E$.
Based on possible colorings of the $P_4$ w.r.t.\ $\sigma$ and up to symmetry, we have to consider four cases: (1) $A-B-C-D$, (2) $A-B-C-A$, (3) $A-B-A-C$ and (4) $A-B-A-B$. In Case (1) the $P_4$ is of the form $a-b-c-d$ with $\sigma(a)=A$, $\sigma(b)=B$, $\sigma(c)=C$, $\sigma(d)=D$. By Lemma \ref{lem:Ru-SpeciesTriple}, the species tree $S$ must display both $AC|B$ and $BD|C$. Hence, by Lemma~\ref{lem:2order}, $S_{|ABCD} = ((A,C),(B,D))$ in \emph{Newick} format. Let $x \coloneqq \lca_S(A,B,C,D) = \rho_{S_{|ABCD}}$. Note that $x$ ``separates'' $A$ and $C$ from $B$ and $D$. Now, $ab\in E$ and $ad\notin E$ imply that $\ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(x)\leq \ensuremath{\tau_{T}}(\lca_T(a,d))$. This and the fact that $\lca_T(a,b)$ and $\lca_T(a,d)$ are comparable in $T$ implies that $T$ displays $ab|d$. Similarly, $cd\in E$ and $ad\notin E$ imply that $T$ displays $cd|a$. By Lemma~\ref{lem:2order}, $T_{|abcd} = ((a,b),(c,d))$. Let $y \coloneqq \lca_T(a,b,c,d) = \rho_{T_{|abcd}}$. Now, $bc\in E$, $\lca_T(b,c)=y$, and $\lca_S(B,C)=x$ imply $\ensuremath{\tau_{T}}(y)<\ensuremath{\tau_{S}}(x)$. This and $\lca_T(a,d)=y$ and $\lca_S(A,D)=x$ imply that $ad\in E$, and thus $a,b,c,d$ do not induce a $P_4$ in $G$; a contradiction. Case (2) can be directly excluded, since Lemma~\ref{lem:Ru-SpeciesTriple} implies that, in this case, $S$ must display $AC|B$ and $AB|C$; a contradiction. Now consider Case (3), that is, the $P_4$ is of the form $a-b-a'-c$ with $\sigma(a)=\sigma(a')=A$, $\sigma(b)=B$ and $\sigma(c)=C$. By Lemma \ref{lem:Ru-SpeciesTriple}, the species tree $S$ must display $BC|A$ and thus $x\coloneqq\lca_S(A,B)=\lca_S(A,C)$. Since $ab\in E$ and $ac\notin E$, we observe $\ensuremath{\tau_{T}}(\lca_T(a,b))<\ensuremath{\tau_{S}}(x)\leq \ensuremath{\tau_{T}}(\lca_T(a,c))$ and, as in Case (1), we infer that $T$ displays $ab|c$. By similar arguments, $a'c\in E$ and $ac\notin E$ imply that $T$ displays $a'c|a$.
By Lemma \ref{lem:2order}, $T_{|aba'c} = ((a,b),(a',c))$ and thus, $y\coloneqq \lca_T(a',b) = \lca_T(a,c)$. Moreover, $a'b\in E$ implies that $\ensuremath{\tau_{T}}(y)<\ensuremath{\tau_{S}}(x)$. Since $y= \lca_T(a,c)$ and $\ensuremath{\tau_{T}}(y)<\ensuremath{\tau_{S}}(x)=\ensuremath{\tau_{S}}(\lca_S(A,C))$, we can conclude that $ac\in E$. Hence, $a,b,a',c$ do not induce a $P_4$ in $G$; a contradiction. In Case (4) the $P_4$ is of the form $a-b-a'-b'$ with $\sigma(a)=\sigma(a')=A$ and $\sigma(b)=\sigma(b')=B$. Now, $ab,a'b'\in E$ and $ab'\notin E$ imply that $\ensuremath{\tau_{T}}(\lca_T(a,b)), \ensuremath{\tau_{T}}(\lca_T(a',b')) < \ensuremath{\tau_{S}}(\lca_S(A,B))\leq \ensuremath{\tau_{T}}(\lca_T(a,b'))$. Hence, by similar arguments as above, $T$ must display $ab|b'$ and $a'b'|a$. By Lemma~\ref{lem:2order}, $T_{|aba'b'} = ((a,b),(a',b'))$ and thus, $y\coloneqq \lca_T(a',b) = \lca_T(a,b')$. However, $a'b\in E$ implies that $\ensuremath{\tau_{T}}(y)<\ensuremath{\tau_{S}}(\lca_S(A,B))$; a contradiction to $\ensuremath{\tau_{S}}(\lca_S(A,B))\leq \ensuremath{\tau_{T}}(\lca_T(a,b'))$. \end{proof} The converse of Lemma~\ref{lem:propcolcograph} is not true in general. To see this, consider the properly-colored cograph $(G,\sigma)$ with vertex set $V(G)=\{a,a',b,b',c,c'\}$, edges $ab$, $bc$, $a'b'$, $a'c'$, and coloring $\sigma(a)=\sigma(a')=A$, $\sigma(b)=\sigma(b')=B$, $\sigma(c)=\sigma(c')=C$ with $A,B,C$ being pairwise distinct. In this case, $\ensuremath{\mathfrak{S}}(G,\sigma)$ contains the triples $AC|B$ and $BC|A$. By Lemma \ref{lem:Ru-SpeciesTriple}, the tree $S$ in every $\mu$-free scenario $\ensuremath{\mathscr{T}}=(T,S,\sigma,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ or relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ explaining $(G,\sigma)$ displays $AC|B$ and $BC|A$. Since no tree can display both $AC|B$ and $BC|A$, no such scenario exists and $(G,\sigma)$ is not an LDT graph.
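The conflict in this counterexample can be checked mechanically. The following Python sketch is only an illustration: the helper \texttt{species\_triples} is hypothetical and assumes, consistent with how $\ensuremath{\mathfrak{S}}(G,\sigma)$ is used in the proofs above, that a triple $\sigma(x)\sigma(y)|\sigma(z)$ is collected for every induced path $x-z-y$ on three pairwise distinctly colored vertices.

```python
from itertools import combinations

def species_triples(edges, color):
    """Hypothetical helper: collect the color triples AB|C obtained from
    induced paths x-z-y (edges xz, zy, non-edge xy) whose three colors
    A = color[x], B = color[y], C = color[z] are pairwise distinct."""
    E = {frozenset(e) for e in edges}
    V = set(color)
    triples = set()
    for z in V:
        for x, y in combinations(sorted(V - {z}), 2):
            if (frozenset({x, z}) in E and frozenset({y, z}) in E
                    and frozenset({x, y}) not in E):
                A, B, C = color[x], color[y], color[z]
                if len({A, B, C}) == 3:
                    triples.add((frozenset({A, B}), C))  # encodes AB|C
    return triples

# The counterexample: edges ab, bc, a'b', a'c' with colors A, A, B, B, C, C.
edges = [("a", "b"), ("b", "c"), ("a'", "b'"), ("a'", "c'")]
color = {"a": "A", "a'": "A", "b": "B", "b'": "B", "c": "C", "c'": "C"}

S_triples = species_triples(edges, color)
assert (frozenset({"A", "C"}), "B") in S_triples  # AC|B from a-b-c
assert (frozenset({"B", "C"}), "A") in S_triples  # BC|A from b'-a'-c'
assert len(S_triples) == 2
```

Since $AC|B$ and $BC|A$ place $C$ on opposite sides of the split from $B$ and $A$, respectively, no tree on $\{A,B,C\}$ displays both, which confirms the incompatibility.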
\subsection{Recognition and Characterization of LDT Graphs} \begin{definition} Let $(G=(L,E),\sigma)$ be a graph with coloring $\sigma\colon L\to M$. Let $\mathcal{C}$ be a partition of $M$, and $\mathcal{C}'$ be the set of connected components of $G$. We define the binary relation $\ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C})$ by setting \begin{align*} (x,y)\in \ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C}) \iff x,y\in L,\; \sigma(x), \sigma(y) & \in C \text{ for some } C\in\mathcal{C}, \text{ and } \\ x,y & \in C' \text{ for some } C'\in\mathcal{C}'. \end{align*} \label{def:rel} \end{definition} In words, two vertices $x,y\in L$ are in relation $\ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C})$ whenever they are in the same connected component of $G$ and their colors $\sigma(x), \sigma(y)$ are contained in the same set of the partition $\mathcal{C}$ of $M$. \begin{lemma} Let $(G=(L,E),\sigma)$ be a graph with coloring $\sigma\colon L\to M$ and $\mathcal{C}$ be a partition of $M$. Then, $\ensuremath{\mathfrak{R}}\coloneqq\ensuremath{\mathfrak{R}}(G, \sigma, \mathcal{C})$ is an equivalence relation and every equivalence class of $\ensuremath{\mathfrak{R}}$, or short $\ensuremath{\mathfrak{R}}$-class, is contained in some connected component of $G$. In particular, each connected component of $G$ is the disjoint union of $\ensuremath{\mathfrak{R}}$-classes. \label{lem:KinCC} \end{lemma} \begin{proof} It is easy to see that $\ensuremath{\mathfrak{R}}$ is reflexive and symmetric. Moreover, $xy,yz\in \ensuremath{\mathfrak{R}}$ implies that $\sigma(x), \sigma(y), \sigma(z)$ must be contained in the same set of the partition $\mathcal{C}$, and $x,y,z$ must be contained in the same connected component of $G$. Therefore, $xz\in \ensuremath{\mathfrak{R}}$ and thus, $\ensuremath{\mathfrak{R}}$ is transitive. In summary, $\ensuremath{\mathfrak{R}}$ is an equivalence relation.
We continue by showing that every $\ensuremath{\mathfrak{R}}$-class $K$ is entirely contained in some connected component of $G$. Clearly, there is a connected component $C$ of $G$ such that $C\cap K\neq \emptyset$. Assume, for contradiction, that $K\not\subseteq C$. Hence, $G$ must be disconnected and, in particular, there is a second connected component $C'$ of $G$ such that $C'\cap K\neq \emptyset$. Hence, there are vertices $x\in C\cap K$ and $y\in C'\cap K$. But then $x$ and $y$ are in relation $\ensuremath{\mathfrak{R}}$ although they lie in different connected components of $G$, violating the definition of $\ensuremath{\mathfrak{R}}$; a contradiction. Hence, every $\ensuremath{\mathfrak{R}}$-class is entirely contained in some connected component of $G$. This and the fact that the $\ensuremath{\mathfrak{R}}$-classes are disjoint implies that each connected component of $G$ is the disjoint union of $\ensuremath{\mathfrak{R}}$-classes. \end{proof} The following partition of the leaf sets of subtrees of a tree $S$ rooted at some vertex $u\in V(S)$ will be useful: \begin{align*} &\text{If } u \textrm{ is not a leaf, then } &\mathcal{C}_{S}(u)&\coloneqq \{L(S(v)) \mid v\in\child_S(u)\} \\ & \textrm{and, otherwise, } &\mathcal{C}_{S}(u)&\coloneqq \{\{u\}\}. \end{align*} One easily verifies that, in both cases, $\mathcal{C}_{S}(u)$ yields a valid partition of the leaf set $L(S(u))$. Recall that $\sigma_{|L',M'}\colon L'\to M'$ was defined as the ``submap'' of $\sigma$ with $L'\subseteq L$ and $\sigma(L') \subseteq M' \subseteq M$. \begin{lemma}\label{lem:xy-iff-Ks-in-same-CC} Let $(G=(L,E),\sigma)$ be a properly colored cograph. Suppose that the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible and let $S$ be a tree on $M$ that displays $\ensuremath{\mathfrak{S}}(G,\sigma)$. Moreover, let $L'\subseteq L$ and $u\in V(S)$ such that $\sigma(L') \subseteq L(S(u))$.
Finally, set $\ensuremath{\mathfrak{R}}\coloneqq \ensuremath{\mathfrak{R}}(G[L'],\sigma_{|L',L(S(u))},\mathcal{C}_{S}(u))$.\\ Then, for all distinct $\ensuremath{\mathfrak{R}}$-classes $K$ and $K'$, either $xy\in E$ for all $x\in K$ and $y\in K'$, or $xy\notin E$ for all $x\in K$ and $y\in K'$. In particular, for $x\in K$ and $y\in K'$, it holds that \begin{equation*} xy\in E \iff K, K' \text{ are contained in the same connected component of } G[L']. \end{equation*} \end{lemma} \begin{proof} Let $\sigma \colon L\to M$ and put $\ensuremath{\mathfrak{S}} = \ensuremath{\mathfrak{S}}(G,\sigma)$. By assumption, $S$ is a tree on $M$ that displays the compatible triple set $\ensuremath{\mathfrak{S}}$. Moreover, the condition $\sigma(L') \subseteq L(S(u))\subseteq M$ together with the fact that $\mathcal{C}_{S}(u)$ is a partition of $L(S(u))$ ensures that $\ensuremath{\mathfrak{R}}$ is well-defined. Now suppose that $K$ and $K'$ are distinct $\ensuremath{\mathfrak{R}}$-classes. As a consequence of Lemma~\ref{lem:KinCC}, exactly one of the following two cases applies: either (i) $K$ and $K'$ are contained in the same connected component $C$ of $G[L']$ or (ii) $K\subseteq C$ and $K'\subseteq C'$ for distinct components $C$ and $C'$ of $G[L']$. Case (i). Assume, for contradiction, that there are two vertices $x\in K$ and $y\in K'$ with $xy\notin E$. Note that $C\subseteq L'$ and thus, $G[C]$ is an induced subgraph of $G[L']$. By Prop.~\ref{prop:cograph}, both induced subgraphs $G[L']$ and $G[C]$ are cographs. Now we can again apply Prop.~\ref{prop:cograph} to conclude that $\mathrm{diam}(G[C])\leq 2$. Hence, there is a vertex $z\in C$ such that $xz,zy\in E$. Since $x$ and $y$ are in distinct classes of $\ensuremath{\mathfrak{R}}$ but in the same connected component $C$ of $G[L']$, $\sigma(x)$ and $\sigma(y)$ must lie in distinct sets of $\mathcal{C}_{S}(u)$. In particular, it must hold that $\sigma(x)\neq \sigma(y)$.
The fact that $G[L']$ is properly colored together with $xz, yz \in E$ implies that $\sigma(z)\neq \sigma(x),\sigma(y)$. By definition and since $G[L']$ is an induced subgraph of $G$, we obtain that $\sigma(x)\sigma(y)|\sigma(z)\in\ensuremath{\mathfrak{S}}$. In particular, $\sigma(x)\sigma(y)|\sigma(z)$ is displayed by $S$. Since $\sigma(x)$ and $\sigma(y)$ lie in distinct sets of $\mathcal{C}_{S}(u)$, $u$ must be an inner vertex, and we have $\sigma(x)\in L(S(v))$ and $\sigma(y)\in L(S(v'))$ for distinct $v, v'\in\child_S(u)$. In particular, it must hold that $\lca_S(\sigma(x),\sigma(y))=u$. Moreover, $z\in C\subseteq L'$ and $\sigma(L')\subseteq L(S(u))$ imply that $\sigma(z)\in L(S(u))$. Taken together, the latter two arguments imply that $S$ cannot display the triple $\sigma(x)\sigma(y)|\sigma(z)$; a contradiction. Case~(ii). By assumption, the $\ensuremath{\mathfrak{R}}$-classes $K$ and $K'$ are in distinct connected components of $G[L']$, which immediately implies $xy\notin E$ for all $x\in K$, $y\in K'$. In summary, either $xy\in E$ for all $x\in K$ and $y\in K'$, or $xy\notin E$ for all $x\in K$ and $y\in K'$. Moreover, Case (i) establishes the \emph{if}-direction and Case (ii) establishes, by means of contraposition, the \emph{only-if}-direction of the final statement. \end{proof} Lemma~\ref{lem:xy-iff-Ks-in-same-CC} suggests a recursive strategy to construct a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ for a given properly-colored cograph $(G,\sigma)$, which is outlined in the main part of this paper and described more formally in Algorithm~\ref{alg:Ru-recognition}. We proceed by proving the correctness of Algorithm~\ref{alg:Ru-recognition}. \begin{theorem} \label{thm:algo-works} Let $(G,\sigma)$ be a properly colored cograph, and assume that the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible.
Then Algorithm~\ref{alg:Ru-recognition} returns a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\ensuremath{\mathscr{S}})=G$ in polynomial time. \end{theorem} \begin{proof} Let $\sigma\colon L\to M$ and put $\ensuremath{\mathfrak{S}}\coloneqq\ensuremath{\mathfrak{S}}(G,\sigma)$. By a slight abuse of notation, we will simply write $\mu$ and $\ensuremath{\tau_{T}}$ also for restrictions to subsets of $V(T)$. Observe first that due to Line \ref{line:if-false}, the algorithm continues only if $(G,\sigma)$ is a properly colored cograph and $\ensuremath{\mathfrak{S}}$ is compatible, and returns a tuple $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ in this case. In particular, a tree $S$ on $M$ that displays $\ensuremath{\mathfrak{S}}$ exists, and can e.g.\ be constructed using \texttt{BUILD} (Line \ref{line:S}). By Lemma~\ref{lem:arbitrary-tT}, we can always construct a time map $\ensuremath{\tau_{S}}$ for $S$ satisfying $\ensuremath{\tau_{S}}(x)=0$ for all $x\in L(S)$ (Line~\ref{line:tS}). By definition, $\ensuremath{\tau_{S}}(y)>\ensuremath{\tau_{S}}(x)$ must hold for every edge $(y,x)\in E(S)$, and thus, we obtain $\epsilon>0$ in Line~\ref{line:epsilon}. Moreover, the recursive function \texttt{BuildGeneTree} maintains the following invariant: \begin{Xclaim} \label{clm:color-subset} In every recursion step of the function \texttt{BuildGeneTree}, we have $\sigma(L')\subseteq L(S(u_S))$. \end{Xclaim} \begin{claim-proof} Since $S$ (with root $\rho_S$) is a tree on $M$ by construction and thus $L(S(\rho_S))=M$, the statement holds for the top-level recursion step on $L$ and $\rho_S$. Now assume that the statement holds for an arbitrary step on $L'$ and $u_S$. If $u_S$ is a leaf, there are no deeper recursion steps. Thus assume that $u_S$ is an inner vertex. 
Recall that $\mathcal{C}_{S}(u_S)$ is a partition of $L(S(u_S))$ (by construction), and that $\ensuremath{\mathfrak{R}}= \ensuremath{\mathfrak{R}}(G[L'], \sigma_{|L',L(S(u_S))}, \mathcal{C}_{S}(u_S))$ is an equivalence relation (by Lemma~\ref{lem:KinCC}). This, together with the definition of $\ensuremath{\mathfrak{R}}$ and $\sigma(L')\subseteq L(S(u_S))$, implies that, for every $\ensuremath{\mathfrak{R}}$-class $K$, there is a child $v_S\in \child_S(u_{S})$ such that $\sigma(K)\subseteq L(S(v_S))$. In particular, therefore, the statement is true for all recursive calls on $K$ and $v_S$ in Line~\ref{line:recursive-call}. Repeating this argument top-down along the recursion hierarchy proves the claim. \end{claim-proof} Note that we are in the \emph{else}-condition in Line \ref{line:rel} only if $u_S$ is not a leaf. Therefore and as a consequence of Claim~\ref{clm:color-subset} and by similar arguments as in its proof, for every connected component $C$ of $G[L']$, there is a vertex $v^*_S\in\child_S(u_S)$ such that $\sigma(C)\cap L(S(v^*_S))\ne\emptyset$ in Line~\ref{line:choose-v-S}, and, for every $\ensuremath{\mathfrak{R}}$-class $K$, a vertex $v_S\in \child_S(u_{S})$ such that $\sigma(K)\subseteq L(S(v_S))$ in Line~\ref{line:choose-v-S-for-class}. Moreover, $\parent_S(u_{S})$ is always defined since we have $u_S=\rho_S$ and thus $\parent_S(u_S)=0_S$ in the top-level recursion step, and recursively call the function \texttt{BuildGeneTree} on vertices $v_S$ such that $v_S\prec_S u_S$. In summary, all assignments are well-defined in every recursion step. It is easy to verify that the algorithm terminates since, in each recursion step, we either have that $u_S$ is a leaf, or we recurse on vertices $v_{S}$ that lie strictly below $u_S$.
We argue that the resulting tree $T'$ is a \emph{not necessarily phylogenetic} tree on $L$ by observing that, in each step, each $x\in L'$ is either attached to the tree as a leaf if $u_S$ is a leaf, or, since $\ensuremath{\mathfrak{R}}$ forms a partition of $L'$ by Lemma~\ref{lem:KinCC}, passed down to a recursion step on $K$ for some $\ensuremath{\mathfrak{R}}$-class $K$. Nevertheless, $T'$ is turned into a phylogenetic tree $T$ by suppression of degree-two vertices in Line~\ref{line:Tphylo}. Finally, $\mu(x)$ and $\ensuremath{\tau_{T}}(x)$ are assigned for all vertices $x\in L(T')=L$ in Line~\ref{line:mu-tT-leaves}, and for all newly created inner vertices in Lines~\ref{line:mu-tT-inner1} and~\ref{line:mu-tT-inner2}. Recall that $\ensuremath{\tau_{S}}$ is a valid time map satisfying $\ensuremath{\tau_{S}}(x)=0$ for all $x\in L(S)$ by construction. Before we continue to show that $\ensuremath{\mathscr{S}}$ is a relaxed scenario, we first show that the conditions for time maps and time consistency are satisfied for $(T',\ensuremath{\tau_{T}}, S, \ensuremath{\tau_{S}},\mu)$: \begin{Xclaim} \label{clm:tT-mu-in-T-prime} For all $x,y \in V(T')$ with $x\prec_{T'} y$, we have $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$. Moreover, for all $x\in V(T')$, the following statements are true: \vspace{-0.02in} \begin{description} \item[(i)] if $\mu(x)\in V(S)$, then $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(\mu(x))$, and \item[(ii)] if $\mu(x)=(a,b)\in E(S)$, then $\ensuremath{\tau_{S}}(b)<\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{S}}(a)$. \end{description} \end{Xclaim} \begin{claim-proof} Recall that we always write an edge $(u,v)$ of a tree $T$ such that $v\prec_T u$. For the first part of the statement, it suffices to show that $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$ holds for every edge $(y,x)\in E(T')$, and thus to consider all vertices $x\neq \rho_{T'}$ in $T'$ and their unique parent, which will be denoted by $y$ in the following. 
Likewise, we have to consider all vertices $x\in V(T')$ including the root to show the second statement. The root $\rho_{T'}$ of $T'$ corresponds to the vertex $u_T$ created in Line~\ref{line:create-uT} in the top-level recursion step on $L$ and $\rho_{S}$. Hence, since $\parent_S(\rho_S)=0_S$, we have $\mu(\rho_{T'})=(0_S,\rho_S)\in E(S)$ and $\ensuremath{\tau_{T}}(\rho_{T'})=\ensuremath{\tau_{S}}(\rho_S) +\epsilon$ (cf.\ Line~\ref{line:mu-tT-inner1}). Therefore, we have to show~(ii). Since $\epsilon>0$, it holds that $\ensuremath{\tau_{S}}(\rho_S)<\ensuremath{\tau_{T}}(\rho_{T'})$. Moreover, $\ensuremath{\tau_{S}}(0_S)-\ensuremath{\tau_{S}}(\rho_{S})\ge 3\epsilon$ holds by construction, and thus $\ensuremath{\tau_{S}}(0_S)-(\ensuremath{\tau_{T}}(\rho_{T'})-\epsilon)\ge 3\epsilon$ and $\ensuremath{\tau_{S}}(0_S)-\ensuremath{\tau_{T}}(\rho_{T'})\ge 2\epsilon$, which together with $\epsilon>0$ implies $\ensuremath{\tau_{T}}(\rho_{T'})<\ensuremath{\tau_{S}}(0_S)$. We now consider the remaining vertices $x\in V(T')\setminus\{\rho_{T'}\}$. Every such vertex $x$ is introduced into $T'$ in some recursion step on $L'$ and $u_S$ in one of the Lines~\ref{line:create-uT}, \ref{line:attach-leaf}, \ref{line:create-vT} or~\ref{line:recursive-call}. There are exactly the following three cases: (a) $x\in L(T')$ is a leaf attached to some inner vertex $u_T$ in Line~\ref{line:attach-leaf}, (b) $x=v_T$ as created in Line~\ref{line:create-vT}, and (c) $x=w_T$ as assigned in Line~\ref{line:recursive-call}. Note that if $x=u_T$ as created in Line~\ref{line:create-uT}, then $u_T$ is either the root of $T'$, or equals a vertex $w_T$ as assigned in Line~\ref{line:recursive-call} in the ``parental'' recursion step. In Case~(a), we have that $x\in L(T')$ is a leaf and attached to some inner vertex $y=u_T$.
Since $u_S$ must be a leaf in this case, and thus $\ensuremath{\tau_{S}}(u_S)=0$, we have $\ensuremath{\tau_{T}}(y)=0+\epsilon=\epsilon$ and $\ensuremath{\tau_{T}}(x)=0$ (cf.\ Lines~\ref{line:mu-tT-inner1} and~\ref{line:mu-tT-leaves}). Since $\epsilon>0$, this implies $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$. Moreover, we have $\mu(x)=\sigma(x)\in L(S)\subset V(S)$ (cf.\ Line~\ref{line:mu-tT-leaves}), and thus have to show Subcase~(i). Since $u_S$ is a leaf and $\sigma(L')\subseteq L(S(u_S))$, we conclude $\sigma(x)=u_S$. Thus we obtain $\ensuremath{\tau_{T}}(x)=0=\ensuremath{\tau_{S}}(u_S)=\ensuremath{\tau_{S}}(\mu(x))$. In Case~(b), we have $x=v_T$ as created in Line~\ref{line:create-vT}, and $x$ is attached as a child to some vertex $y=u_T$ created in the same recursion step. Thus, we have $\ensuremath{\tau_{T}}(y)=\ensuremath{\tau_{S}}(u_S)+\epsilon$ and $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(u_S)-\epsilon$ (cf.\ Lines~\ref{line:mu-tT-inner1} and~\ref{line:mu-tT-inner2}). Therefore and since $\epsilon>0$, it holds $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$. Moreover, we have $\mu(x)=(u_S,v^*_S)\in E(S)$ for some $v^*_S\in\child_S(u_S)$. Hence, we have to show Subcase~(ii). By a similar calculation as before, $\epsilon>0$, $\ensuremath{\tau_{S}}(u_S)-\ensuremath{\tau_{S}}(v^*_S)\ge 3\epsilon$ and $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(u_S)-\epsilon$ imply $\ensuremath{\tau_{S}}(v^*_S)<\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{S}}(u_S)$. In Case~(c), $x=w_T$ as assigned in Line~\ref{line:recursive-call} is equal to $u_T$ as created in Line~\ref{line:create-uT} in some next-deeper recursion step with $u'_S\in\child_S(u_S)$. Thus, we have $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(u'_S)+\epsilon$ and $\mu(x)=(u_S,u'_S)\in E(S)$ (cf.\ Line~\ref{line:mu-tT-inner1}). Moreover, $x$ is attached as a child of some vertex $y=v_T$ as created in Line~\ref{line:create-vT}. 
Thus, we have $\ensuremath{\tau_{T}}(y)=\ensuremath{\tau_{S}}(u_S)-\epsilon$. By construction and since $(u_S,u'_S)\in E(S)$, we have $\ensuremath{\tau_{S}}(u_S)-\ensuremath{\tau_{S}}(u'_S)\ge 3\epsilon$. Therefore, $(\ensuremath{\tau_{T}}(y)+\epsilon) - (\ensuremath{\tau_{T}}(x)-\epsilon) \ge 3\epsilon$ and thus $\ensuremath{\tau_{T}}(y)- \ensuremath{\tau_{T}}(x) \ge \epsilon$. This together with $\epsilon>0$ implies $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$. Moreover, since $\mu(x)=(u_S,u'_S)\in E(S)$ for some $u'_S\in\child_S(u_S)$, we have to show Subcase~(ii). By a similar calculation as before, $\epsilon>0$, $\ensuremath{\tau_{S}}(u_S)-\ensuremath{\tau_{S}}(u'_S)\ge 3\epsilon$ and $\ensuremath{\tau_{T}}(x)=\ensuremath{\tau_{S}}(u'_S)+\epsilon$ imply $\ensuremath{\tau_{S}}(u'_S)<\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{S}}(u_S)$. \end{claim-proof} \begin{Xclaim} \label{clm:relaxed-scenario} $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is a relaxed scenario. \end{Xclaim} \begin{claim-proof} The tree $T$ is obtained from $T'$ by first adding a planted root $0_T$ (and connecting it to the original root) and then suppressing all inner vertices except $0_T$ that have only a single child in Line \ref{line:Tphylo}. In particular, $T$ is a planted phylogenetic tree by construction. The root constraint (G0) $\mu(x)=0_S$ if and only if $x=0_T$ also holds by construction (cf.\ Line~\ref{line:mu-tT-planted-root}). Since we clearly have not contracted any outer edges $(y,x)$, i.e.\ with $x\in L(T')$, we conclude that $L(T')=L(T)=L$. As argued before, we have $\ensuremath{\tau_{T}}(x)=0$ and $\mu(x)=\sigma(x)$ whenever $x\in L(T')=L(T)$ (cf.\ Line~\ref{line:mu-tT-leaves}). 
Since all other vertices are either $0_T$ or mapped by $\mu$ to some edge of $S$ (cf.\ Lines~\ref{line:mu-tT-planted-root}, \ref{line:mu-tT-inner1} and~\ref{line:mu-tT-inner2}), the leaf constraint (G1) $\mu(x)=\sigma(x)$ is satisfied if and only if $x\in L(T)$. By construction, we have $V(T)\setminus \{0_T\} \subseteq V(T')$. Moreover, suppression of vertices clearly preserves the $\preceq$-relation between all vertices $x,y\in V(T)\setminus \{0_T\}$. Together with Claim~\ref{clm:tT-mu-in-T-prime}, this implies $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$ for all vertices $x,y\in V(T)\setminus \{0_T\}$ with $x\prec_{T} y$. For the single child $\rho_T$ of $0_T$ in $T$, we have $\ensuremath{\tau_{T}}(\rho_T)\le \ensuremath{\tau_{S}}(\rho_S)+\epsilon$ where equality holds if the root of $T'$ was not suppressed and thus is equal to $\rho_T$. Moreover, $\ensuremath{\tau_{T}}(0_T)=\ensuremath{\tau_{S}}(0_S)$ and $\ensuremath{\tau_{S}}(0_S)-\ensuremath{\tau_{S}}(\rho_S)\ge 3\epsilon$ hold by construction. Taken together the latter two arguments imply that $\ensuremath{\tau_{T}}(\rho_T)<\ensuremath{\tau_{T}}(0_T)$. In particular, we obtain $\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{T}}(y)$ for all vertices $x,y\in V(T)$ with $x\prec_{T} y$. Hence, $\ensuremath{\tau_{T}}$ is a time map for $T$, which, moreover, satisfies $\ensuremath{\tau_{T}}(x)=0$ for all $x\in L(T)$. To show that $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu, \ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is a relaxed scenario, it remains to show that $\mu$ is time-consistent with the time maps $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$. In case $x\in L(T)\subset V(T)$, we have $\mu(x)=\sigma(x)\in L(S)\subset V(S)$ and thus $\ensuremath{\tau_{T}}(x)=0=\ensuremath{\tau_{S}}(\sigma(x))=\ensuremath{\tau_{S}}(\mu(x))$. For $0_T$, we have $\ensuremath{\tau_{T}}(0_T)=\ensuremath{\tau_{S}}(0_S)=\ensuremath{\tau_{S}}(\mu(0_T))$. 
The latter two arguments imply that all vertices $x\in L(T)\cup \{0_T\}$ satisfy (C1) in Def.~\ref{def:tc-map}. The remaining vertices of $T$ are all vertices of $T'$ as well. In particular, they are all inner vertices that are mapped to some edge of $S$ (cf.\ Lines~\ref{line:mu-tT-inner1} and~\ref{line:mu-tT-inner2}). The latter two arguments together with Claim~\ref{clm:tT-mu-in-T-prime} imply that, for all vertices $x\in V(T)\setminus (L(T)\cup \{0_T\})$, we have $\mu(x)=(a,b)\in E(S)$ and $\ensuremath{\tau_{S}}(b)<\ensuremath{\tau_{T}}(x)<\ensuremath{\tau_{S}}(a)$. Therefore, every such vertex satisfies (C2) in Def.~\ref{def:tc-map}. It follows that the time consistency constraint (G2) is also satisfied, and thus $\ensuremath{\mathscr{S}}$ is a relaxed scenario. \end{claim-proof} \begin{Xclaim} \label{clm:cotree} Every vertex $v\in V^0(T)$ was either created in Line~\ref{line:create-uT} or in Line~\ref{line:create-vT}. In particular, it holds for all $x,y\in L(T)$ with $\lca_T(x,y)=v$: \vspace{-0.02in} \begin{description} \item[(1)] If $v$ was created in Line~\ref{line:create-uT}, then $xy\notin E(G)$ and $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$. \item[(2)] If $v$ was created in Line~\ref{line:create-vT}, then $xy\in E(G)$ and $xy\in E(\Gu(\ensuremath{\mathscr{S}}))$. \end{description} Furthermore, $G$ is a cograph with cotree $(T,t)$ where $t(v) = 0$ if $v$ was created in Line~\ref{line:create-uT} and $t(v) = 1$, otherwise. \end{Xclaim} \begin{claim-proof} Since $T$ is phylogenetic, every vertex $v\in V^0(T)$ is the last common ancestor of two leaves $x,y\in L\coloneqq L(T)$. Let $v\in V^0(T)$ be arbitrary and choose arbitrary leaves $x,y\in L$ such that $\lca_T(x,y)=v$. Since $v\in V^0(T)$, the leaves $x$ and $y$ must be distinct. Note that $v\notin L(T)\cup\{0_T\}$, and thus, $v$ is also an inner vertex in $T'$.
Therefore, we have exactly the two cases: (1) $v=u_T$ is created in Line~\ref{line:create-uT}, and (2) $v=v_T$ is created in Line~\ref{line:create-vT}. Similarly as before, the case that $v=w_T$ is assigned in Line~\ref{line:recursive-call} is covered by Case~(1), since, in this case, $w_T$ is created in a deeper recursion step. We consider the recursion step on $L'$ and $u_S$, in which $v$ was created. Clearly, it must hold that $x,y\in L'$. Before we continue, set $\ensuremath{\mathfrak{R}}\coloneqq\ensuremath{\mathfrak{R}}(G[L'], \sigma_{|L',L(S(u_S))}, \mathcal{C}_{S}(u_S))$ as in Line~\ref{line:rel}. Note that, since $\ensuremath{\mathscr{S}}$ is a relaxed scenario, the graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ is well-defined. For Statement~(1), suppose that $v=u_T$ was created in Line~\ref{line:create-uT}. Hence, we have the two cases: (i) the vertex $u_S$ of $S$ in this recursion step is a leaf, and (ii) $u_S$ is an inner vertex. In Case~(i), we have $L(S(u_S))=\{u_S\}$. Together with Claim~\ref{clm:color-subset} and $\sigma(x),\sigma(y)\in\sigma(L')$, this implies $\sigma(x)=\sigma(y)=u_S$. By assumption, $(G,\sigma)$ is properly colored. By Prop.~\ref{prop:properCol}, $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ must be properly colored as well. Hence, we conclude that $xy\notin E(G)$ and $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$, respectively. In Case~(ii), $u_S$ is not a leaf. Therefore, $\lca_{T}(x,y)=v=u_T$ is only possible if $x$ and $y$ lie in distinct connected components of $G[L']$. This immediately implies $xy\notin E(G)$. Moreover, we have $\sigma(x),\sigma(y)\in L(S(u_S))$ and thus $\lca_S(\sigma(x),\sigma(y))\preceq_{S} u_S$. Since $\ensuremath{\tau_{S}}$ is a time map for $S$, it follows that $\ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))\le \ensuremath{\tau_{S}}(u_S)$.
Together with $\ensuremath{\tau_{T}}(u_T)=\ensuremath{\tau_{S}}(u_S)+\epsilon$ (cf.\ Line~\ref{line:mu-tT-inner1}) and $\epsilon>0$, this implies $\ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y))) < \ensuremath{\tau_{T}}(v)=\ensuremath{\tau_{T}}(\lca_T(x,y))$. Hence, $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$. For Statement~(2), suppose that $v=v_T$ was created in Line~\ref{line:create-vT}. Therefore, $\lca_{T}(x,y)=v=v_T$ is only possible if $x$ and $y$ lie in the same connected component of $G[L']$ but in distinct $\ensuremath{\mathfrak{R}}$-classes. Now, we can apply Lemma~\ref{lem:xy-iff-Ks-in-same-CC} to conclude that $xy\in E(G)$. Moreover, the fact that $x$ and $y$ lie in the same connected component of $G[L']$ but in distinct $\ensuremath{\mathfrak{R}}$-classes implies that $\sigma(x)$ and $\sigma(y)$ lie in distinct sets of $\mathcal{C}_{S}(u_S)$. Hence, there are distinct $v_S,v'_S\in\child_S(u_S)$ such that $\sigma(x)\preceq_{S}v_S$ and $\sigma(y)\preceq_{S} v'_S$. In particular, $\lca_S(\sigma(x),\sigma(y))=u_S$. In Line~\ref{line:mu-tT-inner2}, we assign $\ensuremath{\tau_{T}}(\lca_T(x,y))=\ensuremath{\tau_{T}}(v_T)=\ensuremath{\tau_{S}}(u_S)-\epsilon$. Together with $\epsilon>0$, the latter two arguments imply $\ensuremath{\tau_{T}}(\lca_T(x,y))<\ensuremath{\tau_{S}}(u_S)=\ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))$. Therefore, we have $xy\in E(\Gu(\ensuremath{\mathscr{S}}))$. By the latter arguments, the cotree $(T,t)$ as defined above is well-defined and, for all $v\in V^0(T)$, we have $t(v)=1$ if and only if $xy\in E(G)$ for all $x,y\in L$ with $\lca_T(x,y)=v$. Hence, $(T,t)$ is a cotree for $G$. \end{claim-proof} \begin{Xclaim} \label{clm:Ru-scen-equals-Ru} The relaxed scenario $\ensuremath{\mathscr{S}}$ satisfies $\Gu(\ensuremath{\mathscr{S}})=G$. \end{Xclaim} \begin{claim-proof} Since $L(T)=L$, the two undirected graphs $\Gu(\ensuremath{\mathscr{S}})$ and $G$ have the same vertex set.
By Claim~\ref{clm:cotree}, we have, for all distinct $x,y\in L$, either $xy\notin E(G)$ and $xy\notin E(\Gu(\ensuremath{\mathscr{S}}))$, or $xy\in E(G)$ and $xy\in E(\Gu(\ensuremath{\mathscr{S}}))$. \end{claim-proof} Together, Claims~\ref{clm:relaxed-scenario} and~\ref{clm:Ru-scen-equals-Ru} imply that Algorithm~\ref{alg:Ru-recognition} returns a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $\Gu(\ensuremath{\mathscr{S}})=G$. To see that Algorithm~\ref{alg:Ru-recognition} runs in polynomial time, we first note that the function $\mathtt{BuildGeneTree()}$ operates in polynomial time. This is clear for the setup and the $\mathbf{if}$ part. The construction of $\ensuremath{\mathfrak{R}}$ in the $\mathbf{else}$ part involves the computation of connected components and the evaluation of Def.~\ref{def:rel}, both of which can be achieved in polynomial time. This is also true for the comparisons of color classes required to identify $v_S^*$ and $v_S$. Since the sets $K$ in the recursive calls of $\mathtt{BuildGeneTree()}$ form a partition of $L'$, the $v_S$ are children of $u_S$ in $S$, and the depth of the recursion is bounded by $O(|L(S)|)$, the total effort remains polynomial. \end{proof} \begin{theorem} A graph $(G,\sigma)$ is an LDT graph if and only if it is a properly colored cograph and $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. \label{thm:characterization} \end{theorem} \begin{proof} By Lemmas~\ref{lem:Ru-SpeciesTriple} and~\ref{lem:propcolcograph}, if $(G,\sigma)$ is an LDT graph then it is a properly colored cograph and $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. Now suppose that $(G,\sigma)$ is a properly colored cograph and $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. 
Then, by Thm.~\ref{thm:algo-works}, Algorithm~\ref{alg:Ru-recognition} outputs a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $\Gu(\ensuremath{\mathscr{S}})=G$. By definition, this in particular implies that $(G,\sigma)$ is an LDT graph. \end{proof} \begin{corollary} LDT graphs can be recognized in polynomial time. \label{cor:LDTpoly} \end{corollary} \begin{proof} Cographs can be recognized in linear time \cite{Corneil:81a}, the proper coloring can be verified in linear time, the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ contains at most $|V(G)|\cdot|E(G)|$ triples and can be constructed in $O(|V(G)|\cdot|E(G)|)$ time, and compatibility of $\ensuremath{\mathfrak{S}}(G,\sigma)$ can be checked in $O(\min(|\ensuremath{\mathfrak{S}}|\log^2 |V(G)|, |\ensuremath{\mathfrak{S}}| + |V(G)|^2\ln |V(G)|))$ time \cite{Jansson:05}. \end{proof} \begin{corollary} The property of being an LDT graph is hereditary, that is, if $(G,\sigma)$ is an LDT graph then each of its vertex-induced subgraphs is an LDT graph. \label{cor:LDT-here} \end{corollary} \begin{proof} Let $(G=(V,E),\sigma)$ be an LDT graph. It suffices to show that $(G-x, \sigma_{|V\setminus \{x\}})$ is an LDT graph, where $G-x$ is obtained from $G$ by removing $x\in V$ and all its incident edges. By Prop.~\ref{prop:cograph}, $G-x$ is a cograph that clearly remains properly colored. Moreover, every induced path on three vertices in $G-x$ is also an induced path on three vertices in $G$. This implies that if $ab|c \in \ensuremath{\mathfrak{S}}' = \ensuremath{\mathfrak{S}}(G-x,\sigma_{|V\setminus \{x\}})$, then $ab|c \in \ensuremath{\mathfrak{S}}(G,\sigma)$. Hence, $\ensuremath{\mathfrak{S}}' \subseteq \ensuremath{\mathfrak{S}}(G,\sigma)$. By Thm.~\ref{thm:characterization}, $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. 
Hence, any tree that displays all triples in $\ensuremath{\mathfrak{S}}(G,\sigma)$ in particular displays all triples in $\ensuremath{\mathfrak{S}}'$. Therefore, $\ensuremath{\mathfrak{S}}'$ is compatible. In summary, $(G-x, \sigma_{|V\setminus \{x\}})$ is a properly colored cograph and $\ensuremath{\mathfrak{S}}'$ is compatible. By Thm.~\ref{thm:characterization}, it is an LDT graph. \end{proof} The relaxed scenarios $\ensuremath{\mathscr{S}}$ explaining an LDT graph $(G,\sigma)$ are far from being unique. In fact, we can choose from a large set of trees $(S,\ensuremath{\tau_{S}})$ that is determined only by the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$: \begin{corollary} If $(G=(L,E),\sigma)$ is an LDT graph with coloring $\sigma\colon L\to M$, then for all planted trees $S$ on $M$ that display $\ensuremath{\mathfrak{S}}(G,\sigma)$ there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that contains $\sigma$ and $S$ and that explains $(G,\sigma)$. \label{cor:manyT} \end{corollary} \begin{proof} If $(G,\sigma)$ is an LDT graph, then the species tree $S$ assigned in Line~\ref{line:S} in Algorithm~\ref{alg:Ru-recognition} is an arbitrary tree on $M$ displaying $\ensuremath{\mathfrak{S}}(G,\sigma)$. \end{proof} \begin{corollary}\label{cor:displayed-cotree} If $(G,\sigma)$ is an LDT graph, then there exists a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ explaining $(G,\sigma)$ such that $T$ displays the discriminating cotree $T_{G}$ of $G$. \end{corollary} \begin{proof} Suppose that $(G,\sigma)$ is an LDT graph. By Thm.~\ref{thm:characterization}, $(G,\sigma)$ must be a properly colored cograph and $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. 
Hence, Thm.~\ref{thm:algo-works} implies that Algorithm~\ref{alg:Ru-recognition} constructs a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ explaining $(G,\sigma)$. In particular, the tree $T$ together with the labeling $t$ as specified in Claim~\ref{clm:cotree} is a cotree for $G$. Since the unique discriminating cotree $(T_{G},\hat t)$ of $G$ is obtained from any other cotree by contraction of edges, the tree $T$ must display $T_{G}$. \end{proof} Although Cor.~\ref{cor:displayed-cotree} implies that there is always a relaxed scenario $\ensuremath{\mathscr{S}}$ in which the tree $T$ displays the discriminating cotree $T_{G}$ of $G=\Gu(\ensuremath{\mathscr{S}})$, this is not true for all relaxed scenarios $\ensuremath{\mathscr{S}}$ with $G=\Gu(\ensuremath{\mathscr{S}})$. Fig.~\ref{fig:Ru-cotree-not-displ} shows a relaxed scenario $\ensuremath{\mathscr{S}}' = (T',S',\sigma,\mu',\ensuremath{\tau_{T}}',\ensuremath{\tau_{S}}')$ with $G = \Gu(\ensuremath{\mathscr{S}}')$ for which $T'$ does not display $T_G$. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{./images-Rb/cotree-not-displayed.pdf} \caption{A relaxed scenario $\ensuremath{\mathscr{S}}$ (A) with gene tree $T$ (B) and its associated graph $(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ (C). The discriminating cotree $T_{\Gu(\ensuremath{\mathscr{S}})}$ (D) is not displayed by $T$.} \label{fig:Ru-cotree-not-displ} \end{figure} Cor.~\ref{cor:displayed-cotree} enables us to relate connectedness of LDT graphs to properties of the relaxed scenarios by which they can be explained. \begin{lemma} \label{lem:Gu-connected} An LDT graph $(G=(L,E),\sigma)$ with $|L|>1$ is connected if and only if for every relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$, we have $\ensuremath{\tau_{T}}(\rho_T)<\ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$. 
\end{lemma} \begin{proof} By contraposition, suppose first that there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$ such that $\ensuremath{\tau_{T}}(\rho_T) \geq \ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$. Since $|L(T)|=|L|>1$, the root $\rho_{T}$ is not a leaf. To show that $G$ is disconnected we consider two distinct children $v,w\in\child(\rho_T)$ of the root and leaves $x\in L(T(v))$ and $y\in L(T(w))$ and verify that $x$ and $y$ cannot be adjacent in $G$. If $\sigma(x)=\sigma(y)$, then $xy\notin E$ since $(G,\sigma)$ is properly colored (cf.\ Lemma~\ref{lem:propcolcograph}). Hence, suppose that $\sigma(x)\neq \sigma(y)$. By construction, $\lca_T(x,y)=\rho_T$ and thus, by assumption, $\ensuremath{\tau_{T}}(\lca_T(x,y)) = \ensuremath{\tau_{T}}(\rho_T) \geq \ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$. Now $\lca_S(\sigma(L))\succeq_S \lca_S(\sigma(x),\sigma(y))$ implies that $\ensuremath{\tau_{S}}(\lca_S(\sigma(L)))\geq \ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))$ and thus, $\ensuremath{\tau_{T}}(\lca_T(x,y))\geq \ensuremath{\tau_{S}}(\lca_S(\sigma(x),\sigma(y)))$. Hence, $xy\notin E$. Consequently, for all distinct children $v,w\in\child(\rho_T)$, none of the vertices in $L(T(v))$ are adjacent to any of the vertices in $L(T(w))$ and thus, $G$ is disconnected. Conversely, suppose that $G$ is disconnected. We consider Alg.~\ref{alg:Ru-recognition} with input $(G,\sigma)$. By Thms.~\ref{thm:algo-works} and~\ref{thm:characterization}, the algorithm constructs a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explains $(G,\sigma)$. Consider the top-level recursion step on $L$ and $\rho_S$. Since $G$ is disconnected, the vertex $u_T$ created in Line~\ref{line:create-uT} of this step equals the root $\rho_T$ of the final tree $T$. To see this, assume first that $\rho_S$ is a leaf. 
Then, we attach the $|L|>1$ elements in $L$ as leaves to $u_T$ (cf.\ Line~\ref{line:attach-leaf}). Now assume that $\rho_S$ is not a leaf. Since $G[L]=G$ has at least two connected components, we attach at least two vertices $v_T$ created in Line~\ref{line:create-vT} to $u_T$. Hence, $u_T$ is not suppressed in Line~\ref{line:Tphylo} and thus $\rho_T=u_T$. By construction, therefore, we have $\ensuremath{\tau_{T}}(\rho_T)=\ensuremath{\tau_{T}}(u_T)=\ensuremath{\tau_{S}}(u_S)+\epsilon=\ensuremath{\tau_{S}}(\rho_S)+\epsilon$ for some $\epsilon>0$. From $\rho_S\succeq_S \lca_S(\sigma(L))$ and the definition of time maps, we obtain $\ensuremath{\tau_{S}}(\rho_S)\ge\ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$. Therefore, we have $\ensuremath{\tau_{T}}(\rho_T)\ge \ensuremath{\tau_{S}}(\lca_S(\sigma(L)))+\epsilon>\ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$. In summary, we have shown so far that if all relaxed scenarios $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ that explain $(G,\sigma)$ satisfy $\ensuremath{\tau_{T}}(\rho_T)\leq\ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$, then $(G,\sigma)$ must be connected. However, $\ensuremath{\tau_{T}}(\rho_T) = \ensuremath{\tau_{S}}(\lca_S(\sigma(L)))$ cannot occur, since we can reuse the same arguments as in the beginning of this proof to show that, in this case, $G$ is disconnected. This completes the proof. \end{proof} \subsection{Least Resolved Trees for LDT graphs} As we have seen, e.g.\ in Cor.~\ref{cor:manyT}, there are in general many trees $S$ and $T$ forming relaxed scenarios $\ensuremath{\mathscr{S}}$ that explain a given LDT graph $(G,\sigma)$. This raises the question to what extent these trees are determined by ``representatives''. For $S$, we have seen that $S$ always displays $\ensuremath{\mathfrak{S}}(G,\sigma)$, suggesting that we consider the role of $S=\Aho(\ensuremath{\mathfrak{S}}(G,\sigma), M)$. 
This tree is least resolved in the sense that there is no relaxed scenario explaining the LDT graph $(G,\sigma)$ with a tree $S'$ that is obtained from $S$ by edge contractions. The latter is due to the fact that any edge contraction in $\Aho(\ensuremath{\mathfrak{S}}(G,\sigma), M)$ yields a tree $S'$ that no longer displays $\ensuremath{\mathfrak{S}}(G,\sigma)$ \cite{Jansson:12}. By Prop.~\ref{lem:Ru-SpeciesTriple}, none of the relaxed scenarios containing $S'$ explains the LDT graph $(G,\sigma)$. \begin{definition} Let $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario explaining the LDT graph $(G,\sigma)$. The planted tree $T$ is \emph{least resolved} for $(G,\sigma)$ if no relaxed scenario $(T',S',\sigma',\mu',\ensuremath{\tau_{T}}',\ensuremath{\tau_{S}}')$ with $T'<T$ explains $(G,\sigma)$. \label{def:LRT-LDT} \end{definition} In other words, $T$ is least resolved for $(G,\sigma)$ if no scenario with a gene tree $T'$ obtained from $T$ by a series of edge contractions explains $(G,\sigma)$. As outlined in the main part of this paper, the examples in Fig.~\ref{fig:LRT-not-unique} show that LDT graphs are in general not accompanied by unique least resolved trees, and the example in Fig.~\ref{fig:cotree-not-resolved-enough} shows that the unique discriminating cotree $T_G$ of an LDT graph $(G,\sigma)$ is not always ``sufficiently resolved''. \section{Horizontal Gene Transfer and Fitch Graphs} \label{TP:sect:HGT} \subsection{HGT-Labeled Trees and rs-Fitch Graphs} As alluded to in the introduction, LDT graphs are intimately related to horizontal gene transfer. To formalize this connection, we first define transfer edges. These will then be used to encode Walter Fitch's concept of xenologous gene pairs \cite{Fitch:00,Darby:17} as a binary relation, and thus, as the edge set of a graph. 
\begin{definition} Let $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario. An edge $(u,v)$ in $T$ is a \emph{transfer edge} if $\mu(u)$ and $\mu(v)$ are incomparable in $S$. The \emph{HGT-labeling} of $T$ in $\ensuremath{\mathscr{S}}$ is the edge labeling $\lambda_{\ensuremath{\mathscr{S}}}\colon E(T)\to\{0,1\}$ with $\lambda_{\ensuremath{\mathscr{S}}}(e)=1$ if and only if $e$ is a transfer edge. \label{def:HGT-label} \end{definition} The vertex $u$ in $T$ thus corresponds to an HGT event, with $v$ denoting the subsequent event, which now takes place in the ``recipient'' branch of the species tree. Note that $\lambda_{\ensuremath{\mathscr{S}}}$ is completely determined by $\ensuremath{\mathscr{S}}$. In general, for a given gene tree $T$, HGT events correspond to a labeling or coloring of the edges of $T$. \begin{definition}[Fitch graph] Let $(T,\lambda)$ be a tree $T$ together with a map $\lambda\colon E(T)\to \{0,1\}$. The \emph{Fitch graph} $\digamma(T,\lambda) = (V,E)$ has vertex set $V\coloneqq L(T)$ and edge set \begin{equation*} E \coloneqq \{xy \mid x,y\in L(T), \text{ the unique path connecting } x \text{ and } y \text{ in } T \text{ contains an edge } e \text{ with } \lambda(e)=1\}. \end{equation*} \label{def:FitchG} \end{definition} By definition, Fitch graphs of 0/1-edge-labeled trees are loop-less and undirected. We also call edges $e$ of $(T,\lambda)$ with label $\lambda(e)=1$ \emph{1-edges} and those with label $\lambda(e)=0$ \emph{0-edges}. \begin{remark} Fitch graphs as defined here have been termed \emph{undirected} Fitch graphs \cite{Hellmuth:18a}, in contrast to the notion of the \emph{directed} Fitch graphs of 0/1-edge-labeled trees studied e.g.\ in \cite{Geiss:18a,Hellmuth:2019a}. \end{remark} \begin{proposition}{\cite{Hellmuth:18a,Zverovich:99}} The following statements are equivalent. \begin{enumerate} \item $G$ is the Fitch graph of a 0/1-edge-labeled tree. \item $G$ is a complete multipartite graph. 
\item $G$ does not contain $K_2+K_1$ as an induced subgraph. \end{enumerate} \label{prop:fitch} \end{proposition} A natural connection between LDT graphs and complete multipartite graphs is suggested by the definition of the triple sets $\ensuremath{\mathfrak{T}}(G)$, since each forbidden induced subgraph $K_2+K_1$ of a complete multipartite graph corresponds to a triple in an LDT graph. More precisely, we have: \begin{lemma} $(G,\sigma)$ is a properly colored complete multipartite graph if and only if it is properly colored and $\ensuremath{\mathfrak{T}}(G) = \emptyset$. \label{lem:mulip-triples} \end{lemma} \begin{proof} The equivalence between the statements can be seen by observing that $G$ is a complete multipartite graph if and only if $G$ does not contain an induced $K_2+K_1$ (cf.\ Prop.~\ref{prop:fitch}). By definition of $\ensuremath{\mathfrak{T}}(G)$, this is the case if and only if $\ensuremath{\mathfrak{T}}(G)=\emptyset$. \end{proof} \begin{definition}[rs-Fitch graph] Let $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario with HGT-labeling $\lambda_{\ensuremath{\mathscr{S}}}$. We call the vertex colored graph $(\digamma(\ensuremath{\mathscr{S}}),\sigma) \coloneqq (\digamma(T,\lambda_{\ensuremath{\mathscr{S}}}),\sigma)$ the \emph{Fitch graph of the scenario $\ensuremath{\mathscr{S}}$.}\\ A vertex colored graph $(G,\sigma)$ is a \emph{relaxed scenario Fitch graph} (\emph{rs-Fitch graph}) if there is a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $G = \digamma(\ensuremath{\mathscr{S}})$. \label{def:rsFitchG} \end{definition} Fig.~\ref{fig:fitch-example} shows that rs-Fitch graphs are not necessarily properly colored. A subtle difficulty arises from the fact that Fitch graphs of 0/1-edge-labeled trees are defined without a reference to the vertex coloring $\sigma$, while the rs-Fitch graph is vertex colored. 
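The forbidden-subgraph characterization in Prop.~\ref{prop:fitch} is directly algorithmic: to test whether a graph is complete multipartite, it suffices to search for an induced $K_2+K_1$, i.e., three vertices spanning exactly one edge. The following Python sketch (the function name and encodings are our own illustrative choices, not part of the formal development) checks this criterion naively in $O(|V|^3)$ time.

```python
from itertools import combinations

def is_complete_multipartite(vertices, edges):
    """Check Prop. fitch's criterion: G is complete multipartite
    iff no three vertices induce K_2 + K_1, i.e., no vertex triple
    spans exactly one edge."""
    edge_set = {frozenset(e) for e in edges}
    for x, y, z in combinations(vertices, 3):
        n_edges = sum(frozenset(p) in edge_set
                      for p in ((x, y), (x, z), (y, z)))
        if n_edges == 1:  # an induced K_2 + K_1
            return False
    return True
```

For instance, the path $P_3=K_{1,2}$ passes the test, whereas a single edge together with an isolated vertex does not.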
\begin{fact} If $(G,\sigma)$ is an rs-Fitch graph, then $G$ is a complete multipartite graph. \label{obs:Fitch} \end{fact} The ``converse'' of Obs.~\ref{obs:Fitch} is not true in general, as we shall see in Thm.~\ref{thm:char-rsFitch} below. If, however, the coloring $\sigma$ can be chosen arbitrarily, then every complete multipartite graph $G$ can be turned into an rs-Fitch graph $(G,\sigma)$, as shown in Prop.~\ref{prop:converse-obs-fitch}. \begin{proposition} If $G$ is a complete multipartite graph, then there exists a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $(G,\sigma)$ is an rs-Fitch graph. \label{prop:converse-obs-fitch} \end{proposition} \begin{proof} Let $G$ be a complete multipartite graph and set $L\coloneqq V(G)$ and $R\coloneqq E(G)$. If $R=\emptyset$, then the relaxed scenario $\ensuremath{\mathscr{S}}$ constructed in the proof of Lemma~\ref{lem:Rempty} shows that $E(G)=E(\digamma(\ensuremath{\mathscr{S}})) = \emptyset$. Hence, we assume that $R\neq \emptyset$ and explicitly construct a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $(G,\sigma)$ is an rs-Fitch graph. We start by specifying the coloring $\sigma\colon L\to M$. Since $G$ is a complete multipartite graph, it is determined by its independent sets $I_1,\dots,I_k$, which form a partition of $L$. We set $M\coloneqq\{1,2,\ldots,k\}$ and color every $x\in I_j$ with color $\sigma(x)=j$, $1\leq j\leq k$. By construction, $(G,\sigma)$ is properly colored, and $\sigma(x)=\sigma(y)$ whenever $xy\notin R$, i.e., whenever $x$ and $y$ lie in the same independent set. Therefore, we have $\ensuremath{\mathfrak{S}}(G,\sigma) = \emptyset$. Let $S$ be the planted star tree with leaf set $L(S)=\{1,\dots,k\} = M$ and $\child_S(\rho_S)=M$. Since $R\neq \emptyset$, we have $k\geq 2$, and thus, $\rho_S$ has at least two children; hence, $S$ is phylogenetic. 
We choose the time map $\ensuremath{\tau_{S}}$ by putting $\ensuremath{\tau_{S}}(0_S)=2$, $\ensuremath{\tau_{S}}(\rho_S)=1$ and $\ensuremath{\tau_{S}}(x)=0$ for all $x\in L(S)$. Finally, we construct the planted phylogenetic tree $T$ with planted root $0_T$ and root $\rho_T$ as follows: Vertex $\rho_T$ has $k$ children $u_1,\dots, u_k$. If $I_j=\{x_j\}$ consists of a single element, then we put $u_j\coloneqq x_j$ as a leaf of $T$, and otherwise, vertex $u_j$ has exactly $|I_j|$ children where $\child(u_j)=I_j$. Now label, for all $i\in \{2,\dots, k\}$, the edge $(\rho_T,u_i)$ with ``$1$'', and all other edges with ``$0$''. Since $k\ge 2$, the tree $T$ is also phylogenetic by construction. \begin{figure}[ht] \centering \includegraphics[width=0.85\textwidth]{./images-Rb/proof-converse-obs-fitch.pdf} \caption{Construction in the proof of Prop.~\ref{prop:converse-obs-fitch}.} \label{fig:prop:converse-obs-fitch} \end{figure} We specify the time map $\ensuremath{\tau_{T}}$ and the reconciliation map $\mu$ by defining, for every $v\in V(T)$, \begin{equation*} \ensuremath{\tau_{T}}(v) \coloneqq \begin{cases} 2=\ensuremath{\tau_{S}}(0_S) &\text{if } v=0_T,\\ 0 &\text{if } v\in L(T),\\ 1/2 &\text{if } v = \rho_T, \textrm{ and}\\ 1/4 &\text{if } v=u_i\not\in L(T), 1\leq i\leq k, \end{cases} \qquad \mu(v) \coloneqq \begin{cases} 0_S &\text{if } v=0_T,\\ \sigma(v) &\text{if } v\in L(T),\\ (\rho_S,1) &\text{if } v = \rho_T, \textrm{ and}\\ (\rho_S,i) &\text{if } v=u_i\not\in L(T), 1\leq i\leq k. \end{cases} \end{equation*} With the help of Fig.~\ref{fig:prop:converse-obs-fitch}, it is now easy to verify that (i) $\ensuremath{\tau_{T}}$ is a time map for $T$, (ii) the reconciliation map $\mu$ is time-consistent, and (iii) $\lambda_{\ensuremath{\mathscr{S}}} = \lambda$. In summary, $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is a relaxed scenario, and $(G,\sigma) = (\digamma(\ensuremath{\mathscr{S}}),\sigma)$ is an rs-Fitch graph. 
\end{proof} Although every complete multipartite graph can be colored in such a way that it becomes an rs-Fitch graph (cf.\ Prop.~\ref{prop:converse-obs-fitch}), there are colored, complete multipartite graphs $(G,\sigma)$ that are not rs-Fitch graphs, i.e., that do not derive from a relaxed scenario (cf.\ Thm.~\ref{thm:char-rsFitch}). We summarize this discussion in the following \begin{fact} There are (planted) 0/1-edge-labeled trees $(T,\lambda)$ and colorings $\sigma\colon L(T)\to M$ such that there is no relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with $\lambda=\lambda_{\ensuremath{\mathscr{S}}}$. \label{obs:01T-notScen} \end{fact} A subtle -- but important -- observation is that trees $(T,\lambda)$ with coloring $\sigma$ for which Obs.~\ref{obs:01T-notScen} applies may still encode an rs-Fitch graph $(\digamma(T,\lambda),\sigma)$, see Example~\ref{ex:lst} and Fig.~\ref{fig:TreeClassesDistinct}. The latter is due to the fact that $\digamma(T,\lambda) = \digamma(T',\lambda')$ may be possible for a different tree $(T',\lambda')$ for which there is a relaxed scenario $\ensuremath{\mathscr{S}}' = (T',S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with $\lambda' = \lambda_{\ensuremath{\mathscr{S}}'}$. In this case, $(\digamma(T,\lambda),\sigma) = (\digamma(\ensuremath{\mathscr{S}}'),\sigma)$ is an rs-Fitch graph. We shall briefly return to these issues in the discussion in Section~\ref{sect:concl}. \subsection{LDT Graphs and rs-Fitch Graphs} We proceed to investigate to what extent an LDT graph provides information about an rs-Fitch graph. As we shall see in Thm.~\ref{thm:FitchRu-scenario}, there is indeed a close connection between rs-Fitch graphs and LDT graphs. We start with a useful relation between the edges of rs-Fitch graphs and the reconciliation maps $\mu$ of their scenarios. 
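Def.~\ref{def:FitchG} is itself easily turned into an algorithm: two leaves are adjacent in $\digamma(T,\lambda)$ precisely when the path between them contains a 1-edge, or, equivalently, when they lie in different connected components once all 1-edges are deleted. The following Python sketch illustrates this equivalence; the tree encoding (a parent map plus a dictionary of edge labels) and the function names are our own illustrative choices.

```python
from itertools import combinations

def fitch_graph(parent, labels, leaves):
    """Fitch graph of a 0/1-edge-labeled rooted tree: leaves x, y are
    adjacent iff they end up in different components of the forest
    obtained by deleting all 1-edges."""
    def component_root(v):
        # climb upwards as long as the edge to the parent is a 0-edge;
        # the vertex reached identifies v's component in the 0-edge forest
        while v in parent and labels[(parent[v], v)] == 0:
            v = parent[v]
        return v
    comp = {x: component_root(x) for x in leaves}
    return {frozenset((x, y))
            for x, y in combinations(leaves, 2) if comp[x] != comp[y]}
```

On a tree whose only 1-edges separate, say, the leaf sets $\{a,b\}$, $\{c\}$, and $\{d\}$, this returns exactly the complete multipartite graph with these parts, in line with Prop.~\ref{prop:fitch}.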
\begin{lemma} Let $\digamma(\ensuremath{\mathscr{S}})$ be an rs-Fitch graph for some relaxed scenario $\ensuremath{\mathscr{S}}$. Then, $ab\notin E(\digamma(\ensuremath{\mathscr{S}}))$ implies that $\lca_S(\sigma(a),\sigma(b)) \preceq_S \mu(\lca_T(a,b))$. \label{lem:independent-lca} \end{lemma} \begin{proof} Assume that $ab\notin E(\digamma(\ensuremath{\mathscr{S}}))$ and denote by $P_{xy}$ the unique path in $T$ that connects the two vertices $x$ and $y$. Clearly, $u\coloneqq \lca_T(a,b)$ is contained in $P_{ab}$, and this path $P_{ab}$ can be subdivided into the two paths $P_{u,a}$ and $P_{u,b}$ that have only vertex $u$ in common. Since $ab\notin E(\digamma(\ensuremath{\mathscr{S}}))$, none of the edges $(v,w)$ along the path $P_{ab}$ in $T$ is a transfer edge, and thus, the images $\mu(v)$ and $\mu(w)$ are comparable in $S$. This implies that the images of any two vertices along the path $P_{u,a}$ as well as the images of any two vertices along $P_{u,b}$ are comparable. In particular, therefore, $\mu(u)$ is comparable with both $\mu(a)=\sigma(a)\eqqcolon A$ and $\mu(b)=\sigma(b)\eqqcolon B$, where we may have $A=B$. Together with the fact that $A$ and $B$ are leaves in $S$, this implies that $\mu(u)$ is an ancestor of $A$ and $B$. Since $\lca_S(A,B)$ is the ``last'' vertex that is an ancestor of both $A$ and $B$, we have $\lca_S(A,B) \preceq_S \mu(u)$. \end{proof} The next result shows that a subset of the transfer edges can be inferred immediately from LDT graphs: \begin{theorem} If $(G,\sigma)$ is an LDT graph, then $G\subseteq \digamma(\ensuremath{\mathscr{S}})$ for all relaxed scenarios $\ensuremath{\mathscr{S}}$ that explain $(G,\sigma)$. \label{thm:infer-fitch} \end{theorem} \begin{proof} Let $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ be a relaxed scenario that explains $(G,\sigma)$, i.e., $G = \Gu(\ensuremath{\mathscr{S}})$. By definition, $V(G) = V(\digamma(\ensuremath{\mathscr{S}})) = L(T)$. 
Hence, it remains to show that $E(G) \subseteq E(\digamma(\ensuremath{\mathscr{S}}))$. To this end, consider $ab \in E(G)$ and assume, for contradiction, that $ab\notin E(\digamma(\ensuremath{\mathscr{S}}))$. Let $A \coloneqq \sigma(a)$ and $B\coloneqq \sigma(b)$. By Lemma~\ref{lem:independent-lca}, $\lca_S(A,B) \preceq_S \mu(\lca_T(a,b))$. But then, by Defs.\ \ref{def:time-map} and \ref{def:tc-map}, $\ensuremath{\tau_{S}}(\lca_S(A,B)) \leq \ensuremath{\tau_{T}}(\lca_T(a,b))$, implying $ab\notin E(G)$, a contradiction. \end{proof} Since $xy$ is an edge in $\digamma(\ensuremath{\mathscr{S}})$ only if the path connecting $x$ and $y$ in the tree $T$ of $\ensuremath{\mathscr{S}}$ contains a transfer edge, Thm.~\ref{thm:infer-fitch} immediately implies \begin{corollary} For every relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ without transfer edges, it holds that $E(\Gu(\ensuremath{\mathscr{S}})) = \emptyset$. \label{cor:noHGT} \end{corollary} Thm.~\ref{thm:infer-fitch} provides the formal justification for indirect phylogenetic approaches to HGT inference that are based on the work of \citet{Lawrence:92}, \citet{Clarke:02}, and \citet{Novichkov:04} by showing that $xy\in E(\Gu(\ensuremath{\mathscr{S}}))$ can be explained only by HGT, irrespective of how complex the true biological scenario might have been. However, it does not cover all HGT events. Fig.~\ref{fig:Fitch-not-RU} shows that there are relaxed scenarios $\ensuremath{\mathscr{S}}$ for which $\Gu(\ensuremath{\mathscr{S}}) \neq \digamma(\ensuremath{\mathscr{S}})$ even though $\digamma(\ensuremath{\mathscr{S}})$ is properly colored. Moreover, it is possible that an rs-Fitch graph $(G,\sigma)$ contains edges $xy\in E(G)$ with $\sigma(x)=\sigma(y)$. In particular, therefore, an rs-Fitch graph is not always an LDT graph. 
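The transfer edges of Def.~\ref{def:HGT-label}, and with them $\digamma(\ensuremath{\mathscr{S}})$, can be read off a relaxed scenario by testing incomparability of the $\mu$-images in $S$. The following Python sketch illustrates this test; the encodings (edge lists and parent maps) and function names are our own, and we assume for simplicity that every $\mu$-image is represented by a vertex of $S$, identifying an image lying on an edge of $S$ with the edge's lower endpoint, which leaves the relevant comparabilities unchanged.

```python
def transfer_edges(T_edges, mu, S_parent):
    """Transfer edges of a relaxed scenario: an edge (u, v) of T is a
    transfer edge iff mu(u) and mu(v) are incomparable in S, i.e.,
    neither lies on the path from the other to the root of S."""
    def ancestors(v):
        anc = {v}                       # v together with all its S-ancestors
        while v in S_parent:
            v = S_parent[v]
            anc.add(v)
        return anc
    def comparable(a, b):
        return a in ancestors(b) or b in ancestors(a)
    return [(u, v) for u, v in T_edges if not comparable(mu[u], mu[v])]
```

For a species tree with root $\rho_S$ and leaves $A$, $B$, an edge of $T$ whose endpoints map to $A$ and $B$, respectively, is reported as a transfer edge, whereas edges whose endpoints map into a common lineage are not.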
It is natural, therefore, to ask whether for every properly colored Fitch graph there is a relaxed scenario $\ensuremath{\mathscr{S}}$ such that $\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. An affirmative answer is provided by \begin{theorem} The following statements are equivalent. \begin{enumerate} \item $(G,\sigma)$ is a properly colored complete multipartite graph. \item There is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $G=\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. \item $(G,\sigma)$ is complete multipartite and an LDT graph. \item $(G,\sigma)$ is properly colored and an rs-Fitch graph. \end{enumerate} In particular, for every properly colored complete multipartite graph $(G,\sigma)$ the triple set $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible. \label{thm:FitchRu-scenario} \end{theorem} \begin{proof} \par\noindent\emph{(1) implies (2)}. We assume that $(G,\sigma)$ is a properly colored complete multipartite graph and set $L\coloneqq V(G)$ and $E\coloneqq E(G)$. If $E=\emptyset$, then the relaxed scenario $\ensuremath{\mathscr{S}}$ constructed in the proof of Lemma~\ref{lem:Rempty} satisfies $G=\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$, i.e., the graphs are edgeless. Hence, we assume that $E\neq \emptyset$ and explicitly construct a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $G=\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. The graph $(G,\sigma)$ is properly colored and complete multipartite by assumption. Let $I_1,\dots, I_k$ denote the independent sets of $G$. Since $E\neq \emptyset$, we have $k>1$. Since all $x\in I_i$ are adjacent to all $y\in I_j$ for $i\neq j$, and $(G,\sigma)$ is properly colored, it must hold that $\sigma(I_i)\cap \sigma(I_j)=\emptyset$. 
For a fixed $i$, let $v_i^1,\dots, v_i^{|I_i|}$ denote the elements in $I_i$. We start with the construction of the species tree $S$. First, we add a planted root $0_S$ with child $\rho_S$. Vertex $\rho_S$ has children $w_1,\dots, w_k$ where each $w_j$ corresponds to one $I_j$. Note that $\sigma\colon L\to M$ may not be surjective, in which case we add one additional child $x$ to $\rho_S$ for each color $x\in M\setminus \sigma(L)$. If $|\sigma(I_j)| = 1$, then we identify the single color $x\in \sigma(I_j)$ with $w_j$. Otherwise, i.e., if $|\sigma(I_j)| > 1$, vertex $w_j$ has as children the set $\child_S(w_j)=\sigma(I_j)$, which are leaves of $S$. See Fig.~\ref{fig:Fitch-RU} for an illustrative example. Now we can choose the time map $\ensuremath{\tau_{S}}$ for $S$ such that $\ensuremath{\tau_{S}}(0_S)=3$, $\ensuremath{\tau_{S}}(\rho_S)=2$, $\ensuremath{\tau_{S}}(x)=0$ for all $x\in L(S)$ and $\ensuremath{\tau_{S}}(x)=1$ for all $x\in V^0(S)\setminus\{\rho_S\}$. We now construct $T$ as follows. The tree $T$ has planted root $0_T$ with child $\rho_T$. Vertex $\rho_T$ has $k$ children $u_1,\dots, u_k$ where each $u_j$ corresponds to one $I_j$. Vertex $u_j$ is a leaf if $|I_j|=1$, and, otherwise, has exactly $|I_j|$ children that are uniquely identified with the elements in $I_j$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/proof-fitch-Gu.pdf} \end{center} \caption{Construction of the relaxed scenario $\ensuremath{\mathscr{S}}$ in the proof of Thm.~\ref{thm:FitchRu-scenario}. 
} \label{fig:Fitch-RU} \end{figure} We now define the time map $\ensuremath{\tau_{T}}$ and the reconciliation map $\mu$ for $v\in V(T)$: \begin{equation*} \ensuremath{\tau_{T}}(v) \coloneqq \begin{cases} 3=\ensuremath{\tau_{S}}(0_S) &\text{if } v=0_T,\\ 0 &\text{if } v\in L(T),\\ 1.5 &\text{if } v = \rho_T, \textrm{ and}\\ 1.25 &\text{if } v=u_i\not\in L(T), 1\leq i\leq k, \end{cases} \qquad \mu(v) \coloneqq \begin{cases} 0_S &\text{if } v=0_T,\\ \sigma(v) &\text{if } v\in L(T),\\ (\rho_S,w_1) &\text{if } v = \rho_T, \textrm{ and}\\ (\rho_S,w_i) &\text{if } v=u_i\not\in L(T), 1\leq i\leq k. \end{cases} \end{equation*} With the help of Fig.~\ref{fig:Fitch-RU}, it is now easy to verify that (i) $\ensuremath{\tau_{T}}$ is a time map for $T$, and that (ii) the reconciliation map $\mu$ is time-consistent. In summary, the constructed $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ is a relaxed scenario. We continue by showing that $E=E(\Gu(\ensuremath{\mathscr{S}}))=E(\digamma(\ensuremath{\mathscr{S}}))$. To this end, let $a,b\in L$ be two vertices. Note that $ab\in E$ if and only if $a\in I_i$ and $b\in I_j$ for distinct $i,j\in [k]\coloneqq \{1,2,\ldots,k\}$. First assume that $ab\in E$ and thus, $a\in I_i$ and $b\in I_j$ for distinct $i,j\in [k]$. By construction, $a\preceq_{T}u_i\ne u_j\succeq_{T} b$ with $\lca_{T}(u_i,u_j)=\rho_{T}$. In particular, we have $\parent_T(u_i)=\parent_T(u_j)=\rho_{T}$ and the path from $a$ to $b$ contains the two edges $(\rho_{T},u_i)$ and $(\rho_{T},u_j)$. By construction, we have $\mu(\rho_T)=(\rho_{S},w_1)$, and for all $1\leq l\leq k$, $\mu(u_l)=\sigma(u_l)=w_l$ if $u_l$ is a leaf, and $\mu(u_l)=(\rho_S,w_l)$ otherwise. These two arguments imply that $\mu(\rho_T)$ and $\mu(u_l)$ are comparable if and only if $u_l=u_1$. Now, since $u_i\ne u_j$, they cannot both be equal to $u_1$ and thus, at least one of the edges $(\rho_{T},u_i)$ and $(\rho_{T},u_j)$ is a transfer edge. Hence, $ab\in E(\digamma(\ensuremath{\mathscr{S}}))$. By construction, $ab\in E$ implies $\lca_T(a,b)=\rho_T$. 
Hence, we have $\mu(\lca_T(a,b)) = \mu(\rho_T)=(\rho_S,w_1)\prec_S\rho_S = \lca_S(\sigma(a),\sigma(b))$, and thus $ab\in E(\Gu(\ensuremath{\mathscr{S}}))$. Now assume that $ab\notin E$, and thus, $a,b\in I_i$ for some $i\in[k]$. It clearly suffices to consider the case $a\ne b$, and thus, $a,b\in\child_T(u_i)$ and $u_i\notin L(T)$ holds by construction. In particular, the path between $a$ and $b$ only consists of the edges $(u_i,a)$ and $(u_i,b)$. Moreover, we have $\sigma(a),\sigma(b)\preceq_{S} w_i$ and $\mu(u_i)=(\rho_S,w_i)$. Hence, none of the edges $(u_i,a)$ and $(u_i,b)$ is a transfer edge, and $ab\notin E(\digamma(\ensuremath{\mathscr{S}}))$. We have $\mu(\lca_{T}(a,b))=(\rho_S,w_i)\succ_S w_i \succeq_{S} \lca_{S}(\sigma(a),\sigma(b))$, and thus $\ensuremath{\tau_{T}}(\lca_{T}(a,b))> \ensuremath{\tau_{S}}(\lca_{S}(\sigma(a),\sigma(b)))$. Hence, $ab\notin E(\Gu(\ensuremath{\mathscr{S}}))$. In summary, $ab\in E$ if and only if $ab\in E(\digamma(\ensuremath{\mathscr{S}}))$ if and only if $ab\in E(\Gu(\ensuremath{\mathscr{S}}))$, and consequently, $G=\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. \smallskip \par\noindent\emph{(2) implies (1).} Suppose that there is a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $G=\Gu(\ensuremath{\mathscr{S}}) = \digamma(\ensuremath{\mathscr{S}})$. Prop.~\ref{prop:properCol} implies that $(G,\sigma)=(\Gu(\ensuremath{\mathscr{S}}),\sigma)$ is properly colored. Moreover, $(G,\sigma)=(\digamma(\ensuremath{\mathscr{S}}),\sigma)$ is an rs-Fitch graph and thus, by Obs.~\ref{obs:Fitch}, $G$ is complete multipartite. Statements (1) and (2) together with Prop.~\ref{prop:fitch} imply (3). Conversely, if (3) is satisfied then Prop.~\ref{prop:properCol} implies that $(G,\sigma)$ is properly colored. This and the fact that $G$ is complete multipartite imply (1). Therefore, Statements (1), (2) and (3) are equivalent.
Furthermore, (4) implies (1) by Obs.~\ref{obs:Fitch}. Conversely, $(G,\sigma)$ in Statement (2) is an rs-Fitch graph and an LDT graph. Hence, it is properly colored by Prop.~\ref{prop:properCol}. Thus, (2) implies (4). Statement (3), in particular, implies that every properly colored complete multipartite $(G,\sigma)$ is an LDT graph and, thus, there is a relaxed scenario $\ensuremath{\mathscr{S}}$ such that $G=\Gu(\ensuremath{\mathscr{S}})$. Now, we can apply Lemma~\ref{lem:Ru-SpeciesTriple} to conclude that $\ensuremath{\mathfrak{S}}(G,\sigma)$ is compatible, which completes the proof. \end{proof} \begin{corollary} A colored graph $(G,\sigma)$ is an LDT graph and an rs-Fitch graph if and only if $(G,\sigma)$ is a properly colored complete multipartite graph (and thus, a properly colored Fitch graph for some 0/1-edge-labeled tree). \label{cor:scen-sat-fitch} \end{corollary} \begin{proof} If $(G,\sigma)$ is an rs-Fitch graph then, by Obs.~\ref{obs:Fitch}, $G$ is a complete multipartite graph. Moreover, since $(G,\sigma)$ is an LDT graph, $(G,\sigma)$ is properly colored (cf.\ Prop.~\ref{prop:properCol}). Conversely, if $(G,\sigma)$ is a properly colored complete multipartite graph then, by the equivalence of Statements~(1) and~(2) in Thm.~\ref{thm:FitchRu-scenario}, there is a relaxed scenario $\ensuremath{\mathscr{S}}$ with $G=\Gu(\ensuremath{\mathscr{S}})=\digamma(\ensuremath{\mathscr{S}})$, and thus, $(G,\sigma)$ is both an LDT graph and an rs-Fitch graph. \end{proof} \begin{corollary} Let $(G,\sigma)$ be a vertex-colored graph. If $\ensuremath{\mathfrak{T}}(G) = \emptyset$ and $\ensuremath{\mathfrak{S}}(G,\sigma)$ is incompatible, then $G$ is a complete multipartite graph (and thus, a Fitch graph for some 0/1-edge-labeled tree), but $\sigma$ is not a proper vertex coloring of $G$. \label{cor:Fitch-compatible} \end{corollary} \begin{proof} By definition, if $\ensuremath{\mathfrak{T}}(G)=\emptyset$, then $G$ cannot contain an induced $K_2+K_1$. By Prop.~\ref{prop:fitch}, $G$ is a Fitch graph.
Contraposition of the last statement in Thm.~\ref{thm:FitchRu-scenario} and $G$ being a Fitch graph for some $(T,\lambda)$ implies that $\sigma$ is not a proper vertex coloring of $G$. \end{proof} As outlined in the main part of this paper, LDT graphs are sufficient to describe replacing HGT. They fail, however, to describe additive HGT in full detail. \subsection{rs-Fitch Graphs with General Colorings} In scenarios with additive HGT, the rs-Fitch graph is no longer properly colored and no longer coincides with the LDT graph. Since not every vertex-colored complete multipartite graph $(G,\sigma)$ is an rs-Fitch graph (cf.\ Thm.~\ref{thm:char-rsFitch}), we ask whether an LDT graph $(G,\sigma)$ that is not itself already an rs-Fitch graph imposes constraints on the rs-Fitch graphs $(\digamma(\ensuremath{\mathscr{S}}),\sigma)$ that derive from relaxed scenarios $\ensuremath{\mathscr{S}}$ that explain $(G,\sigma)$. As a first step towards this goal, we aim to characterize rs-Fitch graphs, i.e., to understand the conditions imposed by the existence of an underlying scenario $\ensuremath{\mathscr{S}}$ on the compatibility of the collection of independent sets $\mathcal{I}$ of $G$ and the coloring $\sigma$. As we shall see, these conditions can be expressed in terms of an auxiliary graph that we introduce in a very general setting: \begin{definition} Let $L$ be a set, $\sigma\colon L\to M$ a map and $\mathcal{I}=\{I_1,\dots, I_k\}$ a set of subsets of $L$. Then the graph $\auxfitch(\sigma,\mathcal{I})$ has vertex set $M$ and edges $xy$ if and only if $x\ne y$ and $x,y\in \sigma(I')$ for some $I'\in\mathcal{I}$. We define an edge labeling $\ell: E(\auxfitch) \to 2^{\mathcal{I}}$ such that $\ell(e) \coloneqq \{I\in\mathcal{I}\mid \exists x,y\in I \text{ s.t.\ } \sigma(x)\sigma(y)=e\}$. \label{def:auxfitch} \end{definition} By construction, $\auxfitch(\sigma,\mathcal{I'})$ is a subgraph of $\auxfitch(\sigma,\mathcal{I})$ whenever $\mathcal{I'}\subseteq\mathcal{I}$.
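For concreteness, the auxiliary graph of Def.~\ref{def:auxfitch} together with its edge labeling $\ell$ can be computed directly from $\sigma$ and $\mathcal{I}$. The following Python sketch is our own illustration and not part of the formal development; colors are arbitrary hashable values, edges are represented as 2-element frozensets, and labels collect the indices of the inducing sets $I_j$:

```python
from itertools import combinations

def aux_graph(sigma, independent_sets):
    """Sketch of the auxiliary graph of Def. def:auxfitch.

    sigma: dict mapping each element of L to its color in M
    independent_sets: list of subsets I_1, ..., I_k of L

    Returns (vertices, edges, labels); for simplicity the vertex set is
    taken to be the image sigma(L) rather than a possibly larger M.
    labels[e] is the set of indices j such that I_j induces edge e.
    """
    vertices = set(sigma.values())
    edges = set()
    labels = {}
    for idx, I in enumerate(independent_sets):
        colors = {sigma[v] for v in I}
        # every pair of distinct colors occurring in I yields an edge
        for x, y in combinations(sorted(colors), 2):
            e = frozenset((x, y))
            edges.add(e)
            labels.setdefault(e, set()).add(idx)
    return vertices, edges, labels
```

Removing a set from $\mathcal{I}$ can only remove edges, which mirrors the subgraph relation $\auxfitch(\sigma,\mathcal{I'})\subseteq\auxfitch(\sigma,\mathcal{I})$ for $\mathcal{I'}\subseteq\mathcal{I}$ noted above.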
The labeling of an edge $e$ records the sets $I\in\mathcal{I}$ that imply the presence of the edge. \begin{theorem}\label{thm:char-rsFitch} A graph $(G,\sigma)$ is an rs-Fitch graph if and only if (i) it is complete multipartite with independent sets $\mathcal{I}=\{I_1,\dots, I_k\}$, and (ii) if $k>1$, there is an independent set $I'\in \mathcal{I}$ such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. \end{theorem} \begin{proof} Let $G=(L,E)$ be a graph with coloring $\sigma\colon L\to M$. Suppose first that $G$ satisfies~(i) and~(ii). To show that $(G,\sigma)$ is an rs-Fitch graph, we will construct a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $G = \digamma(\ensuremath{\mathscr{S}})$. If $k=1$, or equivalently $E=\emptyset$, then the relaxed scenario $\ensuremath{\mathscr{S}}$ constructed in the proof of Lemma~\ref{lem:Rempty} satisfies $G=\digamma(\ensuremath{\mathscr{S}})$, i.e., both graphs are edgeless. Now assume that $k>1$ and thus, $E\neq \emptyset$. Hence, we can choose an independent set $I'\in \mathcal{I}$ such that $\auxfitch'\coloneqq \auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. Note that $\mathcal{I}\setminus\{I'\}$ is non-empty since $k>1$. Moreover, since $\auxfitch'$ is a disconnected graph on the color set $M$, there is a connected component $C$ of $\auxfitch'$ such that $(M\setminus C) \cap \sigma(I')\ne\emptyset$. Hence $M_1\coloneqq M\setminus C$ and $M_2\coloneqq C$ form a bipartition of $M$ such that neither $M_1$ nor $M_2$ is empty. We continue by showing that every $I\in \mathcal{I}\setminus \{I'\}$ satisfies either $\sigma(I)\subseteq M_1$ or $\sigma(I)\subseteq M_2$. To see this, assume, for contradiction, that there are colors $A\in \sigma(I)\cap M_1$ and $B\in \sigma(I)\cap M_2$ for some $I\in \mathcal{I}\setminus \{I'\}$. Thus, $B\in C$ and, by definition, $AB\in E(\auxfitch')$.
Therefore, $A$ and $B$ must lie in the connected component $C$; a contradiction. Hence, we can partition $\mathcal{I}\setminus \{I'\}$ into $\mathcal{I}_1\coloneqq \{I\in\mathcal{I}\setminus \{I'\} \mid \sigma(I)\subseteq M_1\}$ and $\mathcal{I}_2\coloneqq \{I\in\mathcal{I}\setminus \{I'\} \mid \sigma(I)\subseteq M_2\}$. Note that one of the sets $\mathcal{I}_1$ and $\mathcal{I}_2$, but not both of them, may be empty. This may be the case, for instance, if $\sigma$ is not surjective. Now, we construct a relaxed scenario $\ensuremath{\mathscr{S}} = (T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with coloring $\sigma$ such that $G=\digamma(\ensuremath{\mathscr{S}})$. We first define the species tree $S$ as the planted tree where $\rho_{S}$ (i.e.\ the single child of $0_S$) has two children $w_1$ and $w_2$. If $|M_1|=1$, we identify $w_1$ with the single element in $M_1$, and otherwise, we set $\child_S(w_1)=L(S(w_1))\coloneqq M_1$. We proceed analogously for $w_2$ and $M_2$. Thus, $S$ is phylogenetic by construction. We choose the time map $\ensuremath{\tau_{S}}$ by putting $\ensuremath{\tau_{S}}(0_S)=2$, $\ensuremath{\tau_{S}}(\rho_S)=1$, $\ensuremath{\tau_{S}}(w_1)=\ensuremath{\tau_{S}}(w_2)=0.5$ and $\ensuremath{\tau_{S}}(x)=0$ for all $x\in L(S)$. This completes the construction of $S$ and $\ensuremath{\tau_{S}}$. We proceed with the construction of the gene tree $T$, its time map $\ensuremath{\tau_{T}}$ and the reconciliation map $\mu$. This tree $T$ has leaf set $L$, planted root $0_T$, and root $\rho_T$. We set $\mu(0_T)=0_S$ and $\ensuremath{\tau_{T}}(0_T)=\ensuremath{\tau_{S}}(0_S)=2$, and moreover $\mu(x)=\sigma(x)$ and $\ensuremath{\tau_{T}}(x)=0$ for all $x\in L$. For each $I_j\in \mathcal{I}\setminus\{I'\}$, we add a vertex $u_j$. We will later specify how these vertices are connected (via paths) to $\rho_T$. If $|I_j|=1$, $u_j$ becomes a leaf of $T$ that is identified with the unique element in $I_j$.
Otherwise, we add exactly $|I_j|$ children to $u_j$, each of which is identified with one of the elements in $I_j$. If $u_j$ is a leaf, we have already defined $\mu(u_j)=\sigma(u_j)$ and $\ensuremath{\tau_{T}}(u_j)=0$. Otherwise, we set $\ensuremath{\tau_{T}}(u_j)=0.6$ and $\mu(u_j)=(\rho_S,w_1)$ if $I_j\in\mathcal{I}_1$ and $\mu(u_j)=(\rho_S,w_2)$ if $I_j\in\mathcal{I}_2$. Recall that $M_1\cap \sigma(I')\ne\emptyset$. However, both $M_2\cap \sigma(I')\ne\emptyset$ and $M_2\cap \sigma(I')=\emptyset$ are possible. The latter case appears e.g.\ whenever $\auxfitch(\sigma,\mathcal{I})$ was already disconnected. To connect the vertices $u_j$ to $\rho_T$, we distinguish the three mutually exclusive cases: \begin{figure}[h] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/rs-fitch-charac.pdf} \end{center} \caption{Illustration of the relaxed scenario constructed in the \emph{if}-direction of the proof of Thm.~\ref{thm:char-rsFitch}. For Cases~(a) and~(c), only the situation in which a vertex $u'$ (resp.\ $u''$) is necessary is shown. Otherwise, the single element in $I'$, $I'_1$ or $I'_2$ would be a child of the root $\rho_T$. Moreover, the vertices $u_j$ are drawn under the assumption that $|I_j|>1$. Otherwise, they are identified with the single leaf in $I_j$.} \label{fig:rs-fitch-charac} \end{figure} \par\noindent\emph{Case~(a): $M_2\cap \sigma(I')=\emptyset$ and $\mathcal{I}_1\ne\emptyset$.}\\ We set $\mu(\rho_T)=(\rho_S,w_2)$ and $\ensuremath{\tau_{T}}(\rho_T)=0.9$. We attach all $u_j$ that correspond to sets $I_j\in \mathcal{I}_1$ as children of $\rho_T$. If $|I'|> 1$ or $\mathcal{I}_2\ne\emptyset$, we create a vertex $u'$ to which all elements in $I'$ and all $u_j$ such that $I_j\in \mathcal{I}_2$ are attached as children, attach $u'$ as a child of $\rho_T$, and set $\mu(u')=(\rho_S,w_1)$ and $\ensuremath{\tau_{T}}(u')=0.75$. Otherwise, we simply attach the single element $x'$ in $I'$ as a child of $\rho_T$.
Clearly, the tree $T$ constructed in this way is phylogenetic. Note that the edges $(\rho_T, u_j)$ with $I_j\in \mathcal{I}_1$ as well as the edges $(u',u_j)$ with $I_j\in \mathcal{I}_2$ are transfer edges. Together with $(\rho_T,u')$ or $(\rho_T,x')$, respectively, these are the only transfer edges. \par\noindent\emph{Case~(b): $M_2\cap \sigma(I')=\emptyset$ and $\mathcal{I}_1=\emptyset$.}\\ By the arguments above, the latter implies $\mathcal{I}_2\ne\emptyset$. Hence, we can set $\mu(\rho_{T})=(\rho_S,w_1)$ and $\ensuremath{\tau_{T}}(\rho_T)=0.9$ and attach all elements of $I'$ as well as the vertices $u_j$ corresponding to the independent sets $I_j\in\mathcal{I}_2=\mathcal{I}\setminus \{I'\}$ as children of $\rho_T$. Since $|I'|\ge 1$ and $|\mathcal{I}_2|\ge 1$, the tree $T$ obtained in this manner is again phylogenetic. Moreover, note that the transfer edges are exactly the edges $(\rho_T,u_j)$. \par\noindent\emph{Case~(c): $M_2\cap \sigma(I')\ne\emptyset$.}\\ In this case, the sets $I'_1\coloneqq\{x\in I'\mid \sigma(x)\in M_1\}$ and $I'_2\coloneqq\{x\in I'\mid \sigma(x)\in M_2\}$ must be non-empty. We set $\mu(\rho_T)=(0_S,\rho_S)$ and $\ensuremath{\tau_{T}}(\rho_T)=1.5$. If $|I'_1|> 1$ or $\mathcal{I}_2\ne\emptyset$, we create a vertex $u'$ to which all elements in $I'_1$ and all $u_j$ such that $I_j\in \mathcal{I}_2$ are attached as children, attach $u'$ as a child of $\rho_T$, and set $\mu(u')=(\rho_S,w_1)$ and $\ensuremath{\tau_{T}}(u')=0.75$. Otherwise, we simply attach the single element in $I'_1$ as a child of $\rho_T$. For the ``other side'', we proceed analogously: If $|I'_2|> 1$ or $\mathcal{I}_1\ne\emptyset$, we create a vertex $u''$ to which all elements in $I'_2$ and all $u_j$ such that $I_j\in \mathcal{I}_1$ are attached as children, attach $u''$ as a child of $\rho_T$, and set $\mu(u'')=(\rho_S,w_2)$ and $\ensuremath{\tau_{T}}(u'')=0.75$. Otherwise, we simply attach the single element in $I'_2$ as a child of $\rho_T$. The tree constructed in this way is again phylogenetic.
Moreover, the transfer edges are exactly the edges $(u',u_j)$ and $(u'',u_j)$. Using Fig.~\ref{fig:rs-fitch-charac}, one can easily verify that, in all three Cases~(a)--(c), the reconciliation map $\mu$ is time-consistent with $\ensuremath{\tau_{T}}$ and $\ensuremath{\tau_{S}}$. Thus, $\ensuremath{\mathscr{S}}$ is a relaxed scenario. Moreover, Fig.~\ref{fig:rs-fitch-charac} together with the fact that $\sigma(I)\subseteq M_1$ holds for all $I\in \mathcal{I}_1$, and $\sigma(I)\subseteq M_2$ holds for all $I\in \mathcal{I}_2$, shows that $G=\digamma(\ensuremath{\mathscr{S}})$ in all three cases. Hence, $(G,\sigma)$ is an rs-Fitch graph. For the \emph{only-if}-direction, assume that $(G=(L,E),\sigma)$ is an rs-Fitch graph. Hence, there exists a relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ such that $G = \digamma(\ensuremath{\mathscr{S}})$. By Obs.~\ref{obs:Fitch} and Prop.~\ref{prop:fitch}, $(G,\sigma)$ is a complete multipartite graph that is determined by its set of independent sets $\mathcal{I}=\{I_1,\dots,I_k\}$. Hence, Condition (i) is satisfied. Now assume, for contradiction, that Condition (ii) is violated. Thus, $k\ge 2$ and there is no independent set $I'\in \mathcal{I}$ such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. If $|M|=1$, then the species tree $S$ only consists of the planted root $0_S$ and the root $\rho_S$, which in this case is identified with the single element in $M$. Clearly, all vertices and edges are comparable in such a tree $S$, and hence, there are no transfer edges in $\ensuremath{\mathscr{S}}$, implying $E = \emptyset$ and thus $|\mathcal{I}| = 1$; a contradiction to $k\ge 2$. Thus we have $|M|\ge 2$ and the root $\rho_S$ of the species tree $S$ has at least two children. Since $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is connected for every $I'\in \mathcal{I}$, the graph $\auxfitch(\sigma,\mathcal{I})$ is also connected.
Since each color appears at most once as a leaf of $S$, $\sigma(L(S(v_1))) \cap \sigma(L(S(v_2)))=\emptyset$ holds for any two distinct children $v_1,v_2\in\child_S (\rho_S)$. These three assertions, together with the definition of the auxiliary graph $\auxfitch(\sigma,\mathcal{I})$, imply that there are two distinct colors $A, B\in M$ such that $AB$ is an edge in $\auxfitch(\sigma,\mathcal{I})$, $A\preceq_S v_1$ and $B\preceq_{S} v_2$ for distinct children $v_1,v_2\in\child_S (\rho_S)$. By definition of $\auxfitch(\sigma,\mathcal{I})$, there is an independent set $I'\in\mathcal{I}$ containing a vertex $a\in I'$ with $\sigma(a)=A$ and a vertex $b\in I'$ with $\sigma(b)=B$. Since $a$ and $b$ lie in the same independent set, we have $ab\notin E$. By Lemma~\ref{lem:independent-lca}, $\mu(\lca_T(a,b)) \succeq_S \lca_S(A,B)=\rho_S$. Since, by assumption, $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is also connected, we find two distinct colors $C$ and $D$ (not necessarily distinct from $A$ and $B$) such that $CD$ is an edge in $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$, $C\preceq_S v_3$ and $D\preceq_{S} v_4$ for distinct children $v_3,v_4\in\child_S (\rho_S)$ (but not necessarily distinct from $v_1$ and $v_2$), and in particular, an independent set $I''\in\mathcal{I}\setminus \{I'\}$ containing a vertex $c\in I''$ with $\sigma(c)=C$ and a vertex $d\in I''$ with $\sigma(d)=D$. By construction, $I'\ne I''$, and thus, all edges between $I'$ and $I''$ exist in $G$, in particular the edges $ac,ad,bc,bd$. Since $c,d\in I''$, we have $cd\notin E$ and thus, by Lemma~\ref{lem:independent-lca}, $\mu(\lca_T(c,d)) \succeq_S \lca_S(C,D)=\rho_S$. We now consider the unique path $P$ in $T$ that connects $\lca_T(a,b)$ and $\lca_T(c,d)$. Since $\mu$ is time-consistent and $\mu(\lca_T(a,b)), \mu(\lca_T(c,d)) \succeq_S \rho_S$, we conclude that, for every edge $uv$ along this path $P$, we have $\mu(u), \mu(v)\succeq_S \rho_S$ and thus $\mu(u), \mu(v)\in \{\rho_S, (0_S,\rho_S)\}$.
But then, $\mu(u)$ and $\mu(v)$ are comparable in $S$. Therefore, $P$ does not contain any transfer edge. Since $ab\notin E$, the path connecting $a$ and $\lca_{T}(a,b)$ does not contain any transfer edges. Likewise, $cd\notin E$ implies that the path connecting $c$ and $\lca_{T}(c,d)$ does not contain any transfer edges. Thus, the path connecting $a$ and $c$ also does not contain any transfer edge, which implies that $ac\notin E(\digamma(\ensuremath{\mathscr{S}}))=E$; a contradiction since $a$ and $c$ belong to two distinct independent sets. Hence, we conclude that for $k>1$ there exists an independent set $I'\in \mathcal{I}$ such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. \end{proof} \begin{corollary} rs-Fitch graphs can be recognized in polynomial time. \label{cor:auxfitch1} \end{corollary} \begin{proof} Every rs-Fitch graph $(G,\sigma)$ must be complete multipartite, which can be verified in polynomial time. In this case, the set of independent sets $\mathcal{I}=\{I_1,\dots, I_k\}$ of $G$ can also be determined and the graph $\auxfitch(\sigma,\mathcal{I})$ can be constructed in polynomial time. Finally, we need to find an independent set $I'\in \mathcal{I}$ such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. Clearly, checking whether $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected can be done in polynomial time and, since there are at most $|V(G)|$ independent sets in $\mathcal{I}$, finding an independent set $I'$ such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected (if one exists) can be done in polynomial time as well. \end{proof} \begin{corollary} Let $(G,\sigma)$ be a complete multipartite graph with coloring $\sigma\colon V(G) \to M$ and set of independent sets $\mathcal{I}$.
Then, $(G,\sigma)$ is an rs-Fitch graph if and only if $\auxfitch(\sigma,\mathcal{I})$ is disconnected or there is a cut $Q\subseteq E(\auxfitch(\sigma,\mathcal{I}))$ such that all edges $e\in Q$ have the same label $\ell(e)=\{I\}$ for some $I\in\mathcal{I}$. \label{cor:auxfitch2} \end{corollary} \begin{proof} If $\auxfitch(\sigma,\mathcal{I})$ is disconnected, then $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$ remains disconnected for all $I\in \mathcal{I}$ and, by Thm.~\ref{thm:char-rsFitch}, $(G,\sigma)$ is an rs-Fitch graph. If there is a cut $Q\subseteq E(\auxfitch(\sigma,\mathcal{I}))$ such that all edges $e\in Q$ have the same label $\ell(e)=\{I\}$ for some $I\in\mathcal{I}$, then, by definition, $E(\auxfitch(\sigma,\mathcal{I}\setminus \{I\}))\subseteq E'\coloneqq E(\auxfitch(\sigma,\mathcal{I}))\setminus Q$. Since $Q$ is a cut in $\auxfitch(\sigma,\mathcal{I})$, the resulting graph $\auxfitch'= (M,E')$ is disconnected. By the latter arguments, $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$ is a subgraph of $\auxfitch'$, and thus, disconnected as well. By Thm.~\ref{thm:char-rsFitch}, $(G,\sigma)$ is an rs-Fitch graph. Conversely, if $(G,\sigma)$ is an rs-Fitch graph, then Thm.~\ref{thm:char-rsFitch} implies that $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$ is disconnected for some $I\in \mathcal{I}$. If $\auxfitch(\sigma,\mathcal{I})$ was already disconnected, then there is nothing to show. Hence, assume that $\auxfitch(\sigma,\mathcal{I}) = (M,E)$ is connected and let $\auxfitch(\sigma,\mathcal{I}\setminus \{I\}) = (M,E')$. Moreover, let $F\subseteq E$ be the subset of edges $e\in E$ with $I\in \ell(e)$. Note that $F$ contains all edges of $E$ that may potentially be removed from $E$ to obtain $E'$. However, all edges $e=xy$ in $F$ with $|\ell(e)|>1$ must remain in $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$, since there is another independent set $I'\in \ell(e)\setminus \{I\}$ such that $x,y\in \sigma(I')$.
Hence, exactly those edges $e\in F$ with $|\ell(e)|=1$, i.e., with $\ell(e)=\{I\}$, are removed from $E$ to obtain $E'$. Since $(M,E)$ is connected and $(M,E')$ is disconnected, the set $E\setminus E'$ of removed edges contains a cut $Q\subseteq F\subseteq E$ of $\auxfitch(\sigma,\mathcal{I})$, and all edges $e\in Q$ have the same label $\ell(e)=\{I\}$ for some $I\in\mathcal{I}$. \end{proof} \begin{corollary} If $(G,\sigma)$ with coloring $\sigma\colon V(G) \to M$ is an rs-Fitch graph, then there are no two disjoint independent sets $I$ and $I'$ of $G$ with $\sigma(I)=\sigma(I')= M$. \label{cor:auxfitch3} \end{corollary} \begin{proof} Let $\mathcal{I}$ be the set of independent sets of $G$. If $|\mathcal{I}|=1$, there is nothing to show and thus, we assume that $|\mathcal{I}|>1$. Assume, for contradiction, that there are two distinct independent sets $I, I' \in \mathcal{I}$ such that $\sigma(I)=\sigma(I')= M$. For every $I''\in\mathcal{I}$, the set $\mathcal{I}\setminus \{I''\}$ clearly contains at least one of the two sets $I$ and $I'$, both of which contain all colors in $M$. Therefore, $\auxfitch(\sigma, \mathcal{I}\setminus \{I''\})$ is the complete graph by construction and, thus, connected for every $I''\in\mathcal{I}$. This together with Thm.~\ref{thm:char-rsFitch} implies that $(G,\sigma)$ is not an rs-Fitch graph; a contradiction. \end{proof} \begin{corollary}\label{cor:surj-rsF} Every complete multipartite graph $(G,\sigma)$ with a vertex coloring $\sigma\colon V(G) \to M$ that is not surjective is an rs-Fitch graph. \end{corollary} \begin{proof} If $\sigma\colon V(G) \to M$ is not surjective, then $\auxfitch(\sigma,\mathcal{I})$ is disconnected, where $\mathcal{I}$ denotes the set of independent sets of $G$. Hence, if $|\mathcal{I}|>1$, then $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$ remains disconnected for all $I\in \mathcal{I}$. By Thm.~\ref{thm:char-rsFitch}, $(G,\sigma)$ is an rs-Fitch graph. \end{proof} Cor.~\ref{cor:surj-rsF} may seem surprising since it implies that the property of being an rs-Fitch graph can depend on species (colors $M$) for which we have no genes $L$ in the data.
The reason is that an additional lineage in the species tree provides a place to ``park'' interior vertices of the gene tree from which HGT edges can emanate. Such vertices cannot always be accommodated within lineages that have surviving descendants, where they may force additional HGT edges. \begin{corollary} Every Fitch graph $(G,\sigma)$ that contains an independent set $I$ with a vertex $x\in I$ such that $\sigma(x)\notin\sigma(I')$ for all other independent sets $I'\neq I$ is an rs-Fitch graph. \label{cor:re-fitsh-isolatedcolor} \end{corollary} \begin{proof} Let $\mathcal{I}$ denote the set of independent sets of $G$. If there is an independent set $I\in \mathcal{I}$ that contains a vertex $x\in I$ with $\sigma(x)\notin \sigma(I')$ for all other independent sets $I'\neq I$, then the vertex $\sigma(x)$ in $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$ is an isolated vertex and thus, $\auxfitch(\sigma,\mathcal{I}\setminus \{I\})$ is disconnected. By Thm.~\ref{thm:char-rsFitch}, $(G,\sigma)$ is an rs-Fitch graph. \end{proof} \par\noindent As for LDT graphs, the property of being an rs-Fitch graph is hereditary. \begin{corollary} If $(G=(L,E),\sigma)$ is an rs-Fitch graph, then the colored vertex-induced subgraph $(G[W],\sigma_{|W})$ is an rs-Fitch graph for all non-empty subsets $W\subseteq L$. \label{cor:rsFitch-hereditary} \end{corollary} \begin{proof} It suffices to show the statement for $W = L\setminus\{x\}$ for an arbitrary vertex $x\in L$. If $G=(L,E)$ is edgeless, then $G[W]$ is edgeless and thus, by Thm.~\ref{thm:char-rsFitch}, an rs-Fitch graph. Thus, assume that $E\neq \emptyset$, in which case the set $\mathcal{I}$ of independent sets of $G$ satisfies $|\mathcal{I}|>1$. Since $G$ does not contain an induced $K_2+K_1$, it is easy to see that $G[W]$ cannot contain an induced $K_2+K_1$ and thus, $G[W]$ is a complete multipartite graph. Hence, Thm.~\ref{thm:char-rsFitch}(i) is satisfied.
Moreover, if for the set $\mathcal{I}'$ of independent sets of $G[W]$ it holds that $|\mathcal{I}'|=1$, then Thm.~\ref{thm:char-rsFitch} already shows that $(G[W],\sigma_{|W})$ is an rs-Fitch graph. Thus, assume that $|\mathcal{I}'|>1$. Now compare the labeling $\ell$ of the edges in $\auxfitch = \auxfitch(\sigma, \mathcal{I})$ and the labeling $\ell'$ of the edges in $\auxfitch' = \auxfitch(\sigma_{|W}, \mathcal{I}')$. Note that $\auxfitch$ and $\auxfitch'$ still have the same vertex set $M$. Let $I\in\mathcal{I}$ with $x\in I$. For all vertices $y\in I$ with $\sigma(x)\neq \sigma(y)$, we have an edge $e =\sigma(x)\sigma(y)$ in $\auxfitch$ and $I\in \ell(e)$. Consequently, for all edges $e$ of $\auxfitch$ that are present in $\auxfitch'$ we have $\ell'(e)\subseteq \ell(e)$. In particular, $\auxfitch'$ cannot have edges that are not present in $\auxfitch$, since the only change is that one independent set has lost the vertex $x$, which can only remove edges. Therefore, $\auxfitch'$ is a subgraph of $\auxfitch$. By Thm.~\ref{thm:char-rsFitch}, there is an independent set $I'\in \mathcal{I}$, not necessarily distinct from $I$, such that $\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. If $I' = \{x\}$, then $\mathcal{I}' = \mathcal{I}\setminus \{I'\}$ and thus $\auxfitch' =\auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$ is disconnected. Otherwise, $\auxfitch'\subseteq \auxfitch$ and similar arguments as above show that $\auxfitch(\sigma,\mathcal{I'}\setminus\{I'\}) \subseteq \auxfitch(\sigma,\mathcal{I}\setminus\{I'\})$. Therefore, in both of the latter cases, $\auxfitch(\sigma,\mathcal{I'}\setminus\{I'\})$ is disconnected and Thm.~\ref{thm:char-rsFitch} implies that $(G[W],\sigma_{|W})$ is an rs-Fitch graph. \end{proof} As outlined in the main part of this paper, Cor.~\ref{cor:rsFitch-hereditary} is usually not satisfied if we restrict the codomain of $\sigma$ to the observable part of colors, even if $\sigma$ is surjective.
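The recognition procedure sketched in the proof of Cor.~\ref{cor:auxfitch1} can be made explicit as follows. This is our own illustrative sketch, not an implementation from the paper; for simplicity we identify $M$ with $\sigma(V(G))$, i.e., we assume that $\sigma$ is surjective (by Cor.~\ref{cor:surj-rsF}, every complete multipartite graph with a non-surjective coloring is an rs-Fitch graph anyway):

```python
from itertools import combinations

def is_rs_fitch(vertices, edges, sigma):
    """Decide whether (G, sigma) is an rs-Fitch graph, along the lines
    of Cor. cor:auxfitch1; `edges` is a set of 2-element frozensets and
    `sigma` maps vertices to colors. Assumes a non-empty vertex set."""
    adj = {v: set() for v in vertices}
    for e in edges:
        x, y = tuple(e)
        adj[x].add(y)
        adj[y].add(x)
    # In a complete multipartite graph, two vertices belong to the same
    # independent set iff they have identical neighborhoods.
    classes = {}
    for v in vertices:
        classes.setdefault(frozenset(adj[v]), set()).add(v)
    ind_sets = list(classes.values())
    # Verify condition (i): independence inside, completeness between.
    for I in ind_sets:
        if any(frozenset((u, v)) in edges for u, v in combinations(I, 2)):
            return False
    for I, J in combinations(ind_sets, 2):
        if not all(frozenset((u, v)) in edges for u in I for v in J):
            return False
    if len(ind_sets) <= 1:
        return True
    # Condition (ii) of Thm. char-rsFitch: removing some independent
    # set must disconnect the auxiliary graph on the color set M.
    M = set(sigma.values())
    for leave_out in range(len(ind_sets)):
        aux = {c: set() for c in M}
        for j, I in enumerate(ind_sets):
            if j == leave_out:
                continue
            for x, y in combinations(sorted({sigma[v] for v in I}), 2):
                aux[x].add(y)
                aux[y].add(x)
        # connectivity check on M via depth-first search
        start = next(iter(M))
        seen, stack = {start}, [start]
        while stack:
            c = stack.pop()
            for d in aux[c]:
                if d not in seen:
                    seen.add(d)
                    stack.append(d)
        if len(seen) < len(M):
            return True
    return False
```

The running time is polynomial, in line with Cor.~\ref{cor:auxfitch1}: at most $|V(G)|$ candidate sets $I'$ are tried, and each connectivity test is linear in the size of the auxiliary graph.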
\subsection{Least Resolved Trees for Fitch graphs} \label{ssec:LRTFitch} It is important to note that the characterization of rs-Fitch graphs in Thm.~\ref{thm:char-rsFitch} does not provide us with a characterization of rs-Fitch graphs that share a common relaxed scenario with a given LDT graph. As a potential avenue to address this problem, we investigate the structure of least-resolved trees for Fitch graphs as a possible source of additional constraints. \emph{All trees considered in this subsection \ref{ssec:LRTFitch} are rooted and phylogenetic but not planted unless stated otherwise.} This is no loss of generality, since we are interested in Fitch-least-resolved trees, which are never planted because the edge incident with the planted root can be contracted without affecting the paths between the leaves. \begin{definition}\label{def:FLRT} The edge-labeled tree $(T,\lambda)$ is \emph{Fitch-least-resolved} w.r.t.\ $\digamma(T,\lambda)$, if for all trees $T'\neq T$ that are displayed by $T$ and every labeling $\lambda'$ of $T'$ it holds that $\digamma(T,\lambda)\neq \digamma(T',\lambda')$. \end{definition} \begin{definition} \label{def:contract} Let $(T,\lambda)$ be an edge-labeled tree and let $e=(x,y)\in E(T)$ be an inner edge. The tree $(T_{/e}, \lambda_{/e})$ with $L(T_{/e})=L(T)$ is obtained by contraction of the edge $e$ in $T$ and by keeping the edge labels of all non-contracted edges. \end{definition} Note that if $e$ is an inner edge of a phylogenetic tree $T$, then the tree $T_{/e}$ is again phylogenetic. \begin{definition}\label{def:rel-label} An edge $e$ in $(T,\lambda)$ is \emph{relevantly-labeled in $(T,\lambda)$} if, for the tree $(T,\lambda')$ with $\lambda'(f)=\lambda(f)$ for all $f\in E(T)\setminus\{e\}$ and $\lambda'(e)\neq \lambda(e)$, it holds that $\digamma(T,\lambda)\neq \digamma(T,\lambda')$.
\end{definition} \begin{lemma} An outer 0-edge $e=(v,x)$ in $(T,\lambda)$ is \emph{relevantly-labeled in $(T,\lambda)$} if and only if $zx\notin E(\digamma(T,\lambda))$ for some $z\in L(T)\setminus \{x\}$. \label{lem:rel-label} \end{lemma} \begin{proof} Assume that $e=(v,x)$ is a relevantly-labeled outer 0-edge. Hence, for $(T,\lambda')$ with $\lambda'(f)=\lambda(f)$ for all $f\in E(T)\setminus\{e\}$ and $\lambda'(e)=1$, it holds that $\digamma(T,\lambda)\neq \digamma(T,\lambda')$. Since we only changed the label of the outer edge $(v,x)$, it still holds that $yy'\in E(\digamma(T,\lambda'))$ if and only if $yy'\in E(\digamma(T,\lambda))$ for all distinct $y,y'\in L(T)\setminus \{x\}$. Moreover, since $\lambda'(e)=1$ and $e=(v,x)$ is an outer edge, we have $xz\in E(\digamma(T,\lambda'))$ for all $z\in L(T)\setminus \{x\}$. Thus, $\digamma(T,\lambda)\neq \digamma(T,\lambda')$ implies that $xz\notin E(\digamma(T,\lambda))$ for at least one $z\in L(T)\setminus \{x\}$. Now, suppose that $zx\notin E(\digamma(T,\lambda))$ for some $z\in L(T)\setminus \{x\}$. Clearly, this implies that the outer edges $e=(v,x)$ and $f=(w,z)$ must be 0-edges and changing one of them to a 1-edge would imply that $xz$ becomes an edge in the Fitch graph. Hence, $e$ is relevantly-labeled in $(T,\lambda)$. \end{proof} \begin{lemma} For every tree $(T,\lambda)$ and every inner 0-edge $e$ of $T$, it holds $\digamma(T,\lambda)=\digamma(T_{/e},\lambda_{/e})$. \label{lem:contract-0-edge} \end{lemma} \begin{proof} Suppose that $(T,\lambda)$ contains an inner 0-edge $e=(u,v)$. The contraction of this edge does not change the number of 1-edges along the paths connecting any two leaves. It affects the least common ancestor of $x$ and $y$, if $\lca_T(x, y) = u$ or $\lca_T (x, y) = v$. In either case, however, the number of 1-edges between $\lca_T (x, y)$ and the leaves $x$ and $y$ remains unchanged. Hence, we have $\digamma(T,\lambda) = \digamma(T_{/e},\lambda_{/e})$. 
\end{proof} \begin{lemma} If $(T,\lambda)$ is a Fitch-least-resolved tree w.r.t.\ $\digamma(T,\lambda)$, then it contains neither inner 0-edges nor inner 1-edges that are not relevantly-labeled. \label{lem:innerEdgesinLRT} \end{lemma} \begin{proof} Suppose first, by contraposition, that $(T,\lambda)$ contains an inner 0-edge $e=(u,v)$. By Lemma~\ref{lem:contract-0-edge}, $\digamma(T,\lambda) = \digamma(T_{/e},\lambda_{/e})$, and thus, $(T,\lambda)$ is not Fitch-least-resolved. Assume now, by contraposition, that $(T,\lambda)$ contains an inner 1-edge $e$ that is not relevantly-labeled. Hence, we can put $\lambda'(e)=0$ and $\lambda'(f)=\lambda(f)$ for all $f\in E(T)\setminus \{e\}$ and obtain $\digamma(T,\lambda) = \digamma(T,\lambda')$. Since $(T,\lambda')$ contains an inner 0-edge, it cannot be Fitch-least-resolved. Therefore, and by definition, $(T,\lambda)$ cannot be Fitch-least-resolved either. \end{proof} The converse of Lemma~\ref{lem:innerEdgesinLRT} is, however, not always satisfied. To see this, consider the Fitch graph $G \simeq K_3$ with vertices $x,y$ and $z$. Now, consider the tree $(T,\lambda)$ where $T$ is the triple $xy|z$, the two outer edges incident to $y$ and $z$ are 0-edges while the remaining two edges in $T$ are 1-edges. It is easy to verify that $G=\digamma(T,\lambda)$. In particular, the inner edge $e$ is relevantly-labeled, since if $\lambda'(e) = 0$ we would have $yz\notin E(\digamma(T,\lambda'))$. However, $(T,\lambda)$ is not Fitch-least-resolved w.r.t.\ $G$, since the star tree $T'$ on the three leaves $x,y,z$ is displayed by $T$, and the labeling $\lambda'$ with $\lambda'(e)=1$ for all $e\in E(T')$ provides a tree $(T',\lambda')$ with $G=\digamma(T',\lambda')$. \begin{lemma} A tree $(T,\lambda)$ is a Fitch-least-resolved tree w.r.t.\ $\digamma(T,\lambda)$ if and only if $\digamma(T,\lambda) \neq \digamma(T_{/e},\lambda')$ holds for all labelings $\lambda'$ of $T_{/e}$ and all inner edges $e$ in $T$.
\label{lem:no-cont} \end{lemma} \begin{proof} Let $(T,\lambda)$ be an edge-labeled tree. Suppose first that $(T,\lambda)$ is Fitch-least-resolved w.r.t.\ $\digamma(T,\lambda)$. For every inner edge $e$ in $T$, the tree $T_{/e}\ne T$ is displayed by $T$. By definition of Fitch-least-resolved trees, we have $\digamma(T,\lambda)\neq \digamma(T_{/e},\lambda')$ for every labeling $\lambda'$ of $T_{/e}$. For the converse, assume, for contraposition, that $(T,\lambda)$ is not Fitch-least-resolved w.r.t.\ $\digamma(T,\lambda)$. Hence, there is a tree $(T',\lambda')$ such that $T'\ne T$ is displayed by $T$ and $\digamma(T,\lambda) = \digamma(T',\lambda')$. Clearly, $T$ and $T'$ must have the same leaf set. Therefore and since $T'<T$, the tree $T'$ can be obtained from $T$ by a sequence of contractions of inner edges $e_1,\dots,e_{\ell}$ (in this order) where $\ell\ge 1$. If $\ell=1$, then we have $T'=T_{/e_1}$ and, by assumption, $\digamma(T,\lambda) = \digamma(T_{/e_1},\lambda')$. Thus, we are done. Now assume $\ell\ge 2$. We consider the tree $(T_{/e_1},\lambda'')$ where $\lambda''(f)=\lambda'(f)$ if $f \in E(T')$ and $\lambda''(f)=0$ otherwise. Hence, $(T',\lambda')$ can be obtained from $(T_{/e_1},\lambda'')$ by stepwise contraction of the 0-edges $e_2,\dots,e_{\ell}$, and by keeping the labeling of $\lambda''$ for the remaining edges in each step. Hence, we can repeatedly apply Lemma~\ref{lem:contract-0-edge} to conclude that $\digamma(T_{/e_1},\lambda'')=\digamma(T',\lambda')$. Together with $\digamma(T,\lambda) = \digamma(T',\lambda')$, we obtain $\digamma(T,\lambda) = \digamma(T_{/e_1},\lambda'')$, which completes the proof. \end{proof} As a consequence of Lemma~\ref{lem:no-cont}, in order to show that $(T,\lambda)$ is not a Fitch-least-resolved tree w.r.t.\ $\digamma(T,\lambda)$, it suffices to find a single inner edge $e\in E(T)$ and a labeling $\lambda'$ for $T_{/e}$ such that $\digamma(T,\lambda) = \digamma(T_{/e},\lambda')$.
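The statements above are easy to check computationally on small examples. The following Python sketch is our own illustration (the encoding of a rooted tree by a child-to-parent map and the function names are assumptions of this sketch, not the paper's notation); it computes $\digamma(T,\lambda)$ and contracts an inner edge, so that the invariance asserted in Lemma~\ref{lem:contract-0-edge} can be verified directly:

```python
# Hypothetical sketch: Fitch graph of a 0/1-edge-labeled rooted tree, and
# contraction of an inner edge.  Trees are given as a child -> parent map;
# labels map directed edges (parent, child) to 0 or 1.
from itertools import combinations

def fitch_graph(parent, label, leaves):
    """xy is an edge iff the path from x to y contains at least one 1-edge."""
    def path_to_root(v):
        path = []
        while v in parent:
            path.append((parent[v], v))
            v = parent[v]
        return path
    edges = set()
    for x, y in combinations(leaves, 2):
        px, py = path_to_root(x), path_to_root(y)
        # the edges on the path x..y are exactly those on one root path only
        for e in set(px) ^ set(py):
            if label[e] == 1:
                edges.add(frozenset((x, y)))
                break
    return edges

def contract(parent, label, e):
    """Contract the inner edge e=(u,v): v's children are re-attached to u."""
    u, v = e
    parent2 = {c: (u if p == v else p) for c, p in parent.items() if c != v}
    label2 = {(u if p == v else p, c): l
              for (p, c), l in label.items() if (p, c) != e}
    return parent2, label2
```

For instance, for the triple $xy|z$ with a 0-labeled inner edge, contracting that inner edge leaves the Fitch graph unchanged, in line with Lemma~\ref{lem:contract-0-edge}.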
The next result characterizes Fitch-least-resolved trees and is very similar to the results for ``directed'' Fitch graphs of 0/1-edge-labeled trees (cf.\ Lemma~11(1,3) in \cite{Geiss:18a}). However, we note that we defined Fitch-least-resolved in terms of all possible labelings $\lambda'$ for trees $T'$ displayed by $T$, whereas \citet{Geiss:18a} call $(T,\lambda)$ least-resolved whenever $(T_{/e},\lambda_{/e})$ results in a (directed) Fitch graph that differs from the one provided by $(T,\lambda)$ for every $e\in E(T)$. \begin{theorem} Let $G$ be a Fitch graph, and $(T,\lambda)$ be a tree such that $G=\digamma(T,\lambda)$. If all independent sets of $G$ are of size one (except possibly for one independent set), then $(T,\lambda)$ is Fitch-least-resolved for $G$ if and only if it is a star tree.\\ If $G$ has at least two independent sets of size at least two, then $(T,\lambda)$ is Fitch-least-resolved for $G$ if and only if \begin{itemize} \item[(a)] every inner edge of $(T,\lambda)$ is a 1-edge, and \item[(b)] for every inner vertex $v\in V^0(T)$ there are (at least) two relevantly-labeled outer 0-edges $(v,x), (v,y)$ in $(T,\lambda)$. \end{itemize} In particular, if distinct $x, y\in L(T)$ are in the same independent set of $G$, then they have the same parent in $T$ and $(\parent(x), x)$, $(\parent(x), y)$ are relevantly-labeled outer 0-edges. \label{thm:LRT-rsFitch} \end{theorem} \begin{proof} Suppose that every independent set of $G$ is of size one (except possibly for one). Let $(T,\lambda)$ be the star tree where $\lambda((\rho_T,v)) =1$ if and only if $v$ is the single element in an independent set of size one. It is now a simple exercise to verify that $G=\digamma(T,\lambda)$. Since $(T,\lambda)$ is a star tree, it is clearly Fitch-least-resolved. The converse follows immediately from this construction together with the fact that the star tree is displayed by all trees with leaf set $V(G)$.
In the following we assume that $G$ contains at least two independent sets of size at least two. First suppose that $(T,\lambda)$ is Fitch-least-resolved w.r.t.\ $\digamma(T,\lambda)$. By Lemma~\ref{lem:innerEdgesinLRT}, Condition~(a) is satisfied. We continue by showing that Condition~(b) is satisfied. In particular, we show first that every inner vertex $v\in V^0(T)$ is incident to at least one relevantly-labeled outer 0-edge. To this end, assume, for contradiction, that $(T,\lambda)$ contains an inner vertex $v\in V^0(T)$ for which this property is not satisfied. That is, either (i) $v$ is incident to 1-edges only (incl.\ $\lambda((\parent_T(v),v))=1$ in case $v\neq \rho_T$ by Condition~(a)) or (ii) $v$ is incident to at least one outer 0-edge but none of its outer 0-edges is relevantly-labeled. In Case (i), we put $\lambda'=\lambda$. In Case (ii), we obtain a new labeling $\lambda'$ by changing the label of every outer 0-edge $(v,x)$ with $x\in \child_T(v) \cap L(T)$ to ``1'' while keeping the labels of all other edges. This does not affect the Fitch graph, since every such 0-edge is not relevantly-labeled, and thus, $zx\in E(\digamma(T,\lambda))$ for all $z\in L(T)\setminus \{x\}$ by Lemma~\ref{lem:rel-label}. Hence, in both Cases (i) and (ii), for the labeling $\lambda'$ \emph{all} outer edges $(v,x)$ with $x\in \child(v)\cap L(T)$ are labeled as 1-edges, $v$ is incident to 1-edges only (by Condition~(a)) and $\digamma(T,\lambda) = \digamma(T,\lambda')$. We thus have $xy\in E(\digamma(T,\lambda')) =E(\digamma(T,\lambda))$ for all $x\in L(T(v))$ and $y\in L(T)\setminus L(T(v))$. Now, if $v\neq \rho_T$ let $e=(u\coloneqq\parent_T(v),v)$. Otherwise, if $v=\rho_T$ then let $e=(v,u)$ for some inner vertex $u\in \child_T(v)$. Note that such an inner edge $(\rho_T,u)$ exists, since $G$ contains at least two independent sets of size at least two and $T$ is therefore not a star tree.
Now consider the tree $(T_{/e},\lambda'_{/e})$, and denote by $w$ the vertex obtained by contraction of the inner edge $e$. By construction, every path in $T_{/e}$ connecting any $x\in L(T(v))$ and $y\in L(T)\setminus L(T(v))$ must contain some 1-edge $(w,w')$ with $w'\in\child_{T_{/e}}(w)=\child_{T}(v)$, implying $xy\in E(\digamma(T_{/e},\lambda'_{/e}))$. Moreover, the edge contraction does not affect whether or not the path between any vertices within $L(T(v))$ or within $L(T)\setminus L(T(v))$ contains a 1-edge. Hence, $\digamma(T,\lambda) = \digamma(T,\lambda') = \digamma(T_{/e},\lambda'_{/e})$, and $(T,\lambda)$ is not Fitch-least-resolved; a contradiction. In summary, every inner vertex $v$ must be incident to at least one relevantly-labeled outer 0-edge $(v,x)$. By Lemma~\ref{lem:rel-label}, $(v,x)$ is a relevantly-labeled outer 0-edge if and only if there is a vertex $z\in L(T)\setminus \{x\}$ such that $zx\notin E(\digamma(T,\lambda))$. By Condition (a), all inner edges in $(T,\lambda)$ are 1-edges, and thus, there is only one place where the leaf $z$ can be located in $T$, namely as a leaf adjacent to $v$. In particular, the outer edge $(v,z)$ is a relevantly-labeled 0-edge, since $zx\notin E(\digamma(T,\lambda))$. Therefore, Condition (b) is satisfied for every inner vertex $v$ of $T$. The latter arguments also show that all distinct vertices $x,y\in L(T)$ that are contained in the same independent set must have the same parent. Clearly, $(\parent(x), x)$, $(\parent(x), y)$ must be outer 0-edges, since otherwise $xy\in E(\digamma(T,\lambda))$. Hence, the final statement of the theorem is satisfied. Now let $(T,\lambda)$ be such that Conditions~(a) and~(b) are satisfied. First observe that none of the outer edges can be contracted without changing $L(T)$. Now let $e = (u,v)$ be an inner edge. By Condition (a), $e$ is a 1-edge. Moreover, by Condition (b), the vertices $u$ and $v$ are both incident to at least two relevantly-labeled outer 0-edges.
Hence, there are outer 0-edges $(u,x),(u,x'),(v,y),(v,y')$ with pairwise distinct leaves $x,x',y,y'$ in $T$. Since $(u,v)$ is a 1-edge, we have $xy,xy',x'y,x'y' \in E(\digamma(T,\lambda))$. Moreover, we have $xx',yy'\notin E(\digamma(T,\lambda))$. Now consider the tree $(T_{/e}, \lambda')$ with an arbitrary labeling $\lambda'$ and denote by $w$ the vertex obtained by contraction of the inner edge $(u,v)$. In this tree, $x,x',y,y'$ all have the same parent $w$. If $\lambda'((w,x))=1$ \emph{or} $\lambda'((w,y))=1$, we have $xx'\in E(\digamma(T_{/e}, \lambda'))$ or $yy'\in E(\digamma(T_{/e}, \lambda'))$, respectively. If $\lambda'((w,x))=0$ \emph{and} $\lambda'((w,y))=0$, we have $xy\notin E(\digamma(T_{/e}, \lambda'))$. Hence, $\digamma(T_{/e}, \lambda')\ne \digamma(T, \lambda)$ in either case. Since the inner edge $e$ and $\lambda'$ were chosen arbitrarily, we can apply Lemma~\ref{lem:no-cont} to conclude that $(T,\lambda)$ is Fitch-least-resolved. \end{proof} As a consequence of Thm.~\ref{thm:LRT-rsFitch}, Fitch-least-resolved trees can be constructed in polynomial time. To be more precise, if a Fitch graph $G$ contains only independent sets of size one (except possibly for one), we can construct a star tree $T$ with edge labeling $\lambda$ as specified in the proof of Thm.~\ref{thm:LRT-rsFitch} to obtain the 0/1-edge labeled tree $(T,\lambda)$ that is Fitch-least-resolved w.r.t.\ $G$. This construction can be done in $O(|V(G)|)$ time. Now, assume that $G$ has at least two independent sets of size at least two. Let $\mathcal{I}$ be the set of independent sets of $G$ and $I_1,\dots,I_k\in \mathcal{I}$, $k\geq 2$ be all independent sets of size at least two. We now construct a tree $(T,\lambda)$ with root $\rho_T$ as follows: First we add $k$ vertices $v_1 = \rho_T$ and $v_2,\dots,v_{k}$, and add inner edges $e_i=(v_i,v_{i+1})$ with label $\lambda(e_i)=1$, $1\leq i\leq k-1$.
Each vertex $v_i$ gets as children the leaves in $I_i$, $1\leq i\leq k$ and all these additional outer edges obtain label ``0''. Finally, all elements in the remaining independent sets $\mathcal{I}\setminus \{I_1,\dots,I_k\}$ are of size one and are connected as leaves via outer 1-edges to the root $v_1=\rho_T$. It is an easy exercise to verify that $T$ is a phylogenetic tree and that $\digamma(T,\lambda)=G$. In particular, Thm.~\ref{thm:LRT-rsFitch} implies that $(T,\lambda)$ is Fitch-least-resolved w.r.t.\ $G$. This construction can be done in $O(|V(G)|)$ time. We summarize this discussion as \begin{proposition}\label{prop:FLRT-polytime} For a given Fitch graph $G$, a Fitch-least-resolved tree can be constructed in $O(|V(G)|)$ time. \end{proposition} Fitch-least-resolved trees, however, are only of very limited use for the construction of relaxed scenarios $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ from an underlying Fitch graph. First note that we would need to consider \emph{planted versions} of Fitch-least-resolved trees, i.e., Fitch-least-resolved trees to which a planted root is added, since otherwise, such trees cannot be part of an explaining scenario, which is defined in terms of planted trees. Even though $(G,\sigma)$ is an rs-Fitch graph, Example~\ref{ex:FLRT-noScen} shows that it is possible that there is no relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with HGT-labeling $\lambda_{\ensuremath{\mathscr{S}}}$ such that $(T,\lambda) = (T,\lambda_{\ensuremath{\mathscr{S}}})$ for the planted version $(T,\lambda)$ of \emph{any} of its Fitch-least-resolved trees. 
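The $O(|V(G)|)$-time construction just described can be rendered in a few lines. The sketch below is our own Python illustration (function and vertex names are hypothetical); it builds the caterpillar tree from the independent sets of $G$, and also covers the star-tree case of Thm.~\ref{thm:LRT-rsFitch}:

```python
# Hypothetical sketch of the construction from the text: given the independent
# sets of a Fitch graph, build an edge-labeled tree that is Fitch-least-resolved.
def flrt_from_independent_sets(indep_sets):
    """indep_sets: list of lists of leaves.  Returns (parent, label) with
    inner vertices 'v1', ..., 'vk' for the independent sets of size >= 2."""
    big = [I for I in indep_sets if len(I) >= 2]
    singletons = [I[0] for I in indep_sets if len(I) == 1]
    parent, label = {}, {}
    if len(big) < 2:                # star-tree case of Thm. LRT-rsFitch
        root = 'v1'
        for I in indep_sets:
            for x in I:
                parent[x] = root
                label[(root, x)] = 1 if len(I) == 1 else 0
        return parent, label
    inner = ['v%d' % (i + 1) for i in range(len(big))]
    for i in range(len(big) - 1):   # inner 1-edges v_i -> v_{i+1}
        parent[inner[i + 1]] = inner[i]
        label[(inner[i], inner[i + 1])] = 1
    for v, I in zip(inner, big):    # outer 0-edges to each big independent set
        for x in I:
            parent[x] = v
            label[(v, x)] = 0
    for x in singletons:            # singletons: outer 1-edges at the root
        parent[x] = inner[0]
        label[(inner[0], x)] = 1
    return parent, label
```

Both cases touch every vertex of $G$ a constant number of times, in line with Prop.~\ref{prop:FLRT-polytime}.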
\begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{./images-Rb/flrt-no-scen.pdf} \end{center} \caption{An rs-Fitch graph $(G,\sigma)$ and a possible relaxed scenario $\ensuremath{\mathscr{S}}=(T,S,\sigma,\mu,\ensuremath{\tau_{T}},\ensuremath{\tau_{S}})$ with $G = \digamma(T,\lambda_{\ensuremath{\mathscr{S}}})$. For the planted versions $(T_1,\lambda_1)$ and $(T_2,\lambda_2)$ of the Fitch-least-resolved trees of $(G,\sigma)$ there is no relaxed scenario $\ensuremath{\mathscr{S}}$ such that $(T_i,\lambda_i) = (T_i,\lambda_{\ensuremath{\mathscr{S}}})$, $i\in \{1,2\}$. Red edges indicate 1-labeled (i.e., transfer) edges. See Example~\ref{ex:FLRT-noScen} for further details.} \label{fig:FLRT-noScen} \end{figure} \begin{xmpl}\label{ex:FLRT-noScen} Consider the rs-Fitch graph $(G,\sigma)$ with $V(G)=\{a,b,b',c\}$, $E(G)=\{ab',ac,bb',bc\}$ and surjective coloring $\sigma$ such that $\sigma(a)=A$, $\sigma(b)=\sigma(b')=B$, $\sigma(c)=C$ and $A,B,C$ are pairwise distinct. The rs-Fitch graph $(G,\sigma)$, a Fitch tree $(T,\lambda)$ and relaxed scenario $\ensuremath{\mathscr{S}}$ with $(T,\lambda) = (T,\lambda_{\ensuremath{\mathscr{S}}})$ as well as the planted versions $(T_1,\lambda_1)$ and $(T_2,\lambda_2)$ of its two Fitch-least-resolved trees are shown in Fig.~\ref{fig:FLRT-noScen}. Fitch-least-resolved trees for $(G,\sigma)$ must contain an inner 1-edge, since $G$ has two independent sets of size two and by Thm.~\ref{thm:LRT-rsFitch}. Thus, it is easy to verify that there are no other Fitch-least-resolved trees for $(G,\sigma)$. By Lemma~\ref{lem:independent-lca}, we obtain $\lca_S(A,B) \preceq_S \mu(\lca_{T_i}(a,b))$ and $\lca_S(B,C) \preceq_S \mu(\lca_{T_i}(b',c))$, $i\in\{1,2\}$, for both (planted versions of the) Fitch-least-resolved trees. 
However, for all of the possible species trees on three leaves $A,B,C$, this implies that the images $\mu(\lca_{T_i}(a,b))$ and $\mu(\lca_{T_i}(b',c))$ are the single inner edge or the edge $(0_S,\rho_S)$ in $S$. Therefore, $\mu(\lca_{T_i}(a,b))$ and $\mu(\lca_{T_i}(b',c))$ are always comparable in $S$. Hence, for all possible relaxed scenarios $\ensuremath{\mathscr{S}}$, we have $\lambda_{\ensuremath{\mathscr{S}}}(e)=0$ for the single inner edge $e$, whereas $\lambda_i(e)=1$ in $T_i$, $i\in \{1,2\}$. This implies that there is no relaxed scenario $\ensuremath{\mathscr{S}}$ with $(T_i,\lambda_i) = (T_i,\lambda_{\ensuremath{\mathscr{S}}})$, $i\in \{1,2\}$. \end{xmpl} \section{Editing Problems} \label{app:edit} \subsection{Editing Colored Graphs to LDT Graphs and Fitch Graphs} We consider the following two edge modification problems for completion, deletion, and editing. \begin{problem}[\PROBLEM{LDT-Graph-Modification (LDT-M)}]\ \\ \begin{tabular}{ll} \emph{Input:} & A colored graph $(G =(V,E),\sigma)$ and an integer $k$.\\ \emph{Question:} & Is there a subset $F\subseteq E$ such that $|F|\leq k$ and $(G'=(V,E\star F),\sigma)$ \\ &is an LDT graph where $\star\in \{\setminus, \cup, \Delta\}$? \end{tabular} \end{problem} \begin{problem}[\PROBLEM{rs-Fitch Graph-Completion/Deletion/Editing (rsF-C/D/E)}]\ \\ \begin{tabular}{ll} \emph{Input:} & A colored graph $(G =(V,E),\sigma)$ and an integer $k$.\\ \emph{Question:} & Is there a subset $F\subseteq E$ such that $|F|\leq k$ and $(G'=(V,E\star F),\sigma)$ \\ &is an rs-Fitch graph where $\star\in \{\setminus, \cup, \Delta\}$?
\end{tabular} \end{problem} NP-completeness of \PROBLEM{LDT-M} can be shown by reduction from \begin{problem}[\PROBLEM{Maximum Rooted Triple Compatibility (MaxRTC)}]\ \\ \begin{tabular}{ll} \emph{Input:} & A set of (rooted) triples $\mathscr{R}$ and an integer $k$.\\ \emph{Question:} & Is there a compatible subset $\mathscr{R}^*\subseteq \mathscr{R}$ such that $|\mathscr{R}^*|\geq |\mathscr{R}|-k$? \end{tabular} \end{problem} \begin{theorem}{\cite[Thm.~1]{Jansson:01}} \PROBLEM{MaxRTC} is NP-complete. \end{theorem} \begin{theorem} \PROBLEM{LDT-M} is NP-complete. \label{thm:LDT-M-NP} \end{theorem} \begin{proof} Since LDT graphs can be recognized in polynomial time (cf.\ Cor.~\ref{cor:LDTpoly}), a given solution can be verified in polynomial time. Thus, \PROBLEM{LDT-M} is contained in NP. We now show NP-hardness by reduction from \PROBLEM{MaxRTC}. Let $(\mathscr{R},k)$ be an instance of this problem, i.e., $\mathscr{R}$ is a set of triples and $k$ is a non-negative integer. We construct a colored graph $(G_\mathscr{R}=(L,E),\sigma)$ as follows: For each triple $r_i = xy|z\in \mathscr{R}$, we add three vertices $x_i,y_i,z_i$, two edges $x_iz_i$ and $y_iz_i$, and put $\sigma(x_i) = x$, $\sigma(y_i) = y$ and $\sigma(z_i) = z$. Hence, $(G_\mathscr{R},\sigma)$ is properly colored and a disjoint union of paths $P_3$ on three vertices. In particular, $(G_\mathscr{R},\sigma)$ does not contain an induced $P_4$ and is therefore a properly colored cograph (cf.\ Prop.~\ref{prop:cograph}). By definition and construction, we have $\mathscr{R} = \ensuremath{\mathfrak{S}}(G_\mathscr{R},\sigma)$. First assume that \PROBLEM{MaxRTC} with input $(\mathscr{R}, k)$ has a yes-answer. In this case let $\mathscr{R}^*\subseteq \mathscr{R}$ be a compatible subset such that $|\mathscr{R}^*| \geq |\mathscr{R}| - k$.
For each of the triples $r_i= xy|z\in \mathscr{R}\setminus\mathscr{R}^*$, we add the edge $x_iy_i$ to $G_\mathscr{R}$ or remove the edge $x_iz_i$ from $G_\mathscr{R}$ for \PROBLEM{LDT-E/C} and \PROBLEM{LDT-D}, respectively, to obtain the graph $G^*$. In both cases, we eliminate the corresponding triple $xy|z$ from $\ensuremath{\mathfrak{S}}(G^*,\sigma)$. By construction, therefore, we observe that $\ensuremath{\mathfrak{S}}(G^*,\sigma) = \mathscr{R}^*$ is compatible. Moreover, since we have never added edges between distinct $P_3$s, all connected components of $G^*$ are of size at most three. Therefore, $G^*$ does not contain an induced $P_4$, and thus remains a cograph. By Thm.~\ref{thm:characterization}, the latter arguments imply that $(G^*,\sigma)$ is an LDT graph. Since $(G^*,\sigma)$ was obtained from $(G_\mathscr{R},\sigma)$ by using $|\mathscr{R}\setminus\mathscr{R}^*| \leq k$ edge modifications, we conclude that \PROBLEM{LDT-M} with input $(G_\mathscr{R},\sigma, k)$ has a yes-answer. For the converse, suppose that \PROBLEM{LDT-M} with input $(G_\mathscr{R},\sigma, k)$ has a yes-answer with a solution $(G^* = (L,E\star F),\sigma)$, i.e., $(G^*,\sigma)$ is an LDT graph and $|F|\le k$. By Thm.~\ref{thm:characterization}, $\ensuremath{\mathfrak{S}}(G^*,\sigma)$ is compatible. Let $\mathscr{R}^*$ be the subset of $\mathscr{R} = \ensuremath{\mathfrak{S}}(G_\mathscr{R},\sigma)$ containing all triples of $\mathscr{R}$ for which the corresponding induced $P_3$ in $G_\mathscr{R}$ remains unmodified and thus, is still an induced $P_3$ in $G^*$. By construction, we have $\mathscr{R}^*\subseteq \ensuremath{\mathfrak{S}}(G^*,\sigma)$. Hence, $\mathscr{R}^*$ is compatible. Moreover, since $|F|\le k$, at most $k$ of the vertex-disjoint $P_3$s have been modified. Therefore, we conclude that $|\mathscr{R}^*|\ge |\mathscr{R}|-k$. In summary, \PROBLEM{LDT-M} is NP-hard. \end{proof} \begin{theorem} \PROBLEM{rsF-C} and \PROBLEM{rsF-E} are NP-complete. 
\label{thm:rsF-M-NP} \end{theorem} \begin{proof} Since rs-Fitch graphs can be recognized in polynomial time, a given solution can be verified in polynomial time. Thus, \PROBLEM{rsF-C/E}$\in NP$. Consider an arbitrary graph $G$ and an integer $k$. We construct an instance $(G,\sigma,k)$ of \PROBLEM{rsF-C/E} by coloring all vertices distinctly. Then condition (ii) in Thm.~\ref{thm:char-rsFitch} is always satisfied. To see this, we note that for $k>1$ there are no edges between colors in the auxiliary graph $\auxfitch(\sigma,\mathcal{I})$ such that their corresponding unique vertices are in distinct independent sets $I, I'\in \mathcal{I}$. The problem therefore reduces to completion/editing of $(G,\sigma)$ to a complete multipartite graph, which is equivalent to a complementary deletion/editing of the complement of $G$ to a disjoint union of cliques, i.e., a cluster graph. Both \PROBLEM{Cluster Deletion} and \PROBLEM{Cluster Editing} are NP-hard \cite{Shamir:04}. \end{proof} Although \PROBLEM{Cluster Completion} can be solved in polynomial time (by computing the transitive closure), \PROBLEM{rsF-D} remains open: Consider a colored complete multipartite graph $(G,\sigma)$ that is not an rs-Fitch graph. Then solving \PROBLEM{Cluster Completion} on the complement returns $(G,\sigma)$, which by construction is not a solution to \PROBLEM{rsF-D}. \subsection{Editing LDT Graphs to Fitch Graphs} \begin{lemma} There is a linear-time algorithm to solve Problem \ref{problem:Fcomp} for every cograph $G$. \label{lem:editing} \end{lemma} \begin{proof} Instead of inserting in the cograph $G$ the minimum number of edges necessary to reach a complete multipartite graph, we consider the equivalent problem of \emph{deleting} a minimum set $Q$ of edges from its complement $\overline{G}$, which is also a cograph, to obtain the complement of a complete multipartite graph, i.e., the disjoint union of complete graphs.
This problem is known as the \textsc{Cluster Deletion} problem \cite{Shamir:04}, which is known to have a polynomial-time solution for cographs \cite{Gao:13}: A greedy maximum clique partition of $G$ is obtained by recursively removing a maximum clique $K$ from $G$, see also \cite{Dessmark:07}. For cographs, the greedy maximum clique partitions are the solutions of the \textsc{Cluster Deletion} problem \cite[Thm.~1]{Gao:13}. The \textsc{Maximum Clique} problem on cographs can be solved in linear time using the co-tree of $G$ \cite{Corneil:81}, which can also be obtained in linear time \cite{Corneil:81}. \end{proof} An efficient algorithm to solve the \textsc{Cluster Deletion} problem for cographs can be devised by making use of the recursive construction of a cograph along its discriminating cotree $(T,t)$. For all $u\in V(T)$, we have \begin{equation*} G[u] = \begin{cases} \displaystyle\bigcupdot_{v\in\child(u)} G[v] & \text{ if } t(u)=0 \\ \displaystyle\bigjoin_{v\in\child(u)} G[v] & \text{ if } t(u)=1 \\ \displaystyle (\{u\},\emptyset) & \text{ if } u \text{ is a leaf } \end{cases} \end{equation*} Denote by $\mathscr{P}(u)$ the optimal clique partition of the cograph implied by the subtree $T(u)$ of the discriminating cotree $(T,t)$. We think of $\mathscr{P}(u) := [Q_1(u),Q_2(u),\dots]$ as an ordered list, such that $|Q_i(u)|\ge |Q_j(u)|$ if $i<j$. It will be convenient to assume that the list contains an arbitrary number of empty sets acting as identity elements for the join and disjoint union operations.
With this convention, the optimal clique partitions $\mathscr{P}(u)$ satisfy the recursion: \begin{equation*} \mathscr{P}(u) = \begin{cases} \displaystyle\bigcup_{v\in\child(u)} \mathscr{P}(v) & \text{ if } t(u)=0 \\ \displaystyle\left[ \bigcup_{v\in\child(u)} Q_i(v) \quad \Big| \; i=1,2,\dots \right] & \text{ if } t(u)=1 \\ \displaystyle[\{u\},\emptyset,\dots] & \text{ if } u \text{ is a leaf } \end{cases} \end{equation*} In the first case, where $t(u)=0$, we assume that the union operation to obtain $\mathscr{P}(u) = [Q_1(u),Q_2(u),\dots]$ maintains the property $|Q_i(u)|\ge |Q_j(u)|$ if $i<j$. In an implementation, this can e.g.\ be achieved using $k$-way merging where $k=|\child(u)|$. To see that the recursion is correct, it suffices to recall that the greedy clique partition is optimal for cographs as input \cite{Gao:13} and to observe the following simple properties of cliques in cographs \cite{Corneil:81}: (i) a largest clique in a disjoint union of graphs is also a largest clique in one of its components. The optimal clique partition of a disjoint union of graphs is, therefore, the union of the optimal clique partitions of the constituent connected components. (ii) For a join of two or more graphs $G_i$, each maximum size clique $Q$ is the join of a maximum size clique of each constituent. The next largest clique disjoint from $Q=\bigjoin_i Q_i$ is, thus, the join of a largest clique disjoint from $Q_i$ in each constituent graph $G_i$. Thus a greedy clique partition of $G$ is obtained by size ordering the clique partitions of the $G_i$ and joining the $k$-th largest cliques from each, for $k=1,2,\dots$. The recursive construction of $\mathscr{P}(\rho_T)$ operates directly on the discriminating cotree $(T,t)$ of the cograph $G$. For each node $u$, the effort is proportional to $|L(T(u))| \log(\deg(u))$ for the $\deg(u)$-wise merge sort step if $t(u)=0$ and proportional to $|L(T(u))|$ for the merging of the $k$-th largest clusters for $t(u)=1$.
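The recursion for $\mathscr{P}(u)$ can be sketched in a few lines of Python. The illustration below is ours, not the authors' code; it assumes the discriminating cotree is given as nested tuples with inner nodes labeled 0 or 1, and realizes the empty-set convention implicitly by treating exhausted lists as empty:

```python
# Hypothetical sketch of the clique-partition recursion on a cotree.
# A node is ('leaf', x), (0, children) for disjoint union, or (1, children)
# for join.  Returns the list P(u) of cliques, largest first.
from heapq import merge

def clique_partition(node):
    kind = node[0]
    if kind == 'leaf':
        return [{node[1]}]
    parts = [clique_partition(c) for c in node[1]]
    if kind == 0:
        # disjoint union: k-way merge of the children's lists by size
        return list(merge(*parts, key=len, reverse=True))
    # join: the i-th clique is the union of the children's i-th cliques
    out = []
    for i in range(max(len(p) for p in parts)):
        out.append(set().union(*(p[i] for p in parts if i < len(p))))
    return out
```

The `heapq.merge` call performs the $\deg(u)$-wise merge mentioned in the text, which is where the $\log(\deg(u))$ factor in the running-time estimate comes from.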
Using $\sum_u \deg(u)|L(T(u))|\le |L(T)|\sum_u \deg(u) = 2|L(T)|\,|E(T)|$ together with $|E(T)|=|V(T)|-1$ and $|V(T)|\leq 2 |L(T)|-1$ (cf.\ \cite[Lemma 1]{Hellmuth:15a}), we obtain $\sum_u \deg(u)|L(T(u))| \in \mathcal{O}(|L(T)|^2) = \mathcal{O}(|V(G)|^2)$, that is, a quadratic upper bound on the running time. \end{appendix} \bibliographystyle{plainnat}
\section{\uppercase{Introduction}} \label{sec:introduction} In his influential 2017 paper \cite{narrative_econ_paper}, later expanded into the successful 2019 book {\em Narrative Economics: How Stories Go Viral and Drive Major Economic Events} \cite{narrative_econ_book}, Nobel Laureate Robert Shiller introduced the concept of {\em narrative economics} as an overlooked factor in understanding market trends. In brief, Shiller argues that in many markets the movement and maintenance of prices are driven to a significant extent by the stories -- i.e., the narratives -- that market participants tell each other. Shiller draws comparisons between the spread of narratives and the transmission of infectious diseases, and argues that financial bubbles and crashes (most notably in cryptocurrency markets) can plausibly be accounted for as primarily driven by the narratives that traders tell each other, even when those narratives make little sense to outside observers. The narratives told in and about a market are externalisations, or verbalisations, of the participants' interior beliefs or opinions. In this paper, we present the first results from a novel synthesis of two previously separate fields that both rely on agent-based modelling: our work combines practices from minimal-intelligence {\em agent-based computational economics} (ACE) with ideas developed separately in the research field known as {\em opinion dynamics}. We show here for the first time how existing well-known and widely-used ACE models of trader-agents can be extended so that each trader also holds its own independent opinion, which is our minimal approximation model of Shiller's notion that real traders are influenced by the narratives that they hear, read, and tell. In our work, an individual trader's opinion may be influenced to varying degrees by the opinions of other traders that it interacts with; and the trader's own opinion also directly influences its individual trading activity, i.e.
the sequence of bids and/or offers that it quotes into a single central financial exchange that all traders in our model interact with. Our model financial exchange is technically a {\em continuous double auction} (CDA) market operating with a {\em limit order book} (LOB), which is exactly the structure of existing financial markets such as the New York Stock Exchange, and all other major national and international financial exchanges. In keeping with the spirit of minimalism that motivates much ACE work, we extend zero-intelligence (ZI) and minimal-intelligence (MI) trader-agents so that each trader also holds its own independent opinion. For consistency with prior work in opinion dynamics (OD) research, we model each trader's opinion as a signed scalar real value, e.g.\ as a number in the continuous range $[-1.0, +1.0]$: this approach is long-established in OD research, a field that over its multi-decade history has developed a succession of models to explore and/or account for observable patterns of opinion dynamics in human societies. In our work we have explored the integration of ZI/MI traders with the following previously-established OD models: the {\em Bounded Confidence} model \cite{krause,hegselmannandkrause}; the {\em Relative Agreement} model \cite{deffuant2002,reexaminingRA}; and the {\em Relative Disagreement} model \cite{RD}. We refer to these three opinion dynamics models as the BC, RA, and RD models respectively. The trader-agents that we extend by addition of these OD models are Gode \& Sunder's (1993) {\em Zero Intelligence Constrained} (ZIC) traders, and the {\em Near-Zero-Intelligence} (NZI) trader agents of \cite{Duffy}, which minimally extend Gode \& Sunder's ZI approach in such a way that markets populated by NZI traders can exhibit asset-price bubbles.
We refer to the extended agent designs as {\em opinionated agents}: we name our opinionated version of ZIC as OZIC, and our opinionated version of NZI as ONZI. For both OZIC and ONZI agents, the bounds of the probability distribution used to randomly generate a trader's bid or offer prices are dependent at least in part on the current value of that agent's opinion variable; and that opinion variable can change over time as a consequence of interactions with other traders in the market, thereby modelling Shiller's notion of narrative economics: in our system opinions can drive prices, and prices can alter opinions. To the best of our knowledge, we are the first authors to report on such a system, a synthesis of opinion dynamics and market-trading agents, and so the primary contribution of this paper is the modelling platform that we describe for the first time here. The source-code for our system has been placed in the public domain as a freely-available open-source release on {\em GitHub}.\footnote{ {\tt github.com/ken-neth/opinion\_dynamics\_BSE.git}} We evaluate and test the performance of these trading agents, contrasting and comparing the BC, RA, and RD opinion dynamics models, using as our financial-market simulator {\em BSE}, a long-established open-source simulator of a LOB-based financial exchange for a single asset, freely available in the public domain since 2012 \cite{BSE}. This paper summarises \cite{myThesis}, which contains extensive further visualization and discussion of additional results that are not included here. In Section~\ref{sec:background} we summarise relevant prior academic literature. Section~\ref{sec:NZItraders} describes near-zero-intelligence traders in more depth. Section~\ref{sec:opinionatedtraders} then introduces our innovation, the addition of opinions to trading-agent models, giving {\em opinionated traders}, and results from simulation studies running on our platform are presented in Section~\ref{sec:results}.
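To make the opinion-dependence of quote prices concrete, the following is a speculative Python sketch of how an opinionated seller might draw an ask price; it is our own illustration, not code from BSE or the released system, and the function name, parameters, and the specific linear mapping from opinion to the distribution bound are all assumptions of this sketch:

```python
# Speculative sketch (ours): an "OZIC"-style seller draws an ask uniformly
# between its limit price and an upper bound that depends on its opinion
# in [-1, +1]; opinion = +1 widens the range to the market maximum price,
# opinion = -1 collapses it to the limit price.
import random

def ozic_ask(limit_price, max_price, opinion):
    upper = limit_price + (max_price - limit_price) * (opinion + 1.0) / 2.0
    return random.uniform(limit_price, upper)
```

Whatever the precise mapping, the essential point is the coupling in both directions: the opinion variable shapes the price distribution, and (via the OD models) trading interactions reshape the opinion variable.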
\section{\uppercase{Background}} \label{sec:background} \subsection{Opinion Dynamics} People are complicated. In particular, how ideas are formed and conveyed to others is difficult to model, as there are numerous factors that could affect the behaviour of individuals. Nevertheless we can say, with some degree of certainty, that people hold opinions and these opinions are changed by interacting with the world. Taking this a step further, people communicate and at some point during or after the communication their opinions may alter as a consequence. Given a sufficiently large population we can design models for how their opinions will change over time, i.e. models of the system's opinion dynamics (OD). Of course these models make clear assumptions and may not fully encapsulate the inner workings of a person but can nevertheless be useful in understanding problems relying on the opinions of large populations. One early OD model is given in \cite{deGroot}. In this model, a group of experts have different opinions on a subject and want to reach a consensus. The experts decide on a format of structured debate where each individual expert has a turn to express their opinion, taking the form of a real number, and at the end every expert updates their own individual opinion, using a fixed weight. The experts continue to take turns sharing their opinions until a consensus is reached. \cite{deGroot} proves that they will always reach a consensus given positive weights. A number of later works have analysed the DeGroot model. In \cite{chatterjee} the DeGroot model's treatment of the consensus problem is related to the ergodicity problem in probability theory, which concerns stochastic state spaces where from a given state all possible states are reachable and hence backwards traversal of the state space is difficult. The DeGroot model was subsequently analysed by \cite{friedkin}, who described experiments to understand how the model's mean opinions change over time.
Choice shifts are shown by the difference between the group's final mean opinion and its initial mean opinion. These experiments showed how individuals in the population could have greater influence on the overall consensus, and Friedkin argued that choice shifts are an inherent problem in discussions of issues where influence is not balanced. \subsubsection{Bounded Confidence} A variation on the DeGroot model was described in \cite{krause} and named the {\em Bounded Confidence} (BC) model. In this, all agents in a fixed-size population hold an opinion that is represented as a real number. The agents share their opinions and only update their own opinion if the two opinions differ by less than a given deviation threshold. The reasoning for this is that humans are less likely to have their opinions swayed by someone whose opinion heavily deviates from their own. A formal specification of the BC model is given in \cite{hegselmannandkrause} and summarised as follows: given a population of size $n$, $x_i(t)$ represents the opinion of expert $i$ at time $t$. This is updated by: \[ x_i(t+1) = a_{i1}x_{1}(t) + a_{i2}x_{2}(t) + \ldots + a_{in}x_{n}(t), \] where $a_{ij}$ is the confidence factor between experts $i$ and $j$. Crucially, the confidence factor between two experts can be zero if the difference in their opinions is too great. Since opinions change at each time step, it is possible that at a much later time step two agents that initially held too-distant opinions can come to be within a sufficiently close range to start to agree. At the beginning of a simulation, all opinions should be distributed over $[-1, +1]\subset{\cal R}$, with any individuals whose opinions lie beyond a certain extreme-value parameter (towards either end of the range) regarded as \textit{extremists}. As time progresses, experts whose opinions deviate by less than the deviation threshold move closer together according to a confidence factor. 
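As a minimal illustrative sketch (our own code and naming, not part of our released implementation), the BC update rule above can be written with every in-threshold peer receiving an equal confidence factor $a_{ij}$, summing to one:

```python
import random

def bc_step(opinions, threshold=0.2):
    """One synchronous Bounded-Confidence update: each expert averages
    the opinions of all peers within the deviation threshold (equal
    confidence factors a_ij summing to 1; a_ij = 0 outside the threshold)."""
    updated = []
    for x_i in opinions:
        peers = [x_j for x_j in opinions if abs(x_j - x_i) <= threshold]
        updated.append(sum(peers) / len(peers))  # peers always includes x_i itself
    return updated

random.seed(0)
population = [random.uniform(-1.0, 1.0) for _ in range(100)]
for _ in range(50):
    population = bc_step(population)
clusters = sorted({round(x, 3) for x in population})  # a handful of stable clusters
```

Running this, the population collapses into a small number of opinion clusters, the qualitative behaviour described above; the threshold value $0.2$ here is an illustrative choice.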
The opinions of the experts will converge until the simulation reaches a stable state with no further changes. \subsubsection{Relative Agreement} Another well-known Opinion Dynamics model, the {\em Relative Agreement} (RA) model, was proposed by \cite{deffuant2000}. In the RA model experts hold opinions, each represented as a real number, but with the difference that they also hold an {\em uncertainty}, which acts like a range around their opinion. The experts communicate and, provided the overlap of their uncertainty ranges exceeds the influencing expert's own uncertainty, they update their opinion and uncertainty by a weight parameter and a Relative Agreement value. \begin{figure}[!h] \centering \includegraphics[width=0.4\linewidth]{images/relative_agreement.png} \caption{Overlap $h_{ij}$ for experts $i$ and $j$ with opinions $X_i$ and $X_j$ and uncertainties $u_i$ and $u_j$ respectively} \label{fig:relativeagreement} \end{figure} According to the RA model definition in \cite{deffuant2000}, opinions are updated as follows: a pair of experts $i$ and $j$ are chosen at random from the population of experts. Firstly, calculate the overlap $h_{ij}$, as illustrated in Figure \ref{fig:relativeagreement}, \[h_{ij} = \min(x_i+u_i,\,x_j+u_j) - \max(x_i-u_i,\,x_j-u_j),\] where $x_i$ is the real-number representation of the opinion of expert $i$, and $u_i$ is the uncertainty of expert $i$ in their own opinion. Then, subtract the size of the non-overlapping part, $2u_i- h_{ij}$, so the total agreement of the two experts is given by: \[h_{ij}- (2u_i-h_{ij}) = 2(h_{ij}-u_i),\] and so the RA between $i$ and $j$ is given by: \[RA_{ij} = \frac{2(h_{ij}-u_i)}{2u_i} = \frac{h_{ij}}{u_i} - 1.\] Then if $h_{ij} > u_i$, the update is given by: \[x_j:=x_j+\mu RA_{ij}(x_i - x_j),\] \[u_j:=u_j+\mu RA_{ij}(u_i - u_j),\] where $\mu$ is a constant parameter for convergence, similar to the confidence factor in the BC model. 
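A minimal sketch of one RA pairwise interaction, written directly from the formulas above (function and variable names are ours, chosen for illustration):

```python
def ra_update(x_i, u_i, x_j, u_j, mu=0.5):
    """One Relative Agreement interaction: expert i influences expert j.
    Returns j's updated (opinion, uncertainty); no change if h_ij <= u_i."""
    # overlap of the two uncertainty segments [x - u, x + u]
    h_ij = min(x_i + u_i, x_j + u_j) - max(x_i - u_i, x_j - u_j)
    if h_ij > u_i:                     # i is confident enough to influence j
        ra_ij = h_ij / u_i - 1.0       # RA_ij = (h_ij / u_i) - 1
        x_j = x_j + mu * ra_ij * (x_i - x_j)
        u_j = u_j + mu * ra_ij * (u_i - u_j)
    return x_j, u_j
```

For example, with $x_i=0$, $u_i=0.5$, $x_j=0.2$, $u_j=0.4$ the overlap is $h_{ij}=0.7$, so $RA_{ij}=0.4$ and $j$'s opinion moves from $0.2$ to $0.16$ (with $\mu=0.5$).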
\cite{deffuant2000} show that the RA model converges to an average of $n = w/2u$ opinions, as opposed to the BC model, which converges to $n = {\rm floor}(w/2u)$ opinions. {\em Extremists} were added by \cite{deffuant2002}, which also describes three modes of convergence that occur with the RA model: central convergence; bipolar convergence; and single-extreme convergence. As with BC, at the beginning of an RA simulation all opinions are randomly distributed over $[-1, +1]\subset{\cal R}$. Central convergence appears as all of the opinions converge towards a stable single central value, around zero. In the case where the opinions converge towards two separate values and reach a stable state, we have bipolar convergence. When all opinions converge towards an extreme value, exceeding a given extreme parameter, and reach a stable state, we have single-extreme convergence. In a later paper, \cite{deffuant2006}, an \textit{asymmetric influence rule} is described where agents that are more convinced of their own opinion exert greater influence upon others. In \cite{deffuant2002} a metric called the \textit{$y$ metric} is used to measure the influence of extremists in a population. The $y$ metric, or indicator, is given by the formula: \[y = p_{+}^{2} + p_{-}^{2},\] where $p_{+}$ denotes the proportion of experts that were initially moderate but held a positive extreme opinion by the end of the simulation, and $p_{-}$ denotes the proportion of experts that were initially moderate but held a negative extreme opinion by the end of the simulation. Deffuant \textit{et al.} use the $y$ metric as an indicator of convergence type, i.e.\ central convergence at $y=0$, bipolar convergence at $y=0.5$, and single extreme convergence at $y=1$. \subsubsection{Relative Disagreement} The RA model has been shown to successfully simulate useful convergences in populations initialised with extremists. 
A more recent model, introduced in \cite{RD} and called the Relative Disagreement (RD) model, improves on the RA model by introducing a probability $\lambda$ of an update occurring and the idea of \textit{reactance}. In \cite{RD} the RD model was shown to achieve the same opinion convergences as the RA model without the need for initialising the population with extremists. Reactance is the motivation to disagree with an opinion. In psychology it has been rationalised as a desire to exercise freedom when that freedom is under threat \cite{reactance}. It is an important part of how people behave and how they come to hold certain opinions. The RD model incorporates the idea of reactance by having individuals' opinions diverge when they disagree to a sufficient degree. In contrast to $h_{ij}$ in RA, $g_{ij}$ is the non-overlapping distance calculated by: \[g_{ij} = \max(x_i - u_i,\, x_j - u_j) - \min(x_i + u_i,\, x_j + u_j).\] \begin{figure}[!h] \centering \includegraphics[width=0.4\linewidth]{images/relative_disagreement.png} \caption{Illustration of non-overlapping distance $g_{ij}$ for experts $i$ and $j$ with opinions $X_i$ and $X_j$ and uncertainties $u_i$ and $u_j$ respectively} \label{fig:relativedisagreement} \end{figure} Subtract the extent of the overlap, $2u_i - g_{ij}$, to give the total disagreement: \[g_{ij} - (2u_i - g_{ij}) = 2 (g_{ij} - u_i).\] The RD between $i$ and $j$ is given by: \[RD_{ij} = \frac{2 (g_{ij} - u_i)}{2u_i} = \frac{g_{ij}}{u_i}-1.\] If $g_{ij} > u_i$, update the opinions and uncertainties with probability $\lambda$, where $\lambda$ is a parameter: \[x_j:=x_j+\mu RD_{ij}(x_i - x_j),\] \[u_j:=u_j+\mu RD_{ij}(u_i - u_j).\] \subsection{Markets and Traders} The famous 18th-Century Scottish economist Adam Smith included a description of what he called \textit{The Invisible Hand} in his landmark book \cite{theInvisibleHand}; Smith used the term to embody the unintended positive effects of selfish behaviour in a market. 
This idea forms the basis for \textit{allocative efficiency}, sometimes thought of as the ``fairness'' of a market. Where utility is the measure of the usefulness a person gets from a product, the \textit{allocative efficiency} of a market is the total utility gained from trade, expressed as a percentage of the maximum possible utility to be gained. Understanding the details of how selfish interactions among competitive traders in a market can give rise to desirable outcomes, such as efficient allocation of scarce resources between producers and consumers, has been a goal of economists ever since Adam Smith. A major step forward was taken by the American economist Vernon Smith, who in the late 1950s started a program of experimental studies of human traders interacting in markets under repeatable laboratory conditions -- a field that became known as {\em experimental economics}, the founding and growth of which resulted in Vernon Smith being awarded the Nobel Prize in Economics in 2002. Much of Smith's experimental work studied the dynamics of markets in which human traders, either {\em buyers} announcing bid-prices or {\em sellers} announcing ask-prices, interacted with one another via a market mechanism known as the {\em continuous double auction} (CDA), which is the basis of almost all of the world's major financial markets. In a CDA a buyer can announce a bid at any time and a seller can announce an ask at any time, and any buyer is free to accept an ask at any time while any seller is free to accept a bid at any time. In establishing experimental economics research, Vernon Smith had devised experimental CDA auctions for teaching purposes and later as a tool to observe how traders in a market act according to different specified conditions \cite{VernonSmith}. 
Vernon Smith and his fellow experimental economists focused entirely on the interactions among human traders in their market laboratories, but in 1993, inspired by Vernon Smith's work, the economists Gode \& Sunder devised experiments to compare the allocative efficiency of minimally-simple automated trading systems against human traders. Gode \& Sunder's automated traders were so simple that they were, entirely justifiably, referred to as \textit{zero-intelligence} (ZI) traders. Most notably, in \cite{GodeandSunder} the authors describe the design of a ZI trader known as ZIC (for ZI-Constrained) which generated random bid or ask prices, subject to the single {\em budget constraint} that the prices generated should not lead to loss-making deals: ZIC is constrained by a {\em limit price}, and so draws its bid quote price from a uniform random distribution below the limit price, and its ask quote price from a uniform random distribution above the limit price. To everyone's surprise, the allocative efficiency scores of CDA markets populated by ZIC traders were demonstrated to be statistically indistinguishable from those of comparable CDA markets populated by human traders. Gode \& Sunder's result indicated to many people that the high intelligence of human traders was irrelevant within the context of a CDA-based market, and a research field formed, with various authors publishing details of automated trading systems that refined and extended the ZI approach. Often these early automated traders involved some means of making the trader adaptive, so that it could adjust its response to changing market conditions. As adaptivity to the environment is seen by some as a minimal signifier of intelligence, adaptive ZI-style automated trading agents became known as minimal-intelligence (MI) traders. Numerous variations on ZI/MI traders have been proposed to test the limits of their trading performance and to provide more human-like traders to test new trading strategies against. 
A notable work, which extended an MI trading strategy to enable the study of asset price bubbles and crashes, is \cite{Duffy}, discussed in more detail below. The primary contribution of this paper is to combine Opinion Dynamics models with ZI/MI automated traders, creating a new class of automated trading strategies: ones that are still zero- or minimal-intelligence, but which also hold opinions. In the 27 years since Gode and Sunder published their seminal 1993 paper on ZIC, the field of agent-based computational economics (ACE) has grown and matured. For reviews of work in this field, see \cite{chen2018book,hommes_lebaron_2018}. ACE is a subset of research in agent-based modelling (ABM), which uses computational models of interacting agents to study various phenomena in the natural and social sciences: see \cite{ABM} for more details. \subsection{The BSE Financial Exchange} We used the \textit{BSE} open-source simulator of a contemporary financial exchange populated with a number of automated trading systems. The BSE project is open source and publicly available on GitHub, at: \url{https://github.com/davecliff/BristolStockExchange} \cite{BSE}. BSE is a simulated CDA-based financial market, which is populated by a user-specifiable configuration of various automated-trader systems; it includes a number of predefined classes of automated trader, each with unique trading strategies. BSE's implementation of a CDA, like real-world financial exchanges, allows buyers and sellers to continuously and asynchronously submit bid and ask prices to an exchange mechanism that publishes the orders on a Limit Order Book (\textit{LOB}); each order (each bid or ask) specifies a price and a quantity. A transaction goes through when a buyer's bid price and a seller's ask price are the same or `cross', i.e.\ when a buyer's bid equals or exceeds a seller's ask. 
When the transaction is complete the orders have been filled, and hence they are removed from the \textit{LOB}. On a Limit Order Book (LOB), the bids and asks are stacked separately on ordered lists, each sorted from best to worst: the best bid is the highest-priced one and the remaining bids are listed in decreasing-price order below it; the best ask is the lowest-priced one and the remaining asks are listed in ascending-price order below it. BSE comes with several types of ZI/MI automated traders built-in, including Gode \& Sunder's ZIC, and also Vytelingum's {\em AA} trader \cite{vytelingum2006}, which was demonstrated by \cite{deluca_cliff_2011} to outperform human traders, so an experimental market can readily be set up and populated with some number of traders of each type. However BSE does not include the \textit{Near-Zero Intelligence} (NZI) trader-type introduced by \cite{Duffy}, so we created our own implementation of that and added it to BSE: the source-code for that implementation is available in our GitHub repository, the location of which was given in the footnote in Section 1. In the next section we describe NZI traders in more detail. \section{Near-Zero-Intelligence Traders} \label{sec:NZItraders} In \cite{Duffy}, NZI traders are defined to mimic the behaviour of traders in markets where asset prices bubble and crash, i.e.\ where the price of a tradeable asset rises quickly and then falls precipitously. As the name implies, NZI traders are similar to Gode and Sunder's ZI traders but have some added features. The following is a summary of key aspects of NZI traders. \subsection{The Weak Foresight Assumption} Firstly, Duffy and Ünver define the \textit{weak foresight assumption} (WFA), which gives the traders knowledge that the trading session is coming to an end. This involves two variables, $\Bar{D}^T_t$ and $\pi_t$, both of which are explained further below. 
A trading period is defined as 240 seconds, and at the end of a trading period the traders earn a dividend per unit of the asset they own. The dividend amount is a random variable drawn from a uniform distribution with support $\{d_1, d_2, d_3, d_4\}$, where $0\leq d_1<d_2<d_3<d_4$. Hence the expected dividend is given by: \[\Bar{d} = \frac{1}{4}\sum_{i=1}^{4} d_i.\] At the start of each simulation of $T$ trading periods, a trader $i$ has a balance of $x_i$ and owns a number $y_i$ of units of the tradeable asset. Before the first trading period, $t=1$, we have the equation: \[x_i + \Bar{D}^T_1 y_i = c,\] where $c$ is a constant for all $i$. During the simulation of the market sessions, $\Bar{D}^T_t$ decreases as $t\rightarrow T$. It represents the fundamental market price, or the default value of the asset at period $t$, which earns zero profit. It is calculated by the equation: \[\Bar{D}^T_t = \Bar{d}(T-t+1) + \Bar{D}^T_{T+1}.\] $\Bar{D}^T_t$ is a value that decreases by $\Bar{d}$ each trading period $t$; this makes up the first part of the WFA. The second part of the WFA is $\pi_t$, the probability of a trader being a buyer in trading period $t$. It is given by the equation: \[\pi_t = \max\{0.5-\varphi t,\, 0\},\] where $\varphi \in [0,{0.5}/{T})$. Since $0 \leq \varphi < {0.5}/{T}$, we have $0 < \pi_t \leq 0.5$ for $t \leq T$, and the probability of a trader being a buyer decreases as $t \rightarrow T$; therefore traders are less likely to buy as time goes by. The combination of a reduction in the tendency to buy, caused by $\pi_t$, and a decrease in the default value of the asset, $\Bar{D}^T_t$, results in traders having a ``weak'' awareness of the future, hence the name ``weak foresight assumption''. \subsection{The Loose Budget Constraint} In \cite{GodeandSunder}, the ZIC trader has a \textit{no loss constraint}. 
That constraint on ZIC traders forces them to buy and sell at prices bounded by the intrinsic value, and transacting at that price would not result in asset price inflation. In contrast to Gode and Sunder's work, \cite{Duffy} propose a \textit{``loose'' budget constraint}: if trader $i$ is a seller and has an asset, submit an ask price; and if trader $i$ is a buyer, submit a bid price capped at its cash balance: \begin{algorithmic} \IF{trader $i$ is a seller \AND trader $i$ has an asset} \STATE {submit ask} \ELSIF{trader $i$ is a buyer} \STATE{submit min(balance, bid)} \ENDIF \end{algorithmic} \subsection{The ``Anchoring Effect''} Another departure from \cite{GodeandSunder} is that Duffy \& Ünver's NZI traders are not entirely \textit{zero-intelligence}. In fact they have knowledge of the mean transaction price from the previous trading period, denoted $\Bar{P}_{t-1}$, which is used to calculate the trader's initial quote price in a trading period -- thus the trader's quote price is to some extent ``anchored'' by the previous period's prices. In the first trading period, $\Bar{P}_{t-1}=0$, and the traders submit low quote prices. \subsection{Formal Specification} Simulations involve $T$ market periods or sessions, $t \in [1,T]$; within each session, a trader $i$ is chosen to submit an order at each step $s$ of a sequence $S$ of steps. The uniform random variable $u^i_{t,s}$ is drawn, using $\Bar{D}^T_t$, from \[u^i_{t,s} \in [\underline\epsilon_t, \Bar\epsilon_t],\] where $\underline\epsilon_t = 0$, $\Bar\epsilon_t = k\Bar{D}^T_t$, and $k > 0$ is a parameter. The upper bound of $u^i_{t,s}$, $\Bar\epsilon_t$, will decrease over time since $\Bar{D}^T_t$ decreases. Therefore the range for $u^i_{t,s}$ becomes smaller, and since its mean is $\frac{1}{2}k\Bar{D}^T_t$, the value of $u^i_{t,s}$ should decrease on average. 
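The quantities defined so far — the expected dividend $\Bar{d}$, the default value $\Bar{D}^T_t$, the buyer probability $\pi_t$, and the random component $u^i_{t,s}$ — can be sketched as follows (our own function names; the numeric arguments in the comments are illustrative, not prescribed by \cite{Duffy}):

```python
import random

def expected_dividend(divs=(0, 1, 2, 3)):
    """d_bar: mean of the four equiprobable dividend values."""
    return sum(divs) / len(divs)

def default_value(t, T, d_bar, D_final):
    """D^T_t = d_bar*(T - t + 1) + D^T_{T+1}: falls by d_bar per period."""
    return d_bar * (T - t + 1) + D_final

def buyer_probability(t, phi):
    """pi_t = max(0.5 - phi*t, 0): chance of acting as a buyer in period t."""
    return max(0.5 - phi * t, 0.0)

def random_component(t, T, d_bar, D_final, k):
    """u ~ U[0, k * D^T_t]; its range shrinks as D^T_t decreases."""
    return random.uniform(0.0, k * default_value(t, T, d_bar, D_final))
```

For instance, with dividends $\{0,1,2,3\}$ the expected dividend is $1.5$, so $\Bar{D}^T_t$ falls by exactly $1.5$ per period, and with $\varphi>0$ the buyer probability $\pi_t$ falls linearly from just below $0.5$.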
If a trader is a seller then it offers the ask price $a^i_{t,s}$, \[a^i_{t,s} = (1-\alpha) u^i_{t,s} + \alpha \Bar{P}_{t-1},\] where $\alpha \in (0,1)$ is a constant parameter. Applying the \textit{loose budget constraint}, so that a buyer can only offer as much money as it possesses, if a trader is a buyer then it offers the bid price $b^i_{t,s}$, \[b^i_{t,s} = \min\{(1-\alpha) u^i_{t,s} + \alpha \Bar{P}_{t-1},\, x^i_{t,s}\}.\] \begin{figure}[!h] \centering \includegraphics[width=0.8\linewidth]{images/duffy.png} \caption{Comparison of mean transaction price path in the simulations and actual data from \cite{Duffy}} \label{fig:duffy} \end{figure} The combination of a decreasing $\Bar{D}^T_t$ value and an anchoring to the mean transaction price of the previous trading period, $\Bar{P}_{t-1}$, results in a hump-shaped pattern in the transaction history. This hump is the model's endogenous rise in price, i.e.\ the `bubble', followed by a fall or `crash'. The mean transaction price per trading period increases initially due to the high $\Bar{D}^T_t$ value, which pushes the bid and ask prices above the previous mean transaction price $\Bar{P}_{t-1}$. Eventually, as the value of $\Bar{D}^T_t$ decreases, the mean transaction price levels out closer to $\alpha \Bar{P}_{t-1}$, which is less than or equal to $\Bar{P}_{t-1}$. \section{Opinionated Traders} \label{sec:opinionatedtraders} We introduce a new variation on the ZIC trader model of \cite{GodeandSunder}, called the Opinionated-ZIC (OZIC) trader, which submits quote prices affected by its opinion. 
\begin{figure}[!h] \centering \begin{subfigure}[b]{0.6\linewidth} \centering \includegraphics[width=0.6\textwidth]{images/ZICdiagram3.png} \caption{Quote price range of ZIC traders\vspace*{2em}} \label{fig:OZIC1} \end{subfigure} \vspace*{1em} \begin{subfigure}[b]{0.6\linewidth} \centering \includegraphics[width=0.6\textwidth]{images/OZICdiagram3.png} \caption{Quote price range of OZIC traders} \label{fig:OZIC2} \end{subfigure} \vspace*{0.5em} \caption{Diagrams of quote price range for Gode \& Sunder's Zero Intelligence Constrained (ZIC) Traders in \ref{fig:OZIC1} and for our Opinionated-ZIC (OZIC) Traders in \ref{fig:OZIC2}. The shaded region represents the uniform distribution that the traders' quote prices are drawn from.} \label{fig:OZIC} \end{figure} The BSE simulator \cite{BSE} contains an implementation of the ZIC trader, which has knowledge of the Limit Order Book (LOB): it sets its minimum quote price to the worst bid on the LOB, its maximum quote price to the best ask price on the LOB, and its limit price to that specified by the customer order currently being worked on. If the ZIC trader is a buyer then it submits orders with a quote price generated from a random draw between the minimum quote price and the limit price. Otherwise, if the ZIC trader is a seller, then it submits orders with a quote price generated from a random draw between the limit price and the maximum quote price. The quote-price distributions for ZIC traders are illustrated in Figure \ref{fig:OZIC1}, with the buyers' quote price distribution on the left and the sellers' quote price distribution on the right. The \textit{Opinionated Zero-Intelligence-Constrained} (OZIC) trader model submits quote prices that vary according to its opinion. If the OZIC trader is a buyer and its opinion is negative then it submits a low bid, and if its opinion is positive then it submits a bid that is higher but still capped at its limit price. 
On the other hand, if the OZIC trader is a seller and its opinion is negative then it submits a low ask, and if its opinion is positive then it submits a high ask. This models the idea that traders will submit quote prices close to what they believe the actual value of the stock to be, and a trader holding a positive opinion of the stock would believe the value of the stock to be greater than a trader holding a negative opinion of the stock. As illustrated in Figure \ref{fig:OZIC2}, the quote price range for OZIC buyers is between the minimum price and their \textit{opinionated limit}, and the quote price range for OZIC sellers is between their opinionated limit and the maximum price. If the OZIC trader $i$ is a buyer then calculate the opinionated limit $OL_i$ by: \[OL_i = f(x_i) = \frac{L (1 + x_i) + \underline{M} ( 1 - x_i )}{2},\] where $L$ is the limit price, $\underline{M}$ is the minimum price, and $x_i$ is the opinion of OZIC trader $i$: this gives $f(-1)=\underline{M}$; $f(0)=\frac{L + \underline{M}}{2}$; and $f(1)=L$. Then generate a bid quote price as a random draw from the interval $[\underline{M}, OL_i]$. If the OZIC trader $i$ is a seller then calculate the opinionated limit $OL_i$ by: \[OL_i = f'(x_i) = \frac{L (1 - x_i) + \Bar{M} ( 1 + x_i )}{2},\] where $L$ is the limit price, $\Bar{M}$ is the maximum price, and $x_i$ is the opinion of OZIC trader $i$: this gives $f'(-1)=L$; $f'(0)=\frac{L + \Bar{M}}{2}$; and $f'(1)=\Bar{M}$. Then ask quote prices are generated as a random draw from the interval $[OL_i, \Bar{M}]$. \subsection{Opinionated NZI Traders} We also introduce here an {\em Opinionated Near-Zero-Intelligence} (ONZI) trader based on the \textit{near-zero-intelligence} (NZI) trader model of \cite{Duffy}. The ONZI trader model offers the possibility of price bubbles dependent on the prevailing opinions of the population, i.e.\ if the opinions are mostly positive then the bubble should be greater than if the opinions were mostly negative. 
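Before turning to the NZI-based design, the OZIC quote-price rules above can be summarised in a short sketch (a minimal illustration in our own naming, not the released BSE source itself), with opinion $x_i \in [-1,+1]$:

```python
import random

def ozic_quote(is_buyer, limit, min_price, max_price, opinion):
    """OZIC quote: linearly interpolate the opinionated limit OL between
    the relevant price bound and the customer limit price, then draw the
    quote uniformly from the opinion-dependent interval."""
    if is_buyer:
        ol = (limit * (1 + opinion) + min_price * (1 - opinion)) / 2
        return random.uniform(min_price, ol)   # bid drawn from [min_price, OL]
    ol = (limit * (1 - opinion) + max_price * (1 + opinion)) / 2
    return random.uniform(ol, max_price)       # ask drawn from [OL, max_price]
```

For a buyer with limit price $100$ and minimum price $1$: opinion $-1$ gives $OL=1$, opinion $0$ gives $OL=50.5$, and opinion $+1$ gives $OL=100$, matching $f(-1)=\underline{M}$, $f(0)=(L+\underline{M})/2$, and $f(1)=L$.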
\subsection{Recreating the NZI Trader Model} Duffy \& Ünver's NZI trader model uses a random component $u_{t,s}^i$, given by $u_{t,s}^i \in [0, k\Bar{D}^T_t],$ where $i$ is the index of the trader, $t$ is the current trading period out of $T$ periods, $s$ is the position of the trader in the sequence in which the traders submit orders, $k$ is a constant parameter, and $\Bar{D}^T_t$ is the default value of the asset. The ask price $a_{t,s}^i$ is calculated using $u_{t,s}^i$ as described in Section~\ref{sec:NZItraders}. In \cite{Duffy}, optimal parameter values were calibrated to best match their simulated data with the data collected from experiments with human traders. The values are as follows: $k^*=4.0846,$ $\alpha^*=0.8480,$ $\varphi^*=0.01674,$ and $S^*=5$. We use the optimised parameter values $k^*$ and $\alpha^*$ hereafter; however we have not used $\varphi^*$, because in our work the buyers and sellers do not change specification, and we have not used $S^*$, as small values of $S$ do not show opinion convergences in large populations very well. The ask and bid prices of traders are calculated in such a way that they require the default value $\Bar{D}^T_t$ of the asset and the mean transaction price of the previous trading period, $\Bar{P}_{t-1}$. To get the default value $\Bar{D}^T_t$ for each trading period $t$, the expected dividend amount $\Bar{d}$ is calculated as the average of the dividend values $\{0,1,2,3\}$, which is $1.5$, and the final value is set to $\Bar{D}^T_{T+1}=40$. These values produce a similar gradient for $\Bar{D}^T_t$ over time to that shown in \cite{Duffy}. \subsection{Opinionated Limit} We created an \textit{opinionated limit} to integrate trader opinions with the NZI strategies. 
Similarly to the opinionated limit calculation in our OZIC trader model, the opinionated limit of the ONZI trader model lies between $\alpha \Bar{P}_{t-1}$ and $(1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}$, as shown in Figure \ref{fig:ONZIC1}, because the maximum value of $u_{t,s}^i$ is $k\Bar{D}^T_t$. So for an ONZI trader $i$, with opinion $x_i$, the opinionated limit $OL_i$ is calculated by: \[OL_i = \frac{\left((1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}\right)(1+x_i)+ \left(\alpha \Bar{P}_{t-1}\right)(1-x_i)}{2}.\] This form is closest to that of OZIC traders, but is easier to read when expressed in terms of the \textit{opinionated uncertainty} $OU_{t,s}^i$, based on the definition of $u_{t,s}^i$, which is given by: \[OU_{t,s}^i\in [0, \tfrac{1}{2} k \Bar{D}^T_t (1+x_i)].\] \begin{figure}[!h] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.4\textwidth]{images/ONZICdiagram2.png} \caption{\vspace*{2em}} \label{fig:ONZIC1} \end{subfigure} \hfill \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.4\textwidth]{images/ONZICgraphdiagram.png} \caption{} \label{fig:ONZIC2} \end{subfigure} \caption{Diagram of quote price range for Opinionated near-zero-intelligence (ONZI) Traders in \ref{fig:ONZIC1} and an illustration of the possible range for the mean transaction price $\Bar{P}_t$ of trading period $t$ in relation to the previous mean transaction price $\Bar{P}_{t-1}$ in \ref{fig:ONZIC2}.} \label{fig:ONZIC} \end{figure} Then the quote price $a^i_{t,s}$ is calculated by: \[a^i_{t,s} = (1-\alpha)OU^i_{t,s} + \alpha \Bar{P}_{t-1}.\] The effect of the opinionated uncertainty $OU^i_{t,s}$ is illustrated in Figure \ref{fig:ONZIC2}, where the value of $\Bar{P}_t$ is the mean transaction price for trading period $t$. 
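A minimal sketch of the ONZI quote calculation (our own function name; the default $k$ and $\alpha$ are the calibrated values from \cite{Duffy}):

```python
import random

def onzi_quote(opinion, D_t, P_prev, k=4.0846, alpha=0.8480):
    """ONZI quote: draw the opinionated uncertainty OU from
    U[0, 0.5 * k * D_t * (1 + opinion)] and anchor it to the previous
    period's mean transaction price P_prev, as in the NZI model."""
    ou = random.uniform(0.0, 0.5 * k * D_t * (1 + opinion))
    return (1 - alpha) * ou + alpha * P_prev
```

At opinion $x_i=+1$ the quote can reach the full NZI maximum, $(1-\alpha)k\Bar{D}^T_t + \alpha\Bar{P}_{t-1}$; at $x_i=-1$ the opinionated uncertainty vanishes and the quote collapses to $\alpha\Bar{P}_{t-1}$.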
During trading period $t$, every trader will submit quotes between $\alpha \Bar{P}_{t-1}$ and $(1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}$, so if all $n$ transactions in the period take place at the maximum, $(1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}$, then the average $\Bar{P}_{t}$ will be: \[\frac{1}{n}\sum_{j=1}^{n}((1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}) = (1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}.\] Similarly, if all transactions in trading period $t$ occur at the minimum, $\alpha \Bar{P}_{t-1}$, then the average $\Bar{P}_{t}$ will be: \[\frac{1}{n}\sum_{j=1}^{n}(\alpha \Bar{P}_{t-1}) = \alpha \Bar{P}_{t-1}.\] The shaded region in Figure \ref{fig:ONZIC2} represents the range that $\Bar{P}_t$ can lie in, i.e.\ between $\alpha \Bar{P}_{t-1}$ and $(1-\alpha)k\Bar{D}^T_t + \alpha \Bar{P}_{t-1}$. The value of $\Bar{D}^T_t$ will decrease over time, hence the range for $\Bar{P}_{t}$ shrinks, although it remains roughly centred. Within this range, a population of ONZI traders will submit high quote prices, close to the maximum, when they hold positive opinions, and will submit low quote prices, close to the minimum, when they hold negative opinions. \section{Results} \label{sec:results} \subsection{OZIC Traders} \subsubsection{Baseline Results} The most informative results arise in the extreme cases of the opinion distribution, i.e.\ when all the traders hold extremely positive or extremely negative opinions. In Figure \ref{fig:extremesOZIC}, we show that an extremely positive opinion distribution produces a transaction history with high prices, whereas an extremely negative opinion distribution produces a transaction history with very low prices. The results use the RA model with $pe=0.5$ and $w=0.5$, and a function that specifies the distribution of extremists. \begin{figure}[!h] \centering \includegraphics[trim=145 20 20 10, clip,width=\linewidth]{images/OZIC_extremes.png} \caption{OZIC traders with extreme opinions. 
Upper row of plots is for traders with extremely positive opinions; lower row is for traders with extremely negative opinions. The plot at far left shows the convergence of opinion values in the population over time, in the 2D style used by \cite{deffuant2002} among others -- the population converges to a situation where all traders hold one of three opinions; the two central plots display the same opinion-distribution data as 3D plots (heatmap-colored on the left; uncoloured on the right), which gives a better indication of the number of traders that hold each converged-upon opinion. The dark-background plot at far right in each row is the transaction-price time series from this experiment.} \label{fig:extremesOZIC} \end{figure} In Figure \ref{fig:compOZIC}, we have plotted the transaction histories of OZIC traders with extremely positive opinions, in orange, and extremely negative opinions, in green. When compared this way it is clear that the traders with extremely positive opinions trade at much higher prices than traders with extremely negative opinions. \begin{figure}[!h] \centering \includegraphics[width=0.4\linewidth]{images/OZICextremecomparison.png} \caption{Comparison of OZIC trader transaction histories with extremely negative and positive opinions} \label{fig:compOZIC} \end{figure} \subsubsection{Extreme Opinion Shift} We initialise a given proportion of extremists to be extremely positive or negative, and switch them to the polar opposite opinion half way through the duration of the simulation. Figure \ref{fig:OZICshifts} shows the results for a population of 100 OZIC buyers and 100 OZIC sellers using the RA model with proportion of extremists $pe=0.5$, confidence factor $\mu=0.5$, and uncertainty in the range $[0.2, 2.0]$. 
\begin{figure}[!h] \centering \includegraphics[trim=145 20 20 10, clip,width=0.6\linewidth]{images/OZIC_shifts.png} \caption{OZIC traders with extreme shifts in opinion at the start of Period 6; format as for Figure~\ref{fig:extremesOZIC}.} \label{fig:OZICshifts} \end{figure} The results show a clear change in mean transaction price in relation to opinion distribution. For a positive-to-negative opinion shift, the traders start selling and buying at high prices and after $t=1350$ drastically shift to lower prices. Similarly, for a negative-to-positive opinion shift, the traders begin trading at low prices and after $t=1350$ trade at higher prices. \subsection{ONZI Trader Results} \subsubsection{Baseline Results} The same rationale for testing extreme opinion distributions with OZIC traders applies to testing ONZI traders. With extremely positive opinions, the shape of the transaction history peaks higher and has a greater initial gradient than that of ONZI traders with extremely negative opinions. ONZI traders with extremely negative opinions show a shorter hump-shaped pattern than the ONZI traders with extremely positive opinions. \begin{figure}[!h] \centering \includegraphics[trim=145 20 20 10, clip,width=0.6\linewidth]{images/ONZI_extremes.png} \caption{ONZI trader transaction histories with extreme positive and negative opinions; format as for Figure~\ref{fig:extremesOZIC}.} \label{fig:ONZICextreme} \end{figure} In Figures \ref{fig:ONZICpospath} and \ref{fig:ONZICnegpath}, inspired by a graph in \cite{Duffy}, we have plotted the transaction histories of the ONZI trader, in orange, against an ordinary \textit{near-zero-intelligence} (NZI) trader's results, in green. We have also plotted $\Bar{D}^T_t$ over time and $\frac{1}{2}k\Bar{D}^T_t$ over time, to illustrate the effect they have on the transaction price over time. The average transaction price per trading period is also shown, in red, to encapsulate the overall behaviour of the market trends. 
The simulated data for NZI traders, in green, tapers off and does not crash because we are not using a decreasing proportion of buyers in the population. The transaction price data for ONZI traders with extremely positive opinions is very close to the simulated transaction history of \textit{near-zero-intelligence} traders, as shown in Figure \ref{fig:ONZICpospath}. On the other hand, the transaction price data for ONZI traders with extremely negative opinions is much lower than the simulated transaction history of \textit{near-zero-intelligence} traders, as shown in Figure \ref{fig:ONZICnegpath}. \begin{figure}[!h] \centering \includegraphics[trim=250 10 10 70, clip,width=8cm]{images/ONZI_pos_path.png} \caption{ONZI trader transaction history with extremely positive opinions; compared to the original NZI results shown in Figure \ref{fig:duffy}. Yellow lines show transaction history of traders with extreme positive opinions; green lines are baseline comparison; red line shows mean transaction price. } \label{fig:ONZICpospath} \end{figure} \begin{figure}[!h] \centering \includegraphics[trim=250 10 10 70, clip, width=8cm]{images/ONZI_neg_path.png} \caption{ONZI trader transaction history with extremely negative opinions; compared to the original NZI results as shown in Figure \ref{fig:duffy}. Color-coding of lines is as for Figure \ref{fig:ONZICpospath}.} \label{fig:ONZICnegpath} \end{figure} \subsubsection{Extreme Opinion Shift} Figure \ref{fig:ONZICshifts} shows ONZI traders with extremely positive opinions until halfway through the simulation, i.e. $t=1350$, when the opinions shift to extremely negative, and vice versa. The opinion dynamics model used is RA with confidence factor $\mu=0.5$ and proportion of extremists $pe=0.5$ for both initialisations of extremists.
In Figures \ref{fig:ONZICposneg} and \ref{fig:ONZICnegpos}, in the same format as Figures \ref{fig:ONZICpospath} and \ref{fig:ONZICnegpath}, we have plotted the transaction histories of ONZI traders with drastically shifting opinion distributions against the ordinary NZI traders, the default value $\Bar{D}^T$, the expected uncertainty $1/2 \kappa \Bar{D}^T$, and the mean transaction price per trading period. The mean transaction price per trading period, in red, is a useful indicator of the trends generated from the opinion distribution, as the average transaction price over time increases and decreases according to positive and negative opinions, respectively. \begin{figure}[!h] \centering \includegraphics[trim=145 20 20 10, clip,width=0.6\linewidth]{images/ONZI_shifts.png} \caption{ONZI extreme opinion shifts; format as for Figure~\ref{fig:extremesOZIC}.} \label{fig:ONZICshifts} \end{figure} \begin{figure}[!h] \centering \includegraphics[trim=250 10 10 70, clip,width=0.6\linewidth]{images/ONZI_posneg_path.png} \caption{ONZI traders with extremely positive opinions drastically shifting to negative opinions at the start of Period 6. Color-coding of lines is as for Figure~\ref{fig:ONZICpospath}.} \label{fig:ONZICposneg} \end{figure} \begin{figure}[!h] \centering \includegraphics[trim=250 10 10 70, clip,width=0.6\linewidth]{images/ONZI_negpos_path.png} \caption{ONZI traders with extremely negative opinions drastically shifting to positive opinions at the start of Period 6. Color-coding of lines is as for Figure \ref{fig:ONZICpospath}. } \label{fig:ONZICnegpos} \end{figure} \section{\uppercase{Conclusions}} \label{sec:conclusion} \noindent In this paper we have described what we believe to be the first ever system that integrates ideas from opinion dynamics into well-established trader-agent models, and in doing so we have created the first platform for the experimental exploration of agent-based models of narrative economics.
In his seminal work on narrative economics, Nobel Laureate Robert Shiller argues for a program of empirical research, gathering data on the stories, the narratives, that humans tell each other about economic affairs, which shape and change their opinions about future economic events, and where those opinions are themselves also significant factors in the dynamics of economic affairs. Our work opens up an experimental approach that is complementary to the one proposed by Shiller: using our platform, experimentalists can now also run agent-based simulations to better understand the dynamic interplay between opinions, expressions of those opinions, and subsequent economic outcomes. \clearpage \section*{\uppercase{Acknowledgements}} \noindent The work described here was orally presented in October 2020 at an international conference on Zero- and Minimal-Intelligence Trading Agents held virtually at the Yale School of Management, Connecticut, USA. We are grateful to the participants of that meeting for their insightful questions and comments, and for awarding this work the Best Student Paper prize. \bibliographystyle{apalike} {\small
\section{Introduction} Quantum optomechanics focuses on the interaction between the electromagnetic radiation and motional degrees of freedom of mechanical oscillators\,\cite{Aspelmeyer,optobook,backaction}. The simplest optomechanical system consists of a single cavity mode interacting with a single mechanical mode and is realised, for example, in an optical cavity with a movable mirror. In this case the mechanism responsible for the interaction is radiation pressure, which entails momentum exchange between light and matter. The presence of a cavity boosts the otherwise weak radiation pressure force, enhancing the light-matter interaction. The quantum effects of radiation pressure forces and the associated limits they set on the precision of mirror-displacement measurements are of great importance for many applications including gravitational wave detectors, scanning probe microscopy and force sensing\,\cite{classical,backaction,optobook,Aspelmeyer}. Although the radiation pressure interaction is intrinsically non-linear\,\cite{cklaw}, approximate models are usually used \cite{Aspelmeyer,optobook} which assume a linear dependence of the cavity frequency $\omega(X_b)$ on the dimensionless position of the movable mirror, $X_b$. So far these ``linear" models have proved extremely successful, aided by the fact that the bare (or `single-photon') optomechanical coupling strength is usually very small\,\cite{Aspelmeyer,optobook}. However, researchers are continuously exploring ways of enhancing the optomechanical coupling, as well as the potential of optomechanics for ultra-high-accuracy applications such as Planck physics \cite{s-kumar}. Hence, extensions to the linear model are becoming a necessity. The next step beyond the linear approach is to expand the cavity frequency up to and including second order in $X_b$, leading to what we will call the ``quadratic model" \,\cite{sala}. 
\begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{modelinf.png} \end{center} \caption{Schematic of the parameter estimation methodology for driven-dissipative optomechanics. We consider a driven-dissipative optomechanical system featuring a driven (by an external laser) and lossy (photons escaping the cavity) cavity and a damped mechanical oscillator. The mechanical support has low but finite temperature (leading to a non-zero thermal occupation number). The optomechanical coupling arises due to the radiation pressure on the movable mirror. Once the system has reached a steady state we measure an observable. We repeat the measurement many times to get the statistics. Finally, we process the data to find the best guess for the coupling parameters of interest.} \label{fig:0} \end{figure} Accurate knowledge of all the relevant optomechanical coupling parameters will indeed be crucial for virtually any application of these systems. With such motivation in mind, this paper exploits local quantum estimation theory\,\cite{paris} (QET) to investigate how precisely the linear and quadratic coupling strengths may be measured in a model quantum optomechanical system. In a nutshell, QET looks for the best strategy for estimating unknown parameters encoded in the density matrix of a quantum system (i.e. a \textit{quantum statistical model}) \,\cite{estimation,estimationnopto}. The ultimate limits to the precision with which the desired parameters can be estimated may be quantified via the quantum Cram\'{e}r-Rao bounds \cite{paris,safranek,multiparameter}. Within such framework, we consider an optomechanical set-up featuring a driven (and lossy) cavity, whose dynamics is described by a Lindblad master equation. As is typical in recent optomechanics experiments, we assume sufficiently strong driving to approximate the dynamics via a master equation that is bilinear in the canonical operators. 
This leads to a Gaussian steady state \cite{gaussianstates}, whose first and second moments we characterise via a combination of matrix algebra and numerical methods. This state, with its explicit dependence on all the model parameters (in particular the unknown coupling strengths), will embody our quantum statistical model. This in turn may be attacked via general closed-form expressions that are available for QET in Gaussian models\,\cite{gerardo}. A visual representation of the parameter estimation methodology for driven-dissipative optomechanics is shown in Fig. \ref{fig:0}. Using this approach, and for typical values of the model parameters (inspired by recent experiments), we found that it is significantly easier to estimate the linear coupling strength than the quadratic one. This result is indeed to be expected in typical experimental conditions, given the significant difference in the relative strength of the two constants. Our analysis also reveals that the majority of information about the parameters is encoded in the reduced state of the mechanical element. We then investigate how well some specific measurements perform when compared to the fundamental limits imposed by QET. We in particular focus on measurements of the mechanical position $X_b$, field amplitude $Q$, mechanical momentum $P_b$ and field phase $P$, which can all be analyzed within the Gaussian formalism \cite{serafini}. Among these we find that the best strategy for estimating the coupling parameters is a direct measurement of the mechanical position $X_b$. We additionally explored the influence of temperature on the estimation precision of the coupling strengths. In the case of the linear coupling parameter we found that the effect of temperature is mostly significant at lower intracavity photon numbers (i.e. lower driving strengths), where it improves the estimation precision.
At higher intracavity photon numbers, the zero temperature scenario predicts a better estimation precision instead. Interestingly, in the case of the quadratic coupling parameter we found that a hotter mechanical bath gives a higher estimation precision at all intracavity photon numbers in our range. We note that the application of QET in closely related optomechanical set-ups was previously considered in works by Bernad, Sanavio and Xuereb\,\cite{estimationnopto,sanavio}, who adopted the linear model of optomechanics. In Ref.~\cite{estimationnopto}, purely Hamiltonian (non-dissipative) dynamics was assumed, and it was found that larger intracavity photon numbers would facilitate the estimation of the linear coupling strength. Analogously, our results to follow show that a similar conclusion holds in the strong driving regime. However, we also find that for weaker drive strengths the picture is more complicated when considering finite-temperature effects. We note that very recently the same authors went on to consider a driven-damped system~\cite{sanavio} though using a somewhat different approach to ours and without considering quadratic couplings. In particular, Ref.~\cite{sanavio} neglects the contribution of the steady state's first moments (i.e. the averages of the canonical operators) to the quantum Fisher information (QFI). We checked that this is a well-justified assumption for the model parameters adopted therein. However, our results show clearly that there are experimentally accessible parameter regimes where the picture changes dramatically and the first moments can come to dominate the QFI. Finally, Schneiter et al.~\cite{nonlinearregime} have applied local QET to a time-dependent, purely Hamiltonian and quadratic optomechanical system. Such a model, however, is sufficiently different from ours to prevent a simple and direct comparison between the two. This paper is organised as follows. 
In Sec.~\ref{model} we introduce our model of driven-dissipative optomechanics, including both linear and quadratic coupling terms, and outline how the dynamics may be approximated via a bilinear master equation in the limit of strong driving. In Sec.~\ref{theoreticalmethods} we calculate the steady state of the system, which is Gaussian within the considered approximations, and hence is fully characterized by its first and second moments. We then develop the necessary QET tools to investigate the optimal estimation of linear and quadratic coupling parameters. In Sec.~\ref{results} we present and discuss the findings of our research. Finally, in Sec.~\ref{summary} we summarise our results. \section{Model} \label{model} \label{sec:optomodels} We consider a simple optomechanical system consisting of two quantum harmonic oscillators, describing a single-mode cavity field and a single mechanical mode, respectively. The two modes are coupled non-linearly via radiation pressure \cite{Aspelmeyer,optobook,backaction}. We thus assume that our system is described by the Hamiltonian \begin{align} H_0&=\hbar\omega(\hat X_b)\frac{\hat Q^2+\hat P^2}{2}+\hbar\omega_m\frac{\hat X_b^2+\hat P_b^2}{2}, \end{align} with $\hat Q$ and $\hat P$ the amplitude and phase quadratures for the cavity mode, $\hat X_b$ and $\hat P_b$ the dimensionless position and momentum operators of the movable mirror, whilst $m$ and $\omega_m$ are the effective mass and frequency of the mechanics \cite{sala,estimationnopto,cklaw,operator}. The only nontrivial commutators read $[\hat Q,\hat P]=[\hat X_b,\hat P_b]=i$. The optomechanical coupling arises from a parametric dependence of the cavity frequency on the mechanical position $\omega(\hat X_b)$. In the widely explored ``linear" regime of optomechanics, the mechanical motion is assumed to be very small and $\omega(\hat X_b)$ is approximated by an expansion to linear order in $\hat X_b$. 
For the ``quadratic model" of optomechanics, instead, terms up to and including $\hat X_b^2$ are retained: \begin{align} \omega(\hat X_b)\approx \omega_0+\omega'(0) \hat X_b+\frac{1}{2} \omega''(0)\hat X_b^2, \end{align} where $\omega_0$ is the bare cavity frequency \cite{Aspelmeyer,optobook,sala}. The strength of the optomechanical interaction can be quantified with the linear and quadratic coupling strengths, which for a generic set-up are defined as \begin{align} g_1&\equiv \frac{1}{\sqrt{2}}\omega'(0),\label{g1def}\\ g_2&\equiv \frac{1}{2}\omega''(0),\label{g2def} \end{align} respectively \cite{optobook}. Note that we can always ensure that $g_1$ is positive by a redefinition of the positive direction of $X_b$, and that the linear model is recovered by setting $g_2$ to zero. A purely Hamiltonian description of the system is however not sufficient for our purposes, since we aim to describe a (more realistic) driven-dissipative optomechanical system featuring a driven and lossy cavity, and a damped mechanical oscillator. In order to conveniently introduce coherent driving in the model, we shall move to a frame rotating at the frequency of the driving laser, $\omega_L$. In this frame the Hamiltonian of the driven system may be written as \begin{align} H = &\hbar\left(\Delta_0+\omega'(0) \hat X_b+\frac{1}{2} \omega''(0)\hat X_b^2\right)\frac{\hat Q^2+ \hat P^2}{2}\nonumber\\ &+\hbar\omega_m\frac{\hat X_b^2+\hat P_b^2}{2}+\sqrt{2}\hbar{\cal E} \hat Q, \end{align} where $\Delta_0=\omega_0-\omega_L$ is the detuning between the cavity and driving laser and ${\cal E}$ is the drive amplitude. In our model we will include cavity decay at a rate $\kappa$ and mechanical damping at a rate $\Gamma_m$, assuming that the thermal occupation of the cavity mode is negligible. 
We assume that the corresponding master equation describing the dynamics of the system is of the general Lindblad form \cite{easy,breuer}: \begin{align} \label{genformme} \dot{\rho}(t)=&-\frac{i}{\hbar}[H,\rho(t)]\nonumber\\ &+\sum_{ij}\frac{\gamma_{ij}}{2}\left[2\hat R_i\rho(t)\hat R_j-\{\hat R_j\hat R_i,\rho(t)\}\right], \end{align} where we defined the vector of quadrature operators \begin{align} \hat{\boldsymbol{R}}&=(\hat Q,\hat P,\hat X_b,\hat P_b),\label{Rdef} \end{align} while $\boldsymbol{\gamma}$ is the damping matrix: \begin{align} \label{dampingmatrix} \boldsymbol{\gamma}= \begin{pmatrix} \frac{\kappa}{2} &-i\frac{\kappa}{2} &0 & 0\\ i\frac{\kappa}{2} & \frac{\kappa}{2} &0 &0\\ 0& 0& \frac{\Gamma_m}{2}(2\bar{n}_m+1)& -i\frac{\Gamma_m}{2}\\ 0& 0& i\frac{\Gamma_m}{2}&\frac{\Gamma_m}{2}(2\bar{n}_m+1) \end{pmatrix}, \end{align} $\bar{n}_m=1/\left(e^{\hbar\omega_m/k_BT}-1\right)$ is the mean occupancy of the mechanical oscillator, $k_B$ is the Boltzmann constant and $T$ is the temperature of the mechanical reservoir \cite{Aspelmeyer,optobook}. We note that, in choosing a Lindblad form, we automatically excluded the use of the standard Brownian motion master equation (SBMME) \cite{breuer} to describe mechanical damping. Indeed, a Lindblad form greatly simplifies our analysis, since it avoids non-positivity issues that are known to occur in the SBMME \cite{vitali}. The main effect of the drive is to displace the steady states of both the cavity field and the mechanical position \cite{optobook}. We assume that the cavity is driven sufficiently strongly (and that the optomechanical couplings are weak enough) so that the system dynamics can be approximated via a bilinear master equation description, where only small quantum fluctuations around the semi-classical steady state are considered \cite{optobook,sanavio}. 
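For orientation, the occupancy $\bar{n}_m$ defined above can be evaluated numerically for the millikelvin temperatures typical of such experiments. The mechanical frequency used below is an illustrative assumption, not a value taken from our set-up.

```python
import math

# Physical constants (SI units).
HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J / K

def nbar(T, omega_m):
    """Mean thermal occupancy 1 / (exp(hbar * omega_m / (kB * T)) - 1)."""
    if T == 0.0:
        return 0.0
    # expm1 avoids loss of precision when hbar * omega_m << kB * T
    return 1.0 / math.expm1(HBAR * omega_m / (KB * T))

# Illustrative mechanical frequency (an assumption, not from the text):
omega_m = 2 * math.pi * 1e6  # a 1 MHz oscillator

# Occupancy at two representative bath temperatures:
n_low, n_high = nbar(1e-3, omega_m), nbar(80e-3, omega_m)
```

For megahertz oscillators at millikelvin temperatures one is deep in the regime $k_BT\gg\hbar\omega_m$, where $\bar{n}_m\approx k_BT/\hbar\omega_m$ and the occupancy grows linearly with temperature.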
In detail, we start by displacing our canonical operators as per $\hat{\mathbf{R}}\to\hat{\mathbf{R}}+\mathbf{R}_0$, where \begin{align} \mathbf{R}_0&=(Q_0,P_0,x_0,p_0) \label{averages} \end{align} is the vector of steady-state quadrature averages. Here, $x_0$ and $p_0$ are the average position and momentum of the mechanics in the steady state, while $Q_0$ and $P_0$ are the steady state displacements of the amplitude and phase quadratures, respectively \cite{optobook}. Of course, the steady-state expectation values of the transformed operators will now vanish. This results in the following equations for the steady-state values of the system's first moments: \begin{align} \label{average1} Q_0 &=\frac{-2\Delta_{eff}\mathcal{E}}{\sqrt{2}\left(\Delta_{eff}^2+\frac{\kappa^2}{4}\right)},\\ \label{average2} P_0&=\frac{-\kappa\mathcal{E}}{\sqrt{2}\left(\Delta_{eff}^2+\frac{\kappa^2}{4}\right)},\\ \label{average3} x_0&=-\frac{\omega'(0)\omega_m\mathcal{E}^2}{\left(\omega_m^2+\frac{\Gamma_m^2}{4}\right)\left(\Delta_{eff}^2+\frac{\kappa^2}{4}\right)+\omega''(0)\omega_m\mathcal{E}^2},\\ \label{average4} p_0&=\frac{\Gamma_m}{2\omega_m}x_0, \end{align} where $\Delta_{eff}=\Delta_0-\sqrt{2}g_1x_0+g_2x_0^2$ is the effective detuning. The non-linearity of Eqs. (\ref{average1}-\ref{average4}) suggests that multiple steady state solutions are possible \cite{Aspelmeyer,optobook}. This is known as dynamical multistability \cite{Aspelmeyer,optobook}. In detail, depending on the driving strength up to five (quadratic model) or three (linear model) different steady-state solutions can exist. In this work, we only focus on parameter regimes where the system is \textit{stable}, i.e., where a unique real solution to Eqs.~(\ref{average1}-\ref{average4}) exists. This in turn places an upper bound on the drive strength $|\mathcal{E}|^2$ \cite{estimationnopto}.
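Because $\Delta_{eff}$ itself depends on $x_0$, Eq.~(\ref{average3}) is implicit; in the weak-coupling, stable regime a simple fixed-point iteration suffices. The sketch below uses illustrative dimensionless parameters (in units of $\omega_m$), not the values adopted for our later results.

```python
import math

# Illustrative dimensionless parameters, in units of omega_m (assumptions):
omega_m, kappa, Gamma_m = 1.0, 0.5, 0.01
Delta_0, E = 1.0, 10.0
g1, g2 = 1e-3, 1e-6
wp = math.sqrt(2.0) * g1   # omega'(0), cf. Eq. (g1def)
wpp = 2.0 * g2             # omega''(0), cf. Eq. (g2def)

def delta_eff(x0):
    # Effective detuning Delta_eff = Delta_0 - sqrt(2) g1 x0 + g2 x0^2
    return Delta_0 - math.sqrt(2.0) * g1 * x0 + g2 * x0**2

def x0_map(x0):
    # Right-hand side of Eq. (average3), with Delta_eff evaluated at x0
    d = delta_eff(x0)
    num = -wp * omega_m * E**2
    den = (omega_m**2 + Gamma_m**2 / 4) * (d**2 + kappa**2 / 4) \
          + wpp * omega_m * E**2
    return num / den

# Fixed-point iteration: for weak coupling the map is a strong contraction.
x0 = 0.0
for _ in range(100):
    x0 = x0_map(x0)

p0 = Gamma_m / (2 * omega_m) * x0       # Eq. (average4)
residual = abs(x0 - x0_map(x0))         # self-consistency check
```

With $\omega'(0)>0$ the radiation-pressure force pushes the mirror to negative $x_0$, and for these weakly coupled parameters the iteration converges to machine precision in a handful of steps, consistent with the unique-solution (stable) regime assumed in the text.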
After the displacement has been implemented, we neglect terms that are beyond quadratic order in the transformed canonical operators \cite{optobook}. The corresponding master equation reads \begin{align} \label{linearisedquadraticrho} \dot{\rho}(t)=&-\frac{i}{\hbar}[H_B,\rho(t)]\nonumber\\ &+\sum_{ij}\frac{\gamma_{ij}}{2}\left[2\hat R_i\rho(t)\hat R_j-\{\hat R_j\hat R_i,\rho(t)\}\right], \end{align} where we note that the Lindblad operators remain unchanged, while the Hamiltonian now takes the bilinear form \begin{align} \label{linearisedham} H_B=&\frac{\hbar}{2}\Delta_{eff}\left(\hat Q^2+\hat P^2\right)+\frac{\hbar}{2}\omega_m \hat P_b^2+\frac{\hbar}{2}\omega_{eff}\hat X_b^2\nonumber\\ &+\hbar g_{eff}\hat X_b\left(\hat QQ_0+\hat PP_0\right), \end{align} with $\omega_{eff}=\omega_m+2g_2|\alpha|^2$ the effective mechanical frequency and $g_{eff}=-\sqrt{2}g_1+2g_2x_0$ the effective coupling strength. Note that the assumption of strong cavity driving translates into the condition that the intracavity photon number is large, i.e., $|\alpha|^2\equiv(Q_0^2+P_0^2)/2\gg 1$. \section{Estimating coupling constants from the steady state} \label{theoreticalmethods} \subsection{Covariance Matrix Formalism} Due to its bilinear form, the master equation \eqref{linearisedquadraticrho} admits a Gaussian steady state that can, in general, be fully characterised by its first and second moments of the quadrature operators $\hat{\mathbf{R}}$ \cite{gaussianstates}. After having determined our steady state, we will be able to exploit general closed-form expressions that are available for QET in Gaussian models \cite{gerardo}. As anticipated in the previous section, the first moments of our Gaussian steady state are given by $\mathbf{R}_0=(Q_0,P_0,x_0,p_0),$ and are found by solving Eqs.~(\ref{average1}-\ref{average4}) --- recall also that we will only consider parameter regimes in which such solution is unique. 
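Substituting Eqs.~(\ref{average1}) and (\ref{average2}) gives a compact closed form for the intracavity photon number,
\begin{align}
|\alpha|^2=\frac{Q_0^2+P_0^2}{2}=\frac{\mathcal{E}^2}{\Delta_{eff}^2+\frac{\kappa^2}{4}},
\end{align}
so the strong-driving condition $|\alpha|^2\gg 1$ translates directly into a lower bound on the drive amplitude $\mathcal{E}$ at fixed effective detuning and cavity decay rate.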
The second moments are instead encoded in the steady state covariance matrix $\bar{\sigma}$, which in our displaced frame of reference is given by \cite{gaussianstates} \begin{align} \label{covariancematrix} \scalemath{0.8}{ {\boldsymbol{\bar{\sigma}}}=\begin{bmatrix} \langle \hat Q^2 \rangle&\langle \frac{1}{2}\{\hat Q,\hat P\}\rangle&\langle \hat Q\hat X_b\rangle &\langle \hat Q\hat P_b\rangle\\ \langle \frac{1}{2}\{\hat Q,\hat P\}\rangle&\langle \hat P^2\rangle&\langle \hat P\hat X_b\rangle&\langle \hat P\hat P_b\rangle\\ \langle \hat X_b\hat Q\rangle&\langle \hat X_b\hat P\rangle &\langle \hat X_b^2\rangle &\langle \frac{1}{2}\{\hat X_b,\hat P_b\}\rangle\\ \langle \hat P_b\hat Q\rangle&\langle \hat P_b\hat P\rangle&\langle \frac{1}{2}\{\hat X_b,\hat P_b\}\rangle&\langle \hat P_b^2\rangle \end{bmatrix} }, \end{align} where $\{\hat A,\hat B\}\equiv \hat A\hat B+\hat B\hat A$ is the anticommutator. As detailed in Appendix~\ref{covariancematrixlanguage}, master equation \eqref{linearisedquadraticrho} implies the following Lyapunov equation for the steady state covariance matrix: \begin{align} \label{Lyapunovsolve1} B^T{\boldsymbol{\bar{\sigma}}}+{\boldsymbol{\bar{\sigma}}}B=C, \end{align} where \begin{align} \label{B} B&=\frac{i}{\hbar}\boldsymbol{HW}+\boldsymbol{\gamma_AW},\\ C&=-\boldsymbol{W\gamma_SW},\\ \boldsymbol{H}&= \begin{pmatrix} \hbar\Delta_{eff}& 0& \hbar g_{eff} Q_0& 0\\ 0&\hbar\Delta_{eff} & \hbar g_{eff}P_0& 0\\ \hbar g_{eff}Q_0&\hbar g_{eff}P_0 &\hbar\omega_{eff}& 0\\ 0& 0& 0& \hbar\omega_m \end{pmatrix},\\ \boldsymbol{W}&= \begin{pmatrix} 0 & i &0 & 0\\ -i & 0 &0 &0\\ 0& 0& 0& i\\ 0& 0& -i&0 \end{pmatrix},\\ \boldsymbol{\gamma_S}&=\frac{\boldsymbol{\gamma}+\boldsymbol{\gamma}^T}{2},\\ \boldsymbol{\gamma_A}&=\frac{\boldsymbol{\gamma}-\boldsymbol{\gamma}^T}{2}. \end{align} We note that Eq.~\eqref{Lyapunovsolve1} can be solved analytically in terms of the model parameters and the vector of averages $\mathbf{R}_0$. 
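One standard route to solving Eq.~\eqref{Lyapunovsolve1} (a generic method, not necessarily the one used for our numerics) is vectorisation: applying the identity $\mathrm{vec}(AXB)=(B^T\otimes A)\,\mathrm{vec}(X)$ turns the Lyapunov equation into the $16$-dimensional linear system
\begin{align}
\left(\boldsymbol{I}_4\otimes B^T+B^T\otimes \boldsymbol{I}_4\right)\mathrm{vec}\left(\boldsymbol{\bar{\sigma}}\right)=\mathrm{vec}(C),
\end{align}
where $\boldsymbol{I}_4$ is the $4\times 4$ identity and $\mathrm{vec}$ stacks the columns of a matrix, yielding $\boldsymbol{\bar{\sigma}}$ in closed form once $\mathbf{R}_0$ is known.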
The latter, however, may in general not admit an analytical expression in terms of the model parameters, as we recall it is the solution to the nonlinear system of equations \eqref{average1}-\eqref{average4}. In the next section we shall show how to develop a comprehensive QET analysis of the coupling parameters solely from the knowledge of the first and second moments of our Gaussian steady state. \subsection{Quantum Estimation Theory for Gaussian States} The aim of quantum estimation theory (QET) is to identify the best strategy for estimating one or more parameters encoded in the density matrix of a quantum system \cite{paris,estimation,estimationnopto}. Here we focus on local QET, which seeks a strategy that maximises the Fisher information over all possible measurements, and implicitly assumes that a rough estimate of the parameter value is known in advance \cite{paris}. In our model of driven-dissipative optomechanics, the parameters to be estimated shall be the coupling strengths $g_1$ and $g_2$. As anticipated, all of the information about these parameters will be contained in the steady-state averages, $\mathbf{R}_0$, as well as in the steady state covariance matrix, $\boldsymbol{\bar{\sigma}}$. Specifically, for our coupling parameters $(g_1,g_2)$ the elements of the quantum Fisher information matrix (QFIM) are given by \begin{align} \label{QFIM} I_{i,j}&=\left(\partial_{g_i}\boldsymbol{R}_0^T\right)\boldsymbol{\bar{\sigma}}^{-1}\left(\partial_{g_j}\boldsymbol{R}_0\right)\nonumber\\ &+2Tr\left[\left(\partial_{g_i}\boldsymbol{\bar{\sigma}}\right)\left(4\mathcal{L}_{\boldsymbol{\bar{\sigma}}}+\mathcal{L}_W\right)^{-1}(\partial_{g_j}\boldsymbol{\bar{\sigma}})\right], \end{align} where $\mathcal{L}_{\boldsymbol{\bar{\sigma}}} (\boldsymbol{A})=\boldsymbol{\bar{\sigma}} \boldsymbol{A}\boldsymbol{\bar{\sigma}}$, $\mathcal{L}_W (\boldsymbol{A})=\boldsymbol{W} \boldsymbol{A}\boldsymbol{W}$. 
Note also that the term $\left(4\mathcal{L}_{\boldsymbol{\bar{\sigma}}}+\mathcal{L}_W\right)^{-1}$ refers to the pseudoinverse if the term inside the bracket is singular \cite{gerardo,monras}. The first term is the contribution of the averages to the total QFI, while the second is the contribution of the variances and covariances \cite{monras}. This terminology will be convenient later on as we seek to unravel how the different terms contribute across different parameter regimes. We note, however, that this terminology refers only to the origin of the gradients with respect to the coupling parameters. Hence, whilst the first term in Eq.~(\ref{QFIM}) only contains gradients of the averages with respect to the coupling constants (and we will therefore call it the contribution of the averages), it also depends on the covariance matrix. Eq. (\ref{QFIM}) facilitates efficient numerical computation of the QFI \cite{monras}. The ultimate limit to parameter estimation in this context is set by the quantum Cram\'{e}r-Rao bound (QCRB) \cite{paris,safranek,multiparameter}. In multi-parameter estimation theory both coupling parameters are assumed to be unknown (or only known with low precision). In this case, the QCRB relates the covariance matrix of any pair of unbiased estimators for the parameters $(g_1, g_2)$ to the QFIM. For $M$ experimental runs, the corresponding QCRB reads \cite{multiparameter,gerardo} \begin{align} Cov(g_1,g_2)\geq \frac{1}{M} \boldsymbol{I}^{-1}. \end{align} The limiting case of single-parameter estimation theory can be reached if we assume that only one parameter is unknown, say $g_i$. In this case the QCRB relates the variance $Var(g_i)$ of an unbiased estimator of the parameter $g_i$ to the corresponding diagonal element of the QFIM. For $M$ experimental runs, the corresponding bound reads \cite{gerardo,serafini,monras,zwierz,zou,abinitio} \begin{align} \label{QCRB} Var(g_i) \geq \frac{1}{M\,I_{ii}}.
\end{align} In other words, the diagonal elements of the QFI matrix quantify the ``best-case-scenario" performance for the estimation of each individual parameter. Hence, in what follows we shall pick Eq.~\eqref{QCRB} as our benchmark in evaluating the performance of various measurements (see below). Note that in single-parameter estimation theory the saturation of the QCRB is guaranteed, at least in the limit $M\to \infty$, and assuming that every mathematically allowed quantum measurement can be implemented \cite{multiparameter,gerardo,monras}. This, however, is not true for the estimation of multiple parameters: in this case the optimal measurements for different parameters may not be compatible \cite{multiparameter}. While the QFI quantifies the ultimate quantum limit to parameter estimation \cite{paris,multiparameter,advanced}, the estimation performance of specific measurement strategies may be quantified via the classical Fisher information (FI) matrix \cite{paris,luati}. In our context, the FI measures the amount of information that a classical random variable $s$ (the outcome of a quantum measurement) contains about the parameters $g_1,g_2$ \cite{fisher}. The FI matrix elements take the form \begin{align} \label{classicalfisher} J_{i,j}=\int_{-\infty}^{\infty} ds \frac{1}{p_{g_1,g_2}(s)}\left(\frac{\partial p_{g_1,g_2}(s)}{\partial g_i}\right)\left(\frac{\partial p_{g_1,g_2}(s)}{\partial g_j}\right), \end{align} where $p_{g_1,g_2}(s)$ is the probability distribution of the measurement outcome $s$, assumed to be a smooth function of $(g_1,g_2)$ \cite{gerardo,fisher}. Depending on the chosen observables, analytical solutions to the integral in Eq. (\ref{classicalfisher}) may exist. This is particularly true for quadrature measurements (i.e. a measurement of $\hat Q, \hat P, \hat X_b, \hat P_b$ or a linear combination thereof), provided that the measured state is Gaussian \cite{serafini}. 
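To make the role of $M$ in such bounds concrete, the following toy simulation (a purely classical sketch, unrelated to the optomechanical model itself) checks that the sample-mean estimator of a Gaussian mean saturates the Cram\'{e}r-Rao bound, $Var(\hat{g})\geq 1/(M\,J)$ with $J=1/v$ per sample:

```python
import random

# Toy check of the Cramer-Rao bound: M samples of a Gaussian with unknown
# mean g and known variance v carry Fisher information M / v, so any
# unbiased estimator obeys Var(g_hat) >= v / M.  The sample mean (the MLE)
# saturates this bound.  All numbers here are illustrative.

random.seed(0)
g_true, v, M, trials = 0.7, 2.0, 50, 4000
sd = v ** 0.5

estimates = []
for _ in range(trials):
    samples = [random.gauss(g_true, sd) for _ in range(M)]
    estimates.append(sum(samples) / M)   # MLE = sample mean

mean_est = sum(estimates) / trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / (trials - 1)
crb = v / M   # Cramer-Rao bound, 1 / (M * Fisher information per sample)
```

Across many repetitions the empirical variance of the estimator matches $v/M$, illustrating why the diagonal elements of the (quantum) Fisher information matrix directly set the best achievable error bars.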
In the case of optomechanics, it is well known that one can use a homodyne detection scheme to measure the light quadratures $\hat Q$, $\hat P$ \cite{homodyne,serafini}. However, we shall also consider a direct measurement of the mechanical quadratures, $\hat X_b$ and $\hat P_b$, for completeness. In practice this could potentially be achieved using e.g. another optical mode of the cavity. In this scenario, the probability distribution associated with a measurement of $\hat s\in\{\hat Q,\hat P,\hat X_b,\hat P_b\}$ has the following expression \cite{serafini}: \begin{align} p_{g_1,g_2}(s)=\frac{e^{-\frac{(s-s_0(g_1,g_2))^2}{2\bar\sigma_{kk}(g_1,g_2)}}}{\sqrt{2\pi\bar{\sigma}_{kk}(g_1,g_2)}}, \end{align} where $s_0(g_1,g_2)$ is the steady state average of the chosen quadrature, appropriately chosen from the set $\{Q_0,P_0,x_0,p_0\}$, while $\bar{\sigma}_{kk}(g_1,g_2)$ is the corresponding diagonal element of the steady state covariance matrix ($\bar{\sigma}_{11}$ for $\hat s=\hat Q$, $\bar{\sigma}_{22}$ for $\hat s=\hat P$ and so on). In this setting an analytical solution to the integral in Eq. (\ref{classicalfisher}) exists and is given by \begin{align} \label{analyticalFI} J_{i,j}&=\frac{1}{2\bar{\sigma}_{kk}(g_1,g_2)^2}\times\nonumber\\ &\Big[2\bar{\sigma}_{kk}(g_1,g_2)\left(\frac{\partial s_0(g_1,g_2)}{\partial g_j}\right)\left(\frac{\partial s_0(g_1,g_2)}{\partial g_i}\right)\nonumber\\ &+\left(\frac{\partial \bar{\sigma}_{kk}(g_1,g_2)}{\partial g_j}\right)\left(\frac{\partial \bar{\sigma}_{kk}(g_1,g_2)}{\partial g_i}\right)\Big]. \end{align} Note that the choice of a strategy to estimate the parameters $(g_1,g_2)$ is optimal if the FI and QFI matrices are equal, i.e. $\boldsymbol{J}=\boldsymbol{I}$. As anticipated, we shall focus solely on the diagonal elements of the QFIM (Eq. (\ref{QFIM})): $I_{1,1}$ and $I_{2,2}$, respectively. 
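The single-parameter specialisation of Eq.~(\ref{analyticalFI}) can be verified numerically. The sketch below uses a toy dependence of the Gaussian mean and variance on the parameter (not the actual steady-state expressions) and compares the closed form against a direct evaluation of the Fisher-information integral of Eq.~(\ref{classicalfisher}):

```python
import math

# Toy parametrisation of a Gaussian measurement outcome (illustrative only):
def s0(g):  return g            # mean of the outcome distribution
def var(g): return 1.0 + g * g  # variance of the outcome distribution

def fi_analytic(g, h=1e-6):
    # Single-parameter Gaussian Fisher information:
    # J = (ds0/dg)^2 / sigma + (dsigma/dg)^2 / (2 sigma^2)
    ds0 = (s0(g + h) - s0(g - h)) / (2 * h)
    dv = (var(g + h) - var(g - h)) / (2 * h)
    v = var(g)
    return ds0**2 / v + dv**2 / (2 * v**2)

def pdf(s, g):
    v = var(g)
    return math.exp(-(s - s0(g))**2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def fi_numeric(g, h=1e-5, lo=-20.0, hi=20.0, n=20001):
    # Direct quadrature of J = integral ds (d_g p)^2 / p
    ds = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        s = lo + k * ds
        p = pdf(s, g)
        dp = (pdf(s, g + h) - pdf(s, g - h)) / (2 * h)
        if p > 1e-300:
            total += dp * dp / p * ds
    return total
```

At $g=1$ the toy model gives equal contributions of $1/2$ from the mean and from the variance, and the quadrature agrees with the closed form to high accuracy, mirroring the split between first- and second-moment contributions discussed for the QFI.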
As noted above, the diagonal elements are indeed the ``most optimistic'' quantifiers of estimation precision of the coupling strengths. In general, however, the combined precision of the two parameter estimations will be worse than what the diagonal elements suggest. Both the definition of the QFI (Eq.~(\ref{QFIM})) and that of the FI (Eq.~(\ref{analyticalFI})) rely on the derivatives of the steady state covariance matrix and the averages with respect to the coupling strengths. Since we have seen that both $\bar{\sigma}$ and $\mathbf{R}_0$ are determined by the nonlinear system of equations \eqref{average1}-\eqref{average4}, which in general can only be solved numerically, we use implicit differentiation to calculate the derivatives in question. This allows us to express all our quantities of interest in terms of the numerical solution to the above nonlinear equations, and to avoid numerical differentiation altogether. \begin{figure*}[t!] \begin{center} \includegraphics[width=.475\linewidth]{quadvslinearfullmech.png} \hfill \includegraphics[width=.475\linewidth]{zerovs1mkvs80mkg2comparison.png} \end{center} \caption{{(a)} Log-log plot of the relative error bound on $g_1$ (as implied by QFI) against the intracavity photon number, $|\alpha|^2$ as predicted by the linear and quadratic models in the zero temperature (green circles and orange dashed line, respectively), low temperature (red triangles and black line, respectively) and high temperature (blue squares and purple dot-dashed line, respectively) scenarios.
{(b)} Log-log plot of the relative error bound on $g_2$ (as implied by QFI) against the intracavity photon number, $|\alpha|^2$ as predicted by the quadratic model in the zero temperature (purple dot-dashed line), low temperature (black line) and high temperature (orange dashed line) scenarios.} \label{Fig1} \end{figure*} \section{Results} \label{results} For simplicity we consider a specific geometry, that of a Fabry-Perot cavity \cite{Aspelmeyer}, in which one mirror is fixed and the other is mounted on the mechanical oscillator. In this case, assuming an ideal one-dimensional cavity field, the cavity frequency takes the specific form \begin{align} \omega(\hat X_b)=\frac{\omega_0}{1+\frac{\sqrt{2}x_{zp}\hat X_b}{L}}, \end{align} with $L$ the bare cavity length and $x_{zp}=\sqrt{\hbar/2m\omega_m}$ the ground state position uncertainty of the mechanical oscillator \cite{optobook}. Making the standard assumption that the mechanical motion is very small on the scale of the cavity length, $x_{zp}\hat X_b/L\ll 1$, for the quadratic model of optomechanics $\omega(\hat X_b)$ is approximated as \begin{align} \omega(\hat X_b)\approx \omega_0-\sqrt{2}g_1\hat X_b +g_2 \hat X_b^2, \end{align} where $g_1=\omega_0 x_{zp}/L$ and $g_2=2\omega_0 x_{zp}^2/L^2$ are the linear and quadratic coupling strengths in accordance with Eqs. (\ref{g1def}) and (\ref{g2def}), respectively \cite{Aspelmeyer,optobook,sala}.
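The truncation above can be checked term by term: writing $u=\sqrt{2}x_{zp}\hat X_b/L$, the exact frequency is $\omega_0/(1+u)$, the quadratic model is $\omega_0(1-u+u^2)$, and the residual is $-\omega_0 u^3/(1+u)$. A brief numerical sketch (with a toy value of $x_{zp}/L$, far larger than in any real device, purely to expose the scaling):

```python
import numpy as np

omega0 = 1.0
xzp_over_L = 0.01   # toy ratio x_zp/L; real devices are many orders smaller

def omega_exact(x):
    """Exact Fabry-Perot frequency omega_0/(1+u), u = sqrt(2) x_zp x / L."""
    u = np.sqrt(2) * xzp_over_L * x
    return omega0 / (1 + u)

def omega_quadratic(x):
    """Quadratic model with g1 = omega_0 x_zp/L, g2 = 2 omega_0 x_zp^2/L^2."""
    g1 = omega0 * xzp_over_L
    g2 = 2 * omega0 * xzp_over_L**2
    return omega0 - np.sqrt(2) * g1 * x + g2 * x**2

def omega_linear(x):
    """Linear model, g2 term dropped."""
    g1 = omega0 * xzp_over_L
    return omega0 - np.sqrt(2) * g1 * x

# the residual of the quadratic model is -omega0 u^3/(1+u), i.e. cubic in u
x = 1.0
u = np.sqrt(2) * xzp_over_L * x
err_quad = abs(omega_exact(x) - omega_quadratic(x))
err_lin = abs(omega_exact(x) - omega_linear(x))
```

The quadratic-model error scales as $u^3$ (halving $x$ reduces it roughly eightfold), while the linear model is off at order $u^2$.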
\begin{figure*}[t] \centering \includegraphics[width=.475\linewidth]{totalg0vsmechvslight.png} \hfill \includegraphics[width=.475\linewidth]{totalg2vsmechvslight.png} \caption{Relative error bounds on coupling constants (a) $g_1$ and (b) $g_2$, as a function of the intracavity photon number, $|\alpha|^2$ as implied by the global, light and mechanics QFIs in the case of zero temperature (black line, red dashed line, purple squares, respectively) and high temperature (blue dot-dashed line, green dotted line and orange circles, respectively).} \label{Fig2} \end{figure*} Here, we examine three scenarios: zero temperature ($T=0$ K), low temperature ($T=1$ mK) and ``high'' temperature ($T=80$ mK). In each case we are looking for the best strategy to estimate the linear, ${g}_1$, and quadratic, ${g}_2$, coupling strengths. First, we establish the fundamental quantum limits on the estimation precision, which, in accordance with the quantum Cram\'{e}r-Rao bound (QCRB), are quantified with the ``global" QFIs (i.e. the QFIs calculated from the bipartite state of light plus mechanics). Additionally, by tracing out the mechanical (light) mode, we can calculate ``local" QFIs that are relevant when only the light (mechanical) mode is directly measurable. Comparing these local QFIs with the global ones will also reveal how much information about the coupling parameters is contained in the reduced states of light and mechanics. Finally, we compare the QFI limits to the performance of a small selection of ``realistic'' measurements (quantified with the respective FI), including those of $\hat Q$, $\hat P$, $\hat X_b$ and $\hat P_b$. This can help us discern which of the experimentally common measurements constitutes the best strategy for parameter estimation in each scenario. In many cases we have chosen to measure the estimation precision with the relative error \begin{equation} \frac{\Delta g_i}{g_i}\geq\frac1{g_i\sqrt{I_{i,i}}}.
\end{equation} Our choice of parameter values is motivated by recent experiments where the ground state of a mechanical oscillator was approached via back-action cooling arising from a red-detuned laser drive, specifically \cite{goodcavity}. Correspondingly, we adopt the following parameter values: $\omega_m=1.1\times 10^7$ Hz, $m=4.8\times 10^{-14}$ kg, $\Gamma_m=32$ Hz, $\Delta_0=\omega_m$, $\kappa=10^5$ Hz, $g_1=2\times 10^2$ Hz and $g_2=2g_1^2/\omega_0\approx 1.1\times 10^{-5}$ Hz \cite{goodcavity}. In order to ensure that the driving is strong enough for the Gaussian approximation to hold, and that we do not encounter any stability issues, we consider the region $10^8\leq\mathcal{E}\leq3.8\times10^9$ Hz in all three scenarios. In terms of the intracavity photon number, $|\alpha|^2$, this corresponds to the range $80\lesssim|\alpha|^2\lesssim1.2\times 10^5$ (or $1.9 \lesssim \log_{10}(|\alpha|^2)\lesssim 5.1$). In Fig. \ref{Fig1}(a) we investigate the effects of the higher order $g_2$ term, temperature and driving on the estimation precision of the linear coupling strength, $g_1$. Clearly, in all scenarios the effect of the higher order corrections due to the $g_2$ term is minimal: the linear and quadratic models show a very good agreement at all $|\alpha|^2$. This is to be expected as $g_2$ is several orders of magnitude below $g_1$ in our example. As an interesting aside, in a membrane-in-the-middle optomechanical system it is possible to engineer a purely quadratic coupling via the position of the membrane, in which case $g_2$ would clearly play the key role \cite{optobook,membrane}. Figure \ref{Fig1}(a) reveals a surprisingly complex dependence on temperature. There is a crossover around $\log_{10}(|\alpha|^2)\sim 4.6$ (or $|\alpha|^2\sim 4\times 10^4$): below this value the high temperature scenario offers the best precision for estimating $g_1$, but above it the best precision is found at lower temperatures.
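The photon-number window quoted above can be reproduced from the standard steady-state relation for a coherently driven cavity, $|\alpha|^2=\mathcal{E}^2/[(\kappa/2)^2+\Delta^2]$, here evaluated at the bare detuning $\Delta\approx\Delta_0$ (an approximation, since the optomechanical coupling renormalizes the detuning):

```python
# Steady-state intracavity photon number of a coherently driven cavity,
# |alpha|^2 = E^2 / ((kappa/2)^2 + Delta^2), at the ends of the drive window
# quoted in the text (all rates in Hz; Delta approximated by Delta_0).
import math

kappa = 1e5            # cavity decay rate
Delta = 1.1e7          # detuning, Delta_0 = omega_m
E_low, E_high = 1e8, 3.8e9

def photon_number(E):
    return E**2 / ((kappa / 2)**2 + Delta**2)

n_low, n_high = photon_number(E_low), photon_number(E_high)
```

This gives $|\alpha|^2\approx 83$ at $\mathcal{E}=10^8$ Hz and $|\alpha|^2\approx 1.2\times10^5$ at $\mathcal{E}=3.8\times10^9$ Hz, consistent with the quoted range.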
As discussed in Appendix \ref{Appendix} (see in particular Fig. \ref{fig:4}), much of this behavior can be understood by looking at the relative contributions of the variances and averages to the QFI and how these change with temperature. The contribution of the averages to the QFI always increases monotonically with the intracavity photon number and hence it always eventually dominates. This, taken together with the fact that the contribution of the averages to the QFI is reduced by increasing the temperature of the mechanical reservoir, means that the best precision is eventually expected at zero-temperature. However, for non-zero temperatures the contribution to the QFI from the variances is important and it is dominant at sufficiently low intracavity photon numbers. This is not surprising given the strong cooling effect that the driven cavity can have on the mechanics\,\cite{Aspelmeyer, optobook}, leading to a strong dependence of the corresponding variances on the coupling strength (and the intracavity photon number). The impact of the cooling effect gets stronger at higher temperatures. In contrast, at zero temperature there is a very weak dependence of the variances on $g_1$, so that in that case the averages always dominate. Away from the zero-temperature limit, the contribution of the variances to the QFI develops a peak at a particular intra-cavity photon number, reflected as the maxima in the relative error for $g_1$ seen at finite temperatures in Fig. \ref{Fig1}(a). 
\begin{figure*} \centering \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{totalvsclassicalmeasurementsg0quadfullhigh.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{wholevsclassicalfullg2quadhigh.png} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{totalvsclassicalmeasurementsg0quadfullzerotemp.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{wholevsclassicalfullg2quadzerotemp.png} \end{subfigure} \caption{Log-log plots of relative error bounds for coupling constants against the intracavity photon number, $|\alpha|^2$ as implied by the QFI (black line) and the measurements of $\hat P$ (red dotted line), $\hat Q$ (green dot-dashed line), $\hat X_b$ (blue circles) and $\hat P_b$ (purple dashed line). (a) and (b) are for $g_1$ and $g_2$ in the high temperature scenario respectively, whilst (c) and (d) are for $g_1$ and $g_2$ in the zero-temperature scenario.} \label{fig:3} \end{figure*} In Fig. \ref{Fig1}(b) we explore the effect of cavity driving and temperature on the estimation precision of the quadratic coupling strength, $g_2$. Overall the relative estimation errors are much higher for $g_2$ than for $g_1$, which is unsurprising as the former is several orders of magnitude smaller. Interestingly, at all $|\alpha|^2$ within our allowed range the high temperature scenario predicts the lowest relative error bounds on $g_2$: a hotter mechanical bath gives a better estimation precision for all driving strengths below the instability threshold. This can be traced back to the fact that, in the estimation of $g_2$ as well, the information content of the variances is higher than that of the averages at lower driving strengths, and in the high temperature case it remains so for all driving strengths up until the instability threshold.
The effect gets weaker as the temperature is reduced, and in the $T=0$ case a crossover is seen, with the contribution of the averages eventually becoming dominant (see Fig. \ref{fig:5} in Appendix \ref{Appendix}). The overall result is that the relative error bounds on $g_2$ decrease monotonically with increasing drive across the parameter range studied. In Fig. \ref{Fig2} we compare global and local QFIs for $g_1$ (Fig. \ref{Fig2}(a)) and $g_2$ (Fig. \ref{Fig2}(b)). In a nutshell, we find that the majority of the information about the coupling parameters is contained in the reduced state of the mechanics. Note that, in standard optomechanical experiments, measurements are typically performed on the light mode. Nevertheless, our results suggest that significantly more information about the couplings might be available by probing the mechanical motion more directly. For $g_1$, the uncertainties obtained from the mechanical subsystem alone, or from the optical subsystem alone, drop monotonically with drive strength only at $T=0$, matching what happens with the full system. In this case, even at $80$ mK the uncertainties obtained from the reduced state of the mechanics almost match those from the full system. In contrast, for $g_2$ the uncertainty obtained from the state of the mechanics alone reaches that achieved with the full state only in the zero-temperature limit. Finally, in Fig. \ref{fig:3} we show how some realistic measurements perform in comparison to the ultimate limits given by the QFI. The figure shows that, out of the measurements considered, the mechanical position almost always does best at estimating the coupling parameters. The ultimate limits on the estimation precision of the coupling strengths can only be approached at low and intermediate drive strengths via measurement of $\hat X_b$ at zero temperature. For higher temperatures this limit is approached for $g_1$ at very high intracavity photon numbers, whilst for $g_2$ it is never reached.
\section{Conclusions} \label{summary} We have applied local QET to the problem of estimating the linear and quadratic coupling parameters in driven-dissipative optomechanics. For experimentally realistic values of the model parameters, inspired by Ref.~\cite{goodcavity}, we have found that it is considerably easier to estimate the linear coupling strength than the quadratic one. Our analysis has also shown that the best strategy for estimating the coupling parameters can be well approximated by a direct measurement of the mechanical position $\hat X_b$. Exploring the effect of temperature on the estimation precision of the coupling strengths, we found that higher temperatures are not always detrimental to the estimation performance. The effect of temperature is particularly striking when analyzing the estimation of the quadratic coupling parameter: in this case we found that a hotter mechanical bath ($T=80$ mK) resulted in a higher estimation precision for all drive strengths below the instability threshold. In contrast, in the case of the linear coupling strength the effect of temperature is most significant at lower driving. Past a certain drive strength, better estimation precision for the linear coupling parameter is instead achieved at lower temperatures. \section*{Acknowledgments} T.T. acknowledges support from the University of Nottingham via a Nottingham Research Fellowship. K.S. and A.D. acknowledge support from the University of Nottingham. A.D.A. was supported through a Leverhulme Trust Research Project Grant (RPG-2018-213).
\section*{Introduction} \label{sec:intro} Measurement and characterization of quantum systems comprise a long-standing problem in quantum information science~\cite{James2001}. However, the exponential scaling of Hilbert space dimension with the number of qubits makes full characterization extremely challenging, inspiring a plethora of approaches designed to estimate properties of quantum states with as few measurements as possible, such as compressed sensing~\cite{Gross2010, Flammia2012}, adaptive tomography~\cite{Huszar2012, Kravtsov2013, Granade2017}, matrix product state formulations~\cite{Cramer2010}, and neural networks~\cite{Torlai2018, Carrasquilla2019, Lohani2020}. Very recently, a groundbreaking approach known as classical shadows was proposed and analyzed~\cite{Huang2020}. Building on and simplifying ideas from ``shadow tomography''~\cite{Aaronson2018}, the classical shadow was shown to provide accurate predictions of observables with a fixed number of measurements, including simulated examples for quantum systems in excess of 100 qubits~\cite{Huang2020}. Astonishingly simple, the classical shadow is formed by collecting the results of random measurements on a repeatedly prepared input state, and inverting them through an appropriate virtual quantum channel. However, several features of the classical shadow remain enigmatic, including its highly nonphysical nature, optimality with respect to alternative cost functions, and relationship to more conventional likelihood-based tomographic techniques. One such method, Bayesian mean estimation (BME)~\cite{Blume2010}, provides a conceptually straightforward path to estimate a quantum state given measured data, making use of prior knowledge and providing meaningful error bars for any experimental conditions. 
BME appears particularly well suited for contextualizing classical shadows, since it returns a principled estimate under any number of measurements (even zero), and is optimal in terms of minimizing average squared error~\cite{Robert1999}. In this work, we directly compare the estimates of classical shadows and BME for identical simulated datasets. For particular observables with relatively improbable values from the perspective of BME, shadow is found to reach the ground truth with significantly fewer measurements. However, after properly reformulating the problem under test for consistency with the Bayesian prior, the situation reverses, with BME returning estimates possessing lower error on average. In the latter portion of our investigation, we seek to construct a BME model emulating the key features of the classical shadow, but with positive semidefinite states as support. While complicated by the shadow's nonphysical nature, we ultimately propose an observable-oriented pseudo-likelihood that rates quantum states by their observable values with respect to those of shadow. Our pseudo-likelihood successfully mimics the dimension-independence of shadow, with the advantage of delivering entirely physical estimates for any number of measurements. \section*{Results} \label{sec:res} \subsection*{Problem Formulation} \textit{Classical Shadows.} For our analysis, we invoke the setup of the original classical shadow proposal~\cite{Huang2020}. Consider a $D$-dimensional Hilbert space occupied by a ground truth quantum state $\rho_g$ that can be repeatedly prepared. On each preparation $m$, $\rho_g$ is subjected to a randomly chosen $D\times D$ unitary $U_m$ and one measurement is performed in the computational basis, leaving result $\ket{b_m}$. 
Defining $\ket{\psi_m} = U_m^\dagger\ket{b_m}$, the classical snapshot associated with measurement $m$ follows as $\mathcal{M}^{-1}(\ket{\psi_m}\bra{\psi_m})$, where $\mathcal{M}(\cdot)$ is the quantum channel defined by averaging over all possible unitaries and outcomes. We assume the $U_m$ are drawn from the set of $D\times D$ Haar-random unitaries, in which case $\mathcal{M}^{-1}(\ket{\psi_m}\bra{\psi_m}) = (D+1)\ket{\psi_m}\bra{\psi_m} - I_D$, with $I_D$ the $D\times D$ identity matrix~\cite{Huang2020}. (This channel holds for the more restricted class of random Cliffords as well~\cite{Webb2016, Zhu2017}.) Averaging over $M$ measurements yields the shadow estimator \begin{equation} \label{eq:shad} \rho_s = \frac{D+1}{M} \sum_{m=1}^M \ket{\psi_m}\bra{\psi_m} - I_D. \end{equation} (In what follows, the phrases ``classical shadow,'' ``shadow estimator,'' and simply ``shadow'' refer interchangeably to this estimator as well as the procedure more generally.) In this form, the simplicity of $\rho_s$ is evident: it is merely a scaled and recentered average of all observed outcomes. Interestingly, though, $\rho_s$ is in general not positive semidefinite; for $M<D$, $\rho_s$ possesses at least $D-M$ eigenvalues equal to $-1$. Accordingly, in the targeted regime for classical shadows of $M\ll D$, $\rho_s$ is highly nonphysical. Understanding the role of the shadow estimator's negativity in estimation forms a central goal of the present study. Finally, defining $\lambda$ as the expectation of the observable $\Lambda$ ($\lambda = \Tr\rho\Lambda$), the shadow estimate thereof follows as \begin{equation} \label{eq:s} \lambda^{(s)} = \Tr\rho_s\Lambda, \end{equation} to be compared to the ground truth $\lambda^{(g)} = \Tr\rho_g\Lambda$.
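The entire shadow pipeline---Haar-random rotation, a single computational-basis measurement, and inversion of $\mathcal{M}(\cdot)$---fits in a few lines. The sketch below is our own illustration for a pure ground truth (not code from Ref.~\cite{Huang2020}), and makes the negativity noted above explicit:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(D):
    """Haar-random D x D unitary: QR of a complex Ginibre matrix, with the
    phases of R's diagonal absorbed into Q to make the measure uniform."""
    Z = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def classical_shadow(psi_g, M):
    """Shadow estimator rho_s = (D+1)/M sum_m |psi_m><psi_m| - I_D for a
    pure ground truth |psi_g> and Haar-random single-shot measurements."""
    D = len(psi_g)
    acc = np.zeros((D, D), dtype=complex)
    for _ in range(M):
        U = haar_unitary(D)
        probs = np.abs(U @ psi_g)**2            # Born rule in rotated basis
        b = rng.choice(D, p=probs / probs.sum())
        psi_m = U.conj().T[:, b]                # |psi_m> = U^dagger |b>
        acc += np.outer(psi_m, psi_m.conj())
    return (D + 1) / M * acc - np.eye(D)
```

For $M<D$ the returned matrix has unit trace but at least $D-M$ eigenvalues equal to $-1$, while averaging over many snapshots recovers $\rho_g$.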
As an aside, we note that Ref.~\cite{Huang2020} employed an additional statistical technique, ``median of means,'' to reduce the impact of outliers by partitioning the $M$ outcomes into $K$ subsets and taking the median as the estimate $\lambda^{(s)}$. In the interests of simplicity and ease of comparison, we focus on $K=1$ in Eq.~(\ref{eq:shad}). We expect the benefits of selecting $K>1$ will prove similar in both the shadow and Bayesian cases~\cite{Orenstein2019}, but work on this is beyond the scope of the present investigation. \textit{Bayesian Mean Estimation.} In the Bayesian paradigm, the same set of measurement outcomes ${\bm{\mathcal{D}}} = \{\ket{\psi_1}, \ket{\psi_2},...,\ket{\psi_M} \}$ is related to a possible density matrix $\rho(\mathbf{x})$ via a likelihood consisting of the product of probabilities set by Born's rule: \begin{equation} \label{eq:LL} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \prod_{m=1}^M \braket{\psi_m|\rho(\mathbf{x})|\psi_m}, \end{equation} that is, $L_{\bm{\mathcal{D}}}(\mathbf{x})\propto \Pr({\bm{\mathcal{D}}}|\rho)$---the probability of receiving the dataset ${\bm{\mathcal{D}}}$ given quantum state $\rho$. Some prior distribution $\pi_0(\mathbf{x})$ is also assumed, defined for parameters $\mathbf{x}$ such that $\rho(\mathbf{x})$ is always physical: trace-one, Hermitian, and positive semidefinite. Then the posterior describing the distribution of $\mathbf{x}$ given the observed data ${\bm{\mathcal{D}}}$ ensues from Bayes' rule: \begin{equation} \label{eq:post} \pi(\mathbf{x}) = \frac{1}{\mathcal{Z}} L_{\bm{\mathcal{D}}}(\mathbf{x}) \pi_0 (\mathbf{x}). \end{equation} Note that the randomness of the chosen unitaries $U_m$ does not enter the Bayesian model; only the outcomes $\ket{\psi_m}$ play a role. The selection of unitary $U_m$ is independent of the (unknown) density matrix, i.e., $\Pr(U_m=U|\rho) = \Pr(U_m=U)$; thus any probabilities would cancel out through the normalization factor $\mathcal{Z}$ in Eq.~(\ref{eq:post}). 
Intuitively, in the Bayesian view the experimenter knows the unitaries exactly post-experiment, regardless of how they were chosen, so imposing uncertainty on them in the estimation process proves superfluous. Consequently, while the uncertainty of BME depends strongly on the variety of measurements chosen, the theory does not, a conspicuous departure from shadow where the distribution of $U_m$ enters directly through the inverted quantum channel $\mathcal{M}^{-1}(\cdot)$. Formally, the posterior distribution in Eq.~(\ref{eq:post}) completes the Bayesian model. From this, one can estimate any function of $\rho(\mathbf{x})$. For the most direct comparison with the classical shadow, here we focus on BME specifically, which for some observable $\Lambda$ is the point estimate defined as \begin{equation} \label{eq:B} \begin{split} \lambda^{(B)} & = \braket{\Tr \rho\Lambda}_\rho \\ & = \int d\mathbf{x}\, \pi(\mathbf{x}) \, \Tr \rho(\mathbf{x})\Lambda \\ & = \Tr\left\{ \left[ \int d\mathbf{x}\, \pi(\mathbf{x}) \rho(\mathbf{x}) \right] \Lambda \right\} \\ & = \Tr \rho_B\Lambda, \end{split} \end{equation} where the last two lines follow, respectively, from the linearity of the trace operation and defining the Bayesian mean $\rho_B = \int d\mathbf{x}\, \pi(\mathbf{x}) \rho(\mathbf{x})$. This convenient simplification, in which the Bayesian mean of a quantity is simply its value at $\rho_B$, holds for linear functions of $\rho$, which includes all quantum observables and which we focus on in this article. Moreover, $\lambda^{(B)}$ is the function of ${\bm{\mathcal{D}}}$ which minimizes the mean-squared error (MSE) averaged over all possible states and outcomes. 
That is, \begin{equation} \label{eq:opt} \lambda^{(B)} = \argmin_{\lambda({\bm{\mathcal{D}}})} \int d{\bm{\mathcal{D}}} \int d\mathbf{x}\, \pi(\mathbf{x},{\bm{\mathcal{D}}}) \left[ \lambda({\bm{\mathcal{D}}}) - \Tr \rho(\mathbf{x})\Lambda \right]^2, \end{equation} with $\pi(\mathbf{x},{\bm{\mathcal{D}}})$ the joint distribution over data and parameters~\cite{Robert1999}. This optimality is nonasymptotic, holding for any number or collection of unitaries $\{U_1, U_2,..., U_M\}$. Considering the widely different expressions for $\rho_s$ [Eq.~(\ref{eq:shad})] and $\rho_B$ [Eq.~(\ref{eq:B})], we found it remarkable just how well $\rho_s$ performed in Ref.~\cite{Huang2020} in light of BME's optimality in Eq.~(\ref{eq:opt}); it was this feature which initially inspired us to develop a thorough comparison between shadow and BME. \textit{Simulated Experiments.} In general, comparing the performance of estimators derived from classical (frequentist) statistics---like $\rho_s$---with those from Bayesian methods proves tricky business, since they view uncertainty in functionally different ways. Therefore we adopt a pragmatic view which aligns with the interests of experimentalists: perform experiments, compute the associated shadow and BME estimators, and calculate their error with respect to the actual values. While the final step is not always possible in practice, it is in numerical simulation, where the ground truth $\rho_g$ is known exactly. Doing so enables us to illuminate the advantages and disadvantages of both approaches on equal footing. We employ the approach described in the ``Methods'' section for obtaining simulated datasets ${\bm{\mathcal{D}}}$. \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig1.png} \caption{Comparison of shadow and BME estimates of $\lambda_n$ for (a) $D=32$ and (b) $D=256$.
Results from fifty trials for each dimension are plotted, assuming a fixed ground truth state $\ket{0}$.} \label{fig1} \end{figure*} \subsection*{Comparing Classical Shadows and BME} \label{sec:compare} \textit{Picture 1: Fixed Ground Truth.} As our first benchmark, we compare the performance of $\rho_s$ and $\rho_B$ in estimating three rank-1 observables, of which fidelities and entanglement witnesses form an important and experimentally relevant subset. Specifically, we consider $\Lambda_n = \ket{\phi_n}\bra{\phi_n}$ ($n=0,1,2$) where \begin{equation} \label{eq:states} \begin{split} \ket{\phi_0} & = \ket{0}\\ \ket{\phi_1} & = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2(D-1)}}\sum_{j=1}^{D-1} \ket{j}\\ \ket{\phi_2} & = \ket{1}. \end{split} \end{equation} These possess ground truth values equally spaced within the physically allowed range for trace-one, rank-one observables: $\lambda_0^{(g)}=1$, $\lambda_1^{(g)}=\frac{1}{2}$, and $\lambda_2^{(g)}=0$. The shadow estimator is readily obtained from Eq.~(\ref{eq:shad}), so we compute $\rho_s$ for all $M\in\{1,2,...,1000\}$, where $M$ defines the set containing the first $M$ measurements: ${\bm{\mathcal{D}}}=\{\ket{\psi_m};\; m=1,2,...,M\}$. On the other hand, $\rho_B$ requires evaluation of the high-dimensional integral $\int d\mathbf{x}\,\pi(\mathbf{x})\rho(\mathbf{x})$. To that end, we summon Markov chain Monte Carlo (MCMC) methods, several of which have been explored in the context of quantum state estimation, including Metropolis--Hastings~\cite{Blume2010, Mai2017}, Hamiltonian Monte Carlo~\cite{Seah2015}, sequential Monte Carlo (SMC)~\cite{Granade2016}, and slice sampling~\cite{Williams2017, Lu2019a}. We select the preconditioned Crank--Nicolson algorithm~\cite{Cotter2013} applied in Ref.~\cite{Lukens2020}, which to our knowledge is the most efficient BME approach currently available for density matrix recovery. 
Finally, because of our assumed pure state ground truth, we take as prior all pure states $\rho=\ket{\psi}\bra{\psi}$ uniformly distributed on the complex $D$-dimensional unit hypersphere. Numerically, the parameters $\mathbf{x}$ reduce to a $D$-dimensional complex column vector, so we have $\pi_0(\mathbf{x})\propto \exp\left(-\frac{1}{2}\mathbf{x}^\dagger\mathbf{x}\right)$, $\rho(\mathbf{x}) = \frac{\mathbf{x}\bx^\dagger}{|\mathbf{x}|^2}$, and $d\mathbf{x} = \prod_{l=1}^{D} d(\R x_l) d(\I x_l)$ with $x_l$ denoting a single component of $\mathbf{x}$. The use of pure states is not central to the BME formalism whatsoever, but does permit us to simulate in higher dimensions than otherwise possible. With pure states only, our parameterization entails $2D$ real numbers, compared to $2D^2+D$ for mixed states. As an example, for $D=256$, the pure state prior, and likelihood of Eq.~(\ref{eq:LL}), each MCMC chain takes about ten minutes to converge on our desktop computer, which for the 400 settings involved in Fig.~\ref{fig1}(b) amounts to $\sim$2.5 days. Based on previous studies~\cite{Lukens2020} the mixed state version would therefore have been completely unfeasible at this dimension with our computational resources, likely taking weeks (or more) to complete~\footnote{Incorporating some of the methods suggested in Ref.~\cite{Lukens2020} in further research, such as embedding within SMC samplers and parallelization, should permit the extension to significantly larger $D$ and mixed states}. With pure states, then, we can focus more directly on dimensional scaling and the statistics from many trials. For each trial, we perform BME for eight collections of measurements $M\in\{1,50,100,200,400,600,800,1000\}$. We keep $R=2^{10}$ samples from each chain of length $RT$, where we select the thinning factor $T$ empirically to obtain convergence. 
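At small $D$ and $M$, the Bayesian mean under this pure-state prior can also be approximated without MCMC by self-normalized importance sampling directly from the prior---a toy substitute for the preconditioned Crank--Nicolson sampler used in our actual computations, workable only while the posterior remains broad:

```python
import numpy as np

rng = np.random.default_rng(0)

def bme_pure_prior(data, D, n_samples=100000):
    """Approximate rho_B = int dx pi(x) rho(x): draw pure states uniformly
    from the complex unit hypersphere (normalized Gaussian vectors) and
    weight each by the Born-rule likelihood prod_m <psi_m|rho|psi_m>."""
    x = rng.standard_normal((n_samples, D)) + 1j * rng.standard_normal((n_samples, D))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    logw = np.zeros(n_samples)
    for psi in data:
        logw += np.log(np.abs(x @ np.conj(psi))**2 + 1e-300)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # weighted mixture of sampled pure states: rho_B[j,k] = sum_i w_i x_ij x_ik*
    return np.einsum('i,ij,ik->jk', w, x, np.conj(x))
```

With an empty dataset the routine returns the prior mean $I_D/D$, so any rank-1 expectation starts at $1/D$; the likelihood weights then concentrate the estimate as measurements accumulate.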
Figure~\ref{fig1} plots the estimates for all 50 trials obtained by both shadow and BME for $D=32$ [Fig.~\ref{fig1}(a)] and $D=256$ [Fig.~\ref{fig1}(b)]. A thinning value of $T=2^9$ ($T=2^{12}$) is used for $D=32$ ($D=256$). Each column corresponds to a particular expectation value $\lambda_n$; the bottom row shows the MSE with respect to the ground truth, averaged over all trials, defined as $\braket{|\lambda_n^{(\cdot)} - \lambda_n^{(g)}|^2}_\mathrm{trials}$ with $\cdot=s$ for the shadow and $\cdot=B$ for BME. The classical shadows show wide variation for low $M$, including highly nonphysical estimates ($\lambda_n^{(s)}>1$ or $<0$), but they converge to the ground truth values rapidly, with nearly identical rates for all observables and dimensions. This is confirmed quantitatively in the MSE curves, which attain values of $\sim$10$^{-3}$ by $M=1000$ for all cases. The behavior proves vastly different for BME. While physical estimates are always returned, the number of measurements needed to reach the ground truth varies strongly both with the observable $\lambda_n$ and with the dimension $D$. Intriguingly, shadow shows significantly lower MSE for $\lambda_0$ and $\lambda_1$, with the gap widening as $D$ increases. At first glance, this presents a paradox: Eq.~(\ref{eq:opt}) implies that $\lambda_n^{(B)}$ should possess the lowest possible MSE for any $n$ and $M$, and yet $\lambda_n^{(s)}$ convincingly surpasses it in these cases. This dilemma can be resolved, however, by studying the prior $\pi_0(\mathbf{x})$. When the Bayesian model assigns equal \emph{a priori} weights to all possible states---a sensible choice for an uninformative prior---this by implication makes observable values such as $\lambda_0^{(g)}=1$ highly unlikely, since only one state in the domain attains this value. On the other hand, expectation values of any rank-1 projector $\Lambda$ on the order of $\lambda\sim \frac{1}{D}$ are to be expected initially, since $\int d\mathbf{x}\,\pi_0(\mathbf{x}) \Tr \rho(\mathbf{x})\Lambda = \frac{1}{D}$.
This manifests itself in Fig.~\ref{fig1} in BME's much lower MSE for $\lambda_2$, whose ground truth value $\lambda_2^{(g)}=0$ is much more probable. Thus, by running 50 repeated trials with the \emph{same ground truth} $\rho_g=\ket{0}\bra{0}$, the situation over which we average does not accurately reflect the uninformative prior; the conditions for BME optimality are not met. \textit{Picture 2: Random Ground Truth.} To accurately reflect uninformative prior knowledge, we therefore must prepare \emph{random} ground truth states in our simulations. To do so, we leverage the equivalence between (i) randomly prepared input states with a fixed observable---the situation of interest---and (ii) random selection of an observable for a fixed input. Consider the expectation of observable $\Lambda$, where the quantum state is rotated by some random unitary $U$: \begin{equation} \label{eq:equiv} \Tr \left[(U\rho U^\dagger)\Lambda\right]= \Tr \left[\rho(U^\dagger\Lambda U)\right]. \end{equation} Thus one can emulate the effect of a randomized state by randomly rotating the observable and evaluating it on a fixed state. Practically speaking, we are free to employ the same simulated datasets and estimators $\rho_s$ and $\rho_B$ above, but select at random a different projector $\Lambda=\ket{\phi}\bra{\phi}$ for each trial. This is equivalent to performing all trials with a random ground truth but a fixed observable. We call this randomized evaluation ``Picture 2'' to distinguish it from the fixed ground truth case above (Picture 1). \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig2.png} \caption{Estimating rank-1 observable $\Lambda$ for randomly chosen ground truth states (Picture 2). (a) $D=32$ case. (b) $D=256$ case. The first four columns show $\lambda$ values for each trial; the last column plots MSE with respect to ground truth over all trials.} \label{fig2} \end{figure*} Results appear in Fig.~\ref{fig2} for (a) $D=32$ and (b) $D=256$. 
The first column plots the ground truth value $\lambda^{(g)}$ for each trial, the next three columns plot the shadow and BME estimates for increasing numbers of measurements, and the final column presents the MSE with respect to the ground truth. Now BME returns much more accurate estimates than shadow on average, and the paradox regarding Bayesian optimality is solved: the Bayesian mean gives the lowest MSE as long as the prior accurately reflects the true uncertainty of the system under test. Accordingly, this BME study clarifies an underlying assumption in selecting observables in Picture 1: being able to ``guess'' an observable with such high overlap to the ground truth suggests that one is not really operating under the neutrality implied by a uniform prior; an informative prior would more accurately reflect the situation. This observation brings to light an interesting question of motivation in a given quantum experiment. In the sense of ensuring that any estimate is adequately justified by the data, the idea of ``baking in'' a prior favoring some subset of quantum states is undesirable. And yet, in many situations the researcher \emph{does} have strong beliefs---or at least hopefulness---about the state being prepared, and wants to verify this by computing an observable, such as fidelity, where it is desired that $\lambda^{(g)} \sim 1$. In this case, one wishes to validate such high values quickly with few measurements, but likely does not care so much about how well the procedure can estimate the ground truth when it is \emph{low} (e.g., when $\lambda^{(g)}\sim \frac{1}{D}$), since this situation suggests a poorly prepared state anyway. Accordingly, the felt cost is stronger when error is higher for situations with $\lambda^{(g)} \gg \frac{1}{D}$ than when $\lambda^{(g)} \sim \frac{1}{D}$, which is not captured by the standard MSE as expressed in Eq.~(\ref{eq:opt}). 
And as shown in our tests here, it is precisely these improbable situations wherein shadow excels over BME. Thus our simulations reveal one surprising reason classical shadows are so powerful: they perform well within those subspaces of the entire Hilbert space which are of interest to a high-fidelity system. \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig3.png} \caption{Bayesian inference results utilizing the pseudo-likelihood in Eq.~(\ref{eq:PL1}) for (a) $D=32$ and (b) $D=256$. The overlap with shadow, $\Tr\rho_B\rho_s$, is plotted in (c) for $D=32$ and (d) for $D=256$.} \label{fig3} \end{figure*} \subsection*{Emulating Classical Shadows with BME} \label{sec:emulate} \textit{Pseudo-Likelihood Formulation.} The dimension-independence and rapid convergence of classical shadows for cases of interest indicate the value of a Bayesian version with similar features, both to gain further insight into shadow itself and to improve thereon by ensuring only physically acceptable states. A simple approach for custom Bayesian models, gaining traction in ``probably approximately correct'' (PAC) learning~\cite{Guedj2019}, proposes use of a pseudo-likelihood that rates a prospective state's suitability through a cost function, instead of a full likelihood based on a physical model. In quantum state tomography in particular, quadratic costs of the form $\lVert \rho-\tilde{\rho} \rVert_F^2$ have been explored~\cite{Mai2017, Lukens2020}, where $\tilde{\rho}$ signifies some point estimator and $\lVert A \rVert_F=\sqrt{\Tr A^\dagger A}$ the Frobenius norm. Therefore, to obtain a physical state with properties similar to $\rho_s$, we first suggest the pseudo-likelihood \begin{equation} \label{eq:PL1} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \exp\left(- \frac{K}{2} \lVert \rho(\mathbf{x})-\rho_s \rVert_F^2\right). \end{equation} The constant $K$ establishes the relative weight of prior and likelihood.
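In code, the log of Eq.~(\ref{eq:PL1}) is a single line. A hedged sketch follows (our naming, not the code behind the figures; the log is returned, as is standard practice for MCMC samplers):

```python
# Log of the Frobenius-norm pseudo-likelihood of Eq. (eq:PL1), taking K = M*D;
# returning the log rather than L itself avoids numerical underflow in MCMC.
import numpy as np

def log_pl_frobenius(rho, rho_s, M, D):
    """log L = -(K/2) * ||rho - rho_s||_F^2 with K = M*D."""
    diff = rho - rho_s
    frob_sq = np.real(np.trace(diff.conj().T @ diff))  # ||A||_F^2 = Tr(A^+ A)
    return -0.5 * (M * D) * frob_sq
```

Only the point estimator $\rho_s$ and the constant $K$ enter; the raw dataset is not consulted again.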
Previously, we suggested $K=M$ for reasonable uncertainty quantification~\cite{Lukens2020}; here we consider $K=MD$ to impart dimension-independence. (Incidentally, we have found no significant modifications to the results below when testing with $K\gg MD$.) Figure~\ref{fig3}(a) and (b) show the BME results obtained for $D=32$ and $D=256$, respectively, where we again return to Picture 1 with fixed ground truth for all trials. For the tests here, thinning of $T=2^8$ ($T=2^{10}$) is used for the $D=32$ ($D=256$) MCMC chains. Compared to the shadow results of Fig.~\ref{fig1}, the BME predictions still do not reach ground truth values for $\lambda_0$ and $\lambda_1$ efficiently. This proves intriguing, since $\lVert \rho-\rho_s \rVert_F^2$ with $\rho=\ket{\psi}\bra{\psi}$ is minimized precisely by states for which $\braket{\psi|\rho_s|\psi}$ is large. So if $\lambda_0^{(s)}=\braket{g|\rho_s|g}\sim 1$ (cf. Fig.~\ref{fig1}), it is odd that predictions from a BME procedure favoring states that maximize $\braket{\psi|\rho_s|\psi}$ look so different for $D=256$. The origin of this discrepancy, however, lies in $\rho_s$'s nonphysicality. \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig4.png} \caption{Bayesian estimation using the pseudo-likelihood of Eq.~(\ref{eq:PL2}) with $N=3$. (a) Results for $D=32$. (b) Results for $D=256$. The MSE values for shadow from Fig.~\ref{fig1} are reproduced for comparison.} \label{fig4} \end{figure*} Plotting the average overlap between shadow and Bayesian samples ($\Tr\rho_B\rho_s$) in Fig.~\ref{fig3}(c) and (d), we find that $\rho_B$ overlaps with $\rho_s$ \emph{more strongly than the ground truth} $\rho_g=\ket{g}\bra{g}$. Because $\rho_s$ is not positive semidefinite, $\Tr\rho_B\rho_s > 1$ for all cases examined. Thus the BME procedure succeeds in finding states with strong overlap to the shadow, but the closest physical state to $\rho_s$ is not the ground truth, even though $\braket{g|\rho_s|g}\sim 1$.
Intuitively, this nonphysicality helps explain why observables with highly improbable values from the Bayesian view are estimated so much more efficiently with shadow. For a parameterization over physical states and rank-1 observable $\Lambda$, only a single state in the Hilbert space attains $\lambda=1$, and since this represents the maximum value possible for any valid quantum state, it can only be approached from below. On the other hand, a continuum of shadow estimators $\rho_s$ permits $\lambda=1$, for $\rho_s$ is constrained only by Hermiticity and unit-trace---not positive semidefiniteness. Therefore the estimate $\lambda^{(s)}$ can err on either the high or low side (cf. Fig.~\ref{fig1}), pulling the shadow more rapidly to the ground truth in these extreme cases. This discloses the second central finding of our investigation: the nonphysicality of $\rho_s$ is not a deficiency, but rather critical to obtaining dimension independence. Thus the key features of the shadow are not necessarily translated onto physical projections like the BME model here~\footnote{As an additional check, we performed the algorithm of Ref.~\cite{Smolin2012} to determine the closest physical density matrix to $\rho_s$, finding results very similar to those of Fig.~\ref{fig3}. This indicates that our projection conclusions are not an artefact of the pure state prior, but hold for general mixed states as well.} or, for that matter, alternative projected-least-squares approaches~\cite{Smolin2012, Guta2020}. While strange from the standpoint of the conventional wisdom of maximum likelihood and Bayesian mean estimation, nonphysical states are actually beneficial for classical shadows. \textit{Observable-Oriented Pseudo-Likelihood.} Deriving a positive semidefinite model that emulates classical shadows nonetheless remains an intriguing question, as it would eliminate unphysical estimates while retaining the favorable scaling features.
Since projecting directly onto $\rho_s$ proved unfruitful to this end, we note that $\rho_s$ was never intended to serve as an accurate substitute for the true $\rho_g$; instead it facilitates estimates of observables~\cite{Huang2020}. Accordingly, we propose the ``observable-oriented pseudo-likelihood'' \begin{equation} \label{eq:PL2} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \exp\left(-\frac{K}{2} \sum_{n=0}^{N-1} \left|\Tr\rho(\mathbf{x})\Lambda_n - \lambda_n^{(s)}\right|^2 \right), \end{equation} where we insert the estimates $\lambda_n^{(s)}$ of $N$ observables from $\rho_s$. This formalism ensures only physical values are returned [through the prior $\pi_0(\mathbf{x})$], and rates the fitness of proposed states through their overlap with respect to shadow's predictions of observables only. For dimension-independence, we again set $K=MD$ and perform BME for all simulated datasets and $N=3$ above, thinning to $T=2^{10}$ ($T=2^{13}$) for $D=32$ ($D=256$). The results follow in Fig.~\ref{fig4}. Now BME shows very similar behavior to shadow: the MSE with respect to the ground truth matches shadow results from Fig.~\ref{fig1} closely, though BME still outperforms for $\lambda_2$. Yet unlike shadow, BME here always gives physically permissible estimates ($\lambda_n^{(B)}\in[0,1]$). This pseudo-likelihood therefore attains the goal of a BME model commensurate with classical shadows. Yet it is important to emphasize that this approach depends heavily on the quality of the classical shadow. It refines estimates from the shadow with its positive semidefinite requirement, but it does not do markedly better at estimating the ground truth state---at least for arbitrary observables.
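The observable-oriented pseudo-likelihood of Eq.~(\ref{eq:PL2}) can be sketched in the same hedged style (our naming; not the production code behind the figures):

```python
# Log of the observable-oriented pseudo-likelihood of Eq. (eq:PL2), K = M*D.
import numpy as np

def log_pl_observables(rho, Lams, lam_s, M, D):
    """log L = -(K/2) * sum_n |Tr(rho L_n) - lam_n^(s)|^2 with K = M*D."""
    # Tr(rho L_n) is real for Hermitian rho and L_n.
    preds = np.array([np.real(np.trace(rho @ Lam)) for Lam in Lams])
    return -0.5 * (M * D) * np.sum(np.abs(preds - np.asarray(lam_s)) ** 2)
```

Note that only the $N$ scalar estimates $\lambda_n^{(s)}$ enter the sampler; the full matrix $\rho_s$ is never needed.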
As an example, we repeat the inference procedure for an observable-oriented pseudo-likelihood based solely on $\Lambda_1$, i.e., \begin{equation} \label{eq:PL3} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \exp\left(-\frac{K}{2} \left|\Tr\rho(\mathbf{x})\Lambda_1 - \lambda_1^{(s)}\right|^2 \right), \end{equation} which has ground truth value $\lambda_1^{(g)}=\frac{1}{2}$. Results for the $D=32$ case appear in Fig.~\ref{fig5}, where we plot the Bayesian estimates for all three observables even though the pseudo-likelihood is based on $\lambda_1$ only. The estimate $\lambda_1^{(B)}$ closely matches shadow as designed, and $\lambda_2^{(B)}$ agrees with the ground truth well, because its value is highly probable for a uniform prior. But $\lambda_0^{(B)}$ settles near $\frac{1}{4}$, far from $\lambda_0^{(g)}=1$. When using the pseudo-likelihood above, all quantum states with identical overlap to $\Lambda_1$ are equally probable, of which the ground truth $\rho_g$ represents just one possibility. The estimate of $\lambda_0$ given only $\lambda_1$ information reflects the inherent uncertainty within this specification. So to summarize, our observable-oriented pseudo-likelihood builds physicality into shadow, yet it can only (in general) accurately predict the $N$ observables injected into it: to infer quantities beyond these $N$ can prove unreliable. \begin{figure}[tb!] \centering\includegraphics[width=\columnwidth]{fig5.png} \caption{Bayesian inference results employing the pseudo-likelihood in Eq.~(\ref{eq:PL3}), for $D=32$. The shadow MSE values from Fig.~\ref{fig1} are reprinted for clarity.} \label{fig5} \end{figure} \section*{Discussion} \label{sec:disc} Our numerical investigations here have elucidated two fascinating features of classical shadows: \begin{enumerate} \item Classical shadows perform extremely well at predicting ``unlikely'' observables, i.e., those which obtain high values only on a restricted subset of states within the complete Hilbert space.
\item The nonphysicality of classical shadows is critical to their dimension-independence and accuracy under few measurements. \end{enumerate} These findings do not contradict the optimality of Bayesian methods expressed in Eq.~(\ref{eq:opt}): BME with a full likelihood minimizes MSE for any number and collection of measurements, provided the prior distribution accurately reflects the true knowledge involved. The predictive power of $\rho_s$, then, derives from the fact that the situations in which it is much more accurate than BME are often of particular interest in practice, such as verification of a high-fidelity or highly entangled quantum state. Desiring to extend these features in the Bayesian context, we proposed an observable-oriented pseudo-likelihood that attains shadow's dimension-independence and state-specialized accuracy, with the advantage of guaranteed physicality. Nonetheless, in all these explorations there remains one prominent sense in which classical shadows unquestionably eclipse BME: computational efficiency. The shadow estimator $\rho_s$ is formed directly from measurements for any dimension $D$; yet computing $\rho_B$ requires tedious MCMC methods, with the number of parameters increasing linearly (quadratically) with $D$ for a pure (mixed) state prior. Here we considered up to $D=256$, a far cry from the $D=2^{120}$ example in Ref.~\cite{Huang2020}, where there is no hope for BME with a parameterization such as ours. Moving forward, it would therefore seem profitable to explore simplified Bayesian models that maintain a fixed parameter dimensionality even as the Hilbert space grows exponentially. For example, if one could specify a prior and likelihood on an observable $\lambda$ only, to the effect of $\pi(\lambda)\propto L_{\bm{\mathcal{D}}}(\lambda)\pi_0(\lambda)$, the inference procedure would not be limited directly by exponentially large Hilbert spaces.
In this way, Bayesian methods could be extended to the types of quantum systems sought for practically useful quantum computation. Overall, our analyses have revealed the value of BME as a tool for shedding light on estimation procedures which formally have no connection to the Bayesian paradigm. The numerical simulations here reveal the complementary strengths of classical shadow and Bayesian tomographic approaches in the efficient estimation of quantum properties. And so we expect valuable opportunities for both methods as quantum information processing resources continue to mature in size and complexity. \section*{Methods} \subsection*{Data Simulation Approach} The method of classical shadows introduced in Ref.~\cite{Huang2020} involves application of a Haar-random (or effectively Haar-random) unitary $U$ followed by measurement in the computational basis. We exploit the fact that our target state is pure to substantially reduce the complexity of simulating this procedure. In particular, our simulation method requires the generation of only size-$D$ random vectors rather than $D\times D$ random unitaries. Without loss of generality we work in a rotated basis such that the first basis state coincides with the ground truth: $\rho_g = \ket{0}\bra{0}$. Then the probability of observing outcome $j$ depends only on $|\braket{j|U|0}|^2 = |U_{j0}|^2 = |(U^\dagger)_{0j}|^2$. That is, the distribution of outcomes depends only on the first row of $U^\dagger$. Now, when $U$ is Haar-random, each individual row and column of $U^\dagger$ is a uniformly distributed length-1 vector $u$. Furthermore, given any component $u_j$, the remaining components are a uniformly distributed vector of length $\sqrt{1-|u_j|^2}$. A uniformly random vector $u$, corresponding to the first row of $U^\dagger$, may be obtained by generating $D$ complex normal random values and normalizing them to yield a unit length vector.
An outcome $n\in \{0,1,\ldots,D-1\}$ is then chosen with probability $|u_n|^2$. This selects the $n$th column of $U^\dagger$. Since this column (whichever it is) is uniformly distributed, its remaining elements are uniformly distributed with length $\sqrt{1-|u_n|^2}$. The explicit procedure is as follows: \begin{enumerate} \item Posit a measurement unitary $U_m^\dagger = [\tilde{\varphi}_0 \cdots \tilde{\varphi}_{D-1}]$, where each $\tilde{\varphi}_n$ is a column vector corresponding to one of the $D$ possible output states. \item Generate $D$ complex normal samples $w_n \stackrel{\textrm{i.i.d.}}{\sim}\mathcal{N}(0,1) + i\mathcal{N}(0,1)$ and normalize \begin{equation} \label{eq:row} u_n = \frac{w_n}{\sqrt{\sum \limits_{n^\prime=0}^{D-1} |w_{n^\prime}|^2}}. \end{equation} These define projections of the unitary's basis states on the ground truth: $u_n = \braket{0|\tilde{\varphi}_n}$, or in other words, the elements in the first row of $U_m^\dagger$. \item Select an integer $n\in\{0,1,...,D-1\}$ at random with probability $|u_n|^2$. This implies that the state $\tilde{\varphi}_n$ is detected. \item Generate $D-1$ complex normal samples $v_j \stackrel{\textrm{i.i.d.}}{\sim}\mathcal{N}(0,1) + i\mathcal{N}(0,1)$ ($j=1,2,...,D-1$). These set the remaining coefficients of the detected state $\tilde{\varphi}_n$. \item Finally, take \begin{equation} \label{eq:col} \ket{\psi_m} = u_n\ket{0} + \sqrt{ \frac{1-|u_n|^2} {\sum\limits_{j^\prime=1}^{D-1} |v_{j^\prime}|^2}} \sum_{j=1}^{D-1} v_j \ket{j} \end{equation} as the measured state. \end{enumerate} Utilizing this method, we performed 50 independent trials with 1000 measurements each, for Hilbert space dimensions $D=32$ and $D=256$, giving a total of 100 datasets which are used in all subsequent tests above. The two values of $D$ were selected specifically to clarify how classical shadows and BME differ in their scaling with dimension. 
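The five steps above translate directly into NumPy. The following is a hypothetical reimplementation (seeds and variable names are ours):

```python
# Sketch of the pure-state measurement simulation (steps 1-5 above),
# assuming ground truth rho_g = |0><0| in the rotated basis.
import numpy as np

def simulate_outcome(D, rng):
    """Return one measured state |psi_m> = U_m^+ |b_m> as a length-D vector."""
    # Step 2: first row of U_m^+, a Haar-uniform unit vector.
    w = rng.normal(size=D) + 1j * rng.normal(size=D)
    u = w / np.linalg.norm(w)
    # Step 3: detected outcome n with probability |u_n|^2 (Born's rule).
    n = rng.choice(D, p=np.abs(u) ** 2)
    # Step 4: remaining coefficients of the detected column.
    v = rng.normal(size=D - 1) + 1j * rng.normal(size=D - 1)
    # Step 5: assemble |psi_m> per Eq. (eq:col).
    psi = np.empty(D, dtype=complex)
    psi[0] = u[n]
    psi[1:] = np.sqrt(1 - np.abs(u[n]) ** 2) * v / np.linalg.norm(v)
    return psi

rng = np.random.default_rng(1)
psi = simulate_outcome(32, rng)
assert np.isclose(np.linalg.norm(psi), 1.0)  # |psi_m> is a unit vector
```

Repeating `simulate_outcome` $M$ times yields one dataset ${\bm{\mathcal{D}}}$; no $D\times D$ unitary is ever materialized, which is the point of the rotated-basis construction.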
\vspace{-0.1in} \section*{Acknowledgments} \vspace{-0.15in} This work was funded by the U.S. Department of Energy, Office of Advanced Scientific Computing Research, through the Quantum Algorithm Teams and Early Career Research Programs. This work was performed in part at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725. \section*{Introduction} \label{sec:intro} Measurement and characterization of quantum systems comprise a long-standing problem in quantum information science~\cite{James2001}. However, the exponential scaling of Hilbert space dimension with the number of qubits makes full characterization extremely challenging, inspiring a plethora of approaches designed to estimate properties of quantum states with as few measurements as possible, such as compressed sensing~\cite{Gross2010, Flammia2012}, adaptive tomography~\cite{Huszar2012, Kravtsov2013, Granade2017}, matrix product state formulations~\cite{Cramer2010}, and neural networks~\cite{Torlai2018, Carrasquilla2019, Lohani2020}. Very recently, a groundbreaking approach known as classical shadows was proposed and analyzed~\cite{Huang2020}. Building on and simplifying ideas from ``shadow tomography''~\cite{Aaronson2018}, the classical shadow was shown to provide accurate predictions of observables with a fixed number of measurements, including simulated examples for quantum systems in excess of 100 qubits~\cite{Huang2020}. Astonishingly simple, the classical shadow is formed by collecting the results of random measurements on a repeatedly prepared input state, and inverting them through an appropriate virtual quantum channel. However, several features of the classical shadow remain enigmatic, including its highly nonphysical nature, optimality with respect to alternative cost functions, and relationship to more conventional likelihood-based tomographic techniques. 
One such method, Bayesian mean estimation (BME)~\cite{Blume2010}, provides a conceptually straightforward path to estimate a quantum state given measured data, making use of prior knowledge and providing meaningful error bars for any experimental conditions. BME appears particularly well suited for contextualizing classical shadows, since it returns a principled estimate under any number of measurements (even zero), and is optimal in terms of minimizing average squared error~\cite{Robert1999}. In this work, we directly compare the estimates of classical shadows and BME for identical simulated datasets. For particular observables with relatively improbable values from the perspective of BME, shadow is found to reach the ground truth with significantly fewer measurements. However, after properly reformulating the problem under test for consistency with the Bayesian prior, the situation reverses, with BME returning estimates possessing lower error on average. In the latter portion of our investigation, we seek to construct a BME model emulating the key features of the classical shadow, but with positive semidefinite states as support. While complicated by the shadow's nonphysical nature, we ultimately propose an observable-oriented pseudo-likelihood that rates quantum states by their observable values with respect to those of shadow. Our pseudo-likelihood successfully mimics the dimension-independence of shadow, with the advantage of delivering entirely physical estimates for any number of measurements. \section*{Results} \label{sec:res} \subsection*{Problem Formulation} \textit{Classical Shadows.} For our analysis, we invoke the setup of the original classical shadow proposal~\cite{Huang2020}. Consider a $D$-dimensional Hilbert space occupied by a ground truth quantum state $\rho_g$ that can be repeatedly prepared. 
On each preparation $m$, $\rho_g$ is subjected to a randomly chosen $D\times D$ unitary $U_m$ and one measurement is performed in the computational basis, leaving result $\ket{b_m}$. Defining $\ket{\psi_m} = U_m^\dagger\ket{b_m}$, the classical snapshot associated with measurement $m$ follows as $\mathcal{M}^{-1}(\ket{\psi_m}\bra{\psi_m})$, where $\mathcal{M}(\cdot)$ is the quantum channel defined by averaging over all possible unitaries and outcomes. We assume the $U_m$ are drawn from the set of $D\times D$ Haar-random unitaries, in which case $\mathcal{M}^{-1}(\ket{\psi_m}\bra{\psi_m}) = (D+1)\ket{\psi_m}\bra{\psi_m} - I_D$, with $I_D$ the $D\times D$ identity matrix~\cite{Huang2020}. (This channel holds for the more restricted class of random Cliffords as well~\cite{Webb2016, Zhu2017}.) Averaging over $M$ measurements yields the shadow estimator \begin{equation} \label{eq:shad} \rho_s = \frac{D+1}{M} \sum_{m=1}^M \ket{\psi_m}\bra{\psi_m} - I_D. \end{equation} (In what follows, the phrases ``classical shadow,'' ``shadow estimator,'' and simply ``shadow'' refer interchangeably to this estimator as well as the procedure more generally.) In this form, the simplicity of $\rho_s$ is evident: it is merely a scaled and recentered average of all observed outcomes. Interestingly, though, $\rho_s$ is in general not positive semidefinite; for $M<D$, $\rho_s$ possesses at least $D-M$ eigenvalues equal to $-1$. Accordingly, in the targeted regime for classical shadows of $M\ll D$, $\rho_s$ is highly nonphysical. Understanding the role of the shadow estimator's negativity in estimation forms a central goal of the present study. Finally, defining $\lambda$ as the expectation of the observable $\Lambda$ ($\lambda = \Tr\rho\Lambda$), the shadow estimate thereof follows as \begin{equation} \label{eq:s} \lambda^{(s)} = \Tr\rho_s\Lambda, \end{equation} to be compared to the ground truth $\lambda^{(g)} = \Tr\rho_g\Lambda$.
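Given a list of measured states, Eqs.~(\ref{eq:shad}) and~(\ref{eq:s}) translate directly into code; a minimal sketch (our naming):

```python
# Sketch of the shadow estimator of Eq. (eq:shad) and the observable
# estimate of Eq. (eq:s), given M measured states |psi_m>.
import numpy as np

def shadow_estimator(psis):
    """rho_s = (D+1)/M * sum_m |psi_m><psi_m|  -  I_D."""
    M, D = len(psis), len(psis[0])
    rho_s = -np.eye(D, dtype=complex)
    for psi in psis:
        rho_s += (D + 1) / M * np.outer(psi, psi.conj())
    return rho_s

def shadow_expectation(rho_s, Lam):
    """lambda^(s) = Tr(rho_s Lam)."""
    return np.real(np.trace(rho_s @ Lam))
```

By construction $\rho_s$ has unit trace, but for $M<D$ its spectrum contains eigenvalues of $-1$, as discussed above.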
As an aside, we note that Ref.~\cite{Huang2020} employed an additional statistical technique, ``median of means,'' to reduce the impact of outliers by partitioning the $M$ outcomes into $K$ subsets and taking the median as the estimate $\lambda^{(s)}$. In the interests of simplicity and ease of comparison, we focus on $K=1$ in Eq.~(\ref{eq:shad}). We expect the benefits of selecting $K>1$ will prove similar in both the shadow and Bayesian cases~\cite{Orenstein2019}, but work on this is beyond the scope of the present investigation. \textit{Bayesian Mean Estimation.} In the Bayesian paradigm, the same set of measurement outcomes ${\bm{\mathcal{D}}} = \{\ket{\psi_1}, \ket{\psi_2},...,\ket{\psi_M} \}$ is related to a possible density matrix $\rho(\mathbf{x})$ via a likelihood consisting of the product of probabilities set by Born's rule: \begin{equation} \label{eq:LL} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \prod_{m=1}^M \braket{\psi_m|\rho(\mathbf{x})|\psi_m}, \end{equation} that is, $L_{\bm{\mathcal{D}}}(\mathbf{x})\propto \Pr({\bm{\mathcal{D}}}|\rho)$---the probability of receiving the dataset ${\bm{\mathcal{D}}}$ given quantum state $\rho$. Some prior distribution $\pi_0(\mathbf{x})$ is also assumed, defined for parameters $\mathbf{x}$ such that $\rho(\mathbf{x})$ is always physical: trace-one, Hermitian, and positive semidefinite. Then the posterior describing the distribution of $\mathbf{x}$ given the observed data ${\bm{\mathcal{D}}}$ ensues from Bayes' rule: \begin{equation} \label{eq:post} \pi(\mathbf{x}) = \frac{1}{\mathcal{Z}} L_{\bm{\mathcal{D}}}(\mathbf{x}) \pi_0 (\mathbf{x}). \end{equation} Note that the randomness of the chosen unitaries $U_m$ does not enter the Bayesian model; only the outcomes $\ket{\psi_m}$ play a role. The selection of unitary $U_m$ is independent of the (unknown) density matrix, i.e., $\Pr(U_m=U|\rho) = \Pr(U_m=U)$; thus any probabilities would cancel out through the normalization factor $\mathcal{Z}$ in Eq.~(\ref{eq:post}). 
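For concreteness, the likelihood of Eq.~(\ref{eq:LL}) can be evaluated as follows (a sketch in our notation; in practice the log is accumulated for numerical stability):

```python
# Log of the Born-rule likelihood of Eq. (eq:LL): sum_m log <psi_m|rho|psi_m>.
import numpy as np

def log_likelihood(rho, psis):
    """log prod_m <psi_m| rho |psi_m> for a candidate density matrix rho."""
    # np.vdot conjugates its first argument, giving <psi| rho |psi>.
    return sum(np.log(np.real(np.vdot(psi, rho @ psi))) for psi in psis)
```

Note that, as described above, no probabilities for the unitaries $U_m$ appear; only the outcomes $\ket{\psi_m}$ enter.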
Intuitively, in the Bayesian view the experimenter knows the unitaries exactly post-experiment, regardless of how they were chosen, so imposing uncertainty on them in the estimation process proves superfluous. Consequently, while the uncertainty of BME depends strongly on the variety of measurements chosen, the theory does not, a conspicuous departure from shadow where the distribution of $U_m$ enters directly through the inverted quantum channel $\mathcal{M}^{-1}(\cdot)$. Formally, the posterior distribution in Eq.~(\ref{eq:post}) completes the Bayesian model. From this, one can estimate any function of $\rho(\mathbf{x})$. For the most direct comparison with the classical shadow, here we focus on BME specifically, which for some observable $\Lambda$ is the point estimate defined as \begin{equation} \label{eq:B} \begin{split} \lambda^{(B)} & = \braket{\Tr \rho\Lambda}_\rho \\ & = \int d\mathbf{x}\, \pi(\mathbf{x}) \, \Tr \rho(\mathbf{x})\Lambda \\ & = \Tr\left\{ \left[ \int d\mathbf{x}\, \pi(\mathbf{x}) \rho(\mathbf{x}) \right] \Lambda \right\} \\ & = \Tr \rho_B\Lambda, \end{split} \end{equation} where the last two lines follow, respectively, from the linearity of the trace operation and defining the Bayesian mean $\rho_B = \int d\mathbf{x}\, \pi(\mathbf{x}) \rho(\mathbf{x})$. This convenient simplification, in which the Bayesian mean of a quantity is simply its value at $\rho_B$, holds for linear functions of $\rho$, which includes all quantum observables and which we focus on in this article. Moreover, $\lambda^{(B)}$ is the function of ${\bm{\mathcal{D}}}$ which minimizes the mean-squared error (MSE) averaged over all possible states and outcomes. 
That is, \begin{equation} \label{eq:opt} \lambda^{(B)} = \argmin_{\lambda({\bm{\mathcal{D}}})} \int d{\bm{\mathcal{D}}} \int d\mathbf{x}\, \pi(\mathbf{x},{\bm{\mathcal{D}}}) \left[ \lambda({\bm{\mathcal{D}}}) - \Tr \rho(\mathbf{x})\Lambda \right]^2, \end{equation} with $\pi(\mathbf{x},{\bm{\mathcal{D}}})$ the joint distribution over data and parameters~\cite{Robert1999}. This optimality is nonasymptotic, holding for any number or collection of unitaries $\{U_1, U_2,..., U_M\}$. Considering the widely different expressions for $\rho_s$ [Eq.~(\ref{eq:shad})] and $\rho_B$ [Eq.~(\ref{eq:B})], we found it remarkable just how well $\rho_s$ performed in Ref.~\cite{Huang2020} in light of BME's optimality in Eq.~(\ref{eq:opt}); it was this feature which initially inspired us to develop a thorough comparison between shadow and BME. \textit{Simulated Experiments.} In general, comparing the performance of estimators derived from classical (frequentist) statistics---like $\rho_s$---with those from Bayesian methods proves tricky business, since they view uncertainty in functionally different ways. Therefore we adopt a pragmatic view which aligns with the interests of experimentalists: perform experiments, compute the associated shadow and BME estimators, and calculate their error with respect to actual values. While the final step is not always possible in practice, it is in numerical simulation, where the ground truth $\rho_g$ is known exactly. Doing so enables us to illuminate the advantages and disadvantages of both approaches on equal footing. We employ the approach described in the ``Methods'' section for obtaining simulated datasets ${\bm{\mathcal{D}}}$. \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig1.png} \caption{Comparison of shadow and BME estimates of $\lambda_n$ for (a) $D=32$ and (b) $D=256$.
Results from fifty trials for each dimension are plotted, assuming a fixed ground truth state $\ket{0}$.} \label{fig1} \end{figure*} \subsection*{Comparing Classical Shadows and BME} \label{sec:compare} \textit{Picture 1: Fixed Ground Truth.} As our first benchmark, we compare the performance of $\rho_s$ and $\rho_B$ in estimating three rank-1 observables, of which fidelities and entanglement witnesses form an important and experimentally relevant subset. Specifically, we consider $\Lambda_n = \ket{\phi_n}\bra{\phi_n}$ ($n=0,1,2$) where \begin{equation} \label{eq:states} \begin{split} \ket{\phi_0} & = \ket{0}\\ \ket{\phi_1} & = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2(D-1)}}\sum_{j=1}^{D-1} \ket{j}\\ \ket{\phi_2} & = \ket{1}. \end{split} \end{equation} These possess ground truth values equally spaced within the physically allowed range for trace-one, rank-one observables: $\lambda_0^{(g)}=1$, $\lambda_1^{(g)}=\frac{1}{2}$, and $\lambda_2^{(g)}=0$. The shadow estimator is readily obtained from Eq.~(\ref{eq:shad}), so we compute $\rho_s$ for all $M\in\{1,2,...,1000\}$, where $M$ defines the set containing the first $M$ measurements: ${\bm{\mathcal{D}}}=\{\ket{\psi_m};\; m=1,2,...,M\}$. On the other hand, $\rho_B$ requires evaluation of the high-dimensional integral $\int d\mathbf{x}\,\pi(\mathbf{x})\rho(\mathbf{x})$. To that end, we summon Markov chain Monte Carlo (MCMC) methods, several of which have been explored in the context of quantum state estimation, including Metropolis--Hastings~\cite{Blume2010, Mai2017}, Hamiltonian Monte Carlo~\cite{Seah2015}, sequential Monte Carlo (SMC)~\cite{Granade2016}, and slice sampling~\cite{Williams2017, Lu2019a}. We select the preconditioned Crank--Nicolson algorithm~\cite{Cotter2013} applied in Ref.~\cite{Lukens2020}, which to our knowledge is the most efficient BME approach currently available for density matrix recovery. 
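The three projectors of Eq.~(\ref{eq:states}) are simple to construct, and their ground truth expectations against $\rho_g=\ket{0}\bra{0}$ can be confirmed numerically (a sketch with our naming):

```python
# Construct |phi_0>, |phi_1>, |phi_2> of Eq. (eq:states) and check the
# ground truth values lambda_0 = 1, lambda_1 = 1/2, lambda_2 = 0.
import numpy as np

D = 32
e = np.eye(D)
phi0 = e[0]
phi1 = np.concatenate(([1 / np.sqrt(2)],
                       np.full(D - 1, 1 / np.sqrt(2 * (D - 1)))))
phi2 = e[1]

for phi, lam_g in [(phi0, 1.0), (phi1, 0.5), (phi2, 0.0)]:
    assert np.isclose(np.linalg.norm(phi), 1.0)        # valid rank-1 projector
    assert np.isclose(np.abs(phi @ e[0]) ** 2, lam_g)  # lambda^(g) = <0|L_n|0>
```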
Finally, because of our assumed pure state ground truth, we take as prior all pure states $\rho=\ket{\psi}\bra{\psi}$ uniformly distributed on the complex $D$-dimensional unit hypersphere. Numerically, the parameters $\mathbf{x}$ reduce to a $D$-dimensional complex column vector, so we have $\pi_0(\mathbf{x})\propto \exp\left(-\frac{1}{2}\mathbf{x}^\dagger\mathbf{x}\right)$, $\rho(\mathbf{x}) = \frac{\mathbf{x}\bx^\dagger}{|\mathbf{x}|^2}$, and $d\mathbf{x} = \prod_{l=1}^{D} d(\R x_l) d(\I x_l)$ with $x_l$ denoting a single component of $\mathbf{x}$. The use of pure states is not central to the BME formalism whatsoever, but does permit us to simulate in higher dimensions than otherwise possible. With pure states only, our parameterization entails $2D$ real numbers, compared to $2D^2+D$ for mixed states. As an example, for $D=256$, the pure state prior, and likelihood of Eq.~(\ref{eq:LL}), each MCMC chain takes about ten minutes to converge on our desktop computer, which for the 400 settings involved in Fig.~\ref{fig1}(b) amounts to $\sim$2.5 days. Based on previous studies~\cite{Lukens2020}, the mixed state version would therefore have been completely infeasible at this dimension with our computational resources, likely taking weeks (or more) to complete~\footnote{Incorporating some of the methods suggested in Ref.~\cite{Lukens2020} in further research, such as embedding within SMC samplers and parallelization, should permit the extension to significantly larger $D$ and to mixed states.}. With pure states, then, we can focus more directly on dimensional scaling and the statistics from many trials. For each trial, we perform BME for eight collections of measurements $M\in\{1,50,100,200,400,600,800,1000\}$. We keep $R=2^{10}$ samples from each chain of length $RT$, where we select the thinning factor $T$ empirically to obtain convergence.
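The pure-state parameterization above is easy to sample directly. The following sketch (hypothetical code; seed and names are ours) draws from $\pi_0$ and confirms by Monte Carlo that the prior mean of a rank-1 expectation is $\frac{1}{D}$:

```python
# Sample the pure-state prior: x ~ complex normal gives rho(x) = x x^+ / |x|^2
# uniform on pure states; check that the prior mean of Tr[rho(x) L] is 1/D
# for the rank-1 projector L = |0><0|.
import numpy as np

rng = np.random.default_rng(2)
D, S = 16, 20000
lam0 = np.zeros(S)
for s in range(S):
    x = rng.normal(size=D) + 1j * rng.normal(size=D)
    psi = x / np.linalg.norm(x)       # rho(x) = |psi><psi|
    lam0[s] = np.abs(psi[0]) ** 2     # Tr[rho(x) L] for L = |0><0|
assert abs(lam0.mean() - 1 / D) < 0.01   # prior mean is approximately 1/D
```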
Figure~\ref{fig1} plots the estimates for all 50 trials obtained by both shadow and BME for $D=32$ [Fig.~\ref{fig1}(a)] and $D=256$ [Fig.~\ref{fig1}(b)]. A thinning value of $T=2^9$ ($T=2^{12}$) is used for $D=32$ ($D=256$). Each column corresponds to a particular expectation value $\lambda_n$; the bottom row shows the MSE with respect to the ground truth, averaged over all trials, defined as $\braket{|\lambda_n^{(\cdot)} - \lambda_n^{(g)}|^2}_\mathrm{trials}$ with $\cdot=s$ for the shadow and $\cdot=B$ for BME. The classical shadows show wide variation for low $M$, including highly nonphysical estimates ($\lambda_n^{(s)}>1$ or $<0$), but they converge to ground truth values rapidly, with nearly identical rates for all observables and dimensions. This is confirmed quantitatively in the MSE curves that attain values of $\sim$10$^{-3}$ by $M=1000$ for all cases. The behavior proves vastly different for BME. While physical estimates are always returned, the number of measurements needed to reach the ground truth varies strongly both with observable $\lambda_n$ and with dimension $D$. Intriguingly, shadow shows significantly lower MSE for $\lambda_0$ and $\lambda_1$, with the gap widening as $D$ increases. At first glance, this presents a paradox: Eq.~(\ref{eq:opt}) implies that $\lambda_n^{(B)}$ should possess the lowest possible MSE for any $n$ and $M$, and yet $\lambda_n^{(s)}$ convincingly surpasses it in these cases. Yet this dilemma can be resolved by studying the prior $\pi_0(\mathbf{x})$. When the Bayesian model assigns equal \emph{a priori} weights to all possible states---a sensible choice for an uninformative prior---this by implication makes observable values such as $\lambda_0^{(g)}=1$ highly unlikely, since only one state in the domain attains this. On the other hand, expectations for any rank-1 projector $\Lambda$ on the order of $\lambda\sim \frac{1}{D}$ are to be expected initially since $\int d\mathbf{x}\,\pi_0(\mathbf{x}) \Tr \rho(\mathbf{x})\Lambda = \frac{1}{D}$.
This manifests itself in Fig.~\ref{fig1} in BME's much lower MSE for $\lambda_2$, whose ground truth value $\lambda_2^{(g)}=0$ is much more probable. Thus, by running 50 repeated trials with the \emph{same ground truth} $\rho_g=\ket{0}\bra{0}$, the situation over which we average does not accurately reflect the uninformative prior; the conditions for BME optimality are not met. \textit{Picture 2: Random Ground Truth.} To accurately reflect uninformative prior knowledge, we therefore must prepare \emph{random} ground truth states in our simulations. To do so, we leverage the equivalence between (i) randomly prepared input states with a fixed observable---the situation of interest---and (ii) random selection of an observable for a fixed input. Consider the expectation of observable $\Lambda$, where the quantum state is rotated by some random unitary $U$: \begin{equation} \label{eq:equiv} \Tr \left[(U\rho U^\dagger)\Lambda\right]= \Tr \left[\rho(U^\dagger\Lambda U)\right]. \end{equation} Thus one can emulate the effect of a randomized state by randomly rotating the observable and evaluating it on a fixed state. Practically speaking, we are free to employ the same simulated datasets and estimators $\rho_s$ and $\rho_B$ above, but select at random a different projector $\Lambda=\ket{\phi}\bra{\phi}$ for each trial. This is equivalent to performing all trials with a random ground truth but a fixed observable. We call this randomized evaluation ``Picture 2'' to distinguish it from the fixed ground truth case above (Picture 1). \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig2.png} \caption{Estimating rank-1 observable $\Lambda$ for randomly chosen ground truth states (Picture 2). (a) $D=32$ case. (b) $D=256$ case. The first four columns show $\lambda$ values for each trial; the last column plots MSE with respect to ground truth over all trials.} \label{fig2} \end{figure*} Results appear in Fig.~\ref{fig2} for (a) $D=32$ and (b) $D=256$. 
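The equivalence in Eq.~(\ref{eq:equiv}) underlying this construction can be verified numerically before examining the results; the QR-based Haar sampling below is a standard recipe and not code from this work:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 6

# Haar-random unitary via QR decomposition of a complex Gaussian matrix
Z = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))  # fix phase ambiguity

rho = np.zeros((D, D), dtype=complex)
rho[0, 0] = 1.0                     # fixed input state |0><0|
phi = rng.standard_normal(D) + 1j * rng.standard_normal(D)
phi /= np.linalg.norm(phi)
Lam = np.outer(phi, phi.conj())     # random rank-1 observable

lhs = np.trace(U @ rho @ U.conj().T @ Lam)   # rotated state, fixed observable
rhs = np.trace(rho @ U.conj().T @ Lam @ U)   # fixed state, rotated observable
assert np.isclose(lhs, rhs)
```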
The first column plots the ground truth value $\lambda^{(g)}$ for each trial, the next three columns plot the shadow and BME estimates for increasing numbers of measurements, and the final column presents the MSE with respect to the ground truth. Now BME returns much more accurate estimates than shadow on average, and the paradox regarding Bayesian optimality is resolved: the Bayesian mean gives the lowest MSE as long as the prior accurately reflects the true uncertainty of the system under test. Accordingly, this BME study clarifies an underlying assumption in selecting observables in Picture 1: being able to ``guess'' an observable with such high overlap to the ground truth suggests that one is not really operating under the neutrality implied by a uniform prior; an informative prior would more accurately reflect the situation. This observation brings to light an interesting question of motivation in a given quantum experiment. In the sense of ensuring that any estimate is adequately justified by the data, the idea of ``baking in'' a prior favoring some subset of quantum states is undesirable. And yet, in many situations the researcher \emph{does} have strong beliefs---or at least hopefulness---about the state being prepared, and wants to verify this by computing an observable, such as fidelity, where it is desired that $\lambda^{(g)} \sim 1$. In this case, one wishes to validate such high values quickly with few measurements, but likely does not care so much about how well the procedure can estimate the ground truth when it is \emph{low} (e.g., when $\lambda^{(g)}\sim \frac{1}{D}$), since this situation suggests a poorly prepared state anyway. Accordingly, errors are felt more strongly in situations with $\lambda^{(g)} \gg \frac{1}{D}$ than in those with $\lambda^{(g)} \sim \frac{1}{D}$, an asymmetry not captured by the standard MSE as expressed in Eq.~(\ref{eq:opt}).
And as shown in our tests here, it is precisely these improbable situations wherein shadow excels over BME. Thus our simulations reveal one surprising reason classical shadows are so powerful: they perform well within those subspaces of the entire Hilbert space which are of interest to a high-fidelity system. \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig3.png} \caption{Bayesian inference results utilizing the pseudo-likelihood in Eq.~(\ref{eq:PL1}) for (a) $D=32$ and (b) $D=256$. The overlap with shadow, $\Tr\rho_B\rho_s$, is plotted in (c) for $D=32$ and (d) for $D=256$.} \label{fig3} \end{figure*} \subsection*{Emulating Classical Shadows with BME} \label{sec:emulate} \textit{Pseudo-Likelihood Formulation.} The dimension-independence and rapid convergence of classical shadows for cases of interest indicate the value of a Bayesian version with similar features, both to gain further insight into shadow itself and to improve thereon by ensuring only physically acceptable states. A simple approach for custom Bayesian models, gaining traction in ``probably approximately correct'' (PAC) learning~\cite{Guedj2019}, proposes use of a pseudo-likelihood that rates a prospective state's suitability through a cost function, instead of a full likelihood based on a physical model. In quantum state tomography in particular, quadratic costs of the form $\lVert \rho-\tilde{\rho} \rVert_F^2$ have been explored~\cite{Mai2017, Lukens2020}, where $\tilde{\rho}$ signifies some point estimator and $\lVert A \rVert_F=\sqrt{\Tr A^\dagger A}$ the Frobenius norm. Therefore, to obtain a physical state with properties similar to $\rho_s$, we first suggest the pseudo-likelihood \begin{equation} \label{eq:PL1} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \exp\left(- \frac{K}{2} \lVert \rho(\mathbf{x})-\rho_s \rVert_F^2\right). \end{equation} The constant $K$ establishes the relative weight of prior and likelihood.
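A minimal sketch of this pseudo-likelihood for the pure-state parameterization (the function name is ours):

```python
import numpy as np

def pseudo_likelihood(x, rho_s, K):
    """Frobenius-norm pseudo-likelihood of Eq. (PL1), pure-state params x."""
    rho = np.outer(x, x.conj()) / np.vdot(x, x).real
    diff = rho - rho_s
    return np.exp(-0.5 * K * np.trace(diff.conj().T @ diff).real)

# When rho(x) equals rho_s exactly, the pseudo-likelihood is maximal (1);
# any mismatch is penalized exponentially in the squared Frobenius distance.
x = np.array([1.0, 1j]) / np.sqrt(2)
rho_exact = np.outer(x, x.conj())
assert np.isclose(pseudo_likelihood(x, rho_exact, K=100.0), 1.0)
assert 0 < pseudo_likelihood(x, np.eye(2) / 2, K=10.0) < 1
```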
Previously, we suggested $K=M$ for reasonable uncertainty quantification~\cite{Lukens2020}; here we consider $K=MD$ to impart dimension-independence. (Incidentally, we have found no significant modifications to the results below when testing with $K\gg MD$.) Figure~\ref{fig3}(a) and (b) show the BME results obtained for $D=32$ and $D=256$, respectively, where we again return to Picture 1 with fixed ground truth for all trials. For the tests here, thinning of $T=2^8$ ($T=2^{10}$) is used for the $D=32$ ($D=256$) MCMC chains. Compared to the shadow results of Fig.~\ref{fig1}, the BME predictions still do not reach ground truth values for $\lambda_0$ and $\lambda_1$ efficiently. This proves intriguing, since $\lVert \rho-\rho_s \rVert_F^2$ with $\rho=\ket{\psi}\bra{\psi}$ is minimized precisely by states for which $\braket{\psi|\rho_s|\psi}$ is large. So if $\lambda_0^{(s)}=\braket{g|\rho_s|g}\sim 1$ (cf. Fig.~\ref{fig1}), it is odd that predictions from a BME procedure favoring states that maximize $\braket{\psi|\rho_s|\psi}$ look so different for $D=256$. The origin of this discrepancy, however, lies in $\rho_s$'s nonphysicality. \begin{figure*}[tb!] \centering\includegraphics[width=2\columnwidth]{fig4.png} \caption{Bayesian estimation using the pseudo-likelihood of Eq.~(\ref{eq:PL2}) with $N=3$. (a) Results for $D=32$. (b) Results for $D=256$. The MSE values for shadow from Fig.~\ref{fig1} are reproduced for comparison.} \label{fig4} \end{figure*} Plotting the average overlap between shadow and Bayesian samples ($\Tr\rho_B\rho_s$) in Fig.~\ref{fig3}(c) and (d), we find that $\rho_B$ overlaps with $\rho_s$ \emph{more strongly than with the ground truth} $\rho_g=\ket{g}\bra{g}$. Because $\rho_s$ is not positive semidefinite, $\Tr\rho_B\rho_s > 1$ for all cases examined. Thus the BME procedure succeeds in finding states with strong overlap to the shadow, but the closest physical state to $\rho_s$ is not the ground truth, even though $\braket{g|\rho_s|g}\sim 1$.
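This nonphysicality can be seen directly in a single-snapshot sketch. For global Haar-random (or Clifford) measurements, one shadow snapshot takes the form $\hat{\rho} = (D+1)\ket{\phi}\bra{\phi} - I$, with $\ket{\phi}$ the back-rotated measured basis state~\cite{Huang2020}; such a snapshot is unit-trace but never positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8

# Back-rotated measured basis state |phi> (uniformly random here)
phi = rng.standard_normal(D) + 1j * rng.standard_normal(D)
phi /= np.linalg.norm(phi)

# Single-snapshot shadow estimator: rho_hat = (D+1)|phi><phi| - I
rho_hat = (D + 1) * np.outer(phi, phi.conj()) - np.eye(D)

eigs = np.linalg.eigvalsh(rho_hat)
assert np.isclose(np.trace(rho_hat).real, 1.0)  # unit trace, as required
assert eigs.min() < 0                           # not positive semidefinite
```

The spectrum is $D$ along $\ket{\phi}$ and $-1$ on the orthogonal complement, so every snapshot has negative eigenvalues.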
Intuitively, this nonphysicality helps explain why observables with highly improbable values from the Bayesian view are estimated so much more efficiently with shadow. For a parameterization over physical states and rank-1 observable $\Lambda$, only a single state in the Hilbert space attains $\lambda=1$, and since this represents the maximum value possible for any valid quantum state, it can only be approached from below. On the other hand, a continuum of shadow estimators $\rho_s$ permits $\lambda=1$, for $\rho_s$ is constrained only by Hermiticity and unit trace---not positive semidefiniteness. Therefore the estimate $\lambda^{(s)}$ can err on either the high or low side (cf. Fig.~\ref{fig1}), pulling the shadow more rapidly to the ground truth in these extreme cases. This discloses the second central finding of our investigation: the nonphysicality of $\rho_s$ is not a deficiency, but rather critical to obtaining dimension independence. Thus the key features of the shadow are not necessarily translated onto physical projections like the BME model here~\footnote{As an additional check, we performed the algorithm of Ref.~\cite{Smolin2012} to determine the closest physical density matrix to $\rho_s$, finding very similar results as in Fig.~\ref{fig3}. This indicates that our projection conclusions are not an artefact of the pure state prior, but hold for general mixed states as well.} or, for that matter, alternative projected-least-squares approaches~\cite{Smolin2012, Guta2020}. While strange from the standpoint of the conventional wisdom of maximum likelihood and Bayesian mean estimation, nonphysical states are actually beneficial for classical shadows. \textit{Observable-Oriented Pseudo-Likelihood.} Deriving a positive semidefinite model emulating classical shadows remains an intriguing question, however, in order to eliminate unphysical estimates while retaining the favorable scaling features.
Since projecting directly onto $\rho_s$ has proven unfruitful to this end, we note that $\rho_s$ was never intended to serve as an accurate substitute for the true $\rho_g$; instead, it facilitates estimates of observables~\cite{Huang2020}. Accordingly, we propose the ``observable-oriented pseudo-likelihood'' \begin{equation} \label{eq:PL2} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \exp\left(-\frac{K}{2} \sum_{n=0}^{N-1} \left|\Tr\rho(\mathbf{x})\Lambda_n - \lambda_n^{(s)}\right|^2 \right), \end{equation} where we insert the estimates $\lambda_n^{(s)}$ of $N$ observables from $\rho_s$. This formalism ensures that only physical values are returned [through the prior $\pi_0(\mathbf{x})$], and rates the fitness of proposed states through their agreement with shadow's predictions of the observables only. For dimension-independence, we again set $K=MD$ and perform BME for all simulated datasets above with $N=3$, thinning to $T=2^{10}$ ($T=2^{13}$) for $D=32$ ($D=256$). The results follow in Fig.~\ref{fig4}. Now BME shows very similar behavior to shadow: the MSE with respect to the ground truth matches the shadow results from Fig.~\ref{fig1} closely, though BME still outperforms shadow for $\lambda_2$. Yet unlike shadow, BME here always gives physically permissible estimates ($\lambda_n^{(B)}\in[0,1]$). This pseudo-likelihood therefore attains the goal of a BME model commensurate with classical shadows. Yet it is important to emphasize that this approach depends heavily on the quality of the classical shadow. It refines estimates from the shadow with its positive semidefinite requirement, but it does not do markedly better at estimating the ground truth state---at least for arbitrary observables.
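For concreteness, the observable-oriented pseudo-likelihood of Eq.~(\ref{eq:PL2}) can be sketched as follows (the function name is ours):

```python
import numpy as np

def observable_pseudo_likelihood(x, Lambdas, lam_shadow, K):
    """Observable-oriented pseudo-likelihood of Eq. (PL2)."""
    rho = np.outer(x, x.conj()) / np.vdot(x, x).real
    cost = sum(abs(np.trace(rho @ L).real - l) ** 2
               for L, l in zip(Lambdas, lam_shadow))
    return np.exp(-0.5 * K * cost)

# A state reproducing every shadow estimate exactly attains the maximum (1).
D = 4
x = np.zeros(D, dtype=complex)
x[0] = 1.0
Lambdas = [np.diag([1.0, 0, 0, 0]), np.diag([0, 1.0, 0, 0])]
assert np.isclose(observable_pseudo_likelihood(x, Lambdas, [1.0, 0.0], K=40.0), 1.0)
```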
As an example, we repeat the inference procedure for an observable-oriented pseudo-likelihood based solely on $\Lambda_1$, i.e., \begin{equation} \label{eq:PL3} L_{\bm{\mathcal{D}}}(\mathbf{x}) = \exp\left(-\frac{K}{2} \left|\Tr\rho(\mathbf{x})\Lambda_1 - \lambda_1^{(s)}\right|^2 \right), \end{equation} which has ground truth value $\lambda_1^{(g)}=\frac{1}{2}$. Results for the $D=32$ case appear in Fig.~\ref{fig5}, where we plot the Bayesian estimates for all three observables even though the pseudo-likelihood is based on $\lambda_1$ only. The estimate $\lambda_1^{(B)}$ closely matches shadow as designed, and $\lambda_2^{(B)}$ agrees with the ground truth well, due to the fact that its value is highly probable for a uniform prior. But $\lambda_0^{(B)}$ converges to $\sim\frac{1}{4}$, far from $\lambda_0^{(g)}=1$. When using the pseudo-likelihood above, all quantum states with identical overlap to $\Lambda_1$ are equally probable, of which the ground truth $\rho_g$ represents just one possibility. The estimate of $\lambda_0$ given only $\lambda_1$ information reflects the inherent uncertainty within this specification. So to summarize, our observable-oriented pseudo-likelihood builds physicality into shadow, yet it can only (in general) accurately predict the $N$ observables injected into it: inferring quantities beyond these $N$ can prove unreliable. \begin{figure}[tb!] \centering\includegraphics[width=\columnwidth]{fig5.png} \caption{Bayesian inference results employing the pseudo-likelihood in Eq.~(\ref{eq:PL3}), for $D=32$. The shadow MSE values from Fig.~\ref{fig1} are reprinted for clarity.} \label{fig5} \end{figure} \section*{Discussion} \label{sec:disc} Our numerical investigations here have elucidated two fascinating features of classical shadows: \begin{enumerate} \item Classical shadows perform extremely well at predicting ``unlikely'' observables, i.e., those which obtain high values only on a restricted subset of states within the complete Hilbert space.
\item The nonphysicality of classical shadows is critical to their dimension-independence and accuracy under few measurements. \end{enumerate} These findings do not contradict the optimality of Bayesian methods expressed in Eq.~(\ref{eq:opt}): BME with a full likelihood minimizes MSE for any number and collection of measurements, provided the prior distribution accurately reflects the true knowledge involved. The predictive power of $\rho_s$, then, derives from the fact that the situations in which it is much more accurate than BME are often of particular interest in practice, such as verification of a high-fidelity or highly entangled quantum state. Desiring to extend these features in the Bayesian context, we proposed an observable-oriented pseudo-likelihood that attains shadow's dimension-independence and state-specialized accuracy, with the advantage of guaranteed physicality. Nonetheless, in all these explorations there remains one prominent sense in which classical shadows unquestionably eclipse BME: computational efficiency. The shadow estimator $\rho_s$ is formed directly from measurements for any dimension $D$, whereas computing $\rho_B$ requires tedious MCMC methods, with the number of parameters increasing linearly (quadratically) with $D$ for a pure (mixed) state prior. Here we considered up to $D=256$, a far cry from the $D=2^{120}$ example in Ref.~\cite{Huang2020}, where there is no hope for BME with a parameterization such as ours. Moving forward, it would therefore seem profitable to explore simplified Bayesian models that maintain a fixed parameter dimensionality even as the Hilbert space grows exponentially. For example, if one could specify a prior and likelihood on an observable $\lambda$ only, to the effect of $\pi(\lambda)\propto L_{\bm{\mathcal{D}}}(\lambda)\pi_0(\lambda)$, the inference procedure would not be limited directly by exponentially large Hilbert spaces.
In this way, Bayesian methods could be extended to the types of quantum systems sought for practically useful quantum computation. Overall, our analyses have revealed the value of BME as a tool for shedding light on estimation procedures which formally have no connection to the Bayesian paradigm. The numerical simulations here reveal the complementary strengths of classical shadow and Bayesian tomographic approaches in the efficient estimation of quantum properties. And so we expect valuable opportunities for both methods as quantum information processing resources continue to mature in size and complexity. \section*{Methods} \subsection*{Data Simulation Approach} The method of classical shadows introduced in Ref.~\cite{Huang2020} involves application of a Haar-random (or effectively Haar-random) unitary $U$ followed by measurement in the computational basis. We exploit the fact that our target state is pure to substantially reduce the complexity of simulating this procedure. In particular, our simulation method requires the generation of only size-$D$ random vectors rather than $D\times D$ random unitaries. Without loss of generality we work in a rotated basis such that the first basis state coincides with the ground truth: $\rho_g = \ket{0}\bra{0}$. Then the probability of observing outcome $j$ depends only on $|\braket{j|U|0}|^2 = |U_{j0}|^2 = |(U^\dagger)_{0j}|^2$. That is, the distribution of outcomes depends only on the first row of $U^\dagger$. Now, when $U$ is Haar-random, each individual row and column of $U^\dagger$ is a uniformly distributed length-1 vector $u$. Furthermore, given any component $u_j$, the remaining components form a uniformly distributed vector of length $\sqrt{1-|u_j|^2}$. A uniformly random vector $u$, corresponding to the first row of $U^\dagger$, may be obtained by generating $D$ complex normal random values and normalizing them to yield a unit-length vector.
An outcome $n\in \{0,1,\ldots,D-1\}$ is then chosen with probability $|u_n|^2$. This selects the $n$th column of $U^\dagger$. Since this column (whichever it is) is uniformly distributed, its remaining elements are uniformly distributed with length $\sqrt{1-|u_n|^2}$. The explicit procedure is as follows: \begin{enumerate} \item Posit a measurement unitary $U_m^\dagger = [\tilde{\varphi}_0 \cdots \tilde{\varphi}_{D-1}]$, where each $\tilde{\varphi}_n$ is a column vector corresponding to one of the $D$ possible output states. \item Generate $D$ complex normal samples $w_n \stackrel{\textrm{i.i.d.}}{\sim}\mathcal{N}(0,1) + i\mathcal{N}(0,1)$ and normalize \begin{equation} \label{eq:row} u_n = \frac{w_n}{\sqrt{\sum \limits_{n^\prime=0}^{D-1} |w_{n^\prime}|^2}}. \end{equation} These define projections of the unitary's basis states on the ground truth: $u_n = \braket{0|\tilde{\varphi}_n}$, or in other words, the elements in the first row of $U_m^\dagger$. \item Select an integer $n\in\{0,1,...,D-1\}$ at random with probability $|u_n|^2$. This implies that the state $\tilde{\varphi}_n$ is detected. \item Generate $D-1$ complex normal samples $v_j \stackrel{\textrm{i.i.d.}}{\sim}\mathcal{N}(0,1) + i\mathcal{N}(0,1)$ ($j=1,2,...,D-1$). These set the remaining coefficients of the detected state $\tilde{\varphi}_n$. \item Finally, take \begin{equation} \label{eq:col} \ket{\psi_m} = u_n\ket{0} + \sqrt{ \frac{1-|u_n|^2} {\sum\limits_{j^\prime=1}^{D-1} |v_{j^\prime}|^2}} \sum_{j=1}^{D-1} v_j \ket{j} \end{equation} as the measured state. \end{enumerate} Utilizing this method, we performed 50 independent trials with 1000 measurements each, for Hilbert space dimensions $D=32$ and $D=256$, giving a total of 100 datasets which are used in all subsequent tests above. The two values of $D$ were selected specifically to clarify how classical shadows and BME differ in their scaling with dimension. 
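The five-step procedure translates directly into code; the following is a hedged NumPy sketch of ours, not the authors' released implementation:

```python
import numpy as np

def simulate_measurement(D, rng):
    """One simulated measurement on rho_g = |0><0| (steps 1-5 above)."""
    # Step 2: first row of U^dagger, i.e., u_n = <0|phi_n>
    w = rng.standard_normal(D) + 1j * rng.standard_normal(D)
    u = w / np.linalg.norm(w)
    # Step 3: draw outcome n with probability |u_n|^2
    n = rng.choice(D, p=np.abs(u) ** 2)
    # Step 4: remaining coefficients of the detected state
    v = rng.standard_normal(D - 1) + 1j * rng.standard_normal(D - 1)
    # Step 5: assemble the measured state |psi_m> per Eq. (eq:col)
    psi = np.empty(D, dtype=complex)
    psi[0] = u[n]
    psi[1:] = np.sqrt((1 - np.abs(u[n]) ** 2) / np.sum(np.abs(v) ** 2)) * v
    return psi

rng = np.random.default_rng(4)
psi = simulate_measurement(32, rng)
assert np.isclose(np.linalg.norm(psi), 1.0)  # a valid unit vector
```

By construction, $|\psi_m|^2 = |u_n|^2 + (1-|u_n|^2) = 1$, so each simulated measured state is automatically normalized.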
\vspace{-0.1in} \section*{Acknowledgments} \vspace{-0.15in} This work was funded by the U.S. Department of Energy, Office of Advanced Scientific Computing Research, through the Quantum Algorithm Teams and Early Career Research Programs. This work was performed in part at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725.
\section{Compilation: Generating Monitors}\label{sec:compilation} While interpretation of specifications enables rapid prototyping, the interpreter's logic is far more complex than that of a compiled monitor, while at the same time delivering subpar performance. This renders compilation preferable. Additionally, the compiler can inject additional information into the generated code. Such annotations can benefit the certification process of the CPS either by providing a notion of traceability, or by outright enabling the static verification of the monitor. The target platform of the compilation can be either hardware or software, both coming with advantages and drawbacks. In this section, I will present and discuss a hardware compilation for \rtlola specifications, and a software compilation for \lola, i.e.\@\xspace, a subset of \rtlola, with verification annotations. \subsection{RTLola on FPGA} Realizing an \rtlola specification on hardware has several advantages. First, the hardware monitor does not share resources with the controller of the system apart from power, eliminating potential negative interference. Moreover, special purpose hardware tends to be smaller, lighter, and require less energy than its general purpose counterparts. Second, hardware enables parallel computations at minimal cost. This synergizes well with \rtlola, where output streams within the same evaluation layer can be evaluated in parallel. The realization of \rtlola on hardware~\cite{fpgalola} works in two steps: an \rtlola specification is translated into \vhdl code, out of which an \fpga (field-pro\-gram\-ma\-ble gate array) implementation is synthesized. The synthesis provides additional static information regarding the required board size in terms of memory\footnote{The hardware realization might require temporary registers and working memory. This can slightly increase the computed memory consumption.} and lookup tables.
This allows for validating whether the available hardware suffices to host the monitor. Moreover, the synthesis indicates the idle and peak power consumption of the monitor, information that is invaluable when integrating the monitor into the system. The aforementioned advantages are not only valid for \fpga but also for other hardware realizations such as application-specific integrated circuits (\asic) and complex programmable logic devices (\cpld). While \asic have significant advantages over \fpga when it comes to mass-producibility, power consumption, and performance, \fpga are preferable during the development phase as they are orders of magnitude cheaper, have a lower entry barrier, and allow for rapid development. \cpld, on the other hand, are simply too small to host realistic, non-trivial specifications. \subsubsection{Hardware Realization} Managing periodic and event-based streams under a common hardware clock poses the key challenge when realizing an \rtlola monitor in hardware. Yet, this distinction only affects the part of the monitor logic deciding \emph{when} to evaluate stream expressions; the evaluation itself is agnostic to it. For this reason, the monitor is split into two modules. The \emph{high-level controller} (\hlc) is responsible for scheduling evaluations, i.e.\@\xspace, for deciding when and which streams to evaluate. It passes the information down to the second module, the \emph{low-level controller} (\llc), which is responsible for managing the evaluation. A \textsc{fifo} queue between the controllers buffers information sent from the \hlc to the \llc. Recall the specification from \Cref{sec:specification} checking for strong deviations in readings of two velocimeters. As a running example for the hardware compilation, we extend the specification by the following declarations. \begin{lstlisting}
output avg_dev @10mHz := dev.aggregate(over: 10min, using: avg)
trigger avg_dev > 4 "High average deviation."
\end{lstlisting} The specification checks whether two velocimeters disagree strongly over three consecutive measurements and whether the average disagreement is close to the disagreement threshold. Note that all streams are event-based except for \lstinline{avg_dev} and the second trigger. \Cref{fig:schematic:overall} illustrates the overall structure of the monitor. As can be seen, the \hlc accepts input events from the monitored system. Such an event has a fixed size of $2\cdot(32+1)$ bits, i.e.\@\xspace, 32 bits per stream due to the input types plus an additional bit per stream to indicate whether the stream received an update. For periodic streams, the \hlc has access to the system clock. Based on the current time and the arrival of events, the \hlc triggers evaluation cycles by sending the relevant information to the queue while raising the \signal{push} signal. Such a data packet consists of $2\cdot(32+1) + 64 + 5$ bits. The first component of the sum represents the potential input event. If the evaluation only serves to update periodic streams, these bits will all be 0. The following 64 bits contain the timestamp of the evaluation, crucial information for the computation of sliding window expressions. The last 5 bits each represent an output stream or trigger and indicate whether the respective entity is affected by the evaluation cycle. As a result, the \llc does not have to distinguish between event-based and periodic streams; it merely has to evaluate all streams the \hlc marked as affected. The communication between the queue and the \llc consists of three data lines: the \signal{pop} bit is set by the \llc and triggers the queue to send another data packet down to it --- provided the \signal{empty} bit is 0. In this case, the queue puts the oldest evaluation information on the \signal{dout} wires. Internally, the \llc consists of two state machines. The first one handles the communication with the queue.
While the first machine resides in the \textit{eval} state, the second state machine manages the evaluation. To this end, it cycles through different states, each representing an evaluation layer. The first state (``1'') copies the information about input stream updates into the respective memory region. In each consecutive state, the monitor enables the modules responsible for evaluating the respective stream expressions by raising the \signal{enable} bit. It then waits on the \signal{done} bits. Upon receiving all of them, the monitor proceeds to the next state. During this process, the outputs of trigger expressions are not persisted locally, but directly piped down to the monitored system. \subsubsection{Resource Consumption} When compiling the specification into \vhdl and realizing it on a \textsc{zynq-7 zc702} Evaluation Board, using the Vivado Design Suite\footnote{https://www.xilinx.com/products/design-tools/vivado.html}, the hardware synthesizer provides information regarding the overall resource consumption. In this case, the monitor requires 10,700 lookup tables and 1735 bits of memory. The energy consumption amounts to \SI{144}{\micro\watt} when idle, and \SI{1.699}{\watt} under peak load. Even though the specification is rather small, it gives a glimpse at how low the resource consumption actually is. Baumeister~\etal~\cite{rtlolacavindustrial} successfully synthesized larger specifications designed for autonomous aircraft on the same \fpga. \begin{figure}[t] \include{figures/overall} \caption{Illustration of the hardware realization of an \rtlola monitor. It is composed of two components connected by a queue. The \hlc receives inputs from the system and manages the timing of periodic and event-based streams. The \llc controls the evaluation process with a state machine where each state represents an evaluation layer of the underlying specification.
The \llc passes violations of safety properties on to the system.} \label{fig:schematic:overall} \end{figure} \subsection{Lola to Rust} While a compilation of \rtlola specifications into software is a topic for future work, a compilation for \lola does exist, presented by Finkbeiner~\etal~\cite{lolatorust}. \lola~\cite{lola} is a synchronous and discrete-time subset of \rtlola. As such, it does not have a notion of real-time, thus neither sliding windows nor periodic streams are an issue. Moreover, \lola assumes all input streams to receive new values at the same time, prompting all output streams to be extended as well. This renders sample and hold accesses obsolete. \lola does, however, allow for future lookups, i.e.\@\xspace, a stream may refer to the \emph{next} value of another stream. The example specification for the software compilation is another mild variation of the velocimeter cross-validation from \Cref{sec:specification}. The modification replaces the \lstinline{lasting_dev} stream by the following: \begin{lstlisting}
output lasting_dev := dev > 5 $\land$ dev.offset(by: +1, default: 0) > 5
                             $\land$ dev.offset(by: -1, default: 0) > 5
\end{lstlisting} Here, \lstinline{lasting_dev} accesses the last, current, and \emph{next} value of \lstinline{dev}. The compilation presented in this section translates the \lola specification into Rust\footnote{https://www.rust-lang.org/} code that enables a static verification. Rust as a target language comes with several advantages. First, as a system language with an \textsc{LLVM}\footnote{https://llvm.org/} backend, it is compatible with a wide array of platforms. Secondly, a key paradigm of the language is to enforce static checks on the code and thus reduce dynamic failures. This goes hand in hand with the goal of verifying the functional correctness and absence of dynamic failures of the generated monitor.
Lastly, Rust allows for fine-grained control over low-level constructs such as memory management, enabling the programmer --- or, in this case, the \lola compiler --- to write highly performant code. The compiler injects verification annotations into the code as well. This enables the static verifier \viper to prove functional correctness of the monitor in two steps. First, it relates the working memory of the monitor to the semantic model of \lola. The key challenge here is that the semantics of \lola argues about infinite data sequences while the monitor operates on a finite working memory. Next, the verification establishes that the verdict of the monitor, i.e.\@\xspace, the boolean indicator of whether a trigger should go off, is correct given the current state of the working memory. In combination, we can conclude that the monitor only emits an alarm if the semantic model demands it. \subsubsection{Dealing with Fallible Accesses} While future offsets provide a natural way to specify temporal dependencies, the monitor has to compensate for them by delaying the evaluation of the accessing streams. Thus, the evaluation of \lstinline{lasting_dev} needs to be delayed by one step since it accesses a future value of \lstinline{dev}. This delay is propagated through the dependency graph: the trigger transitively accesses a future value, so its evaluation needs to be delayed, too. With the delay operation in place, accesses via a future offset will always succeed up until the system terminates and thus no longer produces new inputs. In this case, the monitor continues to evaluate delayed streams until they have the same length as the input streams. This phase is the \emph{postfix} phase of the monitor execution. Here, future offsets fail because the accessed values do not exist and never will. Similarly, past offsets fail at the beginning of the monitor execution, the \emph{prefix}.
In the very first iteration of the monitor, only the inputs and \lstinline{dev} can be evaluated; the other output stream and the trigger are delayed. In the next iteration, the input is updated and all output streams and the trigger are evaluated. Evaluating \lstinline{lasting_dev} accesses both values of \lstinline{dev}. In addition, the past lookup refers to the $-1$st value of \lstinline{dev}, a value that will never exist. Thus, the monitor statically substitutes the access with the default value. Clearly, the monitor goes through three phases: a prefix, in which past offsets fail unconditionally, a loop phase, in which both past and future offsets succeed unconditionally, and a postfix phase, in which future offsets fail unconditionally. In light of this, the compiler faces a trade-off: it can generate a general-purpose loop containing conditional statements resolving offsets dynamically, or it can take the three phases into account by generating code specific to each of them. The former option contains conditional statements not found in the original specification, resulting in far less performant code. The latter option, however, requires more code, resulting in a larger executable file. The compiler outlined in this section opts for the latter option. \begin{figure}[t] \include{figures/rustcomponents} \vspace{-0.8cm} \captionof{lstlisting}{Structure of the generated \rust code. The prelude is highlighted in orange, the monitor loop in blue, the execution prefix in green, and the execution postfix in violet.}\label{fig:monitorstructure} \end{figure} \Cref{fig:monitorstructure} illustrates the abstract structure of the generated Rust code. The first portion of the code is the \emph{prelude}, containing declarations for data structures and I/O functions. Most notably, the \lstinline{Memory} struct represents the monitor's working memory and is of a fixed size. For this, it utilizes the memory analysis from \Cref{sec:static:analyses}.
Note also that the \lstinline{get_input} function returns an optional value: either it contains new input data or it indicates that the system terminated. The \lstinline{main} function is the entry point of the monitor. It allocates the working memory and transitions to the prefix. Here, the monitor code contains a static repetition of code checking for a new input and evaluating all streams. In the evaluation, stream accesses are either translated into immediate accesses to memory or substituted by constant default values. The \lstinline{prefix} function returns a boolean flag indicating whether the system terminated before the prefix was completed. If so, the \lstinline{main} function jumps to the postfix immediately. Otherwise, the main monitor loop begins, following the same scheme as the prefix: retrieve new input values, commit them to memory, evaluate streams, repeat until the system terminates. Note that all stream accesses during the monitor loop translate to accesses to the working memory. Lastly, the \lstinline{main} function triggers the computation of the postfix. Its structure is similar to the prefix except that it does not check for new input values. The evaluation logic for streams is a straightforward translation of the \lola specification, as conditional statements, constants, and arithmetic functions are syntactically and semantically almost identical in \lola and Rust. Only stream accesses require special attention as they boil down to accesses to the \lstinline{Memory} struct. Lastly, the compilation has to order the evaluation of streams to comply with the evaluation order from \Cref{sec:static:analyses}. Streams in the same layer of the evaluation order can be ordered arbitrarily or evaluated in parallel. The latter incurs a significant runtime overhead and only pays off if the computational complexity of the stream expressions is sufficiently high.
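The structure described above can be condensed into a small, self-contained sketch. The types, the stream logic, and the input source below are simplified stand-ins --- the actual generated code is specialized to the specification and carries verification annotations --- but the division into a \lstinline{Memory} struct, a fallible \lstinline{get_input}, a prefix with default substitution, and a monitor loop with direct memory accesses follows the description:

```rust
// Minimal sketch of the generated monitor structure (illustrative
// stand-in, not actual compiler output). The stream logic mimics a
// trigger firing when a deviation exceeds 5 twice in a row.

/// Fixed-size working memory; here: the two latest values of one stream.
struct Memory {
    dev: [i64; 2],
    len: usize,
}

impl Memory {
    fn new() -> Self {
        Memory { dev: [0; 2], len: 0 }
    }
    fn push(&mut self, v: i64) {
        self.dev[1] = self.dev[0];
        self.dev[0] = v;
        self.len += 1;
    }
    /// Offset access: in the prefix, missing values are substituted by
    /// the default; in the loop phase the access always hits memory.
    fn offset(&self, n: usize, default: i64) -> i64 {
        if self.len > n { self.dev[n] } else { default }
    }
}

/// Stand-in for the generated input function: `None` signals termination.
fn get_input(src: &mut impl Iterator<Item = i64>) -> Option<i64> {
    src.next()
}

/// Trigger condition: the deviation exceeded the threshold twice in a row.
fn evaluate(mem: &Memory) -> bool {
    mem.offset(0, 0) > 5 && mem.offset(1, 0) > 5
}

fn main() {
    let mut src = vec![3i64, 7, 9, 2].into_iter();
    let mut mem = Memory::new();
    let mut alarms = 0;
    // Prefix: the first iteration, where past offsets fall back to defaults.
    if let Some(v) = get_input(&mut src) {
        mem.push(v);
    }
    // Monitor loop: retrieve input, commit to memory, evaluate, repeat.
    while let Some(v) = get_input(&mut src) {
        mem.push(v);
        if evaluate(&mem) {
            alarms += 1;
        }
    }
    println!("alarms: {}", alarms);
}
```

On the input sequence `3, 7, 9, 2`, only the pair `7, 9` satisfies the condition, so a single alarm is raised.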
\subsubsection{Verification} The compilation injects verification annotations into the Rust code to enable the Prusti~\cite{prusti} plugin of the Viper framework~\cite{viper} to verify functional correctness of the monitor. The major challenge here is to verify that the finite memory available to the monitor suffices to accurately represent the infinite evaluation model of \lola. The compilation achieves this by introducing a dynamically growing list of values, the ghost memory. With this, the correctness proof proceeds in two steps. First, whenever the monitor receives or computes a new value, it commits it both to the ghost memory and the working memory. Here, the working memory acts as a ring buffer: as soon as its capacity is reached, the addition of a new value overwrites and thus evicts the oldest value. Therefore, the working memory is an excerpt of the ghost memory and thus of the evaluation model. Hence, the computation of new values is valid with respect to the evaluation model because memory accesses yield the same values as the evaluation model would. The newly computed, correct values are then added to the memory, concluding the inductive proof of memory compliance. Second, the verdict of a trigger in the theoretical model needs to be equivalent to the one of the concrete monitor realization. This amounts to proving that the trigger condition was translated properly. Here, memory accesses are particularly interesting because the theoretical computation uses entries of the ghost memory whereas the realization accesses the working memory only. The agreement of the ghost memory and the working memory on the respective excerpts concludes this proof. Note that for the monitor the ghost memory is write-only, whereas the verification procedure ``reads'' it, i.e.\@\xspace, it refers to its theoretical values. The evaluation logic of the monitor uses data from the system to compute output values.
Different components of the verification then access these values either directly or via the ghost memory (GM). Clearly, the information flow is unidirectional: information flows from the monitor to the verification but not vice versa. As a result, upon successful verification, the ghost memory can safely be removed from the realization. \subsubsection{Conclusion} Not only does the compilation from \lola to Rust produce performant runtime monitors, the injection of verification annotations also answers the question \textit{``Quis custodiet ipsos custodes?''}\footnote{\textit{``Who will guard the guards themselves?''}}, rendering it an important step in the direction of verified runtime monitors. The applicability of \lola for CPS is limited to high-level properties where neither asynchrony nor real-time requirements play a significant role. Further research in this direction, especially relating to \rtlola, can significantly increase the value and practical relevance of the compilation. \section{Conclusion}\label{sec:conclusion} In this paper, I provided an overview of recent work on the development of a runtime monitor for cyber-physical systems from design to integration. The process can roughly be divided into three phases. In the specification phase, specifiers transform natural language descriptions of properties into a suitable formal specification language. Such properties range from low-level properties validating a single sensor to high-level properties overseeing the quality of the entire mission. Type checking and validation based on log data from previous or simulated missions increase confidence in the specification. The compilation phase transforms the specification into an executable software or hardware artifact, potentially injecting annotations to enable static verification of the monitor. This process can help increase the effectiveness of the monitor, which directly translates into safer systems.
\section{Integration and Post-Mortem}\label{sec:integration} A key task when integrating a monitor into a CPS is finding a suitable spot in the system architecture. Improper placement can lead to ineffective monitoring or negative interference, jeopardizing the safety of the entire system. \subsection{Integration} The integration is considerably easier when the architecture is not yet fully determined. When adding the monitor retroactively, only minimal changes to the architecture are possible. Larger changes would render previous tests void since the additional component might physically change the system, e.g.\@\xspace in terms of weight distribution and power consumption, or logically offset the timing of other components. Consider, for example, a monitor that relies on dedicated messages from the controller as its input data source. If the processor of the controller is already almost fully utilized, the additional communication can lead to the controller missing deadlines. This can lead to safety hazards, as timing is critical for the safe operation of a CPS. Taking the placement of the monitor into account early on increases the degrees of freedom, which helps avoid such problems. The amount of interference the monitor imposes on the system also depends on the method of instrumentation. Non-intrusive instrumentation such as bus snooping grants the monitor access to data without affecting other modules. The effectiveness of this approach hinges on the amount of data available on the bus. Consider, for example, the system architecture of the autonomous aircraft superARTIS of the German Aerospace Center (DLR) depicted in \Cref{fig:superartisplusarch}. When integrating a monitor for the optical navigation rail into the aircraft~\cite{rtlolacavindustrial}, the monitor was placed near the logging component. By design, the logger had access to all relevant data.
This enabled monitoring of properties from the entire spectrum: The specification contained single-sensor validation, cross-validation, and geo-fence compliance checks. Note that in this particular case, the utilization of the logger was low. This allowed it to forward the information from the bus to the monitor. In a scenario where performance is critical, snooping is the preferable option. \begin{figure}[t] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\textwidth]{figures/artiscropped.eps} \caption{An image of the superARTIS aircraft.} \label{fig:superartis} \end{subfigure} \hfill \begin{subfigure}{.45\textwidth} \include{figures/architecture} \caption{A schematic of the superARTIS system architecture.} \label{fig:systemarchitecture} \end{subfigure} \caption{Information on the superARTIS aircraft of the German Aerospace Center.} \label{fig:superartisplusarch} \end{figure} \subsection{Post-Mortem Analysis} After termination of a flight, the post-mortem analysis allows for assessing the performance of the system and identifying the causes of errors. The analysis relies on data recorded during the mission; a full record enables a perfect reconstruction of the execution from the perspective of the system. Resource restrictions, however, limit the amount of data, so full records are often not an option. Thus, data needs to be filtered and aggregated rigorously. A main task of the monitor is exactly this: refining input data by filtering and aggregation to obtain an accurate assessment of the system state. Based on this assessment, the monitor determines whether a property is violated. While this binary verdict is the major output of the monitor, the intermediate results provide valuable insight into the evolution of the system over time. Hence, logging this data can improve the post-mortem analysis and alleviate the need to filter and aggregate data in another component as well.
\section{Introduction} Cyber-physical systems (CPS) directly interact with the physical world, rendering them inherently safety-critical. Integrating a runtime monitor into the CPS greatly increases confidence in its safety. The monitor assesses the health status of the system based on available data sources such as sensors. It detects a deterioration of the system's health and alerts the system such that it can e.g.\@\xspace initiate mitigation procedures. In this paper, I will provide an overview of the development process of a runtime monitor for CPS based on recent work. For this, I will use the \rtlola~\cite{rtlolaarxiv,rtlolacavtoolpaper} monitoring framework. The process ranges from designing specifications to integrating the executable monitor. It starts by identifying relevant properties and translating them into a formal specification language. The resulting specification is type-checked and validated to increase confidence in its correctness. Afterwards, it is compiled into an executable artifact, either based on software or hardware. Lastly, the artifact is integrated into the full system. This step takes the existing system architecture of the CPS into account and enables the monitor to support a post-mortem analysis. The full process is illustrated in \Cref{fig:paperstructure}. The first step of the process concerns the specification. It captures a detailed analysis of the system behavior, which entails computationally challenging arithmetic. Yet, since the monitor for the specification will be realized as an embedded component, its resource consumption must be statically bounded. Thus, the specification language has to provide sufficient expressiveness while allowing the monitor to retain a predictable and low resource footprint. In particular, an ideal specification language provides formal guarantees on the runtime behavior of its monitors, such as worst-case execution time or memory consumption.
In general, however, expressiveness, formal guarantees, and predictably low resource consumption cannot be achieved at the same time. Desirable properties like ``every request must be granted within a second'' might come at the cost that the memory consumption of the monitor depends on the unpredictable input frequency of requests. Consequently, specification languages providing input-independent formal guarantees on the required memory must impose restrictions to prohibit such properties. These restrictions can be direct, i.e.\@\xspace, the syntax of the language renders the property inexpressible, or indirect, i.e.\@\xspace, the property can be expressed but falls into a language fragment unsuitable for monitoring. \rtlola falls into the former category. During the design phase of the CPS, the specifier defines properties spanning from validation of low-level input sensor readings to high-level mission-specific control decisions. The former are real-time critical, i.e.\@\xspace, they demand a timely response from the monitor, whereas the latter include long-term checks and statistical analyses~\cite{rtlolacavindustrial} where slight delays and mild inaccuracies are inconsequential. Just like code is not a perfect reflection of what the programmer had in mind, the specification might deviate from the specifier's intention. To reduce the number of undetected bugs, the specification needs to be validated. This increases confidence in it and --- by proxy --- in the monitor. The validation consists of two parts: type checks and validation based on log data. The former relies solely on the specification itself and checks for type errors or undefined behavior. The latter requires access to recorded or simulated traces of the system and interprets the specification over the given traces. The output of the monitor can then be compared against the expected result. After successfully validating the specification, a compiler for the specification language generates an executable artifact.
This artifact is either a hardware or a software solution, depending on the requirements of the system architecture. If, for example, the architecture does not allow for adding additional components, a software solution is preferable as it does not require dedicated hardware; the monitor can be part of the control computer. Hardware solutions, on the other hand, are more resource efficient and allow for parallelization at nearly zero cost. In any case, the compiler can inject additional annotations for static code-level verification~\cite{lolatorust} or traceability~\cite{janmaster} to further increase confidence in the correctness of the monitor. Finally, deploying the monitor into the CPS harbors additional pitfalls. As an external safety component, the monitor should not influence the regular operation of the system except upon the detection of a safety violation. As a result, the system architecture needs to enable non-intrusive data flow from the system to the monitor and intrusive data flow from the monitor to the controller. The controller then has to react to an alarm appropriately. Such a reaction can e.g.\@\xspace be a switch from the regular controller to a formally verified controller with significantly reduced complexity responsible for a graceful shutdown of the system~\cite{mtcps20}, as suggested in the Simplex Architecture~\cite{simplex}. After terminating a mission, the output of the monitor provides valuable data for the post-mortem analysis. Regular system logs might be insufficient as they do not contain all peripheral data due to resource constraints. The monitor, however, filters and aggregates the data specifically to assess the system's status w.r.t.\@\xspace safety, thus providing valuable feedback. \begin{figure*}[t] \include{figures/structure} \vspace{-.5cm} \caption{Illustration of the paper structure.
It is divided into three phases: specification, compilation, and integration.} \label{fig:paperstructure} \end{figure*} \section{Bibliographic Remarks}\label{sec:literature} Early work on runtime monitoring was mainly based on temporal logics~\cite{Drusinsky:2000:TRA:645880.672089,Lee99runtimeassurance, Finkbeiner+Sipma/01/Checking,ltl,Kupferman:2001:MCS:569028.569032,Havelund:2002:SMS:646486.694486}. Their notion of time was limited to discrete time, leading to the development of real-time logics like \stl~\cite{STL} and \mtl~\cite{mtl}. Not only do algorithms for runtime monitoring exist for these logics~\cite{RobustMonSTL,monitoringSTL,Basin:2015:MMF:2772377.2699444,aerial}, there is also work realizing them on an \fpga~\cite{stl2fpga}. The \textsc{R2U2}~\cite{rtutjournal,rtutrv} tool in particular implements \mtl monitors on an \fpga while allowing for future-time specifications. Further, there are approaches for generating verified monitors for such logics~\cite{metriccompiler,metriccompiler2}. Apart from these temporal logics, there are other specification languages specifically for CPS such as differential dynamic logic~\cite{ddl}. The ModelPlex~\cite{modelplex} framework translates such a specification into several verified components monitoring both the environment w.r.t.\@\xspace the assumed model and the controller decisions. Other approaches --- such as the one presented in this paper --- completely forgo logics. Similar to the compiler from \lola to Rust, there is a verified compiler for synchronous Lustre~\cite{lustrecompiler} programs to C code. Moreover, the \copilot~\cite{copilot,copilotembedded} toolchain is based on a functional, declarative stream language with real-time capabilities. \copilot enables the verification of generated monitors using the \textsc{cbmc} model checker~\cite{cbmc}. As opposed to the verification with \viper, their verification is limited to the absence of various arithmetic errors and does not cover functional correctness.
In terms of integration, both the \textsc{R2U2}~\cite{rtutjournal} and the \copilot~\cite{copilotembedded} tool were successfully integrated into an aircraft. \section*{Acknowledgements} This paper is based on a tutorial at the 20th International Conference on Runtime Verification. The work summarized in this paper is based on several earlier publications~\cite{rtlolaarxiv,rtlolacavindustrial,rtlolacavtoolpaper,lolatorust,fpgalola} and I am grateful to all my co-authors. I would especially like to thank Jan Baumeister and Bernd Finkbeiner for providing valuable feedback and comments. \bibliographystyle{splncs04} \section{Prelude: Requirements on the Monitor}\label{sec:requirements} \section{Specifications: From Sensors to Missions}\label{sec:specification} When designing specifications for CPS, the specifier has to keep in mind that not all properties are equal. They fall into a spectrum from low-level properties concerned with concrete sensor readings to high-level properties validating mission-specific criteria. Properties on the lowest end of the spectrum work on raw data points of single sensors. Most common are simple bounds checks (\textit{the altitude may not be negative}) or frequency checks (\textit{the barometer must provide between 9 and 11 readings per second}). Less low-level properties work on refined data points, e.g.\@\xspace to check whether several sensors contradict each other (\textit{the altitude measured by the sonic altimeter must not deviate more than $\epsilon$ from the altitude based on the air pressure}). Such a sensor cross-va\-li\-da\-tion requires refinement of raw values as they cannot be compared directly. While a barometer provides the air pressure, estimating the current altitude from it requires further information such as the pressure and temperature at sea level.
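The refinement mentioned above can be made concrete with a small sketch: estimating altitude from a pressure reading. The formula and constants below are the textbook barometric formula of the International Standard Atmosphere, not part of any \rtlola specification; they merely illustrate why a raw pressure value cannot be cross-validated against an altimeter without additional sea-level information.

```rust
// Sketch: altitude estimation from a barometer reading using the
// standard barometric formula (International Standard Atmosphere).
// The constants are textbook values, assumed for illustration.

/// Altitude in meters from pressure `p` (Pa), given the sea-level
/// pressure `p0` (Pa) and sea-level temperature `t0` (K).
fn barometric_altitude(p: f64, p0: f64, t0: f64) -> f64 {
    const L: f64 = 0.0065; // temperature lapse rate, K/m
    const G: f64 = 9.80665; // gravitational acceleration, m/s^2
    const R: f64 = 287.053; // specific gas constant of air, J/(kg K)
    let exponent = R * L / G;
    (t0 / L) * (1.0 - (p / p0).powf(exponent))
}

fn main() {
    // In the standard atmosphere, 89,875 Pa corresponds to roughly 1000 m.
    let h = barometric_altitude(89_875.0, 101_325.0, 288.15);
    println!("estimated altitude: {:.0} m", h);
}
```

With this refinement in place, the estimated altitude can be compared against the sonic altimeter as in the cross-validation property above.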
Similarly, validating the position provided by the \emph{global navigation satellite system} (GNSS) module against the position estimated by the \emph{inertial measurement unit} (IMU) requires double integration of the measured acceleration. On the highest end of the spectrum reside mission-level properties. When checking such properties, the source of information is mostly disregarded and the values are assumed to be correct. For example, consider an aircraft that traverses a set of dynamically received waypoints. Mission-level properties could demand that a waypoint is reached in time or that the traveled distance does not deviate by more than a certain factor from the actual distance between two points. \begin{figure*}[t] \include{figures/expressiveness} \vspace{-0.5cm} \caption{Illustration of monitor obligations for checking if a request (``r'') is granted (``g?'') within a second. While the \mtl interpretation is more precise, it requires statically unbounded memory. The \rtlola under-approximation requires constant memory.} \label{fig:costofexpressiveness} \end{figure*} Properties are usually expressed in natural language as above and then translated into a specification language. Consider the first property: \textit{the altitude may not be negative}. Evidently, the property harbors little challenge in terms of arithmetic. However, timeliness is critical. If the altimeter reports a negative altitude, clearly something is wrong and the system needs to be informed near-instantaneously. In \rtlola, the property translates to the following specification: \begin{lstlisting} input altitude: Float32 input orientation: Float32 trigger altitude < 0 "Altimeter reports negative altitude." trigger orientation > 2 * $\pi$ "Orientation exceeds 2$\color{redstrings}\pi$." \end{lstlisting} The first two lines declare input streams named \lstinline{altitude} and \lstinline{orientation}, both with type \lstinline{Float32}.
The remaining lines contain trigger conditions with messages to be sent to the system in case a condition becomes true. Whenever the monitor receives a new value from the altimeter or gyroscope, the respective condition is checked immediately. Note that \rtlola allows for \emph{asynchronous} input behavior, i.e.\@\xspace, one input stream can produce a new value while the other does not. Thus, when the gyroscope produces a value, the respective trigger condition is checked regardless of the altimeter. This timing dependency from inputs to expressions is part of \rtlola's type system. The type system is two-dimensional: every stream and expression has a \emph{value type} and a \emph{pacing type}. Value types are common in programming languages; they indicate the shape and interpretation of data. The input streams, for example, are of value type \lstinline{Float32}, so storing a single value requires 32 bits and the bits are interpreted as a floating point number. The pacing type, however, states \emph{when} expressions are evaluated and thus when streams produce new values. In the case of the trigger expressions, the pacing types are \emph{event-based}, i.e.\@\xspace, they are coupled to the reception of new values from the altimeter or gyroscope. The pacing type can also be \emph{periodic}, effectively decoupling the evaluation of expressions from input streams in terms of timing. As an example, consider the second low-level property: \textit{the barometer must provide between 9 and 11 readings per second}. An \rtlola specification for this property is: \begin{lstlisting} input pressure: Float32 output readings_per_sec @ 1Hz := pressure.aggregate(over: 1s, using: count) trigger readings_per_sec < 9 "Barometer produces too few readings." trigger readings_per_sec > 11 "Barometer produces too many readings."
\end{lstlisting} Here, \lstinline{readings_per_sec} is an output stream with a timing annotation \lstinline{@ 1Hz} prompting the monitor to evaluate the expression of the stream only once per second. Thus, the timing of the evaluation is decoupled from the reception of new input values. The expression itself is a sliding window aggregation, i.e.\@\xspace, whenever evaluated, the expression counts how many data points the barometer generated in the last second. If the count falls below 9 or exceeds 11, the monitor raises an alarm. While the logic behind an efficient sliding window implementation is rather complex and requires a great deal of bookkeeping, \rtlola provides a simple primitive for it. This alleviates the need for the specifier to manually take care of the implementation details. Note that the specification does not precisely represent the property. Assume the system alternates between receiving 3 readings in the first half of a second and 7 readings in the second half, followed by 7 readings in the first half and 3 in the second half of the next second. At every full second, the monitor counts 10 readings, yet the sliding second spanning the two adjacent halves with 7 readings each contains 14 readings --- unbeknownst to the monitor. In \rtlola, it is impossible to specify the property precisely as the language lacks the necessary expressiveness by design: sliding window expressions can only occur in streams with a timing annotation. This annotation renders the stream \emph{isochronous}, i.e.\@\xspace, the points in time at which its expression will be evaluated are determined statically. The reason behind this is that the original property lies in a category of properties that generally need a statically unbounded amount of memory to be monitored. To understand this, consider the property \textit{Every request must be granted within one second}. A sound monitor for the property needs to check, exactly one second after receiving a request, whether it was granted in the meantime. However, there is no static bound on the number of requests the monitor receives within this time frame.
Since it has to store their arrival times, the memory consumption might exceed any bound. The problem is illustrated in \Cref{fig:costofexpressiveness}. There are specification logics such as \emph{metric temporal logic} (\mtl)~\cite{mtl} in which the property can be expressed. In such a case, the memory consumption of the monitor is linear in the number of events it receives within the second. Since \rtlola only provides constant-memory monitors, it rejects specifications for such properties and instead enables constant-memory under-approximations. This design decision guarantees that the monitor cannot possibly run out of memory during the execution. \rtlola provides primitives for more abstract constraints such as sensor cross-validations as well: \begin{lstlisting} input velocity_1: Int64 input velocity_2: Int64 output deviation := abs(velocity_1 - velocity_2) output lasting_dev := deviation > 5 $\land$ deviation.offset(by: -1, default: 0) > 5 $\land$ deviation.offset(by: -2, default: 0) > 5 trigger lasting_dev "Lasting deviation in measured velocities." \end{lstlisting} The specification declares two input streams providing different readings for the velocity of the system, and two output streams, \lstinline{deviation} and \lstinline{lasting_dev}, which compute the absolute deviation of the readings and check whether the deviation exceeds a threshold three consecutive times. The first conjunct of the stream expression accesses the current, i.e.\@\xspace, the latest value of the \lstinline{deviation} stream, whereas the \lstinline{offset(by: -n, default: v)} function allows for accessing the $n$th-to-latest value of the stream for $n \in \mathds{N}$.\footnote{As a result, \rtlola does not allow for accessing future values.} This value does not exist at the beginning of the monitor execution, so the specifier has to supply a default value~$v$.
Here, the specification refers to the abstract notion of ``the last value'' rather than considering the real-time behavior, assuming that low-level validation already took place. Note that \lstinline{deviation} accesses both \lstinline{velocity} streams without supplying a default value. This indicates a \emph{synchronous} access and prompts the monitor to only evaluate \lstinline{deviation} when both inputs receive a new value. This is not necessarily the case since \rtlola considers inputs to be asynchronous. The pacing type of \lstinline{deviation} captures the information that the stream is only evaluated when the two inputs happen to arrive at the same time: it is event-based and the conjunction of both input streams. In contrast, a different definition of \lstinline{deviation} could look as follows: \begin{lstlisting} output deviation_disj @ velocity_1 $\lor$ velocity_2 := abs(velocity_1.hold(or: 0) - velocity_2.hold(or: 0)) \end{lstlisting} Here, the output stream has a disjunctive type, so when it is extended, at least one of the two inputs received a new value, but not necessarily both. In such a case, \rtlola forces the specifier to declare precisely how the monitor should handle potentially old values. The specifier can, as in the example of \lstinline{deviation_disj}, turn the synchronous accesses into sample-and-hold accesses. When evaluating the expression, the monitor will access the latest --- yet potentially old --- value of the input stream with a 0-order hold. If the specifier attempted to access either stream synchronously, \rtlola would reject the specification because it contains an internal contradiction. These kinds of type checks greatly increase confidence in the correctness of the specification as they point out imprecise and potentially flawed parts.
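The access primitives discussed above can be summarized in a short sketch over a finite history of stream values. The semantics below is a simplified illustration of offset and sample-and-hold accesses, not the \rtlola implementation:

```rust
// Simplified semantics of RTLola stream accesses over a finite
// history of values (latest value last). Illustrative sketch only.

/// `offset(by: -n, default: v)`: the n-th-to-latest value of the
/// stream, or the default if that value does not exist yet.
fn offset(history: &[i64], n: usize, default: i64) -> i64 {
    if history.len() > n {
        history[history.len() - 1 - n]
    } else {
        default
    }
}

/// `hold(or: v)`: 0-order hold, i.e., the latest --- possibly old ---
/// value, or the default if no value exists yet.
fn hold(history: &[i64], default: i64) -> i64 {
    history.last().copied().unwrap_or(default)
}

fn main() {
    let deviation = vec![6, 2, 7];
    assert_eq!(offset(&deviation, 0, 0), 7); // current value
    assert_eq!(offset(&deviation, 2, 0), 6); // 2nd-to-latest value
    assert_eq!(offset(&deviation, 3, 0), 0); // before the start: default
    let empty: Vec<i64> = Vec::new();
    assert_eq!(hold(&empty, 0), 0); // no value yet: default
    assert_eq!(hold(&deviation, 0), 7); // latest value
    println!("ok");
}
```

The prefix behavior of the monitor follows directly from these semantics: as long as the history is shorter than the offset, the default value is substituted.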
Lastly, consider two mission-level properties for an aircraft flying a dynamic list of waypoints: the monitor should issue a warning if the aircraft deviated from the straight-line distance by at least $\varepsilon$, and it should issue an error if such a deviation occurred more than 10 times. \begin{lstlisting} input wp: (Float64, Float64) input pos: (Float64, Float64) output start: (Float64, Float64) := (0, 0) output exp_dist @ wp := wp - wp.offset(by: -1, default: start) output dist_since_wp @ pos := pos - pos.offset(by: -1, default: start) + dist_since_wp.offset(by: -1, default: 0) output distance_deviation @ wp := abs(exp_dist.offset(by: -1, default: 0) - dist_since_wp.hold(or: 0)) trigger distance_deviation > $\varepsilon$ "Warn: Path deviation detected." output deviations := deviations.offset(by: -1, default: 0) + if distance_deviation > $\varepsilon$ then 1 else 0 trigger deviations > 10 "Err: Too many path deviations!" \end{lstlisting} The specification declares two clearly asynchronous input streams: the current waypoint \lstinline{wp} and the current position \lstinline{pos}. The \lstinline{start} stream is a constant stream only containing the initial position of the aircraft. \lstinline{exp_dist} contains the distance between the current and the last waypoint, whereas \lstinline{dist_since_wp} aggregates the distance the aircraft has traveled since reaching the last waypoint (or starting the mission). The deviation in distance is then the absolute difference between these two distances. Note that this value is only valid when the aircraft just reached a new waypoint, hence the \lstinline{@ wp} annotation. This prompts the monitor to only evaluate the stream when it receives a new waypoint. Lastly, the \lstinline{deviations} stream counts the number of times the deviation exceeded its threshold. The specification contains several pacing type annotations.
This, however, is only for illustration, as most of the time \rtlola can infer both types from the stream expression. Yet, the specifier always has the option to annotate types for clarity or if the timing of a stream should deviate from the standard behavior, e.g.\@\xspace for disjunctive event-based types. Note that this was only a brief overview of \rtlola. For more details on the theory, refer to~\cite{maxmaster}; for the implementation, refer to~\cite{rtlolacavtoolpaper}. \subsection{Specification Validation by Interpretation} \rtlola's type system already rules out several sources of incorrect monitor behavior. Yet, a validation of the specification is crucial to increase confidence in its \emph{correctness}. The validation requires access to records of previous runs of the system. These can be simulated, collected during test runs, or taken from logs of sufficiently similar systems. Just like when testing software, developers annotate the trace data with the points in time at which they expect the monitor to raise an alarm. Then, they execute a monitor for the given specification on the trace and compare the result with their annotations. Deviations mainly stem from either an error made when designing the specification, or a discrepancy between the mental images of different people regarding the correct interpretation of a property. A key point for the specification validation is that the process should incur as little cost as possible to enable rapid prototyping. Hence, interpreting the specification rather than compiling it is preferable --- especially when the target platform is hardware-based. After all, realizing a specification on an \fpga usually takes upwards of \SI{30}{\minute}~\cite{fpgalola}. While interpretation is considerably less performant than compiled solutions, the \rtlola interpreter manages to process a single event in \SI{1.5}{\micro\second}. This enables a reasonably fast validation of specifications even against large traces.
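The validation workflow described above can be sketched in a few lines: replay a recorded trace through an interpreted monitor and compare the produced alarms against the developer's annotations. The trigger below corresponds to the negative-altitude property from the first specification; the trace data and function names are made up for illustration:

```rust
// Sketch of validation by interpretation: replay a recorded trace
// and compare the monitor's alarms with annotated expectations.
// The trigger (altitude < 0) mirrors the first specification; the
// trace and helper names are illustrative assumptions.

/// Interpret the trigger over a trace, returning the indices at
/// which an alarm is raised.
fn replay(trace: &[f64]) -> Vec<usize> {
    trace
        .iter()
        .enumerate()
        .filter(|(_, &alt)| alt < 0.0)
        .map(|(i, _)| i)
        .collect()
}

/// Compare the monitor output with the annotated expectation.
fn validate(trace: &[f64], expected_alarms: &[usize]) -> bool {
    replay(trace) == expected_alarms
}

fn main() {
    let trace = [12.0, 3.5, -0.2, 4.1];
    // The developer expects exactly one alarm, at trace index 2.
    assert!(validate(&trace, &[2]));
    println!("specification behaves as annotated");
}
```

A mismatch between `replay` and the annotations points either to a flaw in the specification or to a misunderstanding of the property, exactly the deviations discussed above.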
\subsection{Static Analysis for RTLola Specifications}\label{sec:static:analyses}
After type checking the specification and validating its correctness based on test traces, \rtlola provides static checks to analyze it further. For this, \rtlola generates a dependency graph where each stream is a node and each stream access is an edge. This information suffices to perform a memory analysis and a running time analysis. The analysis identifies the resource consumption --- both spatial and temporal --- of each stream, granting fine-grained control to the specifier.
\paragraph{Memory} For the memory consumption of stream $s$, the analysis determines the greatest offset $n^\ast_s$ with which any other stream accesses $s$. Evidently, the monitor only has to retain $n^\ast_s$ values of $s$ to successfully resolve all accesses. Moreover, note that all types in \rtlola have a fixed size. Let $T_s$ be the type of $s$ with bit-size $\card{T_s}$. Then, the memory consumption of $s$ induced by accesses through other streams amounts to $n^\ast_s \cdot \card{T_s}$. Sliding window expressions within the stream expression of $s$ incur additional memory overhead. Suppose $w_1,\dots,w_k$ are the windows occurring in $s$ where for $w_i = (\gamma_i, d_i)$, $\gamma_i$ is the aggregation function and $d_i$ is the length of the window. If $k > 0$, \rtlola requires $s$ to be periodic with frequency $\pi_s$. The memory consumption of $s$ induced by sliding windows is determined by the number of \emph{panes} required. Here, a pane represents the time interval between two consecutive evaluations of the window. Each pane stores a single value which contains the aggregated information of all values that arrived in the respective time interval. This implementation of sliding windows is inspired by Li~\etal~\cite{nopane} and only works for list homomorphisms~\cite{meertens}.
A window thus has $d_i\cdot\pi_s$ panes, which has to be multiplied by the bit-size of the value stored within a pane: $\card{T_{\gamma_i}}$. This value is statically determined and depends on the aggregation function: for summation, it is merely the sum of the values; for the average, it is the intermediate average plus the number of values that occurred within the pane. The overall memory consumption of $s$ is therefore
\[
\mu(s) = n^\ast_s \card{T_s} + \mathds{1}_{k>0} \sum_{i=1}^k d_i \pi_s \card{T_{\gamma_i}}.
\]
Here, $\mathds{1}_\varphi$ is the indicator function evaluating to 1 if $\varphi$ is true and 0 otherwise.
\paragraph{Running Time} The running time cannot be fully determined based on the specification alone, as it depends on the hardware of the CPS. For this reason, \rtlola provides a preliminary analysis that can be concretized once the concrete target platform is known. The preliminary analysis computes \begin{enumerate*}[label=\alph*)] \item the complexity of each evaluation cycle given a certain event or point in time, and \item the parallelizability of the specification. \end{enumerate*} For the former metric, note that the monitor starts evaluation cycles either when it receives an event, or at predetermined points in time (\emph{deadlines}). An event always updates a set of input streams and a statically determined set of output streams. Recall the mission-level specification computing the deviation from the flight path including the \lstinline{deviation_disj} stream. The specification declares two input streams, thus allowing for three possible non-empty events. An event covering either \lstinline{velocity} stream but not the other only triggers an evaluation of \lstinline{deviation_disj}. Only if the event covers both inputs are \lstinline{deviation} and the trigger evaluated as well. Consider a specification containing periodic streams and suppose the monitor has a deadline at time $t$.
It then evaluates all periodic streams due at $t$, i.e.\@\xspace, all streams with frequency $\pi$ where $\pi \cdot t$ is a natural number. Thus, the set of streams affected by an evaluation cycle is pre-determined. The next step in the analysis is concerned with \emph{evaluation layers}. They are closely related to the parallelizability of the monitor as they indicate how many stream evaluations can take place at once. The analysis yields a partition of the set of streams where all streams within an element of the partition are \emph{independent}, enabling a parallel evaluation. The (in-)dependence relation is based on the dependency graph. If a stream accesses another \emph{synchronously}, i.e.\@\xspace, without an offset, then the target stream needs to be evaluated before the accessing stream. This definition entails an \emph{evaluation order} on output streams. The aforementioned partition is then the coarsest partition such that any two streams in the same set are incomparable with respect to the transitive closure of the evaluation order. Each element of the partition is an evaluation layer. By construction, streams within the same layer can be evaluated in an arbitrary order --- in particular also in parallel. The order in which layers are evaluated, however, still needs to follow the evaluation order. In the example specification above, the partition is \lstinline[language=]!{{wp, pos, start} < {exp_dist, dist_since_wp} < {distance_deviation} < {deviations, trigger_warn} < {trigger_err}}!. The evaluation layer analysis immediately provides information regarding the parallelizability of the monitor. The running time analysis takes into account the number of evaluations, how many streams are affected by an evaluation cycle, and how complex their expressions are. Intuitively, if an event or deadline affects streams in a multitude of layers, then the evaluation is slow, as computations depend on each other and thus require a sequential order.
Conversely, if an event affects only a few streams, all within the first layer, the evaluations are independent and thus highly parallelizable. As a result, the running time of the monitor is low. Note, however, that for software monitors the degree to which computations should run in parallel requires careful consideration, since spawning threads incurs a constant overhead. For hardware monitors, this overhead does not apply.
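The layer computation described above can be sketched as a longest-path layering over the dependency graph; the edge encoding below (mapping each stream to the streams it must be evaluated after) and the stream names are illustrative, not \rtlola's internal representation:

```python
def evaluation_layers(streams, deps):
    """Partition streams into evaluation layers. deps maps each stream
    to the streams it depends on via the evaluation order; a stream's
    layer is one past the deepest layer among its dependencies, so all
    streams within the same layer are independent and may be evaluated
    in parallel."""
    layer = {}

    def assign(s, seen=()):
        if s in layer:
            return layer[s]
        if s in seen:
            raise ValueError("cyclic evaluation order at " + s)
        ds = deps.get(s, [])
        # Input streams and constants have no dependencies: layer 0.
        layer[s] = 1 + max((assign(d, seen + (s,)) for d in ds), default=-1)
        return layer[s]

    for s in streams:
        assign(s)
    groups = {}
    for s, l in layer.items():
        groups.setdefault(l, []).append(s)
    return [sorted(groups[l]) for l in sorted(groups)]
```

With a dependency map matching the waypoint example, this reproduces the partition stated above, with \lstinline{wp}, \lstinline{pos}, and \lstinline{start} in the first layer and the error trigger in the last.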
\section{Introduction} \IEEEPARstart{M}{OBILE} robots have been widely applied in various scenarios, such as family service, logistics, search-and-rescue, and so on. Motion planning is one of the key technologies for mobile robots to achieve full autonomy in these scenarios \cite{latombe1991robot, lavalle2006planning, choset2005principles}, and it has been comprehensively investigated in the literature \cite{gonzalez2015review}. However, it is still challenging to design a motion planning approach that can simultaneously ensure safety, smoothness, flexibility, and efficiency in large-scale, unknown, or partially unknown complex environments \cite{alterovitz2016robot}. For computational efficiency, the commonly adopted motion planning framework is organized in a two-layer architecture, namely, global path planning and local planning \cite{lunenburg2016motion}. The global path planner is employed to provide a rough route from the current robot pose to the goal pose \cite{liu2015robotic,chi2018risk,wang2020eb, wang2020neural}, followed by a local planner that generates safe and smooth motions according to real-time sensor data \cite{zhou2020trajectory,li2015real, zhou2020autonomous}. In \cite{zhang2018multilevel}, Zhang \emph{et al.} present a multilevel human-like motion planning framework, wherein global-level path planning, sensor-level path planning, and action-level trajectory planning are designed to correspond to the functions of the human brain, eyes, and legs, respectively. In the work \cite{marder2010office}, Marder-Eppstein \emph{et al.} propose a robust navigation system for mobile robots in office environments by combining a global planner based on Dijkstra's algorithm \cite{dijkstra1959note} with a local planner based on the well-known dynamic window approach (DWA) \cite{fox1997dynamic}.
In \cite{wang2017autonomous}, Wang \emph{et al.} design a motion planning framework for autonomous mobile robot navigation in uneven and unstructured indoor environments, in which an improved rapidly-exploring random tree (RRT)-based global planner and the elastic bands (EB) local planner \cite{quinlan1993elastic} are employed. In this work, an efficient motion planning framework is proposed. Different from the classical two-layer motion planning framework, local motion planning is decoupled into local path optimization and time-optimal velocity planning, and the two-layer motion planning framework is accordingly extended to a three-layer framework. Simulation and experimental results demonstrate that the proposed motion planning framework achieves better computational efficiency in both global path planning and local path optimization and better motion efficiency in velocity planning. Therefore, we name the proposed motion planning framework E\ensuremath{^3}MoP. Here, the number 3 indicates not only that our approach is a three-layer framework but also that it is efficient in all three stages. \subsection{Global Planning} Sampling-based planning and grid-based planning are two popular global path planning approaches. Typical sampling-based planning algorithms include the probabilistic roadmap (PRM) \cite{kavraki1996probabilistic} and RRT \cite{lavalle2001randomized}. These algorithms represent the configuration space (C-space) with a roadmap of randomly sampled configurations, which has considerable advantages in computational efficiency, especially in high-dimensional planning problems. However, sampling-based planning is limited in terms of completeness and optimality; even excellent variants such as RRT* can only guarantee asymptotic optimality \cite{karaman2011sampling}. In this work, we focus on grid-based path planning approaches. Grid-based planning overlays a grid on the C-space and assumes each configuration is identified with a grid-cell center.
Then search algorithms such as Dijkstra's algorithm \cite{dijkstra1959note} and A* \cite{hart1968formal} are used to find a path from the start to the goal, which can be complete and optimal in the discrete search space. However, original grid-based planning algorithms tend to produce paths that are non-smooth and do not generally satisfy the kinodynamic constraints of the robot. In \cite{pivtoraiko2005generating}, Pivtoraiko and Kelly propose the state lattice approach for graph construction, where the connectivity between two configurations in the graph is built from motion primitives which fulfill the kinodynamic constraints of the robot. In contrast to 4-connected or 8-connected grid-based representations, the feasibility of connections in the lattice guarantees that any path found using a lattice will also be feasible. This makes it very suitable for planning for non-holonomic robotic systems. Furthermore, to improve the computational efficiency of graph search, informative heuristics are employed to guide the search towards the most promising areas of the search space \cite{hart1968formal}. The Euclidean distance function is perhaps the most well-known heuristic. However, it has no knowledge of the environment and is thus a poor heuristic in environments with dense obstacles, since it seriously underestimates the actual path costs \cite{knepper2006high}. In \cite{likhachev2009planning}, Likhachev and Ferguson propose a 2-D grid-based heuristic $ h_{2D} $, which is derived by Dijkstra's algorithm incorporating the up-to-date environmental information. $ h_{2D} $ computes the costs of the shortest paths from the goal cell to other cells in the 2-D grid and captures the geometry of obstacles in the environment.
However, in \cite{likhachev2009planning} $ h_{2D} $ is only employed to estimate the cost of the shortest path from a given search state to the goal state, while the information about the graph search direction behind the heuristic has not been fully exploited. \subsection{Local Planning} Sampling-based local planning and optimization-based local planning are two popular local planning approaches in the field of mobile robots. A sampling-based local planner usually generates a set of candidate trajectories/paths first and then selects the best one, which minimizes a carefully designed evaluation function. The evaluation function usually includes a variety of factors, for instance, the distance to obstacles and the deviation from the global path. Some sampling-based local planners work in control space, such as the popular DWA \cite{fox1997dynamic}. Compared with sampling in control space, sampling in state space is superior in terms of sampling efficiency and robustness to initial conditions \cite{howard2014model}. In the work \cite{howard2008state}, Howard \emph{et al.} propose a state space sampling approach with a vehicle model-based trajectory generation approach \cite{howard2007optimal}, which has been successfully applied in the DARPA Urban Challenge \cite{ferguson2008motion}. However, the environmental constraints are not taken into account during trajectory generation in \cite{howard2008state}, and considerable time is wasted in generating infeasible trajectories \cite{chen2014quartic}. In \cite{zhang2018multilevel}, Zhang \emph{et al.} propose a state space sampling-based local planner by generating quintic B\'{e}zier curve-based paths with different initial curvatures offline. These local paths are saved in a lookup table and retrieved in the local planning stage according to the curvature condition, thus saving a significant amount of time in online path generation.
However, the endpoints of the offline generated paths are fixed, which limits the flexibility of local planning. Moreover, in an environment with dense obstacles, sampling-based local planners may even fail to find a solution. Optimization-based local planning formulates the local planning problem as a non-linear optimization problem, which takes the global path in the local window as input and deforms the local path until the optimization problem converges. In \cite{dolgov2010path}, Dolgov \emph{et al.} propose a conjugate gradient (CG)-based path optimization approach for autonomous vehicle free-space planning, wherein the smoothness and safety of the path and the curvature constraint of the vehicle are all considered. In the work \cite{rosmann2012trajectory,rosmann2013efficient}, R{\"o}smann \emph{et al.} propose the well-known timed elastic bands (TEB) local planner. Different from the classical EB \cite{quinlan1993elastic}, TEB explicitly considers the temporal aspects of motions; thus the initial path generated by the global path planner can be optimized with respect to minimizing the trajectory execution time, and kinodynamic constraints of robots such as maximum velocities and accelerations can be incorporated into the optimization objective as soft constraints. Inspired by the back-end of simultaneous localization and mapping (SLAM), TEB formulates the optimization problem in the hyper-graph framework and employs the graph optimization solver g2o \cite{kummerle2011g} to solve the problem. However, the essential \emph{banded} system structure behind the optimization problem has not been fully discussed and exploited in TEB. Furthermore, although TEB considers the velocity and acceleration constraints, it cannot guarantee that these kinodynamic constraints are strictly satisfied in the soft constraint framework. In addition, too many constraints may lead to mutual compromises.
For example, keeping a minimum distance to obstacles may conflict with acquiring a minimum-time trajectory. Therefore, the final optimized trajectory may not achieve good results in both motion efficiency and safety. \subsection{Contributions} Motivated by the challenges of motion planning problems of mobile robots and the aforementioned limitations of existing approaches, a three-layer motion planning framework called E\ensuremath{^3}MoP is proposed in this paper, including global path planning, local path optimization, and time-optimal velocity planning. Specifically, we propose new approaches in the global planning and local path optimization modules: \subsubsection{Global Planning} The A* search algorithm combined with motion primitives is adopted in the stage of global path planning. Inspired by the environment-constrained heuristic $ h_{2D} $ presented in \cite{likhachev2009planning}, a novel heuristic-guided pruning strategy of motion primitives is proposed to further improve the computational efficiency of graph search. Given a set of motion primitives as the action set, the branching factor, i.e., the average number of successors per state, is fixed; that is, all the motion primitives are involved during every expanding process in the A* search. Different from using a complete set of motion primitives without pruning, in this work the search direction information behind $ h_{2D} $ is exploited to provide a one-step forward prediction for the A* search, and motion primitives pruning is then conducted so that only a subset of the motion primitives is involved in each expanding process. Therefore, the branching factor of graph search is decreased, and the computational efficiency is significantly improved. \subsubsection{Local Planning} Local motion planning problems are addressed by a path/velocity decoupled planning framework.
A soft-constrained multi-objective local path optimization approach is newly proposed, wherein both safety and smoothness are considered. Furthermore, we notice that each sub-objective only depends on a few local consecutive variables; thus the partial derivatives with respect to the irrelevant variables in the Jacobian are all zero, and the Hessian matrix of the whole optimization problem is \emph{sparse-banded}. Based on this property, the optimization problem is efficiently solved through the Levenberg–Marquardt (LM) algorithm combined with the forward elimination and back substitution algorithm, and high real-time optimization performance can be achieved. After path optimization, cubic spline interpolation is employed to further smooth the local path. Finally, a numerical integration-based velocity planning algorithm described in \cite{zhang2018multilevel} is utilized to generate a feasible linear velocity profile along the smoothed local path under the velocity and acceleration constraints. To summarize, the main contributions of this work are as follows: \begin{enumerate}[\hspace{1em}1)] \item A novel heuristic-guided pruning strategy of motion primitives is proposed, which is fully integrated into the state lattice-based global path planner to further improve the computational efficiency of graph search. \item A new soft-constrained local path optimization approach is proposed. The sparse-banded system structure of the underlying optimization problem is fully exploited to efficiently solve the problem, which converges quickly to generate a safe and smooth local path. The smoothness of the local path benefits the subsequent time-optimal velocity planning. \item Autonomous navigation is realized in large-scale, partially unknown complex environments. Extensive simulation and experimental evaluations are presented to verify the safety, smoothness, flexibility, and efficiency of the proposed approach.
\end{enumerate} The remainder of the paper is organized as follows. Section II introduces the system framework of autonomous navigation. The pruning strategy of motion primitives and the local path optimization approach are detailed in Sections III and IV respectively. Section V provides some implementation details. The results of simulations and experiments are presented and discussed in Sections VI and VII respectively. Finally, this paper is concluded in Section VIII. \section{System framework} In this work, we follow the ``see-think-act'' pipeline and design an autonomous navigation software system consisting of three primary modules, namely, perception, motion planning, and control, as illustrated in Fig. \ref{fig:systemframework}. In particular, the motion planning module of E\ensuremath{^3}MoP is detailed in Fig. \ref{fig:motionplanning}. All the modules run in parallel and communicate with each other via messages based on the Robot Operating System (ROS) \cite{quigley2009ros}. Next, we will briefly introduce some details of localization, mapping, and motion planning. \begin{figure}[t] \centering \includegraphics[scale=0.3]{fig/systemframework.pdf} \caption{Software system architecture designed for autonomous navigation.} \label{fig:systemframework} \end{figure} \subsection{Localization} Perception is a fundamental module for autonomous navigation \cite{sun2020plane,cheng2020improving}. In this work, a feature-based localization approach is employed. When a new laser scan is received, an efficient and robust 2-D line segment extraction algorithm described in \cite{gao2018line} is utilized to extract line segment features. Then, the endpoints of these line segment features are matched with a prior feature map by the Random Sample Consensus (RANSAC) algorithm \cite{fischler1981random}. If the number of the matched pairs is greater than a preset threshold, the robot pose is obtained by computing the transformation between the matched point features. 
Otherwise, dead reckoning is temporarily employed. \emph{It should be noted that the proposed navigation system is highly modular and the localization module can be replaced by other alternatives.} \subsection{Mapping} In this work, the environment is represented in the form of the Euclidean Distance Grid (EDG), wherein each cell stores the distance to the closest obstacle in the grid. Cells occupied by obstacles have a zero value. Such a representation is convenient and efficient for evaluating whether a configuration is in collision with obstacles or not. For example, a configuration can be judged collision-free if the minimum value of the cells covered by the robot is larger than the circumscribed radius of the robot footprint. With the help of the EDG, the considerable computing time required to employ the geometric footprint of the robot directly for collision detection can be saved.
\begin{figure}[t] \centering \includegraphics[scale=0.33]{fig/motionplanningflowchart.pdf} \caption{Flow chart of the three-layer motion planning framework E\ensuremath{^3}MoP.} \label{fig:motionplanning} \end{figure}
\subsection{Motion Planning with E\ensuremath{^3}MoP} Motion planning, which plays an essential role in generating safe, smooth, efficient, and flexible motions for mobile robots, is the main focus of this work. Considering the challenges posed by large-scale, partially unknown complex environments, an efficient motion planning approach called E\ensuremath{^3}MoP is newly proposed. As illustrated in Fig. \ref{fig:motionplanning}, a three-layer motion planning framework is carefully designed, which consists of global path planning, local path optimization, and velocity planning. The global path planner is employed to provide a rough route from the current robot pose to the goal pose, and the local path optimization combined with time-optimal velocity planning is used to generate safe, smooth, and efficient motion commands according to real-time sensor data.
Different from the two-layer motion planning framework, local motion planning is decoupled into local path optimization and velocity planning in this work. Compared with path/velocity coupled planning approaches such as TEB, the adopted path/velocity decoupled framework can achieve better motion efficiency and computational efficiency while ensuring smoothness. The reason is twofold: 1) Decoupled approaches divide the complex motion planning problem into two subproblems. Compared with the original problem, each subproblem has a lower dimension, and the computational efficiency of solving the two subproblems separately is higher than that of directly solving the original complex problem; 2) Too many constraints need to be considered when jointly planning path and velocity, which may lead to mutual compromises. Based on the above considerations, the three-layer motion planning framework is designed to consider different constraints and objectives in different layers to avoid mutual conflicts.
\begin{figure}[t] \centering \includegraphics[scale=0.11]{fig/heuristic2d.pdf} \caption{An illustration of the simplified 2-D search in a 16-connected grid. Obstacles are shown in black, and the gray cells denote the dangerous areas around obstacles. An environment-constrained heuristic path is represented by a red polyline, which provides significant guidance for the original 3-D search in complex environments.} \label{fig:heuristic2d} \end{figure}
\section{Heuristic-guided pruning strategy of motion primitives} To compute a kinematically feasible collision-free global path first, the state lattice-based path planner is employed in this work. A typical state lattice-based path planner consists of graph construction and graph search \cite{likhachev2009planning}. As for graph search, A* is employed in this work. As for graph construction, the edge between two nodes in the graph is built from motion primitives which fulfill the kinematic constraints of the robot.
In this section, a novel pruning strategy of motion primitives is proposed to further improve the computational efficiency of the state lattice-based path planner. \subsection{Problem Formulation} In this work, we adopt a three-dimensional state representation $ \left( {x,y,\theta } \right) $, where $ \left( {x,y} \right) $ denotes the position of the robot in the world, $ \theta $ represents the heading of the robot and is normalized to $ \left(-\pi, \pi \right] $. For the sake of computational efficiency, the velocities of the robot are not taken into account in the state representation in the global path planning stage. Instead, we temporarily assume that the robot travels at constant linear and angular velocities. Furthermore, in order to obtain a kinematically feasible path, the connectivity between two states is built from motion primitives which fulfill the kinematic constraints of the robot. A motion primitive $ \gamma\left(s, s'\right) $ consists of a sequence of internal robot poses when moving from state $ s $ to state $ s' $. In this work, we follow the popular search-based planning library (SBPL)\footnote{\url{http://wiki.ros.org/sbpl}} to design motion primitives offline, where the trajectory generation algorithm described in \cite{howard2007optimal} is employed with the unicycle model to generate motion primitives for differential-drive robots. If a motion primitive $ \gamma\left(s, s'\right) $ is collision-free, the cost $ g\left( \gamma\left(s, s'\right) \right) $ of this motion primitive is defined as the travel time spent on it. Otherwise, $ g\left( \gamma\left(s, s'\right) \right) $ is set to infinity. Based on the above preliminaries, the path planning problem is defined as follows. The input of the path planner is an up-to-date EDG, the kinematically feasible motion primitives, the current robot state $ {s_{start}} $ and a goal state $ {s_{goal}} $. 
The output is a path that is collision-free and satisfies the kinematic constraints of the robot. Meanwhile, the generated path is expected to be optimal or sub-optimal with respect to the travel time in the state space. \subsection{Environment-Constrained 2-D Heuristic} To cope with complex environmental constraints, it is common to solve a simplified search problem online and use its result to guide the original, more complex search. In this work, the environment-constrained 2-D heuristic $ h_{2D} $ presented in \cite{likhachev2009planning} is used to guide the A* search. Given the up-to-date environmental information, a 2-D version ($ \left(x, y\right) $) of the original path planning problem is solved by Dijkstra's algorithm. Namely, the original 3-D search problem is reduced to a 2-D search in an 8-connected or 16-connected grid, and the non-holonomic constraints of the robot are ignored. Such a simplified search procedure computes the costs of the shortest paths from the goal cell to other cells in the 2-D grid and stores them as a lookup table. Furthermore, to make the heuristic more informative, the costs of those cells whose distances to obstacles are less than the inscribed radius of the robot footprint are set to infinity. Intuitively, $ h_{2D} $ captures the geometry of obstacles in the environment and guides the more expensive 3-D search away from those areas with dense obstacles. Therefore, $ h_{2D} $ needs to be updated whenever the environment changes. An environment-constrained heuristic path is illustrated in Fig. \ref{fig:heuristic2d}. \subsection{Motion Primitives Pruning} \emph{Fully investigating $ h_{2D} $ and designing a pruning strategy of motion primitives to further improve the computational efficiency of graph search is the main novelty of this section.} In \cite{likhachev2009planning}, $ h_{2D} $ is only used to estimate the cost of the shortest path from a given search state to the goal state.
In this work, $ h_{2D} $ is further exploited to provide a one-step forward prediction of the search direction, and only a subset of the motion primitives is involved in each expanding process. Therefore, the branching factor of graph search is decreased, bringing significant benefits in computational efficiency and memory consumption.
\begin{figure*}[t] \subfigure[]{\includegraphics[scale=0.23]{fig/pruning1.pdf}} \centering \subfigure[]{\includegraphics[scale=0.23]{fig/pruning2.pdf}} \centering \hspace{0.4cm} \subfigure[]{\includegraphics[scale=0.23]{fig/pruning3.pdf}} \caption{Illustration of the pruning strategy of motion primitives. Obstacles are shown in black. The red polyline denotes the environment-constrained heuristic path, and the light gray curves represent the designed motion primitives. (a) Orientation for both the 3-D search and the 2-D search is discretized into 16 angles. (b) Planning with the standard state lattice-based path planner. (c) Planning with the pruning strategy of motion primitives. It should be noted that the start and end points of motion primitives are designed to land on the cell center; to make the illustration clear, an appropriate offset is applied.} \label{fig:pruning} \end{figure*}
First of all, it should be noted that the 2-D search is performed in a backward manner, i.e., the start of the 2-D search is the corresponding goal cell of the 3-D search. As illustrated in Fig. \ref{fig:pruning}(a), the orientation space is discretized into 16 angles, and motion primitives from each angle are designed. According to the angular resolution of the 3-D search, the 2-D search is performed correspondingly in a 16-connected grid to construct $ h_{2D} $. The procedure of the proposed pruning strategy of motion primitives is as follows: \begin{enumerate}[\hspace{1em}1)] \item For every cell $ c $ in the 2-D grid except the goal cell, the shortest 2-D path from the goal cell to $ c $ is computed by Dijkstra's algorithm.
Furthermore, the predecessor $ c' $ of $ c $ in the 2-D heuristic path is retrieved, and the angle $ \alpha $ between the positive $ x $-axis and the vector from $ c $ to $ c' $ is computed and recorded in the cell $ c $. \item During the 3-D search stage, for the currently expanded 3-D $ \left(x, y, \theta\right) $ search state $ s $, each of its successors $ s' $ is retrieved according to the designed motion primitives. The angle $ \beta $ between the positive $ x $-axis and the vector from $ s $ to $ s' $ is computed, as shown in Fig. \ref{fig:pruning}(b). Furthermore, the corresponding 2-D cell $ c $ of $ s $ in the 2-D grid is derived, and the angle $ \alpha $ stored in $ c $ is retrieved. If the deviation between $ \left(\alpha - \theta\right) $ and $ \left(\beta - \theta\right) $ is greater than a threshold $ \varepsilon $, the vector from $ s $ to $ s' $ is considered to point in an unpromising search direction, and the corresponding motion primitive is not involved in the actual expanding process. According to the angular resolution of the 3-D search, $ \varepsilon $ is set to $ \frac{\pi }{4} $ in this work. \item In the end, only a subset of the designed motion primitives is involved in each expanding process, as shown in Fig. \ref{fig:pruning}(c). \end{enumerate} In the above process, all angles are normalized to $ \left( { - \pi ,\pi } \right] $. The positive sign of an angle corresponds to the counterclockwise direction. It should be noted that three basic motion primitives, namely, ``taking a step forward'', ``turning in place left'', and ``turning in place right'', are always involved in each expanding process regardless of the pruning strategy. The reason is that at least one feasible path can be obtained by combining only these three basic actions if a physically feasible solution indeed exists in the discrete search space.
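The angular test in step 2 can be sketched as follows; the helper names are illustrative, and we use the fact that the deviation between $ \left(\alpha - \theta\right) $ and $ \left(\beta - \theta\right) $ equals the normalized difference $ \alpha - \beta $, so the heading $ \theta $ cancels:

```python
import math

def normalize(angle):
    """Normalize an angle to the interval (-pi, pi]."""
    a = math.fmod(angle, 2 * math.pi)
    if a <= -math.pi:
        a += 2 * math.pi
    elif a > math.pi:
        a -= 2 * math.pi
    return a

def keep_primitive(alpha, beta, eps=math.pi / 4):
    """Keep a motion primitive only if the direction beta toward its
    successor deviates from the heuristic direction alpha stored in the
    corresponding 2-D cell by at most eps."""
    return abs(normalize(alpha - beta)) <= eps
```

Primitives failing this test are skipped during expansion, while the three basic primitives are always retained as described above.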
Therefore, the resolution-completeness of the newly proposed path planner is not lost by extending the state lattice-based path planner with the pruning strategy. Intuitively, the 2-D cell $ c $ and its predecessor $ c' $ in the environment-constrained heuristic path provide a one-step forward prediction for the 3-D search direction. Based on the pruning strategy, the branching factor of graph search is decreased dramatically and the computational efficiency is significantly improved. Theoretically, we cannot guarantee the optimality of the proposed path planner with motion primitive pruning. However, the practical performance is satisfactory, which will be demonstrated through extensive simulations and experiments in Sections VI and VII. \section{Soft-constrained local path optimization with sparse-banded system structure} The path generated by the global path planner is kinematically feasible, but it is still piecewise linear and not suitable for velocity planning. In addition, the global path planner updates at a relatively low frequency. Therefore, the safety of the path needs to be further improved using real-time sensor data. In this section, a new soft-constrained local path optimization approach is proposed to deform an initial path generated by the global path planner. The sparse-banded system structure of the underlying optimization problem is fully exploited, and the LM algorithm, combined with the forward elimination and back substitution algorithm, is utilized to efficiently solve the problem.
\subsection{Problem Formulation} Given the world coordinates of $ N $ path vertices $ \mathbf{x}_i = {\left( {x_i}, {y_i} \right)}^{\mathrm{T}}, 1 \le i \le N $, a multi-objective path optimization formulation is defined as \begin{equation} \begin{aligned} f\left( {\mathbf{x}} \right) &= {\omega _s}\sum\limits_{i = 2}^{N - 1} {{{\left( {{\mathbf{\Delta}} {{\mathbf{x}}_{i + 1}} - {\mathbf{\Delta}} {{\mathbf{x}}_i}} \right)}^{\mathrm{T}}}\left( {{\mathbf{\Delta}} {{\mathbf{x}}_{i + 1}} - {\mathbf{\Delta}} {{\mathbf{x}}_i}} \right)} \\ &+ {\omega _o}\sum\limits_{i = 1}^N {f_o^2\left( {{{\mathbf{x}}_i}} \right)}, \end{aligned} \label{eq1} \end{equation} and the optimal parameter vector $ {{\mathbf{x}}^ * } $ is obtained by solving the non-linear least squares problem \begin{equation} {{\mathbf{x}}^ * } = \mathop {\arg \min }\limits_{\mathbf{x}} f\left( {\mathbf{x}} \right). \label{eq2} \end{equation} Here, $ {\mathbf{x}} = {[{\mathbf{x}}_1^{\mathrm{T}}\;{\mathbf{x}}_2^{\mathrm{T}}\; \ldots \;{\mathbf{x}}_N^{\mathrm{T}}]^{\mathrm{T}}} $ is a $ 2N $-dimensional parameter vector, $ {\mathbf{\Delta}} {{\mathbf{x}}_i} = {{\mathbf{x}}_i} - {{\mathbf{x}}_{i - 1}},2 \le i \le N $ denotes the displacement vector at the vertex $ {{\mathbf{x}}_i} $, and $ {\omega _s} $ and $ {\omega _o} $ are the weights of the cost terms. $ {f_o}\left( {{{\mathbf{x}}_i}} \right) $ is a continuous, piecewise differentiable cost function with $ {d_s} $ specifying the minimum safety distance to obstacles: \begin{equation} {f_o}\left( {{{\mathbf{x}}_i}} \right) = \left\{ \begin{aligned} &{d_s} - \tau\left( {{{\mathbf{x}}_i}} \right) &\tau\left( {{{\mathbf{x}}_i}} \right) < {d_s}\\ &0 &\tau\left( {{{\mathbf{x}}_i}} \right) \ge {d_s} \end{aligned} \right., \end{equation} wherein $ \tau\left( {{{\mathbf{x}}_i}} \right) $ represents the Euclidean distance between the path vertex $ {{{\mathbf{x}}_i}} $ and the closest obstacle.
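For concreteness, the objective of Eq. \eqref{eq1} can be sketched in a few lines (a Python illustration with our own names; the default weights and safety distance mirror the values reported in the implementation section, and $ \tau $ is passed in as a callable):

```python
import numpy as np

def f_obstacle(tau, d_s=0.5):
    """Piecewise cost f_o: penalize clearance below the safety distance d_s."""
    return max(d_s - tau, 0.0)

def path_cost(x, tau, w_s=1.0, w_o=10.0, d_s=0.5):
    """Multi-objective cost of Eq. (1): smoothness plus safety.
    x   -- (N, 2) array of path vertices
    tau -- callable returning the clearance of a vertex to the closest obstacle
    """
    dx = np.diff(x, axis=0)          # displacement vectors Delta x_i
    second = np.diff(dx, axis=0)     # Delta x_{i+1} - Delta x_i, i = 2..N-1
    smooth = float(np.sum(second * second))
    safety = sum(f_obstacle(tau(v), d_s) ** 2 for v in x)
    return w_s * smooth + w_o * safety
```

A uniformly spaced collinear path with ample clearance yields zero cost, matching the discussion of both terms below.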
\begin{figure}[t] \centering \includegraphics[scale=0.78]{fig/bilinear.pdf} \caption{Illustration of bilinear interpolation. Bilinear interpolation first performs linear interpolation along the $ x $-axis and then does linear interpolation along the $ y $-axis. $ Q_0 $ and $ Q_1 $ are the intermediate results of bilinear interpolation along the $ x $-axis.} \label{fig:bilinear} \end{figure} The first term in Eq. \eqref{eq1} is a measure of the smoothness of the local path. The cost term $ {{\mathbf{\Delta}} {{\mathbf{x}}_{i + 1}} - {\mathbf{\Delta}} {{\mathbf{x}}_i}} $ can be rewritten as $ {{\mathbf{F}}_{i + 1,i}} + {{\mathbf{F}}_{i - 1,i}} $, where $ {{\mathbf{F}}_{i + 1,i}} = {{\mathbf{x}}_{i + 1}} - {{\mathbf{x}}_i} $ and $ {{\mathbf{F}}_{i - 1,i}} = {{\mathbf{x}}_{i - 1}} - {{\mathbf{x}}_i} $. From a physical point of view, this cost term treats the local path as a series spring system, where $ {{\mathbf{F}}_{i + 1,i}} + {{\mathbf{F}}_{i - 1,i}} $ is the resultant force of two springs connecting the vertices $ {{\mathbf{x}}_{i + 1}} $, $ {{\mathbf{x}}_i} $ and $ {{\mathbf{x}}_{i - 1}} $, $ {{\mathbf{x}}_i} $, respectively. If all the resultant forces were equal to zero, all the vertices would be uniformly distributed along a straight line, and the local path would be ideally smooth. The second term in Eq. \eqref{eq1} is a measure of the safety of the local path, which is efficiently evaluated in a grid-interpolation scheme. Firstly, an EDG is built by performing a distance transform on top of an occupancy grid, wherein each cell of the EDG stores the Euclidean distance $ {f_d}\left( P \right) $ between the center $ P $ of this cell and the center of the closest cell occupied by obstacles. Secondly, considering the discrete nature of the EDG, which limits the precision of the evaluation of safety and does not allow the direct computation of derivatives, bilinear interpolation is employed to approximate gradients \cite{kohlbrecher2011flexible}. As depicted in Fig.
\ref{fig:bilinear}, given the world coordinates $ {{\mathbf{x}}_i} = {({x_i},{y_i})^{\mathrm{T}}} $ of a path vertex $ Q $, $ \tau\left( {{{\mathbf{x}}_i}} \right) $, i.e., the measure of the distance between $ Q $ and the closest obstacle, is approximated as \begin{equation} \begin{aligned} \tau\left( {{{\mathbf{x}}_i}} \right) \approx &\frac{{{y_1} - {y_i}}}{{{y_1} - {y_0}}}\left( {\frac{{{x_1} - {x_i}}}{{{x_1} - {x_0}}}{f_d}\left( {{P_{00}}} \right) + \frac{{{x_i} - {x_0}}}{{{x_1} - {x_0}}}{f_d}\left( {{P_{10}}} \right)} \right)\\ + &\frac{{{y_i} - {y_0}}}{{{y_1} - {y_0}}}\left( {\frac{{{x_1} - {x_i}}}{{{x_1} - {x_0}}}{f_d}\left( {{P_{01}}} \right) + \frac{{{x_i} - {x_0}}}{{{x_1} - {x_0}}}{f_d}\left( {{P_{11}}} \right)} \right), \end{aligned} \end{equation} where $ {P_{00}} $, $ {P_{10}} $, $ {P_{01}} $, and $ {P_{11}} $ denote the centers of four closest cells surrounding $ Q $, and $ \left( {{x_0},{y_0}} \right) $, $ \left( {{x_1},{y_0}} \right) $, $ \left( {{x_0},{y_1}} \right) $, and $ \left( {{x_1},{y_1}} \right) $ are the world coordinates of $ {P_{00}} $, $ {P_{10}} $, $ {P_{01}} $, and $ {P_{11}} $ respectively. 
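The interpolation formula above transcribes directly into code (a Python sketch; the function and argument names are ours, with $ f_{00}, \ldots, f_{11} $ standing for the stored values $ f_d(P_{00}), \ldots, f_d(P_{11}) $):

```python
def bilinear_tau(xi, yi, x0, y0, x1, y1, f00, f10, f01, f11):
    """Bilinearly interpolate the EDG distance at (xi, yi) from the values
    stored at the four surrounding cell centers P00, P10, P01, P11."""
    tx0 = (x1 - xi) / (x1 - x0)   # weight of the left column of cells
    tx1 = (xi - x0) / (x1 - x0)   # weight of the right column of cells
    return ((y1 - yi) / (y1 - y0)) * (tx0 * f00 + tx1 * f10) \
         + ((yi - y0) / (y1 - y0)) * (tx0 * f01 + tx1 * f11)
```

At a cell center the interpolant reproduces the stored value exactly, and in between it varies continuously, which is what makes the gradient approximation below possible.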
The gradient $ \nabla \tau\left( {{{\mathbf{x}}_i}} \right) = \left[ {\frac{{\partial \tau}}{{\partial x}}\left( {{{\mathbf{x}}_i}} \right)\;\frac{{\partial \tau}}{{\partial y}}\left( {{{\mathbf{x}}_i}} \right)} \right] $ is approximated as \begin{equation} \begin{aligned} \frac{{\partial \tau}}{{\partial x}}\left( {{{\mathbf{x}}_i}} \right) \approx &\frac{{{y_1} - {y_i}}}{{\left( {{x_1} - {x_0}} \right)\left( {{y_1} - {y_0}} \right)}}\left( {{f_d}\left( {{P_{10}}} \right) - {f_d}\left( {{P_{00}}} \right)} \right)\\ + &\frac{{{y_i} - {y_0}}}{{\left( {{x_1} - {x_0}} \right)\left( {{y_1} - {y_0}} \right)}}\left( {{f_d}\left( {{P_{11}}} \right) - {f_d}\left( {{P_{01}}} \right)} \right), \end{aligned} \end{equation} \begin{equation} \begin{aligned} \frac{{\partial \tau}}{{\partial y}}\left( {{{\mathbf{x}}_i}} \right) \approx &\frac{{{x_1} - {x_i}}}{{\left( {{x_1} - {x_0}} \right)\left( {{y_1} - {y_0}} \right)}}\left( {{f_d}\left( {{P_{01}}} \right) - {f_d}\left( {{P_{00}}} \right)} \right)\\ + &\frac{{{x_i} - {x_0}}}{{\left( {{x_1} - {x_0}} \right)\left( {{y_1} - {y_0}} \right)}}\left( {{f_d}\left( {{P_{11}}} \right) - {f_d}\left( {{P_{10}}} \right)} \right). \end{aligned} \end{equation} Based on the above grid-interpolation scheme, the safety of the local path is efficiently evaluated while the problem of discretization is overcome. \emph{Remark:} Eq. \eqref{eq2} admits a feasible solution if the initial guess is feasible and the weight of the safety constraint is dominant. If the weight of the safety constraint is dominant, the LM algorithm will try to minimize the safety term as much as possible, and each path vertex $ \mathbf{x}_i $ will be adjusted independently along the direction of the negative gradient of $ {f_o}\left( \mathbf{x}_i \right) $. Namely, each path vertex will be pushed away from obstacles until the zero gradient occurs.
For $ {f_o}\left( \mathbf{x}_i \right) $, the zero gradient occurs in the safe areas where the distance to the closest obstacle is greater than $ d_s $. Therefore, $ \mathbf{x} $ will converge to a ``safer'' solution than the initial guess. If the initial guess is feasible, the solution of Eq. \eqref{eq2} is guaranteed to be feasible. \subsection{Least Squares Optimization} For simplicity of notation, in the rest of this paper we take \begin{equation} \begin{aligned} {{\mathbf{S}}_{i - 1,i,i + 1}}\left( {\mathbf{x}} \right) &\coloneqq {\mathbf{\Delta }}{{\mathbf{x}}_{i + 1}} - {\mathbf{\Delta }}{{\mathbf{x}}_i}, \\ {{\mathbf{O}}_i}\left( {\mathbf{x}} \right) &\coloneqq {f_o}\left( {{{\mathbf{x}}_i}} \right), \end{aligned} \end{equation} and Eq. \eqref{eq1} is rewritten as \begin{equation} \begin{aligned} f\left( {\mathbf{x}} \right) &= {\omega _s}\sum\limits_{i = 2}^{N - 1} {{{\mathbf{S}}_{i - 1,i,i + 1}}{{\left( {\mathbf{x}} \right)}^{\mathrm{T}}}{{\mathbf{S}}_{i - 1,i,i + 1}}\left( {\mathbf{x}} \right)} \\ &+ {\omega _o}\sum\limits_{i = 1}^N {{{\mathbf{O}}_i}{{\left( {\mathbf{x}} \right)}^{\mathrm{T}}}{{\mathbf{O}}_i}\left( {\mathbf{x}} \right)}. \end{aligned} \label{eq8} \end{equation} Assuming that a good initial guess $ {\breve{\mathbf{x}}} $ of the parameter vector $ \mathbf{x} $ is known, a numerical solution of Eq. \eqref{eq2} can be obtained using the popular Gauss-Newton or LM algorithms.
The idea is to approximate the cost term by its first order Taylor expansion around the current initial guess \begin{equation} \begin{aligned} {{\mathbf{S}}_{i - 1,i,i + 1}}\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right) &\approx {{\mathbf{S}}_{i - 1,i,i + 1}} + {\mathbf{J}}_{i - 1,i,i + 1}^s{\mathbf{\Delta x}}, \\ {{\mathbf{O}}_i}\left( {\breve{\mathbf{x}} + {\mathbf{\Delta x}}} \right) &\approx {{\mathbf{O}}_i} + {\mathbf{J}}_i^o{\mathbf{\Delta x}}, \end{aligned} \label{eq9} \end{equation} where $ {\mathbf{J}}_{i - 1,i,i + 1}^s $ and $ {\mathbf{J}}_i^o $ are the Jacobians of $ {{\mathbf{S}}_{i - 1,i,i + 1}}\left( {\mathbf{x}} \right) $ and $ {{\mathbf{O}}_i}\left( {\mathbf{x}} \right) $ computed in $ \breve{\mathbf{x}} $, respectively. For simplicity of notation we take $ {{\mathbf{S}}_{i - 1,i,i + 1}} \coloneqq {{\mathbf{S}}_{i - 1,i,i + 1}}\left( {\breve{\mathbf{x}}} \right) $ and $ {{\mathbf{O}}_i} \coloneqq {{\mathbf{O}}_i}\left( \breve{\mathbf{x}} \right) $. Substituting Eq. \eqref{eq9} in the cost terms in Eq. 
\eqref{eq8}, we obtain \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}% \begin{equation} \begin{aligned} &\ {{{\mathbf{S}}_{i - 1,i,i + 1}}{{\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)}^{\mathrm{T}}}{{\mathbf{S}}_{i - 1,i,i + 1}}\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)}\\ &\approx {\left( {{{\mathbf{S}}_{i - 1,i,i + 1}} + {\mathbf{J}}_{i - 1,i,i + 1}^s{\mathbf{\Delta x}}} \right)^{\mathrm{T}}}\left( {{{\mathbf{S}}_{i - 1,i,i + 1}} + {\mathbf{J}}_{i - 1,i,i + 1}^s{\mathbf{\Delta x}}} \right) \\ &= \underbrace {{\mathbf{S}}_{i - 1,i,i + 1}^{\mathrm{T}}{{\mathbf{S}}_{i - 1,i,i + 1}}}_{c_{i - 1,i,i + 1}^s} + 2\underbrace {{\mathbf{S}}_{i - 1,i,i + 1}^{\mathrm{T}}{\mathbf{J}}_{i - 1,i,i + 1}^s}_{{{\left( {{\mathbf{b}}_{i - 1,i,i + 1}^s} \right)}^{\mathrm{T}}}}{\mathbf{\Delta x}} \\ &+ {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}\underbrace {{{\left( {{\mathbf{J}}_{i - 1,i,i + 1}^s} \right)}^{\mathrm{T}}}\left( {{\mathbf{J}}_{i - 1,i,i + 1}^s} \right)}_{{\mathbf{H}}_{i - 1,i,i + 1}^s}{\mathbf{\Delta x}} \\ &= c_{i - 1,i,i + 1}^s + 2{{\left( {{\mathbf{b}}_{i - 1,i,i + 1}^s} \right)}^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{\mathbf{H}}_{i - 1,i,i + 1}^s{\mathbf{\Delta x}}, \end{aligned} \end{equation} \endgroup and \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}% \begin{equation} \begin{aligned} &\ {{\mathbf{O}}_i}{\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)^{\mathrm{T}}}{{\mathbf{O}}_i}\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)\\ &\approx {\left( {{{\mathbf{O}}_i} + {\mathbf{J}}_i^o{\mathbf{\Delta x}}} \right)^{\mathrm{T}}}\left( {{{\mathbf{O}}_i} + {\mathbf{J}}_i^o{\mathbf{\Delta x}}} \right)\\ &= \underbrace {{\mathbf{O}}_i^{\mathrm{T}}{{\mathbf{O}}_i}}_{c_i^o} + 2\underbrace {{\mathbf{O}}_i^{\mathrm{T}}{\mathbf{J}}_i^o}_{{{\left( {{\mathbf{b}}_i^o} \right)}^{\mathrm{T}}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}\underbrace {{{\left( {{\mathbf{J}}_i^o} \right)}^{\mathrm{T}}}\left( {{\mathbf{J}}_i^o} \right)}_{{\mathbf{H}}_i^o}{\mathbf{\Delta x}}\\ &= c_i^o + 2{\left( {{\mathbf{b}}_i^o} \right)^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{\mathbf{H}}_i^o{\mathbf{\Delta x}}. \end{aligned} \end{equation} \endgroup With the above local approximations, we can approximate the function given in Eq. \eqref{eq8} by a quadratic form around the current initial guess \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}% \begin{equation} \begin{aligned} &\ f\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)\\ &= {\omega _s}\sum\limits_{i = 2}^{N - 1} {{{\mathbf{S}}_{i - 1,i,i + 1}}{{\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)}^{\mathrm{T}}}{{\mathbf{S}}_{i - 1,i,i + 1}}\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)} \\ &+ {\omega _o}\sum\limits_{i = 1}^N {{{\mathbf{O}}_i}{{\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)}^{\mathrm{T}}}{{\mathbf{O}}_i}\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right)} \\ &\approx {\omega _s}\sum\limits_{i = 2}^{N - 1} {c_{i - 1,i,i + 1}^s + 2{{\left( {{\mathbf{b}}_{i - 1,i,i + 1}^s} \right)}^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{\mathbf{H}}_{i - 1,i,i + 1}^s{\mathbf{\Delta x}}} \\ &+ {\omega _o}\sum\limits_{i = 1}^N {c_i^o + 2{{\left( {{\mathbf{b}}_i^o} \right)}^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{\mathbf{H}}_i^o{\mathbf{\Delta x}}} \\ &= {c^s} + 2{\left( {{{\mathbf{b}}^s}} \right)^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{{\mathbf{H}}^s}{\mathbf{\Delta x}}\\ &+ {c^o} + 2{\left(
{{{\mathbf{b}}^o}} \right)^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{{\mathbf{H}}^o}{\mathbf{\Delta x}}\\ &= c + 2{{\mathbf{b}}^{\mathrm{T}}}{\mathbf{\Delta x}} + {\mathbf{\Delta }}{{\mathbf{x}}^{\mathrm{T}}}{\mathbf{H\Delta x}}, \end{aligned} \label{eq12} \end{equation} \endgroup where \begin{equation} \begin{array}{l} {c^s} = {\omega _s}\sum {c_{i - 1,i,i + 1}^s},\quad {c^o} = {\omega _o}\sum {c_i^o,} \\ {{\mathbf{b}}^s} = {\omega _s}\sum {{\mathbf{b}}_{i - 1,i,i + 1}^s} ,\quad {{\mathbf{b}}^o} = {\omega _o}\sum {{\mathbf{b}}_i^o} ,\\ {{\mathbf{H}}^s} = {\omega _s}\sum {{\mathbf{H}}_{i - 1,i,i + 1}^s} , \quad {{\mathbf{H}}^o} = {\omega _o}\sum {{\mathbf{H}}_i^o} ,\\ c = {c^s} + {c^o},\quad {\mathbf{b}} = {{\mathbf{b}}^s} + {{\mathbf{b}}^o},\quad {\mathbf{H}} = {{\mathbf{H}}^s} + {{\mathbf{H}}^o}. \end{array} \end{equation} Eq. \eqref{eq12} can be minimized in $ {{\mathbf{\Delta x}}} $ by taking the derivative of $ f\left( {{\breve{\mathbf{x}}} + {\mathbf{\Delta x}}} \right) $ with respect to $ {{\mathbf{\Delta x}}} $ and setting the result to zero \begin{equation} {\mathbf{H\Delta }}{{\mathbf{x}}^ * } = - {\mathbf{b}}, \label{eq14} \end{equation} where $ {\mathbf{H}} $ is the system matrix of the optimization problem. The solution of the path optimization problem is obtained by adding the increment $ {\mathbf{\Delta }}{{\mathbf{x}}^ * } $ to the initial guess \begin{equation} {{\mathbf{x}}^ * } = {\breve{\mathbf{x}}} + {\mathbf{\Delta }}{{\mathbf{x}}^ * }. \label{eq15} \end{equation} The Gauss-Newton algorithm iterates the linearization in Eq. \eqref{eq12}, the solution in Eq. \eqref{eq14}, and the update step in Eq. \eqref{eq15}. In every iteration, the previous solution is used as the linearization point and the initial guess until a given termination criterion is met. The LM algorithm introduces a damping factor and backup actions to Gauss-Newton to control the convergence. Instead of solving Eq.
\eqref{eq14}, the LM algorithm solves a damped version \begin{equation} \left(\mathbf{H} + \lambda \mathbf{I}\right) \mathbf{\Delta} \mathbf{x^*} = -\mathbf{b}, \label{eq16} \end{equation} where $ \lambda $ is a damping factor to control the step size in case of nonlinear surfaces. \begin{figure*}[t] \centering \includegraphics[scale=0.046]{fig/sparsebandedmatrix.pdf} \caption{Superposition construction process of the sparse-banded Hessian matrix of the path optimization problem. Top and bottom matrices denote the Hessian matrices of the smoothness and safety constraints, respectively.} \label{fig:bandedmatrix} \end{figure*} \subsection{Sparse-Banded System Structure} An important property of the underlying optimization problem is the sparse-banded structure of the system matrix $ {\mathbf{H}} $ \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}% \begin{equation} \begin{aligned} {\mathbf{H}} &= {{\mathbf{H}}^s} + {{\mathbf{H}}^o}\\ &= {\omega _s}\sum\limits_{i = 2}^{N - 1} {{\mathbf{H}}_{i - 1,i,i + 1}^s\left( {\breve{\mathbf{x}}} \right)} + {\omega _o}\sum\limits_{i = 1}^N {{\mathbf{H}}_i^o\left( {\breve{\mathbf{x}}} \right)} \\ &= {\omega _s}\sum\limits_{i = 2}^{N - 1} {{{\left( {{\mathbf{J}}_{i - 1,i,i + 1}^s} \right)}^{\mathrm{T}}}\left( {{\mathbf{J}}_{i - 1,i,i + 1}^s} \right)} + {\omega _o}\sum\limits_{i = 1}^N {{{\left( {{\mathbf{J}}_i^o} \right)}^{\mathrm{T}}}\left( {{\mathbf{J}}_i^o} \right)}. \end{aligned} \end{equation} \endgroup Recall that $ {{\mathbf{S}}_{i - 1,i,i + 1}}\left( {\mathbf{x}} \right) = {\mathbf{\Delta }}{{\mathbf{x}}_{i + 1}} - {\mathbf{\Delta }}{{\mathbf{x}}_i} = {{\mathbf{x}}_{i + 1}} - 2{{\mathbf{x}}_i} + {{\mathbf{x}}_{i - 1}} $ is a function of the three consecutive variables $ {{{\mathbf{x}}_{i - 1}}} $, $ {{{\mathbf{x}}_i}} $, and $ {{{\mathbf{x}}_{i + 1}}} $. 
Therefore, in $ {\mathbf{J}}_{i - 1,i,i + 1}^s\left( {\breve{\mathbf{x}}} \right) $, the partial derivatives with respect to all variables other than these three are zero \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}% \begin{equation} \setlength{\arraycolsep}{1.2pt} {\mathbf{J}}_{i - 1,i,i + 1}^s\left( {\breve{\mathbf{x}}} \right) = \left( {\begin{array}{*{20}{c}} {\mathbf{0}}, &\cdots, &{\mathbf{0}}, &{{{\mathbf{I}}_{2 \times 2}}}, &{ - 2{{\mathbf{I}}_{2 \times 2}}}, &{{{\mathbf{I}}_{2 \times 2}}}, &{\mathbf{0}}, & \cdots, &{\mathbf{0}} \end{array}} \right), \end{equation} \endgroup and $ {\mathbf{H}}_{i - 1,i,i + 1}^s\left( {\breve{\mathbf{x}}} \right) = {\mathbf{J}}_{i - 1,i,i + 1}^s{\left( {\breve{\mathbf{x}}} \right)^{\mathrm{T}}}{\mathbf{J}}_{i - 1,i,i + 1}^s\left( {\breve{\mathbf{x}}} \right) $ only contributes a $ 6 \times 6 $ block on the diagonal of $ {\mathbf{H}} $ \begingroup\makeatletter\def\f@size{9}\check@mathfonts \def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}% \begin{equation} \setlength{\arraycolsep}{1.2pt} {\mathbf{H}}_{i - 1,i,i + 1}^s\left( {\breve{\mathbf{x}}} \right) = \left( {\begin{array}{ccccc} \ddots &{}&{}&{}&{}\\ {}&{{{\mathbf{I}}_{2 \times 2}}}&{ - 2{{\mathbf{I}}_{2 \times 2}}}&{{{\mathbf{I}}_{2 \times 2}}}&{}\\ {}&{ - 2{{\mathbf{I}}_{2 \times 2}}}&{4{{\mathbf{I}}_{2 \times 2}}}&{ - 2{{\mathbf{I}}_{2 \times 2}}}&{}\\ {}&{{{\mathbf{I}}_{2 \times 2}}}&{ - 2{{\mathbf{I}}_{2 \times 2}}}&{{{\mathbf{I}}_{2 \times 2}}}&{}\\ {}&{}&{}&{}& \ddots \end{array}} \right). \end{equation} \endgroup For simplicity of notation we omit the zero blocks. There is a similar property for $ {\mathbf{J}}_i^o\left( {\breve{\mathbf{x}}} \right) $ and $ {{\mathbf{H}}_i^o\left( {\breve{\mathbf{x}}} \right)} $. In the end, $ {\mathbf{H}} $ is a $ 2N \times 2N $ banded matrix with bandwidth 5, as illustrated in Fig. \ref{fig:bandedmatrix}.
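The claimed band structure is easy to verify numerically. The following sketch (Python, with our own names and $ \omega_s = 1 $) assembles $ {\mathbf{H}}^s $ from the sparse Jacobian rows above for a small $ N $:

```python
import numpy as np

def smoothness_hessian(N, w_s=1.0):
    """Assemble H^s = w_s * sum_i (J^s_i)^T J^s_i for N 2-D vertices,
    using the sparse Jacobian rows [0 ... I, -2I, I ... 0]."""
    H = np.zeros((2 * N, 2 * N))
    for i in range(1, N - 1):                        # interior vertices
        J = np.zeros((2, 2 * N))
        J[:, 2 * (i - 1):2 * i] = np.eye(2)          # d/dx_{i-1}
        J[:, 2 * i:2 * (i + 1)] = -2.0 * np.eye(2)   # d/dx_i
        J[:, 2 * (i + 1):2 * (i + 2)] = np.eye(2)    # d/dx_{i+1}
        H += w_s * (J.T @ J)
    return H
```

For $ N = 6 $, the assembled $ 12 \times 12 $ matrix is symmetric and every non-zero entry satisfies $ |i - j| \le 5 $, consistent with the stated band.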
\emph{Remark:} Consider an $ n \times n $ banded matrix $ {\mathbf{A}} = ({a_{ij}}) $; the bandwidth of $ {\mathbf{A}} $ is the smallest number $ k $ such that $ {a_{i,j}} = 0 $ whenever $ \left| {i - j} \right| > k $. For simplicity of notation, Eq. \eqref{eq16} is rewritten as \begin{equation} \mathbf{A \Delta x^*} = -{\mathbf{b}}, \label{eq20} \end{equation} where $ {\mathbf{A}} = {\mathbf{H}} + \lambda {\mathbf{I}} $. Since $ {\mathbf{H}} $ is symmetric positive semidefinite and $ \lambda > 0 $, $ \mathbf{A} $ is symmetric positive definite. To solve Eq. \eqref{eq20} efficiently, $ \mathbf{A} $ is decomposed as $ {\mathbf{A}} = {\mathbf{LU}} $ by LU decomposition \begin{equation} \mathbf{A\Delta x^*} = {\mathbf{LU\Delta x^*}} = -{\mathbf{b}}, \end{equation} where $ {\mathbf{L}} $ is a \emph{unit lower triangular} matrix and $ {\mathbf{U}} $ is an \emph{upper triangular} matrix. We can obtain $ \mathbf{\Delta x^*} $ by first solving \begin{equation} \mathbf{Ly} = -{\mathbf{b}}, \label{eq22} \end{equation} and then solving \begin{equation} \mathbf{U \Delta x^*} = {\mathbf{y}}. \label{eq23} \end{equation} Eq. \eqref{eq22} can be solved by forward elimination since $ {\mathbf{L}} $ is unit lower triangular. To solve Eq. \eqref{eq23}, we can use back substitution since $ {\mathbf{U}} $ is upper triangular. The forward elimination and back substitution algorithm is essentially the Gaussian elimination algorithm, but it makes full use of the non-zero element distribution characteristics of the sparse-banded matrix and significantly reduces the computational complexity. The time complexity of the forward elimination and back substitution algorithm is $ O({k^2} \cdot n) $, while that of the Gaussian elimination algorithm is $ O({n^3}) $, where $ n $ and $ k $ are the size and bandwidth of the banded matrix respectively. In addition, the forward elimination and back substitution algorithm is substantially faster than a general sparse factorization because it avoids having to store the factored form of the matrix.
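A compact sketch of the band-exploiting solver (in Python rather than the C/C++ implementation; names are ours, and the in-band LU factorization assumes no pivoting is needed, which holds for the symmetric positive-definite $ \mathbf{A} $):

```python
import numpy as np

def banded_lu_solve(A, b, k):
    """Solve A x = b for a banded matrix A with bandwidth k via in-band LU
    factorization (L unit lower, U upper; no pivoting), then forward
    elimination and back substitution.  Overall cost is O(k^2 * n)."""
    n = len(b)
    A = np.array(A, dtype=float)        # factored in place: L below, U above
    x = np.array(b, dtype=float)
    for j in range(n):                  # in-band LU factorization
        hi = min(j + k + 1, n)
        for i in range(j + 1, hi):
            m = A[i, j] / A[j, j]
            A[i, j] = m                               # multiplier -> L
            A[i, j + 1:hi] -= m * A[j, j + 1:hi]      # band stays within k
    for i in range(n):                  # forward elimination: L y = b
        lo = max(0, i - k)
        x[i] -= A[i, lo:i] @ x[lo:i]
    for i in range(n - 1, -1, -1):      # back substitution: U x = y
        hi = min(i + k + 1, n)
        x[i] = (x[i] - A[i, i + 1:hi] @ x[i + 1:hi]) / A[i, i]
    return x
```

Only the $ O(k) $ entries per row inside the band are ever touched, which is where the $ O(k^2 \cdot n) $ complexity stated above comes from.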
Readers can refer to \cite{datta2010numerical} for more details about the forward elimination and back substitution algorithm. \subsection{Velocity Profile Generation} After path optimization, an improved local path is obtained, but it is still piecewise linear and not suitable for velocity planning. To address this problem, the local path is further smoothed via cubic spline interpolation. Finally, a numerical integration (NI)-based time-optimal velocity planning algorithm presented in \cite{zhang2018multilevel} is employed to generate a feasible linear velocity profile along the smoothed local path. The NI-based algorithm can acquire a provably time-optimal trajectory with low computational complexity \cite{pham2014general, shen2017essential, shen2018complete, shen2020real}, which solves the problem by computing the maximum velocity curve (MVC) considering both kinematic and environmental constraints and then performing numerical integration under the MVC. Readers can refer to \cite{zhang2018multilevel} for more details about the proofs of feasibility, completeness, and time-optimality of this algorithm. \section{Implementation details} \subsection{Setup} E\ensuremath{^3}MoP is implemented in C/C++. The maximum number of iterations of the LM algorithm is set to $ 100 $. The initial guess of local path optimization is obtained by sampling in the global path with an interval of $ 0.1 $ $ \mathrm{m} $. In consideration of the computational efficiency and sensing range, the cumulative length of the initial local path is set to $ 3 $ $ \mathrm{m} $. The EDG is computed through an efficient distance transform algorithm described in \cite{felzenszwalb2012distance}. We maintain two EDGs of different resolutions for the hierarchical motion planning framework, with $ 0.1 $ $ \mathrm{m/cell} $ for the global grid map and $ 0.05 $ $ \mathrm{m/cell} $ for the local grid map.
The dimension of the global grid map corresponds to the global prior map, while the local grid map is a sliding window whose center corresponds to the robot pose and whose dimension is set to $ 8 \times 8 $ $ \mathrm{m^2} $. The safety distance $ {d_s} $ is set to $ 0.5 $ $ \mathrm{m} $. Moreover, the global path planning and EDG updating modules run in two independent threads, with cycles of $ 250 $ $ \mathrm{ms} $ and $ 40 $ $ \mathrm{ms} $ respectively. All the simulations and experiments are tested on a laptop with an Intel Core i7-9750H processor and 16 GB RAM. To obtain a set of feasible and effective weights for the path optimization formulation, we first fix $ \omega_s $ to $ 1 $ and then set an interval for $ \omega_o $. After that, $ \omega_o $ is tuned by bisection and visual inspection with the help of the ROS visualization tool RViz. In this work, $ \omega_o $ is set to $ 10 $. In the future, we plan to use machine learning techniques to tune these weights adaptively. \begin{figure*}[t] \centering \includegraphics[scale=0.207]{fig/intel.pdf} \caption{Intel Research Lab. A map derived from real sensor data is used to simulate a 2-D environment. We randomly select three sets of start poses and goal poses to test global path planners, as shown in (a)-(c). The light blue strips indicate the footprints of the robot along the planned paths.} \label{fig:intel} \end{figure*} \subsection{Metrics} The proposed pruning strategy of motion primitives is integrated into the standard state lattice-based path planner (A* + motion primitives) \cite{likhachev2009planning} to derive a new path planner (A* + motion primitives + pruning strategy). The new path planner is then compared with the standard state lattice-based path planner to validate the effectiveness of the pruning strategy. Both path planners require offline designed motion primitives.
As mentioned before, the trajectory generation algorithm described in \cite{howard2007optimal} is employed with the unicycle model to generate motion primitives for differential-drive robots. Furthermore, the metrics of the number of expanded states and the planning time are used to evaluate the computational efficiency of graph search \cite{likhachev2009planning}, and the graph size, i.e., the number of nodes in the search graph, is employed to evaluate the memory consumption. To validate the efficiency, smoothness, and flexibility of the proposed optimization-based local planner, we compare it with the advanced quintic B\'{e}zier curve-based state space sampling local planner (QBC) \cite{zhang2018multilevel}. The total travel time is utilized to evaluate the motion efficiency of local planners. In addition, we carefully design various simulation and experimental scenarios to compare the smoothness and flexibility of local planners, which will be detailed in Sections VI and VII. \section{Simulations} In this section, we verify the applicability of the proposed E\ensuremath{^3}MoP in simulation. We choose Stage \cite{vaughan2008massively} as the simulation platform because of its lightweight nature. \subsection{Simulation Setup} To make the simulations more realistic and exhibit the real-world clutter, noise, and occlusion effects, the simulation environment is built on top of a map generated from real sensor data. As depicted in Fig. \ref{fig:intel}, we simulate a 2-D environment based on a map built from the Intel Research Lab data set, which is available from the Radish data set repository \cite{howard2003robotics}. The map is constructed by the open-source 2-D laser SLAM system Karto\footnote{\url{http://wiki.ros.org/slam_karto}}. The size of the environment is approximately $ 30 \times 30$ $ \mathrm{m^2} $.
\begin{table}[t] \centering \caption{Quantitative statistics of path planning results in Intel Research Lab} \label{tab:globalplanner} \scalebox{0.94}{ \begin{tabular}{cccccccc} \toprule \multicolumn{2}{c}{} & \# of & Time & Branching & Graph & Path \\ \multicolumn{2}{c}{} & expands & (secs) & factor & size & cost \\ \midrule \multirow{2}{*}{Fig. \ref{fig:intel}(a)} & Lattice & 56,952 & 0.052 & 6.054 & 77,056 & 98,830 \\ & Ours & \textbf{17,657} & \textbf{0.016} & \textbf{4.899} & \textbf{24,405} & 98,830 \\ \midrule \multirow{2}{*}{Fig. \ref{fig:intel}(b)} & Lattice & 54,548 & 0.049 & 6.216 & 78,232 & 91,738 \\ & Ours & \textbf{17,634} & \textbf{0.013} & \textbf{5.132} & \textbf{23,917} & 91,738 \\ \midrule \multirow{2}{*}{Fig. \ref{fig:intel}(c)} & Lattice & 81,387 & 0.075 & 6.057 & 100,609 & 108,210 \\ & Ours & \textbf{30,966} & \textbf{0.024} & \textbf{4.788} & \textbf{39,600} & 108,210 \\ \bottomrule \end{tabular}} \end{table} \subsection{Comparison on Global Planning} We randomly select three sets of start poses and goal poses in the simulation environment to test global path planners, as shown in Fig. \ref{fig:intel}(a)-(c). The state lattice-based path planner and the proposed path planner generate equal quality paths. Therefore, we only show one path in each sub-figure. Table \ref{tab:globalplanner} enumerates some quantitative statistics of path planning results in these three sets of simulations. \subsubsection{Comparison on Computational Efficiency} Thanks to the pruning strategy of motion primitives, the search direction of the proposed path planner is focused towards the most promising search areas, and the computational efficiency of graph search is significantly improved. Compared with the state lattice-based path planner, the number of expanded states of the proposed path planner decreases by an average of $ 66.21\% $. 
As a result, planning with the proposed path planner is more than three times faster than planning with the state lattice-based path planner. \subsubsection{Comparison on Memory Consumption} Both the state lattice-based path planner and the proposed path planner employ the implicit graph representation. Namely, memory is allocated on demand as new nodes are created during the search process, rather than allocating memory for the whole search space in advance. Taking advantage of the pruning strategy of motion primitives, many unpromising search branches are pruned, and the number of created nodes is reduced. The graph size of the proposed path planner is about $ 33.87\% $ of that of the state lattice-based path planner. In conclusion, the proposed path planner generates equal-quality paths with much less time and memory consumption than the state lattice-based path planner, which demonstrates the effectiveness of the proposed pruning strategy of motion primitives and implicitly validates that the benefit of the pruning strategy is not already subsumed by a sophisticated heuristic such as $ h_{2D} $. \begin{figure}[t] \centering \includegraphics[scale=0.35]{fig/uturn.pdf} \caption{Local details of the simulation environment.} \label{fig:uturn} \end{figure} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.19]{fig/localplanner1.pdf}} \centering \subfigure[]{\includegraphics[scale=0.19]{fig/localplanner2.pdf}} \caption{Comparisons on local planners in simulation. (a) Planning result of QBC. The blue trajectories represent the feasible candidate B\'{e}zier curves and the red trajectory denotes the selected optimal B\'{e}zier curve. (b) Planning result of the proposed local planner. The red arrows indicate the optimized poses.
Green trajectories in both (a) and (b) indicate the global paths provided by the global path planner.} \label{fig:localplanner} \end{figure} \subsection{Comparison on Local Planning} \subsubsection{Comparison on Motion Flexibility} To compare the motion flexibility of local planners, we make the robot move along the planned path shown in Fig. \ref{fig:intel}(a). This is an extremely challenging navigation task. Firstly, the robot needs to make a sharp U-turn at the corner, as illustrated in Fig. \ref{fig:intel}(a) and Fig. \ref{fig:uturn}. This process requires local planners to provide flexible motions. Secondly, after going around the corner, the robot needs to go through a narrow polyline corridor to reach the goal, which requires safe motions. \emph{The whole navigation process poses a huge challenge to the flexibility and safety of local planners.} In the test, QBC guides the robot around the corner slowly. At the turning point of the corner, it selects a going-forward motion instead of a turning-left motion, as depicted in Fig. \ref{fig:localplanner}(a). In the end, the robot comes to a dead end. All candidate paths are infeasible and the local planner fails. On the contrary, the proposed local planner guides the robot around the U-turn smoothly, as shown in Fig. \ref{fig:localplanner}(b). This demonstrates that the proposed local planner provides better motion flexibility. \subsubsection{Comparison on Motion Efficiency} To compare the motion efficiency of local planners, we make the robot move along the planned paths shown in Fig. \ref{fig:intel}(b) and (c). The local path generated by QBC is made up of many connected pieces of incomplete B\'{e}zier curves and is somewhat rough. On the contrary, the smoothness constraint is explicitly considered in the proposed local path optimization approach. Therefore, the local path generated by the proposed local planner is much smoother than that of QBC.
\emph{The improvement in smoothness is directly reflected in motion efficiency.} In the tests, the proposed local planner takes $ 49.78 $ $ \mathrm{s} $ and $ 50.59 $ $ \mathrm{s} $ respectively to guide the robot to the goals, while QBC costs $ 54.43 $ $ \mathrm{s} $ and $ 55.35 $ $ \mathrm{s} $ respectively under the same conditions. This supports the claim that the proposed local planner achieves higher motion efficiency. In conclusion, the proposed optimization-based local planner offers better motion flexibility and efficiency than QBC. \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.15]{fig/platform.pdf}} \centering \subfigure[]{\includegraphics[scale=0.15]{fig/scenario1.pdf}} \caption{(a) Experimental platform. (b) Experimental scenario I.} \label{fig:platform} \end{figure} \section{Experiments} In this section, we design three sets of challenging experimental scenarios to verify the efficiency, flexibility, smoothness, and safety of the proposed E\ensuremath{^3}MoP. The experimental results are presented and discussed in detail to validate the superior performance of E\ensuremath{^3}MoP. \subsection{Experimental Setup} In this work, the experiments are conducted on a Pioneer 3-DX differential-drive robot equipped with a Hokuyo UTM-30LX laser rangefinder, as shown in Fig. \ref{fig:platform}(a). The maximum linear velocity of the robot is $ 1.2 $ $ \mathrm {m/s} $. Considering the safety of indoor navigation, we set the upper bound of the linear velocity to $ 0.7 $ $ \mathrm {m/s} $. The laser rangefinder has a scanning range of $ {270^ \circ }$ with an angular resolution of $0.25^\circ$, and the effective measurement range is $ 0.1 $ $ \mathrm{m} $ to $ 30 $ $ \mathrm {m} $. Due to the limited angular range, backward motions are not considered in this work. \subsection{Comparison on Global Planning} To validate the superior performance of the proposed path planner, Scenario I is designed as depicted in Fig. \ref{fig:platform}(b).
A box is placed in front of the robot as an obstacle. The robot is required to reach the door. To make a fair comparison of global path planners, we use the same local planner in the test of Scenario I. In both sets of tests, the thread of global path planning is triggered $ 61 $ times, and the experimental results are illustrated in Fig. \ref{fig:navpath}. The average number of expanded states and the graph size of the state lattice-based path planner are $ 1794 $ and $ 4226 $ respectively, while those of the proposed path planner are $ 978 $ and $ 1815 $ respectively. Compared with the state lattice-based path planner, the computational efficiency and memory consumption of the proposed path planner are improved by $ 45.48\% $ and $ 57.05\% $ respectively, which demonstrates the effectiveness of the proposed pruning strategy of motion primitives. \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.2]{fig/globalplanner1.pdf}} \centering \subfigure[]{\includegraphics[scale=0.2]{fig/globalplanner2.pdf}} \caption{Comparative experimental results of global path planners in the test of Scenario I. (a) The number of expanded states.
(b) The number of nodes in the search graph.} \label{fig:navpath} \end{figure} \begin{table}[t] \centering \caption{Comparisons of Motion Efficiency (Seconds) in Scenario II} \label{tab:localplanner} \begin{tabular}{cccc} \toprule Run & QBC planner \cite{zhang2018multilevel} & TEB planner \cite{rosmann2012trajectory} & Ours \\ \midrule 1 & 25.614 & 23.625 & \textbf{19.813} \\ 2 & 25.711 & 23.417 & \textbf{19.842} \\ 3 & 25.872 & 24.224 & \textbf{19.921} \\ 4 & 25.836 & 23.626 & \textbf{19.909} \\ 5 & 25.743 & 23.871 & \textbf{19.834} \\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.1826]{fig/scenario2.pdf}} \centering \subfigure[]{\includegraphics[scale=0.18]{fig/localplanner3.pdf}} \centering \subfigure[]{\includegraphics[scale=0.18]{fig/localplanner4.pdf}} \centering \subfigure[]{\includegraphics[scale=0.18]{fig/localplanner5.pdf}} \centering \subfigure[]{\includegraphics[scale=0.18]{fig/localplanner6.pdf}} \caption{(a) Experimental scenario II. (b) Planning result of the proposed local planner. (c) Planning result of QBC. (d) Planning result of the proposed local planner. (e) Planning result of TEB.} \label{fig:environment} \end{figure} \subsection{Comparison on Local Planning} To validate the flexibility, smoothness, and efficiency of the proposed local planner, Scenario II is designed as illustrated in Fig. \ref{fig:environment}(a). In this scenario, the robot is required to move along a long corridor and make a sharp right turn to pass through the door. In addition, there are two boxes placed inside the door as obstacles. Such a scenario requires local planners to provide smooth and flexible motions to guide the robot to the goal. In the test of Scenario II, the proposed local planner exhibits excellent motion efficiency and flexibility.
In particular, when the robot approaches the door and needs to make a sharp right turn, the proposed local planner guides the robot through the door smoothly, as shown in Fig. \ref{fig:environment}(b). In contrast, QBC guides the robot through the door slowly. As illustrated in Fig. \ref{fig:environment}(c), the endpoints of the offline-designed B\'{e}zier curves are fixed and cannot be adjusted according to the real-time planning task, limiting the motion flexibility of local planning. We repeat the experiment several times, and the experimental results are presented in Table \ref{tab:localplanner}. QBC takes approximately $ 25.76 $ $ \mathrm{s} $ to guide the robot to the goal, while the proposed local planner takes only $ 19.86 $ $ \mathrm{s} $ on average under the same conditions. Compared with QBC, the motion efficiency of the proposed local planner is improved by $ 22.87\% $. To validate the advantage of the proposed path/velocity decoupled local planner, we compare it with the popular open-source path/velocity coupled local planner TEB \cite{rosmann2012trajectory,rosmann2013efficient}. As detailed in Section IV-A, the proposed local path optimization approach is based on a \emph{purely geometric} formulation consisting of smoothness and safety constraints, while the geometric smoothness of the local path is not incorporated in the formulation of TEB. As a result, the local path obtained by the proposed local planner is noticeably smoother than that of TEB, as shown in Fig. \ref{fig:environment}(d) and (e). We also repeat the experiment several times, and the experimental results are presented in Table \ref{tab:localplanner}. TEB takes $ 23.75 $ $ \mathrm{s} $ on average to guide the robot to the goal. Compared with TEB, the motion efficiency of the proposed local planner is improved by $ 16.37\% $.
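As a quick cross-check, the averaged travel times and the improvement percentages quoted above can be reproduced directly from the per-run values in Table~\ref{tab:localplanner}. The following is an illustrative script, not part of the authors' toolchain:

```python
# Per-run travel times [s] copied from Table II (runs 1-5 for each planner).
qbc  = [25.614, 25.711, 25.872, 25.836, 25.743]   # QBC planner
teb  = [23.625, 23.417, 24.224, 23.626, 23.871]   # TEB planner
ours = [19.813, 19.842, 19.921, 19.909, 19.834]   # proposed local planner

mean = lambda xs: sum(xs) / len(xs)
# Relative reduction in average travel time, in percent.
improvement = lambda base, new: 100.0 * (mean(base) - mean(new)) / mean(base)

print(f"QBC avg:  {mean(qbc):.2f} s")             # 25.76 s
print(f"TEB avg:  {mean(teb):.2f} s")             # 23.75 s
print(f"Ours avg: {mean(ours):.2f} s")            # 19.86 s
print(f"vs QBC: {improvement(qbc, ours):.2f}%")   # 22.87%
print(f"vs TEB: {improvement(teb, ours):.2f}%")   # 16.37%
```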
Based on the above comparative experimental results, it is concluded that the proposed local planner achieves superior performance in smoothness, motion efficiency, and flexibility. \begin{figure}[t] \centering \includegraphics[scale=0.2]{fig/navigation.pdf} \caption{Route of the navigation experiment.} \label{fig:navigation} \end{figure} \begin{figure}[t] \centering \subfigure[]{\includegraphics[scale=0.29]{fig/dynamic1.pdf}} \centering \subfigure[]{\includegraphics[scale=0.29]{fig/dynamic2.pdf}} \caption{Experimental scenario III. (a) Passing through a narrow gap. (b) Avoiding an oncoming person.} \label{fig:dynamic} \end{figure} \subsection{Navigation Experiments} Finally, we test E\ensuremath{^3}MoP in a $ 92.9 \times 26.5 $ $ \mathrm{m^2} $ laboratory environment. As shown in Fig. \ref{fig:navigation}, the robot is required to move from $ \left( {53.7,3.8} \right) $ to $ \left( {31.0,4.0} \right) $. To make the test more challenging, we place some boxes as static obstacles in the long corridor that the robot passes through. Furthermore, we design a dynamic pedestrian scene to challenge the safety, flexibility, and robustness of E\ensuremath{^3}MoP. The total travel distance of the navigation experiment is approximately $ 44.2 $ $ \mathrm{m} $, and E\ensuremath{^3}MoP takes $ 62.83 $ $ \mathrm{s} $ to guide the robot to the goal. \begin{figure}[t] \centering \includegraphics[scale=0.19]{fig/crowd.pdf} \caption{Screenshots of the robot passing through the crowd.} \label{fig:crowd} \end{figure} Here we summarize several representative experimental results of the navigation experiment to demonstrate the key characteristics of E\ensuremath{^3}MoP. \subsubsection{Dealing with static obstacles} Fig. \ref{fig:dynamic}(a) illustrates the testing scenario with static obstacles. The robot avoids some boxes and passes through a narrow gap smoothly, thanks to the reliable local planning results. \subsubsection{Dealing with dynamic obstacles} Fig.
\ref{fig:dynamic}(b) shows the testing scenario with dynamic obstacles. The robot implements fast re-planning and avoids an oncoming person successfully, thanks to the efficient local planner. \subsubsection{Passing through the crowd} Fig. \ref{fig:crowd} shows the robot smoothly passing through the crowd under the guidance of the local planner. This challenging scenario requires the local planner to provide flexible, safe, and smooth motion commands. \section{Conclusion} In this paper, a three-layer motion planning framework called E\ensuremath{^3}MoP is proposed to address the motion planning problem of mobile robots in complex environments. A novel heuristic-guided pruning strategy of motion primitives is proposed to improve the computational efficiency of graph search. In addition, a soft-constrained local path optimization approach combined with time-optimal velocity planning is presented to generate safe, smooth, and efficient motion commands according to real-time sensor data. Furthermore, the sparse-banded system structure of the underlying path optimization formulation is fully exploited to efficiently solve the problem. Extensive simulations and experiments are presented and discussed to validate that E\ensuremath{^3}MoP has superior performance in terms of safety, smoothness, flexibility, and efficiency. Since the objective function of the local path optimization comprises both convex and concave terms, it is non-convex; thus the local planner may get stuck in local optima and is unable to transit across obstacles. In the future, we plan to extend the proposed local path optimization approach with the theory of homology classes \cite{bhattacharya2012topological} to maintain several homotopically distinct local paths and seek global optima. \section*{Acknowledgment} The data for the Intel Research Lab is available from the Radish data set repository \cite{howard2003robotics}. The authors gratefully thank Dirk H{\"a}hnel for providing this data set.
\bibliographystyle{IEEEtran}
\section{\label{sec:Introdution} Introduction} The Kuramoto model describes synchronization between coupled phase oscillators, and has been studied extensively since it was first introduced \cite{kuramoto1975international}. This representative model has attracted much attention for its wide range of applications in physics, neuroscience, biology, and the social sciences, as well as for the variety of collective dynamical phenomena it demonstrates \cite{Str2000Kuramoto,acebron2005kuramoto,Are2008Synchronization,Rod2016Kuramoto}, including exotic collective states such as chimera states \cite{abrams2004chimera} and Bellerophon states \cite{qiu2016synchronization}, well-known first- and second-order phase transitions \cite{Str2000Kuramoto,acebron2005kuramoto,Are2008Synchronization,Rod2016Kuramoto}, and an unusual hybrid phase transition \cite{coutinho2013kuramoto}. Here, from among the different lines of enquiry, we consider the problem of entrainment by an external periodic field. The phase diagram for a system of uniformly coupled Kuramoto oscillators in an external periodic field was first reported in \cite{sakaguchi1988cooperative}. More recently, Childs and Strogatz performed a complete bifurcation analysis of explicit dynamical equations for the periodically forced Kuramoto model, revealing the exact locations of all bifurcations \cite{childs2008stability}. In this work, we report a new phase transition in the periodically forced Kuramoto model, based on numerical and analytical analysis of the explicit dynamical equations derived in \cite{childs2008stability}. We show that this transition cannot be revealed by bifurcation analysis because it is not caused by a bifurcation. Moreover, it cannot be classified as a first or second order phase transition since it does not display critical phenomena characteristic of either transition.
Our analysis reveals the topological character of this transition, and the crucial role of a singular point in the order-parameter space. Topological phase transitions have been observed and studied in quantum systems such as fractional quantum Hall liquids and topological insulators \cite{wen1995topological,hasan2010colloquium,goldman2016topological}. Our work shows that a topological transition can also occur in a classical system such as the forced Kuramoto model. In the next section, we provide an overview of the forced Kuramoto model, the order-parameter space, and a brief summary of the physical behavior displayed by the model. In Section~\ref{sec:three}, we present an updated phase diagram and our analysis of the transition. Finally, we discuss our findings in Section~\ref{sec:four}. \section{\label{sec:two} The Forced Kuramoto Model} In the Kuramoto model with a periodic field of strength \(F\) and angular frequency \(\sigma\), the phase \(\theta_i\) of the \(i^\text{th}\) phase oscillator is given by \begin{equation} \frac{d \theta_i}{d t} = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j-\theta_i) + F\sin(\sigma t - \theta_i), \label{eq:km} \end{equation} where \(\omega_i\) is the oscillator's natural frequency, \(K\) is the uniform coupling strength, and \(N\) is the number of oscillators in the system. The macroscopic state of the system is conveniently described by the complex order parameter \begin{equation} z = \rho e^{i\psi} = \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j}, \label{eq:order_parameter} \end{equation} where the parameter $\rho$ characterizes the degree of synchronization between the oscillators ($0 \leq \rho \leq 1$) and the phase $\psi$ shows the direction of alignment.
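As a minimal numerical illustration of Eq.~(\ref{eq:order_parameter}) (a sketch, not code from this work), the order parameter can be evaluated directly from a set of phases: fully aligned phases give $\rho = 1$, while phases spread uniformly over the circle give $\rho \approx 0$ (the incoherent state).

```python
import cmath
import math

def order_parameter(thetas):
    """Complex order parameter z = rho*exp(i*psi) = (1/N) sum_j exp(i*theta_j)."""
    z = sum(cmath.exp(1j * t) for t in thetas) / len(thetas)
    return abs(z), cmath.phase(z)  # (rho, psi)

# Fully synchronized population: all phases aligned at 0.3 -> rho = 1, psi = 0.3.
rho, psi = order_parameter([0.3] * 1000)
print(f"{rho:.6f} {psi:.6f}")  # 1.000000 0.300000

# Incoherent population: phases uniform on the circle -> rho ~ 0.
N = 1000
rho, _ = order_parameter([2 * math.pi * j / N for j in range(N)])
print(f"{rho:.6f}")  # 0.000000
```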
Applying the Ott-Antonsen ansatz \cite{ott2008low} to a system of oscillators with natural frequencies drawn from a Lorentzian distribution of spread \(\Delta=1\), in a frame rotating at the field frequency $\sigma$, the macroscopic dynamics of the system are reduced to \begin{align} \frac{d \rho}{d t} &= -\rho + \frac{K}{2} \rho (1{-}\rho^2) {+} \frac{F}{2} (1{-}\rho^2) \cos\psi, \label{eq:rho} \\ \frac{d \psi}{d t} &= -\Omega - \frac{F}{2} \frac{(1+\rho^2)}{\rho} \sin\psi, \label{eq:psi} \end{align} where \(\Omega=\sigma - \omega_0\) is the detuning parameter, and \(\omega_0\) is the average natural frequency of the oscillators. Note that the resulting equations are symmetric with respect to the replacement $\Omega \rightarrow -\Omega$ and $\psi \rightarrow - \psi$. For simplicity, we will assume \(\omega_0=0\), so that \(\Omega=\sigma\). Equivalently, one may consider a frame rotating with frequency \(\omega_0\), where the field frequency is $\sigma-\omega_0$. Eqs.~(\ref{eq:rho}) and (\ref{eq:psi}) describe the motion of the system in a two-dimensional order-parameter space $(\text{Re} (z), \text{Im} (z))=(\rho \cos \psi, \rho \sin \psi)$, where positive rotation of $\psi$ is counterclockwise. It is important to note that the state $z=0$ is a singular point in this space because $\psi$ becomes undefined when $\rho=0$. Since any trajectory in this order-parameter space requires the function $\psi(t)$ to be differentiable at every time $t$ (its first derivative defines the angular velocity), any trajectory through the singular point $z=0$ is forbidden. The complete bifurcation analysis of Eqs. $(\ref{eq:rho})$ and $(\ref{eq:psi})$ was performed by Childs and Strogatz in \cite{childs2008stability}, where one can find the explicit stability diagram, all bifurcation curves, and phase portraits of the dynamical states of the forced Kuramoto model at different model parameters $F, \Omega$ and $K$. In short, the model demonstrates the following physical behavior.
First, in the absence of the field, when the coupling $K$ between phase oscillators is larger than the critical coupling $K_c =2$, interaction results in a synchronous rotation of phase oscillators with the common frequency $\omega_0$ and order parameter $\rho=\sqrt{(K-2)/K}$. At $K=K_c$, the rotational symmetry is spontaneously broken and the phases \(\theta_i\) of a finite fraction of oscillators become aligned along an arbitrary direction $\psi$. Second, upon applying a field with strength $F$ at detuning $|\Omega|$ sufficiently smaller than $F$, the synchronized group becomes entrained by the field for a broad range of model parameters (see phase I in Fig. \ref{fig:phase_diagram}). In the original non-rotating frame, the entrained state is phase- and frequency-locked to the field and a finite fraction of phase oscillators rotates synchronously with the field frequency $\sigma$. If $\Omega > 0$, the order parameter $z$ lags behind the field, i.e., the phase $\psi$ is negative and the order parameter $z$ lies in the lower half-plane of the order-parameter space. If $\Omega < 0$, then the order parameter $z$ is ahead of the field, the phase $\psi$ is positive and $z$ lies in the upper half-plane. The latter symmetry follows from the symmetry of Eqs. $(\ref{eq:rho})$ and $(\ref{eq:psi})$ with respect to the replacement $\Omega \rightarrow -\Omega$ and $\psi \rightarrow - \psi$. Third, increasing $\Omega$ at fixed \(F\) disrupts the entrained state and leads to periodic dynamics in the rotating frame, as the system undergoes a SNIPER, saddle-node, or Hopf bifurcation \cite{childs2008stability}.
As discussed below, we found that when entrainment is disrupted, coupled oscillators can either oscillate synchronously with respect to the field direction (phase II in Fig.~\ref{fig:phase_diagram}) or drift at a frequency other than the field frequency (phase III in Fig.~\ref{fig:phase_diagram}). \section{SPOR transition \label{sec:three}} Our numerical analysis of Eqs. $(\ref{eq:rho})$ and $(\ref{eq:psi})$ revealed that the phase diagram in \cite{childs2008stability} is incomplete, since it misses a phase transition from the phase with oscillations (phase II in Fig. \ref{fig:phase_diagram}) into a phase where the phase oscillators demonstrate a wobbling rotation around the singular point $z=0$ with an angular frequency that differs from the field frequency (phase III in Fig. \ref{fig:phase_diagram}). In order to understand the origin and properties of this transition, which we have tentatively named a Singular Point Oscillation-to-Rotation (SPOR) transition, we performed a detailed numerical analysis of Eqs. $(\ref{eq:rho})$ and $(\ref{eq:psi})$. \begin{figure}[H] \includegraphics[width=\linewidth]{figure_1.pdf} \caption{\label{fig:phase_diagram} Partial phase diagram for the Kuramoto model in a homogeneous field with strength \(F\) and detuning \(\Omega\) at \(K=5\). Each phase is characterized by distinct dynamics of \(\psi(t)\). Phase \(\mathrm{I}\) corresponds to the entrained phase. Phase \(\mathrm{II}\) is the oscillating phase. Phase \(\mathrm{III}\) is the rotating phase. Full circles (blue) define a boundary (the SPOR transition) separating phases \(\mathrm{II}\) (shadowed region) and \(\mathrm{III}\). The upper boundary of phase \(\mathrm{I}\) was calculated following \cite{childs2008stability}: the dash-dotted line (black) indicates a SNIPER bifurcation, which becomes a saddle-node bifurcation (full black line) at the triangular marker, which in turn becomes a Hopf bifurcation (dotted green line) at the square marker.
A detailed phase diagram in region I is presented in \cite{childs2008stability}.} \end{figure} First, we studied the oscillations in phase II by fixing the field strength $F$ and increasing $\Omega$. The non-uniform dynamics of \(z\) in phase II are typified in Fig.~\ref{fig:phase_portrait}(b), for oscillations born from a Hopf bifurcation. In this phase, the order parameter \(z\) describes a limit cycle in the lower half-plane of the order-parameter space. The amplitude of the oscillations in $\psi$ is bounded between \(\psi_{max}\) and \(\psi_{min}\), so that \(\psi_{max} - \psi_{min}<\pi\). As the detuning \(\Omega\) increases, the amplitude \(\psi_{max} - \psi_{min}\) tends to \(\pi\) at a critical detuning \(\Omega_C\), which determines the boundary between phases II and III. Fig.~\ref{fig:phase_portrait}(b) shows that over one period of oscillation, the order parameter slowly falls behind the field ($\psi$ moves slowly from 0 to $-\pi$) and then quickly catches up ($\psi$ quickly moves from $-\pi$ to 0). The sharp increase of $\psi$ from a value just above $-\pi$ to a value just below 0 corresponds to fast motion of $z$ along the upper part of the limit cycle, due to the high angular velocity $d \psi / dt$ caused by the $1/\rho$ singularity in Eq.~$(\ref{eq:psi})$. Finally, we note that in the rotating frame the angular velocity $d \psi / dt$ averaged over the period of oscillations is zero. Thus, \(z\) is on average frequency-locked to the field. At \(\Omega > \Omega_C\), we enter phase III, and the order parameter \(z\) starts rotating clockwise in the rotating frame around the singular point $z=0$, as shown in Figs.~\ref{fig:phase_portrait}(a) and \ref{fig:phase_portrait}(c). Note that the parameter $\rho$ has the same behavior above and below $\Omega_C$ (see Figs.~\ref{fig:phase_portrait}(b) and \ref{fig:phase_portrait}(c)).
\begin{figure}[H] \includegraphics[width=\linewidth]{figure_2.pdf} \caption{(a) Orbits of the order parameter \(z\) at different $\Omega$ and parameters $F=3.5$ and $K=5$. Dynamics of $\psi$ and $\rho$ in (b) the oscillating phase at $\Omega = 3.90754900$ below the SPOR transition, and (c) the rotating phase at $\Omega = 3.90754901$ above the SPOR transition. Our estimation of the critical detuning parameter is $3.90754900 < \Omega_C < 3.90754901$. The origin of the complex plane (singular point) is marked with a star. The remaining markers indicate the unstable fixed point (spiral) and are mapped to each orbit by color. The inset in (a) shows the region of the order parameter space near the singular point, and a clear change in the position of the orbits relative to the singular point for a small change (\(10^{-8}\)) in $\Omega$. Arrowheads in (a) indicate the direction of motion along the orbits. \label{fig:phase_portrait}} \end{figure} In order to understand the origin of the SPOR transition, we analyzed the behavior of the fixed point and the limit cycle. In phases II (oscillating phase) and III (rotating phase) of the phase diagram there is only one fixed point, an unstable spiral. In the oscillating phase, the limit cycle lies in the lower half-plane, as shown in Fig.~\ref{fig:phase_portrait}(a). Increasing $\Omega$ moves the system away from the critical boundary with region I, expanding the limit cycle around the fixed point and moving the upper part of the limit cycle towards the singular point $z=0$. For $\Omega < \Omega_C$, the limit cycle remains in the lower half-plane, never crossing the singular point.
Increasing $\Omega$ from below to above $\Omega_C$ causes the fixed point to move continuously towards the singular point $z=0$, as depicted in Fig.~\ref{fig:phase_portrait}(a). Next, we numerically analyzed the shape and curvature of the limit cycles and found no peculiarities even at $\Omega$ within \(10^{-8}\) of $\Omega_C$. We used the well known equation for the curvature $\kappa$ of a curve given by a function $\rho=\rho(\psi)$ in polar coordinates $(\rho, \psi)$: \begin{equation} \kappa(\psi)=\frac{|\rho^2 +2(\rho')^2 - \rho\rho''|}{[\rho^2 + (\rho')^2]^{3/2}}, \label{eq:curv} \end{equation} where $\rho'\equiv d\rho /d\psi$. We obtained the function $\rho(\psi)$, which defines the limit cycle, and its derivatives from the numerical solutions $\rho(t)$ and $\psi(t)$. In the tested range of \(\Omega\) below $\Omega_C$, the limit cycles exhibited no peculiarity in shape, such as flattening (which corresponds to $\kappa \rightarrow 0$). The curvature is finite and nonzero at all points on the limit cycle, as shown in Fig. \ref{fig:curvature} for the limit cycle of oscillations at $\Omega$ very close to $\Omega_C$. \begin{figure}[H] \includegraphics[width=\linewidth]{figure_3.pdf} \caption{\label{fig:curvature} Curvature $\kappa$ versus $\psi$ of the limit cycle of oscillations near the critical point of the transition to rotation. Parameters: $K=5$, $F=3.5$, $\Omega=3.907549$.} \end{figure} \noindent We also extended our numerical analysis to limit cycles above the critical point $\Omega_C$. At \(\Omega > \Omega_C\), the singular point is encircled by the limit cycle, as shown in Fig.~\ref{fig:phase_portrait}(a). The shape of the limit cycle, its curvature, and the tangential velocity are similar to those below $\Omega_C$.
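This curvature computation can be sketched as follows (illustrative code, not the authors' implementation): Eq.~(\ref{eq:curv}) is evaluated with central finite differences on samples of $\rho(\psi)$, and validated on a circle $\rho(\psi)=R$, whose curvature is exactly $1/R$.

```python
import math

def curvature(rho, psi):
    """Curvature of a polar curve rho(psi) at interior sample points,
    kappa = |rho^2 + 2 rho'^2 - rho*rho''| / (rho^2 + rho'^2)^(3/2),
    with central finite differences (assumes uniformly spaced psi)."""
    h = psi[1] - psi[0]
    kappa = []
    for i in range(1, len(psi) - 1):
        d1 = (rho[i + 1] - rho[i - 1]) / (2 * h)              # rho'
        d2 = (rho[i + 1] - 2 * rho[i] + rho[i - 1]) / h ** 2  # rho''
        r = rho[i]
        kappa.append(abs(r ** 2 + 2 * d1 ** 2 - r * d2) / (r ** 2 + d1 ** 2) ** 1.5)
    return kappa

# Sanity check: a circle rho(psi) = R has constant curvature 1/R.
R, n = 2.0, 401
psi = [2 * math.pi * i / (n - 1) for i in range(n)]
print(max(curvature([R] * n, psi)))  # 0.5
```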
Although we found that the angular velocity diverges in the vicinity of the singular point $z=0$ as \(\Omega\) tends to $\Omega_C$, namely that $\max(d\psi/dt) \rightarrow + \infty$ from below and $\min(d\psi/dt) \rightarrow - \infty$ from above, the tangential velocity $(dx/dt, dy/dt)$ is finite at all points on the limit cycle, where $x=\text{Re} (z)$ and $y=\text{Im} (z)$. Finally, we analyzed the relaxation time above and below $\Omega_C$ and found that it is finite and does not demonstrate critical behavior such as critical slowing down. Likewise, we found no evidence of hysteresis depending on initial conditions. Thus, we found no evidence that the observed transition is first- or second-order, as the presence of the above-mentioned phenomena would suggest. In the original non-rotating frame, the phase of the order parameter \(z\) equals $\widetilde{\psi}=\sigma t + \psi$. Therefore, the average cycling frequency $f$ of \(z\) over one period \(T\) of oscillations or wobbling rotations is \begin{equation} f = \frac{1}{2\pi T}\int_{t}^{t+T} dt' \frac{d \widetilde{\psi}}{dt'} = \frac{\sigma}{2\pi} + \frac{1}{2 \pi T} \left[ \psi(t+T)- \psi(t) \right], \label{eq:avg_frq} \end{equation} where $t$ is an arbitrary point in time, and the period $T$ is a function of $\Omega$, $F$, and $K$. From Eq.~(\ref{eq:avg_frq}), it follows that $f= \sigma/(2\pi )$ in the oscillating phase, where $\psi(t+T)= \psi(t)$, showing the system is on average frequency-locked to the field. In the rotating phase, the average cycling frequency is $f= \sigma/(2\pi )- 1/T$ since $\psi(t+T)= \psi(t)-2\pi$ for clockwise rotation, and the frequency $f$ tends to $\omega_0$ for increasing \(\Omega\) since a high-frequency field does not impact the coupled phase oscillators.
Thus, the SPOR transition from oscillations to wobbling rotations appears as an abrupt drop in the average cycling frequency $f$ at $\Omega_C$, and the cycling frequency $f$ decreases for $\Omega > \Omega_C$, as shown in Fig.~\ref{fig:figure_4}(c). The drop equals the inverse of the oscillation period $T$ at the critical point. Our numerical results allow us to conclude that the only important difference between limit cycles below and above $\Omega_C$ is in the relative position of the singular point $z=0$. The singular point lies outside the limit cycles at \(\Omega < \Omega_C\) and is encircled by the limit cycles at \(\Omega > \Omega_C\), as shown in Fig.~\ref{fig:phase_portrait}(a). As a result, these two types of limit cycles, each associated with distinct dynamical states, acquire different topological properties. Based on the topological theory of ordered media \cite{mermin1979topological}, we can characterize limit cycles by a winding number $n$. The field $\textbf{s}(\textbf{r})=(\rho \cos \psi, \rho \sin \psi)$ is known at all points $\textbf{r}$ on a limit cycle [see, for example, vectors $\textbf{A}$ and $\textbf{B}$ in Fig. \ref{fig:figure_4}(d)]. We can measure the total angle through which the vector $\textbf{s}(\textbf{r})$ turns as $\textbf{r}$ traverses the complete limit cycle (counterclockwise increments in angle are positive and clockwise increments are negative). Since $\textbf{s}(\textbf{r})$ is continuous on the cycle, this angle must be an integral multiple of $2\pi$. \begin{figure}[H] \includegraphics[width=\linewidth]{figure_4.pdf} \caption{(a) Minimum value of the amplitude \(\rho\left(t\right)\) of the complex order parameter \(z=\rho\exp(i\psi)\) as a function of the detuning \(\Omega\). (b) Winding number $n$, Eq.~(\ref{eq:winding_number}), of the orbit described by \(z\) in a frame rotating at the field frequency, for both positive and negative \(\Omega\).
(c) Average cycling frequency of rotation \(f\) in the original non-rotating frame, for both positive and negative \(\Omega\). The dashed line indicates non-oscillating phases, and \(\Omega_C\) is the critical detuning. (d) Schematic representation of two orbits for detuning \(\Omega\) slightly greater (dashed line) and slightly smaller (solid line) than \(\Omega_C\). These orbits differ simply by the translation represented by the green dashed vector connecting unstable spiral points 1 and 2, and the translation is exaggerated for illustrative purposes. The tangential velocity at equivalent points under the translation is the same for both orbits, as shown at points A and B. The red vectors from the origin of the complex plane (marked with a star) to points A and B describe orbits with winding numbers 0 (solid vector) and \(-1\) (dashed vector). \label{fig:figure_4}} \end{figure} \noindent In addition, since we are mapping the temporal behavior of the order parameter $z(t)$ to the order-parameter space $(\rho, \psi)$, we parameterize the cycle by time $t$. The winding number $n$ of a limit cycle may then be defined as \begin{equation} n= \frac{1}{2\pi} \int_{t}^{t+T}dt' \frac{d\psi}{dt'}=\frac{1}{2\pi}[\psi(t+T)-\psi(t)], \label{eq:winding_number} \end{equation} where $T$ is the period and $t$ is an arbitrary point in time. In the case of oscillations, the winding number of the limit cycle is $n=0$, since $\psi(t+T)=\psi(t)$. For positive $\Omega > \Omega_C$ (clockwise rotation), rotations are characterized by $n=-1$, since $\psi(t+T)=\psi(t)-2\pi$. Moreover, by comparing Eqs.~(\ref{eq:winding_number}) and (\ref{eq:avg_frq}), we can immediately see that the abrupt drop in cycling frequency \(f\) measured in the original non-rotating frame is directly related to the winding number \(n\): \begin{equation} f = \frac{\sigma}{2\pi} + \frac{n}{T}.
\end{equation} In summary, our analysis revealed a topological phase transition (the SPOR transition) in the forced Kuramoto model, from oscillations to wobbly rotations of the complex order parameter \(z\). The transition takes place at $\Omega = \Omega_C$, as the orbit of \(z\) in the rotating frame approaches the singular point \(z=0\), such that \(\mathrm{min}(\rho(t))\to 0\), as shown in Fig.~\ref{fig:figure_4}(a). In the non-rotating frame, the transition is marked by an abrupt drop in the average cycling frequency \(f\), as shown in Fig.~\ref{fig:figure_4}(c). Numerical analysis of the curvature shows that the limit cycles immediately below and above $\Omega_C$ are identical in shape and related by a simple translation in the order-parameter space, as depicted schematically in Fig.~\ref{fig:figure_4}(d). However, the existence of the singular point \(z=0\) makes this translation impossible, since an adiabatic translation would require the limit cycle to pass through the singular point, which is forbidden (as explained in Sec.~\ref{sec:two}). In addition, the oscillating and rotating phases can be distinguished by a topological characteristic, the winding number \(n\) of the limit cycles, as shown in Fig.~\ref{fig:figure_4}(b). Notably, the average cycling frequency $f$ measured in the non-rotating frame is directly related to the winding number. \section{Discussion \label{sec:four} } In this paper we found that, by increasing the frequency or decreasing the strength of an external field, the periodically forced Kuramoto model undergoes an abrupt phase transition from a phase with oscillations to a phase with wobbly rotations of the order parameter. We call this a ``Singular Point Oscillation-to-Rotation'' (SPOR) phase transition. In the original non-rotating frame, the SPOR transition appears as an abrupt drop in the average group frequency of the phase oscillators.
Our analysis of the dynamical behavior of the forced Kuramoto model shows that the SPOR transition can be classified neither as a first- nor as a second-order phase transition, since it does not display the critical phenomena characteristic of either transition. First, according to the classification of phase transitions in Landau's phenomenological theory \cite{landau2013statistical}, second order (continuous) phase transitions are accompanied by symmetry breaking across the transition and a gradual increase of the order parameter in the ordered phase. In the case of the SPOR transition, symmetry breaking is absent. Second, the proximity to the critical point of a continuous transition is signalled by an increase of critical fluctuations that result in critical phenomena, such as a strong increase in susceptibility, correlation length, or relaxation time (known as critical slowing down of the dynamics), etc. Our detailed analysis of the dynamical behavior of the forced Kuramoto model near the SPOR transition revealed no anomalous behavior of the corresponding fixed point, the curvature of the limit cycle, or the relaxation rate. Third, first order (discontinuous) phase transitions are characterized by an abrupt appearance of order, hysteresis, and a region of metastable states. Moreover, critical phenomena occur close to the critical boundary of metastable states. In our case, hysteresis and metastable states are absent. Fourth, the absence of critical correlations also suggests the SPOR transition is not a hybrid phase transition, which combines the abrupt appearance of order, as in first order phase transitions, with the absence of hysteresis and the presence of critical fluctuations, as in second order phase transitions (for an example, consider the hybrid transition in the Kuramoto model with frequency-degree correlations \cite{coutinho2013kuramoto}).
Fifth, despite the absence of symmetry breaking, states above and below the critical point of the SPOR transition are topologically distinct states characterized by different winding numbers. Within the concept of topological phase transition, states of matter can be classified according to their topological properties \cite{mermin1979topological,wen1995topological,hasan2010colloquium,goldman2016topological}. In contrast to the conventional Landau paradigm of order parameters associated with spontaneous symmetry breaking, topological phases of matter are characterized by non-local topological invariants, such as Chern numbers for strongly correlated topological phases in ultracold gases \cite{goldman2016topological} and topological insulators \cite{hasan2010colloquium}, or winding numbers for topological defects \cite{mermin1979topological}. Topological phase transitions occur without symmetry breaking between states with different topological properties. Our analysis of the SPOR transition has shown that a singular point in the order-parameter space, corresponding to the state with zero order parameter, plays a crucial role in the topological transition. This singular point lies outside the limit cycle of oscillations, but is encircled by the limit cycle in the rotating phase. As a result, the limit cycles corresponding to oscillations and rotations acquire different topological properties and form two topologically distinct classes corresponding to different winding numbers. The limit cycles of oscillations have winding number zero and the limit cycles of rotations have winding number $\pm 1$ depending on the sign of the detuning parameter. Limit cycles belonging to the same class can be smoothly (adiabatically) deformed one into another, while cycles belonging to different classes cannot be deformed smoothly one into another.
Based on this analysis, we conclude that the SPOR transition is a topological phase transition between states with winding numbers 0 and $\pm 1$. Since the transition is topological in origin, it is not caused by a specific bifurcation, and was therefore missed by previous studies based on bifurcation theory \cite{sakaguchi1988cooperative,antonsen2008external,childs2008stability}. {A similar phase transition has also been reported in the Kuramoto model with interacting identical phase oscillators subjected to white noise \cite{zaks2003noise}.} {As mentioned in the Introduction, topological phases of matter have been found in quantum systems, namely in fractional quantum Hall liquids and topological insulators, see, for example, reviews \cite{wen1995topological,goldman2016topological}. There is an interesting similarity between the SPOR transition in the classical forced Kuramoto model and a transition from the flipper phase to the spinner phase in a mechanical model of topological metamaterials \cite{chen2014nonlinear} (compare our Fig.~\ref{fig:phase_portrait} and Fig.~4(c) in Ref.~\cite{chen2014nonlinear}). Topological states with winding numbers $\pm 1$ and 0 in Fig.~\ref{fig:phase_portrait} are similar to topological edge states in two-dimensional topolectrical circuits \cite{olekhno2020topological}, where bound two-photon topological states appear if the topolectrical circuit has a unit cell characterized by winding number 1, but are absent if the winding number is zero (see Fig.~3 in Ref.~\cite{olekhno2020topological}).
Note that these topological metamaterials and topolectrical circuits are classical analogs of quantum systems with topological edge states \cite{chen2014nonlinear,olekhno2020topological}.} {The SPOR transition is an example of a topological transition between two distinct topological phases with winding numbers $\pm1$ and zero in a classical system of interacting phase oscillators with a singular point in the order-parameter space. We believe this kind of topological transition may be found in other classical systems.} As an illustrative example of a SPOR transition, we may consider a motorboat making circles on the open sea. All circles are topologically equivalent independently of their position. Let us now introduce an observer in the water. The situation becomes crucially different. If the observer is placed outside these circles, then the motorboat oscillates with respect to the observer. The observer can track the motorboat just by turning their neck. If the motorboat encircles the observer, rotating around them, the observer must also rotate their body to track the motorboat. The critical circle corresponds to the situation when the motorboat passes through the observer. Assuming the person steering the motorboat wishes to avoid committing a crime, passing through the observer is forbidden. In this model, the observer plays the role of the singular point. States with the observer outside and inside the closed tracks created by the motorboat have different topological properties with respect to the observer, {i.e., winding numbers 0 and $\pm 1$, respectively}. A transition from one state to another is an abrupt topological transition. \section*{Declaration of Competing Interest} None. \section*{CRediT Author Statement} All authors contributed equally to this paper. \section*{Acknowledgments} We thank Ricardo Guimarães Dias for useful discussions.
This work is funded by national funds (OE), through Portugal's FCT Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia, I.P., within the scope of the framework contract foreseen in paragraphs 4, 5 and 6 of article 23, of Decree-Law 57/2016, of August 29, and amended by Law 57/2017, of July 19. E. A. P. W. acknowledges the financial support provided by FCT under PhD grant SFRH/BD/121331/2016.
\section{Introduction} Let $\mathcal{X} = L^\infty [0, 1]$ and consider the following Urysohn integral operator \begin{align}\label{eq:1.1} \mathcal{K} (x)(s) = \int_0^1 \kappa \left(s, t, x (t)\right) d t, \;\;\; s \in [0, 1], \; x \in \mathcal{X}, \end{align} where $\kappa (s, t, u)$ is a real valued continuous function defined on $ \Omega =[0, 1]\times[0, 1] \times \R.$ Then $\mathcal{K} $ is a compact operator from $L^\infty [0, 1]$ to $C [0, 1].$ Consider the Urysohn integral equation \begin{align}\label{eq:1.2} x(s) - \int_0^1 \kappa \left(s, t, x (t)\right) d t = f(s), \;\;\; s \in [0, 1], \end{align} where $f \in \mathcal{X}$ is given and $x$ is the unknown to be determined. We assume that $\varphi$ is an isolated solution of the above equation and consider its numerical approximations. We are interested in approximate solutions which converge to $\varphi$ uniformly. We consider some projection methods associated with a sequence of orthogonal projections converging to the identity operator pointwise. In this paper, we consider the case when the kernel $\kappa$ of the integral operator $\K$ is of the type of Green's function in its domain. We allow the partial derivatives of the kernel $\kappa$ to have jump discontinuities along the diagonal $s = t$. For $ r \geq 1,$ let $ \mathscr{X}_n$ be a space of piecewise polynomials of degree $ \leq r-1 $ with respect to a uniform partition of $ [0, 1]$ with $n$ subintervals each of length ${h = \frac {1} {n} }.$ Let $\pi_n$ be the restriction to $L^\infty [0, 1]$ of the orthogonal projection from $L^2 [0, 1]$ onto $\mathscr{X}_n.$ The Galerkin method is a classical projection method for the approximate solution of an integral equation. In this method, (\ref{eq:1.2}) is approximated by \begin{equation}\label{eq:Gal} x_n^G - \pi_n \mathcal {K} (x_n^G) = \pi_n f. \end{equation} This projection method has been studied extensively in the research literature.
See Krasnoselskii \cite{Kra}, Krasnoselskii, Vainikko et al.\ \cite{KraV} and Krasnoselskii and Zabreiko \cite{KraZ} for details. \noindent The iterated Galerkin solution is defined by $$x_n^S = \K(x_n^G) + f. $$ Note that $$ x_n^G = \pi_n x_n^S, $$ so that the iterated Galerkin solution satisfies the following equation: \begin{equation}\label{eq:It_Gal} x_n^S - \K(\pi_{n} x_n^S) = f. \end{equation} \noindent From Atkinson-Potra \cite{Atk-Pot}, we quote the following orders of convergence: \noindent If $r = 1,$ then \begin{equation}\label{eq1.5} \| x_n^G - \varphi \|_\infty = O ( h ), \;\;\; \| x_n^S - \varphi \|_\infty = O ( h^{ 2 } ), \end{equation} whereas if $r \geq 2,$ then \begin{equation}\label{eq:1.5} \norm{x_n^G - \varphi}_\infty = O\left( h^{r} \right) ~ , \quad \norm{x_n^S - \varphi}_\infty = O\left( h^{r+2}\right). \end{equation} Asymptotic error analysis and extrapolation methods are classical topics in numerical analysis; Richardson extrapolation is the most popular such method. In Ford et al.\ \cite{Ford}, a Hammerstein integral equation with a kernel of Green's function type is considered. The composite trapezoidal rule is used to approximate the integral operator, an asymptotic error expansion is obtained for the approximate solution at the node points, and Richardson extrapolation is applied to improve the orders of convergence. In Kulkarni-Rane \cite{Rpk-Aks}, the authors have defined the Nystr\"om operator based on the composite midpoint and the composite modified Simpson rules to approximate the integral operator of a Hammerstein integral equation with a Green's function type kernel. Asymptotic expansions for the approximate solution at the node points as well as at the partition points are obtained, and Richardson extrapolation is used to obtain approximate solutions with higher orders of convergence. The Hammerstein integral equation is a special case of the Urysohn integral equation.
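To make the convergence orders in \eqref{eq1.5} concrete, the following is a minimal numerical sketch (our own illustrative Python, not taken from the cited works). It uses a manufactured linear test problem $x(s) - \int_0^1 \frac{st}{2}\, x(t)\, dt = e^s - \frac{s}{2}$, whose exact solution is $x(s) = e^s$, solved by the piecewise-constant ($r = 1$) Galerkin and iterated Galerkin methods; the rank-one kernel here is smooth, so this only illustrates the orders for $r=1$, not the Green's function setting.

```python
import numpy as np

def iterated_galerkin(n):
    """Piecewise-constant (r = 1) Galerkin and iterated Galerkin solutions of
    x(s) - int_0^1 (s t / 2) x(t) dt = e^s - s/2, exact solution x(s) = e^s."""
    t = np.linspace(0.0, 1.0, n + 1)              # uniform partition, h = 1/n
    h = 1.0 / n
    S = (t[1:] ** 2 - t[:-1] ** 2) / 2.0          # S_i = int over Delta_i of s ds
    b = np.exp(t[1:]) - np.exp(t[:-1]) - 0.5 * S  # b_i = int over Delta_i of f(s) ds
    A = h * np.eye(n) - 0.5 * np.outer(S, S)      # Galerkin matrix for the rank-one kernel
    c = np.linalg.solve(A, b)                     # values of x_n^G on each Delta_i
    m = np.dot(c, S)                              # int_0^1 t x_n^G(t) dt
    xS = lambda s: 0.5 * s * m + np.exp(s) - 0.5 * s   # x_n^S = K(x_n^G) + f
    return t, c, xS

def errors(n):
    t, c, xS = iterated_galerkin(n)
    s = t[:-1] + 0.25 / n                         # sample points inside each subinterval
    return np.max(np.abs(c - np.exp(s))), np.max(np.abs(xS(t) - np.exp(t)))
```

Halving $h$ roughly halves the Galerkin error and quarters the iterated Galerkin error, matching $O(h)$ and $O(h^2)$.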
In the case when the kernel of the Urysohn integral equation is sufficiently smooth, asymptotic error analysis is carried out for various projection methods in Kulkarni-Nidhin \cite{RPK-NTJ}. In the case of a linear integral equation of the second kind with a smooth kernel, an asymptotic series expansion for the iterated Galerkin solution is proved by McLean \cite{McLean}. Asymptotic expansions at the partition points for approximate solutions of integral equations with Green's function type kernels, obtained by the Nystr\"{o}m method with the midpoint rule and the modified Simpson's rule and by the iterated collocation method, are treated in Kulkarni-Rane \cite {Rpk-Aks1}. In Rakshit-Rane \cite{GR-AR}, we considered a Fredholm integral equation with a kernel of the type of Green's function; an asymptotic error analysis is carried out for the iterated Galerkin solution at the partition points, and Richardson extrapolation is applied to obtain an approximate solution with a higher rate of convergence. In this paper we obtain an asymptotic expansion for the iterated Galerkin solution of the Urysohn integral equation with a Green's function type kernel and use Richardson extrapolation to improve the order of convergence. The paper is organized as follows. In Section 2, notation is set and some preliminary results are proved for later use. In Section 3, the asymptotic error analysis for the iterated Galerkin solution at the partition points is carried out. A numerical illustration is given in Section 4. \setcounter{equation}{0} \section{Preliminaries} In this section we describe the Urysohn integral operator with Green's function type kernel, its Fr\'echet derivatives and related preliminary results. We introduce the following notation. For an integer $\alpha \geq 0$, let $ C^{\alpha}[0, 1]$ denote the space of all real valued $\alpha$-times continuously differentiable functions on $[0, 1]$ with the following norm.
$$ \norm{x}_{\alpha, \infty} = \max_{0 \leq j \leq \alpha} \norm{x^{(j)}}_\infty,$$ where $x^{(j)}$ is the $j^{\text{th}}$ derivative of the function $x$ and $$\norm{x^{(j)}}_\infty = \sup_{0 \leq t \leq 1} \left| x^{(j)}(t)\right|.$$ Define $$ \norm{\kappa}_{\alpha, \infty} = \max_{0 \leq i+j+k \leq \alpha} \norm{D^{(i, j, k)}\kappa(s, t, u)}_\infty,$$ where $$D^{(i, j, k)}\kappa(s, t, u) = \frac{\partial^{i+j+k} \kappa}{\partial s^i \partial t^j \partial u^k}(s, t, u).$$ \subsection{Properties of the kernel (Green's function type)}\label{subsection:2.1} Let $r \geq 1$ be an integer and assume that the kernel $\kappa$ has the following properties. \begin{enumerate} \item For $i = 1, 2, 3, 4$, the functions $\kappa, \displaystyle{\frac { \partial^i \kappa} {\partial u^i} \in C ( \Omega),}$ where $C ( \Omega)$ denotes the space of all real valued continuous functions on $\Omega = [0, 1] \times [0, 1] \times \R$. \item Let $$ \Omega_1 = \{ (s, t, u): 0 \leq t \leq s \leq 1, \; u \in \R \},\;\;\; \Omega_2 = \{ (s, t, u): 0 \leq s \leq t \leq 1, \; u \in \R \}.$$ There are two functions $\kappa_j \in C^{r} ( \Omega_j ), j = 1, 2, $ such that $$ \kappa (s,t, u) = \left\{ {\begin{array}{ll} \kappa_1 (s, t, u), \;\;\; (s, t, u) \in \Omega_1, \\ \kappa_2 (s, t, u), \;\;\; (s, t, u) \in \Omega_2. \end{array}}\right. $$ \item Denote $\displaystyle{ \ell (s, t, u) = \frac {\partial \kappa } { \partial u}( s, t, u)} $ and $\displaystyle{ q (s, t, u) = \frac {\partial^2 \kappa } { \partial u^2}( s, t, u)}, $ for all $ (s, t, u) \in \Omega.$ The partial derivatives of $\ell (s, t, u)$ and $q (s, t, u)$ with respect to $s$ and $t$ have jump discontinuities along the diagonal $s = t$. \item There are functions $\ell_j, q_j \in C^{r} ( \Omega_j ), j = 1, 2, $ with $$ \ell (s,t, u) = \left\{ {\begin{array}{ll} \ell_1 (s, t, u), \;\;\; (s, t, u) \in \Omega_1, \\ \ell_2 (s, t, u), \;\;\; (s, t, u) \in \Omega_2, \end{array}}\right.
$$ $$ q (s,t, u) = \left\{ {\begin{array}{ll} q_1 (s, t, u), \;\;\; (s, t, u) \in \Omega_1, \\ q_2 (s, t, u), \;\;\; (s, t, u) \in \Omega_2. \end{array}}\right. $$ \end {enumerate} Following Atkinson-Potra \cite{Atk-Pot}, if the kernel $\kappa$ satisfies the above conditions, then we say that $\kappa$ is of class $\mathscr{G}_4(r, 0)$. Under the above assumptions, the operator $\mathcal {K}$ is four times Fr\'echet differentiable and its Fr\'echet derivatives at $x \in \mathcal{X}$ are given by $$ \mathcal {K}'(x) v_1 (s) = \int_0^1 \frac {\partial \kappa } {\partial u} \left(s,t, x(t)\right) \: v_1(t) \: dt $$ and \begin{equation}\label{eq:2.1} \mathcal {K}^{(i)}(x) (v_1,\ldots, v_i) (s) = \int_0^1 \frac {\partial^i \kappa } {\partial u^i} \left(s,t,x(t)\right) \: v_1(t) \cdots v_i(t) \: dt, \qquad i = 2, 3, 4, \end{equation} where \begin{align*} \frac {\partial^i \kappa } {\partial u^i} \left(s,t,x(t)\right) = \frac {\partial^i \kappa } {\partial u^i} \left(s,t,u\right)|_{u = x(t)}, \quad i = 1, 2, 3, 4 \end{align*} and $v_1, v_2, v_3, v_4 \in \mathcal{X}$. Note that $\mathcal {K}' (x) : \mathcal{X} \rightarrow \mathcal{X}$ is linear and $ \mathcal {K}^{(i)}(x) : \mathcal{X}^i \rightarrow \mathcal{X} $ are multi-linear operators, where $\mathcal{X}^i$ is the cartesian product of $i$ copies of $\mathcal{X}$. See Rall \cite{Rall}. We define \begin{equation}\nonumber \norm{\mathcal {K}^{(i)}(x) } \:=\: \sup_{ \stackrel {\norm{v_j}_\infty \leq 1} {j = 1, \ldots, i}} \norm{\mathcal {K}^{(i)}(x) (v_1, \ldots, v_i)}_\infty, \qquad i = 1, 2, 3, 4. \end{equation} It follows that \begin{eqnarray}\nonumber \norm{\mathcal {K}^{(i)}(x) } &\leq& \sup_{0 \leq s, t \leq 1} \left| \frac {\partial^i \kappa } {\partial u^i} \left(s, t, x(t)\right) \right|, \qquad i = 1, 2, 3, 4. \end{eqnarray} \noindent We rewrite the equation \eqref{eq:1.2} as \begin{align*} x - \mathcal{K} (x) = f, \quad x \in \mathcal{X}. 
\end{align*} Let \begin{equation}\label{new_op} \mathcal{T}(x) = \mathcal{K}(x) + f, \quad x \in \mathcal{X}. \end{equation} Assume that $\varphi$ is a fixed point of $\mathcal{T}$. Since $\K$ is compact, $\mathcal {K}' (\varphi)$ is a compact linear operator. See Krasnoselskii \cite{Kra}. Assume that $1$ is not an eigenvalue of $\mathcal {K}' (\varphi).$ Then $\varphi$ is an isolated solution of \eqref{eq:1.2}. If $ f \in C^{\alpha} [0, 1]$, then by Corollary 3.2 of Atkinson-Potra \cite{Atk-Pot} it follows that $\varphi \in C^{\alpha} [0, 1].$ \subsection{Approximating Space and Projection operator} Let $n \in \mathbb{N}$ and consider the following uniform partition of $[0, 1]:$ \begin{equation}\label{eq:2.2} \Delta: 0 < \frac{1} {n} < \cdots < \frac{n-1} {n} < 1. \end{equation} Define \begin{equation}\label{partition_points} t_j = \frac {j} {n}, \;\;\; j = 0, \ldots, n. \end{equation} Let \begin{equation}\nonumber \Delta_j = [t_{j-1}, t_j] \;\;\; \mbox {and} \;\;\; h = t_{j} - t_{j-1} = \frac {1} {n}, \;\;\; j = 1, \ldots, n. \end{equation} Consider the finite dimensional approximating space \begin{equation}\nonumber \mathscr{X}_{n} = \left\{ g \in L^{\infty}[0, 1] : g \text{ is a polynomial of degree } \leq r-1 \text{ on }\Delta_j , ~j=1, 2, \dots, n \right\}. \end{equation} As no continuity conditions are imposed at the partition points, the dimension of $\mathscr{X}_n$ is $ n r$ and $ \displaystyle {\mathscr{X}_{n} \subset L^{\infty}[0,1]. }$ Let $\pi_n$ be the restriction to $L^\infty [0, 1]$ of the orthogonal projection from $L^2 [0, 1]$ onto $\mathscr{X}_n$, which converges to the identity operator pointwise. Then \begin{equation}\label{eq:2.3} \sup_{n \geq 1} \norm{\pi_n}_{L^{\infty}[0, 1] \rightarrow L^{\infty}[0, 1]} < \infty.
\end{equation} If $x \in C^{\alpha}[0, 1]$, it is well known that \begin{eqnarray}\label{eq2.4} \norm{(I - \pi_{n})x}_\infty & \leq & C_1 \|x^{(\beta)} \|_\infty h^{\beta}, \end{eqnarray} where $\beta = \min\left\{ \alpha, r \right\}$ and $C_1$ is a constant independent of $h$. See Atkinson \cite{Atk}, Chatelin-Lebbar \cite{CL}. Denote $$ \pi_{n,j} x = \pi_{n} x |_{\Delta_j}, \quad j = 1, 2, \ldots, n.$$ For $x \in C^{\alpha} (\Delta_j),$ we have \begin{eqnarray}\label{eq:2.4} \|(I - \pi_{n,j})x \|_{\Delta_j, \infty} & \leq & C_2 \|x^{(\beta)} \|_{\Delta_j, \infty} h^{\beta}, \end{eqnarray} where $\beta = \min\left\{ \alpha, r \right\}$ and $C_2$ is a constant independent of $h$. See Atkinson-Potra \cite[Corollary 4.3]{Atk-Pot}. \subsection{Asymptotic Expansions and the higher order terms} Let $\varphi \in C^{2r+2}[0, 1]$. For $\delta > 0,$ let $$\mathcal{B}(\varphi, \delta) = \left\{ x \in \mathcal{X} : \norm{x - \varphi}_\infty \leq \delta\right\}$$ denote the closed $\delta$-neighbourhood of $\varphi$. Without loss of generality, we assume that the Galerkin solution $x_n^G$ and the iterated Galerkin solution $x_n^S$ belong to the above neighbourhood. \noindent Denote $$\ell_{*}(s, t) = \frac {\partial \kappa } {\partial u} \left(s,t, \varphi(t)\right), \quad s, t \in [0, 1],$$ $$q_{*}(s, t) = \frac {\partial^2 \kappa } {\partial u^2} \left(s,t, \varphi(t)\right), \quad s, t \in [0, 1].$$ It follows that \begin{equation}\label{eq:2.5} \mathcal {K}'(\varphi) v (s) = \int_0^1 \ell_{*}(s, t) \: v(t) \: dt, \qquad v \in \mathcal{X}, \: s \in [0, 1], \end{equation} $$ \K''(\varphi)(v_1, v_2)(s) = \int_{0}^{1} q_{*}(s,t) ~ v_1(t)\: v_2(t) \: dt, \qquad v_1, v_2 \in \mathcal{X}, \: s \in [0, 1],$$ where the kernels $\ell_{*}, \; q_{*} \in C([0, 1] \times [0, 1])$ are of the type of Green's function as described in Section \ref{subsection:2.1}.
Then \begin{equation*} \norm{\K'(\varphi)} \leq \sup_{0 \leq s \leq 1} \int_{0}^{1} \left| \ell_{*}(s,t) \right| \: dt, \end{equation*} \begin{equation}\label{eq:2.9} \norm{\K''(\varphi)} \leq \sup_{0 \leq s \leq 1} \int_{0}^{1} \left| q_{*}(s,t) \right| \: dt. \end{equation} See Atkinson \cite{Atk}. By assumption, $ I - \K'(\varphi)$ is invertible. Let $$ \mathcal{M} = \left( I - \K'(\varphi) \right)^{-1} \K'(\varphi),$$ $$ \mathcal{M}_2 = \left( I - \K'(\varphi) \right)^{-1} \K''(\varphi),$$ $$ \mathcal{M}_3 = \left( I - \K'(\varphi) \right)^{-1} \K^{(3)}(\varphi).$$ Then $\mathcal{M}$, $\mathcal{M}_2 $ and $\mathcal{M}_3$ are compact linear, bilinear and trilinear integral operators, respectively. See Riesz-Nagy \cite{Riesz-Nagy}. For $v \in \mathcal{X}$, write \begin{equation*} \mathcal{M} v (s) = \int_{0}^{1} m(s, t) \: v(t) \: dt, \quad s \in [0, 1]. \end{equation*} Note that the kernels of $\mathcal{M}$, $\mathcal{M}_2 $ and $\mathcal{M}_3$ inherit the same smoothness properties as the kernels of $\K'(\varphi)$, $\K''(\varphi)$ and $\K^{(3)}(\varphi)$, respectively. See Atkinson-Potra \cite[Lemma 5.1]{Atk-Pot}. Hence, the kernels of the above three operators are of the type of Green's function as described in Section \ref{subsection:2.1}. We quote the following result from Rakshit-Rane \cite{GR-AR}.
\begin{equation}\label{asy_exp1} \mathcal{M} \varphi(t_i) = \mathcal{M} \pi_{n} \varphi(t_i) + \left(\mathcal{A}_{2r} \varphi \right)(t_i) h^{2r} + O \left( h^{2r+2} \right), \quad i = 0, 1, \ldots, n, \end{equation} where \begin{multline*} (\mathcal{A}_{2r} \varphi)(t_i)=\bar{b}_{2r,2r} \int_0^1 m(t_i,t) ~ \varphi^{(2r)}(t) ~ dt \\ ~~~~~+\sum_{p=1}^{2r-1}\bar{b}_{2r,p} \Bigg\{ \left[ \left( \frac{\partial}{\partial t}\right) ^{2r-p - 1}\left( m(t_i,t) \varphi^{(p)}(t)\right) \right]_{t=0}^{t=1} \\ - \left[ \left( \frac{\partial}{\partial t}\right) ^{2r-p - 1}\left( m(t_i,t) \varphi^{(p)}(t)\right) \right]_{t=t_i-}^{t=t_i+} \Bigg\} \end{multline*} with \begin{equation}\nonumber \bar{b}_{2r,p}= \int_{0}^{1} \int_{0}^{1} \Lambda_{r}(\sigma,\tau)\frac{(\sigma-\tau)^{p}}{p!}\frac{B_{2r-p}(\tau)}{(2r-p)!} \: d\sigma \: d\tau, \end{equation} $ \displaystyle \Lambda_{r}(\sigma,\tau)= \sum_{q=0}^{r-1}e_{q}(\sigma) e_{q}(\tau)$, $\left\{ e_0, e_1, e_2, \ldots \right\}$ is the sequence of orthonormal polynomials in $L^2[0, 1]$ and $B_{k}$ is the Bernoulli polynomial of degree $k$. As in Kulkarni-Nidhin \cite[Lemma 2.4]{RPK-NTJ}, it can be shown that for all $s \in [0, 1]$, \begin{equation}\label{asy_exp2} \mathcal{M}_2 (\pi_n \varphi-\varphi)^2 (s) = \mathcal{V}_1 (\varphi) (s) h^{2r} +O(h^{2r+2}) \end{equation} and \begin{equation}\label{asy_exp3} \mathcal{M}_3(\pi_n \varphi-\varphi)^3(s)= \mathcal{V}_2 (\varphi)(s) h^{3r} +O(h^{3r+1}), \end{equation} where $$ \mathcal{V}_1(\varphi) = \left( \int_{0}^{1} [\chi_r(t)]^2 dt \right) \mathcal{M}_2\left( \varphi^{(r)} \right)^2 ,$$ $$ \mathcal{V}_2(\varphi) = \left( \int_{0}^{1} [\chi_r(t)]^3 dt \right) \mathcal{M}_3\left( \varphi^{(r)} \right)^3 \text{ and }~ \mathcal{V}_2(\varphi) = 0 \text{ for } ~ r= 1$$ with $$ \chi_r(t) = \int_{0}^{1} \Lambda_{r}(\sigma, t) \frac{(\sigma - t)^r}{r!} d\sigma,$$ are independent of $h$.
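The scalar factors $\int_0^1 [\chi_r(t)]^2\,dt$ and $\int_0^1 [\chi_r(t)]^3\,dt$ appearing in $\mathcal{V}_1$ and $\mathcal{V}_2$ are easy to evaluate numerically. The following is a small illustrative Python sketch (our own; it assumes that the orthonormal polynomials $e_q$ of $L^2[0,1]$ are the normalized shifted Legendre polynomials, which is the standard choice):

```python
import math
import numpy as np

# Gauss-Legendre quadrature rule mapped from [-1, 1] to [0, 1].
x, w = np.polynomial.legendre.leggauss(20)
nodes, weights = (x + 1.0) / 2.0, w / 2.0

def e(q, x):
    """Orthonormal polynomials of L^2[0,1]: e_q(x) = sqrt(2q+1) P_q(2x-1)."""
    return np.sqrt(2 * q + 1) * np.polynomial.legendre.Legendre.basis(q)(2 * x - 1)

def chi(r, t):
    """chi_r(t) = int_0^1 Lambda_r(sigma, t) (sigma - t)^r / r! dsigma,
    with Lambda_r(sigma, t) = sum_{q < r} e_q(sigma) e_q(t)."""
    lam = sum(e(q, nodes) * e(q, t) for q in range(r))
    return np.dot(weights, lam * (nodes - t) ** r) / math.factorial(r)

def moment(r, power):
    """int_0^1 chi_r(t)^power dt: the factor in V_1 (power 2) and V_2 (power 3)."""
    vals = np.array([chi(r, t) for t in nodes])
    return np.dot(weights, vals ** power)
```

For $r = 1$ one has $\chi_1(t) = 1/2 - t$, so $\int_0^1 \chi_1^2 = 1/12$ and $\int_0^1 \chi_1^3 = 0$, consistent with the statement that $\mathcal{V}_2(\varphi) = 0$ for $r = 1$.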
In the proof of Lemma 2.4 in Kulkarni-Nidhin \cite{RPK-NTJ}, the authors used the Euler-Maclaurin expansion for smooth kernels. Since the kernels of $\mathcal{M}_2$ and $ \mathcal{M}_3$ are of the type of Green's function, we use the extended Euler-Maclaurin summation formula from Kulkarni-Rane \cite{Rpk-Aks1}. It follows that \begin{equation}\label{eq:2.13} \norm{\mathcal{M}_3(\pi_n \varphi-\varphi)^3}_\infty = O\left( h^4 \right ) \quad \text{ for }~ r=1. \end{equation} Let $$ C_3 = \max \left \{ \sup_{\stackrel { 0 \leq t \leq s \leq 1 }{|u| \leq \|\varphi \|_\infty + \delta }} \left | D^{(1,0, 0)} q_{1} (s, t, u) \right |, \sup_{\stackrel { 0 \leq s \leq t \leq 1 }{|u| \leq \|\varphi \|_\infty + \delta}} \left | D^{(1, 0, 0)} q_{2} (s, t, u) \right | \right \},$$ $$ C_4 = \max \left \{ \sup_ { 0 \leq t \leq s \leq 1 } \left | D^{(1,0)} \ell_{*,1} (s, t) \right |, \sup_{ 0 \leq s \leq t \leq 1 } \left | D^{(1, 0)} \ell_{*,2} (s, t) \right | \right \}.$$ We first prove the following preliminary result which is needed later on. \begin{lemma}\label{lem:2.1} Let $x \in \mathcal{B}(\varphi, \delta)$. Then, for $v_1, v_2 \in \mathcal{X}$, $$\norm{\left( \mathcal{K}''(x)(v_1, v_2) \right)^{'}}_\infty \leq C_3 \norm{v_1}_\infty \norm{v_2}_\infty. $$ \end{lemma} \begin{proof} Let $s \in [0, 1]$. Then \begin{align*} \mathcal{K}''(x)(v_1, v_2)(s) &= \int_{0}^{1} q(s, t, x(t)) \:v_1(t) \: v_2(t) \: dt \\ &= \int_{0}^{s} q_1(s, t, x(t)) \:v_1(t) \: v_2(t) \: dt + \int_{s}^{1} q_2(s, t, x(t)) \:v_1(t) \: v_2(t) \: dt. \end{align*} It follows that \begin{align*} \left( \mathcal{K}''(x)(v_1, v_2) \right)^{'}(s) & = \int_{0}^{s} \frac{\partial q_1}{\partial s}\left(s, t, x(t)\right) v_1(t) \: v_2(t) \: dt + q_1(s, s, x(s)) \:v_1(s) \: v_2(s) \\ & ~~ + \int_{s}^{1} \frac{\partial q_2}{\partial s}\left(s, t, x(t)\right) v_1(t) \: v_2(t) \: dt - q_2(s, s, x(s)) \:v_1(s) \: v_2(s).
\end{align*} Since $q$ is continuous on $\Omega$, \begin{align*} \left( \mathcal{K}''(x)(v_1, v_2) \right)^{'}(s) = \int_{0}^{s} \frac{\partial q_1}{\partial s}\left(s, t, x(t)\right) v_1(t) v_2(t) \: dt + \int_{s}^{1} \frac{\partial q_2}{\partial s}\left(s, t, x(t)\right) v_1(t) v_2(t) \: dt. \end{align*} Hence, $$\norm{\left(\mathcal{K}''(x)(v_1, v_2) \right)^{'}}_\infty \leq C_3 \norm{v_1}_\infty \norm{v_2}_\infty.$$ This completes the proof. \end{proof} \noindent Let $x \in \mathcal{B}(\varphi, \delta)$. Then by the above lemma, we obtain \begin{equation}\label{eq:2.10} \norm{(I - \pi_{n}) \left( \mathcal{K}''(x)(v_1, v_2) \right)}_\infty \leq C_1 C_3 \norm{v_1}_\infty \norm{v_2}_\infty h. \end{equation} Similarly, for any $v \in \mathcal{X}$, it can be shown that the function $\mathcal{K}'(\varphi)v$ is differentiable on $[0, 1]$ and \begin{equation}\label{eq:2.11} \norm{\left(\mathcal{K}'(\varphi)v \right)^{'}}_\infty \leq C_4 \norm{v}_\infty. \end{equation} It follows that \begin{equation}\label{eq:2.12} \norm{(I - \pi_{n})\left( \mathcal{K}'(\varphi)v \right)}_\infty \leq C_1 C_4 \norm{v}_\infty h. \end{equation} The following crucial estimate \begin{align}\label{eq12} \norm{\mathcal{K}'(\varphi)(I - \pi_{n})\varphi}_\infty = \left\{ {\begin{array}{ll} O\left( h^{2}\right), ~~~ ~~ ~ r = 1, \\ O\left( h^{r+2}\right), ~~~\: r \geq 2. \end{array}}\right. \end{align} follows from \textit{Lemma 9} of Chatelin-Lebbar \cite{CL}. From \textit{Theorem 3.1} of Kulkarni \cite{kul}, we have \begin{align}\label{eq:2.14} \norm{(I - \pi_{n})\mathcal{K}'(\varphi)(I - \pi_{n})\varphi}_\infty = O\left( h^{r+2}\right), \quad r \geq 1. \end{align} In order to prove our main result, we need to establish the following lemmas and propositions. Note that \begin{equation}\label{rel:1} x_n^G - \varphi = \pi_n(x_n^S - \varphi) - (I - \pi_{n})\varphi. 
\end{equation} We use the above relation between the Galerkin and iterated Galerkin solutions several times in the following lemmas and propositions. \begin{lemma}\label{lem:1} Let $x_n^G$ be the Galerkin solution defined by the equation \eqref{eq:Gal}. Then for $r \geq 1$, \begin{align}\label{eq5} \left( I - \K'(\varphi) \right)^{-1}\mathcal{K}''(\varphi)(x_n^G - \varphi)^2 (s) = (\mathcal{V}_1 (\varphi))(s) \: h^{2r} + O\left( h^{2r+2} \right), \quad s \in [0, 1], \end{align} where $\mathcal{V}_1$ is defined by \eqref{asy_exp2}. \end{lemma} \begin{proof} Using \eqref{rel:1}, we write \begin{align*} \K''(\varphi)(x_n^G - \varphi)^2 & = ~ \K''(\varphi) \left[ \pi_n(x_n^S - \varphi) - (I - \pi_{n})\varphi \right]^2 \notag \\ & = ~ \K''(\varphi) \left( \pi_n(x_n^S - \varphi) \right)^2 \notag \\ & ~~~ - 2 ~ \K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right) \notag \\ & ~~~ + ~ \K''(\varphi) \left( (I - \pi_{n})\varphi \right)^2. \end{align*} It follows that \begin{align}\label{eq6} \left( I - \K'(\varphi) \right)^{-1} \K''(\varphi)(x_n^G - \varphi)^2 & = ~ \mathcal{M}_2 \left( \pi_n(x_n^S - \varphi) \right)^2 \notag \\ & ~~~ - 2 ~ \left( I - \K'(\varphi) \right)^{-1}\mathcal{K}''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right) \notag \\ & ~~~ + ~ \mathcal{M}_2 \left( (I - \pi_{n})\varphi \right)^2. \end{align} Since $\displaystyle{\mathcal{M}_2 = \left( I - \K'(\varphi) \right)^{-1} \K''(\varphi)}$ is bounded, from \eqref{eq:1.5}, \eqref{eq:2.3}, \eqref{eq2.4} and \eqref{eq:2.9}, it is easy to see that \begin{equation}\label{eq7} \norm{\mathcal{M}_2 \left( \pi_n(x_n^S - \varphi) \right)^2}_\infty = \left\{ {\begin{array}{ll} O\left( h^{4}\right), ~~~~ r = 1, \\ O\left( h^{2r+4} \right), ~~~\: r \geq 2. \end{array}}\right.
\end{equation} and \begin{equation*} \norm{\K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)}_\infty = \left\{ {\begin{array}{ll} O\left( h^{3}\right), ~~~~ r = 1, \\ O\left( h^{2r+2} \right), ~~~\: r \geq 2. \end{array}}\right. \end{equation*} When $r=1$, that is, when $\mathscr{X}_n$ is the space of piecewise constant functions, the order of the term $\norm{\K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)}_\infty$ can be improved to $h^4$ in the following way. Note that \begin{align*} \K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)(s) & = \int_{0}^{1} q_{*}(s,t) \: (\pi_n(x_n^S - \varphi))(t) \: (I - \pi_{n})\varphi(t) \: dt \\ & = \sum_{j=1}^{n} \int_{t_{j-1}}^{t_j} q_{*}(s,t) (\pi_{n, j}(x_n^S - \varphi))(t) \: (I - \pi_{n,j})\varphi(t) \: dt. \end{align*} Since the range of $\pi_{n}$ is $\mathscr{X}_n$, $\pi_{n, j}(x_n^S - \varphi)$ is a constant on $[t_{j-1}, t_j]$. It follows that \begin{multline*} \K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)(s) = \\ \sum_{j=1}^{n} (\pi_{n, j}(x_n^S - \varphi))\left( \frac{t_{j-1}+t_j}{2} \right) \int_{t_{j-1}}^{t_j} q_{*}(s,t) \: (I - \pi_{n,j})\varphi(t) \: dt. \end{multline*} As in the proof of Lemma 9 of Chatelin-Lebbar \cite{CL}, it can be shown that $$\left| \int_{t_{j-1}}^{t_j} q_{*}(s,t) \: (I - \pi_{n,j})\varphi(t) \: dt \right| = O\left( h^{3} \right). $$ Since $\left\{ \pi_{n, j} \right\}$ is uniformly bounded and $\norm{x_n^S - \varphi}_\infty = O\left( h^{2} \right)$, from the above estimate it follows that \begin{equation*} \norm{\K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)}_\infty = O\left( h^{4} \right), \quad r =1. \end{equation*} Hence, \begin{equation}\label{eq:2.23} \norm{\K''(\varphi) \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)}_\infty = \left\{ {\begin{array}{ll} O\left( h^{4}\right), ~~~~~~~ r = 1, \\ O\left( h^{2r+2} \right), ~~~\: r \geq 2.
\end{array}}\right. \end{equation} Hence \eqref{eq5} follows from \eqref{asy_exp2}, \eqref{eq6}, \eqref{eq7} and \eqref{eq:2.23}, and the proof of the lemma is complete. \end{proof} \begin{lemma}\label{lem:2} Let $s \in [0, 1]$ and $r \geq 1$. Then \begin{equation}\label{eq9} \left( I - \K'(\varphi) \right)^{-1} \mathcal{K}^{(3)}(\varphi)(x_n^G - \varphi)^3 (s) = \left\{ {\begin{array}{ll} O\left( h^{4}\right) , \qquad \qquad \qquad \qquad ~ r = 1, \\ (\mathcal{V}_2(\varphi))(s) \: h^{3r} + O\left( h^{3r+1} \right) , ~~\: r \geq 2, \end{array}}\right. \end{equation} where $\mathcal{V}_2$ is defined by \eqref{asy_exp3}. \end{lemma} \begin{proof} We write \begin{align}\label{eq10} \left( I - \K'(\varphi) \right)^{-1} \K^{(3)}(\varphi)(x_n^G - \varphi)^3 & = ~ \left( I - \K'(\varphi) \right)^{-1}\K^{(3)}(\varphi) \left[ \pi_n(x_n^S - \varphi) - (I - \pi_{n})\varphi \right]^3 \notag \\ & = ~ \mathcal{M}_3 \left( \pi_n(x_n^S - \varphi) \right)^3 \notag \\ & ~~~ - 3 ~ \mathcal{M}_3 \left( \pi_n(x_n^S - \varphi), \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right) \notag \\ & ~~~ + 3 ~ \mathcal{M}_3 \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi, (I - \pi_{n})\varphi \right)\notag \\ & ~~~ - ~ \mathcal{M}_3 \left( (I - \pi_{n})\varphi \right)^3. \end{align} Since $\displaystyle{\mathcal{M}_3 = \left( I - \K'(\varphi) \right)^{-1} \K^{(3)}(\varphi)}$ is bounded, from \eqref{eq:1.5}, \eqref{eq:2.3} and \eqref{eq2.4} we have \begin{equation}\label{eq:2.26} \norm{\mathcal{M}_3 \left( \pi_n(x_n^S - \varphi) \right)^3}_\infty = \left\{ {\begin{array}{ll} O\left( h^{6}\right), \qquad ~ r = 1, \\ O\left( h^{3r+6} \right), \quad r \geq 2. \end{array}}\right. \end{equation} and \begin{equation}\label{eq:2.27} \norm{\mathcal{M}_3 \left( \pi_n(x_n^S - \varphi), \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi \right)}_\infty = \left\{ {\begin{array}{ll} O\left( h^{5}\right), \qquad ~ r = 1, \\ O\left( h^{3r+4} \right), \quad r \geq 2. \end{array}}\right.
\end{equation} \begin{equation}\label{eq:2.28} \norm{\mathcal{M}_3 \left( \pi_n(x_n^S - \varphi), (I - \pi_{n})\varphi, (I - \pi_{n})\varphi \right)}_\infty = \left\{ {\begin{array}{ll} O\left( h^{4}\right), \qquad ~ r = 1, \\ O\left( h^{3r+2} \right), \quad r \geq 2. \end{array}}\right. \end{equation} On the other hand, from \eqref{asy_exp3} we have \begin{equation*} \mathcal{M}_3 \left( (I - \pi_{n})\varphi \right)^3 (s) = \left\{ {\begin{array}{ll} O\left( h^{4}\right) , \qquad \qquad \qquad \qquad ~ r = 1, \\ (\mathcal{V}_2(\varphi))(s) \: h^{3r} + O\left( h^{3r+1} \right) , ~~\: r \geq 2. \end{array}}\right. \end{equation*} Hence, \eqref{eq9} follows from \eqref{eq10}, \eqref{eq:2.26}, \eqref{eq:2.27}, \eqref{eq:2.28} and the above equation. \end{proof} \section{The Main Result} Recall that the iterated Galerkin solution is defined by \begin{align*} x_n^S - \K(\pi_{n} x_n^S) = f \end{align*} and the exact solution satisfies \begin{equation*} \varphi - \mathcal{K}(\varphi) = f. \end{equation*} In this section, we prove our main result about the asymptotic series expansion for the iterated Galerkin solution $x_n^S$ at the partition points $t_i, \: i = 0, 1, \dots, n$. That is, we will show the following: \begin{align}\label{Sloan_asym} \varphi(t_i) - x_n^S (t_i) = \mathcal{A}_{2r}(t_i) h^{2r} + O\left( h^{2r+2} \right), \end{align} where $\mathcal{A}_{2r}$ is a function independent of $n$. Then, we can apply Richardson extrapolation to obtain an approximation of $\varphi$ of higher order at the partition points. Following Ford et al.\ \cite[Section 5]{Ford}, a continuous function can be reconstructed from the extrapolated discrete values at the partition points, and it approximates the exact solution $\varphi$ to higher order in the uniform norm. We do not pursue this further here. Our main aim is to prove \eqref{Sloan_asym}.
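The extrapolation step that \eqref{Sloan_asym} justifies can be illustrated on synthetic data. In the Python sketch below, the constants $A$, $C$ and the limit value are hypothetical and merely mimic an error of the form $\mathcal{A}_{2r}(t_i)\,h^{2r} + O(h^{2r+2})$ at a single partition point; the point is the jump of the convergence order from $2r$ to $2r+2$:

```python
# Hypothetical model of the expansion: the "solver" below just returns
# phi + A*h^(2r) + C*h^(2r+2); A, C, phi are illustrative constants only.
r, A, C, phi = 1, 0.7, -0.3, 1.25

def x_S(n):
    # stand-in for the iterated Galerkin value at a fixed partition point
    h = 1.0 / n
    return phi + A * h**(2 * r) + C * h**(2 * r + 2)

def richardson(n):
    # one extrapolation step with weight 2^(2r): cancels the h^(2r) term
    w = 2**(2 * r)
    return (w * x_S(2 * n) - x_S(n)) / (w - 1)

for n in (20, 40):
    print(n, abs(x_S(n) - phi), abs(richardson(n) - phi))  # O(h^2) vs O(h^4)
```

Measuring how the two error columns decay as $n$ doubles reproduces the orders $2r$ and $2r+2$; this mirrors the order computation used in the numerical illustration of Section 4.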
Recall that $$ \mathcal{M} = \left( I - \K'(\varphi) \right)^{-1}\K'(\varphi)$$ with \begin{align}\label{emm} (\mathcal{M}x)(s) = \int_0^1 m(s, t) ~ x (t) ~ d t, \;\;\; s \in [0, 1], \; x \in \mathcal{X}, \end{align} where the kernel $m$ is of Green's function type. We quote the following expression for the error in the iterated Galerkin solution from Atkinson et al.\ \cite[equation (2.28)]{AGS}: \begin{align}\label{eq:2.6} x_n^S - \varphi & ~=~ \left( I - \K'(\varphi) \right)^{-1} \left\{ \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] \right\}\notag \\ & ~~~ - \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] \notag\\ & ~~~ - \mathcal{M}(I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi) \notag\\ & ~~~ - \mathcal{M}(I - \pi_{n})\varphi. \end{align} In the following propositions, we prove that the second and third terms on the right-hand side of the above equation are of order $2r+2$ or higher, and that the first and last terms admit asymptotic expansions at the partition points. Let \begin{eqnarray}\nonumber C_5 & = & \max_{0 \leq i \leq 4} \left( \sup_{ \stackrel {s, t \in [0, 1]} {|u|\leq \|\varphi\|_\infty + \delta}} \left| \frac{\partial^i \kappa}{\partial u^i} (s, t, u) \right | \right). \end{eqnarray} Let us investigate the first term on the right-hand side of equation \eqref{eq:2.6} for an asymptotic expansion. \begin{proposition}\label{prop:1} Let $x_n^G$ be the Galerkin solution defined by the equation \eqref{eq:Gal} and $s \in [0, 1]$. Then for $r \geq 1,$ \begin{equation*} \left( I - \K'(\varphi) \right)^{-1} \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] (s) = \mathcal{V}_1(\varphi)(s) \: h^{2r} + O\left( h^{2r+2} \right), \end{equation*} where $\mathcal{V}_1$ is defined by \eqref{asy_exp2}.
\end{proposition} \begin{proof} Using the generalized Taylor's series expansion (see Linz \cite{Linz}) in the neighbourhood $\mathcal{B}(\varphi, \delta)$, we obtain \begin{align*} \K(x_n^G) - \K(\varphi) - & \K'(\varphi)(x_n^G - \varphi) \\ & = \frac{1}{2} \K''(\varphi)(x_n^G - \varphi)^2 + \frac{1}{6} \K^{(3)}(\varphi)(x_n^G - \varphi)^3 + \mathcal{R}_4 \left( x_n^G, \varphi \right), \end{align*} where $$\mathcal{R}_4 \left( x_n^G, \varphi \right) = \frac {1} {6} \int_0^1 \mathcal{K}^{(4)} \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^4 (1 - \theta)^3 \; d \theta.$$ By \eqref{eq:2.1}, we have \begin{align*} \mathcal{K}^{(4)} \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^4 (s) = \int_{0}^{1} \frac{\partial^4 \kappa}{\partial u^4} \left(s, t, \varphi(t) + \theta (x_n^G - \varphi)(t) \right)(x_n^G - \varphi)^4 (t) \: dt. \end{align*} Since $\norm{x_n^G - \varphi}_\infty \rightarrow 0$ as $n \rightarrow \infty$ and $\theta \in (0, 1)$, $$\varphi + \theta (x_n^G - \varphi) \in \mathcal{B}(\varphi, \delta).$$ It follows that \begin{eqnarray}\nonumber \norm{\mathcal{K}^{(4)} \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^4}_\infty \leq C_5 \norm{x_n^G - \varphi}_\infty^4. \end{eqnarray} Using \eqref{eq:1.5} and the above estimate, we obtain \begin{equation}\label{eq:2.7} \mathcal{R}_4 \left( x_n^G, \varphi \right) = O\left( h^{4r} \right). \end{equation} Note that \begin{align*} \left( I - \K'(\varphi) \right)^{-1} & \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] \\ = & \frac{1}{2} \: \left( I - \K'(\varphi) \right)^{-1} \K''(\varphi) (x_n^G - \varphi)^2 + \frac{1}{6} \: \left( I - \K'(\varphi) \right)^{-1} \K^{(3)}(\varphi)(x_n^G - \varphi)^3 \\ & ~ + \left( I - \K'(\varphi) \right)^{-1} \mathcal{R}_4 \left( x_n^G, \varphi \right). \end{align*} Hence, by Lemmas \ref{lem:1} and \ref{lem:2}, equation \eqref{eq:2.7} and the above estimate, we obtain the following.
\\ For $r = 1$, \begin{equation*} \left( I - \K'(\varphi) \right)^{-1} \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] (s) = \mathcal{V}_1(\varphi)(s) \: h^{2r} + O\left( h^{4} \right), \end{equation*} for $r \geq 2$, \begin{multline*} \left( I - \K'(\varphi) \right)^{-1} \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] (s) \\ = \mathcal{V}_1(\varphi)(s) \: h^{2r} + \mathcal{V}_2(\varphi)(s) \: h^{3r} + O\left( h^{2r+2} \right). \end{multline*} Since $3r \geq 2r+2$ for $r \geq 2$, \begin{equation*} \left( I - \K'(\varphi) \right)^{-1} \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] (s) = \mathcal{V}_1(\varphi)(s) \: h^{2r} + O\left( h^{2r+2} \right), \quad r \geq 2. \end{equation*} This completes the proof. \end{proof} Now we investigate the second term on the R.H.S. of the equation \eqref{eq:2.6}. \begin{proposition}\label{prop:2} Let $\displaystyle{ \left\{t_i : i = 0, 1, \dots, n \right\}}$ be the set of all partition points defined by \eqref{partition_points}. Then for $r \geq 1$, \begin{equation*} \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](t_i) = O\left( h^{2r+2} \right). \end{equation*} \end{proposition} \begin{proof} By the generalized Taylor's theorem we obtain \begin{equation*} \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) = \int_0^1 \mathcal{K}''\left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2 (1 - \theta) \; d \theta. \end{equation*} Let \begin{equation}\label{eq:3.4} \mathcal{R}_2(x_n^G, \varphi) = \int_0^1 \mathcal{K}''\left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2 (1 - \theta) \; d \theta. \end{equation} Note that \begin{align*} \mathcal{K}'' \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2 (s) = \int_{0}^{1} \frac{\partial^2 \kappa}{\partial u^2} \left(s, t, \varphi(t) + \theta (x_n^G - \varphi)(t) \right)(x_n^G - \varphi)^2 (t) \: dt. 
\end{align*} It follows that \begin{eqnarray}\nonumber \norm{\mathcal{K}'' \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2}_\infty \leq C_3 \norm{x_n^G - \varphi}_\infty^2. \end{eqnarray} Since $\displaystyle{\norm{x_n^G - \varphi}_\infty = O\left( h^{r} \right)}$, \begin{eqnarray}\label{eq:3.5} \norm{\mathcal{K}'' \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2}_\infty = O\left( h^{2r} \right). \end{eqnarray} Let $s \in [0, 1]$ be fixed and $m_{s}(t) = m(s, t) , ~ t \in [0, 1]$, then \begin{multline*} \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](s) \\ = \left< m_s, \: (I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] \right>, \end{multline*} where $\left< \cdot, \cdot \right>$ is the usual inner product in $L^2[0, 1]$, i.e., $$\left< x, y \right> = \int_{0}^{1} x(t)\: y(t) \: dt, \quad x, y \in L^2[0, 1].$$ Since $I - \pi_{n}$ is self-adjoint, \begin{multline}\label{eq:3.7} \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](t_i) \\ = \left< (I - \pi_{n}) m_{t_i}, \: (I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right] \right>. \end{multline} Note that $m_{t_i}$ is continuous on $[t_{j-1}, t_j]$ and $r$ times continuously differentiable on $(t_{j-1}, t_j)$ for all $j=1, 2, \dots, n$. Therefore by \eqref{eq:2.4} \begin{align}\label{eq3.8} \norm{(I - \pi_{n, j})m_{t_i}}_{\Delta_j, \infty} = O\left( h^{r} \right). \end{align} Using \eqref{eq:2.3}, \eqref{eq:3.5}, \eqref{eq:3.7} and the above estimate, we obtain \begin{equation}\label{eq:12} \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](t_i) = O\left( h^{3r} \right), \quad r \geq 2. \end{equation} Consider the case $\mathbf{r =1}$. 
From \eqref{eq:3.4}, it is easy to see that if $\mathcal{K}'' \left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2$ is differentiable, then $\mathcal{R}_2(x_n^G, \varphi)$ is differentiable. Since $\varphi + \theta (x_n^G - \varphi) \in \mathcal{B}(\varphi, \delta)$, from Lemma \ref{lem:2.1} it follows that $\mathcal{R}_2(x_n^G, \varphi)$ is differentiable. Thus, from \eqref{eq:3.4} \begin{equation*} \left( \mathcal{R}_2(x_n^G, \varphi) \right)^{'} = \int_0^1 \left( \mathcal{K}''\left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2 \right)^{'} (1 - \theta) \; d \theta. \end{equation*} This implies \begin{equation*} \norm{\left( \mathcal{R}_2(x_n^G, \varphi) \right)^{'}}_\infty \leq \frac{1}{2} \norm{\left( \mathcal{K}''\left(\varphi + \theta (x_n^G - \varphi) \right) (x_n^G - \varphi)^2 \right)^{'}}_\infty, \qquad 0 < \theta < 1. \end{equation*} Using Lemma \ref{lem:2.1}, \begin{equation*} \norm{\left( \mathcal{R}_2(x_n^G, \varphi) \right)^{'}}_\infty \leq C_3 \norm{x_n^G - \varphi}_\infty^2. \end{equation*} From \eqref{eq2.4}, it follows that \begin{align*} \norm{(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right]}_\infty & ~ = ~ \norm{(I - \pi_{n}) \mathcal{R}_2(x_n^G, \varphi)}_\infty \\ & ~ \leq ~ C_1 C_3 \norm{x_n^G - \varphi}_\infty^2 h. \end{align*} By \eqref{eq1.5}, \eqref{eq:3.7}, \eqref{eq3.8} and the above estimate, we obtain \begin{equation}\label{eq:3.8} \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](t_i) = O\left( h^{4} \right), \quad r = 1. \end{equation} Hence, the required result follows from \eqref{eq:12} and \eqref{eq:3.8}. \end{proof} Next we investigate the third term on the R.H.S. of the equation \eqref{eq:2.6}. \begin{proposition}\label{prop:3} Let $\displaystyle{ \left\{t_i : i = 0, 1, \dots, n \right\}}$ be the set of all partition points defined by \eqref{partition_points}.
Then $$ \mathcal{M}(I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi)(t_i) = O\left( h^{2r+2} \right), \quad \text{ for } r \geq 1 . $$ \end{proposition} \begin{proof} From \eqref{emm} we have \begin{align}\label{eq13} \mathcal{M}(I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi)(t_i) & = \int_0^1 m(t_i, t) ~ (I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi)(t) ~ d t \notag\\ & = \left< m_{t_i} ~,~ (I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi) \right> \notag\\ & = \left< (I - \pi_{n})m_{t_i} ~, ~ (I - \pi_{n})\K'(\varphi)(x_n^G - \varphi) \right>. \end{align} It is easy to see that $m_{t_i}$ is continuous on $[t_{j-1}, t_j]$ and $r$ times continuously differentiable on $(t_{j-1}, t_j)$ for all $j=1, 2, \dots, n$. Therefore, by \eqref{eq:2.4}, \begin{align}\label{eq14} \norm{(I - \pi_{n, j})m_{t_i}}_{\Delta_j, \infty} = O\left( h^{r} \right). \end{align} Note that \begin{equation}\label{eq3.9} \K'(\varphi)(x_n^G - \varphi) = \K'(\varphi)\left( \pi_{n}(x_n^S - \varphi) \right) - \K'(\varphi)(I - \pi_{n})\varphi. \end{equation} Thus, by \eqref{eq:1.5}, \eqref{eq:2.3} and \eqref{eq12} we obtain $$ \norm{ \K'(\varphi)(x_n^G - \varphi)}_\infty = O\left( h^{r+2} \right), \quad r \geq 2. $$ Hence, from \eqref{eq13}, \eqref{eq14} and the above estimate, we obtain \begin{equation}\label{eq:3.10} \mathcal{M}(I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi)(t_i) = O\left( h^{2r+2} \right) , \quad r \geq 2. \end{equation} When $\mathbf{r =1}$, that is, when $\mathscr{X}_n$ is the space of piecewise constant functions, it is easy to see from \eqref{eq1.5}, \eqref{eq12} and \eqref{eq3.9} that $$ \norm{ \K'(\varphi)(x_n^G - \varphi)}_\infty = O\left( h^{2} \right),$$ which is weaker than $O\left( h^{2r+2} \right)$ for $r=1$. We therefore consider this case separately. Note that \begin{equation}\label{eq:3.11} (I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi) = (I - \pi_{n}) \K'(\varphi)\left( \pi_{n}(x_n^S - \varphi) \right) - (I - \pi_{n}) \K'(\varphi)(I - \pi_{n})\varphi.
\end{equation} From \eqref{eq:2.12}, we have \begin{equation*} \norm{(I - \pi_{n}) \K'(\varphi)\left( \pi_{n}(x_n^S - \varphi) \right)}_\infty \leq C_1 C_4 \norm{\pi_n} \norm{x_n^S - \varphi}_\infty h. \end{equation*} From \eqref{eq1.5} and \eqref{eq:2.3}, it follows that \begin{equation*} \norm{(I - \pi_{n}) \K'(\varphi)\left( \pi_{n}(x_n^S - \varphi) \right)}_\infty = O\left( h^{3} \right). \end{equation*} By \eqref{eq:2.14}, \eqref{eq13}, \eqref{eq14}, \eqref{eq:3.11} and the above estimate, we obtain \begin{equation}\label{eq:3.12} \mathcal{M}(I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi)(t_i) = O\left( h^{4} \right) , \quad r = 1. \end{equation} Hence, the required result follows from \eqref{eq:3.10} and \eqref{eq:3.12}. \end{proof} \noindent Now, we prove our main theorem. \begin{theorem}\label{thm:1} Let $f \in C^{2r}[0, 1]$, and the kernel of the Urysohn integral operator \eqref{eq:1.1} be of class $\mathscr{G}_4(r, 0)$. Let $\varphi$ be a fixed point of the operator $\mathcal{T}$ defined by \eqref{new_op}, with $1$ not an eigenvalue of $\mathcal{K}'(\varphi)$. For $r \geq 1$, let $\mathscr{X}_n$ be the space of piecewise polynomials of degree $ \leq r-1 $ with respect to the partition \eqref{eq:2.2} and $\pi_{n}$ be the orthogonal projection defined by \eqref{eq:2.3}--\eqref{eq2.4}. Let $x_n^S$ be the iterated Galerkin solution defined by \eqref{eq:It_Gal}. Then, for $i = 0, 1, \ldots, n$, \begin{equation*} x_n^S(t_i) - \varphi(t_i) = - \zeta_{2r}(t_i) \: h^{2r} + O\left( h^{2r+2} \right), \end{equation*} where $\zeta_{2r}$ is a function bounded by a constant independent of $h$. 
\end{theorem} \begin{proof} From the equation \eqref{eq:2.6} \begin{align*} x_n^S(t_i) - \varphi(t_i) & ~=~ \left( I - \K'(\varphi) \right)^{-1} \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](t_i) \notag \\ & ~~~ - \mathcal{M}(I - \pi_{n}) \left[ \K(x_n^G) - \K(\varphi) - \K'(\varphi)(x_n^G - \varphi) \right](t_i) \notag\\ & ~~~ - \mathcal{M}(I - \pi_{n}) \K'(\varphi)(x_n^G - \varphi)(t_i) \notag\\ & ~~~ - \mathcal{M}(I - \pi_{n})\varphi(t_i). \end{align*} Let \begin{align*} \zeta_{2r} = \mathcal{A}_{2r} + \mathcal{V}_1(\varphi). \end{align*} Hence, the proof of this theorem follows from the equation \eqref{asy_exp1}, Proposition \ref{prop:1}, Proposition \ref{prop:2} and Proposition \ref{prop:3}. \end{proof} We can now apply one step of Richardson extrapolation and obtain an approximation of $\varphi$ of order $h^{2r+2}$ at the partition points. Define \begin{equation*} x_n^{EX} = \frac { 2^{ 2 r } x_{2 n}^S - x_n^S} { 2^{ 2 r } - 1}. \end{equation*} Then, under the assumptions of Theorem \ref{thm:1}, we have the following result: \begin{equation} \label{eq:3.9} x_n^{EX} {(t_i)} - \varphi{(t_i)} = O \left( h^{ 2r +2} \right), \quad i = 0, 1, 2, \ldots, n. \end{equation} \section{Numerical Illustration} For the sake of numerical illustration, we consider the following example of a non-linear Hammerstein integral equation from Kulkarni--Rane \cite{Rpk-Aks}. \noindent Consider \begin{equation}\label{eq:4.1} \varphi (s) - \int_0^1 \kappa (s, t) \left[ \psi \left(t, \varphi (t) \right) \right] \: dt = f(s), \;\;\; 0 \leq s \leq 1, \end{equation} where $$ \kappa (s,t) =\frac{1}{\gamma \sinh \gamma} \left\{ {\begin{array}{ll} \sinh \gamma s \: \sinh \gamma(1-t), & ~ 0 \leq t \leq s \leq 1, \\ \sinh \gamma (1-s) \: \sinh \gamma t, & ~ 0 \leq s \leq t \leq 1, \end{array}}\right.
$$ with $\gamma = \sqrt{12},$ and $$ \psi(t, \varphi(t))= \gamma^2 \varphi(t) - 2 \left( \varphi(t) \right)^3, \quad t \in [0,1].$$ We have $f(s) =\frac{1}{\gamma \sinh \gamma} \left \lbrace 2 \sinh \gamma(1-s) + \frac{2}{3} \sinh \gamma s \right \rbrace. $ The exact solution of \eqref{eq:4.1} is given by \begin{align*} \varphi(s) =\frac{2}{2s+1}, \quad s \in [0, 1]. \end{align*} Let $\mathscr{X}_n$ be the space of piecewise constant functions with respect to the uniform partition \eqref{eq:2.2} of the interval $[0,1]$ considered before. Let $\pi_n: L^\infty[0,1] \rightarrow \mathscr{X}_n$ be the orthogonal projection defined by \eqref{eq:2.3}--\eqref{eq2.4}.\\ In this case, it is given by $$ (\pi_n \varphi)(s) =\displaystyle \frac{1}{h} \int_{(i-1)h}^{ih} \varphi(t) \: dt,\; s \in [(i-1)h,ih],$$ where $h = \frac{1}{n}$. If, in the projection operator defined above, we replace the integral by the right-hand rule, then $ (\pi_n \varphi)(ih-)= \varphi(ih-).$ Let $x_n^S$ be the Sloan solution defined by \eqref{eq:It_Gal}. Then the value $x_n^G(ih)=\displaystyle \frac{\pi_n x_n^S(ih-)+\pi_n x_n^S(ih+)}{2}$ at the partition points is obtained by solving the approximate system of non-linear equations, which gives the values of $\pi_n x_n^S(ih-)$ and $\pi_n x_n^S(ih+).$ The system is as follows: \begin{equation} \alpha_j = h \sum_{l=1}^n \kappa(s_j,s_l) \left[ \gamma^2 \alpha_l - 2\frac{\alpha_l^3}{h} \right] + \frac{f(s_j)}{\sqrt{h}}, \quad j=1, 2, \ldots, n, \end{equation} where $\alpha_l= \pi_n x_n^G(s_l)$ and $s_l = (l-\frac{1}{2})h$ for $l = 1, 2, \ldots, n$. The above system is obtained by replacing all the integrals by a numerical integration formula. We have used Picard iteration to solve this system of non-linear equations. Let $t_i=(i-1)/20, i=1,2,\ldots,21$ be the partition points with step size $h=\frac{1}{20}.$ It is easy to see that \begin{align*} E_1^n(t_i) =|\varphi(t_i) - x_n^S(t_i)| = O\left( h^2 \right).
\end{align*} We define \begin{align*} x_n^{EX}(t_i) = \frac{4 x_{2n}^S(t_i) -x_n^S(t_i)}{3}. \end{align*} Then $$ E_2^n(t_i) = \left|\varphi(t_i) - x_n^{EX}(t_i) \right| = O\left( h^4 \right).$$ The orders of convergence are calculated using the formula : \begin{align*} \begin{aligned} \alpha_1= \frac{log(E_1^n(t_i)/E_1^{2n}(t_i))}{log(2)}, \\ \beta=\frac{log(E_2^n(t_i)/E_2^{2n}(t_i))}{log(2)}, \end{aligned} ~~~ ~~~~ n=40 \end{align*} \begin{align*} \alpha_2= \frac{log(E_1^n(t_i)/E_1^{2n}(t_i))}{log(2)}, \quad n=20. \end{align*} We expect $\alpha_1=\alpha_2=2$ and $\beta=4.$ \begin{center} Table 4.1 \begin{tabular} {|c|c|c|c|c|c|}\hline $t_i$ & $E_1^n(t_i): n=20$ & ~~~ $E_1^n(t_i): n=40$ & $E_1^n(t_i): n=80$ & ~~~ $\alpha_{1}$ & ~~~ $\alpha_{2}$ \\ \hline $0.05$& $ 8.6 \times 10^{-3} $ & $ 2.15 \times 10^{-3} $ & $ 5.37 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.1$& $ 7.56 \times 10^{-3} $ & $ 1.89 \times 10^{-3} $ & $ 4.72 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.15$& $ 6.79 \times 10^{-3} $ & $ 1.7 \times 10^{-3} $ & $ 4.24 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.2$& $ 6.22 \times 10^{-3} $ & $ 1.55 \times 10^{-3} $ & $ 3.89 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.25$& $ 5.78 \times 10^{-3} $ & $ 1.44 \times 10^{-3} $ & $ 3.61 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.3$& $ 5.45 \times 10^{-3} $ & $ 1.36 \times 10^{-3} $ & $ 3.4 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.35$& $ 5.19 \times 10^{-3} $ & $ 1.3 \times 10^{-3} $ & $ 3.24 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.4$& $ 4.98 \times 10^{-3} $ & $ 1.25 \times 10^{-3} $ & $ 3.11 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.45$& $ 4.82 \times 10^{-3} $ & $ 1.2 \times 10^{-3} $ & $ 3.01\times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.5$& $ 4.68 \times 10^{-3} $ & $ 1.17 \times 10^{-3} $ & $ 2.92 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.55$& $ 4.55 \times 10^{-3} $ & $ 1.14 \times 10^{-3} $ & $ 2.84 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.6$& $ 4.44 \times 10^{-3} $ & $ 1.11 \times 10^{-3} $ & $ 2.77 \times 10^{-4} $ & 
$2.00$ & $2.00$ \\ $0.65$& $ 4.33 \times 10^{-3} $ & $ 1.08 \times 10^{-3} $ & $ 2.7 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.7$& $ 4.22 \times 10^{-3} $ & $ 1.05 \times 10^{-3} $ & $ 2.64 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.75$& $ 4.10 \times 10^{-3} $ & $ 1.02 \times 10^{-3} $ & $ 2.56 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.8$& $ 3.98 \times 10^{-3} $ & $ 9.94 \times 10^{-4} $ & $ 2.48 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.85$& $ 3.84 \times 10^{-3} $ & $ 9.6 \times 10^{-4} $ & $ 2.4 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.9$& $ 3.69 \times 10^{-3} $ & $ 9.22 \times 10^{-4} $ & $ 2.3 \times 10^{-4} $ & $2.00$ & $2.00$ \\ $0.95$& $ 3.52 \times 10^{-3} $ & $ 8.8 \times 10^{-4} $ & $ 2.2 \times 10^{-4} $ & $2.00$ & $2.00$ \\ \hline \end{tabular} \end{center} \newpage \begin{center} Table 4.2 \begin{tabular} {|c|c|c|c|}\hline $t_i$ & $E_2^n(t_i): n=20$ & ~~~ $E_2^n(t_i): n=40$ & ~~~ $\beta$ \\ \hline $0.05$& $ 2.98 \times 10^{-6} $ & $ 1.87 \times 10^{-7} $ &$3.99$ \\ $0.1$& $ 2.23 \times 10^{-6} $ & $ 1.41 \times 10^{-7} $ &$3.99$ \\ $0.15$& $ 1.59 \times 10^{-6} $ & $ 1.01 \times 10^{-7} $ &$3.99$ \\ $0.2$& $ 1.09 \times 10^{-6} $ & $ 6.94 \times 10^{-8} $ &$3.97$ \\ $0.25$& $ 7.13 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.96$ \\ $0.3$& $ 4.46 \times 10^{-7} $ & $ 2.91 \times 10^{-8} $ &$3.94$ \\ $0.35$& $ 2.7 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.91$ \\ $0.4$& $ 1.69 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.86$ \\ $0.45$& $ 1.3 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.83$ \\ $0.5$& $ 1.41 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.85$ \\ $0.55$& $ 1.91 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.89$ \\ $0.6$& $ 2.72 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.93$ \\ $0.65$& $ 3.75 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.95$ \\ $0.7$& $ 4.95 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.97$ \\ $0.75$& $ 6.26 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.98$ \\ $0.8$& $ 7.6 \times 10^{-7} $ & $ 4.58 \times 
10^{-8} $ &$3.99$ \\ $0.85$& $ 8.94 \times 10^{-7} $ & $ 4.58 \times 10^{-8} $ &$3.99$ \\ $0.9$& $ 1.02 \times 10^{-6} $ & $ 4.58 \times 10^{-8} $ &$3.99$ \\ $0.95$& $ 1.14 \times 10^{-6} $ & $ 4.58 \times 10^{-8} $ &$4.00$ \\ \hline \end{tabular} \end{center} \noindent This verifies the result \eqref{eq:3.9}. \\ \noindent \textbf{\large Acknowledgment} The author Akshay S. Rane would like to thank the UGC faculty recharge program, India, for its support.
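Before closing, the Picard (fixed-point) iteration used in the numerical illustration can be sketched compactly. In the snippet below the kernel, nonlinearity and right-hand side are illustrative stand-ins chosen to make the fixed-point map contractive; they are not the actual Green's-function system of this section:

```python
import numpy as np

n = 40
h = 1.0 / n
s = (np.arange(1, n + 1) - 0.5) * h          # midpoints s_l = (l - 1/2) h

# Illustrative stand-ins, NOT the kernel/nonlinearity/right-hand side above:
K = np.exp(-s[:, None] - s[None, :]) / 4.0   # kernel matrix kappa(s_j, s_l)
psi = lambda u: u**2                          # illustrative nonlinearity
f = np.ones(n)                                # illustrative right-hand side

# Picard iteration: alpha <- f + h * K @ psi(alpha), started from alpha = f
alpha = f.copy()
for _ in range(200):
    alpha_new = f + h * (K @ psi(alpha))
    converged = np.max(np.abs(alpha_new - alpha)) < 1e-14
    alpha = alpha_new
    if converged:
        break

# residual of the discrete nonlinear system at the computed fixed point
residual = np.max(np.abs(alpha - f - h * (K @ psi(alpha))))
```

For the actual computation one would substitute the Green's-function kernel $\kappa$, the nonlinearity $\psi(t,u)=\gamma^2 u - 2u^3$ and the given $f$; whether plain Picard iteration converges then depends on the discrete map being a contraction near the solution.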
\section{Introduction} Determination of the structure of the hadron scattering amplitude is an important task for both theory and experiment. Perturbative Quantum Chromodynamics cannot be used in the calculation of the real and imaginary parts of the scattering amplitude in the diffraction range. The situation is even worse for the spin-flip parts of the scattering amplitude in the domain of small transfer momenta. On the one hand, the usual representation tells us that the spin-flip amplitude dies out at superhigh energies; on the other hand, we have different nonperturbative models which lead to a non-dying spin-flip amplitude at superhigh energies \ci{bsw,zpc,ans,ETT-79}. Research into the spin-dependent structure of the hadron scattering amplitude is important for various tasks. First of all, the spin amplitudes constitute the spin portrait of the nucleon. Without knowing their energy and momentum transfer dependence, it is impossible to understand the spin observables of nucleon scattering off nuclei. Such knowledge is also needed for the study of very subtle effects, such as attempts to search for a null-test signal of T-invariance violation under P-parity conservation in a $pd$ double polarization collision at SPD NICA energies \cite{Uzikov}. Parity violation in the interaction of longitudinally polarized protons or deuterons with an unpolarized target has been discussed in \cite{NNN1}, and the estimates of the P-odd asymmetry in nucleon-nucleon scattering in the NICA energy range were reported in \cite{NNN2}. This is especially important for future fixed-target experiments at the LHC, in which $pp$, $pd$ and $pA$ collisions can be performed at $\sqrt{s_{NN}} =115$~GeV as well as $Pbp$ and $PbA$ collisions at $\sqrt{s_{NN}} =72$~GeV. The study of elastic scattering requires knowledge of the properties of the pomeron, the object determining the interaction of hadrons in elastic and exclusive processes.
In this case, the study of the structure and spin properties of both the hadron and the pomeron acquires a special role \cite{lap}. The vacuum $t$-channel amplitude is usually associated with two-gluon exchange in QCD \cite{low}. The properties of the spinless pomeron were analyzed on the basis of a QCD model, taking into account the non-perturbative properties of the theory \cite{la-na,don-la}. Now we recognize that research into pomeron exchange requires not only the pure elastic process but also many physical processes involving electroweak boson exchanges \cite{Jenk1}. There are two approaches to the pomeron: the ``soft'' pomeron built of multiperipheral hadron exchanges, and the more current perturbative-QCD ``hard'' pomeron built of the gluon ladder. The spin structure of the pomeron is still an unresolved question in diffractive scattering of polarized particles. There have been many observations of spin effects at high energies and at fixed momentum transfers \cite{krish,nur}; several attempts to extract the spin-flip amplitude from the experimental data show that the ratio of spin-flip to spin-nonflip amplitudes can be non-negligible and may be independent of energy \cite{akch,sel-pl}. It is generally believed, based on calculations of the simplest QCD diagrams, that the spin effects decrease as an inverse power of the center-of-mass energy and that pomeron exchange does not lead to appreciable spin effects in the diffraction region at super-high energies. Complete calculations of the full set of helicity scattering amplitudes in the diffraction region cannot be carried out at present, since they require an extensive treatment of confinement and contributions from many diagrams. Semi-phenomenological models, however, have been developed with parameters which are expected to be fixed with the aid of experimental data. The model approaches to the description of spin correlation parameters have some specific features.
They can be divided into two classes: low energy and high energy. In the region of low energies there is a wide range of experimental data. To describe the data quantitatively, the models either used purely phenomenological approaches, like \cite{Wakaizumi}, or obtained some qualitative description of the data within theoretical approaches \cite{Sibirt}. Above $\sqrt{s} = 10$ GeV the amount of experimental data decreases essentially; in practice, only the values of the analysing power $A_N(s,t)$ are available. The description of such data was presented in the well-known Bourrelly--Soffer model \cite{bsw} and recently in a model with a huge number of free parameters \cite{Martynov}. However, such models do not take the data at $\sqrt{s} < 13$ GeV into account in the analysis. Some models predict non-zero spin effects as $s \to \infty$ and $|t|/s \to 0$. In these studies, the spin-flip amplitudes, which lead to weakly altered spin effects with increasing energy, are connected with the structure of hadrons and their interactions at large distances \cite{soff,zpc}. In \cite{soff}, the spin dependence of the pomeron term is constructed within a model of rotating matter inside the proton. This approach is based on Chou and Yang's concept of hadronic current density \cite{c-y}. This picture can be related to the spin effects determined by higher-order $\alpha_{s}$ contributions in the framework of PQCD. The high energy two-particle amplitude determined by pomeron exchange can be written in the form: \begin{equation} T(s,t)=is \ I\hspace{-1.mm}P(s,t) V_{h_1h_1 I\hspace{-0.8mm}P}^{\mu} \otimes V^{h_2h_2 I\hspace{-0.8mm}P}_{\mu}. \label{tpom} \end{equation} Here $ I\hspace{-1.mm}P(s,t)$ is a function caused by a pomeron with a weak energy dependence $\sim (\ln{s})^n$, and $V_{\mu}^{hhI\hspace{-0.8mm}P}$ are the pomeron-hadron vertices.
The perturbative calculation of the pomeron coupling structure is rather difficult, and the non-perturbative contributions are important for momentum transfers of a few $({\rm GeV}/c)^{2}$. The situation changes dramatically when large-distance loop contributions are considered, which leads to a more complicated spin structure of the pomeron coupling. As a result, spin asymmetries appear that have a weak energy dependence as $s \to \infty$. Additional spin-flip contributions to the quark-pomeron vertex may also have their origins in instantons; see, {\it e.g.}, \cite{do,ans}. Note that in the framework of perturbative QCD, the analyzing power of hadron-hadron scattering was shown to be of the order $$ A_N \ \propto \ m \alpha_s / \sqrt{p_{t}^{2}},$$ where $m$ is of the order of the hadron mass \cite{ter}. Hence, one would expect a large analyzing power for moderate $p_{t}^{2}$, where the spin-flip amplitudes are expected to be important for diffractive processes. Now there are many different models to describe the elastic hadron scattering amplitude at small angles (see the reviews \cite{Rev-LHC,Paccomomi,Martynov}). They lead to different predictions of the structure of the scattering amplitude at super-high energies. The diffraction processes at very high energies, especially at LHC energies, are not simple asymptotically but can display complicated features \cite{FS-07,dif04}. Note that the interference of hadronic and electromagnetic amplitudes can give an important contribution not only at very small transfer momenta but also in the range of the diffraction minimum \cite{lap}. However, one should also know the phase of the interference of the Coulomb and hadron amplitudes at sufficiently large transfer momenta and the contribution of the hadron spin-flip amplitude to the CNI effect \ci{trosh,soff}. Now we cannot exactly calculate all contributions and find their energy dependences.
But a great amount of the experimental material at low energies allows us to make complete phenomenological analyses and find the size and form of different parts of the hadron scattering amplitude. The difficulty is that we do not know the energy dependence of these amplitudes and individual contributions of the asymptotic non-dying spin-flip amplitudes. From a modern point of view, the structure of a hadron can be described by the generalized parton distribution (GPD) functions \cite{Muller,Ji97,R97}, combining our knowledge about the one-dimensional parton distribution in the longitudinal momentum with the impact-parameter or transverse distribution of matter in a hadron or nucleus. They allow one to obtain a 3-dimensional picture of the nucleon (nucleus) \cite{Burk1,Burk2,Diehl}. In the general picture, hadron-hadron processes determined by the strong interaction with hadron spin equal to $1/2$ can be represented by some combinations of three vectors in the center-of-mass system. In the c.m.s. there are only two independent three-dimensional momenta. In the case of elastic scattering, $\vec{p}_1 = - \vec{p}_2$ and $\vec{p}_3 = - \vec{p}_4$. Using the initial and final momenta ${\bf p}$ and ${\bf p}^{\prime}$ and their unit vectors ${\bf \hat{p}}$ and ${\bf \hat{p}}^{\prime}$, so that ${\bf \hat{p}}={\bf p}/|{\bf p}|$ and ${\bf \hat{p}}^{\prime}={\bf p}^{\prime}/|{\bf p}^{\prime}|$, one can obtain three independent combinations $${\bf \hat{l}} \equiv \frac{ {\bf p} + {\bf p}^{\prime} }{ |{\bf p} + {\bf p}^{\prime}| }; \ \ \ {\bf \hat{q}} \equiv \frac{ {\bf p} - {\bf p}^{\prime} }{ |{\bf p} - {\bf p}^{\prime}| }; \ \ \ {\bf \hat{n}} \equiv \frac{ {\bf p} \times {\bf p}^{\prime}}{|{\bf p} \times {\bf p}^{\prime}| }.
$$ The vectors ${\bf \hat{l}}$, ${\bf \hat{q}}$, ${\bf \hat{n}}$ and the spin vectors $\hat{\sigma}_{1}$ and $\hat{\sigma}_{2}$ create eight independent scalars \cite{Nelipa}: $(\hat{\sigma}_{1} \cdot {\bf \hat{n}}) (\hat{\sigma}_{2} \cdot {\bf \hat{n}})$, \ $(\hat{\sigma}_{1} + \hat{\sigma}_{2})\cdot {\bf \hat{n}}$, \ $(\hat{\sigma}_{1} - \hat{\sigma}_{2})\cdot {\bf \hat{n}}$, \ $(\hat{\sigma}_{1} \cdot {\bf \hat{l}}) (\hat{\sigma}_{2} \cdot {\bf \hat{l}})$, \ $(\hat{\sigma}_{1} \cdot {\bf \hat{q}}) (\hat{\sigma}_{2} \cdot {\bf \hat{q}})$, \ $(\hat{\sigma}_{1} \cdot {\bf \hat{q}}) (\hat{\sigma}_{2} \cdot {\bf \hat{l}})$, \ $(\hat{\sigma}_{1} \cdot {\bf \hat{l}}) (\hat{\sigma}_{2} \cdot {\bf \hat{q}})$, \ $[\hat{\sigma}_{1} \times \hat{\sigma}_{2}]\cdot {\bf \hat{n}}$. The main body of experimental data shows the conservation of time parity, charge conjugation, and space parity in strong interaction processes. Under time inversion $\hat{\sigma}_{1,2}$ change to $-\hat{\sigma}_{1,2}$, while ${\bf \hat{q}} \rightarrow {\bf \hat{q}}$, ${\bf \hat{n}} \rightarrow - {\bf \hat{n}}$, ${\bf \hat{l}} \rightarrow - {\bf \hat{l}}$; hence the combinations $[\hat{\sigma}_{1} \times \hat{\sigma}_{2}]\cdot {\bf \hat{n}}$, $(\hat{\sigma}_{1} \cdot {\bf \hat{q}}) (\hat{\sigma}_{2} \cdot {\bf \hat{l}})$ and $(\hat{\sigma}_{1} \cdot {\bf \hat{l}}) (\hat{\sigma}_{2} \cdot {\bf \hat{q}})$ have to be removed as a result of time parity conservation. If the interacting particles are identical, as in the case of proton-proton elastic scattering, these combinations should also be unchanged under the interchange of the two particles.
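As a quick numerical illustration (a hedged sketch; the momenta below are arbitrary made-up values, not data), one can verify that for elastic kinematics, $|{\bf p}|=|{\bf p}^{\prime}|$, the three combinations $\hat{\bf l}$, $\hat{\bf q}$, $\hat{\bf n}$ form an orthogonal triad:

```python
import numpy as np

def unit(v):
    """Return v / |v|."""
    return v / np.linalg.norm(v)

# Arbitrary incoming momentum and an elastically scattered one (same modulus).
p = np.array([0.0, 0.0, 5.0])
axis = unit(np.array([1.0, 2.0, 0.5]))
theta = 0.3  # scattering angle in radians
# Rotate p by theta around `axis` (Rodrigues' formula), so |p'| = |p| exactly.
pp = (p * np.cos(theta)
      + np.cross(axis, p) * np.sin(theta)
      + axis * np.dot(axis, p) * (1 - np.cos(theta)))

l_hat = unit(p + pp)
q_hat = unit(p - pp)
n_hat = unit(np.cross(p, pp))

# (p+p').(p-p') = |p|^2 - |p'|^2 = 0, and n is normal to the scattering plane.
print(np.dot(l_hat, q_hat), np.dot(l_hat, n_hat), np.dot(q_hat, n_hat))
```

The orthogonality of $\hat{\bf l}$ and $\hat{\bf q}$ holds only for elastic scattering, which is why these three directions are the natural basis for the spin structure.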
As a result, the scattering amplitude is \begin{eqnarray} \phi(s,t)=\phi_1(s,t)& +& \phi_2(s,t) ({ \sigma_1} \cdot {\bf{\hat{n}}}) ({\sigma_2} \cdot {\bf{\hat{n}}}) + \phi_3(s,t) ({\sigma_1} \cdot {\bf \hat{n}} + {\sigma_2} \cdot \bf{\hat{n}} ) \nonumber \\ & + & \phi_4(s,t)({ \sigma_1} \cdot {\bf \hat{q}}) ({\sigma_2} \cdot {\bf \hat{q}}) + \phi_5(s,t)({ \sigma_1} \cdot {\bf \hat{l}}) ({ \sigma_2} \cdot \bf{\hat{l}}). \end{eqnarray} The amplitude corresponds to the spin-dependent interaction potential and can be taken as the Born term of the scattering process. The Born term of the amplitude in the impact parameter representation can be obtained from the corresponding amplitude in the momentum transfer representation, \begin{eqnarray} \phi(s,b) \ = -\frac{1}{2 \pi} \ \int \ d^2 q \ e^{i \vec{b} \cdot \vec{q} } \ \phi^{\rm Born}_{h}(s,q^2) \,. \label{tot02} \end{eqnarray} The corresponding amplitude is connected with the interaction potential in the position representation: using the standard Fourier transform \cite{Gold} of the potential $V(\vec{r})$, one can obtain the Born term of the scattering amplitude. If the potentials $V(\vec{r}) = V_{1}(\vec{r}) + V_{5}(\vec{r})$ are assumed to have a Gaussian form, $$ V_{1,5}(\rho, z) \sim \ \int_{-\infty}^{\infty} e^{-B \ r^2} \ d z \ = \ \frac{\sqrt{\pi}}{\sqrt{B}} e^{-B \ \rho^2}, $$ then in the first Born approximation $\phi^{h}_{1}$ and $\phi^{h}_{5}$ have the same form: \begin{eqnarray} \phi^{h}_{1}(s,t) \sim \int_{0}^{\infty} \ \rho \ d\rho \ J_{0}(\rho \Delta) e^{-B \ \rho^2} \ \sim \ e^{-\Delta^{2}/(4B)}, \lab{f1a} \end{eqnarray} \begin{eqnarray} \phi^{h}_{5}(s,t) \sim \int_{0}^{\infty} \ \rho^2 \ d\rho \ J_{1}(\rho \Delta) \ e^{-B \ \rho^2 } \ \sim \ \frac{\Delta}{B} \ e^{-\Delta^{2}/(4B)} . \lab{f5a} \end{eqnarray} In this special case, therefore, the spin-flip and ``residual'' spin-non-flip amplitudes indeed have the same slope \cite{PS-Sl}.
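The equal-slope statement can be checked directly: the $J_0$ and $J_1$ Bessel transforms of a Gaussian profile are both Gaussian in $\Delta$ with the same slope $1/(4B)$. A small numeric sketch (the values of $B$ and $\Delta$ are illustrative; the closed forms below include the normalizations, which are dropped in the schematic equations above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

B = 5.0      # Gaussian slope in impact parameter (illustrative)
Delta = 1.3  # momentum transfer (illustrative)

# Spin-non-flip profile: int_0^inf rho J0(rho*Delta) exp(-B rho^2) drho
phi1, _ = quad(lambda r: r * j0(r * Delta) * np.exp(-B * r * r), 0, np.inf)
# Spin-flip profile:     int_0^inf rho^2 J1(rho*Delta) exp(-B rho^2) drho
phi5, _ = quad(lambda r: r * r * j1(r * Delta) * np.exp(-B * r * r), 0, np.inf)

# Closed forms: both carry the same Gaussian slope exp(-Delta^2/(4B)).
phi1_exact = np.exp(-Delta**2 / (4 * B)) / (2 * B)
phi5_exact = Delta * np.exp(-Delta**2 / (4 * B)) / (4 * B**2)
print(phi1, phi1_exact, phi5, phi5_exact)
```

The spin-flip transform differs only by the kinematic factor $\Delta$ and a power of $B$, so the two amplitudes share one exponential slope, which is the content of the two profile integrals above.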
The first observation that the slopes do not coincide was made in \cite{Predazzi66}. It was found from the analysis of the $\pi^{\pm} p \rightarrow \ \pi^{\pm}p $ and $pp \rightarrow \ pp $ reactions at $p_L \ = \ 20 \div 30 \ GeV/c $ that the slope of the ``residual'' spin-flip amplitude is about twice as large as the slope of the spin-non-flip amplitude. This conclusion can also be obtained from the phenomenological analysis carried out in \cite{Wakaizumi} for the spin correlation parameters of elastic proton-proton scattering at $p_L \ = \ 6 \ GeV/c$. The model-dependent analysis based on all the existing experimental data on the spin-correlation parameters above $p_L \ \geq \ 6 \ GeV$ allows us to determine the structure of the hadron spin-flip amplitude at high energies and to predict its behavior at superhigh energies \cite{z100}. This analysis shows that the ratios $Re \ \phi^{h}_{5}(s,t) / (\sqrt{|t|} \ Re \ \phi^{h}_{1}(s,t))$ and $Im \ \phi^{h}_{5}(s,t)/(\sqrt{|t|} \ Im \ \phi^{h}_{1}(s,t))$ depend on $\ s$ and $t$. At small momentum transfer, it was found that the slope of the ``residual'' spin-flip amplitude is approximately twice the slope of the spin-non-flip amplitude \cite{JPS}. The electromagnetic current of a nucleon is \begin{eqnarray} J_{\mu} (P^{'},s^{'}; P,s) = \bar{u} (P^{'},s^{'}) \Lambda_{\mu} (q,P) u(P,s) = \bar{u} (P^{'},s^{'}) (\gamma_{\mu} F_{1}(q^2) + \frac{1}{2M} i \sigma_{\mu \nu }q_{\nu }F_{2}(q^{2}))u(P,s), \end{eqnarray} where $P,s \ (P^{'},s^{'}) $ are the four-momentum and polarisation of the incoming (outgoing) nucleon, and $q = P^{'}- P $ is the momentum transfer. The quantity $ \Lambda_{\mu} (q,P) $ is the nucleon-photon vertex. If the potential has spherical symmetry, the Born amplitude can be calculated as \begin{eqnarray} \phi_{B}(t) = \frac{g}{q} \int_{0}^{\infty} r \ \sin( q r ) \ V(r) \ dr.
\end{eqnarray} or, in the impact parameter representation, \begin{eqnarray} \phi_{B}(b) = \frac{1}{4\pi} \int_{-\infty}^{\infty} V(\sqrt{z^2+b^2} ) \ dz. \end{eqnarray} There are several different forms of the unitarization procedure \cite{Cud-Sel-nl,Cud-Pred-Sel-nl}. One of them is the standard eikonal representation, where the Born term of the scattering amplitude in the impact parameter representation takes the role of the eikonal phase, and the total scattering amplitude is represented as \begin{eqnarray} \phi(s,t)=\frac{i s}{2 \pi} \int_{0}^{\infty} [1-\exp{(-\chi({\bf b }))}] \exp{(-i {\bf{ q \cdot b}})} d^2 {\bf{ b}}. \end{eqnarray} If the spin-dependent parts of the eikonal are taken into account to first order, with the eikonal function $\chi(\bf{b})$ a sum of the spin-independent central term $\chi_{si}$, the spin-orbit term $\chi_{ls}$, and the spin-spin term $\chi_{ss}$, the separate spin-dependent amplitudes are written as follows: \begin{eqnarray} \phi_{1s}(s,t)&=&i s \int b d b J_0(b q)[1-\exp{(-\chi_{si}(b))}]; \\ \phi_{2s}(s,t)&=&i s \int b^2 d b J_1(b q) \exp{(-\chi_{si}(b))} \chi_{ls}(b); \\ \phi_{3s}(s,t)&=& s \int b d b J_0(b q) \exp{(-\chi_{si}(b))} \chi_{ss}(b). \end{eqnarray} Using the standard relations (see, for example, \cite{Lehar}), we can obtain the helicity amplitudes for small scattering angles and high energies: \begin{eqnarray} \phi_1(s,t)&=&\phi_{1s}(s,t)-\phi_{2s}(s,t); \ \ \ \phi_2(s,t)=2 \phi_{2s}(s,t); \\ \nonumber \phi_3(s,t)&=&\phi_{1s}(s,t)+\phi_{2s}(s,t); \ \ \ \phi_4(s,t)= 0; \ \ \ \phi_5(s,t)=i\phi_{3s}(s,t). \end{eqnarray} The scattering amplitude of charged hadrons is represented as a sum of the hadronic and electromagnetic amplitudes, $ \phi_{tot} = \phi^{em} + \phi^{h} $. The electromagnetic amplitude can be calculated in the framework of QED.
In the one-photon approximation, we have \cite{bgl,nur1} \begin{eqnarray} \phi^{em}_1 &=& \alpha [f_{1}^{2}(t)(\frac{s-2 m^2}{t} + \frac{m^2}{2 p^2}) - 2 f_{1} (t) f_{2}(t) -\frac{1}{2}f_{2}^{2}(t)(1-\frac{t}{4 p^2})]; \\ \nonumber \phi^{em}_2 &=& \alpha [f_{1}^{2}(t)\frac{ m^2}{2 p^2} -f_{1}(t) f_{2}(t) + \frac{f_{2}^{2}(t)}{4 m^2}(s - 2 m^2 +\frac{t s}{8 p^2})]; \\ \nonumber \phi^{em}_3 &=& \alpha (1 + \frac{t}{4 p^2}) [f_{1}^{2}(t) \frac{s-2 m^2}{t} + \frac{1}{2} f_{2}^2(t) ]; \\ \nonumber \phi^{em}_{4} &=& - \phi^{em}_{2}; \\ \nonumber \phi^{em}_5 &=& \alpha [-\frac{s (4p^2+t)}{t}]^{1/2} [f_{1}^{2}(t)\frac{ m}{4 p^2} - \frac{1}{2 m} f_{1}(t) f_{2}(t) + \frac{t}{16 m p^2} f_{2}^{2}(t)], \end{eqnarray} where $\alpha$ is the electromagnetic coupling constant, and $f_{1} (t)$ and $f_{2} (t)$ are the Dirac and Pauli form factors of the proton, \begin{eqnarray} f_{1}(t) = \frac{4 m^2 -t (1+k)}{4 m^2 -t} G_{d}, \ \ \ \ f_{2}(t) = \frac{4 m^2 k}{4 m^2 -t} G_{d}, \end{eqnarray} with $ G_{d} = (1- t/0.71)^{-2}$ ($t$ in GeV$^2$), and $k= 1.793$ is the anomalous magnetic moment of the proton. In the high energy approximation, we obtain \begin{eqnarray} \phi^{em}_1 = \alpha f_{1}^{2} \frac{s-2 m^2}{t}, \ \ \ \phi^{em}_3 = \phi^{em}_1, \ \ \ \phi^{em}_2 = \alpha \frac{f_{2}^{2}(t)}{4 m^2} s, \\ \nonumber \phi^{em}_{4} = - \phi^{em}_{2}, \ \ \ \ \phi^{em}_5 = \alpha \frac{s }{2m \sqrt{|t|}} f_{1}^{2}. \end{eqnarray} The total helicity amplitudes can be written as $ \phi_i(s,t) = \phi^i_{N}(s,t) + \phi^i_{em}(t) \exp{(i \alpha \varphi(s,t))}$ \ci{bgl}. The differential cross section and spin correlation parameters are \begin{eqnarray} \frac{d\sigma}{dt} = \frac{2 \pi}{s^{2}} (|\phi_{1}|^{2} +|\phi_{2}|^{2} +|\phi_{3}|^{2} +|\phi_{4}|^{2} +4 | \phi_{5}|^{2} ), \label{dsdt} \end{eqnarray} \begin{eqnarray} A_N\frac{d\sigma}{dt}&=& -\frac{4\pi}{s^2} Im[(\phi_1+\phi_2+\phi_3-\phi_4) \ \phi_5^{*}]
\label{an} \end{eqnarray} and \begin{eqnarray} A_{NN}\frac{d\sigma}{dt}&=& \frac{4\pi}{s^2} Re[(\phi_1 \phi_2^{*} - \phi_3 \phi_4^{*}) + 2 |\phi_5|^{2}]. \label{ann} \end{eqnarray} \section{Coulomb-nucleon phase factor} In \cite{lap,bsw}, the importance of the CNI effects in the region of the diffraction dip was pointed out. In \cite{bsw}, the polarization was calculated, at energies that are low by present standards, with the CNI effect but without the CNI phase and with a simple approximation for the hadron spin-non-flip amplitude. However, the authors showed for the first time that the CNI effect can be sufficiently large (up to $11\%$ at $p_{L} = 280 \ GeV/c$) in the region of non-small momentum transfer. The total amplitude including the electromagnetic and hadronic forces can be expressed as \begin{eqnarray} F(s,t) = F_{C} \exp{(i \alpha \varphi (s,t))} + F_{N}(s,t), \end{eqnarray} and for the differential cross section, neglecting the terms proportional to $\alpha^2$, we have \begin{eqnarray} d\sigma/dt&=&\pi [ (F_{C} (t))^2\!+\! ({\rm Re} F_{N}(s,t))^2 +({\rm Im} F_{N}(s,t))^2 \\ &+&2 ( {\rm Re} F_{N}(s,t) F_{C}(t) \cos(\alpha \varphi(t)) + {\rm Im} F_{N}(s,t) F_{C}(t) \ \sin(\alpha \varphi(t)) )] \nonumber \end{eqnarray} with \begin{eqnarray} \varphi(s,t) = \varphi(t)_{C} - \varphi(s,t)_{CN} , \end{eqnarray} where $\varphi(t)_{C}$ appears in the second Born approximation of the pure Coulomb amplitude, and the term $\varphi_{CN}$ is defined by the Coulomb-hadron interference. The quantity $\varphi(s,t)$ has been calculated and discussed by many authors. For high energies, the first results were obtained by Akhiezer and Pomeranchuk \cite{akhi} for the diffraction on a black nucleus. Using the WKB approach in potential theory, Bethe \cite{bethe} derived $\varphi(s,t)$ for proton-nucleus scattering. After some improvements of this result \cite{rix}, the most important results were obtained by Locher \cite{loch} and then by West and Yennie \cite{wy}.
In the framework of the Feynman diagram technique, a general expression was obtained in \cite{wy} for $\varphi_{CN}(s,t)$ in the case of pointlike particles in terms of the hadron elastic scattering amplitude: \begin{eqnarray} \varphi(s,t) = - \ln{(-t/s)} - \int^{s}_{0} \frac{ d t^{'} }{ |t-t^{'}| } \ \ (1-\frac{ F_{N}(s,t^{'}) } { F_{N}(s,t) } ). \label{wy} \end{eqnarray} If the hadron amplitude is chosen in the standard Gaussian form, \\ $F_{N} = h \ \exp{(-B(s) q^{2}/2)}$, we get \begin{eqnarray} \varphi(s,t) = \mp [\ln{(B(s) |t|/2)} + \gamma], \label{wyph} \end{eqnarray} where $-t=q^2$, $B(s)/2$ is the slope of the nuclear amplitude, $\gamma$ is the Euler constant, and the upper (lower) sign corresponds to the scattering of particles with the same (opposite) charges. The impact of the spin of the scattered particles was analyzed in \cite{lap,bgl} by using the eikonal approach for the scattering amplitude. Using the helicity formalism for high energy hadron scattering, it was shown in \cite{bgl} that at small angles all the helicity amplitudes have the same $\varphi(s,t)$. The influence of the electromagnetic form factor of the scattered particles on $\varphi_{C}$ and $\varphi_{CN}$ in the framework of the eikonal approach was examined by Islam \cite{Islamphase} and, taking into account the hadron form factor in its simplest form, by Cahn \cite{can}. The latter derived for $t \rightarrow 0 $ the eikonal analogue of (\ref{wy}) and obtained the corrections \begin{eqnarray} \varphi (s,t)&=&\mp [\gamma +\ln{ (B(s)|t| /2)} + \ln{ (1 + 8/(B(s)\Lambda ^2))} \nonumber\\ & & + (4|t|/\Lambda ^2)\ \ln{ (4|t|/\Lambda^2)} + 2|t|/\Lambda^2], \label{fit} \end{eqnarray} where $\Lambda$ is a constant entering the power-dependent form factor. A recent calculation of the phase factor was carried out in \cite{Petrovphase}. Calculations of the phase factor beyond the limit $t \rightarrow 0$ were carried out in \cite{selmpl1,selmpl2,selphase}.
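For the Gaussian parametrization, Eq.~(\ref{wyph}) can be reproduced numerically from Eq.~(\ref{wy}). The sketch below is hedged: it interprets the integration variable as $|t'|$ running up to a large cutoff of order $s$, which is one common reading of the formula, and uses illustrative values of $B$ and $t$:

```python
import numpy as np
from scipy.integrate import quad

GAMMA = 0.5772156649  # Euler constant

def wy_phase(T, B, s_cut):
    """Eq. (wy) for F_N ~ exp(-B q^2 / 2), with T = |t| and T' = |t'|."""
    # F_N(s,t')/F_N(s,t) = exp(-B (T' - T) / 2) for the Gaussian amplitude.
    integrand = lambda Tp: (1.0 - np.exp(-B * (Tp - T) / 2)) / abs(T - Tp)
    # The integrand has only a finite jump at T' = T, so split there.
    i1, _ = quad(integrand, 0.0, T)
    i2, _ = quad(integrand, T, s_cut, limit=300)
    return np.log(s_cut / T) - (i1 + i2)

B, T = 13.0, 0.01                        # GeV^-2 and GeV^2, illustrative
approx = -(np.log(B * T / 2) + GAMMA)    # Eq. (wyph), same-charge sign
print(wy_phase(T, B, 1.0e3), approx)
```

For $B|t|/2 \ll 1$ the two expressions agree up to corrections of order $B|t|/2$, which is exactly the small parameter of the $t \to 0$ expansion.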
As a result, for the total Coulomb scattering amplitude we have, in the eikonal approximation to second order in $\alpha$, \begin{eqnarray} F_c(q) = F_c^{1B} + F_c^{2B} = -\frac{\alpha}{q^2} [\frac{\Lambda^4}{(\Lambda^2+q^2)^2}] [1+i\alpha \{ \ln(\frac{\lambda^2}{q^2}) \ + \ \nu_s \}], \end{eqnarray} where \begin{eqnarray} \nu_s = A \ln(\frac{(\Lambda^2+q^2)^2}{\Lambda^2 q^2}) + B \ln(\frac{4 \Lambda^2}{(\sqrt{4 \Lambda^2 +q^2}+q)^2}) \ + \ C . \label{e23} \end{eqnarray} The coefficients $A,B,C$ are defined in \cite{selphase}. The numerical calculation shows that at small $q^2$ the difference between $\nu_s$ and $\nu_{c}$ is small, but above $q^2=3\cdot 10^{-2} \ GeV^2$ it grows rapidly. It is clear that the solution for $\nu_{c}$ should be bounded at $-t= 3\cdot 10^{-2} \ GeV^2$. As a result, we have a sufficiently simple form of $\nu_s$ up to $|t| = 0.2 \ GeV^2$. This gives us the possibility of reproducing it by a simple phenomenological form that can be used in practical analyses of experimental data at small $t$: \begin{eqnarray} \nu_s \simeq c_1 \log{(1 + c_{2}^{2} q^2)} +4/q^{2}, \end{eqnarray} where the constants $c_1$ and $c_2$ are determined by fitting $\nu_s$: $ c_1 = 0.11, \ \ \ \ c_2 = 20.$ The total phase factor is \begin{eqnarray} \varphi(s,t) = \ln{\frac{q^2}{4}} +2\gamma +\frac{1}{F_h(s,q)} \int_{0}^{\infty} \tilde{\chi}_{c}(\rho) (1 - \exp(\chi_h(\rho,s)))J_{0}(\rho q)d\rho , \label{fei2} \end{eqnarray} with \begin{eqnarray} \tilde{\chi}_c(\rho) = 2\rho \ln{\rho} +2\rho \biggl\{ K_{0}(\rho \Lambda) [1+ \frac{5}{24} \Lambda^2 \rho^2 ] +\frac{\Lambda \rho}{12} K_1(\rho \Lambda) [11+ \frac{1}{4} \Lambda^2 \rho^2] \biggr\}. \end{eqnarray} The calculated $\varphi(s,t)$, Eq.~(\ref{fei2}), is the eikonal analog, with the hadron form factor taken into account, of the expression obtained by West and Yennie \ci{wy} from Feynman diagrams.
\section{Nucleon form factors and GPDs} There are various choices for the nucleon electromagnetic form factors (ff), such as the Dirac and Pauli ff, $F_1^p(t),\ \ F_1^n(t)$ and $F_2^p(t),\ \ F_2^n(t),$ and the Sachs electric and magnetic ff, $G_E^p(t),\ \ G_E^n(t)$ and $G_M^p(t),\ \ G_M^n(t)$ \cite{Sach}. The Dirac and Pauli form factors are obtained from a decomposition of the matrix elements of the electromagnetic (e.m.) current in linearly independent covariants made of four-momenta, $\gamma$ matrices and Dirac bispinors as follows: $$<N|J_{\mu}^{e.m.}|N>=e\bar u(p')[\gamma_{\mu}F_1^N(t)+{i\over{2m}}\sigma_{\mu\nu}(p'- p)_{\nu}F_2^N(t)]u(p),$$ where $m$ is the nucleon mass. The electric and magnetic form factors, on the other hand, are convenient for extraction from the experiment $ e^- N \rightarrow e^- N$ by the Rosenbluth or polarization methods \cite{Rosenbluth}. The four sets of form factors are related by \begin{eqnarray} G_E^p(t) = F_1^p(t)+\tau^p F_2^p(t), \ \ \ G_M^p(t)=F_1^p(t)+F_2^p(t), \\ \nonumber G_E^n(t)=F_1^n(t)+\tau^n F_2^n(t), \ \ \ G_M^n(t)=F_1^n(t)+F_2^n(t), \nonumber \end{eqnarray} with $\tau^{p(n)}={t\over{4m_{p(n)}^2}}$; the Sachs form factors can be interpreted as Fourier transforms of the distributions of charge and magnetization in the Breit frame. They satisfy the normalization conditions $$ G_E^p(0)=1;\ G_M^p(0)=1+\mu_p;\ G^n_E(0)=0; \ G^n_M(0)=\mu_n,$$ where $\mu_p$ and $\mu_n$ are the proton and neutron anomalous magnetic moments, respectively. Since the GPDs are not known a priori, one seeks models of the GPDs based on general constraints on their analytic and asymptotic behavior. The calculated scattering amplitudes (cross sections) are then compared with the data to confirm, modify or reject the chosen form of the GPDs. Commonly, the form of $GPDs(x,\xi,t)$ is determined through exclusive deep inelastic processes of the type $\gamma^*p\rightarrow Vp,$ where $V$ stands for a photon or a vector meson.
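The Sachs relations and normalization conditions above are easy to check numerically; as input we use the dipole parametrization of $f_{1,2}$ quoted earlier for the electromagnetic amplitudes (a minimal sketch, not the fitted form factors of the model):

```python
import numpy as np

K_P = 1.793   # proton anomalous magnetic moment mu_p
M_P = 0.938   # proton mass, GeV

def G_d(t):
    """Standard dipole (t in GeV^2, t <= 0 in the physical region)."""
    return (1.0 - t / 0.71) ** (-2)

def f1(t):
    """Dirac form factor of the proton, dipole parametrization."""
    m2 = 4 * M_P**2
    return (m2 - t * (1 + K_P)) / (m2 - t) * G_d(t)

def f2(t):
    """Pauli form factor of the proton."""
    m2 = 4 * M_P**2
    return m2 * K_P / (m2 - t) * G_d(t)

def sachs(t):
    """Sachs combinations G_E = F1 + tau*F2, G_M = F1 + F2."""
    tau = t / (4 * M_P**2)
    return f1(t) + tau * f2(t), f1(t) + f2(t)

ge0, gm0 = sachs(0.0)
print(ge0, gm0)  # normalizations: G_E^p(0) = 1, G_M^p(0) = 1 + mu_p
```

The charge and magnetic-moment normalizations come out automatically from the structure of $f_1$ and $f_2$, which is a useful sanity check on any parametrization one inserts here.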
However, such processes cover a narrow region of momentum transfer, and in most models the $t$-dependence of the GPDs is taken in a factorized form with a Gaussian $t$-dependence. In fact, this form of $GPDs(x,\xi,t)$ cannot be used to reconstruct the spatial structure of hadrons, as for that one needs to integrate over $t$ in a maximally wide region. The hadron form factors are related to the $GPDs(x,\xi,t)$ by the sum rules \cite{Ji97} \begin{eqnarray} F_1^q(t)=\int_{-1}^1 dx H^q(x,\xi=0,t), \ \ \ \ F_2^q(t)=\int_{-1}^1 dx E^q(x,\xi=0,t). \end{eqnarray} The integration region can be reduced to positive values of $x,~0<x<1$, by the following combinations of non-forward parton densities \cite{Rad1,GPRV}: ${\cal H}^q(x,t)=H^q(x,0,t)+H^q(-x,0,t)$, ${\cal E}^q(x,t)=E^q(x,0,t)+E^q(-x,0,t)$, providing $F^q_1(t)=\int_0^1 dx {\cal H}^q(x,t)$ and $F^q_2(t)=\int_0^1 dx {\cal E}^q(x,t)$. The proton and neutron Dirac form factors are defined as \begin{eqnarray} F_1^p(t)=e_uF_1^u(t)+e_dF_1^d(t), \ \ \ \ F_1^n(t)=e_uF_1^d(t)+e_dF_1^u(t), \end{eqnarray} where $e_u=2/3$ and $e_d=-1/3$ are the relevant quark electric charges. As a result, the $t$-dependence of the $GPDs(x,\xi=0,t)$ can be determined from an analysis of the nucleon form factors, for which experimental data exist in a wide region of momentum transfer. It is a unique situation, as it unites the elastic and inelastic processes. In the limit $t\rightarrow 0$ the functions ${\cal H}^q(x,t)$ reduce to the usual quark densities in the proton: $$ {\cal\ H}^u(x,t=0)=u_v(x),\ \ \ {\cal H}^d(x,t=0)=d_v(x),$$ with the integrals $$\int_0^1 u_v(x)dx=2,\ \ \ \int_0^1 d_v(x)dx=1 $$ normalized to the number of $u$ and $d$ valence quarks in the proton.
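The sum rules can be made concrete with toy valence densities. In the sketch below the PDFs and the Gaussian $t$-dependence are illustrative stand-ins (not the fitted forms of the text), but the charge normalizations $F_1^p(0)=1$ and $F_1^n(0)=0$ follow automatically from the quark counting:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

e_u, e_d = 2.0 / 3.0, -1.0 / 3.0

# Toy valence PDFs ~ x^{-1/2}(1-x)^3, normalized to 2 u-quarks and 1 d-quark.
norm = beta(0.5, 4.0)
u_v = lambda x: 2.0 / norm * x**-0.5 * (1 - x) ** 3
d_v = lambda x: 1.0 / norm * x**-0.5 * (1 - x) ** 3

def H(q, x, t, a=1.1, m=0.4):
    """Toy GPD: q(x) * exp(a (1-x)^2 / x^m * t), with made-up a and m."""
    return q(x) * np.exp(a * (1 - x) ** 2 / x**m * t)

def F1(t, flavors):
    """Sum rule: F_1 = sum_q e_q * int_0^1 dx H^q(x, t)."""
    return sum(e * quad(lambda x: H(q, x, t), 0, 1)[0] for e, q in flavors)

F1p = lambda t: F1(t, [(e_u, u_v), (e_d, d_v)])  # proton
F1n = lambda t: F1(t, [(e_u, d_v), (e_d, u_v)])  # neutron (u <-> d)
print(F1p(0.0), F1n(0.0), F1p(-1.0))
```

At $t=0$ the exponential drops out and the integrals count valence quarks, so $F_1^p(0)=e_u\cdot 2 + e_d\cdot 1 = 1$ and $F_1^n(0)=e_u\cdot 1 + e_d\cdot 2 = 0$ hold for any valence shapes with the right normalization.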
However, the ``magnetic'' densities ${\cal E}^q(x,t=0)\equiv {\cal E}^q(x)$ cannot be directly expressed in terms of the known parton distributions; their normalization integrals $$\int_0^1{\cal E}^q (x)dx\equiv k_q $$ are, however, constrained by the requirement that the values $F_2^p(t=0)$ and $F_2^n(t=0)$ are equal to the anomalous magnetic moments of the proton and neutron, whence $k_u=2k_p+k_n\approx 1.673$ and $k_d=k_p+2k_n\approx -2.033$ follow \cite{GPRV}. This helps us to obtain the corresponding PDFs by analysing the magnetic form factors of the proton and neutron \cite{GPD-ST-PRD09}. In \cite{GPD-ST-PRD09}, the $t$-dependence of the GPDs in the simplest form \begin{eqnarray} {\cal{H}}^{q} (x,t) \ = q(x)_{nf} \ \exp [ a_{+} \ \frac{(1-x)^2}{x^{m} } \ t ], \ \ \ {\cal{E}}^{q} (x,t) \ = q(x)_{sf} \ \exp [ a_{-} \ \frac{(1-x)^2}{x^{m} } \ t ] \label{GPD0} \end{eqnarray} was investigated. A combined analysis of all available experimental data on the electromagnetic form factors of the proton and neutron simultaneously allows one to obtain the $t$-dependence of the $GPDs(x,\xi,t)$ \cite{GPD-PRD14}. Different Mellin moments of the GPDs give the form factors for different reactions. While the first moment of the $GPDs(x,\xi,t)$ gives the electromagnetic form factors, integration over $x$ of the second moment of the GPDs gives the momentum-transfer representation of the so-called gravitomagnetic form factors, \begin{eqnarray} \int^{1}_{0} \ dx \ x \sum_{u,d}[{\cal{H}}(x,t) \pm {\cal{E}}(x,t)] = A_{h}(t) \pm B_{h}(t), \end{eqnarray} which are connected with the energy-momentum tensor. Further development of the model requires careful analysis of the momentum transfer form of the GPDs and a properly chosen form of the PDFs. In Ref. \cite{GPD-PRD14}, an analysis of more than 24 different PDF sets was performed. We slightly complicated the form of the GPDs in comparison with Eq.~(\ref{GPD0}), but it is still the simplest one compared to other works (for example, Ref.
\cite{Diehl-Kroll}): \begin{eqnarray} {\cal{H}}^{u} (x,t) = q(x)^{u}_{nf} \ e^{2 a_{H} \ \frac{(1-x)^{2+\epsilon_{u}}}{(x_{0}+x)^{m}} \ t }; \ \ \ {\cal{H}}^{d} (x,t) \ = q(x)^{d}_{nf} \ e^{2 a_{H} (1+\epsilon_{0}) (\frac{(1-x)^{2+\epsilon_{d}}}{(x_{0}+x)^{m}} ) \ t }, \label{t-GPDs-H} \end{eqnarray} \begin{eqnarray} {\cal{E}}^{u} (x,t) = q(x)^{u}_{fl} \ e^{2 a_{E} \ \frac{(1-x)^{2+\epsilon_{u}}}{(x_{0}+x)^{m}} \ t }, \ \ \ {\cal{E}}^{d} (x,t) = q(x)^{d}_{fl} \ e^{2 a_{E}(1+\epsilon_{0}) (\frac{(1-x)^{2+\epsilon_{d}}}{(x_{0}+x)^{m}} ) \ t }, \label{t-GPDs-E} \end{eqnarray} where $q(x)^{u,d}_{fl}=q(x)^{u,d}_{nf} (1-x)^{z_{1},z_{2}}$. The ratio $ \mu G_{E}/G_{M}$ for the proton and neutron is presented in Fig. 1. Our calculations reproduce the data obtained by the polarization method quite well. \begin{figure} \includegraphics[width=.45\textwidth]{mgedgmb4.eps} \includegraphics[width=.45\textwidth]{gengmn.eps} \vspace{1.cm} \caption{The model description of the ratio of the electromagnetic form factors for the proton, $\mu_{p} G^{p}_{E}/G^{p}_{M}$, with different forms of PDFs \cite{HEGS0} [left], and for the neutron, $\mu_{n} G^{n}_{E}/G^{n}_{M}$ [right]. } \label{Fig3} \end{figure} The hadron form factors were calculated by numerical integration, \begin{eqnarray} F_{1}(t)= \int^{1}_{0} dx [\frac{2}{3}q_{u}(x)e^{2 \alpha_{H} t (1-x)^{2+\epsilon_{u}}/(x_{0}+x)^m} -\frac{1}{3} q_{d}(x)e^{ 2 \alpha_{H} t (1-x)^{2+\epsilon_{d}}/((x_{0}+x)^{m})} ], \end{eqnarray} and then by fitting these integral results with the standard dipole form with some additional parameters for $F_{1}(t)$, $ F_{1}(t)= 1/(1+q/a_{1}+q^{2}/a_{2}^2 + q^3/a_{3}^3)^2 $. The matter form factor \begin{eqnarray} A(t) = \int^{1}_{0} x \ dx [ q_{u}(x)e^{2 \alpha_{H} t (1-x)^{2+\epsilon_{u}}/(x_{0}+x)^m} + q_{d}(x)e^{ 2 \alpha_{H} t (1-x)^{2+\epsilon_{d}}/((x_{0}+x)^{m})} ] \end{eqnarray} is fitted by the simple dipole form $ A(t) = \frac{\Lambda^4}{(\Lambda^2 -t)^2 }$.
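The fitting step, moments of the GPDs fitted by a dipole, can be sketched in a toy setting: the valence densities and the Gaussian $t$-dependence below are illustrative stand-ins (not the paper's fitted forms), and `scipy.optimize.curve_fit` recovers an effective $\Lambda^2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit
from scipy.special import beta

# Toy valence PDFs ~ x^{-1/2}(1-x)^3, normalized to 2 u and 1 d quark.
norm = beta(0.5, 4.0)
u_v = lambda x: 2.0 / norm * x**-0.5 * (1 - x) ** 3
d_v = lambda x: 1.0 / norm * x**-0.5 * (1 - x) ** 3

def A_matter(t, a=1.1, m=0.4):
    """Second moment: int_0^1 x dx [u(x)+d(x)] exp(a (1-x)^2 / x^m * t)."""
    f = lambda x: x * (u_v(x) + d_v(x)) * np.exp(a * (1 - x) ** 2 / x**m * t)
    return quad(f, 0, 1)[0]

ts = -np.linspace(0.0, 2.0, 41)
vals = np.array([A_matter(t) for t in ts])
vals /= vals[0]  # normalize A(0) -> 1 to match the dipole form

dipole = lambda t, L2: L2**2 / (L2 - t) ** 2  # Lambda^4 / (Lambda^2 - t)^2
(L2_fit,), _ = curve_fit(dipole, ts, vals, p0=[1.0])
print(L2_fit)
```

The point of the exercise is only that a superposition of exponentials in $t$ (the GPD moment) is well approximated by a single dipole over a moderate $|t|$ range, which is the practical content of the fit described in the text.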
The results of the integral calculations and the fitting procedure are shown in Fig.~2. Our description is valid up to large momentum transfer with the following parameters: $a_{1}=16.7, \ a_{2}^{2}=0.78, \ a_{3}^{3}=12.5$ and $\Lambda^2=1.6$. These form factors will be used in our model of proton-proton and proton-antiproton elastic scattering. \begin{figure} \includegraphics[width=.45\textwidth]{f1al12.eps} \includegraphics[width=.45\textwidth]{agr6al12.eps} \vspace{1.cm} \caption{ The fit of the form factors of the proton: [left] the electromagnetic form factor $G(t)$ and [right] the matter form factor $A(t)$. The circles are the moments of the GPDs (only every tenth point is shown). } \label{Fig4} \end{figure} \section{Extension of the HEGS model with the spin-flip amplitude} In papers \cite{HEGS0,HEGS1}, the new High Energy Generalized Structure (HEGS) model was developed. The central point of the model is that it uses two form factors, corresponding to the charge and matter distributions, calculated as the relevant moments of the $GPDs(x,\xi=0,t)$. The basic Born spin-non-flip amplitude was taken in the form \begin{eqnarray} F_{h}^{Born}(s,t) \ = h_1 \ G^{2}(t) \ F_{a}(s,t) \ (1+r_1/\hat{s}^{0.5}) \ + h_{2} \ A^{2}(t) \ F_{b}(s,t) \ (1+r_2/\hat{s}^{0.5}), \label{FB1} \end{eqnarray} where $F_{a}(s,t)$ and $F_{b}(s,t)$ have the standard Regge form \begin{eqnarray} F_{a}(s,t) \ = \hat{s}^{\epsilon_1} \ e^{B(s) \ t}, \ \ \ F_{b}(s,t) \ = \hat{s}^{\epsilon_1} \ e^{B(s)/4 \ t}. \label{FB-ab} \end{eqnarray} The slope of the scattering amplitude has a logarithmic dependence on energy, $ B(s) = \alpha^{\prime} \ \ln(\hat{s})$, with fixed $\alpha^{\prime}=0.24$ GeV$^{-2}$ and $\Delta=0.11 $.
Taking into account the Mandelstam region of analyticity of the scattering amplitude for a $2 \rightarrow 2 $ process with identical masses, $s+u+t = 4 m_{p}^2$, one takes the normalized energy variable in the complex form $\hat{s}/s_{0}$ with $\hat{s} = s e^{i\pi}$ and $s_{0}=4 m_{p}^{2}$, where $m_{p}$ is the mass of the proton. In the present model, a small additional term is introduced into the slope, which reflects some possible small nonlinear properties of the intercept. As a result, the slope is taken in the form $ B(s,t) \ = (\alpha^{\prime} + k q e^{-k q^2 \ln(\hat{s} \ t)} ) \ln(\hat{s}) $. This form leads to the standard form of the slope as $t \rightarrow 0$ and $t \rightarrow \infty$. Note that at large energies our additional term has a form similar to an additional term in the slope coming from the $\pi$ loop, examined in Ref. \cite{Gribov-Sl} and more recently in Ref. \cite{Khoze-Sl}. Then, since we intend to describe sufficiently low energies as well, possible Odderon contributions were taken into account: \begin{eqnarray} F_{\rm odd}(s,t) \ = \pm \ h_{\rm odd} \ A^{2}(t) \ F_{b}(s,t), \end{eqnarray} where $h_{\rm odd} = i h_{3} t/(1-r_{0}^{2} t) $. Just as we supposed in the previous variant of the HEGS model that $F_{b}(s,t)$ corresponds to the cross-even part of the three-gluon exchange, our Odderon contribution is also connected with the matter form factor $A(t)$. Our ansatz for the Odderon differs slightly from the cross-even part by a kinematic function. The form of the Odderon, valid at all $t$, has the same behavior as the cross-even part at larger momentum transfer, of course with different signs for the proton-proton and proton-antiproton reactions. The final elastic hadron scattering amplitude is obtained after unitarization of the Born term.
So, first, we have to calculate the eikonal phase \begin{eqnarray} \chi(s,b) \ = -\frac{1}{2 \pi} \ \int \ d^2 q \ e^{i \vec{b} \cdot \vec{q} } \ F^{\rm Born}_{h}\left(s,q^2\right)\, \label{chi} \end{eqnarray} and then obtain the final hadron scattering amplitude using eq.(9), \begin{eqnarray} F_{h}(s,t) = i s \ \int \ b \ J_{0}(b q) \ \Gamma(s,b) \ d b\, \ \ \ {\rm with} \ \ \ \Gamma(s,b) = 1- \exp[\chi(s,b)]. \label{Gamma} \end{eqnarray} Note that the parameters of the model are energy independent. The energy dependence of the scattering amplitude is determined only by the single intercept and the logarithmic dependence of the slope on $s$. The analysis of the hard-pomeron contribution in the framework of the model \cite{NP-HP} shows that such a contribution is not needed: for the most part, the fitting procedure requires a negative additional hard-pomeron contribution. We repeated the analysis of \cite{NP-HP} in the present model and obtained practically the same results. Hence, we do not include the hard pomeron in the model. At present we do not know exactly, even from the theoretical viewpoint, the dependence of the different parts of the scattering amplitude on $s$ and $t$. So, usually, one supposes that the imaginary and real parts of the spin-non-flip amplitude behave exponentially with the same slope, whereas the imaginary and real parts of the spin-flip amplitude, without the kinematic factor $\sqrt{|t|}$, behave in the same manner with $t$ in the examined domain of momentum transfer. Moreover, one mostly assumes the energy independence of the ratio of the spin-flip to the spin-non-flip parts of the scattering amplitude. All this constitutes our theoretical uncertainty. Let us take the main part of the spin-flip amplitude in the same basic form as the spin-non-flip amplitude.
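Equations (\ref{chi}) and (\ref{Gamma}) can be exercised end to end with a toy Gaussian Born term; for a weak Born amplitude the unitarized result must reproduce the input profile, which is a useful consistency check on the chain Born $\to \chi \to \Gamma \to F_h$ (all constants below are illustrative, not the model parameters):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

B, h = 10.0, 0.01  # Gaussian slope and a deliberately small coupling

born = lambda q2: h * np.exp(-B * q2 / 2)  # toy F_h^Born(s, q^2) / (i s)

def chi(b):
    """Eq. (chi) with the angular integral done:
    chi(b) = - int_0^inf q J0(b q) F^Born(q^2) dq."""
    f = lambda q: q * j0(b * q) * born(q * q)
    return -quad(f, 0, np.inf)[0]

def F_h(q):
    """Eq. (Gamma): F_h/(i s) = int_0^inf b J0(b q) (1 - exp(chi(b))) db."""
    f = lambda b: b * j0(b * q) * (1.0 - np.exp(chi(b)))
    return quad(f, 0, 30.0)[0]  # integrand is negligible beyond b ~ 30

q = 0.3
print(F_h(q), born(q * q))  # nearly equal for small h
```

For larger $h$ the exponentiation in $\Gamma$ starts to matter and $F_h$ saturates, which is precisely the unitarizing role of the eikonal representation.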
Hence, the Born term of the spin-flip amplitude can be represented as \begin{eqnarray} F_{sf}^{Born}(s,t)= && h_{sf1} \ F_{1}^{2}(t) \ F_{sf-a}(s,t) \ (1+r_{sf1}/\hat{s}^{0.5}) \\ \nonumber + && h_{sf2} \ A^{2}(t) \ F_{sf-b}(s,t) \pm h_{sf-odd} \ A^{2}(t)F_{sf-b}(s,t)\ (1+r_{sf2}/\hat{s}), \label{FB} \end{eqnarray} where $F_{sf-a}(s,t)$ and $F_{sf-b}(s,t)$ are the same as in the spin-non-flip amplitude but, according to the paper \cite{PS-Sl}, with essentially larger slopes. As a result, we take $F_{sf-a}(s,t) \ = \hat{s}^{\epsilon_1} \ e^{4 B(s) \ t}$ and $ F_{sf-b}(s,t) \ = \hat{s}^{\epsilon_1} \ e^{B(s)/2 \ t}$. It is to be noted that most of the available experimental data on the spin-correlation parameters exist only at sufficiently small energies. Hence, at lower energies we need to take into account the energy-dependent parts of the spin-flip amplitudes; in this case, some additional polarization data can be included in our examination. Then the spin-flip eikonal phase $\chi_{ls}(s,b)$ is calculated by the Fourier-Bessel transform, eq.(43), and the spin-flip amplitude in the momentum transfer representation is obtained by the standard eikonal representation for the spin-flip part, eq.(12). As in our previous works \cite{Our-Kur,Nica20}, a small contribution from the energy-independent part of the spin-flip amplitude, in a form similar to that proposed in Ref. \cite{G-Kuraev2}, was added: \begin{eqnarray} F_{sf-t}(s,t) \ = h_{sf} q^3 F_{1}^{2}(t) e^{-B_{sf} q^{2}}. \end{eqnarray} It has two additional free parameters. Taking $F_{sf-t}(s,t)$ into account, the full spin-flip amplitude is $ F_{sf}(s,t) = F_{sf-ab} + F_{sf-t}$. The model is very simple from the viewpoint of the number of fitting parameters and functions. There are no artificial functions or cuts restricting the separate parts of the amplitude to some region of momentum transfer.
We analyzed $3080$ experimental points in the energy region $9.8$ GeV $\leq \sqrt{s} \leq 8 $ TeV and in the region of momentum transfer $0.000375 \leq |t| \leq 10 $ GeV$^2$ for the differential cross sections, and $125$ experimental points for the polarization parameter $A_N$ in the energy region $ 4.5 < \sqrt{s} < 30 $ GeV. The experimental data on proton-proton and proton-antiproton elastic scattering are included as 87 separate sets from 30 experiments \cite{data-Sp,Land-Bron}, including the recent data from the TOTEM Collaboration at $\sqrt{s}=8$ TeV \cite{TOTEM-8nexp}. This gives us many high-precision experimental points at small momentum transfer, including the Coulomb-hadron interference region, where the experimental errors are remarkably small. Hence, we can check our model construction, in which the real part is determined only by the complex representation $\hat{s}=(s/s_{0}) \exp(-i \pi /2)$. We do not include the data on the total cross sections $\sigma_{\rm tot}(s)$ and $\rho(s)$, as their values were obtained from the differential cross sections, especially in the Coulomb-hadron interference region. Including these data would decrease $\chi^2$, but in our opinion it would be double counting. In this work, the fitting procedure uses the modern version ``FUMILIM'' \cite{Sitnik1,Sitnik2} of the old program ``FUMILY'' \cite{fum83}, which calculates the covariance matrix and gives the corresponding errors of the parameters, their correlation coefficients, and the errors of the final data. An analysis of the TOTEM data by three different statistical methods, including calculations through the correlation matrix of the systematic errors, was made in \cite{GS-totan19}. As in the old version of the model, we take into account only the statistical errors in the standard fitting procedure. The systematic errors are taken into account through an additional normalization coefficient, which is the same for every set of the experimental data.
It essentially decreases the space of possible forms of the scattering amplitude. Of course, it is necessary to control the sizes of the normalization coefficients so that they do not introduce an additional energy dependence. Our analysis shows that the distribution of the coefficients has the correct statistical properties and does not lead to a visible additional energy dependence. As a result, we obtained a quantitative description of the experimental data ($\sum \chi^2/n_{dof} =1.3$). \begin{figure} \includegraphics[width=.45\textwidth]{s9p8.eps} \includegraphics[width=.45\textwidth]{dsa11.eps} \vspace{1.cm} \caption{ $d\sigma/dt$ for $pp$ [left] at $\sqrt{s}=9.8$ GeV and $p\bar{p}$ [right] at $\sqrt{s}=11.4$ GeV (lines - the model calculations, points - the experimental data \cite{Whalley}). } \label{Fig5} \end{figure} In the model, a good description of the CNI region of momentum transfer is obtained in a very wide energy region (approximately three orders of magnitude) with the same slope of the scattering amplitude. The differential cross sections of proton-proton and proton-antiproton elastic scattering at small momentum transfer are presented in Fig. 3 at $\sqrt{s}= 9.8 $ GeV for $pp$ scattering and at $\sqrt{s}= 11.4 $ GeV for $p\bar{p}$ elastic scattering. The model quantitatively reproduces the differential cross sections in the whole examined energy region, in spite of the fact that the size of the slope changes essentially in this region (due to the standard Regge behavior $\sim \log(\hat{s})$) and the real part of the scattering amplitude behaves differently for $pp$ and $p\bar{p}$.
\begin{figure*} \begin{center} \includegraphics[width=0.75\textwidth] {dsmd3.eps} \end{center} \vspace{1.cm} \caption{ The model calculation of the diffraction minimum in $d\sigma/dt$ of $pp $ scattering at $ \sqrt{s}=9.23, \ 13.76, \ 30.4 $~GeV; lines (short dash, long dash, and solid) - the model calculations; circles and triangles - the experimental data at 13.4 and 30.7 GeV \cite{Whalley}. } \end{figure*} The form and the energy dependence of the diffraction minimum are very sensitive to different parts of the scattering amplitude. The change of the sign of the imaginary part of the scattering amplitude determines the position of the minimum and its movement with changing energy. The real part of the scattering amplitude determines the size of the dip. Hence, it depends heavily on the odderon contribution. The spin-flip amplitude gives the contribution to the differential cross sections additively. So the measurement of the form and energy dependence of the diffraction minimum with high precision is an important task for future experiments. In Fig. 4, the description of the diffraction minimum in our model is shown for $\sqrt{s} = 9.23, \ 13.76$, and $\ 30.4 \ $GeV. The HEGS model reproduces sufficiently well the energy dependence and the form of the diffraction dip. In this energy region the diffraction minimum reaches the sharpest dip at $\sqrt{s}=30 $~GeV. Note that at this energy the value of $\rho(s,t=0)$ also changes its sign in proton-proton scattering. The $p\bar{p}$ cross sections in the model are obtained by the $s \rightarrow u$ crossing without changing the model parameters. For proton-antiproton scattering the same situation, with correlations between the sizes of $\rho(s,t=0)$ and $\rho(s,t_{min})$, takes place at low energy (approximately at $p_{L}= 100 $ GeV). Note that the model gives a good description of proton-proton and proton-antiproton elastic scattering at $\sqrt{s}=53 $~GeV and at $\sqrt{s}=62.1 $~GeV.
The diffraction minimum at $\sqrt{s}=7 $~TeV and $\sqrt{s}=13 $~TeV is reproduced sufficiently well too. In the standard picture, the spin-flip and double spin-flip amplitudes correspond to the spin-orbit $(LS)$ and spin-spin $(SS)$ coupling terms. Already at $p_L = 6 \ $GeV/c, the contribution to $A_N$ from the hadron double spin-flip amplitudes is of second order compared to the contribution from the spin-flip amplitude. So, with the usual high energy approximation for the helicity amplitudes at small transfer momenta, we suppose that $\Phi_{1}=\Phi_{3}$ and we can neglect the contributions of the hadron parts of $\Phi_2-\Phi_4$. Note that if $\Phi_{1}, \Phi_3, \Phi_5$ have the same phases, their interference contribution to $A_N$ will be zero, though the size of the hadron spin-flip amplitude can be large. Hence, if this phase has different $s$ and $t$ dependences, the contribution from the hadron spin-flip amplitude to $A_N$ can be zero at some $s_i, \ t_i$ and non-zero at other $s_j, \ t_j$. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth] {an4p9d.eps} \includegraphics[width=0.45\textwidth] {an6p8b.eps} \end{center} \vspace{1.cm} \caption{The analyzing power $A_N$ of $pp$ scattering calculated: a) at $\sqrt{s} = 4.9 \ $GeV (points - the experimental data \cite{Pol4p9}), and b) at $\sqrt{s} = 6.8 \ $GeV (points - the existing experimental data \cite{Pol6p8}). } \label{fig:10} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth] {an9p235d.eps} \includegraphics[width=0.45\textwidth] {an13p7d.eps} \end{center} \vspace{1.cm} \caption{The analyzing power $A_N$ of $pp$ scattering calculated: a) at $\sqrt{s} = 9.2 \ $GeV (points - the experimental data \cite{Pol9p2}), and b) at $\sqrt{s}= 13.7 \ $GeV (points - the experimental data \cite{Pol23p4}).
} \label{fig:11} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth] {an19p4d3.eps} \includegraphics[width=0.45\textwidth] {an23p4d3.eps} \end{center} \vspace{1.cm} \caption{The analyzing power $A_N$ of $pp$ scattering calculated: a) at $\sqrt{s} = 19.4 \ $GeV (points - the experimental data \cite{Pol19p4}), and b) at $\sqrt{s}= 23.4 \ $GeV (points - the existing experimental data \cite{Pol23p4}). } \label{fig:2} \end{figure*} \begin{figure} \begin{flushright} \includegraphics[width=.45\textwidth]{an500d3.eps} \includegraphics[width=0.45\textwidth]{anstd.eps} \end{flushright} \vspace{1.cm} \caption{[left] The calculated size of the spin correlation parameter $A_{N}(s,t)$ at high energies $\sqrt{s}= 50, \ 100, \ 500 \ $GeV. [right] The $s$-dependence of $A_{N}(s,t)$ at different fixed $t_{i} = 0.001, \ 0.1, \ 0.4, \ 1.0, \ 1.5 \ $GeV$^2$ (solid, dots, short dash, dot-dot-dash, and long-dash lines, respectively). } \label{Fig_3} \end{figure} \begin{figure} \begin{flushright} \includegraphics[width=.45\textwidth]{Rrra.eps} \includegraphics[width=0.45\textwidth]{Rim.eps} \end{flushright} \vspace{1.cm} \caption{ [left] The ratio of the imaginary parts of the spin-flip and spin-non-flip amplitudes; [right] the ratio of the real parts of the spin-flip and spin-non-flip amplitudes (dashed, dots, and solid lines correspond to $\sqrt{s} = 9.23, \ 19.4, \ 30.7 \ $GeV). } \label{Fig_4} \end{figure} Our calculation of $A_{N}(t)$ is shown in Fig. 5a,b at $\sqrt{s}=4.9 $~GeV and $\sqrt{s}=6.8 $~GeV. These are very low energies for our high energy model. However, the description of the existing data is sufficiently good. At these energies, the diffraction minimum is practically filled by the real part of the spin-non-flip amplitude and the contribution of the spin-flip amplitude; however, the $t$-dependence of the analyzing power is reproduced very well in this region of momentum transfer.
Note that the magnitude and the energy dependence of this parameter depend on the energy behavior of the zeros of the imaginary part of the spin-flip amplitude and the real part of the spin-nonflip amplitude. Figure 6 shows $A_{N}(t)$ at $\sqrt{s}=9.2 $~GeV and $\sqrt{s}=13.7 $~GeV. At these energies the diffraction minimum deepens and its form affects the form of $A_{N}(t)$. Finally, $A_{N}(t)$ is shown at the larger energies $\sqrt{s}=19.4 $~GeV and $\sqrt{s}=23.4 $~GeV in Fig. 7. The diffraction dip in the differential cross section has a sharp form, and this leads to the sharp form of $A_{N}(t)$. The maximum negative values of $A_N$ coincide closely with the diffraction minimum. From these figures we found that the contribution of the spin-flip amplitude to the differential cross sections is much smaller than the contribution of the spin-nonflip amplitude in the examined region of momentum transfer; in the domain of the diffraction dip, $A_N$ is determined by the ratio \begin{eqnarray} A_N \sim {\rm Im}\, f_{-} / {\rm Re}\, f_{+}. \label{ir} \end{eqnarray} The size of the analyzing power changes from $-45\%$ to $-50\%$ at $\sqrt{s}=50 \ \,{\rm GeV}$ up to $-25\%$ at $\sqrt{s}= 500 \ \,{\rm GeV}$. These numbers give the magnitude of the ratio Eq.~(\ref{ir}), which does not strongly depend on the phase between the spin-flip and spin-nonflip amplitudes. This picture implies that the diffraction minimum is mostly filled by the real part of the spin-nonflip amplitude and that the imaginary part of the spin-flip amplitude increases in this domain as well. From Fig. 4 we observe that the dips move with energy at different speeds. In Fig. 8, one sees that at larger momentum transfers, $|t| \sim 2$ to $3 \ \,{\rm (GeV}/c)^{2}$, the analyzing power depends on energy very weakly.
In Fig. 8, the predictions of the HEGS model are presented for $A_{N}(t)$ up to the high energy $\sqrt{s}=500 $~GeV. It can be seen that even at such huge energies the size of $A_{N}(t)$ does not vanish and can be measured in new LHC experiments with a fixed target. Now let us examine the ratios of the real and imaginary parts of the phenomenological and model spin-flip amplitudes to the imaginary part of the hadron spin-non-flip amplitude (see Figs. 9 (a,b) and 11 (a,b)). It is clear that this ratio cannot be regarded as a constant. Moreover, it has a very strong energy dependence. Neglecting the $ \Phi_{2}(s,t)- \Phi_{4}(s,t)$ contribution, the spin correlation parameter $A_{N}(s,t)$ can be written, taking into account the phases of the separate amplitudes, as \begin{eqnarray} A_{N}(s,t) \frac{d \sigma}{dt} = -\frac{4 \pi}{s^2} [|F_{nf}(s,t)| \ |F_{sf}(s,t)| \sin( \theta_{nf}(s,t)-\theta_{sf}(s,t))], \end{eqnarray} where $\theta_{nf}(s,t), \theta_{sf}(s,t)$ are the phases of the spin non-flip and spin-flip amplitudes. It is clearly seen that despite a large spin-flip amplitude the analyzing power can be near zero if the difference of the phases is zero in some region of momentum transfer. The experimental data at some points of the momentum transfer show the energy independence of the size of the spin correlation parameter $A_{N}(s,t)$. Hence, a small value of $A_{N}(s,t)$ at some $t$ (for example, very small $t$) does not serve as a proof that it will be small in other regions of momentum transfer. Let us compare the spin-flip and spin-nonflip amplitudes in the impact parameter representation at $\sqrt{s}=30$ GeV. The results are presented in Fig. 10. It can be seen that the spin-flip amplitude has a more peripheral behavior.
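The phase sensitivity described here can be made concrete with a small numerical sketch (our illustration only; the function name and toy amplitude values are invented, and $d\sigma/dt$ is taken as an external input):

```python
import numpy as np

def analyzing_power(F_nf, F_sf, s, dsdt):
    """A_N from A_N * dsigma/dt = -(4*pi/s^2) |F_nf||F_sf| sin(th_nf - th_sf).

    F_nf, F_sf : complex spin-non-flip and spin-flip amplitudes;
    s          : squared c.m. energy;  dsdt : differential cross section.
    """
    phase_diff = np.angle(F_nf) - np.angle(F_sf)
    return -4.0 * np.pi / s**2 * abs(F_nf) * abs(F_sf) * np.sin(phase_diff) / dsdt

s, dsdt = 30.0**2, 1.0
# Same phase (both at pi/4): A_N vanishes despite a sizable |F_sf|.
print(analyzing_power(2 + 2j, 1 + 1j, s, dsdt))
# Phases pi/4 vs pi/2: a nonzero A_N appears.
print(analyzing_power(2 + 2j, 1j, s, dsdt))
```

The first case illustrates the statement in the text: equal phases give $A_N = 0$ even when the spin-flip amplitude is large, so the size of $A_N$ at one $(s,t)$ point constrains only the phase difference there.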
\begin{figure} \begin{center} \includegraphics[width=.5\textwidth]{byfws30.eps} \end{center} \vspace{1.cm} \caption{ Impact parameter representation of the spin-nonflip and spin-flip amplitudes at $\sqrt{s}=30$ GeV ($b (1- \exp(-\chi_{nf}))$ - dashed line, and $b \, \chi_{sf} \exp(-\chi_{nf})$ - solid line). } \label{Fig_14} \end{figure} \section{Conclusions} The Generalized parton distributions (GPDs) make it possible to better understand the fine hadron structure and to obtain the hadron structure in the space frame (impact parameter representation). An important property of GPDs is that they are tightly connected with the hadron form factors. The new HEGS model gives a quantitative description of elastic nucleon scattering at high energy with a small number of fitting parameters. Our model of the GPDs leads to a good simultaneous description of the proton and neutron electromagnetic form factors and of elastic nucleon scattering. A successful description of the existing experimental data by the model shows that the elastic scattering is determined by the generalized structure of the hadron. It allows one to find some new features in the differential cross section of $pp$-scattering in the unique experimental data of the TOTEM collaboration at $ \sqrt{s}=13 $ TeV (small oscillations \cite{Osc13-20} and anomalous behavior at small momentum transfer \cite{anom13-20}). The inclusion of the spin-flip parts of the scattering amplitude allows one to describe the low energy experimental polarization data of $pp$ elastic scattering. It is shown that the non-perturbative spin effects at high energies may not be small. It should be noted that the real part of the scattering amplitude, on which the form of the diffraction dip heavily depends, is determined in the framework of the HEGS model only by the complex $\hat{s}$, and hence it is tightly connected with the imaginary part of the scattering amplitude and satisfies analyticity and the dispersion relations.
The HEGS model reproduces well the form and the energy dependence of the diffraction dip of proton-proton and proton-antiproton elastic scattering \cite{Dif-min}. The research into the form and energy dependence of the diffraction minimum of the differential cross sections of elastic hadron-hadron scattering at different energies will give valuable information on the structure of the hadron scattering amplitude, and hence on the hadron structure and the dynamics of strong interactions. The diffraction minimum is created under a strong impact of the unitarization procedure. Its dip depends on the contributions of the real part of the spin-non-flip amplitude and the whole contribution of the spin-flip scattering amplitude. In the framework of the HEGS model, we show a deep connection between the elastic and inelastic cross sections, which are tightly connected with the hadron structure at small and large distances. Quantitatively, for the different fine structures of the scattering amplitude, a wider analysis is needed. This concerns the fixed intercept taken from the deep inelastic processes and the fixed Regge slope $\alpha^{\prime}$, as well as the form of the spin-flip amplitude. Such an analysis requires a wider range of experimental data, including the polarization data of $A_N(s,t)$, $A_{NN}(s,t)$, $A_{LL}(s,t)$, $A_{SL}(s,t)$. The obtained information on the sizes and energy dependence of the spin-flip and double spin-flip amplitudes will make it possible to better understand the results of the famous experiments carried out by A. Krisch at the ZGS, which obtained the spin-dependent differential cross sections \cite{Krish1a,Krish1b} and the spin correlation parameter $A_{NN}$, and at the AGS \cite{Krish2}, which obtained the spin correlation parameter $A_{N}$, showing significant spin effects at large momentum transfer.
The present analysis, which includes the contributions of the spin-flip amplitudes, also shows a large contradiction between the extracted value of $\rho(s,t)$ and the predictions of the analysis based on the dispersion relations. In our opinion, however, an additional analysis is needed, which should include additional corrections connected with the possible oscillation in the scattering amplitude and with the $t$-dependence of the spin-flip scattering amplitude. We hope that future experiments at NICA will give valuable information for the improvement of our theoretical understanding of strong hadron interactions. \vspace{2.cm}
\section{Introduction} The classical tilting theory was introduced in the context of finitely generated modules over artin algebras by Auslander et al. \cite{MA_1979}, Brenner and Butler \cite{SB_1980}, Happel and Ringel \cite{DH_1982} and others, and since then it has played a central role in the development of the representation theory of artin algebras. Triangulated categories were introduced by Verdier \cite{JLV_1997}. Keller \cite{BK_2007} introduced the notion of tilting objects in an algebraic triangulated category. The homological theory of triangulated categories was developed by Beligiannis \cite{AB_2000}. It parallels the homological theory in an exact category in the sense of Quillen. Using a proper class $\xi$ of triangles in a triangulated category $\mathcal{C}$, Beligiannis defined the notions of $\xi$-projective objects, $\xi$-injective objects, $\xi$-projective dimension, and so on. Recently, Y.G. Hu et al. \cite{YGH_2020} introduced the notion of a $\xi$-tilting object in a triangulated category by means of the homological theory of the triangulated category. Triangulated categories and exact categories are two fundamental structures in mathematics. They are also important tools in many branches of mathematics. It is well known that these two kinds of categories have some similarities: while exact categories admit short exact sequences, triangulated categories admit triangles. By extracting the similarities between triangulated categories and exact categories, Nakaoka and Palu \cite{NP_2019} introduced the notion of extriangulated categories. Such a category is a triplet $(\mathcal{C},\mathbb{E},\mathfrak{s})$, where $\mathcal{C}$ is an additive category, $\mathbb{E}: \mathcal{C}^{op}\times \mathcal{C}\rightarrow \mathcal{A}b$ is a biadditive functor and $\mathfrak{s}$ assigns to each $\delta \in \mathbb{E}(C,A)$ a class of 3-term sequences with end terms $A$ and $C$ such that certain axioms hold.
Using the proper class $\xi$ of $\mathbb{E}$-triangles in an extriangulated category $\mathcal{C}$, J.S. Hu \cite{JSH1_2020} defined the notions of $\xi$-projective objects, $\xi$-injective objects, $\xi$-projective dimension, and so on. The aim of this paper is to extend the results of Y.G. Hu et al. \cite{YGH_2020} to extriangulated categories. We introduce the notion of a $\xi$-tilting object in an extriangulated category and study its properties. More precisely, we obtain the $\mathbb{E}$-triangle version of the Bazzoni characterization in extriangulated categories. This paper is organized as follows. In Section 2, we recall the definition of extriangulated categories and some basic properties which are needed in the paper. In Section 3, we extend Y.G. Hu's notion of a $\xi$-tilting object in a triangulated category \cite{YGH_2020} to that of a $\xi$-tilting object in an extriangulated category, and we state and prove our main result, the Bazzoni characterization of $\xi$-tilting objects in an extriangulated category, as follows. For unexplained notions in the following theorem, we refer to Section $2$. $\mathbf{Theorem}$ \ref{T34} Let $T$ be an object in $\mathcal{C}$. Then the following statements are equivalent. (1) $T$ is a $\xi$-tilting object. (2) $T^{\perp} = \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. \section{Preliminaries} Throughout this paper, let $\mathcal{C}$ be an additive category. If $A,B \in \mathcal{C}$, then we denote the set of morphisms $A \rightarrow B$ in $\mathcal{C}$ by $\mathcal{C}(A,B)$. We denote the identity morphism of an object $C \in \mathcal{C}$ by $1=1_{C}$. If $f \in \mathcal{C}(A,B)$ and $g \in \mathcal{C}(B,C)$, we denote the composition of $f$ and $g$ by $gf$. By a subcategory $\mathcal{D}$ of $\mathcal{C}$, we always mean an additive full subcategory which is closed under isomorphisms, direct sums and direct summands.
We recall some basics on extriangulated categories from \cite{NP_2019, JSH1_2020, JSH2_2020, JSH3_2020}. Suppose $\mathcal{C}$ is equipped with a biadditive functor $\mathbb{E}:\mathcal{C}^{op}\times \mathcal{C}\rightarrow \mathcal{ A}b$, where $\mathcal{A}b$ is the category of abelian groups. For any pair of objects $A,C\in \mathcal{C}$, an element $\delta \in \mathbb{E}(C,A)$ is called an $\mathbb{E}$-$extension$. Thus formally, an $\mathbb{E}$-extension is a triplet $(A,\delta,C)$. Let $(A,\delta,C)$ be an $\mathbb{E}$-extension. Since $\mathbb{E}$ is a bifunctor, for any $a \in \mathcal{C}(A,A')$ and $c \in \mathcal{C}(C',C)$, we have $\mathbb{E}$-extensions $$\mathbb{E}(C,a)(\delta)\in \mathbb{E}(C,A')~~~~\text{and}~~~~\mathbb{E}(c,A)(\delta)\in \mathbb{E}(C',A).$$ They are abbreviated to $a_{\ast}\delta$ and $c^{\ast}\delta$ respectively. In this terminology, we have $$\mathbb{E}(c,a)(\delta)=c^{\ast}a_{\ast}\delta=a_{\ast}c^{\ast}\delta$$ in $\mathbb{E}(C',A')$. For any $A,C \in \mathcal{C}$, the zero element $0 \in \mathbb{E}(C,A)$ is called the $split$ $\mathbb{E}$-$extension$. \begin{definition} \cite[Definition 2.3]{NP_2019} Let $(A,\delta,C)$, $(A',\delta',C')$ be any pair of $\mathbb{E}$-extensions. A $morphism$ $$(a,c):(A,\delta ,C)\rightarrow (A',\delta',C')$$ of $\mathbb{E}$-extensions is a pair of morphisms $a \in \mathcal{C}(A,A')$ and $c \in \mathcal{C}(C,C')$ in $\mathcal{C}$, satisfying the equality $$a_{\ast}\delta=c^{\ast}\delta'.$$ Simply we denote it as $(a,c):\delta \rightarrow \delta'$. \end{definition} \begin{definition} \cite[Definition 2.6]{NP_2019} Let $\delta=(A,\delta,C),~\delta'=(A',\delta',C')$ be any pair of $\mathbb{E}$-extensions. Let $$C\stackrel{l_{C}}\rightarrow C \oplus C' \stackrel{l_{C'}}\leftarrow C'$$ and $$A\stackrel{p_{A}}\leftarrow A\oplus A'\stackrel{p_{A'}}\rightarrow A'$$ be coproduct and product in $\mathcal{C}$, respectively. 
Remark that, by the additivity of $\mathbb{E}$, we have a natural isomorphism $$\mathbb{E}(C\oplus C',A\oplus A')\simeq \mathbb{E}(C,A)\oplus \mathbb{E}(C,A')\oplus \mathbb{E}(C',A)\oplus \mathbb{E}(C',A'). $$ Let $\delta \oplus \delta'\in \mathbb{E}(C\oplus C',A\oplus A') $ be the element corresponding to $(\delta,0,0,\delta')$ through this isomorphism. This is the unique element which satisfies $$\mathbb{E}(l_{C},p_{A})(\delta \oplus \delta')=\delta,~~~~\mathbb{E}(l_{C},p_{A'})(\delta \oplus \delta')=0,$$ $$\mathbb{E}(l_{C'},p_{A})(\delta \oplus \delta')=0,~~~~\mathbb{E}(l_{C'},p_{A'})(\delta \oplus \delta')=\delta'.$$ \end{definition} \begin{definition} \cite[Definition 2.7]{NP_2019} Let $A,C \in \mathcal{C}$ be any pair of objects. Sequences of morphisms in $\mathcal{C}$ $$A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C~~~~\text{and}~~~~A\stackrel{x'}\rightarrow B'\stackrel{y'}\rightarrow C$$ are said to be $equivalent$ if there exists an isomorphism $b\in \mathcal{C}(B,B')$ which makes the following diagram commutative. $$\xymatrix{ A\ar[r]^{x}\ar@{=}[d]&B\ar[r]^{y}\ar[d]_{\simeq}^{b}& C\ar@{=}[d] \\ A\ar[r]_{x'}& B'\ar[r]_{y'} &C } $$ We denote the equivalence class of $A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C$ by $[A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C]$. For any $A,C\in \mathcal{C}$, we denote as $0=[A\stackrel{\left[\begin{smallmatrix} \ 1_{A} \\ 0 \end{smallmatrix}\right]}\rightarrow A\oplus C\stackrel{\left[\begin{smallmatrix} \ 0&1_{C} \end{smallmatrix}\right]}\rightarrow C]$. \end{definition} \begin{definition} \cite[Definition 2.9 and 2.10]{NP_2019} Let $\mathfrak{s}$ be a correspondence which associates an equivalence class $\mathfrak{s}(\delta)=[A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C]$ to any $\mathbb{E}$-extension $\delta \in \mathbb{E}(C,A)$. 
This $\mathfrak{s}$ is called a $realization$ of $\mathbb{E}$, if it satisfies the following condition: $(\bullet)$ Let $\delta \in \mathbb{E}(C,A)$ and $\delta' \in \mathbb{E}(C',A')$ be any pair of $\mathbb{E}$-extensions, with $$\mathfrak{s}(\delta)=[A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C],~~~~\mathfrak{s}(\delta')=[A'\stackrel{x'}\rightarrow B'\stackrel{y'}\rightarrow C'].$$ Then, for any morphism $(a,c):\delta\rightarrow \delta'$, there exists $b \in \mathcal{C}(B,B')$ which makes the following diagram commutative. $$\xymatrix{ A\ar[r]^{x}\ar[d]^{a}&B\ar[r]^{y}\ar[d]^{b}& C\ar[d]^{c} \\ A'\ar[r]_{x'}& B'\ar[r]_{y'} &C' } $$ In this case, we say that sequence $A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C$ realizes $\delta$, whenever it satisfies $\mathfrak{s}(\delta)=[A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C]$. Note that this condition does not depend on the choices of the representatives of the equivalence classes. In the above situation, we say that $(a,b,c)$ $realizes$ $(a,c)$. A realization $\mathfrak{s}$ of $\mathbb{E}$ is called $additive$ if the following conditions are satisfied. $(1)$ For any $A,C\in \mathcal{C}$, the split $\mathbb{E}$-extension $0\in \mathbb{E}(C,A)$ satisfies $\mathfrak{s}(0)=0.$ $(2)$ For any pair of $\mathbb{E}$-extensions $\delta \in \mathbb{E}(C,A)$ and $\delta' \in \mathbb{E}(C',A')$, we have $\mathfrak{s}(\delta \oplus \delta')=\mathfrak{s}(\delta)\oplus \mathfrak{s}(\delta')$. \end{definition} \begin{definition}\cite[Definition 2.12]{NP_2019} We call the triplet $(\mathcal{C},\mathbb{E},\mathfrak{s})$ an $externally$ $triangulated$ $category$ (or $extriangulated$ $category$ $\mathcal{C}$ for short) if it satisfies the following conditions: (ET1) $\mathbb{E}:\mathcal{C}^{op}\times \mathcal{C}\rightarrow \mathcal{A}b$ is a biadditive functor. (ET2) $\mathfrak{s}$ is an additive realization of $\mathbb{E}$. 
(ET3) Let $\delta\in \mathbb{E}(C,A)$ and $\delta'\in \mathbb{E}(C',A')$ be any pair of $\mathbb{E}$-extensions, realized as $$\mathfrak{s}(\delta)=[A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C],~~~~\mathfrak{s}(\delta')=[A'\stackrel{x'}\rightarrow B'\stackrel{y'}\rightarrow C'].$$ For any commutative square $$\xymatrix{ A\ar[r]^{x}\ar[d]^{a}&B\ar[r]^{y}\ar[d]^{b}& C \\ A'\ar[r]_{x'}& B'\ar[r]_{y'} &C' } $$ in $\mathcal{C}$, there exists a morphism $(a,c):\delta \rightarrow \delta'$ which is realized by $(a,b,c)$. (ET3)$^{\text{op}}$ Let $\delta\in \mathbb{E}(C,A)$ and $\delta'\in \mathbb{E}(C',A')$ be any pair of $\mathbb{E}$-extensions, realized as $$\mathfrak{s}(\delta)=[A\stackrel{x}\rightarrow B\stackrel{y}\rightarrow C],~~~~\mathfrak{s}(\delta')=[A'\stackrel{x'}\rightarrow B'\stackrel{y'}\rightarrow C'].$$ For any commutative square $$\xymatrix{ A\ar[r]^{x}&B\ar[r]^{y}\ar[d]^{b}& C\ar[d]^{c} \\ A'\ar[r]_{x'}& B'\ar[r]_{y'} &C' } $$ in $\mathcal{C}$, there exists a morphism $(a,c):\delta \rightarrow \delta'$ which is realized by $(a,b,c)$. (ET4) Let $(A,\delta,D)$ and $(B,\delta',F)$ be $\mathbb{E}$-extensions realized by $$A\stackrel{f}\rightarrow B\stackrel{f'}\rightarrow D,~~~~B\stackrel{g}\rightarrow C\stackrel{g'}\rightarrow F$$ respectively. Then there exist an object $E \in \mathcal{C}$, a commutative diagram $$\xymatrix{ A\ar@{=}[d]\ar[r]^{f}&B\ar[r]^{f'}\ar[d]^{g}& D\ar[d]^{d} \\ A\ar[r]_{h}& C\ar[r]_{h'}\ar[d]^{g'} &E\ar[d]^{e}\\ &F\ar@{=}[r]&F } $$ in $\mathcal{C}$, and an $\mathbb{E}$-extension $\delta'' \in \mathbb{E}(E,A)$ realized by $A\stackrel{h}\rightarrow C\stackrel{h'}\rightarrow E$, which satisfy the following compatibilities. (i) $D\stackrel{d}\rightarrow E\stackrel{e}\rightarrow F$ realizes $f'_{\ast}\delta'$, (ii) $d^{\ast}\delta''=\delta$, (iii) $f_{\ast}\delta''=e^{\ast}\delta'$.
(ET4)$^{\text{op}}$ Let $(D,\delta,B)$ and $(F,\delta',C)$ be $\mathbb{E}$-extensions realized by $$D\stackrel{f'}\rightarrow A\stackrel{f}\rightarrow B~~~~\text{and}~~~~F\stackrel{g'}\rightarrow B\stackrel{g}\rightarrow C$$ respectively. Then there exist an object $E \in \mathcal{C}$, a commutative diagram $$\xymatrix{ D\ar@{=}[d]\ar[r]^{d}&E\ar[r]^{e}\ar[d]^{h'}& F\ar[d]^{g'} \\ D\ar[r]_{f'}& A\ar[r]_{f}\ar[d]^{h} &B\ar[d]^{g}\\ &C\ar@{=}[r]&C } $$ in $\mathcal{C}$, and an $\mathbb{E}$-extension $\delta''\in \mathbb{E}(C,E)$ realized by $E\stackrel{h'}\rightarrow A\stackrel{h}\rightarrow C$, which satisfy the following compatibilities. (i) $D\stackrel{d}\rightarrow E\stackrel{e}\rightarrow F$ realizes $g'^{\ast}\delta$, (ii) $\delta'=e_{\ast}\delta''$, (iii) $d_{\ast}\delta=g^{\ast}\delta''$. \end{definition} The following condition is analogous to the weak idempotent completeness in an exact category (see \cite[Condition 5.8]{NP_2019}). \begin{condition} (Condition (WIC))\label{W1} Consider the following conditions. (1) Let $f \in \mathcal{C}(A,B)$, $g \in \mathcal{C}(B,C)$ be any composable pair of morphisms. If $gf$ is an inflation, then so is $f$. (2) Let $f \in \mathcal{C}(A,B)$, $g \in \mathcal{C}(B,C)$ be any composable pair of morphisms. If $gf$ is a deflation, then so is $g$. \end{condition} \begin{lemma}\cite[Corollary 3.5]{NP_2019}\label{L3} Let $\mathcal{C}$ be an extriangulated category, and $$\xymatrix{ A\ar[r]^{x}\ar[d]^{a}&B\ar[r]^{y}\ar[d]^{b}& C\ar[d]^{c}\ar@{-->}[r]^{\delta}& \\ A'\ar[r]_{x'}& B'\ar[r]_{y'} &C'\ar@{-->}[r]^{\delta'}& } $$ be any morphism of $\mathbb{E}$-triangles. Then the following are equivalent. (1) $a$ factors through $x$; (2) $a_{\ast}\delta=c^{\ast}\delta'=0$; (3) $c$ factors through $y'$.
In particular, in the case $\delta=\delta'$ and $(a,b,c)=(1,1,1)$, we obtain $$x~is~a~section~\Leftrightarrow \delta~splits~\Leftrightarrow ~y~is~a~retraction.$$ The full subcategory consisting of the split $\mathbb{E}$-triangles will be denoted by $\Delta_{0}$. \end{lemma} The following concepts are quoted verbatim from \cite{JSH1_2020, JSH2_2020, JSH3_2020}. A class of $\mathbb{E}$-triangles $\xi$ is $closed$ $under$ $base~ change$ if for any $\mathbb{E}$-triangle $$A\stackrel{x}\longrightarrow B\stackrel{y}\longrightarrow C\stackrel{\delta}\dashrightarrow$$ in $\xi$ and any morphism $c:C' \rightarrow C$, any $\mathbb{E}$-triangle $A\stackrel{x'}\longrightarrow B'\stackrel{y'}\longrightarrow C'\stackrel{c^{*}\delta}\dashrightarrow$ belongs to $\xi$. Dually, a class of $\mathbb{E}$-triangles $\xi$ is $closed~ under~ cobase~ change$ if for any $\mathbb{E}$-triangle $$A\stackrel{x}\longrightarrow B\stackrel{y}\longrightarrow C\stackrel{\delta}\dashrightarrow$$ in $\xi$ and any morphism $a:A \rightarrow A'$, any $\mathbb{E}$-triangle $A'\stackrel{x'}\longrightarrow B'\stackrel{y'}\longrightarrow C\stackrel{a_{*}\delta}\dashrightarrow$ belongs to $\xi$. A class of $\mathbb{E}$-triangles $\xi$ is called $saturated$ if, in the situation of \cite[Proposition 3.15]{NP_2019}, whenever\\ $A_{2}\stackrel{x_{2}}\longrightarrow B_{2}\stackrel{y_{2}}\longrightarrow C\stackrel{\delta_{2}}\dashrightarrow$ and $A_{1}\stackrel{m_{1}}\longrightarrow M\stackrel{e_{1}}\longrightarrow B_{2}\stackrel{y_{2}^{*}\delta_{1}}\dashrightarrow$ belong to $\xi$, the $\mathbb{E}$-triangle $A_{1}\stackrel{x_{1}}\longrightarrow B_{1}\stackrel{y_{1}}\longrightarrow C\stackrel{\delta_{1}}\dashrightarrow$ belongs to $\xi$. \begin{definition}\cite[Definition 3.1]{JSH1_2020} Let $\xi$ be a class of $\mathbb{E}$-triangles which is closed under isomorphisms.
$\xi$ is called a proper class of $\mathbb{E}$-triangles if the following conditions hold: (1) $\xi$ is closed under finite coproducts and $\Delta_{0} \subseteq \xi$. (2) $\xi$ is closed under base change and cobase change. (3) $\xi$ is saturated. \end{definition} \begin{definition}\cite[Definition 4.1]{JSH1_2020} An object $P \in \mathcal{C}$ is called $\xi$-$projective$ if for any $\mathbb{E}$-triangle $$A\stackrel{x}\longrightarrow B\stackrel{y}\longrightarrow C\stackrel{\delta}\dashrightarrow$$ in $\xi$, the induced sequence of abelian groups $$0 \longrightarrow \mathcal{C}(P,A)\stackrel{}\longrightarrow \mathcal{C}(P,B)\stackrel{}\longrightarrow \mathcal{C}(P,C)\longrightarrow 0$$ is exact. Dually, we have the definition of $\xi$-$injective$ objects. \end{definition} We denote by $\mathcal{P}(\xi)$ (resp. $\mathcal{I}(\xi)$) the class of $\xi$-projective (resp. $\xi$-injective) objects of $\mathcal{C}$. It follows from the definition that the subcategories $\mathcal{P}(\xi)$ and $\mathcal{I}(\xi)$ are full, additive, closed under isomorphisms and direct summands. An extriangulated category $(\mathcal{C},\mathbb{E},\mathfrak{s})$ is said to have $enough$ $\xi$-$projectives$ (resp. $enough$ $\xi$-$injectives$) provided that for each object $A$ there exists an $\mathbb{E}$-triangle $K\stackrel{}\longrightarrow P\stackrel{}\longrightarrow A\stackrel{}\dashrightarrow$ (resp. $A\stackrel{}\longrightarrow I\stackrel{}\longrightarrow K\stackrel{}\dashrightarrow$) in $\xi$ with $P \in \mathcal{P}(\xi)$ (resp. $I \in \mathcal{I}(\xi)$). Let $K\stackrel{}\longrightarrow P\stackrel{}\longrightarrow A\stackrel{}\dashrightarrow$ be an $\mathbb{E}$-triangle in $\xi$ with $P \in \mathcal{P}(\xi)$; then we call $K$ the $first$ $\xi$-$syzygy$ of $A$. An $n$th $\xi$-$syzygy$ of $A$ is defined as usual by induction. By Schanuel's lemma (\cite[Proposition 4.3]{JSH1_2020}), any two $\xi$-syzygies of $A$ are isomorphic modulo $\xi$-projectives.
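For orientation, it may help to keep in mind the following special case (our illustration; it is not taken from \cite{JSH1_2020}): let $\mathcal{C}$ be an abelian category with enough projectives, viewed as an extriangulated category with $\mathbb{E}=\operatorname{Ext}^{1}$, and let $\xi$ be the class of all $\mathbb{E}$-triangles, i.e. all short exact sequences. Then the notions above reduce to the classical ones:

```latex
% Illustration (our sketch): the abelian case with xi = all short
% exact sequences.  Here
%   P(xi) = the ordinary projective objects of C,
% and the first xi-syzygy of A is the ordinary first syzygy, given by
% any short exact sequence
\[
  0 \longrightarrow K \longrightarrow P \longrightarrow A
    \longrightarrow 0 , \qquad P \ \text{projective},
\]
% read as the E-triangle
\[
  K \longrightarrow P \longrightarrow A
    \stackrel{\delta}{\dashrightarrow}, \qquad \delta \in
    \operatorname{Ext}^{1}(A,K).
\]
```

In this case Schanuel's lemma for $\xi$-syzygies is exactly the classical Schanuel lemma for modules, which motivates the terminology.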
The $\xi$-$projective$ $dimension$ $\xi$-pd$A$ of $A \in \mathcal{C}$ is defined inductively. If $A \in \mathcal{P}(\xi)$, then define $\xi$-pd$A$ = 0. Next, for $n > 0$, define $\xi$-pd$A \leq n$ if there exists an $\mathbb{E}$-triangle $K\stackrel{}\longrightarrow P\stackrel{}\longrightarrow A\stackrel{}\dashrightarrow$ in $\xi$ with $P \in \mathcal{P}(\xi)$ and $\xi$-pd$K \leq n-1$. Finally we define $\xi$-pd$A = n$ if $\xi$-pd$A \leq n$ and $\xi$-pd$A \nleq n-1$. Of course we set $\xi$-pd$A = \infty$ if $\xi$-pd$A \neq n$ for all $n \geq 0$. Dually, we can define the $\xi$-$injective$ $dimension$ $\xi$-id$A$ of an object $A \in \mathcal{C}$. \begin{definition}\cite[Definition 4.4]{JSH1_2020} A $\xi$-$exact$ $complex$ $\mathbf{X}$ is a diagram $$\cdots \longrightarrow X_{1}\stackrel{d_{1}} \longrightarrow X_{0}\stackrel{d_{0}} \longrightarrow X_{-1} \longrightarrow \cdots$$ in $\mathcal{C}$ such that for each integer $n$, there exists an $\mathbb{E}$-triangle $K_{n+1}\stackrel{g_{n}}\longrightarrow X_{n}\stackrel{f_{n}}\longrightarrow K_{n}\stackrel{\delta_{n}}\dashrightarrow$ in $\xi$ and $d_{n} =g_{n-1}f_{n}$. \end{definition} \begin{definition}\cite[Definition 4.5]{JSH1_2020} Let $\mathcal{W}$ be a class of objects in $\mathcal{C}$. An $\mathbb{E}$-triangle $$A\stackrel{}\longrightarrow B\stackrel{}\longrightarrow C\stackrel{\delta}\dashrightarrow$$ in $\xi$ is said to be $\mathcal{C}(-, \mathcal{W})$-$exact$ (resp. $\mathcal{C}(\mathcal{W}, -)$-$exact$) if for any $W \in \mathcal{W}$, the induced sequence of abelian groups $0 \longrightarrow \mathcal{C}(C,W)\stackrel{}\longrightarrow \mathcal{C}(B,W)\stackrel{}\longrightarrow \mathcal{C}(A,W)\longrightarrow 0$ (resp. \\ $0 \longrightarrow \mathcal{C}(W,A)\stackrel{}\longrightarrow \mathcal{C}(W,B)\stackrel{}\longrightarrow \mathcal{C}(W,C)\longrightarrow 0$) is exact in $\mathcal{A}b$.
\end{definition} \begin{definition}\cite[Definition 4.6]{JSH1_2020} Let $\mathcal{W}$ be a class of objects in $\mathcal{C}$. A complex $\mathbf{X}$ is called $\mathcal{C}(-, \mathcal{W})$-$exact$ (resp. $\mathcal{C}(\mathcal{W}, -)$-$exact$) if it is a $\xi$-exact complex $$\cdots \longrightarrow X_{1}\stackrel{d_{1}} \longrightarrow X_{0}\stackrel{d_{0}} \longrightarrow X_{-1} \longrightarrow \cdots$$ in $\mathcal{C}$ such that for each integer $n$ there is a $\mathcal{C}(-, \mathcal{W})$-$exact$ (resp. $\mathcal{C}(\mathcal{W}, -)$-$exact$) $\mathbb{E}$-triangle $K_{n+1}\stackrel{g_{n}}\longrightarrow X_{n}\stackrel{f_{n}}\longrightarrow K_{n}\stackrel{\delta_{n}}\dashrightarrow$ in $\xi$ with $d_{n} =g_{n-1}f_{n}$. \end{definition} \begin{definition}\cite[Definition 3.1]{JSH2_2020} Let $M$ be an object in $\mathcal{C}$. A $\xi$-$projective~resolution$ of $M$ is a $\xi$-exact complex $\xymatrix@C=0.5cm{ \mathbf{P} \ar[r]^{} & M }$ such that $P_{n} \in \mathcal{P}(\xi)$ for all $n \geq 0$. Dually, a $\xi$-$injective~coresolution$ of $M$ is a $\xi$-exact complex $\xymatrix@C=0.5cm{ M \ar[r]^{} & \mathbf{I} }$ such that $I_{n} \in \mathcal{I}(\xi)$ for all $n \leq 0$. \end{definition} \begin{definition}\cite[Definition 3.2]{JSH2_2020} Let $M$, $N$ be objects in $\mathcal{C}$.
(1) If we choose a $\xi$-projective~resolution $\xymatrix@C=0.5cm{ \mathbf{P} \ar[r]^{} & M }$ of $M$, then for any integer $n \geq 0$, the $\xi$-$cohomology~groups$ $\xi xt_{\mathcal{P}(\xi)}^{n}(M,N)$ are defined as $$\xi xt_{\mathcal{P}(\xi)}^{n}(M,N) = H^{n}(\mathcal{C}(\mathbf{P},N)).$$ (2) If we choose a $\xi$-injective~coresolution $\xymatrix@C=0.5cm{ M \ar[r]^{} & \mathbf{I} }$ of $M$, then for any integer $n \geq 0$, the $\xi$-$cohomology~groups$ $\xi xt_{\mathcal{I}(\xi)}^{n}(M,N)$ are defined as $$\xi xt_{\mathcal{I}(\xi)}^{n}(M,N) = H^{n}(\mathcal{C}(M,\mathbf{I})).$$ \end{definition} \begin{remark} By \cite[Lemma 3.2]{JSH3_2020}, one can see that $\xi xt_{\mathcal{P}(\xi)}^{n}(-,-)$ and $\xi xt_{\mathcal{I}(\xi)}^{n}(-,-)$ are cohomological functors for any integer $n \geq 0$, independent of the choice of $\xi$-projective~resolutions and $\xi$-injective~coresolutions, respectively. In fact, with suitable modifications of the usual proof, one obtains the isomorphism $\xi xt_{\mathcal{P}(\xi)}^{n}(M,N) \cong \xi xt_{\mathcal{I}(\xi)}^{n}(M,N)$, which is denoted by $\xi xt_{(\xi)}^{n}(M,N)$. \end{remark} \section{Main results} Throughout this section, we assume that $\mathcal{C} = (\mathcal{C},\mathbb{E},\mathfrak{s})$ is an extriangulated category satisfying Condition (WIC) and $\xi$ is a proper class of $\mathbb{E}$-triangles in $\mathcal{C}$. We also assume that $\mathcal{C}$ admits arbitrary coproducts and has enough $\xi$-projectives and $\xi$-injectives. In what follows, we also assume that $\mathcal{P}(\xi)$ is a generating subcategory of $\mathcal{C}$. Let $\mathcal{X}$ and $\mathcal{Y}$ be classes of objects of $\mathcal{C}$.
We recall the following right and left orthogonal classes: $$\mathcal{X}^{\perp} = \{Y \in \mathcal{C} \mid \xi xt_{\xi}^{1}(X,Y) = 0 \text{ for all } X \in \mathcal{X}\},$$ $$^{\perp}\mathcal{Y} = \{X \in \mathcal{C} \mid \xi xt_{\xi}^{1}(X,Y) = 0 \text{ for all } Y \in \mathcal{Y}\}.$$ We denote by $\Add \mathcal{X}$ the subcategory of all direct summands of direct sums of objects in $\mathcal{X}$. We write $\Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = \{M \in \mathcal{C} \mid$ there exists an $\mathbb{E}$-triangle $K_{1}\stackrel{}\longrightarrow Y_{0}\stackrel{}\longrightarrow M\stackrel{}\dashrightarrow$ in $\xi$ with $Y_{0} \in \Add T\}$. \\ We now give the definition of a $\xi$-$tilting~object$ in an extriangulated category in the setting we are working in. \begin{definition}\label{D31} Let $T$ be a non-zero object in $\mathcal{C}$. $T$ is said to be a $\xi$-$tilting~object$ if it satisfies the following conditions. (T1) $\xi$-pd$T \leq 1$. (T2) $\xi xt_{\xi}^{1}(T,T^{(\lambda)}) = 0$ for any cardinal $\lambda$; in other words, $T$ is a self-orthogonal object in $\mathcal{C}$. (T3) For any object $P \in \mathcal{P}(\xi)$, there exists an $\mathbb{E}$-triangle $$P\stackrel{}\longrightarrow T_{0}\stackrel{}\longrightarrow T_{1}\stackrel{}\dashrightarrow$$ in $\xi$ with $T_{0}, T_{1} \in \Add T$. Also, $T$ is called a partial $\xi$-$tilting~object$ if it satisfies conditions (T1) and (T2) above. \end{definition} This definition generalizes the classical ones for artin algebras given by Happel and Ringel \cite{DH_1982} and Miyashita \cite{YM_1986}, as well as the one given by Y.G. Hu \cite{YGH_2020} for $\xi$-tilting objects in a triangulated category. The following lemma is frequently used. \begin{lemma}\label{L32} Let $T$ be an object in $\mathcal{C}$. Then the following statements hold.
(1) If the $\mathbb{E}$-triangle $A\stackrel{}\longrightarrow B\stackrel{}\longrightarrow C\stackrel{}\dashrightarrow$ lies in $\xi$ and $B \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, then $C \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. (2) If $\Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$, then $T$ is a partial $\xi$-tilting object. \end{lemma} \noindent{\textbf{Proof}} (1) Suppose that the $\mathbb{E}$-triangle $A\stackrel{x}\longrightarrow B\stackrel{y}\longrightarrow C\stackrel{\delta}\dashrightarrow$ lies in $\xi$ and $B \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. Then there exists an $\mathbb{E}$-triangle $K_{B}\stackrel{x_{1}}\longrightarrow T_{0}\stackrel{y_{1}}\longrightarrow B\stackrel{\delta_{1}}\dashrightarrow$ in $\xi$ with $T_{0} \in \Add T$. Since $\mathcal{C}$ has enough $\xi$-projectives, there exists an $\mathbb{E}$-triangle $K_{A}\stackrel{x_{2}}\longrightarrow P_{A}\stackrel{y_{2}}\longrightarrow A\stackrel{\delta_{2}}\dashrightarrow$ in $\xi$ with $P_{A} \in \mathcal{P}(\xi)$. We obtain a commutative diagram of $\mathbb{E}$-triangles $$\xymatrix{ K_{A}\ar[d]^{x_{2}} & K_{B}\oplus P_{A}\ar[d]^{\left[\begin{smallmatrix} x_{1} & 0 \\ 0 & 1 \end{smallmatrix}\right]} & \\ P_{A}\ar[d]^{y_{2}}\ar[r]^{\left[\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right]} & T_{0}\oplus P_{A}\ar[r]^{\left[\begin{smallmatrix} 1 & 0 \end{smallmatrix}\right]}\ar[d]^{\left[\begin{smallmatrix} y_{1} & xy_{2} \end{smallmatrix}\right]} & T_{0}\ar@{-->}[r] & \\ A\ar@{-->}[d]\ar[r]_{x} & B\ar@{-->}[d]\ar[r]_{y} & C\ar@{-->}[r]_{\delta} & \\ &&& } $$ Since $\mathcal{C}$ satisfies Condition (WIC), by \cite[Lemma 5.9]{NP_2019} there exist an object $X \in \mathcal{C}$ and $\mathbb{E}$-triangles \\ $K_{A}\stackrel{}\longrightarrow K_{B} \oplus P_{A}\stackrel{}\longrightarrow X\stackrel{}\dashrightarrow$ and $X\stackrel{}\longrightarrow T_{0}\stackrel{yy_{1}}\longrightarrow C\stackrel{}\dashrightarrow$.
We now show that $X\stackrel{}\longrightarrow T_{0}\stackrel{yy_{1}}\longrightarrow C\stackrel{}\dashrightarrow$ lies in $\xi$. The $\mathbb{E}$-triangles $A\stackrel{x}\longrightarrow B\stackrel{y}\longrightarrow C\stackrel{\delta}\dashrightarrow$ and $K_{B}\stackrel{x_{1}}\longrightarrow T_{0}\stackrel{y_{1}}\longrightarrow B\stackrel{\delta_{1}}\dashrightarrow$ both lie in $\xi$, and by \cite[Corollary 3.5]{JSH1_2020} the class of $\xi$-deflations is closed under composition, so the $\mathbb{E}$-triangle $X\stackrel{}\longrightarrow T_{0}\stackrel{yy_{1}}\longrightarrow C\stackrel{}\dashrightarrow$ lies in $\xi$. Therefore, $C \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$ by definition. (2) For any cardinal $\lambda$, the split $\mathbb{E}$-triangle $0\stackrel{}\longrightarrow T^{(\lambda)}\stackrel{}\longrightarrow T^{(\lambda)}\stackrel{}\dashrightarrow$ shows that $T^{(\lambda)} \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$, so $T$ satisfies (T2); it thus suffices to show that $\xi$-pd$T \leq 1$. For any object $M \in \mathcal{C}$, since $\mathcal{C}$ has enough $\xi$-injectives, there exists an $\mathbb{E}$-triangle $M\stackrel{x_{2}}\longrightarrow I\stackrel{y_{2}}\longrightarrow K\stackrel{\delta_{2}}\dashrightarrow$ in $\xi$ with $I \in \mathcal{I}(\xi)$. Note that $\mathcal{I}(\xi) \subseteq T^{\perp} = \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, so $I \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. By (1), we have $K \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. Applying the functor $\mathcal{C}(T,-)$ to the $\mathbb{E}$-triangle $M\stackrel{x_{2}}\longrightarrow I\stackrel{y_{2}}\longrightarrow K\stackrel{\delta_{2}}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020}, we have an exact sequence $$0 = \xi xt_{\xi}^{1}(T,K) \stackrel{ }\longrightarrow \xi xt_{\xi}^{2}(T,M)\stackrel{ }\longrightarrow \xi xt_{\xi}^{2}(T,I) = 0.$$ Since $\mathcal{P}(\xi)$ is a generating subcategory of $\mathcal{C}$, by \cite[Lemma 3.9 (1)]{JSH3_2020} we have $\xi$-pd$T \leq 1$ if and only if $\xi xt_{\xi}^{2}(T,M) = 0$ for any object $M \in \mathcal{C}$. The proof is completed. \hfill$\Box$ \begin{lemma}\label{L33} Let $T$ be an object in $\mathcal{C}$.
If $\Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$, then for each $M \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, there exists an $\mathbb{E}$-triangle $K\stackrel{}\longrightarrow T_{M}\stackrel{}\longrightarrow M\stackrel{}\dashrightarrow$ in $\xi$ with $T_{M} \in \Add T$ and $K \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. \end{lemma} \noindent{\textbf{Proof}} For each $M \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, there exists an $\mathbb{E}$-triangle $K\stackrel{x}\longrightarrow T_{M}\stackrel{y}\longrightarrow M\stackrel{\delta}\dashrightarrow$ in $\xi$ with $T_{M} \in \Add T$. Since $\mathcal{C}$ has enough $\xi$-projectives, there exists an $\mathbb{E}$-triangle $K_{1}\stackrel{x_{1}}\longrightarrow P_{0}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$ in $\xi$ with $P_{0} \in \mathcal{P}(\xi)$. It follows from Lemma \ref{L32} that $T$ is self-orthogonal and $\xi$-pd$T \leq 1$. By the definition of the $\xi$-projective dimension together with Schanuel's lemma, we obtain that $\xi$-pd$K_{1} \leq 0$, and thus $K_{1} \in \mathcal{P}(\xi)$. Applying the functor $\mathcal{C}(P_{0},-)$ to the $\mathbb{E}$-triangle $K\stackrel{x}\longrightarrow T_{M}\stackrel{y}\longrightarrow M\stackrel{\delta}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020}, we have the following commutative diagram $$\xymatrix{ 0 \ar@{-->}[r]& \mathcal{C}(P_{0},K) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(P_{0},T_{M}) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(P_{0},M) \ar[d]_{\cong} \ar@{-->}[r] & 0\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(P_{0},K) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{0},T_{M}) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{0},M) \ar[r]^{} & \xi xt_{\xi}^{1}(P_{0},K) = 0 } $$ so the first row in the above diagram is exact.
Applying the functor $\mathcal{C}(K_{1},-)$ to the $\mathbb{E}$-triangle $K\stackrel{x}\longrightarrow T_{M}\stackrel{y}\longrightarrow M\stackrel{\delta}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020} again, we have the following commutative diagram $$\xymatrix{ 0 \ar@{-->}[r]& \mathcal{C}(K_{1},K) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{1},T_{M}) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{1},M) \ar[d]_{\cong} \ar@{-->}[r] & 0\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(K_{1},K) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{1},T_{M}) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{1},M) \ar[r]^{} & \xi xt_{\xi}^{1}(K_{1},K) = 0 } $$ so the first row in the above diagram is exact. Since $M \in T^{\perp}$, applying the functor $\mathcal{C}(-,M)$ to the $\mathbb{E}$-triangle $K_{1}\stackrel{x_{1}}\longrightarrow P_{0}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020} again, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,M) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},M) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{1},M) \ar[d]_{\cong} \ar@{-->}[r] & 0\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(T,M) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{0},M) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{1},M) \ar[r]^{} & \xi xt_{\xi}^{1}(T,M) = 0 } $$ so $\mathcal{C}(P_{0},M) \longrightarrow \mathcal{C}(K_{1},M)$ is an epimorphism.
Since $T$ is self-orthogonal, applying the functor $\mathcal{C}(-,T_{M})$ to the $\mathbb{E}$-triangle $K_{1}\stackrel{x_{1}}\longrightarrow P_{0}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020} again, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,T_{M}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},T_{M}) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{1},T_{M}) \ar[d]_{\cong} \ar@{-->}[r] & 0\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(T,T_{M}) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{0},T_{M}) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{1},T_{M}) \ar[r]^{} & \xi xt_{\xi}^{1}(T,T_{M}) = 0 } $$ so $\mathcal{C}(P_{0},T_{M}) \longrightarrow \mathcal{C}(K_{1},T_{M})$ is an epimorphism. Since the functor $\mathcal{C}(-,-)$ is a biadditive functor, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,K) \ar[d]_{} \ar[r]^{} & \mathcal{C}(T,T_{M}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(T,M) \ar[d]_{} & & \\ 0 \ar[r]^{} &\mathcal{C}(P_{0},K) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},T_{M}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},M) \ar[d]_{} \ar[r]^{} & 0 & \\ 0 \ar[r]^{} & \mathcal{C}(K_{1},K) \ar@{-->}[d]_{} \ar[r]^{} & \mathcal{C}(K_{1},T_{M}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(K_{1},M) \ar[d]_{} \ar[r]^{} & 0 & \\ & 0 & 0 & 0 & & } $$ in which all horizontals and the third and the fourth verticals are exact. Since $\mathcal{C}(P_{0},T_{M}) \longrightarrow \mathcal{C}(K_{1},T_{M})$ is an epimorphism, for any $g \in \mathcal{C}(K_{1},K)$, there exists $g_{1} \in \mathcal{C}(P_{0},T_{M})$ such that $xg = g_{1}x_{1}$; by (ET3), we then have the following commutative diagram $$\scalebox{0.9}[1.0]{\xymatrixcolsep{4pc}\xymatrix{ K_{1}\ar[d]_-{g }\ar[r]^{x_{1}} & P_{0}\ar@{..>}[ld]_-{h_{1}} \ar[d]^{g_{1} }\ar[r]^-{y_{1}} & T\ar@{..>}[ld]_-{h_{2}} \ar@{..>}[d]_-{g_{2} } \ar@{-->}[r]^{\delta_{1}} & \\ K\ar[r]^-{x }& T_{M} \ar[r]^-{ y} & M \ar@{-->}[r]^{\delta} &.
}} $$ Since $T_{M} \in \Add T$, $g_{2}$ factors through $y$; hence $g$ factors through $x_{1}$ by Lemma \ref{L3}, that is, the morphism $\mathcal{C}(P_{0},K) \longrightarrow \mathcal{C}(K_{1},K)$ is an epimorphism. Applying the functor $\mathcal{C}(-,K)$ to the $\mathbb{E}$-triangle $K_{1}\stackrel{x_{1}}\longrightarrow P_{0}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020} again, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,K) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},K) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{1},K) \ar[d]_{\cong} \ar[r] & 0 &\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(T,K) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{0},K) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{1},K) \ar[r]^{} & \xi xt_{\xi}^{1}(T,K) \ar[r]^{} & 0 } $$ so $\xi xt_{\xi}^{0}(P_{0},K) \longrightarrow \xi xt_{\xi}^{0}(K_{1},K)$ is an epimorphism, which implies that $\xi xt_{\xi}^{1}(T,K) = 0$. Therefore $K \in T^{\perp} = \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. \hfill$\Box$ Now we give an important characterization of $\xi$-tilting objects: a Bazzoni-type characterization of $\xi$-tilting objects in extriangulated categories. \begin{theorem}\label{T34} Let $T$ be an object in $\mathcal{C}$. Then the following statements are equivalent. (1) $T$ is a $\xi$-tilting object. (2) $T^{\perp} = \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. \end{theorem} \noindent{\textbf{Proof}} (1) $\Rightarrow$ (2) For each $M \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, there exists an $\mathbb{E}$-triangle $K\stackrel{ }\longrightarrow T_{M}\stackrel{ }\longrightarrow M\stackrel{ }\dashrightarrow$ in $\xi$ with $T_{M} \in \Add T$.
Applying the functor $\mathcal{C}(T,-)$ to the above $\mathbb{E}$-triangle, by \cite[Lemma 3.4]{JSH2_2020}, we have an exact sequence $$\xi xt_{\xi}^{1}(T,T_{M}) \stackrel{ }\longrightarrow \xi xt_{\xi}^{1}(T,M)\stackrel{ }\longrightarrow \xi xt_{\xi}^{2}(T,K).$$ Since $T$ satisfies (T2), $\xi xt_{\xi}^{1}(T,T_{M}) = 0$, and since $\xi$-pd$T \leq 1$ by (T1), $\xi xt_{\xi}^{2}(T,K) = 0$. It yields that $\xi xt_{\xi}^{1}(T,M) = 0$ and hence $M \in T^{\perp}$. Therefore, we have that $\Pres_{\mathcal{P}(\xi)}^{1}(\Add T) \subseteq T^{\perp}$. Now, assume that $M \in T^{\perp}$. Since $\mathcal{C}$ has enough $\xi$-projectives, there exists an $\mathbb{E}$-triangle $K\stackrel{x}\longrightarrow P\stackrel{y}\longrightarrow M\stackrel{\delta}\dashrightarrow$ in $\xi$ with $P \in \mathcal{P}(\xi)$. By (T3), there also exists an $\mathbb{E}$-triangle $P\stackrel{x_{1}}\longrightarrow T_{0}\stackrel{y_{1}}\longrightarrow T_{1}\stackrel{\delta_{1}}\dashrightarrow$ in $\xi$ with $T_{0}, T_{1} \in \Add T$. Applying the functor $\mathcal{C}(-,M)$ to the $\mathbb{E}$-triangle $P\stackrel{x_{1}}\longrightarrow T_{0}\stackrel{y_{1}}\longrightarrow T_{1}\stackrel{\delta_{1}}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020}, we have an exact sequence $$0 \stackrel{ }\longrightarrow \xi xt_{\xi}^{0}(T_{1},M) \stackrel{ }\longrightarrow \xi xt_{\xi}^{0}(T_{0},M)\stackrel{ }\longrightarrow \xi xt_{\xi}^{0}(P,M) \stackrel{ }\longrightarrow \xi xt_{\xi}^{1}(T_{1},M) = 0.$$ By \cite[Lemma 3.4]{JSH2_2020}, we have the following commutative diagram $$\xymatrix{ 0 \ar@{-->}[r]& \mathcal{C}(T_{1},M) \ar[d]_{ } \ar[r]^{} & \mathcal{C}(T_{0},M) \ar[d]_{ } \ar[r]^{} & \mathcal{C}(P,M) \ar[d]_{\cong} \ar@{-->}[r] & 0\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(T_{1},M) \ar[r]^{} & \xi xt_{\xi}^{0}(T_{0},M) \ar[r]^{} & \xi xt_{\xi}^{0}(P,M) \ar[r]^{} &0 } $$ Therefore the induced map $\mathcal{C}(T_{0},M) \rightarrow \mathcal{C}(P,M)$ is an epimorphism.
Since $\mathcal{C}(T_{0},M) \rightarrow \mathcal{C}(P,M)$ is an epimorphism, there exists $y_{2} \in \mathcal{C}(T_{0},M)$ such that $y = y_{2}x_{1}$. Since $\mathcal{C}$ satisfies Condition (WIC) and $y = y_{2}x_{1}$ is a deflation, $y_{2}$ is also a deflation; take $K_{1} = \mathrm{CoCone}(y_{2})$. By (ET3)$^{\op}$, we have the following commutative diagram $$\scalebox{0.9}[1.0]{\xymatrixcolsep{4pc}\xymatrix{ K\ar@{..>}[d]_-{g }\ar[r]^{x} & P\ar[d]^{x_{1} }\ar[r]^-{y} & M \ar@{=}[d]_-{ } \ar@{-->}[r]^{\delta} & \\ K_{1}\ar[r]^-{x_{2}}& T_{0} \ar[r]^-{y_{2}} & M \ar@{-->}[r]^{\delta'} &. }} $$ Hence $K_{1}\stackrel{x_{2}}\longrightarrow T_{0}\stackrel{y_{2}}\longrightarrow M\stackrel{\delta'}\dashrightarrow$ is an $\mathbb{E}$-triangle in $\xi$ with $T_{0} \in \Add T$, since $\xi$ is closed under cobase change. Therefore, we have that $T^{\perp} \subseteq \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. (2) $\Rightarrow$ (1) By Lemma \ref{L32}, it suffices to show that $T$ satisfies condition (T3). For any $P \in \mathcal{P}(\xi)$, there exists an $\mathbb{E}$-triangle $P\stackrel{x}\longrightarrow I\stackrel{y}\longrightarrow K_{0}\stackrel{\delta}\dashrightarrow$ in $\xi$ with $I \in \mathcal{I}(\xi)$, since $\mathcal{C}$ has enough $\xi$-injective objects. Note that $I \in \mathcal{I}(\xi) \subseteq T^{\perp} = \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, so there exists an $\mathbb{E}$-triangle $H\stackrel{x_{1}}\longrightarrow T_{0}\stackrel{y_{1}}\longrightarrow I\stackrel{\delta_{1}}\dashrightarrow$ in $\xi$ with $T_{0} \in \Add T$. By the $\xi$-projectivity of $P$, there exists $x_{2}: P \rightarrow T_{0}$ such that $x = y_{1}x_{2}$. Since $\mathcal{C}$ satisfies Condition (WIC) and $x = y_{1}x_{2}$ is an inflation, $x_{2}$ is also an inflation; take $K_{1} = \mathrm{Cone}(x_{2})$. Because $\xi$ is closed under base change, $P\stackrel{x_{2}}\longrightarrow T_{0}\stackrel{y_{2}}\longrightarrow K_{1}\stackrel{\delta_{2}}\dashrightarrow$ is an $\mathbb{E}$-triangle in $\xi$ with $T_{0} \in \Add T$.
This implies that $K_{1} \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$. We obtain a commutative diagram of $\mathbb{E}$-triangles $$\xymatrix{ 0\ar[d]^{ } & H\ar[d]^{x_{1}} & \\ P\ar@{=}[d]^{ }\ar[r]^{x_{2} } & T_{0} \ar[r]^{y_{2} }\ar[d]^{y_{1} } & K_{1}\ar@{-->}[r]^{\delta_{2}} & \\ P\ar@{-->}[d]\ar[r]_{x} & I\ar@{-->}[d]^{\delta_{1}}\ar[r]_{y} & K_{0}\ar@{-->}[r]_{\delta} & .\\ &&& } $$ Since $\mathcal{C}$ satisfies Condition (WIC), applying \cite[Lemma 5.9]{NP_2019}, we obtain $\mathbb{E}$-triangles $0 \stackrel{}\longrightarrow H \stackrel{}\longrightarrow H \stackrel{}\dashrightarrow$ and $H \stackrel{y_{2}x_{1}}\longrightarrow K_{1}\stackrel{y_{3}}\longrightarrow K_{0}\stackrel{}\dashrightarrow$. Since $K_{1} \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$, there exists an $\mathbb{E}$-triangle $F\stackrel{x_{3}}\longrightarrow T_{K_{1}}\stackrel{y_{4}}\longrightarrow K_{1}\stackrel{\delta_{3}}\dashrightarrow$ in $\xi$ with $T_{K_{1}} \in \Add T$. According to Lemma \ref{L33}, $F \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$. According to \cite[Proposition 3.15]{NP_2019}, we obtain a commutative diagram of $\mathbb{E}$-triangles $$\xymatrix{ & F\ar@{=}[r]^{ }\ar[d]^{ } & F\ar[d]^{x_{3}} \\ P\ar@{=}[d]^{ }\ar[r]^{ } & E \ar[r]^{ }\ar[d]^{ } & T_{K_{1}}\ar@{-->}[r]^{ }\ar[d]^{y_{4} } & \\ P\ar[r]_{x_{2}} & T_{0}\ar@{-->}[d]^{}\ar[r]_{y_{2}} & K_{1}\ar@{-->}[d]^{\delta_{3}}\ar@{-->}[r]_{\delta_{2}} & .\\ &&& } $$ It is easy to see that the $\mathbb{E}$-triangles $P\stackrel{}\longrightarrow E \stackrel{ }\longrightarrow T_{K_{1}}\stackrel{}\dashrightarrow$ and $F\stackrel{}\longrightarrow E \stackrel{ }\longrightarrow T_{0}\stackrel{}\dashrightarrow$ lie in $\xi$, because $\xi$ is closed under base change. Now we claim that the $\mathbb{E}$-triangle $P\stackrel{}\longrightarrow E \stackrel{ }\longrightarrow T_{K_{1}}\stackrel{}\dashrightarrow$ is the desired $\mathbb{E}$-triangle. Since $F, T_{0} \in T^{\perp}$, we have $E \in T^{\perp}$.
Since $E \in T^{\perp} = \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, by Lemma \ref{L33} there exists an $\mathbb{E}$-triangle $Y\stackrel{}\longrightarrow T_{E} \stackrel{ }\longrightarrow E\stackrel{}\dashrightarrow$ in $\xi$ with $T_{E} \in \Add T$ and $Y \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$. Applying the functor $\mathcal{C}(-,Y)$ to the $\mathbb{E}$-triangle $P\stackrel{}\longrightarrow E\stackrel{}\longrightarrow T_{K_{1}}\stackrel{}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020}, we have an exact sequence $$0 = \xi xt_{\xi}^{1}(T_{K_{1}},Y) \stackrel{ }\longrightarrow \xi xt_{\xi}^{1}(E,Y)\stackrel{ }\longrightarrow \xi xt_{\xi}^{1}(P,Y) = 0.$$ It follows that $\xi xt_{\xi}^{1}(E,Y) = 0$, and so the $\mathbb{E}$-triangle $Y\stackrel{}\longrightarrow T_{E} \stackrel{ }\longrightarrow E\stackrel{}\dashrightarrow$ in $\xi$ splits. Therefore $E \in \Add T$. The proof is completed. \hfill$\Box$ We write the class $\Pres_{\mathcal{P}(\xi)}^{2}(\Add T) = \{M \in \mathcal{C} \mid$ there exists an $\mathbb{E}$-triangle $K_{1}\stackrel{}\longrightarrow Y_{0}\stackrel{}\longrightarrow M\stackrel{}\dashrightarrow$ in $\xi$ \\ with $Y_{0} \in \Add T$ and $K_{1} \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) \}.$ \begin{corollary}\label{C35} If $T$ is a $\xi$-tilting object, then $\Pres_{\mathcal{P}(\xi)}^{1}(\Add T)= \Pres_{\mathcal{P}(\xi)}^{2}(\Add T)$. \end{corollary} \noindent{\textbf{Proof}} The proof follows directly from the definition and Lemma \ref{L33}. \hfill$\Box$ \begin{definition} Let $\mathcal{X}$ be a subcategory of $\mathcal{C}$. $\mathcal{X}$ is said to be a $\xi$-$covariantly~ finite~ subcategory$ of $\mathcal{C}$ if for any object $M \in \mathcal{C}$, there exists an $\mathbb{E}$-triangle $M\stackrel{}\longrightarrow X_{0} \stackrel{ }\longrightarrow K\stackrel{}\dashrightarrow$ in $\xi$ with $X_{0} \in \mathcal{X}$ such that the induced map $\mathcal{C}(X_{0},X) \rightarrow \mathcal{C}(M,X)$ is an epimorphism for any $X \in \mathcal{X}$.
Moreover, if $K \in ^{\perp}\mathcal{X}$, then $\mathcal{X}$ is called a special $\xi$-$covariantly~ finite~ subcategory$ of $\mathcal{C}$. \end{definition} \begin{proposition}\label{P37} Let $T$ be a $\xi$-tilting object in $\mathcal{C}$ and set $\mathcal{X} = T^{\perp}$. Then the following statements hold. (1) $\mathcal{X} = T^{\perp}$ is a special $\xi$-covariantly finite subcategory of $\mathcal{C}$. (2) For each $K \in ^{\perp}\mathcal{X}$, $\xi$-pd$K \leq 1$. (3) If $X\stackrel{}\longrightarrow Y \stackrel{ }\longrightarrow Z\stackrel{}\dashrightarrow$ is an $\mathbb{E}$-triangle in $\xi$ with $X, Y \in \mathcal{X}$, then $Z\in \mathcal{X}$. \end{proposition} \noindent{\textbf{Proof}} (1) For any $M \in \mathcal{C}$, since $\mathcal{C}$ has enough $\xi$-injective objects, there exists an $\mathbb{E}$-triangle $M\stackrel{}\longrightarrow I\stackrel{}\longrightarrow K\stackrel{}\dashrightarrow$ in $\xi$ with $I \in \mathcal{I}(\xi)$. Note that $T$ is a $\xi$-tilting object in $\mathcal{C}$ and $I \in \mathcal{I}(\xi) \subseteq T^{\perp} = \mathcal{X}$; according to Theorem \ref{T34} and Lemma \ref{L32} (1), we have $K \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$. Since $\mathcal{C}$ has enough $\xi$-projective objects, there exists an $\mathbb{E}$-triangle $F\stackrel{x_{1}}\longrightarrow P_{0}\stackrel{y_{1}}\longrightarrow K\stackrel{\delta_{1}}\dashrightarrow$ in $\xi$ with $P_{0} \in \mathcal{P}(\xi)$.
According to \cite[Proposition 3.15]{NP_2019}, we obtain a commutative diagram of $\mathbb{E}$-triangles $$\xymatrix{ & F\ar@{=}[r]^{ }\ar[d]^{ } & F\ar[d]^{x_{1}} \\ M\ar@{=}[d]^{ }\ar[r]^{ } & E \ar[r]^{ }\ar[d]^{ } & P_{0}\ar@{-->}[r]^{ }\ar[d]^{y_{1} } & \\ M\ar[r]_{} & I\ar@{-->}[d]^{}\ar[r]_{} &K\ar@{-->}[d]^{\delta_{1}}\ar@{-->}[r]_{ } & .\\ &&& } $$ Then the $\mathbb{E}$-triangles $M\stackrel{x}\longrightarrow E\stackrel{y}\longrightarrow P_{0}\stackrel{\delta}\dashrightarrow$ and $F\stackrel{ }\longrightarrow E\stackrel{ }\longrightarrow I\stackrel{ }\dashrightarrow$ lie in $\xi$, because $\xi$ is closed under base change. Since $T$ is a $\xi$-tilting object in $\mathcal{C}$, there exists an $\mathbb{E}$-triangle $P_{0}\stackrel{x_{2}}\longrightarrow T_{0}\stackrel{y_{2}}\longrightarrow T_{1}\stackrel{\delta_{2}}\dashrightarrow$ in $\xi$ with $T_{0}, T_{1} \in \Add T$. According to (ET4), we obtain a commutative diagram of $\mathbb{E}$-triangles $$\xymatrix{ F\ar@{=}[d]^{ }\ar[r]^{x_{1} } & P_{0}\ar[r]^{y_{1}}\ar[d]^{x_{2} } & K\ar[d]^{ }\ar@{-->}[r]^{\delta_{1}} & \\ F\ar[r]^{x_{2}x_{1} } & T_{0} \ar[r]^{y_{3} }\ar[d]^{y_{2} } & G\ar@{-->}[r]^{\delta_{3} }\ar[d]^{ } & \\ & T_{1}\ar@{-->}[d]^{\delta_{2}}\ar@{=}[r]_{ } &T_{1}\ar@{-->}[d]^{} & .\\ &&& } $$ According to \cite[Corollary 3.5]{JSH1_2020}, the class of $\xi$-inflations is closed under composition, so the $\mathbb{E}$-triangle $F\stackrel{x_{2}x_{1}}\longrightarrow T_{0}\stackrel{y_{3}}\longrightarrow G\stackrel{\delta_{3}}\dashrightarrow$ lies in $\xi$. Since $T_{0} \in \Add T$, we get $G \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$. According to Lemma \ref{L33}, $F \in \Pres_{\mathcal{P}(\xi)}^{1}(\Add T) = T^{\perp}$. Considering the $\mathbb{E}$-triangle $F\stackrel{}\longrightarrow E\stackrel{}\longrightarrow I\stackrel{}\dashrightarrow$, we have $E \in \mathcal{X} = T^{\perp}$, because $F, I \in T^{\perp}$.
By the preceding discussion, we obtain the $\mathbb{E}$-triangle $M\stackrel{x}\longrightarrow E\stackrel{y}\longrightarrow P_{0}\stackrel{\delta}\dashrightarrow$ in $\xi$ with $E \in \mathcal{X} = T^{\perp}$. Clearly, by \cite[Lemma 3.9]{JSH3_2020}, $P_{0} \in ^{\perp}\mathcal{X}$. For any $X \in \mathcal{X}$, since $\mathcal{C}$ has enough $\xi$-injective objects, there exists an $\mathbb{E}$-triangle $X\stackrel{x_{0}}\longrightarrow I_{0}\stackrel{y_{0}}\longrightarrow K_{X}\stackrel{\delta_{0}}\dashrightarrow$ in $\xi$ with $I_{0} \in \mathcal{I}(\xi)$. Since $P_{0} \in \mathcal{P}(\xi)$, we have an exact sequence of abelian groups $$0 \stackrel{ }\longrightarrow \mathcal{C}(P_{0},X) \stackrel{ }\longrightarrow \mathcal{C}(P_{0},I_{0}) \stackrel{ }\longrightarrow \mathcal{C}(P_{0},K_{X}) \stackrel{ }\longrightarrow 0.$$ Since $I_{0} \in \mathcal{I}(\xi)$, we have an exact sequence of abelian groups $$0 \stackrel{ }\longrightarrow \mathcal{C}(P_{0},I_{0}) \stackrel{ }\longrightarrow \mathcal{C}(E,I_{0}) \stackrel{ }\longrightarrow \mathcal{C}(M,I_{0}) \stackrel{ }\longrightarrow 0.$$ Because the functor $\mathcal{C}(-,-)$ is a biadditive functor, we have the following commutative diagram $$\xymatrix{ & & 0\ar[d]_{} & & \\ 0 \ar[r]^{} & \mathcal{C}(P_{0},X) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},I_{0}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{0},K_{X}) \ar[d]_{}\ar[r]^{} & 0\\ & \mathcal{C}(E,X) \ar[d]_{} \ar[r]^{} & \mathcal{C}(E,I_{0}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(E,K_{X}) \ar[d]_{} & \\ & \mathcal{C}(M,X) \ar[r]^{} & \mathcal{C}(M,I_{0}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(M,K_{X}) & \\ & & 0 & & } $$ in which all horizontals and verticals are exact.
Since $\mathcal{C}(E,I_{0}) \longrightarrow \mathcal{C}(M,I_{0})$ is an epimorphism, for any $g \in \mathcal{C}(M,X)$, there exists $g_{1} \in \mathcal{C}(E,I_{0})$ such that $x_{0}g = g_{1}x$; by (ET3), we then have the following commutative diagram $$\scalebox{0.9}[1.0]{\xymatrixcolsep{4pc}\xymatrix{ M\ar[d]_-{g }\ar[r]^{x} & E\ar@{..>}[ld]_-{h_{1}} \ar[d]^{g_{1} }\ar[r]^-{y} & P_{0}\ar@{..>}[ld]_-{h_{2}} \ar@{..>}[d]_-{g_{2} } \ar@{-->}[r]^{\delta} & \\ X\ar[r]^-{x_{0} }& I_{0} \ar[r]^-{ y_{0}} & K_{X} \ar@{-->}[r]^{\delta_{0}} &. }} $$ Since $P_{0} \in \mathcal{P}(\xi)$, $g_{2}$ factors through $y_{0}$; hence $g$ factors through $x$ by Lemma \ref{L3}, that is, the morphism $\mathcal{C}(E,X) \longrightarrow \mathcal{C}(M,X)$ is an epimorphism. Therefore, $\mathcal{X} = T^{\perp}$ is a special $\xi$-covariantly finite subcategory of $\mathcal{C}$. (2) Suppose $Y \in ^{\perp} \mathcal{X}$. For any $M \in \mathcal{C}$, since $\mathcal{C}$ has enough $\xi$-injective objects, there exists an $\mathbb{E}$-triangle $M\stackrel{}\longrightarrow I\stackrel{}\longrightarrow K\stackrel{}\dashrightarrow$ in $\xi$ with $I \in \mathcal{I}(\xi)$. Applying the functor $\mathcal{C}(Y,-)$ to this $\mathbb{E}$-triangle, by \cite[Lemma 3.4]{JSH2_2020}, we have an exact sequence $$ \xi xt_{\xi}^{1}(Y,K) \stackrel{ }\longrightarrow \xi xt_{\xi}^{2}(Y,M)\stackrel{ }\longrightarrow \xi xt_{\xi}^{2}(Y,I) = 0.$$ Since $K \in T^{\perp} = \mathcal{X}$, we have $\xi xt_{\xi}^{2}(Y,M) = 0$ for any object $M \in \mathcal{C}$, and so $\xi$-pd$Y \leq 1$ by \cite[Lemma 3.9 (1)]{JSH3_2020}. (3) Applying the functor $\mathcal{C}(T,-)$ to the $\mathbb{E}$-triangle $X\stackrel{}\longrightarrow Y \stackrel{ }\longrightarrow Z\stackrel{}\dashrightarrow$ yields an exact sequence $$\xi xt_{\xi}^{1}(T,Y) \stackrel{ }\longrightarrow \xi xt_{\xi}^{1}(T,Z)\stackrel{ }\longrightarrow \xi xt_{\xi}^{2}(T,X),$$ whose outer terms vanish because $Y \in T^{\perp}$ and $\xi$-pd$T \leq 1$; hence $Z \in \mathcal{X}$. \hfill$\Box$ \begin{lemma}\label{L38} Let $T$ be an object in $\mathcal{C}$ and $\mathcal{X} \subseteq T^{\perp}$. Assume that $T$ satisfies conditions (T1) and (T2). If $\mathcal{X} \subseteq \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$, then for each $X \in \mathcal{X}$, there exists an $\mathbb{E}$-triangle $K_{X}\stackrel{}\longrightarrow T_{X} \stackrel{ }\longrightarrow X\stackrel{}\dashrightarrow$ in $\xi$ with $T_{X} \in \Add T$ and $K_{X} \in T^{\perp}$.
\end{lemma} \noindent{\textbf{Proof}} Assume that $X \in \mathcal{X} \subseteq \Pres_{\mathcal{P}(\xi)}^{1}(\Add T)$. Then there exists an $\mathbb{E}$-triangle $K_{X}\stackrel{x}\longrightarrow T_{X} \stackrel{y }\longrightarrow X\stackrel{\delta}\dashrightarrow$ in $\xi$ with $T_{X} \in \Add T$. Since $\mathcal{C}$ has enough $\xi$-projective objects, there exists an $\mathbb{E}$-triangle $K_{T}\stackrel{x_{1}}\longrightarrow P_{T}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$ in $\xi$ with $P_{T} \in \mathcal{P}(\xi)$. Since $T$ satisfies condition (T1), $\xi$-pd$T \leq 1$; by the definition of the $\xi$-projective dimension together with Schanuel's lemma, we obtain that $\xi$-pd$K_{T} \leq 0$, and thus $K_{T} \in \mathcal{P}(\xi)$. Since $P_{T}, K_{T} \in \mathcal{P}(\xi)$, we have exact sequences of abelian groups $$0 \stackrel{ }\longrightarrow \mathcal{C}(P_{T},K_{X}) \stackrel{ }\longrightarrow \mathcal{C}(P_{T},T_{X}) \stackrel{ }\longrightarrow \mathcal{C}(P_{T},X) \stackrel{ }\longrightarrow 0,$$ and $$0 \stackrel{ }\longrightarrow \mathcal{C}(K_{T},K_{X}) \stackrel{ }\longrightarrow \mathcal{C}(K_{T},T_{X}) \stackrel{ }\longrightarrow \mathcal{C}(K_{T},X) \stackrel{ }\longrightarrow 0.$$ Since $T$ satisfies condition (T2), applying the functor $\mathcal{C}(-,T_{X})$ to the $\mathbb{E}$-triangle $K_{T}\stackrel{x_{1}}\longrightarrow P_{T}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$, by \cite[Lemma 3.4]{JSH2_2020}, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,T_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{T},T_{X}) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{T},T_{X}) \ar[d]_{\cong} \ar@{-->}[r] & 0\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(T,T_{X}) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{T},T_{X}) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{T},T_{X}) \ar[r]^{} & \xi xt_{\xi}^{1}(T,T_{X}) = 0 } $$ so $\mathcal{C}(P_{T},T_{X}) \longrightarrow \mathcal{C}(K_{T},T_{X})$ is an epimorphism.
Since the functor $\mathcal{C}(-,-)$ is biadditive, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,K_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(T,T_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(T,X) \ar[d]_{} & \\ 0 \ar[r]^{} & \mathcal{C}(P_{T},K_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{T},T_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{T},X) \ar[d]_{} \ar[r]^{} & 0 \\ 0 \ar[r]^{} & \mathcal{C}(K_{T},K_{X}) \ar[r]^{} & \mathcal{C}(K_{T},T_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(K_{T},X) \ar[r]^{} & 0 \\ & & 0 & & } $$ in which all rows and columns are exact. Since $\mathcal{C}(P_{T},T_{X}) \longrightarrow \mathcal{C}(K_{T},T_{X})$ is an epimorphism, for any $g \in \mathcal{C}(K_{T},K_{X})$, there exists $g_{1} \in \mathcal{C}(P_{T},T_{X})$ such that $xg = g_{1}x_{1}$. By (ET3), we then have the following commutative diagram $$\scalebox{0.9}[1.0]{\xymatrixcolsep{4pc}\xymatrix{ K_{T}\ar[d]_-{g }\ar[r]^{x_{1}} & P_{T}\ar@{..>}[ld]_-{h_{1}} \ar[d]^{g_{1} }\ar[r]^-{y_{1}} & T\ar@{..>}[ld]_-{h_{2}} \ar@{..>}[d]_-{g_{2} } \ar@{-->}[r]^{\delta_{1}} & \\ K_{X}\ar[r]^-{x } & T_{X} \ar[r]^-{ y} & X \ar@{-->}[r]^{\delta} & . }} $$ Since $T_{X} \in \Add T$, the morphism $g_{2}$ factors through $y$, and hence $g$ factors through $x_{1}$ by Lemma \ref{L3}; that is, the morphism $\mathcal{C}(P_{T},K_{X}) \longrightarrow \mathcal{C}(K_{T},K_{X})$ is an epimorphism.
Applying the functor $\mathcal{C}(-,K_{X})$ to the $\mathbb{E}$-triangle $K_{T}\stackrel{x_{1}}\longrightarrow P_{T}\stackrel{y_{1}}\longrightarrow T\stackrel{\delta_{1}}\dashrightarrow$, since $P_{T}, K_{T} \in \mathcal{P}(\xi)$, by \cite[Lemma 3.4]{JSH2_2020} again, we have the following commutative diagram $$\xymatrix{ & \mathcal{C}(T,K_{X}) \ar[d]_{} \ar[r]^{} & \mathcal{C}(P_{T},K_{X}) \ar[d]_{\cong} \ar[r]^{} & \mathcal{C}(K_{T},K_{X}) \ar[d]_{\cong} \ar[r] & 0 &\\ 0 \ar[r]^{} & \xi xt_{\xi}^{0}(T,K_{X}) \ar[r]^{} & \xi xt_{\xi}^{0}(P_{T},K_{X}) \ar[r]^{} & \xi xt_{\xi}^{0}(K_{T},K_{X}) \ar[r]^{} & \xi xt_{\xi}^{1}(T,K_{X}) \ar[r]^{} & 0 } $$ so the morphism $\xi xt_{\xi}^{0}(P_{T},K_{X}) \longrightarrow \xi xt_{\xi}^{0}(K_{T},K_{X})$ is an epimorphism. By the exactness of the bottom row, this implies that $\xi xt_{\xi}^{1}(T,K_{X}) = 0$. Therefore $K_{X} \in T^{\perp}$. \hfill$\Box$ \begin{lemma}\label{L39} Let $\mathcal{X} \subseteq \mathcal{C}$ be a class of objects such that $\mathcal{X} \cap ^{\perp}\mathcal{X}$ is closed under coproducts. Suppose that $\mathcal{X}$ satisfies the following condition: for any $\mathbb{E}$-triangle $X\stackrel{}\longrightarrow Y \stackrel{ }\longrightarrow Z\stackrel{}\dashrightarrow$ in $\xi$ with $X, Y \in \mathcal{X}$, one has $Z \in \mathcal{X}$. Then $\mathcal{X} \cap ^{\perp}\mathcal{X}$ is closed under direct summands. \end{lemma} \noindent{\textbf{Proof}} Using the Eilenberg swindle \cite[Proposition 1.4]{HH_2004}, one can prove that $\mathcal{X} \cap ^{\perp}\mathcal{X}$ is closed under direct summands. The proof is similar to \cite[Lemma 3.12]{YGH_2020}. \hfill$\Box$ \begin{proposition}\label{P310} Let $\mathcal{X} \subseteq \mathcal{C}$ be a class of objects such that $\mathcal{X} \cap ^{\perp}\mathcal{X}$ is closed under coproducts.
Suppose that $\mathcal{X}$ satisfies the following conditions: (1) $\mathcal{X}$ is a special $\xi$-covariantly finite subcategory in $\mathcal{C}$; (2) for each $K \in ^{\perp}\mathcal{X}$, $\xi$-pd$K \leq 1$; (3) for any $\mathbb{E}$-triangle $X\stackrel{}\longrightarrow Y \stackrel{ }\longrightarrow Z\stackrel{}\dashrightarrow$ in $\xi$ with $X, Y \in \mathcal{X}$, one has $Z\in \mathcal{X}$. \\ Then there is a $\xi$-tilting object $T$ in $\mathcal{C}$ such that $\mathcal{X} = T^{\perp}$. \end{proposition} \noindent{\textbf{Proof}} The proof is similar to \cite[Lemma 3.13]{YGH_2020}. \hfill$\Box$
\section{Introduction} Inelasticity at large strain has been the focus of an intense research activity for decades, first from the engineering community, see, e.g., the monographs \cite{DunPet05ICP,JirBaz02IAS,Maug92TPF}, and subsequently also from the mathematical point of view (see, e.g., the recent contributions \cite{davoli-kruzik-pelech, melching-neunteufel-schoeberl-stefanelli, kruzik-melching-stefanelli} on large-strain rate-independent processes, incomplete damage, and finite plasticity, respectively, as well as the monographs \cite{MieRou15RIST,KruRou19MMCM} and the references therein). Within the mathematical purview, there is a general agreement that the rigorous analysis of large-strain inelastic time-evolving phenomena requires higher-order regularizations of the inelastic strains \cite{DavFra15CRFE,GraSte17FPQE,MaiMie09GERI,Miel02FELG,MiRoSa18GERV,MieRou15RIST,MieRou16RIEF,RouSte19FTCS}. Existence theories without gradient regularization are available only in one space dimension \cite{MelSte??WPON}, at the incremental level \cite{Mielke04,MielkeMueller,compos}, or under stringent modeling restrictions \cite{kruzik-melching-stefanelli,Mielke04}. In the engineering literature, on the other hand, gradient theories at large strains are seldom considered, see \cite{Bett05CM,DunPet05ICP,NauAlt07MCSA}, \cite[Ch.~25]{JirBaz02IAS}, \cite[Ch.~8]{Maug92TPF}, and the existence of solutions is not in focus there. Gradient theories for the inelastic strain introduce an internal length scale in the problem, related to the characteristic width of inelastic slip-bands arising during creep, damage, or plastification processes. The occurrence of such a scale is however not expected to cause additional hardening. Although strain or time hardening must sometimes be considered \cite{Bett05CM,NauAlt07MCSA}, in many applications inelastic models are ultimately desired not to exhibit any hardening effect during long-lasting slip deformations.
In metals, for example, very large irreversible plastification can occur within the phenomenon sometimes referred to as {\it superplasticity}. Large slips with no hardening are particularly common in rock, soil, or ice mechanics. Typically, the slip on tectonic faults can easily accommodate kilometers during millions of years. Glaciers flow kilometers, with hardening only occurring at temperatures below $-70^\circ$\,C \cite{SchDuv09CFI}. In a very different context, large deformations without hardening can be observed in polymers as well. As a result, one is interested in identifying inelastic strain-gradient modelizations guaranteeing, on the one hand, that the time evolution of inelastic phenomena is mathematically well-posed and, on the other hand, that no spurious hardening effects are generated. The focus of this paper is hence on introducing a novel hardening-free inelastic model of creep type allowing for existence of solutions. In order to accomplish this, the energy of the medium is assumed to contain a term depending on the gradient of the {\it elastic} strain. This contrasts with the usual approaches based on {\it total} strain-gradient or {\it inelastic} strain-gradient regularization. Indeed, we present an example in Subsection \ref{sec-motiv} below showing the possible effect of such usual strain-gradient regularizations on the onset of spurious hardening. Our new model is introduced in Section~\ref{sec-model}. In addition to elastic-strain hardening, we assume the viscous dissipation to be quadratic and to depend on the gradient of the inelastic-strain rate. This last gradient term does not affect the hardening-free nature of the model. Eventually, Section~\ref{sec-anal} focuses on the existence of weak solutions to the model.
The proof relies on a Faedo-Galerkin approximation, as well as on compactness and lower semicontinuity arguments. \section{A hardening-free viscoelastic model} \label{sec-model} We devote this section to introducing and commenting on our modeling choices. Following the classical mathematical theory of inelasticity at large strains \cite{gurtin,hill, lubliner1}, we assume that the elastic behavior of our specimen $\varOmega\subset \mathbb{R}^d$, $d=2,3$, is independent of preexisting inelastic distortions. This can be rephrased as the assumption that the deformation gradient $F:=\nabla y$ associated to any deformation $y:\varOmega\to \mathbb{R}^d$ of the body decomposes into an elastic strain and an inelastic one. For linearized theories, this decomposition would have an additive nature; in the setting of large-strain inelasticity, instead, this behavior is traditionally modeled via a multiplicative decomposition. In the mathematical literature different constitutive models have been taken into account, see, e.g., \cite{DavFra15CRFE, GraSte17FPCM, GraSte17FPQE, naghdi} in the framework of finite plasticity. We focus here on the classical multiplicative decomposition ansatz \cite{Kron60AKVE,LeeLiu67FSEP}, recently justified in the setting of dislocation systems and crystal plasticity in \cite{conti.reina,conti.reina2}, in which deformations $y\in H^1(\varOmega;\mathbb{R}^d)$ fulfill \begin{align}\label{split} F=F_{\rm el}\PP, \end{align} where $F_{\rm el}$ and $\PP$ denote the elastic and inelastic strains, respectively. \subsection{Tensorial notation} In the following, we use capital letters to indicate tensors and tensor-valued functions, independently of their dimensions.
For $A,\, \widehat A, \, \widetilde A \in \mathbb{R}^{d\times d}$, $B, \,\widehat B\in \mathbb{R}^{d\times d\times d}$, and $C,\, \widehat C\in \mathbb{R}^{d\times d\times d\times d}$ we use the standard notation for contractions on two, three, and four indices, namely, \begin{align*} &A{:}\widehat A = A_{ij} \widehat A_{ij}, \ B{\vdots} \widehat B = B_{ijk}\widehat B_{ijk},\ (C{:}A)_{ij} = C_{ijkl}A_{kl},\ (B{:}A)_i = B_{ijk}A_{jk},\ C{:}{:}\widehat C = C_{ijkl}\widehat C_{ijkl} \end{align*} (summation convention over repeated indices). On the other hand, contraction on one index will be marked by $\cdot$ only in the case of vectors. In particular, $(CA)_{ijkl}=C_{ijkm}A_{ml}$, $(BA)_{ijk} = B_{ijm}A_{mk}$, etc. The symbol $\top$ indicates transposition of two-tensors, namely $A^\top_{ij}=A_{ji}$, whereas we denote by the superscript ${\rm t}$ the partial transposition of a four-tensor with respect to the first two indices, namely $C^{\rm t}_{ijkl} = C_{jikl}$. For $A \in \mathbb{R}^{d\times d}$ we indicate its symmetric part by ${\rm sym}\, A = (A+A^\top)/2$ and, if $A$ is invertible, use the shorthand notation $A^{-\top} = (A^{-1})^\top$. We will frequently use the identities $A\widehat A{:}\widetilde A=A{:}\widetilde A\widehat A^\top$ and $A{:}\widehat A \widetilde A = \widehat A^\top A{:}\widetilde A$. Let us recall that, for a differentiable function $F: {\mathbb R}^{d\times d} \to \mathbb{R}^{d\times d}$ and $A, \, \widehat A \in {\mathbb R}^{d\times d} $, we have that ${\rm D} F(A) \in {\mathbb R}^{d\times d\times d\times d}$ and ${\rm D} F(A){:}\widehat A = ({\rm d}/{\rm d} \alpha) F(A +\alpha \widehat A)|_{\alpha=0}$. In particular, one has that ${\rm D}(A^{-1}){:}\widehat A = - A^{-1}\widehat A A^{-1}$. Moreover, one easily checks that ${\rm D}(F^\top) = ({\rm D} F)^{\rm t}$, so that one has that ${\rm D} (A^{-\top}){:}\widehat A = - A^{-\top}\widehat A^{\top} A^{-\top}$.
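These differentiation rules are easy to check numerically. The following Python sketch (matrix dimension, random seed, and finite-difference step are illustrative choices, not taken from the text) verifies ${\rm D}(A^{-1}){:}\widehat A = -A^{-1}\widehat A A^{-1}$ and ${\rm D}(A^{-\top}){:}\widehat A = -A^{-\top}\widehat A^{\top} A^{-\top}$ by central differences with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # well-conditioned matrix
Ahat = rng.standard_normal((d, d))                  # direction of differentiation
eps = 1e-6

# Central finite difference of t -> (A + t*Ahat)^{-1} at t = 0
fd = (np.linalg.inv(A + eps * Ahat) - np.linalg.inv(A - eps * Ahat)) / (2 * eps)

# Closed-form directional derivative D(A^{-1}):Ahat = -A^{-1} Ahat A^{-1}
Ainv = np.linalg.inv(A)
exact = -Ainv @ Ahat @ Ainv
err_inv = np.max(np.abs(fd - exact))

# Same check for the transposed inverse: D(A^{-T}):Ahat = -A^{-T} Ahat^T A^{-T}
fd_t = (np.linalg.inv((A + eps * Ahat).T) - np.linalg.inv((A - eps * Ahat).T)) / (2 * eps)
exact_t = -Ainv.T @ Ahat.T @ Ainv.T
err_invT = np.max(np.abs(fd_t - exact_t))
```

The central difference is second-order accurate, so both residuals sit many orders of magnitude below the entries of the matrices involved.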
Given two other differentiable functions $\widehat F: {\mathbb R}^{d\times d} \to \mathbb{R}^{d\times d}$ and $f: {\mathbb R}^{d\times d}\to \mathbb{R}$ one has that ${\rm D} (f\circ F)(A){:}\widehat A = {\rm D} f(F(A)) {:} {\rm D} F(A) {:}\widehat A$ and ${\rm D}(\widehat F \circ F)(A) {:}\widehat A ={\rm D} \widehat F(F(A)) {:} {\rm D} F(A) {:} \widehat A$. Let the reference domain $\varOmega \subset \mathbb{R}^d$ be open and with Lipschitz boundary $\varGamma$, and let $n$ be the outward-pointing unit normal vector at the boundary. For an $m$-tensor valued function $x \in \varOmega \mapsto A(x)\in (\mathbb{R}^{d})^m $ with $m\geq 1$ we define the gradient $\nabla A(x) \in (\mathbb{R}^{d})^{m+1} $ and the divergence ${\rm div} A(x) \in (\mathbb{R}^{d})^{m-1} $ componentwise as $$\nabla A(x)_{i_1\dots i_m j} = \frac{\partial}{\partial x_j} A_{i_1\dots i_m}(x),\quad ({\rm div} A(x))_{i_1\dots i_{m-1}} = \sum_{j=1}^d\frac{\partial}{\partial x_j}A(x)_{i_1\dots i_{m-1} j} .$$ For all $x\in\varOmega\mapsto A(x)\in \mathbb{R}^{d\times d}$ and $x\in \varOmega\mapsto\widehat A(x)\in \mathbb{R}^{d\times d}$ we have that $ \nabla (A\widehat A) = (\widehat A^\top\nabla A^\top)^{\rm t} + A \nabla \widehat A$. Let now $x \in \varOmega \mapsto v(x)\in \mathbb{R}^{d}$, $x \in \varOmega \mapsto A(x)\in \mathbb{R}^{d\times d}$, and $x \in \varOmega \mapsto B(x)\in \mathbb{R}^{d\times d \times d}$ be given. Under suitable regularity assumptions the following Green formulas can be checked \begin{subequations}\label{green} \begin{align} & \int_\varOmega A {:} \nabla v \, \d x = - \int_\varOmega {\rm div} A {\cdot} v \, \d x + \int_\varGamma (An) {\cdot} v\, \d S\,, \label{eq:green1}\\ &\int_\varOmega B {\vdots} \nabla A \, \d x = - \int_\varOmega A{:}{\rm div} B \, \d x + \int_\varGamma (A{:} B){\cdot} n \,\d S\,.
\label{eq:green2} \end{align} \end{subequations} Eventually, let $\mathrm{div}_{\scriptscriptstyle\textrm{\hspace*{-.1em}S}}^{}$ denote the $(d{-}1)$-dimensional surface divergence on $\varGamma$. For vector-valued functions $x\mapsto v(x)\in \mathbb{R}^d $ this is defined as $$\mathrm{div}_{\scriptscriptstyle\textrm{\hspace*{-.1em}S}}^{} v= {\rm tr}\nabla_{\scriptscriptstyle\textrm{\hspace*{-.3em}S}}^{} v \ \ \text{for} \ \ \nabla_{\scriptscriptstyle\textrm{\hspace*{-.3em}S}}^{} v:= \nabla v - \frac{\partial v}{\partial n} \otimes n\,,$$ where ${\rm tr}$ stands for the trace. The same definition will be used row-wise for tensor-valued functions. We will use the formula \cite[Formula (34)]{Fried-Gurtin06} \begin{equation}\label{greenS}\int_\varGamma A{:}\nabla_{\scriptscriptstyle\textrm{\hspace*{-.3em}S}}^{} v\, \d S = - \int_\varGamma (\mathrm{div}_{\scriptscriptstyle\textrm{\hspace*{-.1em}S}}^{} A{\cdot}v + 2\mathfrak{h} A n {\cdot}v) \, \d S\,, \end{equation} where $\mathfrak{h}$ stands for the mean curvature of $\varGamma$. Arguing row-wise, an analogous relation can be checked to hold for tensor-valued functions as well. \subsection{Stored energy} Our aim is that of introducing a hardening-free inelastic model. In the absence of hardening, the mathematical analysis of inelastic evolution is notoriously challenging. In order to make the existence of weak solutions amenable to analysis, we include in the model higher-order (gradient) effects. More specifically, we define \begin{align} &\varPhi(y,\PP)=\int_\varOmega\FE(\nabla y\,\PP^{-1} ) +\FH(\PP)+\FG(\nabla(\nabla y\,\PP^{-1})) \, \d x\,. \label{free-energy+} \end{align} Here, $\FE:\mathbb{R}^{d\times d} \to [0,\infty)$ corresponds to the elastic energy density of the medium and will be assumed to be coercive and to control the sign of ${\rm det} \,F_{\rm el}$, see \eqref{ass-FM} below.
On the other hand, $\FH:\mathbb{R}^{d\times d}\to [0,\infty]$ plays the role of a constraint on ${\rm det}\,\PP$. In particular, we are interested in choices of $\FH$ enforcing the usual isochoric constraint ${\rm det}\PP=1$ in an approximate sense and keeping ${\rm det}\PP$ away from negative values, see \eqref{ass-plast-large-HD-growth} below. An explicit example for such a term is \begin{align} \FH(\PP):=\begin{cases}\displaystyle{ \frac{\delta}{\max(1,\det\PP)^r} +\frac{(\det\PP-1)^2}{2\delta}}\!\!&\text{ if }\ \det\PP>0,\\ \qquad+\infty&\text{ if }\ \det\PP\le0\,\end{cases} \label{FH}\end{align} with $\delta>0$ small and $r$ large enough; cf.\ \cite[Remark 2.6]{RouSte19FTCS}, \cite[Formula (9.4.36)]{KruRou19MMCM}, or \cite{Neff}. Eventually, $\FG:\mathbb{R}^{d\times d}\to[0,\infty)$ controls the elastic strain gradient and relates to the length scale of higher-order effects. Specific assumptions are given in \eqref{ass-G} below. In particular, the stored energy features a regularizing term depending on the gradient of the elastic strain $F_{\rm el} = \nabla y \PP^{-1}$. Note however that no gradient of $\PP$ appears in the energy, for this might give rise to hardening, as explained in Subsection \ref{sec-motiv} below. \subsection{Spurious hardening from gradients in the stored energy}\label{sec-motiv} As already mentioned, the analysis of inelastic evolution models calls for considering inelastic gradient theories.
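Before turning to gradient regularizations, the qualitative behavior of the explicit constraint term \eqref{FH} can be illustrated with a short numerical experiment. In the Python sketch below, the parameter values $\delta=0.01$ and $r=4$ are illustrative assumptions only; the check confirms that the penalty forbids nonpositive determinants and is essentially minimized at the isochoric value $\det\PP=1$:

```python
import numpy as np

def F_H(J, delta=0.01, r=4):
    """Constraint term (FH) as a function of J = det(P); the parameters
    delta and r are illustrative. Nonpositive determinants are forbidden."""
    if J <= 0.0:
        return np.inf
    return delta / max(1.0, J) ** r + (J - 1.0) ** 2 / (2.0 * delta)

# Scan the admissible range: the penalty is smallest essentially at J = 1
Js = np.linspace(0.2, 3.0, 2801)
vals = np.array([F_H(J) for J in Js])
J_min = Js[np.argmin(vals)]
```

For small $\delta$ the quadratic term dominates near $J=1$, so the minimizer sits only slightly above the isochoric value.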
Usual choices in this direction are terms of the form \begin{subequations}\label{Phi} \begin{align} &&&&&\ \ \ \ \frac12\kappa|\nabla\PP|^2&&\quad\text{(standard choice)},\label{Phi1}&&&&&&\\ &&&&&\ \frac12\kappa|F^{-\top}\nabla\PP|^2&&\quad\text{(push forward)}, \label{Phi2}\\ &&&&&\frac12\kappa|\nabla(\PP^\top\PP)|^2\, &&\quad\text{(inelastic metric tensor)}. \label{Phi3} \end{align} \end{subequations} For the {\it standard choice} in \eqref{Phi1}, we refer to \cite{GMMM06ANMF,MaiMie09GERI,KruRou19MMCM,MieRou16RIEF} in the context of plasticity, see also \cite{MiRoSa18GERV} for a more general dependence on $\nabla\PP$ covering also creep models, as well as \cite{anand} for an additional scalar-valued internal variable acting as an effective inelastic strain. The {\it push-forward} term in \eqref{Phi2} has been used in \cite[Remark 9.4.12]{KruRou19MMCM} and \cite[Remark 5]{Roub??CHEC}, whereas the inelastic {\it metric} tensor in \eqref{Phi3} has been analyzed in \cite{GraSte17FPQE}, cf.\ \cite{NefGhi16CIEP} for a thorough discussion and comparison. All models in \eqref{Phi} however exhibit a drawback: the influence of the inelastic gradient terms amplifies when inelastic slips evolve and accommodate large inelastic strains. This, in turn, might result in a spurious hardening effect. To demonstrate the presence of a non-autonomous spurious hardening effect, we consider $d=2$ and resort to a stratified situation where $F$ and $\PP$ are constant in the $x_1$ direction, cf.\ \cite{RouSte19FTCS} or also \cite[Example 9.4.11]{KruRou19MMCM} for similar examples. We consider a pure {\it horizontal} shift of the stripe $\varOmega=\mathbb{R}{\times}[-\ell,\ell]$ driven by time-dependent Dirichlet boundary conditions for the displacement on the sides $\mathbb{R}{\times}\{\pm\ell\}$ and evolving in a steady-state mode.
In particular, we assume by symmetry that the deformation has the {\it stratified} form $$y(x_1,x_2) = (x_1+ f(t,x_2),x_2)$$ where the slip is described by the (unspecified) smooth function $f : [0,+\infty) \times [-\ell,\ell] \to \mathbb{R}$, which fulfills the given Dirichlet boundary conditions, say \begin{equation} f(t,\pm \ell) = \pm t.\label{bcs} \end{equation} We specify the elastic response by assuming the material to be rigid. In particular, the elastic strain $F_{\rm el}$ is assumed to be the identity matrix. In the setting of plasticity, this would be called a rigid-plastic model. The corresponding inelastic strain then reads \begin{align}\label{P-scaling} \PP = F&=\nabla y=\bigg(\!\!\begin{array}{cc}1\!&\! \partial_{x_2}f(t,x_2) \\0\!&\!1\end{array}\!\!\bigg)\,. \end{align} Let us note that ${\rm det}\,\PP=1$, so that $\FH(\PP)=0$ when $\FH$ is defined as in \eqref{FH}. The arguments of the $\kappa$-terms in \eqref{Phi} (see Section \ref{sec-model} for details on the tensorial notation) then read \begin{subequations}\begin{align} (\nabla\PP)_{ijk}&= \left\{ \begin{array}{ll} \partial_{x_2}^2f(t,x_2) & \text{for} \ i=1, \, j=2, \, k=2,\\ 0&\text{otherwise} \end{array} \right., \\[1mm] (F^{-\top}\nabla\PP)_{ijk}&= \left\{ \begin{array}{ll} \partial_{x_2}^2f(t,x_2) & \text{for} \ i=1, \, j=2, \, k=2,\\ - \partial_{x_2}f(t,x_2)\, \partial_{x_2}^2f(t,x_2) & \text{for} \ i= j= k=2,\\ 0&\text{otherwise} \end{array} \right.,\\[1mm] (\nabla(\PP^\top\PP))_{ijk}&= \left\{ \begin{array}{ll} \partial_{x_2}^2f(t,x_2) & \text{for} \ i=1, \, j=2, \, k=2,\\ \partial_{x_2}^2f(t,x_2) & \text{for} \ i=2, \, j=1, \, k=2,\\ 2 \partial_{x_2}f(t,x_2)\, \partial_{x_2}^2f(t,x_2) & \text{for}
\ i= j= k=2,\\ 0&\text{otherwise} \end{array} \right.\,. \end{align}\end{subequations} Note that $\partial_{x_2}f(t,x_2)$ necessarily depends on time. Indeed, if this were not the case one would have that $$ \DT f(t,\ell)-\DT f(t,-\ell) = \int_{-\ell}^\ell\partial_{x_2} \DT f(t,x_2)\, {\rm d} x_2 =0,$$ contradicting the fact that $\DT f(t,\pm\ell)=\pm 1$ from \eqref{bcs}. Hence, in all cases, the argument of the quadratic terms in \eqref{Phi} is genuinely time dependent. More precisely, by taking the mean across the stripe we have that $$\frac{1}{2\ell}\int_{-\ell}^{\ell} \partial_{x_2} f(t,x_2)\, {\rm d} x_2 =\frac{1}{2\ell}\left( f(t,\ell)-f(t,-\ell) \right)\stackrel{\eqref{bcs}}{=}\frac{t}{\ell}$$ so that the terms in \eqref{Phi} would actually be unbounded in time. This shows that, no matter how small the coefficient $\kappa$ is, the regularizing terms in \eqref{Phi} grow indefinitely under large slips, preventing the energy from being bounded and eventually corrupting the modelization. To compensate for these spurious hardening-like effects, one could assume $\kappa$ to be time dependent, which would however lead to an artificially non-autonomous model, again not desirable. In order to avoid this spurious hardening effect while still retaining regularization, our choice \eqref{free-energy+} for $\varPhi$ above departs from the classical inelastic-gradient regularization \eqref{Phi} by including the gradient of the elastic strain $F_{\rm el}$ instead. Note that in the above example the term $\nabla F_{\rm el}$ vanishes, hence allowing for indefinitely large inelastic slips under bounded energy.
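The componentwise formulas of this stratified example can be cross-checked numerically. In the Python sketch below the slip profile is frozen at one time and evaluated at a single point, with the purely illustrative choice of a sine profile for the $x_2$-dependence, so that $g=\partial_{x_2}f$ and its derivative are known in closed form:

```python
import numpy as np

# Stratified example: P = [[1, g], [0, 1]] with g = df/dx2. We freeze time and
# evaluate at a single point x2, for the purely illustrative profile f = sin.
x2 = 0.3
g, gp = np.cos(x2), -np.sin(x2)          # g = f'(x2), gp = f''(x2)

P = np.array([[1.0, g], [0.0, 1.0]])
dP_dx2 = np.array([[0.0, gp], [0.0, 0.0]])

# grad P as the 3-tensor (grad P)_{ijk} = dP_{ij}/dx_k; nothing depends on x1,
# so only k = 2 (numpy index 1) contributes.
gradP = np.zeros((2, 2, 2))
gradP[:, :, 1] = dP_dx2

# Push-forward term: F = P here (rigid elastic response), contracting F^{-T}
# with the first index of grad P.
Finv_T = np.linalg.inv(P).T
pf = np.einsum('im,mjk->ijk', Finv_T, gradP)

# Inelastic metric tensor: grad(P^T P), again with only the x2-derivative alive.
dC_dx2 = dP_dx2.T @ P + P.T @ dP_dx2
gradC = np.zeros((2, 2, 2))
gradC[:, :, 1] = dC_dx2
```

The nonzero entries reproduce exactly the three case lists displayed above: $f''$ for the standard choice, additionally $-f'f''$ for the push-forward, and $f''$, $f''$, $2f'f''$ for the inelastic metric tensor.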
Before closing this discussion, let us mention the possibility of considering the alternative inelastic-gradient terms \begin{equation} \frac12 \kappa \big|{\rm curl} \PP\big|^2 \qquad \text{or} \qquad \frac12 \kappa \big|\PP^{-\top}{\rm curl}\PP\big|^2\label{disloc} \end{equation} in the energy $\varPhi$. Here, the ${\rm curl}$ of the tensor $\PP$ is taken row-wise in three dimensions and is defined as ${\rm curl}\PP = (\partial_1 \PP_{12} - \partial_2 \PP_{11}, \partial_1 \PP_{22} - \partial_2 \PP_{21})$ in two dimensions. These terms correspond to the so-called {\it dislocation-density} tensor \cite{Cermelli} and have been considered in \cite{MielkeMueller,Neff,Scala} from the viewpoint of existence of solutions of the incremental problems. In the case of \eqref{P-scaling}, the plastic strain is curl-free and both terms in \eqref{disloc} vanish. These terms are therefore capable of accommodating large inelastic slips at bounded energy. In particular, at least in elastically very rigid materials, they would not generate the spurious hardening effect mentioned above. However, the options \eqref{disloc} do not seem to contribute sufficient compactness in order to devise an existence theory at the time-continuous level. Of course, combining some option from \eqref{disloc} with some option from \eqref{Phi} in the stored energy is possible and yields analytically good compactifying effects, but then spurious hardening would again be involved in the model. \subsection{Dissipation} In order to incorporate inertial effects, a Kelvin-Voigt-type viscosity needs to be included in the model.
We consider a purely linear viscous model by assuming the dissipation potential to be quadratic in terms of rates, namely, \begin{align} &{\mathscr R}(y,\PP;\nabla \DT y ,\DT\PP)= \int_\varOmega\frac{\nu_{\rm m}}2|\DT\PP|^2 +\frac{\nu_{\rm h}}2|\nabla^2\DT\PP|^2+\frac{\nu_{\rm kv}}2|\DT C_{\rm el}|^2 \,\d x\nonumber\\[-.3em] &\qquad\qquad\qquad\qquad\qquad\text{ with }\ C_{\rm el}=F_{\rm el}^\top F_{\rm el}^{}= \PP^{-\top}\nabla y^\top \nabla y\PP^{-1},\label{RRR} \end{align} where $\nu_{\rm m}$, $\nu_{\rm h}$, and $\nu_{\rm kv}$ are positive viscous coefficients and $C_{\rm el}$ is the elastic Cauchy-Green tensor. In particular, the Kelvin-Voigt-type viscosity term depends on $\DT C_{\rm el}$ in order to ensure frame-indifference \cite{Antm98PUVS}. The occurrence of the $\nabla^2 \DT \PP$ term above is motivated by the need to control the rate of $\PP$ uniformly in space while still avoiding hardening. In other words, differently from gradient terms acting directly on $\PP$ (see Subsection \ref{sec-motiv}), this term provides a regularization not giving rise to spurious hardening effects. This uniform bound in space will in turn allow the control of the nonlinear terms in \eqref{stresses}, as well as of the inverse $\PP^{-1}$, which is paramount for devising an existence theory. Hence, following a suggestion by A. Mielke \cite{Miel??}, we augment our dissipation potential by a regularization provided by the gradient of the creep rate. The only higher-order terms involving the inelastic strain hence occur in the dissipation and are given by gradients of the inelastic strain rate, i.e.\ of $\DT\PP$. With reference to the discussion of Subsection \ref{sec-motiv}, let us point out that such terms may again be time dependent. Still, they can be expected to show some boundedness with respect to time.
In the case of \eqref{P-scaling} one indeed obtains that the mean across the strip $$\frac{1}{2\ell}\int_{-\ell}^\ell \DT\PP(t,x_1,x_2)\, {\rm d}x_2 = \frac{1}{2\ell}\int_{-\ell}^\ell \bigg(\!\!\begin{array}{cc}0\!&\! \partial_{x_2}\DT f(t,x_2) \\0\!&\!0\end{array}\!\!\bigg) {\rm d}x_2 = \bigg(\!\!\begin{array}{cc}0\!&\! \frac{\DT f(t,\ell) - \DT f(t,-\ell)}{2\ell} \\0\!&\!0\end{array}\!\!\bigg)\stackrel{\eqref{bcs}}{=} \bigg(\!\!\begin{array}{cc}0\!&\! 1/\ell \\0\!&\!0\end{array}\!\!\bigg)$$ is time-independent. A regularization in terms of $\nabla \DT \PP$ is hence not expected to generate spurious hardening-like effects. \subsection{Constitutive equations} Following the classical {\it Coleman-Noll procedure} \cite{Coleman-Noll63}, we identify variations of $\varPhi$ with respect to $y$ and $\PP$ as driving forces in the momentum equation and in the inelastic flow rule, respectively. More precisely, we have \begin{subequations}\label{stresses}\begin{align} \delta_y\varPhi(y,\PP)&=-{\rm div}\left({\rm D}\FE(\nabla y\PP^{-1})\PP^{-\top} -{\rm div}\big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big)\PP^{-\top}\right)\,, \\ \delta_\PP\varPhi(y,\PP)&=\nabla y^\top{\rm D}\FE(\nabla y\PP^{-1}){:}{\rm D}(\PP^{-1}) \nonumber\\ &\quad+{\rm D}\FH(\PP) -{\rm div}\big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big){:}\nabla y {\rm D} (\PP^{-1})\,.
\end{align}\end{subequations} In order to consider variations of the dissipation $\mathscr R$, we start by explicitly computing \begin{align*} \DT C_{\rm el}&=\PP^{-\top}(\nabla \DT y^\top \nabla y{+}\nabla y^\top\nabla \DT y)\PP^{-1} +({\rm D}(\PP^{-\top}){:}\DT\PP) \nabla y^\top \nabla y\PP^{-1}+\PP^{-\top}\nabla y^\top \nabla y{\rm D}(\PP^{-1}){:}\DT\PP\\ &=\PP^{-\top}(\nabla \DT y^\top \nabla y{+}\nabla y^\top\nabla \DT y)\PP^{-1} - \PP^{-\top}\DT\PP^\top \PP^{-\top} \nabla y^\top \nabla y\PP^{-1}-\PP^{-\top}\nabla y^\top \nabla y\PP^{-1}\DT \PP\PP^{-1}\\ &=\PP^{-\top}( \nabla \DT y^\top \nabla y{+}\nabla y^\top\nabla \DT y)\PP^{-1} -2 \,{\rm sym}\,(\PP^{-\top}\nabla y^\top \nabla y \PP^{-1} \DT \PP \PP^{-1})\,. \end{align*} This Kelvin-Voigt-type viscosity hence features both $\nabla \DT y$ and $\DT \PP$ terms. It therefore contributes both to the momentum equation and to the inelastic flow rule. In particular, setting for brevity $\varSigma:=\nu_{\rm kv} \DT C_{\rm el}$, the contribution of the Kelvin-Voigt-type viscosity to the stress is given by \begin{align*} \delta_{\DT y}\DT C_{\rm el}{:}\varSigma=-{\rm div} \left(2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\varSigma\PP^{-1}\big) \right). \end{align*} On the other hand, by computing $$ {\rm D}_{\DT \PP}\DT C_{\rm el}=\big(\PP^{-\top}\nabla y^{\top}\nabla y\,{\rm D}(\PP^{-1})\big)^{\rm t} + \PP^{-\top}\nabla y^{\top}\nabla y\,{\rm D}(\PP^{-1}) \,, $$ we have that the Kelvin-Voigt-type viscous contribution to the inelastic driving force is \begin{align*} {\rm D}_{\DT \PP}\DT C_{\rm el}{:}\varSigma=-2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\nabla y\PP^{-1}\varSigma\PP^{-1}\big)\,. \end{align*} \subsection{Evolution system} The evolution of the medium is governed by the system of the momentum equation and the inelastic flow rule.
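Before writing the system down, note that the closed-form expression for $\DT C_{\rm el}$ obtained above can be validated against a finite-difference time derivative. In the Python sketch below, random well-conditioned matrices serve as illustrative stand-ins for $\nabla y$, $\PP$, and their rates:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
F  = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # stand-in for grad y
Fd = rng.standard_normal((d, d))                     # stand-in for grad of the rate of y
P  = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # stand-in for the inelastic strain
Pd = rng.standard_normal((d, d))                     # stand-in for its rate

def C_el(t):
    """Elastic Cauchy-Green tensor along the straight path F + t*Fd, P + t*Pd."""
    Ft, Pt = F + t * Fd, P + t * Pd
    Pti = np.linalg.inv(Pt)
    return Pti.T @ Ft.T @ Ft @ Pti

# Central finite-difference time derivative of C_el at t = 0
eps = 1e-6
fd = (C_el(eps) - C_el(-eps)) / (2 * eps)

# Closed-form rate from the computation above
Pi = np.linalg.inv(P)
sym = lambda A: 0.5 * (A + A.T)
exact = (Pi.T @ (Fd.T @ F + F.T @ Fd) @ Pi
         - 2 * sym(Pi.T @ F.T @ F @ Pi @ Pd @ Pi))

err = np.max(np.abs(fd - exact))
```

As a byproduct, the check also confirms that the rate is a symmetric tensor, as it must be for the rate of a Cauchy-Green tensor.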
Let us denote by ${\mathscr T}(\DT y)=\frac12\int_\varOmega\varrho|\DT y|^2\,\d x$ the kinetic energy and by ${\mathscr F}(t)$ the external load $$\langle{\mathscr F}(t),y\rangle=\int_\varOmega f(t){\cdot}y\,\d x +\int_\varGamma g(t){\cdot}y\,\d S\,$$ where $f$ and $g$ denote a given body force density and surface traction density, respectively. The system reads then in abstract form \begin{subequations}\label{abstract} \begin{align} (\delta_{\DT y}{\mathscr T}(\DT y)){\hspace{-6.5mm} {\phantom{O}}^{\phantom{O}^\text{\LARGE.}}} + \delta_{\DT y}{\mathscr R}(y,\PP;\nabla \DT y,\DT\PP) +\delta_y\varPhi(y,\PP)&={\mathscr F}(t)\,,\label{abstract1}\\ \delta_{ \DT\PP}{\mathscr R}(y,\PP;\nabla \DT y ,\DT\PP) +\delta_\PP\varPhi(y,\PP)&=0\,. \label{abstract2} \end{align} \end{subequations} Here, we have formally indicated variations with $\delta$. In the following, these relations will be made precise in the weak sense, see \eqref{weak-form}. For the sake of clarity, we present here the strong form of the system, assuming suitable regularity of the ingredients. 
Owing to our choices \eqref{free-energy+} and \eqref{RRR} for energy and dissipation, the latter corresponds to the nonlinear PDE system \begin{subequations}\label{evol} \begin{align} &\varrho\DDT y-{\rm div}\Big({\rm D}\FE(\nabla y\PP^{-1})\PP^{-\top} +2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\varSigma\PP^{-1}\big) \Big)\nonumber\\ & \label{evol1} \qquad\qquad\qquad\qquad +{\rm div}\left({\rm div}\big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big)\PP^{-\top} \right) =f, \\[3mm]&\nonumber \nu_{\rm m}\DT\PP+{\rm div}^2\big(\nu_{\rm h}\nabla^2\DT\PP\big) +\nabla y^\top{\rm D}\FE(\nabla y\PP^{-1}){:}{\rm D}(\PP^{-1}) \nonumber\\ &\qquad\qquad\qquad\qquad -2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\nabla y\PP^{-1}\varSigma\PP^{-1}\big)+{\rm D}\FH(\PP) \nonumber\\& \label{evol2} \qquad\qquad\qquad \qquad -{\rm div}\big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big){:}\nabla y{\rm D}(\PP^{-1})=0 \,, \end{align} \end{subequations} where we have again used the notation \begin{equation} \varSigma= {\nu_{\rm kv}} \DT C_{\rm el} \ \ \text{and} \ \ C_{\rm el} =\PP^{-\top}\nabla y^\top\nabla y\PP^{-1} \label{evol3} \,.\end{equation} Taking into account the formulas \eqref{green}--\eqref{greenS}, system \eqref{evol} is intended to be completed by the following boundary conditions \begin{subequations}\label{BC}\begin{align}\nonumber &{\rm D}\FE(\nabla y\PP^{-1})\PP^{-\top}n -{\rm div}\big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big)\PP^{-\top}n \nonumber \\&\hspace*{4em}-\mathrm{div}_{\scriptscriptstyle\textrm{\hspace*{-.1em}S}}^{}\big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\,n\PP^{-\top}\big) -2\mathfrak{h}\left({\rm D}\FG(\nabla(\nabla y\PP^{-1}))n\PP^{-\top}\right)n\nonumber \\&\hspace*{4em} +2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\varSigma\PP^{-1} \big){n}=g \,, \label{BC1}\\ & \big({\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big){:}({n}\otimes({n}\PP^{-1}))=0\,, \label{BC12}\\ & {\rm D}\FG(\nabla(\nabla y\PP^{-1})){n}{:}\nabla y\,{\rm D}(\PP^{-1}) -{\rm div} \nu_{\rm
h}\nabla^2\DT\PP n \nonumber\\& \hspace*{4em} -\mathrm{div}_{\scriptscriptstyle\textrm{\hspace*{-.1em}S}}^{}(\nu_{\rm h}\nabla^2\DT\PP{n}) - 2 \nu_{\rm h}\mathfrak{h}(\nabla^2\DT\PP{n})n =0\,,\label{BC21} \\& \nu_{\rm h}\nabla^2\DT \PP{:}({n}\otimes{n})=0\,. \label{BC22} \end{align}\end{subequations} The energetics of the model can be obtained by formally testing \eqref{evol1} with $\DT y$ under \eqref{BC1}--\eqref{BC12} and \eqref{evol2} with $\DT\PP$ under \eqref{BC21}--\eqref{BC22}. By considering the initial conditions \begin{align}\label{IC} y(0)=y_0,\ \ \ \DT y(0)=v_0,\ \ \ \PP(0)=\PP_0\,, \end{align} the resulting energy balance on the time interval $[0,t]$ is \begin{align}\nonumber &\int_\varOmega\frac{\varrho}{2} |\DT y(t)|^2 + \FE(\nabla y(t)\PP^{-1}(t)) + \FG(\nabla(\nabla y(t)\PP^{-1}(t))) + \FH(\PP(t))\,\d x \\\nonumber &\quad+\int_0^t\!\!\int_\varOmega \nu_{\rm kv}|\DT C_{\rm el}|^2 + \nu_{\rm m}|\DT\PP|^2 +\nu_{\rm h}|\nabla^2\DT \PP|^2 \,\d x\,\d\tau =\int_0^t\!\!\int_\varOmega f{\cdot}\DT y\,\d x\,\d\tau +\int_0^t\!\!\int_\varGamma g{\cdot}\DT y\,\d S\,\d\tau \\\label{energy} &\qquad\qquad +\int_\varOmega \frac{\varrho}{2} |v_0|^2 +\FE (\nabla y_0\PP^{-1}_0) + \FG(\nabla (\nabla y_0 \PP^{-1}_0)) + \FH(\PP_0)\,\d x \,. \end{align} In particular, the sum of the total energy at time $t$ and of the energy dissipated on $[0,t]$ equals the sum of the initial total energy and of the work of the external forces. \begin{remark}[{\sl Nonlinear or activated creep}]\label{rem-nonlin}\upshape We assume here the dissipation potential to be quadratic, which makes the occurrence of $\DT \PP$ in \eqref{evol2} linear. Generalizing this to the nonlinear (or even activated) case would require checking strong compactness for the approximations of $\varSigma$. This seems presently out of reach in our setting, where only weak convergence of such approximants can be guaranteed, cf.\ \eqref{first2} below.
\end{remark} \begin{remark}[{\sl Jeffreys rheology}]\label{rem-Jeffrey}\upshape The combination of two viscous damping mechanisms and one elastic energy-storing mechanism is often referred to as {\it Jeffreys rheology} \cite{KruRou19MMCM} (sometimes also called {\it anti-Zener} rheology). This combination may arise from two different arrangements of rheological elements: one can arrange a Stokes viscous element either in parallel with a Maxwell rheological element or in series with a Kelvin-Voigt rheological one. Recall that a Maxwell (resp.\ Kelvin-Voigt) rheological element is an arrangement of an elastic and a viscous element in series (resp.\ in parallel). At small strains, the two possible arrangements giving a Jeffreys rheology are equivalent, cf.\ \cite[Formula (6.6.34)]{KruRou19MMCM}. By contrast, the equivalence does not hold at large strains. In our model we follow the second variant: the viscous Stokes element is in series with a Kelvin-Voigt rheological element. The reader is referred to \cite[Remark 9.4.4]{KruRou19MMCM} for a model following the first variant instead, which allows for a simpler analysis in spite of its somewhat lesser physical relevance. \end{remark} \section{Analysis of the model}\label{sec-anal} In the following we use the standard notation $C(\cdot)$ for the space of continuous functions, $L^p$ for Lebesgue spaces, and $W^{k,p}$ for Sobolev spaces whose $k$-th distributional derivatives are in $L^p$. Moreover, we use the abbreviation $H^k=W^{k,2}$ and, for all $p\geq 1$, we write $p'=p/(p{-}1)$ for the conjugate exponent (with $p'=\infty$ if $p=1$), and $p^*$ for the Sobolev exponent, i.e.\ $p^*=pd/(d{-}p)$ for $p<d$, $p^*$ an arbitrary finite exponent for $p=d$, and $p^*=\infty$ for $p>d$. Thus, $W^{1,p}(\varOmega)\subset L^{p^*}\!(\varOmega)$ or $L^{{p^*}'}\!(\varOmega)\subset (W^{1,p}(\varOmega))^*$, the dual of $W^{1,p}(\varOmega)$.
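The exponent conventions just introduced translate into one-line formulas; the following Python helper is our own illustrative sketch (not part of the paper), implementing $p'=p/(p{-}1)$ and the convention for $p^*$, with \texttt{math.inf} used merely as a placeholder in the borderline case $p=d$, where any finite exponent is admissible.

```python
import math

def conjugate_exponent(p: float) -> float:
    """Hoelder-conjugate exponent p' = p/(p-1), with p' = infinity for p = 1."""
    if p == 1.0:
        return math.inf
    return p / (p - 1.0)

def sobolev_exponent(p: float, d: int) -> float:
    """Sobolev exponent p* for W^{1,p} on a domain in R^d:
    p* = p*d/(d-p) for p < d and p* = infinity for p > d.
    For p = d any finite exponent is admissible; math.inf is returned
    only as a placeholder for that borderline case."""
    if p < d:
        return p * d / (d - p)
    return math.inf

# Example: for d = 3 and p = 2 one recovers the classical embedding H^1 in L^6.
print(conjugate_exponent(2.0), sobolev_exponent(2.0, 3))  # 2.0 6.0
```

For instance, with $p_{\rm G}>d$ as assumed below, \texttt{sobolev\_exponent} returns $\infty$, reflecting that $W^{1,p_{\rm G}}(\varOmega)$ embeds into continuous functions.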
Given the fixed time interval $I=[0,T]$, we denote by $L^p(I;X)$ the standard Bochner space of Bochner-measurable mappings $u: I\to X$, where $X$ is a Banach space. Moreover, $W^{k,p}(I;X)$ denotes the Banach space of mappings in $L^p(I;X)$ whose $k$-th distributional derivative in time is also in $L^p(I;X)$. Let us list here the assumptions on the data which are used in the following: \begin{subequations}\label{ass} \begin{align}\nonumber &\FE:\mathbb{R}^{d\times d}\to[0,+\infty]\ \text{ continuously differentiable on }\ {\rm GL}^+(d), \ \exists\,\epsilon>0, \ p_{\rm G}\in (d,2^*), \ r> p_{\rm G}d/(p_{\rm G}-d), \\ &\qquad\FE(F_{\rm el})\ge\begin{cases}\epsilon /(\det F_{\rm el})^r\!\!\!&\text{if }\ \det F_{\rm el}>0, \\[-.2em]\quad+\infty&\text{if }\ \det F_{\rm el}\le0,\end{cases} \label{ass-FM} \\[-.2em]\nonumber &\FH:\mathbb{R}^{d\times d}\to[0,+\infty]\ \text{ continuously differentiable on }\ {\rm GL}^+(d), \ \exists\,\epsilon>0, \ s > 2^*d/(2^*-d), \\ &\qquad\FH(\PP)\ge\begin{cases}\epsilon /(\det\PP)^s\!\!\!&\text{if }\ \det\PP>0, \label{ass-plast-large-HD-growth} \\[-.2em]\quad+\infty&\text{if }\ \det\PP\le0,\end{cases} \\[-.2em]\nonumber &\FG:\mathbb{R}^{d\times d\times d}\to[0,+\infty)\ \text{ convex, continuously differentiable}, \ \exists\,\epsilon>0: \\[-.2em]\nonumber &\qquad\forall G,\widetilde G\in \mathbb{R}^{d\times d\times d}:\ \ \ ({\rm D}\FG(G){-}{\rm D}\FG(\widetilde G))\vdots(G{-}\widetilde G)\ge\epsilon|G{-}\widetilde G|^{p_{\rm G}} \\[-.2em] &\hspace{11em}\FG(G)\ge\epsilon|G|^{p_{\rm G}}\,,\ \ \ |{\rm D}\FG(G)|\le(1+|G|^{p_{\rm G}-1})/\epsilon\,, \label{ass-G}\\ \label{ass-M-K}&\varrho>0,\ \ \nu_{\rm m},\nu_{\rm kv},\nu_{\rm h}>0, \\&\nonumber y_0\!\in\! W^{2,p_{\rm G}} (\varOmega)^d,\ \ v_0\!\in\! L^2(\varOmega)^d,\ \ \PP_0\!\in\!
H^2 (\varOmega)^{d\times d},\ \ \\&\qquad \label{ass-IC}\FE(\nabla y_0\PP_0^{-1})\!\in\!L^1(\varOmega),\ \ \FH(\PP_0)\!\in\!L^1(\varOmega), \\&\label{ass-load} f\in L^1(I;L^2(\varOmega)^d)+L^2(I;L^1(\varOmega)^d),\ \ \ g\in L^2(I;L^1(\varGamma)^d)\,. \end{align}\end{subequations} A prototypical choice for $\FG$ satisfying \eqref{ass-G} is $\FG(\cdot)=|\cdot|^{p_{\rm G}}$. The restriction $p_{\rm G}<2^*$ will be instrumental for estimates \eqref{nabla-Fel-strongly} and \eqref{first5} below. The definition of weak solutions follows directly from system \eqref{abstract}. It can be recovered by formally testing both equations in \eqref{evol} by smooth functions and using the Green formulas \eqref{green} together with the surface Green formula \eqref{greenS}, the boundary conditions \eqref{BC}, and repeated integration by parts in time, taking into account the initial conditions \eqref{IC}. Altogether, we arrive at the following definition. \begin{definition}[Weak formulation of \eqref{evol} with \eqref{BC}-\eqref{IC}]\label{def} The pair $(y,\PP)$ satisfying \begin{subequations}\label{weak-form-}\begin{align}\nonumber &y\in L^\infty(I;W^{2,p_{\rm G}}(\varOmega)^d)\cap H^1(I;L^2(\varOmega)^d) \ \ \text{ with }\ \ \nabla y^\top\nabla y\in H^1(I;L^2(\varOmega)^{d\times d})\,, \\[-.2em]&\qquad\qquad\varSigma\in L^2(I{\times}\varOmega)^{d\times d},\quad \det\nabla y>0\,,\ \ \text{ and }\ \ \ \frac1{\det\nabla y}\in L^\infty(I{\times}\varOmega)\,, \label{weak-form-y} \ \ \text{ and} \\[-.2em]\label{weak-form-Pi} &\PP\in H^1(I;H^2(\varOmega)^{d\times d}) \ \ \text{ with }\ \ \det\PP>0\ \ \text{ and }\ \ \ \frac1{\det\PP}\in L^\infty(I{\times}\varOmega) \end{align}\end{subequations} is called a \emph{weak solution} to the initial-boundary-value problem \eqref{evol}, \eqref{BC}--\eqref{IC} if the following two identities hold with $\varSigma$ from \eqref{evol3}: \begin{itemize} \item[\rm (i)] The \emph{weak formulation of the momentum balance} \eqref{evol1} with the boundary conditions
\eqref{BC1}--\eqref{BC12} and the first two initial conditions in \eqref{IC}: \begin{subequations} \label{weak-form}\begin{align}\nonumber \int_0^T\!\!\int_\varOmega\Big({\rm D}\FE(\nabla y\PP^{-1}){:}(\nabla\widetilde y\,\PP^{-1}) +\varrho y{\cdot}\DDT{\widetilde y} +2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\varSigma\PP^{-1}\big):\nabla\widetilde y \\[-.4em]&\nonumber\qquad\qquad +{\rm D}\FG(\nabla(\nabla y\PP^{-1})){\vdots}\nabla(\nabla\widetilde y\PP^{-1}) \Big)\,\d x\, \d t =\int_0^T\!\!\int_\varOmega\! f{\cdot}\widetilde y\,\d x\, \d t \\[-.4em]&\qquad\qquad\qquad\qquad+\int_0^T\!\!\int_{\varGamma}\! g{\cdot}\widetilde y\,\d S\, \d t +\int_\varOmega\!\varrho v_0{\cdot}\widetilde y(0)-\varrho y_0{\cdot}\DT{\widetilde y}(0)\,\d x \label{momentum-weak}\end{align} holds for any $\widetilde y$ smooth with $\widetilde y(T)=\DT{\widetilde y}(T) =0$. \item[\rm (ii)] The \emph{weak formulation of the creep flow rule} \eqref{evol2} with the boundary conditions \eqref{BC21}--\eqref{BC22} and the last initial condition in \eqref{IC}: \begin{align}\nonumber &\int_0^T\!\!\int_\varOmega\Big( \nabla y^\top{\rm D}\FE(\nabla y\PP^{-1}){:}{\rm D}(\PP^{-1})+{\rm D}\FH(\PP) -2\,{\rm sym}\,\big(\PP^{-\top}\nabla y^{\top}\nabla y\PP^{-1}\varSigma\PP^{-1}\big) \Big) {:}{\widetilde\PP} \\[-.5em]&\nonumber\qquad - \nu_{\rm m}\PP{:}\DT{\widetilde\PP} +{\rm D}\FG(\nabla(\nabla y\PP^{-1}))\vdots\nabla\big(\nabla y{\rm D}(\PP^{-1}){:}{\widetilde\PP}\big) -\nu_{\rm h}\nabla^2\PP\vdots\nabla^2\DT{\widetilde\PP} \,\d x\, \d t \\[-.0em]&\qquad\qquad=\int_\varOmega \nu_{\rm m}\PP_0{:}{\widetilde\PP}(0) +\nu_{\rm h}\nabla^2\PP_0\vdots\nabla^2{\widetilde\PP}(0)\,\d x \label{weak-form-P} \end{align}\end{subequations} holds for any ${\widetilde\PP}$ smooth with ${\widetilde\PP}(T)=0$.
\end{itemize} \end{definition} Let us note that, due to \eqref{weak-form-Pi}, we also have $\PP^{-1}=\Cof\PP^\top/\det\PP \in L^\infty(I{\times}\varOmega)^{d\times d}$, as well as ${\rm D}\FG(\nabla(\nabla y\PP^{-1}))\in L^\infty(I;L^{p_{\rm G}'}(\varOmega)^{d\times d\times d})$, so that all integrands in \eqref{weak-form} are well-defined as $L^1$-functions. Our main analytical result is an existence theorem for weak solutions. This is to be seen as a mathematical consistency property of the proposed model. It reads as follows. \begin{theorem}[Existence of weak solutions]\label{thm} Let the assumptions \eqref{ass} hold. Then, there exists a weak solution $(y,\PP)$ in the sense of Definition~{\rm {\ref{def}}}. \end{theorem} \begin{proof} As we are working in reference (Lagrangian) coordinates and aim at testing by partial derivatives in time, we can advantageously use the Galerkin discretisation method in space. Let us fix a nested sequence of finite-dimensional subspaces $V_k\subset W^{2,\infty}(\varOmega)$, $k\in\mathbb{N}$, whose union is dense in $W^{2,\infty}(\varOmega)$. We will use this sequence for all components of the deformations $y$ and of the inelastic strains $\PP$. Without loss of generality, we may consider approximations of the initial conditions $y_{0,k}\in V_k^d$, $v_{0,k}\in V_k^d$, and $\PP_{0,k}\in V_k^{d\times d}$ such that \begin{subequations}\label{IC-approx}\begin{align} &&&&&y_{0,k}\to y_0&&\text{strongly in }W^{2,p_{\rm G}}(\varOmega)^d,&&&&&&&& \\ &&&&&v_{0,k}\to v_0&&\text{strongly in }L^2(\varOmega)^d, \\ &&&&&\PP_{0,k}\to\PP_0&&\text{strongly in }H^2(\varOmega)^{d\times d}.
\end{align}\end{subequations} Existence of a finite-dimensional approximate solution $(y_k,\PP_k)\in W^{2,1}(I;V_k^d)\times C^1(I;V_k^{d\times d})$ of the initial-value problem for the system of nonlinear ordinary differential equations arising from the Galerkin approximation is standard, also using successive prolongation based on uniform $L^\infty$ estimates. Such estimates can be obtained by testing the discrete-in-space equations by $\DT y_k$ and $\DT\PP_k$. This leads to the energy balance \eqref{energy} for the Galerkin approximations $(y_k,\PP_k)$. Starting from the energy balance, by using the Gronwall and H\"older inequalities, we obtain a-priori estimates independent of $k$, namely, \begin{subequations}\label{est}\begin{align}\label{est1} &\{y_k\}_{k\in\mathbb{N}}^{}\ \ \text{ is bounded in }\ W^{1,\infty}(I;L^2(\varOmega)^d), \\\label{est2} &\{\PP_k\}_{k\in\mathbb{N}}^{}\ \ \text{ is bounded in }\ H^1(I;H^2(\varOmega)^{d\times d})\subset L^\infty(I{\times}\varOmega)^{d\times d} ,\\\label{est3} &\{F_{{\rm el},k}\}_{k\in\mathbb{N}}^{}=\{\nabla y_k \PP^{-1}_k\}_{k\in\mathbb{N}}^{}\ \ \text{ is bounded in }\ L^\infty(I;W^{1,p_{\rm G}}(\varOmega)^{d\times d}),\\\label{est4} &\{C_{{\rm el},k}\}_{k\in\mathbb{N}}^{}=\{F_{{\rm el},k}^\top F_{\rm el,k}\}_{k\in\mathbb{N}}^{}\ \ \text{ is bounded in }\ H^1(I;L^2(\varOmega)^{d\times d})\,. \intertext{Next, we use the classical Healey-Kr\"omer \cite{HeaKro09IWSS} argument, here applied to the plastic strain instead of the deformation gradient, as already exploited in \cite{RouSte19FTCS}. This is based on the $L^\infty$-bound on $\PP_k$ and on the sufficiently fast blow-up of $\FH$, as assumed in \eqref{ass-plast-large-HD-growth}. It is important that the argument in \cite{HeaKro09IWSS} holds even at the discrete level (as already realized in \cite{KruRou19MMCM,MieRou20TKVR}) and ensures that $\det\PP_k\ge\delta$ for all time instants and for some $\delta>0$ independent of $k$.
In particular, we also have that} &\label{P-1-bound} \{\PP_k^{-1}\}_{k\in\mathbb{N}}^{}\ \ \text{ is bounded in }\ L^\infty(I\times \varOmega)^{d\times d}. \intertext{From (\ref{est2})--(\ref{est3}) we get that $\{\nabla y_k\}_{k\in\mathbb{N}}^{}=\{F_{{\rm el},k}\PP_k\}_{k\in\mathbb{N}}^{}$ is bounded in $L^\infty(I\times \varOmega)^{d\times d}$. From \eqref{est3} we find that $\nabla(\nabla y_k\PP_k^{-1}) = (\PP_k^{-\top}\nabla(\nabla y_k)^\top)^{\rm t}+ \nabla y_k{\rm D}(\PP_k^{-1})\nabla\PP_k $ is bounded in $L^\infty(I;L^{p_{\rm G}}(\varOmega)^{d\times d\times d})$. This in particular implies that} &\{\nabla(\nabla y_k)^\top\}_{k\in\mathbb{N}}^{}=\Big\{\PP_k^\top\Big(\nabla(\nabla y_k\PP_k^{-1}){-}\nabla y_k{\rm D}(\PP_k^{-1})\nabla\PP_k\Big)^{\rm t}\Big\}_{k\in\mathbb{N}}^{}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\text{ is bounded in }\ L^\infty(I;L^{p_{\rm G}}(\varOmega)^{d\times d\times d}).\label{est_last} \intertext{From \eqref{est1}, we know that $\{y_k\}_{k\in\mathbb{N}}^{}$ is bounded in $L^\infty(I;L^2(\varOmega)^d)$, so that \eqref{est_last} yields a bound on $\{y_k\}_{k\in\mathbb{N}}^{}$ in $L^\infty(I;W^{2,p_{\rm G}}(\varOmega)^d)$. We proceed by showing that \eqref{est4} yields the estimate} &\{\nabla\DT y_k\}_{k\in\mathbb{N}}^{} \text{ is bounded in }\ L^2(I{\times}\varOmega)^{d\times d}. \label{est-of-DT-nabla-y} \end{align} \end{subequations} To prove \eqref{est-of-DT-nabla-y} we argue as in \cite[Sect.\,9.4.3]{KruRou19MMCM}. We preliminarily observe that, by the growth condition from below on $\FE$ in \eqref{ass-FM}, as well as by the super-quadratic growth of $\FG$ in \eqref{ass-G}, the Healey-Kr\"omer argument yields the existence of $\delta_{\rm el}>0$ such that $${\rm det}\, F_{{\rm el},k}\geq \delta_{\rm el}\quad\text{ in }I\times \varOmega$$ for every $k\in \mathbb{N}$.
By combining the Cauchy-Binet formula with the bound in \eqref{P-1-bound}, we find that $$ \frac1{\det\nabla y_k}=\frac1{\det(\nabla y_k\PP_k^{-1}\PP_k)} =\frac1{\det(\nabla y_k\PP_k^{-1})}\frac1{\det\PP_k}\, $$ is uniformly bounded in $L^\infty(I\times \varOmega)$. Property \eqref{est-of-DT-nabla-y} now follows by applying the generalized Korn inequality by Neff \cite{Neff02KFIN} and Pompe \cite{Pomp03KFIV}, as exploited for the Kelvin-Voigt rheology in \cite[Thm.\,3.3]{MieRou20TKVR}. For all $k \in {\mathbb N}$ the pair $(y_k,\PP_k)$ fulfills the weak formulation \eqref{weak-form} with the initial conditions approximated as in \eqref{IC-approx}, provided that the test functions take values in the finite-dimensional spaces. In particular, we have \begin{subequations}\label{weak-form-k}\begin{align}\nonumber & \int_0^T\!\!\int_\varOmega\Big({\rm D}\FE(\nabla y_k\PP^{-1}_k){:}(\nabla\widetilde y_k\,\PP^{-1}_k) +\varrho y_k{\cdot}\DDT{\widetilde y}_k +2\,{\rm sym}\,\big(\PP^{-\top}_k\nabla y_k^{\top}\varSigma_k\PP_k^{-1}\big){:}\nabla\widetilde y_k \\[-.4em]&\nonumber\qquad +{\rm D}\FG(\nabla(\nabla y_k\PP_k^{-1})){\vdots}\nabla(\nabla\widetilde y_k\PP^{-1}_k) \Big)\,\d x\, \d t =\int_0^T\!\!\int_\varOmega\! f{\cdot}\widetilde y_k\,\d x\, \d t \\[-.4em]&\qquad\qquad+\int_0^T\!\!\int_{\varGamma}\!
g{\cdot}\widetilde y_k\,\d S\, \d t +\int_\varOmega\!\varrho v_0{\cdot}\widetilde y_k(0)-\varrho y_0{\cdot}\DT{\widetilde y}_k(0)\,\d x \label{momentum-weak-k}\\\nonumber &\int_0^T\!\!\int_\varOmega\Big( \nabla y^\top_k{\rm D}\FE(\nabla y_k\PP^{-1}_k){:}{\rm D}(\PP^{-1}_k)+{\rm D}\FH(\PP_k) -2\,{\rm sym}\,\big(\PP^{-\top}_k\nabla y^{\top}_k\nabla y_k\PP^{-1}_k\varSigma_k\PP^{-1}_k\big) \Big) {:}{\widetilde\PP_k} \\[-.5em]&\nonumber\qquad - \nu_{\rm m}\PP_k{:}\DT{\widetilde\PP}_k +{\rm D}\FG(\nabla(\nabla y_k\PP^{-1}_k)){\vdots}\nabla\big(\nabla y_k{\rm D}(\PP^{-1}_k):{\widetilde\PP}_k\big) -\nu_{\rm h}\nabla^2\PP_k{\vdots}\nabla^2\DT{\widetilde\PP}_k \,\d x\, \d t \\[-.0em]&\qquad\qquad=\int_\varOmega \nu_{\rm m}\PP_0{:}{\widetilde\PP}_k(0) +\nu_{\rm h}\nabla^2\PP_0{\vdots}\nabla^2{\widetilde\PP}_k(0)\,\d x \label{weak-form-P-k} \end{align}\end{subequations} for all $\widetilde y_k\in C^2(I;V_k^d)$ and $\widetilde\PP_k\in C^1(I;V_k^{d\times d})$ with $\widetilde y_k(T)=\DT{\widetilde y}_k(T) =0$ and $\widetilde\PP_k(T)=0$. We are hence ready to address the convergence of $\{(y_k,\PP_k)\}_{k \in {\mathbb N}}$ as $k\to\infty$. By the Banach selection principle and the Aubin-Lions compact-embedding theorem, we select a (not relabeled) subsequence converging with respect to the weak* topologies indicated in \eqref{est}.
In particular, we have that \begin{subequations}\label{conv} \begin{align} &y_k \to y \quad \text{weakly* in}\ W^{1,\infty}(I;L^2(\varOmega)^d)\cap L^\infty(I;W^{2,p_{\rm G}}(\varOmega)^d) \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{and strongly in}\ C(I{\times}\bar\varOmega)^d , \label{conv1} \\ \label{conv2} &\PP_k\to \PP \quad \text{weakly in}\ H^1(I;H^2(\varOmega)^{d\times d}) \ \text{and strongly in}\ L^\infty(I{\times}\varOmega)^{d\times d}, \\ \label{conv3} &\PP_k^{-1}\to \PP^{-1} \quad \text{strongly in}\ L^\infty(I{\times}\varOmega)^{d\times d},\\ \label{conv4} &F_{{\rm el},k}=\nabla y_k \PP^{-1}_k\to F_{\rm el} =\nabla y \PP^{-1}\quad \text{weakly* in}\ L^\infty(I;W^{1,p_{\rm G}}(\varOmega)^{d\times d}),\\ \label{conv5} &C_{{\rm el},k} =F_{{\rm el},k}^\top F_{\rm el,k} \to C_{\rm el} =F_{\rm el}^\top F_{\rm el} \quad \text{weakly in}\ H^1(I;L^2(\varOmega)^{d\times d}). \end{align} \end{subequations} In fact, using the Aubin-Lions theorem in the context of the Galerkin method requires some attention when the time derivatives are estimated only in some locally convex space (or, alternatively, when only their Hahn-Banach extension is estimated in a Banach space), as commented in \cite[Sect.\,8.4]{Roub13NPDE}. The convergence of $\PP^{-1}_k$ is obtained by exploiting the formula $\PP^{-1}_k = {\rm Cof} \PP_k^\top/{\rm det} \PP_k$, the uniform lower bound ${\rm det}\,\PP_k\geq\delta$, and the local Lipschitz continuity of the cofactor and determinant mappings on matrices with determinant bounded from below.
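The cofactor identity invoked here for the convergence of $\PP_k^{-1}$ can be spot-checked numerically. The following sketch is purely illustrative (the helper name \texttt{cof} is ours) and mimics the situation of matrices with determinant bounded away from zero:

```python
import numpy as np

def cof(P: np.ndarray) -> np.ndarray:
    """Cofactor matrix, computed via Cof(P) = det(P) * P^{-T} for invertible P."""
    return np.linalg.det(P) * np.linalg.inv(P).T

rng = np.random.default_rng(0)
# a matrix close to the identity, so its determinant stays bounded away from zero
P = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
assert np.linalg.det(P) > 0  # plays the role of the uniform bound det P_k >= delta

# the formula P^{-1} = Cof(P)^T / det(P) exploited in the text
Pinv = cof(P).T / np.linalg.det(P)
print(np.allclose(Pinv, np.linalg.inv(P)))  # True
```

Since the entries of $\mathrm{Cof}\,\PP_k$ are polynomials in the entries of $\PP_k$, the map $\PP_k\mapsto\PP_k^{-1}$ is Lipschitz on any set where $\det\PP_k\geq\delta$ and $\PP_k$ is bounded, which is exactly what the uniform bounds above provide.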
By recalling that ${\rm D}(\PP^{-1}){:}A = - \PP^{-1}A \PP^{-1}$ for all $A \in \mathbb{R}^{d\times d}$, one readily checks that \begin{align} \label{strong2} & {\rm D}(\PP^{-1}_k) \to {\rm D}(\PP^{-1}) \ \ \text{strongly in} \ \ L^\infty(I{\times}\varOmega)^{d\times d\times d\times d },\\ &\nabla (\PP^{-1}_k)= {\rm D}(\PP^{-1}_k){:}\nabla \PP_k \to {\rm D}(\PP^{-1}){:}\nabla \PP = \nabla (\PP^{-1}) \nonumber\\ &\label{strong3}\qquad\qquad\qquad \text{strongly in} \ \ L^q(I{\times}\varOmega)^{d\times d\times d } \quad \forall q<2^*. \end{align} We further proceed by proving that \begin{align} \label{strong4} \nabla (\nabla y_k^{}\PP^{-1}_k)\to\nabla (\nabla y\PP^{-1})\qquad\qquad\qquad \text{strongly in} \ \ L^{p_{\rm G}}(I{\times}\varOmega)^{d\times d\times d}. \end{align} By the uniform monotonicity of ${\rm D}\FG$ in \eqref{ass-G}, we find \begin{align}\nonumber &\nonumber\epsilon \big\|\nabla(\nabla y_k\PP^{-1}_k) - \nabla (\nabla y\PP^{-1}) \big\|_{L^{p_{\rm G}}(I{\times}\varOmega)^{d\times d\times d}}^{p_{\rm G}} \\ &\nonumber\quad\le \int_0^T\!\!\!\int_\varOmega \big({\rm D}\FG(\nabla(\nabla y_k\PP^{-1}_k))-{\rm D}\FG(\nabla(\nabla y\PP^{-1}))\big)\vdots \big(\nabla(\nabla y_k\PP^{-1}_k)-\nabla(\nabla y\PP^{-1})\big)\,\d x\,\d t \\ &\nonumber\quad= \int_0^T\!\!\!\int_\varOmega {\rm D}\FG(\nabla(\nabla y_k\PP_k^{-1}))\vdots\nabla (\nabla(y_k{-}y)\PP^{-1}_k)\,\d x\,\d t \\[-.2em]&\nonumber\quad\qquad+ \int_0^T\!\!\!\int_{\varOmega} {\rm D}\FG(\nabla(\nabla y_k\PP_k^{-1}))\vdots\nabla(\nabla y(\PP^{-1}_k{-}\PP^{-1}))\,\d x\, \d t \\[-.2em]&\nonumber\quad\qquad\qquad - \int_0^T\!\!\!\int_{\varOmega}\!{\rm D}\FG(\nabla (\nabla y\PP^{-1}))\vdots\big(\nabla (\nabla y_k\PP^{-1}_k){-}\nabla (\nabla y\PP^{-1})\big)\,\d x\,\d t \ =:\ I_{1,k}+I_{2,k}+I_{3,k}\,, \end{align} where $\epsilon>0$ is from \eqref{ass-G}.
We have $I_{3,k}\to0$ owing to \eqref{conv4} and to ${\rm D}\FG(\nabla (\nabla y\PP^{-1}))\in L^\infty(I;L^{p_{\rm G}'}(\varOmega)^{d\times d\times d})$, which follows from the growth assumption in \eqref{ass-G}. Also $I_{2,k}\to0$, since \begin{align} \nabla (\nabla y(\PP^{-1}_k{-}\PP^{-1})) = ((\PP^{-\top}_k{-}\PP^{-\top})\nabla(\nabla y)^\top)^{\rm t} +\nabla y(\nabla\PP^{-1}_k{-}\nabla\PP^{-1}) \to0 \label{nabla-Fel-strongly}\end{align} strongly in $L^{p_{\rm G}}(I{\times}\varOmega)^{d\times d\times d}$ by \eqref{conv3}; here we also used the convergence $\nabla\PP^{-1}_k\to\nabla\PP^{-1}$ strongly in $L^{p_{\rm G}}(I{\times}\varOmega)^{d\times d\times d}$, which follows from \eqref{conv2}. To prove that also the term $I_{1,k}$ converges to $0$, we test the momentum equation for the Galerkin approximants by $y_k{-}\widetilde y_k$, where $\widetilde y_k$ is an approximation of the limit $y$ which takes values in the finite-dimensional subspaces $V_k^d$ and converges to $y$ in $L^2(I;W^{2,p_{\rm G}}(\varOmega)^d)\cap H^1(I;L^2(\varOmega)^d)$. We further assume $\widetilde y_k(0)=y_{0,k}$. Note that here $y_k{-}\widetilde{y}_k$ is not $C^2(I;H^2(\varOmega)^d)$ but only $W^{1,\infty}(I;H^2(\varOmega)^d)$. Nevertheless, this regularity is enough to argue differently from \eqref{momentum-weak-k}, integrating by parts in time only once. Note also that $y_k(T)-\widetilde{y}_k(T)\neq 0$; for this reason, a further term at time $T$ appears in the equation below.
Altogether, \begin{align}\nonumber I_{1,k}&=\int_0^T\!\!\int_\varOmega {\rm D}\FG(\nabla(\nabla y_k\PP_k^{-1}))\vdots (\nabla (\nabla(\widetilde y_k{-}y)\PP^{-1}_k)) +\varrho\DT y_k{\cdot}(\DT y_k{-}\DT{\widetilde y}_k) +f{\cdot}(y_k{-}\widetilde y_k) \\[-.3em]&\nonumber\qquad\quad -{\rm D}\FE(\nabla y_k\PP^{-1}_k){:}(\nabla(y_k{-}\widetilde y_k)\PP^{-1}_k) -2\,{\rm sym}\,\big(\PP^{-\top}_k\nabla y_k^{\top}\varSigma_k\PP_k^{-1}\big){:}\nabla(y_k{-}\widetilde y_k)\,\d x\, \d t \\[-.1em]&\nonumber\qquad\qquad\qquad\qquad -\int_\varOmega \DT y_k(T){\cdot}(y_k(T){-}\widetilde y_k(T))\,\d x +\int_0^T\!\!\int_{\varGamma}\! g{\cdot}(y_k{-}\widetilde y_k)\,\d S\, \d t\to0\,. \end{align} Then, from \eqref{est-of-DT-nabla-y}, using (the above mentioned generalization of) the Aubin-Lions theorem and exploiting information on $\DDT y_k$ obtained via a comparison argument in the discrete variant of \eqref{evol1} for the Galerkin approximants, we infer that $$ \DT y_k\to \DT y\quad\text{ strongly in }L^2(I\times \varOmega)^d, $$ and $$ \DT y_k(T)\to \DT y(T)\quad\text{ weakly in }L^2( \varOmega )^d. $$ By \eqref{est2}, \eqref{conv3}, \eqref{conv4}, and \eqref{conv5} we conclude that $I_{1,k}\to 0$ and obtain \eqref{strong4}. It remains to prove that $(y,\PP)$ is a weak solution in the sense of Definition \ref{def}. Let $\widetilde y$ and $\widetilde \PP$ be smooth with $\widetilde y(T)=\DT{\widetilde y}(T)=0$ and $\widetilde{\PP}(T)=0$, and approximate them via sequences $\widetilde{y}_k$ and $\widetilde{\PP}_k$ as in \eqref{weak-form-k}, so that $\widetilde{y}_k \to\widetilde y$ strongly in $H^2(I;W^{2,p_{\rm G}}(\varOmega)^d)$ and $\widetilde{\PP}_k \to \widetilde{\PP}$ strongly in $H^1(I;H^2(\varOmega)^{d\times d})$. One needs to check that the convergences \eqref{conv} are sufficient to pass to the limit in all terms in \eqref{weak-form-k}. Let us start with the momentum balance \eqref{momentum-weak-k}.
By the continuity of the superposition operator we have that \begin{align} &{\rm D}\FE(\nabla y_k\PP_k^{-1})\PP^{-\top}_k\to{\rm D}\FE(\nabla y\PP^{-1}) \PP^{-\top}\ \ \text{strongly in} \ \ L^\infty(I{\times}\varOmega)^{d\times d}\,, \label{first1} \end{align} cf.\ the growth condition \eqref{ass-FM}. Estimate \eqref{est4} ensures that \begin{equation}\label{first2}\varSigma_k=\nu_{\rm kv}\DT C_{{\rm el},k} \to \varSigma \ \ \text{weakly in} \ L^2(I{\times}\varOmega)^{d\times d}. \end{equation} The limit $\varSigma$ can be identified as $\varSigma=\nu_{\rm kv}\DT C_{\rm el}$ thanks to the convergence \eqref{conv5}. Owing to \eqref{first2}, \eqref{est_last}, and \eqref{conv3}, we deduce that \begin{align} &\PP^{-\top}_k \nabla y_k^\top \varSigma_k \PP^{-1}_k \to \PP^{-\top}\nabla y^\top \varSigma \PP^{-1} \ \ \text{weakly in} \ L^2(I{\times}\varOmega)^{d\times d}\,. \label{first3} \end{align} Let us now compute \begin{align*} {\rm D}\FG(\nabla (\nabla y_k \PP^{-1}_k)){\vdots} \nabla (\nabla \widetilde y_k \PP^{-1}_k)&= {\rm D}\FG(\nabla (\nabla y_k \PP^{-1}_k)){\vdots} (\PP^{-\top}_k\nabla(\nabla\widetilde y_k)^\top)^{\rm t} \\[-.3em]&\ \ \ +{\rm D}\FG(\nabla (\nabla y_k \PP^{-1}_k)){\vdots} (\nabla \widetilde y_k {\rm D}( \PP^{-1}_k){:}\nabla \PP_k)\,. \end{align*} The convergences \eqref{conv} suffice to pass to the weak limit in both terms on the right-hand side.
In fact, taking into account \eqref{strong2} and \eqref{strong4}, we have the following strong convergences (even though weak ones would be enough for our existence proof): \begin{align} & {\rm D}\FG(\nabla(\nabla y_k\PP^{-1}_k)){\vdots} (\PP^{-\top}_k\nabla(\nabla\widetilde y_k)^\top)^{\rm t}\to {\rm D}\FG(\nabla (\nabla y \PP^{-1})){\vdots} (\PP^{-\top}\nabla(\nabla\widetilde y)^\top)^{\rm t} \nonumber\\ &\hspace*{14em} \text{strongly in} \ L^p(I;L^{p_{\rm G}'}(\varOmega)^{d\times d})\qquad \forall p<+\infty\,,\ \text{ and} \label{first4}\\ & {\rm D}\FG(\nabla (\nabla y_k \PP^{-1}_k)){\vdots} (\nabla \widetilde y_k \,{\rm D}( \PP^{-1}_k){:}\nabla \PP_k)\to {\rm D}\FG(\nabla (\nabla y \PP^{-1})){\vdots} (\nabla \widetilde y \,{\rm D}( \PP^{-1}){:}\nabla \PP) \nonumber\\ &\hspace*{14em} \text{strongly in} \ L^p(I;L^q(\varOmega)^{d\times d})\qquad \forall p<+\infty, \ q<\frac{2^*p_{\rm G}'}{2^*{+}p_{\rm G}'}.\label{first5} \end{align} Since all the remaining terms in the momentum balance \eqref{momentum-weak-k} are linear, the convergences \eqref{first1}--\eqref{first5} allow us to pass to the limit and obtain \eqref{momentum-weak}. Let us now move to the flow rule \eqref{weak-form-P-k}. Arguing as above, by \eqref{est_last} we have that \begin{align} & \nabla y^\top_k {\rm D} \FE(\nabla y_k \PP^{-1}_k){:}{\rm D} (\PP^{-1}_k) \to \nabla y^\top {\rm D} \FE(\nabla y \PP^{-1}){:}{\rm D} (\PP^{-1}) \nonumber\\ &\qquad \ \text{ strongly in } L^\infty(I{\times}\varOmega)^{d\times d}\,.
\label{second1} \end{align} By using again the convergence \eqref{first2}, we also get that \begin{align} & \PP^{-\top}_k \nabla y_k^\top \nabla y_k \PP^{-1}_k \varSigma_k \PP^{-1}_k \to \PP^{-\top}\nabla y^\top \nabla y \PP^{-1}\varSigma \PP^{-1}\quad \text{weakly in} \ L^2(I{\times}\varOmega)^{d\times d}\,. \label{second2} \end{align} Finally, we use the convergences \eqref{conv1}, \eqref{strong3}, and \eqref{strong4} in order to check that \begin{align*} &{\rm D}\FG(\nabla(\nabla y_k\PP^{-1}_k)){\vdots}\nabla\big(\nabla y_k{\rm D}(\PP^{-1}_k){:}{\widetilde\PP}_k\big) = -{\rm D}\FG(\nabla(\nabla y_k\PP^{-1}_k)){\vdots}\nabla\big(\nabla y_k \PP^{-1}_k {\widetilde\PP}_k \PP^{-1}_k \big)\nonumber\\ &= - {\rm D}\FG(\nabla(\nabla y_k\PP^{-1}_k)){\vdots}\big( [(\widetilde{\PP}_k\PP_k^{-1})^{\top}\nabla(\nabla y_k\PP^{-1}_k)^t]^t+ \nabla y_k\PP^{-1}_k\nabla (\widetilde{\PP}_k\PP_k^{-1})\big)\nonumber\\ &\qquad \to {\rm D}\FG(\nabla(\nabla y\PP^{-1})){\vdots}\nabla\big(\nabla y{\rm D}(\PP^{-1}){:}{\widetilde\PP}\big) \ \ \text{strongly in} \ L^1(I{\times}\varOmega)^{d\times d\times d}. \end{align*} All remaining terms in the flow rule \eqref{weak-form-P-k} are linear, and the convergences \eqref{second1}--\eqref{second2} suffice to pass to the limit and obtain \eqref{weak-form-P}. \end{proof} \section*{Acknowledgments} This research has been partially supported by the CSF (Czech Science Foundation) project 19-04956S, by the M\v SMT \v CR (Ministry of Education of the Czech Rep.) project CZ.02.1.01/0.0/0.0/15-003/0000493, by the Austrian Science Fund (FWF) projects F\,65, I\,2375, P\,27052, I\,4052, and V\,662, by the Vienna Science and Technology Fund (WWTF) through project MA14-009, by the BMBWF through the OeAD-WTZ project CZ04/2019, and by the institutional support RVO: 61388998 (\v CR). Moreover, T.R.\ is thankful for the hospitality and support of the University of Vienna. \bibliographystyle{alpha}
\section{Green's operator of a linear elastic material } \label{appendix1} The auxiliary problem of a homogeneous material with stiffness ${\bf c}^0$ subject to a periodic polarization field ${\bfmit \mathchar"711C}$ plays an important role in the proposed method. Its solution, which can be found in several textbooks ({\it e.g.} {\sc Mura} \cite{MUR87}), can be expressed in terms of the Fourier transform of the polarization field by means of the Fourier transform of the Green's operator of the following system of equations \begin{equation} \left. \begin{array}{c} {\bfmit \mathchar"711B}({\bf x}) = {\bf c}^0 : {\bfmit \mathchar"7122} ( {\bf u}^*({\bf x}) ) + {\bfmit \mathchar"711C}({\bf x}) \quad \forall {\bf x} \in V \\ \\ {\rm div}\, {\bfmit \mathchar"711B}({\bf x}) = {\bf 0} \quad \forall {\bf x} \in V ,\quad {\bfmit \mathchar"711B}{\cdot}{\bf n} \ -\#\ \hbox{(anti-periodic)} ,\quad {\bf u}^\ast \ \#\ \hbox{(periodic)} \end{array} \right\} \label{ap1} \end{equation} In Fourier space, these equations take the form \begin{equation} \hat \sigma_{ij}({\bfmit \mathchar"7118}) = {\rm i}\ c^0_{ijkh}\ \xi_ h\ \hat {u}^\ast_ {k} ({\bfmit \mathchar"7118}) + \hat {\tau}_{ij}( {\bfmit \mathchar"7118}),\quad {\rm i}\ \hat {\sigma}_{ij}( {\bfmit \mathchar"7118})\ \xi_ j = 0 . \label{ap2} \end{equation} (It is hoped that the index $i$ will not be confused with the complex number ${\rm i}=\sqrt{-1}$.) Eliminating $ \hat {\sigma}_{ ij} $ between the two equations in (\ref{ap2}) yields $$ K^0_{ik} ({\bfmit \mathchar"7118})\ \hat u^\ast_k({\bfmit \mathchar"7118}) = {\rm i}\ \hat {\tau}_{ ij} ({\bfmit \mathchar"7118})\ \xi_j, $$ where $ {\bf K}^0({\bfmit \mathchar"7118})$ denotes the acoustic tensor of the homogeneous material, $ K^0_{ik} ({\bfmit \mathchar"7118}) = c^0_{ijkh}\ \xi_h\ \xi_j .
$ Then $$ \hat u^\ast_ k({\bfmit \mathchar"7118}) = {\rm i}\ N^0_{ki} ({\bfmit \mathchar"7118})\ \hat {\tau}_{ ij} ({\bfmit \mathchar"7118})\ \xi_j = {{\rm i} \over 2}\ (N^0_{ki} ({\bfmit \mathchar"7118})\ \xi_j + N^0_{kj} ({\bfmit \mathchar"7118})\ \xi_i )\ \hat \tau_{ij} ({\bfmit \mathchar"7118}) , $$ where the symmetry of ${\bfmit \mathchar"711C}$ has been used and where $ {\bf N}^0 ({\bfmit \mathchar"7118})$ denotes the inverse of $ {\bf K}^0({\bfmit \mathchar"7118})$. Therefore \begin{equation} \hat{\varepsilon}_{kh}(u^\ast) = {{\rm i} \over 2} \left( \xi_ h\ \hat {u}^\ast_ k({\bfmit \mathchar"7118}) + \xi_ k\ \hat {u}^\ast_ h({\bfmit \mathchar"7118}) \right) = \hat{\Gamma}^0_{khij} ({\bfmit \mathchar"7118})\ \hat{ \tau}_{ij}({\bfmit \mathchar"7118}), \label{3.3} \end{equation} with \begin{equation} \hat{\Gamma}^0_{khij} = {1 \over 4} \left( N^0_{hi}({\bfmit \mathchar"7118})\ \xi_j\ \xi_k + N^0_{ki}({\bfmit \mathchar"7118})\ \xi_j\ \xi_h + N^0_{hj}({\bfmit \mathchar"7118})\ \xi_i\ \xi_k + N^0_{kj} ({\bfmit \mathchar"7118})\ \xi_i\ \xi_h \right) , \label{3.4} \end{equation} and \begin{equation} \hat{\tau}_{ij}({\bfmit \mathchar"7118}) = \langle\tau_{ij}({\bf x})\ e^{- {\rm i} {\bfmit \mathchar"7118}{\cdot}{\bf x}}\rangle. \label{3.5} \end{equation} The strain field induced at each point $\bf x$ of the unit cell $V$ by an initial stress ${\bfmit \mathchar"711C}$ can be determined from (\ref{3.3}), (\ref{3.4}) and (\ref{3.5}). These formulas give the explicit form of the operator ${\bf \Gamma}^{0}$ and of the operation $*$ considered in section 2: $$ {\bfmit \mathchar"710F}({\bf u} ^ \ast) = - {\bf \Gamma}^0 * {\bfmit \mathchar"711C} . $$ Detailed expressions of ${\bf \Gamma}^0$ can be found in {\sc Mura} \cite{MUR87} for different types of anisotropy of the reference medium.
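The assembly of $\hat{\Gamma}^0$ from the acoustic tensor in (\ref{3.4}) is easy to check numerically. The numpy sketch below is our own illustration (not from the text): it builds $K^0$, $N^0=(K^0)^{-1}$ and $\hat\Gamma^0$, and verifies the characteristic property that $\hat\Gamma^0:{\bf c}^0$ reproduces any compatible strain of the form ${\rm sym}({\bfmit\xi}\otimes{\bf a})$.

```python
import numpy as np

def isotropic_stiffness(lam, mu, d=3):
    """c0_ijkh = lam d_ij d_kh + mu (d_ik d_jh + d_ih d_jk)."""
    I = np.eye(d)
    return (lam * np.einsum('ij,kh->ijkh', I, I)
            + mu * (np.einsum('ik,jh->ijkh', I, I)
                    + np.einsum('ih,jk->ijkh', I, I)))

def gamma_hat(c0, xi):
    """Fourier coefficient of the Green operator, formula (3.4):
    K0_ik = c0_ijkh xi_h xi_j,  N0 = inv(K0),
    Gamma0_khij = (N0_hi xi_j xi_k + N0_ki xi_j xi_h
                   + N0_hj xi_i xi_k + N0_kj xi_i xi_h) / 4."""
    K0 = np.einsum('ijkh,h,j->ik', c0, xi, xi)
    N0 = np.linalg.inv(K0)
    return (np.einsum('hi,j,k->khij', N0, xi, xi)
            + np.einsum('ki,j,h->khij', N0, xi, xi)
            + np.einsum('hj,i,k->khij', N0, xi, xi)
            + np.einsum('kj,i,h->khij', N0, xi, xi)) / 4.0

c0 = isotropic_stiffness(lam=1.0, mu=1.0)
xi = np.array([1.0, 2.0, -0.5])
a = np.array([0.3, -1.0, 0.7])
eps = 0.5 * (np.outer(xi, a) + np.outer(a, xi))   # a compatible strain
G = gamma_hat(c0, xi)
# Gamma0 : c0 acts as the identity on compatible strains:
print(np.allclose(np.einsum('khij,ijlm,lm->kh', G, c0, eps), eps))  # True
```

The same routine applies to an arbitrary anisotropic ${\bf c}^0$ with minor symmetries, since only the inversion of the small $d\times d$ acoustic tensor is needed at each frequency.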
Its expression is particularly simple when the material is isotropic with Lam\'e coefficients $\lambda^0$ and $\mu^0$. In this case the stiffness, the acoustic tensor and its inverse read: $$ c^0_{ijkh} = \lambda^0 \delta_{ij} \delta_{kh} + \mu^0 \left(\delta_{ik} \delta_{jh} + \delta_{ih} \delta_{jk} \right), $$ $$K^0_{ij} ({\bf {\bfmit \mathchar"7118}}) = \left( \lambda^0+ \mu^ 0 \right) \xi_i \xi_j + \mu^0 \vert {\bf {\bfmit \mathchar"7118}} \vert^ 2 \delta_{ij}, $$ $$N^0_{ij} ({\bf {\bfmit \mathchar"7118}}) = {1 \over \mu^ 0 \vert {\bf {\bfmit \mathchar"7118}} \vert^ 2} \left( \delta_{ij} - {\xi_i\ \xi_j \over \vert {\bf {\bfmit \mathchar"7118}} \vert^2} {\lambda^0 + \mu^0 \over \lambda^0 + 2\mu^0} \right). $$ Therefore: $$ \hat{\Gamma}^0_{khij} ({\bf {\bfmit \mathchar"7118}}) = {1 \over 4 \mu^ 0 \vert {\bf {\bfmit \mathchar"7118}} \vert^ 2} \left( \delta_{ki} \xi_ h \xi_ j + \delta_{hi} \xi_ k \xi_ j\ + \delta_{kj} \xi_ h \xi_ i + \delta_{ hj} \xi_ k \xi_ i \right) - {\lambda^0+ \mu^0 \over \mu^0(\lambda^0+ 2\mu^0)} \ {\xi_i \xi_j \xi_k \xi_h \over \vert {\bf {\bfmit \mathchar"7118}} \vert^4} . $$ \section{Radial return algorithm} \label{appendix2} The equations governing a plastic material obeying a $J_2$ flow theory with isotropic hardening read~: \begin{equation} {\dot {\bfmit \mathchar"711B}} = {\bf c}:({\dot {\bfmit \mathchar"7122}}-{\dot {\bfmit \mathchar"7122}}^{p}), \quad \dot {{\bfmit \mathchar"7122}}^{p} = {3 \over 2} \ {\dot p} \ { \displaystyle {\bf s} \over \displaystyle \sigma_{eq} }, \label{B0} \end{equation} \begin{equation} \left. \begin{array}{c} \dot p = 0 \ {\rm when} \ {\sigma_{eq}}-{\sigma_{0}}(p) < 0,\\ \\ \dot p > 0 \ {\rm when} \ {\sigma_{eq}}-{\sigma_{0}}(p) = 0 . \end{array} \right\} \label{B1} \end{equation} ${\bfmit \mathchar"7122}^{p}$ is the plastic strain and $p$ is the equivalent plastic strain, defined by $ \dot p = \left( {2 \over 3} \ { \dot {{\bfmit \mathchar"7122}}^{p} } : { \dot {{\bfmit \mathchar"7122}}^{p} } \right)^{1/2}$. 
$\bf c$ is the stiffness tensor, assumed to be isotropic and characterized by a bulk modulus $k$ and a shear modulus $\mu$. \vskip 0.3cm Time is discretized into intervals $[t^n,t^{n+1}]$. $F^n$ denotes the value of a function $F$ at time $t^n$. ${\bfmit \mathchar"7122}^n$, ${\bfmit \mathchar"711B}^n$ and $p^n$ denote the strain, stress and equivalent plastic strain at time $t^n$. Given the mechanical fields at step $n$, and given the strain field ${\bfmit \mathchar"7122}^{n+1}$ at step $n+1$, the constitutive law amounts to finding the stress field ${\bfmit \mathchar"711B}^{n+1}$ and the equivalent plastic strain field $p^{n+1}$. Replacing time differentiation by a finite difference in (\ref{B0}) provides $$ {\bfmit \mathchar"711B}^{n+1} - {\bfmit \mathchar"711B}^{n} = {\bf c} \ : \ \left( {\bfmit \mathchar"7122}^{n+1} - {\bfmit \mathchar"7122}^{n} - {\dot{{\bfmit \mathchar"7122}}^p}{ }^{n+1} \times (t^{n+1}-t^n) \right) . $$ The elastic prediction is \begin{equation} {{\bfmit \mathchar"711B}}_T^{n+1} \ = \ {\bfmit \mathchar"711B}^n + {\bf c}: \left( {\bfmit \mathchar"7122}^{n+1}-{\bfmit \mathchar"7122}^{n} \right). \label{B2} \end{equation} After due account of plastic incompressibility, (\ref{B2}) gives $$ {\bfmit \mathchar"711B}^{n+1} = {{\bfmit \mathchar"711B}}_T^{n+1} - 2 (t^{n+1}-t^n)\ \mu \ {\dot{{\bfmit \mathchar"7122}}^p}{ }^{n+1}. 
$$ Alternatively, making use of the flow rule (\ref{B0}) and of the decomposition of ${\bfmit \mathchar"711B}^{n+1}$ into its spherical part and its deviator, one obtains \begin{equation} {\rm tr}({\bfmit \mathchar"711B}^{n+1}) = {\rm tr}({{\bfmit \mathchar"711B}}_T^{n+1}) = {\rm tr}({\bfmit \mathchar"711B}^{n}) + 3k \ {\rm tr}({\bfmit \mathchar"7122}^{n+1}-{\bfmit \mathchar"7122}^{n}) \label{B3} \end{equation} \begin{equation} {\bf s}^{n+1} = {\bf s}_T^{n+1} - {3 (t^{n+1}-t^n) \ \mu \ \dot{p}^{n+1} \ \over \displaystyle \sigma^{n+1}_{eq} } \ {\bf s}^{n+1} \label{B4} \end{equation} Assuming that there are no initial stresses or strains at time $t^0$, (\ref{B3}) can be re-written as $${\rm tr}({\bfmit \mathchar"711B}^{n+1}) = 3k \ {\rm tr}({\bfmit \mathchar"7122}^{n+1}).$$ The radial return method is based on the observation that, according to (\ref{B4}), the deviators ${\bf s}^{n+1}$ and ${\bf s}_T^{n+1} $ are proportional. The von Mises stresses associated with ${{\bfmit \mathchar"711B}^{n+1}}$ and ${\bfmit \mathchar"711B}_T^{n+1} $ are therefore related through~: \begin{equation} \sigma^{n+1}_{eq} \ \ = \ \ (\sigma_T^{n+1} )_{eq} \ - \ 3 \mu \ (p^{n+1}-p^n) \label{B5} \end{equation} \begin{itemize} \item[-] If $(\sigma_T^{n+1} )_{eq} < \sigma_0(p^n)$, the step is purely elastic, $${\bfmit \mathchar"711B}^{n+1} = {\bfmit \mathchar"711B}_T^{n+1},\quad p^{n+1}= p^n.$$ \item[-] If $(\sigma_T^{n+1} )_{eq} \geq \sigma_0(p^n)$, the material yields plastically at step $n+1$, $\sigma^{n+1}_{eq}= \sigma_0(p^{n+1})$ and (\ref{B5}) reduces to~: $$ \sigma_0(p^{n+1}) + 3 \mu \ p^{n+1} = (\sigma_T^{n+1} )_{eq} + 3 \mu \ p^n . $$ Assuming that hardening is positive (no softening), the function $h(p) = \sigma_0(p) + 3 \mu p$ can be inverted to give \begin{equation} p^{n+1} = h^{-1}\big( (\sigma_T^{n+1} )_{eq} + 3 \mu p^n \big) \label{B6} \end{equation} The case of linear hardening leads to a simple inversion. 
Indeed, in this case, $\sigma_0(p) = \sigma_0 + H p$ and (\ref{B6}) reduces to $$ p^{n+1} = {3 \mu \over H + 3 \mu } \ p^n + { (\sigma_T^{n+1} )_{eq}-\sigma_0 \over H + 3 \mu }. $$ The case of a perfectly plastic material, corresponding to $H=0$, is covered by the above relation. When $h^{-1}$ is not available in a closed form, it can be approximated by linear interpolation. When $y \in \ [h(p_l),h(p_{l+1})]$, $p=h^{-1}(y)$ is approximated by $p_l + (y-h(p_l)) {\displaystyle p_{l+1}-p_l \over \displaystyle h(p_{l+1})-h(p_l)} $. \end{itemize} Finally, the algorithm used in our computations reads~: \begin{equation} \left. \begin{array}{ll} & {\bfmit \mathchar"7122}^n, \ {\bfmit \mathchar"711B}^n, \ p^n, \ {\bfmit \mathchar"7122}^{n+1} \ {\rm being \ known}, \\ \\ {\rm Compute} & {\bf s}_T^{n+1} = {\bf s}^n + 2 \mu ({\bfmit \mathchar"7122}^{n+1}-{\bfmit \mathchar"7122}^n) ,\\ & (\sigma_T^{n+1} )_{eq} = \left({3 \over 2} {\bf s}_T^{n+1} : {\bf s}_T^{n+1} \right)^{1/2} \\ \\ {\rm Test} & {\rm If} \quad (\sigma_T^{n+1} )_{eq} < \sigma_0(p^n) \\ & \qquad \begin{array}{rl} & p^{n+1} = p^n \\ & {\bf s}^{n+1} = {\bf s}_T^{n+1} \end{array} \\ & {\rm Else} \\ & \qquad \begin{array}{rl} & p^{n+1} = h^{-1}\big( (\sigma_T^{n+1} )_{eq} + 3 \mu p^n \big) \\ & {\bf s}^{n+1} = \ {\displaystyle \sigma_0(p^{n+1}) \over \displaystyle (\sigma_T^{n+1})_{eq} }\ {\bf s}_T^{n+1} \end{array} \\ {\rm End \ of \ test} &\\ \\ {\rm Update} & {\rm tr}({\bfmit \mathchar"711B}^{n+1}) = {\rm tr}({\bfmit \mathchar"711B}^{n}) + 3k \ {\rm tr}({\bfmit \mathchar"7122}^{n+1} - {\bfmit \mathchar"7122}^{n}) \\ & {\bfmit \mathchar"711B}^{n+1} = {1 \over 3}{\rm tr}({\bfmit \mathchar"711B}^{n+1}) \ {\rm I\kern-.22em I} {\rm d} \ \ + \ \ {\bf s}^{n+1} \end{array} \right\} \end{equation} \section{Imposing a macroscopic stress direction} \label{appendix3} In the above described algorithm the overall strain is prescribed by assigning the value of the Fourier transform of the strain field at the zero 
frequency~: $$ \hat{{\bfmit \mathchar"7122}}({\bf 0}) = {\bf E}.$$ It is often convenient (or necessary) to impose the overall stress $\bfS$, rather than the overall strain $\bfE$. A typical example is provided by uniaxial tension in the transverse direction as described by (\ref{ut}). In strongly nonlinear problems it is even necessary to impose only the direction of the overall stress and to drive the loading by means of an auxiliary parameter (arc length method). The algorithm can be modified to account for loadings in the form \begin{equation} {\bf \Sigma} \ = \ k \ {\bf S}_0 \ \ \hbox{\rm and} \ \ {\bf E}:{\bf S}_0 = t, \label{a31} \end{equation} where ${\bf S}_0$ is the prescribed direction of overall stress (by direction of stress we refer to a direction in the 6-dimensional space of stresses), $k$ is the unknown level of overall stress and $t$, which serves as a loading parameter, is the component of the overall strain in this direction. Then, the overall strain and stress ${\bf E}^{i}$ and ${\bf \Sigma}^{i}$ have to be determined by means of (\ref{a31}). For this purpose, at iterate $i$, ${\bfmit \mathchar"711B}^{i-1}$, ${\bfmit \mathchar"7122}^{i-1}$ and the loading level $t^{i}$ being known but $k^{i}$ being unknown, ${\bf E}^{i}$ and ${\bf \Sigma}^{i}$ are subject to~: \begin{equation} \left. 
\begin{array}{c} {\bf \Sigma}^{i} - {\bf c}^0:{\bf E}^{i} = <{\bfmit \mathchar"711B}^{i-1}>-{\bf c}^0:<{\bfmit \mathchar"7122}^{i-1}> \\ \\ {\bf \Sigma}^{i} = k^{i} {\bf S}_0,\quad {\bf E}^{i}:{\bf S}_0 = t^{i} \end{array} \right\} \label{a32} \end{equation} Elimination of ${\bf \Sigma}^{i}$ yields \begin{equation} {\bf E}^{i} = k^{i} {{\bf c}^0}^{-1}:{\bf S}_0 \ - \ {{\bf c}^0}^{-1}:<{\bfmit \mathchar"711B}^{i-1}> \ + \ <{\bfmit \mathchar"7122}^{i-1}> \label{a33} \end{equation} and $$ k^{i} = { t^{i} + ({{\bf c}^0}^{-1}:<{\bfmit \mathchar"711B}^{i-1}>-<{\bfmit \mathchar"7122}^{i-1}>):{\bf S}_0 \over {{\bf c}^0}^{-1}:{\bf S}_0:{\bf S}_0 } . $$ Therefore the modification brought into the algorithm (\ref{alg4}) is an additional step to determine ${\bf E}^{i}$ according to (\ref{a33}), which is then prescribed as the overall strain through~: $$ {\hat {\bfmit \mathchar"7122}^{i}}({\bf 0}) = {\bf E}^{i}.$$ It is worth noting that the condition $<{\bfmit \mathchar"7122}^i> = {\bf E}^i$ is met at each step of the iterative procedure, whereas the equality $<{\bfmit \mathchar"711B}^i> = \bfS^i$ is met only at convergence. The difference arises from the fact that ${\bfmit \mathchar"711B}^i$ is deduced from the constitutive law, whereas $ \bfS^i$ is deduced from (\ref{a32}). Indeed, once convergence is reached, one has $${\bf E}^{i} = {\bf E}^{i-1} = <{\bfmit \mathchar"7122}^{i-1}>,$$ and, according to (\ref{a32}), $ {\bf \Sigma}^{i} = <{\bfs}^{i-1}> = <{\bfs}^{i}>$ . \section{Introduction} This study is devoted to a numerical method introduced by { Moulinec} and { Suquet} \cite{MOU94}, \cite{MOU95} to determine the local and overall responses of nonlinear composites. Numerous studies have dealt with nonlinear cell calculations by the Finite Element Method (FEM) (see for example { Adams} and { Donner} \cite{ADA67}, { Christman} {\it et al} \cite{CHR89}, { Tvergaard} \cite{TVE90}, { Michel} and { Suquet} \cite{MIC93}). 
Most of them are limited to ``simple'' microstructures, one or two inclusions embedded in a volume of matrix. The need to incorporate more detailed information on the microstructure is clearly recognized. Recently, several studies have considered ``complex'' microstructures involving a significant number of inclusions with irregular shape. { Brockenborough} {\it et al} \cite{BRO91}, { B\"{o}hm} {\it et al} \cite{BOH93}, { Nakamura} and { Suresh} \cite{NAK93}, { Dietrich} {\it et al} \cite{DIE93}, { Becker} and { Richmond} \cite{BEC94} are some of the contributions to this recently developed subject. All were based on the FEM. The difficulties due to meshing and to the large number of degrees of freedom required by the analysis limit the complexity of the microstructures which can be investigated by this method. A typical example of a complex microstructure which is difficult to mesh and therefore to handle by means of the FEM is shown in Figure \ref{bornert}, taken from the work of { Bornert} \cite{BOR96}. The digital image of this Iron/Silver blend was obtained by Scanning Electron Microscopy (SEM). The initial idea of the method proposed in \cite{MOU94} was to make direct use of these digital {\it images of the real microstructure} in the numerical simulation. A similar idea can be found in { Garboczi} and { Day} \cite{GAR95}, who used a spring network technique. \vskip 0.3cm The proposed method avoids the difficulty due to meshing. It makes use of Fast Fourier Transforms (FFT) to solve the unit cell problem \footnote{During the revision of this paper, the attention of the authors was drawn to a similar work by { M\"uller} \cite{MUL96} concerning phase transformation.}, even when the constituents have a nonlinear behavior. FFT algorithms require data sampled on a grid of regular spacing, allowing the direct use of digital images of the microstructure. 
The second difficulty (size of the problem) is partially overcome by an iterative method not requiring the formation of a stiffness matrix. \vskip 0.3cm The interest in numerical simulations of the nonlinear response of composites has recently been strengthened by the development of theoretical methods which analytically predict the nonlinear overall behavior of composites ({ Willis} \cite{WIL91}, { Ponte Casta\~neda} \cite{PON92}, { Suquet} \cite{SUQ93}). Part of the present study provides precise numerical results for uniaxial loadings which could serve as guidelines for theoretical predictions. \vskip 0.3cm The body of the method and the resulting algorithms are presented in section 2. In section 3, the accuracy of the method and several numerical points are discussed (choice of the reference medium, spatial resolution, etc.). In section 4 the method is applied to determine the local and overall responses of composites with ``random'' microstructures. In all the cases considered in this study the models have been limited to two-dimensional approximations. The first reason for this approximation is the limitation on current computational capability. The second reason is that many microstructural observations are two-dimensional. \section{The numerical method} \subsection{Cell problem and boundary conditions} The overall behavior of a composite is governed by the individual behavior of its constituents and by its microstructure. Its effective response to a prescribed path of macroscopic strains or stresses may be determined numerically via the resolution of the so-called ``local problem'' on a representative volume element (r.v.e.) $V$. In this study, the ``representative'' information on the microstructure is provided by an image (micrograph) of the microstructure with arbitrary complexity. The image contains $N$ pixels, and independent mechanical properties are assigned individually to each pixel. 
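As a minimal sketch of this per-pixel assignment (assuming NumPy; the phase labels and moduli are illustrative, not taken from the paper), a segmented micrograph stored as an integer array is mapped directly to fields of material parameters:

```python
import numpy as np

# segmented micrograph: 0 = matrix, 1 = inclusion (tiny synthetic example)
image = np.zeros((4, 4), dtype=int)
image[1:3, 1:3] = 1

# one set of Lame coefficients per phase (illustrative values)
lam_phase = np.array([1.0, 10.0])
mu_phase = np.array([0.5, 5.0])

# per-pixel property fields, ready for the pointwise constitutive step
lam = lam_phase[image]
mu = mu_phase[image]
```

Each pixel keeps its own moduli, so the number of distinct phases is limited only by the number of labels appearing in the image.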
Most applications involve only a limited number of phases, although in principle each pixel could be considered as an individual constituent.\par The local problem consists of equilibrium equations, constitutive equations, and boundary and interface conditions. All different phases are assumed to be perfectly bonded (displacements and tractions are continuous across interfaces). Displacements and tractions along the boundary of the r.v.e. are left undetermined and the local problem is ill-posed. We choose to close the problem with periodic boundary conditions, which can be expressed as follows. The local strain field ${\bfmit \mathchar"7122}({\bf u(x)})$ is split into its average $\bf E$ and a fluctuation term $\bf {\bfmit \mathchar"7122}({\bf u^\ast(x)})$: $$ \bf {\bfmit \mathchar"7122}(u(x)) \ = \ {\bfmit \mathchar"7122}(u^\ast(x)) + E\quad {\rm or\ equivalently}\quad \bf u(x) = u^\ast(x) + E.x. $$ Under periodic boundary conditions, the fluctuating term $\bf u^{\ast}$ is periodic (notation: $\bf u^{\ast} \ \#$), and the traction ${\bfmit \mathchar"711B}.{\bf n} $ is anti-periodic in order to meet the equilibrium equations on the boundary between two neighboring cells (notation: ${\bfmit \mathchar"711B}.{\bf n} \ -\# $). This local problem could be solved by means of the FEM ({ Suquet} \cite{SUQ87}, { Guedes} and { Kikuchi} \cite{GUE90}). We propose an alternative method of resolution. \subsection{An auxiliary problem} First we consider the preliminary problem of a homogeneous linear elastic body with stiffness ${\bf c}^0$ subjected to a polarization field $\bf {\bfmit \mathchar"711C}(x)$. \begin{equation} \left. 
\begin{array}{rcl} {\bfmit \mathchar"711B}({\bf x})& = & {\bf c}^0: {\bfmit \mathchar"7122}({\bf u}^\ast({\bf x})) \ + \ {\bfmit \mathchar"711C}({\bf x}) \ \ \ \forall {\bf x} \in V \\ \\ {\bf div}\ {\bfmit \mathchar"711B}({\bf x})& = & {\bf 0} \quad \forall {\bf x} \in V, \quad {\bf u}^\ast \ \#, \ {\bfmit \mathchar"711B}.{\bf n} \ -\# \end{array} \right\} \label{eq1} \end{equation} The solution of (\ref{eq1}) can be expressed in real and Fourier spaces, respectively, by means of the periodic Green operator ${\bf \Gamma}^0$ associated with ${\bf c}^0$: \begin{equation} {\bfmit \mathchar"7122}({\bf u}^*({\bf x})) = - {\bf \Gamma}^0 \ast {\bfmit \mathchar"711C}({\bf x}) \quad \forall {\bf x} \in V, \label{eq2} \end{equation} {\rm or} \begin{equation} {\hat {\bfmit \mathchar"7122}}({\bfmit \mathchar"7118}) = - {\hat {\bf \Gamma}^0}({\bfmit \mathchar"7118}):{\hat {\bfmit \mathchar"711C}}({\bfmit \mathchar"7118}) \ \ \forall {\bfmit \mathchar"7118} \ne {\bf 0}, \ {\hat {\bfmit \mathchar"7122}}({\bf 0}) = {\bf 0} \label{eq3} \end{equation} The operator ${\bf \Gamma}^{0}$ is explicitly known in Fourier space (see appendix \ref{appendix1}). When the reference material is isotropic (with Lam\'e coefficients $\lambda^0$ and $\mu^0$) it takes the form~: \begin{equation} {\hat {\Gamma}}^{0}_{ijkh}({\bfmit \mathchar"7118}) = { 1 \over 4 \mu^0 \vert {\bfmit \mathchar"7118} \vert ^2 } ( \delta_{ki} \xi_{h} \xi_j + \delta_{hi} \xi_{k} \xi_j + \delta_{kj} \xi_{h} \xi_i + \delta_{hj} \xi_{k} \xi_i ) - { \lambda^0 + \mu^0 \over \mu^0 (\lambda^0 + 2\mu^0) } { \xi_i \xi_j \xi_k \xi_h \over \vert {\bfmit \mathchar"7118} \vert ^4 }. \label{eq4} \end{equation} \subsection {The periodic Lippmann-Schwinger equation} The auxiliary problem can be used to solve the problem of an inhomogeneous elastic composite material with stiffness $\bf c(x)$ at point $\bf x$ under prescribed strain $\bf E$~: \begin{equation} \left. 
\begin{array}{rcl} {\bfmit \mathchar"711B}({\bf x}) &= & {\bf c}({\bf x}): \big( {\bfmit \mathchar"7122}({\bf u}^\ast({\bf x})) \ + {\bf E} \big) \ \ \ \forall {\bf x} \in V \\ \\ {\bf div} {\bfmit \mathchar"711B}({\bf x})& = &{\bf 0} \quad \forall {\bf x} \in V,\quad {\bf u}^\ast \ \#, \ {\bfmit \mathchar"711B}.{\bf n} \ -\# \end{array} \right\} \label{eq5} \end{equation} For simplicity $\bf E$ is assumed to be prescribed, although other average conditions could be considered as well (see appendix \ref{appendix3} for prescribed stresses). A homogeneous reference material with elastic stiffness ${\bf c}^0$ is introduced and a polarization tensor ${\bfmit \mathchar"711C}(\bfx)$, which is unknown {\it a priori}, is defined as~: \begin{equation} {\bfmit \mathchar"711C}({\bf x}) = {\bfmit \mathchar"710E} {\bf c}({\bf x}) :{\bfmit \mathchar"7122} ({\bf u} ({\bf x})), \quad {\bfmit \mathchar"710E} {\bf c}({\bf x}) \ = \ {\bf c} ({\bf x})- {\bf c}^0. \label{eq6} \end{equation} Thus, the problem reduces to the {\sl periodic Lippmann-Schwinger equation} ({ Kr\"oner} \cite{KRO72}), which reads, in real space and Fourier space respectively: \begin{equation} \left. \begin{array}{l} {\bf {\bfmit \mathchar"7122}(u(x))}= - {\bf \Gamma}^{0}({\bf x}) \ast {{\bfmit \mathchar"711C}}({\bf x})+ {\bf E}, \\ \\ {\widehat {\bf{\bfmit \mathchar"7122}}}({\bfmit \mathchar"7118}) = - {\widehat {\bf \Gamma}}^{0}({\bfmit \mathchar"7118}) : {\widehat {\bf {\bfmit \mathchar"711C}}}({\bfmit \mathchar"7118}) \quad \forall {\bfmit \mathchar"7118} \ne {\bf 0},\quad {\widehat {\bf {\bfmit \mathchar"7122}}}({\bf 0}) ={\bf E} \end{array} \right\} \label{eq7} \end{equation} where ${\bfmit \mathchar"711C}$ is given by (\ref{eq6}). The Lippmann-Schwinger equation is an integral equation for ${\bfmit \mathchar"7122}({\bf u}^*)$. 
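To make the structure of (\ref{eq6}) and (\ref{eq7}) concrete, consider a hedged one-dimensional scalar analogue (an illustration of ours, not part of the method's derivation): for a two-phase laminate with scalar moduli, $\hat{\Gamma}^0(\xi)=1/c^0$ for $\xi \ne 0$, so ${\bf \Gamma}^0 \ast \tau = (\tau - <\tau>)/c^0$, and the fixed point can be iterated phase by phase on piecewise-constant fields:

```python
# 1D scalar analogue of the Lippmann-Schwinger fixed point (names ours).
# Two-phase laminate: moduli c1, c2 with volume fractions f and 1 - f.

def ls_fixed_point(c1, c2, f, c0, E=1.0, tol=1e-12, itmax=10000):
    e1 = e2 = E                                  # initial guess: eps = E
    for _ in range(itmax):
        t1, t2 = (c1 - c0) * e1, (c2 - c0) * e2  # polarization tau = (c - c0) eps
        tavg = f * t1 + (1 - f) * t2
        n1 = E - (t1 - tavg) / c0                # eps = E - Gamma0 * tau
        n2 = E - (t2 - tavg) / c0
        if abs(n1 - e1) + abs(n2 - e2) < tol:
            break
        e1, e2 = n1, n2
    return n1, n2

# c0 chosen as the mean of the two moduli, as in (conv) of section 3
e1, e2 = ls_fixed_point(c1=1.0, c2=10.0, f=0.5, c0=5.5)
c_eff = 0.5 * 1.0 * e1 + 0.5 * 10.0 * e2         # <sigma> / E
```

The iteration converges to the uniform-stress solution, so $c_{\rm eff}$ equals the harmonic mean $2/(1/c_1 + 1/c_2) = 20/11$ here.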
\subsection{The algorithm} \subsubsection{Continuous algorithm} \noindent The principle of the algorithm is to use alternately (\ref{eq6}) and (\ref{eq7}), in real space and Fourier space, respectively, in an iterative scheme, to solve (\ref{eq5}): \begin{equation} \left. \begin{array}{rl} {Initialization:}\ \ \ \ \ \ &{\bf {\bfmit \mathchar"7122}}^0({\bf x}) = {\bf E},\quad \forall \ {\bf x} \in \ V,\\ &{{\bfmit \mathchar"711B}}^{\rm 0}({\bf x}) = {\bf c}({\bf x}):{{\bfmit \mathchar"7122}}^{\rm 0}({\bf x}), \quad \forall \ {\bf x} \in \ V, \\ \\ Iterate\ {\rm i+1}:\ \ \ \ \ \ &{{\bfmit \mathchar"7122}}^{\rm i} \ {\rm and} \ {\bfmit \mathchar"711B}^{\rm i} \ {\rm being \ known} \\ a)\ \ &{{\bfmit \mathchar"711C}}^{\rm i}{\bf (x)} = {\bf {\bfmit \mathchar"711B}}^{\rm i}{\bf (x)}-{\bf c}^0: {{\bfmit \mathchar"7122}}^{\rm i}{\bf (x)},\\ b)\ \ &{\widehat {\bf {\bfmit \mathchar"711C}}} ^{\rm i} = {\cal F}({{\bfmit \mathchar"711C}} ^{\rm i}),\\ c) \ \ &{\rm Convergence \ test }, \\ d)\ \ &{\widehat {{\bfmit \mathchar"7122}}} ^{\rm i+1}({\bfmit \mathchar"7118}) = - {\widehat {\bf \Gamma}}^{0}({\bfmit \mathchar"7118}) : {\widehat {{\bfmit \mathchar"711C}}} ^{\rm i}({\bfmit \mathchar"7118}) \ \forall {\bfmit \mathchar"7118} \ne {\bf 0} \ {\rm and} \ \widehat {\bf {\bfmit \mathchar"7122}} ^{\rm i+1}({\bf 0}) = {\bf E},\\ e)\ \ &{\bf {\bfmit \mathchar"7122}} ^{\rm i+1} = {\cal F}^{-1}(\widehat {{\bfmit \mathchar"7122}} ^{\rm i+1}), \\ f)\ \ &{\bf {\bfmit \mathchar"711B}}^{\rm i+1}(\bf x) = {\bf c}({\bf x}):{\bfmit \mathchar"7122}^{\rm i+1}({\bf x}). \end{array} \right\} \label{alg1} \end{equation} $\cal F$ and ${\cal F}^{-1}$ denote the Fourier transform and the inverse Fourier transform. This algorithm can be further simplified by noting that $$ {\bf \Gamma}^0 \ \ast \ ({\bf c}^0:{\bfmit \mathchar"7122}) = {\bfmit \mathchar"7122}.$$ The modified algorithm reads~: \begin{equation} \left. 
\begin{array}{rl} {Initialization:} \ \ \ \ \ \ &{{\bfmit \mathchar"7122}}^{\rm 0}({\bf x}) = {\bf E}, \quad \forall \ {\bf x} \in \ V,\\ &{{\bfmit \mathchar"711B}}^{\rm 0}({\bf x}) = {\bf c}({\bf x}):{{\bfmit \mathchar"7122}}^{\rm 0}({\bf x}), \quad \forall \ {\bf x} \in \ V, \\ \\ Iterate\ {\rm i+1}:\ \ \ \ \ \ &{{\bfmit \mathchar"7122}}^{\rm i} \ {\rm and} \ {\bfmit \mathchar"711B}^{\rm i} \ {\rm being \ known} \\ a)\ \ &{\hat {\bf {\bfmit \mathchar"711B}}} ^{\rm i} = {\cal F}({{\bfmit \mathchar"711B}} ^{\rm i}),\\ b) \ \ &{\rm Convergence\ test}, \\ c)\ \ &{\hat {{\bfmit \mathchar"7122}}} ^{\rm i+1}({\bfmit \mathchar"7118}) = {\hat {\bfmit \mathchar"7122}}^{\rm i}({\bfmit \mathchar"7118}) - {\hat {\bf \Gamma}}^{0}({\bfmit \mathchar"7118}) : {\hat {{\bfmit \mathchar"711B}}} ^{\rm i}({\bfmit \mathchar"7118}) \ \forall {\bfmit \mathchar"7118} \ne {\bf 0} \ {\rm and} \ \hat {\bf {\bfmit \mathchar"7122}} ^{\rm i+1}({\bf 0}) = {\bf E},\\ d)\ \ &{\bf {\bfmit \mathchar"7122}} ^{\rm i+1} = {\cal F}^{-1}(\hat {{\bfmit \mathchar"7122}} ^{\rm i+1}) \\ e)\ \ &{{\bfmit \mathchar"711B}}^{\rm i+1}({\bf x}) = {\bf c}({\bf x}):{{\bfmit \mathchar"7122}}^{\rm i+1}({\bf x}), \quad \forall \ {\bf x} \in \ V, \\ \end{array} \right\} \label{alg2} \end{equation} Convergence is reached when ${{\bfmit \mathchar"711B}}^{\rm i+1} $ is in equilibrium. The error serving to check convergence is~: $$ e^{\rm i} = { \left(< \vert \vert {\rm div}({\bf {\bfmit \mathchar"711B}}^{\rm i}) \vert \vert^2 > \right)^{1/2} \over \vert \vert <{\bfmit \mathchar"711B} ^{\rm i}> \vert \vert }={ \left( < \vert \vert {{\bfmit \mathchar"7118}}.\hat {{\bfmit \mathchar"711B}}^{\rm i}({{\bfmit \mathchar"7118}}) \vert \vert^2 > \right)^{1/2} \over \vert \vert {\hat {{\bfmit \mathchar"711B}}} ^{\rm i}({\bf 0}) \vert \vert } . $$ The iterative procedure is stopped when the error $e$ is smaller than a prescribed value (typically $10^{-4}$ in our calculations). 
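The scheme (\ref{alg2}) and the stopping criterion above fit in a few lines of code. The sketch below (assuming NumPy; a one-dimensional scalar analogue with $\hat{\Gamma}^0(\xi)=1/c^0$ for $\xi \ne 0$, names ours) is only meant to show the flow of steps a)--e):

```python
import numpy as np

def basic_scheme(c, c0, E=1.0, T=1.0, tol=1e-4, itmax=500):
    """1D scalar transcription of the modified algorithm: here
    Gamma0_hat(xi) = 1/c0 (xi != 0) plays the role of the Green operator."""
    N = c.size
    xi = np.fft.fftfreq(N, d=T / N)            # discrete frequencies
    eps = np.full(N, float(E))                 # initialization: eps(x) = E
    for _ in range(itmax):
        sig_h = np.fft.fft(c * eps)            # a) FFT of the stress
        # b) equilibrium error e; the FFT normalization drops out of the ratio
        err = np.sqrt(np.mean(np.abs(xi * sig_h) ** 2)) / np.abs(sig_h[0])
        if err < tol:
            break
        eps_h = np.fft.fft(eps) - sig_h / c0   # c) update in Fourier space
        eps_h[0] = N * E                       #    enforce the average strain
        eps = np.fft.ifft(eps_h).real          # d) back to real space
    return eps                                 # e) sigma = c * eps, pointwise
```

For a two-phase bar the converged stress field is uniform and $<c\,\varepsilon>/E$ reproduces the harmonic mean of the moduli.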
\subsubsection{Discrete algorithm} The unit cell is discretized into a regular grid consisting of $N_1 \times N_2$ pixels (two-dimensional problem), or $N_1 \times N_2 \times N_3$ ``voxels'' (three-dimensional problem). The data and the unknowns used in the numerical calculations are images sampled on this grid ($N_1 \times N_2$ or $N_1 \times N_2 \times N_3$ arrays). In two dimensions, the coordinates of the pixel labeled by $i_1, i_2$ are $$ {\bf x}_d(i_1,i_2) = \left( (i_1-1) \cdot {T_1 \over N_1}, (i_2-1) \cdot {T_2 \over N_2} \right),\quad i_1=1,\ldots,N_1,\quad i_2=1,\ldots,N_2, $$ where $T_j$ is the period of the unit cell in the $j^{\rm th}$ direction ($j = 1, 2$). This discretization is classical in image processing. Images of microstructures, obtained for instance by SEM, can therefore be directly used in calculations without any operation by the user (meshing or interpolation). This discretization is also appropriate for using Fast Fourier Transform (FFT) packages, which contribute significantly to the performance of the method. \medskip The continuous algorithm (\ref{alg2}) has been implemented in the following discrete form: \begin{equation} \left. 
\begin{array}{rl} {Initialization:} &\quad {{\bfmit \mathchar"7122}}^{\rm 0}({\bf x}_d) = {\bf E}, \quad \forall \ {\bf x}_d \in \ V, \ \\ &\quad {{\bfmit \mathchar"711B}}^{\rm 0}({\bf x}_d) = {\bf c}({\bf x}_d):{{\bfmit \mathchar"7122}}^{\rm 0}({\bf x}_d), \quad \forall \ {\bf x}_d \in \ V, \ \\ \\ Iterate\ {\rm i+1}: &{{\bfmit \mathchar"7122}}^{\rm i} \ {\rm and} \ {{\bfmit \mathchar"711B}}^{\rm i} \ {\rm known\ at\ every }\ {\bf x}_d \\ a)\ \ &{\hat {\bf {\bfmit \mathchar"711B}}} ^{\rm i} = {\cal FFT}({{\bfmit \mathchar"711B}} ^{\rm i}),\\ b) \ \ &{\rm Convergence\ test}, \\ c)\ \ &{\hat {{\bfmit \mathchar"7122}}} ^{\rm i+1}({\bfmit \mathchar"7118}_d) = {\hat {{\bfmit \mathchar"7122}}} ^{\rm i}({\bfmit \mathchar"7118}_d) - {\hat {\bf \Gamma}}^{0}({\bfmit \mathchar"7118}_d) : {\hat {{\bfmit \mathchar"711B}}} ^{\rm i}({\bfmit \mathchar"7118}_d) \ \forall {\bfmit \mathchar"7118}_d \ne {\bf 0} \ , \ \hat {\bf {\bfmit \mathchar"7122}} ^{\rm i+1}({\bf 0}) = {\bf E},\\ d)\ \ &{\bf {\bfmit \mathchar"7122}} ^{\rm i+1} = {\cal FFT}^{-1}(\hat {{\bfmit \mathchar"7122}} ^{\rm i+1}) \\ e)\ \ &{{\bfmit \mathchar"711B}}^{\rm i+1}({\bf x}_d) = {\bf c}({\bf x}_d):{{\bfmit \mathchar"7122}}^{\rm i+1}({\bf x}_d), \quad \forall \ {\bf x}_d \in \ V \\ \end{array} \right\} \label{alg3} \end{equation} where ${\bf x}_d$ denote the coordinates of pixels in real space, and ${\bfmit \mathchar"7118}_d$ denote the $N_1 \times N_2$ corresponding frequencies in Fourier space. To be more specific, the discrete frequencies are (in dimension 2) when $N_j$ is even~: $$ \xi_j = (-{N_j \over 2}+1) \ {1 \over T_j}, \ (-{N_j \over 2}+2) \ {1 \over T_j}, \ ..., \ -{1 \over T_j}, \ 0, \ {1 \over T_j}, \ ..., ({N_j \over 2}-1) \ {1 \over T_j}, \ {N_j \over 2} \ {1 \over T_j}, $$ and when $N_j$ is odd~: $$ \xi_j = -{N_j -1 \over 2} \ {1 \over T_j}, \ ..., \ -{1 \over T_j}, \ 0, \ {1 \over T_j}, \ ..., \ {N_j -1 \over 2} \ {1 \over T_j}. 
$$ The discrete error serving to check convergence is~: $$ e^{\rm i} = \ { \left(\displaystyle { 1 \over N } \sum_d \vert \vert {\bfmit \mathchar"7118}_d \cdot {\hat {\bfmit \mathchar"711B}}^{\rm i}({{\bfmit \mathchar"7118}_d}) \vert \vert ^2 \right)^{1/2} \over \vert \vert\displaystyle \hat{{\bfmit \mathchar"711B}}^{\rm i}(\bf 0) \vert \vert } $$ (where $N=N_1 \times N_2$ is the total number of pixels). When the spatial resolution is low and when the number $N_j$ of discretization points is even, special attention must be paid to the { highest frequency \footnote{An error had crept into the expression of the highest frequency in the original paper published in Comput Methods Appl Mech Eng. The authors thank Anthony Rollett for pointing it out. HM 11/12/2020 } $\xi_j = \pm \left({N_j \over 2} \right) {1\over T_j}$, $j=1$ or $2$. } In most FFT packages, the Fourier expansion at these frequencies consists of either $\cos(\xi_j x_j)$ or $\exp(-{\rm i}\xi_j x_j)$, instead of the correct expression consisting of the two terms $\exp(-{\rm i}\xi_j x_j)$ {\it and} $\exp({\rm i}\xi_j x_j)$. Therefore, even when the stress $\bfs$ is correctly approximated by its Fourier expansion in step a) of the algorithm (\ref{alg3}), the result of step d) may not accurately approximate the Fourier expansion of the strain ${\bfmit \mathchar"7122}$ at these particular frequencies. This is because $\hat{\bf {\Gamma}}^0$ is neither even nor odd with respect to each individual component $\xi_j$. Oscillations were observed when (\ref{eq4}) was used with relatively small values of $N_j$ (lower than 128). This problem was fixed by using a different expression of $\hat{\bf {\Gamma}}^0$ in algorithm (\ref{alg3}) at these frequencies: $$\hat{\bf {\Gamma}}^0 = \left({\bf c}^0 \right)^{-1}.$$ In other words, the stress $\bfs$ is forced to $\bf 0$ by the algorithm at these frequencies when convergence is reached. 
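The frequency bookkeeping above is easy to get wrong in code. The sketch below (plain Python; function names ours) lists the discrete frequencies for even and odd $N_j$ and flags the highest frequency, at which $\hat{\bf \Gamma}^0$ is replaced by $({\bf c}^0)^{-1}$:

```python
def frequencies(N, T):
    # xi_j = n / T with n = -N/2 + 1, ..., N/2          (N even)
    #                 or n = -(N - 1)/2, ..., (N - 1)/2  (N odd)
    if N % 2 == 0:
        ns = range(-N // 2 + 1, N // 2 + 1)
    else:
        ns = range(-(N - 1) // 2, (N - 1) // 2 + 1)
    return [n / T for n in ns]

def is_highest_frequency(xi, N, T):
    # even N only: the frequency treated specially in the algorithm,
    # where Gamma0_hat is replaced by the inverse of c0
    return N % 2 == 0 and abs(xi) == (N // 2) / T
```

FFT packages store a single bin representing both $\pm N_j/(2T_j)$, which is the source of the asymmetry discussed above.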
\subsection {Nonlinear behavior} \noindent The algorithm can be extended to the case in which the individual constituents obey a nonlinear law, written either in terms of stresses and strains (nonlinear elasticity at infinitesimal strain) or in incremental form relating strain-rates and stress-rates (flow theory). The nonlinearity requires an appropriate modification of step {\it e}) in algorithm (\ref{alg3}). In the present study, special attention will be paid to phases exhibiting an incremental elastic-plastic behavior at small strains governed by a $J_2$-flow theory with isotropic hardening (although more general constitutive laws can be considered)~: \begin{equation} {\dot {\bfmit \mathchar"711B}} = {\bf c}:({\dot {\bfmit \mathchar"7122}}-{\dot {\bfmit \mathchar"7122}}^{p}),\quad \dot {{\bfmit \mathchar"7122}}^{p} = {\dot p}\ {3 \over 2} {{\bf s} \over {\sigma_{eq}}},\quad {\sigma_{eq}}-{\sigma_{0}}(p) \leq 0,\quad {\dot p} \geq 0. \label{comp} \end{equation} ${\bfmit \mathchar"7122}^{p}$ denotes the plastic strain, $\bf s$ denotes the stress deviator and $p$ denotes the hardening parameter, which coincides with the cumulated plastic strain $$ \dot{p}(t) = \left( {2 \over 3 } \dot{\varepsilon}^{p}_{ij}(t) \dot{\varepsilon}^{p}_{ij}(t) \right)^{1/2}, \quad p(t) = \int_0^t \dot{p}(s) \ ds, \quad \sigma_{eq} = \bigg( \frac{3}{2} s_{ij} s_{ij} \bigg)^\frac{1}{2} . $$ \vskip 0.3cm The integration in time of the constitutive law (\ref{comp}) is achieved by means of an implicit scheme which is classical in the analysis of elastic-plastic structures by the FEM. The time interval (or, alternatively, the loading path) is discretized into subintervals $[t_n,t_{n+1}]$. The field equations are solved for $\left({\bfmit \mathchar"7122}_{n}, {\bfmit \mathchar"711B}_{n},{p}_n \right)$, which denote strain, stress and hardening parameter at time $t_n$. 
Assuming that these fields are known at step $n$ (time $t_n$), the principal unknown at step $n+1$ is ${\bfmit \mathchar"7122}_{n+1}$. The incremental equations (\ref{comp}) are discretized by an implicit scheme. The unknown ${\bfmit \mathchar"7122}_{n+1}$ is a compatible strain field such that the associated stress field (by the constitutive law) is in equilibrium. The resulting system of equations to be solved for ${\bfmit \mathchar"7122}_{n+1}$ is nonlinear. The algorithm for the determination of ${\bfmit \mathchar"7122}_{n+1}$ reads (for simplicity the subscript $(n+1)$ is omitted below; superscripts i and i+1 refer to the iterative loop within the step)~: \begin{equation} \left. \begin{array}{rl} {Initialization:} &{\bfmit \mathchar"7122}^{\rm 0}({\bf x}_d) \ {\rm given\ by}\ (\ref{ini}),\\ & {\rm Compute} \ {\bfmit \mathchar"711B}^{\rm 0} \ {\rm and} \ {p}^{\rm 0}\ {\rm from} \ ({\bfmit \mathchar"7122}^{\rm 0}, {\bfmit \mathchar"711B}_{n},{\bfmit \mathchar"7122}_{n},{p}_{n}), \\ \\ {Iterate}\ {\rm i+1}: &{{\bfmit \mathchar"7122}}^{\rm i} \ {\rm and} \ {{\bfmit \mathchar"711B}}^{\rm i} \ {\rm are \ known} \\ a)\ \ &{\hat {\bf {\bfmit \mathchar"711B}}}^{\rm i} = {\cal FFT} ({{\bfmit \mathchar"711B}}^{\rm i}),\\ b) \ \ &{\rm Convergence \ test}, \\ c)\ \ &{\hat {{\bfmit \mathchar"7122}}}^{\rm i+1}({\bfmit \mathchar"7118}_d) = {\hat {\bfmit \mathchar"7122}}^{\rm i}({\bfmit \mathchar"7118}_d) - {\hat {\bf \Gamma}}^{0}({\bfmit \mathchar"7118}_d) : {\hat {{\bfmit \mathchar"711B}}} ^{\rm i}({\bfmit \mathchar"7118}_d) \ \forall {\bfmit \mathchar"7118}_d \ne {\bf 0}, \ \hat {\bf {\bfmit \mathchar"7122}}^{\rm i+1}({\bf 0}) = {\bf E}_{n+1}, \\ d)\ \ &{\bf {\bfmit \mathchar"7122}}^{\rm i+1} = {\cal FFT}^{-1}(\hat {{\bfmit \mathchar"7122}}^{\rm i+1}) \\ e)\ \ & {\rm Compute}\ {{\bfmit \mathchar"711B}}^{\rm i+1} \ {\rm and}\ {p}^{\rm i+1}\ {\rm from}\ ({\bfmit \mathchar"7122}^{\rm i+1}, {\bfmit \mathchar"711B}_{n},{\bfmit \mathchar"7122}_{n},{p}_{n})\\ \end{array} 
\right\} \label{alg4} \end{equation} More specifically~: \begin{itemize} \item[1.] The initial strain ${\bfmit \mathchar"7122}^{\rm 0}$ at time $t_{n+1}$ is extrapolated (linearly) from ${\bfmit \mathchar"7122}_{n}$ and ${\bfmit \mathchar"7122}_{n-1}$ at the two previous time steps $t_n$ and $t_{n-1}$~: \begin{equation} {\bfmit \mathchar"7122}^{\rm 0}({\bf x}_d) = {{\bfmit \mathchar"7122}}_{n}({\bf x}_d) + {t_{n+1}-t_n \over t_n-t_{n-1}} ({\bfmit \mathchar"7122}_{n}({\bf x}_d) - {\bfmit \mathchar"7122}_{n-1}({\bf x}_d)), \quad \forall \ {\bf x}_d \in \ V. \label{ini} \end{equation} This choice significantly improves the convergence of the iterative process within the time step. \item[2.] ${{\bfmit \mathchar"711B}}^{\rm i}$ and ${p}^{\rm i}$ are computed from $({\bfmit \mathchar"7122}^{\rm i},{\bfmit \mathchar"711B}_{n},{\bfmit \mathchar"7122}_{n}, {p}_{n})$ (step {\it e}) in algorithm (\ref{alg4})) by a radial return method (see appendix \ref{appendix2}). \end{itemize} \section{Convergence and accuracy of the method} \subsection{Reference medium} The rate of convergence of the algorithm depends drastically on the Lam\'e coefficients $\lambda^0$ and $\mu^0$ of the reference material. After several tests, the best rate of convergence was observed with \begin{equation} \left. \begin{array}{rl} \lambda^0 &= {1 \over 2} \left( \displaystyle \doublelow{{\rm inf}\cr {\bf x} \in V \cr} \lambda({\bf x}) + \doublelow{{\rm sup}\cr {\bf x} \in V \cr} \lambda({\bf x}) \right) \\ \\ \mu^0 &= {1 \over 2} \left( \displaystyle \doublelow{{\rm inf}\cr {\bf x} \in V \cr} \mu({\bf x}) + \doublelow{{\rm sup}\cr {\bf x} \in V \cr} \mu({\bf x}) \right) \end{array} \right\} \label{conv} \end{equation} The number of iterations at convergence is significantly influenced by several other parameters. First, as shown in Figure \ref{fig1}, it increases with the contrast between the phases (typically the ratio between the elastic moduli of the phases).
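The fixed-point structure of the scheme, together with the midpoint choice (\ref{conv}) for the reference medium, can be illustrated on a one-dimensional scalar analogue (a two-phase elastic bar, where compatibility reduces to prescribing the mean strain and equilibrium to a uniform stress). This is only a sketch of the iteration, not the implementation used in this study.

```python
import numpy as np

def fft_fixed_point(E_x, E_bar, tol=1e-10, itmax=500):
    """1D scalar analogue of the FFT scheme: iterate
    eps_hat^{i+1} = eps_hat^i - Gamma0 * sig_hat^i for nonzero frequencies,
    with Gamma0 = 1/E0 and the mean strain pinned to E_bar.
    E_x: modulus at each pixel; E0 is the midpoint rule of (conv)."""
    E0 = 0.5*(E_x.min() + E_x.max())       # reference modulus (midpoint rule)
    eps = np.full_like(E_x, E_bar)         # initial uniform strain
    for _ in range(itmax):
        sig = E_x*eps                      # local constitutive law
        # equilibrium test: in 1D the stress must be uniform
        if np.linalg.norm(sig - sig.mean()) < tol*abs(sig.mean()):
            break
        eps_hat = np.fft.fft(eps)
        eps_hat[1:] -= np.fft.fft(sig)[1:]/E0   # zero frequency untouched
        eps = np.fft.ifft(eps_hat).real
    return eps, sig
```

For a two-phase bar the converged stress is uniform and equal to the harmonic mean of $E(x)$ times the imposed mean strain, which provides an exact check of the iteration; with the midpoint reference modulus the contraction factor is $(E_{\max}-E_{\min})/(E_{\max}+E_{\min})<1$, mirroring the dependence of the iteration count on the contrast noted above.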
When the contrast is infinite (rigid inclusions or voids in an elastic matrix), the algorithm no longer converges. Second, the number of iterations at convergence also depends on the complexity of the solution itself. In the example of an elastic ideally plastic matrix reinforced by stiff inclusions, the computing time increases with the tortuosity of the bands where the strain tends to localize (see below). \subsection{Implementation of the method on a vector or a parallel computer} The constitutive law acts locally in real space ({\it i.e.} applies separately to each individual point $\bfx$). Similarly, Green's function ${\bf \Gamma}^0$ acts locally in Fourier space ({\it i.e.} applies separately to each individual frequency ${\bfmit \mathchar"7118}$). From a computational standpoint, the corresponding steps ({\it c}) and {\it e}) in algorithms (\ref{alg3}) or (\ref{alg4})) are performed by independent loops on each individual pixel in real or Fourier space. These steps can consequently be vectorized or parallelized. In addition, optimized FFT packages are available on most vector or parallel computers. The whole algorithm can therefore be efficiently implemented on these machines. It follows from the same argument that the time spent in the steps corresponding to the constitutive law and to the Lippmann--Schwinger equation varies linearly with the number $N$ of pixels. The CPU time for an FFT varies as $N \log_2 N$. The time required by the other steps of the algorithm is comparable to the time required by the FFTs. The CPU time $t$ for one iteration can therefore be estimated by $$ k_1 N \ \le \ t \ \le \ k_2 N \log_2 N, $$ where $k_1$ and $k_2$ are expected to be independent of the size $N$ of the problem. The dependence of the CPU time on the size of the problem is shown in Figure \ref{fig2}. The square unit cell shown in Figure \ref{std} is subjected to uniaxial transverse tension at $0^\circ$. The volume fraction of fibers is 47.5\%.
Both the fibers and the matrix are assumed to be elastic with elastic constants given by (\ref{fibr}) and (\ref{matr0}). The dependence of the CPU time on the size of the problem is approximately linear. \vskip 0.3cm \noindent {\bf Optimizing the memory occupancy}. The Fourier transform of a real valued function has the symmetry property $$\hat{f}(-{\bfmit \mathchar"7118}) = \overline{\hat{f}}({\bfmit \mathchar"7118}).$$ Since all quantities under consideration in our computation are real, this symmetry property allows us to restrict our attention to positive frequencies (the values of the fields for negative frequencies being immediately deduced). The size of the arrays can therefore be divided by 2, provided the FFT package allows for the storage of real numbers as complex numbers with the same memory occupancy. \vskip 0.3cm \noindent {\bf Performances}. Most computations were run on a Cray YMP with a peak performance of 333 MFlops. The performance observed with our algorithm was $\simeq 210$ MFlops on the elastic-plastic problem described in section 4 with unit cells discretized into $1024 \times 1024$ pixels. The typical CPU time on one processor of this computer is less than 30 seconds for an elastic problem (with a spatial resolution of $1024\times 1024$ pixels, the ratio between the Young's moduli being approximately 6). When the matrix is elastic-plastic, the typical CPU time for a run as described in section 4 is 4000 seconds. \subsection{Comparison with analytical solutions} To assess the accuracy and the stability of the method we examined two cases for which analytical solutions are available. \vskip 0.3cm \noindent {\bf Laminates}. The first example concerns layered materials. As is well known, the strain field is then uniform within each individual layer and takes different values from one layer to another. The example shown in Figure \ref{layer} corresponds to a two-phase material, both phases having equal volume fraction.
The layers are parallel to the plane $(x_2,x_3)$. The constitutive materials of the layers were linear elastic with elastic characteristics given by (\ref{fibr}) and (\ref{matr0}). The applied loading was pure shear parallel to the layers $$ \S_{12} \ {\rm arbitrary},\quad \S_{11}=\S_{22}= \S_{33} = \S_{13}= \S_{23}= 0. $$ The image was discretized into $32 \times 32$ pixels (good results were obtained with an even coarser resolution). The computed local strain field $\varepsilon_{12}$ is plotted in Figure \ref{layer} and shows no oscillation. In addition, the numerical solution coincides with the exact solution. \vskip 0.3cm \noindent {\bf Circular fiber at dilute concentration}. The second example concerns the elastic strain field generated by stiff circular fibers placed at the nodes of a square lattice in a more compliant matrix. The exact solution to this problem (with periodic boundary conditions) is not known in closed form (to the authors' knowledge). However, when the volume fraction of fibers is small this solution can be accurately approximated by the solution of a simpler problem, where a circular fiber (with radius $a$) is surrounded by a circular shell of matrix (with radius $b$) and subjected to the boundary condition $$ {\bf u}(\bfx) = \bfE \cdot \bfx \quad {\rm when}\quad r=b,$$ where the overall strain $\bfE$ is the same as in the original periodic problem. When the imposed loading is an in-plane shear ($E_{12} \neq 0$, all other $E_{ij}=0$), the displacement field has the form $$ \left.
\begin{array}{rl} \displaystyle u_r(r,\theta) & = \left( A r^3 + B r + \frac{C}{r} + \frac{D}{r^3} \right) \sin(2 \theta), \\ \\ \displaystyle u_{\theta}(r,\theta)& = \left( \frac{2 \lambda + 3 \mu}{\lambda}\, A r^3 + B r + \frac{\mu}{\lambda + 2 \mu}\,\frac{C}{r} - \frac{D}{r^3} \right) \cos(2 \theta), \end{array} \right\} $$ where $r$ and $\theta$ are the polar coordinates in the plane. $A,\ B,\ C,\ D$ take different values in the matrix and in the fiber. They solve a system of linear equations expressing the boundary condition at $r=b$, the absence of singularity at $r=0$, and the continuity of tractions and displacements at $r=a$. \vskip 0.3cm According to Saint-Venant's principle, the local strain fields in the two problems coincide far from the boundary of the cell. Therefore, at low volume fractions of fibers ($a^2 / b^2 \ll 1$), the solutions of the two problems are expected to coincide except in the vicinity of the boundary of the cell. The example presented in Figure \ref{infini} corresponds to $a/b = 1/16$. The spatial discretization used in the numerical calculation was $1024 \times 1024$ pixels. The component $\varepsilon_{12}$ of the strain field in a square window of width $c= 4 a$ is shown in Figure \ref{infini} (note that the unit cell itself, with width $2b$, is much larger than the window shown). There is almost no difference between the analytical and the numerical solutions shown in (a) and (b) respectively. A more explicit comparison is made in Figure \ref{infini} (c), which shows a horizontal cut through the field $\varepsilon_{12}$ at $x_2 = 0$. Apart from small undulations inside the inclusion, there are no significant oscillations at the fiber boundary where the field $\varepsilon_{12}$ is discontinuous. In addition, the accuracy of the numerical solution is observed to increase with the spatial resolution.
The discrepancy between the numerical and the analytical solutions depends on the spatial resolution and should not be attributed to a Gibbs phenomenon, {\it i.e.} to an oscillation of the Fourier series of a function in the vicinity of a discontinuity point. This oscillation arises from the partial summation of the Fourier series, which is {\it not} what the discrete inverse Fourier transform performs. \vskip 0.3cm \noindent{\bf Discrete Fourier transform}. The discrete Fourier transform, when applied to an image discretized into $N_1\times N_2$ pixels, is the exact Fourier transform of the image when two requirements are met (Brault and White \cite{BRA71})~: \begin{itemize} \item[\bf C1] the image is periodic with the same period $(T_1,T_2)$ as the unit cell, \item[\bf C2] the image cut-off frequency $f^c$ ({\it i.e.} the frequency above which the Fourier transform of the image vanishes identically) is less than half of the sampling frequency (Shannon's theorem): $$ f^c_j < {1 \over 2} \ {N_j \over T_j}, \quad j = 1,2.$$ \end{itemize} The periodic boundary conditions which have been assumed from the very beginning of this study ensure that condition (C1) is met. However, condition (C2) is not met in general. In particular, a discontinuous field has no cut-off frequency and there is no discretization able to capture this discontinuity. It is however expected that the solution of the discrete problem approaches the solution of the continuous problem when the image sampling (number of pixels) increases. A high resolution will therefore be required for problems in which high strain or stress gradients are likely to occur. \subsection{Influence of spatial resolution} As already stated, the influence of the spatial resolution depends on the stress and strain gradients within the phases and therefore on the strength of the nonlinearities of the phases. The following examples illustrate these general considerations.
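Conditions (C1) and (C2) can be illustrated numerically: for a band-limited periodic signal sampled above the Nyquist rate, the normalized DFT reproduces the Fourier-series coefficients exactly. The test signal below is an illustrative choice, not one of the fields computed in this study.

```python
import numpy as np

T, N = 1.0, 16                 # period and number of samples per period (C1)
k = 3                          # single harmonic: cut-off f_c = k/T < N/(2T)  (C2)
x = np.arange(N)*T/N           # uniform periodic sampling
f = np.cos(2*np.pi*k*x/T)
c = np.fft.fft(f)/N            # DFT normalized to Fourier-series coefficients
# the only nonzero coefficients are 1/2 at frequencies +k and -k (index N-k)
assert abs(c[k] - 0.5) < 1e-12 and abs(c[N - k] - 0.5) < 1e-12
assert np.max(np.abs(np.delete(c, [k, N - k]))) < 1e-12
```

If $k$ is raised above $N/2$ the same computation exhibits aliasing, which is the situation faced by discontinuous strain fields (no cut-off frequency).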
The method has been applied to simulate the local and overall response of composites reinforced by unidirectional long fibers aligned along the $e_3$ direction. The geometry of these composites is described by a two-dimensional image of their cross section. Generalized plane strain was assumed~: \begin{equation} {u}_1 (\bfx)={u}_1 (x_1,x_2),\quad {u}_2 (\bfx)={u}_2 (x_1,x_2),\quad {u}_3(\bfx)= E_{33} x_3. \label{gps} \end{equation} The overall strain ${\bf E}$ has four independent components $E_{11}$, $E_{22}$, $E_{12}$, $E_{33}$ (the other two are equal to 0). The overall stress $\bfS$ also has four independent components. It is possible to prescribe either a path in the space of strains, or a path in the space of stresses, or alternatively some components of the strain and the other components of the stress. Classical plane strain is a particular case of the more general setting considered in (\ref{gps}); it corresponds to a path in the space of strains along which $E_{33}$ is identically $0$. The need to introduce generalized plane strain is illustrated by uniaxial tension in the $0^\circ$ direction, which corresponds to a path in the space of stresses along which \begin{equation} \S_{11} \ {\rm arbitrary},\quad \S_{22}=\S_{12}= \S_{33}= 0. \label{ut} \end{equation} The axial component $E_{33}$ of the strain is unknown and determined {\it a posteriori} by the condition $\S_{33}=0$. The assumption of generalized plane strain reduces (\ref{eq5}) to a two-dimensional problem for the two unknowns $(u^*_1,u^*_2)$. \vskip 0.3cm Two classical configurations were investigated in which the fibers were placed at the nodes of a square or hexagonal lattice. The fibers were assumed to be elastic, isotropic, and characterized by a Young's modulus and a Poisson ratio~: \begin{equation} E^f \ = \ 400 \ {\rm GPa},\quad \nu^f \ = \ 0.23. \label{fibr} \end{equation} The fiber volume fraction was 47.5\% (for comparison, we chose the same volume fraction as in \cite{BOH93}).
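As a quick sanity check of the {\it a posteriori} determination of $E_{33}$, the condition $\S_{33}=0$ can be solved in closed form for a homogeneous isotropic cell, $\S_{33} = \lambda(E_{11}+E_{22}+E_{33}) + 2\mu E_{33} = 0$; this closed form is standard isotropic elasticity, used here only as an illustration (the Lam\'e coefficients below are arbitrary).

```python
def axial_strain(E11, E22, lam, mu):
    """E33 enforcing Sigma_33 = 0 for a homogeneous isotropic medium:
    Sigma_33 = lam*(E11 + E22 + E33) + 2*mu*E33 = 0."""
    return -lam*(E11 + E22)/(lam + 2.0*mu)

# illustrative Lame coefficients (MPa) and in-plane strains
lam, mu = 40e3, 26e3
E33 = axial_strain(0.01, -0.003, lam, mu)
S33 = lam*(0.01 - 0.003 + E33) + 2*mu*E33   # should vanish
```

In the heterogeneous case $E_{33}$ has to be adjusted iteratively on the computed overall stress rather than through this closed form.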
The behavior of the matrix was varied from linear elasticity to elasto-plasticity with hardening so as to study the effect of the nonlinearity on the accuracy of the method. All the constitutive laws of the matrix which were considered can be put in the incremental form (\ref{comp}). Its isotropic elastic properties were characterized by a Young's modulus and a Poisson ratio \begin{equation} E^m \ = \ 68.9 \ {\rm GPa},\quad \nu^m=0.35. \label{matr0} \end{equation} The plastic properties of the matrix were governed by the von Mises criterion \begin{equation} \s_{eq} \leq \s_0 + H p. \label{matr} \end{equation} The initial yield stress $\s_0$ was either infinite (pure linear elasticity) or given by $\s_0 \ = \ 68.9$ MPa. The hardening modulus $H$ was either $0$ (perfectly plastic behavior) or $H = 1\,171$ MPa (isotropic linear hardening). \vskip 0.3cm The influence of spatial resolution on the accuracy of the results was studied. The spatial resolution of the image is defined here as the square root of the number of pixels per fiber ({\it i.e.} of the total number of pixels contained in the image divided by the number of fibers in the image). For the square array, with $N_1 \times N_1$ pixels and a single fiber in the unit cell, the spatial resolution is exactly $N_1$. The hexagonal array can be viewed as a rectangular array, thus allowing the use of the Fourier technique in orthogonal coordinates, instead of the natural nonorthogonal coordinates defined by the two unit vectors of the hexagonal lattice (see Figure \ref{std}). The rectangular unit cell contains $1+4\times{1 \over 4} \ = \ 2$ fibers. The number of pixels along the first direction $x_1$ is twice the number of pixels along the second direction $x_2$. The spatial step in $x_2$ is $2 \sqrt{3} / 3$ times the step in $x_1$. Therefore, in the hexagonal array, the spatial resolution as defined above is again $N_1$ for an image containing $2N_1 \times N_1$ pixels.
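The definition of the spatial resolution can be checked directly for both arrays (the pixel counts are those quoted above):

```python
import math

def resolution(n_pixels, n_fibers):
    # square root of the number of pixels per fiber
    return math.sqrt(n_pixels/n_fibers)

N1 = 128
assert resolution(N1*N1, 1) == N1        # square array: one fiber per cell
assert resolution(2*N1*N1, 2) == N1      # hexagonal array: two fibers per cell
```

The same count shows why the rectangular description of the hexagonal cell costs twice the pixels of the square cell at equal resolution.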
\vskip 0.3cm Both unit cells were subjected to uniaxial tension at $0^\circ$ and $45^\circ$ in the sense of (\ref{ut}). The overall responses of the composite are shown in Tables 1 to 6. The initial response of the composite is linear and its slope defines the overall Young's modulus of the composite. When the matrix is elastic ideally plastic, the overall stress applied to the composite in the direction of tension reaches (asymptotically) a limit which defines the overall {\it flow stress} of the composite. When the matrix is governed by a linear hardening, the stress-strain curve of the composite exhibits a nonlinear transition to an asymptotically linear (affine) response. The slope of this limit response is the overall hardening modulus of the composite. \vskip 0.3cm Each table gives an overall material constant as a function of the spatial resolution of the image. The ``error'' was estimated as the relative difference between the result at a given resolution and the result at the finest resolution. \vskip 0.3cm These results suggest the following remarks. \begin{itemize} \item[1.] When both constituents are linearly elastic, the overall stiffness is not very sensitive to spatial resolution. Even at the lowest resolution ($32 \times 32$ pixels/fiber), the estimated error was under 1\% in all cases. \item[2.] When the matrix is elastic-plastic, the local and overall responses are sensitive to spatial resolution. The strain fields exhibit a strong tendency to concentrate in thin bands. The higher the nonlinearity, the thinner the bands. These steep strain gradients require a high spatial resolution to be captured correctly. \item[3.] The solutions may even be discontinuous when the matrix is elastic-perfectly plastic. This explains the relatively high errors at low resolution: about 15\% for the square array of fibers in an elastic-perfectly plastic matrix under tension at $0^\circ$, with a resolution of $32 \times 32$ pixels/fiber.
Shear bands can form in the matrix under tension at $45^\circ$. These shear bands correspond to a mode of deformation of the r.v.e. in plane strain. Therefore, for this particular loading, the effective behavior of the composite depends only on the behavior of the matrix. The overall flow stress of the composite coincides with the flow stress of the matrix under plane strain conditions, {\it i.e.} $2 \s_0 / \sqrt{3}$. The formation of a slip plane through the matrix is well captured by the numerical method and explains the precision of the numerical result for this particular loading. \item[4.] When the matrix has linear hardening, the strain fields are more regular than in the perfectly plastic case. The local and overall responses of the composite are less sensitive to spatial resolution. The error on the hardening modulus is about 7.5\% with a resolution of $32 \times 32$ pixels/fiber. \end{itemize} This study of the influence of spatial resolution led us to use a resolution of $128 \times 128$ pixels/fiber in most of the examples presented in the next section. \section{Fiber arrangement} In this section we investigate the influence of the geometrical arrangement of the fibers on the local and overall responses of nonlinear composites. Attention is again restricted to two-dimensional problems, {\it i.e.} to composites reinforced by aligned fibers. The fiber arrangement is determined by a two-dimensional image of the composite cross section. \subsection{Configurations} Two classes of fiber arrangement, regular and random, were considered. The fibers were identical circular disks which were not allowed to overlap (impenetrability condition), and in most simulations the fiber volume fraction was prescribed to $47.5\%$; both constraints are relaxed in section 4.3. \vskip 0.3cm \noindent {\bf Standard fiber distribution}. The ``standard'' configurations consist of a single fiber placed at the nodes of a square or a hexagonal lattice (see preceding section).
Most FEM cell calculations reported in the literature are based on these standard configurations, with the exception of Brockenborough {\it et al} (1991) and B\"{o}hm {\it et al} (1993), who investigated the effect of disorder in the fiber arrangement on the overall transverse properties of composites. \vskip 0.3cm \noindent {\bf Random fiber distribution}. In the ``random'' configurations, the centers of the fibers were placed at random in the unit cell, subject only to the constraints of impenetrability and periodicity. The latter constraint implies that, when a fiber overlaps the boundary of the unit cell, it is split into two parts I and II (see Figure \ref{fcar}) to fit in the unit cell. The size of the images was the largest allowed by the memory of our computer that was compatible with a resolution of $128 \times 128$ pixels per fiber. These two constraints led to unit cells discretized into $1024 \times 1024$ pixels and containing up to 64 fibers. \subsection{Impenetrable fibers} Twenty-three different configurations of 64 impenetrable fibers were generated randomly in the unit cell. The fibers were assumed to be elastic with material properties given by (\ref{fibr}). The matrix was an elastic-plastic material governed by a $J_2$ flow theory (\ref{comp}) with material properties given by (\ref{matr0}) and (\ref{matr}). The local and overall responses of each configuration to a transverse uniaxial tension in the $0^\circ$ direction (according to (\ref{ut})) were computed with the method described above. The square array and hexagonal array were also subjected to transverse tension in the $0^\circ$ and $45^\circ$ directions. \subsubsection{Local and overall responses} The stress-strain curves predicted by the simulation are shown in Figure \ref{courbes}. The solid line corresponds to the mean response (average of the stress-strain curves over the 23 configurations). These results call for the following comments~:\begin{itemize} \item[1.]
The fibers were stiff and perfectly bonded to the matrix. Therefore, although the strain $E_{33}$ in the axial direction was not imposed {\it a priori} ($\S_{33}$ was prescribed to $0$), it was relatively small along the whole loading path. The strain state was consequently close to the plane strain state, explaining the strain concentrations observed in the perfectly plastic matrix. As is well known, plane strain is more favorable to these strain concentrations than is pure uniaxial tension. \item[2.] The square lattice has a marked transverse anisotropy, which is strengthened by the nonlinear behavior and gives rise to different responses when the direction of tension makes an angle of $0^\circ$ or $45^\circ$ with one of the axes of the square lattice. The low value of the flow stress in the diagonal direction ($45^\circ$) is due to a shear plane passing through the matrix. Indeed, when a plane of shear can be passed through the weakest phase of a composite, the shear strength of the composite is exactly the strength of the weakest phase (Drucker (1959)). In tension (under plane strain) in a direction inclined at $45^\circ$ to this plane, the transverse flow stress of the composite is $2\s_0^m / \sqrt{3}$. This is the flow stress observed in Figure \ref{courbes} and Table 4 ($2\s_0^m / \sqrt{3}\simeq 79.56$ MPa). In conclusion, except at low volume fractions, the square array should not be used to investigate the transverse properties of transversely isotropic nonlinear composites. \item[3.] The hexagonal lattice approaches transverse isotropy. When the matrix is a hardening material, the predictions obtained with the hexagonal lattice underestimate the stiffness of the composite, or at least lie below the average of the predictions for the random configurations in the range of overall deformations considered. Another computation, not reported here, was performed up to 30\% of transverse strain, with no change in the conclusions.
A similar observation was made by Brockenborough {\it et al} (1991) for another system. When the matrix is ideally plastic, the low value of the flow stress in the diagonal direction ($45^\circ$) is again due to a shear plane passing through the matrix. In conclusion, the hexagonal lattice should be used with care to predict the transverse properties of nonlinear composite systems, even for hardening matrices. \item[4.] The deviation from the average of the transverse Young's moduli computed on the different configurations is small. By contrast, the deviations in the other properties (flow stress, hardening modulus) are larger and may be attributed to the combined effects of nonlinearity and incompressibility. \item[5.] The local plastic strains showed significant differences between the ideally plastic case and the hardening case. For the former, the strain concentrates in thin bands in the matrix. In most configurations, only a small percentage of the matrix contributes to the plastic dissipation. The overall flow stress of the composite is observed to be directly related to the ``tortuosity'' of these bands. Two different configurations with the corresponding zones of strain concentration are shown in Figure \ref{pstrain}. In the first configuration, slip bands inclined at approximately $45^\circ$ to the direction of tension can be passed through the matrix, resulting in a low flow stress. Conversely, the fiber arrangement in the second configuration inhibits long-range slip bands and causes these bands to deviate or the plastic deformation to spread into wider zones. The plastic dissipation and the flow stress are higher in the second configuration than in the first one. Adding more fibers in the undeformed zones would not change the plastic dissipation or, in other words, would not affect the flow stress of the composite.
These results lead us to think that, when the matrix is perfectly plastic, the geometrical parameter which governs (to first order) the flow stress of the composite is not the volume fraction of the fibers but, instead, the length of the shortest path passing through the matrix at an angle of approximately $45^\circ$ in tension, or $0^\circ$ in shear. \item[6.] When the matrix is a hardening material, the plastic strain spreads all over the matrix (see Figure \ref{pstrain}). The whole matrix contributes (although inhomogeneously) to the plastic dissipation and, consequently, to the overall strengthening of the composite. In this case, the volume fraction of the fibers seems to be the relevant geometrical information (at least to first order) to predict the overall hardening of the composite. \item[7.] In spite of the differences in the maps of plastic strains in the ideally plastic material and in the hardening matrix, the ``stiffest'' (respectively the ``weakest'') configurations in the ideally plastic case remain the stiffest (respectively the weakest) configurations in the hardening case. \end{itemize} \subsubsection{Model size} The present section deals with the ``representativity'' of a unit cell in two aspects. First, does the unit cell contain enough heterogeneities so that the computed effective properties no longer depend on the cell size? Second, how much do different unit cells randomly generated with the same volume fraction and number of heterogeneities differ from each other? Several series of microstructures containing 4, 9, 16, 36, 64 or 256 impenetrable fibers randomly placed in the unit cell were generated. The volume fraction of fibers was identical in all simulations ($47.5\%$) and the spatial resolution was also fixed ($128 \times 128$ pixels/fiber). The total number of pixels in each image was therefore the number of fibers multiplied by $128 \times 128$.
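The ensemble statistics used to analyze such series of microstructures (mean over configurations, standard deviation with the $N_s-1$ normalization, relative error on the mean) can be sketched as follows; the moduli values below are illustrative, not the computed ones.

```python
import math

def ensemble_stats(values):
    """Mean, standard deviation (N_s - 1 normalization) and relative
    error on the mean for a set of per-configuration overall moduli."""
    ns = len(values)
    mean = sum(values)/ns
    std = math.sqrt(sum((v - mean)**2 for v in values)/(ns - 1))
    err = std/(mean*math.sqrt(ns))   # classical error on the mean
    return mean, std, err

# illustrative per-configuration Young's moduli (GPa)
mean, std, err = ensemble_stats([140.0, 142.0, 139.0, 141.0])
```

The $1/\sqrt{N_s}$ factor in the error on the mean is what allows a smaller number of fibers per cell to be compensated by a larger number of configurations.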
The fibers and the matrix were respectively assumed to be elastic and elastic-perfectly plastic with material properties given by (\ref{fibr}), (\ref{matr0}) and (\ref{matr}). The loading was uniaxial transverse tension at $0^\circ$ (see (\ref{ut})). Statistical data on the computed Young's moduli as a function of the number of fibers in the unit cell are reported in Table 7. The mean Young's modulus and its standard deviation are defined as $$ {\bar E} \ \ = \ \ {1 \over N_s} \ \sum_{i=1}^{N_s} E_i,\quad \sigma(E) \ \ = \ \ \sqrt{{1 \over N_s-1} \ \sum_{i=1}^{N_s} (E_i-{\bar E})^2},$$ where $E_i$ is the Young's modulus of the $i^{\rm th}$ microstructure and $N_s$ is the number of different microstructures. The error on the mean is classically estimated by the ratio $$ {\sigma(E) \over \bar{E} \sqrt{N_s}}.$$ Similar data on the overall flow stress of the composite are given in Table 8. The number of fibers in the unit cell does not significantly influence the mean overall properties, provided a smaller number of fibers is compensated for by a larger number of configurations. The mean Young's modulus and the mean flow stress of configurations with four fibers differ from those of configurations with 256 fibers by $0.56\%$ and $0.74\%$ respectively. These differences are comparable to the error on the mean itself ($0.13\%$ and $0.23\%$ for the Young's modulus and the flow stress for configurations with 256 fibers). This is an illustration of the ergodic property~: spatial averaging over one large sample is equivalent to ensemble averaging over many small samples. A related observation is that the standard deviations of the overall properties decrease as the number of fibers increases. \subsubsection{Spacing between fibers} In the above analyses, the fibers were placed randomly in the unit cell with impenetrability as the only restriction. The effects of imposing a minimal space between fibers are of interest for at least two reasons.
First, when the minimal spacing between the centers of the fibers increases, the ordering of the microstructure increases. As a limit case, when this minimal spacing reaches $\sqrt{{2 \over \sqrt{3}} \cdot {S \over N}}$ ($S$ is the surface of the unit cell, $N$ is the number of fibers), the microstructure is completely determined and coincides with the centered hexagonal arrangement. Second, numerical difficulties could be expected when two neighboring fibers are nearly touching. Indeed, when the spatial resolution is not fine enough, the method cannot capture the high strain gradients in the necks between the two fibers. \vskip 0.3cm Ten configurations with 64 fibers were generated, and a minimal space of 4 pixels between two neighboring fibers was imposed. This distance seemed sufficient to describe the strain concentration correctly. The results of this study suggest the following comments~: \begin{itemize} \item[1.] When the matrix is elastic ideally plastic, the mean overall flow stress is $\Sigma_0 = 86.9 $ MPa (with an estimation error of $0.44$ MPa). This value is $2.0\%$ smaller than the value obtained with no restriction on the space between fibers. It lies slightly below the flow stress of the hexagonal array subjected to tension at $0^\circ$ ($\Sigma_0=87.9 $ MPa). However, it lies above the flow stress of the square array under tension at $0^\circ$ or $45^\circ$ ($\Sigma_0=79.6$ MPa) and of the hexagonal array under tension at $45^\circ$. \item[2.] When the matrix is elastic-plastic with linear hardening, the effective hardening modulus drops significantly~: $H = 9382$ MPa (estimation error $= 123.8$ MPa), instead of $10\,002$ MPa. But it is still much higher than the hardening modulus predicted for the hexagonal array ($H=7100 $ MPa at $0^\circ$, $H=7420 $ MPa at $45^\circ$).
\end{itemize} In conclusion, it seems that the ``safety coating'' around the fibers leads to a decrease in the overall mechanical properties of the composite, at least at the volume fraction which has been investigated. \subsubsection{Influence of the shape of the fibers} The above analyses show that the overall flow stress of the composite and, to a lesser extent, its overall hardening depend primarily on the tortuosity of shear bands passing through the matrix. Obviously, the volume fraction of the reinforcing phase plays a role in whether such bands can form, but for a fixed volume fraction, significant differences arise from the differences in the patterning of the bands. These shear bands are blocked or deviated by the fibers. The overall flow stress of the composite can (empirically) be related to the length of the shortest path passing through the matrix and making an angle of approximately $45^\circ$ with the tensile direction. It can be expected that the {\it shape} of the fibers, which act as ``shear-band barriers'', is important for their capacity to inhibit shear bands. The shape of the fibers is important at two levels. First, it affects the arrangement of fibers in the unit cell. For instance, it can favor clustering of particles, leaving large areas of inclusion-free matrix where plastic strain is likely to localize. At a smaller scale, an elongated particle perpendicular to a shear band will form an effective barrier. \vskip 0.3cm Random microstructures were generated with three shapes of fibers~: circles, ellipses (aspect ratio $= 3.333$) and equilateral triangles. The volume fraction was $47.5 \% $. The unit cells contained 64 fibers and were discretized into $1024 \times 1024$ pixels. The centers of the fibers and their orientations were chosen randomly, subject to the constraints of periodicity, impenetrability and given volume fraction.
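Random placement under the periodicity and impenetrability constraints can be sketched as a rejection loop (random sequential addition) for circular fibers. The acceptance strategy and the parameters below are illustrative assumptions; the generator actually used for the configurations above is not detailed here.

```python
import math, random

def periodic_dist(p, q, L):
    """Distance between two centers with periodic wrap (square cell, side L)."""
    dx = abs(p[0] - q[0]); dx = min(dx, L - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, L - dy)
    return math.hypot(dx, dy)

def place_fibers(n, radius, L, gap=0.0, seed=0, max_tries=100000):
    """Random sequential addition of impenetrable circular fibers:
    a candidate center is rejected if any periodic distance to an
    already accepted center is below 2*radius + gap (minimal spacing)."""
    rng = random.Random(seed)
    centers = []
    for _ in range(max_tries):
        if len(centers) == n:
            break
        c = (rng.uniform(0, L), rng.uniform(0, L))
        if all(periodic_dist(c, q, L) >= 2*radius + gap for q in centers):
            centers.append(c)
    return centers

centers = place_fibers(16, radius=0.05, L=1.0)
```

The periodic distance is what makes a fiber overlapping the cell boundary behave as the two split parts I and II of Figure \ref{fcar}; the `gap` parameter plays the role of the minimal spacing of four pixels imposed below.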
A minimal space of four pixels between two fibers was imposed to correctly capture the high strain gradients in the matrix between two neighboring reinforcements. For each fiber shape, 10 different configurations were tested. The results of the numerical simulations are given in Table 9. \vskip 0.3cm The Young's modulus is not significantly affected by the shape of the inclusions, at least for this particular volume fraction and for the contrast of elastic properties which was investigated (investigation of the percolation threshold for highly contrasted phases would probably lead to different conclusions). The mean flow stress of the composite with elliptical inclusions is close to that of the composite with circular inclusions ($0.9\%$ higher). However, the flow stress is significantly higher for the composite with triangular inclusions ($5.2\%$ higher). This ``hardening'' effect can be attributed to the fact that, at a given volume fraction, triangles form more efficient barriers to shear-band formation. This efficiency can be related to the length of the projection of the fiber orthogonally to the shear bands. The minimal length, the maximal length, and the average length over all possible orientations are reported in Table 10 for each shape of fibers at a given area $s$. For circular fibers these three quantities are all equal to the diameter of the fiber, $2 \sqrt{s / \pi}$. \subsection{Penetrable fibers} When the matrix is elastic, ideally plastic, the overall response of the composite is strongly influenced by the existence of continuous paths in the matrix, connected from one cell to the other. The contiguity of the matrix obviously plays a crucial role in the formation of these paths, which are ruled out when the matrix is not contiguous. \vskip 0.3cm In order to study this effect, different configurations at different volume fractions were generated with {\it penetrable} fibers. The centers of the fibers were first chosen at random.
Then the volume fraction of the reinforcing phase was controlled by increasing the radius of the fibers (all fibers at a given volume fraction had identical radius). The matrix was assumed to be elastic, perfectly plastic. The results of the simulations can be analyzed as follows~: \begin{itemize} \item[1.] When the fiber volume fraction is small, shear bands can pass through the matrix. According to Drucker's remark, the resulting overall flow stress of the composite coincides with the flow stress of the matrix under plane strain, $2 \sigma_0 / \sqrt{3}$. However, when the fiber volume fraction is very small, a nearly homogeneous deformation of the matrix is more favorable (less energy is dissipated in the plastic deformation) and no strain concentration is observed. The overall flow stress of the composite then lies between the flow stress of the matrix $\sigma_0$ and the flow stress of the matrix under plane strain ($2 \sigma_0 / \sqrt{3}$). \item[2.] Above a certain radius, straight shear bands can no longer pass through the matrix. For a given geometrical distribution of fibers, this radius is half of the maximal distance between adjacent parallel lines passing through the centers of the fibers and inclined at $\pm 45^\circ$ to the tensile direction. Periodic continuous paths can still pass through the matrix, but they are tortuous. The bands where the plastic strain concentrates have a nonvanishing width. The stress-strain response of the composite again reaches a limit value, higher than the flow stress of the matrix under plane strain. The overall flow stress increases with the volume fraction of fibers, and the increase is closely related to the tortuosity of the ``shear'' bands. \item[3.] When the fibers percolate and form a contiguous phase, the matrix loses its contiguity. No periodic continuous path can pass through the matrix. This leads to a drastic modification of the stress-strain curve of the composite, which no longer reaches a limit value.
The composite behaves asymptotically as an elastic-plastic material with linear hardening. \end{itemize} \subsection{Complex microstructures} To illustrate the capability of the method to deal with complex microstructures, we have considered a real microstructure taken from the work of Bornert \cite{BOR96} (see also \cite{BOR94}). The materials studied in \cite{BOR94} were two-phase iron/silver blends, manufactured with powder metallurgy techniques. The digital image was obtained by Scanning Electron Microscopy. The microstructure is shown in Figure \ref{bornert} (a). Clearly, meshing this microstructure for application of the FEM would be a considerable task. The present numerical method can handle such a microstructure as easily as the simpler ones shown in the previous examples. In the numerical simulation each phase is considered elastic-plastic, following a $J_2$-flow theory with isotropic hardening of the von Mises type. The stress-strain curves for each constituent under uniaxial tension are shown in Figure \ref{bornert} (c). The applied loading is uniaxial tension in the horizontal direction. The map of equivalent strain is shown in Figure \ref{bornert} (b) at an overall strain $E_{11} = 3.3 \%$. In the soft phase (silver in white) the strain is organized in bands which cannot develop over long distances due to the presence of the hard phase (iron in black). A full comparison between simulated and experimental strain maps is difficult to perform, essentially because the numerical calculations are two-dimensional whereas the real material is three-dimensional in nature. Only the surface of the specimen is observed, and it is in a state of plane stress, whereas the calculations are performed assuming a state of generalized plane strain. In addition, the material below the surface plays a significant role in the deformation of the surface itself.
The variations between the arrangement of the phases at the surface and below the surface are not taken into account by the numerical model. \section{Concluding remarks} A new numerical technique has been developed to investigate the local and overall response of nonlinear composites. The advantages of the method are the following~: \begin{itemize} \item[1.] Images of microstructures can be directly used in the analysis, which avoids meshing the microstructure. Complex microstructures can be investigated. Part of the efficiency of the method is due to the use of FFT packages. \item[2.] The iterative procedure does not require the formation or inversion of a stiffness matrix. \item[3.] Convergence is fast. \end{itemize} However, the method has some limitations. \begin{itemize} \item[1.] Convergence is not ensured for materials containing voids or rigid inclusions. \item[2.] The number of degrees of freedom is high in comparison with the FEM (typically an image with $1024 \times 1024$ pixels is required to deal with 64 fibers). The method can be implemented only on computers with large memory capacity. \end{itemize} \noindent{\bf Acknowledgements}. Most computations were carried out at the Institut M\'editerran\'een de Technologie in Marseille, with funds provided by the PACA region. The other computations were carried out at the Institut du D\'eveloppement et des Ressources en Informatique Scientifique, funded by CNRS. The authors are indebted to Michel Bornert for fruitful discussions and for providing the image of the microstructure shown in Figure \ref{bornert} (a). \newpage
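As an illustration of the kind of FFT-based iterative procedure summarized in the concluding remarks, the following is a minimal two-dimensional sketch for the simpler scalar (conductivity) analogue of the elastic problem; it is not the authors' production code, and the grid size, phase properties and reference medium $k_0$ are hypothetical. The local gradient field is updated through the periodic Green operator applied in Fourier space, and no stiffness matrix is ever assembled:

```python
import numpy as np

def basic_scheme(k, E=(1.0, 0.0), k0=None, iters=200):
    """Fixed-point FFT scheme for scalar conduction on a periodic N x N grid.

    k : (N, N) conductivity map;  E : prescribed mean gradient.
    Returns the local gradient field e, shape (2, N, N)."""
    N = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (float(k.max()) + float(k.min()))   # reference medium
    f = np.fft.fftfreq(N)
    xi = np.array(np.meshgrid(f, f, indexing="ij"))    # frequencies, (2, N, N)
    xi2 = (xi ** 2).sum(axis=0)
    xi2[0, 0] = 1.0                                    # avoid 0/0 at the mean mode
    e = np.zeros((2, N, N))
    e[0], e[1] = E
    for _ in range(iters):
        tau = (k - k0) * e                             # polarization field
        tau_h = np.fft.fft2(tau)                       # componentwise 2-D FFT
        proj = (xi * tau_h).sum(axis=0) / (k0 * xi2)   # (xi . tau^) / (k0 |xi|^2)
        e_h = -xi * proj                               # Green-operator correction
        e_h[:, 0, 0] = np.array(E) * N * N             # enforce the mean gradient
        e = np.fft.ifft2(e_h).real
    return e
```

For a two-phase laminate with layers normal to the loading direction, the discrete fixed point reproduces the exact series (harmonic-mean) effective conductivity, which makes a convenient convergence check.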
\section{Introduction} In this paper we denote by $\R^n$ the Euclidean $n$-space, and for any $x,y \in \R^n$, the closed segment with endpoints $x,y$ by $[x,y]$. We call a compact, convex set with nonempty interior a \emph{convex body}. It is well known that for any $o$-symmetric convex body $K$ there is a unique $n$-dimensional norm whose unit ball is $K$; for any $x \in \R^n$, we denote by $||x||_K$ the norm of $x$ with respect to this norm, and write $||x||$ for the Euclidean norm of $x$. We denote by $\vol_n(K)$ and $\conv{H}$ the $n$-dimensional volume of a body $K$ and the convex hull of the set $H$, respectively. We use the notation $\Sph^{n-1}$ for the set of unit vectors in $\R^n$, and for any unit vector $u \in \Sph^{n-1}$ and convex body $K \subset \R^n$, we let $K|u^\perp$ be the orthogonal projection of $K$ onto the hyperplane through $o$ with normal vector $u$. The notation $P+Q$ means the Minkowski sum of the bodies $P$ and $Q$. A famous result of Meyer, Reisner and Schmuckenschl\"ager \cite{meyer-reisner-schmuckenschlager} states that if $K \subset \R^n$ is an $o$-symmetric convex body with the property that the volume $\vol_n (K \cap (x+K))$ depends only on the Minkowski norm $||x||_K$, then $K$ is an ellipsoid. This result is a variant of the so-called \emph{covariogram problem} of Matheron \cite{matheron}, which asks whether the function $x \mapsto \vol_n (K \cap (x+K))$ (called the \emph{covariogram function}) determines the convex body $K$. One of our main concepts is introduced in the next definition. \begin{defi}\label{defn:convexhullfunction} Let $K$ be an $n$-dimensional convex body and to a translation vector $t\in \R^n$ associate the value ${G}_K(t)=\vol_n\conv\left\{K\cup (K+t)\right\}$. The function defined in this way is called the \emph{convex hull function} associated to the body $K$.
\end{defi} This function first appeared in the literature in a 1950 paper of F\'ary and R\'edei \cite{fary-redei}, who proved that the volume of the convex hull of two convex bodies moving at constant velocity is a convex function of time (see \cite[Satz 4]{fary-redei}, and also \cite{Ahn} for the special case of polytopes). This statement was generalized by Rogers and Shephard \cite{rogers} for general point systems, which they called \emph{linear parameter systems}, later also called \emph{shadow systems}. The method introduced by Rogers and Shephard became an important tool in solving geometric optimization problems regarding convex bodies. The convex hull function gave rise to a number of interesting problems, many of which are still open; for a collection of such problems see the survey paper \cite{gho-surveyonconvhullvolume} and the references therein. Nevertheless, it is an interesting fact that the `dual' of the covariogram problem, that is, the question whether the convex hull function $G_K(t)$ determines the body $K$ or not has been asked only recently by \'A. Kurusa in a private communication. For more information on volume functions defined by convex bodies that determine the body, the interested reader is referred to the survey \cite{gho-volumefunction}. The so-called \emph{translative constant volume property} of a convex body $K$, meaning that $G_K(x)$ depends only on $||x||_K$, was defined in \cite{gho-langi-convhull} in an investigation of some extremal properties of the convex hull function and other related volume functions. The authors of \cite{gho-langi-convhull} characterized the plane convex bodies satisfying the translative constant volume property, and conjectured that any such centrally symmetric convex body in $\R^n$ with $n \geq 3$ is an ellipsoid. 
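To get a concrete feeling for Definition~\ref{defn:convexhullfunction}, the convex hull function of a polytope is easy to evaluate numerically; the following minimal sketch (assuming \texttt{scipy} is available; the unit square serves as a toy example) computes $G_K(t)$ from the vertices of $K$:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_function(vertices, t):
    """G_K(t) = vol_n conv(K u (K + t)) for a polytope K given by its vertices."""
    V = np.asarray(vertices, dtype=float)
    t = np.asarray(t, dtype=float)
    # the hull of K u (K + t) is the hull of the two vertex sets combined
    return ConvexHull(np.vstack([V, V + t])).volume   # .volume is the area for n = 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

For the unit square and $t = (2,0)$, for instance, the hull is the box $[0,3]\times[0,1]$, so $G_K(t) = 3$.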
In 2015, an interesting generalization of the convex hull function was introduced by Jer\'onimo-Castro \cite{castro}, in which he replaced the translate of the convex body by a homothetic copy of the body with a fixed ratio. He used this notion to prove the homothetic version of the translative constant volume conjecture of \cite{gho-langi-convhull}, as well as some other results related to the homothetic version of the intersection result of Meyer, Reisner and Schmuckenschl\"ager proved in \cite{meyer-reisner-schmuckenschlager}. Following \cite{castro}, we define the following: \begin{defi}\label{defn:homotheticconvexhullfunction} Let $K \subset \R^n$ be a convex body containing the origin $o$ in its interior, and $\lambda \in [0,1)$. Then the function $G_{K,\lambda} : \R^n \to \R$, defined by $$ G_{K,\lambda}(t):=\vol_n\conv\left\{K\cup \left(\lambda K+t\right)\right\}, $$ is called the \emph{$\lambda$-homothetic convex hull function} associated to $K$. \end{defi} It is worth noting that this function is closely related to the so-called illumination bodies, defined in \cite{werner1} in 1994 as follows (see also \cite{werner3}). \begin{defi}\label{defn:illuminationbodies} Let $K \subset \R^n$ be a convex body, and let $\delta > 0$. Then the convex body \[ K^{\delta} = \left\{ x \in \R^n : \vol_n \conv \left( K \cup \{ x \} \right) \leq \vol_n(K) + \delta \right\} \] is called an \emph{illumination body} associated to $K$. \end{defi} Indeed, the sublevel sets of the function $G_{K,0}$ clearly correspond to the illumination bodies of $K$, where the fact that these sets are convex bodies follows from the result \cite{fary-redei} of F\'ary and R\'edei already mentioned above. Furthermore, the observation that a similar statement holds for any value $0 \leq \lambda < 1$ follows from a result of Jer\'onimo-Castro \cite{castro}, which we are going to introduce in detail in Section~\ref{sec:homotheticconvexhull}.
The goal of this paper is to investigate the properties of the convex hull and $\lambda$-homothetic convex hull functions of convex bodies. In Section~\ref{sec:convexhull} we collect our results about convex hull functions. More specifically, we show that a convex body $K$ is characterized by its convex hull function up to translations if and only if it is centrally symmetric. A variant of this problem is the conjecture in \cite{gho-langi-convhull} about centrally symmetric convex bodies satisfying the translative constant volume property. We show that this conjecture is equivalent to the polar projection body problem introduced by Petty in \cite{petty} (see also the papers \cite{gruber} of Gruber and \cite{lutwak-affineisop} of Lutwak). Finally, we prove that for $n = 3$, any $n$-dimensional convex body satisfying the translative constant volume property and having constant brightness or constant width is a ball. In Section~\ref{sec:homotheticconvexhull} we examine the properties of homothetic convex hull functions. First, we give simple proofs of some theorems of Jer\'onimo-Castro in \cite{castro}. Finally, motivated by the proof of the polar projection body problem by Martini in \cite{martini-polarprojection} for convex polytopes, we propose a homothetic version of the translative constant volume property conjecture and prove it for $3$-dimensional convex polyhedra. \section{The convex hull function}\label{sec:convexhull} We start with an elementary property of the convex hull function. \begin{lemma}\label{lem:brightness} Let $\alpha \in \R$ be an arbitrary real number and $u \in \Sph^{n-1}$. Then we have \begin{equation}\label{eq:basicid} G_K(\alpha u)=\vol_n(K)+|\alpha|\vol_{n-1}(K|u^\perp). \end{equation} Consequently, the convex hull function $G_K$ determines the volume $\vol_n(K)$ and the brightness function $u\mapsto \vol_{n-1}(K|u^\perp)$, $u\in \Sph^{n-1}$, of $K$, and vice versa.
\end{lemma} \begin{proof} The equality in (\ref{eq:basicid}) is an easy consequence of Cavalieri's principle (see e.g. \cite[Section A.5]{gardner}). The second statement follows from the definition of the brightness function and (\ref{eq:basicid}). \end{proof} In \cite{gardner-volcic} (see also \cite{goodey-schneider-weil}) the authors proved that, apart from parallelepipeds, no convex body is characterized by its brightness function. Our next result can be regarded as the counterpart of this result for the convex hull function. Before stating it, we remark that for any convex body $K \subset \R^n$ and $x \in \R^n$, the convex hull functions of $K$ and $x+K$ are equal, and hence, the convex hull function of a convex body can characterize the body only up to translations. \begin{theorem}\label{thm:characterization_ch} A convex body $K \subset \R^n$ is characterized by its convex hull function $G_{K}$ up to translations if and only if $K$ is centrally symmetric. \end{theorem} \begin{proof} By Lemma~\ref{lem:brightness}, a convex body is characterized by its convex hull function if and only if it is characterized by its brightness function and volume. Recall that the brightness function of a convex body $K$ is the support function of a convex body, called the \emph{projection body $\Pi K$} of $K$, and the family of convex bodies whose projection body is $\Pi K$ is called the \emph{projection class} of $K$ (see e.g. \cite{gardner}). Thus, the problem of finding the convex bodies $K \subset \R^n$ determined by $G_K$ is equivalent to finding the convex bodies $K \subset \R^n$ with the property that the unique convex body in the projection class of $K$ with volume equal to $\vol_n(K)$ is $K$. On the other hand, a projection class contains exactly one $o$-symmetric convex body (called the \emph{Blaschke body} of the elements of the class), and this body has unique maximal volume in the class (see \cite{gho-volumefunction} or Theorems 4.4.3 and 3.3.9 in \cite{gardner}). 
Thus, every $o$-symmetric convex body is characterized by its convex hull function, implying the assertion for centrally symmetric convex bodies. Finally, if $K$ is not centrally symmetric, then $-K$ is not a translate of $K$. Since we clearly have $\Pi K = \Pi (-K)$ and $\vol_n(K) = \vol_n(-K)$, it follows that $G_K = G_{-K}$, showing that if $K$ is not centrally symmetric, then its convex hull function does not characterize $K$ up to translations. \end{proof} We note that the first-named author in \cite[Theorem 6]{gho-volumefunction} gave an example of two convex bodies $K, L \subset \R^n$ for any $n \geq 2$ such that $G_K = G_L$, and $L$ is not the image of $K$ under any isometry of $\R^n$. Next, we recall the notion of the translative constant volume property from \cite{gho-langi-convhull}; note that two convex bodies are said to \emph{touch each other} if they intersect and their interiors are disjoint. We remark also that the translates $x+K$ and $y+K$ of a convex body $K$ touch each other if and only if $\|x-y\|_{K-K} = 1$. \begin{defi}\label{def:translconstvol} If, for a convex body $K \subset \R^n$, we have that $\vol_n (\conv ((v+K) \cup (w+K)))$ has the same value for any touching pair of translates, we say that $K$ satisfies the \emph{translative constant volume property}. \end{defi} We note that an $o$-symmetric convex body $K$ satisfies the translative constant volume property if and only if $G_K(x)$ depends only on the norm $\|x\|_K$ of $x$. Hence, the problem of characterizing the convex bodies satisfying the translative constant volume property is the analogue for the convex hull function of the problem of characterizing the convex bodies $K$ whose covariogram function depends only on $\| x \|_K$, solved in the paper \cite{meyer-reisner-schmuckenschlager} of Meyer, Reisner and Schmuckenschl\"ager.
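The identity (\ref{eq:basicid}) also lends itself to a quick numerical sanity check. A minimal sketch in the plane (assuming \texttt{scipy}; the triangle below is an arbitrary test body): for $n = 2$ the brightness $\vol_{1}(K|u^\perp)$ is the length of the projection of $K$ onto the line orthogonal to $u$, and the two sides of (\ref{eq:basicid}) can be compared directly:

```python
import numpy as np
from scipy.spatial import ConvexHull

def brightness_2d(vertices, u):
    """vol_{n-1}(K|u^perp) for n = 2: the length of the projection of K
    onto the line through o orthogonal to u."""
    V = np.asarray(vertices, dtype=float)
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    p = V @ np.array([-u[1], u[0]])        # coordinates along u^perp
    return p.max() - p.min()

# check G_K(alpha u) = vol_n(K) + |alpha| vol_{n-1}(K|u^perp) on a triangle
tri = np.array([(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)])
u, alpha = np.array([1.0, 0.0]), 1.5
lhs = ConvexHull(np.vstack([tri, tri + alpha * u])).volume
rhs = ConvexHull(tri).volume + abs(alpha) * brightness_2d(tri, u)
```

Here $\vol_2(K) = 1$ and the brightness in direction $u = (1,0)$ equals $1$, so both sides evaluate to $1 + 1.5 = 2.5$.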
We recall that a $2$-dimensional $o$-symmetric convex curve is a \emph{Radon curve} if, for the convex hull $K$ of a suitable affine image of the curve, its polar $K^\circ$ is a copy of $K$ rotated by $\frac{\pi}{2}$ (cf. \cite{martini-swanepoel-antinorm}). It is well known that a curve is a Radon curve if and only if, in the norm induced by its convex hull, Birkhoff orthogonality is symmetric (see, e.g. \cite{gho-langi-convhull}). The next theorem can be found in \cite{gho-langi-convhull}. \begin{theorem}[G.~Horv\'ath, L\'angi \cite{gho-langi-convhull}]\label{thm:planartcvp} For any plane convex body $K$, the following are equivalent. \begin{itemize} \item[(1)] $K$ satisfies the translative constant volume property. \item[(2)] The boundary of the central symmetral of $K$ is a Radon curve. \item[(3)] $K$ is a body of constant width in a Radon norm. \end{itemize} \end{theorem} Motivated by Theorem~\ref{thm:planartcvp} and the well-known fact that if every planar section of a normed space is Radon, then the unit ball of the space is an ellipsoid (cf. \cite{alonso-benitez} or \cite{martini-swanepoel-antinorm}), the authors of \cite{gho-langi-convhull} proposed Conjecture~\ref{conj:translconstvol}. \begin{conj}\label{conj:translconstvol} Let $n \geq 3$. Then any $o$-symmetric $n$-dimensional convex body satisfying the translative constant volume property is an ellipsoid. \end{conj} If $K \subset \R^n$ is a convex body, then the \emph{projection body} $\Pi K$ of $K$ is defined as the convex body whose support function is $h_{\Pi K} (u) = \vol_{n-1}(K | u^{\perp})$ for all $u \in \mathbb{S}^{n-1}$. The polar of the projection body of $K$ is called the \emph{polar projection body} of $K$, and is denoted by $\Pi^\circ K = \left( \Pi K \right)^{\circ}$. A famous problem of convex geometry is the so-called \emph{polar projection problem} proposed by Petty in \cite{petty}, which is stated below.
\begin{conj}[Petty, \cite{petty}]\label{conj:polarprojection} If an $o$-symmetric convex body $K \subset \R^n$ with $n \geq 3$ satisfies $\Pi^\circ K=\lambda K$ for some $\lambda \in \R$, then $K$ is an ellipsoid. \end{conj} We remark that in a more general, non-symmetric form, this problem asks for the characterization of the convex bodies $K$ satisfying the property that their polar projection bodies and difference bodies are similar. This version was settled by Martini in \cite{martini-polarprojection} for convex polytopes, who proved, using an elegant argument, that the only convex polytopes with the above property are the simplices. Our next result establishes a connection between Conjecture~\ref{conj:translconstvol} and Conjecture~\ref{conj:polarprojection}. \begin{theorem}\label{thm:conjequivpolarproblem} Let $n \geq 3$. Then for any convex body $K \subset \R^n$ the following are equivalent. \begin{itemize} \item $K$ satisfies the translative constant volume property. \item For some $\lambda > 0$, $\Pi^\circ K=\lambda (K-K)$ holds. \end{itemize} \end{theorem} \begin{proof} First, assume that $K \subset \R^n$ is an $o$-symmetric convex body. Recall that the \emph{gauge function} of $K$ is defined as the function $\rho_K : \mathbb{S}^{n-1} \to \R$, $\rho_K(u) = \sup \{ \tau : \tau > 0, \tau u \in K\}$. The effect of polarity on the gauge and the support functions of $K$ is well known, and can be summarized by the equalities $\rho_{K^{\circ}} = \frac{1}{h_K}$ and $h_{K^{\circ}} = \frac{1}{\rho_K}$. Since $K$ is $o$-symmetric, we clearly have that for any $u \in \mathbb{S}^{n-1}$ and $\tau > 0$, $K + \tau u$ touches $K$ if and only if $\tau = 2\rho_K(u)$. 
Thus, by Lemma~\ref{lem:brightness}, if $K$ satisfies the translative constant volume property, then there is some constant $\Delta > 0$ such that for all $u \in \mathbb{S}^{n-1}$, \begin{equation}\label{eq:reformofconstvolprop} \Delta = 2 \rho_K(u) \vol_{n-1}(K|u^\perp) = 2 \frac{h_{\Pi K} (u)}{h_{K^{\circ}}(u)}. \end{equation} Clearly, (\ref{eq:reformofconstvolprop}) is equivalent to $\frac{\Delta}{2} K^{\circ} = \Pi K$ and also to $\frac{2}{\Delta} K = \Pi^{\circ} K$. This implies Theorem~\ref{thm:conjequivpolarproblem} for $o$-symmetric convex bodies. To prove it in the general case, it is sufficient to observe that for any $u \in \mathbb{S}^{n-1}$ and $\tau > 0$, $K+\tau u$ touches $K$ if and only if $\tau = \rho_{K-K}(u)$. \end{proof} \begin{remark} For $o$-symmetric convex bodies there is a sharp upper bound on the constant $\lambda > 0$ in Theorem~\ref{thm:conjequivpolarproblem}. Indeed, it was proved in \cite{martini-mustafaev} (see also \cite{gho-langi-convhull}) that if $c^{tr}(K)$ denotes the maximum volume of the convex hull of the convex body $K$ and a translate of $K$ intersecting $K$, normalized by $\vol_n(K)$, then \[ c^{tr}(K) \geq 1+\frac{2v_{n-1}}{v_n}, \] with equality if and only if $K$ is an ellipsoid, where $v_i$ denotes the $i$-dimensional volume of the $i$-dimensional unit ball. Furthermore, by Lemma~\ref{lem:brightness} and (\ref{eq:reformofconstvolprop}), if $K$ satisfies the translative constant volume property, then, using the notation in the proof of Theorem~\ref{thm:conjequivpolarproblem}, we have $\Delta = \left( c^{tr}(K) - 1 \right) \vol_n(K)$, implying $\lambda= \frac{2}{\Delta} \leq \frac{v_n}{v_{n-1}} \cdot \frac{1}{\vol_n(K)}$, with equality if and only if $K$ is an ellipsoid. \end{remark} A seminal result of Howard \cite{howard}, proving a conjecture of Nakajima \cite{nakajima}, states that any convex body in $\R^3$ having both constant width and constant brightness is a ball. 
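For the Euclidean unit ball in $\R^3$ every quantity appearing in the preceding remark is explicit, and the stated equality case can be checked directly; a small sketch ($v_i$ denotes the volume of the $i$-dimensional unit ball):

```python
import math

vol_K = 4.0 * math.pi / 3.0          # vol_3 of the unit ball in R^3
v3, v2 = 4.0 * math.pi / 3.0, math.pi
rho, brightness = 1.0, math.pi       # rho_K(u) and vol_2(K|u^perp) for the ball
Delta = 2.0 * rho * brightness       # the constant in (reformofconstvolprop)
c_tr = 1.0 + Delta / vol_K           # normalized maximal hull volume c^tr(K)
lam = 2.0 / Delta                    # Pi^o K = lam (K - K) forces lam = 2/Delta
bound = v3 / (v2 * vol_K)            # the upper bound from the remark
```

Both $c^{tr}(K) = 1 + 2v_2/v_3 = 5/2$ and $\lambda = v_3/(v_2 \vol_3(K)) = 1/\pi$ attain the stated extremal values, as they must for an ellipsoid.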
Our next result is a similar statement involving the translative constant volume property. \begin{theorem}\label{thm:translconstinplane} If $K \subset \R^3$ is a $3$-dimensional convex body of constant brightness or of constant width, and it satisfies the translative constant volume property, then it is a ball. \end{theorem} \begin{proof} By Theorem~\ref{thm:conjequivpolarproblem}, $K$ satisfies the translative constant volume property if and only if $\Pi^{\circ} K = \lambda (K-K)$ for some $\lambda > 0$. On the other hand, $K$ is of constant brightness if and only if $\Pi K$ is a Euclidean ball, which is also equivalent to the property that $\Pi^{\circ} K$ is a Euclidean ball. Similarly, $K$ is of constant width if and only if $K-K$ is a Euclidean ball. Consequently, if $K$ satisfies the translative constant volume property, then it is of constant brightness if and only if it is of constant width. Thus, Theorem~\ref{thm:translconstinplane} readily follows from the result of Howard in \cite{howard}. \end{proof} \begin{remark} We note that the statement in Theorem~\ref{thm:translconstinplane} is false for plane convex bodies. Indeed, a plane convex body $K$ is of constant width if and only if it is of constant brightness, both properties corresponding to the property that its central symmetral $\frac{1}{2}(K-K)$ is a Euclidean disk. On the other hand, since central symmetrization does not change the length of a longest chord of a convex body in any direction, we have that in this case $K$ also satisfies the translative constant volume property. \end{remark} \section{The homothetic convex hull function}\label{sec:homotheticconvexhull} As was mentioned in the introduction, a result of Meyer, Reisner and Schmuckenschl\"ager \cite{meyer-reisner-schmuckenschlager} states that if for some $o$-symmetric convex body $K \subset \R^n$ and some $\tau > 0$, the volume $\vol_n (K \cap (\tau K + x))$ depends only on the Minkowski norm $\|x\|_K$, then $K$ is an ellipsoid.
Motivated by this result, a similar problem was investigated by Jer\'onimo-Castro in \cite{castro}. In particular, he proved the following (cf. \cite[Theorems 1 and 2]{castro}): \begin{theorem}[Jer\'onimo-Castro, 2015]\label{thm:castro2} Let $K\subset \R^n$ be a convex body with $o \in \inter(K)$ and let $L \subset \R^n$ be an $o$-symmetric convex body. If there is a number $\lambda \in (0, 1)$ such that $\vol_n \conv(K \cup (\lambda K + x))$ depends only on the Minkowski norm $\|x\|_L$, then $L$ is homothetic to $K$. In particular, if $\vol_n \conv(K \cup (\lambda K + x))$ is a rotationally symmetric function of $x$, then $K$ is a Euclidean ball. \end{theorem} We formulate an important tool in the proof of Theorem~\ref{thm:castro2} as Lemma~\ref{lem:lambdazero}, and give a slightly shorter proof of it than the one in \cite{castro}. This lemma establishes a connection between sublevel sets of $\lambda$-homothetic convex hull functions and illumination bodies. We use this lemma in the proof of Theorem~\ref{thm:3d}. \begin{lemma}\label{lem:lambdazero} Let $n \geq 2$, and let $K \subset \R^n$ be a convex body with $o \in \inter(K)$. If for some $0 \leq \lambda < 1$, $L$ is a sublevel set of $G_{K,\lambda}$, then $\frac{1}{1-\lambda} L$ is an illumination body of $K$; i.e. it is a sublevel set of $G_{K,0}$, and vice versa. \end{lemma} \begin{proof} Consider some point $t \in L$. An elementary computation shows that the center of homothety $\chi$ that maps $K$ into $t + \lambda K$ is $\frac{t}{1-\lambda}$. Let $K'= \conv (K \cup (t+\lambda K)) \setminus (t+\lambda K)$. Then $\vol_n(K') = G_{K,\lambda}(t) - \lambda^n \vol_n(K)$. On the other hand, we clearly have $K' = \conv \left(K \cup \left\{ \frac{t}{1-\lambda} \right\} \right) \setminus \conv \left( (t+\lambda K) \cup \left\{ \frac{t}{1-\lambda} \right\} \right)$, which implies $\vol_n(K') = (1-\lambda^n) G_{K,0}\left( \frac{t}{1-\lambda} \right)$.
Thus, \[ G_{K,0}\left( \frac{t}{1-\lambda} \right) = \frac{G_{K,\lambda}(t) - \lambda^n \vol_n(K)}{1-\lambda^n}, \] which yields that $\frac{1}{1-\lambda} L$ is a sublevel set of $G_{K,0}$. \end{proof} Unfortunately, the witty proof of Theorem~\ref{thm:castro2} cannot be applied to settle Conjecture \ref{conj:translconstvol}. However, this result suggested the following problem, which appeared in \cite{gho-volumefunction}: \begin{problem}\label{prob:homconvhull} Let $0 \leq \lambda < 1$ be arbitrary. Is it true that the $\lambda$-homothetic convex hull function $G_{K,\lambda}$ of a convex body $K$ determines $K$; that is, does $G_{K,\lambda} = G_{L,\lambda}$ imply $K=L$? \end{problem} First, note that by Theorem~\ref{thm:characterization_ch} and also by \cite[Theorem 6]{gho-volumefunction}, the convex hull function of a convex body does \emph{not} determine the body. Our first result is an affirmative answer to Problem~\ref{prob:homconvhull}. Our argument also yields a short proof of Theorem~\ref{thm:castro2}, different from the one in \cite{castro}. Nevertheless, we will use the argument in \cite{castro} in the proof of Lemma~\ref{lem:lambdazero}. Before stating our result, we remark that any convex body $L \subset \R^n$ containing $o$ in its interior is the unit ball of an asymmetric norm \cite{cobzas}, which we denote by $|| \cdot ||_L$. \begin{theorem} Let $0 \leq \lambda < 1$ and $n \geq 2$. Then the following holds. \begin{itemize} \item[(i)] If $K,L \subset \R^n$ are convex bodies satisfying $G_{K,\lambda} = G_{L,\lambda}$, then $K=L$. \item[(ii)] If $K, L \subset \R^n$ are convex bodies and $G_{K,\lambda}(t)$ depends only on the $L$-norm $||t||_L$ of $t$, then $K$ is a homothetic copy of $L$. \end{itemize} \end{theorem} We note that the case where $L$ is a Euclidean ball in (ii) yields that if $G_{K,\lambda}(t)$ is rotationally symmetric, then $K$ is a ball. \begin{proof} Let $K \subset \R^n$ be an arbitrary convex body, and $0 \leq \lambda < 1$.
By its definition, $G_{K,\lambda} (t) \geq \vol_n (K)$, with equality if and only if $\lambda K + t \subset K$. On the other hand, by convexity, $\lambda K + t \subset K$ is equivalent to $t \in (1-\lambda) K$. Thus, $G_{K,\lambda}$ is minimal exactly at the points of $(1-\lambda) K$, and this implies that if $G_{K,\lambda}(t) = G_{L,\lambda}(t)$ for all $t \in \R^n$, then $(1-\lambda) K = (1-\lambda)L$, which, since $0 \leq \lambda < 1$, implies $K=L$. Assume now that $G_{K,\lambda}(t)$ depends only on the $L$-norm $||t||_L$ of $t$ for some convex body $L$ with $o \in \inter L$. This implies that any level set of $G_{K,\lambda}$ is a homothetic copy of $\bd(L)$. Since $G_{K,\lambda}(\mu t) \leq G_{K,\lambda}(t)$ for any $0 \leq \mu \leq 1$, it follows that all sublevel sets of $G_{K,\lambda}$ are homothetic copies of $L$. In particular, this implies that $(1-\lambda)K=\{ t : G_{K,\lambda}(t) \leq \vol_n(K) \}$ is a homothetic copy of $L$, which proves (ii). \end{proof} It is worth remarking that by the result of F\'ary and R\'edei \cite{fary-redei} mentioned in the introduction, the function $G_{K,\lambda}(t)$ is a convex function of $t$, which yields that all its sublevel sets are convex bodies. In \cite{castro}, Jer\'onimo-Castro also proved that if $K \subset \R^n$ is a convex body of class $C^2_+$, containing $o$ in its interior, and for some $0 < \lambda <1$, $G_{K,\lambda}(t)$ depends only on the $K$-norm $||t||_K$ of $t$, then $K$ is an ellipsoid. Observe that the conditions in Jer\'onimo-Castro's theorem imply that all sublevel sets of $G_{K,\lambda}$ are positive homothetic copies of $K$. On the other hand, clearly, all sublevel sets of $G_K$ (apart from $\{ o \}$) are positive homothetic copies of $K$ if and only if there is a sublevel set of $G_K$ which is a positive homothetic copy of $K$. This leads to the following question. \begin{question}\label{ques:homothetic} Let $n \geq 2$ and $0 \leq \lambda < 1$.
Determine the convex bodies $K \subset \R^n$, containing $o$ in their interiors, with the property that for some $\mu > 0$, $\mu K$ is a sublevel set of $G_{K,\lambda}$, or equivalently, with the property that one of the illumination bodies of $K$ is a positive homothetic copy of $K$. \end{question} We note that Question~\ref{ques:homothetic}, stated for illumination bodies of a convex body, already appeared in the literature as part 1 of the so-called \emph{generalized homothety conjecture} of Werner and Ye in \cite{werner4}. As partial results in this direction, we mention a result of Stancu \cite{stancu}, proving that if $K$ has a $C^2_{+}$ boundary, and there is some $\delta_0 > 0$ such that for any $0 < \delta < \delta_0$, $K^{\delta}$ is homothetic to $K$, then $K$ is an ellipsoid, and also a result of Jer\'onimo-Castro, who proved a similar result in the planar case using weaker boundary conditions for $K$. Motivated by the result of Martini in \cite{martini-polarprojection} and also by the results of Werner \cite{werner2} and Mordhorst and Werner \cite{mordhost} about the illumination bodies of convex polytopes, we investigate Question~\ref{ques:homothetic} for convex polytopes. Our main result is the following. \begin{theorem}\label{thm:3d} There is no convex polytope $P \subset \R^3$, with $o \in \inter(P)$, such that for some $0 \leq \lambda < 1$ and $\mu > 0$, $\mu P$ is a sublevel set of $G_{P,\lambda}$. Equivalently, there is no convex polytope $P \subset \R^3$ such that an illumination body of $P$ is a positive homothetic copy of $P$. \end{theorem} We prove Theorem~\ref{thm:3d} for illumination bodies, as stated in the second form. To prove it, we start with some lemmas. \begin{lemma}\label{lem:piecewise} Let $n \geq 2$, let $P \subset \R^n$ be a convex polytope, and let $\delta > 0$ be arbitrary.
Then $P^{\delta}$ is a convex polytope, and: \begin{itemize} \item[(i)] The $(n-2)$-skeleton of $P^{\delta}$ is contained in the union of all facet hyperplanes of $P$. \item[(ii)] For any facet hyperplane $H$ of $P$, $H \cap \bd ( P^{\delta})$ is contained in the $(n-2)$-skeleton of $P^{\delta}$. \end{itemize} \end{lemma} \begin{proof} Let $\mathcal{S}$ be a decomposition of $\bd(P)$ into $(n-1)$-dimensional simplices whose vertices are vertices of $P$. Note that for any simplex with vertices $p_1, \ldots, p_n$ in this decomposition the volume of $\conv \{ p_1, \ldots, p_n, t \}$ is $\frac{1}{n!}$ times the absolute value of the determinant with column vectors $p_2-p_1, \ldots, p_n-p_1, t-p_1$. Thus, if for any $t \in \R^n$, $\mathcal{S}_t$ denotes the subfamily of $\mathcal{S}$ consisting of the elements $F$ of $\mathcal{S}$ such that the closed supporting half space of $P$ whose boundary contains $F$ contains $t$, we have \begin{equation}\label{eq:volume} G_{0,P}(t) = \frac{1}{n!} \sum_{\conv \{ p_1,p_2,\ldots, p_n \} \in \mathcal{S}_t } \left| \det [p_2-p_1,p_3-p_1,\ldots, p_n-p_1,t-p_1] \right|. \end{equation} Thus, by the properties of determinants, $G_{0,P}$ is a piecewise linear convex function, implying that the illumination bodies $P^{\delta}$ are convex polytopes. Furthermore, any nonsmooth point of a level hypersurface of $G_{0,P}$ (i.e. any point in the $(n-2)$-skeleton of the corresponding illumination body $P^{\delta}$) lies in the affine hull of a simplex in $\mathcal{S}$. Thus, the $(n-2)$-skeleton of $P^{\delta}$ is contained in the union of the facet hyperplanes of $P$. On the other hand, if $t \in \bd (P^{\delta})$ moves in such a way that it crosses a facet hyperplane $H$ of $P$ at a point $t_0 \in \bd (P^{\delta})$, then $\mathcal{S}_t$ changes in such a way that the simplices in $H$ all become elements of $\mathcal{S}_t$ or they are all removed from it, depending on the direction in which $t$ crosses $H$.
Hence, it follows that $t_0$ is a nonsmooth point of $\bd (P^{\delta})$, implying that it belongs to the $(n-2)$-skeleton of $P^{\delta}$. \end{proof} \begin{lemma}\label{lem:polygon} If $P \subset \R^2$ is a convex $k$-gon, and $Q$ is a convex $m$-gon such that $P \subset \inter (Q)$, every vertex of $Q$ belongs to a sideline of $P$, and every sideline of $P$ intersects $\bd(Q)$ in two vertices of $Q$, then $m \geq k$, with equality if and only if every vertex of $Q$ lies on exactly two sidelines of $P$. \end{lemma} \begin{proof} By our conditions, any sideline of $P$ contains exactly two vertices of $Q$. Thus, the number of vertices of $Q$ on the sidelines of $P$, counted with multiplicity, is equal to $2k$. On the other hand, any vertex of $Q$ belongs to at most two sidelines of $P$, which yields that $Q$ has at least $k$ vertices, with equality if and only if every vertex of $Q$ is the intersection point of two sidelines of $P$. \end{proof} Now we prove Theorem~\ref{thm:3d}. \begin{proof} Without loss of generality, suppose for contradiction that $\mu P$ is an illumination body of $P$ for some $\mu > 1$. Let $F$ be an arbitrary face of $P$, and let $H$ be the plane containing it. Then $H \cap \bd (\mu P)$ is the union of some edges of $\mu P$ by Lemma~\ref{lem:piecewise}. Assume that $F$ is a convex $k$-gon and $Q=H \cap (\mu P)$ is an $m$-gon. Then, by Lemma~\ref{lem:polygon}, $m \geq k$. Thus, let us assign each edge of $F$ to exactly one edge of $Q$ such that distinct edges of $F$ are assigned to distinct edges of $Q$. Since each edge of $P$ lies on exactly two faces of $P$, carrying out this procedure for all faces of $P$ assigns each edge of $P$ to exactly two edges of $\mu P$. On the other hand, by convexity, every edge of $\mu P$ belongs to at most two face planes of $P$, and hence, every edge of $\mu P$ is assigned to at most two edges of $P$.
Since the numbers of edges of $P$ and $\mu P$ are clearly equal, this yields that every edge of $\mu P$ lies in exactly two face planes of $P$, and for every face plane $H$ of $P$, the numbers of sides of $P \cap H$ and $H \cap (\mu P)$ are equal, and every other face plane of $P$ intersects $H \cap \bd (\mu P)$ only in vertices. It readily follows from Euler's formula that every convex polyhedron has a face with fewer than six edges. Let $F$ be such a face, and let $E$ be an arbitrary edge of $F$. Then, by our conditions, $\mu E$ belongs to exactly two face planes $H_1$ and $H_2$ of $P$. Since the plane $H$ of $F$ separates $P \subset \mu P$ and $\mu E \subset \mu P$, it follows that $H_1$ and $H_2$ intersect the interior of $Q=H \cap (\mu P)$. Let these intersections be $E_1$ and $E_2$, respectively. Then, by our conditions, both $E_1$ and $E_2$ are diagonals of $Q$. Furthermore, since $\mu E$ is parallel to $E$ and $\mu E \subset H_1, H_2$, we have that $E_1$ and $E_2$ are parallel to $E$. Since $H_1 \cap H_2$ is the line through $\mu E$, we have that $E_1$ and $E_2$ are different. Thus, $E_1$ and $E_2$ are disjoint diagonals of $Q$, implying that $Q$ has at least six vertices. On the other hand, the number of vertices of $Q$ is equal to the number of vertices of $F$, which implies that $F$ has at least six vertices, a contradiction. \end{proof} It is natural to examine Question~\ref{ques:homothetic} for convex polygons. We raise the following problem. \begin{problem}\label{prob:affinelyregular} Prove or disprove that if for some convex polygon $P \subset \R^2$ with $o \in \inter(P)$, $0 \leq \lambda < 1$ and $\mu > 0$, $\mu P$ is a sublevel set of $G_{P,\lambda}$, or equivalently, if an illumination body of $P$ is a positive homothetic copy of $P$, then $P$ is an affinely regular polygon. \end{problem} In the remaining part of our paper we give a partial answer to this problem.
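Before turning to the planar case, we note that the determinant formula in the proof of Lemma~\ref{lem:piecewise} is easy to test numerically in the plane, where $G_{0,P}(t)$ is the area of $\conv(P \cup \{t\})$ and the simplices of $\mathcal{S}$ are the edges of $P$. The following Python sketch (with our own test data, the unit square) compares the sum of triangle areas over the edges whose supporting half-plane contains $t$ with a direct convex hull computation:

```python
# Numerical check of the piecewise-linear formula for G_{0,P} in the plane:
# G_{0,P}(t) = area of conv(P ∪ {t}) = sum of triangle areas (1/2)|det|
# over the edges of P whose supporting half-plane contains t (the family S_t).

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_area(pts):
    # Convex hull via Andrew's monotone chain, then the shoelace formula.
    pts = sorted(set(pts))
    def chain(seq):
        h = []
        for q in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], q) <= 0:
                h.pop()
            h.append(q)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
    n = len(hull)
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                         - hull[(i + 1) % n][0] * hull[i][1] for i in range(n)))

def G0(P, t):
    # Sum over S_t: edges whose supporting half-plane contains t.
    total = 0.0
    for i in range(len(P)):
        p1, p2 = P[i], P[(i + 1) % len(P)]
        if cross(p1, p2, t) >= 0:   # interior of P lies to the left of p1 -> p2
            total += 0.5 * abs(cross(p1, p2, t))
    return total

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # ccw vertex order
for t in [(0.5, 0.5), (2.0, 0.5), (3.0, 3.0)]:
    assert abs(G0(square, t) - hull_area(square + [t])) < 1e-9
print(G0(square, (2.0, 0.5)))  # 1.5
```

For $t$ inside $P$ all edges belong to $\mathcal{S}_t$ and the triangles tile $P$; for $t$ outside, the cone from $t$ over the edges not visible from $t$ tiles $\conv(P \cup \{t\})$, which is exactly what the check confirms.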
To be able to state our result, we introduce the following concept. \begin{defi} Let $P \subset \R^2$ be a convex $m$-gon. Let the sides of $P$ be $S_1, \ldots, S_m$ in counterclockwise order, where the indices are defined $\mod m$. Let $L_i$ be the sideline of $P$ through $S_i$. Let $p_{i,j}$ denote the intersection point of $L_i$ and $L_j$, if it exists. For any $0 \leq k,l \leq m$, the closed polygonal curve $\bigcup_{j=1}^m [p_{j-k-1,j+l}, p_{j-k,j+l+1}]$ is called the \emph{$(k,l)$-extension} of $P$. \end{defi} \begin{theorem}\label{thm:planar} For any convex $m$-gon $P$ the following are equivalent: \begin{itemize} \item[(i)] An illumination body of $P$ is a positive homothetic copy of $P$. \item[(ii)] A $(k,l)$-extension of $P$ is a positive homothetic copy of $\bd(P)$ containing $P$ in its interior for some $k,l \geq 1$ with $2 | (k+l)$, $k+l+1 < \frac{m}{2}$ such that the side homothetic to $S_i$ is $[p_{i-(k+l)/2-1, i+(k+l)/2},p_{i-(k+l)/2, i+(k+l)/2+1}]$. \end{itemize} Furthermore, in this case the $(k,l)$-extension of $P$ is a level curve of $G_{0,P}$ homothetic to $\bd P$, and if these conditions are satisfied for the $(1,1)$-extension of $P$, then $P$ is an affinely regular polygon. \end{theorem} \begin{proof} Assume that the boundary of an illumination body of $P$, i.e. a level curve $\Gamma$ of $G_{0,P}$, is a positive homothetic copy of $P$. Without loss of generality, we may assume that the center of homothety is $o$. This implies by Lemmas~\ref{lem:piecewise} and \ref{lem:polygon} that every vertex of $\Gamma$ is some point $p_{x,y}$. Let $[p_{x,y},p_{z,w}]$ be the side of $\Gamma$ which is the homothetic copy of $S_i$. Since $[p_{x,y},p_{z,w}]$ does not cross any sideline of $P$, we have that $|x-z|, |y-w| \leq 1$. On the other hand, since $[p_{x,y},p_{z,w}]$ is not contained in any sideline of $P$ by the properties of homothety, we have $x-z=y-w= \pm 1$. This implies that $\Gamma$ is the $(k,l)$-extension of $P$ for some values of $k$ and $l$.
The fact that $P \subset \conv \Gamma$ follows from the elementary properties of the function $G_{0,P}$. In the following, we set $r=k+l$, and denote by $[p_{i-r-1+j, i+j},p_{i-r+j, i+j+1}]$ the side homothetic to $S_i$ for some $1 \leq j \leq r-1$. Since the two vertices of $\Gamma$ in $L_i$ are $p_{i-r-1,i}$ and $p_{i,i+r+1}$, it follows that $L_i$ separates $P$ from exactly $r+1$ sides of $\Gamma$. As the same holds for $L_{i-r-1}$, $\Gamma$ has more than $2(r+1)$ sides, implying that $k+l+1 = r+1< \frac{m}{2}$. By symmetry, in the remaining part we assume that $j \leq r+1-j$. Note that the homothety that maps the side $S_i=[p_{i-1,i},p_{i,i+1}]$ of $P$ to the side $S_i'=[p_{i-r-1+j, i+j},p_{i-r+j, i+j+1}]$ of $\mu P$ maps the diagonal $[p_{i-1-j,i-j},p_{i+r-j,i+r+1-j}]$ of $P$ to the diagonal $[p_{i-r-1,i}, p_{i,i+r+1}]$ of $\mu P$. Thus, it follows that $S_i$ is parallel to $[p_{i-1-j,i-j},p_{i+r-j,i+r+1-j}]$. Furthermore, for any point $x$ in the relative interior of $S_i'$, the segments $[x,p_{i-r-1+j,i-r+j}]$ and $[x,p_{i+j,i+j+1}]$ are contained in $\bd \conv (P \cup \{ x \})$. Since moving $x$ on $S_i'$ does not change $\area \conv (P \cup \{ x \})$, this yields that $S_i'$ (and also $S_i$) is parallel to $[p_{i-r-1+j,i-r+j},p_{i+j,i+j+1}]$, implying that $[p_{i-1-j,i-j},p_{i+r-j,i+r+1-j}]$ is parallel to $[p_{i-r-1+j,i-r+j},p_{i+j,i+j+1}]$. On the other hand, by $r+1 < \frac{m}{2}$, in the cyclic order of vertices of $P$ on $\bd (P)$ the two endpoints of $[p_{i-1-j,i-j},p_{i+r-j,i+r+1-j}]$ separate the endpoints of $[p_{i-r-1+j,i-r+j},p_{i+j,i+j+1}]$, which yields that the two segments coincide. From this it follows that $i-j-1=i-r-1+j$, and hence, $2j=r=k+l$, $2 | (k+l)$, and $S_i'= [p_{i-(k+l)/2-1, i+(k+l)/2},p_{i-(k+l)/2, i+(k+l)/2+1}]$. This shows that (i) implies (ii). The converse statement can be proved by reversing this argument. 
\begin{figure}[h] \centering \includegraphics[scale=0.6]{type1_1} \caption{The $(1,1)$-extension of a polygon $P$} \label{fig:1_1} \end{figure} It remains to show that if the $(1,1)$-extension of $P$ satisfies the above conditions, then $P$ is affinely regular. Thus, assume that the $(1,1)$-extension of $P$ is $\bd (\mu P)$ for some $\mu > 1$. Then, by the first part of Theorem~\ref{thm:planar}, we have that $\mu( p_{i,i+1}-p_{i-1,i}) = p_{i-1,i+2}-p_{i-2,i+1}$. This implies that $\conv \{ p_{i-1,i+1}, p_{i-1,i+2},p_{i-2,i+1} \}$ is a homothetic copy of $\conv \{ p_{i-1,i}, p_{i,i+1},p_{i-1,i+1} \}$ with homothety ratio $-\mu$ and center $p_{i-1,i+1}$ (cf. Figure~\ref{fig:1_1}). Thus, in particular, we have $p_{i-1,i+2}-p_{i-1,i+1} = \mu (p_{i-1,i+1}-p_{i-1,i})$. On the other hand, since $[p_{i-1,i+2},p_{i,i+3}]$ is parallel to $S_{i+1}$, the Intercept Theorem yields that $p_{i,i+3}-p_{i,i+1} = \mu(p_{i,i+1}-p_{i-1,i})$. We obtain similarly that $p_{i-1,i}-p_{i-3,i}=\mu (p_{i,i+1}-p_{i-1,i})$. From this, we have that $p_{i,i+3}-p_{i-3,i}= (1+2\mu) (p_{i,i+1}-p_{i-1,i})$, and hence, $p_{i+1,i+2}-p_{i-2,i-1} = \frac{1+2\mu}{\mu} (p_{i,i+1}-p_{i-1,i})$ for all values of $i$. On the other hand, it is known that if the (cyclically ordered) vertices $q_1, \ldots, q_m$ of a polygon $Q$ satisfy the condition that $q_{i+2}-q_{i-1}= \tau (q_{i+1}-q_i)$ for some $\tau > 0$ independent of $i$, then $Q$ is an affinely regular $m$-gon (see \cite{Coxeter1}, \cite{Coxeter2}, \cite{FischerJamison}, or in a more general form, \cite{Langi_affinely}). This implies that in this case $P$ is affinely regular. \end{proof} We finish the paper with the following question. \begin{question} What are the plane convex bodies $K \subset \R^2$ with $o \in \inter(K)$ such that a level curve of $G_{K,\lambda}$ is a Euclidean circle for some $0 \leq \lambda < 1$? Equivalently, what are the plane convex bodies $K$ such that an illumination body of $K$ is a Euclidean disk? 
\end{question} \medskip \noindent \textbf{Acknowledgements.}\\ The authors express their gratitude to an anonymous referee whose remarks led to the statement in Theorem~\ref{thm:characterization_ch}, and more general versions of Lemmas~\ref{lem:lambdazero} and \ref{lem:piecewise}.
\section{Introduction} \label{introduction} Let $\chi $ be an algebraic curve defined over a finite field $\mathrm{GF}(q)$, and let $D$ be a $\mathrm{GF}(q)$-rational divisor on $\chi $. The computation of a basis of the Riemann-Roch space $\mathcal{L}(D)$ associated to $D$ is an essential tool in Coding Theory and Cryptography, since it allows both to explicitly construct Goppa codes and to give addition formulas in the divisor class group of $\chi $. The general problem has been attacked by several researchers. The first algorithm is due to Brill and Noether \cite{BrillNoether}. Since then, many researchers have worked on the problem in order to make the computation of such a basis more effective (\cite{HuangIerardi}), also in the equivalent scenario of function fields (cf. \cite{Stichtenoth}, Remark 2.3.15). In particular, in \cite{Hess} an arithmetic approach to the Riemann-Roch problem is taken, which provides an algorithm polynomial in the input size. Nonetheless, further algorithms were developed in order to simplify the computation, each under particular assumptions. In this paper the class of hyperelliptic curves is considered. Many papers have been devoted to the study of arithmetic on these curves; among others we mention in particular \cite{Cantor}, \cite{kuroki} and \cite{Lange2}. The interest in the subject does not seem to decline, as witnessed by more recent publications (cf. \cite{sutherland}, \cite{GluherSpaenlehauer}). A significant literature has also been produced in order to consider codes over hyperelliptic curves (\cite{boer0}, \cite{boer}, \cite{brigand}, \cite{niehage2}). Goppa codes were introduced in \cite{G2} several decades ago. These codes turned out not only to be interesting in Coding Theory, but also to be applicable in Cryptography, e.g. in public-key cryptographic systems \cite{Mc}, \cite{JanwaMoreno}, \cite{MMPR}.
Cryptographic systems using Goppa codes with suitable parameters are considered secure (see in particular \cite{BLP}, \cite{DinhMooreRussell}). Hyperelliptic curves in Cryptography have been investigated in \cite{koblitz}, \cite{kuroki}, \cite{pelzl}. Goppa codes over the Hermitian curve, as well as over maximal curves, have been extensively studied in \cite{KorchmarosNagyTimpanella}, \cite{KorchmarosSpeziali}, \cite{CastellanosFanali}, \cite{FanaliGiulietti}. The aim of the current paper is to give an explicit way of determining a basis of the Riemann-Roch space over an imaginary hyperelliptic curve $\mathcal{H}.$ Using this basis, we can construct a generator matrix of a Goppa code over a hyperelliptic curve defined over a Galois field of characteristic $p\geq 2$. We do this for some MDS codes in Section \ref{ExampleGoppa}. In particular, we consider an imaginary hyperelliptic curve $\mathcal{H}$ of genus $g$, described as the set of points satisfying the equation $$Y^2T^{d-2}+Yh(X,T)=f(X,T)$$ where $f$ is a homogeneous polynomial of degree $d=2g+1$ (and $h=0$, if $p \neq 2$). Using standard methods we construct an explicit basis of the Riemann-Roch space $\mathcal{L}(D)$, where $D$ is a divisor of positive degree $n$ in (its unique) reduced form $P_1+\dots +P_j+(n-j)\Omega$. Here $P_1, \dots , P_j$ are $j$ points in $\mathcal{H}$ distinct from the point $\Omega$ at infinity and $j \le g$. We remark that the reduction of $D$ to its reduced form might be a difficult task, because one has to solve algebraic equations of degree greater than $g+1$, possibly by applying Cantor's algorithm. This difficulty does not occur in the construction of Goppa codes, because in that case one can directly take $D=P_1+P_2+\dots +P_j+(n-j)\Omega$ (cf. \cite{Cantor}).
It turns out that our computation for a basis of the Riemann-Roch space $\mathcal{L}(D)$ provides another proof of the results of Lemma 2.1 in \cite{boer}, which deal with the dimension of the space $\mathcal{L}(D)$. We give the sequence of $\mathrm{dim} \mathcal{L}(D)$ in Example 1 for the case that $\mathcal{H}$ has genus $g=5$. \section{Notations and definitions} \label{sec:1} Let $p$ be a prime number and $t\in \mathbb{N}.$ Let $\mathcal{H}$ be a hyperelliptic curve over $\mathrm{GF}(p^t)$ with a rational Weierstrass point $\Omega$, so that there exists a coordinate system of the projective plane such that the non-singular curve $\mathcal{H}$ is described as the set of points $P=[X:Y:T]$ such that $$Y^2T^{d-2}+Yh(X,T)=f(X,T)$$ where $f$ is a homogeneous polynomial of degree $d=2g+1$, $h$ is a homogeneous polynomial of degree at most $g$, and $\Omega=[0:1:0]$ is the point at infinity of $\mathcal{H}$ (\cite{Lockhart}, Prop. 1.2). If $p$ is odd, the transformation $Y\mapsto Y-h(X,T)/2$ changes the above equation into $$Y^2T^{d-2}=f(X,T),$$ whereas, if $p=2$, then in general it is not possible to reduce $h$ to zero. Let $\mathfrak{K}$ be the algebraic closure of $\mathrm{GF}(p^t)$, and let $\mathcal{L}(D)$ be the Riemann-Roch space associated to any divisor $D$, that is, the vector space of rational functions $$\mathcal{L}(D)=\{F\in \mathfrak{K}(\mathcal{H}):\mathrm{div}(F)+D\geq 0\}\cup\{0\},$$ thus $\mathcal{L}(D)$ is trivial both in the cases where $D$ has negative degree, and where $D$ has degree zero and $D\not\in\mathrm{Princ}(\mathcal{H})$, whereas $\mathcal{L}(D)=\Big\langle F_0^{-1} \Big\rangle $ in the case where $D=\mathrm{div}(F_0)$. For this reason, we may restrict ourselves to the case where $D$ has positive degree. 
If $D$ is a divisor of positive degree $n$, then $$D=P_1+P_2+\dots +P_j+(n-j)\Omega+\mathrm{div}(\psi)$$ for $j$ points $P_1,\dots , P_j$ in $\mathcal{H}$ distinct from $\Omega$, with $j\leq g$, and a suitable $\psi\in{\mathfrak{K}}(\mathcal{H})$, that is, any divisor class $D+\mathrm{Princ}(\mathcal{H})\in \mathrm{Div}(\mathcal{H})/\mathrm{Princ}(\mathcal{H})$ can be reduced to the form $P_1+\dots +P_j+(n-j)\Omega$. Up to the isomorphism $$\Phi: \mathcal{L}(D)\to \mathcal{L}(P_1+\dots +P_j+(n-j)\Omega),$$ mapping $F$ to the product $\psi F$, we will directly assume that $D$ is reduced to $P_1+\dots +P_j+(n-j)\Omega$, $n \ge 0$. \section{Main theorem}\label{Main} Let $\mathcal{H}$ be the hyperelliptic curve introduced in Section \ref{sec:1}. Let $D=P_1+\dots +P_j+(n-j)\Omega$ be a divisor of degree $n$ of $\mathcal{H}$. If $P_i=[a_i:b_i:1]$, then let $Q_i=[a_i:-b_i-h(a_i,1):1]\in\mathcal{H}$, and let \begin{equation}\label{Psi} \Psi=\frac{T^{j-\delta}\kappa}{(X-a_1T)\cdots(X-a_jT)}, \end{equation} where $\kappa$ is the curve $YT^{\delta-1}-k(X,T)$ of smallest degree $\delta$ in $X$ passing through the points $Q_1,\dots,Q_j$ with their possible multiplicity (note, in particular, that for $j=1$ the curve $\kappa$ is the line $Y+(b_1+h(a_1,1))T$, and recall that $h(X,T)=0$, if $p>2$).
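When the points $Q_1,\dots,Q_j$ are pairwise distinct with pairwise distinct abscissae, the polynomial $k(X,1)$ defining $\kappa$ is simply the Lagrange interpolation polynomial through the affine points $Q_i$ modulo $p$ (repeated points, as in the first example of Section~\ref{ExampleGoppa}, require a Hermite-type variant instead). A minimal Python sketch, with helper names of our own choosing, recovers the line $y=2x+4$ through $Q_1=[0:4:1]$, $Q_2=[1:1:1]$ over $\mathrm{GF}(5)$ used later in Section~\ref{ExampleGoppa}:

```python
# Lagrange interpolation over GF(p): computes the polynomial k(X,1) of the
# curve kappa through points Q_i with pairwise distinct abscissae.
# (p must be prime so that pow(d, p-2, p) yields the inverse of d mod p.)

def polymul(a, b, p):
    # product of two coefficient lists (lowest degree first) modulo p
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return r

def lagrange_mod(points, p):
    # coefficients of the unique polynomial of degree < len(points)
    # passing through the given (x, y) pairs, modulo p
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = polymul(basis, [(-xj) % p, 1], p)   # times (X - xj)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

# Q_1 = [0:4:1], Q_2 = [1:1:1] over GF(5): kappa is the line y = 2x + 4
print(lagrange_mod([(0, 4), (1, 1)], 5))  # [4, 2]
```

The same routine, applied to the points $[3:6:1]$, $[4:12:1]$, $[5:20:1]$ of the parabola $y=(x-1)+(x-1)^2$ over $\mathrm{GF}(31)$ appearing in Section~\ref{ExampleGoppa}, returns the coefficients of $x^2-x \bmod 31$.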
Furthermore, we define $\Psi=\frac{Y}{T}$ for $j=0.$ Since $\delta<j\leq g=\frac{d-1}{2}$, there are $d$ intersection points of $\kappa$ and $\mathcal{H}$ in the affine plane, say $Q_1,\dots, Q_j$ and $W_1,\dots,W_{2g-j+1}$ (in the case where $j=0$, these being the $d$ intersections of $\mathcal{H}$ with the $x$-axis), and $d\cdot (\delta -1)$ further intersection points coinciding with $\Omega$, hence $$\mathrm{div}\Psi=(W_1+\dots +W_{2g-j+1})-(P_1+\dots+P_j)-(2g-2j+1)\Omega.$$ \begin{remark}\label{Rmkpsi} \emph{ Note that $\Psi\in\mathcal{L}(D)$ if and only if $n-j\geq 2(g-j)+1$.} \end{remark} \begin{theorem}\label{MainTheorem} Let $D=P_1+\dots+P_j+(n-j)\Omega$ be a divisor of degree $n$ on the hyperelliptic curve $\mathcal{H}$ defined in Section \ref{sec:1}, and let $\Psi$ be as in (\ref{Psi}). If $n-j\geq 2(g-j)+1$, then a basis of $\mathcal{L}(D)$ is provided by the set $$\left\{ \left(\frac{X}{T}\right)^h,\Psi\left(\frac{X}{T}\right)^k:\; 0\leq h\leq\frac{n-j}{2}\mbox{ and }0\leq k\leq\frac{(n-j)-2(g-j)-1}{2}\right\}.$$ If $n-j<2(g-j)+1$, then a basis of $\mathcal{L}(D)$ is provided by the set $$\left\{ \left(\frac{X}{T}\right)^h:\; 0\leq h\leq\frac{n-j}{2}\right\}.$$ \end{theorem} \begin{proof} Let $B_1$ and $B_2$ be the intersection points of $\mathcal{H}$ and the $y$-axis, so $$\mathrm{div}\left(\frac{X}{T}\right)=(B_1+B_2)-2\Omega.$$ 1) Let $n-j\geq 2(g-j)+1$, thus $\Psi\in\mathcal{L}(D)$. First we consider the cases where either $j=0$ (hence $n \ge 2g+1$), or $j=1$ (hence $n \ge 2g$), or $j \ge 2$ and $n\geq 2g-1$, as in these cases we know that, by the theorem of Riemann-Roch, the dimension of $\mathcal{L}(D)$ is $n-g+1$. 
We claim that $$\mathcal{L}(D)=\left\langle \left(\frac{X}{T}\right)^h,\Psi\left(\frac{X}{T}\right)^k\right\rangle,\mbox{ where}$$ $$0\leq h\leq\frac{n-j}{2}\mbox{ and }0\leq k\leq\frac{(n-j)-2(g-j)-1}{2}.$$ In fact, for each of those values of the parameters $h$ and $k$, the functions belong to $\mathcal{L}(D)$, because $$D+\mathrm{div}\left(\frac{X}{T}\right)^h=(P_1+\dots+P_j)+(n-j)\Omega+h(B_1+B_2)-2h\Omega,$$ as well as $$D+\mathrm{div}\;\Psi\left(\frac{X}{T}\right)^k=(P_1+\dots+P_j)+(n-j)\Omega+k(B_1+B_2)-2k\Omega +$$ $$+(W_1+\dots +W_{2g-j+1})-(P_1+\dots+P_j)-(2g-2j+1)\Omega$$ $$=k(B_1+B_2)+(W_1+\dots +W_{2g-j+1})-(2k-(n-j)+2(g-j)+1)\Omega,$$ are effective divisors. Since $0\leq h\leq\frac{n-j}{2}$ and $0\leq k\leq\frac{(n-j)-2(g-j)-1}{2}$, if $(n-j)$ is even, then the number of those functions is $$1+\frac{n-j}{2}+1+\frac{(n-j)-2(g-j)-2}{2}=n-g+1,$$ and, if $(n-j)$ is odd, then their number is $$1+\frac{n-j-1}{2}+1+\frac{(n-j)-2(g-j)-1}{2}=n-g+1,$$ as well, and the claim follows from dimensional reasons. \medskip Secondly, we consider the case where $2g-j+1\leq n< 2g-1$ (note that this case can occur only if $j\geq 3$). In this case, the dimension of $\mathcal{L}(D)$ is not necessarily $n-g+1$, but still $\Psi\in\mathcal{L}(D)$, and again we claim that $$\mathcal{L}(D)=\left\langle \left(\frac{X}{T}\right)^h,\Psi\left(\frac{X}{T}\right)^k\right\rangle,$$ where $0\leq h\leq\frac{n-j}{2}$ and $0\leq k\leq\frac{(n-j)-2(g-j)-1}{2}$. In fact, let $n=2g-1-\epsilon$ with $0\leq\epsilon\leq j-2$, and put, for short, $$\mathcal{L}_\epsilon:=\mathcal{L}(P_1+\dots+P_j+(n-j)\Omega),$$ hence we have from the first case above that $$\mathcal{L}_0=\left\langle \left(\frac{X}{T}\right)^h,\Psi\left(\frac{X}{T}\right)^k\right\rangle$$ where $0\leq h\leq\frac{n-j}{2}$ and $0\leq k\leq\frac{(n-j)-2(g-j)-1}{2}$. 
Since $$\mathcal{L}_{\epsilon+1}\leq \mathcal{L}_\epsilon,$$ it follows recursively, by a step-by-step inspection of $\mathcal{L}_\epsilon$, that the claim holds as in the first case. \bigskip\noindent 2) Let $j=0$ and $n=2g-1, 2g$, or $j=1$ and $n=2g-1$ (hence, by Remark \ref{Rmkpsi}, $\Psi\not\in\mathcal{L}(D)$). Again by the theorem of Riemann-Roch, the dimension of $\mathcal{L}(D)$ is $n-g+1$ and we have, for dimensional reasons, that $\mathcal{L}(D)=\left\langle \left(\frac{X}{T}\right)^h\right\rangle$, where $0\leq h\leq\frac{n}{2}$, respectively $0\leq h\leq\frac{n-1}{2}$. \bigskip\noindent 3) Finally, let either $j=0, 1$ and $n <2g-1$, or $2\leq j\leq n<2g-j+1$. In either of these cases, we claim that $\mathcal{L}(D)=\left\langle \left(\frac{X}{T}\right)^h\right\rangle$, where $0\leq h\leq\frac{(n-j)}{2}$. Let $n=2g-j+1-\epsilon$ with $0\leq\epsilon\leq 2g-2j+1$, and again put, for short, $$\mathcal{L}_\epsilon:=\mathcal{L}(P_1+\dots+P_j+(n-j)\Omega),$$ hence we have from the first two cases above that $$\mathcal{L}_0=\left\langle \left(\frac{X}{T}\right)^h,\Psi\right\rangle$$ where $0\leq h\leq\frac{n-j}{2}$, because for $n=2g-j+1$ we get $k=0$. Since, again, $$\mathcal{L}_{\epsilon+1}\leq \mathcal{L}_\epsilon,$$ and since, by Remark \ref{Rmkpsi}, $\Psi\not\in\mathcal{L}_\epsilon$ as soon as $\epsilon>0$, it follows recursively, by a step-by-step inspection of $\mathcal{L}_\epsilon$, that the claim holds as in the first case.\end{proof} \begin{corollary} \label{dim} Let $\mathcal{H}$ be the hyperelliptic curve of genus $g$ defined in Section \ref{sec:1}. If the divisor $D$ of degree $n$ is linearly equivalent to $P_1+\dots+P_j+(n-j)\Omega$, then $\mathrm{dim}\, \mathcal{L}(D)=n-g+1$ for $n \geq 2g-j$, and $\mathrm{dim}\, \mathcal{L}(D)=\lfloor\frac{n-j}{2}\rfloor +1$ for $j\leq n < 2g-j$. \noindent In particular, if $j=g$, then $\mathrm{dim}\, \mathcal{L}(D)=n-g+1$.
\end{corollary} The above results on the dimension of the Riemann-Roch space $\mathcal{L}(D)$ appeared first in \cite{boer}, Lemma 2.1. \begin{remark} \emph{ A point $P$ of the curve $\mathcal{H}$ is a non-Weierstrass point if the sequence $\mathrm{dim}\mathcal{L}(nP)$ for $n \ge 1$ is $$\underbrace{1,1,\dots ,1}_g,2,3,4,\dots ,g-1,g,g+1,\dots$$ In any other case, $P$ is a Weierstrass point. Recall that $\Omega$ is a given Weierstrass point of the curve $\mathcal{H}$, and in fact $\mathcal{L}\big((2g-2\epsilon)\Omega\big)=\mathcal{L}\big((2(g-\epsilon)+1)\Omega\big)$ both have dimension $g-\epsilon+1$. The sequence $\mathrm{dim}\,\mathcal{L}(n \Omega)$, where $n \ge 0$, is therefore: $$1,1,2,2,3,3,\dots, g-1,g-1, g, g, g+1,g+2,g+3\dots$$ Thus the sequence of gaps at $\Omega$ is $$1,3,5,\dots, 2g-1,$$ and the numerical semigroup of non-gaps is therefore that of the natural numbers without the odd numbers smaller than $2g$. } \end{remark} \begin{example} \emph{ Assume that $\mathcal{H}$ has genus $g=5$. } \emph{ If $j=0$, then $n-j \ge 2(g-j)+1$ if and only if $n \ge 2g+1$. Hence the sequence of $\mathrm{dim}\mathcal{L}(D)$ is $$\underbrace{1,1,2,2,3,3,4,4,5,}_{0\leq n<2g-1 } \underbrace{5,6,7, \dots}_{n \ge 2g-1}$$ } \emph{ If $j=1$, then $n-j \ge 2(g-j)+1$ if and only if $n \ge 2g$. Hence the sequence of $\mathrm{dim}\mathcal{L}(D)$ is $$\underbrace{1,1,2,2,3,3,4,4,}_{1\leq n<2g-1 } \underbrace{5,6,7, \dots}_{n \ge 2g-1}$$ } \emph{ If $j=2$, then $n-j \ge 2(g-j)+1$ if and only if $n \ge 2g-1$. Hence the sequence of $\mathrm{dim}\mathcal{L}(D)$ is $$\underbrace{1,1,2,2,3,3,4,}_{2\leq n<2g-1 } \underbrace{5,6,7, \dots}_{n \ge 2g-1}$$ } \emph{ If $j=3$, then $n-j \ge 2(g-j)+1$ if and only if $n \ge 2g-2$. Hence the sequence of $\mathrm{dim}\mathcal{L}(D)$ is $$\underbrace{1,1,2,2,3,}_{3\leq n<2g-2 }{4,} \underbrace{5,6,7, \dots}_{n\geq 2g-1}$$ } \emph{ If $j=4$, then $n-j \ge 2(g-j)+1$ if and only if $n \ge 2g-3$.
Hence the sequence of $\mathrm{dim}\mathcal{L}(D)$ is $$\underbrace{1,1,2}_{4\leq n<2g-j+1 }\; \underbrace{3,4}_{2g-j+1\leq n<2g-1 }\; \underbrace{5,6,7, \dots}_{n\geq 2g-1}$$ } \emph{If $j=5$, then $n-j \ge 2(g-j)+1$ if and only if $n \ge 2g-4$. Hence the sequence of $\mathrm{dim}\mathcal{L}(D)$ is $$\underbrace{1,}_{5\leq n<2g-j+1 } \;\underbrace{2,3,4}_{2g-j+1\leq n<2g-1 }\; \underbrace{5,6,7, \dots}_{n\geq 2g-1}$$ } \end{example} \begin{remark} \emph{ Note that we do not need to know the equation of $\mathcal{H}$ of genus $g$ in order to get the basis of $\mathcal{L}(D).$} \end{remark} \section{Examples of MDS Goppa codes}\label{ExampleGoppa} We take $j=3$, therefore $g=3$ and $n= 4$ for an example where $2g-j+1\leq n<2g-1$. Choose, for instance, $D=[0:0:1]+2[1:0:1]+\Omega$ over the Galois field $\mathrm{GF}(31)$, where we intentionally took the point $[1:0:1]$ twice. With the notation in the proof of Theorem \ref{MainTheorem}, put $$\begin{array}{lll} Q_1\equiv[0:0:1]; & Q_2\equiv[1:0:1]; & Q_3=Q_2. \end{array}$$ As $Q_3=Q_2$, we give the parabola $\kappa$ in the form $$(y-0)=a_1(x-1)+a_2(x-1)^2,$$ and because it passes through $Q_1$, we obtain $a_2=a_1$, so take $y=(x-1)+(x-1)^2$. Putting $\Psi=\frac{YT^2-T\big(T(X-T)+(X-T)^2\big)}{X(X-T)^2}$, from Theorem \ref{MainTheorem} we obtain $$\mathcal{L}(D)=\left\langle 1,\Psi\right\rangle.$$ We want to construct the $(m,2,\mathfrak{d})$-Goppa code (where $ m-4\leq\mathfrak{d}\leq m-1$)\footnote{The minimal distance of a Goppa code is $\mathfrak{d}\geq m-\mathrm{deg}(D)$.}. Therefore we need the equation of $\mathcal{H},$ so that we can take $m$ points on $\mathcal{H}.$ Thus we choose four further points $W_1,\dots, W_4$ on the parabola $\kappa$, and a fifth point $C$ not belonging to $\kappa$. 
If we take, for instance, the abscissa of $W_i$ from $3$ to $6$, then we obtain the following points on $\kappa$: $$\begin{array}{ll} W_1\equiv[3:6:1] & W_2\equiv[4:12:1] \\ W_3\equiv[5:20:1] & W_4\equiv[6:30:1] \end{array}.$$ Since the point on the parabola with abscissa $x=7$ is $[7:11:1]$, we take $C=[7:12:1]$. Thereafter, in order to have the equation of $\mathcal{H}$ in the form $$y^2=a_2(x-1)^2+\dots +a_7(x-1)^7,$$ we construct the $6\times 6$ Vandermonde matrix $V=\big((x_i-1)^{1+j}\big)$ of the absciss{\ae} $x_i$ of the points $Q_1,W_1,\dots,W_4,C$, and its inverse, that is, {\small{$$V=\left( \begin{array}{cccccc} 1&30&1&30&1&30\\ 4&8&16&1&2&4\\ 9&27&19&26&16&17\\ 16&2&8&1&4&16\\ 25&1&5&25&1&5\\ 5&30&25&26&1&6\end{array} \right),\;V^{-1}=\left( \begin{array}{cccccc} 18&9&23&18&11&9\\ 8&2&3&15&5&30\\ 30&20&22&3&7&24\\ 0& 25&25&23&28&19\\ 16&5&15&14&14&6\\ 24&7&1&28&30&21 \end{array} \right).$$}} Since $V^{-1}[0^2,6^2,12^2,20^2,30^2,12^2]'= [22, 10, 26, 3, 14, 18]'$, the hyperelliptic curve $\mathcal{H}$ is defined by the equation $$y^2=22(x-1)^2+10(x-1)^3+26(x-1)^4+3(x-1)^5+14(x-1)^6+ 18(x-1)^7.$$ \noindent With respect to the divisor $G=[3:25:1]+[4:19:1]+[5:11:1]+[6:1:1] \in \mathcal{H}$, not in the support of $D$, the generator matrix of the $(4,2)_{31}$-Goppa code is $\mathcal{C}_{\mathcal{L}}(D,G)=\left( \begin{array}{cccc} 1&1&1&1\\ 30&20&15&12 \end{array} \right)$. Hence, the parity-check matrix is $H=\left( \begin{array}{cccc} 16&14&1&0\\ 7&23&0&1 \end{array} \right)$, and one sees that two columns in $H$ are always independent, thus the minimum distance $\mathfrak{d}$ is $3,$ i.e. the code is MDS. \begin{remark} \emph{ Note that, for $p\leq\frac{n-j}{2}$, the polynomials $X/T$ and $(X/T)^p$ take the same values in the field $\mathrm{GF}(p)$, thus one does not obtain linearly independent row vectors in the generator matrix. 
} \end{remark} Now we compute the generator matrices of some MDS Goppa codes of dimension $3$ arising from hyperelliptic curves of genus $g=2$ constructed by de Boer in \cite{boer1}, Section 2. \begin{example} \emph{According to Example $2.3.21$ in \cite{boer1} we take the hyperelliptic curve $\mathcal{H}: Y^2T^3=X^5+4 X T^4+T^5$ over the field $GF(5)$ and we choose the divisor $D=[0:1:1]+[1:4:1]+2\Omega$. With the notation in Section \ref{Main} we put $$\begin{array}{lll} Q_1\equiv[0:4:1]; & Q_2\equiv[1:1:1]. \end{array}$$ The line $\kappa$ passing through $Q_1$ and $Q_2$ has the form $y=2x+4$. We have $j=2$ and $n=4$. Putting $\Psi=\frac{T\big(Y+3X+T\big)}{X(X-T)}$ from Theorem \ref{MainTheorem} we obtain $$\mathcal{L}(D)=\left\langle 1, \frac{X}{T}, \Psi\right\rangle.$$ With respect to the divisor $G=[2:1:1]+[2:4:1]+[3:1:1]+[3:4:1]+[4:1:1]+[4:4:1] \in \mathcal{H}$, not in the support of $D$, the generator matrix of the $(6,3,4)_{5}$-$MDS$ code is $\mathcal{C}_{\mathcal{L}}(D,G)=\left( \begin{array}{cccccc} 1&1&1&1&1&1\\ 2&2&3&3&4&4\\ 4&3&1&4&2&1 \end{array} \right)$.} \end{example} \begin{example} \emph{According to Example $2.3.23$ in \cite{boer1} we take the hyperelliptic curve $\mathcal{H}: Y^2T^3=X^5+4X^3T^2+9XT^4$ over the field $GF(13)$ and we choose the divisor $D=[0:0:1]+3\Omega$. With the notation in Section \ref{Main} we have $Q\equiv[0:0:1]$ and $\kappa$ has the form $Y$ because of $j=1$.
Putting $\Psi=\frac{Y}{X}$ and taking into account that $n=4$, Theorem \ref{MainTheorem} yields $\mathcal{L}(D)=\left\langle 1, \frac{X}{T}, \Psi\right\rangle.$ With respect to the divisor $G=[1:1:1]+[1:12:1]+[3:1:1]+[3:12:1]+[6:6:1]+[6:7:1]+[7:4:1]+[7:9:1]+[9:6:1]+[9:7:1] \in \mathcal{H}$, not in the support of $D$, the generator matrix of the $(10,3,8)_{13}$ MDS code $\mathcal{C}_{\mathcal{L}}(D,G)$ is $\left( \begin{array}{cccccccccc} 1&1&1&1&1&1&1&1&1&1\\ 1&1&3&3&6&6&7&7&9&9\\ 1&12&9&4&1&12&8&5&5&8 \end{array} \right)$.} \end{example} \begin{example} \emph{According to Example $2.3.24$ in \cite{boer1} we take the hyperelliptic curve $\mathcal{H}: Y^2T^3=X^5+13X^4T+5X^3T^2+11X^2T^3+5XT^4+15T^5$ over the field $GF(17)$ and we choose the divisor $D=[8:0:1]+3\Omega$. With the notation in Section \ref{Main} we have $Q\equiv[8:0:1]$ and, as $j=1$, the form of $\kappa$ is again $Y$. Putting $\Psi=\frac{Y}{X-8}$, from Theorem \ref{MainTheorem} we obtain $\mathcal{L}(D)=\left\langle 1, \frac{X}{T}, \Psi\right\rangle $ since $n=4$. With respect to the divisor $G=[0:7:1]+[0:10:1]+[1:4:1]+[1:13:1]+[3:8:1]+[3:9:1]+[5:1:1]+[5:16:1]+[9:1:1]+[9:16:1]+ [15:7:1]+[15:10:1]\in \mathcal{H}$, not in the support of $D$, the generator matrix of the $(12,3,10)_{17}$ MDS code $\mathcal{C}_{\mathcal{L}}(D,G)$ is $\left( \begin{array}{cccccccccccc} 1&1&1&1&1&1&1&1&1&1&1&1\\ 0&0&1&1&3&3&5&5&9&9&15&15\\ 14&3&14&3&12&5&11&6&1&16&1&16 \end{array} \right)$.} \end{example} \begin{example} \emph{According to Examples $2.2.3$, $2.3.26$ in \cite{boer1} we take the hyperelliptic curve $\mathcal{H}: Y^2T^3+YT^4=X^5+X^3T^2+XT^4$ over the field $GF(4)$ and we choose the divisor $D=[\alpha^2:0:1]+3\Omega$, where $\alpha $ is a primitive element satisfying the equation $\alpha^2+\alpha+1=0$. With the notation in Section \ref{Main} we have $Q\equiv[\alpha^2:1:1]$ and, as $j=1$, the form of $\kappa$ is $Y-T$.
Putting $\Psi=\frac{Y-T}{X-\alpha^2T}$, from Theorem \ref{MainTheorem} we obtain $\mathcal{L}(D)=\left\langle 1, \frac{X}{T}, \Psi\right\rangle $ since $n=4$. With respect to the divisor $G=[0:0:1]+[0:1:1]+[1:\alpha:1]+[1:\alpha^2:1]+[\alpha:0:1]+[\alpha:1:1] \in \mathcal{H}$, not in the support of $D$, the generator matrix of the $(6,3,4)_{4}$ hexacode $\mathcal{C}_{\mathcal{L}}(D,G)$ is $\left( \begin{array}{cccccc} 1&1&1&1&1&1\\ 0&0&1&1&\alpha&\alpha\\ \alpha&0&\alpha&1&1&0 \end{array} \right)$.} \end{example}
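As a sanity check on the last example, the points of $G$ and the third row of the generator matrix can be recomputed in a few lines of Python. The sketch below is ours, not the authors' code; it encodes $\mathrm{GF}(4)=\{0,1,\alpha,\alpha^2\}$ as the integers $0,1,2,3$, so that addition is bitwise XOR and the nonzero elements form a cyclic group of order $3$ generated by $\alpha$.

```python
# GF(4) encoded as 0, 1, 2, 3 with 2 = alpha and 3 = alpha^2 = alpha + 1
# (alpha^2 + alpha + 1 = 0); addition is XOR, products use logs base alpha.
A, A2 = 2, 3

def mul(a, b):
    if a == 0 or b == 0:
        return 0
    log = {1: 0, A: 1, A2: 2}
    return [1, A, A2][(log[a] + log[b]) % 3]

def power(a, n):
    r = 1
    for _ in range(n):
        r = mul(r, a)
    return r

def inv(a):
    return next(b for b in (1, A, A2) if mul(a, b) == 1)

# the affine points of G; check they satisfy y^2 + y = x^5 + x^3 + x
pts = [(0, 0), (0, 1), (1, A), (1, A2), (A, 0), (A, 1)]
assert all(mul(y, y) ^ y == power(x, 5) ^ power(x, 3) ^ x for x, y in pts)

# Psi = (Y - T)/(X - alpha^2 T) evaluated at [x : y : 1]
psi = lambda x, y: mul(y ^ 1, inv(x ^ A2))
print([psi(x, y) for x, y in pts])   # [2, 0, 2, 1, 1, 0], i.e. (alpha, 0, alpha, 1, 1, 0)
```

Evaluating $1$ and $X/T$ at the same six points reproduces the first two rows of the matrix directly.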
\section{Introduction}\label{sec:Introduction} Knowledge graphs (KGs) are collections of factual information represented in the form of relational triplets. Each relational triplet can be organized as $(h, r, t)$, where $h$ and $t$ represent the head and tail entities and $r$ is the relation between $h$ and $t$. KGs have played a critical role across a variety of tasks such as Question Answering \cite{qa}, Semantic Search \cite{xiong2017explicit}, Dialogue Generation \cite{DG}, and many more. However, because of the limitations of human knowledge and extraction algorithms, they tend to suffer from incompleteness, that is, absent links in the KGs. Numerous methods have been developed to fill the gap between KGs and real-world knowledge; these are referred to as link prediction or knowledge graph completion tasks. After condensing entities and relations into a low-dimensional vector space, these models predict missing facts by operating on the entity and relation embeddings. \footnote{$^1$These authors contributed equally to this work.} \footnote{$^2$Corresponding authors.} Despite the impressiveness of model performance, most KG embedding methods inherently assume a fixed set of entities in the graph and ignore the evolving nature of KGs. In fact, many real-world KGs are ever-evolving \cite{trivedi2017know}, with novel entities being added over time, e.g., new users in social media platforms or new molecules in biomedical knowledge graphs. As these entities were unknown to the model during training, the model does not have their embeddings and hence has no means to infer relations for them. Thus, the ability to make predictions on new entities while avoiding costly re-training from scratch is desirable for production-ready machine learning models. Some efforts have been made to obtain inductive embeddings for new entities using external resources \cite{wang2014knowledge,xie2016image,zhong2015aligning}.
Although these approaches may be useful, they require extra computation over massive resources, which is time-consuming and may not always be feasible. An alternative is to induce probabilistic logic rules by enumerating statistical regularities and patterns embedded in the knowledge graph \cite{meilicke2018fine,yang2017differentiable}, which is entity-independent and hence inherently inductive. However, these methods suffer from scalability issues and are less expressive due to their rule-based nature. Taking inspiration from the success of graph neural networks (GNNs) in graph data modeling, extensive studies have applied GNNs to learn embeddings for KGs \cite{RGCN,SACN,KBGT}, most of which were adopted in transductive settings. By extending a subgraph-based link prediction method originally designed for ordinary networks \cite{zhang2018link}, the recent work GraIL \cite{grail} shed light on the evolution issue for KGs. The basic strategy behind GraIL can be split into three steps: (i) extracting the enclosing subgraph surrounding the target triplet, (ii) annotating the relative position of each entity in the extracted subgraph, and (iii) scoring the annotated subgraph with a GNN. This strategy is beneficial for inference on entirely novel entities that are not surrounded by known nodes, and it requires no domain-related initial embeddings for these emerging entities. The experimental results of applying GraIL demonstrate the feasibility of utilizing a subgraph-based model for inductive reasoning in the context of KGs. One drawback of GraIL is that it ignores the directional nature of the KG when extracting the enclosing subgraph for the target triplet.
Although the direction is nominally preserved during message passing by aggregating the incoming information for an entity, the subsequent symmetric scoring function makes it incapable of effectively handling binary relationships in KGs, especially asymmetric (e.g., nominatedfor) and anti-symmetric relations (e.g., filiation). Moreover, the adopted vertex-based message passing network \cite{xu2018powerful} weakens the role of relation embeddings, which violates the nature of inductive relation reasoning, because the inductive setting is entity-independent and relies on relation information to conduct inference. Based on the above observations, we propose a novel Communicative Message Passing neural network for Inductive reLation rEasoning (CoMPILE). In CoMPILE, we first extract a directed enclosing subgraph for each triplet instead of an undirected one. A communicative message passing network framework is then extended to strengthen the information interactions between entities and relations while updating both the edge and entity embeddings simultaneously. We also apply an edge-aware attention mechanism to aggregate the local neighborhood features and gather the global entity information to enrich the entity/relation representations. In contrast to previous efforts \cite{nickel2015holographic} that require an explosion in the number of parameters to handle binary relations, our model is naturally capable of dealing with asymmetric and anti-symmetric relations through communicative message passing within directed subgraphs, without an unnecessary explosive increase in the number of parameters. In brief, the main contributions are listed below: \begin{itemize} \item Introducing a competitive inductive knowledge graph embedding model, CoMPILE, that fits the directional nature of knowledge graphs and can naturally deal with asymmetric and anti-symmetric relations.
\item Introducing a novel node-edge communicative message passing mechanism to strengthen the role of relation information in the subgraph and fit the nature of the inductive setting. \item Evaluating CoMPILE and several previously proposed models on three inductive datasets: our model achieves state-of-the-art AUC-PR and Hits@10 across most of them. We also extract new inductive datasets by filtering out the triplets that have no enclosing subgraph, to evaluate inductive relation reasoning more accurately. \end{itemize} \section{Related Work}\label{sec:Related} \subsection{Transductive Relation Prediction} Representation learning on KGs has been an active research area due to the wide applications of the resultant entity and relation embeddings. Typical KG embedding models include TransE \cite{TransE}, DistMult \cite{dismult}, ComplEx \cite{trouillon2017knowledge}, and ConvE \cite{convE}, to name a few. However, these methods process each triplet independently, without considering the semantic and structural information embedded in rich neighborhoods. Recently, GNNs have been used to capture the global structural information inherently stored in KGs and have been shown to achieve state-of-the-art performance on a variety of datasets \cite{RGCN,SACN,KBGT}. However, the above approaches to KG embedding mostly work in transductive settings. \begin{figure} \centering \includegraphics[scale=0.58]{Subgraph_DPI300.jpg} \caption{\label{2}Comparison of different subgraph extraction strategies. The left one is the undirected subgraph that considers all the $h$-hop neighbors of both the target head and tail. The subgraph on the right refers to the enclosing subgraph that only considers the $h$-hop common neighbors of the target head and target tail.
} \end{figure} \subsection{Inductive Relation Prediction} One research line of inductive representation learning is to introduce additional attributes such as descriptive text or images to embed unseen entities \cite{xie2016image,xie2016representation,shi2017open}. Although the resultant embeddings can be utilized for KG completion, these methods rely heavily on the presence of external resources, which are not available in many KGs. To alleviate this issue, several inductive KG embedding models \cite{hamaguchi2017knowledge,wang2019logic} have been proposed to aggregate the information of the existing neighbors of an emerging entity with graph neural networks. However, both of these approaches demand that the new nodes be surrounded by known nodes, and they cannot handle entirely new graphs. Another research line is rule-based approaches that use observed co-occurrences of frequent patterns in the knowledge graph to recognize logical rules \cite{galarraga2015fast}. They are inherently inductive since the logical rules are independent of entities, but these approaches suffer from scalability issues and lack expressive power due to their rule-based nature. Inspired by these statistical rule-induction approaches, several differentiable rule learners, including NeuralLP \cite{yang2017differentiable}, RuleN \cite{meilicke2018fine}, and DRUM \cite{DRUM}, have been proposed to learn logical rules as well as confidence scores from KGs in an end-to-end paradigm. However, they do not take account of the neighbor structure surrounding the predicted relations and hence are not expressive enough when the paths between head and tail entities are sparse. This set of methods, together with GraIL \cite{grail}, constitutes our baselines. \section{CoMPILE} \subsection{Denotations and Task Definition} A target triplet in the knowledge graph is denoted as $(s, r, t)$, where $s$, $r$, and $t$ refer to the head entity, relation, and tail entity, respectively.
Inductive relation reasoning aims to score the plausibility of a target triplet $(h_T, r_T, t_T)$, where the representations of $h_T$ and $t_T$ are not available during prediction. In this work, we use an enclosing directed subgraph to represent the target triplet $(h_T, r_T, t_T)$. The enclosing subgraph between the target head and tail is denoted as $G=(V, E)$, where $V$ and $E\subseteq V\times V$ denote the set of nodes and the set of observed edges in the subgraph $G$, respectively. We use $N_e$ to represent the number of edges in the subgraph. The embeddings of the nodes are denoted as $\bm{N}\in \mathbb{R}^{N_n\times d_n}$, where $N_n$ is the number of nodes in the subgraph. The relation embedding is denoted as $\bm{R}\in \mathbb{R}^{N_r\times d}$ ($N_r$ is the number of relations), which is parameterized as a learnable matrix updated by gradient descent and is shared across the train and test graphs. We define the head-to-edge, relation-to-edge, and tail-to-edge adjacency matrices as $\bm{A^{he}}\in \mathbb{R}^{N_n\times N_e}$, $\bm{A^{re}}\in \mathbb{R}^{N_r\times N_e}$, and $\bm{A^{te}}\in \mathbb{R}^{N_n\times N_e}$, which map the head, relation, and tail to the corresponding edge, respectively. The values in the adjacency matrices are either 0 or 1, where 0 denotes no connection. \subsection{Directed Subgraph Extraction} In this section, we illustrate the procedure to extract the directed enclosing subgraph. Different from GraIL \cite{grail}, which extracts an undirected enclosing subgraph between the target head and tail and thus ignores the direction of the target triplet, we demonstrate the superiority of the directed enclosing subgraph, inspired by the mechanism that humans use to infer logical rules. A triplet has a direction, representing the relation from head to tail.
Considering two triplets $(h_T, r_T, t_T)$ and $(t_T, r_T, h_T)$, if we use an undirected enclosing subgraph, then the predictions are likely to be very close for these two triplets because the enclosing subgraphs are the same. However, only one of them is true if the relation $r_T$ is asymmetric. Therefore, we need to use a directed enclosing subgraph to handle these kinds of relations more effectively. Moreover, by this means, we can solve the direction problem in KGs without increasing the model complexity (the time complexity even decreases, since the directed enclosing subgraph is obtained by pruning the undirected enclosing subgraph, as shown in Fig.~\ref{2}). The enclosing subgraph is restricted by the hop number $h$, and $h+1$ is the maximum distance from the target head to the target tail. To extract the $h$-hop directed enclosing subgraph, we first introduce incoming and outgoing neighbors. Given a triplet $(s, r, t)$, we define $s$ as the 1-hop incoming neighbor of $t$ and $t$ as the 1-hop outgoing neighbor of $s$; $h$-hop incoming/outgoing neighbors are defined analogously. Firstly, we extract the $h$-hop outgoing neighbors of the target head and the $h$-hop incoming neighbors of the target tail. Then we extract the 1-hop incoming neighbors of the target tail. If the $h$-hop outgoing neighbors of the target head and the 1-hop incoming neighbors of the target tail have common entities, then a directed subgraph between the target head and tail exists. Afterwards, we find the common entities (nodes) of the $h$-hop outgoing neighbors of the target head and the $h$-hop incoming neighbors of the target tail (if the target head or tail is not among the common entities, we add it). Finally, we add the edges (triplets) whose head and tail both belong to the common entities to construct the subgraph. By this means, the maximum distance from the target head to the tail can reach $h+1$ even though we only extract the $h$-hop neighbors of these two nodes.
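The extraction procedure above can be sketched in a few lines of Python. This is our reading of the text, not the released implementation: entity and relation identifiers are arbitrary toy indices, and we omit the preliminary 1-hop existence test for brevity.

```python
# Sketch of directed enclosing-subgraph extraction: intersect the h-hop
# outgoing neighbourhood of the target head with the h-hop incoming
# neighbourhood of the target tail, then keep edges inside that node set.
from collections import defaultdict

def k_hop(adj, start, h):
    """Nodes reachable from `start` in at most h steps along `adj`."""
    seen, frontier = {start}, {start}
    for _ in range(h):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return seen

def directed_enclosing_subgraph(triplets, head, tail, h):
    out_adj, in_adj = defaultdict(set), defaultdict(set)
    for s, r, t in triplets:
        out_adj[s].add(t)     # t is an outgoing neighbour of s
        in_adj[t].add(s)      # s is an incoming neighbour of t
    nodes = k_hop(out_adj, head, h) & k_hop(in_adj, tail, h)
    nodes |= {head, tail}     # always keep the target entities
    # keep only edges whose head and tail both lie in the common node set
    return [(s, r, t) for s, r, t in triplets if s in nodes and t in nodes]

kg = [(0, 0, 1), (1, 0, 2), (2, 0, 3), (3, 0, 0), (4, 0, 2)]
print(directed_enclosing_subgraph(kg, 0, 3, 2))
# [(0, 0, 1), (1, 0, 2), (2, 0, 3), (3, 0, 0)] -- edge (4, 0, 2) is pruned
```

Note that, as in the description above, the node set is an intersection, so entities such as node 4 that do not lie between the target head and tail are discarded.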
\subsection{Node/Edge Embedding Initialization} Since the entities are unseen, we need to define an entity-independent embedding for each entity (node) in the subgraph. Similar to GraIL \cite{grail}, we initialize the node embedding with the distances to the target head and target tail to capture the relative position of each node in the subgraph. The node embedding for node $i$ is defined as $\bm{N_i}=\text{one-hot}(d_{hi}) \oplus \text{one-hot}(d_{it})\in \mathbb{R}^{2(h+2)}$, where $d_{hi}$ denotes the minimum distance from the target head to node $i$, and $d_{it}$ denotes the minimum distance from node $i$ to the target tail (note that the distance definition here also meets the `directed' requirement). For edge $i$, i.e. $(h_i, r_i, t_i)$, the initialized edge embedding is defined as $\bm{E}_i=\bm{N}_{h_i}\oplus \bm{R}_{r_i}\oplus \bm{N}_{t_i}\in \mathbb{R}^{4(h+2)+d}$. \subsection{Directed Subgraph Modeling} The message passing model in GraIL \cite{grail} is a simple R-GCN \cite{RGCN} with edge attention, which neither models the edge embeddings separately nor allows bidirectional communication between edges and nodes. Moreover, GraIL uses a node-to-node message passing mechanism where the relation information is only used for computing the weights of the neighboring nodes. However, inductive relation reasoning should be entity-free (i.e., node-free in the subgraph case), where relations play the dominant role while the entities cannot provide deterministic information during inference. Thus, the node-to-node message passing mechanism in GraIL weakens the role of relations and violates the nature of the inductive knowledge graph setting. To this end, we design a new message passing architecture to model the inductive enclosing subgraph by iteratively communicating and enhancing the edge and node embeddings (see Fig.~\ref{1}).
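The distance-based initialization described above is simple to realize; the toy sketch below (ours, with an arbitrary hop number and relation dimensionality) makes explicit why unseen entities pose no problem: a node is described only by its two distances, never by an identity.

```python
# Entity-independent initialization: N_i = one-hot(d_hi) (+) one-hot(d_it),
# giving a vector in R^{2(h+2)}; edges concatenate head, relation and tail.
import numpy as np

h = 3            # hop number, as in the paper's experiments
d = 4            # toy relation-embedding dimensionality (an assumption)

def one_hot(dist, size):
    v = np.zeros(size)
    v[dist] = 1.0
    return v

def init_node(d_head_to_i, d_i_to_tail):
    return np.concatenate([one_hot(d_head_to_i, h + 2),
                           one_hot(d_i_to_tail, h + 2)])

n = init_node(1, 2)
print(n.shape)    # (10,), i.e. 2(h+2) for h = 3

# edge (h_i, r_i, t_i): E_i = N_{h_i} (+) R_{r_i} (+) N_{t_i}
edge = np.concatenate([init_node(0, 3), np.zeros(d), init_node(1, 2)])
print(edge.shape)  # (24,), i.e. 4(h+2) + d
```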
Since the edge information is mainly provided by the corresponding relation, the nodes can learn to better aggregate the relation information in the subgraph during the node-edge interactions, so that the model can learn to infer the relation between the target head and tail based on the relation information present in the subgraph. \begin{figure} \centering \includegraphics[scale=0.8]{MP2_DPI300_2.jpg} \caption{\label{1}Comparison of the message passing mechanisms in GraIL and our model. `MP' in the figure denotes `message passing', and the left and right figures refer to the message passing mechanisms in GraIL and our model, respectively. Our method explicitly models edge embeddings to strengthen the flow of the relation information and bidirectionally communicates between nodes and edges. } \end{figure} Our node-edge interaction mechanism is inspired by CMPNN \cite{DMPNN}. Nevertheless, during the node-edge interactions of CMPNN, the representation of each edge is updated by the head node embedding and its inverse edge (which is not available in our task), while neglecting the tail node embedding and the relation embedding. In our framework, we introduce a new node-edge communication mechanism that considers the head, relation, and tail to update the edge embedding. We describe our message passing model in detail in the following sections.
Firstly, the node and edge representations are mapped to the same dimensionality $d$ using the following equations ($d$ is also the dimensionality of the relation embedding): \begin{equation} \label{eq0} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{N}^{0}= f_1(\bm{N}\bm{W}_{n}^0),\ \bm{E}^{0}= f_1(\bm{E}\bm{W}_{e}^0) \end{equation} where $f_1$ denotes a nonlinear activation function to increase the nonlinear expressive power of the model, $\bm{W}_{n}^0 \in \mathbb{R}^{2(h+2)\times d}$ and $\bm{W}_{e}^0 \in \mathbb{R}^{(4(h+2)+d)\times d}$ are learnable parametric matrices, and $\bm{N}^{0} \in \mathbb{R}^{N_n\times d}$ and $\bm{E}^{0} \in \mathbb{R}^{N_e\times d}$ are the transformed node and edge embeddings, respectively. \textbf{Node Embedding Updating}: The node embedding is updated for a total of $l$ iterations. At each iteration (we take iteration $k$ as an example in the following equations), the edge embedding is required for updating the node embedding in our node-edge interaction mechanism. Firstly, to highlight the edges that are highly related to the target triplet, we design an \textbf{enhanced edge attention}. In the edge attention of GraIL \cite{grail}, only the target relation is utilized for predicting the importance of edges. In contrast, we utilize the target head, target relation, and target tail together to highlight the edges that have a close connection to the target triplet, which is more comprehensive. We use the whole triplet to conduct attention because the nodes aggregate the relation information during node-edge interactions, and thereby the updated node embeddings are also informative.
For edge $i$ ($ 1\leq i\leq N_e$), the equations for the enhanced edge attention are presented as follows: \begin{equation} \label{eq1} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{a}_{i}^{k-1}=(\bm{N}_{h_i}^{k-1} + \bm{R}_{r_i} - \bm{N}_{t_i}^{k-1}) \oplus (\bm{N}_{h_T}^{k-1} + \bm{R}_{r_T} - \bm{N}_{t_T}^{k-1}) \end{equation} \begin{equation} \label{eq10} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} a_{i}^{k-1}= \sigma(f_1(\bm{a}_{i}^{k-1}\bm{W}_{a_1}^{k-1})\bm{W}_{a_2}^{k-1}) \end{equation} \begin{equation} \label{eq11} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{E}_i^{(k-1)_a} = a_{i}^{k-1} \bm{E}_i^{k-1} \end{equation} where $\bm{E}_i^{(k-1)_a}$ is the attentive embedding for edge $i$, $a_{i}^{k-1}$ is a scalar that represents the weight for edge $i$, $\oplus$ denotes feature concatenation, $\bm{N}_{h_T}^{k-1} + \bm{R}_{r_T} - \bm{N}_{t_T}^{k-1}$ denotes the embedding for target triplet, and we use $\bm{N}_{h_i}^{k-1} + \bm{R}_{r_i} - \bm{N}_{t_i}^{k-1}$ to denote edge $i$ so as to be consistent with the representation of the target triplet. Then we use the attentive edge embedding to update the node representation: \begin{equation} \label{eq12} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{N}^{k}_{agg} = \bm{A^{te}} \bm{E}^{(k-1)_a} \end{equation} \begin{equation} \label{eq13} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{N}^{k} = f_1((\bm{N}^{k}_{agg} + \bm{N}^{k-1})\bm{W}^{k}_n) \end{equation} where $\bm{N}^{k}_{agg}$ denotes the node aggregation information, $\bm{W}^{k}_n\in \mathbb{R}^{d\times d}$ represents the parametric matrix for node embedding at iteration $k$, $\bm{E}^{(k-1)_a}$ is the attentive edge embedding, $\bm{A^{te}}$ is the tail-to-edge adjacency matrix that connects each edge to its tail. 
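The enhanced edge attention and tail-side aggregation above can be sketched at the shape level in numpy. The sketch is ours, not the released code: sizes are toy values, the weights are random, and $f_1$ is taken to be ReLU (an assumption; the paper leaves the activation unspecified here).

```python
# One node-update step: triplet-style edge codes N_h + R_r - N_t, a scalar
# attention weight per edge against the target triplet, then aggregation of
# the attentive edge embeddings into each edge's tail node.
import numpy as np

rng = np.random.default_rng(0)
N_n, N_e, N_r, d = 4, 3, 2, 8
heads, rels, tails = [0, 1, 2], [0, 1, 0], [1, 2, 3]   # edge i = (h_i, r_i, t_i)
h_T, r_T, t_T = 0, 0, 3                                 # target triplet

N = rng.standard_normal((N_n, d)); R = rng.standard_normal((N_r, d))
E = rng.standard_normal((N_e, d))
W_a1 = rng.standard_normal((2 * d, d)); W_a2 = rng.standard_normal((d, 1))
W_n = rng.standard_normal((d, d))
relu = lambda x: np.maximum(x, 0.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

target = N[h_T] + R[r_T] - N[t_T]
edge_code = N[heads] + R[rels] - N[tails]               # one row per edge
a = np.concatenate([edge_code, np.tile(target, (N_e, 1))], axis=1)
w = sigmoid(relu(a @ W_a1) @ W_a2)                      # (N_e, 1) edge weights
E_att = w * E                                           # attentive edge embeddings

A_te = np.zeros((N_n, N_e))                             # tail-to-edge adjacency
A_te[tails, range(N_e)] = 1.0
N_agg = A_te @ E_att                                    # tail-side aggregation
N_new = relu((N_agg + N) @ W_n)                         # updated node embeddings
print(N_new.shape)                                      # (4, 8)
```

Because only `A_te` is used here, each edge contributes solely to its tail node, which is the directed-flow property discussed next.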
Note that, to preserve the directed nature of our model, for an edge $(h_i, r_i, t_i)$, its embedding is only used to update the tail $t_i$ but not $h_i$. Coupled with the directed subgraph, this ensures that the information only flows from the target head to the target tail and the reverse flow is forbidden. By using the edge embedding to update the node embedding, the target tail embedding can aggregate all the relations, along with their relative positions, occurring on the paths from the target head to the target tail in the subgraph (the relative positions are provided by the node embedding), which provides powerful relational inference ability. In the last iteration of the node embedding updating, similar to CMPNN \cite{DMPNN}, we use a multi-layer perceptron network followed by a Gated Recurrent Unit (GRU) \cite{Cho2014Learning} to replace Eq.~\ref{eq13} to increase the expressive power of the network, as shown in the following equations: \begin{equation} \label{eq14} \bm{N}^{l'} = \text{CommunicationMLP} (\bm{N}^{l}_{agg} \oplus \bm{N}^{l-1} \oplus \bm{N}^{0}) \end{equation} \begin{equation} \label{eq15} \bm{N}^{l} = \text{GRU}(\bm{N}^{l'}) \end{equation} where CommunicationMLP is the multi-layer perceptron network that communicates the node aggregation information $\bm{N}^{l}_{agg}$, the node embedding $\bm{N}^{l-1}$, and the original transformed node embedding $\bm{N}^{0}$ (we add $\bm{N}^{0}$ to perform residual learning \cite{4}). Note that a GRU requires its inputs to be ordered, and our node embeddings are inherently partially ordered since the nodes are arranged according to their distance to the target head in ascending order. \textbf{Edge Embedding Updating}: The edge embedding is updated for a total of $l-1$ iterations. To update the edge embedding, the node embedding is required in our node-edge interaction mechanism.
We define inverse mappings from node to edge and from relation to edge, which are denoted as: \begin{equation} \label{eq2} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{E}_{agg}^k=(\bm{A^{he}})^T\bm{N}^{k} + (\bm{A^{re}})^T\bm{R} - (\bm{A^{te}})^T\bm{N}^{k} \end{equation} where $T$ denotes matrix transposition, $(\bm{A^{he}})^T\bm{N}^{k}$ aggregates the head information to the edge, $(\bm{A^{re}})^T\bm{R}$ aggregates the relation information to the edge, and $(\bm{A^{te}})^T\bm{N}^{k}$ aggregates the tail information to the edge. The definition of the edge aggregation information $\bm{E}_{agg}^k$ in Eq.~\ref{eq2} meets the directed requirement and is consistent across the model. Then we use the aggregation information to update the edge representation: \begin{equation} \label{eq3} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{E}^{k'}= f_1( \bm{E}^{k-1} + f_2(\bm{E}_{agg}^k)) \end{equation} \begin{equation} \label{eq4} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{E}^{k}= f_1(\bm{E}^{k'}\bm{W}^{k}_e + \bm{E}^{0}) \end{equation} where $f_1$ and $f_2$ denote nonlinear activation functions to increase the nonlinear modeling capacity of the model. We add $\bm{E}^{0}$ when updating the edge embedding in Eq.~\ref{eq4} to perform residual learning. Edge dropout is also performed on $\bm{E}^{k}$. \textbf{Scoring Function Definition}: GraIL designed a scoring function for subgraph inductive learning by concatenating four related vectors: \begin{equation} \label{eq5} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{S} = \bm{W}(\bm{h}_{G}^l \oplus \bm{h}_{h_T}^l \oplus \bm{h}_{t_T}^l \oplus \bm{e}_{r_T}) \end{equation} where $\bm{h}_{G}^l$ denotes the subgraph representation, $\bm{h}_{h_T}^l$ and $\bm{h}_{t_T}^l$ denote the hidden vectors of the head and tail entities, and $\bm{e}_{r_T}$ is a learned embedding of the target relation.
This scoring function is symmetric, as both the relation embedding and the subgraph embedding are undirected. To alleviate this problem, we adopt the idea of TransE \cite{TransE} to design the scoring function, so as to preserve the directed nature of our model as well as to be consistent with the definition of the edge information. The scoring function is defined as: \begin{equation} \label{eq6} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \bm{S} = f_2(\bm{N}_{h_T}^l + \bm{R}_{r_T} - \bm{N}_{t_T}^l ) \end{equation} where $\bm{N}_{h_T}^l$, $\bm{R}_{r_T}$, and $\bm{N}_{t_T}^l$ denote the final representations of the target head, target relation, and target tail, respectively. We then use a two-layer fully-connected network on $\bm{S}$ to infer the score of the target triplet $(h_T, r_T, t_T)$. \section{Experiments}\label{sec:Experiments} In our experiments, we aim to answer the following questions: 1) Does our CoMPILE outperform state-of-the-art methods on the commonly used datasets? 2) Is our proposed message passing network better than the R-GCN with edge attention in GraIL? 3) Does the directed subgraph outperform the undirected one? 4) What is the importance of each component of CoMPILE? 5) Can CoMPILE deal with asymmetric relations better than the other methods? \subsection{Datasets} WN18RR \cite{convE}, FB15k-237 \cite{FB15k-237}, and NELL-995 \cite{NELL-995} are commonly used datasets that were originally developed for transductive relation prediction. Teru et al. \cite{grail} extract four versions of inductive datasets from each dataset. Each inductive dataset consists of a train graph and a test graph, where the test graph contains entities that are not present in the train graph. The subgraph for each triplet is extracted from the train or test graph.
\textbf{Our Post-processed Datasets}: In our experiments, we found that the inductive datasets constructed by GraIL \cite{grail} have many empty-subgraph triplets (no valid edge exists in the enclosing subgraph of the target head and tail under hop $h$), especially for WN18RR. Moreover, since the negative triplets in GraIL are randomly sampled, a significant number of negative triplets also have empty subgraphs. This results in inaccurate evaluation of the performance, because under the subgraph reasoning structure it is almost impossible to infer the relation between two entities if there are no valid edges in the subgraph (such triplets also cannot work in rule-based structures such as RuleN \cite{meilicke2018fine}). Therefore, we construct new inductive datasets by filtering out the triplets in the original inductive datasets that have no subgraph under hop $h$ and constructing negative triplets that have subgraphs between the fake target head and fake target tail. Specifically, we extract three versions of inductive datasets each for the FB15k-237 and NELL-995 datasets. Since there are not enough non-empty negative triplets for each positive triplet to perform the Hits@10 experiment in the original inductive WN18RR datasets, we only extract inductive datasets for NELL-995 and FB15k-237. The statistics of the datasets are shown in the Appendix. Note that the subgraph here refers to the undirected enclosing subgraph extracted in GraIL. The existence of an undirected subgraph between two entities does not necessarily mean that a directed subgraph exists. If a directed subgraph does not exist, we extract the undirected subgraph for these two entities. \subsection{Experimental Details} To be consistent with the prior methods, we use AUC-PR and Hits@10 to evaluate the models. Similar to GraIL \cite{grail}, to compute AUC-PR, we sample one negative triplet for each test triplet and evaluate which triplet has the larger score.
For Hits@10, we compare the true triplet with the sampled negative triplets in terms of their scores, to see whether the true triplet ranks in the top 10. The negative triplets are obtained by replacing the head or tail of the test triplets with other entities. For the original inductive datasets, the negative triplets are randomly sampled without considering whether they have an enclosing subgraph. For our extracted inductive datasets, we ensure that the negative triplets also have an enclosing subgraph. We implement our model in PyTorch. We use Adam \cite{Kingma2014Adam} as the optimizer with a learning rate of 0.001. The hop number $h$ is set to 3, which is consistent with GraIL. We train the model four times and average the test results to obtain the final performance. The number of iterations $l$ is set to 3. For more details, please refer to our code at: \url{https://github.com/TmacMai/CoMPILE_Inductive_Knowledge_Graph}. We evaluate our model against the following baselines: Neural-LP \cite{yang2017differentiable}, DRUM \cite{DRUM}, RuleN \cite{meilicke2018fine}, and GraIL \cite{grail}. For a detailed introduction to these methods, please refer to the introduction and related work sections.
\begin{table*}[!htb] \centering \resizebox{1.92\columnwidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{4}{c|}{WN18RR} & \multicolumn{4}{c|}{FB15k-237} & \multicolumn{4}{c}{NELL-995} \\ \hline Methods & v1 & v2 & v3 & v4 & v1 & v2 & v3 & v4 & v1 & v2 & v3 & v4 \\ \hline Neural-LP & 86.02 & 83.78 & 62.90 & 82.06 & 69.64 & 76.55 & 73.95 & 75.74 & 64.66 & 83.61 & 87.58 & 85.69 \\ DRUM & 86.02 & 84.05 & 63.20 & 82.06 & 69.71 & 76.44 & 74.03 & 76.20 & 59.86 & 83.99 & 87.71 & 85.94\\ RuleN & 90.26 & 89.01 & 76.46 & 85.75 & 75.24 & 88.70 & 91.24 & 91.79 & 84.99 & 88.40 & 87.20 & 80.52\\ GraIL & 94.32 & 94.18 & 85.80 & 92.72 & 84.69 & 90.57 & 91.68 & 94.46 & \textbf{86.05} & 92.62 & 93.34 & \textbf{87.50}\\ \hline CoMPILE & \textbf{98.23} & \textbf{99.56} & \textbf{93.60} & \textbf{99.80} & \textbf{85.50} & \textbf{91.68} & \textbf{93.12} & \textbf{94.90} & 80.16 & \textbf{95.88} & \textbf{96.08} & 85.48\\ \hline \end{tabular}} \caption{ \label{t1}Compared with Baselines on the Original Inductive Datasets (AUC-PR). 
The best performance is highlighted.} \end{table*}% \begin{table*}[!htb] \centering \resizebox{1.92\columnwidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{4}{c|}{WN18RR} & \multicolumn{4}{c|}{FB15k-237} & \multicolumn{4}{c}{NELL-995} \\ \hline Methods & v1 & v2 & v3 & v4 & v1 & v2 & v3 & v4 & v1 & v2 & v3 & v4 \\ \hline Neural-LP & 74.37 & 68.93 & 46.18 & 67.13 & 52.92 & 58.94 & 52.90 & 55.88 & 40.78 & 78.73 & 82.71 & \textbf{80.58} \\ DRUM & 74.37 & 68.93 & 46.18 & 67.13 & 52.92 & 58.73 & 52.90 & 55.88 & 19.42 & 78.55 & 82.71 & \textbf{80.58}\\ RuleN & 80.85 & 78.23 & 53.39 & 71.59 & 49.76 & 77.82 & \textbf{87.69} & 85.60 & 53.50 & 81.75 & 77.26 & 61.35\\ GraIL & 82.45 & 78.68 & 58.43 & 73.41 & 64.15 & 81.80 & 82.83 & \textbf{89.29} & \textbf{59.50} & 93.25 & 91.41 & 73.19\\ \hline CoMPILE& \textbf{83.60} & \textbf{79.82} & \textbf{60.69} & \textbf{75.49} & \textbf{67.64} & \textbf{82.98} & 84.67 & 87.44 & 58.38 & \textbf{93.87} & \textbf{92.77} & 75.19\\ \hline \end{tabular}} \caption{ \label{t2}Compared with Baselines on the Original Inductive Datasets (Hits@10).} \end{table*}% \subsection{Results and Discussions} \subsubsection{Compare with Baselines on Original Inductive Datasets}\label{sec:one} In this section, we compare our proposed message passing network with other baselines on the original inductive datasets proposed by GraIL. Note that this experiment is performed to evaluate the effectiveness of our node-edge message passing network, and the other settings such as the enclosing subgraph, loss function, and node embedding initialization are the same as those in GraIL. As presented in Table~\ref{t1} and Table~\ref{t2}, our CoMPILE demonstrates improvement on the majority of the original inductive datasets in terms of both the AUC-PR and Hits@10 evaluation metrics. Specifically, the improvement is significant on the AUC-PR for WN18RR inductive datasets, on which our CoMPILE outperforms SOTA method GraIL by a large margin. 
The results indicate that the proposed message passing network shows marked improvement over the R-GCN with attention in GraIL, which highlights the necessity of the bidirectional communication between the node and edge embeddings as well as the effectiveness of strengthening the role of relation information in the subgraph modeling. \begin{table*}[!htb] \centering \resizebox{2.1\columnwidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{6}{c|}{FB15k-237} & \multicolumn{6}{c}{NELL-995} \\ \hline & \multicolumn{2}{c|}{v1} & \multicolumn{2}{c|}{v2} & \multicolumn{2}{c|}{v3} & \multicolumn{2}{c|}{v1} & \multicolumn{2}{c|}{v2} & \multicolumn{2}{c}{v3} \\ \hline & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10\\ \hline GraIL & \textbf{78.84} & 61.25 & 80.76 & 64.93 & 82.25 & 68.13 & \textbf{68.47} & 58.63 & 84.07 & \textbf{76.52} & 81.42 & 75.12 \\ CoMPILE& 78.06 & \textbf{63.96} & \textbf{81.09} & \textbf{68.17} & \textbf{83.21} & \textbf{71.67} & 68.13 & \textbf{61.38} & \textbf{85.67} & 76.11 & \textbf{82.88} & \textbf{76.83}\\ \hline \end{tabular}} \caption{ \label{t3}Compared with Baselines on Our Extracted Inductive Datasets. } \end{table*}% \subsubsection{Compare with Baseline on Our Inductive Datasets} As demonstrated in the Dataset section, the original inductive dataset contains empty-subgraph triplets under hop $h$ which leads to inaccurate evaluation of the performance. Therefore, we further evaluate CoMPILE on our post-processed inductive datasets that have filtered out the empty-subgraph triplets. For comparison, we implement and evaluate the R-GCN with edge attention which is the message passing method in GraIL, and the results are presented in Table~\ref{t3}. Note that the enclosing subgraphs are directed in both the models. 
As shown in Table~\ref{t3}, CoMPILE still performs better than GraIL on the majority of evaluation metrics in our extracted inductive datasets, which further suggests the superiority of the message passing method in CoMPILE. For \textbf{model complexity}, the results suggest that our CoMPILE has a total number of 35,969 parameters, while the number of parameters of GraIL is 29,264. CoMPILE has slightly more parameters than GraIL, mainly because we model both the edge and node embeddings to enhance the flow of relation information in our CoMPILE. \begin{table*}[!htb] \centering \resizebox{2.1\columnwidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{6}{c|}{FB15k-237} & \multicolumn{6}{c}{NELL-995} \\ \hline & \multicolumn{2}{c|}{v1} & \multicolumn{2}{c|}{v2} & \multicolumn{2}{c|}{v3} & \multicolumn{2}{c|}{v1} & \multicolumn{2}{c|}{v2} & \multicolumn{2}{c}{v3} \\ \hline & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10 & AUC-PR & Hits@10\\ \hline Undirected & 77.80 & 59.79 & 80.89 & 65.98 & 81.45 & 68.82 & 63.25 & 58.88 & 79.33 & 68.92 & \textbf{83.73} & 76.66\\ Directed & \textbf{78.06} & \textbf{63.96} & \textbf{81.09} & \textbf{68.17} & \textbf{83.21} & \textbf{71.67} & \textbf{68.13} & \textbf{61.38} & \textbf{85.67} & \textbf{76.11} & 82.88 & \textbf{76.83}\\ \hline \end{tabular}} \caption{ \label{t4}Compared between the Undirected and Directed Subgraphs on CoMPILE. } \end{table*}% \subsubsection{Directed Subgraph vs. Undirected Subgraph} In this section, we investigate whether the performance improves when the undirected enclosing subgraph is replaced by the directed one.
From the results in Table~\ref{t4} we can infer that the version of CoMPILE with the directed subgraph significantly outperforms the version with the undirected subgraph in almost all the inductive datasets and evaluation metrics, demonstrating the necessity of effectively handling the direction problem in knowledge graphs. This is in line with our expectations, as both FB15k-237 and NELL-995 contain a significant number of asymmetric and anti-symmetric relations \cite{wang2018evaluating}. It is also worth mentioning that dealing with the directed subgraph is more efficient in terms of \textbf{time complexity}, because the directed subgraph can be viewed as a cropped version of the undirected one. Despite being more efficient, the directed subgraph still outperforms the undirected one in terms of the AUC-PR and Hits@10 metrics, indicating that a directed subgraph is a better choice for inferring the relation between two unseen entities. \begin{table}[!htb] \centering \resizebox{.98\columnwidth}{!}{\begin{tabular}{c|c|c|c|c} \hline & \multicolumn{2}{c|}{v1} & \multicolumn{2}{c}{v2}\\ \hline & AUC-PR & Hits@10 & AUC-PR & Hits@10\\ \hline W/O EEA & 75.79 & 61.35 & 79.26 & 67.16\\ Edge Attention & 77.07 & 61.04 & 78.53 & 66.46\\ \hline W/O EEU & 76.69 & 60.63 & 79.64 & 62.44\\ W/O Relation in EEU & 77.74 & 60.73 & 79.25 & 62.60 \\ \hline CoMPILE & \textbf{78.06} & \textbf{63.96} & \textbf{81.09} & \textbf{68.17}\\ \hline \end{tabular}} \caption{ \label{t5}Ablation Studies on Inductive FB15k-237 datasets. `EEA' and `EEU' refer to Enhanced Edge Attention and Edge Embedding Updating, respectively. } \end{table}% \subsubsection{Ablation Studies} We perform ablation studies to investigate the contributions of different components in our architecture. For the enhanced edge attention, we remove it from our model to analyze its contribution (see the case of `W/O EEA' in Table~\ref{t5}).
Moreover, we compare our enhanced edge attention with the edge attention in GraIL, which only uses the target relation to conduct attention (see the case of `Edge Attention'). Furthermore, we investigate whether updating the edge embedding improves the overall performance (see the case of `W/O EEU'). Additionally, we remove the relation information in the edge embedding updating to analyze the importance of relation information (see the case of `W/O Relation in EEU'; Eq.~\ref{eq2} becomes $\bm{E}_{agg}^k=(\bm{A^{he}})^T\bm{N}^{k} - (\bm{A^{te}})^T\bm{N}^{k}$ in this case). From Table~\ref{t5} we find that CoMPILE significantly outperforms the version of CoMPILE with the enhanced edge attention removed, demonstrating the effectiveness of our enhanced edge attention. Specifically, enhanced edge attention yields over 2\% improvement on both the AUC-PR and Hits@10 metrics of the inductive FB15k-237-v1 dataset. In contrast, the improvement of the edge attention in GraIL is relatively small, and the model without attention even outperforms the model with edge attention on the inductive FB15k-237-v2 dataset. These results suggest that it is of great significance to consider all the information in the target triplet when determining which edges are important. From the result of `W/O EEU' in Table~\ref{t5}, we notice that the performance drops significantly on all the evaluation metrics when the edge embedding updating component is removed. Compared to completely removing the edge embedding updating module, removing only the relation information in the edge embedding updating yields a slight improvement and is still clearly weaker than the case where the relation information is present. These results demonstrate that the edge embedding updating module is a critical component that contributes to the marked improvement of our CoMPILE, and that the relation information plays a very important role in the edge embedding updating.
It further proves our claim that the relation is critical in the inductive setting and highlights the necessity of strengthening the flow of relation information in the subgraph modeling. \subsubsection{Case Study for Asymmetric Relations} To demonstrate that our model can naturally handle asymmetric/anti-symmetric relations to some extent (by using the specific scoring function and edge definition, etc.) even when provided with the undirected enclosing subgraph, we select five asymmetric relations to evaluate our CoMPILE and GraIL. We use two negative triplet sampling strategies: the first is the standard operation that replaces the head or tail of the true triplets with other entities, and the second exchanges the head and tail of the test triplets. The results are presented in Table~\ref{t6}. Interestingly, we observe that the AUC scores of CoMPILE show no significant difference between these two negative triplet sampling strategies, indicating that CoMPILE can effectively distinguish the false triplet $(t, r, h)$ from the true triplet $(h, r, t)$. In contrast, the performance of GraIL drops significantly when the sampling strategy changes to the second one, which suggests that it cannot distinguish these two triplets well, mainly because GraIL does not handle the direction problem in KGs well. \begin{table}[!htb] \centering \resizebox{.98\columnwidth}{!}{\begin{tabular}{c|c|c|c|c} \hline & \multicolumn{2}{c|}{CoMPILE} & \multicolumn{2}{c}{GraIL}\\ \hline & Standard & Exchanging h\&t & Standard & Exchanging h\&t\\ \hline AUC &67.11 & 67.11 & 61.78 & 52.89\\ AUC-PR &63.75 & 61.53 & 64.44 & 56.17\\ \hline \end{tabular}} \caption{ \label{t6}Case Study for Asymmetric Relations in Inductive FB15k-237-v1 dataset.
`Standard' denotes the standard operation to sample negative triplets, and `Exchanging h\&t' denotes obtaining negative triplets by exchanging the head and tail of the test triplets.} \end{table}% \section{Conclusions} We propose CoMPILE, a directed subgraph reasoning method for inductive relation prediction. We propose to use a directed subgraph to infer the relation between two unseen entities, which makes it possible to handle asymmetric relations. Then we introduce a new message passing model that brings in the idea of enhanced edge attention and edge-node interactions to learn better node and edge embeddings and to strengthen the role of relation information. Finally, we extract new inductive datasets by filtering out the triplets that have no subgraph, so as to evaluate each method more accurately. Experiments suggest that our CoMPILE achieves remarkable performance and outperforms state-of-the-art methods on the majority of evaluation metrics. In future work, we aim to develop a relation-only message passing network that completely discards the nodes in the enclosing subgraph, and to investigate the performance of relation enclosing subgraphs. \section*{Acknowledgments} This work was supported in part by the National Natural Science Foundation of China (62076262, 61673402, 61273270, 60802069), in part by the Natural Science Foundation of Guangdong Province (2017A030311029), in part by the Project of Guangzhou Science and Technology under Grant 201605122151511, in part by the National Key R\&D Program of China (2020YFB020000), in part by the National Natural Science Foundation of China (61772566, 62041209), in part by the Guangdong Key Field R\&D Plan (2019B020228001 and 2018B010109006), and in part by the Guangzhou S\&T Research Plan (202007030010). \bibliographystyle{aaai21}
\section*{Abstract} { \bf Variational Monte Carlo with neural network quantum states has proven to be a promising avenue for evaluating the ground state energy of spin Hamiltonians. However, despite continuous efforts, the performance of the method on frustrated Hamiltonians remains significantly worse than on stoquastic Hamiltonians, which are sign-free. We present a detailed and systematic study of restricted Boltzmann machine (RBM) based variational Monte Carlo for quantum spin chains, resolving how relevant stoquasticity is in this setting. We show that in most cases, when the Hamiltonian is phase connected with a stoquastic point, the complex RBM state can faithfully represent the ground state, and local quantities can be evaluated efficiently by sampling. On the other hand, we identify several new phases that are challenging for the RBM Ansatz, including non-topological robust non-stoquastic phases as well as stoquastic phases where sampling is nevertheless inefficient. Furthermore, we find that an accurate neural network representation of ground states in non-stoquastic phases is hindered not only by the sign structure but also by their amplitudes. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} Over the past decade, Machine Learning (ML) has allowed for huge improvements not only in traditional fields such as image detection~\cite{alexnet} and natural language processing~\cite{gpt3} but also in other disciplines, e.g. surpassing human-level play in games~\cite{mnih2013playing,alphago} and predicting protein structures~\cite{senior2020improved}. ML has also been actively applied to quantum physics problems such as detecting phase transitions~\cite{carrasquilla2017machine,broecker2017machine} and decoding quantum error correcting codes~\cite{torlai2017neural,sweke2018reinforcement,Andreasson2019quantumerror}.
However, arguably the most active contribution of ML to physics has been in the field of classical variational algorithms for solving quantum many-body systems, the so-called variational quantum Monte Carlo (vQMC). A seminal study by Carleo and Troyer showed that the complex-valued restricted Boltzmann machine (RBM)~\cite{carleo2017solving} solves the ground state of the transverse field Ising and the Heisenberg models to machine accuracy. Subsequent studies demonstrated that other neural network based \textit{Ans\"{a}tze}, such as convolutional neural networks (CNNs)~\cite{choo2019two,szabo2020neural} and models with the autoregressive property~\cite{sharir2020deep,hibat2020recurrent}, also provide highly accurate solutions when combined with proper optimization algorithms. Despite these successes, the methods still suffer from several difficulties in solving highly frustrated systems~\cite{choo2019two,ferrari2019neural,westerhout2020generalization}. Why then are some Hamiltonians so difficult to solve? In the path integral Monte Carlo, it is well known that stoquastic Hamiltonians -- those with real-valued and non-positive off-diagonal elements -- are tractable~\cite{BravyiTerhal}. One of the crucial properties of stoquastic Hamiltonians is that the ground state is positive up to a global phase. As the RBM Ansatz also seems to solve stoquastic Hamiltonians well~\cite{carleo2017solving}, a complex sign structure of the ground state may be why it is difficult to solve a highly frustrated Hamiltonian with strong non-stoquasticity. Thus some recent studies have investigated the relation between the signs of the ground state in the computational basis and the difficulty of solving the Hamiltonian using neural quantum states~\cite{westerhout2020generalization,szabo2020neural}. However, the results so far are inconclusive.
Even though it has been observed that neural quantum states do not learn ground states of non-stoquastic Hamiltonians well, it is not clear whether it is intrinsically impossible to represent such states using a reasonable number of weights, or whether algorithms such as stochastic reconfiguration simply fail to find an accurate optimum. One can contrast this with the density matrix renormalization group (DMRG)~\cite{white1992density}, which was first developed as an extension of the numerical renormalization group, but for which subsequent numerical and theoretical studies have revealed that entanglement is the underlying principle behind the method. This connection explains why the DMRG works exceptionally well for one-dimensional gapped systems but has difficulty solving higher-dimensional systems. In comparison, we still do not have such a good theory for vQMC. With this in mind, we unveil a connection between the vQMC with complex RBMs and the stoquasticity of the Hamiltonian. We thus aim to guide future theoretical studies on the mathematical foundation of the vQMC as well as numerical studies for Hamiltonians resilient to vQMC. Our investigation is based on the classification of three typical failure mechanisms: (i) \textbf{Sampling}: The sampling method, such as local update Markov chain Monte Carlo (MCMC), fails to produce good samples from the state, or the observables in the optimization algorithm cannot be accurately constructed from a polynomial number of samples. (ii) \textbf{Convergence}: The energy gradient and other observables involved in the optimization (e.g. the Fisher information matrix) can be accurately and efficiently obtained for each optimization step, yet the optimization gets stuck in a local minimum or a saddle point. (iii) The \textbf{expressivity} of the Ansatz is insufficient: the Ansatz is far from the correct ground state even for the optimal parameter set, i.e.
$\min_\theta ||\ket{\psi_\theta} - \ket{\Phi_{\rm GS}}||$ is large, where $\ket{\psi_\theta}$ is the quantum state that the Ansatz describes for a given parameter set $\theta$. Specific pairs of Hamiltonians and Ans\"{a}tze can sometimes rule out one or several of the failure mechanisms. For instance, when the Hamiltonian has a known exact neural-network representation of the ground state (e.g. the cluster state, the toric code~\cite{deng2017machine} and other stabilizer states~\cite{zhang2018efficient,jia2019efficient}), we can discard \textit{expressivity} [case (iii)] as a failure mechanism. On the other hand, models with the autoregressive property~\cite{sharir2020deep,hibat2020recurrent} are free from MCMC errors as they always produce unbiased samples\footnote{Still, one may see a sampling error if a target probability distribution cannot be well approximated with a finite number of samples.}. The Hamiltonians we consider in this work will typically not have known ground state representations, and we mainly use the RBM Ansatz, as it is the best-studied neural network Ansatz class and has the most reliable performance~\cite{nomura2021helping}. Hence all three failure mechanisms can occur. Based on this classification, we set out to understand what role stoquasticity plays in the success and failure of variational Monte Carlo with the RBM Ansatz. By way of example, we show that non-stoquasticity can cause problems with \textit{sampling} [case (i)], while phase transitions within a non-stoquastic parameter region may yield \textit{expressivity} problems [case (iii)]. We dub such a phase that cannot be annealed into a stoquastic parameter region a ``deep non-stoquastic phase.'' Given that ``deep non-stoquastic phases'' can be gapped, our observation implies that the dimension or gap of the system is not related to the reliability of the method in any straightforward manner.
Rather, as for quantum Monte Carlo~\cite{BravyiTerhal}, the stoquasticity of the Hamiltonian is the more essential feature. Although our claim of a ``deep non-stoquastic phase'' seems to disagree with Ref.~\cite{westerhout2020generalization}, which posits that even a shallow neural network (with depth $2$ or $3$) can express the ground state of non-stoquastic Hamiltonians, we show later in the paper that this is due to different levels of desired accuracy. We demonstrate that the \textit{expressivity} problem indeed appears when the desired accuracy is as high as that achieved for stoquastic Hamiltonians. In addition, we find that the \textit{expressivity} problem is due not only to the sign structure of quantum states but also to their amplitudes, where the latter dominates for system sizes up to about thirty. We also emphasize that our main concern in this paper is why the inaccuracy of vQMC for non-stoquastic models is significantly larger than for stoquastic models~\cite{ferrari2018dynamical,thibaut2019long,ferrari2019neural,choo2019two}. However, such relatively large errors from non-stoquastic models may still be acceptable for obtaining the ground state properties, depending on the problem at hand (when the gap is much bigger than the errors even for large enough $N$). We thus do not claim a difficulty of a particular Hamiltonian (such as the two-dimensional \Jone-\Jtwo\hspace{1pt} model) for the vQMC; rather, we want to understand \textit{when and why} some Hamiltonians are relatively more difficult than others. The remainder of the paper is organized as follows. We introduce the complex RBM wavefunctions and our optimization methods in Sec.~\ref{sec:RBM}. We next establish our main observations in Sec.~\ref{sec:preliminary_examples} by studying how non-stoquasticity affects the RBM using the one-dimensional XXZ and \Jone-\Jtwo\hspace{1pt} models, the properties of which are well known.
We then confirm our observations using a specially devised Hamiltonian in Sec.~\ref{sec:further_experiments}. We then resolve discrepancies between our study and previous ones in Sec.~\ref{sec:supervised_learning} and conclude with final remarks in Sec.~\ref{sec:conclusion}. \section{Restricted Boltzmann machine wavefunctions}\label{sec:RBM} Inspired by the recent successes in machine learning, Carleo and Troyer introduced the restricted Boltzmann machine (RBM) quantum state Ansatz class~\cite{carleo2017solving}, and showed that it can accurately solve the ground states of the transverse field Ising and Heisenberg-XXX models in the variational quantum Monte Carlo (vQMC) framework. For complex parameters $a_i, b_j$ and $w_{ij}$ where $i \in [1,\cdots,N]$ and $j \in [1,\cdots,M]$, an (unnormalized) RBM state is given by \begin{align} \widetilde{\psi}_{\theta}(x) &= \sum_{y}e^{\sum_{i,j} w_{ij} x_i y_j + \sum_i a_i x_i + \sum_j b_j y_j} \nonumber \\ &= e^{\sum_i a_i x_i} \prod_j 2 \cosh(\chi_j) \label{eq:rbm_cosh} \end{align} where $\theta=(a,b,w)$ is the collection of all parameters, $x=(x_1,x_2,\cdots,x_N)$ is a basis vector in the computational basis (typically the Pauli $Z$ basis), $y=(y_1,y_2,\cdots,y_M)$ labels the hidden units, and the `activations' are given by $\chi_j = \sum_{i} w_{ij} x_i + b_j$. We also introduce the parameter $\alpha=M/N$ that controls the density of hidden units and parameterizes the expressivity of the model. In addition, we will write $\psi_\theta(x)=\widetilde{\psi}_\theta(x)/\sqrt{\sum_x |\widetilde{\psi}_\theta(x)|^2}$ to denote the normalized wavefunction.
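The closed form above (hidden units summed out into a product of $\cosh$ factors) can be checked numerically. The following sketch, which is illustrative and not part of our actual implementation, evaluates the unnormalized amplitude via the $\cosh$ form and verifies it against the brute-force sum over all hidden configurations for a small random instance.

```python
import numpy as np

def rbm_amplitude(x, a, b, w):
    """Unnormalized complex RBM amplitude in the cosh form.

    x : spin configuration in {-1, +1}^N
    a : (N,) complex visible biases, b : (M,) complex hidden biases
    w : (N, M) complex weights
    """
    chi = x @ w + b                              # activations chi_j
    return np.exp(a @ x) * np.prod(2.0 * np.cosh(chi))

# Brute-force check: sum over all 2^M hidden configurations y in {-1, +1}^M.
rng = np.random.default_rng(0)
N, M = 4, 3
a = rng.normal(size=N) + 1j * rng.normal(size=N)
b = rng.normal(size=M) + 1j * rng.normal(size=M)
w = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
x = rng.choice([-1, 1], size=N)

brute = 0.0 + 0.0j
for bits in range(2 ** M):
    y = np.array([1 if (bits >> j) & 1 else -1 for j in range(M)])
    brute += np.exp(x @ w @ y + a @ x + b @ y)
print(np.allclose(rbm_amplitude(x, a, b, w), brute))  # True
```

The agreement follows because the exponential factorizes over hidden units, and each $y_j = \pm 1$ sum gives $2\cosh(\chi_j)$.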
For a given Hamiltonian, the parameters of the Ansatz can be optimized using a variety of different methods, including the standard second-order vQMC algorithm known as Stochastic Reconfiguration (SR)~\cite{sorella2001generalized,umrigar2007alleviation} or modern variants of first-order methods~\cite{kessler2019artificial,yang2020deep,hibat2020recurrent} such as ADAM~\cite{kingma2014adam}. Throughout the paper, we use the SR as it is believed to be more stable and accurate for solving general Hamiltonians~\cite{park2020geometry}. At each iteration step $n$, the SR method estimates the covariance matrix $S$, with entries $S_{i,j}=\braket{O_i^* O_j }- \braket{O_i^*}\braket{ O_j }$, and the energy gradient $f$, with entries $f_i = \braket{E_{\rm loc}^* O_i}-\braket{E_{\rm loc}^*}\braket{O_i}$, where $O_i(x) = \partial_{\theta_i} \log[\widetilde{\psi}_\theta(x)]$ and $\braket{\cdot}$ denotes the average over samples $x$ drawn from $|\psi_\theta(x)|^2$ (see Refs.~\cite{sorella2001generalized,carleo2017solving} and Appendix~\ref{app:train_rbm} for details). The parameter set is updated as $\theta_{n+1} = \theta_{n} - \eta_n S^{-1} f$. In practice, a shifted covariance matrix $S' = S + \lambda_n \mathbb{I}$ with a small real parameter $\lambda_n$ is used for numerical stability. In the SR optimization scheme with the complex RBM, expectation values are obtained by sampling from the distribution $|\psi_\theta(x)|^2$, typically by conventional Markov chain Monte Carlo (MCMC). In some cases, we use running averages of $S$ and $f$ when this increases the stability (i.e. we use $f_{n} = (1-\beta_1) f_{n-1} + \beta_1 f$, $S_n = (1-\beta_2) S_{n-1} + \beta_2 S$ for suitable choices of $\beta_1$, $\beta_2$ and update $\theta$ using $\theta_{n+1} = \theta_n - \eta_n S_{n}^{-1} f_{n}$).
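A single regularized SR step, as described above, can be sketched as follows. This is a minimal illustration from a batch of precomputed log-derivatives and local energies; the step size and shift values are arbitrary, not the schedules used in our simulations.

```python
import numpy as np

def sr_update(theta, O, E_loc, eta=0.02, lam=1e-3):
    """One Stochastic Reconfiguration step from a batch of samples.

    theta : (P,)  current parameters
    O     : (n, P) log-derivatives O_i(x_k) = d log psi / d theta_i at sample x_k
    E_loc : (n,)   local energies E_loc(x_k)
    """
    n = O.shape[0]
    dO = O - O.mean(axis=0)
    dE = E_loc - E_loc.mean()
    S = dO.conj().T @ dO / n            # S_ij = <O_i* O_j> - <O_i*><O_j>
    f = dO.T @ dE.conj() / n            # f_i  = <E* O_i>  - <E*><O_i>
    # Shifted covariance S + lam*I for numerical stability, then solve.
    return theta - eta * np.linalg.solve(S + lam * np.eye(len(theta)), f)

# Tiny hand-checkable case: one parameter, two samples.
theta = np.array([0.5])
O_samples = np.array([[1.0], [-1.0]])
E_samples = np.array([2.0, 0.0])
new_theta = sr_update(theta, O_samples, E_samples, eta=0.1, lam=0.0)
print(new_theta)  # [0.4]: here S = 1 and f = 1, so theta -> 0.5 - 0.1
```

In a real run, `O` and `E_loc` would come from MCMC samples of $|\psi_\theta(x)|^2$ at each epoch.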
To assess whether the sampling method works well, we introduce the exact reconfiguration (ER), which evaluates $S_{i,j}$ and $f$ from $\psi_\theta(x)$ by calculating the exponential sums $\sum_{x} |\psi_\theta(x)|^2 (\cdot)$ exactly, where $x$ runs over all basis vectors in the computational basis (thus we sum over $2^N$ or ${N \choose N/2}$ configurations depending on the symmetry of the Hamiltonian). Within this framework, we classify the difficulty of ground state simulation as follows: We solve the system using the ER with $N=20$ and the SR with $N=28$ or $32$ (depending on the symmetry of the Hamiltonian). When the Hamiltonian is free from any of the problems [(i) sampling, (ii) convergence, (iii) expressivity], the converged energies from both methods will be close to the true ground state energy. If we observe that the ER finds the ground state accurately in a reasonable number of epochs\footnote{As we do not have a training dataset, we use the term ``epoch'' to indicate a single optimization step.}, but the SR does not, we conclude that the problem has to do with sampling. When there is a local basis change that transforms a given Hamiltonian into a stoquastic form, we apply such a transformation to see whether the \textit{sampling} problem persists. If both SR and ER fail, we perform the following further diagnostic tests: (a) We compare ER results from several different randomized starting points, and (b) we run the ER through an annealing scheme from a phase that is known to succeed. When all runs of the ER return the same converged energy, we conclude that the problem must be related to the expressivity of the Ansatz. Otherwise, we try the annealing scheme as an alternative optimization method. Instead of training a randomly initialized RBM, we start from the converged RBM within the same phase and change the parameters of the Hamiltonian slowly. If the annealing with the ER also fails, we conclude that the \textit{expressivity} problem is robust.
Finally, we support the classification results from the above procedure by a scaling analysis of the errors for different system sizes. \section{Preliminary examples}\label{sec:preliminary_examples} Stoquastic Hamiltonians~\cite{BravyiTerhal} -- those whose off-diagonal elements are real-valued and non-positive in a specific basis -- typically lend themselves to simulation by the path integral quantum Monte Carlo method. In the path integral Monte Carlo, one evaluates the partition function $Z=\Tr[e^{-\beta H}]$ using the expansion \begin{align} \Tr[e^{-\beta H}] &= \sum_{x_0} \braket{x_0 |(e^{-\frac{\beta}{K} H})^K| x_0}\\ &\approx \sum_{x_0, \cdots, x_{K-1}} \braket{x_0|\bigl(1-\frac{\beta}{K} H \bigr) |x_{K-1}}\cdots \braket{x_2|\bigl(1-\frac{\beta}{K} H \bigr)|x_{1}}\braket{x_1|\bigl(1-\frac{\beta}{K} H \bigr)|x_{0}} \end{align} which is valid for large $K$. As all elements of $1-(\beta/K) H$ are non-negative when $H$ is stoquastic, the sum can be estimated rather easily. Likewise, one can also estimate the expectation value of an observable $A$ from a similar expansion of $\Tr[Ae^{-\beta H}]/Z$. However, a ``sign problem'' arises when the condition is not satisfied, leading to uncontrollable fluctuations of observable quantities as the system grows. The relevance of stoquasticity for the vQMC is far less explored, despite the fact that this method and its variants were introduced to alleviate the sign problem~\cite{umrigar2007alleviation}. Although it is true that the vQMC is free from summations over alternating signs, the method still shows several difficulties in solving frustrated Hamiltonians with a complex sign structure, as argued in Ref.~\cite{ferrari2019neural}. In this section, we investigate this question in detail using the one-dimensional Heisenberg XXZ and \Jone-\Jtwo\hspace{1pt} models, the properties of which are well known. Our strategy is simple.
For each Hamiltonian, we use the original Hamiltonian and one transformed into a stoquastic local basis, and observe how the local basis transformation affects the expressivity, convergence, and sampling. Throughout the paper, we will assume periodic boundary conditions for ease of comparison with results from the exact diagonalization (ED). \begin{figure*}[!t] \centering \includegraphics[width=0.65\linewidth]{xxz.pdf} \includegraphics[width=0.65\linewidth]{j1j2.pdf} \caption{\label{fig:xxz_and_j1j2} Converged normalized energy $\widetilde{E}=(E_{\rm RBM} - E_{\rm ED})/E_{\rm ED}$ as a function of model parameters for (a,b) the Heisenberg-XXZ and (c,d) \Jone-\Jtwo\hspace{1pt} model. For each model, the upper plots [(a) and (c)] present results for system size $N=20$ with the Exact Reconfiguration method that optimizes the parameters using explicitly constructed wavefunctions from the RBM. The lower plots [(b) and (d)] show results for $N=32$ with Stochastic Reconfiguration with Markov chain Monte Carlo sampling for optimization. The orange diamonds indicate simulation of the models in the original non-stoquastic basis, while the blue dots indicate simulations in the modified basis, after applying the Pauli-$Z$ operator on every even site (the sign rule). Vertical dashed lines indicate where the KT-transitions take place ($\Delta=1.0$ for the XXZ model and $J_2\approx0.2411$ for the \Jone-\Jtwo\hspace{1pt} model). For each value of the parameters, we have run the simulation $12$ times and each point represents the result from a single run. } \end{figure*} \subsection{Heisenberg XXZ and \Jone-\Jtwo\hspace{1pt} models} The Heisenberg XXZ model is given by \begin{align} H_{\rm XXZ} = \sum_i \sigma_i^x \sigma_{i+1}^x + \sigma_i^y \sigma_{i+1}^y + \Delta \sigma_i^z \sigma_{i+1}^z, \end{align} where $\sigma^{x,y,z}_j$ denote the Pauli operators at site $j$, and $\Delta$ is a free (real) parameter of the model.
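For small systems, stoquasticity can be checked directly by constructing the Hamiltonian matrix. The sketch below (illustrative only; the choices $N=4$, $\Delta=1.5$ are arbitrary) builds $H_{\rm XXZ}$ with periodic boundary conditions and verifies that conjugating by Pauli-$Z$ on every even site, the sign rule mentioned in the figure caption, renders it stoquastic.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    return reduce(np.kron, ops)

def xxz(N, delta):
    """Dense H_XXZ with periodic boundary conditions (small N only)."""
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N
        for op, coef in ((sx, 1.0), (sy, 1.0), (sz, delta)):
            ops = [I2] * N
            ops[i], ops[j] = op, op
            H += coef * kron_chain(ops)
    return H

def is_stoquastic(H, tol=1e-9):
    """Real matrix with non-positive off-diagonal entries in this basis?"""
    off = H - np.diag(np.diag(H))
    return np.abs(off.imag).max() < tol and off.real.max() < tol

def sign_rule(N):
    """Pauli-Z on every even site."""
    return kron_chain([sz if i % 2 == 0 else I2 for i in range(N)])

N, delta = 4, 1.5
H = xxz(N, delta)
U = sign_rule(N)
print(is_stoquastic(H), is_stoquastic(U @ H @ U))  # False True
```

The conjugation flips the sign of the $\sigma^x\sigma^x$ and $\sigma^y\sigma^y$ terms on every bond (each bond touches exactly one even site for even $N$), so all off-diagonal entries become non-positive.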
As this model is solvable by the Bethe Ansatz, it is well known that the model exhibits phase transitions at $\Delta=-1$ (first order) and $\Delta=1$ (the Kosterlitz-Thouless transition). Furthermore, the system is gapped when $|\Delta|>1$ and in the critical phase when $-1<\Delta\leq 1$. The Marshall sign rule (applying the Pauli-$Z$ gate on all even (or odd) sites) changes the Hamiltonian into a stoquastic form in the Pauli-$Z$ basis: \begin{align} H_{\rm XXZ}' = \sum_i -\sigma_i^x \sigma_{i+1}^x -\sigma_i^y \sigma_{i+1}^y + \Delta \sigma_i^z \sigma_{i+1}^z. \end{align} The Hamiltonian is then stoquastic regardless of the value of $\Delta$. Using the RBM with $\alpha=3$, we plot the results from the ER and SR with and without the sign rule in Fig.~\ref{fig:xxz_and_j1j2}(a) and (b). For the SR, we sample from the distribution $|\psi_\theta(x)|^2$ using the MCMC. As the system obeys the $U(1)$ symmetry and the ground states are in the $J_z = \sum_i \sigma_i^z = 0$ subspace when $\Delta > -1$, we initialize the configuration $x$ to have the same number of up and down spins. For each Monte Carlo step, we update the configuration by exchanging $x_i$ and $x_j$ for randomly chosen $i$ and $j$. We further employ the parallel tempering method using $16$ chains with different temperatures (see Appendix~\ref{app:parallel_tempering} for details) to reduce sampling noise. Likewise, we sum over the basis vectors in $J_z=0$ for the ER. For each epoch, we use $|\theta|=NM+N+M$ samples to estimate $S$ and $f$ unless otherwise stated. Figure~\ref{fig:xxz_and_j1j2}(a) clearly shows that the sign rule barely changes the results when we exactly compute the wavefunctions for optimization\footnote{We note that the converged energies might differ between instances even when the ER is used, due to the random initialization.}.
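The magnetization-preserving exchange update described above can be sketched as follows. The product-state amplitude used here is a toy stand-in for the RBM, chosen only to keep the example self-contained; note that the Metropolis acceptance depends only on the modulus ratio $|\psi(x')/\psi(x)|^2$.

```python
import numpy as np

def exchange_step(x, log_amp, rng):
    """One Metropolis move that swaps two spins, preserving J_z = sum_i x_i.

    `log_amp` maps a configuration to log|psi(x)|; the acceptance uses only
    the ratio |psi(x')/psi(x)|^2, so the sign/phase of psi never enters.
    """
    i, j = rng.choice(len(x), size=2, replace=False)
    if x[i] == x[j]:
        return x                      # swapping equal spins does nothing
    xp = x.copy()
    xp[i], xp[j] = xp[j], xp[i]
    if rng.random() < np.exp(2.0 * (log_amp(xp) - log_amp(x))):
        return xp
    return x

# Toy check with a product-state amplitude: the walker starts in the
# J_z = 0 sector and, by construction, never leaves it.
rng = np.random.default_rng(1)
h = rng.normal(size=8)
log_amp = lambda x: 0.5 * h @ x
x = np.array([1, -1] * 4)
for _ in range(200):
    x = exchange_step(x, log_amp, rng)
print(x.sum())  # 0 -- total magnetization is conserved by the swap moves
```

In the actual simulations this local update is combined with parallel tempering over $16$ chains to reduce sampling noise.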
The insensitivity of the ER results to the sign rule can be attributed to the fact that the RBM Ansatz can incorporate the Pauli-$X,Y,Z$ gates as well as the phase shift gate $e^{-i \theta \sigma^z_k}$ for arbitrary $\theta$ efficiently~\cite{jonsson2018neural}. On the other hand, when we sample from the distribution [Fig.~\ref{fig:xxz_and_j1j2} (b)], some RBM instances fail to find the ground state without the sign rule, especially in the antiferromagnetic phase ($\Delta > 1.0$). Thus we see that the \textit{sampling} problem arises due to non-stoquasticity. Since the MCMC only uses the ratio between two probability densities $|\psi_\theta(x')/\psi_\theta(x)|^2$, which is sign invariant, the \textit{sampling} problem here has nothing to do with the sign structure of the ground state. Instead, it is caused by the different learning paths taken by the original and the basis transformed Hamiltonians. When we use the original Hamiltonian $H_{\rm XXZ}$, the learning behaves poorly when it enters a region of the parameter space $\theta$ where $S$ and $f$ are not accurately estimated from samples. The transformed Hamiltonian $H_{\rm XXZ}'$ avoids this problem by following a different learning path\footnote{This observation is also consistent with a result in Ref.~\cite{szabo2020neural} that a well-designed learning scheme avoids the \textit{sampling} problem. We also note that one can estimate $f$ and $S$ with controllable errors if $||O_i(x)||_{\rm \infty} = \max_x |O_i(x)| \leq c$ for a constant $c$ (which is violated by the complex RBM) when the samples are correct. Such an Ansatz is proposed in Ref.~\cite{bukov2020learning} and indeed shows a more stable learning curve. }. We have observed that, in general, the energy of a randomly initialized RBM is much closer to that of the ground state when the sign rule is applied, and the learning converges in fewer epochs.
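To make the gate-absorption statement concrete, here is a small numerical sketch (our own illustration; the parametrization is the standard complex RBM, $\psi_\theta(x)=e^{\sum_i a_i x_i}\prod_j 2\cosh(b_j+\sum_i W_{ij}x_i)$ with $x_i=\pm1$, and all sizes and seeds are arbitrary choices): a Pauli-$Z$ or phase-shift gate on site $k$ is absorbed exactly by shifting the visible bias $a_k$.

```python
# Sketch: absorbing Z and phase-shift gates on site k into the visible
# bias of a complex RBM, leaving all other parameters untouched.
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 8  # numbers of visible and hidden units
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=m) + 1j * rng.normal(size=m)
W = 0.1 * (rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m)))

def psi(x, a):
    """Unnormalized RBM amplitude for a configuration x in {-1,+1}^n."""
    return np.exp(a @ x) * np.prod(2 * np.cosh(b + x @ W))

k, theta = 2, 0.37
x = rng.choice([-1, 1], size=n).astype(float)

a_z = a.copy(); a_z[k] += 1j * np.pi / 2  # Pauli-Z on site k (up to a global phase i)
a_p = a.copy(); a_p[k] -= 1j * theta      # phase gate e^{-i theta Z_k}

ratio_z = psi(x, a_z) / psi(x, a)  # equals i * x_k
ratio_p = psi(x, a_p) / psi(x, a)  # equals e^{-i theta x_k}
```

Since $Z\ket{x_k}=x_k\ket{x_k}$ in the $\pm1$ convention, the first ratio realizes the $Z$ gate up to an irrelevant global phase; Pauli-$X$ on site $k$ is similarly absorbed by negating the $k$-th row of the couplings.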
We have further tested the SR without the sign rule using different system sizes $N=[20,24,28,32]$ and up to $76,800$ samples for each epoch, but observed that the \textit{sampling} problem persists regardless of such details. We also show that this is not an ergodicity problem of the MCMC in Appendix~\ref{app:xxz_exact_sampler}, as the SR with the exact sampler (which samples from the probability distribution exactly constructed from $|\psi_\theta(x)|^2$) gives the same results. Next, let us consider the one-dimensional \Jone-\Jtwo\hspace{1pt} model, given by \begin{align} H_{J_1-J_2} = \sum_i J_1 \pmb{\sigma}_i \cdot \pmb{\sigma}_{i+1} + J_2 \pmb{\sigma}_i \cdot \pmb{\sigma}_{i+2}, \end{align} where we fix $J_1=1.0$. The Hamiltonian has a unique gapless ground state when $J_2 < J_2^*$ (thus, within the critical phase) and gapped two-fold degenerate ground states when $J_2 > J_2^*$. The KT-transition point is approximately known to be $J_2^* \approx 0.2411$~\cite{okamoto1992fluid}. In addition, an exact solution at $J_2=0.5$ is known -- the Majumdar-Ghosh point. The Marshall sign rule can also be applied to this Hamiltonian, which yields: \begin{align} H_{J_1-J_2}' &= \sum_i J_1 [-\sigma^x_i \sigma^x_{i+1} - \sigma^y_i \sigma^y_{i+1}+\sigma^z_i \sigma^z_{i+1}] \nonumber \\ & \quad\quad + J_2 \pmb{\sigma}_i \cdot \pmb{\sigma}_{i+2}. \end{align} We note that this Hamiltonian is still non-stoquastic when $J_2>0$. In Appendix~\ref{app:sto_j1j2}, we prove that on-site unitary gates that transform $H_{J_1-J_2}$ into a stoquastic form indeed do not exist when $J_2 > 0$. We also show that ground states in the gapped phase ($J_2 > J_2^*$) cannot be transformed into a positive form easily using the results from Ref.~\cite{STP_Sign}. Simulation results for this Hamiltonian are presented in Fig.~\ref{fig:xxz_and_j1j2}(c) and (d).
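The residual non-stoquasticity is easy to see numerically. In this sketch (ours, not the paper's code) we build $H_{J_1-J_2}'$ for a small chain and look for positive off-diagonal elements; they come from the untransformed next-nearest-neighbor $XX+YY$ terms.

```python
# Sketch: after the Marshall sign rule, the J1-J2 chain retains strictly
# positive off-diagonal elements in the Z basis whenever J2 > 0.
# N = 6 sites with periodic boundary conditions, J1 = 1.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def h_j1j2_signrule(n, j2):
    h = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j, k = (i + 1) % n, (i + 2) % n
        # nearest neighbours after the sign rule: -XX - YY + ZZ
        h += -op(X, i, n) @ op(X, j, n) - op(Y, i, n) @ op(Y, j, n) \
             + op(Z, i, n) @ op(Z, j, n)
        # next-nearest neighbours keep the untransformed Heisenberg form
        for P in (X, Y, Z):
            h += j2 * op(P, i, n) @ op(P, k, n)
    return h

j2 = 0.4
H = h_j1j2_signrule(6, j2)
off = H - np.diag(np.diag(H))
print(np.max(off.real))  # 2 * J2 > 0: the transformed Hamiltonian is non-stoquastic
```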
First, as in the XXZ model, the ER results in Fig.~\ref{fig:xxz_and_j1j2}(c) show that the sign rule is not crucial when we exactly compute the observables, i.e. the ER with and without the sign rule both converge to almost the same energies. However, in contrast to the XXZ model, there is a range of $J_2 \in (J_2^*, 0.5) \cup (0.5,0.6)$ where all ER and SR instances perform badly (the error is $>10^{-4}$ for some instances) even when the sign rule is applied [Fig.~\ref{fig:xxz_and_j1j2}(c)]. This indicates that the \textit{expressive power} of the network is insufficient for describing the ground state even though the system is gapped. We further show (see Appendix~\ref{app:j1j2_express}) that this problem cannot be overcome by increasing the number of hidden units, and revisit this issue in Sec.~\ref{sec:supervised_learning} using the supervised learning framework. Since this region cannot be annealed from a stoquastic point ($J_2=0$) without a phase transition, we argue that this parameter region is in a ``deep non-stoquastic'' phase. We also note that previous studies~\cite{ferrari2018dynamical,thibaut2019long} using different variational Ans\"{a}tze have reported a similar behavior of errors, suggesting that the difficulty of ``deep non-stoquastic'' phases is not limited to the RBM Ansatz (see also Sec.~\ref{sec:supervised_learning}). When we use the SR, the results in Fig.~\ref{fig:xxz_and_j1j2}(d) show that some of the instances always fail to converge to true ground states when the sign rule is not applied, regardless of the Hamiltonian parameters. This is the same behavior we saw in the XXZ model: non-stoquasticity induces a \textit{sampling} problem. On the other hand, when the sign rule is applied, all SR instances report small relative errors when $J_2 \leq J_2^*$ even though the transformed Hamiltonian is still non-stoquastic. We speculate that this is because the whole region is phase connected to the stoquastic $J_2=0$ point.
We summarize the results from the above two models with the following key observations. \begin{observation} \label{obs:express} Complex RBMs represent ground states of spin chains faithfully when the Hamiltonian is stoquastic, up to a basis transformation consisting of local Pauli and phase-shift gates, or is phase connected to such a Hamiltonian. \end{observation} \begin{observation}\label{obs:express2} There exists a ``deep non-stoquastic phase'', where the Hamiltonian cannot be locally or adiabatically transformed into a stoquastic Hamiltonian without crossing a phase transition. Complex RBMs have difficulty representing such ground states. \end{observation} \begin{observation} \label{obs:sampling} Sampling is stable along the learning path when the Hamiltonian is stoquastic or phase connected to a stoquastic Hamiltonian. \end{observation} In the next section, we will explore these observations more closely by introducing a more challenging example that combines all of the problems above. \begin{figure*}[!t] \centering \includegraphics[width=0.55\linewidth]{txyz_phase.pdf} \caption{\label{fig:txyz_phase} Phase and stoquasticity diagrams of the twisted XYZ model. For parameters $0 \leq a, b \leq 2.5$, the difference between the lowest energies $(E_0-E_1)/E_0$ in the symmetric and the anti-symmetric subspaces under the spin flip ($\sigma_z \leftrightarrow -\sigma_z$) for $N=28$ is shown. As the ground states can break the $\mathbb{Z}_2$ symmetry in all three directions, we cannot determine the phases solely from this plot. Thus we calculate magnetic susceptibilities in Fig.~\ref{fig:txyz_suscep} and find that Phase I breaks the symmetry along the $z$-axis whereas Phase II restores it. Between those two phases, Phase $\Lambda$, which breaks the symmetry along the $y$-axis, appears when $a \neq b$. We depict approximate phase boundaries with dotted curves. On the other hand, dashed lines show stoquastic to non-stoquastic transitions.
Local basis transformed Hamiltonians $H_{\rm tXYZ}^{\diamondsuit}$ and $H_{\rm tXYZ}^\bigstar$ are stoquastic in the first and third quadrants, respectively. In the second and fourth quadrants, local (on-site) unitary gates that transform the Hamiltonian into a stoquastic form do not exist. The untransformed Hamiltonian $H_{\rm tXYZ}$ is stoquastic only when $a=b$ in this region. The line segments from $O=(0.5, 0.5)$ to $A=(0.25,0.75)$ and from $A$ to $E=(1.25,1.75)$ indicate the parameters for which we simulate the vQMC. The phase transitions along $\overline{AE}$ take place at $C_1\approx(0.764,1.264)$ and $C_2\approx (0.793,1.293)$. } \end{figure*} \section{Further experiments}\label{sec:further_experiments} In the previous examples, local basis transformations only marginally affected the expressive power of the model. Here, we introduce a Hamiltonian whose stoquastic basis transformation involves the Hadamard gate, which cannot be embedded into the RBM Ansatz. The main findings in this section are (1) local basis transformations beyond the Pauli and phase-shift gates are useful for expressivity, (2) there is a conventional symmetry broken phase for which the RBM fails to represent the ground states, and (3) the number of samples needed to estimate observables correctly may scale poorly even for a stoquastic Hamiltonian. \subsection{Model Hamiltonian and phase diagram} We consider a next-nearest-neighbor interacting XYZ type Hamiltonian with ``twisted'' interactions: \begin{align} H_{\rm tXYZ}&=J_1 \sum_{i=1}^N a \sigma_i^x\sigma_{i+1}^x+b \sigma_i^y\sigma_{i+1}^y + \sigma_i^z\sigma_{i+1}^z \nonumber \\ &\quad +J_2 \sum_{i=1}^N b \sigma_i^x\sigma_{i+2}^x+a \sigma_i^y\sigma_{i+2}^y +\sigma_i^z\sigma_{i+2}^z \label{eq:ham_txyz} \end{align} where $a$ and $b$ are two real parameters. Note that $a$ ($b$) is the strength of the $XX$ ($YY$) interaction for nearest neighbors, whereas it is the strength of the $YY$ ($XX$) interaction for next-nearest neighbors.
This particular Hamiltonian has a rich phase structure as well as stoquastic to non-stoquastic transitions. The stoquastic regions do not coincide with the phases of the model. In addition, the system has $\mathbb{Z}_2$ symmetries along each axis $\sigma_i^{\{x,y,z\}} \leftrightarrow -\sigma_i^{\{x,y,z\}}$ as well as translational symmetry. Moreover, a $\pi/2$ rotation around the $z$-axis, i.e. $U=e^{-i \pi/4 \sum_i \sigma^z_i}$, swaps the parameters $a$ and $b$. \begin{figure*}[!t] \centering \includegraphics[width=0.6\linewidth]{txyz_suscep.pdf} \caption{\label{fig:txyz_suscep} (a) Magnetic susceptibility and (b) entanglement entropy for the ground state of the twisted XYZ model along the line $A=(0.25, 0.75)$ to $E=(1.25,1.75)$. The result shows that there are three distinct phases. We locate the first phase transition point $C_1\approx (0.7636,1.2636)$ from the divergence of the entanglement entropy. The maximum values of the entanglement entropy are $2.16$, $2.25$, $2.28$ for $N=20$, $24$, and $28$, corroborating a logarithmic divergence of the entanglement entropy at criticality. In addition, we also observe polynomial decay of the correlation function $\braket{\sigma^y_i \sigma^y_{i+k}}$ in Phase II (see Appendix~\ref{app:phase_txyz} for details). } \end{figure*} Throughout the section, we assume ferromagnetic interactions $J_1 = J_2 = -1$. As the Hamiltonian in this case becomes the classical ferromagnetic Ising model when $a=b=0$, one may expect that the vQMC works well at least for small parameters. However, we will see that this intuition is generally misleading, as the non-stoquasticity of the model plays a very important role.
We consider two other representations of the model, which are obtained by local basis transformations: \begin{align} H_{\rm tXYZ}^\bigstar&=J_1 \sum_{i=1}^N \sigma_i^x\sigma_{i+1}^x+b \sigma_i^y\sigma_{i+1}^y + a \sigma_i^z\sigma_{i+1}^z\nonumber \\ &\quad +J_2 \sum_{i=1}^N \sigma_i^x\sigma_{i+2}^x+a \sigma_i^y\sigma_{i+2}^y + b \sigma_i^z\sigma_{i+2}^z, \label{eq:ham_txyz_sto1} \end{align} \begin{align} H_{\rm tXYZ}^\diamondsuit &=J_1 \sum_{i=1}^N a \sigma_i^x\sigma_{i+1}^x + \sigma_i^y\sigma_{i+1}^y+b \sigma_i^z\sigma_{i+1}^z \nonumber \\ &\quad +J_2 \sum_{i=1}^N b \sigma_i^x\sigma_{i+2}^x + \sigma_i^y\sigma_{i+2}^y+a \sigma_i^z\sigma_{i+2}^z \label{eq:ham_txyz_sto2}. \end{align} The Hamiltonian $H_{\rm tXYZ}^\bigstar$ ($H_{\rm tXYZ}^\diamondsuit$) is stoquastic for $ 0 \leq a,b \leq 1$ ($a,b \geq 1$) and can be obtained by applying a $\pi/2$ rotation about the $y$ ($x$) axis to the original Hamiltonian $H_{\rm tXYZ}$. We note that as these rotations involve the Hadamard gate, they cannot be decomposed into Pauli gates alone; e.g. the $\pi/2$ rotation about the $y$-axis is given by $e^{-i\pi/4 Y} = XH$. These Hamiltonians are obtained by applying the general local transformations described by Klassen and Terhal~\cite{Klassen2019twolocalqubit} to Eq.~\eqref{eq:ham_txyz} (see Appendix~\ref{app:sto_txyz} for detailed steps). In addition, one may further transform $H_{\rm tXYZ}^\bigstar$ and $H_{\rm tXYZ}^\diamondsuit$ with the local phase-shift gates $\prod_k e^{-i \pi/4 \sigma_k^z}$, which can be embedded into the RBM Ansatz~\cite{jonsson2018neural}. The resulting Hamiltonians are stoquastic for $a,b\geq 1$ and $0 \leq a,b \leq 1$, respectively, which is the reverse of the situation before applying the phase-shift gates. Before presenting the vQMC results, let us briefly summarize the phase structure of the Hamiltonian, which is presented in Fig.~\ref{fig:txyz_phase}. To gain insight, let us first consider the $a=b$ line.
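The transformation identities above are easy to check numerically. In this sketch (ours, not the paper's code; $N=4$ with PBC and $J_1=J_2=-1$), we verify that conjugating $H_{\rm tXYZ}$ by the global rotation $\bigotimes_i e^{-i\pi/4\sigma_i^y}$ reproduces $H_{\rm tXYZ}^\bigstar$, that the global $z$-rotation swaps $a\leftrightarrow b$, and that $H_{\rm tXYZ}^\bigstar$ has non-positive off-diagonals for $0\le a,b\le 1$.

```python
# Sketch (N = 4, PBC, J1 = J2 = -1): numerical check of the local basis
# transformations of the twisted XYZ model discussed in the text.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def h_txyz(n, nn, nnn):
    """Build sum_i -(cx XX + cy YY + cz ZZ) over NN and NNN bonds,
    with (cx, cy, cz) given by `nn` and `nnn` (J1 = J2 = -1)."""
    h = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        for (cx, cy, cz), j in ((nn, (i + 1) % n), (nnn, (i + 2) % n)):
            h += -(cx * op(X, i, n) @ op(X, j, n)
                   + cy * op(Y, i, n) @ op(Y, j, n)
                   + cz * op(Z, i, n) @ op(Z, j, n))
    return h

n, a, b = 4, 0.5, 0.75
H     = h_txyz(n, (a, b, 1), (b, a, 1))  # H_tXYZ
Hstar = h_txyz(n, (1, b, a), (1, a, b))  # H*_tXYZ
Hswap = h_txyz(n, (b, a, 1), (a, b, 1))  # H_tXYZ with a <-> b

ry = (I2 - 1j * Y) / np.sqrt(2)  # exp(-i pi/4 sigma_y)
rz = (I2 - 1j * Z) / np.sqrt(2)  # exp(-i pi/4 sigma_z)
Ry, Rz = np.array([[1.0 + 0j]]), np.array([[1.0 + 0j]])
for _ in range(n):
    Ry, Rz = np.kron(Ry, ry), np.kron(Rz, rz)

print(np.allclose(Ry @ H @ Ry.conj().T, Hstar))  # y-rotation gives H*
print(np.allclose(Rz @ H @ Rz.conj().T, Hswap))  # z-rotation swaps a and b
off = Hstar - np.diag(np.diag(Hstar))
print(np.max(off.real) <= 1e-12)                 # stoquastic for 0 <= a, b <= 1
```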
When $a=b < 1.0$, each term of the Hamiltonian prefers an alignment in the $z$-direction, so the ground state is $\ket{\uparrow}^{\otimes N} + \ket{\downarrow}^{\otimes N}$. Even though the $U(1)$ symmetry is broken when $a \neq b$, this ferromagnetic phase extends from the $a=b < 1.0$ line; we denote it by Phase I in Fig.~\ref{fig:txyz_phase}. On the other hand, the Hamiltonian prefers the total magnetization $J_z = \sum_i \sigma_i^z = 0 $ when $a=b> 1.0$. The region of this phase is denoted by Phase II in Fig.~\ref{fig:txyz_phase}. As the total magnetization changes abruptly at $a=b=1$ regardless of the system size, we expect a first order phase transition to take place at this point. However, the phase boundaries when $a \neq b$ are more complex, and another phase $\Lambda$ appears between the two phases. To characterize the phases when $a \neq b$, we plot magnetic susceptibilities and the entanglement entropy (after dividing the system into two equal-sized subsystems) along the line segment $\overline{AE}$ in Fig.~\ref{fig:txyz_suscep}. For each parameter $(a,b)$, we have obtained the ground state within the subspace preserving the $\mathbb{Z}_2$ symmetry along the $z$-axis using the ED (thus our ground states obey the $\mathbb{Z}_2$ symmetry even when the symmetry is broken in the thermodynamic limit). We see that the magnetic susceptibility along the $z$-axis diverges with the system size $N$ in Phase I, which implies that the symmetry will be broken when $N \rightarrow \infty$. Likewise, we also see that the symmetry along the $y$-axis is broken in Phase $\Lambda$. Furthermore, as the entanglement entropy follows a logarithmic scaling at $C_1\approx(0.764,1.264)$ (see also Appendix~\ref{app:phase_txyz}), we conclude that the phase transition at $C_1$ is second order. However, the signature of the phase transition from the entanglement entropy at $C_2$ is weak, possibly because the phase transition is an infinite-order Kosterlitz-Thouless transition.
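For reference, the half-chain entanglement entropies used here can be computed from a state vector via the Schmidt (SVD) decomposition; a minimal sketch (ours, demonstrated on a GHZ state rather than the actual ground states):

```python
# Sketch: half-chain von Neumann entanglement entropy of a pure state,
# S = -sum_i p_i log p_i with p_i the squared Schmidt coefficients.
import numpy as np

def half_chain_entropy(psi, n):
    m = psi.reshape(2 ** (n // 2), -1)  # bipartition into two halves
    s = np.linalg.svd(m, compute_uv=False)
    p = (s ** 2)[s ** 2 > 1e-12]
    return float(-np.sum(p * np.log(p)))

# The GHZ state |0...0> + |1...1> carries exactly one bit: S = log 2.
n = 8
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(half_chain_entropy(ghz, n))  # ~0.6931
```

A product state gives $S=0$; the values $2.16$--$2.28$ quoted in the caption of Fig.~\ref{fig:txyz_suscep} correspond to far more entangled critical ground states.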
Thus we calculate the second derivative of the ground state energy in Appendix~\ref{app:phase_txyz} and locate the second phase transition point $C_2\approx (0.793, 1.293)$. In addition, the entanglement entropy shows that there is no hidden order in Phases I and $\Lambda$, as it is close to $1.0$, which can be fully explained by the broken $\mathbb{Z}_2$ symmetry. We also see a signature of other phases when $a$ is small and $b$ is large (or vice versa). However, we do not pursue them here, as they are far from the parameter path we are interested in. We note that even though the phases in Fig.~\ref{fig:txyz_phase} are identified following the conventional $\mathbb{Z}_2$ symmetry breaking theory, we will further restrict the symmetry class of the Hamiltonian when we discuss phase connectivity throughout the section, as it provides a more consistent view. For example, we will consider that point $O$ (located on the $a=b$ line, which has the $U(1)$ symmetry) is not phase connected to $A$ (where the Hamiltonian obviously breaks the $U(1)$ symmetry), whereas $A$ and $B$ are phase connected. Our definition of phase connectivity is also compatible with a modern definition of phases in one-dimensional systems~\cite{chen2011classification,schuch2011classifying,chen2011complete}. Finally, we depict the regions of stoquasticity in Fig.~\ref{fig:txyz_phase}. The model can be made stoquastic by a local basis change in the bottom left and top right quadrants. Within this phase diagram, we run our vQMC simulations along two paths $\overline{OA}$ and $\overline{AE}$. The path $\overline{OA}$ does not cross any phase or stoquasticity boundary, but it will show how a symmetry of the ground state affects the expressivity. On the other hand, the path $\overline{AE}$ crosses both the phase and stoquasticity boundaries and thus will show how phase and stoquasticity transitions affect the vQMC.
\begin{figure*}[!t] \centering \includegraphics[width=1.0\textwidth]{txyz.pdf} \caption{\label{fig:txyz_result} Normalized energy from the vQMC for the twisted XYZ model. We have used the ER with $N=20$ for (a) and (c), and the SR with $N=28$ for (b) and (d). (a,b) For $\overline{OA}$, the original Hamiltonian $H_{\rm tXYZ}$ is only stoquastic at $O$ whereas $H_{\rm tXYZ}^\bigstar$ is stoquastic over the whole path. (c,d) The Hamiltonian $H_{\rm tXYZ}$ is non-stoquastic over the whole path whereas $H_{\rm tXYZ}^\bigstar$ and $H_{\rm tXYZ}^\diamondsuit$ are stoquastic on the left and right side of the shaded region, respectively. In the shaded region, none of the Hamiltonians is stoquastic. Vertical dashed lines at $C_1$ and $C_2$ indicate the phase transition points. A dashed curve in (c) indicates an annealing result (see main text). } \end{figure*} \subsection{Variational Quantum Monte Carlo results} Our vQMC results for the twisted XYZ Hamiltonian are shown in Fig.~\ref{fig:txyz_result}. The shaded regions in the middle of Fig.~\ref{fig:txyz_result}(c) and (d) indicate where the model cannot be made stoquastic by a local rotation. We discuss the results for each path and phase below. As Phase $\Lambda$ (located between $C_1$ and $C_2$) is disconnected from the others, we examine this case separately at the end of the section. \subsubsection{Path $\overline{OA}$ (Phase I)} As we have noted above, the ground state is a classical ferromagnet at point $O$. We observe that the RBM represents this state as expected. However, the error from the ER grows as the parameters deviate from $O$. Since the Hamiltonian is always gapped along $\overline{OA}$, this result shows that non-stoquasticity affects the vQMC even though the path does not close a gap; thus it reveals the importance of symmetry in the RBM expressivity. With our symmetry sensitive definition of phase connectivity, the solvability can indeed be understood as follows.
First, the ground state at $O$ is represented by the RBM using the original Hamiltonian $H_{\rm tXYZ}$, as it is stoquastic at this point (Observation~\ref{obs:express}). However, as moving to $A$ breaks the $U(1)$ symmetry and the Hamiltonian cannot be transformed into a stoquastic form using local Pauli gates only (see Appendix~\ref{app:sto_txyz}), point $A$ is not guaranteed to be solvable using $H_{\rm tXYZ}$. On the other hand, one can solve it using $H_{\rm tXYZ}^\bigstar$, which is stoquastic along the whole path $\overline{OA}$. The fact that the transformed Hamiltonian $H_{\rm tXYZ}^\bigstar$ works much better than the original one $H_{\rm tXYZ}$ in the ER case contrasts with the XXZ and \Jone-\Jtwo\hspace{1pt} models, where local Pauli rotations barely affected the expressive power. This is because the transformations in this model involve the Hadamard gate, which is known to be challenging for the RBM \cite{gao2017efficient,jonsson2018neural}. \subsubsection{Path $\overline{AC_1}$ (Phase I)} We observe that both transformed Hamiltonians $H_{\rm tXYZ}^\bigstar$ and $H_{\rm tXYZ}^\diamondsuit$ work better than the original Hamiltonian $H_{\rm tXYZ}$ along the whole path when the ER is used [Fig.~\ref{fig:txyz_result}(c)]. In particular, the transformed Hamiltonians find the ground state even in the shaded region ($\overline{BC_1}$) where none of the Hamiltonians is stoquastic. We explain this using the fact that $H_{\rm tXYZ}^\bigstar$ is stoquastic along $\overline{AB}$ and that there is no phase transition along $\overline{AC_1}$ (Observation~\ref{obs:express}). In addition, as applying the local phase gates $\prod_k e^{-i\pi/4 \sigma^z_k}$, which can be embedded into the RBM Ansatz, transforms $H^{\diamondsuit}_{\rm tXYZ}$ into a stoquastic form (which is different from $H^{\bigstar}_{\rm tXYZ}$), Observation~\ref{obs:express} also explains why $H^{\diamondsuit}_{\rm tXYZ}$ works.
In contrast, the original Hamiltonian $H_{\rm tXYZ}$ is non-stoquastic for all parameters in $\overline{AC_1}$. On the other hand, the results from the SR [Fig.~\ref{fig:txyz_result}(d)] show that $H_{\rm tXYZ}^\bigstar$, which is stoquastic on the left side of the shaded region, works better than $H_{\rm tXYZ}^\diamondsuit$. This result indicates that the MCMC is more stable when a stoquastic Hamiltonian is used, which is the behavior we have seen from the sign rule of the XXZ and the \Jone-\Jtwo\hspace{1pt} models (Observation~\ref{obs:sampling}). Interestingly, $H_{\rm tXYZ}^\diamondsuit$ appears to be sensitive to the stoquastic transition although it is non-stoquastic throughout the path. We do not have a good explanation for this behavior. \subsubsection{Path $\overline{C_2D}$ (Phase II)} We observe that $H_{\rm tXYZ}^\diamondsuit$, which is stoquastic to the right of $D$, does not give the best result in the region $\overline{C_2D}$ when the ER is used. However, a large fluctuation in the converged energies suggests that the \textit{convergence} problem [case (ii)] arises, likely due to a complex optimization landscape. Thus we need to distinguish between the optimization and expressivity problems more carefully in this region. For this purpose, we use an annealing approach as an alternative optimization method: We first take the converged RBM weights for $(a,b)=(1.01,1.51)$ (the point right next to $D$), where the Hamiltonian $H_{\rm tXYZ}^\diamondsuit$ is stoquastic, and run the ER from these weights instead of randomly initialized ones. We decrease the parameters $(a,b)$ of the Hamiltonian by $(0.01,0.01)$ for each annealing step and run 200 ER epochs. The obtained results are indicated by a dashed curve in Fig.~\ref{fig:txyz_result}(c). The annealing result suggests that the expressivity is not the main problem up to the phase transition point $C_2$ (when considered from right to left). The SR results show two noteworthy features compared to the ER results.
First, the Hamiltonian $H_{\rm tXYZ}^\bigstar$ gives remarkably poor converged energies compared to the results from the ER. This result agrees with what we have seen from the sign rule: When the Hamiltonian is non-stoquastic, the learning path may enter a region where observables are not correctly estimated. Second, the shape of the curves from the Hamiltonians $H_{\rm tXYZ}$ and $H_{\rm tXYZ}^\diamondsuit$ is also different from that of the ER, which is due to poor optimization. However, in Appendix~\ref{app:scaling_txyz}, we show by examining two parameter points of the Hamiltonian (indicated by arrows in Fig.~\ref{fig:txyz_result}) that the \textit{convergence} problem gets weaker as $N$ increases; thus the SR can solve the Hamiltonian $H_{\rm tXYZ}^\diamondsuit$ in this region correctly for a large $N$. We encapsulate the results in this region as follows: The vQMC works for $H_{\rm tXYZ}^\diamondsuit$, which is phase connected to a stoquastic Hamiltonian, even though it suffers from an optimization problem for small $N$. This result is consistent with Observation~\ref{obs:express} and Observation~\ref{obs:sampling}. \begin{figure}[!t] \centering \includegraphics[width=0.62\linewidth]{txyz_sample.pdf} \caption{\label{fig:samples_vs_accuracy} (a) Normalized energy from the vQMC after convergence as a function of the number of samples for different system sizes $N$. The Hamiltonian $H^{\diamondsuit}_{\rm tXYZ}$ with $(a,b)=(1.23,1.73)$ is simulated. Values on the $x$-axis indicate the number of samples used to estimate observables in each update step of the SR divided by $|\theta|$. The result is averaged over $12$ vQMC instances and error bars indicate the standard deviation. Error bars for $N=32$ are invisible as they are less than $2.0 \times 10^{-6}$. The result shows there is a transition in scaling near $N=28$.
(b) The first $10^3$ elements of $|\psi_{\rm GS}(x)|^2$ where $\psi_{\rm GS}(x)$ is the ground state of $H^{\diamondsuit}_{\rm tXYZ}$ obtained from the ED when $N=28$. Parameters $(a,b)=(0.27, 0.77)$ in Phase I and $(1.23, 1.73)$ in Phase II are used. When $(a,b)=(0.27, 0.77)$, the peak at the beginning indicates the two largest elements of the distribution, which are $\approx 0.458$. We see that the tail of the distribution for Phase II is much heavier. Moreover, the sum of the first $10^3$ elements is $\approx 0.998$ when $(a,b)=(0.27, 0.77)$ whereas it is only $\approx 0.294$ when $(a,b)=(1.23, 1.73)$. This suggests that one needs a huge number of samples to correctly estimate the probability distribution in Phase II.} \end{figure} \subsubsection{Path $\overline{DE}$ (Phase II)} \label{sec:txyz_de} The ER results show that the RBM can express the ground state of all three Hamiltonians ($H_{\rm tXYZ}$, $H_{\rm tXYZ}^\bigstar$, and $H_{\rm tXYZ}^\diamondsuit$), among which $H_{\rm tXYZ}^\diamondsuit$, the stoquastic one in this parameter region, works best. One can also explain why some instances of the ER find the ground state of $H_{\rm tXYZ}^\bigstar$ using the existence of the phase-shift gates ($\prod_k e^{-i\pi/4 \sigma^z_k}$) that transform the Hamiltonian into a stoquastic form. However, we do not have a good explanation of why the untransformed Hamiltonian $H_{\rm tXYZ}$ also works in this region despite its non-stoquasticity. On the other hand, the SR results show that the converged energies from $H_{\rm tXYZ}$ and $H_{\rm tXYZ}^\diamondsuit$ suffer large fluctuations, which suggests that the \textit{sampling} problem emerges. This result is unexpected, as the Hamiltonian $H_{\rm tXYZ}^\diamondsuit$ is stoquastic in this region. As the Hamiltonian is stoquastic, one may expect that using more samples easily overcomes the problem. However, this is not the case. To see this, we plot the vQMC errors as a function of the number of samples in Fig.~\ref{fig:samples_vs_accuracy}(a).
Here, we have used $x \times |\theta|$ samples per epoch for each value $x$ on the $x$-axis. For $N=20$ and $24$, one observes that the errors get smaller as the number of samples increases. However, the results are subject to large fluctuations for $N=28$, and get worse when $N=32$. Since $|\theta|$ itself scales as $\sim \alpha N^2$, our results show that this \textit{sampling} problem is robust. It is insightful to compare the \textit{sampling} problem in this model to that in non-stoquastic models (such as the XXZ model in the antiferromagnetic phase without the sign rule). Even though they are both caused by a finite number of samples, the converged energies suggest that they have different origins. In the XXZ model, the converged normalized energies are mostly clustered above $10^{-2}$ regardless of the size of the system. On the other hand, they are below $10^{-3}$ and show a clear system size dependency in this model. In Appendix~\ref{app:sampling_comparison}, we show that the \textit{sampling} problem in this model only appears locally near the minima, whereas in non-stoquastic models it pops up in the middle of the optimization and spoils the whole learning procedure. The \textit{sampling} problem occurring here is quite similar to the problem observed for quantum chemical Hamiltonians~\cite{choo2020fermionic}. Specifically, Ref.~\cite{choo2020fermionic} showed that optimizing the RBM below the Hartree-Fock energy for quantum chemical Hamiltonians requires a correct estimation of the tail distribution. However, the tail distribution of the ground state is often heavy and a large number of samples are required to find the true optima. We find that the \textit{sampling} problem of our model is also caused by such a heavy tail in the distribution.
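A simple way to quantify such tail heaviness (our own toy illustration on synthetic distributions, not the actual ground-state data) is the fraction of probability mass captured by the $k$ largest elements:

```python
# Sketch: top-k coverage of a probability distribution. A rapidly decaying
# distribution is captured by ~10^3 elements; a heavy-tailed one is not.
import numpy as np

def topk_mass(p, k):
    """Fraction of total probability carried by the k largest entries."""
    return float(np.sort(p)[::-1][:k].sum())

dim = 2 ** 16
peaked = np.exp(-0.05 * np.arange(dim))  # fast (geometric) decay
peaked /= peaked.sum()
heavy = (1.0 + np.arange(dim)) ** -1.05  # slow (power-law) decay
heavy /= heavy.sum()

print(topk_mass(peaked, 1000))  # essentially 1.0
print(topk_mass(heavy, 1000))   # well below 1: many samples needed
```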
We can see this from the probability distribution of the ground states $|\psi_{\rm GS}(x)|^2$ for $H_{\rm tXYZ}^\diamondsuit$ using two different parameters $(a,b) = (0.27, 0.77)$ and $(1.23, 1.73)$, which are deep in Phases I and II, respectively. We plot the first $10^3$ largest elements of $|\psi_{\rm GS}(x)|^2$ for each parameter of the Hamiltonian in Fig.~\ref{fig:samples_vs_accuracy}(b). The figure directly illustrates that the distribution of the ground state in Phase II is much broader than that of Phase I. Moreover, the sum of the first $10^3$ elements is only $\approx 0.294$ in Phase II, which implies that one needs a huge number of samples to correctly estimate the probability distribution. \begin{figure*}[t!] \centering \includegraphics[width=0.65\linewidth]{txyz_phase_lambda.pdf} \caption{\label{fig:txyz_phase_lambda} (a) The second quadrant of the phase diagram in Fig.~\ref{fig:txyz_phase}. We calculate the second derivative of the ground state energy along paths $P_1$ and $P_2$ in Appendix~\ref{app:phase_txyz} to locate the phase transition points. Converged normalized energy using (b) the ER with $N=20$ and (c) the SR with $N=28$ along the path $\overline{JK}$ where $J=(0.5,2.0)$ and $K=(0.4,3.0)$. } \end{figure*} \subsubsection{Phase $\Lambda$} \label{sec:txyz_lambda} Finally, we show that the RBM has difficulty representing the ground states in Phase $\Lambda$ even though it is a simple conventionally ordered phase (as observed from the entanglement entropy). We simulate the vQMC along the line $\overline{JK}$ in Fig.~\ref{fig:txyz_phase_lambda}(a), which is deep in Phase $\Lambda$. The transformed Hamiltonian $H_{\rm tXYZ}^\diamondsuit$ gives almost the same converged energies as the original Hamiltonian $H_{\rm tXYZ}$, so we do not present them in Fig.~\ref{fig:txyz_phase_lambda}(b) and (c). The ER results in Fig.~\ref{fig:txyz_phase_lambda}(b) clearly show that the error increases as we go deeper into this phase.
As in the \Jone-\Jtwo\hspace{1pt} model, we simulate the ER with varying $N$ and different numbers of hidden units at point $K$ in Appendix~\ref{app:expressive_lambda}, which confirms that the \textit{expressivity} problem is present in this phase. In addition, the SR results [Fig.~\ref{fig:txyz_phase_lambda}(c)] show that the \textit{sampling} problem also arises when we use $H_{\rm tXYZ}^\bigstar$, which performed best with the ER. We also note that we cannot apply the results in Ref.~\cite{STP_Sign} as in the \Jone-\Jtwo\hspace{1pt} model, since there is no hidden order in this phase. To summarize the overall results from the twisted XYZ model, we have found that Observations~\ref{obs:express} and \ref{obs:express2} hold in general by examining the behavior of the vQMC in different phases and stoquastic/non-stoquastic regions. In addition, we have also observed that a different type of \textit{sampling} problem may arise even when solving a stoquastic Hamiltonian. Thus we modify Observation~\ref{obs:sampling} slightly as follows: \setcounter{observation}{2} \begin{observation}[Second version] \label{obs:sampling_second} Sampling is stable along the learning path when the Hamiltonian is stoquastic or phase connected to a stoquastic Hamiltonian. However, the number of samples required to converge may scale poorly even when the Hamiltonian is stoquastic. \end{observation} \section{Expressivity problem from supervised learning} \label{sec:supervised_learning} We point out that Observation~\ref{obs:express2} conflicts with the assertion~\cite{westerhout2020generalization} that a shallow neural network (with depth $2$ or $3$) can express the ground state of a frustrated system without problems.
Precisely, the authors trained a neural network to reproduce the amplitudes and signs of the ground state obtained from the ED without imposing the sign rule, found that the reconstructed states give a high overlap with the true ground state, and stated that ``expressibility of the Ans\"{a}tze is not an issue -- we could achieve overlaps above $0.94$ for all values of $J_2/J_1$''. However, we consider a $0.94$ overlap to be insufficient evidence for this claim. For example, we have obtained an overlap of $\gtrsim 0.999$ for the one-dimensional \Jone-\Jtwo\hspace{1pt} model when $J_2=0.44$ and $N=20$, but an order of magnitude better for $J_2 < J_2^*$, the region phase connected to a stoquastic point ($J_2=0.0$). In this section, we further clarify the discrepancy by showing that the expressivity problem appears even in the supervised learning set-up (as in Ref.~\cite{westerhout2020generalization}), which is less prone to other problems (sampling and training), once the desired accuracy becomes high. Notably, we show that, in contrast to what one might expect, a neural network (even after taking the symmetries into account) has a problem in reproducing the \textit{amplitudes} of a deep non-stoquastic ground state. \begin{table}[t] \centering \begin{tabular}{|p{2.7cm}|M{2.4cm}|M{3.4cm}|} \hline Layer & Kernel size & Size of feature map \\ \hline Input & - & $N \times 1$ \\ \hline Conv-1 & $\floor{N/2}+1$ & $N \times W/2$ \\ \hline Even activation & - & - \\ \hline Conv-2 & $\floor{N/2}+1$ & $N \times W$ \\ \hline Odd activation & - & - \\ \hline Mean & - & $1 \times W$ \\ \hline Fully connected & - & $1$ \\ \hline \end{tabular} \caption{ \label{tab:network_architecture} Architecture of the neural networks used for supervised learning. We use two convolutional layers (Conv-1, 2) and one fully connected layer, all without biases. A periodic padding is used for the convolutional layers, so they commute with translations of the input.
The kernel size $\floor{N/2}+1$ is used for both Conv-1 and Conv-2. The numbers of input/output channels are $(1,W/2)$ and $(W/2,W)$ for Conv-1 and Conv-2, respectively, where $W$ is a parameter that determines the width of the network. We use an even (odd) activation function after Conv-1 (Conv-2) to preserve the $\mathbb{Z}_2$ symmetry. See Appendix~\ref{app:supervised_learning_setups} for further details. } \end{table} \subsection{Neural networks and learning algorithms} We use a convolutional neural network (Table~\ref{tab:network_architecture}) for the supervised learning experiment. Our network is invariant under translation, as the convolutional layers commute with translation and the outputs are averaged over the lattice sites before being fed into the fully connected layer. We further embed the $\mathbb{Z}_2$ symmetry in the network by turning off all biases and using even and odd activation functions after the first and second layers, respectively, following Refs.~\cite{cai2018approximating,bukov2020learning}. We introduce a parameter $W$ that characterizes the width of the network. We further use $\theta$ to denote the vector of all parameters and $f_\theta(x)$ for the output of the network. We note that our network structure is very close to a convolutional network used in Ref.~\cite{westerhout2020generalization}. We utilize this network to learn the amplitudes and signs of the true ground states obtained from the original Hamiltonian $H_{J_1-J_2}$ (without imposing the sign rule). We use a kernel size of $\floor{N/2}+1$ for the convolutional layers, as smaller kernels have failed to reproduce the sign rule for $J_2=0$ (when the sign rule is correct for all configurations). The number of parameters of the network is then given by $(W/2 + W^2/2)(\floor{N/2}+1) + W$. For $N=24$, the networks with $W=16$ and $32$ have $1,784$ and $6,896$ parameters, respectively.
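As a sanity check on the symmetry construction, a minimal single-channel, pure-Python toy (not our actual multi-channel implementation) shows that circular convolutions without biases, together with an even activation after the first convolution, make the output exactly invariant under a global spin flip, while mean pooling makes it invariant under lattice translations:

```python
import random

def circ_conv(x, w):
    """1D convolution with periodic padding and no bias."""
    n = len(x)
    return [sum(w[j] * x[(i + j) % n] for j in range(len(w))) for i in range(n)]

def net(x, w1, w2, w_fc):
    """Toy single-channel version of the Table architecture:
    Conv -> even activation -> Conv -> odd activation -> mean -> linear."""
    h = [v * v for v in circ_conv(x, w1)]    # even activation: square
    h = [v ** 3 for v in circ_conv(h, w2)]   # odd activation: cube
    return w_fc * sum(h) / len(h)            # mean pool + fully connected (no bias)

random.seed(0)
x = [random.choice([-1, 1]) for _ in range(8)]
w1 = [random.gauss(0, 1) for _ in range(5)]
w2 = [random.gauss(0, 1) for _ in range(5)]
w_fc = 0.7

flipped = [-v for v in x]          # global Z2 spin flip
shifted = x[1:] + x[:1]            # lattice translation by one site
print(net(x, w1, w2, w_fc) == net(flipped, w1, w2, w_fc))                 # True
print(abs(net(x, w1, w2, w_fc) - net(shifted, w1, w2, w_fc)) < 1e-6)      # True
```

The spin-flip invariance holds exactly because the first (bias-free) convolution merely negates and the square removes the sign; translation invariance holds up to floating-point reordering of the mean.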
Our learning set-up is slightly different from that of Ref.~\cite{westerhout2020generalization}: (i) instead of mimicking the amplitudes, we use our network as an energy based model to be sampled from, and (ii) we train our network using the whole data set (all possible configurations $x$) without dividing it into training and validation sets, as we are only interested in the expressivity, not the generalization properties, of the network. \subsubsection{Learning amplitudes} To model the amplitudes $|\psi_{\rm GS}(x)|^2$, we use the output of our network $f_\theta(x)$ as the energy function for the energy based model. Even though there are models with the autoregressive property, which are easier to train, we choose the energy based model as it does not add any additional constraints (such as an ordering of sites) and symmetries can be naturally imposed. Precisely, we model the amplitudes with $p_\theta(x) = e^{f_\theta(x)}/Z$ where $Z = \sum_{x} e^{f_\theta(x)}$ is the partition function of the model. We note that even though $Z$ is intractable in general, we can compute $Z$ for system sizes up to $N=28$ rather easily thanks to the symmetry imposed on the network. Our loss function is the cross entropy $l(\theta) = - \sum_{x} p_{\rm data}(x) \log [e^{f_\theta(x)}/Z]$ where the data distribution we want to model captures the amplitudes of the ground state: $p_{\rm data}(x) = |\psi_{\rm GS}(x)|^2$. The gradient of the loss function can be estimated using samples from the data and model distributions as \begin{align} g \approx -\{\mathbb{E}_{p_{\rm data}(x)}[\nabla_\theta f_\theta(x)] - \mathbb{E}_{p_\theta(x)}[\nabla_\theta f_\theta(x)]\}. \end{align} For an energy based model, estimating the second term is difficult in general, as we need to sample from the model using e.g.\ MCMC. However, here we sample exactly from $p_{\theta}$, which is again possible up to $N=28$.
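The gradient formula above can be checked on a toy energy-based model where the partition function is evaluated exactly (a pure-Python sketch with made-up features and data distribution, not the actual network):

```python
import math

# Toy energy-based model on four configurations with features phi(x):
# f_theta(x) = theta . phi(x), p_theta(x) = exp(f_theta(x)) / Z.
# (Illustrative stand-ins only -- not the actual network or ground state.)
phi = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.5]]
p_data = [0.4, 0.3, 0.2, 0.1]

def model_probs(theta):
    f = [sum(t * p for t, p in zip(theta, ph)) for ph in phi]
    z = sum(math.exp(v) for v in f)          # exact partition function
    return [math.exp(v) / z for v in f]

def cross_entropy(theta):
    return -sum(pd * math.log(pm)
                for pd, pm in zip(p_data, model_probs(theta)))

def gradient(theta):
    # g = -(E_data[grad f] - E_model[grad f]), evaluated exactly here
    p = model_probs(theta)
    return [-(sum(pd * ph[k] for pd, ph in zip(p_data, phi))
              - sum(pm * ph[k] for pm, ph in zip(p, phi)))
            for k in range(2)]

theta, eps = [0.3, -0.2], 1e-6
g = gradient(theta)
checks = []
for k in range(2):  # compare with a central finite difference of the loss
    tp, tm = list(theta), list(theta)
    tp[k] += eps
    tm[k] -= eps
    num = (cross_entropy(tp) - cross_entropy(tm)) / (2 * eps)
    checks.append(abs(num - g[k]) < 1e-6)
print(checks)  # [True, True]
```

The finite-difference check confirms that the two-expectation form of the gradient is exactly the derivative of the cross entropy.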
We use the same number of samples (the mini-batch size) from $p_{\rm data}(x)$ and $p_\theta(x)$ to estimate the first and the second terms. Unfortunately, we have found that usual first-order optimization algorithms such as Adam~\cite{kingma2014adam} do not find a proper minimum due to a singularity of the ground state distribution~\cite{park2020geometry} (see also the discussion in Ref.~\cite{melchior2016center}). Thus we have utilized natural gradient descent~\cite{amari1998natural} (which can be regarded as a classical version of the SR) to optimize our energy based model, which is tractable up to several thousand parameters. Precisely, we compute the classical Fisher matrix $\mathcal{F}=(\mathcal{F}_{ij})$, where \begin{align} \mathcal{F}_{ij} = \mathbb{E}_{p_\theta(x)}[\partial_{\theta_i} f_\theta(x) \partial_{\theta_j} f_\theta(x)] - \mathbb{E}_{p_\theta(x)}[\partial_{\theta_i} f_\theta(x)] \mathbb{E}_{p_\theta(x)}[\partial_{\theta_j} f_\theta(x)] \end{align} at each epoch and update the parameters $\theta$ as $\theta_{n+1} = \theta_{n} - \eta_n (\mathcal{F}_n + \lambda_n \mathds{1})^{-1} g_n$. Here, $\eta_n$ and $\lambda_n$ are the learning rate and the (epoch dependent) regularization constant, respectively. We also use a momentum for the gradient $g$ and the Fisher matrix $\mathcal{F}$ to stabilize the learning procedure, i.e.\ $g_n = \beta_1 g_{n-1} + (1-\beta_1) g$ and $\mathcal{F}_n = \beta_2 \mathcal{F}_{n-1} + (1-\beta_2) \mathcal{F}$. To quantify the expressivity of the network, we record the overlap between the reconstructed and the true ground states (assuming that the sign is correct) over the course of training, which can be expressed as $\braket{\psi_{\rm GS}|\psi_{\rm recon}} =\sum_{x} |\psi_{\rm GS}(x)| e^{f_\theta (x)/2} / \sqrt{Z}$. As the network obeys the same symmetry as the ground state, this quantity can also be calculated by summing only over the symmetric configurations. \begin{figure*}[t!]
\centering \includegraphics[width=0.9\linewidth]{supervised_infid_j2.pdf} \caption{\label{fig:supervised_infid_j2} The infidelity $1-F=1-\braket{\psi_{\rm GS}|\psi_{\rm recon}}^2$ between the true ground states and the reconstructed states as a function of $J_2$ for the one-dimensional \Jone-\Jtwo\hspace{1pt} model. We train neural networks to reproduce (a) the amplitudes and (b) the signs of the ground states. The system size $N=24$ and the network widths $W=16$ and $32$ are used. To train the amplitude network, natural gradient descent with hyperparameters $\eta = 10^{-4}$, $\beta_1 = 0.9$, $\beta_2=0.999$ and mini-batch size $1024$ is used. For the sign network, we use the Adam optimizer with learning rate $\eta=2.5 \times 10^{-5}$ and mini-batch size $32$ (see Appendix~\ref{app:supervised_learning_setups} for details). } \end{figure*} \subsubsection{Learning signs} We use the same network to model the sign structure. As this task naturally fits the binary classification framework, we feed the output of the network into the sigmoid function and use it to model the sign structure, i.e., we use $P[\psi_{\rm GS}(x) > 0] = \sigm(f_{\theta}(x))$. We then optimize the network by minimizing the binary cross entropy \begin{align} l(\theta) &= -\sum_x p_{\rm data}(x) \bigl\{y_{\rm data}(x) \log[\sigm(f_{\theta}(x))] \nonumber \\ &\qquad \qquad + (1-y_{\rm data}(x)) \log[1-\sigm(f_{\theta}(x))]\bigr\} \end{align} where $y_{\rm data}(x) = 1$ when $\psi_{\rm GS}(x) > 0$ and $y_{\rm data}(x)=0$ otherwise. In practice, we estimate the loss function and its gradient using samples from $p_{\rm data}(x)$. We have found that usual first-order optimizers such as Adam properly find optima in this case. We also compute the overlap between the true ground state and the reconstructed quantum state $\ket{\psi_{\rm recon}} = \sum_x \mathrm{sgn}[f_\theta(x)] |\psi_{\rm GS}(x)| \ket{x}$, where $\mathrm{sgn}(x)$ is the sign function.
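This weighted binary cross entropy has the standard logistic-regression gradient $\partial l/\partial f_\theta(x) = -p_{\rm data}(x)\,[y_{\rm data}(x) - \sigm(f_\theta(x))]$, which a minimal pure-Python sketch makes concrete (toy weights and targets, not our actual data; gradient descent is done directly on the outputs rather than on network parameters):

```python
import math

def sigm(v):
    return 1.0 / (1.0 + math.exp(-v))

# Toy version of the sign-network loss: outputs f(x), targets y(x) in {0,1},
# configurations weighted by p_data(x) = |psi_GS(x)|^2.  (Illustrative only.)
p_data = [0.5, 0.3, 0.2]
y = [1, 0, 1]

def loss(f):
    return -sum(w * (t * math.log(sigm(v)) + (1 - t) * math.log(1 - sigm(v)))
                for v, t, w in zip(f, y, p_data))

def grad(f):
    # dl/df(x) = -p_data(x) * (y(x) - sigmoid(f(x)))
    return [-w * (t - sigm(v)) for v, t, w in zip(f, y, p_data)]

f = [0.1, 0.2, -0.3]
l0 = loss(f)
for _ in range(200):                 # plain gradient descent on the outputs
    f = [v - 1.0 * g for v, g in zip(f, grad(f))]
print(loss(f) < l0)                  # True: the loss decreases
print([v > 0 for v in f])            # [True, False, True]: signs match targets
```

After optimization, $\mathrm{sgn}[f_\theta(x)]$ reproduces the target sign structure, which is exactly what enters the reconstructed state $\ket{\psi_{\rm recon}}$.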
\subsection{Results} We show the converged infidelity $1-\braket{\psi_{\rm GS}|\psi_{\rm recon}}^2$ from neural networks trained for the amplitudes and signs in Fig.~\ref{fig:supervised_infid_j2}. The results are obtained after tuning hyperparameters and initializations, the details of which can be found in Appendix~\ref{app:supervised_learning_setups}. The results show that the infidelity in the ``deep non-stoquastic'' phase is larger than that in the stoquastic case both for Fig.~\ref{fig:supervised_infid_j2}(a) and (b), where we have trained the amplitudes and signs, respectively. However, the errors go up to $\approx 1.95 \times 10^{-4}$ even for $W=32$ (where the number of parameters is $6,896$) when we train the amplitudes (assuming the signs are correct), whereas the typical errors are smaller than $10^{-5}$ when we do the opposite. Thus our result strongly suggests that learning the amplitudes is more difficult for a neural network than learning the sign structure. As we sampled from the distribution exactly and we expect the learning procedure to be more reliable in the supervised learning set-up, we conclude that this is due to a lack of \textit{expressivity} [case (iii)] of the neural network. We also note that solving the linear equation $(\mathcal{F}_n + \lambda_n \mathds{1})v = g_n$, which requires $O(|\theta|^{2-3})$ operations, dominates the learning cost. This is the main limiting factor that prevents us from using a bigger network. \begin{figure*}[t!]
\centering \includegraphics[width=0.9\linewidth]{j1j2_supervised_diffn.pdf} \caption{\label{fig:supervised_diffn} For neural networks with $W=32$ and chosen hyperparameters (see Appendix~\ref{app:supervised_learning_setups} for details), we plot (a) the scaling of the converged infidelities for different system sizes $N=[16,20,24,28]$ and (b) initial learning curves (results from the first $2 \times 10^5$ epochs, whereas we have trained the network for $\approx 4.10 \times 10^6$ epochs in total) from the sign network with different $N$. } \end{figure*} We further plot the scaling of the errors for different system sizes using $J_2=0.0$ and $0.4$ in Fig.~\ref{fig:supervised_diffn}(a). We see that the errors from both the amplitude and the sign networks increase with $N$ regardless of $J_2$. We also see that the errors from the amplitude network are significant up to the system size $N=28$, although the slope for the sign network is slightly steeper. We leave a detailed scaling analysis for future work, as our simulation results here are limited to system sizes up to $N=28$. We also show initial learning curves from the sign network in Fig.~\ref{fig:supervised_diffn}(b). Consistent with a previous observation~\cite{cai2018approximating}, learning the sign structure takes some initial warm-up time when the sign rule is not used. We conjecture that this is also related to the generalization property observed in Ref.~\cite{westerhout2020generalization}. In Appendix~\ref{app:supervised_imposing_sign}, we further show that imposing the sign rule makes the initial warm-up time disappear but does not change the overall performance. We note that, as we can use first-order optimization algorithms, increasing the size of the network as well as training for more epochs is much easier for the sign networks than for the amplitude networks.
We thus believe that increasing the system size while maintaining the accuracy in the supervised learning set-up is mostly obstructed by the amplitude networks (under the assumption that the ED results are provided). Even though the difficulty with the amplitudes seems counter-intuitive, it may not be surprising if one recalls path integral Monte Carlo. The sign problem in path integral Monte Carlo implies that estimating the expectation values of observables $\Tr[A e^{-\beta H}]/\Tr[e^{-\beta H}]$ is difficult due to negative weights. As the amplitudes $|\psi_{\rm GS}(x)|^2$ are the expectation value of the observable $A=\ket{x}\bra{x}$ in the limit $\beta \rightarrow \infty$, the amplitudes $|\psi_{\rm GS}(x)|^2$ suffer from the sign problem when $H$ is non-stoquastic. Still, this does not explain why learning the amplitudes is difficult even in a supervised learning set-up where we already have the values $|\psi_{\rm GS}(x)|^2$. A partial answer to this question can be found in Ref.~\cite{gao2017efficient}. Under common assumptions of computational complexity theory (that the polynomial hierarchy does not collapse), Ref.~\cite{gao2017efficient} showed that computing the coefficients in the computational basis of a certain quantum state ($\Psi_{\rm GWD}(x)$) is impossible in polynomial time even with an exponential amount of pre-computation (which may include training and computing the normalization constant of a neural network representation). As their argument relies on the difficulty of computing $|\Psi_{\rm GWD}(x)|^2$, the complexity, in fact, stems from learning the amplitudes of quantum states. However, as they only considered a specific type of quantum states in 2D and did not give any argument on how errors scale (unlike the theory of DMRG, which gives an upper bound on errors in terms of entanglement), further theoretical developments are essential to fully explain our results.
\section{Conclusion}\label{sec:conclusion} By way of example, we have classified the failure mechanisms of the RBM Ansatz within the vQMC framework for one-dimensional spin systems. In particular, we have observed the following features of RBM variational Monte-Carlo for a class of one-dimensional XYZ-type Hamiltonians which exhibit a wide variety of stoquastic and non-stoquastic phases: \begin{enumerate} \item Complex RBMs with sufficiently many hidden units ($\alpha \geq 1$) faithfully represent the ground states of spin Hamiltonians that are phase connected to a stoquastic representation, or that can be transformed into such a Hamiltonian with local Pauli and phase-shift transformations. \item There exist ``deep non-stoquastic phases'' that cannot be transformed into a stoquastic form using local (on-site) unitaries and are not phase connected to stoquastic Hamiltonians, and which cannot be efficiently represented by complex RBMs. \item Sampling is stable along the learning path when the Hamiltonian is stoquastic or phase connected to a stoquastic Hamiltonian. However, the number of samples required to obtain true optima may scale poorly even in this case when the ground state distribution is heavy-tailed. \end{enumerate} Most importantly, our observation 1 provides strong evidence that the RBM Ansatz can faithfully express the ground state of a non-stoquastic Hamiltonian when it is phase connected to a stoquastic Hamiltonian. This implies that it may be possible to solve a large number of non-stoquastic Hamiltonians using the RBM Ansatz, significantly expanding the reach of vQMC. On the other hand, the second observation suggests that a fundamental difficulty may exist when solving a Hamiltonian within a phase that is separated from any stoquastic Hamiltonian.
As studies have already found several phases that cannot be annealed into stoquastic Hamiltonians~\cite{hastings2016quantum,topol_sign_1,topol_sign_2}, we expect that such systems are challenging for neural quantum states. Moreover, by carefully extending the supervised learning set-up introduced in Refs.~\cite{cai2018approximating,westerhout2020generalization}, we have further demonstrated that the difficulty in representing quantum states using a neural network originates not only from their sign structure but also from the amplitudes. Even though this may sound counter-intuitive, our result is consistent with that from computational complexity theory~\cite{gao2017efficient}. Nevertheless, there is a caveat regarding our ``deep non-stoquastic phases'', as they rely on a conventional concept of phases (phase connectivity is restricted within a given \textit{parameterized} Hamiltonian). In contrast, the modern language of one-dimensional phases allows any constant-depth local unitary transformation that preserves a symmetry between ground states, i.e.\ two Hamiltonians with ground states $\ket{\psi_1}$ and $\ket{\psi_2}$ are in the same phase if there is a symmetry preserving unitary operator $U_{\rm sym}$ such that $\ket{\psi_1} = U_{\rm sym}\ket{\psi_2}$ and $U_{\rm sym}$ can be decomposed into a constant depth circuit consisting of local gates~\cite{chen2011classification,schuch2011classifying,chen2011complete}. One may possibly interpret our results using symmetry protected phases. Recently, Ref.~\cite{STP_Sign} reported results on a related problem, namely when a ground state can be transformed into a positive form under a unitary operator with a certain symmetry.
However, as the results there are limited to ground states with hidden orders, whereas our results suggest that ground states in phase $\Lambda$ of the twisted XYZ model suffer from the sign problem even though it is conventionally ordered, a further study is required to understand more deeply how the phases of a many-body system interplay with stoquasticity. We also note that the neural networks we have used in this paper have $\leq 10^4$ parameters. Although this value is comparable to neural quantum states considered for the vQMC, modern machine learning applications use neural networks with millions to billions of parameters. We expect that such a huge network can substantially mitigate the \textit{expressivity} problem we have observed in this paper. However, the main obstacle to using huge networks for the vQMC is the computational overhead of the SR, which requires $O(|\theta|^{2-3})$ operations for each step. Thus a better optimization algorithm would be required. Finally, we have also found that the number of samples required to solve for the ground state may scale poorly even when the system is stoquastic. This type of difficulty is known from quantum chemical systems~\cite{choo2020fermionic} but has not been discussed in solving many-body Hamiltonians. We still do not exclude the possibility that an efficient sampling algorithm that allows the network to converge in a reasonable number of epochs may exist even in this case. \textit{Note added.--} A recent paper~\cite{bukov2020learning} has claimed that \textit{convergence} [case (ii)] is the main problem for solving a non-stoquastic Hamiltonian with the vQMC. For the two-dimensional \Jone-\Jtwo\hspace{1pt} model that Ref.~\cite{bukov2020learning} has mainly considered, we expect that both the convergence and expressivity problems appear simultaneously. We leave detailed investigations of two-dimensional systems for future work. \section*{Acknowledgement} The authors thank Prof. Simon Trebst, Dr. Ciar\'{a}n Hickey, and Dr.
Markus Schmitt for helpful discussions. This project was funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769 and within the CRC network TR 183 (project grant 277101999) as part of project B01. The numerical simulations were performed on the JUWELS and JUWELS Booster clusters at the Forschungszentrum Juelich. The work presented in this manuscript was completed while both authors were at the University of Cologne. The source code used in this paper is available at Ref.~\cite{cyp_github}.
\section*{Introduction} The past half-decade has seen unprecedented growth in machine learning with deep neural networks (DNNs). Use of DNNs represents the state-of-the-art in many applications, including large-scale computer vision, natural language processing, and data mining tasks \cite{lecun2015deep,silver2017mastering,senior2020improved}. DNNs have also impacted practical technologies such as web search, autonomous vehicles, and financial analysis \cite{lecun2015deep}. However, DNNs have substantial computational and memory requirements, which greatly limit their training and deployment in resource-constrained (e.g., computation, I/O, and memory bounded) environments. To address these challenges, there has been a significant trend in building high-performance DNN hardware platforms. {\color{black}While there has been significant progress in advancing customized silicon DNN hardware (ASICs and FPGAs) \cite{jouppi2017datacenter,senior2020improved} to improve computational throughput, scalability, and efficiency, their performance (speed and energy efficiency) is fundamentally limited by the underlying electronic components}. Even with the recent progress of integrated analog signal processors in accelerating DNN systems by focusing on matrix multiplication, such as the Vector Matrix Multiplying module (VMM) \cite{ref_schlottmann2011}, the mixed-mode Multiplying-Accumulating unit (MAC) \cite{ref_likamwa2016,ref_bankman2018, wang2018fully}, and resistive random access memory (RRAM) based MAC \cite{ref_boybat2018,ref_jiang2018,ref_zand2018,wang2017memristors,hu2018memristor}, the parallelization is still highly limited. Moreover, they are plagued by the same limitations of electronic components, with additional challenges in manufacturing and implementation due to issues with device variability \cite{wang2017memristors,hu2018memristor}.
Recently, there have been increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages for machine learning systems in terms of power efficiency, parallelism, and computational speed \cite{silva2014performing,mengu2020scale,lin2018all,feldmann2019all,shen2017deep,tait2017neuromorphic,rahman2020ensemble,mengu2019analysis,luo2019design,hamerly2019large}. Among them, free-space \textit{diffractive deep neural networks} (D$^2$NNs), which are based on light diffraction, feature millions of neurons in each layer interconnected with neurons in neighboring layers. This ultrahigh density and parallelism give the system fast, high-throughput computing capability. Note that the diffractive propagations controlled by such physical parameters are differentiable, which means that such parameters can be optimized via conventional backpropagation algorithms \cite{mengu2020scale,lin2018all,mengu2019analysis} using the \texttt{autograd} mechanism \cite{paszke2017automatic}. In terms of hardware performance/complexity, one of the significant advantages of D$^2$NNs is that such a platform can be scaled up to millions of artificial neurons. In contrast, the design and DNN deployment complexity of other optical architectures, e.g., integrated nanophotonics \cite{feldmann2019all,feldmann2020parallel} and silicon photonics \cite{tait2017neuromorphic}, can increase dramatically. For example, Lin et al. \cite{lin2018all} experimentally demonstrated various complex functions with an all-optical D$^2$NN. In conventional DNNs, forward propagation is computed by generating the feature representation with floating-point weights associated with each neural layer. In D$^2$NNs, such floating-point weights are encoded in the phase of each neuron of the diffractive phase masks, which is acquired by and multiplied onto the light wavefunction as it propagates through the neuron.
Similar to conventional DNNs, the final output class is predicted based on generating labels according to a given one-hot representation, e.g., the max operation over the output signals of the last diffractive layer observed by detectors. Recently, D$^2$NNs have been further optimized with advanced training algorithms, architectures, and energy-efficiency-aware training \cite{li2020class,mengu2019analysis,mengu2020scale}; e.g., a class-specific differential detector mechanism improves the testing accuracy by 1-3\% \cite{li2020class}, and \cite{mengu2020scale} improves the robustness of D$^2$NN inference with data augmentation in training. However, due to the challenge of implementing reconfigurability in D$^2$NNs (e.g., the 3D printed terahertz system \cite{lin2018all}), deploying a different DNN algorithm requires re-building the entire D$^2$NN system. In this manner, the hardware efficiency can be significantly degraded for multiple DNN tasks, especially when those tasks are different but related. This mirrors an important trend in conventional DNNs of minimizing the total number of neurons and computations used for multiple related tasks to improve hardware efficiency, namely \textit{multi-task learning} \cite{ruder2017overview}. Note that realizing different tasks directly from the input data features, without separate inputs or user indications, is challenging even in conventional DNN systems. In this work, we present the first-of-its-kind real-time multi-task D$^2$NN architecture optimized in a hardware-software co-design fashion, which enables sharing partial feature representations (physical layers) among multiple related prediction tasks.
More importantly, \textbf{our system can automatically recognize which task is being deployed and generate corresponding predictions in real-time fashion, without any external inputs in addition to the input images.} Moreover, we demonstrate that the proposed hardware-software co-design approach is able to significantly reduce the complexity of the hardware by further reusing the detectors, and to maintain robustness under multiple sources of system noise. Finally, we propose an efficient domain-specific regularization algorithm for training multi-task D$^2$NNs, which offers flexible control to balance the prediction accuracy of each task (task accuracy trade-off) and prevent over-fitting. The experimental results demonstrate that our multi-task D$^2$NN system can achieve the same accuracy for both tasks compared to the original D$^2$NNs, with more than 75\% improvement in hardware efficiency; and the proposed architecture is practically noise resilient under detector Gaussian noise and fabrication variations, where the prediction performance degrades by $\leq 1\%$ within the practical noise ranges. \section*{Results and Discussion}\label{sec:results} \begin{figure}[!htb] \includegraphics[width=1.\linewidth]{figs/Structure-paper-crop.pdf} \caption{{\bf Illustration of multi-task deep learning and the multi-task D$^2$NN architecture with two image classification tasks deployed.} -- The proposed multi-task D$^2$NN architecture is formed by four shared diffractive layers and two multi-task layers, where the feed-forward computations have been re-used in the multi-task layers using a beam splitter. With a novel training algorithm, the proposed architecture further reduces the hardware complexity by utilizing only ten detectors for both classification tasks, i.e., twenty different classes.} \label{fig:architecture} \end{figure} Figure \ref{fig:architecture} shows the proposed real-time multi-task diffractive deep neural network (D$^2$NN) architecture.
Specifically, in this work, our multi-task D$^2$NN deploys image classification DNN algorithms with two tasks, i.e., classifying the MNIST10 dataset and classifying the Fashion-MNIST10 dataset. In a single-task D$^2$NN architecture for classification \cite{lin2018all}, the number of opto-electronic detectors positioned at the output of the system has to be equal to the number of classes in the target dataset. The predicted classes are generated similarly to conventional DNNs, by selecting the index of the highest probability of the outputs (\texttt{argmax}), i.e., the highest energy value observed by the detectors. Moreover, due to the lack of flexibility and reconfigurability of the D$^2$NN layers, deploying DNN algorithms for $N$ tasks requires physically designing $N$ D$^2$NN systems, which means $N$ times the D$^2$NN layer fabrications and the use of detectors. Our main goal is to improve the cost efficiency of the hardware system while deploying multiple related ML tasks. Conceptually, the methodologies behind the multi-task D$^2$NN architecture and conventional multi-task DNNs are the same, i.e., maximizing the shared knowledge or feature representations in the network between the related tasks \cite{ruder2017overview}. Consider the D$^2$NN multi-task learning problem over an input space $\mathcal{X}$, a collection of task spaces $\{\mathcal{Y}^n\}_{n\in[N]}$, and a large dataset of data points $\{x_i,y_i^1,\ldots,y_i^N\}_{i\in[D]}$, where $N$ is the number of tasks and $D$ is the size of the dataset for each task. The hypothesis for D$^2$NN multi-task learning remains the same as for conventional DNNs, which generally yields the following empirical minimization formulation: \begin{equation} \min \limits_{\theta^{share}, {\theta^{1},\theta^{2},...,\theta^{N}}} ~~ \sum^{N}_{n=1} c^n \mathcal{L}(\theta^{share}, \theta^{n}) \end{equation} where $\mathcal{L}$ is a loss function that evaluates the overall performance of all tasks and $c^n$ weights the $n$-th task.
The finalized multi-task D$^2$NN deploys the mapping $f(x,\theta^{share},\theta^{n}) : \mathcal{X} \rightarrow \mathcal{Y}^n$, where $\theta^{share}$ are the parameters shared between tasks in the \textit{shared diffractive layers} and the task-specific parameters $\theta^{n}$ are included in the \textit{multi-task diffractive layers}. Specifically, in this work, we design and demonstrate the multi-task D$^2$NN with the two-task architecture shown in Figure \ref{fig:architecture}. Note that the system includes four shared diffractive layers ($\theta^{share}$) and one multi-task diffractive layer for each of the two tasks. The multi-task mapping function becomes $f(x,\theta^{share},\theta^{1,2}) : \mathcal{X} \rightarrow \mathcal{Y}^2$, and can be decomposed as: \begin{equation} f(x,\theta^{share},\theta^{1,2}) = det(f^{1}(\frac{1}{2} \cdot f^{share}(x,\theta^{share}),\theta^{1}) + f^{2}(\frac{1}{2} \cdot f^{share}(x,\theta^{share}),\theta^{2}))\\ \end{equation} \begin{equation} f^{share}: \mathcal{X} \rightarrow \mathbb{C}^{200\times200}, ~~ f^{1}, f^2: \mathbb{C}^{200\times200} \rightarrow \mathbb{C}^{200\times200} \end{equation} where $f^{share}, f^{1},$ and $f^2$ produce mappings in the complex domain that represent light propagation in phase-modulated photonics. Specifically, the forward functionality of each diffractive layer and its dimensionality $200 \times 200$ remain the same as in \cite{lin2018all}. The output $det \in \mathbb{R}^{C \times 1}$ is the vector of readings from $C$ detectors, where $C$ is the largest number of classes among all tasks; for example, $C=10$ for MNIST and Fashion-MNIST. The proposed multi-task D$^2$NN system is constructed by designing six phase modulators based on the optimized phase parameters of the four shared and two multi-task layers (Figure \ref{fig:architecture}), i.e., $\theta^{share},\theta^{1,2}$.
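The shared-trunk decomposition above can be sketched numerically in pure Python (toy sizes and random phases; the sketch keeps only the phase modulations and the beam-splitter factor from the decomposition, and omits the free-space diffraction between layers, so it is not a faithful optical simulation):

```python
import cmath
import random

random.seed(1)
N = 16  # toy number of output "pixels" instead of 200 x 200

def phase_layer(field, phases):
    """One diffractive layer sketched as a pure phase modulation."""
    return [a * cmath.exp(1j * p) for a, p in zip(field, phases)]

def forward(x, shared, branch1, branch2):
    u = x
    for p in shared:                      # four shared diffractive layers
        u = phase_layer(u, p)
    half = [a / 2 for a in u]             # the 1/2 factor from the beam splitter
    u1 = phase_layer(half, branch1)       # multi-task layer, task 1
    u2 = phase_layer(half, branch2)       # multi-task layer, task 2
    # detectors read intensities of the recombined fields
    return [abs(a1 + a2) ** 2 for a1, a2 in zip(u1, u2)]

x = [complex(random.random(), 0.0) for _ in range(N)]
shared = [[random.uniform(0, 2 * cmath.pi) for _ in range(N)] for _ in range(4)]
b1 = [random.uniform(0, 2 * cmath.pi) for _ in range(N)]
b2 = [random.uniform(0, 2 * cmath.pi) for _ in range(N)]
out = forward(x, shared, b1, b2)
print(len(out))  # one non-negative intensity reading per output pixel
```

Grouping the output pixels into $C$ detector regions and taking \texttt{argmax} over the summed intensities would then yield the predicted class.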
The phase parameters are optimized with \textit{backpropagation}, applying the gradient chain rule to each phase modulation, using the adaptive momentum stochastic gradient descent algorithm (\texttt{Adam}). The design of the phase modulators can be done with 3D printing or lithography to form a passive optical network that performs inference as the input light diffracts from the input plane to the output. Alternatively, such diffractive layer models can also be implemented with spatial light modulators (SLMs), which offer the flexibility of reconfiguring the layers, at the cost of reduced throughput and increased power consumption. Table \ref{tbl:comparisons} presents the performance evaluation and comparisons of the proposed architecture with other options for classifying both the MNIST and Fashion-MNIST tasks. We compare our architecture with -- 1) a single-task D$^2$NN architecture, which requires two stand-alone D$^2$NN systems; 2) a multi-task D$^2$NN architecture with the same diffractive architecture as Figure \ref{fig:architecture} but with two separate sets of detectors for reading and generating the classification results. {\color{black}Specifically, we utilize the \textit{Accuracy-Hardware product} (a.k.a. Acc-HW) metric. Regarding the hardware cost, we estimate the cost of the baseline and the proposed systems using the number of detectors. This is because the major cost of the system comes from the detectors in practice, and the cost of the 3D-printed masks is negligible compared to the detector cost. To evaluate the hardware efficiency improvements, we set the single-task Acc-HW as the baseline, and compute the improvements of the multi-task D$^2$NN architectures using Equation \ref{eq:acc-hw}.
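As a minimal sketch of this metric (the helper name is ours), using the detector counts and accuracies reported in Table \ref{tbl:comparisons}:

```python
def acc_hw_improvement(acc_multi, acc_single, det_multi, det_single):
    # Accuracy-Hardware product relative to the single-task baseline;
    # hardware cost is estimated by the number of detectors, so fewer
    # detectors at comparable accuracy yields a larger improvement.
    return (acc_multi / acc_single) * (det_single / det_multi)

# Two stand-alone single-task systems need 2 x 10 detectors in total,
# while the shared-detector multi-task system needs only 10.
gain_mnist = acc_hw_improvement(0.977, 0.981, det_multi=10, det_single=20)
```

With the tabulated accuracies this evaluates to roughly the 1.99$\times$ improvement reported for the 10-detector multi-task system.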
We can see that our multi-task D$^2$NN architecture gains 75\% efficiency for the MNIST task and 72\% for the Fashion-MNIST task, by introducing a novel multi-task algorithm and modeling that detects 20 different classes (two sets) using only 10 detectors; and gains over 55\% and 50\% compared to using an architecture that requires two separate sets of detectors. } \begin{equation} \text{Acc-HW Product} = \frac{Acc_{multi}}{Acc_{single}} \cdot \frac{HWCost_{single}}{HWCost_{multi}}; ~~ HWCost = \# Detectors. \label{eq:acc-hw} \end{equation} \begin{table}[t] \color{black} \centering \caption{{\bf Hardware efficiency comparison between single-task and multi-task D$^2$NN architectures.} For the multi-task D$^2$NN comparison, we compare the hardware efficiency and prediction accuracy between a dual-detection (20 detector regions) architecture and single-detection (10 detector regions). The efficiencies of different D$^2$NN architectures for MNIST and Fashion-MNIST tasks are evaluated using the \textit{Accuracy-Hardware product} (a.k.a.
Acc-HW), where the hardware cost is estimated using the number of detectors.} \label{tbl:comparisons} \small \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Single-task system}} & \multicolumn{2}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Multi-task system \\ w 10 Det-Regions\end{tabular}}} & \multicolumn{2}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Multi-task system \\ w 20 Det-Regions\end{tabular}}} \\ \cline{2-7} & \multicolumn{1}{c|}{\textit{MNIST}} & \multicolumn{1}{c|}{\textit{F-MNIST}} & \multicolumn{1}{c|}{\textit{MNIST}} & \multicolumn{1}{c|}{\textit{F-MNIST}} & \multicolumn{1}{c|}{\textit{MNIST}} & \multicolumn{1}{c|}{\textit{F-MNIST}} \\ \hline Diffractive Layer Cost & 6$\times$200$\times$200 & 6$\times$200$\times$200 & \multicolumn{2}{l|}{(4+2+2)$\times$200$\times$200} & \multicolumn{2}{l|}{(4+2+2)$\times$200$\times$200} \\ \hline Detector Cost & 10 & 10 & \multicolumn{2}{c|}{10} & \multicolumn{2}{c|}{10+10} \\ \hline Accuracy & 0.981 & 0.889 & 0.977 & 0.886 & 0.979 & 0.883 \\ \hline \textbf{Acc-HW Product} & 1 & 1 & \textbf{1.99}$\mathbf{\times}$ & \textbf{1.99}$\mathbf{\times}$ & {$\sim 1$}${\times}$ & {$\sim 1$}${\times}$ \\ \hline \end{tabular} \end{table} \begin{figure}[!htb] \includegraphics[width=1.\linewidth]{figs/final/labeling_merged.pdf} \caption{{\bf Modeling of ten classes for two different datasets with ten detectors.} (a) -- (b) One-hot encoding for classes 0 -- 9 of the first task (MNIST), represented using the energy value observed at the detectors. Final classes are produced using the index of the lowest energy area, i.e., \texttt{argmin}($det$). (c) -- (d) One-hot encoding for classes 0 -- 9 of the second task (Fashion-MNIST), represented using the energy value observed at the detectors.
Final classes are produced using the index of the highest energy area, i.e., \texttt{argmax}($det$).} \label{fig:labeling} \end{figure} Figure \ref{fig:labeling} illustrates the proposed approach for producing the classes, which re-uses the detectors for two different tasks. Specifically, for the multi-task D$^2$NN evaluated in this work, both MNIST and Fashion-MNIST have ten classes. Thus, all the detectors used for one task can be fully re-utilized for the other. To enable an efficient training process, we use one-hot encodings for representing the classes, similarly to conventional multi-class classification ML models. The novel modeling introduced in this work that enables re-using the detectors is -- \textit{defining "1" differently in the one-hot representations}. As shown in Figure \ref{fig:labeling}(a)--(b), for the first task, MNIST, the one-hot encodings for classes 0 -- 9 are presented, where each bounding box includes the energy values observed at the detectors. In this case, "1" in the one-hot encoding is defined as the lowest energy area, such that the label can be generated as \texttt{argmin}($det$) -- the index of the lowest energy area. Similarly, Figure \ref{fig:labeling}(c)--(d) shows the one-hot encodings for classes 0 -- 9 of the second task, Fashion-MNIST, where the label is the index of the highest energy area, i.e., \texttt{argmax}($det$). Therefore, ten detectors can be used to generate the final outputs for two different tasks that share the same number of classes, gaining the extra 55\% and 50\% hardware efficiency of the proposed multi-task D$^2$NN (see Table \ref{tbl:comparisons}). \begin{figure} \includegraphics[width=1.\linewidth]{figs/final/inference_example_merged.pdf} \caption{{\bf Visualization of propagations through the multi-task D$^2$NN and the results on the detectors.} (a) Forward visualization of classifying an MNIST10 sample with class=6, where the 7th detector has the lowest energy value.
(b) Forward visualization of classifying a Fashion-MNIST sample with class=7, where the 8th detector has the highest energy value.} \label{fig:propagation} \end{figure} Figure \ref{fig:propagation} includes visualizations of light propagation through the multi-task D$^2$NN and the results on the detectors, where the input, the internal results after each layer, and the output are ordered from left to right. Figure \ref{fig:propagation}(a) shows an example of classifying an MNIST sample, where the output class (class 6) is correctly predicted by returning the index of the lowest energy detector. Figure \ref{fig:propagation}(b) presents an example of classifying a Fashion-MNIST sample, where the output class (class 7) is correctly predicted by returning the index of the highest energy detector. \begin{figure}[!htb] \includegraphics[width=1.\linewidth]{figs/final/noise_eval_1dim.pdf} \caption{{\bf Evaluations of robustness against system noise of the proposed multi-task D$^2$NN, by considering a wide range of Gaussian noise in detectors and device variations in phase modulators.} Details of noise modeling in the proposed systems are discussed in Section Methods (Equations \ref{eq:det_noise} -- \ref{eq:combined_noise}). (a) Prediction performance evaluation under Gaussian detector noise with $\sigma$ shown in $S/N$ (Signal to Noise) $\in [0, 0.2]$. (b) Prediction performance evaluation under Gaussian device variations. (c) Evaluations of MNIST task accuracy under combined detector noise and device variations. (d) Evaluations of Fashion-MNIST task accuracy under combined detector noise and device variations.} \label{fig:noise_eval} \end{figure} While building conventional multi-task DNNs, it is well known that the robustness of multi-task DNNs degrades compared to single-task DNNs, for each individual task.
Such concerns become more critical in the proposed multi-task D$^2$NN system due to the potential system noise introduced by fabrication variations, device variations, detector noise, etc. Thus, we comprehensively evaluate the noise impacts on our proposed multi-task D$^2$NN, by considering a wide range of Gaussian noise in detectors and device variations in phase modulators. Details of the noise modeling in the proposed systems are discussed in Section Methods (Equations \ref{eq:det_noise} -- \ref{eq:combined_noise}). Figure \ref{fig:noise_eval} includes four sets of experimental results for evaluating the robustness of our system under system noise. Specifically, Figure \ref{fig:noise_eval}(a) evaluates the prediction performance of both tasks under detector noise, where the x-axis shows the $\sigma$ of a Gaussian noise vector as $S/N$ (Signal to Noise), and the y-axis shows the accuracy. Figure \ref{fig:noise_eval}(b) evaluates the accuracy impacts of device variations in the phase modulators, where the x-axis shows the phase variation of each optical neuron in the diffractive layer (note that the phase value is $\in [0,2\pi]$), and the y-axis shows the accuracy. In practice, detector noise is mostly within 5\%, and device variations are mostly up to 0.2 (80\% yield). We can see that the prediction performance of the proposed system is resilient to a realistic noise range while considering only one type of noise. Moreover, in Figures \ref{fig:noise_eval}(c)--(d), we evaluate the noise impacts for MNIST and Fashion-MNIST, respectively, under both detector noise and device variations. While the accuracy degradations are much more noticeable when both noises become significant, we observe that the overall performance degradations remain $\leq 1\%$ within the practical noise ranges. In summary, the proposed architecture is practically noise resilient.
\begin{figure}[!htb] \centering \includegraphics[width=1\linewidth]{figs/final/algorithm_results.pdf} \caption{{\bf Evaluation of loss regularization for adjusting the performance of each task.} (a) Testing accuracy with different regularization factors. As $\frac{\lambda_2}{\lambda_1}$ increases (decreases), the final performance of the multi-task D$^2$NN will be biased toward Fashion-MNIST (MNIST). We include results of 100 different hyperparameters for training. (b) Testing accuracy of both tasks during the training phase, where we can see that even the largest and smallest regularization factors do not cause overfitting.} \label{fig:algorithm_result} \end{figure} In multi-task learning, it is often necessary to adjust the weight or importance of different prediction tasks according to the application scenario. For example, one task could be required to have the highest possible prediction performance while the performance of the other tasks is secondary. To enable such biased multi-task learning, the shared representations $\theta^{share}$ need to be carefully adjusted. Figure \ref{fig:algorithm_result} demonstrates the ability to enable such biased multi-task learning using loss regularization techniques. Specifically, we propose to adjust the performance of the different tasks using a novel domain-specific regularization function shown in Equation \ref{eq:loss_methods}, where $\lambda_1$ and $\lambda_2$ are used to adjust the task importance, with a modified \textit{L2 normalization} applied on the multi-task layers only. The results of 100 trials of training (with different random seeds for initialization and slightly adjusted learning rates) are included in Figure \ref{fig:algorithm_result}(a). We can see that loss regularization is sufficient to enable biased multi-task learning in the proposed multi-task D$^2$NN architecture, regardless of the initialization and training setups.
Moreover, Figure \ref{fig:algorithm_result}(b) empirically demonstrates that even with very large or small regularization factors, the proposed loss regularization is unlikely to overfit either of the tasks because of the adjusted L2 norm used in the loss function (Equation \ref{eq:loss}). Note that the adjusted L2 normalization only affects the gradients for $\theta^{1}$ and $\theta^{2}$, where $\lambda_{L2}$ is the weight of this L2 normalization. \begin{equation} \mathcal{L}(\theta^{share}, \theta^{1,2}) = \underbrace{\lambda_1}_{\text{t1 factor}} \mathcal{L}_1(\theta^{share}, \theta^{1}) + \underbrace{\lambda_2}_{\text{t2 factor}} \cdot \mathcal{L}_2(\theta^{share}, \theta^{2}) + \underbrace{\lambda_{L2} \frac{\lambda_2}{\lambda_1} \cdot ( ({\theta^{1}})^2 + ({\theta^{2}})^2)}_{\text{adjusted L2 norm}} \label{eq:loss_methods} \end{equation} \section*{Methods} \paragraph{Multi-task D$^2$NN Architecture} Figure \ref{fig:architecture} shows the design of the multi-task D$^2$NN architecture. Based on the phase parameters $\theta^{share}, \theta^1$, and $\theta^2$, there are several options for implementing the diffractive layers to build the multi-task D$^2$NN system. {For example, the passive diffractive layers can be manufactured using 3D printing for long-wavelength light (e.g., terahertz) or lithography for short-wavelength light (e.g., near-infrared), and active reconfigurable ones can be implemented using spatial light modulators. A 50-50 \textit{beam splitter} is used to split the output beam from the last shared diffractive layer into two ideally identical channels for the multi-task layers. A coherent light source, such as a laser diode, is used in this system. At the output of the two multi-task layers, the electromagnetic vector fields are added together on the detector plane.
The generated photocurrent corresponding to the optical intensity of the summed vector fields is measured and observed as the output labels.} {\color{black}Regarding the real-time capability of the proposed system, the proposed architecture performs the same as the system proposed in \cite{lin2018all}, where computation is executed at the speed of light and information is processed on each neuron/pixel of the phase mask in a highly parallel fashion. Thus, the time of flight of the light is negligible, and the determining factor for system hardware performance is the performance of the THz detectors. For a detector with operation bandwidth $f$, the corresponding latency is $1/f$ and the largest throughput is $f$ frames/s/task. The minimum power requirement for this system is determined by the number of detectors and the $NEP$ (noise-equivalent power), if we assume the loss and energy consumption associated with the phase masks are negligible. In practice, considering a room-temperature VDI detector \footnote{https://www.vadiodes.com/en/products/detectors?id=214} operating at $\sim0.3~THz$, with $f \approx 40~GHz$ and $NEP=2.1~pW/\sqrt{Hz}$, the latency of the system will be 25~$ps$, the throughput is $4 \times10^{10}$ $fps/task$ (frames/second/task), and the power consumption is 0.42~$\mu W$. In addition, to mitigate the large cost of detectors, alternative materials can be used, such as graphene. For example, the specific detector performance shown in \cite{castilla2019fast} is $NEP \approx 80~pW/\sqrt{Hz}$ and $f \approx 300~MHz$. In this case, the system latency is $\sim30~ns$, such that the throughput is $3 \times 10^{8}$ $fps/task$ with an estimated minimum power of 1.4~$\mu W$.} \paragraph{Training and Inference of Multi-task D$^2$NN} The proposed system has been implemented and evaluated using Python (v3.7.6) and PyTorch (v1.6.0).
The basic components in the multi-task D$^2$NN PyTorch implementation include 1) diffractive layer initialization and forward function, 2) beam splitter forward function, 3) detector reading, and 4) final predicted class calculation. First, each layer is composed of one diffractive layer that performs the same phase modulation as \cite{lin2018all}. To enable high-performance training and inference on GPU cores, we utilize the complex-to-complex Discrete Fourier Transform in PyTorch (\texttt{torch.fft}) and its inverse (\texttt{torch.ifft}) to mathematically model the same modulation process as \cite{lin2018all}. The beam splitter, which evenly splits the light into \textit{transmitted light} and \textit{reflected light}, is modeled by dividing the complex tensor produced by the shared layers in half. The trainable parameters are the phase parameters in the diffractive layers that modulate the incoming light. Since all the forward function components are differentiable, the phase parameters can simply be optimized using the automatic differentiation gradient mechanism (autograd). The detector plane has ten regions, and each detector returns the sum of all the pixels it observes (Figure \ref{fig:labeling}).
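The per-layer forward function and the detector reading described above can be sketched as follows. This is a NumPy stand-in for the PyTorch implementation (which uses \texttt{torch.fft}/\texttt{torch.ifft}); the propagation kernel \texttt{transfer} is a hypothetical precomputed transfer function, and all names are ours:

```python
import numpy as np

def diffractive_layer(field, theta, transfer):
    # Free-space propagation modeled in the Fourier domain
    # (FFT -> multiply by a transfer function -> inverse FFT),
    # followed by phase-only modulation by the trainable theta.
    propagated = np.fft.ifft2(np.fft.fft2(field) * transfer)
    return propagated * np.exp(1j * theta)

def read_detectors(intensity, regions):
    # Each detector region returns the sum of the pixels it observes.
    return np.array([intensity[r].sum() for r in regions])

# Tiny example: identity propagation kernel and zero phase shift,
# so the input field passes through unchanged.
field = np.ones((4, 4), dtype=complex)
out = diffractive_layer(field, np.zeros((4, 4)), np.ones((4, 4)))
regions = [(slice(0, 2), slice(0, 2)), (slice(2, 4), slice(2, 4))]
readings = read_detectors(np.abs(out) ** 2, regions)
```

In the PyTorch version the same structure is differentiable end-to-end, so autograd can optimize the phase parameters directly.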
To enable training with two different one-hot representations that allow the system to reuse ten detectors for twenty classes, the loss function is constructed as follows: \begin{equation} \begin{multlined} \small \mathcal{L}=\lambda_1 \cdot \underbrace{\texttt{MSELoss}(LogSoftmax(f(\theta^{share}, \theta^1, \mathcal{X}^1))~, ~(label^1 + 1)\%2~~)}_{\text{one-hot encoding with one "0" and nine "1s"}} \\ ~+~ \lambda_2 \cdot \underbrace{\texttt{MSELoss}(LogSoftmax(f(\theta^{share}, \theta^2, \mathcal{X}^2))~, ~label^2~)}_{\text{one-hot encoding with one "1" and nine "0s"}} \\ + \underbrace{\lambda_{L2} \cdot \frac{\lambda_2}{\lambda_1} \texttt{L2}(\theta^1,\theta^2)}_{\text{L2 norm on multi-task diffractive layers}} \\ \label{eq:loss} \end{multlined} \end{equation} The original labels $label^{1}$ and $label^2$ are represented in conventional one-hot encoding, i.e., one "1" with nine "0s", and $label^{1}$ has been converted into a one-hot encoding with one "0" and nine "1s". Note that the LogSoftmax function is only used for training the network, and the final predicted classes of the system are produced based on the values obtained at the detectors. With the loss function shown in Equation \ref{eq:loss} and the modified one-hot labeling for task 1, the training process optimizes the model to 1) given an input image in class $c$ for task 1 (MNIST), minimize the value observed at the $(c+1)^{th}$ detector, as well as maximize the values observed at the other detectors; 2) given an input image in class $c$ for task 2 (Fashion-MNIST), maximize the value observed at the $(c+1)^{th}$ detector, as well as minimize the values observed at the other detectors. Thus, the resulting multi-task model is able to automatically differentiate which task the input image belongs to based on the sum of the values observed at the ten detectors, and then generate the predicted class using the \texttt{argmin} (\texttt{argmax}) function for the MNIST (Fashion-MNIST) task.
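The two label encodings and their readouts can be sketched as follows (a NumPy illustration with names of our own choosing, not the paper's code):

```python
import numpy as np

def one_hot(c, n=10):
    v = np.zeros(n)
    v[c] = 1.0
    return v

# Task 1 (MNIST): inverted encoding with one "0" and nine "1s",
# read out with argmin. Task 2 (Fashion-MNIST): conventional
# encoding with one "1" and nine "0s", read out with argmax.
def dual_targets(c1, c2, n=10):
    return (one_hot(c1, n) + 1) % 2, one_hot(c2, n)

t1, t2 = dual_targets(3, 7)
pred1 = int(np.argmin(t1))   # recovers class 3 for task 1
pred2 = int(np.argmax(t2))   # recovers class 7 for task 2
```

During training, the detector readings (after LogSoftmax) are regressed toward these targets with the MSE terms of Equation \ref{eq:loss}, so the same ten detectors encode both tasks.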
The gradient updates are summarized in Equation \ref{eq:graident_updates}. \begin{equation} \begin{multlined} \theta^{share} = \theta^{share} - \frac{1}{2} \cdot \eta \frac{\lambda_2}{\lambda_1}(\nabla \theta^1 + \nabla \theta^2)\\ \theta^{1'} = \theta^{1} - \eta \nabla \theta^1 - 2\eta \lambda_{L2}|| \theta^{1} + \theta^{2}||\\ \theta^{2'} = \theta^{2} - \eta \nabla \theta^2 - 2\eta \lambda_{L2}|| \theta^{1} + \theta^{2}||\\ \end{multlined} \label{eq:graident_updates} \end{equation} \paragraph{System Noise Modeling} We demonstrate that the proposed system is robust under the noise impacts from the device variations of the diffractive layers and the detector noise in our system. Specifically, to include the noise attached to the detector, we generate a Gaussian noise mask $\mathcal{N}(\sigma, \mu) \in \mathbb{R}^{200\times200}$ on top of the detector readings, i.e., each pixel observed at the detector will include a random Gaussian noise. As shown in Figure \ref{fig:noise_eval}(a), we evaluate our system under multiple Gaussian noises defined with different $\sigma$ and $\mu=0$. We also evaluated the impact of $\mu$, but we did not observe any noticeable effect on the accuracy of either task. This is because increasing the $\mu$ of a Gaussian noise tensor does not change the ranking of the values observed by the ten detectors, so it has no effect on the final classes generated with \texttt{argmax} or \texttt{argmin}. The forward function for the $i^{th}$ task with detector noise is shown in Equation \ref{eq:det_noise}. \begin{equation} c^i = \texttt{argmax/argmin}(det(f(\theta^{share}, \theta^i, \mathcal{X}^i)) + \mathcal{N}(\sigma, 0)), ~~i=\{1,2\} \label{eq:det_noise} \end{equation} We also considered the imperfection of the devices used in the system. With 3D printing or lithography based techniques, imperfect devices might not exactly implement the phase parameters optimized by the training process.
Specifically, we consider device imperfections that perturb the phases randomly under Gaussian noise. As shown in Figure \ref{fig:noise_eval}(b), the x-axis shows the $\sigma$ of the Gaussian noise added to the phase parameters for inference testing. The forward function is described in Equation \ref{eq:phase_noise}. Beam splitter noise has also been quantified, where we do not see a direct impact on either task (see Figure 2 in the supplementary file SI.pdf). \begin{equation} \begin{multlined} \theta^{share}_{\mathcal{N}} = (\theta^{share} + \mathcal{N}(\sigma, 0)) ~~\% ~~2\pi\\ \theta^{i}_{\mathcal{N}} = (\theta^{i} + \mathcal{N}(\sigma, 0)) ~~\% ~~2\pi, ~~i=\{1,2\}\\ c^i = \texttt{argmax/argmin}(det(f( \theta^{share}_{\mathcal{N}}, \theta^i_{\mathcal{N}}, \mathcal{X}^i))), ~~i=\{1,2\} \\ \end{multlined} \label{eq:phase_noise} \end{equation} Finally, for the results shown in Figure \ref{fig:noise_eval}(c)--(d), we include both detector noise and device variations in our forward function (Equation \ref{eq:combined_noise}): \begin{equation} \begin{multlined} \theta^{share}_{\mathcal{N}} = (\theta^{share} + \mathcal{N}^1(\sigma^1, 0)) ~~\% ~~2\pi\\ \theta^{i}_{\mathcal{N}} = (\theta^{i} + \mathcal{N}^1(\sigma^1, 0)) ~~\% ~~2\pi, ~~i=\{1,2\}\\ c^i = \texttt{argmax/argmin}(det(f( \theta^{share}_{\mathcal{N}}, \theta^i_{\mathcal{N}}, \mathcal{X}^i)) + \mathcal{N}^2(\sigma^2, 0)), ~~i=\{1,2\} \\ \end{multlined} \label{eq:combined_noise} \end{equation}
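A minimal sketch of this noise injection, combining phase perturbation and detector noise (a NumPy illustration; the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_phase(theta, sigma):
    # Device variation: additive Gaussian noise on each phase value,
    # wrapped back into [0, 2*pi) as in the "% 2*pi" terms above.
    return (theta + rng.normal(0.0, sigma, theta.shape)) % (2 * np.pi)

def noisy_readout(det, sigma):
    # Detector noise: additive Gaussian noise on each detector reading.
    return det + rng.normal(0.0, sigma, det.shape)

theta = rng.uniform(0, 2 * np.pi, (8, 8))
theta_noisy = perturb_phase(theta, sigma=0.1)
det = noisy_readout(np.linspace(0.0, 1.0, 10), sigma=0.05)
pred = int(np.argmax(det))  # argmin instead for the MNIST task
```

Sweeping `sigma` for each of the two noise sources, separately and jointly, reproduces the structure of the robustness evaluation in Figure \ref{fig:noise_eval}.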
\section{Introduction} After almost a decade of notable advances in AI using data-driven machine learning approaches, there is a growing sense in the field that symbolic knowledge needs to be included in AI systems to get to the next level of machine intelligence. This thought is materialized in the so-called \textit{Neuro-Symbolic} approaches, which have already produced some intriguing results~\cite{parisotto2017neurosymbolic,besold2017neuralsymbolic,Tenenbaum2011,bengio2017consciousness,Mao2019NeuroSymbolic,hudson2019learning,raedtetal2019}. However, even if successful, such approaches will require symbolic data or knowledge to be captured and represented for the machine. Eliciting knowledge directly from human beings has proven to be a difficult task, both in the case of specialized, professional knowledge~\cite{boose1989survey} and of common sense~\cite{davis2014representations,singh2002open}. Similarly, reliably mining symbolic knowledge from text sources remains a difficult task, in spite of many advancements in the mining of knowledge graphs~\cite{fossati2015,ji2020survey,asim2018survey}. As much as collecting large amounts of reliable data has become a bottleneck for the use of data-driven machine learning, getting knowledge from textual data or human beings into reasonably complete and correct symbolic representations is likely to be a major issue for neuro-symbolic methods. This paper explores an alternative path: the meta-knowledge that developers of AI systems sometimes embed in their source code. In particular, we examine the case of professional conversational systems and the symbolic knowledge their developers often embed in identifiers. Analyzing a dataset comprising thousands of different conversational systems developed on a very popular platform, we observe a very common pattern of using a symbolic structure, called here a \textit{proto-taxonomy}, to name the intents to be recognized by the system.
This practice was also verified qualitatively in workshops we conducted with developers. In spite of many consistency and incompleteness issues with those proto-taxonomies, we show that they can be employed to improve the accuracy of recognition, using adaptations of recent neuro-symbolic methods. This seems to signal towards neuro-symbolic techniques designed to handle imperfect knowledge representations, which we see as a needed compromise to bring symbolic reasoning back to AI in a sustainable form, avoiding the old pitfalls of \textit{knowledge engineering}~\cite{hayes1984industrialization,studer1998knowledge,studer1999knowledge,chang2001handbook}. This paper starts by looking into the recent advances in neuro-symbolic systems, reviewing the difficulties in knowledge mining, and exploring previous uses of informal knowledge. We then present the evidence found that developers of conversational systems embed meta-knowledge within the source code of their systems. We follow by describing algorithms integrating such meta-knowledge into intent recognition algorithms and by evaluating them, first with two typical intent recognition datasets, and then with hundreds of workspaces created in a professional tool called here \textit{ChatWorks}. The results show that most of those workspaces can benefit from the techniques described in this paper. \section{Related Work} Neuro-symbolic approaches combine statistical methods with logic symbolism: ``neural-symbolic systems aim to transfer principles and mechanisms between (often nonclassical) logic-based computation and neural computation''~\cite{besold2017neuralsymbolic}. Such systems are viewed as a way to embed high-level knowledge and even some form of ``consciousness'' into machine learning systems, making the language used to develop them closer to ``what passes in a man's own mind''~\cite{bengio2017consciousness}, which would likely make those systems more explainable than current deep learning algorithms.
Although neuro-symbolic systems are not new, we observe increasing interest in this approach in recent years, resulting in a myriad of novel techniques applied to different problems, contexts, and scenarios~\cite{parisotto2017neurosymbolic,manhaeve2018,garcez2019neuralsymbolic,hudsonmanning2019,raedtetal2019}. For instance, in \cite{Mao2019NeuroSymbolic}, an approach for image understanding is suggested which takes object-based scene representations and translates sentences into executable, symbolic programs. In~\cite{oltramari2020neurosymbolic}, embeddings computed from knowledge graphs are used as attention layers for tasks such as autonomous driving (AV) and question answering. And in \cite{kartsaklis2018mapping}, embeddings from a knowledge graph are mapped to sentence embeddings for tasks such as {\it the inverse dictionary problem}. One important requirement of many neuro-symbolic systems is to represent knowledge in a structured format such as knowledge graphs, ontologies, or taxonomies \cite{ji2020survey}. In some cases, such as the scene ontology for AV in \cite{oltramari2020neurosymbolic}, considerable effort had to be put into manual annotation. Nevertheless, as presented in \cite{fossati2015}, an unsupervised approach can sometimes be used to mine the meta-knowledge introduced by experts, such as the categories in Wikipedia pages. The high-level representation in intent identifiers can be viewed as similar to the comments included in programming source code. Code commentaries are one of the means employed by developers to help them organize their thought process while producing code. Research on code commentaries has shown that they can be useful for automatic generation of code, consistency checking, classification, and quality evaluation \cite{yang2019}.
Similar user behavior for organizing content can also be observed in e-mails \cite{Whittaker2011}, computer files \cite{barreau1995,jones2005,civan2008}, and \textit{Jupyter} notebooks~\cite{rule2018ten}. In our context of intent recognition, intent identifiers may contain a high-level representation of the main content of the intent. As shown in~\cite{Chen2016}, intent identifiers can be formatted as natural language sentences to learn a model which maps training examples into those sentences, so that the meta-knowledge can be used in \textit{zero-shot learning}~\cite{wang2019survey}. Unfortunately, the dataset explored in that work is very limited. Recent work has also demonstrated that intent recognition can be improved with enhanced class representations such as \textit{word-graphs}~\cite{cavalin2020improving}, obtained by mining symbolic knowledge from the example utterances. This work aims to fill in some of those gaps by providing a better understanding of the usefulness of the meta-knowledge embedded in intent identifiers, exploring a large set of intent recognition datasets, and by going deeper into the symbolic representations of the identifiers, viewing them as quasi-taxonomies. \section{Embedded Meta-Knowledge in Intents of Conversational Systems} Most real-world, deployed conversational systems in use today have been built based on the rule-based \textit{intent-action} paradigm, using platforms such as \textit{Luis.ai}, \textit{Watson Assistant}, or \textit{Alexa Skills}. Each \textit{intent} corresponds to a piece of information or an answer desired by the user and is defined by a set of exemplar utterances provided by the chatbot developers. During runtime, each utterance from the user is recognized as one of the defined intents or as \textit{out-of-scope} (OOS), and the associated action is generated, often a pre-written sentence created by developers or subject-matter experts (SMEs). \begin{figure}[t!]
\centering \includegraphics[trim=0cm 0.5cm 0cm 0cm,width=4cm]{figures/Figure1.png} \caption{Pre-defined intents for utilities-related chatbots of the \textit{Watson Assistant} platform.} \label{fig:intents_utilities.png} \end{figure} \begin{figure}[t!] \centering \includegraphics[trim=0cm 0.5cm 0cm 0cm,width=6cm]{figures/intents_utilities_graph.png} \caption{The intent proto-taxonomy associated to the utilities-related intents of fig.~\ref{fig:intents_utilities.png}.} \label{fig:intents_utilities_graph} \end{figure} Many of those platforms also come with a pre-defined, domain-specific list of intents which can be added to any chatbot to speed up development. For example, fig.~\ref{fig:intents_utilities.png} shows the list of pre-defined intents from \textit{Watson Assistant} for utilities-related conversational systems. Notice that the names of the intents aim to describe the meaning of each pre-defined intent by representing it through a sequence of keywords separated by underscore characters. Some of those keywords appear many times (marked in colors), with a structure which resembles a \textit{taxonomy}. This pattern of naming intents following a categorical path can also be found in the pre-defined intents of other platforms and, as we show in this paper, in the names of the intents defined by developers themselves. The goal seems to be to provide the intent classes with a summarized description of each intent in a way that highlights the similarity of different intents. Such patterns are also common in the way people organize files and e-mails on computers \cite{civan2008,Whittaker2011} and how software developers name functions \cite{yang2019}. At the same time, by regarding the keywords in the intent names as basic concepts and the underscore characters as connections between them, we can structure the list of intent identifiers as a sort of very basic knowledge graph~\cite{ehrlinger2016towards}, here referred to as \textit{intent proto-taxonomies}.
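A minimal sketch of mining such a proto-taxonomy from a list of intent identifiers (the identifiers and the helper name are illustrative, not taken from the platform):

```python
from collections import defaultdict

def proto_taxonomy(intent_names):
    # Treat the underscore-separated keywords of each intent identifier
    # as a path of concepts and collect parent -> child edges, forming
    # a very basic knowledge graph over the shared keywords.
    edges = defaultdict(set)
    for name in intent_names:
        parts = name.split("_")
        for parent, child in zip(parts, parts[1:]):
            edges[parent].add(child)
    return dict(edges)

# Hypothetical identifiers in the style of fig. 1:
tax = proto_taxonomy(["bill_payment", "bill_due_date", "service_outage"])
```

Intents sharing a prefix keyword (here, `bill`) end up as siblings under the same node, which is exactly the similarity signal the naming convention encodes.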
Figure~\ref{fig:intents_utilities_graph} depicts the intent proto-taxonomy associated with the list of intents in fig.~\ref{fig:intents_utilities.png}. A basic inspection of the intent proto-taxonomy shown in fig.~\ref{fig:intents_utilities_graph} reveals that, as a knowledge graph, it has several shortcomings, such as the lack of action verbs in some intents. In addition, even in this case of a professional list of intents provided by a highly developed tool, the meta-knowledge mined from the list of intent identifiers has inconsistencies and seems to be incomplete. However, it has two great qualities: (1) it is embedded in the conversational system, so there is no need for knowledge acquisition from experts; and (2) it is easily mined. The key question is whether, given its limitations as discussed above, the embedded knowledge is good enough to be used by neuro-symbolic algorithms. We will show later that, indeed, this meta-knowledge can enhance machine learning algorithms. But first let us examine the evidence we found that naming intents in conversational systems using an intent proto-taxonomy is a fairly common practice, and thus can provide structured domain meta-knowledge almost ``for free'' for a large number of professional systems in use today.

\subsection{How Developers Use Intent Proto-Taxonomies}

We conducted a 4-day design workshop with four expert developers to understand the challenges SMEs and developers of conversational systems face and what could facilitate their work \cite{chi2021paper}. Those SMEs have developed chatbots for the auto industry, banking, and telco using the ChatWorks platform (anonymized for review). The structuring of the intent identifiers using proto-taxonomies was discussed and explored with them.
The SMEs reported a very proactive practice of naming intents following a formally defined structure, organized as a sort of taxonomy and shared among their peers and the domain experts at the clients, who were also responsible for maintaining those systems. Some of them brought \textit{mindmaps} to explain the concept relations in the \textit{workspace} (the set of all intents) and showed how they make those concepts available to their team in the system or using spreadsheets. They told us that underscore characters are often used to separate concepts and that the order of the concepts usually represents how the workspace is organized. The \textit{taxonomy}, as they often refer to it, is also a kind of self-indexing information for future use: a name that, by representing the semantics of an intent, can be used to simplify their work and collaboration. This study provided evidence that the use of structured meta-knowledge in the intent identifiers was an intentional and well-established practice among some developers. The remaining question was how widespread this practice was.

\subsection{The Use of Intent Proto-Taxonomies in ChatWorks}

The ChatWorks platform has an opt-in feature in which developers of chatbots can share their code and content (called \textit{workspaces}) with the company which owns the platform for research and development purposes. We were given access to about 18K workspaces active in six months between 2019 and 2020, all of them in English. Those workspaces were filtered to remove duplicates and workspaces with fewer than 8~intents. The resulting dataset is composed of 3,840 workspaces. About 81\% of them had from 8 to 100 intents and the largest had 1,974 intents.
We used two criteria for taxonomy identification and size in each workspace: (1) the intent identifier must have a concept structure, i.e., words separated by a symbol (the separator); and (2) the concepts at the same position must be groupable into at least two different classes. Given a workspace with a set of intent identifiers, we first ranked the best separator (period, underscore, camelcase, or dash) to split each name into concepts. To this end, we calculated the \textit{perplexity}~\cite{manning1999foundations} of the bag of concepts produced by each separator and selected the separator with minimum perplexity. Next, each intent identifier was split using the selected separator and the resulting lists of concepts were compared to each other level by level. If, at a given level, the concepts were either all equal or all different, that level was not evaluated. When the grouping of concepts was possible, the intents with those concepts were selected as \textit{intents with taxonomy}. The \textit{taxonomy rate} was calculated as the ratio between the number of intents with taxonomy and the number of intents created by the user (excluding all the pre-defined domain-specific intents provided by ChatWorks).

\begin{figure}[t!]
\centering
\includegraphics[trim=1cm 1cm 1cm 1cm,width=\columnwidth]{figures/paper_figure3.pdf}
\caption{Distribution of the number of workspaces according to the taxonomy rate of the 3,840 English workspaces.}
\label{fig:workspaces_dist}
\end{figure}

Using those metrics, 76\% of all 3,840 workspaces had a taxonomy rate above 10\%, almost 52\% had a taxonomy rate above 50\%, and 16\% had a very high taxonomy rate, from 90\% to 100\%. Figure~\ref{fig:workspaces_dist} shows how the 3,840 workspaces are distributed considering both the total number of intents ($x$ axis) and the taxonomy rate ($y$ axis).
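The separator-ranking step described above can be sketched as follows. This is a minimal sketch under our own assumptions: the helper names are hypothetical, camelcase splitting is omitted for brevity, and we use the perplexity of the unigram distribution over the bag of concepts.

```python
import math
from collections import Counter

# Candidate separators; camelcase splitting is omitted in this sketch.
SEPARATORS = ["_", ".", "-"]

def perplexity(tokens):
    """Perplexity of the unigram distribution over a bag of concepts."""
    counts = Counter(tokens)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 2.0 ** entropy

def best_separator(intent_names):
    """Rank separators by the perplexity of the resulting bag of concepts
    and return the one with minimum perplexity."""
    def bag_perplexity(sep):
        tokens = [t for name in intent_names for t in name.split(sep) if t]
        return perplexity(tokens)
    return min(SEPARATORS, key=bag_perplexity)

# Hypothetical intent names: a separator that exposes repeated concepts
# yields a lower perplexity than one that leaves every identifier whole.
names = ["billing_payment", "billing_cancel", "billing_update",
         "support_payment", "support_cancel", "support_update"]
```

For the hypothetical names above, splitting on the underscore produces repeated concepts such as \texttt{billing} and \texttt{support}, so the underscore wins the ranking against separators that leave each identifier as a single unique token.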
Notice that the distribution follows a sort of ``step'' function where, as the threshold of 64 intents in the workspace is crossed, the majority of the workspaces had a taxonomy rate of more than 50\%. It seems that, as the complexity of the workspace increases with the number of intents, developers and SMEs more often resort to structuring the intents as a proto-taxonomy. Notice that the same inconsistencies which were seen in the pre-defined intents from Watson Assistant of fig.~\ref{fig:intents_utilities.png} seem to be also present in the developers' workspaces. Nevertheless, the results of this analysis seem to overwhelmingly confirm that using intent proto-taxonomies is a fairly common practice in ChatWorks, reaching around 80\% of all workspaces and being even more common in the workspaces with a high number of intents.

\section{Using Mined Meta-Knowledge to Improve Intent Recognition}

This section presents a formal description of the methodology employed in this work to take advantage of the proto-taxonomies in a neuro-symbolic approach.

\subsection{Embedding the Set of Classes}

An \emph{intent classification} method is a function $D$ which maps a (potentially infinite) set of sentences $S=\{s_{1},s_{2},...\}$ into a finite set of classes $\Omega=\{\omega_{1},\omega_{2},...,\omega_{c}\}$:
\begin{equation}
D:S\rightarrow\Omega \hspace{5mm} D(s)=\omega_{i}
\end{equation}
To enable a numeric, easier handling of the input text, an embedding $\xi:S\rightarrow\mathbb{R}^{n}$ is often used, mapping the space of sentences $S$ into a vector space $\mathbb{R}^{n}$, and defining a classification function $E:\mathbb{R}^{n}\rightarrow\Omega$ such that $D(s)=E(\xi(s))$. In typical intent classifiers, $E$ is composed of a function $C$ which computes the probability of $s$ being in each class, followed by the \emph{arg max} function. In many intent classifiers, $C$ is the \emph{softmax} function.
\begin{equation}
S \overset{\xi}{\rightarrow} \mathbb{R}^{n} \overset{C}{\rightarrow} \mathbb{R}^{c} \overset{arg max}{\rightarrow} \Omega
\end{equation}
This paper explores how to use embeddings on the other side of the classification functions, that is, by embedding the set $\Omega$ of classes into another vector space $\mathbb{R}^{m}$. The idea is to use class embedding functions which somehow capture the intent proto-taxonomies, as we will show later. Formally, we use a \emph{class embedding} function $\psi:\Omega\rightarrow \mathbb{R}^{m}$, its inverse $\psi^{-1}$, and a function $M:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ to map the two vector spaces, so that $D(s)=\psi^{-1}(M(\xi(s)))$.
\begin{equation}\label{eq:main}
S \overset{\xi}{\rightarrow} \mathbb{R}^{n} \overset{M}{\rightarrow} \mathbb{R}^{m} \overset{\psi^{-1}}{\rightarrow} \Omega
\end{equation}
In our work we use typical sentence embedding methods to implement $\xi$. To approximately construct the function $M$ we employ a basic \emph{Mean Square Error} (MSE) method using the training set composed of sentence examples for each class $\omega_{i} \in \Omega$.

\subsection{Adapting the \emph{Kartsaklis} Method (LSTM)}

Our algorithms are inspired by a text classification method proposed in \cite{kartsaklis2018mapping} for the inverse dictionary problem, where text definitions of terms are mapped to the terms they define. The embedding of the class set into the continuous vector space (equivalent to the $\psi$ function in equation~\ref{eq:main}) is done by expanding the knowledge graph of the dictionary words with nodes corresponding to words related to those terms and performing random walks on the graph to compute graph embeddings for each dictionary node, using the \emph{DeepWalk} algorithm~\cite{perozzi2014deepwalk}. DeepWalk is a two-way function mapping nodes into vectors and back.
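As a concrete illustration of the construction of $M$, the sketch below fits a linear map between the two embedding spaces by ordinary least squares, which minimizes the mean squared error over the training pairs, and decodes a mapped point by nearest class embedding (the role of $\psi^{-1}$). The data is synthetic, and a linear $M$ is our own simplifying assumption, not a claim about the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: xi(s) for 200 training sentences (n = 16) and the
# class embedding psi(omega) that each sentence should be mapped to (m = 8).
n, m = 16, 8
X = rng.normal(size=(200, n))      # sentence embeddings xi(s)
W_true = rng.normal(size=(n, m))   # unknown ground-truth map (toy setup)
Y = X @ W_true                     # target class embeddings psi(class of s)

# A linear M that minimizes mean squared error on (X, Y) is an
# ordinary least-squares problem.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# psi^{-1}: decode a mapped point by choosing the nearest class embedding.
class_embs = rng.normal(size=(5, m))   # psi(omega_i) for 5 toy classes

def decode(x):
    y = x @ W                                   # M(xi(s))
    return int(np.argmin(np.linalg.norm(class_embs - y, axis=1)))
```

Because the toy targets are noiseless and the system is overdetermined, least squares recovers the ground-truth map; with real embeddings the fit is only approximate, which is why the nearest-class decoding step matters.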
A \emph{Long Short-Term Memory} (LSTM) neural network, composed of two layers and an attention mechanism, is used in~\cite{kartsaklis2018mapping} for mapping the input texts to the output vector space. To map the two continuous vector spaces representing the definition texts and the dictionary terms, an MSE function, learned from the training dataset, is used. For this work, the knowledge graph is replaced by a \emph{proto-taxonomy graph} $G$ which associates each class with a node and connects that node to nodes corresponding to the meta-knowledge concepts related to the class. To better capture the sequential aspect of the proto-taxonomies, we also connect each class node to bigrams of concepts, i.e., concatenations of two subsequent concepts. We represent this by the function $\zeta$, such that $\zeta(\Omega)=G$, which is also invertible. Substituting this in equation~\ref{eq:main},
\begin{equation}\label{eq:ltsmoriginal}
S \overset{LSTM}{\rightarrow} \mathbb{R}^{n} \overset{MSE}{\rightarrow} \mathbb{R}^{m} \overset{DeepWalk^{-1}}{\rightarrow} G \overset{\zeta^{-1}}{\rightarrow} \Omega
\end{equation}
In practice, we compute the mapping from the class embedding space into the class set, called here $InvG:\mathbb{R}^{m} \rightarrow \Omega$, simply by computing the distance $d$ between a point in $\mathbb{R}^{m}$ and the embedding of each class from $\Omega$ and then considering the closest class.
That is, for each $\omega_{i} \in \Omega$, we consider the associated node in $G$ and compute the mapping in $\mathbb{R}^{m}$ of that node, as shown here:
\begin{equation}
\small
InvG(x) = \underset{\omega_{i}}{\arg\min} \hspace{1mm} d(x, DeepWalk(G(\omega_i)))
\end{equation}
By substituting this function into equation~\ref{eq:ltsmoriginal}, we obtain the algorithm we call here \emph{LSTM+T}:
\begin{equation}\label{eq:ltsmplust}
S \overset{LSTM}{\rightarrow} \mathbb{R}^{n} \overset{MSE}{\rightarrow} \mathbb{R}^{m} \overset{InvG}{\rightarrow} \Omega
\end{equation}
For comparison, the corresponding traditional classification method is also tested, where the graph embedding and associated functions are replaced by discrete \emph{softmax} outputs. We call this simply \emph{LSTM}:
\begin{equation}
S \overset{LSTM}{\rightarrow} \mathbb{R}^{n} \overset{softmax}{\rightarrow} \mathbb{R}^{c} \overset{arg max}{\rightarrow} \Omega
\end{equation}

\subsection{Replacing the LSTM with USE}

Recently, several general-purpose language models that can be used for computing sentence embeddings have been proposed, and the \emph{Universal Sentence Encoder} (USE) is one of them \cite{cer-etal-2018-universal}. This approach consists of a \emph{Transformer} neural network \cite{vaswani2017attention} trained on varied sources of data, such as Wikipedia, web news, web question-answer pages, and discussion forums. USE has achieved state-of-the-art results in various tasks, so we decided to try it in our experiments as an alternative to the LSTM for the embedding of input sentences. In this work we employed the multilingual USE version~3\footnote{https://tfhub.dev/google/universal-sentence-encoder-multilingual/3}.
By replacing LSTM with USE in eq.~\ref{eq:ltsmplust} we obtain the algorithm \emph{USE+T}:
\begin{equation}
S \overset{USE}{\rightarrow} \mathbb{R}^{n} \overset{MSE}{\rightarrow} \mathbb{R}^{m} \overset{InvG}{\rightarrow} \Omega
\end{equation}
As in the previous case, we also use the USE algorithm with traditional discrete softmax outputs for comparison, called here \emph{USE}:
\begin{equation}
S \overset{USE}{\rightarrow} \mathbb{R}^{n} \overset{softmax}{\rightarrow} \mathbb{R}^{c} \overset{arg max}{\rightarrow} \Omega
\end{equation}
\vspace{0.1mm}

\subsection{Replacing DeepWalk with USE and CDSSM}

To explore variants of algorithms for embedding the classes, and also approaches which do not need to be trained from scratch and allow on-the-fly handling of meta-knowledge, we tried replacing DeepWalk with two different methods. The first one consists of applying USE sentence embeddings also for class embeddings, as in eq.~\ref{eq:deepwalkreplacement}. To simplify notation, \emph{EMB} represents either LSTM or USE embeddings for the input text.
\begin{equation}\label{eq:deepwalkreplacement}
S \overset{EMB}{\rightarrow} \mathbb{R}^{n} \overset{MSE}{\rightarrow} \mathbb{R}^{m} \overset{USE^{-1}}{\rightarrow} G \overset{\zeta^{-1}}{\rightarrow} \Omega
\end{equation}
This approach is similar to the way DeepWalk works but, instead of training the graph embeddings from scratch, the class embeddings are represented by the mean sentence embedding computed from different random walks starting at the class node. We name these methods \emph{LSTM+S} and \emph{USE+S}, for EMB set to LSTM and USE, respectively. Additionally, we also evaluate the replacement of DeepWalk by the \emph{Convolutional Deep Structured Semantic Model} (CDSSM) proposed in \cite{Chen2016}, yielding the following algorithm, where \emph{EMB} can again be either LSTM or USE embeddings.
\begin{equation}\label{eq:deepwalkreplacement_cdssm}
S \overset{EMB}{\rightarrow} \mathbb{R}^{n} \overset{MSE}{\rightarrow} \mathbb{R}^{m} \overset{CDSSM^{-1}}{\rightarrow} G \overset{\zeta^{-1}}{\rightarrow} \Omega
\end{equation}
The CDSSM model consists of a three-layer convolutional neural network trained to create embeddings of intent identifiers represented as sentences. In this work, we input to CDSSM the sequence of proto-taxonomy concepts for each intent class. We refer to these algorithms as \emph{LSTM+C} and \emph{USE+C}, for EMB set to LSTM and USE, respectively.

\subsection{Out-of-Scope Sample Detection}

In this paper we are particularly interested in determining whether the proto-taxonomies improve the detection of out-of-scope (OOS) samples. A rejection mechanism based on a pre-defined threshold is used for OOS sample detection. This method can easily be applied to all of the methods described previously, without the need for any specific training procedure or for OOS training data. In greater detail, suppose that for each class $\omega_i \in \Omega$ there is a score denoted $\phi_i \in Z$, where $|Z| = |\Omega|$. Given that $\max(Z)$ represents the highest score associated with a class and that a rejection threshold $\theta$ has been defined on a validation set, samples can be classified as OOS whenever $\max(Z) < \theta$. If so, they are simply rejected, i.e., no classification output is produced for them. Otherwise, the sample is considered an \emph{in-scope} (IS) sample and the classification is conducted normally. The scores in $Z$ are represented either by the softmax probabilities, for the traditional softmax-based methods, or by the similarity between the sentence and graph embeddings, for the proposed approaches. For the latter, the similarity is computed by means of the dot product between those two embeddings.
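The rejection rule can be sketched as follows; the function name is ours, and the scores passed in may be softmax probabilities or embedding similarities, as described above.

```python
def classify_with_rejection(scores, theta):
    """Threshold-based OOS rejection: `scores` plays the role of Z, one
    score per class.  If the top score max(Z) falls below theta, the
    sample is rejected as out-of-scope (None); otherwise the index of
    the highest-scoring class is returned."""
    top = max(range(len(scores)), key=lambda i: scores[i])
    if scores[top] < theta:
        return None   # out-of-scope: no classification output produced
    return top        # in-scope: predicted class index

# Example with softmax-like scores for three classes and theta = 0.5:
print(classify_with_rejection([0.1, 0.8, 0.1], 0.5))  # class 1 accepted
print(classify_with_rejection([0.3, 0.4, 0.3], 0.5))  # rejected as OOS
```

Note that $\theta$ itself is tuned on a validation set; sweeping it trades false acceptances of OOS samples against false rejections of in-scope ones, which is exactly the FAR/FRR trade-off used in the evaluation.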
\section{Metrics, Datasets, and Experiments}

In this section we present the experiments conducted to evaluate the neuro-symbolic algorithms described in the previous section, which make use of the proto-taxonomies. We explore their impact on the accuracy of intent recognition both in terms of correctly classifying utterances (in-scope accuracy) and of determining which utterances are not covered by the set of intents (out-of-scope accuracy).

\subsection{Evaluation Metrics}
\label{sec:metrics}

We take into account a commonly-used metric for OOS detection, the \emph{equal error rate (EER)}~\cite{Tan2019}, which corresponds to the classification error rate when the threshold $\theta$ is set to the value where the \emph{false acceptance rate} (FAR) and the \emph{false rejection rate} (FRR) are closest. These two metrics are defined as:
\begin{equation}
\small
FAR = \frac{\mbox{Number of accepted OOS samples}}{\mbox{Total number of OOS samples}}
\end{equation}
\begin{equation}
\small
FRR = \frac{\mbox{Number of rejected IS samples}}{\mbox{Total number of IS samples}}
\end{equation}
In addition, the \emph{in-scope error rate (ISER)} is considered to report IS performance, i.e., the error rate considering only IS samples with $\theta$ set to zero, similar to the class error rate in \cite{Tan2019}. This metric is important to evaluate whether the alternative classification methods are able to keep up with the performance of their counterparts in the main classification task.

\subsection{The Larson and Telco Datasets}

During the development and initial testing of the algorithms, we used two English datasets for in-depth experimentation. The first is the publicly-available \textit{Larson} dataset~\cite{larson-etal-2019-evaluation}; the second is a private real-world chatbot dataset used by a telecommunications provider for customer care, called here the \textit{Telco} dataset.
For the former, we added a proto-taxonomy by hand based on the identifiers of the intents; for the latter, we structured the original proto-taxonomy by hand. The goal of these adjustments was to avoid spurious interference from taxonomy errors in the initial results. Larson contains a total of 22,500 in-scope samples, evenly distributed across 150 classes, of which 18,000 examples are used for training and 4,500 for test. We simulated OOS detection with the in-scope examples by performing five random samplings in which we took out 30 intents and their 3,600 training examples. We trained only with the remaining 120 intents and 14,400 examples. The test was then conducted on the 4,500 samples, of which 3,600 remained in-scope and 900 became OOS examples. The Telco dataset contains 4,093 examples and 87 intents. From those, 3,069 examples were used for training and 1,024 for test. The OOS scenario was simulated by extracting different random samplings in which 5 intents were removed. Given the smaller size of this dataset compared to Larson, we conducted 20 samplings instead of 5.

\begin{figure}[t!]
\centering
\includegraphics[trim=3cm 2cm 1cm 1cm,width=9cm]{figures/results_eer_larson.png}
\caption{Different methods to incorporate the proto-taxonomy on the Larson dataset, compared to the LSTM and USE baselines.}
\label{fig:results_larson}
\end{figure}

\begin{figure}[t!]
\centering
\includegraphics[trim=3cm 2cm 1cm 1cm,width=9cm]{figures/results_eer_telco.png}
\caption{Different methods to incorporate the proto-taxonomy on the Telco dataset, compared to the LSTM and USE baselines.}
\label{fig:results_telco}
\end{figure}

For both sets we considered the following setup, defined after preliminary evaluations. For the LSTM-based methods, the input sentence embedding size was set to 150 and the output embeddings to 200. DeepWalk walk sizes were set to 20 for LSTM+T and USE+T. For both USE-based methods and the softmax ones, we trained a two-layer neural network with 800 hidden neurons.
They were trained for 50 epochs.

\subsection{Results in the Larson and Telco Datasets}

The results on the Larson dataset are presented in fig.~\ref{fig:results_larson}. We observe that there can be a slight improvement in EER, especially with the USE-based and the LSTM+C methods. Nevertheless, there is a significant improvement in terms of FAR for all USE-based methods and for LSTM+S and LSTM+C. Notice that even though the proposed approaches generally do not outperform LSTM and USE in ISER (except LSTM+C), the methods whose ISER is closest to that of their softmax counterparts tend to achieve better EER and FAR rates. In fig.~\ref{fig:results_telco}, the results on the Telco dataset show a different scenario. The proposed methods generally perform worse than, or at best comparably to, LSTM and USE in EER. In terms of FAR, some methods such as USE+T and USE+C seem to outperform the baselines but, considering the high standard deviation, the results are not significant. On the other hand, we also observe that the methods failed to get close to the softmax-based methods in ISER. That seems to indicate that, in the cases where making use of meta-knowledge harms ISER too much, the symbolic knowledge creates noise and does not help improve either EER or FAR. There were two key findings from our experiments with the Larson and Telco datasets. First, the improvements using LSTM or USE as a base seem to be similar, possibly slightly better for the USE algorithm. Second, and most importantly, we saw much greater improvements from the use of the proto-taxonomy on the Larson dataset than on the Telco dataset, in spite of the similar nature of the datasets and the proto-taxonomies. This motivated us to try out the ideas on a larger and more diverse set of workspaces, focusing solely on USE to simplify the experiments.
\subsection{The ChatWorks Dataset}

We used the large set of real, professional workspaces from ChatWorks to create a dataset on which our neuro-symbolic algorithms could be tested in a context of high diversity and realism. We started with the 3,840 workspaces available in English. To eliminate possible problems due to poor-quality workspaces, only workspaces with taxonomy rates over 30\% were considered. Next, workspaces with outlier numbers of intents or examples were removed following the $3\sigma$-rule, by which values more than 3 standard deviations away from the mean are not considered. Finally, we required the ratio between the number of examples and the number of intents to be greater than 10. From the filtered set we randomly selected 200 workspaces. The testing procedure involves the execution of 20 iterations for each workspace. The tests are performed for all USE-based methods (USE, USE+T, USE+S, and USE+C). Initially, the workspaces are split into training and test datasets (75\% and 25\%, respectively). Next, the four methods are trained and tested on those datasets. The evaluation metrics (EER, FAR, and ISER) are measured on the results for the test datasets and analyzed in terms of the improvement of the proto-taxonomy models (USE+T, USE+S, and USE+C) over the base model (USE).

\subsection{Results in the ChatWorks Dataset}

Table~\ref{tab:chatworks_results} summarizes the results of the experiments with the ChatWorks dataset, showing the distribution of the workspaces according to ranges of the percentage of improvement of each neuro-symbolic method over the USE baseline (negative values signal worse than baseline). We highlight in boldface the best results for each range. Notice that when the neuro-symbolic method is worse than the baseline ($imp < -5\%$), smaller percentages are better; conversely, when it is better than the baseline ($5\% \leq imp$), larger percentages are better.
The last column, \textit{best}, corresponds to the results obtained by selecting, for each workspace, the best of the three algorithms.

\begin{table}[t!]
\centering
\footnotesize
\begin{tabular}{|c|r|r|r|r|}
\hline
\textbf{EER} & \multicolumn{4}{c|}{\% of workspaces} \\ \hline
improvement & USE+T & USE+S & \textbf{USE+C} & best\\ \hline
$imp < -5\%$ & 68\% & 60\% & \textbf{55\%} & 39\% \\ \hline
$-5\% \leq imp < 5\%$ & 16\% & 23\% & 18\% & 22\% \\ \hline
$5\% \leq imp < 10\%$ & 8\% & 8\% & \textbf{7\%}& 11\% \\ \hline
$10\% \leq imp < 20\%$ & 7\% & 7\% & \textbf{12\%} & \textbf{17\%} \\ \hline
$20\% \leq imp $ & 2\% & 4\% & \textbf{9\%} & \textbf{12\%} \\ \hline
\multicolumn{5}{c}{} \\ \hline
\textbf{FAR} & \multicolumn{4}{c|}{\% of workspaces} \\ \hline
improvement & USE+T & USE+S & \textbf{USE+C} & best\\ \hline
$imp < -5\%$ & 37\% & 47\% & \textbf{28\%} & 8\% \\ \hline
$-5\% \leq imp < 5\%$ & 17\% & 17\% & 17\% & 15\% \\ \hline
$5\% \leq imp < 10\%$ & 5\% & 7\% & \textbf{8\%} & 8\% \\ \hline
$10\% \leq imp < 20\%$ & 13\% & 11\% & \textbf{16\%} & \textbf{19\%} \\ \hline
$20\% \leq imp $ & 29\% & 19\% & \textbf{32\%} & \textbf{52\%} \\ \hline
\multicolumn{5}{c}{} \\ \hline
\textbf{ISER} & \multicolumn{4}{c|}{\% of workspaces} \\ \hline
improvement & USE+T & USE+S & \textbf{USE+C} & best\\ \hline
$imp < -5\%$ & 96\% & 95\% & \textbf{74\%} & 71\% \\ \hline
$-5\% \leq imp < 5\%$ & 4\% & 4\% & 20\% & 22\% \\ \hline
$5\% \leq imp < 10\%$ & 1\% & 1\% & \textbf{3\%} & 4\% \\ \hline
$10\% \leq imp < 20\%$ & 0\% & \textbf{1\%} & \textbf{1\%} & 1\% \\ \hline
$20\% \leq imp $ & 0\% & 1\% & \textbf{2\%} & 3\% \\ \hline
\end{tabular}
\caption{Percentage of workspaces on the ChatWorks dataset which saw different levels of improvement over the USE baseline, in terms of equal error rate (EER), false acceptance rate (FAR), and in-scope error rate (ISER).}
\label{tab:chatworks_results}
\end{table}

The results clearly indicate that the USE+C algorithm achieves the best results in all three
metrics, although there is a significant portion of workspaces where the other methods are also competitive. This is particularly true for out-of-scope detection (FAR). But, more importantly, the results support our claim that the meta-knowledge embedded by the developers can be used as input to neuro-symbolic algorithms to increase intent recognition performance. Notably, in OOS detection, 71\% of the workspaces experienced an improvement of 10\% or more, and 52\% of them an improvement of more than 20\%. Even using only the best single algorithm, USE+C, 48\% of the workspaces saw at least a 10\% improvement. The overall results for accuracy, considering the EER metric, are also impressive. The top of table~\ref{tab:chatworks_results} shows that 28\% of the workspaces had improvements of 5\% or more with the USE+C algorithm, and 29\% of them had an improvement of 10\% or more if they applied the best algorithm. However, the results for in-scope accuracy (ISER) were much smaller, with only about 6\% of the workspaces having an improvement of 5\% or more. We discuss these results and their implications next.

\section{Discussion and Future Work}

We started this paper by showing evidence that there is a systematic practice of embedding symbolic knowledge into the intent identifiers among developers of professional conversational systems. We explored in detail the case of the ChatWorks platform and showed that a significant number of its workspaces have some sort of intent proto-taxonomy. This result was further validated in the workshop we had with professional ChatWorks developers. The results of the experiments indicate that the intent proto-taxonomies embedded by those developers can indeed be used in many workspaces to improve accuracy in intent recognition.
More than half of the workspaces drawn from ChatWorks saw improvements of more than 20\% in out-of-scope detection, and in a little less than a third of them the overall error rate improved by 10\% or more. But why were there so many workspaces where we did not see impact? First of all, we must take into account that the ChatWorks repository from which we drew our dataset has workspaces in different stages of development and deployment. We can expect significant differences in overall quality, both in terms of the intent definitions and of the utterance examples. We briefly explored basic characterizations of proto-taxonomy quality, such as taxonomy rate, depth of the taxonomy, number of concepts, etc., but we saw no clear correlation with improvements in accuracy rates. We believe more complex metrics of knowledge structure need to be employed to better characterize which proto-taxonomies are good candidates. We plan to do so in our future work. It is important to notice that, in the workspaces where we did see impact, the symbolic knowledge we mined was in an absolutely ``raw'' format. In spite of that, using the basic graph mining method described in the paper, it was possible to obtain a ``meaningful'' knowledge structure, similar to a knowledge graph, which could be used by our neuro-symbolic algorithms. To improve the quality of the taxonomies, we are working on designing an interface which allows developers to manipulate the intent proto-taxonomy directly and, possibly, make it more correct, more complete, and able to improve the intent recognition rates even further. Moreover, we explored in this paper one particular case of symbolic knowledge embedding by developers of machine learning systems. However, it is unlikely that we will find similar patterns of knowledge embedding in all machine learning development platforms.
We know, as discussed in the related work section, that people use similar proto-taxonomies when they name file and e-mail folders, when giving names to functions and variables in programs and data, and when writing comments in Jupyter notebooks. Also, platforms can further foster the use of meta-data and comments by developers, aiming to organically elicit usable knowledge, even if that knowledge turns out to be inconsistent or incomplete. This can be a realistic path to knowledge acquisition for neuro-symbolic systems, since we demonstrated here that such casual, organic, unsolicited knowledge can be mined and used effectively. It is likely that the fusion of neural and symbolic processing was key to handling the many mistakes and problems we found in that buried knowledge, and we plan to explore this further in our experiments. But we hope we have made the case that there is a hidden treasure of symbolic knowledge in many real-world systems, and that robust neuro-symbolic methods such as the ones we described in this paper may be able to extract value from it.

\bibliographystyle{aaai}